Introduction
- In monolithic architecture, multiple components are combined in a single large application.
- Single Codebase
- Deployed in single bundle
- A change in one component requires redeploying the whole application.
- Build times grow as the codebase grows.
- Developers have to coordinate closely on a shared codebase.
- Scaling is all-or-nothing; individual components cannot be scaled independently.
- Becomes cumbersome over time.
- Microservices address these problems:
- Large applications are divided into small parts
- Each service has its own codebase.
- Each module is managed independently.
- Each microservice can use a different tech stack.
- Operating microservices is, however, more complex.
Steps to convert a monolithic application to a microservices application
- The goal is to improve scalability and make the system more resilient to change.
- Divide the monolith into service-specific components.
- Implement the saga pattern to manage distributed transactions.
- Implement the API gateway pattern, where the client calls the API gateway instead of calling each microservice individually. The gateway then routes requests to microservices based on context and path.
- Use message queues like ActiveMQ for asynchronous communication between microservices.
- A monolith can still be the right way to start a system, due to the following advantages:
- Experiment quickly
- Focus on business logic
- Prevent initial over design
- Small teams can manage changes
- Debugging, security, and operations are all handled in one place.
- We also have service-oriented architecture (SOA), which has the following characteristics:
- Services are built somewhat more decoupled than in a monolithic app.
- Services are organized into layers:
- A business layer at the top.
- A set of enterprise services in the middle.
- And at the bottom, infrastructure services.
- All of these are tied together by an enterprise service bus (ESB).
- A single database is shared across all the services.
- A database change had to be propagated across all the services.
- Scaling the enterprise service bus was an issue.
- The enterprise service bus was a single point of failure.
- The need for microservices arises from issues like:
- growing product requirements
- Onboarding new team members is slow.
- Too much cognitive load.
- Releases get delayed to keep features in sync.
- No independent scaling.
- Messy to debug.
- Microservices are independently deployable; producing and consuming APIs lets the application scale quickly and enables innovation.
- Nowadays, microservices are deployed in containers, which makes them simpler to deploy.
- Benefits of micro services
- Easy onboarding process
- Simple to understand
- Allow teams to work across different technology stacks.
- They have isolated responsibility.
- Deploy independently
- Scalable and distributable
- Reuse business logic throughout your business application.
- Fault tolerant; we can use patterns like the circuit breaker pattern.
- We need to answer the following questions:
- How big should the services be?
- We can use the concept of service granularity to determine the size of a microservice.
- There is no one-size-fits-all approach.
- It depends on your infrastructure.
- How much overhead is there in terms of source control management, deployment, testing, etc.?
- Does creating a service take more time than implementing it? If so, the service is too small; asking this question is a good sizing check.
- Domain-driven design
- Bounded contexts, where each component of our application is responsible for a single piece of functionality
- Single responsibility principle
- One strong entity per service
- Beware of nanoservices, i.e., services so small that maintaining them is not feasible.
- How do I go about splitting them up?
- We can use the Strangler pattern. It gets its name from the strangler vines of the fig tree: the vines start at the top of the tree, among the leaves, and slowly work their way down until they root in the soil; the host tree is eventually consumed and dies.
- Add a proxy that wraps requests coming from users to the monolithic application.
- This is just a layer set up so we can begin redirecting traffic to new services as we implement them (a routing sketch follows this list).
- Next, we can start extracting larger domains and their related services into separate microservices. Each new microservice gets its own dedicated database.
- The next step is to set up a link between the proxy and the new binary/service that is running.
- As we implement the new service, we can start using the proxy to redirect some of the traffic to the new microservice.
- Once the functionality is implemented in the new service, remove it from the monolithic application.
- Next, we tackle the next domain: create a new microservice binary with a separate DB and remove that functionality from the monolithic application.
- We repeat this for each domain, and eventually our microservices application has been carved out of the monolithic application.
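- Below is a minimal sketch of the strangler proxy idea using Spring Cloud Gateway. The service and host names are hypothetical; the point is that one extracted domain is routed to its new microservice while everything else still falls through to the monolith.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class StranglerProxyApplication {

    public static void main(String[] args) {
        SpringApplication.run(StranglerProxyApplication.class, args);
    }

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Already-extracted domain: route to the new microservice.
                .route("orders", r -> r.path("/orders/**")
                        .uri("http://orders-service:8080"))
                // Everything else still goes to the monolith.
                .route("monolith", r -> r.path("/**")
                        .uri("http://legacy-monolith:8080"))
                .build();
    }
}
```

- As each new domain is extracted, we add a route for it here and delete the corresponding code from the monolith; when the catch-all route no longer receives traffic, the monolith can be retired.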
- Where to get started from
- Design Documents
- Create design documents for each individual micro services.
- Entities handled
- Business logic related to them
- The contract between each microservice and the other microservices that consume it or are consumed by it.
- Create a strategy
- Final architecture design
- Design each service
- It can be a costly refactoring process; we might have to delay feature development for a while or add more resources.
- Decide the architecture design based on the domains your application works on.
- Split the repository
- If using github, we can fork out a new repository from our current repository.
- Clear dependency Relationships
- Create a repository per micro service.
- Deploy each micro service separately.
- Set team responsibilities
- In a small team, one member may be responsible for multiple microservices.
- In a large team, we may have a sub-team responsible for a single service.
- Create a separate repository for code shared across multiple microservices, like DTOs, DAOs, and common contracts. This repository may not need to be deployed, but can be compiled as a JAR and published to an artifact repository (e.g., Artifactory). The JAR can then be consumed by the different microservices.
- Next is picking a technology stack for running microservices:
- Run as an App Engine application
- On customised Compute Engine instances
- In Kubernetes(Google Kubernetes Engine)
- Deploy a wide variety of applications
- Self healing capabilities
- Auto scaling capabilities
- Declarative configuration
- Easy network setup
- Upgrade rollouts
- Integration with GCP services like Stackdriver, Cloud Pub/Sub, Datastore, etc.
- In Cloud Functions
- In Knative
- For the backend, we can use frameworks like Spring, which offer the following (a minimal service sketch follows this list):
- Familiarity with Java
- Mature framework
- Open source
- Cloud native
- Flexible
- Lots of documentation
- Easy to get started.
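- Here is a minimal sketch of a Spring Boot microservice exposing one REST endpoint. The User record and the path are illustrative only; a real service would query its own database.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class UserServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }

    record User(String id, String name) {}

    // Each microservice owns one bounded context; this one only knows users.
    @GetMapping("/users/{id}")
    public User getUser(@PathVariable String id) {
        return new User(id, "Jane Doe"); // stub; a real service would hit its own DB
    }
}
```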
- For the frontend, we can use frameworks like Vue.js, React.js, Polymer, Angular, etc.
- Each framework supports web components, where each piece of UI is a separate component with isolated responsibility.
- Exposing the web services
- Gateways
- The gateway decouples clients from downstream services.
- The gateway provides the optimal API for each client.
- Aggregation of calls to services
- Request throttling
- Retry logic
- Perimeter security
- Aggregator or Composite services
- Some services can aggregate calls to other services.
- This lets you simplify the calls and implement additional logic like retries or exception handling (see the sketch below).
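- A minimal sketch of an aggregator service, reusing the banking example from later in these notes: one endpoint fans out to two downstream services and combines the results. The URLs and response shapes are hypothetical, and error handling is reduced to a simple fallback.

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;
import java.util.Map;

@RestController
public class AccountSummaryController {

    private final RestTemplate rest = new RestTemplate();

    @GetMapping("/accounts/{id}/summary")
    public Map<String, Object> summary(@PathVariable String id) {
        String user;
        String transactions;
        try {
            user = rest.getForObject("http://user-service:8080/users/" + id, String.class);
        } catch (RestClientException e) {
            user = "unavailable"; // fallback instead of failing the whole call
        }
        try {
            transactions = rest.getForObject(
                    "http://transaction-service:8080/transactions?user=" + id, String.class);
        } catch (RestClientException e) {
            transactions = "unavailable";
        }
        return Map.of("user", user, "transactions", transactions);
    }
}
```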
- Configuring services
- Configuration for services includes:
- Ports
- URLs
- Timeouts
- Retries
- Feature flags
- Pubsub
- We can create a microservice that listens for changes to a file in our repository containing the configuration settings for all our microservices. When a change to the file is detected, it sends a message via Cloud Pub/Sub to all the other microservices to tell them about the configuration update (a publisher sketch follows).
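- A minimal sketch of the notification half of that idea, assuming the google-cloud-pubsub Java client; the project and topic names are hypothetical. Each microservice would hold a subscription on this topic and reload its configuration when the message arrives.

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class ConfigChangeNotifier {

    public static void main(String[] args) throws Exception {
        TopicName topic = TopicName.of("my-project", "config-updates");
        Publisher publisher = Publisher.newBuilder(topic).build();
        try {
            // Tell every subscribed microservice that the shared config changed.
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFromUtf8("config-file-changed"))
                    .build();
            publisher.publish(message).get(); // block until the message is sent
        } finally {
            publisher.shutdown();
        }
    }
}
```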
- Handling transactions or coordination between microservices
- We use the saga pattern, in its orchestration and choreography variants.
- In orchestration, we have a single coordinator service that makes sure things happen and handles the logic for complex operations.
- If this coordinator fails, the other services can't work.
- In choreography, services produce and listen to events and react accordingly. There is no single service coordinating the actions.
- There may be overheads.
- Keeping track of my requests
- Distributed tracing
- Allows developers to track a single request as it bounces from service to service. This lets developers troubleshoot and gather metrics about their services.
- Spring can use Zipkin (via Spring Cloud Sleuth) to automatically add tracing breadcrumbs to the HTTP headers of all your requests.
- We can troubleshoot errors and gather metrics about the way services communicate.
- Alternative tools like Envoy and Istio can enhance messages for you. They create a service mesh that enhances HTTP requests between services running on any tech stack and provides distributed tracing capabilities automatically.
- How do the services find each other?
- We use service discovery, which allows services to find each other.
- We have tools like Netflix Eureka.
- There is a registry service; when a new service comes up, it talks to the registry and registers itself.
- Other services can query the registry to find out what services are available (see the sketch below).
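- A minimal sketch of a service registering itself with a Eureka registry. With spring-cloud-starter-netflix-eureka-client on the classpath and eureka.client.service-url.defaultZone pointing at the registry, registration happens automatically; the service name here is illustrative.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient // opts this service into the discovery registry
public class TransactionsServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(TransactionsServiceApplication.class, args);
    }
}
```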
- What if a service goes down?
- Use behavioural patterns like the circuit breaker or self-healing services.
- The circuit breaker is a pattern used to detect failures and allow services to react to them (a sketch follows this list).
- Self-healing: Kubernetes, for example, can check the status of our service; if it is down, it will kill the pod and create a new instance of our application. This keeps the service up and running.
- In Spring, we can use Actuator to track the status of a service.
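- A minimal sketch of the circuit breaker pattern using the Resilience4j library. After repeated failures the breaker opens and calls fail fast instead of hammering the broken downstream service; the names and the simulated failure are illustrative.

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import java.util.function.Supplier;

public class CircuitBreakerExample {

    public static void main(String[] args) {
        CircuitBreaker breaker = CircuitBreaker.ofDefaults("transactions");

        Supplier<String> guarded = CircuitBreaker.decorateSupplier(
                breaker, CircuitBreakerExample::callTransactionService);

        try {
            System.out.println(guarded.get());
        } catch (Exception e) {
            // If the call fails, or the breaker is open and rejects it outright,
            // serve a cached or default response instead of cascading the failure.
            System.out.println("fallback response");
        }
    }

    static String callTransactionService() {
        throw new RuntimeException("downstream service is down"); // simulated failure
    }
}
```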
- How do I manage my services?
- Use existing tools to automate your infrastructure as code/configuration, like Ansible, Puppet, Chef, or Terraform on Google Cloud/AWS.
- Have a CI/CD pipeline per microservice.
- Have a rollout strategy.
- Set up error reporting, Logging, metrics, etc.
- Don’t forget about security and testing.
- Migrating from a monolithic application to microservices can be costly, but it may be worth the benefits.
- Take advantage of existing patterns like the Strangler pattern, saga patterns, circuit breaker, etc.
- There is no one-size-fits-all solution.
SOLID principles in microservices
- Single responsibility principle
- Open-closed principle
- Liskov substitution
- Interface segregation
- Dependency inversion
A microservices architecture defines an application as different modules, where each module acts as a service.
Traditional Approach
- The traditional approach was essentially monolithic: different modules interacted with the same database.
- These modules may or may not have been tightly coupled.
- For example, a banking system may have interactions between different modules:
- User Module
- Accounts Module
- Transactions Module
- All these modules interact with the same database.
- A minor issue in one area may halt the entire application.
- There will be a lot of inter-dependencies between these modules which we will have to manage during the development phase.
- Monolithic Applications have all concerns coupled together on a single runtime.
Microservices Approach
- Instead of building one huge application, we build microservices.
- For example we may have a
- Microservice for User Module
- Microservice for Accounts Module
- Microservice for Transactions Module
- The Accounts and Transactions microservices will each depend on the User microservice.
- Data is transferred between these microservices in the form of Representational State Transfer (REST) calls.
- Fetch user information from the User module and transactions from the Transactions module, and show them in the Accounts module.
- All these modules are independent microservices which are built and deployed independently.
- They have their own database
- The Accounts module is the entry point; it looks up the Users service and the Transactions service to display information to the user.
- All data is served via message queues or service calls backed by each service's database.
- Data storage is not shown here.
- With microservices, every concern has its own runtime and scales independently:
- Push notification
- Web server
- REST API
- Email Service
- A web service is software that makes itself available over the internet.
- We build web services to expose data to other applications over HTTP/FTP protocols.
- Data is exchanged in JSON or other formats.
- Representational State Transfer (REST) is a popular way of transferring data.
- We then map the JSON or XML object received to an object compatible with our system.
- Build individual Modules and expose them as services which can be consumed by other micro services.
- We expose microservices via REST APIs so that they can be accessed over HTTP by remote clients.
- Each microservice has a bounded context. We break a domain into different modules and build services in each of those areas.
- This helps to avoid overlap between different modules.
- This is called domain-driven microservices design.
- Microservices are loosely coupled; you build, test, and deploy them independently.
- We can recover from failure quickly with microservices design patterns.
- If there is an issue in one service then it should not bring down the entire system.
- The micro services are designed to handle failures in this way.
- Microservices also help us scale the application easily.
- We can run multiple instances of a microservice if we see a lot of traffic in one particular area of the application.
- Let's consider a solution based on a microservices architecture that defines each component as a service.
- We can have a layer for the database, which acts as a service:
- A service to store system key-value pairs in Redis
- A message queue as a service, using RabbitMQ, etc.
- Next, we can have infrastructure management services like:
- Audit Service
- Log Service
- Eureka Server
- The Eureka server provides service discovery, so these services can be found by the business-logic microservices.
- We can have business-related microservices, which include:
- Data Management Services.
- Publishing records to downstream systems.
- Other Business Functions
- We can have microservices related to security infrastructure:
- An access control service using Redis
- Once access is authenticated by the service, we use Redis key-value storage.
- An Elasticsearch engine is used to enable full-text search.
- Via message queues (using tools like RabbitMQ) and the business-logic services, we then expose these services to consumers.
- An application may have generic non-functional web services like:
- Entity metadata service
- Common component service (DAO, DTO).
- Reporting service for creating views or dashboard data
- Business Role management service.
- Publish subscribe service for asynchronous communication.
- Authorization service.
- Service for encryption and decryption of PII.
- API gateway service if not using cloud gateway.
- Notification service.
- Caching service for caching key value pairs in Redis.
- Logging service.
- Elasticsearch service.
- Data quality and Enrichment service.
- Webhooks Service
- Admin service for admin panel.
- Upload and download service to upload and download reports.
- An e-commerce app may have the following functional services:
- Basket.
- Catalog.
- Identity
- Ordering
- Payment
- A web service may have further packages like:
- Properties.
- Config.
- Controllers.
- Entity.
- Enums
- Filter
- Repository
- Service
- Utility
- Exceptions.
- Infrastructure.
- Integration events.
- Migrations.
- Services.
Transactions in microservices
- We can use two-phase commit (2PC), a standardised protocol that ensures a database commit succeeds across all the participating services.
- Here the commit operation is broken down into two separate phases.
- We have an orchestration service that orchestrates the process.
- The orchestrator asks the two services whether the update can be performed (the prepare phase).
- Once both services are ready to perform the operation, the orchestrator asks both of them to execute it (the commit phase).
- After the first phase, both services lock their resources; once the commit has happened, the resources are released.
- The orchestration service is notified by both services of the successful completion of the operation.
- This is a synchronous approach and cannot sustain long holds on resources, which is the drawback of this methodology.
- We must ensure that operations, once performed, leave all the microservices in a consistent state.
- In case of any failure, the orchestrator service takes remedial action.
- ZooKeeper can be used to implement two-phase commit.
- The additional communication and coordination steps add latency.
- Services become dependent on each other and on the coordinator, causing blocking.
- Network communication is complex, involving error handling and coordination between multiple services.
- If the coordinator does not recover mid-transaction, someone has to intervene manually and make a decision.
- Until the coordinator recovers, all nodes have to wait or prohibit read/write operations.
- In distributed transactions, maintaining consistency is a challenge (a coordinator sketch follows this list).
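- A minimal sketch of the two-phase commit flow described above. The Participant interface and the coordinator loop are illustrative pseudocode in Java, not a production protocol implementation (there is no logging, timeout, or recovery).

```java
import java.util.List;

interface Participant {
    boolean prepare();   // phase 1: can you commit? (locks resources)
    void commit();       // phase 2: make the change permanent
    void rollback();     // phase 2 alternative: undo and release locks
}

class TwoPhaseCommitCoordinator {

    void execute(List<Participant> participants) {
        // Phase 1: ask every participant to prepare.
        boolean allPrepared = participants.stream().allMatch(Participant::prepare);

        // Phase 2: commit everywhere, or roll back everywhere.
        if (allPrepared) {
            participants.forEach(Participant::commit);
        } else {
            participants.forEach(Participant::rollback);
        }
    }
}
```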
- The other approach is saga choreography, where a sequence of local transactions updates the services; each publishes a message to a message broker like Kafka, which triggers the event for the next transaction to occur.
- Sagas offer an alternative approach:
- They consist of a sequence of local transactions, each updating a single service.
- If any local transaction fails, compensating transactions bring the system back to a consistent state.
- The task is broken into local transactions.
- Each transaction is responsible for a single task, and the services communicate with each other through events.
- In case of failure, we undo the changes of each completed transaction. This is called backward recovery.
- In forward recovery, we retry the failed transaction.
- In orchestration, we follow a command-and-control approach.
- Preferable for simpler sagas, or when you need an audit trail and centralised control.
- In choreography, the approach is "trust, but verify".
- Better for complex sagas with many services, or when you need high scalability with loose coupling.
- We can use a correlation ID to detect failures in the choreography approach.
- Two-phase commit is a blocking distributed approach and should generally be avoided in microservices (a choreography sketch follows).
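- A minimal sketch of one choreographed saga step. Each service listens for an event, runs its local transaction, and publishes the next event; on failure it publishes a compensating event instead. The event names and the EventBus interface are hypothetical stand-ins for a broker like Kafka.

```java
import java.util.function.Consumer;

interface EventBus {
    void publish(String eventType, String payload);
    void subscribe(String eventType, Consumer<String> handler);
}

class PaymentService {

    PaymentService(EventBus bus) {
        bus.subscribe("OrderCreated", orderId -> {
            try {
                chargeCustomer(orderId);                  // local transaction
                bus.publish("PaymentCompleted", orderId); // triggers the next saga step
            } catch (Exception e) {
                // Backward recovery: tell upstream services to compensate.
                bus.publish("PaymentFailed", orderId);
            }
        });
    }

    private void chargeCustomer(String orderId) {
        /* debit the customer's account in this service's own database */
    }
}
```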
Data Management Service
- Check the metadata of fields using a metadata service, which validates the metadata.
- It checks whether all necessary fields are present in a set of data that needs to be saved.
- Check for duplicates.
- Check data-quality rules; for example, if a field is an enum, it should only take specific values.
- Create a service for delivery to downstream systems, based on their definitions of the data and checks.
- After these checks, publish the data to downstream systems.
- Publishing is also governed by rules based on the metadata of the downstream systems, for which we have a rule engine.
- We also need a service for searching this data; for example, Elasticsearch-based microservices can be used for this.
- Systems like Spotfire and IDL can be used for analytical reporting.
- A service is needed that can extract, transform, and load (ETL) data for the analytical reporting systems.
- It is preferable to have a separate data environment or data lake for these, so that only the essential data is presented to them.
- Services that secure sensitive data, like bank account numbers and debit card numbers, must also be used accordingly here.
- We can use tools like GPG to encrypt data.
- For OLTP processing, we can use an API proxy system like Akana.
- Through this proxy service, outside apps can access our systems for small transactions.
- All authentication is done by the API proxy server.
- A service is needed for Master Data Management (MDM).
- This may include direct entry from forms or dashboards.
- Entries can come from third-party sources such as Excel files or upstream systems.
- A service may be needed to format and send data to downstream systems, or to SOA layers that interact with downstream systems.
- A service may be needed to provide access to the reference data or metadata of the master data; this may also include validations, for example, an employee who is a CA must have a CA certification number.
- A service may be defined for the different workflows in the system. These workflows may have a set of templates to gather information from the user. Examples of workflows:
- While procuring goods, one may need to send mails to suppliers and then wait for quotations. Once quotations are received, the system may select the lowest quotation, depending on the quality and quantity of the product. This quotation may need approval from the quality and finance departments before final review.
- After review, there may be a workflow for how the received goods are managed in a department.
- A workflow for updates on purchase orders, etc.
- A service may be needed to convert data from various sources into a format that can be consumed by the Master Data Management service.
- The data provided by upstream systems may need to be validated. A service may be required to manage such validation rules.
- A service may be needed for access control management in a system.
- A service is needed to perform global search using elastic search or any such tool and technique.
- A service is needed for logging events from all the services.
- A service to audit inbound and outbound connections between services.
- A service for encrypting sensitive data.
- A service to check duplicate data.
- Need for MDC
- Logs from multiple clients go into a single log file. If a controller is serving more than one client, the logs get mixed, and it is very difficult to identify which log line belongs to which client.
- MDC helps identify the log statements of each client uniquely.
- To avoid this mix-up, we can add a correlation ID or UUID to the log statements, which acts as a client identifier.
- This client identifier is added by the request handler. But this becomes repetitive work if we have many log entries, consuming more time and resources.
- MDC, or Mapped Diagnostic Context, is used to enhance application logging by adding meaningful information to log entries.
- Mapped Diagnostic Context is a map maintained by the logging framework; application code provides key-value pairs, which the logging framework can then insert into log messages.
- SLF4J supports MDC (a usage sketch follows).
- An example of MDC can be found at springmvc/voting-system at master · gauravmatta/springmvc (github.com).
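- A minimal sketch of MDC with SLF4J: put the correlation ID into the MDC once per request, and the logging pattern (e.g. %X{correlationId} in a Logback pattern) stamps it onto every log line automatically. The key names are illustrative.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import java.util.UUID;

public class MdcExample {

    private static final Logger log = LoggerFactory.getLogger(MdcExample.class);

    public void handleRequest(String clientId) {
        MDC.put("correlationId", UUID.randomUUID().toString());
        MDC.put("clientId", clientId);
        try {
            log.info("processing request"); // both MDC values appear on this line
            // ... business logic; every log call here inherits the MDC values ...
        } finally {
            MDC.clear(); // threads are pooled, so always clean up the MDC
        }
    }
}
```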
Debugging microservices using a Dead Letter Queue
- There is a queue of items being processed in order by services.
- If one of the services or one of its instances fails, some items will be left partially processed.
- These partially processed items are placed in a new queue, called the Dead Letter Queue (DLQ).
- They are later handed to other working instances of the same service.
- If we have failed tasks/events, we can put them in the dead letter queue so that they can be retried after a fix (a configuration sketch follows).
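- A minimal sketch of declaring a work queue with an attached dead letter queue using Spring AMQP on RabbitMQ. Messages that are rejected or expire on the work queue are re-routed to the DLQ for inspection and replay; the queue and exchange names are illustrative.

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DeadLetterQueueConfig {

    @Bean
    DirectExchange deadLetterExchange() {
        return new DirectExchange("orders.dlx");
    }

    @Bean
    Queue deadLetterQueue() {
        return QueueBuilder.durable("orders.dlq").build();
    }

    @Bean
    Binding dlqBinding() {
        return BindingBuilder.bind(deadLetterQueue())
                .to(deadLetterExchange()).with("orders");
    }

    @Bean
    Queue workQueue() {
        // Failed or rejected deliveries are routed to orders.dlx -> orders.dlq.
        return QueueBuilder.durable("orders")
                .withArgument("x-dead-letter-exchange", "orders.dlx")
                .withArgument("x-dead-letter-routing-key", "orders")
                .build();
    }
}
```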
Microservice Authentication and Authorisation
- Authentication is the process of verifying that a user is who they claim to be.
- Authentication can be done through an API key, an authentication token, a user ID/password, a JWT, etc.
- In a B2B application, authentication is done via API key, for example between upstream/downstream systems or between microservices.
- In B2C, we use authentication tokens.
- After a user is authenticated, authorisation determines the user's access levels.
- Authorisation can go against a data source to identify the granular access levels of the user (a JWT sketch follows this list).
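- A minimal sketch of token-based authentication and authorisation, assuming the jjwt library (0.11.x API): authentication verifies the token's signature and expiry, authorisation checks the access level carried in its claims. The secret and the claim name are hypothetical.

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class JwtAuthenticator {

    // Hypothetical secret; in practice this comes from a secure config source.
    private final SecretKey key = Keys.hmacShaKeyFor(
            "change-me-this-secret-must-be-at-least-256-bits!".getBytes(StandardCharsets.UTF_8));

    /** Authentication: verify the token's signature and expiry. */
    public Claims authenticate(String token) {
        try {
            return Jwts.parserBuilder().setSigningKey(key).build()
                    .parseClaimsJws(token).getBody();
        } catch (JwtException e) {
            throw new SecurityException("invalid or expired token", e);
        }
    }

    /** Authorisation: check the access level carried in the token. */
    public boolean isAuthorised(Claims claims, String requiredRole) {
        return requiredRole.equals(claims.get("role", String.class));
    }
}
```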
- When there are few microservices, we can use a load balancer that redirects each request to the specific microservice.
- Each microservice handles authentication and authorisation.
- When there are more microservices, we can use an API gateway, which performs authentication and then redirects the request.
- The authorisation is still done by the microservice.
- A microservice can use middleware, like a common NuGet package, to handle authorisation.
- Requests go from the load balancer to an API gateway that manages both authentication and authorisation.
- In the cloud, behind the API gateway, we can use one Lambda for authentication and another for authorisation. However, we cannot apply that at more granular levels.
- We always need some authorisation logic in the services themselves to expose data at a granular level, so this approach is not widely used.
- These authentication and authorisation approaches apply only to HTTP web services; we do not have authentication and authorisation for asynchronous services working from a queue or stream.
- These are basically downstream processing systems.