
Reference Architecture #Microservices | @CloudExpo #Serverless #AI #DevOps

The goal of microservices is to improve software delivery speed and increase system safety as scale increases

The goal of microservices is to improve software delivery speed and increase system safety as scale increases. Because microservices are modular, they are faster to change and enable an evolutionary architecture in which systems can change as business needs change. Microservices scale elastically and, being service oriented, expose APIs natively. They also reduce implementation and release cycle time and enable continuous delivery. This paper provides a logical overview of a Microservices Reference Architecture, highlighting the various subsystems needed to support microservices deployment and execution.

Introduction
The switch to microservices is a pressing need in web application development and delivery, and it is crucial for the success of enterprises today.

Of late, enterprises have been adopting technologies such as analytics, mobility, social media, IoT and smart embedded devices to change customer relationships, internal processes and value propositions. Microservices act as a bridge between these technologies, providing the building blocks for modern distributed enterprise systems and becoming one of the enablers of the enterprise's digital transformation journey.

Adopting microservices gives enterprises the agility, reliability, maintainability, scalability and deployability they need as part of the digital transformation process.

Many different architecture definition approaches for implementing microservices exist across the industry. Many of them are unique and specific to the needs of individual development teams. Many enterprises, without using the name, have already been leveraging APIs in ways that classify as microservices. This creates the need for a reference architecture for developing and delivering microservices-based applications that are consumed across the enterprise.

Numerous articles and blogs on microservices reference architectures, architecture principles and best practices already exist. This paper summarizes the purpose of a Microservices Reference Architecture, the drivers for adopting microservices, microservices architecture principles and a logical view of the Microservices Reference Architecture.

Drivers for Microservice Reference Architecture
Microservices are smaller in scope, determined by a focus on domain boundaries and consistent domain modelling, and require less code. In monolithic applications, components communicate in memory, whereas in microservices-based applications communication happens over the network. Because each service is small and independently deployable, software development and deployment become faster and more reliable.

With microservices, each service can scale independently to meet temporary traffic spikes, batch-processing windows and other business needs. Improved fault isolation contains service issues such as memory leaks or leaked database connections. The scalability of microservices complements the elasticity of cloud services, allowing the system to serve more customers simultaneously without interrupting service.

The following diagram represents a few key drivers for enterprises adopting microservices.

Microservices Architecture Principles
Below are a few key architectural principles for microservices:

Single Responsibility Principle: Each microservice must be completely responsible for a specific feature, a business function, or an aggregation of cohesive functionality.

Granular: Microservice granularity is contained within the intersection of a single functional domain, a single data domain and its immediate dependencies, a self-sufficient packaging and a technology domain.

Domain-Driven Design: Domain-driven design is an architectural principle in line with the object-oriented approach. It considers the business domain, its elements and behaviors, and the interactions between business domains.

Encapsulation: Each microservice encapsulates its internal implementation details, so that external systems consuming the service need not worry about its internals. Encapsulation reduces the complexity and enhances the flexibility of the system.

Loose Coupling: Deploying one microservice must require zero coordination with other microservices. Changes in one microservice should have zero or minimal impact on other services in the ecosystem.

Separation of Concerns: Develop microservices around distinct features with zero overlap with other functions. The main objective is to reduce the interaction between services so that they are highly cohesive and loosely coupled. Separating functionality across the wrong boundaries leads to tight coupling and increased complexity between services.

Language Neutral: Microservices are composed together to form a complex application, and they do not need to be written in the same programming language. For example, Java might be the right choice for one service, while Python might be better for another.

Hexagonal Architecture: A microservice exposes RESTful APIs for external communication, a message-broker interface for event notification and database adapters for persistence. This makes hexagonal (ports-and-adapters) architecture one of the most suitable styles for microservice development.
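The sketch below illustrates the ports-and-adapters idea for a single microservice: the domain core depends only on abstract ports, while REST, message-broker and database adapters plug in from the outside. The names (OrderService, OrderRepository, EventPublisher) are hypothetical and not taken from the reference architecture itself.

```python
# Illustrative ports-and-adapters sketch for one microservice.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount: float


class OrderRepository(ABC):            # outbound port (persistence)
    @abstractmethod
    def save(self, order: Order) -> None: ...


class EventPublisher(ABC):             # outbound port (messaging)
    @abstractmethod
    def publish(self, event_name: str, payload: dict) -> None: ...


class OrderService:                    # domain core, depends only on ports
    def __init__(self, repository: OrderRepository, events: EventPublisher):
        self._repository = repository
        self._events = events

    def place_order(self, order: Order) -> None:
        self._repository.save(order)
        self._events.publish("order_placed", {"order_id": order.order_id})


class InMemoryOrderRepository(OrderRepository):   # database adapter (stand-in)
    def __init__(self):
        self.orders = {}

    def save(self, order: Order) -> None:
        self.orders[order.order_id] = order


class LoggingEventPublisher(EventPublisher):      # message-broker adapter (stand-in)
    def publish(self, event_name: str, payload: dict) -> None:
        print(f"event={event_name} payload={payload}")


# A REST controller (inbound adapter) would call OrderService.place_order();
# swapping adapters does not touch the domain core.
service = OrderService(InMemoryOrderRepository(), LoggingEventPublisher())
service.place_order(Order("o-1", 49.99))
```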

Sizing of Microservices
While designing a microservices-based system, it is important to decide on the number and size of the individual microservices. There is no strict rule for the optimal size of a microservice; it depends on how the problem space is partitioned for a new application, or on how an existing monolithic application is split into individual microservices.

Also, microservices should be neither too large nor too small. Large services are hard to work with, hard to deploy, and take longer to start and stop. On the other hand, when a microservice is too small, the resource cost of deploying and operating it overshadows its utility.

Microservice granularity can also be determined based on business needs. Making services too granular, or requiring too many dependencies on other Microservices, can introduce latency.

Microservices allow teams to plan, develop, and deploy features of a system in the cloud without tight coordination. Microservice number and size should therefore be dictated by business and technical principles.

Microservices Reference Architecture
Microservices is an architectural style in which software systems or applications are composed of one or more independent and self-contained services. It is not a product, framework, or platform; it is a strategy for building large distributed systems out of services that are loosely coupled and deployed independently of one another.

The following guidelines should be adopted while designing the services of the Microservices Reference Architecture:

  • Lightweight: To facilitate smaller memory footprints and faster start-up times
  • Reactive: Applicable for services with concurrent loads or longer response times
  • Stateless: Services scale better and start faster as there is no state to be passivated on shutdown or activated on start-up (a minimal stateless service is sketched after this list)
  • Atomic: Performs the smallest business unit of work that can be done independently
  • Externalized Configuration: Externalize configuration in a config server so that it can be maintained in a hierarchical structure per environment
  • Consistent: Services should be written in a consistent style per the coding standards and naming-convention guidelines
  • Resilient: Services should handle exceptions arising from technical causes (connectivity, runtime) and business causes (invalid inputs) without crashing
  • Reporting: Expose usage statistics, number of times accessed, average response time, etc., for example via the JMX API
  • Versioned: Support multiple versions for different clients until all clients migrate to higher versions
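As a minimal sketch of several of these guidelines (lightweight, stateless, versioned, JSON over HTTP), the example below uses only the Python standard library. The /v1/health endpoint, port 8080 and response fields are illustrative assumptions, not part of the reference architecture.

```python
# Minimal stateless core service exposing JSON over HTTP (sketch).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class CoreServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/health":                     # versioned resource path
            payload = json.dumps({"status": "UP", "version": "v1"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_error(404, "unknown resource")


if __name__ == "__main__":
    # No state is kept between requests, so additional instances can be
    # started behind a load balancer without any coordination.
    HTTPServer(("0.0.0.0", 8080), CoreServiceHandler).serve_forever()
```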

Below is the logical view of the Microservices reference architecture.

Fig 2: Microservices Reference Architecture - Logical View

Various components of the reference architecture are described below:

Channels: Channels represent the various client-side or consumer applications that interact with the microservices.

Edge Server: API services or edge services reside on the edge server, or API gateway. Channels interact with the edge services, which decouples the microservices and keeps them channel agnostic.

The API gateway is the single entry point for all clients. It is responsible for aggregating data or, in some cases, acting as a simple routing layer for the appropriate services. The API gateway can, however, become a single point of failure. The API gateway is also used to coordinate communication between cloud and on-premise systems.

The following diagram shows the API gateway interaction patterns.
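A hedged sketch of the gateway's simple-routing role follows: incoming paths are mapped to backend service base URLs and proxied over HTTP. The route table, service hostnames and port are hypothetical choices for illustration only.

```python
# API gateway as a simple routing layer (sketch).
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTES = {
    "/orders": "http://orders-service:8080",       # hypothetical backends
    "/customers": "http://customers-service:8080",
}


class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, base_url in ROUTES.items():
            if self.path.startswith(prefix):
                try:
                    with urllib.request.urlopen(base_url + self.path, timeout=2) as resp:
                        body, status = resp.read(), resp.status
                except Exception:
                    # Degrade gracefully instead of letting the failure cascade.
                    body, status = b'{"error": "upstream unavailable"}', 502
                self.send_response(status)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "no route configured")


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), GatewayHandler).serve_forever()
```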

Load Balancer
A software-based load balancer is used for communication between microservices. It is configured per service for availability, scalability and reliability.

The diagram below shows the load-balancing scheme.

Decentralized load balancing is the appropriate mechanism for distributing requests among available microservice instances. Each microservice can have its own load balancer that handles requests for that microservice only, and the client is directly responsible for routing each request to an available instance.
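The snippet below sketches the client side of this scheme: each consumer keeps its own list of instances for a given microservice and rotates through them. The instance addresses are hypothetical.

```python
# Client-side round-robin load balancing (sketch).
import itertools


class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self) -> str:
        return next(self._cycle)


orders_balancer = RoundRobinBalancer(
    ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]
)
for _ in range(4):
    # The client itself picks the target instance for every request.
    print(orders_balancer.next_instance())
```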

API Service: API services are exposed either on the edge server or on the API gateway. They expose client-specific APIs and can also act as coarse-grained services that orchestrate across multiple microservices within or across bounded contexts. These services can interact with composite microservices or directly with core microservices.

API services can also communicate with the enterprise integration infrastructure (typically an ESB or MOM) to access any on-premise enterprise applications.

Composite Service: Composite services orchestrate across multiple core services. They communicate with other services using an event-sourcing model or orchestration.

Core Services: These services are the basic building blocks of the microservices architecture. They encapsulate an entity or an aggregate (fine grained) within a given bounded context. Best practice is to keep the granularity of these microservices fine grained.

Circuit Breaker: Fault tolerance ensures that when a failure occurs, the failed service does not adversely affect the entire system. Without proper mechanisms in place, errors and latencies trickle up to the calling clients, where they can exhaust limited resources. When cascading failures occur, overall system availability is significantly affected.

The three states of the circuit breaker are described below.

Closed State
When the service dependency is healthy and no issues are detected, the circuit breaker is in the closed state. All invocations pass through to the service.

Open State
The circuit breaker considers the following invocations as failed and factors them into the decision to open the circuit:

  • A request to the remote service times out
  • The thread pool and bounded task queue used to interact with the service dependency are at 100% capacity
  • The client library used to interact with the service dependency throws an exception

In the open state, the circuit breaker rejects invocations by either:

  • Throwing an exception
  • Returning a fallback output

Half Open State
When the circuit breaker is in the open state, it periodically lets one invocation through at a configurable interval. If that invocation succeeds, the circuit is closed again.
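The state machine above can be summarized in the hand-rolled sketch below. The failure threshold, reset timeout and fallback behaviour are arbitrary example values; a production system would normally rely on an established resilience library rather than this sketch.

```python
# Circuit breaker with closed / open / half-open states (sketch).
import time


class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, func, fallback=None):
        if self.state == "open":
            if time.time() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"          # let one trial invocation through
            else:
                return self._reject(fallback)
        try:
            result = func()
        except Exception:
            self._record_failure()
            return self._reject(fallback)
        self.failures, self.state = 0, "closed"    # success closes the circuit
        return result

    def _record_failure(self):
        self.failures += 1
        if self.state == "half-open" or self.failures >= self.failure_threshold:
            self.state, self.opened_at = "open", time.time()

    def _reject(self, fallback):
        if fallback is not None:
            return fallback                        # return a fallback output
        raise RuntimeError("circuit open: invocation rejected")   # or throw an exception


# Usage: breaker.call(lambda: remote_client.get_orders(), fallback=[])
```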

Cloud Config: This is the single source of configuration data for all other services in a microservices-based application. Each service keeps its configuration in a repository, which centralizes configuration across all environments. It decouples configuration from implementation, so configuration can be updated without touching any of the services. Every update to the configuration files in the repository is automatically propagated to the running instances.
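A hedged sketch of the client side of this pattern follows: the service pulls its externalized configuration from a central config server and falls back to local defaults when the server is unreachable. The CONFIG_SERVER_URL endpoint and JSON layout are assumptions for illustration; Spring Cloud Config is one concrete implementation of this idea.

```python
# Pulling externalized configuration from a config server (sketch).
import json
import os
import urllib.request

CONFIG_SERVER_URL = os.environ.get(
    "CONFIG_SERVER_URL", "http://config-server:8888/orders-service/prod"
)
DEFAULTS = {"db_pool_size": 10, "feature_flags": {}}


def load_config() -> dict:
    try:
        with urllib.request.urlopen(CONFIG_SERVER_URL, timeout=2) as resp:
            remote = json.loads(resp.read().decode("utf-8"))
    except Exception:
        remote = {}          # fall back to local defaults if the server is unreachable
    return {**DEFAULTS, **remote}


config = load_config()       # re-invoked periodically or on a refresh event to pick up changes
print(config["db_pool_size"])
```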

Service Discovery: In a microservices application, network locations are assigned to service instances dynamically. In addition, the set of service instances changes dynamically because of auto-scaling, failures and upgrades. Consequently, client code needs to use a service discovery mechanism.

There are two main service discovery patterns: Client-side discovery and Server-side discovery.

  • Client-Side Discovery Pattern: The client is responsible for determining the network locations of available service instances and for load balancing requests across them. The client queries a service registry, which is a database of available service instances, then uses a load-balancing algorithm to select one of the available instances and makes the request.
  • Server-Side Discovery Pattern: The client makes the request to a service via a load balancer. The load balancer queries the service registry and routes each request to an available service instance.
  • Service Registry: The service registry is a key part of service discovery. It is a database containing the network locations of service instances, and it needs to be highly available and up to date. Clients can cache network locations obtained from the service registry; however, that information eventually becomes out of date, and clients may then be unable to discover service instances.
  • Self-Registration Pattern: The service instance is responsible for registering and deregistering itself with the service registry. In addition, it sends heartbeat requests to prevent its registration from expiring (a combined sketch of self-registration and client-side lookup follows this list).
  • Third-Party Registration Pattern: In this case, another system component, known as the service registrar, handles registration. The registrar tracks changes to the set of running instances by either polling the deployment environment or subscribing to events, and it registers and deregisters the service instances.
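The sketch below combines self-registration (heartbeats keep an entry alive) with client-side lookup against the registry. The in-memory dictionary stands in for a real registry such as Eureka or Consul, and the TTL value and addresses are arbitrary examples.

```python
# Service registry with heartbeat-based expiry and client-side lookup (sketch).
import time


class ServiceRegistry:
    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._instances = {}        # (service_name, address) -> last heartbeat time

    def register(self, service_name, address):
        self._instances[(service_name, address)] = time.time()

    def heartbeat(self, service_name, address):
        self.register(service_name, address)       # refresh the expiry

    def deregister(self, service_name, address):
        self._instances.pop((service_name, address), None)

    def lookup(self, service_name):
        now = time.time()
        return [
            address
            for (name, address), seen in self._instances.items()
            if name == service_name and now - seen <= self.ttl
        ]


registry = ServiceRegistry()
registry.register("orders-service", "10.0.0.11:8080")    # self-registration at start-up
registry.heartbeat("orders-service", "10.0.0.11:8080")    # periodic heartbeat keeps it alive
print(registry.lookup("orders-service"))                  # client-side discovery before each call
```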

Messaging & Event Streams: Lightweight messaging platforms, such as AMQP-based brokers, are used to exchange messages between microservices within or across bounded contexts, or for event sourcing as part of choreography.
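As an in-process stand-in for such a broker, the sketch below shows how services can be choreographed through published events rather than direct calls. In a real deployment the bus would be an AMQP broker such as RabbitMQ; the topic and handler names here are hypothetical.

```python
# Event-driven choreography via a tiny in-memory event bus (sketch).
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
bus.subscribe("order_placed", lambda e: print(f"billing service charges order {e['order_id']}"))
bus.subscribe("order_placed", lambda e: print(f"shipping service schedules order {e['order_id']}"))
bus.publish("order_placed", {"order_id": "o-1"})
```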

Monitoring: Because microservices are distributed and heterogeneous in nature, it is critical to monitor and visualize them to make sure the software is reliable, available, and performs as expected. Monitoring typically involves collecting metrics from all of the systems involved and analyzing and visualizing call graphs. This becomes harder as application complexity (which depends on the number of microservices and their interactions in a call graph) grows. There are many commercial and open source tools for monitoring microservices.

Distributed Tracing: Distributed tracing shows how a request traverses the application, especially when you have no insight into the implementation of the microservice you are calling. Tracing tools introduce unique IDs for logging that are consistent across microservice calls, which makes it possible to follow a single request as it travels from one microservice to the next.
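The sketch below shows the core of the idea: the first service generates a trace ID, every downstream call forwards it in a header, and every log line includes it. The X-Trace-Id header name is a common convention used here as an assumption, not a standard mandated by the article.

```python
# Correlation-ID propagation for distributed tracing (sketch).
import uuid


def incoming_request(headers: dict) -> str:
    # Reuse the caller's trace ID if present, otherwise start a new trace.
    return headers.get("X-Trace-Id", str(uuid.uuid4()))


def call_downstream(service_name: str, trace_id: str) -> dict:
    print(f"trace={trace_id} calling {service_name}")    # consistent ID in every log line
    return {"X-Trace-Id": trace_id}                       # forwarded to the next hop


trace_id = incoming_request({})                           # edge service starts the trace
headers = call_downstream("orders-service", trace_id)
call_downstream("billing-service", headers["X-Trace-Id"])
```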

Security: With microservices, security becomes a challenge primarily because no middleware component handles security functionality. Instead, each service must handle security on its own, or in some cases the API layer can be made more intelligent to handle the security aspects of the application. Frameworks such as OAuth2 address this concern. There are multiple ways to configure security for microservices: by making the API gateway behave like a reverse proxy, or by securing each microservice using a security service (a backing service such as IAM) provided by the PaaS provider.

Backing Services: A PaaS provides services used by the cloud application, grouped into categories such as database, analytics, security and data warehouse. These services expose lightweight protocols (such as REST) and are consumed by the cloud application through binding. PaaS providers offer UI-based consoles to manage backing services.

On-Premise Integration: The API gateway helps mediate communication between cloud and on-premise applications. API services orchestrate calls that span cloud and on-premise systems by invoking the enterprise integration infrastructure to reach any on-premise applications.

Infrastructure: Infrastructure has two components: PaaS and IaaS. IaaS is the abstraction over the hardware and provides on-demand resource provisioning; resources can be scaled out or in based on usage patterns. PaaS is the cloud platform residing on top of IaaS. This layer provides the required support for cloud applications to deploy and run, offering various runtimes and many of the backing services a cloud application requires.

Conclusions
Microservices is not a product, framework, or platform. It is a strategy for building large enterprise distributed systems whose services are loosely coupled and deployed independently of one another. Microservices architecture can offer enterprises many advantages, from independent scalability of diverse application components to faster, easier software development and maintenance. Sizing the microservices is critical to designing good services. Open source technology solutions and organizational methods are leading the microservices market. As a result, microservices reduce vendor lock-in, eliminate long-term technology commitment and help teams choose the tools they need to meet IT and business goals.

In addition, a Microservices Reference Architecture should be built on industry-standard components such as Docker containers and support a wide range of languages: Java, PHP, Python, Node.js/JavaScript, Ruby and so on. Each microservice should expose JavaScript Object Notation (JSON) or Extensible Markup Language (XML) over HTTP to provide a REST API. These standards provide guidelines for how to describe, maintain, and retire microservices.

Finally, microservices-based system design is an ongoing story; it is not something that is done once and immediately. With the right people, processes, and tools, microservices can deliver faster development and deployment, easier maintenance, improved scalability, and freedom from long-term technology commitment.


Acknowledgements

The authors would like to thank Hari Kishan Burle and Raju Alluri of the Global Enterprise Architecture Group of Wipro Technologies for giving the required time and support in many ways in bringing up this article as part of the Global Enterprise Architecture Practice efforts.

Authors

Dr. Gopala Krishna Behara is a Lead Enterprise Architect in the SCA Practice division of Wipro. He has a total of 21 years of IT experience. He can be reached at [email protected]

Tirumala Khandrika is a Senior Architect in the SCA Practice division of Wipro. He has a total of 16 years of IT experience. He can be reached at [email protected]

Sridhar Chalasani is an Architect in the SCA Practice division of Wipro. He has a total of 12 years of IT experience. He can be reached at [email protected]

Disclaimer

The views expressed in this article are those of the authors, and Wipro does not subscribe to the substance, veracity or truthfulness of the said opinions.
