Why you should consider using gRPC services over HTTP APIs

What is gRPC?

gRPC - high-performance universal RPC framework

gRPC is a modern RPC framework that can run in any environment. It can efficiently connect services in and across data centers, with pluggable support for load balancing, tracing, health checking, and authentication. It is also applicable in the last mile of distributed computing, connecting devices, mobile applications, and browsers to backend services.
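
To make the "pluggable" part concrete, here is a minimal sketch of a Go client that dials a gRPC server and calls the standard health checking service (grpc.health.v1) that ships with grpc-go. The address localhost:50051 is an assumption for illustration; TLS credentials, tracing interceptors, and load-balancing policies would plug in as additional dial options.

```go
// Minimal sketch: dial a gRPC server and query the standard health
// checking service. Assumes a server is listening on localhost:50051
// (an illustrative address) with the health service registered.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Plaintext for brevity; swap in TLS or other credentials via
	// grpc.WithTransportCredentials. (Use grpc.Dial on older grpc-go.)
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// An empty service name asks for the overall health of the server.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{Service: ""})
	if err != nil {
		log.Fatalf("health check: %v", err)
	}
	log.Printf("server status: %s", resp.GetStatus())
}
```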

What is Protobuf?

Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.

Protocol buffers currently support generated code in Java, Python, Objective-C, and C++. With the proto3 language version, you can also work with Dart, Go, Ruby, and C#.

How do protocol buffers work?

gRPC messages are serialized using Protobuf, an efficient binary message format. Protobuf serializes very quickly on both the server and the client, and it produces small message payloads, which is important in limited-bandwidth scenarios such as mobile apps.
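
As a rough illustration of the workflow, the sketch below defines a small message in a hypothetical greet.proto file (shown in the comment), compiles it with protoc-gen-go into an assumed examplepb package, and serializes it with the Go protobuf runtime. The message, field names, and package path are illustrative assumptions, not something from the original article.

```go
// Minimal sketch of Protobuf serialization in Go. It assumes this
// hypothetical definition has been compiled with protoc + protoc-gen-go:
//
//   // greet.proto
//   syntax = "proto3";
//   package example;
//   option go_package = "example.com/examplepb";
//
//   message HelloRequest {
//     string name        = 1;  // field numbers, not names, go on the wire
//     int32  retry_count = 2;
//   }
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"

	examplepb "example.com/examplepb" // hypothetical generated package
)

func main() {
	req := &examplepb.HelloRequest{Name: "world", RetryCount: 3}

	// Binary encoding: compact because only field numbers and values are
	// written, never field names.
	data, err := proto.Marshal(req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("encoded %d bytes\n", len(data))

	// The receiving side decodes back into a typed struct.
	var decoded examplepb.HelloRequest
	if err := proto.Unmarshal(data, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded.GetName(), decoded.GetRetryCount())
}
```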

gRPC Concepts and recommended scenarios

gRPC is well suited to the following main usage scenarios:

  • Microservices – gRPC is designed for low-latency, high-throughput communication. gRPC is great for lightweight microservices where efficiency is critical.

  • Point-to-point real-time communication – gRPC has excellent support for bi-directional streaming. gRPC services can push messages in real time without polling (see the streaming sketch after this list).

  • Polyglot computing environments – gRPC tooling supports all popular development languages, making gRPC a good choice for multi-language environments.

  • Network constrained environments – gRPC messages are serialized with Protobuf, a lightweight message format. A gRPC message is typically significantly smaller than an equivalent JSON message.

  • The main usage scenarios:

    • Efficiently connecting polyglot services in a microservices architecture.

      • What are microservices?

        Microservices - also known as the microservice architecture - is an architectural style that structures an application as a collection of services that are

        1. Highly maintainable and testable

        2. Loosely coupled

        3. Independently deployable

        4. Organized around business capabilities.

        The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.

    • Connecting mobile devices and browser clients to backend services

    • Generating efficient client libraries

  • Core Features that make gRPC awesome:

    • Idiomatic client libraries in 10 languages

    • Highly efficient on wire and with a simple service definition framework

    • Bi-directional streaming over an HTTP/2-based transport.

      • The primary goals for the HTTP/2 protocol are

        • to reduce latency by enabling full request and response multiplexing,

        • minimize protocol overhead via efficient compression of HTTP header fields, and

        • add support for request prioritization and server push.

      • To implement these requirements, there is a large supporting cast of other protocol enhancements, such as new flow control, error handling, and upgrade mechanisms, but these are the most important features that every web developer should understand and leverage in their applications.

    • Pluggable auth, tracing, load balancing and health checking
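
To make the bi-directional streaming scenario above concrete, here is a minimal sketch of a streaming gRPC service in Go. The chat.proto definition (shown in the comment), the chatpb package, and all message and method names are illustrative assumptions; the generated code is assumed to come from protoc-gen-go and protoc-gen-go-grpc.

```go
// Hypothetical service definition this sketch assumes:
//
//   // chat.proto
//   syntax = "proto3";
//   package chat;
//   option go_package = "example.com/chatpb";
//
//   message ChatMessage {
//     string user = 1;
//     string text = 2;
//   }
//
//   service ChatService {
//     // Bi-directional stream: client and server exchange messages freely
//     // over a single HTTP/2 connection, no polling required.
//     rpc Chat(stream ChatMessage) returns (stream ChatMessage);
//   }
package main

import (
	"io"
	"log"
	"net"

	"google.golang.org/grpc"

	chatpb "example.com/chatpb" // hypothetical generated package
)

// server implements the generated ChatServiceServer interface.
type server struct {
	chatpb.UnimplementedChatServiceServer
}

// Chat echoes every incoming message back to the caller as it arrives,
// illustrating push-style, bi-directional streaming.
func (s *server) Chat(stream chatpb.ChatService_ChatServer) error {
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return nil // client closed its send direction
		}
		if err != nil {
			return err
		}
		if err := stream.Send(msg); err != nil {
			return err
		}
	}
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	s := grpc.NewServer()
	chatpb.RegisterChatServiceServer(s, &server{})
	log.Fatal(s.Serve(lis))
}
```

Each client holds a single HTTP/2 connection over which both directions flow, so the server can push a message to the client at any time.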

gRPC in Other Projects

  • gRPC Motivation and Design Principles

  • Join the active gRPC community

  • gRPC ecosystem

  • Developer documentation for gRPC

  • gRPC in ASP.NET Core 3

  • Cloud Endpoints for gRPC

  • Transcoding HTTP/JSON to gRPC

  • API Design Guide – this guide applies to both REST APIs and RPC APIs, with a specific focus on gRPC APIs.

  • gRPC to AWS Lambda: Is it Possible?

  • gRPC + AWS: some gotchas


Happy Birthday, gRPC!

Copyright gRPC Blog. All rights for the picture remain with the artist.


Cloud-native applications introduce numerous challenges; migrating to a microservices architecture is no easy feat. All of a sudden, there are exponentially more services to monitor, numerous API surfaces to secure, and a plethora of traffic to manage between services.

You and your team need to stay ahead of the game. Enter Istio, an open source tool for connecting and managing microservices that is becoming an industry-leading service mesh for Kubernetes.



What is Istio?



Istio: Connect, secure, control, and observe services.


Cloud platforms provide a wealth of benefits for the organizations that use them. However, there’s no denying that adopting the cloud can put strains on DevOps teams. Developers must use microservices to architect for portability, meanwhile operators are managing extremely large hybrid and multi-cloud deployments. Istio lets you connect, secure, control, and observe services.

At a high level, Istio helps reduce the complexity of these deployments, and eases the strain on your development teams. It is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

What is a Service Mesh?

Istio addresses the challenges developers and operators face as monolithic applications transition towards a distributed microservice architecture. To see how, it helps to take a more detailed look at Istio’s service mesh.

The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.


Istio Core Features

Traffic Management

  • load balancing

  • transport layer encryption

Performance and Scalability

Security

  • mutual service-to-service authentication

Policies and Telemetry

  • application telemetry

Istio provides these capabilities while requiring minimal, if any, changes to the code of individual services.



#technology #redhat #istio #cloudnative #servicemesh #k8s