Written by Karthik Ruthiravelu

Are you building your application with a microservices architecture? Does it have many services, and are you wondering how to make the network calls between them faster? Well, then gRPC is what you are looking for.

Let’s see what gRPC is, why it can be the solution for faster inter-service communication, and what the pros and cons are of using gRPC in your next distributed system.

What is gRPC?

gRPC is a Remote Procedure Call (RPC) framework developed by Google. It uses HTTP/2 as the underlying protocol for communication and Protocol Buffers for exchanging messages.

Well, that’s what every article about gRPC says, but why does it use HTTP/2 and Protocol Buffers? Let’s divide and conquer these questions to understand better.

Why does gRPC use HTTP/2?

Well, the simple answer is that HTTP/2 is faster, but how is it faster? Let’s compare HTTP/1.1 (used by typical REST calls) with HTTP/2 to understand this better.

In a typical REST call over HTTP/1.1, a single request from the client goes through three phases:

  1. Open a TCP connection.
  2. Send the request to the server.
  3. Wait for the server to respond.

HTTP/2, in contrast, supports multiplexing: with just one TCP connection created, the client and server can exchange many requests and responses concurrently. This matters because opening a new TCP connection for every request is costly in terms of time.
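The effect of reusing one connection can be sketched with a back-of-the-envelope cost model (the numbers below are made up for illustration, not a benchmark):

```python
# Toy latency model: assume a TCP handshake costs one network
# round trip (RTT) and each request/response costs one more.
RTT_MS = 50       # hypothetical round-trip time in milliseconds
N_REQUESTS = 10

# HTTP/1.1 without connection reuse: pay the handshake for every request.
http1_total = N_REQUESTS * (RTT_MS + RTT_MS)

# HTTP/2: one handshake, then all requests multiplexed concurrently
# over the same connection (modelled here as a single round trip).
http2_total = RTT_MS + RTT_MS

print(http1_total, http2_total)  # → 1000 100
```

Even in this crude model, the per-request handshake dominates as the number of calls grows, which is exactly what multiplexing avoids.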

If you want to see a live demo of the above, open the following links in your Chrome browser while keeping the Network tab active in Chrome DevTools:

HTTP 1.1: https://http1.golang.org/gophertiles?latency=200

HTTP 2.0 : https://http2.golang.org/gophertiles?latency=200

Now, if you observe the network requests made here: with HTTP/1.1, you will see the browser create six TCP connections to make multiple requests in parallel, whereas with HTTP/2 you will see only one connection, and the page loads blazing fast.

The other thing that makes a difference in HTTP/2 is that it frames the information it exchanges in a binary format and compresses headers, whereas HTTP/1.1 uses plain text. This significantly reduces the overhead of every request.

Now that we know why HTTP/2 helps make gRPC calls faster, let’s see how the use of Protocol Buffers adds to the above benefits.

What are Protocol Buffers?

Protocol Buffers is both a serialization format and an Interface Definition Language (IDL). To put it simply, it is a language for defining the DTOs (messages) and services that will be part of our gRPC microservice.

All you have to do is create a file with the .proto extension and compile it with the protoc compiler. Code generators are available as language-specific plugins, so a single global protoc installation can generate DTOs and service stubs for any supported client language.

Let’s quickly take a look at an example proto file definition for a user-creation process in a system:

syntax = "proto3";

// User DTO
message User {
  int32 user_id = 1;
  string name = 2;
  repeated string addresses = 3;
  bool is_active = 4;
  UserType user_type = 5;
}

// UserType enum used in the User DTO
enum UserType {
  REGULAR = 0;
  PREMIUM = 1;
}

// Service definition that accepts a User DTO and returns a User DTO
service UserService {
  rpc createUser(User) returns (User);
}

In the above proto file, we have a User DTO, which we will use to create a user via the createUser method of UserService. When the protoc compiler reads this proto file, it generates a User message class, a UserType enum, and service stubs plus a base class for UserService.

Now that we have the language-specific code generated, we can use it to make gRPC requests from the client to the server. One thing to note here is that the generated classes (or the proto files themselves) should be shared across both the client and the server, for example by packaging them as a library. This way, you need not maintain separate DTOs and service definitions on each side.

In object-oriented languages like Java, we can extend the generated service base class to provide the business logic for user creation.

What are the numbers against each property in the proto definition?

These are field numbers: they are what is actually used as keys to serialize and deserialize a message on the wire, while the property names exist only for human readability.
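On the wire, each field is prefixed with a tiny key (“tag”) derived from its field number and wire type. A minimal sketch of that computation, following the protobuf encoding rules:

```python
# A Protocol Buffers field key ("tag") on the wire is:
#   tag = (field_number << 3) | wire_type
# packed as a varint. Common wire types: 0 = varint (ints, bools,
# enums), 2 = length-delimited (strings, nested messages).

def field_tag(field_number: int, wire_type: int) -> int:
    return (field_number << 3) | wire_type

# Field 2 ("name", a string) is keyed by the single byte 0x12,
# no matter how long the property name itself is.
print(hex(field_tag(2, 2)))  # → 0x12
# Field 4 ("is_active", a bool, encoded as a varint):
print(hex(field_tag(4, 0)))  # → 0x20
```

So the field name "name" never travels over the network; only the one-byte tag does.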

So, have you guessed how Protocol Buffers already optimizes the size of the information?

If your answer is no, just have a look at this comparison between JSON and Protobuf for the same user-creation request:

// JSON user payload
{
  "userId": 123,
  "name": "John Doe",
  "addresses": ["no 1, 3rd street", "chennai"],
  "isActive": true,
  "userType": "REGULAR"
}
// Protobuf user payload (conceptual view; the actual wire format is binary)
{
  1: 123,
  2: "John Doe",
  3: ["no 1, 3rd street", "chennai"],
  4: true,
  5: 0
}

If you look at the JSON and Protobuf payloads above, you can see a significant reduction in the amount of information exchanged. This is one of the things that reduces the time taken for each request.
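The size difference can be made concrete with a toy encoder. This is only a hand-rolled approximation of the protobuf wire format for this one payload (in real code you would use the generated classes), but it follows the tag/varint scheme described above:

```python
import json

payload = {
    "userId": 123,
    "name": "John Doe",
    "addresses": ["no 1, 3rd street", "chennai"],
    "isActive": True,
    "userType": "REGULAR",
}
json_size = len(json.dumps(payload).encode("utf-8"))

def encode_varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_str_field(field_number: int, value: str) -> bytes:
    """Length-delimited field: tag, byte length, then the bytes."""
    data = value.encode("utf-8")
    return encode_varint((field_number << 3) | 2) + encode_varint(len(data)) + data

proto = bytearray()
proto += encode_varint((1 << 3) | 0) + encode_varint(123)  # user_id
proto += encode_str_field(2, "John Doe")                   # name
proto += encode_str_field(3, "no 1, 3rd street")           # addresses[0]
proto += encode_str_field(3, "chennai")                    # addresses[1]
proto += encode_varint((4 << 3) | 0) + encode_varint(1)    # is_active = true
proto += encode_varint((5 << 3) | 0) + encode_varint(0)    # user_type = REGULAR

print(json_size, len(proto))
```

Running this, the binary message comes out at a fraction of the JSON size, because field names, quotes, and braces never hit the wire.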

You might be wondering how to make a service request from a client. All you have to do is specify where your server is hosted and then call the service through the APIs provided by gRPC. Since the proto definitions are shared between the client and the server, this works without any URLs associated with your services, unlike REST APIs; a remote call simply looks like a method call on the client side. Now, this has both a positive and a negative side to it.

The positive side of not having URLs associated with individual requests is ease of use: you never have to construct specific API endpoint URLs on the client side.

The negative side is that you cannot directly make a public request to a gRPC service without having its proto definitions, which creates a very tight coupling between the client side and the server side.

That pretty much sums up how gRPC and Protocol Buffers work together to facilitate blazing-fast inter-service communication, along with their pros and cons.

If you would like to know more about the syntax for writing Protocol Buffers and the language-specific implementations, please refer to the references below.

Thanks for taking the time to read this article, please feel free to add comments and ask questions.

References:

  1. https://www.udemy.com/course/grpc-the-complete-guide-for-java-developers/
  2. https://grpc.io/docs/what-is-grpc/introduction/
  3. https://grpc.io/docs/what-is-grpc/core-concepts/

It’s never too late to learn! Keep Learning.