gRPC

A curious case of gRPC

Does the world need another RPC framework? Yes, it does. To clarify why, let me rephrase the original question: does the world need a highly performant, language-neutral, platform-neutral, binary-safe, real-time, efficient, low-memory-footprint RPC framework? Yes, and that framework is gRPC. It is developed by Google and hosted under the umbrella of the Cloud Native Computing Foundation, and in this post I go over gRPC in detail.

What is it all about?

RPCs (remote procedure calls) are a form of inter-process communication (IPC). They are used particularly in distributed computing to invoke operations on processes that may be geographically far apart. RPCs are an old story going back to the ‘60s, and up until the advent of gRPC the existing implementations were a collection of hits and misses.

The g before the RPC

Let me tell you why gRPC is so good. It epitomises the idea of scalable inter-process communication by providing a fully fledged, modernised approach to an old problem. It lowers the barrier to reliable operations across geographically dispersed processes by building on HTTP/2, which, among other things, offers server push, multiplexing, pipelining and header compression. With streaming calls it also breaks out of the traditional synchronous request-response model of RPCs, making it suitable for IoT and microservice architectures.

The price to pay, of course, is the increased complexity of some operations in the case of network partitions or failures, and the technical details of HTTP/2 also need to be considered, which may be a burden for some people.

The figure below clarifies a little bit further how gRPC works:

[Figure: gRPC clients calling methods on a gRPC server through generated stubs. Source: grpc.io]

The gRPC server implements the service interface and runs an RPC server to handle client calls to its service methods. Clients can be written in the same language or in any of the other supported languages. A client uses a stub that exposes the same service methods and simply delegates each call to the actual method on the server side; the delegation medium is a protobuf-encoded request-response cycle.

gRPC so far has native support for 10 languages: C++, Java, Go, Python, Ruby, Node.js, Android Java, C#, Objective-C, and PHP.

To give you an example of how it works, we will take a simple echo service and expose it via a gRPC interface. We are going to use the grpc-gateway to generate a JSON/HTTP reverse proxy that acts as the client-facing front end for us.

It only takes three steps:

  1. Define your function: Write the function you want to expose to your clients.
  2. Define your protocol interface: Write a protocol buffers definition for that function and generate the code for your language of choice.
  3. Finish with boilerplate client and server code: You’ll write mostly boilerplate code to start the server. On the client side you only need to open a gRPC channel, create a stub, and call the provided function. Everything else is handled by the protocol.

So, let’s get cracking.

We define our simple echo service by writing the proto definitions:

 // File service.proto
syntax = "proto3";
option go_package = "echo";

// Echo Service
//
// Echo Service API consists of a single service which returns
// a message.
package grpc.examples.echo;

import "google/api/annotations.proto";

// SimpleMessage represents a simple message sent to the Echo service.
message SimpleMessage {
    // Id represents the message identifier.
    string id = 1;
}

// Echo service responds to incoming echo requests.
service EchoService {
    // Echo method receives a simple message and returns it.
    //
    // The message posted as the id parameter will also be
    // returned.
    rpc Echo(SimpleMessage) returns (SimpleMessage) {
        option (google.api.http) = {
            post: "/v1/echo/{id}"
        };
    }
    // EchoBody method receives a simple message and returns it.
    rpc EchoBody(SimpleMessage) returns (SimpleMessage) {
        option (google.api.http) = {
            post: "/v1/echo_body"
            body: "*"
        };
    }
}

We have defined two endpoints. One is Echo, exposed at /v1/echo/{id}, and the other is EchoBody, exposed at /v1/echo_body. The first one echoes back the ID you pass as a path parameter, and the second one echoes back the ID you pass in the POST payload.

Now that we have our definitions, let’s generate the proxy and the server stub code:

Generate Server Code:

 protoc -I/usr/local/include -I. \
  -I$GOPATH/src \
  -I$GOPATH/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
  --go_out=plugins=grpc:. \
  ./service.proto

Generate Client Proxy Code:

 protoc -I/usr/local/include -I. \
  -I$GOPATH/src \
  -I$GOPATH/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
  --grpc-gateway_out=logtostderr=true:. \
  ./service.proto

The commands will create the files service.pb.go and service.pb.gw.go.
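
To give you a feel for what protoc produced, the generated service.pb.go contains, roughly, the message type plus a client stub interface and a server interface. The shapes below are illustrative only; the exact output depends on your protoc-gen-go and gRPC plugin versions:

 // Rough, illustrative shape of the generated code in service.pb.go.
package proto

import (
	"golang.org/x/net/context"

	"google.golang.org/grpc"
)

// SimpleMessage mirrors the message defined in service.proto.
type SimpleMessage struct {
	Id string
}

// EchoServiceClient is the stub interface that clients call.
type EchoServiceClient interface {
	Echo(ctx context.Context, in *SimpleMessage, opts ...grpc.CallOption) (*SimpleMessage, error)
	EchoBody(ctx context.Context, in *SimpleMessage, opts ...grpc.CallOption) (*SimpleMessage, error)
}

// EchoServiceServer is the interface our own server code must implement.
type EchoServiceServer interface {
	Echo(context.Context, *SimpleMessage) (*SimpleMessage, error)
	EchoBody(context.Context, *SimpleMessage) (*SimpleMessage, error)
}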

Now that we have the Go code generated for both the service and the proxy, we need to create two programs: a gateway (the piece in client/main.go that registers the generated handlers and serves HTTP), and a server that implements our echo service.

First, let’s create the gateway:

 // File client/main.go
package main

import (
	"flag"
	"net/http"

	"github.com/golang/glog"
	"golang.org/x/net/context"
	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"google.golang.org/grpc"

	gw "github.com/theodesp/golang-grpc-example/src/proto"
)

var (
	echoEndpoint = flag.String("echo_endpoint", "localhost:9090", "endpoint of EchoService")
)

func run() error {
	ctx := context.Background()
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	mux := runtime.NewServeMux()
	opts := []grpc.DialOption{grpc.WithInsecure()}
	err := gw.RegisterEchoServiceHandlerFromEndpoint(ctx, mux, *echoEndpoint, opts)
	if err != nil {
		return err
	}

	return http.ListenAndServe(":8080", mux)
}

func main() {
	flag.Parse()
	defer glog.Flush()

	if err := run(); err != nil {
		glog.Fatal(err)
	}
}

Most of the code is boilerplate. The most important part is the call to gw.RegisterEchoServiceHandlerFromEndpoint, which dials the gRPC endpoint and attaches all of the service’s HTTP handlers to the mux served by the HTTP server.
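
The gateway is only needed for JSON/HTTP access. Once the server (shown next) is running, a plain gRPC client can call the service directly through the generated stub. Here is a minimal sketch, assuming the same generated package and the insecure local connection used throughout this post:

 // A sketch of a direct gRPC client that bypasses the HTTP gateway.
package main

import (
	"log"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	examples "github.com/theodesp/golang-grpc-example/src/proto"
)

func main() {
	// Dial the gRPC server started in server/main.go.
	conn, err := grpc.Dial("localhost:9090", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The stub generated from service.proto delegates the call to the server.
	client := examples.NewEchoServiceClient(conn)
	reply, err := client.Echo(context.Background(), &examples.SimpleMessage{Id: "1"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("echoed: %s", reply.Id)
}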

Now, the code for the server is similar:

 // file server/main.go
package main

import (
	"flag"
	"net"

	"github.com/golang/glog"
	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	examples "github.com/theodesp/golang-grpc-example/src/proto"
)

type echoServer struct{}

func NewEchoServer() examples.EchoServiceServer {
	return new(echoServer)
}

func (s *echoServer) Echo(ctx context.Context, msg *examples.SimpleMessage) (*examples.SimpleMessage, error) {
	glog.Info(msg)
	return msg, nil
}

func (s *echoServer) EchoBody(ctx context.Context, msg *examples.SimpleMessage) (*examples.SimpleMessage, error) {
	glog.Info(msg)
	// Attach response metadata; the gateway surfaces these
	// key/value pairs to HTTP clients as Grpc-Metadata-* headers.
	grpc.SendHeader(ctx, metadata.New(map[string]string{
		"foo": "foo1",
		"bar": "bar1",
	}))
	// Attach trailing metadata, surfaced as Grpc-Trailer-* trailers.
	grpc.SetTrailer(ctx, metadata.New(map[string]string{
		"foo": "foo2",
		"bar": "bar2",
	}))
	return msg, nil
}

func Run() error {
	l, err := net.Listen("tcp", ":9090")
	if err != nil {
		return err
	}
	s := grpc.NewServer()
	examples.RegisterEchoServiceServer(s, NewEchoServer())

	// Serve blocks until the listener fails or the server is stopped.
	return s.Serve(l)
}

func main() {
	flag.Parse()
	defer glog.Flush()

	if err := Run(); err != nil {
		glog.Fatal(err)
	}
}
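
To run everything, start the gRPC server and the gateway in two separate terminals. The paths below assume the repository layout implied by the file comments above, and -logtostderr is only there so that glog prints to the console:

 $> go run server/main.go -logtostderr
$> go run client/main.go -logtostderr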

And now we are ready to use our service:

 $> curl -X POST http://localhost:8080/v1/echo/1   
{"id":"1"}%                              

$> curl -X POST http://localhost:8080/v1/echo/2
{"id":"2"}%                              

$> curl http://localhost:8080/v1/echo_body -d '{"id":"2"}' -v
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> POST /v1/echo_body HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 200 OK
< Content-Type: application/json
< Grpc-Metadata-Bar: bar1
< Grpc-Metadata-Foo: foo1
< Trailer: Grpc-Trailer-Foo
< Trailer: Grpc-Trailer-Bar
< Date: Mon, 22 Jan 2018 19:34:50 GMT
< Transfer-Encoding: chunked
< 
* Connection #0 to host localhost left intact
{"id":"2"}%

Conclusion

In this article, we created a gRPC server in Go and made RPC calls to it through a grpc-gateway reverse proxy, all driven by a protocol buffer definition. We also saw that gRPC is not only important, but should be one of the first things you learn if you are interested in microservices engineering. It is no small matter that the CNCF has accepted it into its list of hosted projects.

Resources

https://grpc.io/


Theo Despoudis is a Senior Software Engineer, a consultant and an experienced mentor. He has a keen interest in Open Source Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration. Follow him on Twitter @nerdokto. Theo is a regular contributor at Fixate IO.

