Cloud Conversion, part 1: Cloud Endpoints API
This series of articles is a set of dispatches from our Google Cloud Platform (GCP) “lab”, where we tinker, experiment and run feasibility studies on the wide range of GCP technology offerings for upcoming customer cases.
We are always looking for cost-effective and efficient ways to build services that meet customers’ business needs in a no-ops manner, where “fire and forget” deployment is a solid reality and no longer just a slide in marketing material. Going from bare metal to the cloud has never been this smooth.
While looking for new ways to deliver high-availability backends for a BigQuery loader setup I was building, I wanted to stretch my legs on a technology we have not yet used in production. I started hacking away at the Cloud Endpoints API, which, among other nice things, gives your mobile and microservice APIs easy scalability and strong performance.
Building a streaming-based BigQuery data loader service
A good thing about the Cloud Endpoints API is that it also supports constructing your API using gRPC. Since I was building a streaming-based BigQuery data loader service, gRPC was the logical choice. If you have ever built such a service, you know it needs to process its operations faster than the events are generated, or the data loading flow gets congested.
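To make that concrete, here is a minimal sketch of the streaming-insert core of such a loader, using the official @google-cloud/bigquery Node.js client. The dataset and table names are made up for illustration:

```typescript
// Minimal sketch of the streaming-insert path of a BigQuery loader.
// Dataset and table names below are hypothetical.
import {BigQuery} from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Each batch of incoming events is appended via the streaming-insert API,
// so rows become queryable within seconds instead of waiting for a batch
// load job. This call must stay faster than the event rate.
async function loadEvents(rows: object[]): Promise<void> {
  await bigquery
    .dataset('events')      // hypothetical dataset
    .table('raw_events')    // hypothetical table
    .insert(rows);
}
```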
Cloud Endpoints APIs can be defined using an OpenAPI (see: Swagger) specification or as a gRPC service definition. Using OpenAPI, you can provide a REST API for your service and run it almost anywhere in Google’s Cloud offering. For testing out OpenAPI, my natural choice was App Engine: its PaaS model requires minimal setup and boilerplate to get a project going.
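For a sense of what that definition looks like, here is a sketch of a minimal Endpoints OpenAPI spec. The service host and path are placeholders for illustration:

```yaml
# Sketch of a minimal Cloud Endpoints OpenAPI (Swagger 2.0) spec.
# Host, title, and path are placeholders.
swagger: "2.0"
info:
  title: "Event loader API"
  version: "1.0.0"
host: "event-loader.endpoints.my-project.cloud.goog"
paths:
  /v1/events:
    post:
      operationId: "insertEvent"
      consumes: ["application/json"]
      produces: ["application/json"]
      responses:
        200:
          description: "Event accepted for loading."
```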
Getting the service up and running was a breeze. However, REST was not the way I wanted to go here, so further exploration into gRPC was needed.
Introducing gRPC and the Extensible Service Proxy
As for the gRPC API, classic App Engine does not do HTTP/2, which ruled it out: HTTP/2 is the carrier protocol for gRPC and enables its most powerful features, so I couldn’t manage without it. Instead, I went with my other favorite combo: Node.js on Kubernetes.
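A gRPC Endpoints API starts from a protocol buffer service definition. Here is a sketch of what one for the loader might look like; the package, message, and service names are hypothetical:

```protobuf
// Sketch of a gRPC service definition for the loader.
// All names here are hypothetical.
syntax = "proto3";

package loader.v1;

message Event {
  string id = 1;
  string payload_json = 2;
}

message InsertSummary {
  int64 rows_inserted = 1;
}

service EventLoader {
  // Client-streaming RPC: the client streams events over a single HTTP/2
  // connection; the server replies with a summary when the stream ends.
  rpc InsertEvents(stream Event) returns (InsertSummary);
}
```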
With Node.js and Kubernetes it’s possible to use something called the Extensible Service Proxy (ESP). It transcodes REST calls to gRPC on the fly, letting you expose a REST API for your gRPC service with no REST-specific code in your backend. What a lovely mechanism! It comes in handy for granting access to clients where business requirements or technology choices rule out using a gRPC client.
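On Kubernetes, ESP typically runs as a sidecar container in front of the gRPC backend (the REST-to-gRPC mapping itself is driven by google.api.http annotations in the service definition). A deployment sketch, reusing the hypothetical service name from above and with placeholder images and ports, could look roughly like this:

```yaml
# Sketch of an ESP sidecar in front of the Node.js gRPC backend.
# Service name, image names, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-loader
spec:
  replicas: 2
  selector:
    matchLabels:
      app: event-loader
  template:
    metadata:
      labels:
        app: event-loader
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args:
        - "--http_port=8080"                 # REST/JSON in, transcoded...
        - "--backend=grpc://127.0.0.1:8000"  # ...to gRPC for the backend
        - "--service=event-loader.endpoints.my-project.cloud.goog"
        - "--rollout_strategy=managed"
        ports:
        - containerPort: 8080
      - name: loader
        image: gcr.io/my-project/event-loader:latest  # hypothetical image
        ports:
        - containerPort: 8000
```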
Another important feature of Cloud Endpoints’ Extensible Service Proxy is that it also supports authentication via JSON Web Tokens (JWT). This means it can easily be configured to allow access only from authenticated users (or, in the case of microservices, other service accounts), relieving your backend code of that duty.
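In the OpenAPI spec this is expressed with the x-google-* security extensions. A sketch for a service-account caller, with placeholder issuer, JWKS URI, and audience values, looks roughly like this:

```yaml
# Sketch of JWT authentication in an Endpoints spec.
# Issuer, JWKS URI, and audience are placeholders.
securityDefinitions:
  service_account_jwt:
    type: "oauth2"
    flow: "implicit"
    authorizationUrl: ""
    x-google-issuer: "loader-client@my-project.iam.gserviceaccount.com"
    x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/loader-client@my-project.iam.gserviceaccount.com"
    x-google-audiences: "event-loader.endpoints.my-project.cloud.goog"
# Require the token on every method; ESP rejects unauthenticated calls
# before they ever reach the backend.
security:
  - service_account_jwt: []
```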
Lessons learned
What would I use this for? Any high-availability backend, really. Personally, I would implement the API ‘frontend’ as a blazing-fast gRPC/Node.js setup. Then I’d implement my ‘backend’ microservices in, say, Python or Go, and deploy them on App Engine or Kubernetes. They would do the heavy lifting for the business logic and scale dynamically to meet the processing load. These microservices would be addressed through Google’s extremely low-latency ingress networking, and could therefore also be implemented in REST/JSON if desired.
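As a sketch of how one microservice would talk to that gRPC ‘frontend’, here is a client using @grpc/grpc-js and @grpc/proto-loader. The proto path, service address, and method mirror the hypothetical EventLoader definition above:

```typescript
// Sketch of a microservice calling the hypothetical EventLoader service.
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDef = protoLoader.loadSync('event_loader.proto');
const loaderPkg = grpc.loadPackageDefinition(packageDef) as any;

// Inside the cluster, insecure channel credentials are common;
// anything crossing the ingress would use TLS instead.
const client = new loaderPkg.loader.v1.EventLoader(
  'event-loader:8000',
  grpc.credentials.createInsecure()
);

// Client-streaming call: write events, then end the stream and wait
// for the server's summary in the callback.
const call = client.insertEvents((err: Error | null, summary: any) => {
  if (err) throw err;
  console.log(`rows inserted: ${summary.rowsInserted}`);
});
call.write({id: 'evt-1', payloadJson: '{"source":"demo"}'});
call.end();
```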
With everything up and running within a few days, it is easy to conclude that Cloud Endpoints and gRPC will be an integral part of our future cloud service development. I see them as an important part of the future for anyone building data-heavy microservices.
Further reading
https://cloud.google.com/endpoints/docs/architecture-overview
https://cloud.google.com/appengine/docs/
https://www.grpc.io/