After eight months and a lot of experimentation, I'll add my opinion. I hope it saves you some time.
Choose your structure first
There are several endpoint offerings available on Google Cloud. All of them can be used for JSON/REST APIs. This was not immediately obvious to me. "Cloud Endpoints" is a very high-level phrase covering the development, deployment, and management of APIs across multiple Google Cloud backends.
The point here is that after deciding to use Cloud Endpoints, you still need to decide which backend technologies will serve your API. The documentation feels a little hidden away, but I strongly recommend starting with the Google Cloud Endpoints doc.
You can choose between:
- OpenAPI Specification
- Endpoints Frameworks
- gRPC
Choose your implementation second
Within each API structure, there is a choice of cloud implementations that your API (service) can run on:
OpenAPI Specification - for JSON/REST APIs implemented on:
- Google App Engine flexible environment
- Google Compute Engine
- Google Kubernetes Engine
- Kubernetes
Endpoints Frameworks - for JSON/REST APIs implemented on:
- Google App Engine standard environment with Java
- Google App Engine standard environment with Python
gRPC - for gRPC APIs implemented on:
- Google Compute Engine
- Google Kubernetes Engine
- Kubernetes
When I posted the question here, I was using Endpoints Frameworks running on the Google App Engine standard environment with Python. I then migrated my API (service) to gRPC on Google Compute Engine.
The observant among you may notice that both the OpenAPI Specification and Endpoints Frameworks can be used for JSON/REST APIs, while gRPC only exposes a gRPC API. So how did I port my REST API from Endpoints Frameworks to gRPC? The answer is HTTP/JSON transcoding to gRPC (which I learned along the way, and which was not immediately clear to me). So don't rule out gRPC just because you want REST/HTTP.
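As a sketch of what transcoding looks like (the service, message, and route names below are hypothetical), a gRPC method gets a REST mapping through a `google.api.http` annotation in the `.proto` file:

```protobuf
syntax = "proto3";

import "google/api/annotations.proto";

// Hypothetical service: the proxy transcodes the HTTP route
// below into a call to the gRPC method GetItem.
service Inventory {
  rpc GetItem(GetItemRequest) returns (Item) {
    option (google.api.http) = {
      // GET /v1/items/42 -> GetItem(item_id: "42")
      get: "/v1/items/{item_id}"
    };
  }
}

message GetItemRequest {
  string item_id = 1;
}

message Item {
  string item_id = 1;
  string name = 2;
}
```

With an annotation like this in place, the proxy can accept plain HTTP/JSON from clients and forward gRPC to the backend, so the same service serves both styles of client.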
Answer
So how does this relate to my initial question?
The fact that I was trying to convert between .proto files and gRPC annotations at all meant that I had taken a wrong turn along the way.
If you want to write an application using plain .proto files, choose gRPC on Compute Engine. If you need it to be a REST API, that can be done, but you will need to add the ESP (Extensible Service Proxy) to your backend configuration. It's pretty much an NGINX server set up as a reverse proxy. The only drawback here is that you will need some Docker knowledge to get the ESP (proxy) and your gRPC server communicating (Docker networking).
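As an illustration of that setup, here is a deployment sketch: the ESP and the gRPC server run as two containers on a shared Docker network. The backend image, service name, and ports are placeholders; check the Endpoints documentation for the exact flags for your ESP version.

```shell
# Create a network so the proxy can reach the backend by name.
docker network create esp_net

# Hypothetical gRPC backend listening on port 8000.
docker run --detach --name=grpc-server --net=esp_net my-grpc-image

# ESP: accepts HTTP/JSON on 8080 and forwards gRPC to grpc-server:8000.
docker run --detach --name=esp --net=esp_net -p 8080:8080 \
  gcr.io/endpoints-release/endpoints-runtime:1 \
  --service=my-api.endpoints.my-project.cloud.goog \
  --rollout_strategy=managed \
  --http_port=8080 \
  --backend=grpc://grpc-server:8000
```

The Docker networking trip-up I mentioned is exactly this: both containers must share a network, and the `--backend` address must use the backend container's name, not `localhost`.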
If your code is already on App Engine and you want to expose it as a REST API with minimal effort while still getting good API management features, choose Endpoints Frameworks. Warning: I moved away from this because it was prohibitively expensive (I was being billed around $100 per month).
If you want to avoid .proto files altogether, go with the OpenAPI Specification.
Finally, if you want to offer programmatic integration, client libraries, or a microservice, then really do consider gRPC. It is easy to remove the ESP (proxy) and run the gRPC server on almost any machine (as long as the protocol buffer runtime is installed).
I ultimately settled on gRPC on Compute Engine with Docker. I also have an ESP to provide HTTP transcoding to gRPC and vice versa. I like this approach for several reasons:
- You will learn a lot: Docker, Docker Networking, NGINX configuration, protocol buffers, ESP (Cloud Proxy), gRPC servers.
- The service (core business) logic can be written with plain gRPC. This allows the service to run on any machine without a web server. Your business logic is the server :)
- Protocol buffers / gRPC are excellent for isolating business logic as a service... or a microservice. They are also good for providing well-defined interfaces and libraries.
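To make the last two points concrete, here is a minimal sketch (class and method names are hypothetical) of business logic kept free of any web server or gRPC dependency; a gRPC servicer would simply delegate to it:

```python
# Plain business logic: no web server, no gRPC imports.
# A gRPC servicer (or a CLI, or a unit test) just calls these methods.
class InventoryLogic:
    def __init__(self):
        self._items = {}

    def add_item(self, item_id: str, name: str) -> None:
        self._items[item_id] = name

    def get_item(self, item_id: str) -> str:
        # Raise on missing items; the servicer layer would translate
        # this into a gRPC NOT_FOUND status for the client.
        if item_id not in self._items:
            raise KeyError(item_id)
        return self._items[item_id]


# The generated-code wrapper would look roughly like (sketch only):
#   class InventoryServicer(inventory_pb2_grpc.InventoryServicer):
#       def GetItem(self, request, context):
#           name = logic.get_item(request.item_id)
#           return inventory_pb2.Item(item_id=request.item_id, name=name)
```

Because the logic class has no server dependencies, it runs (and is testable) on any machine, which is what makes "your business logic is the server" work in practice.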
Avoid these mistakes
Implementing the first framework/architecture you find. If I could start again, I would not choose Endpoints Frameworks. It's expensive and uses annotations rather than .proto files, which, IMO, makes the code harder to port.
Read the Always Free usage limits before deciding on a framework and implementation. Endpoints Frameworks uses backend App Engine instances, which have almost no free quota. Confusingly, frontend App Engine instances have a very generous free quota.
Consider local development. Cloud Endpoints local development servers are not officially supported (at least they were not at the time of my question). Conversely, there is a whole page on "Running the Extensible Service Proxy Locally".