coherence.yml
yes, more YAML
Just like every other devops/infrastructure tool, we have a `.yml` file we're looking for you to put in the root of each repo you connect to Coherence. The file must be named `coherence.yml` and placed at the root of your repo (we don't support `coherence.yaml` - let us know if that's an issue...).
In this file, you'll set up the services that Coherence will deploy to your IDE and environments. You can have multiple instances of each service type, but the `name`, `repo_path`, and `url_path` of each one must be unique.
With this file, you'll get gitops, CI/CD, a cloud IDE, and a cloud-based CLI system, all managed for you!
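To make the shape concrete, here is a minimal sketch of a `coherence.yml`, assembled from the options documented below (service names, paths, and commands are illustrative placeholders):

```yaml
# coherence.yml at the repo root
frontend:               # the stanza key is the service name - must be unique
  type: frontend
  repo_path: frontend   # unique path in the repo; Dockerfile lives here
  url_path: /
  dev: ["yarn", "dev"]

backend:
  type: backend
  repo_path: backend
  url_path: /api/       # unique URL prefix routed to this service
  dev: ["run", "command"]
  prod: ["run", "command"]
```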
Service Types
- backend
- frontend
Service Config
In addition to `type`, you can configure the name of the service with the key of each service stanza. In the examples below, the name of each service matches its type, but this is not required.
Repo config
- `repo_path` tells Coherence where in the repo the service is located. A `Dockerfile` (it must be named `Dockerfile`) should be in the root of this directory.
- `url_path` configures the load balancer for the application to send traffic with this path prefix to the service.
- `build_path` - for `frontend` services, this is the path in the repo where the `build` command will place compiled assets that we should copy and serve from the CDN.
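As an illustration, these keys might combine in a frontend stanza like the following sketch (paths are hypothetical; note that the Full Example at the bottom of this page writes the build output key as `assets_path`):

```yaml
frontend:
  type: frontend
  repo_path: frontend   # Dockerfile expected at frontend/Dockerfile
  url_path: /           # load balancer routes this prefix here
  build_path: build     # where the build command writes assets for the CDN
```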
Commands
backend services
For `backend` services, the container built from the `Dockerfile` at the root of `repo_path` will be used as the container served by the container runtime (e.g. Cloud Run).
- `dev` - this command will be run in your container on a Development Workspace.
- `prod` - this command will be run when serving your container in a review or production environment on Google Cloud Run.
- `test` - defines the image and command used to run tests against this service. If the `image` is blank, we will run using the service's image. If both properties are not provided or blank, no test step will be generated in CI/CD.
- `seed` - this command will be used to place data into the database on workspaces and review environments. It should be idempotent, as it will be run on each pipeline execution or workspace creation.
- `migrate` - this command will be used to migrate your database schema. It is run at workspace startup as well as part of each CI pipeline.
- `compile` - if your app is written in a compiled language, this step will be run before we build your app image in order to generate compiled binaries. These will be placed into the repo folder (the default working directory of the command) at whatever path your build outputs them, and will be carried forward to subsequent steps (including the build step), where your `Dockerfile` can pick them up and use them. Alternatives to compiling like this are a multi-stage docker build, or running the compile as part of your `Dockerfile`; the approach you choose depends on your requirements.

If `seed` or `migrate` are not defined, the relevant steps will not be generated in CI. This is perfectly fine if it is the behavior you'd like.
`workers` will provision a private-nodes kubernetes cluster to run your workloads in your app's VPC. The service container will be run as one deployment per worker, with the command specified. You can scale replicas per environment in the Coherence UI (the environment's infra tab). Autoscaling configuration is coming soon. The special worker named `dev_workspace` will replace all workers on a workspace with the given command (to save resources in dev and to listen to multiple queues or run multiple jobs). If no `dev_workspace` worker is provided, all workers will be run.
`scheduled_tasks` will share the kubernetes cluster with workers. Coherence will create `CronJob` resources in the cluster at the schedule specified.
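For example (worker names and commands are placeholders; the schedule uses standard cron syntax):

```yaml
workers:
  - name: dev_workspace              # special: replaces all workers on a workspace
    command: ["worker", "dev", "command"]
  - name: default queue worker 1
    command: ["worker", "command"]
scheduled_tasks:
  - name: task 1
    command: ["sleep"]
    schedule: "* * * * *"            # becomes a CronJob schedule in the cluster
```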
frontend services
For `frontend` services, the container built from the `Dockerfile` at the root of `repo_path` will be used in Cloud IDEs (Workspaces) as the dev server for the service, and in CI build pipelines to build static assets to be served by the CDN.
- `build` - defines the command used to compile assets for production.
- `dev` - defines the command used to run the web server on Development Workspaces.
- `test` - defines the image and command used to run tests against this service. If the `image` is blank, we will run using the service's image. If both properties are not provided or blank, no test step will be generated in CI/CD.
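A frontend commands sketch, adapted from the Full Example at the bottom of this page:

```yaml
frontend:
  type: frontend
  repo_path: frontend
  build: ["yarn", "build"]   # compiles production assets for the CDN
  dev: ["yarn", "dev"]       # dev server on Development Workspaces
  test:
    image: "foo/bar:123"     # leave blank to use the service's image
    command: ["foo", "bar"]
```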
System config
For `backend` services:

- Resources: `cpu` and `memory` can be configured in the `system` block, and will be used in dev, review, and production environments for workers, tasks, and backend web processes.
- `system.dev.port` will be used on workspaces to determine the port to forward through the proxy for that service. In production, your service should look for a `PORT` environment variable, and accept connections from host `0.0.0.0` on that port (this is the standard expectation of Cloud Run).
- `platform_settings` will control high-level behavior in deployed infrastructure. For Cloud Run, `min_scale` defaults to 0 (which means "cold start" boot times) and you can set it to a higher value to ensure this many replicas are always running. You can also change `throttle_cpu` to false to keep CPU always allocated. Keep in mind the cost of these decisions. You can change the runtime generation in Cloud Run between `gen1` and `gen2`.

For `frontend` services:

- `system.dev.port` will be used on workspaces to determine the port to forward through the proxy for that service.
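Putting the above together, a `system` block for a backend service might look like this sketch (values are illustrative):

```yaml
system:
  dev:
    port: 8912              # port forwarded through the workspace proxy
  memory: 2G
  cpu: 1
  platform_settings:
    min_scale: 1            # keep one replica warm to avoid cold starts
    throttle_cpu: false     # keep CPU always allocated (costs more)
    execution_environment: gen2
```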
For all services
- `local_packages` defines any paths in your repo that are installed to during container building, and which need to be copied to the host volume when mounting files into that container, for example in the cloud IDE or in CI/CD. The most common use case here is an app that installs to `node_modules`.
- The port `3000` is reserved by Coherence for the IDE service. The ports `80` and `8088` are reserved for internal Coherence use.
Resources
Config
For `backend` services, resources can be configured. Supported types are `database` and `cache`, which will be used to provision the appropriate cloud resources (on GCP, that means Cloud SQL and Memorystore, respectively). Multiple resources of each type can be configured, but their names must be distinct. If you have an existing database you'd like to use, you can point at it (rather than creating a new one) with YAML like:
use_existing:
  project_type: review
  instance_name: EXISTING INSTANCE NAME
  manage_databases: true
In this configuration, `manage_databases` determines whether Coherence will create and destroy databases on the instance to match the environments in your project. For review instances, it should generally be `true`, while in production setting it to `false` is a safeguard against accidentally deleting the database.
Supported `project_types` are `production` and `review`, and correspond to the `review_project_id` and `production_project_id` configured in the Coherence dashboard for your application.
Types
- Backend resources support `database` and `cache` types. Under the hood these map to Cloud SQL and Memorystore in GCP. The `engine` and `version` attributes will accept any valid values for those platforms.
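For example, a Postgres database plus a Redis cache (the `engine` and `version` values must be valid for Cloud SQL and Memorystore, respectively):

```yaml
resources:
  - name: db1
    type: database
    engine: postgres   # any engine Cloud SQL supports
    version: 13
  - name: redis
    type: cache
    engine: redis      # Memorystore engine
    version: 4
```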
Integration Testing
- You can run integration tests as part of your build process in Google Cloud Build. Include the image of your test container and a command to run it, and we will include your tests as a build step. Any environment configuration variables that your tests need (`CYPRESS_RECORD_KEY`, for example) can be set using our config UI in Coherence. `COHERENCE_BASE_URL` will be set as an environment variable that describes the URL of the Coherence environment you are running in. Your tests can make requests to this URL.
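The integration test stanza from the Full Example at the bottom of this page looks like this:

```yaml
integration_test:
  type: integration_test
  image: "cypress/included:9.4.1"
  command: ["cypress", "run", "--record"]
  # CYPRESS_RECORD_KEY and similar are set via the Coherence config UI;
  # COHERENCE_BASE_URL is injected and points at this environment's URL
```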
Build Settings
- You can set the `platform_settings` property for `machine_type`, using the values here for Google Cloud Build, to configure the machine type for CI pipelines generated by Coherence.
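For example, as in the Full Example below:

```yaml
build_settings:
  platform_settings:
    machine_type: "N1_HIGHCPU_8"  # a Google Cloud Build machine type
```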
Full Example
frontend:
  type: frontend
  index_file_name: index.html
  url_path: /
  repo_path: frontend
  assets_path: build
  local_packages: ["node_modules"]
  build: ["yarn", "build"]
  test:
    image: "foo/bar:123"
    command: ["foo", "bar"]
  dev: ["yarn", "dev"]
  system:
    dev:
      port: 8910

backend:
  type: backend
  url_path: /api/
  repo_path: backend
  migration: ["migration", "command"]
  seed: ["seed", "command"]
  dev: ["run", "command"]
  test:
    image: "foo/bar:123"
    command: ["foo", "bar"]
  prod: ["run", "command"]
  compile:
    image: "foo/bar:1.2.3"
    command: ["foo", "bar"]
    entrypoint: "foo"
  workers:
    - name: dev_workspace
      command: ["worker", "dev", "command"]
    - name: default queue worker 1
      command: ["worker", "command"]
    - name: default queue worker 2
      command: ["worker", "command"]
  # see https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs#schedule for the schedule element
  scheduled_tasks:
    - name: task 1
      command: ["sleep"]
      schedule: "* * * * *"
  resources:
    - name: db1
      engine: postgres
      version: 13
      type: database
    - name: redis
      engine: redis
      version: 4
      type: cache
  system:
    dev:
      port: 8912
    memory: 2G
    cpu: 1
    platform_settings:
      min_scale: 1
      throttle_cpu: false
      execution_environment: gen2

build_settings:
  platform_settings:
    machine_type: "N1_HIGHCPU_8"

integration_test:
  type: integration_test
  command: ["cypress", "run", "--record"]
  image: "cypress/included:9.4.1"
Rails Example
server:
  type: backend
  url_path: /
  repo_path: backend
  migration: ["rails", "server:migrate"]
  dev: ["rails", "server", "-p", "$PORT"]
  prod: ["rails", "server", "-e", "production", "-p", "$PORT"]
  system:
    dev:
      port: 4000
    memory: 2G
    cpu: 1
  resources:
    - name: db1
      engine: mysql
      version: 8.0
      type: database
    - name: redis
      engine: redis
      version: 4
      type: cache