Processes

The Processes page shows your application’s runtime processes; here, you can add new processes, edit the resources for existing processes, and define your own runtime commands.

Web process

The web process runs your application. When you add an application, if you do not specify the Start command for the web process, Kinsta attempts to detect it automatically during the first deployment. For example, the start command for a Node.js application may be npm start or yarn start.
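For example, npm start runs whatever command is defined as the start script in your package.json. A minimal sketch, assuming a hypothetical server.js entry point:

{
  "name": "my-app",
  "scripts": {
    "start": "node server.js"
  }
}

If the detected command is npm start, this is what ultimately runs your web process.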

You cannot remove the web process, and you can only have one web process per application.

Edit the Web Process

You can edit the web process within Processes > Update process.

Update your application's web process.

The following fields are available:

  • Name: Your web process name.
  • Custom start command: This is the start command for your web process. If you left this blank when adding the application, this shows the command that was automatically detected to run it.
  • Healthcheck path: Enter your healthcheck path. If your application includes a healthcheck, there is zero downtime between deployments: when the application deploys or redeploys, the old pods continue to run until the new pods are ready. Healthcheck monitoring is also active during the application's runtime; Kinsta checks the healthcheck path every 10 seconds, and if the check fails to respond three times in a row, it restarts the pods. For an example of a healthcheck endpoint, see the sketch after this list.
  • Horizontal autoscaling: If you enable horizontal autoscaling, you must define the minimum and maximum number of instances the web process can use. When autoscaling is enabled and CPU usage on the current pod(s) reaches 80% of the available CPU resources, the instance count automatically increases by one, up to the maximum you've set. If CPU usage decreases and the current number of pods is no longer needed, the instance count is reduced accordingly, but it never drops below the minimum you've set.
  • Instance count: The number of instances required, up to a maximum of 20. Each instance represents one pod, and the instances all use the same pod size. You cannot define a different pod size for each instance.
  • Instance size: This determines the CPU and RAM dedicated to the process.
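As an illustration of a healthcheck path, a Node.js (Express) application could expose a lightweight endpoint that responds while the app is able to serve requests; the route name /healthz, the port, and the Express setup are assumptions for this sketch, not Kinsta requirements:

const express = require('express');
const app = express();

// Lightweight healthcheck endpoint: responds with 200 while the app can serve requests.
app.get('/healthz', (req, res) => res.sendStatus(200));

app.get('/', (req, res) => res.send('Hello from the web process'));

// Listen on the port provided to the web process (8080 is illustrative).
app.listen(process.env.PORT || 8080);

You would then enter /healthz as the Healthcheck path.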

Background Worker

A background worker is a process that runs in the background, separate from the main application, and is inaccessible from the internet. Using a background worker for ongoing tasks like processing large data sets keeps these tasks separate from the main application and helps to maintain a good user experience.

This type of process isn't meant to be run as a one-time job that finishes after a certain amount of time. If a background worker exits after completing its job, the pod shuts down, restarts itself, and runs the process again. For a process that finishes after completing its job, use a cron job process instead.
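As a rough sketch of the difference, a background worker is usually a script that loops indefinitely (for example, polling a queue) rather than exiting when its work is done. Everything in this example, including the file name worker.js, is illustrative:

// worker.js - a long-running background worker that never exits on its own.
async function processNextBatch() {
  // Illustrative placeholder: fetch and process pending jobs from a queue or database.
}

async function main() {
  while (true) {
    await processNextBatch();
    // Wait a few seconds before checking for more work.
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}

main();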

Add a Background Worker Process

You can add a background worker within Processes. While there is no limit to the number of background worker processes you can add, each process requires at least one pod to run.

To add a new background worker, click Create process > Background worker and complete the fields as follows:

Create a background worker process.
  • Name: The process name. By default, this is populated with three random words. 
  • Custom start command: The command required to start the process, for example, npm run [process] (see the package.json sketch after this list).
  • Horizontal autoscaling: If you enable horizontal autoscaling, you must define the minimum and maximum number of instances the background process can use. When autoscaling is enabled and CPU usage on the current pod(s) reaches 80% of the available CPU resources, the instance count automatically increases by one, up to the maximum you've set. If CPU usage decreases and the current number of pods is no longer needed, the instance count is reduced accordingly, but it never drops below the minimum you've set.
  • Instance count: The number of instances required, up to a maximum of 20. Each instance represents one pod, and the instances all use the same pod size. You cannot define a different pod size for each instance.
  • Instance size: This determines the CPU and RAM dedicated to the process.
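For instance, if a worker script like the one sketched above lives in worker.js, a custom start command of npm run worker would map to a scripts entry in package.json; the names here are assumptions for illustration:

{
  "scripts": {
    "worker": "node worker.js"
  }
}

Running npm run worker then starts node worker.js as the background worker process.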

Click Create to finish creating the new process. The additional process costs are added to your monthly invoice and are prorated for the first month. For example, if you add a background worker with the S1 instance (0.5 CPU / 1 GB RAM) on January 20th, you’ll be charged for 11 days in January: $20 / 31 * 11 = $7.10.

You can change the details of any process at any time, including the instance size (vertical scaling) and the number of instances running simultaneously (horizontal scaling). To learn more, refer to Scalability.

Cron Job Process

A cron job allows you to schedule a process at a specific interval for your application. This lets you automate repetitive work, like sending reports or performing maintenance, on a schedule without a continuously running pod.

A cron job process is similar to a background worker, but it only launches based on the configured timing and shuts down after finishing the required operation.
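As a minimal sketch of that behavior, a cron job's start command typically points at a script that does its work once and then exits; the file name report.js and its contents are purely illustrative:

// report.js - a task that runs once per invocation and then exits.
async function generateReport() {
  // Illustrative placeholder: gather data and send the report.
  console.log('Report generated at', new Date().toISOString());
}

generateReport()
  .then(() => process.exit(0))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });

The cron job's custom start command could then be something like node report.js, and the pod shuts down once the script exits.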

Add a Cron Job Process

You can add a cron job within Processes. While there is no limit to the number of cron job processes you can add, each process requires at least one pod to run.

To add a new cron job, click Create process > Cron and complete the fields as follows:

Create a cron job process.
  • Name: The process name. By default, this is populated with three random words. 
  • Custom start command: The command required to start the process, for example, npm run [process].
  • Cron expression: The cron expression determines when and how often the cron job runs. It is made up of the following 5 fields (see the annotated layout after this list):
    • Minute (0-59)
    • Hour (0-23)
    • Day of the month (1-31)
    • Month (1-12)
    • Day of the week (0-6)

    There are many online resources, such as Cronitor, that can help you format your expression. The following are some common examples:

    • To run the cron job every minute, use * * * * *
    • To run the cron job every 30 minutes, use */30 * * * *. Note that */30 runs the process every 30 minutes (12:00, 12:30, and so on); if you use 30 without the */, it only runs at minute 30 of each hour (12:30, 13:30, and so on).
    • To run the cron job every hour, use 0 * * * *
    • To run the cron job every day at 12:00 AM, use 0 0 * * *
    • To run the cron job at 12:00 AM on Fridays only, use 0 0 * * 5
    • To run the cron job at 12:00 AM on the first day of every month, use 0 0 1 * *
  • Instance size: This determines the CPU and RAM dedicated to the process.
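For reference, the five fields are read from left to right as follows:

┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of the month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of the week (0-6, Sunday = 0)
│ │ │ │ │
* * * * *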

Click Create to finish creating the new process. The additional process costs are added to your monthly invoice and are prorated for the first month. For example, if you add a cron job with the S1 instance (0.5 CPU / 1 GB RAM) on January 20th, you'll be charged for 11 days in January: $20 / 31 * 11 = $7.10.

You can change the details of any process at any time, including the instance size (vertical scaling). You cannot add persistent storage or horizontal scaling to cron jobs.

Defining Processes in a Procfile

A Procfile defines processes from your application's code and should be committed to your repository. A Procfile contains one process per line in the following format:

process_name: command

For example, to run a Laravel application, you might want to use the following:

web: php artisan serve --host 0.0.0.0 --port 8080

If you are using a Procfile, you will need to define a process named web to ensure the container will fulfill web requests.
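For example, a Procfile for a Node.js application that defines both the required web process and a background worker might look like this (the worker script is an assumption for illustration):

web: npm start
worker: node worker.js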

Scaling Application Resources

You can change the pod size of any process (vertical scaling) and the number of pods that run at the same time (horizontal scaling) for the web process and background workers. Any changes you make, except changing the name, automatically trigger the application’s rollout process.

  • Vertical scaling is great for giving pods more power to complete resource-intensive tasks.
  • Horizontal scaling is great for resilience and load balancing for applications that process many requests. For example, you could run three versions of the same pod. The underlying technology routes requests to one of the three pods, effectively distributing the load between them. If one pod becomes unstable, requests will route to the other two until the third pod is healthy again.

You can change the details of any process, including the Pod size, at any time. If your application is stateless (no persistent storage), you can enable automatic horizontal scaling for the web process or background worker. This lets you set a minimum and maximum number of instances (up to 10) that the process can scale between as needed. To learn more about changing pod size and other scaling options, see Scalability.

Update Build Process Resources

To change the resources used for the build process, click Settings > Build > Update resource, choose the required Build resources, and click Update resource.
