In the previous section of the tutorial, you learned how work pools bridge the Prefect orchestration layer and the infrastructure on which flow runs execute, and how that infrastructure can be provisioned dynamically.
You saw how you can transition from persistent infrastructure to dynamic infrastructure by using flow.deploy instead of flow.serve.
Work pools that rely on client-side workers take this a step further by enabling you to run flows in your own Docker containers, Kubernetes clusters, and serverless environments such as AWS ECS, Azure Container Instances, and GCP Cloud Run.
The architecture of a worker-based work pool deployment can be summarized with the following diagram:
Notice above that the worker is in charge of provisioning the flow run infrastructure.
In the context of this tutorial, that flow run infrastructure is an ephemeral Docker container created to host each flow run.
Different worker types create different types of flow run infrastructure.
Now that we’ve reviewed the concepts of a work pool and worker, let’s create them so that you can deploy your tutorial flow and later execute it using the Prefect API.
For this tutorial you will create a Docker type work pool via the CLI.
Using the Docker work pool type means that all work sent to this work pool will run within a dedicated Docker container using a Docker client available to the worker.
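Run the following command to create a work pool named my-docker-pool of type docker:

prefect work-pool create --type docker my-docker-pool

You can confirm that the work pool was created by running prefect work-pool ls in the same terminal and checking that my-docker-pool appears in the output.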
Other work pool types
There are work pool types for serverless computing environments such as AWS ECS, Azure Container Instances, Google Cloud Run, and Vertex AI.
Kubernetes is also a popular work pool type.
These options are expanded upon in various How-to Guides.
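For example, creating a Kubernetes work pool uses the same CLI command with a different type (the pool name my-k8s-pool here is just an illustrative placeholder):

prefect work-pool create --type kubernetes my-k8s-pool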
A worker is a lightweight polling process that kicks off scheduled flow runs on a specific type of infrastructure (such as Docker).
To start a worker on your local machine, open a new terminal and confirm that your virtual environment has prefect installed.
Run the following command in this new terminal to start the worker:
prefect worker start --pool my-docker-pool
You should see the worker start.
It's now polling the Prefect API to check for any scheduled flow runs it should pick up and then submit for execution.
You’ll see your new worker listed in the UI under the Workers tab of the Work Pools page with a recent last polled date.
You should also be able to see a Ready status indicator on your work pool - progress!
You will need to keep this terminal session active for the worker to continue to pick up jobs.
Since you are running this worker locally, the worker will terminate if you close the terminal.
Therefore, in a production setting this worker should run as a daemonized or managed process.
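As a rough sketch, on a Linux host you might run the worker under systemd with a unit file along these lines (the install path is an assumption; adjust ExecStart to point at the prefect executable in your own virtual environment):

# /etc/systemd/system/prefect-worker.service -- example only
[Unit]
Description=Prefect Docker worker
After=network.target docker.service

[Service]
ExecStart=/opt/prefect/.venv/bin/prefect worker start --pool my-docker-pool
Restart=always

[Install]
WantedBy=multi-user.target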
Now that you’ve set up your work pool and worker, you have everything you need to kick off and execute flow runs of flows deployed to this work pool.
Let's deploy your tutorial flow to my-docker-pool.
Now it’s time to put it all together.
We're going to update our repo_info.py file to build a Docker image and update our deployment so that our worker can execute the flow.
The updates that you need to make to repo_info.py are:
1. Change flow.serve to flow.deploy.
2. Tell flow.deploy which work pool to deploy to.
3. Tell flow.deploy the name to use for the Docker image that will be built.
For this tutorial, your Docker worker is running on your machine, so you don't need to push the image built by flow.deploy to a registry. When your worker runs on a remote machine, you will need to push the image to a registry that the worker can access. In that case, remove the push=False argument, include your registry name in the image name, and make sure you've authenticated with the Docker CLI.
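Putting those updates together, your updated repo_info.py should look something like the sketch below. The flow body is the GitHub-statistics flow from earlier in the tutorial, and the deployment name my-first-deployment and image name my-first-deployment-image are assumptions; substitute your own if they differ:

import httpx
from prefect import flow


@flow(log_prints=True)
def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
    """Fetch and print a few statistics about a GitHub repository."""
    url = f"https://api.github.com/repos/{repo_name}"
    response = httpx.get(url)
    response.raise_for_status()
    repo = response.json()
    print(f"{repo_name} repository statistics:")
    print(f"Stars: {repo['stargazers_count']}")
    print(f"Forks: {repo['forks_count']}")


if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-first-deployment",  # deployment name (an assumption; pick your own)
        work_pool_name="my-docker-pool",  # the Docker work pool created above
        image="my-first-deployment-image:tutorial",  # name:tag for the image to build
        push=False,  # don't push to a registry; the worker runs locally
    )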
Now that you've updated your script, you can run it to deploy your flow to the work pool:
python repo_info.py
Prefect will build a custom Docker image containing your workflow code that the worker can use to dynamically spawn Docker containers whenever this workflow needs to run.
What Dockerfile?
In this example, Prefect generates a Dockerfile for you that builds an image based on one of Prefect's published images. The generated Dockerfile copies the current directory into the Docker image and installs any dependencies listed in a requirements.txt file.
If you want to use a custom Dockerfile, you can specify the path to the Dockerfile using the DeploymentImage class:
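For example, continuing from the script above (the Dockerfile path here is a placeholder for wherever your custom Dockerfile lives):

from prefect.deployments import DeploymentImage

if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-first-deployment",
        work_pool_name="my-docker-pool",
        image=DeploymentImage(
            name="my-first-deployment-image",
            tag="tutorial",
            dockerfile="Dockerfile",  # path to your custom Dockerfile
        ),
        push=False,
    )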
If you need to make updates to your deployment, modify your script and rerun it. You'll need to make one such update now: specify a value for job_variables so that your Docker worker can successfully execute scheduled runs for this flow. See the example below.
The job_variables section allows you to fine-tune the infrastructure settings for a specific deployment. These values override default values in the specified work pool's base job template.
When testing images locally without pushing them to a registry, it's recommended to set the image_pull_policy job variable to Never; this avoids errors like docker.errors.NotFound, which occur when a worker tries to pull an image that exists only locally. For production workflows, push your images to a remote registry for better reliability and accessibility.
Here's how you can quickly set the image_pull_policy to Never for this tutorial deployment without affecting the default value set on your work pool:
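Again assuming the names used above, the deploy call becomes:

if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-first-deployment",
        work_pool_name="my-docker-pool",
        image="my-first-deployment-image:tutorial",
        push=False,
        job_variables={"image_pull_policy": "Never"},  # override for local testing only
    )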