
Building Async Python Services with Starlette

Aug 5, 2020 • 5 Minute Read

Introduction

Python's WSGI web frameworks, such as Flask, are often enough to get the job done when building simple, synchronous APIs. But what if you need to scale the backend of your app? Suppose, for example, that you serve predictions from a machine learning model whose computational capacity can grow. The API layer sitting on top of that model needs to scale along with it. In these situations, synchronous WSGI frameworks may not be enough. Starlette to the rescue!

Starlette is a lightweight ASGI web framework that can run completely asynchronously. As such, Starlette can handle requests at scale, solving the problem described above. In the sections to come, you will learn the following:

  • How to get up and running with Starlette
  • How to add routes and serve static files
  • How to scale your service by making it asynchronous

Let's get started!

Installation

Starlette requires Python 3.6 or greater and is available to install using pip.

To install Starlette using pip, you can run the following command:

    pip install starlette

If you are inside of a virtual environment and wish to install via a requirements.txt file, you can do so like this:

    # requirements.txt
    starlette==0.13.6

    pip install -r requirements.txt

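The code samples below are served with uvicorn, an ASGI server, which is installed separately:

    pip install uvicorn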
Once these libraries are installed, you can begin using the framework. As you will see, Starlette provides a number of importable components. Here are a few of the most important ones:

    # app.py

    from starlette.applications import Starlette
    from starlette.routing import Route

Creating a Starlette Service

It is very straightforward to create a Starlette service or API. The code below demonstrates how to instantiate a Starlette app and then create routes for it. Note that Starlette lets you mix synchronous and asynchronous handlers: because request.json() is a coroutine, the model_predict handler must be declared with async def and use await, even in this first version.

    from starlette.applications import Starlette
    from starlette.routing import Route
    from starlette.responses import PlainTextResponse
    from starlette.responses import JSONResponse
    import uvicorn

    from my_model import predict


    def index(request):
        return PlainTextResponse("My Index Page!")

    def model_stats(request):
        return JSONResponse({'stats': [1, 0, 2, 3]})

    # request.json() is a coroutine, so this handler must be async
    async def model_predict(request):
        prediction_req = await request.json()
        prediction = predict(prediction_req)
        return JSONResponse(prediction)


    routes = [
        Route('/', index),
        Route('/stats', model_stats),
        Route('/predict', model_predict, methods=['POST'])
    ]

    app = Starlette(debug=True, routes=routes)

    if __name__ == "__main__":
        uvicorn.run(app, host='0.0.0.0', port=8000)

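With the code above saved as app.py, you can smoke-test the routes without even starting a server by using Starlette's TestClient (in the 0.13.x releases it is backed by the requests package, so make sure that is installed). Here is a minimal sketch:

    # test_app.py
    from starlette.testclient import TestClient

    from app import app

    client = TestClient(app)

    # Exercise the plain-text index route
    response = client.get('/')
    assert response.text == "My Index Page!"

    # Exercise the JSON stats route
    response = client.get('/stats')
    assert response.json() == {'stats': [1, 0, 2, 3]}
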
To create a route that serves static files, just use Mount like this:

    from starlette.routing import Mount
    from starlette.staticfiles import StaticFiles

    routes = [
        Route('/', index),
        Route('/stats', model_stats),
        Route('/predict', model_predict, methods=['POST']),
        Mount('/media', app=StaticFiles(directory='media'), name='media')
    ]
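With this mount in place, a file saved at media/logo.png is available at http://localhost:8000/media/logo.png. Note that in this version of Starlette, StaticFiles depends on the aiofiles package, so you may need to pip install aiofiles as well.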

Making Your Service Asynchronous

This is cool, but can you scale the service further by making every route run asynchronously? Python 3.5 introduced the async/await syntax (PEP 492), which works hand in hand with the asyncio module in the standard library. Using this syntax, you can create completely non-blocking API routes.
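
To see why non-blocking routes matter, consider this toy handler (hypothetical, not part of the service above). Awaiting asyncio.sleep suspends only this coroutine, so the event loop can keep serving other requests; a blocking call such as time.sleep(1) would stall the entire loop:

    import asyncio

    from starlette.responses import PlainTextResponse

    async def slow_endpoint(request):
        # Simulate slow I/O without blocking the event loop
        await asyncio.sleep(1)
        return PlainTextResponse("done")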

Note: Be careful, as asynchronous code is notoriously hard to debug. To turn on debug mode for asyncio, ensure that the PYTHONASYNCIODEBUG environment variable is set:

    PYTHONASYNCIODEBUG=1 python app.py
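
Alternatively, on Python 3.7 and later, running your app in Python's development mode turns on asyncio debug mode as well:

    python -X dev app.py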

Here is the new, asynchronous code:

    from starlette.applications import Starlette
    from starlette.routing import Route
    from starlette.responses import PlainTextResponse
    from starlette.responses import JSONResponse
    import uvicorn

    from my_model import predict


    async def index(request):
        return PlainTextResponse("My Index Page!")

    async def model_stats(request):
        return JSONResponse({'stats': [1, 0, 2, 3]})

    async def model_predict(request):
        # Both the request body and the model call are awaited
        prediction_req = await request.json()
        prediction = await predict(prediction_req)
        return JSONResponse(prediction)


    routes = [
        Route('/', index),
        Route('/stats', model_stats),
        Route('/predict', model_predict, methods=['POST'])
    ]

    app = Starlette(debug=True, routes=routes)

    if __name__ == "__main__":
        uvicorn.run(app, host='0.0.0.0', port=8000)

In the above example, every route handler is declared with async def. Note that the async/await syntax is built into the language itself, so no asyncio import is required here; you only need to import asyncio when you call its functions directly. Pay particular attention to the model_predict route, which wraps the machine learning model's prediction capability: the await keyword is now used on the model call itself, taking full advantage of the asynchronous capabilities of the model behind the scenes. Now the API will scale alongside the model!
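
For that await to work, my_model.predict must itself be a coroutine. The sketch below shows one hypothetical way to structure it; the _run_model helper and its payload format are illustrative assumptions, not part of the original code. Because model inference is typically CPU-bound, the blocking call is offloaded to a thread pool so the event loop stays free:

    # my_model.py (hypothetical sketch)
    import asyncio

    def _run_model(payload):
        # Stand-in for the real, CPU-bound model inference
        return {'prediction': sum(payload.get('features', []))}

    async def predict(payload):
        # Offload the blocking call to the default thread pool executor
        # so the event loop can keep serving other requests
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, _run_model, payload)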

Conclusion

In this guide, you have learned how to use Starlette to create a Python HTTP service built on the ASGI specification. More importantly, you have discovered how to make your service completely asynchronous so that it can scale with the number of incoming requests.

This guide has been only a brief introduction to the world of Starlette and ASGI. The framework offers many more capabilities than could be covered here. For more information and advanced usage, please check out the Starlette documentation.

Zachary Bennett

Zach is currently a Lead Software Developer at OpalSoft where he uses tools such as Scala, TypeScript, Python, Docker, Node, and Angular. Zach has a passion for GIS programming along with open-source software. You can view some of his work on GitHub (https://github.com/zbennett10) and Stack Overflow (https://stackoverflow.com/users/6879849/zachary-bennett).
