Investigating concurrency in a Flask App

Conclusion

After investigating how the Flask default server behaves, I discovered that from version 1.0, Flask's built-in server handles even 400 concurrent requests just fine. This change is reflected in the documentation: the deployment guidelines for 0.12 specifically mention that Flask can only handle one request at a time, and that language is absent from version 1.0.

What hasn't changed, however, is that Flask's default server is not performant enough for real traffic. For proper production use, the Flask app must be started with a proper WSGI server like uWSGI, and that WSGI server is what gets placed behind a reverse proxy like nginx. For my purposes of having a server for deployment pipelines, however, the default server will work just fine. In fact, based on my initial testing I could hammer it with 500 concurrent requests and it just breezed through (I was limited from testing with more, but I'm pretty sure it would have handled that too). That should be more than enough for most applications starting out.
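
For reference, the actual hammering in my tests was done with a shell loop and curl (shown further down in the log). A rough standard-library Python equivalent of that kind of hammering, assuming the app (or nginx in front of it) is reachable on localhost and has the /slow endpoint described below, would look something like this:

# Sketch only: fire a burst of concurrent requests at the /slow endpoint
# using nothing but the standard library.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def hit(i):
    # Each call blocks for the full duration of the slow request.
    with urlopen("http://localhost/slow") as response:
        return i, response.status

with ThreadPoolExecutor(max_workers=50) as pool:
    # 500 requests spread across 50 worker threads.
    for i, status in pool.map(hit, range(500)):
        print(i, status)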


Intro

I've assumed that there's a problem with Flask apps by default: they can only ever handle one request at a time. At least that's the impression I have from the documentation. In order to handle more than one request at a time, we apparently need something that can spawn workers or threads, and that something becomes responsible for responding to multiple requests at once.

I'm first going to check whether the one-request-at-a-time behaviour is actually the case. Then I'm going to figure out how to place something called a WSGI server in front of the app and have nginx proxy requests to the WSGI thingamabob if needed. Prior to this I have only ever run a Flask app and stuck it behind an instance of nginx, with requests sent off using proxy_pass 0.0.0.0:5000 or something similar. Apparently this is not the proper production method, so I'm going to try and work this out once and for all.

Checking if a Flask app can only handle one request at a time (August 2nd 2018)

15:16 Firstly I need to set up a basic nginx -> Flask application. I'll be using Docker for this with docker-compose.

15:34 So I have set up the following things (Commit):

  • The docker-compose file
  • The Dockerfile for the Python app and nginx
  • A barebones Flask app (a rough sketch of it is just below this list)
  • The requirements.txt file for the Flask application
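
The barebones app itself is roughly the following (a sketch from memory; the exact code is in the commit above, and the response text here is made up):

# app.py - a minimal Flask app with a single hello world route.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"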

15:38 Now I'm actually going to test if things work. docker-compose up -d please work!

  • Urgh. Broke at the first build. The way I remember the build directive is wrong. Clearly I need to do some katas here to keep my knowledge fresh.
  • Ah. build.dockerfile not build.file.
  • Ha. That was all that was wrong... So far.
  • So the run command doesn't work. Sigh. Let's get to work.

15:43 A quick change to the command parameter got things working. Instead of running FLASK_APP=app.py flask run --host 0.0.0.0, I moved FLASK_APP into the environment section of the docker-compose file, and the command is now simply flask run --host 0.0.0.0. Thankfully that worked.

  • I won't lie. The whole array-based parameter form for ENTRYPOINT, CMD, and command has always confused me. I should probably invest some time and learn them better.
  • But my reverse proxy isn't starting up. Hmm. Log dive!
    • Oh ok. Simple syntax error in my nginx.conf file

15:48 Syntax error corrected. Let's see if it works!

15:50 Now to test whether I actually can only serve one request at a time. The plan is to throw a 5 second sleep into the hello world request.
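
The slow endpoint is essentially just a route with a sleep in it. A sketch (the real code is in the GitHub link below; the response text is made up):

# A route that deliberately takes 5 seconds to respond, to make
# concurrent requests easy to observe.
import time

from flask import Flask

app = Flask(__name__)

@app.route("/slow")
def slow():
    time.sleep(5)
    return "Done being slow!"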

16:09 Well it looks like that was a load of nonsense. Multiple requests are handled just fine!

  • The code for the slow running request is here -> GitHub
  • Code for running concurrent-ish requests:
# Fire off 10 requests to /slow in the background so they run concurrently
for ((i=0; i<10; i++))
do
  curl localhost/slow &
done

And the requests all fire off just fine. I wonder how it would work without nginx though.

16:21 Well, it looked like that worked just fine too. Go figure. So why the WSGI stuff?

16:31 Hmmm. Is it possible that I've misunderstood how to test things? From my reading I'm still seeing that Flask is a single-threaded application by default. Lemme do one more round of tests, after which I think I'm going to jump out of this.
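
One way to pin it down would be to skip the flask CLI and start the dev server directly with app.run, where threading can be toggled explicitly (the threaded flag is passed through to Werkzeug's dev server). A sketch, not what my docker-compose setup currently runs:

# run.py - sketch for comparing the dev server with and without threading.
# With threaded=False the /slow requests should queue up one at a time;
# with threaded=True they should overlap.
from app import app  # assumes the barebones app lives in app.py

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, threaded=False)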

16:50 Shelving this for now. I got what I wanted: even a long-running request to an external source will NOT block other requests from coming in. But for a high traffic site, a proper WSGI server behind a reverse proxy is the way to go. Why? A dive for another time.

  • The purpose of this exercise was to validate whether the server I've built for deployment pipelines can handle a few long-running requests all at once. I've validated that. Time to move on.

16:55 Ok I lied. One more test. I think something changed between Flask 0.12 and Flask 1.0.

  • In the docs for 0.12, the text has these words (emphasis mine):

While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well and by default serves only one request at a time.

That bit about one request at a time is missing in v1.0 docs.

17:05 Done switching to Flask 0.12. Time to test.

  • BOOM! Version 0.12 can handle only 1 request at a time. Version 1.0 breezes along just fine. This is great. Now I know what I need to check with my deployment pipeline stuff.

Posted on August 02 2018 by Adnan Issadeen