Hu:toma recently launched their open beta, and since I've had a bit of fun playing around with it, I think it deserves a quick post. So what is Hu:toma? It's a platform for developing, publishing and selling conversational bots. It lets you develop and train bots that your users can have free-text conversations with, while integrating with your APIs.
Hu:toma lets developers upload training data in the form of example conversations. It then trains the bot using their deep learning network, so the bot can understand variations of the users' questions and requests. For more info, watch this quick video or check out their site. This post will give you some easy-to-follow steps for writing your first bot that can interact with a user and call your web APIs to perform actions.
So you've written an aiohttp (or Flask, or Tornado..) app in Python, and want to run it as a production service.
The easiest approach would be to run it in production the same way you do while developing:
`python server.py`. While this works well during development, aiohttp is
not actually a webserver - it's a framework that is intended to be hosted in a production-ready webserver.
One such webserver is Gunicorn, used to host Python-based web services. The main argument for using Gunicorn
is raw performance: Gunicorn will run enough aiohttp processes to use all available CPU cores, while aiohttp's
development webserver would only run it in a single process. Other benefits of using Gunicorn are improved security and its configurability. There is a good post on this topic
by Gunicorn's developer.
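As a sketch of how this fits together, a Gunicorn-hostable aiohttp app is just a module that exposes an `Application` object, which Gunicorn loads with aiohttp's worker class. The module name, handler, port and worker count below are assumptions, not part of any particular project:

```python
# server.py -- a minimal aiohttp app that Gunicorn can host.
from aiohttp import web

async def handle(request):
    # Plain-text response; a real handler would do actual work here.
    return web.Response(text="Hello, world")

app = web.Application()
app.router.add_get("/", handle)

# Hosted under Gunicorn with aiohttp's worker class, e.g.:
#   gunicorn server:app --worker-class aiohttp.GunicornWebWorker \
#       --workers 4 --bind 0.0.0.0:8080
```

Gunicorn's `--workers` flag is what gives the multi-process scaling described above: each worker is a separate process running its own event loop.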
Note: this walkthrough assumes you have Python 3 and Docker installed on your machine.
Python 3.4 added support for asynchronous I/O, known as asyncio. Asyncio allows writing performant code that would previously have been bottlenecked by I/O, and has spawned a number of great libraries based on it. One of these libraries is aiohttp, an asynchronous HTTP client/server which can support a much larger number of parallel requests than other client-side libraries (e.g. urllib or requests) or server-side frameworks (e.g. Flask).
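To illustrate the concurrency that asyncio provides, here is a small stdlib-only sketch (the `fetch` coroutine and its delays are stand-ins for real I/O such as HTTP requests): three tasks that each "wait on I/O" for 0.2 seconds complete in roughly 0.2 seconds total, not 0.6, because the event loop runs them concurrently.

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (e.g. an HTTP request):
    # asyncio.sleep yields control so other tasks can run meanwhile.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Run three "requests" concurrently; total wall time is roughly
    # the longest single delay, not the sum of all three.
    return await asyncio.gather(
        fetch("a", 0.2), fetch("b", 0.2), fetch("c", 0.2)
    )

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
```

This is the property aiohttp builds on: while one request is waiting on the network, the server is free to make progress on others.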
After a cursory search, I could not find a Docker image with a basic 'hello world' implementation of an aiohttp server, so I decided to build one and document the process.
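As a rough sketch of what such an image might look like, here is a minimal Dockerfile; the base image tag, module name (`server.py`), port, and the choice to serve via Gunicorn are all assumptions for illustration:

```dockerfile
FROM python:3
# Install the runtime dependencies; pinning versions in a
# requirements.txt would be better practice for a real image.
RUN pip install aiohttp gunicorn
# Copy in the hello-world app (assumes it exposes a module-level `app`).
COPY server.py /app/server.py
WORKDIR /app
EXPOSE 8080
# Serve via Gunicorn with aiohttp's worker class, per the discussion above.
CMD ["gunicorn", "server:app", "--worker-class", "aiohttp.GunicornWebWorker", "--bind", "0.0.0.0:8080"]
```

You would then build and run it with something like `docker build -t aiohttp-hello .` followed by `docker run -p 8080:8080 aiohttp-hello`.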