Building a high-performance image processing pipeline to create vernacular catalogs
Building infrastructure that generates personalized vernacular image content in multiple languages for billions of users.
Indian products need to be built for the scale of a diverse audience. India has 22 major languages, written in 13 different scripts, with over 720 dialects. Serving content to billions of users in their native scripts is a data-intensive problem, and the visual experience needs to stay consistent across all languages.
Native ad networks (GreedyGame.com), OTT, and marketplace platforms solve this problem by generating vernacular content programmatically at runtime. The backbone of this infrastructure is an asynchronous image processing pipeline.
Why an async pipeline for processing images?
Reading, manipulating, and saving images are compute-intensive tasks and take time. Running these operations on the main app server can starve other critical APIs of resources. For a smooth experience, it's better to execute these tasks on a separate worker pool.
A good rule of thumb is to avoid API requests that run longer than 300 ms. ~ Experience
The major components in processing pipelines are:
Setting up a high-throughput message broker
Broker selection depends on the nature of the data. For payment-related transaction data, it's advisable to go with a persistent distributed queue like Kafka. For short-lived jobs, where scale matters more than durability, an in-memory broker like Redis is a strong candidate. Since persistence is not the main goal of this data store, disabling Redis snapshots adds quite a lot to the performance.
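As a sketch, RDB snapshotting can be switched off in redis.conf (the exact persistence settings to keep depend on your deployment and durability needs):

```
# redis.conf — disable RDB snapshotting entirely
save ""

# keep the append-only file off as well if no durability is required
appendonly no
```

The same change can be applied to a running instance with `redis-cli CONFIG SET save ""`.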
Redis's RPUSH and LPOP operations can be used to treat a list as a queue. Both operations have a time complexity of O(1). Below is a quick benchmark of these operations on Redis running on Google's cloud infrastructure.
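A minimal producer/consumer sketch of this pattern is shown below. To keep the example self-contained it uses a tiny in-memory stand-in for the Redis client; in production you would use `redis.Redis()` from the redis-py package, whose `rpush`/`lpop` calls have the same shape. The job field names are illustrative.

```python
import json
from collections import defaultdict, deque

class FakeRedis:
    """In-memory stand-in for a Redis client, so the example runs
    without a server. Replace with redis.Redis(...) in production."""
    def __init__(self):
        self._lists = defaultdict(deque)

    def rpush(self, key, value):
        # O(1) append to the tail of the list
        self._lists[key].append(value)
        return len(self._lists[key])

    def lpop(self, key):
        # O(1) pop from the head of the list
        return self._lists[key].popleft() if self._lists[key] else None

client = FakeRedis()

# Producer: enqueue an image-rendering job as a JSON message.
job = {"template": "banner_01", "language": "hi", "text": "नमस्ते"}
client.rpush("image_jobs", json.dumps(job))

# Consumer: dequeue and process jobs in FIFO order.
raw = client.lpop("image_jobs")
task = json.loads(raw)
print(task["language"])  # hi
```

Because RPUSH appends to the tail and LPOP removes from the head, the list behaves as a FIFO queue.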
The key concerns to be addressed while selecting a worker framework are:
- A locking mechanism so the same job is not executed by multiple workers.
- The ability to capture and store failed jobs for triage.
- The ability to prioritize jobs based on the message.
Based on the above points, RQ, which works with Redis, seems the best option. RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry.
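A sketch of how a job would be enqueued with RQ is shown below. The job function name and paths are illustrative; the RQ calls themselves require `pip install rq redis` and a running Redis server, so they are kept under the `__main__` guard.

```python
# Job function executed by an RQ worker. It must be importable by the
# worker process; the real image work would happen inside it. Here it
# only derives the output path (illustrative).
def render_image(template, language):
    return f"/tmp/renders/{template}_{language}.png"

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue

    # Named queues give coarse-grained prioritization: a worker started
    # with `rq worker high default` drains "high" before "default".
    high = Queue("high", connection=Redis())
    high.enqueue(render_image, "banner_01", "ta")
```

Failed jobs land in RQ's failed job registry, where they can be inspected and requeued for triage.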
Monitoring and Dashboard
RQ-dashboard provides the necessary basic view of queues, with pending and failed tasks. It's a lightweight, Flask-based web front-end for monitoring your RQ queues, jobs, and workers in real time.
Implementation with Python and Redis
Generic configurations: A JSON-based style configuration stores the coordinates, font, and color for each language.
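A minimal sketch of such a configuration is shown below; the actual schema, font files, and language codes are assumptions and depend on the product.

```python
import json

# Illustrative per-language style configuration with a fallback entry.
STYLE_CONFIG = json.loads("""
{
  "default": {"x": 40, "y": 120, "font": "NotoSans-Regular.ttf", "color": "#ffffff"},
  "hi":      {"x": 40, "y": 110, "font": "NotoSansDevanagari.ttf", "color": "#ffffff"},
  "ta":      {"x": 36, "y": 115, "font": "NotoSansTamil.ttf", "color": "#ffffff"}
}
""")

def style_for(language):
    """Return the drawing style for a language, falling back to default."""
    return STYLE_CONFIG.get(language, STYLE_CONFIG["default"])
```

Keeping the coordinates in configuration, rather than in code, is what keeps the visual layout consistent as new languages are added.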
Micro-service: A thin Flask-based service receives requests and produces tasks on the queue. The service performs enough schema validation to reject invalid data before enqueuing.
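The validation step can be sketched as a framework-agnostic function (field names are illustrative); the Flask handler would call it on the request JSON and return HTTP 400 on errors before enqueuing.

```python
# Expected fields and their types for an image-rendering task
# (illustrative schema).
REQUIRED_FIELDS = {"template": str, "language": str, "text": str}

def validate_task(payload):
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not isinstance(payload, dict):
        return ["payload must be a JSON object"]
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"field {field} must be {ftype.__name__}")
    return errors
```

Rejecting bad payloads at the API boundary keeps invalid jobs out of the queue, so workers never waste cycles on data that can only fail.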
Image Processor: Image processing is based on ImageMagick, the de facto industry standard. It utilizes multiple computational threads to increase performance and can read, process, and write mega-, giga-, or tera-pixel images. The image processor logic uses Wand, a Python binding of ImageMagick.
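A sketch of the text-overlay step using Wand is shown below (`pip install Wand`, plus the ImageMagick libraries). The function names, style keys, and paths are assumptions for illustration; the Wand import is deferred so the module loads even where ImageMagick is not installed.

```python
def output_path(template, language, out_dir="/tmp/renders"):
    """Derive the output file path for a rendered template (illustrative)."""
    return f"{out_dir}/{template}_{language}.png"

def overlay_text(template_path, text, style, out_path):
    """Draw `text` onto the template image using a per-language style
    dict with keys x, y, font, and color."""
    from wand.color import Color
    from wand.drawing import Drawing
    from wand.image import Image

    with Image(filename=template_path) as img, Drawing() as draw:
        draw.font = style["font"]
        draw.font_size = 42
        draw.fill_color = Color(style["color"])
        draw.text(style["x"], style["y"], text)  # position from config
        draw(img)                                # apply drawing to canvas
        img.save(filename=out_path)
```

A worker would look up the style for the job's language, call `overlay_text`, and write the result to `output_path(...)`.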
Sample Project on Docker
The sample project is available on https://github.com/arinkverma/vernacular-image. It’s free to download and explore.
Other use-cases for Image processing pipeline
- Adaptive resolution: Adjusting image size and resolution for a better experience across screen size.
- Annotations: Adding badges, icons over the image to grab attention.
- Creative Filter: Treating the image with blends and effects to make it visually appealing.
- Redis's RPUSH/LPOP operations: https://redis.io/commands/rpush
- RQ: https://github.com/rq/rq
- RQ-Dashboard: https://github.com/Parallels/rq-dashboard
- Wand: ctypes-based simple ImageMagick binding for Python