LCD Driver Benchmarking


Posted by ben on 2018/08/15


We recognize a few universal truths: Biological life... errr... finds a way. Every software program expands until it can check email, or dies trying. All online discussions of sufficient length end in references to Nazis and utter disdain for opposing viewpoints. And all screen-bearing hardware evolves to kill bad guys at the maximum frame rate possible. So, bowing to the evolutionary pressures that are the natural order of things, we've started benchmarking and optimizing our graphics performance. Also, we were really tired of debugging hardware and wanted a fun break for a bit.

Background

Originally we were thinking of putting a much smaller TFT display in the WiPhone, something like a 1.4" or 1.8" panel. These screens are usually driven by an ST7735 driver over SPI.

Small Screens

Trouble is, nobody likes phones with tiny screens like that. They're ugly. And killing bad guys on a tiny screen is unsatisfying. Luckily, we quickly abandoned the smaller screens and switched to a 2.4 inch screen with 320×240 pixels. But the ESP32 isn't blessed with an abundance of extra I/O, so we are stuck with the lower-bandwidth SPI bus, which now needs to push roughly 4x the original data out to the screen.

As you can see in our first demo video, the graphics library that came as example code with our screen was struggling to refresh even a basic phone interface. Each widget visibly redraws, one after another.

Most phones with a screen this size use a parallel 8-bit bus (a.k.a. "8080 series MCU interface"). But given the limited pins on the ESP32 and all the other things a phone needs to do, something had to give.

Benchmarking

Luckily, we found a beautiful and fast library called TFT_eSPI by GitHub user Bodmer, which is optimized for ESP32 chips. Our ST7789 driver IC was not yet supported, but an initialization routine and a pull request later we were pleasantly surprised by a 3-12x speedup!
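For anyone trying the same library: TFT_eSPI is configured at compile time through its `User_Setup.h` header rather than at runtime. A minimal sketch of that configuration for an ST7789 panel might look like the following. The pin numbers here are purely illustrative assumptions, not the WiPhone's actual wiring:

```cpp
// Excerpt from TFT_eSPI's User_Setup.h — a sketch, not our actual setup.
#define ST7789_DRIVER        // select the ST7789 init sequence

#define TFT_WIDTH  240       // panel is 240x320 in portrait
#define TFT_HEIGHT 320

#define TFT_MOSI 23          // hypothetical ESP32 pin assignments:
#define TFT_SCLK 18          // change all of these to match your wiring
#define TFT_CS    5
#define TFT_DC   16
#define TFT_RST  17

#define SPI_FREQUENCY 40000000  // 40 MHz; back off if your panel glitches
```

With that in place, the usual `tft.init()` / drawing calls in a sketch pick up the right driver automatically.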

Here we run Xark's benchmark with our bigger screen using the TFT_eSPI library, as well as the original library we started with:

NVIDIA, eat your heart out.