When working on embedded systems, the traditional user interface controls – keyboard, mouse, touch screen, and display – are often not available. In these cases, an “audio menu” system may be the perfect solution to the user experience challenge.

Hawkeye Pi Camera with Audio Menus interface

For the tl;dr crowd, here is a demonstration video.

The printing camera provides a web interface for rich control, but this can be time consuming if all that is needed is to turn the camera’s printer on or off, or to switch between camera mode and photo booth mode.

An early solution used a combination of the two buttons and the two LEDs to implement a code-flashing user interface. It was horrible.

By adding a small speaker (and tiny audio amplifier), the camera is able to produce sounds.

The cryptic code-flashing interface was replaced with a spoken menu and spoken choices. The audio was created using an online text-to-speech service.

The complete settings menu is a single audio file, as are all of the choices for an individual setting. The camera code uses a technique similar to image sprites.

Sprites combine a collection of graphics into a single long strip and then CSS is used to index to a portion of the total image to render a single graphic.

The camera code is able to index into an audio file and play a small segment. The embedded operating system handles caching of the audio files so they are accessed quickly.
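To make the idea concrete, here is a minimal sketch of the audio-sprite technique in Python, assuming a single combined WAV file and a hand-built table of (start, duration) offsets. The file name, sprite table, and helper function are illustrative assumptions, not the project’s actual code.

```python
"""Minimal audio-sprite sketch: seek into one WAV file and play a segment."""
import subprocess
import tempfile
import wave

# Hypothetical sprite table: phrase name -> (start, duration) in seconds.
SETTINGS_SPRITES = {
    "printer_on":  (0.0, 1.4),
    "printer_off": (1.4, 1.5),
    "photo_booth": (2.9, 1.8),
}

def play_sprite(wav_path, name, sprites=SETTINGS_SPRITES):
    """Extract one spoken phrase from the combined WAV and play it."""
    start, duration = sprites[name]
    with wave.open(wav_path, "rb") as src:
        rate = src.getframerate()
        src.setpos(int(start * rate))               # seek to the phrase
        frames = src.readframes(int(duration * rate))
        params = src.getparams()

    # Write the segment to a temporary WAV and hand it to aplay,
    # which is available on a stock Raspberry Pi OS install.
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        with wave.open(tmp.name, "wb") as dst:
            dst.setparams(params)
            dst.writeframes(frames)
        subprocess.run(["aplay", "-q", tmp.name], check=False)

if __name__ == "__main__":
    play_sprite("settings_menu.wav", "printer_on")
```

Keeping every phrase in one file with a table of offsets mirrors how CSS sprites keep a single image and index into it with coordinates, and it means only one file needs to be cached.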

The audio menu system worked so well that it was an easy extension to add audio help messages for the overall usage of the camera as well as for the settings mode interface.

The entire project’s code is available on GitLab as part of the project set from Bradán Lane Studio.