SMS Comics was a research project conducted at Nokia Research Center in Palo Alto in 2008-2009, together with Vidya Setlur. The goal of SMS Comics was to augment text messages with contextual information. Instead of being represented as a plain sequence of texts, messages are displayed as images composed of several graphical elements: an image representing the place the sender texted from, pictures of the sender and the receiver, and an image depicting the topic of conversation.
V. Setlur, A. Battestini, “Using Comics as a Visual Metaphor for Enriching SMS Messages with Contextual and Social Media”, MobileHCI 2009, Workshop on Sharing Experiences with Social Mobile Media, 2009.
I developed most of the backend system and the mobile client application.
The mobile client was developed with Python for Series 60 and ran on Nokia phones such as the N95. The client detected when the user wrote or received a text message, then captured that message, the name of the contact, and other contextual information such as the time and location. A backend server processed this information to transform it into an image, which the mobile client then retrieved for the user to view.
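The data bundle captured by the client can be sketched as a simple structure sent to the backend. This is an illustrative reconstruction, not the actual protocol; the field names and the `build_sms_payload` helper are assumptions.

```python
# Hypothetical sketch of the bundle the mobile client assembled for each
# captured SMS before uploading it to the backend. Field names are
# illustrative, not the project's actual wire format.
import json
import time

def build_sms_payload(sender, receiver, body, lat, lon):
    """Bundle a text message with its contextual information."""
    return {
        "sender": sender,
        "receiver": receiver,
        "body": body,
        "timestamp": int(time.time()),        # when the message was captured
        "location": {"lat": lat, "lon": lon}, # where it was sent from
    }

# The client would serialize this (e.g. as JSON) and POST it to the server.
payload = build_sms_payload("Alice", "Bob", "See you at the cafe?", 37.44, -122.16)
serialized = json.dumps(payload)
```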
The most important tasks carried out by the backend system were:
- Receive and store data from the mobile clients
- Sort and group the text messages into conversations
- Fetch images from different sources based on the content and context of the text messages:
  - The latitude/longitude pair is reverse-geocoded to a place name using the Flickr API. That information is used to retrieve a location image.
  - To retrieve location images, the server executes Flickr searches with specific parameters, fetches the resulting image files, and runs each image through a filter that determines whether it fits the constraints we set for location images. Once an image is selected, it is cached, so the server keeps a list of location images for each place. If a search query returns no results, it is progressively relaxed until it does. The filter determines whether the image is a panoramic view of the place (high width/height ratio, no prominent object in the image).
- Perform a text analysis of each conversation with NLTK to find the most important words and extract a general topic of conversation.
  - Once a general topic of conversation is found, the server queries an external stock image service, fetches the images, and filters them to determine which fit the constraints set for topic images. Images are then associated with a topic and cached in the system.
- Each user profile in the system was associated with a Facebook profile. To find a picture for a contact, the system retrieves the user's list of Facebook friends and tries to match the SMS contact to a friend. If a match is found, that friend's profile picture is retrieved and automatically cropped to enclose only the face.
- Generate the final image (I did not develop the Flash/Flex component, but integrated it in the system).
- Provide users with a Web view of their SMS comics, which they could share by sending a link via SMS to any number.
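The conversation-grouping step above can be sketched as follows: messages exchanged with the same contact are placed in one conversation as long as they arrive within a fixed time gap. This is a minimal sketch; the 30-minute threshold and the `group_conversations` helper are assumptions for illustration.

```python
# Minimal sketch of sorting and grouping text messages into conversations:
# messages with the same contact within a fixed inactivity gap form one
# conversation. The 30-minute threshold is an assumed parameter.
from collections import defaultdict

GAP_SECONDS = 30 * 60  # assumed inactivity gap that starts a new conversation

def group_conversations(messages):
    """messages: list of (contact, timestamp_seconds, text) tuples."""
    by_contact = defaultdict(list)
    for contact, ts, text in sorted(messages, key=lambda m: m[1]):
        threads = by_contact[contact]
        # Start a new conversation if none exists yet for this contact,
        # or if the gap since the last message is too long.
        if not threads or ts - threads[-1][-1][1] > GAP_SECONDS:
            threads.append([])
        threads[-1].append((contact, ts, text))
    return [t for threads in by_contact.values() for t in threads]

msgs = [("Bob", 0, "hi"), ("Bob", 60, "lunch?"), ("Bob", 7200, "new topic")]
convs = group_conversations(msgs)  # two conversations with Bob
```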
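The location-image filter described in the list can be sketched as an aspect-ratio check combined with the prominent-object test. The 2.0 ratio threshold and the function names below are assumptions; in the real system the prominent-object detection was a separate script.

```python
# Sketch of the location-image filter: accept wide, panorama-like images
# with no single dominant object. The 2.0 width/height threshold is an
# assumed value; the prominent-object flag stands in for the real detector.
MIN_PANORAMA_RATIO = 2.0  # assumed threshold

def is_panorama(width, height, has_prominent_object=False):
    """Return True for wide images with no prominent object."""
    if height == 0:
        return False
    return width / height >= MIN_PANORAMA_RATIO and not has_prominent_object

def pick_location_image(candidates):
    """candidates: list of (url, width, height); return the first panorama."""
    for url, width, height in candidates:
        if is_panorama(width, height):
            return url
    return None  # caller would then relax the search query and retry
```

If no candidate passes the filter, the caller relaxes the search query and fetches a new batch, mirroring the query-degradation step described above.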
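The topic-extraction step used NLTK; the pure-Python sketch below shows the underlying idea under simplified assumptions: drop stopwords, count word frequencies across the conversation, and take the most frequent content word as the topic. The tiny stopword list is an illustrative sample, not the one the project used.

```python
# Simplified sketch of topic extraction: strip stopwords, count remaining
# word frequencies, and pick the most frequent content word. The project
# used NLTK; the stopword list here is a small assumed sample.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "at", "is", "yes", "you", "i", "we", "and", "of"}

def extract_topic(conversation_texts):
    """conversation_texts: list of message strings from one conversation."""
    words = re.findall(r"[a-z']+", " ".join(conversation_texts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(1)[0][0] if counts else None

topic = extract_topic(["Pizza tonight?", "Yes, pizza at eight", "I love pizza"])
# → "pizza"
```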
The most time-consuming tasks were fetching images from third-party services (Flickr, the stock image service, Facebook) and filtering them until one or more suitable images were found. When new data arrived from the mobile clients, the requests were processed asynchronously across several virtual machines: one VM for the frontend website and communication with the mobile clients, one for Flickr searches, and one for stock image searches. Communication between the processes on the VMs was coordinated via Erlang node servers and Python processes.
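The fan-out pattern described above — one dedicated worker per image-search backend — can be sketched in a single process with worker queues. This is only an illustration of the dispatch pattern; the actual system distributed the work across VMs coordinated by Erlang node servers.

```python
# Single-process sketch of the fan-out pattern: each image-search backend
# gets its own queue and worker, so slow third-party fetches run in
# parallel. The real system ran these workers on separate VMs.
import queue
import threading

def worker(name, task_queue, results):
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: shut the worker down
            break
        results.append(f"{name}:{task}")  # placeholder for the actual search
        task_queue.task_done()

flickr_q, stock_q = queue.Queue(), queue.Queue()
results = []
threads = [
    threading.Thread(target=worker, args=("flickr", flickr_q, results)),
    threading.Thread(target=worker, args=("stock", stock_q, results)),
]
for t in threads:
    t.start()
flickr_q.put("palo alto")  # location image search
stock_q.put("pizza")       # topic image search
for q in (flickr_q, stock_q):
    q.put(None)
for t in threads:
    t.join()
```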
I integrated different scripts and components developed by Vidya: the Flash/Flex component that creates the final image, the scripts that automatically crop profile images around the face and determine whether an image contains a prominent object, and the comic-stylized rendering of images.