33 Seconds

The idea was to choose a place, record the events that took place there over the course of 33 seconds, transcribe them, and represent them using only type, within a number of other constraints. I chose Fabric as my location and filmed down at the dance floor from the balcony in Room 1.

I wanted to capture the intensity of the environment and looked for ways to generate large amounts of data from the footage. To do this I would have to write the code myself, so I taught myself Python. I used computer vision techniques from the OpenCV library to extract data from the frames of the video: dominant colours, brightest points, and overall brightness, as well as the lines of the lasers and their angles. This gave me around 40,000 characters of text to work with, which was set on the front of the piece as a huge block of data.
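A minimal sketch of how that per-frame extraction might work, assuming standard OpenCV calls; the function name, k-means parameters, Hough thresholds, and filename are illustrative choices of mine, not the original code:

```python
import cv2
import numpy as np

def analyse_frame(frame):
    """Extract the measurements described above from one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Overall brightness: mean intensity of the greyscale frame.
    brightness = float(gray.mean())

    # Brightest point: location of the maximum pixel value.
    _, _, _, max_loc = cv2.minMaxLoc(gray)

    # Dominant colour: k-means over all pixels; the largest cluster wins.
    pixels = frame.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centres = cv2.kmeans(pixels, 3, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    dominant = centres[np.bincount(labels.flatten()).argmax()]

    # Laser lines: Canny edges followed by a probabilistic Hough
    # transform; each line's angle comes from its endpoints.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    angles = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angles.append(float(np.degrees(np.arctan2(y2 - y1, x2 - x1))))

    return brightness, max_loc, dominant, angles

cap = cv2.VideoCapture("fabric.mp4")  # hypothetical filename
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(analyse_frame(frame))
cap.release()
```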

To show that this data was meaningful, I then wrote a program that would take the data as input and create an animation simulating the original video. This code was printed on the reverse, on thin paper that was perforated and punched to simulate dot-matrix printer paper.
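The reconstruction side might look something like the sketch below; the record format matches the extraction sketch above, and the drawing style (dominant colour as a background wash, lasers as lines through the centre) is a guess at the approach, not the printed program:

```python
import cv2
import numpy as np

W, H = 640, 360  # assumed output resolution

def render_frame(brightness, max_loc, dominant, angles):
    """Rebuild one frame from the extracted data."""
    # Flood the frame with the dominant colour, scaled by brightness.
    frame = np.full((H, W, 3), dominant, dtype=np.uint8)
    frame = (frame * (brightness / 255.0)).astype(np.uint8)

    # Redraw each laser as a line through the centre at its angle.
    cx, cy = W // 2, H // 2
    for angle in angles:
        dx = int(np.cos(np.radians(angle)) * W)
        dy = int(np.sin(np.radians(angle)) * W)
        cv2.line(frame, (cx - dx, cy - dy), (cx + dx, cy + dy),
                 (255, 255, 255), 1)

    # Mark the brightest point.
    cv2.circle(frame, tuple(max_loc), 4, (255, 255, 255), -1)
    return frame

# Demo with one made-up record: brightness, brightest point,
# dominant colour (BGR), and laser angles in degrees.
frame = render_frame(180.0, (320, 120), (200, 40, 160), [15.0, -40.0, 75.0])
cv2.imwrite("frame_0000.png", frame)
```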

Input video:

Output animation:

Front side:

Rear side: