I stumbled upon a new method for reading faster. Imagine tearing through a book in a matter of minutes!

From www.spritz.com

I find it really exciting to see new technologies that provide better platforms for presenting information. This method, called Spritz, displays the words of a sentence one at a time within a single visual space, aligning each word around its “Optimal Recognition Point” with a visual aid (the “Redicle”, essentially a red-coloured letter), and presents the words at various speeds to train the individual’s level of textual processing.
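Spritz does not publish its exact algorithm, but the general idea of rapid serial visual presentation (RSVP) around a fixed pivot letter can be sketched in a few lines. The length-based pivot rule below is hypothetical, only a common approximation used by open-source imitations, not Spritz’s own method:

```python
# A rough sketch of RSVP-style word alignment around a pivot letter, in the
# spirit of Spritz's "Optimal Recognition Point" (ORP). Spritz's actual
# pivot-selection algorithm is proprietary; the length-based rule below is
# an assumption, borrowed from common open-source approximations.

def pivot_index(word: str) -> int:
    """Choose a pivot letter slightly left of centre, based on word length."""
    n = len(word)
    if n <= 1:
        return 0
    if n <= 5:
        return 1
    if n <= 9:
        return 2
    if n <= 13:
        return 3
    return 4

def align(word: str, width: int = 21) -> str:
    """Pad the word so its pivot letter always lands in the same column."""
    centre = width // 2
    left_pad = centre - pivot_index(word)
    return (" " * left_pad + word).ljust(width)

# Each word is shown alone, with its pivot letter fixed at column 10,
# so the eye never has to move between words.
for w in "Imagine tearing through a book".split():
    print(align(w))
```

Flashing these aligned lines one after another at a chosen rate is essentially what the reader sees on a Spritz display.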

According to the company, this methodology seeks to “empower effective reading on a small display area”. It really does make sense on display devices that are small in nature (e.g. smartwatches, dinky smartphones and so on) – in fact, the company proposes a whole tonne of applications in their FAQs.

Naturally, I would wonder if such a methodology can be adopted in a gallery/museum space. The artwork or object labels (and their extended versions) often present challenging layout issues for the curators and exhibition designers – the slab of words can be quite the distraction on the wall when viewed with the artworks or objects. Just imagine a tiny screen using this methodology to provide information on the artwork or object.

Argh~ Looks like my imagination struck an obstacle.

In my view, there are some problems with this methodology – focus control and adoption.

Focus control refers to the need to “focus” on the methodology’s Optimal Recognition Point (ORP). Because of the way Spritz displays individual words from sentences and paragraphs in a single visual space, a certain amount of focus or concentration is needed. When viewing an artwork, object or even an animated projection, the individual will look at the item and perhaps go back to the information, so there will be a disparity in informational processing when going back and forth between the item and the text. Additionally, it can be frustrating when the individual cannot quickly revert to the last point of the textual information – with a traditional label, a residual reference point remains in memory when the individual looks away (similar to visual heuristics).

In that respect, applications for the gallery/museum space are not ideal. Their proposed usage of the methodology for closed captioning on television broadcasts can be similarly dismissed due to the visual disparity – I want to watch what is going on on the television first rather than split my attention (if that is even possible) across another focused area (the ORP). I already feel challenged when I am watching drama serials with captions, as my attention is split between the acting and the closed captioning.

Adoption refers to how individuals and galleries/museums can take up this methodology. For individuals, it would require “training” the way one views textual information. This will take a few minutes or longer, depending on the individual. However, should an untrained individual enter a gallery/museum space with such a visual presentation method, he/she would find it difficult to master this reading methodology on the spot. The gallery/museum-going individual has other priorities, and being trained to use a new technology would not be one of them (this goes for many complicated technologies being implemented in galleries and museums today).

For the galleries/museums, it would require staff to train visitors to learn this new method of textual processing. That does not make economic sense, and it ties up already-overburdened resources that could be better deployed for other, more important purposes.

Therefore, at least in the context of galleries/museums, the Spritz methodology will find it challenging to gain a foothold, hampered by both the focus control requirement and the adoption hurdles for individuals and institutions.

That being said, putting Spritz on small devices (especially smartwatches) is definitely a good idea given the physical limitations of such products.

Go Spritz! There is definitely a focused space for you!

More Permanent Head Damage

After spending a few days learning how to use a video production tool (HitFilm), I managed to trim and cut synced videos of dad’s first batch of field test videos.

This particular video production tool, while easy to learn, caused some confusion earlier when I was trying to export the video files properly. There were so many settings for exporting videos and I just went blank (bit rate, frames per second, resolution… what?). Thankfully, the video production tool allows me to save my editing work as project files before committing to export them out as finalised videos.

To be honest, each session’s video took a few steps before making it onto the finalised cut. One of the issues I encountered was that the GoPro cameras I am using for data capture output separate files once each video file (on the camera itself) reaches about 4 GB. That means I have to stitch all these separate files into a single video file (plus fisheye correction rendering) in the video production tool.
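For the stitching step alone, a command-line alternative to doing it inside the video production tool is ffmpeg’s concat demuxer with stream copy, which joins the chapter files without a full re-render. This is only a sketch of that alternative (the file names are hypothetical, and it does not replace the fisheye correction pass, which would still need a separate filter step):

```python
# Sketch: join GoPro "chaptered" ~4 GB files with ffmpeg's concat demuxer.
# The chapter file names below are hypothetical examples of GoPro's naming
# scheme (GOPRxxxx.MP4 for the first chapter, GP01xxxx.MP4 onwards after).

from pathlib import Path

def concat_list(files: list[str]) -> str:
    """Build the text fed to `ffmpeg -f concat` (one `file '...'` per line)."""
    return "\n".join(f"file '{name}'" for name in sorted(files)) + "\n"

# Chapters of one recording; sorting puts them in capture order, because
# "GOPR..." sorts before "GP01...", "GP02...", and so on.
chapters = ["GP020001.MP4", "GOPR0001.MP4", "GP010001.MP4"]
Path("session1.txt").write_text(concat_list(chapters))

# Then, on the shell (stream copy, so no hours-long re-render just to join):
#   ffmpeg -f concat -safe 0 -i session1.txt -c copy session1_full.mp4
```

The joined file could then go into the editor for fisheye correction and trimming as before.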

I was rubbish with this step at first. So I blew an entire weekend trying to render all four sessions of videos from each camera – a total of twelve videos (three cameras, four sessions), each averaging a render time of four hours. And then, I realised I screwed up somewhere… and had to redo everything.

Frustration, yes. But I think I learnt how to use this video production tool on my own. =_=

Once those twelve videos were done, I had to spend almost a day per session to eyeball and sync two videos from each session (there are also four videos, from the perspective of the overhead camera, which are not used for data capture). This was where most of the editing and trimming work was done – once the project files were ready (averaging about two hours each to trim and sync), I rendered them as high-resolution video files for archival purposes. The final step was to queue them into another tool (Handbrake) to cut the file sizes without losing too much quality.

For these four sessions, the video production process took almost an entire week to complete (not including the foul-ups earlier). I guess I would be much better prepared for the next twelve sessions!

While waiting for the next sessions to start, I am currently tabulating the data captured by the session videos. Each video averages about 40 minutes in duration, but tabulating each session video takes about four to six hours (watching the videos from start to end repeatedly, as I need to collect information for about seven data points).

Dad hard at work

While assessing the data capture, I realised that there are ways to improve it. However, for the sake of consistency, I will not be changing the data capture process, but will discuss the improvements in the final research paper itself.

Another artist has caught wind of what I am doing… and wants to be documented as well.

More data for the data god!

A friend who has been working late for a few days in a row managed to send me some messages late at night (probably not knowing whether I was still awake). For some reason, this friend’s messages encouraged me to work harder (and late through the next day).

When the crates come, couriers and curators are really dynamic people to work with.

Probably won’t have much to update on this until the field test sessions start again (they are based on different conditions) when dad is back from overseas.

In the meantime, ボンジュール鈴木「羊曜日に猫ごっこして」