By Mona Sakr

Embodiment isn’t just about recognising the importance of the body. It highlights the need to ‘read’ bodily actions and make sense of the semiotic work that these actions are doing. In a current project, I am attempting to make sense of the work done by different hand actions in the context of scientific inquiry. Is it possible to map hand actions to stages in scientific inquiry?

‘Reading’ parts of the body requires classification systems – systems that will make sense of all of the types of movement that can be performed. Hands might be used to move objects, to ‘know’ the texture or weight of objects, to gesture at objects present, or to gesture about objects absent. Systems designed to classify these types of hand action are a starting point for making sense of embodied forms of interaction.

One such system – based very much on function rather than form – is presented by Streeck (2009) in Gesturecraft. Streeck presents six categories of hand action, or, as he calls them, ‘gesture ecologies’. What do you think of the distinctions he draws?

1. Making sense of the world at hand (moving and touching objects)

2. Disclosing the world within sight (drawing attention to a shared visual focus e.g. through pointing)

3. Depiction (gestures used to represent content)

4. Thinking by hand (gesture that facilitates thought e.g. grasping at the air when you are trying hard to describe something)

5. Displaying communicative action (showing or foreshadowing aspects of the communicative act)

6. Ordering and mediating transactions (regulating the input of other participants; managing your interaction in an exchange)

But there are problems with classification systems that work purely on function, just as there are problems with those that relate purely to form. If we classify movements and actions only on the basis of function, how do we go about making the classification? It becomes an activity that relies entirely on ‘reading’ the surrounding context – you don’t end up ‘reading’ the body at all. For example, in order to know whether someone is currently ‘thinking by hand’, I would need to know what’s going on in their mind.

On the other hand, systems based only on form (e.g. those that distinguish pointing from grasping) are only helpful when making sense of the body’s semiotic work if you assume that the work of the body is inflexible – that form maps neatly, one-to-one, onto function. We know that isn’t the case… we know that pointing isn’t always about establishing a shared reference point, and we know that establishing a shared reference point isn’t always done through pointing.

What you need is a system that takes both form and function into account, one that encourages the user to think about the surrounding context – the ‘multimodal ensemble of activity’ (Goodwin, 2001) – but at the same time allows some insights to be drawn from the body itself.

Streeck, J. (2009) Gesturecraft: The manu-facture of meaning. Amsterdam: John Benjamins Publishing Company.
