
By Mona Sakr

A few days ago, I talked about Streeck’s taxonomy of gesture in Gesturecraft. It’s now time to share the taxonomy of hand action we’ve developed at the lab in response to a study of the hands in scientific inquiry. This taxonomy relates particularly to scientific inquiry contexts (though it may be useful for looking at hand action in other forms of experience) and is based on ‘reading’ both the form and function of hand actions. It’s inspired by literature in the field and by video analysis of students involved in inquiry learning about the behaviour of light.

1. Ergotic movements

Ergotic movements are those that change the surrounding environment. Such movements may involve changing the position of an object or attempting to change its physical properties. In the context of scientific inquiry, ergotic movements are necessary to facilitate observations of particular phenomena.

2. Epistemic movements

Epistemic movements are those that enable an individual to know more about the physical properties of an object. While ergotic movements are designed to change the surrounding environment, epistemic movements enable better perception of the surrounding environment, e.g. through feeling the texture of an object.

3. Deictic gestures

Deictic gestures are used to point to or physically highlight objects or areas in the physical world. They may be used to draw attention to a representational field or a particular aspect within a field.

4. Re-enactment gestures

While deictic gestures draw attention to particular parts of the environment, re-enactment gestures depict processes and so have an added temporal dimension of expression. Through re-enactment gestures, processes that are otherwise too fast to be visible can be slowed down.

5. Ideational gestures

While all of the actions described above relate to physical phenomena that are present, ideational gestures can be used to indicate content that is not present in any respect, like abstract ideas or previous experiences. In the context of scientific inquiry, students may wish to invoke previously learned knowledge in order to make sense of what is currently occurring. Gesture may be helpful in this because it constitutes a way of representing absent knowledge.



Embodiment isn’t just about recognising the importance of the body. It highlights the need to ‘read’ bodily actions and make sense of the semiotic work that these actions are doing. In a current project, I am attempting to make sense of the work done by different hand actions in the context of scientific inquiry. Is it possible to map hand actions to stages in scientific inquiry?

‘Reading’ parts of the body requires classification systems – systems that will make sense of all of the types of movement that can be performed. Hands might be used to move objects, to ‘know’ the texture or weight of objects, to gesture at objects present, or to gesture about objects absent. Systems designed to classify these types of hand action are a starting point for making sense of embodied forms of interaction.

One such system – based very much on function rather than form – is presented by Streeck (2009) in Gesturecraft. Streeck presents 6 categories of hand action, or, as he calls them, ‘gesture ecologies’. What do you think of the distinctions he draws?

1. Making sense of the world at hand (moving and touching objects)

2. Disclosing the world within sight (drawing attention to a shared visual focus e.g. through pointing)

3. Depiction (gestures used to represent content)

4. Thinking by hand (gesture that facilitates thought e.g. grasping at the air when you are trying hard to describe something)

5. Displaying communicative action (showing or foreshadowing aspects of the communicative act)

6. Ordering and mediating transactions (regulating the input of other participants; managing your interaction in an exchange)

But there are problems with classification systems that work purely on function, just as there are problems with those that relate purely to form. If we classify movements and actions only on the basis of function, how do we go about making the classification? It becomes an activity that relies entirely on ‘reading’ the surrounding context – you don’t end up ‘reading’ the body at all. For example, in order to know whether someone is currently ‘thinking by hand’, I would need to know what’s going on in their mind.

On the other hand, systems based only on form (e.g. those that distinguish pointing from grasping) are only helpful when making sense of the body’s semiotic work if you assume that the work of the body is inflexible – that form maps neatly, one-to-one, onto function. We know that isn’t the case… we know that pointing isn’t always about establishing a shared reference point, and we know that establishing a shared reference point isn’t always done through pointing.

What you need is a system that takes both form and function into account, one that encourages the user to think about the surrounding context – the ‘multimodal ensemble of activity’ (Goodwin, 2001) – but at the same time enables some insights to be made on the basis of the body itself.

Streeck, J. (2009) Gesturecraft: The manu-facture of meaning. Amsterdam: John Benjamins Publishing Company.