By Mona Sakr

Exploring the usefulness of the ‘body-thing dialogue’ metaphor for understanding embodied interaction in digital environments

In 2006, Larssen, Robertson & Edwards presented the paper ‘How it feels, not just how it looks: When bodies interact with technology’ at the Australian Computer-Human Interaction conference (OZCHI). In this paper, they suggest that the embodied nature of interactions with technology can be accessed by thinking about each interaction as a body-thing dialogue: the bodily interaction that occurs between an individual and an artefact. The body-thing dialogue happens through the mode of movement and opens up the potential for action that forms the basis of the interaction. Through this metaphor, Larssen et al. hoped to shift the focus towards the bodily nature of interactions with technologies – the extent to which these experiences are physically felt.

I would argue that the body-thing dialogue is a useful metaphor in some ways and a misleading one in other ways.

It is useful because it draws attention to the distinct and temporal nature of each embodied interaction:

  • Every dialogue we engage in is different. Similarly, in human-computer interaction each interaction with technology unfolds in a specific way and in a particular context. Flow diagrams based on a non-existent ‘typical’ user do not help us to access the nature of embodied interaction.
  • Every dialogue unfolds over time and can radically change from moment to moment. Similarly, each embodied interaction takes place over time and is historical – each aspect of the interaction happens in relation to the aspects that have preceded it.

But the metaphor of the body-thing dialogue is also a misleading way to think about embodied interactions in digital environments:

  • Movement is a different mode to speech, with different opportunities and constraints. Is it right to apply the notion of ‘dialogue’ in the context of movement?
  • Can we apply the term ‘dialogue’ to make sense of the way movements unfold between ‘a body’ and ‘a thing’? Certainly, the movements of the body and the movements of an object are not equivalents in the way that the speech of two human participants is.
  • Larssen et al. suggest that the body-thing dialogue is a useful way of looking at ‘how we use our proprioceptive sense and motor skills when incorporating a tool in our bodily space so that it becomes an extension of our bodies’ (p. 2). The notion of dialogue takes us away, however, from the concept of incorporation. In a dialogue, there is self and other – the object responds to us, rather than becoming an extension of us.

So what alternative metaphors or conceptual tools enable us to think about embodied interaction in digital environments? I have yet to come across a theory that frames interactions with artefacts so that the focus is on the body and felt experience, yet avoids the pitfalls outlined above. We need to conceptualise interactions as physical couplings without using metaphors that draw on other modes of communication.

Larssen, A. T., Robertson, T., & Edwards, J. (2006, November). How it feels, not just how it looks: when bodies interact with technology. In Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments (pp. 329-332). ACM.

By Mona Sakr

A few days ago, I talked about Streeck’s taxonomy of gesture in Gesturecraft. It’s now time to share the taxonomy of hand action we’ve developed at the lab in response to a study of the hands in scientific inquiry. This taxonomy relates particularly to scientific inquiry contexts (though it may be useful for looking at hand action in other forms of experience) and is based on ‘reading’ both the form and function of hand actions. It’s inspired by literature in the field and by video analysis of students involved in inquiry learning about the behaviour of light.

1. Ergotic movements

Ergotic movements are those that change the surrounding environment. Such movements may involve changing the position of an object, or attempting to change its physical properties. In the context of scientific inquiry, ergotic movements are necessary in order to facilitate observations of particular phenomena.

2. Epistemic movements

Epistemic movements are those that enable an individual to know more about the physical properties of an object. While ergotic movements are designed to change the surrounding environment, epistemic movements enable better perception of it, e.g. through feeling the texture of an object.

3. Deictic gestures

Deictic gestures point to or otherwise highlight objects or areas in the physical world. They may be used to draw attention to a representational field or to a particular aspect within a field.

4. Re-enactment gestures

While deictic gestures draw attention to particular parts of the environment, re-enactment gestures focus on describing processes and so have an added temporal dimension of expression. Re-enactment gestures can slow down processes that are otherwise too fast to be visible.

5. Ideational gestures

While all of the actions described above relate to physical phenomena that are present, ideational gestures can be used to indicate content that is not present in any respect, like abstract ideas or previous experiences. In the context of scientific inquiry, students may wish to invoke previously learned knowledge in order to make sense of what is currently occurring. Gesture may be helpful in this because it constitutes a way of representing absent knowledge.
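For readers who code video data against a scheme like this, the five categories above can be captured as a simple data structure. The sketch below is purely illustrative: the class names, fields, and example segments are hypothetical, not part of the taxonomy itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

class HandAction(Enum):
    """The five-category taxonomy of hand action in scientific inquiry."""
    ERGOTIC = auto()        # changes the surrounding environment
    EPISTEMIC = auto()      # probes the physical properties of an object
    DEICTIC = auto()        # points to or highlights something present
    RE_ENACTMENT = auto()   # replays a process, adding a temporal dimension
    IDEATIONAL = auto()     # represents absent or abstract content

@dataclass
class CodedSegment:
    """A hypothetical unit of video coding: a timestamped hand action."""
    start_s: float          # segment start, in seconds
    end_s: float            # segment end, in seconds
    action: HandAction
    note: str = ""

# Example: coding a short stretch of an inquiry session on light
segments = [
    CodedSegment(12.0, 14.5, HandAction.ERGOTIC, "repositions the torch"),
    CodedSegment(14.5, 16.0, HandAction.DEICTIC, "points at the shadow"),
    CodedSegment(16.0, 21.0, HandAction.RE_ENACTMENT, "replays the light path slowly"),
]

# Tally how often each category occurs across the coded segments
counts = {a: sum(s.action is a for s in segments) for a in HandAction}
```

Such a tally is the kind of starting point that would let the mapping question above – do hand actions map to stages of inquiry? – be explored across a corpus of coded video.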

By Mona Sakr

Embodiment isn’t just about recognising the importance of the body. It highlights the need to ‘read’ bodily actions and make sense of the semiotic work that these actions are doing. In a current project, I am attempting to make sense of the work done by different hand actions in the context of scientific inquiry. Is it possible to map hand actions to stages in scientific inquiry?

‘Reading’ parts of the body requires classification systems – systems that will make sense of all of the types of movement that can be performed. Hands might be used to move objects, to ‘know’ the texture or weight of objects, to gesture at objects present, or to gesture about objects absent. Systems designed to classify these types of hand action are a starting point for making sense of embodied forms of interaction.

One such system – based very much on function rather than form – is presented by Streeck (2009) in Gesturecraft. Streeck presents six categories of hand action or, as he calls them, ‘gesture ecologies’. What do you think of the distinctions he draws?

1. Making sense of the world at hand (moving and touching objects)

2. Disclosing the world within sight (drawing attention to a shared visual focus e.g. through pointing)

3. Depiction (gestures used to represent content)

4. Thinking by hand (gesture that facilitates thought e.g. grasping at the air when you are trying hard to describe something)

5. Displaying communicative action (showing or foreshadowing aspects of the communicative act)

6. Ordering and mediating transactions (regulating the input of other participants; managing your interaction in an exchange)

But there are problems with classification systems that work purely on function, just as there are problems with those that relate purely to form. If we classify movements and actions only on the basis of function, how do we go about making the classification? It becomes an activity that relies entirely on ‘reading’ the surrounding context – you don’t end up ‘reading’ the body at all. For example, in order to know whether someone is currently ‘thinking by hand’, I would need to know what’s going on in their mind.

On the other hand, systems based only on form (e.g. those that distinguish pointing from grasping) are only helpful in making sense of the body’s semiotic work if you assume that the work of the body is inflexible – that form maps neatly, one-to-one, onto function. We know that isn’t the case: pointing isn’t always about establishing a shared reference point, and establishing a shared reference point isn’t always done through pointing.

What you need is a system that takes both form and function into account, one that encourages the user to think about the surrounding context – the ‘multimodal ensemble of activity’ (Goodwin, 2001) – but at the same time enables some insights to be made on the basis of the body itself.

Streeck, J. (2009). Gesturecraft: The manu-facture of meaning. Amsterdam: John Benjamins Publishing Company.