
Towards a Cognitive Model of Human Mobility: An Investigation of Tactile Perception for use in Mobility Devices

Published online by Cambridge University Press:  14 July 2016

E. Pissaloux*
Affiliation:
(UPMC & Université de Rouen, France)
R. Velazquez
Affiliation:
(Universidad Panamericana, Mexico)
M. Hersh
Affiliation:
(Glasgow University, Scotland)
G. Uzan
Affiliation:
(Université Paris 8, France)

Abstract

This paper reports the results of the first three in a series of experiments on tactile perception which form part of a larger project on tactile perceptions and spatial representations and the design of tactile interfaces for mobility devices for blind, partially sighted and deafblind people. The results indicate the potential of tactile interfaces, including for supporting environmental exploration and mobility. The participants showed reasonably good ability to determine the direction of motion of an arrow, with the best recognition rates in the up and right directions. They showed reasonably good ability to use a tactile interface to detect and avoid obstacles after a very short learning period, and more limited ability to learn and remember an environmental representation using information from a tactile interface while walking through the environment without specific instructions.

Type: Research Article

Copyright © The Royal Institute of Navigation 2016

1. INTRODUCTION: MOBILITY, SENSORY PERCEPTIONS AND TYPES OF SPACE

This paper presents three experiments investigating the ability of subjects to determine the direction of motion via touch, to use information from a tactile interface to avoid obstacles, and subsequently to recognise the correct representation of the space navigated with the aid of the interface. These experiments form part of a larger project on tactile perceptions and spatial representations and the design of tactile interfaces for mobility devices for blind, partially sighted and deafblind people and (older) people with progressive visual impairments.

Mobility is a very complex signal and cognitive processing problem in which sensory information is analysed to determine a direction of motion, either towards a specific destination or without one, for instance when exploring an area or walking for pleasure. Sighted people generally rely on sight to extract information from the environment, whereas blind people use information from all their senses, with auditory and tactile information generally the most important. Partially sighted people with significant (residual) vision may largely use vision, and people with progressive visual impairments may often experience difficulties in adapting from visual mobility to mobility using all their senses (Hersh, 2009a). There is evidence that spatial representation is not a purely visual activity and that vision is neither necessary nor sufficient on its own for spatial coding (Millar, 1988). Table 1 indicates that blind and partially sighted people generally lack preview and overview information about the route and the objects on it, information which is useful for independent physical displacement.

Table 1. Comparison of information from the different senses (adapted from Hersh, 2016).

Touch and movement can be combined to provide information about shape, configurations and the relationship between surfaces, but memory for shapes and configurations obtained from vision has been found to be better than that obtained from touch (Millar, 1995). However, good organisation and coding facilities make it easier to remember information, regardless of the sensory modality. According to the CAPIN (Convergent Activity Processing in Interrelated Networks) model, information from the different senses is both specialised and complementary and overlapping (Millar, 1994; 1995). This facilitates understanding and coding spatial information (Millar, 1995). Prior knowledge and experience are generally used to help make sense of the perceived information and it is often necessary to reconstruct fragmentary information in order to understand space, regardless of whether it was obtained from haptics or vision, but haptics generally have greater information processing delays than vision.

The lack of preview and overview information by blind people means that they use specific landmarks detected by all their senses much more frequently than sighted people in order to check that they are on the correct route. While auditory, olfactory, proprioceptive and kinaesthetic information largely serve as landmarks, tactilely perceived objects can be landmarks or obstacles, sometimes both simultaneously, or they may change roles during a journey (Hersh, 2009a). This lack of overview and preview information also makes route learning particularly important (Hersh, 2009b; 2016; Pissaloux, 2013; Pissaloux and Velázquez, 2016). Thus, remembered knowledge about the route can play the same role for blind people as preview and overview information for sighted people.

Blind people use shore lines, such as kerbs or the edge of grass verges, which are detected by the cane in order to walk in a straight line. The details of the information available to a blind pedestrian and the locomotion strategies used are often dependent on their particular mobility aid. An example of this is the differences in motion using a long cane and a guide dog. A blind person using a cane will obtain obstacle information from the cane and can use this information as landmarks, while also needing to avoid the obstacles, whereas a guide dog user may not be aware of obstacles which the dog guides them round and will therefore need to use other types of landmarks.

The importance of learning and remembering routes for the successful mobility of blind people means that they need to be able to obtain appropriate spatial representations. The simplest categorisation of space is into small-scale or near space, which can be seen from one vantage point, and large-scale or far space, which requires movement to be experienced (Downs and Stea, 1977; Lynch, 1960; Tversky, 1993; 2001; 2005; Ungar, 2000). Later models include small, medium and large-scale spaces (Gärling and Golledge, 1987; Mandler, 1983; Siegel, 1981), with the possible addition of the further category of maps (Siegel, 1981). A number of studies show that the primary perceptual space or spatial framework model is used in practice (Bryant et al., 1992; Franklin and Tversky, 1990). It consists of the three up-down, left-right and front-back axes, which have been used to model the physical space close to the body.

This paper is concerned with tactile perceptions in (relatively) near space. Tactile/haptic information is generally obtained from a touch stimulation device such as a long cane or guide dog. However, as described later in the paper, the combination of a touch stimulating interface and various sensors could be used to obtain information from medium or far space, though a significant reduction in the information content would be required to present it to the user in a meaningful form. Important questions include: (i) the type and amount of information required to provide a satisfactory representation; (ii) the ability of blind (and sighted) people to process this information to form a satisfactory representation; and (iii) how understanding of the tactile perceptions of blind, partially sighted, deafblind (and sighted) people can be used to improve the design of mobility aids for them.

This paper contributes to answering these questions. It is organised as follows. Section 2 provides a brief overview of the current state of the art on the development of mobility aids for blind and partially sighted people. Section 3 presents the three experiments on touch stimulation and perception which are the main contribution of the paper. Section 4 summarises the results of these experiments and Section 5 presents conclusions and further work.

2. MOBILITY AIDS: THE EVOLVING STATE OF THE ART

Although the earliest technological aids for mobility for Visually Impaired People (VIP) date back to 1880 and numerous electronic aids have been developed subsequently, the most popular aids are still the long cane (Hersh, 2016) and the guide dog, both of which can be used to detect and avoid obstacles. The long cane has the advantages of simplicity, robustness, low cost and reliable performance. It has the disadvantage of not providing information on distant or high-level obstacles, or supporting wayfinding and navigation. Guide dogs provide similar guidance to a human guide, but only on known routes, and are only suitable for people who love dogs and are able to care for them.

A number of technological solutions have been proposed to support mobility by VIP (Hersh and Johnson, 2008; Pissaloux et al., 2006; Pissaloux, 2013; Pissaloux and Velázquez, 2016). However, most of them are only used to a limited extent due to high costs, limited benefits compared to the cane, weight, unattractive and obtrusive appearance, lack of information about the difficulties in learning to use them and lack of easily available low cost or free training, among other factors.

The first phase of aid development focused on obstacle detection at a height and/or at a greater distance. Many of these aids involved modifications to the cane, either through a box that could be clipped to the cane or modification of the cane itself. Information is provided to users in tactile and/or auditory modalities (e.g. the CASBliP FP6 EU project (Dunai et al., 2014; Bujacz et al., 2012; Maidenbaum et al., 2014; Abboud et al., 2014)). Tactile information was generally in the form of vibration, and auditory information possibly involved musical sounds of different frequencies and/or intensities, rather than speech. However, the authors consider that many of these aids did not pay sufficient attention to the difficulties involved in mental processing of complex spatial information or the need for auditory information not to interfere with access to environmental sounds, for instance of the cane, which are very important for the mobility of VIP.

Examples include the laser cane (Hersh and Johnson, 2008), the smart cane (Terlau and Penrod, 2008), the UltraCane (Hoyle and Dodds, 2006) and the Tom Pouce and Télétact (Farcy, 2006) (Figure 1). Robot canes, such as the smart cane (Borenstein and Ulrich, 2001), include a (small) motor and power source and are therefore able to move in order to avoid obstacles. This movement is communicated to the user via the handle and the user moves to follow the cane movement. This therefore provides an intuitive and easy-to-use obstacle avoidance system and avoids the need for the extensive training required by many other cane-based devices. However, the authors are not aware of any robot canes which have gone beyond the prototype stage.

Figure 1. From left: Robotised smart cane, smart cane (K-Sonar), intelligent canes (UltraCane, TomPouce).

These canes all use infrared, ultrasonic and/or laser sensors to obtain environmental information. More recent developments (e.g. Arditi and Tian, 2013; Yusro et al., 2013; Kumar et al., 2011) use cameras with signal processing algorithms to extract environmental information. Such devices are able to provide additional and more detailed information than other types of cane, raising the question of how much information can usefully be provided to the user in a format they can easily and quickly understand and without leading to cognitive overload.

The second phase of aid design involved the development of navigation and wayfinding devices. They require the location of either the user or a particular point in the environment, leading to the development of two distinct approaches with overlapping functionality (Hersh, 2009b):

  • Global Navigation Satellite Systems (GNSS), currently most commonly using Global Positioning Systems (GPS), which locate the user,

  • Environmental information beacons, which locate a point in space.

Both these types of systems can be used for outdoor navigation, but to date only the second type has been used for indoor navigation. GPS systems suitable for blind people include both stand-alone devices, such as the Trekker Breeze, Trekker GPS, Navigator and Captain, and software, such as Wayfinder, that can be used on a mobile phone or other mobile device. All the environmental information beacons and many of the satellite navigation systems use speech to provide information, though Braille GPS systems, such as BrailleNote GPS, are available. The greatest choice of systems is available in English, but examples in other languages include the Polish Navigator and the Czech system of environmental information beacons. Other environmental information beacons include the Talking Signs system (Brabyn et al., 1993) and the Haptic Pointer Interface (Santa Barbara) (Figure 2). A GNSS/GPS system was developed by the European FP7 HAPTIMAP project for use on a smart phone (Figure 3). It modifies a periodic, continuously played tune to indicate changes of direction while walking. However, this could both be very irritating and block the perception of important environmental audio information.

Figure 2. Beacon and receiver for Talking Sign System.

Figure 3. European FP7 HAPTIMAP prototype.

Both types of systems may provide additional functions (e.g. points of interest information and the location of facilities in the case of GPS, and information about facilities and the ability to send signals requesting that vehicle doors be opened in the case of environmental information beacons). Environmental information beacons can frequently be used both indoors and outdoors, whereas GNSS/GPS systems can generally only be used outdoors due to the lack of satellite access indoors. The provision of information in a comprehensible format is also dependent on the availability of maps of the area, e.g. to enable information to be linked to particular streets and feasible turns to be indicated.

A small number of aids combine obstacle avoidance and wayfinding/navigation functionality. These include the Smart Environment Explorer Stick (SEES; Yusro et al., 2013; Figure 4), a long cane-based device which combines a satellite-based GPS navigation function with obstacle avoidance information from a camera. It can also provide contextual information, such as the location of the nearest traffic lights.

Figure 4. SEES system – an intelligent cane.

Aids that locate the user raise privacy and security issues, which may not have been considered in aid design. Some of the technologies involved such as Radio Frequency Identification (RFID) tags are particularly insecure and can be easily read by unauthorised people. This may allow targeting of VIP who could be considered particularly vulnerable by ill-intentioned individuals. There are also issues of the user's awareness of the associated information privacy issues, the importance of the privacy of their personal information to them and their tradeoff choices between information privacy and device functionality.

The current phase of travel aid development involves apps for smart mobile devices (Kane et al., 2009). They seem to be a natural progression from the first two phases, with phase one involving mainly hardware, phase two a combination of hardware and software and phase three purely software, with the hardware provided by an existing mobile device. The move towards apps has been made possible by developments in information and communication technology which have significantly increased the computing power that can be contained on small mobile devices. Many apps are open source or otherwise cost free, whereas others are commercially available. While many of the mobility-related apps developed to date provide very specific contextual information or otherwise have limited applications, they have the potential to be used to provide wide-ranging navigation and information systems and, in combination with appropriate hardware, possibly also obstacle avoidance. Many of these apps, such as "Find my bus" or "Find my bus stop", provide information of relevance to both blind and sighted people. If designed to be compatible with a range of different interface modalities they can be used by both groups.

The last type of travel aid comprises maps and route descriptions. If appropriately designed, they can be used to provide information about medium and large/far-scale space and assist navigation (Koch and Teller, 2008; Kammoun et al., 2010). Tactile maps should be carefully designed to be easily readable. This includes the use of easily distinguishable, preferably standardised, symbols and the avoidance of excessive information on each sheet. A tactile map can only convey a fraction of the information in a printed map of the same size, so more sheets are needed, making tactile maps less portable than print maps. For instance, a book with fairly large pages may be required to provide a tactile schematic of a metro system, with each line having its own page plus an overview sheet, whereas the same information can be provided for sighted people on one page. The information on each sheet can be augmented by putting the map on a tactile table with audio output able to, for instance, read out location names and possibly also provide additional information about them.

There are a number of different ways of producing tactile maps, including home-made versions using a variety of readily available objects, including materials with different textures and pens that produce raised lines. More professional approaches include computer design of the map followed by printing on thermal paper (Pissaloux, 2013; see Figure 5, left). Three-dimensional tactile models have been used, but need to be well designed to be of use. Some examples, such as the two-and-a-half-dimensional concrete map (Figure 5, right), have too much detail to be of much use.

Figure 5. Mobility map technologies: thermoformed map (Paris) and concrete map (Hamburg).

3. EXPERIMENTS

These are the first in a series of experiments investigating tactile perceptions and the use of tactile interfaces to support mobility. The experimental apparatus involved a TactiPad and a Perception-Movement Platform (PMP). The TactiPad has 8 × 8 taxels (a tactile equivalent of pixels) realised with shape memory alloy technology (Velazquez et al., 2008), with 2·6 mm spacing between them, on a cube with 8 cm sides and weighing 200 g. The display refresh rate is 1·5 Hz, which is related to the human cognitive ability to perceive and mentally process tactile stimuli.
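To make these specifications concrete, the sketch below shows one way a binary frame for such a pin matrix could be represented and pushed to the display at the stated refresh rate. It is a minimal illustration only: the TactiPad's actual firmware and software interface are not described in the paper, so the function names and the `render()` placeholder are assumptions.

```python
import time
import numpy as np

TAXELS = 8          # 8 x 8 pin matrix
PITCH_MM = 2.6      # centre-to-centre spacing between taxels
REFRESH_HZ = 1.5    # reported display refresh rate

def render(frame: np.ndarray) -> None:
    """Placeholder for the hardware call that raises/lowers the SMA pins.

    `frame` is an 8 x 8 array of 0/1 values (1 = pin raised). The real
    actuation interface is not given in the paper, so nothing is sent here.
    """
    assert frame.shape == (TAXELS, TAXELS)

def display_loop(frames) -> None:
    """Push successive binary frames to the display at the refresh rate."""
    period = 1.0 / REFRESH_HZ
    for frame in frames:
        render(np.asarray(frame, dtype=np.uint8))
        time.sleep(period)
```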

The PMP is a 5 m × 7 m room with several stationary but moveable objects which act as obstacles. One participant at a time moves round the room using the TactiPad to avoid these obstacles (Figure 6, lower). Their position and orientation are tracked by a wide-view camera tracking system on a platform over the PMP using a bicoloured arrow on their hat. The camera obtains images of the whole platform at a rate of 8 Hz and passes them to the PC tracking software for processing. The PC screen displays three images for control purposes (Figure 6): the space perceived by the camera (central image), part of the near space perceived by the participant (small rectangle in the central image) and the space displayed on the TactiPad (south-east portion of the central image).
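A minimal sketch of the windowing step in such a pipeline is given below: it extracts the 8 × 8 egocentric patch of an occupancy grid immediately ahead of the tracked user and returns it as a TactiPad frame. The paper does not publish the PMP software, so the coordinate conventions, the size of the near-space window and all names here are illustrative assumptions.

```python
import numpy as np

def near_space_window(occupancy, grid_res_m, pos_rc, heading_rad, window_m=1.5):
    """Map the area just ahead of the user onto an 8 x 8 binary frame.

    occupancy   : 2-D binary grid of the 5 m x 7 m platform (1 = obstacle).
    grid_res_m  : size of one occupancy cell in metres.
    pos_rc      : (row, col) of the user in occupancy cells (overhead camera).
    heading_rad : heading derived from the bicoloured arrow on the user's hat.
    window_m    : side length of the near space mapped onto the display (assumed).
    """
    taxel_m = window_m / 8
    frame = np.zeros((8, 8), dtype=np.uint8)
    h, w = occupancy.shape
    for i in range(8):              # display rows: 0 = farthest ahead, 7 = nearest
        for j in range(8):          # display cols: 0 = leftmost, 7 = rightmost
            fwd = (8 - i) * taxel_m          # metres ahead of the user
            lat = (j - 3.5) * taxel_m        # metres to the user's right
            # rotate the egocentric offset into platform (row, col) coordinates;
            # heading 0 is taken as "up" the grid (decreasing row) - an assumption
            drow = -fwd * np.cos(heading_rad) + lat * np.sin(heading_rad)
            dcol = fwd * np.sin(heading_rad) + lat * np.cos(heading_rad)
            r = int(round(pos_rc[0] + drow / grid_res_m))
            c = int(round(pos_rc[1] + dcol / grid_res_m))
            if 0 <= r < h and 0 <= c < w:
                frame[i, j] = occupancy[r, c]
    return frame
```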

Figure 6. PMP: processed data flux (left), user carrying the TactiPad (right), real experiment (lower).

3.1. Experiment 1: Perception of Moving Tactile Stimuli Displayed on the Tactipad

This experiment investigated the ability to determine the direction of motion of a moving stimulus and whether this is dependent on the type of signal. It will be assumed that correctly determining the direction of movement indicates that the participant has successfully perceived and interpreted the tactile stimulus.

The experiment involved ten blindfolded sighted participants, seven men and three women, of average age 25 years and without previous experience of the TactiPad. It lasted approximately 30 minutes. Participants sat in front of the TactiPad (Figure 7) and freely explored the tactile surface with one hand to detect the stimuli. There were four different shapes, namely a line segment and an arrow with three different types of arrowhead: a full small triangle (3 × 3 taxels), a framed large triangle (4 × 4 taxels) and a framed small triangle (3 × 3 taxels) (Table 2). The displayed pattern was moved one taxel every 300 ms in a North (N), South (S), East (E) or West (W) direction. Participants were asked to state the direction of movement and this was recorded by the researcher. Consecutive directions of movement were determined randomly. If an incorrect answer was given, the moving shape was displayed again and participants were given one further opportunity to identify it. At the end of the experiment participants were asked their feelings about the device and how easy it was to use it to recognise moving stimuli.
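The stimulus protocol just described can be sketched as follows: a binary pattern is shifted by one taxel every 300 ms in a randomly drawn NEWS direction, with pins falling off the edge simply dropped. The paper does not give the number of shift steps per trial or the grid orientation, so those details are assumptions.

```python
import random
import numpy as np

DIRECTIONS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}  # row 0 taken as north
STEP_MS = 300   # the pattern was shifted by one taxel every 300 ms

def shift(frame: np.ndarray, direction: str) -> np.ndarray:
    """Shift an 8 x 8 binary pattern by one taxel; pins leaving the edge are dropped."""
    dr, dc = DIRECTIONS[direction]
    out = np.zeros_like(frame)
    src = frame[max(-dr, 0):8 - max(dr, 0), max(-dc, 0):8 - max(dc, 0)]
    out[max(dr, 0):8 + min(dr, 0), max(dc, 0):8 + min(dc, 0)] = src
    return out

def trial_frames(pattern: np.ndarray, n_steps: int = 5, rng=random):
    """Frame sequence for one trial in a randomly chosen direction (n_steps is assumed)."""
    direction = rng.choice(list(DIRECTIONS))
    frames = [pattern]
    for _ in range(n_steps):
        frames.append(shift(frames[-1], direction))
    return direction, frames
```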

Figure 7. Experimental setup for tactile stimuli cognitive perception.

Table 2. NEWS Direction recognition for 2 patterns and 3 representations of an arrow.

The quantitative data on average recognition rates of the four moving shapes (Tables 2 and 3) indicate that it is easier to recognise the direction of movement for a framed rather than a full pattern, with the highest recognition rate for the framed small arrow. However, this more accurate recognition is at the expense of longer recognition times (30 seconds compared to 14-16 seconds), indicating possible tradeoffs between accuracy and time. The average recognition times for the other two arrows and the line segment were comparable (14, 15 and 16 seconds), but the recognition rates varied greatly (81% and 79% for the framed large arrow and line segment respectively compared to only 65% for the full small arrow). The average direction recognition rate and the time taken were 80% and 19 seconds respectively, with the highest recognition rates for movement in the E and N directions (82%/81%) and the lowest in the S and W directions (76%/75%).

Table 3. Average recognition rates and standard deviations over all directions and shapes.

Recognition rates for N and S movement were highest for the small framed arrow (100% and 90% respectively) and fastest in the N direction (6·5 s compared to 17-20 s). However, for E movement, recognition was highest for the line segment (89%), and just over twice as fast for the full arrow (15 s) as for the small framed arrow (82% and 33 s). In the W direction recognition rates were similar for the two framed arrows and the line segment (82-83%) and fastest for the large framed arrow (11 s). While recognition rates of N and S movement were relatively good, at nearly 80% for the large framed arrow and over 70% for the line segment, a significant proportion of the errors resulted from N/S confusion.

All participants used passive rather than active exploration of the tactile surface, possibly because they were sighted and had no experience of using the device or of tactile perception. They also noted difficulties in distinguishing the large arrow from a line segment moving in a diagonal direction. The better recognition of framed shapes may be due to the larger tactile gradient on the passive hand. The higher rate of N and E movement recognition may be due to these being more 'natural' directions for finger movement and the greater density of mechanoreceptors on these finger surfaces. The loss of physical continuity between adjacent fingers during passive exploration of the TactiPad and the lower density of mechanoreceptors for perceiving W movement increase the cognitive load and make recognition more difficult and error-prone. However, there may also have been E/W confusion. While it may not be surprising that it is easier to recognise which main axis the signal is moving along than the direction of movement along this axis, this will still require further investigation.

The arrow seems better suited to indicating the direction of movement than the moving line. However, as indicated by fairly high standard deviations, inter-subject variability was high, showing the need for further investigation with larger numbers of (visually impaired) participants.

3.2. Experiment 2: Tactile Personal Cognitive Map (PCM) for Mobility Assistance

Two experiments are presented in this section to investigate the following:

  1. Obstacle awareness, detection and localisation (Experiment 2·1).

  2. Spatial awareness and the ability to identify spatial representations (Experiment 2·2).

These experiments involved ten blindfolded sighted participants, nine men and one woman between 21 and 59 years of age and without previous experience of the TactiPad. A brief questionnaire determined their preferred hand and ability to use touch screens and play video games.

3.2.1. Experiment 2·1: Obstacle Awareness

This experiment investigated the following three measures of obstacle awareness:

(1) the efficiency of the tactile interface in informing participants of obstacles; (2) the effectively explored space; and (3) the perceived space.

To support localisation and indicate direction, each participant wore a hat with a two-coloured arrow. They also wore headphones transmitting pink noise and carried the TactiPad (Figure 6, right). Six rectangular obstacles were randomly placed on the PMP, with five of them near the edges of the platform and the sixth inside the platform (Figure 9). The experiment was carried out with each participant separately, so only one person was on the PMP at any one time, and was divided into three parts:

  1. Explanation: the researcher introduced the TactiPad and explained the information code used, the relationship between tactile displacements on the TactiPad and object positions and the different ways of using the tactile interface to detect obstacles.

  2. Assisted learning of the TactiPad (two minutes): the researcher accompanied the participant moving round the platform while using the TactiPad to determine object positions and correlate the positions of real obstacles identified by touch with their positions on the TactiPad.

  3. Autonomous navigation: the participant was positioned at a random location unknown to them on the platform and allowed to walk freely round it while using the TactiPad to navigate it and avoid obstacles. No other instructions were given to participants. Three researchers observed the participant's movements and recorded the time and location of each obstacle contact. Their trajectory and direction of gaze were also recorded using the position and direction of the arrow on their hat.

At the end of the experiment the participant was accompanied, while still blindfolded to prevent them seeing the PMP, to a place out of sight of the platform. They then removed their blindfolds and answered a short series of questions about their feelings during the experiment.

The number of registered contacts with obstacles during the walking period was used as a measure of the participant's efficiency in using the tactile interface to avoid obstacles (Figure 9b). The area walked through in a given period was used as a measure of effectively explored physical space (Figure 8a). This was estimated using overlapping rectangles formed by the width of the participant's shoulders and the length of their step (rectangle in central image in Figure 6 left and Figure 8b).
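The explored-space metric can be made concrete with the sketch below, which rasterises one shoulder-width by step-length footprint rectangle per tracked pose onto a grid of the platform and takes the union. The footprint dimensions, grid resolution and coordinate conventions are not stated in the paper and are assumed here purely for illustration.

```python
import numpy as np

PLATFORM_M = (7.0, 5.0)   # platform dimensions in metres
CELL_M = 0.05             # analysis grid resolution (assumed)

def explored_fraction(poses, shoulder_m=0.45, step_m=0.6):
    """Fraction of the platform covered by the union of per-pose footprint rectangles.

    poses      : iterable of (x_m, y_m, heading_rad) samples from the tracker.
    shoulder_m : assumed shoulder width of the participant.
    step_m     : assumed step length.
    """
    nx, ny = int(PLATFORM_M[0] / CELL_M), int(PLATFORM_M[1] / CELL_M)
    covered = np.zeros((nx, ny), dtype=bool)
    xs, ys = np.meshgrid((np.arange(nx) + 0.5) * CELL_M,
                         (np.arange(ny) + 0.5) * CELL_M, indexing="ij")
    for x, y, th in poses:
        # express every grid-cell centre in the walker's frame and keep the cells
        # inside the step-length x shoulder-width rectangle centred on the walker
        fwd = (xs - x) * np.cos(th) + (ys - y) * np.sin(th)
        lat = -(xs - x) * np.sin(th) + (ys - y) * np.cos(th)
        covered |= (np.abs(fwd) <= step_m / 2) & (np.abs(lat) <= shoulder_m / 2)
    return covered.mean()
```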

Figure 8. Two proposed metrics for obstacle awareness quantification.

Figure 9. PMP for obstacle awareness: (left) layout of five obstacles and (right) contact frequency during unconstrained navigation with the TactiPad.

The collected results (Figure 9, right) show that, over a period of ten minutes, on average: (i) 54% of the available area was explored; (ii) 94% of the available area was perceived; and (iii) there were five contacts with obstacles, i.e. one contact every two minutes. These results indicate that participants had relatively few contacts with obstacles and were able to avoid contacts for some time. Consequently, they were able to use the TactiPad tactile interface to obtain useful information about obstacle locations and use this information to support obstacle avoidance.

In most obstacle contacts, participants gave an indication that they were aware of the presence of an obstacle, for instance by turning to avoid it or approaching it slowly. In combination with the collision data, this indicates that the TactiPad enabled participants to determine the presence of obstacles and their approximate locations. However, the low spatial resolution of the TactiPad may have been responsible for the inaccurate object locations, leading to over or underestimation of the distance to obstacles and contact with them. A tactile interface with a larger number of taxels and consequently better resolution might enable participants to locate obstacles more accurately and make it easier to avoid them. In addition, some obstacle contacts may have been due to the lack of participant familiarity with the TactiPad and tactile navigation.

3.2.2. Experiment 2·2: Reconstruction of Spatial Layout

This experiment had two parts: (i) drawing a representation of the recently explored platform; (ii) simultaneously viewing six different representations (Figure 10) and indicating which representation was the most similar to the platform layout.

Figure 10. Different representations presented to a participant as a test of space structure reconstruction.

Only 50% of participants chose the correct representation. Further investigation will be required to draw conclusions about participants’ ability to use tactile perception to form a topological map of the space. Possible explanations of difficulties in determining the correct representation include:

  1. The TactiPad's poor resolution and lack of detailed information; it is only able to indicate obstacle presence or absence.

  2. Poor sampling of the space: only 54% of the space was explored on average (results of Experiment 2·1).

  3. Lack of experience with using the TactiPad and tactile navigation strategies.

  4. Too little time to explore the space fully.

In response to questions after the experiments, almost all the participants stated that they had used elimination strategies in part 2 of the experiment. The majority said that they had attempted to remember the number of objects and their positions relative to their initial positions and that several of the proposed representations were sufficiently different from the real representation to be easily discarded. While this could suggest difficulties in forming or remembering environmental representations, participants were not asked to try and learn the space. Therefore, the results present an indication of what participants were able to learn 'automatically' about the space by walking through it without trying to explore the whole space and/or consciously learn it. These results seem to confirm those of other experiments with blind people (cited in Thinus-Blanc and Gaunet (1997)), i.e. that using experience to make inferences about the environment is in general difficult for blindfolded sighted as well as early and late blind people.

All the participants raised the need for additional information from the TactiPad. This could be provided through speech or by improving the resolution and increasing the range of information associated with a taxel beyond the binary obstacle/no obstacle.

4. SUMMARY OF THE MAIN RESULTS

The conclusions can be summarised as follows:

  1. Participants showed reasonably good ability to determine the direction of motion of an arrow, with the best recognition for north (up) and east (right) movement and for a framed arrow with a small head, and the fastest recognition in the N/S (up/down) directions and for the framed large and full small arrows (Experiment 1).

  2. Participants showed reasonable ability to use a tactile interface to detect and avoid obstacles after a very short learning period (Experiment 2·1).

  3. Participants showed some, but limited, ability to learn and remember a representation of the environment using information from a tactile interface and walking through it when not specifically instructed to remember it (Experiment 2·2).

These results (tentatively) confirm the value of a tactile interface and that blindfolded sighted subjects are able to use tactile perceptions to form representations of the environment. However, the results have been affected by a number of factors including the short learning time, with the ability to form tactile representations increasing with greater learning time (Velazquez et al., 2008), and the poor TactiPad resolution with only 8 × 8 taxels and high (2·6 mm) intertaxel distance. The latter factor may have led to poor sampling of the physical space and inaccurate distance estimation, increasing the likelihood of obstacle contacts. This could be improved in future tactile interfaces. In addition, older people may have reduced tactile sensitivity and/or slower cognitive processing (Thornbury and Mistretta, 1981). Older participants used a more cautious approach and took more time to understand the principle of correlating tactile information on the TactiPad with the environment, whereas younger participants used the interface immediately without necessarily understanding it, followed by a trial and error approach to learning. However, neither strategy was found to significantly affect performance, though experiments with a greater number of participants would be required to confirm this.

In summary, the results indicate the potential of tactile representations of space and the use of tactile interfaces, such as the TactiPad, to support end-users. However, further work is required to confirm and further develop the results presented here, including with blind and older people, larger numbers of participants and improved tactile interfaces with a much higher number of taxels while still being easily portable and easy to hold in one hand and explore with the other.

5. CONCLUSIONS

This paper has presented the first three in a series of experiments to investigate the ability to form tactile representations of space, the use of tactile interfaces to support this and their application in mobility assistance. They involved recognition of a moving pattern; obstacle avoidance while walking using information from a tactile interface; and forming environmental representations from tactile information and information obtained from walking. While considerable further research will be required, the initial results (see previous section) indicate the potential of tactile interfaces, including in supporting mobility. However, the results also indicate the need for an improved tactile interface with a larger number of more closely spaced taxels. There may also be a role for the use of such experiments in much wider psycho-cognitive tasks, such as training short- and long-term memory.

In particular, research should involve a much larger number of participants, a considerably improved interface, as well as comparative investigation of the performance of blind, partially sighted and sighted participants, and comparative performance across demographic indicators such as age, gender and education. Other factors which could usefully be investigated include:

  1. The impact of learning on these and other experiments, including the impacts of the length of learning time and of previous experience with the TactiPad or other tactile interfaces.

  2. Navigation experiments with different numbers, densities and layouts of obstacles and varying time durations.

  3. Investigation of the ability to use tactile information to form representations, including of areas with different numbers, densities and layouts of obstacles and over varying time durations.

  4. Recognition of the shape and direction of moving objects from tactile perceptions, including the effect of different types of shapes and shape features, as well as directions of movement (oblique as well as along the main axes) and speed of movement on ease of recognition; differences between passive and active exploration; and whether the habitual direction of reading has any impact on the relative ease of recognising the direction of motion and the moving object for E and W directions.

  5. The effect of the use of egocentric and allocentric models and different points of reference on the abilities to navigate while avoiding obstacles and to represent and recreate obstacle layouts.

ACKNOWLEDGEMENT

The authors would like to thank the Commissariat à l'Énergie Atomique et aux Énergies Renouvelables (France), the Centre National de la Recherche Scientifique (France, ROBEA programme) and the Ministry of Education for financial support of this research. This research was partly financed by the FP7 EU project "AsTeRICS".

REFERENCES

Abboud, S., Hanassy, S., Levy-Tzedek, S., Maidenbaum, S. and Amedi, A. (2014). EyeMusic: introducing a "visual" colorful experience for the blind using auditory sensory substitution. Restorative Neurology and Neuroscience, 32, 247-257.
Arditi, A. and Tian, Y. (2013). User interface preferences in the design of a camera-based navigation and wayfinding aid. Journal of Visual Impairment & Blindness, 107(2), 118-129.
Brabyn, J., Crandall, W. and Gerrey, W. (1993). Talking Signs: a remote signage solution for the blind, visually impaired and reading disabled. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 1309-1310.
Borenstein, J. and Ulrich, I. (2001). The GuideCane - Applying Mobile Robot Technologies to Assist the Visually Impaired. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 31(2), 131-136.
Bryant, D.J., Tversky, B. and Franklin, N. (1992). Internal and external spatial frameworks for representing described scenes. Journal of Memory and Language, 31, 74-98.
Bujacz, M., Skulimowski, P. and Strumiłło, P. (2012). Naviton - a prototype mobility aid for auditory presentation of 3D scenes. Journal of the Audio Engineering Society, 60(9), 696-708.
Downs, R.M. and Stea, D. (1977). Maps in Minds: Reflections on Cognitive Mapping. Harper and Row.
Dunai, L., Lengua, I., Tortajada, I. and Fernando Brusola, S. (2014). Obstacle detectors for visually impaired people. 2014 International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), 809-816.
Farcy, R. (2006). Electronic Travel Aids and Electronic Orientation Aids for blind people: technical, rehabilitation and everyday life points of view. CVHI 2006, Kufstein, Austria.
Franklin, N. and Tversky, B. (1990). Searching Imagined Environments. Journal of Experimental Psychology: General, 119(1), 63-76.
Gärling, T. and Golledge, R. (1987). Behaviour and Environment: Psychological and Geographical Approaches. North Holland.
Hersh, M. and Johnson, M. (eds.) (2008). Assistive Technology for Visually Impaired and Blind People. Springer.
Hersh, M.A. (2009a). Designing assistive technology to support independent travel for blind and visually impaired people. CVHI '09, Wrocław, Poland.
Hersh, M.A. (2009b). The application of information and other technologies to improve the mobility of blind, visually impaired and deafblind people. Travel Health Informatics and Telehealth, Selected Papers from EFMI Special Topic Conference, Antalya, Turkey, G. Mihalaş et al. (eds.), 11-24, Victor Babes University Publishing House.
Hersh, M.A. (2016). Travel and information processing by blind people: a new three-component model. Biomedical Engineering, University of Glasgow Report, http://web.eng.gla.ac.uk/assistive/pages/publications.php
Hoyle, B. and Dodds, S. (2006). The UltraCane® Mobility Aid at Work: Training Programmes to Case Studies. CVHI, Kufstein, Austria.
Kammoun, S., Dramas, F., Oriola, B. and Jouffrais, C. (2010). Route selection algorithm for blind pedestrians. International Conference on Control Automation and Systems (ICCAS), 2223-2228.
Kane, S.K., Jayant, C., Wobbrock, J.O. and Ladner, R.E. (2009). Freedom to roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities. Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, 115-122, ACM.
Koch, O. and Teller, S. (2008). A Vision-based Navigation Assistant. ECCV Workshop on Computer Vision Applications for the Visually Impaired, Marseille, France.
Kumar, A., Patra, R., Manjunatha, M., Mukhopadhyay, J. and Majumdar, A.K. (2011). An electronic travel aid for navigation of visually impaired persons. Third International Conference on Communication Systems and Networks (COMSNETS), 1-5.
Lynch, K. (1960). The Image of the City. MIT Press, Cambridge, Mass.
Maidenbaum, S., Levy-Tzedek, S., Chebat, D.R., Namer-Furstenberg, R. and Amedi, A. (2014). The Effect of Extended Sensory Range via the EyeCane Sensory Substitution Device on the Characteristics of Visionless Virtual Navigation. Multisensory Research, 27, 379-397.
Mandler, J.M. (1983). Representation. In Mussen, P. (ed.), Handbook of Child Psychology, Vol. III (4th ed.), Wiley, 420-494.
Millar, S. (1988). Models of sensory deprivation: the nature nurture dichotomy and spatial representation in the blind. International Journal of Behavioural Development, 11(1), 69-87.
Millar, S. (1994). Understanding and Representing Space: Theory and Evidence from Studies with Blind and Sighted Children. Clarendon Press, Oxford.
Millar, S. (1995). Understanding and Representing Spatial Information. British Journal of Visual Impairment, 13(1), 8-11.
Pissaloux, E., Maingreaud, F., Fontaine, E. and Velazquez, R. (2006). Towards space concept integration in navigation tools. ENACTIVE'2006, 3rd International Conference on Enactive Interfaces, 20-21 Nov, Montpellier, France.
Pissaloux, E. (2013). Visually impaired mobility and ICT supports. IEEE Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA). ISSN: 2326-0262.
Pissaloux, E. and Velázquez, R. (2016). Cognitive Model of Human Mobility. In Pissaloux, E. and Velázquez, R. (eds.), Mobility in Visually Impaired People - Fundamentals and ICT Assistive Technologies, Springer (to appear).
Siegel, A.W. (1981). The externalization of cognitive maps by children and adults: in search of ways to ask better questions. In Liben, L.S., Patterson, A.H. and Newcombe, N. (eds.), Spatial Representation and Behaviour Across the Life Span: Theory and Application, Academic, 167-194.
Terlau, T. and Penrod, W.M. (2008). 'K' Sonar Curriculum Handbook. American Printing House for the Blind, Inc.
Thinus-Blanc, C. and Gaunet, F. (1997). Representation of space in blind persons: Vision as a spatial sense? Psychological Bulletin, 121, 20-42.
Thornbury, J.M. and Mistretta, C.M. (1981). Tactile sensitivity as a function of age. Journal of Gerontology, 36(1), 34-39.
Tversky, B. (1993). Cognitive maps, cognitive collages, and spatial mental models. In Frank, A.U. and Campari, I. (eds.), Spatial Information Theory: A Theoretical Basis for GIS, 14-24, Springer-Verlag.
Tversky, B. (2001). Spatial schemas in depictions. In Gattis, M. (ed.), Spatial Schemas and Abstract Thought, MIT Press.
Tversky, B. (2005). Functional significance of visuospatial representations. In Shah, P. and Miyake, A. (eds.), Handbook of Higher-Level Visuospatial Thinking, Cambridge University Press.
Ungar, S. (2000). Cognitive mapping without visual experience. In Kitchin, R. and Freundschuh, S. (eds.), Cognitive Mapping: Past, Present, and Future, 4, 221.
Velázquez, R., Pissaloux, E.E., Hafez, M. and Szewczyk, J. (2008). Tactile Rendering with Shape Memory Alloy Pin-Matrix. IEEE Transactions on Instrumentation and Measurement, 57(5), 1051-1057.
Yusro, M., Hou, K.M., Pissaloux, E., Shi, H.L., Ramli, K. and Sudiana, D. (2013). SEES: Concept and Design of a Smart Environment Explorer Stick. IEEE HSI 2013.