Evaluation methods for technologies for visually impaired people
These are the articles retrieved for a systematic literature review. The review stemmed from the question: how should we evaluate technologies made for visually impaired users? We searched Scopus with the following query: TITLE-ABS-KEY(blind OR "deaf blind" OR deafblind OR deaf-blind OR "eye disorders" OR "partially sighted" OR "vision disorder*" OR "visual disabilit*" OR blindness OR "visual impairment" OR "visually impaired" OR "partial vision") AND TITLE-ABS-KEY("assistive technolog*" OR "instructional technolog*" OR "assistive device*" OR "communication device*" OR "mobility device*" OR "interactive *") AND TITLE-ABS-KEY(evaluat* OR assess* OR "user stud*" OR usability OR "user trial*") AND ( LIMIT-TO ( SUBJAREA,"COMP" ) ). We are in the process of triaging them. If an article of yours is not listed here, please add it and indicate who added it and when! Contact for this study: e.t.brule@sussex.ac.uk ; marcos.serrano@irit.fr
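For contributors who want to re-run or update this retrieval programmatically, a minimal sketch against the Elsevier Scopus Search API follows. It is an illustration under stated assumptions, not part of the review protocol: the API key is a placeholder, pagination is omitted, and the web interface's LIMIT-TO(SUBJAREA,"COMP") refinement is rewritten as the SUBJAREA(COMP) field, which is how the same restriction is usually expressed in API queries.

```python
# Sketch: re-running the review's Scopus query via the Scopus Search API.
# Assumptions: a valid API key from https://dev.elsevier.com ("YOUR_API_KEY"
# is a placeholder) and SUBJAREA(COMP) as the API-side form of the
# LIMIT-TO(SUBJAREA,"COMP") refinement used in the web interface.
import requests

QUERY = (
    'TITLE-ABS-KEY(blind OR "deaf blind" OR deafblind OR deaf-blind OR '
    '"eye disorders" OR "partially sighted" OR "vision disorder*" OR '
    '"visual disabilit*" OR blindness OR "visual impairment" OR '
    '"visually impaired" OR "partial vision") '
    'AND TITLE-ABS-KEY("assistive technolog*" OR "instructional technolog*" '
    'OR "assistive device*" OR "communication device*" OR "mobility device*" '
    'OR "interactive *") '
    'AND TITLE-ABS-KEY(evaluat* OR assess* OR "user stud*" OR usability OR '
    '"user trial*") '
    'AND SUBJAREA(COMP)'
)

resp = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    headers={"X-ELS-APIKey": "YOUR_API_KEY"},  # placeholder key
    params={"query": QUERY, "count": 25},      # first page only; no pagination
)
resp.raise_for_status()
for entry in resp.json()["search-results"]["entry"]:
    print(entry.get("dc:title"), "|", entry.get("prism:doi"))
```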
The sheet has the following columns (sheet row 2); the notes after each coding column are the triage instructions:
- Authors
- Author(s) ID
- Title
- Year
- Source title
- Added to the dataset by
- Inclusion: device or interaction technique explicitly designed for visually impaired people (only, or as part of the initial research question).
- Exclusion: visually impaired users are only cited as potential beneficiaries; or the paper is a workshop, abstract, poster, panel, editorial, lecture or demonstration; quality check: exclude if the evaluation method, the device, or the participants are not described.
- Object of evaluation: tool, technology, user characteristics, perceptual aspects, etc.
- Type of system under evaluation: interaction technique, integral system, low-fidelity prototype, etc.
- Type of evaluation: quantitative, qualitative, or mixed.
- Type of study: performance study, design session, online survey, etc.
- Experiment design: within- or between-subjects, number and type of independent variables, etc.
- Number of studies per paper.
- Evaluation length.
- Participants: who is considered a participant? (e.g. "potential users", professionals, parents...)
- Participants: number of participants in the study, or average across all studies.
- Participants: age and gender.
- Participants: description of visual impairments.
- Reflection or discussion on the evaluation?
- Limitations, difficulties, etc.
- DOI
- Abstract
- Author Keywords
- Document Type
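To help keep new and triaged entries consistent with the columns above, here is an illustrative sketch (our own mapping, not part of the original sheet) of one row as a typed record; every field name below is simply a rendering of the corresponding column header.

```python
# Sketch: one triage row of this sheet as a typed record. The field names are
# our own mapping of the column headers; None marks a not-yet-coded cell.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriageRow:
    authors: str
    author_ids: Optional[str]             # Scopus author IDs, semicolon-separated
    title: str
    year: int
    source_title: str
    added_by: str                         # "Scopus" or a contributor's name
    inclusion: Optional[str]              # e.g. "x" if the inclusion criterion holds
    exclusion_reason: Optional[str]       # why the paper was excluded, if it was
    object_of_evaluation: Optional[str]   # tool, technology, perceptual aspects...
    system_type: Optional[str]            # interaction technique, integral system...
    evaluation_type: Optional[str]        # quantitative, qualitative, mixed
    study_type: Optional[str]             # performance study, design session...
    experiment_design: Optional[str]      # within/between, independent variables
    n_studies: Optional[int]
    evaluation_length: Optional[str]
    participants_role: Optional[str]      # "potential users", professionals...
    n_participants: Optional[str]         # count, or average across studies
    age_and_gender: Optional[str]
    visual_impairment_description: Optional[str]
    reflection_on_evaluation: Optional[str]
    limitations: Optional[str]
    doi: Optional[str]
    abstract: Optional[str]
    author_keywords: Optional[str]
    document_type: Optional[str]
```

A contributor adding a missed study would fill in the bibliographic fields and "added_by", and leave the coding fields as None until the paper is triaged.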
3. Add here a study missed by the Scopus search. Added by: [Name of contributor] and [date].
4. David McGookin, Euan Robertson, and Stephen Brewster. Clutching at Straws: Using Tangible Interaction to Provide Non-Visual Access to Graphs. 2010. CHI 2010. Added by: Marcos.
5. Julie Ducasse, Marc Macé, Marcos Serrano and Christophe Jouffrais. Tangible Reels: Construction and Exploration of Tangible Maps by Visually Impaired Users. 2016. CHI 2016. Added by: Marcos.
Triage coding: Inclusion: X. Object of evaluation: tool. Type of system: integral system. Type of evaluation: quantitative. Type of study: performance. Experiment design: within-subjects; 2 independent variables for study 1, 1 independent variable for study 2. Number of studies: 2. Evaluation length: ?? Participants: potential users; 6 on average (4+8); age 46.5; 8 males, 4 females. Description of visual impairments: yes (legally blind, and description of age).
6. Sandra Bardot, Marcos Serrano and Christophe Jouffrais. From Tactile to Virtual: Using a Smartwatch to Improve Spatial Map Exploration for Visually Impaired Users. 2016. MobileHCI 2016. Added by: Marcos.
7. Sandra Bardot, Marcos Serrano, Bernard Oriola and Christophe Jouffrais. Identifying how Visually Impaired People Explore Raised-line Diagrams to Improve the Design of Touch Interfaces. 2017. CHI 2017. Added by: Marcos.
8. Sandra Bardot, Marcos Serrano, Simon Perrault, Shengdong Zhao and Christophe Jouffrais. Investigating Feedback for Two-Handed Exploration of Digital Maps without Vision. 2019. Interact '19 (Springer). Added by: Marcos.
9. Pagan J., Fallahzadeh R., Pedram M., Risco-Martin J.L., Moya J.M., Ayala J.L., Ghasemzadeh H. Toward Ultra-Low-Power Remote Health Monitoring: An Optimal and Adaptive Compressed Sensing Framework for Activity Recognition. 2019. IEEE Transactions on Mobile Computing. Added by: Scopus.
Author IDs: 56707039300; 56039899200; 56940135000; 6504443236; 7102437939; 7005129065; 24470943700
Triage: excluded, out of scope (no visually impaired users).
DOI: 10.1109/TMC.2018.2843373
Abstract: Activity recognition, as an important component of behavioral monitoring and intervention, has attracted enormous attention, especially in Mobile Cloud Computing (MCC) and Remote Health Monitoring (RHM) paradigms. While recently resource constrained wearable devices have been gaining popularity, their battery life is limited and constrained by the frequent wireless transmission of data to more computationally powerful back-ends. This paper proposes an ultra-low power activity recognition system using a novel adaptive compressed sensing technique that aims to minimize transmission costs. Coarse-grained on-body sensor localization and unsupervised clustering modules are devised to autonomously reconfigure the compressed sensing module for further power saving. We perform a thorough heuristic optimization using Grammatical Evolution (GE) to ensure minimal computation overhead of the proposed methodology. Our evaluation on a real-world dataset and a low power wearable sensing node demonstrates that our approach can reduce the energy consumption of the wireless data transmission up to 81.2 and 61.5 percent, with up to 60.6 and 35.0 percent overall power savings in comparison with baseline and naive state-of-the-art approaches, respectively. These solutions lead to an average activity recognition accuracy of 89.0 percent (only 4.8 percent less than the baseline accuracy) while having a negligible energy overhead of on-node computation. © 2002-2012 IEEE.
Keywords: activity recognition; adaptive; compressed sensing; energy efficiency; feature selection; optimization; ultra-low power
Document type: Article
10. Ahmed F., Mahmud M.S., Yeasin M. An Interactive Device for Ambient Awareness on Sidewalk for Visually Impaired. 2019. 2018 IEEE International Smart Cities Conference, ISC2 2018. Added by: Scopus. Inclusion: x.
Author IDs: 57196026885; 57195770993; 18039042000
DOI: 10.1109/ISC2.2018.8656966
Abstract: Ambient awareness on a sidewalk is critical for safe navigation, especially for people who are visually impaired. Awareness of obstacles such as debris, potholes, construction sites, and traffic movement patterns improves their mobility and independence. To address this problem, we implemented an interactive and portable sidewalk assistive device (IPSAD). The key idea is to use the power of deep learning to model the 'obstacles' on a sidewalk to provide personalized feedback to the user. We focus on transfer learning and fine tuning of pre-trained Convolutional Neural Network (CNN) models for real-time obstacle recognition that can be deployed in small form factor devices. This approach also accounts for issues related to execution time and energy efficiency. We empirically evaluate a number of state-of-the-art architectures to choose the best model based on fewer parameters and lower energy consumption. Finally, we built a fully integrated IPSAD prototype on Raspberry Pi3 (RPi3). An audio feedback scheme was implemented to accommodate user preferences and personalization. We perform a quantitative evaluation of the prototype system based on the accuracy of the model on the Ambient Awareness on Sidewalk (AS) dataset, on-field system performance, data communication, response time (capturing images, recognition of obstacles and feedback), and failure points. Empirical evaluation showed that the accuracy of the model is 87% on the AS dataset and 78.75% on-field. The system can operate in either standalone mode (when there is no Internet) or cloud mode. In addition, the system can use human intelligence through crowdsourcing and caregiver(s) to assist users when the automated system fails. © 2018 IEEE.
Document type: Conference Paper
11. do Carmo Nogueira T., Ferreira D.J., de Carvalho S.T., de Oliveira Berretta L., Guntijo M.R. Comparing sighted and blind users task performance in responsive and non-responsive web design. 2019. Knowledge and Information Systems. Added by: Scopus. Inclusion: x.
Author IDs: 57201476166; 20433664900; 56459776000; 57201488010; 57201488221
DOI: 10.1007/s10115-018-1188-8
Abstract: This article highlights the necessity to go beyond accessibility guidelines in interactive web design, and more specifically in new web trends, as well as the importance of investigations concerning universal web design usability for blind users. In this work, we compared the task performance of blind and sighted users in responsive and non-responsive web design. The results comparing task performance in responsive and non-responsive Web sites indicate that when responsive web design involves deep structures with fewer choices and more levels, it might be considered a design strategy that does not work well for both sighted and blind user populations, showing that a reasonable criterion that should be considered is to design Web sites with good usability for blind as well as sighted users. © 2018, Springer-Verlag London Ltd., part of Springer Nature.
Keywords: Accessibility; Blind users; Responsive design; Usability
Document type: Article
12. Kandhari M.S., Zulkernine F., Isah H. A Voice Controlled E-Commerce Web Application. 2019. 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference, IEMCON 2018. Added by: Scopus. Inclusion: x.
Author IDs: 57206889753; 57206902284; 56636645700
DOI: 10.1109/IEMCON.2018.8614771
Abstract: Automatic voice-controlled systems have changed the way humans interact with a computer. Voice or speech recognition systems allow a user to make a hands-free request to the computer, which in turn processes the request and serves the user with appropriate responses. After years of research and developments in machine learning and artificial intelligence, today voice-controlled technologies have become more efficient and are widely applied in many domains to enable and improve human-to-human and human-to-computer interactions. The state-of-the-art e-commerce applications with the help of web technologies offer interactive and user-friendly interfaces. However, there are some instances where people, especially those with visual disabilities, are not able to fully experience the serviceability of such applications. A voice-controlled system embedded in a web application can enhance user experience and can provide voice as a means to control the functionality of e-commerce websites. In this paper, we propose a taxonomy of speech recognition systems (SRS) and present a voice-controlled commodity purchase e-commerce application using IBM Watson speech-to-text to demonstrate its usability. The prototype can be extended to other application scenarios such as government service kiosks, and enables analytics of the converted text data for scenarios such as medical diagnosis at clinics. © 2018 IEEE.
Keywords: Amazon Alexa; Google API; IBM Watson; Recognition Rate; Speech Recognition Systems; Speech-to-Text; Text-to-Speech; Voice Recognition Systems; Word Error Rate
Document type: Conference Paper
13. Aqle A., Al-Thani D., Jaoua A. Conceptual Interactive Search Engine Interface for Visually Impaired Web Users. 2019. Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA. Added by: Scopus.
Author IDs: 57090857000; 56444337800; 7004059214
DOI: 10.1109/AICCSA.2018.8612874
Abstract: The Internet is the main source of information nowadays. Consequently, end users need to be knowledgeable about how to use search engines in order to locate relevant information in a reasonable time with minimal effort. On the other hand, search engines must provide different and alternative ways to represent the search results to facilitate user access to information, especially for visually impaired (VI) users. Our research aim is to produce a new representational model for search engine results targeting VI users. The result of this study will be a functional prototype that summarizes the search results as main ideas that are identified as concepts. Formal Concept Analysis (FCA) defines a concept as the maximum number of objects that are sharing the maximum number of features or attributes. Concepts are discovered by analyzing data patterns for the text of the study. The outcome of the first step of summarization concepts as keywords is used to minimize the number of listed websites and URLs that match the user selection of the multi-level tree of concepts. This scenario of summarization can give the user different directions for the shortest path to reach the target information with the minimum amount of time and effort required. The purpose of these directions can be either to proceed with reading the whole document in detail, or to continue the search for finding other related documents that match the user's inquiry. Experiments run on an iterative testing basis until VI users find proper results that satisfy their needs for the search context. User observations and interpretations based on the experiments are used for the user evaluation. This study will guide us in designing a new model for summarizing search results based on the FCA algorithm for VI end users, with a new representation interface based on the discovered concepts' weights. © 2018 IEEE.
Keywords: information seeking; search engine interface; search results representation; text summarization; visually impaired users
Document type: Conference Paper
14. Lopez R.M., Pinder S.D., Claire Davies T. Co-designing: Working with braille users in the design of a device to teach braille. 2019. Advances in Intelligent Systems and Computing. Added by: Scopus.
Author IDs: 57202835380; 57204318615; 57016883200
DOI: 10.1007/978-3-319-94947-5_78
Abstract: The objective of this research was to develop a “paper-based” prototype design of a device that will eventually be developed to teach braille, in a co-design process that involved end-user input throughout. Questionnaires aimed to explore the use of assistive technologies to help learn or teach braille. Features of existing assistive technologies were identified by the participants. Taking these features into consideration, seven conceptual design solutions were developed by six designers. A weighted evaluation matrix (WEM) ranked potential designs. A weighting for each design feature was calculated using the frequency of that feature. The responses from each participant group were weighted equally. Two semi-structured interviews were conducted with braille teachers. The design preferred by both teachers was ranked fifth in the weighted evaluation matrix. Designs that were ranked poorly according to the WEM were actually ranked highly by the end-users. The co-design process was essential in identifying these differences. © 2019, Springer International Publishing AG, part of Springer Nature.
Keywords: Braille device; Co-design; Visual impairment
Document type: Conference Paper
15. Palani H.P., Tennison J.L., Giudice G.B., Giudice N.A. Touchscreen-based haptic information access for assisting blind and visually-impaired users: Perceptual parameters and design guidelines. 2019. Advances in Intelligent Systems and Computing. Added by: Scopus.
Author IDs: 55497109200; 57190182588; 57202831490; 15724751900
DOI: 10.1007/978-3-319-94947-5_82
Abstract: Touchscreen-based smart devices, such as smartphones and tablets, offer great promise for providing blind and visually-impaired (BVI) users with a means for accessing graphics non-visually. However, they also offer novel challenges as they were primarily developed for use as a visual interface. This paper studies key usability parameters governing accurate rendering of haptically-perceivable graphical materials. Three psychophysically-motivated usability studies, incorporating 46 BVI participants, were conducted that identified three key parameters for accurate rendering of vibrotactile lines. Results suggested that the best performance and greatest perceptual salience is obtained with vibrotactile feedback based on: (1) a minimum width of 1 mm for detecting lines, (2) a minimum gap of 4 mm for discriminating lines rendered parallel to each other, and (3) a minimum angular separation (i.e., cord length) of 4 mm for discriminating oriented lines. Findings provide foundational guidelines for converting/rendering visual graphical materials on touchscreen-based interfaces for supporting haptic/vibrotactile information access. © 2019, Springer International Publishing AG, part of Springer Nature.
Keywords: Assistive technology; Design guidelines; Haptic information access; Haptic interaction; Multimodal interface
Document type: Conference Paper
16. Khan A., Khusro S. Blind-friendly user interfaces – a pilot study on improving the accessibility of touchscreen interfaces. 2019. Multimedia Tools and Applications. Added by: Scopus.
Author IDs: 55932784700; 35302658300
DOI: 10.1007/s11042-018-7094-y
Abstract: Touchscreen devices such as smartphones, smartwatches, and tablets are essential assistive devices for visually impaired and blind people in performing activities of daily living. Vision-alternative accessibility services such as screen readers, multimodal interactions, vibro-tactile and haptic feedback, and gestures help blind people operate touchscreen interfaces. Part of the usability problem with today's touchscreen user interfaces is a trade-off in discoverability, navigational complexity, cognitive overload, layout persistency, a cumbersome input mechanism, accessibility, and cross-device interactions. One solution to these problems is to design an accessibility-inclusive, blind-friendly user interface framework for performing common activities on a smartphone. This framework re-organizes/re-generates the interface components into a simplified blind-friendly user interface based on the user profile and contextual recommendations. The paper reports an improvement in the user experience of blind people in performing activities on a smartphone. Forty-one blind people participated in this empirical study, which showed improved user and interaction experience in operating a smartphone. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
Keywords: Accessibility; Blind-friendly; HCI; UX; Usability; User interfaces
Document type: Article in Press
17. Spinczyk D., Maćkowski M., Kempa W., Rojewska K. Factors influencing the process of learning mathematics among visually impaired and blind people. 2019. Computers in Biology and Medicine. Added by: Scopus.
Author IDs: 14523511800; 6602140149; 6507209963; 51562617200
DOI: 10.1016/j.compbiomed.2018.10.025
Abstract: Effective instruction and comprehension of mathematics are important for achieving academic and professional success but are especially difficult for visually impaired individuals because of the inherent difficulty in managing structural information included in math formulae. An evaluation of an alternative for computer-aided math instruction and comprehension among visually impaired students was developed, and the evaluation included seven detailed categories of factors: behavioral, emotional, cognitive, social, distracting, motivational, and modeling factors. Then, the proposed method was used to compare the alternative teaching method, including problem decomposition and vector knowledge, to the classical teaching method with a teacher. The assessment of the impact of the developed approach on improving the process of teaching mathematics in a group of blind and visually impaired students was carried out by the completion of a questionnaire prepared by a psychologist. The alternative teaching method achieved significantly better results in six of the seven proposed assessment categories. These experiments extend the knowledge base on the limitations and challenges associated with teaching and learning mathematics among blind people. © 2018 Elsevier Ltd
Keywords: Alternative math presentation; Distance learning; Interactive multimedia math presentation; Math tutoring platform
Document type: Article
18. [No author name available]. AHFE International Conference on Design for Inclusion, 2018. 2019. Advances in Intelligent Systems and Computing. Added by: Scopus.
Abstract: The proceedings contain 35 papers. The special focus in this conference is on Design for Inclusion. The topics include: Playgrounds for All: Practical strategies and guidelines for designing inclusive play areas for children; Preprocessing the structural optimization of the SPELTRA robotic assistant by numerical simulation based on finite elements; device design for the learning processes of children with cerebral palsy; design for the sensitive experience: Inclusive design in historical-archaeological contexts; inclusive, active and adaptive design as approaches to user-centered design; the influence of adding vibrations on the impression of messaging on smartphones; inclusive design of open spaces for visually impaired persons: A comparative study of Beijing and Hong Kong; lighting in the workplace: Recommended illuminance (lux) at workplace environs; the inclusion of children with total visual impairment in learning activities of daily living, especially the act of eating independently; user perceptions of haptic fidgets on mobile devices for attention and task performance; inclusive responsiveness – Why responsive web design is not enough and what we can do about this; evaluating accessibility and usability of an experimental situational awareness room; design guidelines for adaptable videos and video players on the web; social inclusion and territorial enhancement: A project of tourism interactive information system for bike users; understanding the experience of teenagers as bus passengers for the design of a more inclusive bus service; inclusive smart parking: Usability analysis of digital parking meter for younger and older users; design of a low-cost wheelchair for open-source platform: First phase.
Document type: Conference Review
19. Andrade R., Baker S., Waycott J., Vetere F. Echo-house: Exploring a virtual environment by using echolocation. 2018. ACM International Conference Proceeding Series. Added by: Scopus.
Author IDs: 57193239858; 56255744000; 6603125468; 10039716000
DOI: 10.1145/3292147.3292163
Abstract: The graphics-intensive nature of most virtual environments (VEs) prevents many people with visual impairment from being able to successfully explore them. A percentage of the population of people with visual impairment are known to use echolocation (sound waves and their reflections) to better explore their surroundings. In this paper, we describe the development of an echolocation-enabled VE (Echo-House) and evaluate the feasibility of using echolocation as a novel technique to explore this environment. Results showed that echolocation gave participants an improved sense of space in the VE. However, the evaluation also identified a range of orientation and mobility issues and found that participants needed additional support to gain confidence in their use of echolocation in the VE. Our findings suggest that with proper support, echolocation has the potential to improve access to VEs for people who are blind or visually impaired by revealing features that would be otherwise inaccessible. © 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Keywords: Accessibility; Echolocation; Exploration; Games; Virtual environments; Visual impairment
Document type: Conference Paper
20. Siebra C., Correia W., Penha M., Macedo J., Quintino J., Anjos M., Florentin F., Da Silva F.Q.B., Santos A.L.M. Virtual assistants for mobile interaction: A review from the accessibility perspective. 2018. ACM International Conference Proceeding Series. Added by: Scopus.
Author IDs: 36979783900; 55189755200; 55804354100; 57143980800; 56206323300; 57144027800; 57144563500; 35229324700; 8832605700
DOI: 10.1145/3292147.3292232
Abstract: The technology of virtual assistants (VAs) is a powerful option to support the interaction of human users with computational systems. These VAs are able, for example, to identify interaction problems and offer recommendations on the execution of commands. This work analyses the use of VAs as an accessibility resource for mobile devices. This analysis was carried out by means of a literature review, which considered both academic studies and commercial solutions. The results showed that there is very little research in this area, and this fact motivated the development of an evaluation protocol, and a related set of test cases, which can verify whether current VAs are in fact able to support the interaction of motor and visually impaired users with their mobile devices. © 2018 Copyright held by the owner/author(s).
Keywords: Accessibility; Mobile application; Virtual assistants
Document type: Conference Paper
21. Hersh M.A., García Ramírez A.R. Evaluation of the electronic long cane: improving mobility in urban environments. 2018. Behaviour and Information Technology. Added by: Scopus.
Author IDs: 7005937731; 55622245300
DOI: 10.1080/0144929X.2018.1490454
Abstract: A wide range of portable and wearable electronic travel aids have been developed to enable visually impaired people to move around public spaces without a sighted guide. However, few of them have gone beyond the prototype stage and the long cane and guide dog are still the main mobility aids. Despite the importance of evaluation to determine, for instance, effective functioning and end-user satisfaction, a standard approach has not yet been developed for mobility aids. The paper reports the evaluation of a low-cost electronic long cane, developed by the authors and colleagues in Brazil. It used a two-part methodology involving an experimental investigation of performance of the electronic long cane and a questionnaire to explore user satisfaction. The results of the experiments and questionnaire demonstrated both the cane's usefulness and the need for modifications to improve its functioning. This work is also important for the development of methodologies for effective evaluation, as this is the first evaluation of a mobility device developed and carried out in Brazil. In addition, it is one of only a small number of evaluations in real locations with real obstacles. Finally, a series of recommendations for evaluating mobility devices is presented. What this paper adds? A standard approach to evaluating electronic travel aids for visually impaired people has not yet been developed and the most appropriate approach may depend on the objectives of the evaluation. Existing approaches generally use participants with no previous experience of using the device being evaluated and are carried out indoors with artificial obstacles. The training or device familiarisation period usually provided might be insufficient for participants to obtain optimal device performance or for an effective comparison to be made of different devices. The approach to evaluating an electronic long cane reported in this paper has three main advantages over previous methods. The participants were experienced users of the electronic long cane who had been using it to support their daily mobility for at least a month. The evaluation was carried out in two different real urban environments with real obstacles. This has the advantages of being close to real-life cane use and participants being able to make informed comments and suggestions for improvements as a result of their experience. A questionnaire included questions on user satisfaction with, and evaluation of, a number of different cane features based on their experiences of cane use over a period. The work is also significant as the first detailed mobility device evaluation carried out in Brazil and in the presentation of a series of recommendations divided into themes for effective evaluation of mobility devices. © 2018, © 2018 Informa UK Limited, trading as Taylor & Francis Group.
Keywords: Assistive technology; blindness; electronic cane; mobility; user experience
Document type: Article
22. Jafri R., Khan M.M. User-centered design of a depth data based obstacle detection and avoidance system for the visually impaired. 2018. Human-centric Computing and Information Sciences. Added by: Scopus.
Author IDs: 19639942600; 56824667200
DOI: 10.1186/s13673-018-0134-9
Abstract: The development of a novel depth-data based real-time obstacle detection and avoidance application for visually impaired (VI) individuals, to assist them in navigating independently in indoor environments, is presented in this paper. The application utilizes a mainstream, computationally efficient mobile device as the development platform in order to create a solution which not only is aesthetically appealing, cost-effective, lightweight and portable but also provides real-time performance and freedom from network connectivity constraints. To alleviate usability problems, a user-centered design approach has been adopted wherein semi-structured interviews with VI individuals in the local context were conducted to understand their micro-navigation practices, challenges and needs. The invaluable insights gained from these interviews have not only informed the design of our system but would also benefit other researchers developing similar applications. The resulting system design along with a detailed description of its obstacle detection and unique multimodal feedback generation modules has been provided. We plan to iteratively develop and test the initial prototype of the system with the end users to resolve any usability issues and better adapt it to their needs. © 2018, The Author(s).
Keywords: Assistive technologies; Blindness; Google Project Tango; Navigation; Obstacle avoidance; Obstacle detection; Visual impairment
Document type: Article
23. Spiers A.J., Van Der Linden J., Wiseman S., Oshodi M. Testing a shape-changing haptic navigation device with vision-impaired and sighted audiences in an immersive theater setting. 2018. IEEE Transactions on Human-Machine Systems. Added by: Scopus.
Author IDs: 56940234600; 35485961600; 7005984434; 53364251200
DOI: 10.1109/THMS.2018.2868466
Abstract: Flatland was an immersive 'in-the-wild' experimental theater and technology project, undertaken with the goal of developing systems that could assist 'real-world' pedestrian navigation for both vision-impaired (VI) and sighted individuals, while also exploring inclusive and equivalent cultural experiences for VI and sighted audiences. A novel shape-changing handheld haptic navigation device, the 'Animotus,' was developed. The device has the ability to modify its form in the user's grasp to communicate heading and proximity to navigational targets. Flatland provided a unique opportunity to comparatively study the use of novel navigation devices with a large group of individuals (79 sighted, 15 VI) who were primarily attending a theater production rather than an experimental study. In this paper, we present our findings on comparing the navigation performance (measured in terms of efficiency, average pace, and time facing targets) and opinions of VI and sighted users of the Animotus as they negotiated the 112 m² production environment. Differences in navigation performance were nonsignificant across VI and sighted individuals and a similar range of opinions on device function and engagement spanned both groups. We believe more structured device familiarization, particularly for VI users, could improve performance and correct technology expectations (such as obstacle avoidance capability), which influenced overall opinion. This paper is intended to aid the development of future inclusive technologies and cultural experiences. © 2018 IEEE.
Keywords: Assistive technology; blindness; haptics technology; human factors and ergonomics; navigation; system design and analysis; user interfaces
Document type: Article
24. Vitiello G., Sebillo M., Fornaro L., Di Gregorio M., Cirillo S., De Rosa M., Fuccella V., Costagliola G. Do you like my outfit? Cromnia, a mobile assistant for blind users. 2018. ACM International Conference Proceeding Series. Added by: Scopus.
Author IDs: 7004003647; 8393057100; 57205680542; 57204505496; 57205474260; 37053751200; 9640308300; 7003667801
DOI: 10.1145/3284869.3284908
Abstract: The community of visually impaired people has been very active during the last decades, with initiatives devoted to raising awareness about their specific needs in society and encouraging the adoption of innovative assistive solutions for their personal empowerment. A contextual inquiry conducted in Europe revealed that some of the major concerns of those people relate to independence of living. According to recent studies, most existing remote control home automation systems miss certain specific usability and accessibility features which could address specific blind users' needs. In the present paper we present an assistive mobile app, designed to allow autonomy of blind/visually impaired users in the everyday activity of getting dressed. The user testing activities conducted so far are described and the derived results are discussed. © 2018 Association for Computing Machinery.
Keywords: Assistive technology; Universal Design; Usability requirements; Visual Impairment
Document type: Conference Paper
25. Cho H., Park K., Choi S. Equal-Level Interaction: A Case Study for Improving User Experiences of Visually-Impaired and Sighted People in Group Activities. 2018. HAVE 2018 - IEEE International Symposium on Haptic, Audio-Visual Environments and Games, Proceedings. Added by: Scopus.
Author IDs: 57205432110; 56419061800; 57207282214
DOI: 10.1109/HAVE.2018.8547502
Abstract: Assistive technologies improve the independence of visually-impaired users for various tasks. However, using them in group activities among visually-impaired and sighted users is apt to encourage sighted users to behave as supervisors but to induce visually-impaired users to be followers. This undermines the overall user experiences of both user groups, sometimes significantly. We address this issue by conceptualizing equal-level interaction, in which all users have the same roles and provide similar levels of contributions to the group activity. We have explored the proposed concept by designing a multiplayer card game running on a smartphone and conducting a user study with four groups of visually-impaired and sighted users. Results demonstrated positive support for realizing equal-level interaction. We hope that this paper could encourage further research efforts into realizing truly equal user experiences in group activities among mixed-ability users. © 2018 IEEE.
Keywords: accessibility; equal-level interaction; group activity; sighted users; visually-impaired users
Document type: Conference Paper
26. da Silva C.F., Leal Ferreira S.B., Sacramento C. Mobile application accessibility in the context of visually impaired users. 2018. ACM International Conference Proceeding Series. Added by: Scopus.
Author IDs: 57194690088; 55804617600; 56031793000
DOI: 10.1145/3274192.3274224
Abstract: The use of smartphones as a technological resource not only provides access to information and services but has also improved the quality of life of people with visual disabilities, allowing them to be more independent in daily activities. The objective of this research was to characterize the problems that visually impaired users encounter when using mobile applications and relate them to the Web Content Accessibility Guidelines (WCAG) 2.0, in order to identify possible gaps that make the guidelines insufficient to solve all users' difficulties. Mercado Livre was the application selected as object of study and five sessions of accessibility evaluation involving visually impaired users were carried out. The results demonstrated that many problems experienced by users could be related to violations of accessibility guidelines of the WCAG 2.0 as long as they were adapted to the context of mobile applications. Considering the importance of accessible mobile applications, we observed the need to extend studies that support developers and designers in achieving better accessibility in user interface design for users of assistive technologies, such as screen readers. © 2018 Association for Computing Machinery.
Keywords: Accessibility; Mobile applications; Visual impairment
Document type: Conference Paper
27. Schneider O., Shigeyama J., Kovacs R., Roumen T.J., Marwecki S., Boeckhoff N., Gloeckner D.A., Bounama J., Baudisch P. DualPanto: A haptic device that enables blind users to continuously interact with virtual worlds. 2018. UIST 2018 - Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology. Added by: Scopus.
Author IDs: 56414254100; 57200202565; 57014726900; 57015474600; 57200301898; 57204731700; 57204722654; 57204735607; 10039576700
DOI: 10.1145/3242587.3242604
Abstract: We present a new haptic device that enables blind users to continuously track the absolute position of moving objects in spatial virtual environments, as is the case in sports or shooter games. Users interact with DualPanto by operating the 'me' handle with one hand and by holding on to the 'it' handle with the other hand. Each handle is connected to a pantograph haptic input/output device. The key feature is that the two handles are spatially registered with respect to each other. When guiding their avatar through a virtual world using the 'me' handle, spatial registration enables users to track moving objects by having the device guide the output hand. This allows blind players of a 1-on-1 soccer game to race for the ball or evade an opponent; it allows blind players of a shooter game to aim at an opponent and dodge shots. In our user study, blind participants reported very high enjoyment when using the device to play (6.5/7). © 2018 Association for Computing Machinery.
Keywords: Accessibility; Blind; Force-feedback; Gaming; Haptics; Visually impaired
Document type: Conference Paper
28. Guo A., McVea S., Wang X., Clary P., Goldman K., Li Y., Zhong Y., Bigham J.P. Investigating cursor-based interactions to support non-visual exploration in the real world. 2018. ASSETS 2018 - Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. Added by: Scopus.
Author IDs: 56789541900; 57204736103; 57190019684; 57205735961; 57207502759; 55929450800; 55620320400; 16238221500
DOI: 10.1145/3234695.3236339
Abstract: The human visual system processes complex scenes to focus attention on relevant items. However, blind people cannot visually skim for an area of interest. Instead, they use a combination of contextual information, knowledge of the spatial layout of their environment, and interactive scanning to find and attend to specific items. In this paper, we define and compare three cursor-based interactions to help blind people attend to items in a complex visual scene: window cursor (move their phone to scan), finger cursor (point their finger to read), and touch cursor (drag their finger on the touchscreen to explore). We conducted a user study with 12 participants to evaluate the three techniques on four tasks, and found that: window cursor worked well for locating objects on large surfaces, finger cursor worked well for accessing control panels, and touch cursor worked well for helping users understand spatial layouts. A combination of multiple techniques will likely be best for supporting a variety of everyday tasks for blind users. © 2018 Copyright is held by the author/owner(s).
Keywords: Accessibility; Blind; Computer vision; Cursor; Interaction; Mobile devices; Non-visual exploration; Visually impaired
Document type: Conference Paper
29. Tomlinson B.J., Kaini P., Zhou S., Smith T.L., Moore E.B., Walker B.N. Design and evaluation of a multimodal science simulation. 2018. ASSETS 2018 - Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. Added by: Scopus.
Author IDs: 56414307800; 55788980500; 57204719801; 57190286942; 55426014900; 7402208804
DOI: 10.1145/3234695.3241009
Abstract: We present a multimodal science simulation, including visual and auditory (descriptions, sound effects, and sonifications) display. The design of each modality is described, as well as evaluation with learners with and without visual impairments. We conclude with challenges and opportunities at the intersection of multiple modalities. © 2018 Copyright is held by the owner/author(s).
Keywords: Evaluation; Interactive simulation; Learning; Multimodal; Visual impairment
Document type: Conference Paper
30. Rodrigues A., Camacho L., Nicolau H., Montague K., Guerreiro T. AIDME: Interactive non-visual smartphone tutorials. 2018. MobileHCI 2018 - Beyond Mobile: The Next 20 Years - 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, Conference Proceedings Adjunct. Added by: Scopus.
Author IDs: 56577139300; 57204703815; 34881822600; 36701846300; 23396568900
DOI: 10.1145/3236112.3236141
Abstract: The constant barrage of updates and novel applications to explore creates a ceaseless cycle of new layouts and interaction methods that we must adapt to. One way to address these challenges is through in-context interactive tutorials. Most applications provide onboarding tutorials using visual metaphors to guide the user through the core features available. However, these tutorials are limited in their scope and are often inaccessible to blind people. In this paper, we present AidMe, a system for system-wide authoring and playthrough of non-visual interactive tutorials. Tutorials are created via user demonstration and narration. Using AidMe, in a user study with 11 blind participants, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible interactive non-visual tutorials. © 2018 Copyright is held by the owner/author(s).
Keywords: Accessibility; Assistance; Blind; Smartphone; Tutorials
Document type: Conference Paper
31. Nel E., Kristensson P.O., MacKay D. Ticker: An Adaptive Single-Switch Text Entry Method for Visually Impaired Users. 2018. IEEE Transactions on Pattern Analysis and Machine Intelligence. Added by: Scopus.
Author IDs: 57203461836; 6507412583; 7402243706
DOI: 10.1109/TPAMI.2018.2865897
Abstract: Ticker is a novel probabilistic stereophonic single-switch text entry method for visually-impaired users with motor disabilities who rely on single-switch scanning systems to communicate. Ticker models and tolerates a wide variety of noise, which is inevitably introduced in practical use of single-switch systems. Efficacy evaluation consists of performance modelling and three user studies. © IEEE.
Keywords: accessibility; augmentative and alternative communication; Bayesian inference; single-switch systems
Document type: Article in Press
32. Osinski D., Hjelme D.R. A sensory substitution device inspired by the human visual system. 2018. Proceedings - 2018 11th International Conference on Human System Interaction, HSI 2018. Added by: Scopus.
Author IDs: 56769866100; 7003553187
DOI: 10.1109/HSI.2018.8431078
Abstract: The purpose of this paper is to introduce and initially evaluate an experimental sensory substitution device (SSD), which converts color to audible sound. Our system, called Colorophone, provides continuous information about color, light intensity and distance. The color sonification method is inspired by the human visual system. While designing the sonification method we took into consideration the mismatch in sensory bandwidth, possible sensory overload, the nature of sensory spatiotemporal continuity, and cross-modal correspondences between colors and sounds. We conducted a preliminary experimental evaluation of color and object recognition abilities as well as orientation ability. We show that blindfolded participants can easily acquire the color information coded in the auditory stimuli generated by the Colorophone system after a short learning session. Our preliminary experiments indicate that sonified color information helps in object identification and enhances orientation ability. The Colorophone proved to be an easy-to-understand and intuitive visual-to-auditory coding system, which shows promising results for future visual rehabilitation of the blind. © 2018 IEEE.
Keywords: Assistive devices; blindness; image sonification; sensory aids; sensory substitution
Document type: Conference Paper
33. Rodríguez A., Boada I., Sbert M. An Arduino-based device for visually impaired people to play videogames. 2018. Multimedia Tools and Applications. Added by: Scopus.
Author IDs: 57199131239; 55911019800; 8427067100
DOI: 10.1007/s11042-017-5415-1
Abstract: Blind players have many difficulties accessing video games, since most of them rely on impressive graphics and immersive visual experiences. To overcome this limitation, we propose a device designed for visually impaired people to interact with the virtual scenes of video games. The device has been designed considering usability, economic cost, and adaptability as main features. To ensure usability, we considered the white cane paradigm, since this is the most used device by the blind community. Our device supports left-to-right movements and collision detection as well as actions to manipulate scene objects such as drag and drop. To enhance realism, it also integrates a library with sounds of different materials to reproduce object collisions. To reduce the economic cost, we used Arduino as the basis of our development. Finally, to ensure adaptability, we created an application programming interface that supports the connection with different game engines and different scenarios. To test the acceptance of the device, 12 blind participants were considered (6 males and 6 females). In addition, we created three mini-games in Unity3D that require navigation and walking as principal actions. After playing, participants filled in a questionnaire related to usability and suitability to interact with games, among others. The device scored well on all features, without distinction by player gender or by being blind from birth. The relationship between device responsiveness and user interaction was considered satisfactory. Despite our small test sample, our main goal has been accomplished: the proposed device prototype seems to be useful to visually impaired people. © 2017, Springer Science+Business Media, LLC, part of Springer Nature.
Keywords: Interaction devices; Video games; Visually impaired
Document type: Article
34. Ducasse J., Macé M., Oriola B., Jouffrais C. BotMap: Non-visual panning and zooming with an actuated tabletop tangible interface. 2018. ACM Transactions on Computer-Human Interaction. Added by: Scopus.
Author IDs: 57147386900; 7005499567; 26422444900; 7801364165
DOI: 10.1145/3204460
Abstract: The development of novel shape-changing or actuated tabletop tangible interfaces opens new perspectives for the design of physical and dynamic maps, especially for visually impaired (VI) users. Such maps would allow non-visual haptic exploration with advanced functions, such as panning and zooming. In this study, we designed an actuated tangible tabletop interface, called BotMap, allowing the exploration of geographic data through non-visual panning and zooming. In BotMap, small robots represent landmarks and move to their correct position whenever the map is refreshed. Users can interact with the robots to retrieve the names of the landmarks they represent. We designed two interfaces, named Keyboard and Sliders, which enable users to pan and zoom. Two evaluations were conducted with, respectively, ten blindfolded and eight VI participants. Results show that both interfaces were usable, with a slight advantage for the Keyboard interface in terms of navigation performance and map comprehension, and that, even when many panning and zooming operations were required, VI participants were able to understand the maps. Most participants managed to accurately reconstruct maps after exploration. Finally, we observed three VI people using the system and performing a classical task consisting of finding the most appropriate itinerary for a journey. © 2018 Association for Computing Machinery.
Keywords: Actuated interface; Interactive map; Non-visual interaction; Pan; Tactile map; Tangible interaction; Tangible user interface; Visual impairment; Zoom
Document type: Article
35. Myers N.D., Dietz S., Prilleltensky I., Prilleltensky O., McMahon A., Rubenstein C.L., Lee S. Efficacy of the fun for wellness online intervention to promote well-being actions: A secondary data analysis. 2018. Games for Health Journal. Added by: Scopus.
Author IDs: 7102086256; 24461005000; 7003433628; 6602346095; 56315814400; 56315464100; 57192256280
DOI: 10.1089/g4h.2017.0132
Abstract: Objective: Fun For Wellness (FFW) is a new online intervention designed to promote growth in well-being by providing capability-enhancing learning opportunities (e.g., play an interactive game) to participants. The purpose of this study was to provide an initial evaluation of the efficacy of the FFW intervention to increase well-being actions. Materials and Methods: The study design was a secondary data analysis of a large-scale prospective, double-blind, parallel-group randomized controlled trial. Data were collected at baseline and 30 and 60 days postbaseline. A total of 479 adult employees at a major university in the southeast of the United States of America were enrolled. Participants who were randomly assigned to the FFW group were provided with 30 days of 24-hour access to the intervention. A two-class linear regression model with complier average causal effect estimation was fitted to well-being actions scores at 30 and 60 days. Results: Intent-to-treat analysis provided evidence that the effect of being assigned to the FFW intervention, without considering actual participation in the FFW intervention, had a null effect on each dimension of well-being actions at 30 and 60 days. Participants who complied with the FFW intervention, however, had significantly higher well-being actions scores, compared to potential compliers in the Usual Care group, in the interpersonal dimension at 60 days, and the physical dimension at 30 days. Conclusions: Results from this secondary data analysis provide some supportive evidence for both the efficacy of and possible revisions to the FFW intervention in regard to promoting well-being actions. © 2018 Mary Ann Liebert, Inc.
Keywords: Complier average causal effect modeling; I COPPE actions scale; Intent to treat
Document type: Article
36. Jung J.-Y., Yang C.-M., Kim J.-J. Evaluation of gait characteristics of elderly people with visual impairment based on plantar pressure distribution analysis. 2018. Proceedings - 2017 European Conference on Electrical Engineering and Computer Science, EECS 2017. Added by: Scopus.
Author IDs: 55263200900; 57201129013; 22733954500
DOI: 10.1109/EECS.2017.31
Abstract: Generally, elderly people with visual impairment walk with a walking assistive device to increase mobility. Up to now, many types of functional assistive devices have been developed to provide a safe environment during walking. However, it is very important to understand the basic biomechanical characteristics of users when evaluating an improved walking assistive device. Therefore, the purpose of this study was to investigate the differences in the plantar pressure distribution of elderly people with visual impairment between walking without and with the white cane. The plantar pressure distributions of 10 subjects were divided into six regions: whole foot, forefoot, midfoot, rearfoot, medial foot and lateral foot. All measured data were analyzed by maximum force, peak pressure, mean pressure, and contact area. The results showed that the plantar pressure in the whole right foot increased significantly more than in the left when walking with the white cane. In addition, more asymmetrical postural balance patterns of elderly people with visual impairment were observed. These results suggest that the walking assistive device can continuously affect gait characteristics and postural balance during walking. © 2017 IEEE.
Keywords: Elderly people; Gait; Plantar pressure distribution; Visual impairment; Walking assistive device
Document type: Conference Paper
37. Mohamad M., Yahaya W.A.J.W., Wahid N.A. The preliminary study of a mobile health application for visually impaired individuals. 2018. ACM International Conference Proceeding Series. Added by: Scopus.
Author IDs: 57204666035; 36172600500; 57200417655
DOI: 10.1145/3206129.3268914
Abstract: Statistics from the Malaysia Social Welfare Department report that the majority of persons with visual impairment are adults. This group is more susceptible to the main chronic illnesses in Malaysia. Support ought to be provided through various platforms, and one potential medium is a mobile application, specifically known as mobile assistive technology. This paper reports a preliminary study which investigates the development of a mobile health application designed to disseminate information regarding major chronic illnesses in Malaysia: diabetes, high blood pressure, heart disease and kidney disease. The interface developed is compatible with the renowned screen readers JAWS and NVDA, which can support navigation for those with visual impairment. Two respondents were chosen for this preliminary study: one respondent who is totally blind and one respondent with low vision were interviewed. The evaluation revealed that this mobile application fulfilled the usability aspect; it was confirmed to be easy to use and supported with suitable multimedia elements. As there is no existing mobile health application supporting visually impaired audiences in the Malay language, this study could be considered a groundbreaking attempt to provide such a mechanism. The aim is that the mobile health application would have the potential to be used to support this group. © 2018 Association for Computing Machinery.
Keywords: Assistive technology; Malaysia; Mobile application; Mobile health application; Visually impaired
Document type: Conference Paper
38. [No author name available]. Proceedings of the 9th ACM Multimedia Systems Conference, MMSys 2018. 2018. Proceedings of the 9th ACM Multimedia Systems Conference, MMSys 2018. Added by: Scopus.
Abstract: The proceedings contain 69 papers. The topics discussed include: combining skeletal poses for 3D human model generation using multiple kinects; blind image quality assessment based on multiscale salient local binary patterns; favor: fine-grained video rate adaptation; watermarked video delivery: traffic reduction and CDN management; category-aware hierarchical caching for video-on-demand content on YouTube; dynamic input anomaly detection in interactive multimedia services; from theory to practice: improving bitrate adaptation in the DASH reference player; automated profiling of virtualized media processing using telemetry and machine learning; and mobile data offloading system for video streaming services over SDN-enabled wireless networks.
Document type: Conference Review
39. Prescher D., Bornschein J., Köhlmann W., Weber G. Touching graphical applications: bimanual tactile interaction on the HyperBraille pin-matrix display. 2018. Universal Access in the Information Society. Added by: Scopus.
Author IDs: 36342844900; 56266855800; 35102444600; 56395998400
DOI: 10.1007/s10209-017-0538-8
Abstract: Novel two-dimensional tactile displays enable blind users to not only get access to the textual but also to the graphical content of a graphical user interface. Due to the higher amount of information that can be presented in parallel, orientation and exploration can be more complex. In this paper we present the HyperBraille system, which consists of a pin-matrix device as well as a graphical screen reader providing the user with appropriate presentation and interaction possibilities. To allow for a detailed analysis of bimanual interaction strategies on a pin-matrix device, we conducted two user studies with a total of 12 blind people. The task was to fill in .pdf forms on the pin-matrix device by using different input methods, namely gestures, built-in hardware buttons as well as a conventional PC keyboard. The forms were presented in a semigraphic view type that not only contains Braille but also tactile widgets in a spatial arrangement. While completion time and error rate partly depended on the chosen input method, the usage of special reading strategies seemed to be independent of it. A direct comparison of the system and a conventional assistive technology (screen reader with single-line Braille device) showed that interaction on the pin-matrix device can be very efficient if the user is trained. The two-dimensional output can improve access to .pdf forms with insufficient accessibility, as the mapping of input controls and the corresponding labels can be supported by a spatial presentation. © 2017, Springer-Verlag Berlin Heidelberg.
Keywords: .pdf forms; Blind users; Gesture input; Key input; Planar tactile display; Screen reader
Document type: Article
40. Lin T.-C., Hou L., Liu H., Li Y., Truong T.-K. Reconstruction of Single Image from Multiple Blurry Measured Images. 2018. IEEE Transactions on Image Processing. Added by: Scopus.
Author IDs: 18134184200; 57201009333; 56044317800; 56031112000; 34769389300
DOI: 10.1109/TIP.2018.2811048
Abstract: The problem of blind image recovery using multiple blurry images of the same scene is addressed in this paper. To perform blind deconvolution, which is also called blind image recovery, the blur kernel and image are represented by groups of sparse domains to exploit the local and nonlocal information such that a novel joint deblurring approach is conceived. In the proposed approach, the group sparse regularization on both the blur kernel and image is provided, where the sparse solution is promoted by the ℓ1-norm. In addition, the reweighted data fidelity is developed to further improve the recovery performance, where the weight is determined by the estimation error. Moreover, to reduce the undesirable noise effects in group sparse representation, distance measures are studied in the block matching process to find similar patches. In such a joint deblurring approach, a more sophisticated two-step interactive process is needed in which each step is solved by means of the well-known split Bregman iteration algorithm, which is generally used to efficiently solve the proposed joint deblurring problem. Finally, numerical studies, including synthetic and real images, demonstrate that the performance of this joint estimation algorithm is superior to the previous state-of-the-art algorithms in terms of both objective and subjective evaluation standards. The recovery results of real captured images using unmanned aerial vehicles are also provided to further validate the effectiveness of the proposed method. © 2012 IEEE.
Keywords: group sparse; joint estimation; multiple image blind deblurring
Document type: Article
41
Ahmed F., Mahmud M.S., Al-Fahad R., Alam S., Yeasin M.57196026885;57195770993;57193643596;55437782100;18039042000;Image captioning for ambient awareness on a sidewalk2018Proceedings - 2018 1st International Conference on Data Intelligence and Security, ICDIS 2018Scopus10.1109/ICDIS.2018.00020Ambient awareness on a sidewalk is critical for safe navigation, especially for people who are blind or visually impaired. An affordable and interactive system is necessary for this purpose. In this paper, we present the outcome of an experiment with off-the-shelf image captioning systems. The design and implementation of a system embedded in an RPi3 is part of the experiment. The main components of the system include: a. generation of meaningful captions from images, b. implementation of a personalized feedback mechanism for efficient communication with minimal cognitive load, and c. an interactive user interface and energy-efficient integration that accounts for multiple configurations to be more inclusive. In particular, the performance of off-the-shelf image captioning systems (e.g., Microsoft Cognitive Service, Clarifai, Google Vision API, and IBM BlueMix) was compared to determine the best platform for meaningful caption generation. We implemented three different schemes, namely text-to-speech synthesis, haptics, and ring tone, to provide personalized feedback. The implemented system interface is energy efficient and interactive to provide ambient awareness. The objective evaluation of the fully integrated system was performed on the sidewalk. In particular, we focus on the accuracy of the captioning system and on usage analytics to provide helpful tips on the spot and to understand long-term system behavior. © 2018 IEEE.Ambient awareness; Image captioning; Visually impairedConference Paper
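As a rough illustration of the pipeline this record describes (caption an image, then speak the result), a minimal Python sketch follows. The captioning call is left as a placeholder since the paper compares several commercial services; pyttsx3 is used here only as an example text-to-speech library, not necessarily the one in the paper, and the file name is hypothetical.

import pyttsx3  # example offline TTS backend; the paper also used haptics and ring tones

def describe_image(image_bytes: bytes) -> str:
    """Placeholder for the chosen off-the-shelf captioning service call."""
    raise NotImplementedError("wire this to Microsoft/Clarifai/Google/IBM as evaluated in the paper")

def announce(caption: str) -> None:
    engine = pyttsx3.init()
    engine.say(caption)   # speak the generated caption to the user
    engine.runAndWait()

with open("sidewalk.jpg", "rb") as f:  # hypothetical frame from the RPi3 camera
    announce(describe_image(f.read()))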
42
Guerreiro J., Ohn-Bar E., Ahmetovic D., Kitani K., Asakawa C.56967056100;55511352100;53867884600;15835267300;6603028733;How context and user behavior affect indoor navigation assistance for blind people2018Proceedings of the 15th Web for All Conference : Internet of Accessible Things, W4A 2018Scopus10.1145/3192714.3192829Recent techniques for indoor localization are now able to support practical, accurate turn-by-turn navigation for people with visual impairments (PVI). Understanding user behavior as it relates to situational contexts can be used to improve the ability of the interface to adapt to problematic scenarios, and consequently reduce navigation errors. This work performs a fine-grained analysis of user behavior during indoor assisted navigation, outlining different scenarios where user behavior (either with a white-cane or a guide-dog) is likely to cause navigation errors. The scenarios include certain instructions (e.g., slight turns, approaching turns), cases of error recovery, and the surrounding environment (e.g., open spaces and landmarks). We discuss the findings and lessons learned from a real-world user study to guide future directions for the development of assistive navigation interfaces that consider the users' behavior and coping mechanisms. © 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.Assistive technologies; Blind navigation; Navigation strategies; People with visual impairments; User behaviorConference Paper
43
Albouys-Perrois J., Laviole J., Briant C., Brock A.M.57202047815;55144156600;57202043436;37013142800;Towards a multisensory augmented reality map for blind and low vision people: A participatory design approach2018Conference on Human Factors in Computing Systems - ProceedingsScopus10.1145/3173574.3174203Current low-tech Orientation & Mobility (O&M) tools for visually impaired people, e.g. tactile maps, possess limitations. Interactive accessible maps have been developed to overcome these. However, most of them are limited to exploration of existing maps, and have remained in laboratories. Using a participatory design approach, we have worked closely with 15 visually impaired students and 3 O&M instructors over 6 months. We iteratively designed and developed an augmented reality map intended for use in O&M classes in special education centers. This prototype combines projection, audio output and use of tactile tokens, and thus allows both map exploration and construction by low vision and blind people. Our user study demonstrated that all students were able to successfully use the prototype, and showed high user satisfaction. A second phase with 22 international special education teachers allowed us to gain more qualitative insights. This work shows that augmented reality has potential for improving the access to education for visually impaired people. © 2018 Copyright held by the owner/author(s).Accessibility; Augmented reality; Geographic maps; Participatory design; Visual impairmentConference Paper
44
Al-Busaidi A., Kumar P., Jayakumari C., Kurian P.57203244932;57203247133;57203248585;57203244169;User experiences system design for visually impaired and blind users in Oman20182017 6th International Conference on Information and Communication Technology and Accessibility, ICTA 2017Scopus10.1109/ICTA.2017.8336067Assistive technology helps visually challenged people use computers for tasks such as online transactions and railway reservations. The main aim of this paper is to explain the elements of technology that can enhance efficiency of use by visually impaired and blind users in the Sultanate of Oman. The objective of this research is to harness the power of user experience in developing a system for visually impaired and blind users, and to help blind users define the criteria of systems appropriate for them. The paper points out the importance of technology that can assist without human intervention and shows the loopholes in current technology. Different methodologies were used in the development and design of the application: the traditional software development approach was followed along with newer methodologies that incorporate new devices into the system. The research was designed to find out how people feel and what they think while using the system. Project management principles such as agile, a flexible SDLC model with an iterative design-and-build process, were applied. The approach used, prototype modelling, combined interviewing users with building a demo of the final product, adopting both types of approach to analyse the development procedure. © 2017 IEEE.accessibility; Evaluation methods; Human Computer Interface; Interactive media; usability; User Experience; user Interface design; visually impaired and blind users; Voice reorganizationConference Paper
45
Patil K., Jawadwala Q., Shu F.C.49862163900;57200643974;57200648979;Design and Construction of Electronic Aid for Visually Impaired People2018IEEE Transactions on Human-Machine SystemsScopus10.1109/THMS.2018.2799588The NavGuide is a novel electronic device to assist visually impaired people with obstacle-free path-finding. The highlight of the NavGuide system is that it provides simplified information on the surrounding environment and deduces priority information without causing information overload. The priority information is provided to the user through vibration and audio feedback mechanisms. The proof-of-concept device consists of a low-power embedded system with ultrasonic sensors, vibration motors, and a battery. To test the effectiveness of the NavGuide system in daily-life mobility of visually impaired people, we performed an evaluation using 70 blind people of the 'school & home for the blind.' All evaluations were performed in controlled, real-world test environments with the NavGuide and traditional white cane. The evaluation results show that NavGuide is a useful aid in the detection of obstacles, wet floors, and ascending staircases and its performance is better than that of a white cane. © 2018 IEEE.Assistive technology; blind people; electronic navigation aid; man machine systems; navigation; obstacle detection; rehabilitation; visually impaired people; wearable systemArticle
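The "priority information without information overload" idea reduces, in its simplest reading, to reporting only the single highest-priority sensed event at a time. The Python sketch below illustrates that deduction step; the event ordering, sensor names, and threshold are illustrative assumptions, not NavGuide's actual firmware logic.

PRIORITY = ["wet_floor", "staircase_up", "front", "left", "right"]  # assumed ordering

def priority_event(readings: dict, threshold_cm: float = 100.0):
    """Return only the highest-priority event whose reading crosses the threshold."""
    for name in PRIORITY:
        value = readings.get(name)
        if value is not None and value < threshold_cm:
            return name  # a single event -> a single vibration/audio cue, avoiding overload
    return None

print(priority_event({"front": 80.0, "left": 150.0}))  # -> "front"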
46
Modanwal G., Sarawadekar K.57188931753;36062046000;A New Dactylology and Interactive System Development for Blind-Computer Interaction2018IEEE Transactions on Human-Machine SystemsScopus10.1109/THMS.2017.2734065Although a lot of work has been done on gesture-based human-computer interfaces, blind users still find it difficult to interact with computers. One of the major stumbling blocks is the lack of knowledge about their preferences toward hand gestures. A user evaluation study is conducted with 25 blind users to understand this fact and an optimal gesture set is devised. These gestures are selected based on performance and preference measure analysis. The performance measure includes rating of gestures on four subjective criteria: easiness, naturalness, learning, and reproducibility. For the preference measure, a new parameter called the preference index is proposed in this paper. The optimal gestures are further categorized into two groups: tier-1 and tier-2. On the basis of these gestures, a dactylology is proposed for blind users so that they can interact with a computer easily. A prototype model of the proposed interactive system has been developed and encouraging experimental results are obtained. © 2017 IEEE.Blind; dactylology; gesture-based interaction; hand/wrist posture; human-computer interface (HCI); participatory designArticle
47
Maćkowski M.S., Brzoza P.F., Spinczyk D.R.6602140149;14521770000;14523511800;Tutoring math platform accessible for visually impaired people2018Computers in Biology and MedicineScopus10.1016/j.compbiomed.2017.06.003Background: There are many problems with teaching and assessing impaired students in higher education, especially in technical science, where the knowledge is represented mostly by structural information like: math formulae, charts, graphs, etc. Developing e-learning platform for distance education solves this problem only partially due to the lack of accessibility for the blind. Method: The proposed method is based on the decomposition of the typical mathematical exercise into a sequence of elementary sub-exercises. This allows for interactive resolving of math exercises and assessment of the correctness of exercise solutions at every stage. The presented methods were prepared and evaluated by visually impaired people and students. Results: The article presents the accessible interactive tutoring platform for math teaching and assessment, and experience in exploring it. The results of conducted research confirm good understanding of math formulae described according to elaborated rules. Regardless of the level of complexity of the math formulae the level of math formulae understanding is higher for alternative structural description. Conclusions: The proposed solution enables alternative descriptions of math formulae. Based on the research results, the tool for computer-aided interactive learning of mathematics adapted to the needs of the blind has been designed, implemented and deployed as a platform for on-site and online and distance learning. The designed solution can be very helpful in overcoming many barriers that occur while teaching impaired students. © 2017 Elsevier LtdAlternative presentation of math structural information; Assistive technology; Education for people with special needs; Self-learning of mathematics; Visually impaired peopleArticle
48
Savi A.O., Ruijs N.M., Maris G.K.J., van der Maas H.L.J.56533617100;26538865300;6603160467;6701525820;Delaying access to a problem-skipping option increases effortful practice: Application of an A/B test in large-scale online learning2018Computers and EducationScopus10.1016/j.compedu.2017.12.008We report on an online double-blind randomized controlled field experiment (A/B test) in Math Garden, a computer adaptive practice system with over 150,000 active primary school children. The experiment was designed to eliminate an unforeseen opportunity to practice with minimal effort. Some children tend to skip problems that require deliberate effort, and only attempt problems that they can spontaneously answer. The intervention delayed the option to skip a problem, thereby promoting effortful practice. The results reveal an increase in the exerted effort, without being at the expense of engagement. Whether the additional effort positively affected the children's learning gains could not be concluded. Finally, in addition to these substantial results, the experiment demonstrates some of the advantages of A/B tests, such as the unique opportunity to apply truly blind randomized field experiments in educational science. © 2017 Elsevier LtdElementary education; Evaluation methodologies; Evaluation of CAL systems; Interactive learning environments; Teaching/learning strategiesArticle
49
Delahoz Y., Labrador M.A.56403205900;7003977683;A deep-learning-based floor detection system for the visually impaired2018Proceedings - 2017 IEEE 15th International Conference on Dependable, Autonomic and Secure Computing, 2017 IEEE 15th International Conference on Pervasive Intelligence and Computing, 2017 IEEE 3rd International Conference on Big Data Intelligence and Computing and 2017 IEEE Cyber Science and Technology Congress, DASC-PICom-DataCom-CyberSciTec 2017Scopus10.1109/DASC-PICom-DataCom-CyberSciTec.2017.148The American Foundation for the Blind (AFB) has recently reported that over 25 million people in the U.S. suffer from total or partial vision loss. As a result of their visual impairment, they are constantly affected by the risk of a fall and its consequences. Most of this affected population relies on assistive technologies that allow their integration to society. Fall prevention is an area of research that focuses on the improvement of people's lives through the use of pervasive computing. This work introduces a fall prevention system for the blind and its different modules and focuses on the first module: a deep-learning approach for floor detection. A combination of convolutional neural layers and fully connected layers is used to create a network topology capable of identifying floor areas in pictures of multiple indoor environments. This task is remarkably difficult due to the complexity of identifying the patterns of a floor area in different scenarios, the noise added by the movement of the camera while walking, and the real-time nature of the system. This paper provides a general description of the fall prevention system, and a detailed description of the floor detection system. Finally, the evaluation of the proposed floor detection approach is presented: an accuracy of 92.7%, a precision of 90.2%, and a recall of 90.5% were obtained. © 2017 IEEE.Conference Paper
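The reported figures follow from the standard confusion-matrix definitions. The sketch below shows that computation; the counts passed in are illustrative values chosen only to roughly reproduce the reported rates, not the paper's actual data.

def metrics(tp: int, fp: int, tn: int, fn: int):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)   # of predicted floor regions, how many are truly floor
    recall = tp / (tp + fn)      # of true floor regions, how many were found
    return accuracy, precision, recall

# Illustrative counts only, chosen to approximate the reported 92.7% / 90.2% / 90.5%:
print(metrics(tp=905, fp=98, tn=1545, fn=95))  # -> (~0.927, ~0.902, ~0.905)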
50
[No author name available][No author id available]2017 IEEE 14th International Scientific Conference on Informatics, INFORMATICS 2017 - Proceedings20182017 IEEE 14th International Scientific Conference on Informatics, INFORMATICS 2017 - ProceedingsScopusThe proceedings contain 78 papers. The topics discussed include: augmented virtuality for the next generation production intelligence; manual techniques for evaluating domain usability; how to apply model-driven paradigm in information system (re)engineering; deep learning powered automated tool for generating image based datasets; determination of the critical congestion point in urban traffic networks: a case study; complex networks analysis of international import-export trade; nearest neighbor method using non-nested generalized exemplars in breast cancer diagnosis; unsupervised method for detection of high severity distresses on asphalt pavements; on a top down aspect mining approach for monitoring crosscutting concerns identification; heuristic sampling for the subgraph isomorphism problem; prediction of electricity consumption using biologically inspired algorithms; creation of interactive panoramic video for phobia treatment; defining camera-based traffic scenarios and use cases for the visually impaired by means of expert interviews; vertebrae detection in x-ray images based on deep convolutional neural networks; logistic conception for real-time based info-communication system applied in selective waste gathering; new clustering-based forecasting method for disaggregated end-consumer electricity load using smart grid data; and resource oriented BDI architecture for IDS.Conference Review
51
[No author name available][No author id available]ACM International Conference Proceeding Series2018ACM International Conference Proceeding SeriesScopusThe proceedings contain 22 papers. The topics discussed include: analysis of FinTech mobile app usability for geriatric users in India; building guitar strum models for an interactive air guitar prototype; designing a natural musical interface for a virtual musical Kompang using user-centered approach; disaster risk management and emergency preparedness: a case-driven training simulation using immersive virtual reality; designing a virtual empowerment mobile app for the blinds; face recognition similarity index with location-based system toward peace process and conflict resolution in the Philippines; and childlike computing: systems that think like humans and act like children.Conference Review
52
Edirisinghe C., Podari N., Cheok A.D.35190352500;55899700800;7003447496;A multi-sensory interactive reading experience for visually impaired children; a user evaluation2018Personal and Ubiquitous ComputingScopus10.1007/s00779-018-1127-4The children’s experience of reading is enhanced by visual displays, and through picture book experiences young children develop socially, personally, intellectually, and culturally. While a sighted person’s mental imagining is constructed mostly through visual experiences, a visually impaired person’s mental images are a product of haptic, taste, smell, and sound experiences. In this paper, we introduce a picture book with multi-sensory interactions for visually impaired children. The key novelty in our concept is the integration of multi-sensory interactions (touch, sound, and smell) to create a new reading experience for the visually impaired. This concept also highlights the lack of appropriately designed sensory reading experiences for visually impaired children. We conducted a user study with 10 educators and 25 children from a special school for the visually impaired in Malaysia, and our evaluation revealed that the book is engaging and that its multi-sensory interactions are a novel experience for both children and educators. © 2018 Springer-Verlag London Ltd., part of Springer NatureAssistive technologies; Children; Multi-sensory; Participatory design; Picture book; Storytelling; Visual impairmentArticle in Press
53
Chen N.-C., Suh J., Verwey J., Ramos G., Drucker S., Simard P.56158976500;57014172600;57201636521;55993680200;35902233200;24349687000;AnchorViz: Facilitating classifier error discovery through interactive semantic data exploration2018International Conference on Intelligent User Interfaces, Proceedings IUIScopus10.1145/3172944.3172950When building a classifier in interactive machine learning, human knowledge about the target class can be a powerful reference to make the classifier robust to unseen items. The main challenge lies in finding unlabeled items that can either help discover or refine concepts for which the current classifier has no corresponding features (i.e., it has feature blindness). Yet it is unrealistic to ask humans to come up with an exhaustive list of items, especially for rare concepts that are hard to recall. This paper presents AnchorViz, an interactive visualization that facilitates error discovery through semantic data exploration. By creating example-based anchors, users create a topology to spread data based on their similarity to the anchors and examine the inconsistencies between data points that are semantically related. The results from our user study show that AnchorViz helps users discover more prediction errors than stratified random and uncertainty sampling methods. © 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.Error discovery; Interactive machine learning; Semantic data exploration; Unlabeled data; VisualizationConference Paper
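One plausible reading of the anchor-based spreading described here (an assumption for illustration; the paper defines AnchorViz's actual layout) is that each unlabeled item is placed at the similarity-weighted barycenter of the user-created anchors, so items pulled toward an unexpected anchor stand out as candidate prediction errors.

import math

def place_item(similarities, anchor_xy):
    """Similarity-weighted barycenter of the anchors (assumed layout rule)."""
    total = sum(similarities) or 1.0
    x = sum(s * ax for s, (ax, ay) in zip(similarities, anchor_xy)) / total
    y = sum(s * ay for s, (ax, ay) in zip(similarities, anchor_xy)) / total
    return x, y

anchors = [(math.cos(a), math.sin(a)) for a in (0.0, 2.1, 4.2)]  # three anchors on a circle
print(place_item([0.8, 0.1, 0.1], anchors))  # lands near the first anchor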
54
Billah S.M., Ashok V., Ramakrishnan I.V.56022850400;55633359800;7003899264;Write-it-yourself with the aid of smartwatches: A Wizard-of-Oz experiment with blind people2018International Conference on Intelligent User Interfaces, Proceedings IUIScopus10.1145/3172944.3173005Working with non-digital, standard printed materials has always been a challenge for blind people, especially writing. Blind people very often depend on others to fill out printed forms, write checks, sign receipts and documents. Extant assistive technologies for working with printed material have exclusively focused on reading, with little to no support for writing. Also, these technologies employ special-purpose hardware that is usually worn on the fingers, making it unsuitable for writing. In this paper, we explore the idea of using off-the-shelf smartwatches (paired with smartphones) to assist blind people in both reading and writing paper forms including checks and receipts. Towards this, we performed a Wizard-of-Oz evaluation of different smartwatch-based interfaces that provide user-customized audio-haptic feedback in real-time, to guide blind users to different form fields, narrate the field labels, and help them write straight while filling out these fields. Finally, we report the findings of this study including the technical challenges and user expectations that can potentially inform the design of Write-it-Yourself aids based on smartwatches. © 2018 Copyright is held by the owner/author(s). Publication rights licensed to ACM.Accessibility; Audio-haptic; Blind; Directional guidance; Smartwatch; Visual impairments; Wearables; Writing aidConference Paper
55
Katzschmann R.K., Araki B., Rus D.56031680900;56641871000;7004511052;Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device2018IEEE Transactions on Neural Systems and Rehabilitation EngineeringScopus10.1109/TNSRE.2018.2800665This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries in their immediate environment. The solution allows for safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device presented is composed of two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of a user's waist, and the pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. We validated the device's capability in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases. © 2001-2011 IEEE.Assistive device; haptic feedback array; human-robot interaction; perception; sightless navigationArticle
56
Reichinger A., Carrizosa H.G., Wood J., Schröder S., Löw C., Luidolt L.R., Schimkowitsch M., Fuhrmann A., Maierhofer S., Purgathofer W.36716739000;57202913459;57208220709;35180203000;57189040471;57207928436;57207926534;7004448962;14822297800;6603751465;Pictures in your mind: Using interactive gesture-controlled reliefs to explore art2018ACM Transactions on Accessible ComputingScopus10.1145/3155286Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration. In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places. We present a working prototype, discuss design decisions, and present the results of two evaluation studies: the first with 13 BVI test users and the second follow-up study with 14 test users across a wide range of people with differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant and innovative developments. 2018 Copyright is held by the owner/author(s). © 2018 Association for Computing Machinery. All Rights Reserved.Auditory interface; Blind; Cognitive disability; Design for all; Gestures; Learning disability; Low vision; Multimodal interactionArticle
57
Guevarra E.C., Camama M.I.R., Cruzado G.V.57201031873;57201027490;57201031267;Development of guiding cane with voice notification for visually impaired individuals2018International Journal of Electrical and Computer EngineeringScopus10.11591/ijece.v8i1.pp104-112Navigation in the physical environment is a challenge for people who have a very limited sense of sight or no vision at all. Assistive technologies for blind mobility are not new and always have room for improvement. Moreover, these assistive devices are limited in terms of their sensing and feedback abilities. This paper presents a microcontroller-based guiding stick capable of detecting several conditions of the environment, such as obstacles in front of, to the left of, and to the right of the user, as well as ascending and descending staircases. The feedback is delivered by an audio output which dictates the direction to go or what condition the sensor detects in front of the user. Technical evaluation showed that the device was functional in terms of its accuracy, responsiveness and correctness. On the other hand, in the actual evaluation of the device with visually impaired individuals, the device did not perform efficiently. It was also found that the device has the potential to be used effectively by visually impaired people who acquired their blindness at a later stage of life, provided they receive proper training in using the device while navigating the physical environment. © 2018 Institute of Advanced Engineering and Science.Audio notifications; Gizduino; Microcontroller; Ultrasonic sensors; Voice kit IIArticle
58
Martinez M., Roitberg A., Koester D., Stiefelhagen R., Schauerte B.7404594284;56622647500;7004367115;6602180348;35234793300;Using Technology Developed for Autonomous Cars to Help Navigate Blind People2018Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017Scopus10.1109/ICCVW.2017.169Autonomous driving is currently a very active research area with virtually all automotive manufacturers competing to bring the first autonomous car to the market. This race leads to billions of dollars being invested in the development of novel sensors, processing platforms, and algorithms. In this paper, we explore the synergies between the challenges in self-driving technology and development of navigation aids for blind people. We aim to leverage the recently emerged methods for self-driving cars, and use it to develop assistive technology for the visually impaired. In particular we focus on the task of perceiving the environment in realtime from cameras. First, we review current developments in embedded platforms for real-time computation as well as current algorithms for image processing, obstacle segmentation and classification. Then, as a proof-of-concept, we build an obstacle avoidance system for blind people that is based on a hardware platform used in the automotive industry. To perceive the environment, we adapt an implementation of the stixels algorithm, designed for self-driving cars. We discuss the challenges and modifications required for such an application domain transfer. Finally, to show its usability in practice, we conduct and evaluate a user study with six blindfolded people. © 2017 IEEE.Conference Paper
59
Mocanu B., Tapu R., Zaharia T.24822846400;26424843800;6601999900;Seeing Without Sight - An Automatic Cognition System Dedicated to Blind and Visually Impaired People2018Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017Scopus10.1109/ICCVW.2017.172In this paper we present an automatic cognition system, based on computer vision algorithms and deep convolutional neural networks, designed to assist the visually impaired (VI) users during navigation in highly dynamic urban scenes. A first feature concerns the realtime detection of various types of objects existent in the outdoor environment relevant from the perspective of a VI person. The objects are followed between successive frames using a novel tracker, which exploits an offline trained neural-network and is able to track generic objects using motion patterns and visual attention models. The system is able to handle occlusions, sudden camera/object movements, rotation or various complex changes. Finally, an object classification module is proposed that exploits the YOLO algorithm and extends it with new categories specific to assistive devices applications. The feedback to VI users is transmitted as a set of acoustic warning messages through bone conducting headphones. The experimental evaluation, performed on the VOT 2016 dataset and on a set of videos acquired with the help of VI users, demonstrates the effectiveness and efficiency of the proposed method. © 2017 IEEE.Conference Paper
60
Flores G.H., Manduchi R.56266976300;7004297978;A Public Transit Assistant for Blind Bus Passengers2018IEEE Pervasive ComputingScopus10.1109/MPRV.2018.011591061Public transit is the key to independence for many blind persons but, despite recent progress in assistive technology, remains challenging for those without sight. To this end, the authors developed a prototype mobile application that communicates information via Wi-Fi access points installed in buses and at bus stops to help blind bus passengers reach their destination. A user study of the system yielded insights into general accessibility issues for blind public transit riders as well as ways to improve the proposed system. © 2002-2012 IEEE.access point; accessibility; assistive technology; blindness; pervasive computing; PTA; public transit; public transit assistant; public transportation; visual impairment; Wi-FiArticle
61
Zeng L., Einert B., Pitkin A., Weber G.36192451600;57202915688;57202911874;56395998400;Hapticrein: Design and development of an interactive haptic rein for a guidance robot2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-94274-2_14In daily life, a guide dog companion assists blind and visually impaired people (BVIP) in making independent and safe journeys. However, for several reasons (e.g., cost and long-term training) only a few BVIP own guide dogs. So that many more BVIP can have accessible guidance services in unfamiliar public or private areas, we plan to develop an interactive guide dog robot prototype. In this paper, we present one of the most important components of a guidance robot for BVIP: an interactive haptic rein consisting of force-sensing sensors to control and balance the walking speed between the BVIP and the robot in a natural way, and vibration actuators under the fingers to convey haptic information, e.g., turning left/right. The results of preliminary user studies indicated that the proposed haptic rein allows BVIP to control and communicate with a guidance robot via natural haptic interaction. © The Author(s) 2018.Blind and visually impaired people; Guidance robot; Haptic interactionConference Paper
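A rein that "controls and balances walking speed" via force sensing suggests, at its simplest, a proportional control law: pull forward to speed the robot up, hold back to slow it. The sketch below is an assumed toy controller, not the paper's implementation; the gain and speed limits are invented for illustration.

def update_speed(current_mps: float, force_n: float, gain: float = 0.05) -> float:
    """Proportional speed update from the rein's force reading.
    Positive force = user pulling forward (speed up); negative = holding back."""
    return max(0.0, min(1.5, current_mps + gain * force_n))

speed = 0.8
speed = update_speed(speed, force_n=+4.0)   # user pulls: speed -> 1.0
speed = update_speed(speed, force_n=-10.0)  # user holds back: speed -> 0.5
print(speed)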
62
Thevin L., Brock A.M.56736921800;37013142800;Augmented reality for people with visual impairments: Designing and creating audio-tactile content from existing objects2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-94274-2_26Tactile maps and diagrams are widely used as accessible graphical media for people with visual impairments, in particular in the context of education. They can be made interactive by augmenting them with audio feedback. It is however complicated to create audio-tactile graphics that have rich and realistic tactile textures. To overcome these limitations, we propose a new augmented reality approach allowing novices to easily and quickly augment real objects with audio feedback. In our user study, six teachers created their own audio-augmentation of objects, such as a botanical atlas, within 30 min or less. Teachers found the tool easy to use and were confident about re-using it. The resulting augmented objects allow two modes: exploration mode provides feedback on demand about an element, while quiz mode provides questions and answers. We evaluated the resulting audio-tactile material with five visually impaired children. Participants found the resulting interactive graphics exciting to use independently of their mental imagery skills. © Springer International Publishing AG, part of Springer Nature 2018.Conference Paper
63
Oliveira S., Doro L., Okimoto M.L.55189869300;57194797726;36848911400;Study and design of a tactile map and a tactile 3D model in Brazil: Assistive technologies for people with visual impairment2018Advances in Intelligent Systems and ComputingScopus10.1007/978-3-319-60582-1_72This paper presents the study and design method for the development of a Tactile Map and 3D Tactile Model in Brazil to be used as assistive technology for people with visual impairments. In this context, we present a bibliographic review that offers requirements for designing various tactile products. The method presents techniques for measuring the usability of these products. Researchers prototyped the tactile map on a fuser printer and the tactile 3D model on a three-dimensional printer for a National Conference of Assistive Technology for the Integration between Design and Engineering in Brazil. From the perspective of the built environment, researchers defined the location points in the products (map and 3D model, both tactile). The map and 3D model were adapted based on a user-centered design process with an individual with complete visual impairment. People with visual impairments tested these assistive technologies (AT) during the conference. The main objective of this study was to investigate the accessible production of inexpensive tactile aid products in order to improve the user experience and practices of visually impaired people. This paper shows that the Tactile Map and 3D Tactile Model presented in this study were able to help individuals with visual impairments in the conference environment. © Springer International Publishing AG 2018.Assistive technology; Blind; Low vision; Visual impairment; WayfindingConference Paper
64
Oliveira Lima A.C., Queiroz Vieira M.D.F., da Silva Ferreira R., Aguiar Y.P.C., Bastos M.P., Lopes Junior S.L.M.57207879947;57204958808;57207874197;36170462900;57207879770;57207883885;Evaluating system accessibility using an experimental protocol based on usability2018MCCSIS 2018 - Multi Conference on Computer Science and Information Systems; Proceedings of the International Conferences on Interfaces and Human Computer Interaction 2018, Game and Entertainment Technologies 2018 and Computer Graphics, Visualization, Computer Vision and Image Processing 2018ScopusThis article presents a systematic approach to assessing product and system accessibility, adapting an experimental protocol originally designed to evaluate product usability. The adapted protocol focuses on products and systems for the visually impaired. The study conducted with the proposed protocol investigates the adequacy of assistive technology for its target users, regardless of their gender, age or previous experience with the technology. The tasks performed by a community of 30 users were categorized as activities of entertainment, learning and social inclusion. The data obtained from the experiment carried out with the protocol enabled testing a set of assumptions about the protocol's usage. © 2018.Accessibility; Assistive Technology; UsabilityConference Paper
65
Mun C., Lee O.57195479158;7103020210;Integrated Supporting Platform for the Visually Impaired: Using Smart devices2018ICIS 2017: Transforming Society with Digital InnovationScopusFor the safe walking of the visually impaired (VI), this platform provides both VI and guardians with optimal information through commonly used smart devices. This contributes to enhancing walking safety and composes a single platform that integrates each assistive technology through a web-based interface. The overall platform is composed of four categories, and each category interacts with the others through HTTP connections. The experiment was conducted by walk simulation, performance testing and descriptive evaluation to test the performance of the platform. As a result, the practicality and defects of the platform are deduced from the measurement data and feedback data. This platform allows VI not only to get optimal information but also to walk more safely. As for guardians, they can administer the application and update the latest information at any time, which contributes to individual administration.Aid to Visually Impaired; Electronic Data Processing; Systems Analysis; Systems IntegrationConference Paper
66
Tapu R., Mocanu B., Zaharia T.26424843800;24822846400;6601999900;Wearable assistive devices for visually impaired: A state of the art survey2018Pattern Recognition LettersScopus10.1016/j.patrec.2018.10.031Recent statistics of the World Health Organization (WHO), published in October 2017, estimate that more than 253 million people worldwide suffer from visual impairment (VI), with 36 million blind people and 217 million people with low vision. In the last decade, there was a tremendous amount of work in developing wearable assistive devices dedicated to visually impaired people, aiming at increasing the user cognition when navigating in known/unknown, indoor/outdoor environments, and designed to improve the VI quality of life. This paper presents a survey of wearable/assistive devices and provides a critical presentation of each system, while emphasizing related strengths and limitations. The paper is designed to inform the research community and the VI people about the capabilities of existing systems, the progress in assistive technologies and provide a glimpse in the possible short/medium term axes of research that can improve existing devices. The survey is based on various features and performance parameters, established with the help of the blind community, that allow systems classification using both qualitative and quantitative measures of evaluation. This makes it possible to rank the analyzed systems based on their potential impact on the VI people life. © 2018 Elsevier B.V.Assistive devices survey; Sensorial networks ETAs; Video camera based ETAs; Visually impaired peopleArticle in Press
67
Mariani E., Giacaglia M.E.57194797175;54881856400;Guidelines for electronic systems designed for aiding the visually impaired people in metro networks2018Advances in Intelligent Systems and ComputingScopus10.1007/978-3-319-60441-1_96The research described herein sought to contribute to the theoretical field pertaining to assistive technology systems for the visually impaired. It aimed to investigate by what means the visually impaired interact within the metro environment, seeking to understand their abilities, limitations and fears, in light of their cognitive variables. The study was conducted through participant observation of users with visual impairment and survey questionnaires to specialists, instructors of orientation and mobility for the visually impaired, and members of the target user group, in São Paulo (Brazil) and Porto (Portugal), which included, respectively, the assessment of two electronic guidance systems, NavGATe and Navmetro. It resulted in a set of guidelines for the design, implementation and operation of mobile-based assistive systems for the visually impaired within metro networks that fill existing gaps in current knowledge. © Springer International Publishing AG 2018.Metro; Navigation; Spatial awareness; Visually impairedConference Paper
68
Melfi G., Schwarz T., Stiefelhagen R.57202916221;56266788700;6602180348;An inclusive and accessible LaTeX editor2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-94277-3_90In this paper, we discuss the reasons which led us to propose LaTeX as a mathematical notation for visually impaired students and why there is a need for an accessible LaTeX editor. We performed an accessibility test on a selection of three LaTeX editors to investigate their capability to satisfy the needs of these users. The evaluation of the results showed that it was necessary to develop our own editor. The first prototype was preliminarily tested, proving that, by paying attention to the GUI library and the development environment, it is possible to develop a GUI with a high level of compatibility with assistive technologies like screen readers and magnifiers. The editor achieved a satisfying ranking in the user tests, which encourages further development of the software. © Springer International Publishing AG, part of Springer Nature 2018.Accessibility to mathematics; Accessible editor; LaTeXConference Paper
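The case for LaTeX as a notation for blind students is that its source is linear text that a screen reader can speak token by token, with no two-dimensional layout to interpret. For instance, the quadratic formula is typed as the single source line below (how the editor in this record actually voices it is not specified in the abstract):

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}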
69
Damasio Oliveira J., Teixeira Borges O., Stangherlin Machado Paixão-Cortes V., de Borba Campos M., Mendes Damasceno R.57190282486;57195068598;57203126735;55803627100;57203126246;LêRótulos: A mobile application based on text recognition in images to assist visually impaired people2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-92049-8_25The autonomy of a visually impaired person can be evaluated in day-to-day activities like recognizing objects and identifying textual information, among others. This paper features the OCR-based LêRótulos application, whose objective is to help visually impaired users identify textual object information captured by the camera of a smartphone. The design of the prototype followed guidelines and recommendations for usability and accessibility, aiming for greater user autonomy. An evaluation was conducted with specialists and end users in real situations of use. The results indicated that the application has good usability and meets accessibility criteria for blind and low-vision users, although some improvements were indicated. Related work is presented, along with the LêRótulos design process, the results of the usability and accessibility assessments, and lessons learned for the development of assistive technology aimed at visually impaired users. © Springer International Publishing AG, part of Springer Nature 2018.Accessibility; Assistive technology; Evaluation; Mobile devices; OCR; Usability; Visually impaired peopleConference Paper
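As a rough sketch of the pipeline this record describes (photograph a label, OCR it, read it aloud), the snippet below uses pytesseract and pyttsx3 purely as stand-in components; the paper does not state which OCR or speech engines LêRótulos actually uses, and the file name is hypothetical.

from PIL import Image
import pytesseract  # stand-in OCR engine
import pyttsx3      # stand-in text-to-speech engine

def read_label_aloud(photo_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(photo_path))  # OCR over the photo
    engine = pyttsx3.init()
    engine.say(text.strip() or "No text found")  # speak the recognized label text
    engine.runAndWait()
    return text

read_label_aloud("label.jpg")  # hypothetical photo taken with the smartphone camera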
70
Lozano M.D., Penichet V.M.R., Leporini B., Fernando A.35496796100;6506739314;9133799500;7005706996;Tangible user interfaces to ease the learning process of visually-impaired children2018Proceedings of the 32nd International BCS Human Computer Interaction Conference, HCI 2018Scopus10.14236/ewic/HCI2018.87People with visual impairment face significant challenges during their learning process, especially children in the early stages of this process. Different assistive technologies have been developed in recent decades to support people with visual impairment when interacting with computers. These technologies assume certain skills and a certain level of maturity in the users. Nevertheless, there is a lack of technologies specifically designed for children to support their personal development, especially in the earliest stages. In this paper we propose a novel system based on Tangible User Interfaces aimed at providing children with visual impairment a new way of learning basic concepts. The system implements an audio-based game through which children are guided and motivated with different activities designed to teach basic concepts by identifying physically available objects that children must grasp with their hands to interact with the system. We have also performed a preliminary evaluation of the system with real users, obtaining positive results. © Dupré et al. Published by BCS Learning and Development Ltd. Proceedings of British HCI 2018. Belfast, UKAccessibility; Tangible User Interfaces; User Interface Design; Visual ImpairmentConference Paper
71
Smaradottir B.F., Håland J.A., Martinez S.G., De Pietro G.56728878100;57195960112;55804594700;57206305016;User evaluation of the smartphone screen reader voiceover with visually disabled participants2018Mobile Information SystemsScopus10.1155/2018/6941631Touchscreen assistive technology is designed to support speech interaction between visually disabled people and mobile devices, allowing hand gestures to interact with a touch user interface. In a global perspective, the World Health Organization estimates that around 285 million people are visually disabled with 2/3 of them over 50 years old. This paper presents the user evaluation of VoiceOver, a built-in screen reader in Apple Inc. products, with a detailed analysis of the gesture interaction, familiarity and training by visually disabled users, and the system response. Six participants with prescribed visual disability took part in the tests in a usability laboratory under controlled conditions. Data were collected and analysed using a mixed methods approach, with quantitative and qualitative measures. The results showed that the participants found most of the hand gestures easy to perform, although they reported inconsistent responses and lack of information associated with several functionalities. User training on each gesture was reported as key to allow the participants to perform certain difficult or unknown gestures. This paper also reports on how to perform mobile device user evaluations in a laboratory environment and provides recommendations on technical and physical infrastructure. © 2018 Berglind F. Smaradottir et al.Article
72
Acosta T., Acosta-Vargas P., Salvador-Ullauri L., Luján-Mora S.57200384739;57192678428;57192689183;6603381780;Method for accessibility assessment of online content editors2018Advances in Intelligent Systems and ComputingScopus10.1007/978-3-319-73450-7_51This paper defines a method for evaluating the accessibility of online content editors by considering Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and part B of the Authoring Tool Accessibility Guidelines 2.0 (ATAG 2.0). The method includes 63 accessibility features that should be met by the images, headings and tables, which are inserted through an online content editor. The compliance of these guidelines contributes to the creation of accessible content so that visually impaired people using assistive technologies can easily access the content. Furthermore, the results of this study provide criteria for those people who have the responsibility of selecting an accessible online content editor. The proposed method has made it possible to meet the objective set out in this research document and can be used to evaluate the accessibility of learning management systems and course management systems. © Springer International Publishing AG 2018.Accessibility; ATAG 2.0; CMS; Content editors; Content management systems; Disabilities; E-learning; Learning management systems; LMS; W3C; WCAG 2.0Conference Paper
73
Teixeira Borges O., Damasio Oliveira J., de Borba Campos M., Marczak S.57195068598;57190282486;55803627100;24477940900;Fair play: A guidelines proposal for the development of accessible audiogames for visually impaired users2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-92049-8_29The area of games, digital entertainment, and development of assistive technologies is constantly growing. However, there are still groups of users who face barriers to using games, such as visually impaired people. Audiogames, defined as games based on a sound interface, have been an initiative for the inclusion of this audience. However, these are not always games with good accessibility. In order to address this issue, this study presents Fair Play, a set of 33 guidelines for audiogame design. Fair Play aims to promote good accessibility, gameplay, and usability in audiogames. It was proposed based on the results of a literature review, and the guidelines were validated following 6 steps detailed in this study. The guidelines are also available online for use by the community. © Springer International Publishing AG, part of Springer Nature 2018.Accessibility; Accessible games; Audiogames; Usability; Visually impaired usersConference Paper
74
Palani H.P., Giudice G.B., Giudice N.A.55497109200;57202831490;15724751900;Haptic information access using touchscreen devices: Design guidelines for accurate perception of angular magnitude and line orientation2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-92049-8_18The overarching goal of our research program is to address the long-standing issue of non-visual graphical accessibility for blind and visually-impaired (BVI) people through development of a robust, low-cost solution. This paper contributes to our research agenda aimed at studying key usability parameters governing accurate rendering and perception of haptically-accessed graphical materials via commercial touchscreen-based smart devices, such as smart phones and tablets. The current work builds on the findings from our earlier studies by empirically investigating the minimum angular magnitude that must be maintained for accurate detection and angular judgment of oriented vibrotactile lines. To assess the minimum perceivable angular magnitude (i.e., cord length) between oriented lines, a psychophysically-motivated usability experiment was conducted that compared accuracy in oriented line detection across four angles (2°, 5°, 9°, and 22°) and two radiuses (1-in. and 2-in.). Results revealed that a minimum 4 mm cord length (which corresponds to 5° at a 1-in. radius and 2° at a 2-in. radius) must be maintained between oriented lines for supporting accurate haptic perception via vibrotactile cuing. Findings provide foundational guidelines for converting/rendering oriented lines on touchscreen devices for supporting haptic information access based on vibrotactile stimuli. © Springer International Publishing AG, part of Springer Nature 2018.Assistive technology; Design guidelines; Haptic information access; Haptic interaction; Multimodal interfaceConference Paper
75
Bateman A., Zhao O.K., Bajcsy A.V., Jennings M.C., Toth B.N., Cohen A.J., Horton E.L., Khattar A., Kuo R.S., Lee F.A., Lim M.K., Migasiuk L.W., Renganathan R., Zhang A., Oliveira M.A.57191255306;57191256393;57191261693;57191260867;57191261677;57191252213;57191258353;57191251087;57191252626;57191260808;57191261622;57191260790;57191249850;57191251306;15081470300;A user-centered design and analysis of an electrostatic haptic touchscreen system for students with visual impairments2018International Journal of Human Computer StudiesScopus10.1016/j.ijhcs.2017.09.004Students who are visually impaired face unique challenges when learning mathematical concepts due to the visual nature of graphs, charts, tables, and plots. While touchscreens have been explored as a means to assist people with visual impairments in learning mathematical concepts, many devices are not standalone, were not developed with a user-centered design approach, and have not been tested with users who are visually impaired. This research details the user-centered design and analysis of an electrostatic touchscreen system for displaying graph-based visual information to individuals who are visually impaired. Feedback from users and experts within the visually-impaired community informed the iterative development of our software. We conducted a usability study consisting of locating haptic points in order to test the efficacy and efficiency of the system and to determine patterns of user interactions with the touchscreen. The results showed that: (1) participants correctly located haptic points with an accuracy rate of 69.83% and an average time of 15.34 s out of 116 total trials, (2) accuracy increased across trials, (3) efficient patterns of user interaction involved either a systematic approach or a rapid exploration of the screen, and (4) haptic elements placed near the corners of the screen were more easily located. Our user-centered design approach resulted in an intuitive interface for people with visual impairments and laid the foundation for demonstrating this device's potential to depict mathematical data shown in graphs. © 2017 Elsevier LtdAssistive technology; Electrostatic touchscreen; Haptic; Mathematics education; User-centered design; Visual impairmentArticle
76
Saitis C., Parvez M.Z., Kalimeri K.55177925000;57201535716;36163092300;Cognitive Load Assessment from EEG and Peripheral Biosignals for the Design of Visually Impaired Mobility Aids2018Wireless Communications and Mobile ComputingScopus10.1155/2018/8971206Reliable detection of cognitive load would benefit the design of intelligent assistive navigation aids for the visually impaired (VIP). Ten participants with various degrees of sight loss navigated in unfamiliar indoor and outdoor environments, while their electroencephalogram (EEG) and electrodermal activity (EDA) signals were being recorded. In this study, the cognitive load of the tasks was assessed in real time based on a modification of the well-established event-related (de)synchronization (ERD/ERS) index. We present an in-depth analysis of the environments that mostly challenge people from certain categories of sight loss and we present an automatic classification of the perceived difficulty in each time instance, inferred from their biosignals. Given the limited size of our sample, our findings suggest that there are significant differences across the environments for the various categories of sight loss. Moreover, we exploit cross-modal relations predicting the cognitive load in real time inferring on features extracted from the EDA. Such possibility paves the way for the design on less invasive, wearable assistive devices that take into consideration the well-being of the VIP. © 2018 Charalampos Saitis et al.Article
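For context when triaging: the classical ERD/ERS index that this record says it modifies is the relative band-power change between an activity window and a reference baseline (the paper's own modification is not reproduced here):

\mathrm{ERD/ERS}\,(\%) = \frac{A - R}{R} \times 100

where A is the EEG band power during the task interval and R the band power in a preceding reference interval; negative values (ERD, desynchronization) are commonly read as increased cognitive engagement.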
77
Mesquita L., Sánchez J., Andrade R.M.C.57190183970;7403997772;56214455600;Cognitive impact evaluation of multimodal interfaces for blind people: Towards a systematic review2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-92049-8_27Visual disability has a major impact on people’s quality of life. Although there are many technologies to assist people who are blind, most of them do not necessarily guarantee the effectiveness of the intended use. We therefore conducted a systematic literature review concerning the cognitive impact evaluation of multimodal interfaces for blind people. We report in this paper the preliminary results of the systematic literature review, with the purpose of understanding how cognitive impact is currently evaluated when using multimodal interfaces for blind people. Among the twenty-five papers retrieved in the systematic review, we found a high diversity of experiments. Some of them do not present the data clearly or do not apply a statistical method to guarantee the results. Other points related to the experiments are also analyzed. We conclude that there is a need to better plan and present data from experiments on technologies for the cognition of blind people. As the next step in this research, we will investigate these preliminary results with a qualitative analysis. © Springer International Publishing AG, part of Springer Nature 2018.Blind people; Cognitive evaluation; Impact evaluation; Multimodal interfacesConference Paper
78
Vaz R., Fernandes P.O., Veiga A.C.R.57192591274;35200741800;57192589001;Designing an interactive exhibitor for assisting blind and visually impaired visitors in tactile exploration of original museum pieces2018Procedia Computer ScienceScopus10.1016/j.procs.2018.10.076Blind and visually impaired visitors experience a lot of constraints when visiting museum exhibitions, given the ocular centricity of these institutions and the lack of cognitive, physical and sensorial access to exhibits or replicas, compounded by the inability to use the digital media technologies designed to provide different experiences, among other constraints. This paper presents the design and implementation of an exhibitor that communicates original museum samples to blind and visually impaired patrons, without the need for replicas, and that interactively "tells stories about their lives" whenever a piece is picked up. Tests performed with 13 partially sighted and blind participants in the main exhibition space of the museum demonstrated very positive evaluations of the pragmatic and hedonic qualities of the interaction, a positive capacity to mentally conceptualize the exhibits from the audio descriptions, and suggestions on how to enhance the experience of using more exhibitors like this one during a future visit to the museum. © 2018 The Authors. Published by Elsevier Ltd.Accessibility; Blind people; Human-computer interaction; Museum; Tactile exploration; Visit experience; Visually impaired peopleConference Paper
79
Leporini B., Palmucci E.9133799500;57200513254;Accessible question types on a touch-screen device: The case of a mobile game app for blind people2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-94277-3_42This study investigates accessibility and usability via screen reader and gestures on touch-screen mobile devices. We specifically focus on interactive tasks performed to complete exercises, answer questionnaires or quizzes. These tools are frequently exploited for evaluation tests or in serious games. Single-choice, multiple-choice and matching questions may create difficulties when using gestures and screen readers to interact on a mobile device. The aim of our study is (1) to gather information on interaction difficulties faced by blind people when answering questions on a mobile touch-screen device, and (2) to investigate possible solutions to overcome the detected accessibility and usability issues. For this purpose, a mobile app delivering an educational game has been developed in order to apply the proposed approach. The game includes the typical question types and exercises used in evaluation tests. Herein we first describe the main accessibility and usability issues reported by a group of visually-impaired people. Next, the game and its different exercises are introduced in order to illustrate the proposed solutions. © Springer International Publishing AG, part of Springer Nature 2018.Accessibility; Mobile games; Mobile interaction; Visually-impaired usersConference Paper
80
Skulimowski P., Owczarek M., Radecki A., Bujacz M., Rzeszotarski D., Strumillo P.24175143000;57191585449;36608803900;24174091900;6504165490;6602080015;Interactive sonification of U-depth images in a navigation aid for the visually impaired2018Journal on Multimodal User InterfacesScopus10.1007/s12193-018-0281-3In this paper we propose an electronic travel aid system for the visually impaired that utilizes interactive sonification of U-depth maps of the environment. The system comprises a depth sensor connected to a mobile device and a dedicated application for segmenting depth images and converting them into sounds in real time. An important feature of the system is that the user can interactively select the 3D scene region for sonification by simple touch gestures on the mobile device screen. The sonification scheme uses stereo panning for azimuth angle localization of scene objects, loudness for their size and frequency for distance encoding. Such a sonic representation of 3D scenes allows the user to identify the geometric structure of the environment and determine the distances to potential obstacles. The prototype application was tested by three visually impaired users who managed to successfully perform indoor mobility tasks. The system’s usefulness was evaluated quantitatively by means of system usability and task-related questionnaires. © 2018, The Author(s).Depth maps; Electronic travel aid; Image sonification; Interactive sonification; U-depth; U-disparity; Visually impairedArticle in Press
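[Note for triage] The abstract states the sonification mapping explicitly: stereo pan encodes azimuth, loudness encodes object size, frequency encodes distance. A minimal Python sketch of such a mapping follows; the parameter names and ranges are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the abstract's sonification mapping for one segmented
# U-depth scene object: pan <- azimuth, gain <- size, frequency <- distance.
from dataclasses import dataclass

@dataclass
class SceneObject:
    azimuth_deg: float   # -90 (far left) .. +90 (far right)
    size_m2: float       # apparent size of the segmented region
    distance_m: float    # estimated distance to the object

def sonify(obj: SceneObject,
           f_near: float = 1000.0, f_far: float = 200.0,
           max_dist: float = 5.0) -> dict:
    """Map one scene object to stereo sound parameters (all ranges assumed)."""
    pan = max(-1.0, min(1.0, obj.azimuth_deg / 90.0))  # -1 = left, +1 = right
    gain = min(1.0, obj.size_m2)                       # larger object -> louder
    d = min(obj.distance_m, max_dist) / max_dist
    freq = f_near + (f_far - f_near) * d               # closer -> higher pitch
    return {"pan": pan, "gain": gain, "freq_hz": freq}

print(sonify(SceneObject(azimuth_deg=-30.0, size_m2=0.4, distance_m=2.0)))
```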
81
Reichinger A., Carrizosa H.G., Travnicek C.36716739000;57202913459;57202915545;Designing an interactive tactile relief of the Meissen table fountain2018Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Scopus10.1007/978-3-319-94274-2_28In this paper we highlight the practical experience gained during the first design iteration of a tactile relief for the Meissen table fountain exhibited at the Victoria & Albert Museum, London. Based on a 3D scan, we designed a 2.5D relief that is usable for our gesture-based interactive audio guide. We present a mixed-perspective view projection technique that combines the advantages of a top-down view and a frontal view, and developed a detail-preserving depth-compression technique to flatten less important parts. Finally, we present the results of a preliminary evaluation with 14 members of our participative research group, and give an outlook for improvements to be targeted in our ongoing research. © Springer International Publishing AG, part of Springer Nature 2018.Accessibility; Blind people; Design for all; Museum; Tactile relief; Visually impaired peopleConference Paper
82
Ramya N., Akshai R., Baba Skandar R., Balamurugan S.M.57205197500;57205453015;57205452589;57205458484;Retina the real time interactive solution for visually impaired2018International Journal of Engineering and Technology(UAE)ScopusThe main objective of the project is to provide an application that acts as an all-in-one tool for disabled people. Most activities are performed digitally, a prime example being online shopping platforms, which have replaced the traditional means. In this era of modern technology, people are entirely reliant on electronic gadgets and make use of several features to run their daily lives. To enhance the usability of such features for disabled people, this application has been developed to take care of their basic needs. The dependence of the general public on mobile applications is justified, but that is not the case for disabled people. This application has been designed specifically to cater to people with disabilities. This proposal requires only our "Retina App" and a breath analyzer module. The differentiating feature of this application is the ability to perform various functions such as booking an Uber, reserving tables at restaurants, and calling ambulances and fire trucks, in addition to the detection of diseases. It can detect the level of alcohol in a person's breath and, when it surpasses a certain level, displays a notification that can help the user book a cab directly to take him/her back to his/her residence. © 2018 Authors.API; Breath analyzer; GPS; Mobile application; Online services; Phone calls; Uber appArticle
83
Nielsen L., Christensen L.R., Sabers A.35561663800;35781744600;6603601971;Do we have to include HCI issues in clinical trials of medical devices? - A discussion2017ACM International Conference Proceeding SeriesScopus10.1145/3152771.3156135Digital devices play an important role in medical treatment and will in the future play a larger role in connection with the treatment of health-related issues. Traditionally, medicine has been tested by clinical double-blind, randomized trials to document the efficacy and safety profile. When it comes to the use of digital devices in treatments, the protocols from the field of medicine are adopted. The question is whether or not this evidence-based approach is useful when dealing with digital devices and whether the efficiency of a treatment can be understood without also looking at usability and lifestyle issues. Based on a case study of epilepsy, a literature study of protocols for investigating treatments using digital medical devices, the set-up of studies, the design of a current protocol for clinical trials, and finally preliminary results, we discuss whether clinical trials have to include usability studies to determine if a treatment is effective. © 2017 Association for Computing Machinery. All rights reserved.Clinical Trial; Evidence-based medicine; Lifestyle issues; Medical device; UsabilityConference Paper
84
Shin H., Kim H.-K., Gil Y.-H., Lee J., Yu C., Jee H.-K.23398217600;35519826800;16480239100;57200440241;23494381900;8863316400;Improved and Accessible E-book Reader Application for Visually Impaired People2017SIGGRAPH Asia 2017 Posters, SA 2017Scopus10.1145/3145690.3145748This paper presents a study of an accessible e-book reader application for visually impaired people. We interviewed 27 visually impaired people to understand their usage patterns of e-books and user requirements in terms of functions and interface of an e-book reader application. Based on this survey, we were able to establish the basic direction of development of our e-book reader application; we implemented the first version of the e-book reader focusing on basic functionality. This version of the e-book reader obtained a value of user satisfaction of more than 75% in the usability test. We are continuing to develop the next version of this e-book reader with differentiated functions for reading professional books that include equations, tables, graphs, and so on. In addition, we are considering supporting a simple and fast input method and providing personalized UI. Beyond e-book readers, we hope that our study will be useful when designing and developing various mobile applications, considering that visually impaired users want to obtain information and experience equal to that available to the non-visually disabled.E-book reader application; User interface design; User study; Visually impaired peopleConference Paper
85
Savindu H.P., Iroshan K.A., Panangala C.D., Perera W.L.D.W.P., De Silva A.C.57201299188;57205392499;57201284065;57201291065;57201295030;BrailleBand: Blind support haptic wearable band for communication using braille language20172017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017Scopus10.1109/SMC.2017.8122806Visually impaired people are excluded from many modern communication and interaction procedures. Assistive technologies such as text-to-speech and braille displays are the most commonly used means of connecting visually impaired people with mobile phones and other smart devices. Both these solutions face usability issues, so this study focused on developing a user-friendly wearable solution called the 'BrailleBand', based on haptic technology, while preserving affordability. The 'BrailleBand' enables passive reading using the Braille language. Connectivity between the BrailleBand and the smart device (phone) is established using the Bluetooth protocol. It consists of six nodes in three bands worn on the arm to map the braille alphabet; these are actuated to give the sense of touch corresponding to the characters. Three mobile applications were developed to train the visually impaired and to integrate existing smart mobile applications such as navigation and short message service (SMS) with the BrailleBand. The adaptability, usability and efficiency of reading were tested on a sample of blind users, which showed progressively improving results. Even though the reading accuracy depends on the time duration between characters (the character gap), an average Character Transfer Rate of 0.4375 characters per second can be achieved with a character gap of 1000 ms. © 2017 IEEE.Conference Paper
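[Note for triage] The reported transfer rate is consistent with a fixed per-character cycle. Assuming (our assumption, not stated in the abstract) that each character occupies an actuation interval t_char followed by the character gap t_gap:

```latex
% Hedged reconstruction of the reported figure:
% CTR = 1 / (t_char + t_gap).
% With CTR = 0.4375 char/s and t_gap = 1.0 s, this would give
% t_char = 1/0.4375 - 1.0, i.e. roughly 1.29 s of actuation per character.
\[
  \mathrm{CTR} = \frac{1}{t_{\mathrm{char}} + t_{\mathrm{gap}}}
\]
```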
86
Ishikiriyama J., Suzuki K.57201291797;22037104400;An interactive virtual mirror to support makeup for visually impaired persons20172017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017Scopus10.1109/SMC.2017.8122808Because visually impaired persons are not able to confirm the appearance of their own face, they are afraid of and uneasy about makeup. We have been developing a system that assists makeup application through verbal feedback according to the appearance of the user's face. The system encourages social communication by helping the user feel confident. In this paper, we introduce a new method of using a dedicated camera device and an image processing algorithm to quantify lip makeup. We designed the camera device to capture face images under different illumination types: white light and green light. In the image processing, we extract the lip area from a segmentation that uses the difference in saturation between the two lighting conditions. We also developed a symmetry histogram-based geometrical feature of lip shape to estimate the symmetrical characteristics. The experimental results show that the proposed approach yields results close to a human's subjective evaluation. © 2017 IEEE.Conference Paper
87
Burlacu A., Baciu A., Manta V.I., Caraiman S.23666339700;57200273902;6603231376;36093924000;Ground geometry assessment in complex stereo vision based applications20172017 21st International Conference on System Theory, Control and Computing, ICSTCC 2017Scopus10.1109/ICSTCC.2017.8107094Accurate ground area detection is one of the most important tasks in various stereo vision based applications, such as autonomous driving or assistive technologies for visually impaired. Correct assertion of the ground geometry improves obstacle detection algorithms by eliminating false positive locations in image. In this paper we provide an application oriented evaluation on the correlation of the ground geometry with the quality of the disparity map. The disparity map is processed in its V-map representations and two methods for 3D ground area points identification are discussed. Next, different types of surfaces are fitted and evaluated using real data from automotive and visually impaired assistive applications. © 2017 IEEE.automotive; disparity map; evaluation; ground geometry; visually impairedConference Paper
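[Note for triage] The V-map (V-disparity) representation mentioned in the abstract is a standard construction: for each image row, histogram the disparities occurring in that row, so that ground pixels accumulate along an oblique line that can then be fitted. A minimal numpy sketch (an assumption, not the paper's implementation):

```python
# V-disparity map: one disparity histogram per image row.
import numpy as np

def v_disparity(disparity: np.ndarray, max_disp: int) -> np.ndarray:
    """disparity: (H, W) integer disparity map; returns an (H, max_disp+1) V-map."""
    H = disparity.shape[0]
    vmap = np.zeros((H, max_disp + 1), dtype=np.int64)
    for v in range(H):
        row = disparity[v]
        valid = row[(row >= 0) & (row <= max_disp)].astype(np.int64)
        vmap[v] += np.bincount(valid, minlength=max_disp + 1)
    return vmap

# Toy example: a synthetic ground plane whose disparity grows with the row
# index, which shows up as a diagonal ridge in the V-map.
H, W, max_d = 120, 160, 64
ground = np.clip((np.arange(H) * 0.5).astype(int), 0, max_d)[:, None]
disp = np.repeat(ground, W, axis=1)
print(v_disparity(disp, max_d).argmax(axis=1)[:5])  # dominant disparity per row
```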
88
Jafri R., Campos R.L., Ali S.A., Arabnia H.R.19639942600;57196261575;55990870300;7003910983;Visual and Infrared Sensor Data-Based Obstacle Detection for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine2017IEEE AccessScopus10.1109/ACCESS.2017.2766579A novel visual and infrared sensor data-based system to assist visually impaired users in detecting obstacles in their path while independently navigating indoors is presented. The system has been developed for the recently introduced Google Project Tango Tablet Development Kit equipped with a powerful graphics processor and several sensors which allow it to track its motion and orientation in 3-D space in real time. It exploits the inbuilt functionalities of the Unity engine in the Tango SDK to create a 3-D reconstruction of the surrounding environment, then associates a Unity collider component with the user and utilizes it to determine the user's interaction with the reconstructed mesh in order to detect obstacles. The user is warned about any detected obstacles via audio alerts. An extensive empirical evaluation of the obstacle detection component has yielded favorable results, thus confirming the potential of this system for future development work. © 2013 IEEE.assistive technologies; blind; multimodal sensors; navigation; obstacle avoidance; obstacle detection; Project Tango; Unity; Visually impairedArticle
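[Note for triage] The detection itself is done with a Unity collider on the Tango-reconstructed mesh; as a language-neutral illustration of the underlying geometric test (our simplification, not the authors' code), one can alert when the tracked user position comes within a threshold distance of the reconstructed geometry:

```python
# Proximity test against reconstructed mesh vertices (illustrative only).
import numpy as np

def nearest_obstacle_distance(user_pos: np.ndarray, mesh_verts: np.ndarray) -> float:
    """Distance from the user to the closest mesh vertex, in metres."""
    return float(np.min(np.linalg.norm(mesh_verts - user_pos, axis=1)))

mesh = np.random.default_rng(1).random((1000, 3)) * 5.0  # stand-in mesh vertices
user = np.array([2.5, 1.0, 2.5])                         # tracked device position
if nearest_obstacle_distance(user, mesh) < 0.8:          # assumed alert radius
    print("audio alert: obstacle ahead")
```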
89
Reusser D., Knoop E., Siegwart R., Beardsley P.57200504697;55922286800;35926876800;7005022858;Feeling fireworks2017UIST 2017 Adjunct - Adjunct Publication of the 30th Annual ACM Symposium on User Interface Software and TechnologyScopus10.1145/3131785.3131811We present Feeling Fireworks, a tactile firework show. Feeling Fireworks is aimed at making fireworks more inclusive for blind and visually impaired users in a novel experience that is open to all. Tactile effects are created using directable water jets that spray onto the rear of a flexible screen, with different nozzles for different firework effects. Our approach is low-cost and scales well, and allows for dynamic tactile effects to be rendered with high spatial resolution. A user study demonstrates that the tactile effects are meaningful analogs to the visual fireworks that they represent. Beyond the specific application, the technology represents a novel and cost-effective approach for making large scalable tactile displays. Copyright © 2017 is held by the owner/author(s).Accessibility; Haptic device; Large interactive screenConference Paper
90
Minhat M., Abdullah N.L., Idrus R., Keikhosrokiani P.57200164929;36607367600;6507359601;54420191700;TacTalk: Talking tactile map for the visually impaired2017ICIT 2017 - 8th International Conference on Information Technology, ProceedingsScopus10.1109/ICITECH.2017.8080045A major constraint for the visually impaired is navigating independently in unfamiliar places. One of the most widely used assistive technologies is the tactile map, which consists of Braille labeling and feature annotations. The tactile map enables the visually impaired to understand the geographical information of a particular place. However, despite its wide usage, the tactile map still lacks usability and availability, especially in Malaysia. This paper aims to investigate the usability issues of tactile maps, identify the requirements, and propose a conceptual prototype named TacTalk to overcome the usability issues of the existing tactile map. Interviews were conducted with the visually impaired at Saint Nicholas' Home for the Blind in Penang. The results show lack of information, misinterpretation, complexity, and difficulty remembering the map as the four main usability issues. Hence the proposed solution is TacTalk, a 'talking tactile map' consisting of a tactile map with embedded buttons that connect to the TacTalk mobile application via Bluetooth to play an audio file once a button is pressed by the user. © 2017 IEEE.assistive technology; navigation; Tactile map; visually impairedConference Paper
91
Albusays K., Ludi S., Huenerfauth M.57189681874;57204223407;12240800100;Interviews and observation of blind software developers at work to understand code navigation challenges2017ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and AccessibilityScopus10.1145/3132525.3132550Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntactic highlighting or other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, thereby creating barriers for programmers who are blind, who are nevertheless using IDEs. To better understand their usage strategies and challenges, we conducted an exploratory study to investigate the issue of code navigation by developers who are blind. We observed 28 blind programmers using their preferred coding tool while they performed various programming activities, in particular while they navigated through complex codebases. Participants encountered many navigation difficulties when using their preferred coding software with assistive technologies (e.g., screen readers). During interviews, participants reported dissatisfaction with the accessibility of most IDEs due to the heavy use of visual abstractions. To compensate, participants used multiple input methods and workarounds to navigate through code comfortably and reduce complexity, but these approaches often reduced their speed and introduced mistakes, thereby reducing their efficiency as programmers. Our findings suggest an opportunity for researchers and the software industry to improve the accessibility and usability of code navigation for blind developers in IDEs. © 2017 Association for Computing Machinery.Accessibility; Blind Programmers; Code Navigation Difficulties; Programming Challenges; User StudiesConference Paper
92
Guerreiro J., Ahmetovic D., Kitani K.M., Asakawa C.56967056100;53867884600;15835267300;6603028733;Virtual navigation for blind people: Building sequential representations of the real-world2017ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and AccessibilityScopus10.1145/3132525.3132545When preparing to visit new locations, sighted people often look at maps to build an a priori mental representation of the environment as a sequence of step-by-step actions and points of interest (POIs), e.g., turn right after the coffee shop. Based on this observation, we would like to understand if building the same type of sequential representation, prior to navigating in a new location, is helpful for people with visual impairments (VI). In particular, our goal is to understand how the simultaneous interplay between turn-by-turn navigation instructions and the relevant POIs in the route can aid the creation of a memorable sequential representation of the world. To this end, we present two smartphone-based virtual navigation interfaces: VirtualLeap, which allows the user to jump through a sequence of street intersection labels, turn-by-turn instructions and POIs along the route; and VirtualWalk, which simulates variable speed step-by-step walking using audio effects, whilst conveying similar route information. In a user study with 14 VI participants, most were able to create and maintain an accurate mental representation of both the sequential structure of the route and the approximate locations of the POIs. While both virtual navigation modalities resulted in similar spatial understanding, results suggest that each method is useful in different interaction contexts. © 2017 Copyright held by the owner/author(s).Assistive Technology; Blind Navigation; Cognitive Mapping; Orientation and Mobility; Virtual NavigationConference Paper
93
Suzuki R., Stangl A., Gross M.D., Yeh T.57193575409;55258331900;7403744359;57194267314;FluxMarker: Enhancing tactile graphics with dynamic tactile markers2017ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and AccessibilityScopus10.1145/3132525.3132548For people with visual impairments, tactile graphics are an important means to learn and explore information. However, raised line tactile graphics created with traditional materials such as embossing are static. While available refreshable displays can dynamically change the content, they are still too expensive for many users, and are limited in size. These factors limit wide-spread adoption and the representation of large graphics or data sets. In this paper, we present FluxMarker, an inexpensive scalable system that renders dynamic information on top of static tactile graphics with movable tactile markers. These dynamic tactile markers can be easily reconfigured and used to annotate static raised line tactile graphics, including maps, graphs, and diagrams. We developed a hardware prototype that actuates magnetic tactile markers driven by low-cost and scalable electromagnetic coil arrays, which can be fabricated with standard printed circuit board manufacturing. We evaluate our prototype with six participants with visual impairments and found positive results across four application areas: location finding and navigation on tactile maps, data analysis and physicalization, feature identification for tactile graphics, and drawing support. The user study confirms advantages in application domains such as education and data exploration. © 2017 Copyright held by the owner/author(s).Dynamic tactile markers; Interactive tactile graphics; Tangible interfaces; Visual impairmentConference Paper
94
Stearns L., DeSouza V., Yin J., Findlater L., Froehlich J.E.56613109500;57200497672;57200503127;10040303000;7101665384;Augmented reality magnification for low vision users with the microsoft hololens and a finger-worn camera2017ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and AccessibilityScopus10.1145/3132525.3134812Recent technical advances have enabled new wearable augmented reality (AR) solutions that can aid people with visual impairments (VI) in their everyday lives. Here, we investigate an AR-based magnification solution that combines a small finger-worn camera with a transparent augmented reality display (the Microsoft Hololens). The image from the camera is processed and projected on the Hololens to magnify visible content below the user's finger such as text and images. Our approach offers: (i) a close-up camera view (similar to a CCTV system) with the portability and processing power of a smartphone magnifier app, (ii) access to content through direct touch, and (iii) flexible placement of the magnified image within the wearer's field of view. We present three proof-of-concept interfaces and plans for a user evaluation. © 2017 Copyright is held by the owner/author(s).Assistive technology; Augmented reality; Magnification; Reading printed text; Visually impaired; Wearable computingConference Paper
95
Iqbal M.Z., Shahid S., Naseem M.57190985880;23398723200;57055455000;Interactive Urdu braille learning system for parents of visually impaired students2017ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and AccessibilityScopus10.1145/3132525.3134809Braille literacy is one of the core pillars of education for visually impaired children. Previous studies have highlighted the importance of Braille for visually impaired children by suggesting that students who read Braille outside of the classroom have higher reading speed and fluency. In resource-constrained countries, such as Pakistan, visually impaired students may acquire the knowledge of Urdu Braille at special education schools. However, there is a dearth of resources for the parents of such children when it comes to learning Urdu Braille. Therefore, we designed a web-based Urdu Braille Translator and interactive Braille learning tool to enhance the Urdu Braille learning experience for parents of the visually impaired. The usability study of this tool was conducted with 15 parents of visually impaired students. © 2017 Copyright is held by the owner/author(s).Braille learning; ICT for Education; Self-Learning; Special EducationConference Paper
96
Jaramillo-Alcázar A., Luján-Mora S.57199996424;6603381780;Mobile serious games: An accessibility assessment for people with visual impairments2017ACM International Conference Proceeding SeriesScopus10.1145/3144826.3145416Nowadays, serious games make it possible to educate in an entertaining way in different areas. They have become a great tool in the learning process. However, the vast majority do not focus on vulnerable groups such as visually impaired people. On the other hand, the use of mobile devices has been growing, and they have become an important actor in the learning process, because many serious games are developed for these devices. Despite this, accessibility for the visually impaired has not been considered a necessary element in the development of this kind of video game. Some authors and video game development companies have proposed general guidelines for the construction of this type of application. However, these initiatives have not been formalized, so they need to be consolidated and analyzed to define a model for the accessibility assessment of video games oriented to people with visual impairments. This paper presents a compilation and analysis of accessibility guidelines for the development of video games for people with visual impairment. It also proposes a categorization of guidelines that can be used to analyze the level of accessibility of a video game, especially a serious game. As a case study, this categorization is used to evaluate selected mobile serious games and identify their level of accessibility for people with visual impairment. We propose an analysis tool for those who wish to develop mobile serious games for the visually impaired. © 2017 Association for Computing Machinery.Accessibility guidelines; Low vision; Mobile devices; Serious games; Visual impairmentConference Paper
97
Oszust M., Padjasek J., Kasprzyk P.24824819700;57193795837;57193788550;An approach to vision-based localisation with binary features for partially sighted people2017Signal, Image and Video ProcessingScopus10.1007/s11760-017-1083-xIn this paper, an approach to the development of a localisation system for supporting visually impaired people is proposed. Instead of using unique visual markers or radio tags, this approach relies on image recognition with local feature descriptors. In order to provide fast and robust keypoint description, a new binary descriptor is introduced. The descriptor computation pipeline selects four image patches with scale-dependent sizes around the keypoint and then places five square pixel blocks within each patch. The binary string is obtained from pairwise tests between directional gradients computed for the blocks. In contrast to other binary descriptors, the tests take into account gradient values computed for blocks from all patches. The proposed approach is extensively tested using six demanding image datasets. Some of them contain labelled indoor and outdoor images under different real-world transformations, as well as challenging illumination conditions. Two datasets were prepared for the needs of this research. Experimental evaluation reveals that the introduced binary descriptor is more robust and achieves shorter computation time than state-of-the-art floating-point and binary descriptors. Furthermore, the approach outperforms other techniques in image recognition tasks, making it more suitable for vision-based localisation. © 2017, The Author(s).Assistive technology; Binary descriptor; Image recognition; Keypoint matchingArticle
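[Note for triage] The abstract outlines the descriptor pipeline (4 patches x 5 blocks around the keypoint, one bit per pairwise test between directional block gradients). The sketch below illustrates only the general comparison pattern; the patch selection, block placement and gradient definition of the actual descriptor are not reproduced here.

```python
# One bit per pairwise comparison of per-block gradient statistics
# (simplified stand-in for the paper's directional-gradient tests).
import numpy as np
from itertools import combinations

def block_gradient(block: np.ndarray) -> float:
    gy, gx = np.gradient(block.astype(float))
    return float(np.hypot(gx, gy).mean())  # mean gradient magnitude

def binary_descriptor(blocks: list) -> np.ndarray:
    """Emit bit 1 if gradient(block i) > gradient(block j), for all pairs i<j."""
    g = [block_gradient(b) for b in blocks]
    bits = [int(g[i] > g[j]) for i, j in combinations(range(len(g)), 2)]
    return np.array(bits, dtype=np.uint8)

rng = np.random.default_rng(0)
blocks = [rng.random((8, 8)) for _ in range(20)]  # e.g. 4 patches x 5 blocks
desc = binary_descriptor(blocks)
print(len(desc), desc[:16])  # C(20,2) = 190 bits
```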
98
Smaradottir B.F., Martinez S.G., Haland J.A.56728878100;55804594700;57195960112;Evaluation of touchscreen assistive technology for visually disabled users2017Proceedings - IEEE Symposium on Computers and CommunicationsScopus10.1109/ISCC.2017.8024537Touchscreen assistive technology is designed to support speech interaction between visually disabled people and mobile devices, allowing the use of a choreography of gestures to interact with a touch user interface. This paper presents the evaluation of VoiceOver, a screen reader in Apple Inc. products, conducted in the research project 'Visually impaired users touching the screen - A user evaluation of assistive technology' together with six visually disabled test participants. The aim was to identify challenges related to the performance of the gestures for screen interaction and to evaluate the system response to the gestures. The main results showed that most of the hand gestures were easy for the test participants to perform. The system adequately responded to gesture interaction, but some inconsistent responses associated with several functionalities, and a lack of information, were found. © 2017 IEEE.E-Accessibility; Mobile assistive technology; Smartphone; Speech-assisted navigation; Usability evaluation; Visually disabled usersConference Paper
99
Muniandy M., Sulaiman S.56104806700;25825620900;Touch sensation as part of multimedia design elements to improve computer accessibility for the blind users2017International Conference on Research and Innovation in Information Systems, ICRIISScopus10.1109/ICRIIS.2017.8002447Many HCI researchers are keen on producing accessible computer applications for users. However, not many of these efforts focus on disabled users, especially the blind. Most of the technological advancement meant for blind users takes the form of assistive technologies, mainly screen readers. Blind users have highlighted the inefficiency of screen readers as well as the unavailability and expense of other assistive technologies. This paper proposes the use of touch sensation as an essential design element alongside the other multimedia design elements. A prototype incorporating the multimedia design elements was developed and tested among blind users to measure a usability score. The result indicates that touch sensation plays an important role in improving the representation of a computer application to the blind. Associating touch sensation with the accessibility of a computer application produced a totally new experience for blind users in perceiving information, thus enhancing the learning process altogether. © 2017 IEEE.Blind users; Computer accessibility; Multimedia design elements; Touch sensationConference Paper
100
Muscat A., Belz A.24766654200;24831581600;Learning to Generate Descriptions of Visual Data Anchored in Spatial Relations2017IEEE Computational Intelligence MagazineScopus10.1109/MCI.2017.2708559The explosive growth of visual data both online and offline in private and public repositories has led to urgent requirements for better ways to index, search, retrieve, process and manage visual content. Automatic methods for generating image descriptions can help with all these tasks, and also play an important role in assistive technology for the visually impaired. The task we address in this paper is the automatic generation of image descriptions that are anchored in spatial relations. We construe this as a three-step task where the first step is to identify objects in an image, the second step detects spatial relations between object pairs on the basis of language and visual features; and in the third step, the spatial relations are mapped to natural language (NL) descriptions. We describe the data we have created, and compare a range of machine learning methods in terms of the success with which they learn the mapping from features to spatial relations, using automatic and human-assessed evaluations. We find that a random forest model performs best by a substantial margin. We examine aspects of our approach in more detail, including data annotation and choice of features. We describe six alternative natural language generation (NLG) strategies, and evaluate the generated NL strings using measures of correctness, naturalness and completeness. Finally, we discuss evaluation issues, including the importance of extrinsic context in data creation and evaluation design. © 2017 IEEE.Article
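[Note for triage] Step two of the pipeline (predicting a spatial relation for an object pair from language and visual features, where a random forest performed best) can be sketched as follows; the feature layout and relation labels are invented for illustration and do not match the paper's dataset.

```python
# Toy random-forest spatial-relation classifier over object-pair features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Assumed features per object pair: [dx, dy, bbox overlap, size ratio, lang prior]
X = rng.random((200, 5))
relations = np.array(["left_of", "above", "on", "next_to"])
y = relations[rng.integers(0, 4, size=200)]            # fake labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))  # relations to be realised as NL descriptions (step 3)
```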