7.3 Inclusive Design Interfaces for Interactives
In its current approach to inclusive design, the CMHR uses three possible interfaces, depending upon the needs of the specific interaction. The Universal Keypads (UKPs) we developed resulted from early design decisions and are specific to the design needs of the CMHR.
Universal Keypad (UKP)
Some visitors, such as those who are blind or those with a mobility impairment, may find it difficult or impossible to navigate a touchscreen kiosk. The challenge for the CMHR was to make this type of installation more accessible in a way that could be applied consistently across the Museum’s exhibitions and easily learned by any user.
Given that users had been navigating websites with text-to-speech readers and typical keyboards for quite some time prior to our opening, all the equipment required to provide a solution already existed. The design concept was to create a keypad that would include only those keys critical to navigating the digital interfaces, and then to map those keys to the functional associations of a typical keyboard used in this manner.
Working with the CMHR’s inaugural core exhibition designers (Ralph Appelbaum Associates), we created design schematics visualizing what this concept could look like and ensuring such a device would work within the exhibition design. We contracted the Inclusive Design Research Centre (IDRC) at the Ontario College of Art and Design University (OCAD) to take the concept and validate or invalidate it, and, if validated, to provide technical specifications for building such a device.
The Museum UKPs – UKP-A (audio keypad) and UKP-I (interactive keypad) – are bespoke pieces of hardware that emulate keyboards. They were designed and developed by Electrosonic, the company responsible for all of the audio-visual hardware and integration at the CMHR.
Early on, it was decided that the UKPs would emulate keyboards to simplify implementation for each of the media producers, because working versions of the board would not be available for testing until late in the exhibit development process. Ideally, the boards would instead use a custom communications protocol over a serial connection. Each of the custom boards was fitted and installed by the exhibit fabricator.
UKP-A
The UKP-A is a stand-alone piece of hardware made to deliver audio content. It provides audio enhancement for a playback device, typically simple audio or video playback that does not require any interaction from the visitor.
From the perspective of content delivery, the UKP-A is a passive piece of hardware. It is headphone-jack aware and resets itself to a baseline audio level whenever headphones are plugged in. In other words, if a user has turned up the volume on the UKP-A, when those headphones are unplugged and the next set is plugged in, the volume resets to the original pre-set level. The UKP-A does not communicate the presence of inserted headphones to any parent device, and no keypress is registered.
The UKP-A has four components:
- Audio button: switches the audio playback between Channels 3 and 4
- Decrease Volume button: decreases the volume of the audio being played
- Increase Volume button: increases the volume of audio being played
- Headphone jack: accepts most standard mini headphone plugs
UKP-I
The UKP-I has the same components as the UKP-A and adds a direction pad (D-Pad) and additional controls to enable visitors to interact with an application. As with the UKP-A, volume is independent of any parent application and the volume of the UKP-I is reset upon removal and insertion of headphones into the jack.
The UKP-I acts as a keyboard, sending back key presses to an application. The UKP-I also has a toggle switch allowing two complete sets of keyboard mappings. This was provided for interactives that might have up to two keypads connected to the same computer.
The keypad emulates specific buttons on a keyboard, and software developers are expected to map these keystrokes to particular functionality in the developed application (see the sketch after the component list below). Developers must take care here: if a physical keyboard is expected to be part of a given interactive experience, they should treat the keypad as an alternate input device so that conflicts do not arise in use.
The UKP-I has the following components (four replicated from the UKP-A):
- Zoom In button: zooms the screen in one level of magnification on the object of focus
- Zoom Out button: returns the screen to the original display size
- Audio button: switches the audio playback between Channels 3 and 4
- Decrease Volume button: decreases the volume of audio being played
- Increase Volume button: increases the volume of audio being played
- Headphone jack: accepts most standard mini headphone plugs
- Back button: returns the user to their departure point in the previous hierarchy of content
- Home button: reset button for the exhibit, returning the user to the initial interaction experience; double tap exits accessibility mode
- Help button: plays a single static message for the specific interactive exhibit
- Four directional arrow buttons: directional keypad with four buttons used for on-screen navigation
- Select button: used to initiate an action or choose a selection
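Because the UKP-I presents itself to the exhibit computer as a keyboard, an application can handle it with ordinary key-event handling. The TypeScript sketch below illustrates this for a browser-based exhibit; the specific key codes and the dispatchExhibitAction hook are assumptions for the sake of the example, not the Museum’s actual hardware mapping.

```typescript
// Hypothetical bindings from UKP-I buttons to keyboard events; the actual
// key codes are defined in the Museum's hardware specification.
const KEYPAD_BINDINGS: Record<string, string> = {
  ArrowUp: "move-focus-up",
  ArrowDown: "move-focus-down",
  ArrowLeft: "move-focus-left",
  ArrowRight: "move-focus-right",
  Enter: "select",   // Select button
  Escape: "back",    // Back button
  Home: "home",      // Home button
  F1: "help",        // Help button
};

// Hypothetical application hook that routes an action to the exhibit logic.
declare function dispatchExhibitAction(action: string): void;

window.addEventListener("keydown", (event: KeyboardEvent) => {
  const action = KEYPAD_BINDINGS[event.key];
  if (!action) return;     // ignore keys the keypad does not send
  event.preventDefault();  // avoid conflicts with default keyboard handling
  dispatchExhibitAction(action);
});
```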
Mobile Application
The CMHR’s mobile application is available for both iOS and Android. It provides a layer of accessibility when neither of the two universal keypad approaches is implemented because of limitations in physical construction, design, or content. As a general rule, at this time, the mobile application is the least preferred solution: it is not synchronized with the exhibit experience and is removed from direct interaction with the exhibition as part of a shared visitor experience.
For more information on the Museum’s mobile application, see the Mobile Program section of this guide.
Universal Access Point (UAP)
How does one make printed text, an image in a frame, a graphic across a wall, or an artefact within a case accessible to someone who cannot see or read it? While we found a clear path forward to make digital content inclusive through captioning, descriptive tracks, and interpreters, we learned that the path for making non-digital content inclusive was not so clear.
Because we knew that the CMHR would be creating an audio guide, and that the preference was for an app/bring-your-own-device solution, the driver for the CMHR’s mobile program became extending the inclusivity of our core exhibition program.
Given that all content was being developed on computers in word processing applications, and that all content was being catalogued digitally in an enterprise content management system, it would be a relatively short leap to have digital content fed to digital end-points and read aloud by the text-to-speech reader of a mobile device. As such, the concept for the CMHR Universal Access Point (UAP) was born.
The UAP is a system made up of four components:
- A tactile floor strip that is cane detectable and lets visitors know there is content nearby.
- A tactile, high-contrast square marker fixed to the wall, furniture, or built environment at a consistent distance from the floor strip. The square marker contains tactile-embossed and braille numbers, as well as a tactile Bluetooth icon.
- A low-energy Bluetooth iBeacon.
- The CMHR mobile application.
The UAP functions by letting visitors know they are in the vicinity of static content. The user either enters the number into the mobile application or accepts the low-energy Bluetooth prompt. Both point to content identified within the enterprise content management system, which is then delivered to the mobile device and read aloud by the text-to-speech functions of the device. The static content of the Museum, such as images, texts, labels, and artefacts, is thus made accessible to visitors who cannot see or read the content.
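To make that flow concrete, here is a minimal TypeScript sketch of the lookup: a UAP number (entered manually or resolved from an iBeacon) fetches a content record, which is then handed to the device’s text-to-speech. The endpoint URL, field names, and announce helper are hypothetical, not the Museum’s actual CMS API.

```typescript
// Hypothetical shape of a content record delivered by the CMS.
interface UapContent { id: string; title: string; body: string }

// Hypothetical endpoint; the real CMS integration is not documented here.
async function fetchUapContent(uapNumber: string): Promise<UapContent> {
  const response = await fetch(`https://example.org/cms/uap/${uapNumber}`);
  return (await response.json()) as UapContent;
}

async function presentUap(uapNumber: string): Promise<void> {
  const content = await fetchUapContent(uapNumber);
  // In the real app this would be handed to the platform screen reader
  // (e.g., VoiceOver or TalkBack) through the accessibility APIs.
  announce(`${content.title}. ${content.body}`);
}

// Hypothetical stand-in for the platform accessibility announcement call.
function announce(text: string): void {
  console.log(text);
}
```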
Exhibit Audio Architecture
Early on, we realized that multiple audio channels would be required. Every exhibit computer therefore has a device that provides more audio channels than the stereo output found on most computers. The need for multiple channels also imposed software development restrictions: any platform that could not address multiple channels would be fairly difficult to use. In hindsight, even more channels would have been desirable, to clean up the overall architecture and provide better flexibility.
In the current audio technical implementation at CMHR, each exhibit comes with four channels of audio:
- Channels 1 and 2: stereo (left and right) mix
- Channel 3: a mono mix of the stereo output of Channels 1 and 2
- Channel 4: the mono mix of Channel 3 with descriptive audio and/or text-to-speech (media producers are responsible for audio ducking)
Channels 1 and 2 are typically delivered to the public speakers of an exhibition, while Channels 3 and 4 are delivered to both the UKP-A and the UKP-I. It is convenient to think of Channel 3 as accommodating visitors who are hard of hearing, and Channel 4 as accommodating visitors with vision impairments.
A Reaper project file that achieves the above four-channel setup within the constraints of the available hardware was provided to producers. Its software track that maps to hardware Channel 4 auto-ducks whenever output is detected on the text-to-speech track, and mixing occurs in software from Channels 1 and 2 to both Channel 3 (no ducking) and Channel 4 (ducking, with text-to-speech louder). This Reaper project file makes certain assumptions about the audio configuration of both the hardware audio device and the software audio routing of the underlying operating system (e.g., the Soundflower settings on Mac).
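As an illustration of the ducking behaviour the Reaper project automates, the following sketch lowers a program mix while a text-to-speech clip plays, using the Web Audio API in a browser context. The duck level and ramp times are assumptions, not the Museum’s actual settings.

```typescript
// Program mix feeding Channel 4; ducked while text-to-speech is audible.
const ctx = new AudioContext();
const programGain = ctx.createGain();
programGain.connect(ctx.destination);
// (Program audio sources would connect to programGain here.)

function playTextToSpeech(ttsBuffer: AudioBuffer): void {
  const tts = ctx.createBufferSource();
  tts.buffer = ttsBuffer;
  tts.connect(ctx.destination);

  const now = ctx.currentTime;
  programGain.gain.setValueAtTime(programGain.gain.value, now);
  programGain.gain.linearRampToValueAtTime(0.3, now + 0.2); // duck program mix
  tts.onended = () => {
    // Restore the program mix once the spoken track finishes.
    programGain.gain.linearRampToValueAtTime(1.0, ctx.currentTime + 0.5);
  };
  tts.start(now);
}
```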
When the CMHR did the in-house descriptive audio (DA) work for the insight stations, inconsistencies in volume were noticed during the iterative development process. In the future, the DA track and the video’s volume track should be normalized to each other. This is less of an issue when media production handles audio ducking manually and exports a flattened track that contains both channels.
A text-to-speech delivery mechanism called Ventriloquist was developed for use by external producers. The tool works around the limitation that some app platforms can address only two channels of audio output. The code repository for Ventriloquist is available as an open-source project, maintained on GitHub at https://github.com/humanrights/ventriloquist.
Visual Feedback
Because confusion about which interface is in control can arise when multiple users are in front of an interactive experience, visual feedback is critically important for reinforcing whether the touchscreen or the UKP is in use. While our team iterated early on an interaction model that would always seem “just right” and respond appropriately, a handful of edge cases would always creep into the mix. The final decision to provide a visual cue ultimately resolved any possible confusion.
There should be on-screen visual feedback whenever the UKP-I is in use, delivered through a screen notification and a focus highlight. These two elements are a minimum requirement; additional techniques (typeface bolding, background changes, and other visual treatments) may be layered on top of them rather than substituted for them.
Screen notification
The screen notification is simply a small rectangular overlay in the upper right quadrant of the display indicating that accessibility mode has been activated. This notification gives sighted users visual feedback that the keypad is the interface currently in control; it disappears when the accessibility interface has timed out or the user has touched the screen to return focus to the touchscreen controls.
Focus highlight
A visual highlight or focus rectangle provides additional visual affordance, indicating either the content being explored or the interface elements being used by the UKP-I. The rectangle has a transparent interior, so it does not obscure the content, and an opaque outline. A soft glow around the exterior of the rectangle can be used to provide additional visual feedback.
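A minimal sketch of such a highlight, assuming a browser-based exhibit; the colours and glow radius are illustrative, not the Museum’s specification.

```typescript
// Overlay element implementing the focus highlight described above.
const highlight = document.createElement("div");
Object.assign(highlight.style, {
  position: "absolute",
  pointerEvents: "none",      // never intercept touch input
  background: "transparent",  // transparent interior: content stays visible
  border: "4px solid #ffd400",                        // opaque outline
  boxShadow: "0 0 12px 4px rgba(255, 212, 0, 0.6)",   // soft exterior glow
});
document.body.appendChild(highlight);

// Move the highlight to the element that currently has keypad focus.
function focusElement(el: HTMLElement): void {
  const r = el.getBoundingClientRect();
  Object.assign(highlight.style, {
    left: `${r.left - 4}px`,
    top: `${r.top - 4}px`,
    width: `${r.width}px`,
    height: `${r.height}px`,
  });
}
```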
Magnification Controls
To serve visitors with partial vision, we realized that we needed a visual analogue to the audio approach of allowing users to increase the volume of the experience. As a result, all of our interactive elements allow users to zoom in on screen content. While working through combinations of possible zoom amounts and interface controls, we found that a fairly simple approach provided most of the benefit: it was straightforward for most developers to implement a single level of zoom whose focus followed the same focus provided by the UKP.
The keypad’s magnification controls provide increased detail for users who require larger text and images to review and interact with content. The magnification controls work in conjunction with the focus highlight described above, zooming in on the object currently highlighted. There is a single level of magnification; the Zoom In and Zoom Out buttons alternate between this magnified level and the default view of the application, and pressing either button repeatedly does not increase or decrease the magnification further. As the user navigates, the magnified area follows the focus highlight, jumping around the screen as needed. Zooming back out restores the default view of the application, and the screen no longer jumps around as the focus highlight moves.
While the physical size of the screen and actual on-screen experience should be considered in determining the zoom levels, the general specification is to have the default view be 100 percent and to have the zoomed in view be 200 percent of the default view.
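Assuming a browser-based exhibit whose root element can be scaled with a CSS transform, a minimal sketch of the single-level zoom might look like this; the 200 percent factor follows the specification above, while everything else is illustrative.

```typescript
const ZOOM_FACTOR = 2.0; // spec: zoomed view is 200 percent of the default view
let zoomed = false;

// Toggle between the default view and the single magnified level, centred
// on the element that currently has keypad focus.
function setZoom(root: HTMLElement, focus: HTMLElement, zoomIn: boolean): void {
  zoomed = zoomIn;
  if (!zoomed) {
    root.style.transform = ""; // Zoom Out always restores the default view
    return;
  }
  // Centre the magnified view on the focused element; because the scale is
  // absolute, repeated Zoom In presses do not compound the magnification.
  const r = focus.getBoundingClientRect();
  root.style.transformOrigin =
    `${r.left + r.width / 2}px ${r.top + r.height / 2}px`;
  root.style.transform = `scale(${ZOOM_FACTOR})`;
}
```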
While the current implementation jumps focus around for the user, an ideal version would transition smoothly between areas of focus with a quick pan. The timing would need refinement, and the motion ramp would need to be finessed (non-linear, with ramps at the start and end).
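For reference, the kind of non-linear ramp described here could look like the following sketch: an ease-in-out curve driving a quick pan between focus points. The duration and easing function are illustrative and would need tuning.

```typescript
// Ease-in-out curve: slow at both ends, fast in the middle.
function easeInOut(t: number): number {
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

// Pan the zoom origin smoothly from one focus point to another.
function panTo(root: HTMLElement, fromX: number, fromY: number,
               toX: number, toY: number, durationMs = 300): void {
  const start = performance.now();
  function step(now: number): void {
    const t = Math.min((now - start) / durationMs, 1);
    const e = easeInOut(t);
    root.style.transformOrigin =
      `${fromX + (toX - fromX) * e}px ${fromY + (toY - fromY) * e}px`;
    if (t < 1) requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}
```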
Audio Feedback
A handful of sounds were developed for the Museum to provide consistent feedback to users across all of the interactive elements. All of the sounds are quick, simple, and distinctive, intended to augment the experience without detracting from the content. As the complexity of future interfaces evolves, additional sounds may be required as part of the baseline audio experience.
The sounds we developed are:
- back.wav: to be played when the Back button action is completed
- exit.wav: to be played when accessibility mode is terminated
- option_next.wav: to be played when the right arrow key is pressed
- option_previous.wav: to be played when the left arrow key is pressed
- option_select.wav: to be played when the Select button is pressed
- option_wrap.wav: to be played any time a list of items wraps from start to end or end to start
- screen_change.wav: to be played when a new screen of content is presented
- startup.wav: to be played when accessibility mode is initiated
Sounds that were not originally in the overall specification but should be added to the suite of interface sounds include the following (a playback sketch follows this list):
- container_change: to be played when the focus changes containers
- screen_wrap: to be played when focus moves from the first container to the last, or from the last to the first
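A minimal sketch of wiring these files to consistent playback, assuming a browser-based exhibit and an assets/sounds/ path (the path is an assumption):

```typescript
// Names correspond to the interface sound files listed above.
type UiSound =
  | "back" | "exit" | "option_next" | "option_previous"
  | "option_select" | "option_wrap" | "screen_change" | "startup";

const soundCache = new Map<UiSound, HTMLAudioElement>();

function playSound(name: UiSound): void {
  let clip = soundCache.get(name);
  if (!clip) {
    clip = new Audio(`assets/sounds/${name}.wav`); // assumed asset path
    soundCache.set(name, clip);
  }
  clip.currentTime = 0; // restart if the clip is already playing
  void clip.play();
}

// e.g., playSound("option_wrap") whenever a list wraps from end to start
```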
Touchscreen vs. UKP-I Control
When a user presses any button on the interactive portion of the UKP-I, the interactive should switch to accessibility mode, indicated on-screen through visual feedback, and the experience proceeds with the “Initial Interaction.” When the user touches the screen while accessibility mode is engaged, a short countdown should begin, indicating a return to touchscreen controls. This countdown also needs to be conveyed verbally to the eyes-free user (a sketch of this logic follows the list below).
- If this countdown completes, control is then assumed to be on-screen, and accessibility mode is turned off.
- If the countdown does not complete because any key on the keypad is pressed again, accessibility mode is maintained and the countdown is disabled.
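The handover logic above can be summarized in a small state sketch. The countdown length below is an assumption; this guide does not specify one.

```typescript
let accessibilityMode = false;
let countdown: number | undefined;

// Any keypad press enters (or keeps) accessibility mode and cancels a
// pending countdown back to touchscreen control.
function onKeypadPress(): void {
  if (countdown !== undefined) {
    clearTimeout(countdown);
    countdown = undefined;
  }
  if (!accessibilityMode) {
    accessibilityMode = true;
    // Show the screen notification, play startup.wav, and begin the
    // Initial Interaction here.
  }
}

// Touching the screen while in accessibility mode starts the countdown,
// which must also be announced verbally for the eyes-free user.
function onTouch(): void {
  if (!accessibilityMode || countdown !== undefined) return;
  countdown = window.setTimeout(() => {
    accessibilityMode = false; // control returns to the touchscreen
    countdown = undefined;
    // Play exit.wav and hide the screen notification here.
  }, 5000); // assumed countdown length
}
```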
Initial Interaction
Every time a visitor engages with an interactive that has been reset to the default state, the inclusive interface requires two selections:
- Language selection
- Playback speed
Language selection
We optimized language selection for a single purpose: to get the system set to the appropriate language for the user as quickly as possible. To this end, we designed this selection to quickly present a binary choice in both languages.
Below, the language in which each prompt is to be spoken is indicated by a prefix (EN or FR); colour is also used to indicate the spoken language. A code sketch follows the table.
| Input | Output |
| --- | --- |
| Any key | EN: “Press Select to continue in English” FR: “Use Arrow Keys to choose French” |
| Any arrow key | FR: “Press Select to continue in French” EN: “Use Arrow Keys to choose English” |
| Select | (Set language and go to speed selection.) |
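A sketch of this binary prompt in TypeScript; the speak helper is a hypothetical stand-in for the exhibit’s text-to-speech, and the prompt strings mirror the table above (the FR-tagged strings would of course be spoken in French).

```typescript
type Language = "en" | "fr";

// Hypothetical stand-in for the exhibit's text-to-speech output.
function speak(text: string, lang: Language): void {
  console.log(`[${lang}] ${text}`);
}

let language: Language = "en";

// Handle one keypad press during language selection; returns the chosen
// language once Select is pressed, mirroring the table above.
function onLanguageKey(key: "anyKey" | "arrowKey" | "select"): Language | null {
  if (key === "select") return language; // proceed to speed selection
  language = key === "arrowKey" ? "fr" : "en";
  if (language === "en") {
    speak("Press Select to continue in English", "en");
    speak("Use Arrow Keys to choose French", "fr"); // spoken in French
  } else {
    speak("Press Select to continue in French", "fr"); // spoken in French
    speak("Use Arrow Keys to choose English", "en");
  }
  return null;
}
```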
Playback speed
There are five possible speed values: slowest, slow, standard, fast, and fastest. Below is a mapping of canonical speed names to words per minute (wpm); a code sketch follows the interaction table.
- Slowest: 80 to 90 wpm/25 percent of max value
- Slow: 100 to 110 wpm/35 percent of max value
- Standard: 160 wpm/50 percent of max value
- Fast: 240 wpm/75 percent of max value
- Fastest: 320 wpm/100 percent of max value
| Input | Output |
| --- | --- |
| (Arrived from language selection) | (In Standard speed) “Speed, Standard, 3 of 5, (PAUSE), use Left and Right Arrow Keys to adjust speed. Press Select to activate.” |
| Right Arrow | (In Fast speed) “Fast, 4 of 5, (PAUSE), use Left and Right Arrow Keys to adjust speed. Press Select to activate.” |
| Right Arrow | (In Fastest speed) “Fastest, 5 of 5, (PAUSE), use Left and Right Arrow Keys to adjust speed. Press Select to activate.” |
| Right Arrow | Wrap Sound (In Slowest speed) “Slowest, 1 of 5, (PAUSE), use Left and Right Arrow Keys to adjust speed. Press Select to activate.” |
| Select | (Set speed of voice and proceed to main content.) |
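Tying the speed names, wpm targets, and wrap behaviour together, the arrow-key handling might look like this sketch; speak and playSound are the illustrative helpers from the earlier sketches.

```typescript
declare function speak(text: string, lang: "en" | "fr"): void; // earlier sketch
declare function playSound(name: string): void;                // earlier sketch
declare const language: "en" | "fr";                           // chosen above

// Canonical speed names and approximate wpm targets from the list above.
const SPEEDS = [
  { name: "Slowest", wpm: 85 },   // 80-90 wpm, 25 percent of max
  { name: "Slow", wpm: 105 },     // 100-110 wpm, 35 percent of max
  { name: "Standard", wpm: 160 }, // 50 percent of max
  { name: "Fast", wpm: 240 },     // 75 percent of max
  { name: "Fastest", wpm: 320 },  // 100 percent of max
];
let speedIndex = 2; // start at Standard, "3 of 5"

function onSpeedArrow(direction: -1 | 1): void {
  const wrapped =
    (speedIndex === SPEEDS.length - 1 && direction === 1) ||
    (speedIndex === 0 && direction === -1);
  if (wrapped) playSound("option_wrap"); // list wrapped around
  speedIndex = (speedIndex + direction + SPEEDS.length) % SPEEDS.length;
  const s = SPEEDS[speedIndex];
  speak(
    `${s.name}, ${speedIndex + 1} of 5, use Left and Right Arrow Keys ` +
      `to adjust speed. Press Select to activate.`,
    language
  );
}
```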
Preferred voices (as of 2015)
These are the preferred English and French voices specified for use throughout the Museum’s exhibits:
- OS X – French: Julie
- OS X – English: Samantha
- Windows – French: Harmonie
- Windows – English: Heather
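For a browser-based exhibit, the preferred voices could be requested through the Web Speech API as sketched below; the fallback behaviour and locale codes are assumptions. Note that speechSynthesis.getVoices() may return an empty list until the browser fires the voiceschanged event.

```typescript
// Preferred OS X voices from the list above; Windows deployments would use
// Harmonie and Heather instead.
const PREFERRED_VOICES = { fr: "Julie", en: "Samantha" };

function speak(text: string, lang: "en" | "fr"): void {
  // getVoices() may be empty until "voiceschanged" fires; production code
  // should wait for that event before speaking.
  const voice = speechSynthesis
    .getVoices()
    .find((v) => v.name === PREFERRED_VOICES[lang]);
  const utterance = new SpeechSynthesisUtterance(text);
  if (voice) utterance.voice = voice; // otherwise fall back to the default
  utterance.lang = lang === "fr" ? "fr-CA" : "en-CA"; // assumed locale codes
  speechSynthesis.speak(utterance);
}
```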
Captioning
Experiences that deliver content by audio must provide other means of accessing this information for people who are Deaf or hard of hearing, or who speak a language other than the one being presented. Captioning renders speech and other audible information in the written language of the audio; this differentiates captions from subtitles, which render the speech of audio in another language or dialect. Captions allow descriptive audible information and subtitle information to be displayed together, which has advantages for Deaf and hard-of-hearing visitors.
Caption files are delivered as a separate digital file (e.g., an .srt text file) to allow the media program to be customized for a variety of presentations. For example, the use of a caption file allows for a customized presentation whether the media is being delivered in a theatre, as part of a built exhibit, or to a mobile device. This also provides greater accessibility for visitors playing media on a personal mobile device for which they have set their own preferences. Captions are considered “closed” and can be toggled between an “on” and “off” state. By default, all media in theatre presentations have captions displayed. For instances where the presentation is more intimate or discreet (e.g., a digital kiosk), the player allows captions to be turned off.
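Because captions ship as separate .srt files, each player parses them into timed cues. A minimal TypeScript sketch, assuming well-formed SRT input:

```typescript
// One caption cue: start/end times in seconds plus the display text.
interface Cue { start: number; end: number; text: string }

// Convert an SRT timestamp such as "00:01:02,500" into seconds.
function parseTimestamp(ts: string): number {
  const [h, m, rest] = ts.split(":");
  const [s, ms] = rest.split(",");
  return +h * 3600 + +m * 60 + +s + +ms / 1000;
}

// Split the file into blank-line-separated blocks: index line, timing line,
// then one or more lines of caption text.
function parseSrt(srt: string): Cue[] {
  return srt.trim().split(/\r?\n\r?\n/).map((block) => {
    const lines = block.split(/\r?\n/);
    const [start, end] = lines[1].split(" --> ").map(parseTimestamp);
    return { start, end, text: lines.slice(2).join("\n") };
  });
}
```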
Experiences must be organized so important sightlines are established and maintained. For example, if a person who is hard of hearing or Deaf is watching a screen, captioning and sign language must be in-line with the stage area, so both can be watched at the same time. Therefore, it is important to consider the placement of the captions to allow for sign language to appear adjacent to the text.
Mobile Program
Rapid advancements in smartphone technology have changed the nature of the mobile experience in museums. Where tour-based audio guides were once the only type of mobile experience available to museum visitors, we are now witnessing an explosion in the types of experiences available. The CMHR determined that a phased approach is the best strategy for introducing mobile technology into daily visits. We introduced a baseline mobile device service for our 2014 inauguration; with subsequent service offerings, we will build strategically upon programs coming to maturity after the initial opening.
The first phase of our mobile program supports the Museum’s mission by providing content to all types of audiences, making in-gallery content available, accessible and pertinent in new and innovative ways. Our mobile application gives visitors opportunities to access additional relevant content not normally available to Museum visitors, thereby broadening their understanding and increasing their engagement with the subject matter.
While the success of the mobile program will be measured over time by its ability to engage audiences and support learning programs, its first goal has been to facilitate access to the Museum’s in-gallery content.
During this first phase, our mobile project has addressed these objectives:
- Guarantee accessibility to in-gallery content to the broadest possible audiences.
- Enhance in-gallery content by providing visitors with additional information and educational opportunities.
- Help on-site visitors find and organize their visit using wayfinding technology.
- Offer additional programming in Tower of Hope and Ceremonial Terrace.
Mobile application and Bluetooth iBeacons
We developed our mobile application alongside the exhibit program in response to several priorities of the Museum. One priority is to ensure the exhibit program is accessible to the broadest possible audience. Our mobile application provides a layer of accessibility when neither of the two keypad approaches is implemented because of limitations in physical construction, design, or content.
For example, at certain locations that rely on gesture interaction, using a UKP became impractical and would have required substantial changes to either the physical design or the experience. Our mobile application became the common platform that could serve the content, even if it fell short of delivering the same sort of interactive experience.
From the outset, we developed our mobile application for both iOS and Android with full accessibility embedded, using the current best practices of the mobile platforms. Ironically, this sort of accessibility is among the best developed, given the proliferation of mobile multi-touch devices over the last decade and an incredibly active community. In a nod to experience design, we determined early on to use low-energy Bluetooth iBeacons as a mechanism to trigger the availability of nearby experiences. Our intent was to have the application feel contextual, giving visitors quick initial access to content that was physically nearby as they moved through the Museum, and then allowing additional exploration if desired.
We selected Estimote beacons from a growing field of options and tested them multiple times in the galleries to better understand signal reflection and blockage in the physical environment. Finding the right balance between signal strength, distance, frequency of signal, and battery life was tricky and required individual attention to each beacon to fully resolve. Additionally, we removed the beacons from their rubber housings and inserted them into custom exhibit boxes that could be located more easily throughout the exhibits and better integrated with the overall design.