Video details

React Native Accessibility Basics by Nivedita Singh - React & React Native Meetup

React Native

Nivedita Singh took the stage to talk about React Native accessibility basics. Watch her session to know more about it.


Hello everyone, I hope my screen is visible. I am Nivedita Singh, a mobile engineer at smallcase. I work on the cross-platform app and the website. Today I'm going to share what I've learned about accessibility and its implementation over the past few months.

So let's start with what we are going to discuss today. This talk will be about the basics of accessibility: why we should even care about accessibility, some guidelines that were set up to ensure it is implemented well, how Android and iOS natively allow developers to make accessible apps and the features provided by React Native for the same, certain problems that I faced, that we as a team faced, in implementing a11y for mobile apps, and then a deeper discussion on a11y tools.

So let's get started. What's the whole buzz about? What is a11y? A11y stands for accessibility: how usable a website, app, or other digital experience is by all possible users, regardless of their ability or their situation.

Why should you care about making your apps or websites accessible? Because over a billion people in the world are disabled, and they have a higher chance of being unemployed. Every one of us is likely to be disabled in some form at some point in our life. And some countries, like the USA and Canada, require you by law to make websites and apps accessible. Here in India this applies only to government websites, but I hope we see some improvement in this field.

Also, when we think about disability, I think the first thing that comes to mind is that it's about people who cannot see. But there are several types of disability. First, of course, there are people who are visually impaired. This can range from blindness and low vision to color blindness. Such people use screen magnifiers and the zoom capabilities of their screen readers. Then there are hearing impairments: people who are deaf or hard of hearing. This can range from mild to profound.
And we have people with mobility impairments. They can have disabilities concerning movement, triggered by a physical problem or a neurological disorder, because of which they cannot move their hands or feet. And then we have people with cognitive impairments: people with intellectual disabilities, or plain simple old age. They will have difficulty thinking and remembering.

So let's start with an example of how inaccessible certain things in our world are, and what we as developers can do to fix it. We all know about Wordle, the popular word game where you have to guess a five-letter word and you get some clues along the way that nudge you towards the right answer. Some of us have played Wordle and shared the results on our social media, like on our Twitter feed, where we can see them continuously now. It gets annoying sometimes, but it gets really annoying for people who need screen readers to go through that part of the feed, because it's inaccessible for them, and they cannot simply choose to look away; it's difficult for them to stop their screen reader from reading out whatever is there.

I'm going to share a clip of how a screen reader reads out Wordle results on Twitter. It reads out a long, indecipherable list of green squares, yellow squares, and white squares, which is inaccessible and confusing: "Three green square, white large square, white large square. New line. Three green square, white large square, yellow square. New line. Five green square."

Compare that with a useful description: "This is my latest Wordle, a grid of colored squares representing letters, which is five across and four down. The first and second rows both show three green and two gray squares. Green represents a letter that's in the word and in the right place; gray represents a letter that's not in the word. The third row shows three green squares, one gray square, and one yellow square."
"The yellow represents a letter that's in the word but in the wrong place. The fourth row shows five green squares, which means all the letters are in the right place and make up the word."

The second half of the video was about someone who cared. They made a tool which takes a screenshot of the emojis in the Wordle results and turns it into descriptive alternative text of what is happening. That part of the video was informative to everybody, even people who don't know anything about Wordle. It provided context, and it was genuinely accessible to people who require screen readers.

So we might be discriminating against people who require assistive technology if our app doesn't work for them. We're not doing something special if our apps are accessible. Not making them accessible is like deliberately using a certain color, let's say yellow, throughout our app when we know that a significant portion of our users cannot see the color yellow or perceive anything related to it, so the app will be of no use to them.

Something called the social model of disability says that people are disabled by barriers in society, not by their impairments, and that removing these barriers creates equality and offers disabled people independence. So someone who is in a wheelchair and requires a ramp to enter a building is not disabled because they need the ramp; they're disabled because society, for that particular building, does not provide a ramp for them to access that place.

So now that we know we should make our apps accessible, how do we go about it? Are there any rules that determine the right way to do things, or can we just cook up our own ways to make it work? As it happens, there is something called the Web Content Accessibility Guidelines (WCAG), developed by the World Wide Web Consortium (W3C).
In cooperation with individuals and organizations, these guidelines provide a single shared standard for web content accessibility that meets people's needs. They are organized under four principles: major things that web and app content must be in order to be considered accessible. Under the guidelines there are testable success criteria at three levels, A, AA, and AAA, where each is progressively stricter.

So let's say a website is following level A: that website won't differentiate between a positive signal and a negative signal using color alone. For example, it will have some text; it won't just have a red button and a green button. Level AA would mean the text color meets contrast requirements: the contrast ratio between the text color and the background color is at least a particular value. And level AAA would mean very dark text on a very light background, a very high contrast requirement, which is difficult to meet. So websites usually go for level AA.

Let's start with the first of the four principles: perceivable. Small screen sizes limit the information people can view at a time, especially when they need magnification to view it. Start by providing a reasonable default size for content, so that people using magnification don't have to keep zooming in and out to view it. Position form fields below their labels, so that people using screen readers don't have to scroll horizontally to access content. The success criterion related to zoom and magnification is that text should be resizable without assistive technology up to 200%, and the level AA contrast requirement is a ratio of 4.5:1.

The second principle is operable. Users must be able to control UI elements via mice, keyboards, or voice commands if they are not able to use a touchscreen. Touch targets like buttons and icon buttons should be at least 9 mm, and there should be some inactive space around them. This also helps people who have larger fingers.
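The contrast requirement mentioned a little earlier is mechanical to check: WCAG defines a relative luminance for each sRGB color and compares the lighter and darker values. A minimal sketch in plain JavaScript (not from the talk) of how that ratio is computed:

```javascript
// WCAG 2.x contrast ratio between two sRGB colors (a sketch, not from the talk).
// Each channel is linearized, then weighted to get relative luminance.
function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible contrast, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
// Level AA for normal text requires at least 4.5:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]) >= 4.5); // true
```

Design tools and linters do exactly this computation when they flag a text/background pair as failing AA.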
Touchscreen gestures in apps should be as easy and predictable to carry out as possible. Touch events should be triggered only when the user removes their finger from a particular element, and there should be some on-screen indicators that remind the user how and when to use a particular feature. For example, the Gmail app has a feature where we can swipe right on an element to archive it; if you're implementing something like this, you should give an indicator about it. Then there are device-manipulation gestures, things that are triggered by shaking the device. They are fine, but they're not accessible to people who cannot move, so for them there should be some alternative control options. Keyboard and touch control options and buttons should be placed in such a way that they're easily accessible to people who are left-handed or right-handed and to people who cannot move their devices too much because, let's say, the device is mounted on their wheelchair.

The third guideline is that content should be understandable. It should support both orientations, for the same reason as before: sometimes the device is fixed in place. Navigation content should be consistent and should appear in the same order on every screen as much as possible; any components that have the same functionality should be identified consistently and have the same text; we should provide some clear indication that elements are actionable; and we should provide tooltips and tutorials for custom touchscreen and device-manipulation gestures.

The last guideline is that content should be developed using well-adopted standards that will work across different platforms now and in the future. One example: set the virtual keyboard to the type of data entry required. If you want the user to input numbers, let them input numbers using a numpad; don't confuse them. And text entry can be confusing for some people.
It can be a long-drawn process, so you can shorten it by using select buttons and checkboxes, and by automatically detecting things like the user's date or time zone. Also support the characteristic properties of the platform: allow zoom, larger fonts, and captions. Most platforms have the ability to set a large font size, but apps often don't wrap the text, causing a lot of horizontal scrolling.

Now that we have come to the next part: I don't think there are any questions, so I'll just go on. A11y for mobile apps: how do we implement accessibility on mobile apps using Android, iOS, and React Native? The intention is to design well-defined, clear task flows with minimal navigation steps, especially for major user tasks, then label the UI components, and make sure users can navigate your screen layouts properly using keyboards and mice.

Here is a brief description of how accessibility can be implemented natively on Android. We start by describing each UI element using the contentDescription XML layout attribute. We can make a parent element focusable by setting screenReaderFocusable to true. This ensures that when the screen reader is reading out a card with a lot of text in it, the user doesn't have to separately focus on each element for the screen reader to read it out; all of the text is collected together and read out by the screen reader in one go. And for EditText fields, where users can input information, the labels that we use for them can carry an attribute called labelFor, which helps the accessibility service associate the label with the field.

Similarly, for iOS we have UIAccessibility, a set of methods provided on NSObject. NSObject is the root class of most Objective-C class hierarchies, from which subclasses inherit a basic interface to the runtime system. These pieces are UIAccessibility, UIAccessibilityContainer, UIAccessibilityAction, accessibility focus, and accessibility dragging.
They provide certain services: the first provides information about the views and controls in the app's UI; the second makes subcomponents accessible as separate elements; the third supports specific actions; the fourth tells you whether an assistive app such as VoiceOver has focus on a particular element (this returns a boolean); and the last is a pair of properties that allow you to fine-tune how drag and drop is implemented.

Now I'm going to talk about the React Native accessibility API. Both Android and iOS provide APIs for integrating apps with assistive technologies, like we just saw, and React Native has complementary APIs that let your app accommodate all users.

We have accessibility props like accessible. When this is true, it's similar to setting screenReaderFocusable on Android: it indicates that all the children of this particular view will be clubbed together and read out in a single go by the screen reader, instead of the user having to individually focus on each element. We'll talk more about this in a bit.

accessibilityRole communicates the purpose of a component. It can be a button, a link, and so on. We can also use heading as the accessibility role, which is helpful for users when they are navigating through the different pages of the app: it helps them know what is on a particular page, so they can quickly skip it if it's not interesting to them.

We also have something called accessibilityLabel. In this particular case we don't need a label, because the button already has the text "Submit code". This is a button with the role button (and a state, which we'll come to later), and "Submit code" is the text of the button, so we know that after we click on it, it's going to submit the code. But let's say it didn't have text, just an icon, say a refresh icon. The screen reader wouldn't know what's happening there.
Then we would need to add an accessibilityLabel to tell the user that this is a refresh button, that it refreshes everything. There's also something called accessibilityState, which denotes the current state of a component: we can mark a button as disabled through it, or use the selected or expanded states. Lastly, we have something called accessibilityHint. This is for situations where, let's say, clicking on a certain button is going to open the mobile browser from the app rather than a new view inside it; you would then like to inform the user beforehand about the effect of that interaction.

There are other accessibility props available in React Native; some of them are specific to one platform. For example, accessibilityLiveRegion is a good way to make announcements, specific to Android. What happens here is, let's say you have a particular view at the top of your app where you make announcements, and you're currently focused on some other view. When there's a change to that announcement view, you'll be automatically informed about it by your screen reader. There are two modes, polite and assertive: polite means your screen reader won't interrupt whatever it's reading right now, and will come back to that announcement later.

Okay, I don't think there are any questions, so I'll just continue. Now for some examples of the problems that I faced, that we as a team faced, while starting on the journey of making our app accessible. A major problem is making images accessible. The screen reader can easily read out text, but what happens when we have to convey some important information using images? Let's say you have a carousel banner, as in this smallcase example. The screen reader won't be able to read out the text in the image, because it's a part of the image.
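As an illustrative sketch of how the props discussed above fit together in a component (the component, handlers, label strings, and image paths here are hypothetical, not from the talk):

```jsx
import React from "react";
import { Image, Pressable, Text, View } from "react-native";

// Hypothetical screen illustrating the accessibility props discussed above.
function ExampleScreen({ onSubmit, onRefresh, isSubmitDisabled }) {
  return (
    <View>
      {/* Text button: the visible text doubles as the accessible name. */}
      <Pressable
        accessibilityRole="button"
        accessibilityState={{ disabled: isSubmitDisabled }}
        disabled={isSubmitDisabled}
        onPress={onSubmit}
      >
        <Text>Submit code</Text>
      </Pressable>

      {/* Icon-only button: without a label, the screen reader has nothing to read. */}
      <Pressable
        accessibilityRole="button"
        accessibilityLabel="Refresh"
        accessibilityHint="Reloads the latest data"
        onPress={onRefresh}
      >
        <Image source={require("./refresh-icon.png")} />
      </Pressable>

      {/* Image that carries information: describe it, since text baked into
          the bitmap is invisible to assistive technology. */}
      <Image
        source={require("./carousel-banner.png")}
        accessible={true}
        accessibilityRole="image"
        accessibilityLabel="Festive offer: zero fees on your first order"
      />

      {/* Android-only: announce changes to this view without moving focus. */}
      <View accessibilityLiveRegion="polite">
        <Text>Order placed successfully</Text>
      </View>
    </View>
  );
}

export default ExampleScreen;
```

This is a sketch under the assumptions above, not the exact code from the session.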
So there should be an accessibilityLabel assigned to that image to tell the screen reader what the banner is about. Similarly, you can have icons acting as buttons that the screen reader won't read out; for instance, it won't tell you that a buy button exists there unless you give it a role and a label.

Now I'll talk about this particular issue. This is a smallcase example, but we can take it as a general example of a card for an item with a few properties: let's say a name, a description, a minimum amount, a category, and an icon for watchlisting, that is, for keeping track of how the item progresses. The card itself is clickable; it leads us to a page where we get more information about this particular entity. And we also want the watchlist icon to be clickable.

Now, if we assign a label to the icon and ask the screen reader to go through the card, it reads out the whole card and, at the end, "tap to watchlist". This is wrong information, because tapping on the card is going to lead us to the profile page, not watchlist the element. Why is this happening? Because of a rule that says we cannot have an interactive control with focusable descendants: if the card itself is interactive, its children should not be focusable. accessible is true here by default, because the card is a touchable element; as a result, all the child elements are gathered together and read out, and the icon cannot be individually focused.

The way to fix this, because we want screen reader users to be able to watchlist, is either to set accessible to false on the card, so that its elements can be focused individually, or to make only the header clickable, which is the better solution.

The next problem: let's say the screen reader has to read out information laid out in two columns. The screen reader reads it across the rows, interleaving the columns: "stats when equipped, plot, head, efficiency, AMA five, protection 30%".
It's going to mess up the whole thing; someone who cannot look at the screen will have no clue what's going on. The problem here is with accessibility focus order. On the web, the flow in which things are read out is determined by looking at the DOM tree and going from top to bottom. We have no such structure in a mobile app; we don't have semantic elements. So iOS natively reads elements from left to right, top to bottom, which is not appropriate for our case. Right now React Native has no official solution to this, but a workaround is to create a wrapper that adjusts its behavior according to the native platform it's running on and groups all the subviews together for accessibility purposes.

That's it about implementation. I don't think there are any questions so far, so I'll just continue. So, a11y tooling: what is that? We're going to talk briefly about tools that we as developers can use to test whether our apps are actually accessible to users.

We can start with iOS. For simulators, there's something called the Accessibility Inspector. We can use the crosshair icon here, this one, and then hover on elements to find out their accessibility properties. We can also use VoiceOver with the simulator. VoiceOver is the screen reader for Apple platforms; on a Mac we trigger it with the Control and Option keys, called VO since they're always used together, plus the Space key to activate an element. If we have a physical device, we can use VoiceOver with the usual screen reader gestures, exactly how a user would use it.

Next, for Android, we have something called Accessibility Scanner. This is not for the simulator; we use it on a physical device.
It is an app which can record workflows or take screenshots of screens and then audit them for accessibility issues. It can check whether your content is labeled, whether the touch target sizes are large enough, whether items are clickable, and whether the text has sufficient contrast, and then it can propose solutions. For example, this is the home page of the smallcase app; when we run Accessibility Scanner on it, we get a few suggestions. The help icon is not accessible at all, and on the account screen, if we tap on this watchlist icon on the card, we can see more information: this item may not have a label, and the touch target size could be larger.

Here are a few TalkBack gestures. What is TalkBack? It is the screen reader used on Android.

All of these can be used as tools, but they cannot replace a human testing your app for accessibility. Accessibility needs to be approached holistically, because tools cannot tell you whether your UI conveys semantic information simply and clearly. A tool can tell you that a certain element is missing a label, but it can't tell you whether the label you've added makes any sense. And it cannot report on how well an app supports multiple modes of interaction, like touch or voice. So we cannot just rely on tooling; we need manual testing to verify that our apps are accessible.

Here are a few links that I referred to while working on this topic. Thank you.