Creating Localizable Learning Apps
So every child with smartphone access can learn.
The ability to learn to read and use that knowledge to learn represents a transformation of the social, intellectual, and emotional advancement of the individual and society. Yet, an estimated 617 million children and adolescents in the world are not learning to read. The aim of this effort is to facilitate the development of early educational content on mobile devices that can be “localized” or adapted (not just translated) to other languages and cultures so that everyone can gain access to quality learning opportunities. This document provides an overview and specific guidelines concerning the process of localizing early learning apps in general while addressing some of the specifics of localizing literacy apps to the languages and content needed by populations lacking effective learning opportunities.
If you would like to contribute suggestions or make edits to this guide for review, please do so here!
When our research revealed the dearth of early education apps in mother tongues, we started looking for good open source apps that were amenable to being localized. One of the very few we found was Feed The Monster, an early literacy learning app that won All Children Reading’s EduApp4Syria competition. Subsequently, Feed The Monster has been localized to over 20 languages (https://play.google.com/store/apps/developer?id=Curious+Learning). Thanks to the support of UNICEF, we have been able to compile information from both our experience and that of other app designers and developers to begin this living document as a resource for those wanting to create apps that can more easily be localized to multiple languages and cultures.
The intent of this document is to facilitate the creation of a large library of learning apps to reach those with the most need. The Principles for Digital Development is a great resource and all of these principles are relevant and helpful to anyone working in this space. They can provide a larger view where this document takes a deeper dive into how these principles can come to life in the context of creating learning apps.
The growth in mobile apps represents a unique opportunity to push learning and educational content to children in more personalized and adaptable ways. Learning apps represent a range of content and experiences from early childhood games that teach young children the names of shapes to complex tools that can chart the movements of stars and galaxies. While most learning apps are designed to complement a more formal learning experience, many apps could stand alone to teach children subjects they otherwise would never access.
Mobile apps are embraced by students and educators alike and embody several key characteristics that open up numerous possibilities for learning. One of those advantages is that mobile apps extend the learning period beyond the classroom, which is important for children with limited access to qualified teachers or who often miss school, like those displaced and on the move. Another advantage of mobile apps for learning lies in the ease of pushing flexible content to learners. Children have the opportunity to explore topics of their own interest that may not be adequately explored in the classroom or repeat content that they have not mastered. Finally, mobile apps encourage children to engage in self-learning. In a modern economy, people who are willing to learn flexibly and on their own initiative are prepared to confront the challenges of an ever-changing environment. Mobile apps can allow learning to occur anywhere and at a student’s own pace.
Access to mobile devices, especially among the most impoverished communities, is exploding. It is estimated that in 2020 over 140 million smart mobile devices will be sold in Sub-Saharan Africa alone (Dediu, 2014). India, with a population of over 1 billion and two-thirds of the world’s illiterate adults, surpassed 1 billion mobile subscribers in 2016 and was estimated to surpass 200 million smartphone owners in that same year (Rai, 2016). The ubiquity of mobile devices offers a ripe opportunity to reach and share content with individuals irrespective of location, lifestyle, and income level. Moreover, smart devices allow users to give feedback or interact with content even if they can’t read. Thus, mobile devices allow an organization to monitor and gather learning results from the end user, regardless of content or education level. It is worth noting that internet access in the least developed countries is not keeping pace with access to devices, suggesting that apps that do not require an internet connection will have the largest impact on the poorest communities, at least in the near future. (https://drive.google.com/open?id=1bgCE-Nz-Nw1Wn_bffu4rD61OCuX9JChS)
A great array of mobile apps exists for early learning. A brief search of the iTunes store reveals over 75,000 education apps, and a Pew Research Center report (Olmstead & Atkinson, 2015) found over 85,000 apps on the Google Play store classified as educational. Among these are many thousands of apps that claim to teach areas important for young children, like vocabulary learning or early numeracy skills. However, many of these apps are of questionable pedagogical quality, and few exist in the languages required for learning in socially marginalized populations. Most apps focus on the English-speaking learner, with a healthy selection in languages such as Spanish, French, Cantonese, or Hindi, but the supply of apps for languages like Igbo and Tagalog is essentially zero. There is, therefore, a need to create apps that can be easily localized to the languages spoken by those populations.
Not only will this process catalyze the development of learning apps that will allow many more children to acquire basic numeracy and literacy skills, but the creation of apps that have been localized to mother tongue will create a sense of ownership and belonging to the modern world for those users. Many groups in minority cultures around the world voice the need for their children to carry on their linguistic heritage. Having learning opportunities in their mother tongue is not only important for achieving higher educational outcomes but will provide schools and governments with diverse models with the potential for higher outcomes in all academic areas.
This guide has been developed with app designers and developers as its primary audiences. Others will find it helpful as well, in particular program managers who are planning to commission apps or incorporate apps into their programs, and funders who are encouraging or funding digital development. For someone building a learning app of any kind, this document will help them understand how to design and implement it so that it can be more easily localized. It can also address questions from a designer who wants to maximize the impact of early learning apps by developing apps that can be shared with many different cultures and communities. By developing this guide, we also hope to share more information about what types of literacy learning apps are needed.
This guide is not intended to provide designers with creative guidelines for how to design an app. Nor is it intended to provide instruction on app development or technology. In addition, you will not find research on particular pedagogies for learning areas of educational content. Rather this guide provides tools for guiding your design and development process for apps that are intended to be localized and adapted to various language communities and cultures.
The best way to use this guide is to first read it so that it can inform your design process. Then incorporate review points in your design and development process where you use the worksheets provided to better understand how your decisions will affect the long-term localizability of your product.
Here is an example of what that might look like. On the left you have the common development process of iterative rapid design and implementation with regular testing and evaluation with users. On the right the process is augmented to include a review point to evaluate the localizability of the design and return to the design process as needed. The “eval worksheets” included in this document are intended to be used as part of this localizability review. It is important to note that the broader your test audience is (the more fully it represents the diverse set of socioeconomic and cultural users you intend to reach), the more likely you are to identify where cultural biases in the design will encumber the localization process.
Localizable means designing and building the app so the process of converting to another language and/or culture is easier, faster, and less expensive. The highlights are:
(The exception is when the educational content is language elements, i.e. vocabulary learning.)
(examples: Chimple: http://chimple.org/about.html;
KitKit School: http://kitkitschool.com/;
Feed the Monster: https://play.google.com/store/apps/developer?id=Curious+Learning)
Converting an app to another language and culture can be an arduous and expensive process. If you intend to take your app through this process, then designing and developing with localizability in mind can greatly improve it. Many of these things can be done with little additional upfront effort, and in the places where that is not possible, you are at least making an informed decision about how you are affecting the later localization process.
Typically an app is first developed in one language for one region or culture. Only after it is complete is the localization process started. If multiple localizations are conducted, then a pipeline must be constructed to take the app through this process for each language and/or regional version. (An example of such a pipeline is given later in this document.)
This document gives you the tools to inform both the design and implementation process so that the resulting app can be more easily localized, thus lowering the cost and complexity of the localization pipeline.
While more often than not the app is first created in only one language, as illustrated above, a good way to identify future localization issues is to develop the app in at least two languages simultaneously.
When considering how best to make an app localizable, the best time to start is day zero of the design process. Many fundamental aspects of an app’s design can either help or hurt the localization process. If publishing the app in multiple languages is a priority, then it should be treated as such from the outset. When sitting down for design sessions, keep these guidelines in mind to inform your process. Worksheets are provided to facilitate a review and evaluation of how your design will accommodate later localization.
An app’s instructions can be its most language heavy component. A basic alphabet app may use complex audio or text instructions to teach a child how to play its matching game. The game may alternatively rely on a character avatar using lip-synced animation to introduce educational content or interaction mechanics. Previous research has indicated that these methods of instruction are detrimental to a child’s engagement, and our analysis shows that using them can also increase the cost per language of localization by a nontrivial amount. When designing an app, consider what percentage of total language content is comprised of instructing the users as to how to use the app. If it is greater than 15% the app may benefit from a revision of this content.
When revising instructional usage content to use less language, consider using discoverable interaction mechanics. An interaction is discoverable when a child can ‘stumble upon’ the correct mode of play through natural, curious exploration of the play space and positive reinforcement. Discoverable mechanics are therefore simple or comprised of a sequence of easy to find steps. An example of a basic discoverable mechanic is tap-the-correct-block, which only requires that game assets be highly responsive to touch interaction and an easily identifiable and engaging reward for target selection. All of these goals can be accomplished without using language, allowing any child to engage with the app and understand how to play no matter what language they speak.
Usage instruction in an app tends to fall into the following types:
Written -- easy to translate to another language. Often boring and not engaging for children. Useless for a pre-literate child.
Spoken -- more work to localize. More engaging -- but needs to be very short to avoid becoming boring to a child.
Animated -- if the animation does not include a character speaking it can be a very effective tool that needs no localization. A speaking character with lip sync animation compounds the work needed to localize.
Discoverable -- a fully discoverable interface has the benefit of both not needing localization and engaging the child’s sense of curiosity to play and discover what to do. This is not true if the interface is so complicated or obscure that a child could not figure it out.
To simplify the localization process as much as possible, the rule of thumb is to use discoverable interfaces wherever possible. When instruction is needed, it is advisable to accomplish the same end using animation (e.g. an animated hand showing where to touch) with no language content.
Types of Usage Instruction
If RED answer
Are some or all of the usage instructions WRITTEN?
Try to replace the written usage instructions with animation or discoverable interaction.
Are some or all of the usage instructions SPOKEN?
Try to replace the spoken usage instructions with animation or discoverable interaction.
Are some or all of the usage instructions ANIMATED?
(note: without voiceover or lip sync)
Is the interaction DISCOVERABLE?
If you have written or spoken usage instructions
Can you reduce or remove written or spoken usage instructions?
Know that using written and spoken usage instructions will add to your localization effort and cost.
Where you are using written or spoken usage instructions, have you deemed them necessary and worth the increased localization effort later?
For Literacy Apps
Have you avoided all written usage instruction?
Red answers are places where improvements could be made to make localization easier.
Green answers are places where you are doing well.
The use of sound and text can help guide a young player’s understanding of the story and characters in an app. However, apps with a great deal of text, speech or animation with lip synced audio can present many difficulties for localization. Some difficulties are quite obvious. All text and speech or audio used in an app must be translated. Text is not accessible to pre-literate users and audio or speech must be re-recorded for every language. An engaging narrative that is developmentally appropriate for small children can often be told with creative animation and decrease the need for translation.
Narrative and character development must also take cultural characteristics and practices into account. Literacy apps depend on the introduction of short common vocabulary items to teach children basic decoding or reading skills. It is often the most common words that can be most easily misunderstood. For example, the English word ‘house’ is frequently used in apps to teach vocabulary words. A picture often accompanies the word, but a house can look very different in different environments. A child who lives in a one room hut in India may identify the picture of a large house as a school or official building, not a cozy home. Furthermore, characters that seem culturally neutral may represent very specific associations in different cultures. An owl in most European cultures symbolizes wisdom. In Japan, owls bring good luck, but in East Africa, owls bring illness to small children. If culturally specific characters are incorporated into the design of an app, designers will need to develop different characters when localizing that app.
Apps that we feel have done a good job creating character and narrative that work across cultures:
KitKit School: http://kitkitschool.com/;
Feed the Monster: https://play.google.com/store/apps/developer?id=Curious+Learning
It is common in a design and development process for paper drawings of characters and narrative to be tested with small groups (focus groups) of users (children) to get their reaction. While in most cases it is sufficient to do this with a small number of easily accessible prospective users, when you are planning for localization you would ideally do this with groups from all the regions and cultures you plan to localize to. We therefore encourage you to do this early testing with as diverse a set of users as is practical.
Story and Character Media
If RED answer
Is some or all of the story WRITTEN?
Try to replace the written elements with animation.
Is some or all of the story SPOKEN?
Try to replace the spoken elements with animation.
Is some or all of the story ANIMATED?
(note: without voiceover or lip sync)
Are characters culturally specific?
Try to replace them with characters that are culturally neutral.
Are storylines culturally specific?
Try to adjust the storyline to be more culturally neutral.
If your story and/or characters are culturally specific
Are the story & character important to what the app is teaching?
(note if yes -- this will increase the localization effort)
Consider if adding additional characters would help to make it more broadly accepted and engaging.
Are the story and character culturally acceptable for all the cultures and languages that you plan to localize to?
Red answers are places where improvements could be made to make localization easier.
Green answers are places where you are doing well.
Here we address the interaction paradigm. For apps that are more gamified (having a more game-like structure), the interaction paradigm or game play is often codified as a set of game mechanics or interactions the users perform to progress through the game. An example of this may be an app that has you solving puzzles to feed a character or collect reward points. Other apps may be simpler and have a set of interactions that respond to the user -- what we often refer to as “interactive toys”. An example of this would be a letter block app where letters respond to the child’s touch by saying and animating words that start with that letter. There is no particular goal other than to motivate the child’s exploration. The interaction paradigm, then, is the way in which the user (child) interacts with the app, which is bound up with how they interact with the content we hope they will learn from playing with the app.
The interaction paradigm may or may not be specific to the content and/or the pedagogy. The pedagogy refers to the method of instruction in the design. A scope and sequence is a representation of what content will be taught and in what order. If the interaction paradigm is indeed specific to the content, and the content will need to change as you localize to different languages, then the interaction or game play will need to change to allow the app to be localized. This can be a very costly and time-consuming process, as the interaction paradigm is more often than not tightly interwoven into many different parts of an app.
This is particularly true with early literacy apps. The scope and sequence -- the order in which you learn letters and words -- can be very different in different languages. You may even have intermediate learning steps, like learning syllables before learning words in some languages.
While it may not always be possible, the more the interaction paradigm is agnostic to the content or the pedagogy the more adaptable the app, making the localization process easier.
If RED answer
Is the game or interaction mechanic content specific?
Is the content the same across languages & cultures?
While this is uncommon, there are some STEM skills that may be presented agnostic of language & culture.
Otherwise, is there a game or interaction mechanic that can continue to be used as the content changes with localization?
Pedagogy / Scope & Sequence
Is the game or interaction mechanic pedagogy specific?
Can the pedagogy stay the same across languages & cultures?
Reference several scopes & sequences (literacy examples in the appendix) and see if the game or interaction mechanics can be changed to work for all of them.
Early literacy apps are unlikely to have the same pedagogy or scope and sequence across languages.
Red answers are places where improvements could be made to make localization easier.
Green answers are places where you are doing well.
Whereas design choices have the effect of increasing or reducing the overall scope of a localization project, decisions made during development can impact the structure of the localization pipeline and simplify or complicate asset localization, depending on how developers decide to implement the app. This comes down to two main issues: how fonts are displayed in the app and how assets (text, images, audio, animation) are incorporated into the game and codebase.
Many app designers and developers have chosen to implement the way text appears in an app by embedding the text in image files. While an image file gives the designer more control over the look, it creates more work for the localization process. Graphic text requires an artist to modify the asset in addition to the translation of the text. This can be complicated by the fact that words in different languages can have very different lengths driving the need to make large changes to layout.
Generally, the solution to this issue is the use of rendered fonts instead of graphic text. With fonts, the text can be generated by a translator and procedurally adapted to fit the visual design of the app. If designers can find a font that fits within the visual aesthetic of the app and leave adequate room for larger word lengths, design work only needs to happen during the first implementation of the app, and subsequent localizations can use the same visual framework with different text. Text can be translated and replaced.
This approach has the added advantage of making moves to new alphabets easier as well. Fonts generally support Unicode characters and allow for accents, special characters, and different alphabets without any significant changes to assets or the app’s codebase. Text layout must also accommodate left-to-right versus right-to-left reading.
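As a rough illustration of the reading-direction issue, Unicode assigns every character a bidirectional class, and a layout can inspect the first "strong" directional character of a string to decide whether to mirror the interface. The sketch below is a simplified heuristic only; a production app would normally rely on its platform's layout engine and the full Unicode Bidirectional Algorithm rather than hand-rolled logic like this.

```python
import unicodedata

def is_rtl_text(text):
    """Heuristic: treat a string as right-to-left if its first strong
    directional character has bidi class 'R' (Hebrew etc.) or 'AL'
    (Arabic). Digits and punctuation are skipped as non-strong."""
    for ch in text:
        bidi = unicodedata.bidirectional(ch)
        if bidi == "L":           # first strong character is left-to-right
            return False
        if bidi in ("R", "AL"):   # first strong character is right-to-left
            return True
    return False                  # no strong characters found
```

A check like this could, for example, select between two prepared layouts (one per reading direction) when the app loads a new language pack.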
An easy way to keep time and costs down during a localization project is to identify the list of assets that will need to be changed prior to development, and to ensure that they can be easily accessed and replaced without relying on developers and engineers. When assets such as gameplay text or voiceover are directly embedded in the codebase, replacing them becomes a complicated and tedious job. A content expert working on translating the educational material in the app should need neither access to the codebase nor an intimate knowledge of where the assets are used in the code. Abstracting these assets into a data structure that is referenced by the codebase at runtime allows the code to work independently of what content is contained in the structure, so long as it matches the expected file types and naming conventions. The time saved on replacing these assets can be significant.
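A minimal sketch of this abstraction might look like the following, where each locale's UI strings live in their own JSON file that a translator can edit without touching code, and the app falls back to a default language when a table is missing. The file naming scheme (`strings_<locale>.json`) and the string keys are illustrative assumptions, not a standard.

```python
import json
from pathlib import Path

# Illustrative per-locale string tables; keys and values are examples only.
STRINGS = {
    "en": {"play": "Play", "well_done": "Well done!"},
    "es": {"play": "Jugar", "well_done": "¡Muy bien!"},
}

def write_tables(directory):
    """Write one JSON string table per locale, e.g. strings_en.json."""
    directory = Path(directory)
    directory.mkdir(exist_ok=True)
    for locale, table in STRINGS.items():
        (directory / f"strings_{locale}.json").write_text(
            json.dumps(table, ensure_ascii=False), encoding="utf-8")

def load_string(directory, locale, key, fallback="en"):
    """Look up a UI string at runtime, falling back to the default locale
    when the requested locale has no table."""
    path = Path(directory) / f"strings_{locale}.json"
    if not path.exists():
        path = Path(directory) / f"strings_{fallback}.json"
    table = json.loads(path.read_text(encoding="utf-8"))
    return table.get(key, key)
```

With this structure, adding a new language is a matter of dropping in one more JSON file; no code changes or engineer time are required for the text assets.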
If RED answer
Is on-screen text rendered using a Unicode font?
Consider switching to a Unicode font.
Do any image files have language in them?
Look into having this text font-rendered in the app.
Are assets embedded or hard-coded into the app?
(as opposed to stored in a data file, e.g. JSON or XML)
Consider making the investment to use an asset database.
Is the text layout amenable to languages with different word lengths?
Try to leave space so that languages with longer words will fit in the layout, or consider smaller fonts for these languages.
Is the text layout amenable to both left-to-right and right-to-left reading?
Are you only planning to localize to languages with the same reading direction?
Try to adjust the layout so it is amenable to text in either direction. If that is not possible, consider creating two layouts, one for each direction.
The wider circumstances under which an app will be used are valuable considerations at the design stage, in an effort to maximize its potential reach and impact. For example, consider whether the app is intended to be used with a teacher or a parent supporting the child. If so, is it necessary for the parent to be literate to use the app? In our experience, an app designed to be used by a child without adult help or guidance can also be used in a classroom or with a parent. The additional learning structure that an app provides can have other unintended positive consequences. In South Africa, second graders were using an early literacy app in school during free time while the teacher observed. After noticing how well the children were learning from the app, the teachers looked more closely at how the app was guiding the children and adapted their own teaching style to mimic this more effective pedagogy.
Technological choices can also have an impact on how well an app functions on low end devices and in more remote areas where internet connectivity is intermittent or expensive. Design choices can impact how readily the app will be adopted in certain regions of the world. Does the app take up too much memory when that space is precious? Does it need an internet connection that is either unreliable or expensive? Is the app available on Android? If the goal is to make apps available to those in the most need then these choices need to be taken into consideration as well.
If RED answer
Can the app be used by a child without help? (An app whose interface can be figured out by a child on his or her own can be used with or without a guardian or teacher.)
Test prototypes with small focus groups of children and observe what they find difficult to figure out (more importantly, what they give up on). Then use discoverable interfaces and simple animation to help them better understand what is expected of them.
Can the App work on a low end smartphone?
(see questions below)
Does the app promote exploration & encourage curiosity?
Focus groups to understand what is and is not working.
Is the app engaging to children?
Focus groups to understand what is and is not working.
Does the App or its documentation outline what a child will learn?
(This will help parents and teachers more easily adopt the app)
This should be included in the description in the store as well as in an “about” section in the app.
Technology for low end mobile devices
Is the app small in size?
Assets are often the largest part of an app. Try reducing the resolution and the number of animation frames.
Does the app work without an internet connection?
Look at including all the needed assets in the app package.
Can the app work on a lower end processor?
Reducing complexity of animation assets can often help here as well.
Is the App available on Android?
The majority of low end smartphones are Android based.
Effectively localizing early literacy apps requires more than translating the content of the app from one language to another. Languages differ in the characteristics of their writing systems and in the words that are suitable for learning to read. Writing systems vary in their general attributes, and the app must reflect those differences. In alphabetic languages, like English or French, each symbol represents a discrete sound. In syllabaries, symbols represent sound combinations or syllables. Alphabets tend to have fewer symbols with more predictability, while syllabaries have larger inventories of characters that vary when combined with other characters. For example, Hindi consonant characters change visually when followed by a different vowel sound. (See Appendix XX for an overview of writing systems). This variance between the phonological structure and orthographic representation of common vocabulary words drives the need for each language to have its own scope and sequence.
Another reason why it is impossible to simply translate an app from one language to another is that languages differ in terms of word length and complexity. From a child’s perspective, it is easier to learn to read words that are short, have predictable letter patterns and are commonly used in oral language. Hence, in English, reading instruction often begins with 3 letter words like cat and tip. However, the same word meaning might have a very different structure in another language. The word ‘eight’ in English is a useful and common word; in Spanish, “ocho” is a simple word with a common consonant-team that also helps children practice the vowel-consonant-vowel structure common to the language. In isiZulu however, “isishiyagalombili” is an incredibly difficult word to learn to read. For languages with few or no short words, it is necessary to introduce an intermediate stage between learning single letters and full words.
In the course of adapting apps for early literacy skills, we have created initial scopes and sequences for many languages. While these documents only cover the earliest of literacy skills, they represent a starting point and a resource to seed the development of an open source set of learning and language material for all languages. (see Appendix ??)
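One way to keep the interaction code agnostic to these per-language differences is to express each scope and sequence as plain data that the app walks through at runtime, as discussed earlier in the section on game mechanics. The sketch below assumes a simple nested structure; the unit types, characters, and ordering shown are invented examples, not real curricula. Note how Hindi gets an extra intermediate "syllable" stage that English does not need, without any change to the traversal code.

```python
# Illustrative per-language scope-and-sequence data. The stages, symbols,
# and ordering here are hypothetical examples only.
SCOPE_AND_SEQUENCE = {
    "en": [
        {"unit": "letter", "items": ["s", "a", "t"]},
        {"unit": "word", "items": ["sat", "at"]},
    ],
    "hi": [
        {"unit": "letter", "items": ["क", "म", "ल"]},
        {"unit": "syllable", "items": ["कम", "मल"]},  # intermediate stage
        {"unit": "word", "items": ["कमल"]},
    ],
}

def next_lesson(language, progress):
    """Return the next (unit, item) pair given how many items the child has
    completed, without the game code knowing anything about the language."""
    count = 0
    for stage in SCOPE_AND_SEQUENCE[language]:
        for item in stage["items"]:
            if count == progress:
                return stage["unit"], item
            count += 1
    return None  # sequence complete
```

Because the traversal logic never mentions a specific language, localizing to a new language means authoring a new data entry, with whatever intermediate stages that language requires, rather than rewriting game code.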
If your goal is to reach as many people as possible with effective learning software, you should require apps built with your funding to be both localizable and open source. By including localizability in the RFP and funding criteria, you can prompt developers to create apps that can be localized for much lower cost – often without significantly increasing the cost of development or reducing the effectiveness of the resulting product.
For example, by emphasizing localizability, Norad was able to ensure that Feed The Monster could be localized for 1% of the app’s development cost – instead of 10-50% of the cost if localizability were not part of the project’s requirements. While building for localizability may (or may not) increase the initial cost slightly, this cost will be recovered on the first localization.
If all funders prioritize localizability and openness, the cost of building good learning apps will decrease by two orders of magnitude.
Here is some example text that can be pasted into a grant requirement to ensure that both the code and the content are openly licensed:
Intellectual Property Requirements
To ensure that the investment of these funds has a significant multiplier effect and as broad an impact as possible, is cost-effective, and encourages innovation in the development of new learning materials, as a condition of the receipt of a [insert grant name] grant, the grantee will be required to license to the public all work (except for computer software source code, discussed below) created with the support of the grant under the most current version of the Creative Commons Attribution license (CC BY). Work that must be licensed under CC BY includes new content created using grant funds, modifications made to pre-existing, grantee-owned content using grant funds, and new works and modifications made to pre-existing works commissioned from third parties using grant funds.
For general information on CC BY, please visit: https://creativecommons.org/licenses/by/4.0.
Instructions for marking your work with CC BY can be found at: https://wiki.creativecommons.org/Marking_your_work_with_a_CC_license.
Pre-existing copyrighted materials licensed to the grantee from third parties, including modifications of such materials, remain subject to the intellectual property rights the grantee receives under the terms of the particular license. In addition, works created by the grantee without grant funds do not fall under the CC BY license requirement.
Further, the [insert funder] requires that all computer software source code developed or created with [insert grant name] funds will be released under an intellectual property license that allows others to use and build upon it. Specifically, the grantee will release all new source code developed or created with [insert grant name] grant funds under an open license acceptable to either the Free Software Foundation or the Open Source Initiative.
The ability to read transforms the social, intellectual, emotional, and spiritual advancement of the individual and of society, and is the foundation of all further learning. Yet, 160 million children in the world are denied the educational environment needed to learn this capacity because they do not have access to an adequate school. Once a child has learned to read, no amount of social or political instability can take that knowledge away. But for children who are not being educated, whether because they lack access to school entirely or because their schools are of such poor quality that learning to read is impossible, access to that knowledge remains a far-off fantasy.
Low levels of reading limit or bar an individual from accessing information and educational materials, social programs, health information, or even a local newspaper. In fact, the World Literacy Foundation estimates that low literacy skills cost the global economy $1.19 trillion per year in lost revenue and increased social costs (World Literacy Foundation, 2015). Literacy rates among the bottom billion (defined as those who survive on $2/day) rose to 54% by 2000 and have remained flat since. Speakers of non-dominant languages make up only 8% of the world's population, yet they account for 40% of the world's illiterates. As many as 85% of children in Sub-Saharan Africa are being asked to learn to read in a language that they do not understand. As investments in education decline and the number of available teachers reaches dangerously low levels, alternative solutions for teaching literacy skills to every child must be explored. Otherwise, another 157 million children will join the ranks of the 750 million adults who do not know how to read.
A portable mobile device-based solution may provide access to the skills and knowledge needed to acquire literacy. It is imperative that our most advanced technology be deployed to help solve this dire need. Many resources exist on the internet for the literate person to further their learning, but little content has been developed to allow children to advance from not reading at all, to simple decoding, to fluent reading. This is particularly important for educational settings where it is assumed that children by the end of grade 4 have made the transition from learning to read to reading to learn. After this time, no further instruction in the basic elements of literacy is provided, and those children who have not successfully acquired the full range of literacy skills they need will be left behind.
If children can become fluent readers, much of the more advanced learning content available on the internet could be localized to support further access. This guide will help you understand how to design and implement mobile apps to facilitate that process. We encourage more developers to engage in this work to diversify the creative minds struggling with the problem of teaching literacy to marginalized populations. The next section will explain what kind of apps are needed.
The approach to content design and curation adopted by this effort is a recreation of a well-designed, research-based reading intervention curriculum that might be found in an effective non-tech based approach. As such, the content must include direct and systematic instruction in all areas of reading and linguistic knowledge necessary to become a fluent reader.
The components of the reading brain circuit for written English comprised the template for what we called the essential “app map”. This template involved what we conceptualize as the ideal set of components necessary for the formation of pre-reading, with emphasis on the language, perceptual, and attentional processes in the young reading brain circuitry. For example, some skills included in the app map are those commonly associated with learning to read: i.e., phoneme knowledge or sound awareness, vocabulary growth, conceptual knowledge, letter-naming and letter-sound knowledge, sight word recognition, decoding and comprehension skills. Other skills, less commonly emphasized, involve the auditory perception of phonemes and rhythmic patterns known to foster phoneme awareness; knowledge of the multiple meanings of words; learning syntactic functions of words (e.g., action verbs), etc. This more comprehensive ideal template for apps is used as the basis for designing the digital learning activities in English and would be modified for the characteristics of the target language in the localization process.
The App Map will vary from language to language and will represent the areas of instruction required to become literate in that language. Equally important to the design of the app is an understanding of the linguistic, visual, and conceptual demands of the language being taught. What is common across learning apps in a privileged language like English is the bias toward developing apps for early learning and away from developing apps for more advanced content after children have begun the process of learning to read. We encourage developers to collaborate with teachers and academics to discover the important elements of language and literacy that could benefit from representation and instruction in mobile apps.
Great apps are built on great design. This is true for mobile apps that are used to share music, to connect to friends, and of course, for apps designed for children. Mobile apps designed to teach children must be both engaging and provide learning value. An app that is not engaging will not be played with for the length of time it takes for a child to learn the content. Equally important, is the accuracy of the learning content. An app may be engaging, but if the learning content is inaccurate or not incorporated into the gameplay, the child will not learn from the app.
Among the apps that have been reviewed by this team, the best apps were often the simplest. They focused on one (or only a few) skills from the literacy app map. We believe that the simpler and clearer your app is, the more successful it will be for both learning and engagement. There is room in the mobile app environment for many apps (that you or others can create), so it is better to create multiple apps than to try to build one app with too many components. Research also suggests that children avoid apps whose characters engage in long narrative explanations of the app and how it works. Young children are biased toward action and want to engage with the app immediately. By the same token, the app should avoid text instructions. Remember, your users can't read -- and most kids who can still don't want to read instructions.
This leads to what we call “discoverable” interaction which is described in more detail below.
For the most part, all the things that make an app good also make it more localizable.
There are a few additional qualities that help make an app more localizable, which will be covered in the following sections. A necessary step in the creation of an app for learning is the development of the scope and sequence of the content. A scope and sequence document is a list or schema of the ideas or content areas that will be taught and the order in which the content will be presented. The document describes what the user will have learned when they have mastered the material presented in the app. The value of creating a scope and sequence for the designer is not simply to list all the areas that the app will cover but also to determine how the mastery of the material in one area can facilitate learning the next area. Additional considerations regarding app design and pedagogy are covered in the next section.
The process of localizing apps for learning involves a deep understanding of both the content that the app is teaching and the needs of the child that will be using the app. The goal of any learning app should be to engage the curiosity of the user to learn what the app is presenting. Children come to a learning experience with a variety of needs, skills, and talents. For learning to be supported, an app must equally consider the learning demands of the content and the variable needs of the user.
One of the first steps in localizing literacy apps is the development of the scope and sequence. As explained above, the scope and sequence is a document that displays the content areas to be taught and the order in which those areas will be presented. The Scope and Sequence answers many questions about how a learning app will be designed, including: 1) What will the user learn first? 2) What will the user have mastered when they master the app? and 3) How does mastery in one area of the app support learning in the next area? As an activity in the app is designed, thought must be given to how the interaction of the game will support some aspect of the content area as described in the scope and sequence. In particular, the designer should consider whether the interaction supports any one of three levels of learning: 1) remembering and understanding, 2) applying and analyzing, or 3) evaluating and creating. While an app can certainly focus on higher-level skills like evaluating and creating, the designer must understand that a child who has difficulty remembering the content will not be able to master games that demand higher-level skills.
Feed the Monster engages all levels of learning the writing system of a language, but begins by systematically teaching children the relationships between sounds and symbols. When children have shown a proficiency in remembering the letters in a given area of the app, they are challenged to apply those skills and create target words. The content in FtM is structured to support children’s learning by introducing them to letters before reading words and reading words before spelling words. The Scope and Sequence for the English version of the app is shown below:
Letter group 1
Letters in level: c, m, n, p, t
Rhyme patterns in level: an, am, at, ap
Possible words in level: can, pan, tan, Pam, tam, pat, cat, mat, cap, map, nap, tap

Letter group 2
Letters in level: f, l, s, n, p, (z)
Rhyme patterns in level: if, is, in, ip
Possible words in level: if, is, pin, fin, sip, zip, nip, lip

Letter group 3
Letters in level: b, r, n, t, c, (g)
Rhyme patterns in level: ub, un, ut, ug
Possible words in level: rub, tub, cub, gun, bun, run, but, rut, nut, cut, bug, rug, tug

Letter group 4
Letters in level: x, h, p, t, f
Rhyme patterns in level: ox, op, ot
Possible words in level: pox, fox, hop, top, pop, hot

Letter group 5
Letters in level: w, d, t, n, g
Rhyme patterns in level: ed, et, en, eg
Possible words in level: wed, ted, get, wet, net, ten, den
Feed the Monster is roughly divided into 5 sections. Each section introduces the user to a subset of 5-6 consonants and 1 vowel. The first two levels of each section teach only the consonants, and the third level introduces the new vowel. In English, FtM is not comprehensive, but it provides multiple practice opportunities for letter names, letter sounds, rhyme patterns, and Consonant-Vowel-Consonant (CVC) words (e.g., CAT, BAT, BAG, HAT) for the base consonants and the short vowels. Subsequent versions of the game will include long vowels and other more complex multi-letter patterns.
Simply translating the scope and sequence of the English version of Feed the Monster is not possible. Languages differ widely in the number of letters, the number of vowels, and the length of common words understandable by children. Hence, localizing an early literacy app entails the following steps:
Before you begin, think through the following questions. These will inform the localization process below:
When beginning work on a target language, our first step was to contract with a native speaker with experience as an early childhood educator to develop the Scope and Sequence for that version of the game. The educators would then provide examples of short, easy-to-learn words for each level that a 4-year-old child would reasonably know. If the language has a high percentage of long words (e.g., Zulu), we began by identifying approximately 40 high-frequency syllables and using those as substitutes for words in the earlier letter groups. Any words introduced later in the game would be based on those syllables, so children could use them as a stepping stone toward quickly and fluently decoding words.
Over the last year, Curious Learning has undertaken an effort to localize the mobile literacy game Feed the Monster into high-impact languages defined as those spoken by populations with high rates of illiteracy. Feed the Monster was developed under an open source software license as a joint venture of the Apps Factory, The Center for Educational Technology, and The International Rescue Committee. Feed the Monster is a winner of the EduApps4Syria competition funded by the Norwegian Ministry of Foreign Affairs and was originally created in Arabic. The purpose of the app is to build foundational literacy skills in a highly engaging, game-based format. At the time of publishing this report, 18 versions have been released (excluding the original Arabic), with another 35 in development. Several features of Feed the Monster make this app especially appealing for localization at scale. Those features include the language neutral design, the choice of programming engine, and the fact that the app is developed as open source software.
Open development describes both content and code that can be openly shared and modified. The authors of Feed the Monster have made the source code available to others who would like to view, copy, alter, or share it under the ____ open source software license. The audio and graphic assets of the game were released under the Creative Commons Attribution 4.0 International license (CC BY), which enables the free sharing of what would otherwise be copyrighted work. CET and NORAD’s decision to publish the app code under an Open Source software license and the app content under the CC BY license is what undergirds the vast scope and low cost of the project, and should be a serious consideration for any developer who is designing with an eye toward localization.
Feed the Monster was developed using the Unity game development engine, a GUI based suite of tools aimed at facilitating the development of 2D and 3D games. There are many popular professional development and networking groups built around sharing skills and knowledge about the platform. There is a large pool of talent available for producing new localizations.
Unity organizes gameplay assets into “Scene” files based around the Object Oriented Programming model, where every Game Object contains its own audiovisual assets and scripting functionality. This format made it easy to identify the Game Objects in Feed the Monster important to localization, and to compile a list of assets and files that needed to be changed with each new language.
Language neutral is a term to describe a game or interaction that has no language specific text or design elements. A fully language neutral game is one with no text in the game components. Since Feed the Monster is a game to teach children how to read and spell words, many of the graphic elements illustrate letters and words that must be translated to the languages of interest. However, the gameplay or interaction design is not bound by language or culture-specific requirements. The interaction mechanics can be described as highly ‘discoverable’, a term that refers to a player’s ability to ‘stumble upon’ the mechanic without prior instruction or demonstration.
The play experience of Feed the Monster is centered on a simple match-and-drag game, where children identify the stone or stones that the monster wants to eat (via an auditory or visual prompt) and drag them from their position on the screen to the monster’s mouth. The match-and-drag mechanism involves letter-stone matching, where the child sees and hears a target letter and then finds the stones inscribed with that letter from a group of stones presenting either the target letter or foils. By changing the prompt and the foils given to the player, Feed the Monster is able to use its base mechanics to provide a complex and thorough lesson on sound-letter correspondence and spelling for a given set of letters and words.
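The base mechanic described above, a prompted target plus foil stones sampled from the current letter group, can be sketched as a small trial-builder. The function and field names here are our own illustration, not Feed the Monster's actual code:

```python
import random

def make_trial(target, letter_group, n_stones=6):
    """Build one match-and-drag trial: the target letter plus
    foil letters sampled from the rest of the current letter group."""
    foils = [c for c in letter_group if c != target]
    stones = [target] + random.sample(foils, min(n_stones - 1, len(foils)))
    random.shuffle(stones)  # so the target is not always first
    return {"prompt": target, "stones": stones}

# One trial drawn from letter group 1 of the English scope and sequence:
trial = make_trial("m", ["c", "m", "n", "p", "t"])
```

Varying which letters serve as foils (visually similar letters, letters with similar sounds) is what lets the same mechanic scale from easy to hard trials.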
The visual design elements of Feed the Monster have also been simplified to make the game more language- and culture-neutral. Because the basic game mechanism is simple, there is no need to include explanations of how to play the game. Hence, only the stones or the monster’s feedback phrases require translation into a target language.
Aside from its design choices, CET made development and implementation decisions in Feed the Monster that made developing and streamlining a localization pipeline easier. Two key aspects of the game—the use of fonts in gameplay trials and the abstraction of level content into XML files—dramatically reduced the amount of localization effort per language. Our team was able to develop programmatic solutions to the labor-intensive process of replacing hours of gameplay content with words and letters that often use entirely different vocabulary lists and even different writing systems. A similar game with graphic text and hardcoded level design would have dramatically increased the cost per language and may have rendered the app too expensive to localize, making these implementation choices two of the most important factors affecting localization.
The decision to use fonts in the gameplay text of Feed the Monster removed a substantial amount of work from the visual assets pipeline and allowed us to use programmatic approaches to text rendering that saved multiple person-days per language. The Unicode standard has support for every writing system we worked with (and more that we did not prioritize), including the Latin, Arabic, and Hebrew alphabets, the Devanagari script and its relatives, and the Georgian alphabet. Without Unicode support, our visual asset scope would have expanded to include every character in our target writing system, including special characters for a language or family of languages. We would also have been responsible for generating assets for every word in our vocabulary lists, which often comprise dozens of words. These expensive and labor-intensive requirements are completely eliminated by using Unicode to render gameplay content, and the design of the game can still remain cohesive through the use of custom or pre-designed fonts in complementary styles.
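A quick sanity check before committing to font-based rendering is to confirm that every character in the target language's content is an assigned Unicode codepoint, and so can be drawn by a font rather than shipped as a graphic. A rough heuristic sketch (our own, not part of the FtM pipeline) using Python's standard `unicodedata` module:

```python
import unicodedata

def chars_without_unicode_names(words):
    """Return any characters that have no Unicode name assigned,
    i.e. symbols that would need custom glyph assets instead of
    being rendered directly by a font."""
    missing = set()
    for word in words:
        for ch in word:
            if not unicodedata.name(ch, ""):  # "" when unnamed
                missing.add(ch)
    return missing

# Latin, Devanagari, and Georgian sample words are all covered:
problem_chars = chars_without_unicode_names(["cat", "कमल", "ქართული"])
```

Note that having a codepoint is necessary but not sufficient: the chosen font must also contain glyphs (and correct shaping rules) for those characters, which still has to be verified visually.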
Process Diagram for the Feed the Monster localization pipeline. Solid lines indicate the flow of production, while dotted lines indicate revision loops where assets are changed or mistakes are corrected.
The first step in adapting Feed the Monster to a new language is to develop a Scope and Sequence. The Scope and Sequence refers to a document that provides a layout of the order in which letters and sounds will be taught in a game and the full breadth of the content that the game will cover. Feed the Monster was originally instrumented with 6 letter groups across 77 levels. Native speaker translators were contracted to divide the letters or characters of a language into small groups consisting of a vowel letter and 5-7 consonants. For each group of vowel-consonant bundles, several short, common words were provided. If a target language has a high percentage of long words (e.g., Zulu), 40-50 high-frequency syllables were provided. Thus, users had the opportunity to practice decoding skills with syllables as an intermediate stage before advancing to words based on those syllables. Native speakers also provided translations of the list of words and phrases from the user interface, including text graphics and feedback audio. After the Scope and Sequence was created, another native speaker was recruited to review the document and identify errors or suggest changes. Once the document had undergone a complete review and revision, it was ready to send down the asset production pipeline. [Please see the Appendix for these Scope and Sequence documents.]
The Feed the Monster asset pipeline consisted of two asynchronous, simultaneous processes through which we produced the graphic and audio assets. The visual assets consisted of approximately 25 PNGs of stylized text displaying the feedback phrases and game tiles for the memory minigame. The audio assets consisted of recordings of the letter sounds, words, syllables, and feedback phrases from the scope and sequence. A graphic artist produced the visual assets using the text translations in the scope and sequence. Native speakers of the target language served as voice actors. Using a Google Apps Script, the Scope and Sequence document was parsed into a script for the actor, with a corresponding list of files and their contents to aid them in splitting the audio. Another native speaker (usually independent from any others previously contracted) would review the recorded audio for missing assets, missed translation errors, or mispronunciations.
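The step that turns the Scope and Sequence into a voice-actor script with a matching file list can be sketched in Python (the original used Google Apps Script; the column names and filename patterns here are assumptions for illustration, not the project's actual schema):

```python
import csv
import io

def recording_script(scope_csv):
    """Turn a scope-and-sequence sheet (assumed columns: level,
    letters, words) into (filename, text) pairs, one audio file per
    letter and per word, to guide the voice actor and audio splitting."""
    lines = []
    for row in csv.DictReader(io.StringIO(scope_csv)):
        for letter in row["letters"].split():
            lines.append(("letter_%s.mp3" % letter, letter))
        for word in row["words"].split():
            lines.append(("word_%s.mp3" % word, word))
    return lines

# A two-letter, two-word fragment of a scope-and-sequence sheet:
sheet = "level,letters,words\n1,c m,can cat\n"
script = recording_script(sheet)
```

Generating the filename list from the same document that drives level generation is what keeps the audio assets and the gameplay content from drifting apart.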
After the Scope and Sequence was translated and assets produced and reviewed, the gameplay levels were generated and the assets added to the project to compile and publish an APK.
Except in specific circumstances, the only edits to the level XML files we made were to the actual educational content (the letters and words presented, and which targets to use). We left the gameplay metadata (trial time, stone position, point values, bonus stones etc.) intact. We also did not change the levels’ letter group and used that number to help determine how to populate the file.
Every level file is structured identically; level-wide details such as prompt type, letter group, and default trial time are stored in the root tag <XMLLevel>. Each level file then has a list of the 5 trials in the game, which encode the stones required to win and the content and positions of all the stones in the trial. Because this information was encoded in a standard format, we were able to design a program to generate properly formatted level XML files using arbitrary data from a CSV file we generated from the master Scope and Sequence using the same Apps Script that generated the voice actor script. We then made the decision to specify content only at the Letters Group level, and randomize the production of trials within the app. These constraints allowed us to generate 77 level XML files instantaneously from a single Scope and Sequence document and with confidence that they were arranged in an appropriate difficulty curve for a developing reader in the target language.
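A minimal sketch of such a level generator is below. Only the `<XMLLevel>` root tag is taken from the description above; the trial and stone element names, attributes, and parameters are invented for illustration and do not match Feed the Monster's actual schema:

```python
import random
import xml.etree.ElementTree as ET

def build_level_xml(letters_group, targets, n_trials=5, n_stones=5):
    """Generate one level file: level-wide settings as attributes on the
    <XMLLevel> root, then one element per trial holding its target
    letter and shuffled stone contents (target plus foils)."""
    root = ET.Element("XMLLevel", lettersGroup=str(letters_group))
    for _ in range(n_trials):
        target = random.choice(targets)
        foils = [t for t in targets if t != target]
        stones = [target] + random.sample(foils, min(n_stones - 1, len(foils)))
        random.shuffle(stones)
        trial = ET.SubElement(root, "Trial", target=target)
        for s in stones:
            ET.SubElement(trial, "Stone").text = s
    return ET.tostring(root, encoding="unicode")

# One level built from letter group 1 of the English scope and sequence:
level_xml = build_level_xml(1, ["c", "m", "n", "p", "t"])
```

Because the generator takes its letters and words as plain data, rerunning it against a new language's Scope and Sequence CSV is enough to produce that language's full set of level files.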
APK assembly was straightforward. The project was hosted in a GitHub repository, and new languages were stored in branches off the master Arabic branch (for right-to-left languages) or the English branch (for left-to-right languages). Using version control allowed our developers to quickly switch between languages during development while keeping assets separate. After creating a new branch, the developer would delete all audio and graphic text from the Unity project and replace it with the target language’s assets. Using a Python script, the developer generated the level XML files from the CSV file previously produced from the Scope and Sequence sheet and inserted them into the project. Data collection was instrumented through Firebase, which allows Curious Learning to individually track the performance of each language.
After the APK was compiled, native speakers were recruited to play all the levels of the game and provide feedback on the accuracy and functionality of the game. The most common issues were misnamed audio or graphics files, missing assets, or overlapping stones in the gameplay levels. After review and revisions, the new version was published to the Google Play Store.
The process of localizing Feed the Monster to high-impact languages revealed consistent bottlenecks and challenges that fall into two categories: recruiting knowledgeable translators and reproducing games that require large numbers of assets. The process of recruiting native speakers for content localization and review required more time than other components of content production. For each language, a new set of contractors was recruited and trained to adapt and approve the scope and sequence. For privileged languages, like English and Spanish, which are spoken more frequently and by more people, more contractors were available. The challenge of identifying experienced translators was magnified when the target language was a minority language.
To effectively localize early literacy apps it is not possible to rely on translating the content of the app from one language to another. Languages differ in the characteristics of their writing systems and in the words that are suitable for learning to read and the app must reflect those differences. In alphabetic languages, like English or French, each symbol represents a discrete sound. In syllabaries, symbols represent sound combinations or syllables. Alphabets tend to have fewer symbols with more predictability while syllabaries have larger inventories of characters that vary when combined with other characters. For example, Hindi consonant characters change visually when followed by a different vowel sound. This variance between the phonological structure and orthographic representation of common vocabulary words drives the need for each language to have its own scope and sequence.
Before you begin, think through the following questions. These will inform the localization process:
Use the following steps to complete your localization:
Stage 1: Mapping out your language’s sounds and letters
Here you’ll map out all of the consonant and vowel sounds in your language, as well as the characters and multi-character combinations used to represent these sounds in text. In the next stage, you’ll draw on these sounds to fill out the language scope document, which is used to create the app for your language.
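As a concrete illustration, such a sound inventory can be recorded as a simple mapping from phonemes to the characters and multi-character combinations that spell them. The format and the sample English entries below are entirely our own, not a prescribed schema:

```python
# Illustrative Stage 1 inventory for English: each phoneme maps to
# the spellings (graphemes) used to represent it in text.
consonant_sounds = {
    "/k/": ["c", "k", "ck"],
    "/f/": ["f", "ph"],
    "/sh/": ["sh"],
}
vowel_sounds = {
    "/a/": ["a"],          # short a, as in "cat"
    "/ee/": ["ee", "ea"],  # long e, as in "feet", "heat"
}

# One flat, deduplicated inventory of every grapheme the app must teach:
graphemes = sorted({g
                    for spellings in list(consonant_sounds.values())
                    + list(vowel_sounds.values())
                    for g in spellings})
```

A complete mapping like this makes Stage 2 mechanical: the app sequence is an ordering over the grapheme inventory, typically starting from the most frequent and most regular spellings.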
Stage 2: Mapping out the app sequence
Transfer the comprehensive mapping of characters into an initial learning sequence for the app. In some languages, this may not cover all sounds -- instead, focus on the most common ones.
If you would like to embed this guide on your own website, you may do so as an iframe with the following HTML snippet:
 required under the rules of the EduApps4Syria contest