When learning goes online, educators must be attuned to the specific needs of English learners in their schools. 

In recent years, K-12 education has become one of the nation’s leading investors in computer technology. In 2015, for example, schools and districts across the United States spent $13.2 billion on digital devices and software — more than 10 times the amount spent by the federal government (International Society for Technology in Education, 2018; Technology for Education Consortium, 2017). And that was before the COVID-19 crisis forced school systems to shift to distance learning, causing millions of students, teachers, and administrators to become reliant on online platforms for communication and instruction.   

But while digital instruction (nowadays, digital instruction at a distance) has become ubiquitous, the nature and quality of that instruction vary widely, with differing results for different student populations.

Federal law regards technology as a means by which to equalize opportunities for all students, including those designated as English learners (ELs). And, under the Every Student Succeeds Act (ESSA), recipients of Title III funds — which are intended to supplement programs for ELs — can use a portion of the money to purchase digital tools and software that enhance second-language programs (South, 2017). Even before the recent shift to distance learning, this led schools and districts to invest in a “dizzying array” of devices and programs meant to improve students’ language development (Watson et al., 2015). However, implementation of these tools has tended to race out ahead of the research into whether, or under what conditions, ELs benefit from the same technology-based instruction as other students. For example, recent high-profile reports on technology use in schools — published by the U.S. Department of Education (2017b), the National Science Board (2018), and Evergreen Education Group, a nonprofit focused on digital learning (Watson et al., 2015) — all but ignore ELs in their analyses.  

This lack of attention to ELs is deeply concerning. At present, 10% of U.S. public school students are classified as ELs, and that number continues to grow (National Center for Education Statistics, 2019). Further, ELs are being outperformed by their monolingual peers on all metrics of achievement (U.S. Department of Education, 2019). To help them improve, many educators are turning to language-learning software (e.g., Duolingo) and digital content and assessment platforms (e.g., Newsela). However, given the considerable linguistic and sociocultural diversity among ELs, it seems unlikely that any one tool will be appropriate for all students in a given school or district (Flores & Rosa, 2015; Solano-Flores & Trumbull, 2008). Moreover, most teachers receive little pre- or in-service professional development related to instruction for ELs at all (Lucas, Villegas, & Freedson-Gonzalez, 2008; Rotermund, DeRoche, & Ottem, 2017), let alone support for utilizing technology effectively. This puts them in the extraordinarily difficult position of having to try out completely unproven practices on the ELs they teach.

Looking forward, we should certainly strive to incorporate best practices for technology use into training for teachers of ELs. However, with the sudden shift to distance learning and its continued use in the fall, teachers and school leaders require advice and support now. So, from my reading of the research in this area, I would encourage educators of ELs to focus on overcoming three challenges that they may very well be facing right now, having to do with (1) disparities in access to and use of technology, (2) biases that are baked into many software programs and digital platforms, and (3) language learners’ need for authentic social interaction.  

Gaps in accessibility and use  

The “digital divide” is usually framed in economic terms: Individuals of low socioeconomic status (SES), often without a college degree, tend to have less access to computers, Wi-Fi routers, and other technology than do their high-SES, college-educated peers (Attewell, 2001). Similarly, under-resourced schools tend to have fewer computers and slower internet connections than do resource-rich schools (Attewell & Battle, 1999). In recent years, these gaps have narrowed somewhat, but they remain significant (KewalRamani et al., 2018) and they continue to affect many ELs, who disproportionately attend under-resourced schools (U.S. Department of Education, 2017a). 

However, ELs also face a “second-level digital divide” (Hargittai, 2002), sometimes called a “digital use divide” (Warschauer, 2012), that is often invisible to teachers. Indeed, it may appear to some teachers that their low-income and/or minority students — including ELs — use technology in school more often than their high-SES and/or white peers (Warschauer, Knobel, & Stone, 2004). Yet the digital-use divide has to do not with the amount of time spent using computers but, rather, with the quality of technology-based teaching and learning. While some students may be encouraged to use computers to execute complex intellectual tasks (e.g., participating in historical simulations or working on collaborative writing projects), ELs are often assigned to use computers for vocabulary drills, phonics practice, and other rote lessons (Valadéz & Durán, 2007; Warschauer, 2012). That is, while they may spend a lot of time working at the computer, they are nonetheless stuck on the wrong side of a digital divide, cut off from rich technology-based experiences.

Unfortunately, few teachers have the resources and training to use technology in ways that provide ELs with meaningful opportunities for higher-order thinking and learning (Andrei, 2017). Few have been taught how to model such learning for non-native English speakers (Levy, 2009), how to create instructional groupings (e.g., pairs, small groups) that allow ELs and native-English speakers to collaborate effectively online (Freeman, 2012), or how to decide when ELs would benefit from turning the computer off and interacting face-to-face or over the phone (Schmid, 2008). Teachers need support to help them develop such expertise (KewalRamani et al., 2018; National Center for Education Statistics, 2018; Pierson, 2001). If our school systems continue to neglect that need, then “the digital-use divide could grow, even as access to technology in schools increases” (U.S. Department of Education, 2017b), and even if, as now, distance learning becomes the norm.   

Baked-in biases  

Teachers and administrators who work with ELs should understand that technology is always designed with particular users in mind. As Yong Zhao and colleagues (2004) put it, technology “comes with shapes and expectations. A piece of software often conveys a certain teaching approach, which to a certain degree actively shapes what the teacher can do with it” (pp. 24-25). In short, just because a program is well-designed for use by English speakers, that doesn’t necessarily mean it will work well for ELs, or for ELs from particular language backgrounds.

For example, speech-recognition software, which detects speech and assesses the speaker’s pronunciation, grammar, and/or diction, is popular with language teachers and students because it can help learners improve their spoken intelligibility or comprehensibility (though evidence suggests it is a more effective tool for teaching pronunciation than other language skills; Derwing, Munro, & Carbonaro, 2000; Isaacs, 2018). Further, it provides real-time feedback in a semiprivate environment (Ahn & Lee, 2016), which can shield language learners from scrutiny, allowing them to practice their English without fear that they’ll be mocked or bullied by native speakers (Neri, Cucchiarini, & Strik, 2003).

However, these programs do not all have a good track record of accurately registering the pronunciation of second-language speakers of English (Ashwell & Elam, 2017; Coniam, 1999; Derwing, Munro, & Carbonaro, 2000). Although some research (Cucchiarini & Strik, 2018; Litman, Strik, & Lim, 2018; Witt & Young, 2000) suggests these programs are improving, teachers should be aware that they may not respond in the same ways to native and non-native speech, or provide accurate feedback to speakers of nonstandard or nondominant language varieties (Hanani, Russell, & Carey, 2013), such as Mexican Spanish as opposed to Castilian Spanish. Even the variety of English (e.g., American, British, or Australian) that the system considers “standard” can affect accuracy (Coniam, 1999), meaning that software developed in the United Kingdom may be less suitable for users in the United States. Because of the assumptions made by its designers, speech-recognition software may mark pronunciation or dialect differences as errors, causing frustration among some ELs and making it difficult for them and their teachers to identify appropriate areas for growth.

Similarly, automated writing assessment systems may be biased against ELs (Weigle, 2013), in that these programs are typically designed with native English speakers in mind (Stevenson & Phakiti, 2019), and they may not take into account factors such as the student’s level of English proficiency or the grammatical features of their home language. Consequently, automated writing programs may over- or misidentify errors in ELs’ writing, as compared to human raters (Hoang & Kunnan, 2016). Moreover, the usefulness of these programs as formative tools may vary depending on the objectives of a lesson or unit (Ranalli, Link, & Chukharev-Hudilainen, 2017). Automated programs may be more helpful when the content focus is on formal features of language (e.g., subject-verb agreement) rather than on style or coherence (Deane, 2013).   

In short, before purchasing any speech-recognition software, online program, or tablet application meant to serve ELs, school and district leaders should determine what linguistic group it treats as “standard” and consider whether its design is appropriate for use with the given students and at the intended grade level. Administrators should be clear about what they want to accomplish with the software, they should be realistic about its limitations, and they should develop guidelines so that teachers use it only for its intended purpose.

The need for authentic social interaction  

ELs must have frequent opportunities to interact with English-proficient peers both in and out of the classroom (Firth & Wagner, 1997; Walqui & van Lier, 2010). It is by participating in everyday conversations and school-based discussions about rich, grade-level content that they improve both their comprehension of complex ideas in spoken English and their fluency in both academic and informal ways of using the language. And that holds true whether the discussion happens in a face-to-face setting or via technology.    

During the pandemic, while ELs are at home and may have very few chances to interact with English-speaking peers, it is especially important for teachers to find ways to promote these kinds of interactions (Liu et al., 2002). They should keep in mind, though, that different kinds of technology — such as whole-group videoconferences, one-to-one phone calls, asynchronous communications (e.g., viewing and responding to a video the teacher recorded earlier) — tend to be useful for different purposes. For example, computers and text-sharing programs might be preferable for a group essay project, where students will be using a full keyboard and screen, while cell phones or video chats might work better for interactions that are more spontaneous, informal, instantaneous (Liu, Navarrete, & Wivagg, 2014), and — some might argue — more authentic (Kukulska-Hulme & Shield, 2008).   

Further, teachers should understand that merely increasing the frequency of social interactions does not necessarily promote language development or content learning. For instance, if ELs send dozens of text messages back and forth with other students, then they may seem to be getting a lot of valuable practice communicating in English. And, in fact, evidence suggests that texting does help young-adult ELs develop some English skills (Li, Cummins, & Deng, 2017; McSweeney, 2017). But this is not an adequate substitute for real-time social interactions about course content, in which ELs are expected to use academic language to ask questions and explain, elaborate on, and discuss complex ideas (Kukulska-Hulme & Shield, 2008; Walqui & van Lier, 2010).

In short (and especially at a time when distance learning is the norm), teachers should be careful to choose technologies that match the given purpose. For instance, if the goal is to assess whether an EL understands a given concept in geometry or biology, then a real-time video chat may not be the best medium, since the student could have trouble explaining, in English, material that they understand perfectly well. And if the goal is to help ELs strengthen their oral communication in English, then it is important to give them frequent and varied opportunities to interact with English-speaking peers, from one-to-one video chats to group texts, e-mails, and virtual classroom discussion.  

At the same time, administrators should ensure that students and teachers have access to technology that promotes social interaction among ELs and their more English-proficient peers. Programs or software designed to help students practice on their own — with the computer providing all the feedback — are not likely to serve students as well as those that allow and encourage them to communicate back and forth with classmates and teachers.  

Making the best use of the tools we have  

Since the passage of ESSA in 2015, federal policy has specified that technology should be used to improve educational opportunities for English learners. During the pandemic, and now that schools are providing so much of their instruction online, that priority has become all the more urgent. As yet, however, there has been relatively little research into effective technology-based instruction for ELs in U.S. schools. But the research that does exist suggests that ELs often encounter challenges related to the accessibility of both hardware and software, the biases baked into those programs, and the extent to which those tools and programs promote genuine social interactions with peers.

The considerations I have outlined can help teachers and school leaders alike make informed choices about what technology they use and how they use it. When weighing the merits of a tablet-based app versus a computer-based website, for example, or deliberating between two online assessment platforms, educators should keep in mind that “technology itself is not a panacea” (South, 2017). Its value depends on the teachers who implement it and the school leaders who provide those teachers with guidance and support. When educators approach technology with clear eyes and an awareness of how students from all backgrounds might experience it, they are better able to provide learning opportunities of value to all students, ELs included.

References  

Ahn, T. Y., & Lee, S. M. (2016). User experience of a mobile speaking application with automatic speech recognition for EFL learning. British Journal of Educational Technology, 47 (4), 778-786.  

Andrei, E. (2017). Technology in teaching English language learners: The case of three middle school teachers. TESOL Journal, 8 (2), 409-431.  

Ashwell, T., & Elam, J. R. (2017). How accurately can the Google Web Speech API recognize and transcribe Japanese L2 English learners’ oral production? JALT CALL Journal, 13 (1), 59-76.

Attewell, P. (2001). Comment: The first and second digital divides. Sociology of Education, 252-259.  

Attewell, P. & Battle, J. (1999). Home computers and school performance. The Information Society, 15 (1), 1-10.

Coniam, D. (1999). Voice recognition software accuracy with second language speakers of English. System, 27 (1), 49-64.  

Cucchiarini, C., & Strik, H. (2018). Second language learners’ spoken discourse: Practice and corrective feedback through automatic speech recognition. In Smart Technologies: Breakthroughs in Research and Practice (pp. 367-389). IGI Global.  

Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18 (1), 7-24.  

Derwing, T.M., Munro, M.J., & Carbonaro, M. (2000). Does popular speech recognition software work with ESL speech? TESOL Quarterly, 34 (3), 592-603.

Firth, A. & Wagner, J. (1997). On discourse, communication and (some) fundamental concepts in SLA research. Modern Language Journal, 81 (3), 285-300.

Flores, N., & Rosa, J. (2015). Undoing appropriateness: Raciolinguistic ideologies and language diversity in education. Harvard Educational Review, 85 (2), 149-171.  

Freeman, B. (2012). Using digital technologies to redress inequities for English language learners in the English speaking mathematics classroom. Computers & Education, 59 (1), 50-62.  

Hanani, A., Russell, M.J., & Carey, M.J. (2013). Human and computer recognition of regional accents and ethnic groups from British English speech. Computer Speech & Language, 27 (1), 59-74.  

Hargittai, E. (2002). Second-level digital divide: Differences in people’s online skills. First Monday, 7 (4).  

Hoang, G.T.L. & Kunnan, A.J. (2016). Automated essay evaluation for English language learners: A case study of MY Access. Language Assessment Quarterly, 13 (4), 359-376.  

International Society for Technology in Education. (2018). Using ESSA to fund edtech: Getting the most out of Title IV-A. Washington, DC: Author. id.iste.org/docs/advocacy-resources/title-iv-a-guide-2019.pdf

Isaacs, T. (2018). Shifting sands in second language pronunciation teaching and assessment research and practice. Language Assessment Quarterly, 15 (3), 273-293.  

KewalRamani, A., Zhang, J., Wang, X., Rathbun, A., Corcoran, L., Diliberti, M., & Zhang, J. (2018). Student access to digital learning resources outside of the classroom (NCES 2017-098). Washington, DC: National Center for Education Statistics.  

Kukulska-Hulme, A., & Shield, L. (2008). An overview of mobile assisted language learning: From content delivery to supported collaboration and interaction. ReCALL, 20 (3), 271-289.  

Levy, M. (2009). Technologies in use for second language learning. The Modern Language Journal, 93, 769-782.  

Li, J., Cummins, J., & Deng, Q. (2017). The effectiveness of texting to enhance academic vocabulary learning: English language learners’ perspective. Computer Assisted Language Learning, 30 (8), 816-843.  

Litman, D., Strik, H., & Lim, G. S. (2018). Speech technologies and the assessment of second language speaking: Approaches, challenges, and opportunities. Language Assessment Quarterly, 15 (3), 294-309.  

Liu, M., Moore, Z., Graham, L., & Lee, S. (2002). A look at the research on computer-based technology use in second language learning: A review of the literature from 1990–2000. Journal of Research on Technology in Education, 34 (3), 250-273.  

Liu, M., Navarrete, C.C., & Wivagg, J. (2014). Potentials of mobile technology for K-12 education: An investigation of iPod touch use for English language learners in the United States. Journal of Educational Technology & Society, 17 (2), 115-126.  

Lucas, T., Villegas, A. M., & Freedson-Gonzalez, M. (2008). Linguistically responsive teacher education: Preparing classroom teachers to teach English language learners. Journal of Teacher Education, 59 (4), 361-373.  

McSweeney, M. (2017). I text English to everyone: Links between second-language texting and academic proficiency. Languages, 2 (3), 7.  

National Center for Education Statistics. (2018). Student access to digital learning resources outside the classroom. Washington, DC: Author. nces.ed.gov/pubs2017/2017098/guide.asp

National Center for Education Statistics. (2019). English language learners in public schools. Washington, DC: Author. nces.ed.gov/programs/coe/indicator_cgf.asp  

National Science Board. (2018). Science and engineering indicators 2018. Arlington, VA: National Science Foundation. www.nsf.gov/statistics/2018/nsb20181  

Neri, A., Cucchiarini, C., & Strik, H. (2003, January). Automatic speech recognition for second language learning: How and why it actually works. In M.J. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 1157-1160), Barcelona.

Pierson, M.E. (2001). Technology integration practice as a function of pedagogical expertise. Journal of Research on Computing in Education, 33 (4), 413-430.  

Ranalli, J., Link, S., & Chukharev-Hudilainen, E. (2017). Automated writing evaluation for formative assessment of second language writing: Investigating the accuracy and usefulness of feedback as part of argument-based validation. Educational Psychology, 37 (1), 8-25.

Rotermund, S., DeRoche, J., & Ottem, R. (2017). Teacher professional development by selected teacher and school characteristics: 2011-12 (NCES 2017-200). Washington, DC: U.S. Department of Education, National Center for Education Statistics.  

Schmid, E.C. (2008). Potential pedagogical benefits and drawbacks of multimedia use in the English language classroom equipped with interactive whiteboard technology. Computers & Education, 51 (4), 1553-1568.  

Solano-Flores, G. & Trumbull, E. (2008). In what language should English language learners be tested? In R.J. Kopriva (Ed.), Improving testing for English language learners. New York, NY: Routledge.  

South, J. (2017). Dear colleague letter: Federal funding for technology. Washington, DC: U.S. Department of Education, Office of Educational Technology. https://tech.ed.gov/files/2017/01/2017.1.18-Tech-Federal-Funds-Final-V4.pdf  

Stevenson, M. & Phakiti, A. (2019). Automated feedback and second language writing. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 125-142). New York, NY: Cambridge University Press.  

Technology for Education Consortium. (2017). How districts can save (billions) on edtech. Edweek Market Brief. marketbrief.edweek.org/wp-content/uploads/2017/03/How_School_Districts_Can_Save_Billions_on_Edtech.pdf  

U.S. Department of Education. (2017a). Our nation’s English learners. Washington, DC: Author. www2.ed.gov/datastory/el-characteristics/index.html  

U.S. Department of Education. (2017b). Reimagining the role of technology in education: 2017 national education technology plan update. Washington, DC: U.S. Department of Education, Office of Educational Technology. tech.ed.gov/files/2017/01/NETP17.pdf

U.S. Department of Education. (2019). Academic performance and outcomes for English learners. Washington, DC: Author. www2.ed.gov/datastory/el-outcomes/index.html  

Valadéz, J.R. & Durán, R.P. (2007). Redefining the digital divide: Beyond access to computers and the internet. The High School Journal, 90 (3), 31-44.

Walqui, A. & van Lier, L. (2010). Scaffolding the academic success of adolescent English language learners: A pedagogy of promise. San Francisco, CA: WestEd.  

Warschauer, M. (2012). The digital divide and social inclusion. Americas Quarterly, 6 (2), 131-135.  

Warschauer, M., Knobel, M., & Stone, L. (2004). Technology and equity in schooling: Deconstructing the digital divide. Educational Policy, 18 (4), 562-588.  

Watson, J., Pape, L., Murin, A., Gemin, B., & Vashaw, L. (2015). Keeping pace with K–12 digital learning: An annual review of policy and practice. Durango, CO: Evergreen Education Group.

Weigle, S.C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing, 18 (1), 85-99.  

Witt, S.M. & Young, S.J. (2000). Phone-level pronunciation scoring and assessment for interactive language learning. Speech Communication, 30 (2-3), 95-108.  

Zhao, Y., Alvarez-Torres, M.J., Smith, B., & Tan, H. S. (2004). The non-neutrality of technology: A theoretical analysis and empirical study of computer mediated communication technologies. Journal of Educational Computing Research, 30 (1-2), 23-55.

ABOUT THE AUTHOR

Jennifer Altavilla

JENNIFER ALTAVILLA is a PhD candidate in Educational Policy at the Stanford Graduate School of Education, and affiliate faculty at the Alder Graduate School of Education. She is a former elementary- and middle-school English Language Development teacher and English Learner Program Director.