As a disability reporter for The Washington Post, Amanda Morris has covered hearing loss, long COVID and the spread of fake sign language on TikTok, among other topics.
One of her recent stories showed how some patients with amyotrophic lateral sclerosis (ALS) are turning to artificial intelligence to bank their natural voices for use with assistive technology as the disease progresses.
It’s a compelling read that combines great characters and lots of interactive elements. Readers can hear natural and AI voices of interview subjects, watch a video of an ALS patient’s daily life, and learn how these AI programs process and replicate speech patterns.
Morris recently spoke with AHCJ about how she reported the story.
(Responses have been lightly edited for brevity and clarity.)
What gave you the idea for this story?
I’ve been thinking a lot about AI and how it’s starting to become more important in health care. We see this in many different medical devices that now use AI to monitor conditions such as diabetes. I was scrolling through Twitter one day and saw a post about someone getting a new copy of their voice. I was like, “Well, that’s interesting.” I didn’t know AI was going to make such a big difference in the realm of artificial voices. As a society, we often think of AI and artificial voices in the context of fakery, but I don’t think news coverage has really explored the more positive side of voice technology and how it can help people.
How did you find the patients you profile?
It was a long process. I reached out to several different ALS organizations and their local chapters, asking them to put me in touch with patients; I contacted Team Gleason, which funds a number of voice banking services in the U.S.; and I reached out to people I knew on Twitter and others in my own network, asking if they knew anyone who had used the service or was planning to. In the end, I interviewed about 20 people. I wanted to make sure I talked to a broad spectrum of people at different points in the process to get a sense of what it was like at each stage. Some had not yet lost their voices, and some had been using their synthesized voices for years. The four people we used in the story represent different stages of this process, as well as truly compelling stories. It was interesting to see what the synthesized version of their voice meant to each person, because it’s different for everyone.
How did you conduct interviews, given that some of the people you talked to had already lost their voices?
For three of them, I sent a list of questions ahead of time to give them time to type out their responses. It took a long time for Ruth to type because she has to type with her eyes (using assistive technology). Brian and Ron also took longer to type because their hand strength was not what it used to be. I added the caveat that, during the interview, I might ask clarifying or follow-up questions. I also asked each of them to designate someone else who could help answer more basic fact-checking questions, like what month something happened. If I asked Brian and Ron personal questions, they could answer in a whisper, depending on who else was in the room with them, and that person could repeat it back to me.
I really enjoyed the multimedia aspects you included, like Ron’s video, and your explanation of how the AI voices work. How did you get the idea to include them?
For Ron (pictured in the video), I went to Mexico with a video reporter because I thought it was important to see what his day-to-day was like. We spent two full days with Ron, watching him go through his normal routine of nurse visits, medications, feedings, getting dressed, hanging out with his wife, watching TV, and more. It really gave me a better feel for how he was using his voice, but also what it was like for him to go through this process.
I worked on this story with a team of multimedia reporters and editors, and we changed a lot about what elements to include and how to include them. Alexa Juliana Ard, a video journalist who worked with me on this story, suggested that we have people speak directly to the camera and speak for themselves. I just loved that idea. Initially, we wanted to play videos of people with ALS to show the difference between natural and artificial voices, but the quality and length of the videos we were given varied, and we wanted everyone to get the same treatment.
What was important to you in telling this story?
The most important part was the audio element: capturing those voices. I kept thinking about what their voices represented for each person. Ron is a joker. He is very eloquent, very talkative, very patient. Ruth was a more practical person. I wanted to capture a little bit of each person’s personality, and that’s why I think the pictures, video and audio, within the word limits I had, helped bring everyone to life. That’s what I cared about most: making sure the people reading the story felt a connection to each person.
What advice do you have for other writers who don’t cover health IT or tech in general?
I recently did a piece for The Open Notebook (a website that helps science, environment and health journalists sharpen their skills) on how to cover assistive technology. I don’t cover tech as a beat, but I do cover assistive technology as part of the disability beat. A common fallacy people fall into when covering assistive technology is assuming it’s universally great, whatever it is, and acting as if the technology solves every problem. Many times it doesn’t; it’s more nuanced than that. I also think that people who cover assistive technology don’t always talk to users and don’t always consider users of the technology to be active participants. People tend to think of users as very passive, but users shape the technology a lot. We hack it. We use it to meet our needs, for uses that may be beyond the imagination of the creator.
An interesting element of any technology story is: how is it being used in different ways by different people? What problems is it not fixing yet? What problems does it help with? Asking those questions and interrogating the technology will reveal a more nuanced, accurate story.