When the pandemic first emerged and schools shuttered back in March last year, friends Jin Schofield and Sarvnaz Ale Mohammad started thinking about how they could spend their time while helping others.

The 17-year-olds decided to develop an American Sign Language (ASL) translator capable of turning signing into spoken words.

They spent full days last summer learning coding intricacies and recording images of sign language to feed into algorithms.

“When quarantine started initially we did a lot of research into how we could use this potential year of time to do something to learn, but also help the world, maybe,” said Schofield, who along with Ale Mohammad is going into Grade 12 at St. Robert Catholic High School in Thornhill, just north of Toronto.

By April 2020, they had co-founded ConchShell. The venture now has a single, manually controlled prototype they hope to refine in future versions, funded by thousands of dollars in science fair winnings and other awards.

The prototype bracelet cost them $40 to build and uses a Raspberry Pi, the small single-board computer developed to teach basic computer science, along with its camera module.

The prototype device uses $40 worth of hardware to turn ASL signs into spoken words. Photo supplied by ConchShell

The STEM-focused students expect their invention could be useful for people who rely on signing to communicate with grocery store workers, or in other settings where others do not know ASL.

“The idea is that they would be able to hold it like this,” Schofield said, holding her wrist close to her chest during a video interview to explain how the bracelet would work. “And sign in front of it with their other hand, and it would use machine learning to translate that into spoken word, a spoken voice.”
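In rough terms, such a pipeline would grab a frame from the wrist-mounted camera, run it through a trained classifier, and pass the result to a text-to-speech engine. The following is a minimal sketch of one such loop in Python, not ConchShell's actual code: the asl_letters.tflite model file is hypothetical, and the sketch assumes a Raspberry Pi camera plus the open-source tflite_runtime and pyttsx3 libraries.

```python
# A minimal sketch of the pipeline Schofield describes, not ConchShell's code.
# Assumptions: a Raspberry Pi camera, a hypothetical pre-trained fingerspelling
# classifier saved as "asl_letters.tflite", and the pyttsx3 speech library.
import numpy as np
import pyttsx3                                       # offline text-to-speech
from picamera import PiCamera                        # Raspberry Pi camera API
from tflite_runtime.interpreter import Interpreter   # lightweight inference

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

interpreter = Interpreter(model_path="asl_letters.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

camera = PiCamera(resolution=(224, 224))  # the input size the model expects

def classify_frame():
    """Capture one frame and return the most likely fingerspelled letter."""
    frame = np.empty((224, 224, 3), dtype=np.uint8)
    camera.capture(frame, format="rgb")
    x = frame[np.newaxis].astype(np.float32) / 255.0  # scale pixels to [0, 1]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LETTERS[int(np.argmax(scores))]

engine = pyttsx3.init()
engine.say(classify_frame())  # speak the recognized letter aloud
engine.runAndWait()
```

Running inference through TensorFlow Lite is one common way to fit a trained model on a small single-board computer like the Pi.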

Currently, it can only be used by an ASL signer fingerspelling the letters of a word, while full ASL expression typically requires both hands and facial features. The two are also looking into chest straps that would position the camera to capture two-handed signs.
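Because the device recognizes one fingerspelled letter at a time, turning letters into whole words also needs a rule for where a word ends. Below is a hypothetical sketch of that step, building on the classify_frame() loop above and assuming it returns None when no confident letter is seen:

```python
# Hypothetical word-assembly logic, not ConchShell's implementation. Here
# classify_frame() is assumed to return None when no letter is detected.
import time
import pyttsx3

def spell_out(classify_frame, pause_seconds=1.5):
    """Collect letters into a word; speak the word when the signer pauses."""
    engine = pyttsx3.init()
    word, last_letter_at = [], time.monotonic()
    while True:
        letter = classify_frame()
        now = time.monotonic()
        if letter is not None:
            # A real system would also debounce repeats of a held letter.
            word.append(letter)
            last_letter_at = now
        elif word and now - last_letter_at > pause_seconds:
            engine.say("".join(word))  # a pause marks the end of the word
            engine.runAndWait()
            word.clear()
```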

“We tested it out, we even had to do a little trigonometry to figure out the best angle,” Schofield said, laughing. “We never thought we'd have to use trigonometry, but we did.”

While the translation could eventually be instantaneous, the current version is tethered to their laptops and responds to manual commands as they continue to tinker.

“We started learning along as we started this project,” Ale Mohammad said. “Our technical background, we were in a school robotics club and things like that, but with machine learning, we didn’t have a lot of experience, so we were just taking courses as we go.”

“It was completely incomprehensible to us,” Ale Mohammad said of the many technical challenges they faced. “Back then, we would get really frustrated when we would see something hard, but now we see it as a sign of success, like, ‘Oh, we got to the hard part.’”

They have since won a slew of science and medical technology competitions, including a $5,000 prize from student incubator Basecamp at Ryerson University’s DMZ, and a University of Toronto engineering award.

They’re planning to put the money towards buying more hardware and tweaking the device, which they want to start testing in the real world and with a diverse array of users within the next six months.

“We’ve finished the software, we’ve basically completed the hardware, although we want to take the Raspberry Pi we're using and switch it out for an actual circuit later on and a more efficient, cheaper computer,” Schofield said.

Morgan Sharp / Local Journalism Initiative / Canada’s National Observer
