University of Washington developing sign-language video calls over cellphones
Following up on a previous post about Microsoft’s decision not to include ASL-enabled technology in their new Kinect product, UW researchers are working to make ASL usable over video calls on current 3G mobile networks.
The issue is that 3G networks are not fast enough to carry uncompressed video, so the quality of a standard video call is generally too poor for sign language to be understood. The 4G networks starting to crop up around the country will help, but not everyone will have immediate access to them when they arrive.
However, by concentrating image quality on the face and hands in the video, researchers have developed an effective avenue for ASL communication over older cellphones on current 3G networks. The technology, known as MobileASL, is currently being tested by ASL signers at the university – over 200 calls were made, averaging around 60 seconds apiece – and the results have been “generally positive”.
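The core idea – spend more bits on the face and hands, fewer on the background – is a form of region-of-interest (ROI) video coding. Here is a minimal sketch in Python of how an encoder might allocate quality per macroblock; the function name, block grid, and quantizer values are illustrative assumptions, not MobileASL's actual encoder settings.

```python
# Illustrative sketch of region-of-interest (ROI) bit allocation.
# Lower QP (quantization parameter) means finer quantization, i.e.
# higher quality and more bits spent on that block.

def roi_quantizers(blocks_w, blocks_h, roi, qp_roi=24, qp_bg=38):
    """Assign a QP to each macroblock of a blocks_w x blocks_h frame.

    `roi` is (x0, y0, x1, y1) in block coordinates. Blocks inside the
    ROI (e.g. covering the signer's face and hands) get the fine
    quantizer; the background gets the coarse one, saving bandwidth.
    """
    x0, y0, x1, y1 = roi
    qp_map = []
    for by in range(blocks_h):
        row = []
        for bx in range(blocks_w):
            inside = x0 <= bx < x1 and y0 <= by < y1
            row.append(qp_roi if inside else qp_bg)
        qp_map.append(row)
    return qp_map

# Example: a 6x4 block grid with the signer roughly centred.
qp = roi_quantizers(6, 4, roi=(1, 1, 5, 3))
for row in qp:
    print(row)
```

In a real encoder the ROI would be found automatically (e.g. by skin-tone or face detection) and the QP map fed into the codec's rate control, but the bandwidth saving comes from exactly this kind of uneven allocation.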
Apple’s recent FaceTime video call integration into the new iPhone handset – not a new concept, by any means, but the first time it has been brought to the masses – is still not clear enough to allow ASL over the phone. MobileASL betters it in this regard, and uses only one tenth of the bandwidth that FaceTime requires.
A larger field study will be performed in the winter, so stay tuned to this interesting development for deaf cellphone users!