SignReality – Extended Reality for Sign Language translation
The research project “SignReality – Extended Reality for Sign Language translation” aims to develop an augmented reality (AR) model and application that visualizes an animated interpreter for German Sign Language (DGS). The project is a cooperation between the department DFKI-DRX and the Affective Computing Group of DFKI-COS, and is part of the activities of the broader DFKI Sign Language team, which spans four departments and has been running two EU-funded and two German-funded research projects.
The app developed in SignReality will give deaf and hard-of-hearing users a personal interpreter in the augmented or virtual space, able to translate speech and text. Users will be able to position and resize the interpreter according to the needs of the translation, following the observation that sign language users benefit from choosing where the interpreter appears: for example, placing the interpreter next to the speaking person enhances the translated content with a direct view of the speaker. The application will serve as a research prototype for studying novel methods of interaction and content delivery between deaf and hard-of-hearing users and their surrounding environments, aiming to reduce communication barriers with hearing people.
The project has a duration of eight months and is funded through an FSTP call of the EU project UTTER (Unified Transcription and Translation for Extended Reality; EU Horizon, Grant Agreement No 101070631), in cooperation with the Universities of Amsterdam and Edinburgh. UTTER aims to take online and hybrid interaction to the next level by employing Large Language Models, focusing on use cases such as videoconferencing (speech dialogue) and multilingual customer support (chat).
Project team: Eleftherios Avramidis, Fabrizio Nunnari
DFKI Design Research Explorations, Cognitive Assistants
Funded by the European Union (101070631) – UTTER open call: Development and application of deep models for eXtended Reality