Amanda Duarte

About Me

Hello there! I am glad you are here.

I am a PhD candidate at Barcelona Supercomputing Center / Universitat Politècnica de Catalunya under the supervision of Prof. Jordi Torres and Prof. Xavier Giró.
Thanks to the INPhINIT - ”La Caixa” Doctoral fellowship I am also a Marie Skłodowska-Curie fellow.

My research aims at giving sign language users further access to information. Specifically, my work focuses on developing systems that enable automatic translation of online content (e.g., the speech of videos or texts) into sign language representations. To that end, I lead the Speech2Signs project, which focuses on the task of automatic speech to sign language translation.
As no data for learning such a system were available, we started by collecting the first large-scale continuous American Sign Language dataset, called How2Sign, which can be downloaded here!

Before starting my PhD, I got my master’s in Computer Engineering at Federal University of Rio Grande in Brazil, and a degree in Systems Analysis at Instituto Federal Sul-rio-grandense.
My past research projects span a wide variety of areas and involve multimodal data collection and annotation, speech-conditioned image generation, underwater robot localization and navigation and underwater image restoration.

Portuguese is my first language, and I am also fluent in English and Spanish. I am able to understand Catalan, but be aware of possible misunderstandings. Expect even more misunderstandings when using American Sign Language (ASL), but it's also worth trying :).

Besides research, I am passionate about travel, photography, and art; I enjoy painting, especially with watercolor.


Contact

I am always open to collaborating or meeting new people.
Please feel free to contact me if this is the case or if you need any further information.

amanda.duarte(at)bsc.es
amanda.duarte(at)upc.edu

News

Outstanding Reviewer award Sept 2021

Happy to share that I was selected as an outstanding reviewer for ICCV2021.

Visiting student at Oxford University / Ecole des Ponts ParisTech Summer/Fall 2021

Excited to be virtually visiting Oxford University and Ecole des Ponts ParisTech doing research under the supervision of Dr. Samuel Albanie and Prof. Gül Varol.

The How2Sign dataset is now available June 2021

Our large-scale multimodal dataset for continuous American Sign Language is now available for download. Check out the dataset page for instructions on how to download it.

Paper accepted @ CVPR'21 March 2021

Our paper How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language was accepted at CVPR 2021! I will see you online in June. But in the meantime you can check out the video, poster, and full paper on the project page.

Grounded Sequence to Sequence Transduction journal May 2020

The work done while participating at JSALT and collaborating with the Grounded Sequence to Sequence Transduction group was accepted to be published at the IEEE Journal of Selected Topics in Signal Processing. You can see our journal paper here.

Visiting student at CMU Spring 2019

I will be a visiting student at Carnegie Mellon University during the spring doing research under the supervision of Prof. Florian Metze at the LTI.

Wav2Pix at ICASSP 2019 May 2019

Our paper Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks was accepted at ICASSP 2019. Hope to see you in Brighton!
Our code is available on Github.

Marie Skłodowska-Curie fellow - INPhINIT - ”La Caixa” Doctoral fellowship Oct 2018

Happy to announce that I was awarded a Marie Skłodowska-Curie fellowship thanks to the INPhINIT - ”La Caixa” Doctoral fellowship.
Thus, I will also be joining the Barcelona Supercomputing Center during my PhD studies.

New start: PhD Student at UPC Oct 2018

I am very excited to announce that I am now a PhD student at Universitat Politècnica de Catalunya under the supervision of Prof. Jordi Torres and Prof. Xavier Giró.
I will be part of the Image Processing Group (GPI).

Presenting at SiVL 2018 Sep 2018

I will be presenting our work Towards Speech to Sign Language Translation at the Workshop on Shortcomings in Vision and Language (SiVL) at ECCV.

Participating in the Frederick Jelinek Memorial Summer JSALT workshop Summer 2018

I will be part of the Grounded Sequence-to-Sequence Transduction team during the six-week-long research program on Machine Learning for Speech, Language and Computer Vision Technology at the JSALT workshop at Johns Hopkins University.

Facebook/Caffe2 research award Oct 2017

Happy to announce that our project “Speech2Signs: Spoken to Sign Language Translation using Neural Networks” won 1 of 5 Caffe2 research awards.
Thus, I will be joining UPC as a research assistant.
Thanks to the Facebook Research and Academic Relations Program.

Visiting Student at BSC/UPC Sep - Dec 2017

I will be a visiting student at the Barcelona Supercomputing Center (BSC) - Universitat Politècnica de Catalunya (UPC) working on cross-modal retrieval.
Thanks to the Severo Ochoa Mobility Program.

Master in Computer Engineering April 2017

I received my master's in Computer Engineering from the Federal University of Rio Grande.
Thesis title: Dataset Generation for Computer Vision and Performance Analysis of Image Restoration Methods Applied to Underwater Environments.
My Master's thesis is available in Portuguese.

TURBID Dataset is now Available Feb 2017

Our dataset for evaluating underwater image restoration methods is available!

Research

Full list at my Google Scholar profile.

How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language
A. Duarte, S. Palaskar, L. Ventura, D. Ghadiyaram, K. DeHaan, F. Metze, J. Torres, X. Giro-i-Nieto
Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[PDF] [Poster] [5' video] [Dataset page]

Can everybody sign now? Exploring sign language video generation from 2D poses
L. Ventura, A. Duarte, X. Giro-i-Nieto
In Sign Language Recognition, Translation & Production workshop, ECCV 2020.
[PDF] [1' Video] [Workshop page]

Grounded Sequence to Sequence Transduction
L. Specia, R. Arora, L. Barrault, O. Caglayan, A. Duarte, D. Elliott, ... , J. Libovicky
In IEEE Journal of Selected Topics in Signal Processing, 2020. (Impact factor: 4.9)
[PDF]

Cross-modal Neural Sign Language Translation
A. Duarte
In ACM International Conference on Multimedia-Doctoral Symposium, 2019.
[PDF]

Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks
A. Duarte, F. Roldan, M. Tubau, J. Escur, S. Pascual, A. Salvador, E. Mohedano, K. McGuinness, J. Torres, X. Giro-i-Nieto
In 44th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019.
[PDF] [Project page]

Single Image Restoration for Participating Media Based on Prior Fusion
J. Gaya, A. Duarte, F. Codevilla, P. Drews-Jr, S. Botelho
In IEEE Computer Graphics and Applications, 2019. (Impact factor: 1.6)
[PDF]

Towards Speech to Sign Language Translation
A. Duarte, G. Camli, J. Torres, X. Giró-i-Nieto
In European Conference on Computer Vision (ECCV) Workshop on Shortcomings in Vision and Language (SiVL), 2018.
[PDF] [Project page] [Workshop page]

Cross-modal Embeddings for Video and Audio Retrieval
D. Surís, A. Duarte, A. Salvador, X. Giró-i-Nieto
In European conference on computer vision (ECCV) workshop, 2018.
[PDF]

TURBID: An Underwater Turbid Image Dataset
A. Duarte, F. Codevilla, J. Gaya, S. Botelho
In IEEE OCEANS 2016-Shanghai, 2016.
[PDF] [Dataset page]

Vision-based Obstacle Avoidance Using Deep Learning
J. Gaya, L. Gonçalves, A. Duarte, B. Zanchetta, P. Drews-Jr, S. Botelho
In 13th Latin-America Robotics Symposium - LARS, 2016.
[PDF]

Towards Comparison of Underwater SLAM Methods: An Open Dataset Collection
A.C. Duarte, G. Zaffari, R. Rosa, L. Longaray, P. Drews-Jr, S. Botelho
In IEEE OCEANS 2016-Monterey, 2016.
[PDF]