Audio-visual processing for the implementation of an attentional mechanism
Date
2017-10-06
Abstract
[ES] The aim of the project is to locate, in real time, the position of speakers within a room and to control the movement of an IP video camera according to that information. The speakers' positions are obtained by fusing auditory and visual localization systems. Audio localization is performed by capturing sound signals with a microphone array and applying the SRP-PHAT algorithm, while the visual part uses the OpenCV computer vision library to process the captured video. This type of application is useful in fields such as robotics, to implement attentional mechanisms, or in videoconferencing systems with multiple moving speakers. The goals initially set for the project have been achieved.
[EN] The goal of the project is to locate, in real time, the position of speakers in a room and to control the movement of an IP video camera based on that location information. The positions of the speakers are obtained by fusing audio and video localization systems. Audio localization is based on applying the SRP-PHAT algorithm to signals captured with a microphone array, while the visual part uses the OpenCV computer vision library to process the captured video. This type of application is useful in fields such as robotics, to implement attentional mechanisms, or in videoconferencing systems with multiple moving speakers. The initial goals of the project have been achieved.
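The abstract names SRP-PHAT as the audio localization method: the generalized cross-correlation with phase transform (GCC-PHAT) is computed for each microphone pair, and the steered response power at each candidate source position is the sum of the cross-correlation values at the time delays that position implies. The sketch below illustrates the idea in NumPy; it is not the project's code, and the array geometry, grid, and FFT size are illustrative assumptions.

```python
import numpy as np

def gcc_phat(sig, ref, n_fft=1024):
    """GCC-PHAT between two microphone signals, zero lag centered at n_fft//2."""
    X = np.fft.rfft(sig, n=n_fft)
    Y = np.fft.rfft(ref, n=n_fft)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                      # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n_fft)
    return np.concatenate((cc[-n_fft // 2:], cc[:n_fft // 2]))

def srp_phat(signals, mic_pos, grid, fs, c=343.0, n_fft=1024):
    """Steered Response Power with PHAT weighting over candidate source points.

    signals: (n_mics, n_samples), mic_pos: (n_mics, 3), grid: (n_points, 3).
    Returns the grid point that maximizes the accumulated pairwise power.
    """
    half = n_fft // 2
    power = np.zeros(len(grid))
    for i in range(len(mic_pos)):
        for j in range(i + 1, len(mic_pos)):
            cc = gcc_phat(signals[i], signals[j], n_fft)
            # Expected TDOA (seconds) of mic i relative to mic j for each point
            tdoa = (np.linalg.norm(grid - mic_pos[i], axis=1)
                    - np.linalg.norm(grid - mic_pos[j], axis=1)) / c
            lags = np.clip(np.round(tdoa * fs).astype(int) + half, 0, n_fft - 1)
            power += cc[lags]                   # accumulate correlation at each lag
    return grid[np.argmax(power)]
```

In a real-time system this search would run per audio frame, and the resulting position estimate would be fused with the OpenCV-based visual detections before steering the camera.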