Judges’ Queries and Presenter’s Replies

Presentation Discussion

  • May 23, 2012 | 10:07 a.m.

    Awesome work! Why did you choose physical reality when it can be so constraining? Have you considered using a virtual world? Is there any sound-based signal that could be combined with your visual-based signal? What is the time scale of the signal that you are using now? How does that map onto the time scale of actual physical movements that the person may have once performed or perceived?
    Best of luck in the competition -Liz Torres

  • Aadeel Akhtar

    Presenter
    May 23, 2012 | 10:49 p.m.

    Thanks for your questions, Professor Torres!

    1) When using projectors or computer screens, there’s the added issue of the refresh rate of the monitor or projector, which can limit the number of possible stimuli. Furthermore, it is easier to change the brightness of individual LEDs by adjusting the voltage across them, and we plan to study the effects of brightness on classification accuracy. That being said, we do have future plans for implementing SSVEP in a CUBE or CAVE virtual reality environment.

    2) One of our lab members is currently exploring the use of audio and tactile feedback to enhance concentration on the LED stimulus.

    3) As far as time scale goes, we take an FFT every 1/32 of a second, though it may take at least a couple of seconds for the neurons to lock to the frequency. As a result, SSVEP is slower than the actual time scale of physical movements, but faster than most other brain-computer interface paradigms like P300 or motor imagery.
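The frequency-locking idea in the reply above can be illustrated with a minimal sketch: take the FFT of a short EEG window and pick the candidate flicker frequency with the most spectral power (including a harmonic). This is a simplified stand-in for the actual classifier, not the presenters' implementation; the function name, channel layout, and frequency set are assumptions for illustration.

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs, harmonics=2):
    """Pick the stimulus frequency with the most spectral power.

    eeg: 1-D array of EEG samples from one occipital channel (hypothetical).
    fs: sampling rate in Hz.
    stim_freqs: candidate LED flicker frequencies in Hz.
    harmonics: how many harmonics of each frequency to sum over.
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in stim_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            # Sum magnitude at the bin nearest each harmonic of f.
            bin_idx = int(np.argmin(np.abs(freqs - h * f)))
            score += spectrum[bin_idx]
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]

# Synthetic example: 2 s at 256 Hz with a noisy 15 Hz "SSVEP" component.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(len(t))
print(classify_ssvep(eeg, fs, [10, 12, 15, 17]))  # → 15
```

A longer window gives finer frequency resolution and a cleaner lock, which is consistent with the "couple of seconds" latency mentioned above.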

  • Further posting is closed as the competition has ended.

AADEEL AKHTAR

University of Illinois at Urbana-Champaign
Years in Grad School: 2
Judges’ Choice

Playing checkers with your mind: an application of an SSVEP-based brain-computer interface

There are many Brain-Computer Interfaces (BCIs) today that rely on a user’s gaze and visual focus as inputs. However, most of these BCIs have been limited in scope to single users and stimuli in fixed positions. To make BCIs more practical, they should account for dynamic spatial configurations of stimuli whose positions correspond to an area in the physical environment where users would like to interact. In this project, we present an application that uses a BCI based on Steady-State Visually Evoked Potentials (SSVEP) to play a game of checkers. We light squares on a checkerboard with flickering LEDs to elicit SSVEP responses in the players. To take an action, a player focuses his gaze on a particular square. We then classify the resulting SSVEP, and a robot arm picks up or places the selected pieces on the board. We show how to coordinate BCI inputs from multiple users to control a robot arm in a physical environment with stimuli whose spatial configurations can change.
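The selection loop the abstract describes (gaze on a lit square, classify the flicker frequency, move a piece) can be sketched as a simple mapping step between the classifier's output and a board square. The frequency-to-square assignments and the two-gaze move protocol below are hypothetical placeholders; the abstract does not specify them.

```python
# Hypothetical assignment of LED flicker frequencies (Hz) to lit squares.
FREQ_TO_SQUARE = {10.0: "b3", 12.0: "d3", 15.0: "f3", 17.0: "h3"}

def square_for_frequency(classified_freq):
    """Map a classified SSVEP flicker frequency back to the lit board square."""
    return FREQ_TO_SQUARE.get(classified_freq)

def select_move(source_freq, target_freq):
    """Turn two classified gazes (piece square, then destination) into a move
    that could be handed to the robot arm controller."""
    return (square_for_frequency(source_freq), square_for_frequency(target_freq))

print(select_move(15.0, 12.0))  # → ('f3', 'd3')
```

Because the LEDs sit on the board itself, re-lighting a different set of squares each turn is what lets the stimulus positions change dynamically, as the abstract emphasizes.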