Create an eye-tracking experiment


This page will show you how to collect eye-tracking data in a simple PsychoPy paradigm. We will use the same paradigm that we built together in the Getting started with Psychopy tutorial. If you have not done that tutorial yet, please go through it first.

Caution

Tobii eye-tracker

Note that this tutorial is specific to Tobii eye-trackers. The general steps and ideas apply to other eye-trackers, but the specific code and packages will vary.

Tobii SDK

To start, we will look into how to connect and talk to our Tobii eye-tracker with the SDK that Tobii provides. An SDK (Software Development Kit) is a collection of tools and programs for developing applications for a specific platform or device. We will use the Python Tobii SDK, which lets us easily find and get data from our Tobii eye-tracker.

Install

To install the Python Tobii SDK, we can simply run this command in our conda terminal:

pip install tobii_research

Great! We have installed the Tobii SDK.

Connect to the eye-tracker

So how does this library work? How do we connect to the eye-tracker and collect our data? Very good questions!

The tobii_research documentation is quite extensive and describes in detail a lot of functions and data classes that are very useful. However, we don’t need much to start our experiment.

First we need to identify all the eye-trackers connected to our computer. Yes, plural: tobii_research returns a list of all the eye-trackers connected to our computer. 99.99999999% of the time you will have only one eye-tracker connected, so we can just select the first (and usually only) eye-tracker found.

# Import tobii_research library
import tobii_research as tr

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]

Perfect!! We have identified our eye-trackers and selected the first (and only) one.
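If you want a quick sanity check that you are talking to the right device, you can print a few of the eye-tracker's properties. These attributes come from the tobii_research SDK:

# Optional sanity check: print some information about the selected eye-tracker
print('Address:       ' + Eyetracker.address)
print('Model:         ' + Eyetracker.model)
print('Name:          ' + Eyetracker.device_name)
print('Serial number: ' + Eyetracker.serial_number)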

We are now ready to use our eye-tracker to collect some data… but how?

Collect data

tobii_research has a cool way of handing us the data it collects at each time point: it uses a callback function. What is a callback function, you ask? It is a function that the Tobii runs each time it has a new data point. Let's say we have an eye tracker that collects data at 300 Hz (300 samples per second): the function will be called once for each of those 300 samples every second.
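If callbacks are new to you, here is a tiny toy sketch of the pattern, with no eye-tracker involved (all the names here are made up for illustration):

# A toy "device" that calls whatever function we hand to it, once per sample
def my_callback(sample):
    print('Got sample:', sample)

def fake_device(callback, n_samples=3):
    for i in range(n_samples):
        callback(i)  # the device decides when our function runs

fake_device(my_callback)  # prints 'Got sample: 0', then 1, then 2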

Our real callback function will receive a gaze_data object. This object contains several pieces of information about the collected sample, and we can simply select the ones we care about. In our case we want:

  • the system_time_stamp, our time variable

  • the left_eye.gaze_point.position_on_display_area, the on-screen coordinates of the left eye (both x and y)

  • the right_eye.gaze_point.position_on_display_area, the on-screen coordinates of the right eye (both x and y)

  • the left_eye.pupil.diameter, the pupil diameter of the left eye

  • the right_eye.pupil.diameter, the pupil diameter of the right eye

  • the left_eye.gaze_point.validity and right_eye.gaze_point.validity, values that tell us whether the eye-tracker considers each sample valid or not

Here is our callback function:

def gaze_data_callback(gaze_data):

    # Extract the data we are interested in
    t  = gaze_data.system_time_stamp
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0]
    ly = gaze_data.left_eye.gaze_point.position_on_display_area[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0]
    ry = gaze_data.right_eye.gaze_point.position_on_display_area[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity

As we said, this function will be called every time the Tobii has a new data point. COOL!! Now we need to tell the Tobii to run the function we have created. This is very simple, we can just do the following:

# Start the callback function
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

We are telling the Tobii that we are interested in the EYETRACKER_GAZE_DATA and that we want it to pass each new sample to our function gaze_data_callback.
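As a side note, the SDK can also deliver each sample as a plain dictionary instead of a GazeData object, via the as_dictionary argument of subscribe_to. We stick to the default object form in this tutorial; if you used the dictionary form, the callback would have to read keys such as 'left_gaze_point_on_display_area' instead of attributes:

# Optional alternative: receive each sample as a dictionary
# (the callback must then use gaze_data['...'] keys instead of attributes)
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA,
                        gaze_data_callback,
                        as_dictionary=True)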

Triggers/Events

As we have seen, our callback function can access the Tobii data and tell us what it is for each sample. Just one little piece is missing… We want to know what we presented and when. In most studies we present stimuli that can be pictures, sounds or even videos, and for the later analysis it is important to know at exactly what point in time we presented them.

Luckily there is a simple way we can achieve this. We can set a text string that our callback function can access and include in our data. To make sure that our callback function can access this variable we will use the global keyword. This makes sure that we can read/modify a variable that exists outside of the function.
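If the global keyword is new to you, here is a tiny self-contained example, nothing to do with the eye-tracker:

# A minimal illustration of the global keyword
counter = 0  # lives outside the function

def bump():
    global counter  # without this line, counter += 1 would fail inside the function
    counter += 1

bump()
bump()
print(counter)  # prints 2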

In this way, each time the callback function runs, it also has access to the trigger variable. We save the trigger to the ev variable and then set trigger back to an empty string "".

This means that we can set the trigger to whatever string we want, and when we set it, it will be picked up by the callback function.

def gaze_data_callback(gaze_data):
    global trigger

    if len(trigger) == 0:
        ev = ''
    else:
        ev = trigger
        trigger = ''  # reset the trigger to an empty string

    # Extract the data we are interested in
    t  = gaze_data.system_time_stamp
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0]
    ly = gaze_data.left_eye.gaze_point.position_on_display_area[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0]
    ry = gaze_data.right_eye.gaze_point.position_on_display_area[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity
    
    
trigger = ''

# Time passes
# when you present a stimulus you can set trigger to a string that will be captured by the callback function

trigger = 'Presented Stimulus'

Perfect! Now we have 1. a way to access the data from the eye-tracker and 2. a way to know exactly what stimuli we are presenting to the participant and when.

Correct the data

Tobii presents gaze data in a variety of formats. The one we’re most interested in is the Active Display Coordinate System (ADCS). This system maps all gaze data onto a 2D coordinate system that aligns with the Active Display Area. When an eye tracker is used with a monitor, the Active Display Area refers to the display area that doesn’t include the monitor frame. In the ADCS system, the point (0, 0) represents the upper left corner of the screen, and (1, 1) represents the lower right corner.

While this coordinate system is perfectly acceptable, it might cause some confusion when we come to analyze and plot the data. This is because in most systems the data's origin is located in the lower left corner, not the upper left. It might seem a bit complicated, but all it means is that the y-axis is flipped between the two conventions.

For this reason, we typically adjust the data to position the origin in the bottom left corner. This can be achieved by subtracting the y gaze coordinate from the window height.

In addition to the origin point issue, the gaze coordinates are reported between 0 and 1. It’s often more convenient to handle data in pixels, so we can transform our data into pixels. This is done by multiplying the obtained coordinates by the pixel dimensions of our screen.

Lastly, we also modify the time column. Tobii provides timestamps in microseconds, but such precision isn't necessary for our purposes, so we convert them to milliseconds by dividing by 1000.
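To make these corrections concrete, here is a small worked example with made-up numbers, assuming the 1920x1080 screen we use below:

# A worked example of the three corrections (values are made up)
winsize = (1920, 1080)

adcs_x, adcs_y = 0.5, 0.25   # one gaze sample in ADCS coordinates
timestamp_us = 1234567       # a system_time_stamp in microseconds

x_px = adcs_x * winsize[0]               # 960.0 pixels from the left
y_px = winsize[1] - adcs_y * winsize[1]  # 810.0 pixels, origin now bottom left
t_ms = timestamp_us / 1000.0             # 1234.567 milliseconds

print(x_px, y_px, t_ms)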

def gaze_data_callback(gaze_data):
    global trigger
    global winsize

    if len(trigger) == 0:
        ev = ''
    else:
        ev = trigger
        trigger = ''  # reset the trigger to an empty string
    
    # Extract the data we are interested in
    t  = gaze_data.system_time_stamp / 1000.0
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ly = winsize[1] - gaze_data.left_eye.gaze_point.position_on_display_area[1] * winsize[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ry = winsize[1] - gaze_data.right_eye.gaze_point.position_on_display_area[1] * winsize[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity
    
    
trigger = ''
winsize = (1920, 1080)

Save the data

You might have noticed that we get the data in our callback function but don't actually save it anywhere. So how do we save it? There are two main approaches we can use:

  1. We could have a saving function inside our callback that appends the new data to a .csv file each time the callback is called.

  2. We could append the data to a list and, once the experiment is finished, save the whole list.

Both approaches have weaknesses, however. The first could slow down the callback function if our PC is underpowered or if we are sampling at a very high rate. The second is potentially faster, but if anything makes Python crash during the study (and trust me, it can happen…) you would lose all your data.

What is the solution, you ask? A mixed approach!
We can store our data in a list and save it during the less important parts of the study, for example the Inter Stimulus Interval (ISI, the time between one trial and the next). So let's write a function to do exactly that.

Let's first create an empty list that we will fill with our data from the callback function. As before, to make sure that our callback function can access this list and append the new data, we will use the global keyword.

# Create an empty list we will append our data to
gaze_data_buffer = []

# This will be called every time there is new gaze data
def gaze_data_callback(gaze_data):
    global gaze_data_buffer
    global trigger
    global winsize

    if len(trigger) == 0:
        ev = ''
    else:
        ev = trigger
        trigger = ''  # reset the trigger to an empty string
        
    # Extract the data we are interested in
    t  = gaze_data.system_time_stamp / 1000.0
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ly = winsize[1] - gaze_data.left_eye.gaze_point.position_on_display_area[1] * winsize[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ry = winsize[1] - gaze_data.right_eye.gaze_point.position_on_display_area[1] * winsize[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity
        
    # Add gaze data to the buffer 
    gaze_data_buffer.append((t,lx,ly,lp,lv,rx,ry,rp,rv,ev))

Now the gaze_data_buffer will be filled with the data we extract. Let’s save this list then.

We will first make a copy of the list, and then empty the original list. In this way, we have our data stored, while the original list is empty and can be filled with new data.
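If you are wondering why we copy with buffer[:] rather than a plain assignment, here is a quick toy demonstration (plain Python, no eye-tracker needed):

# An assignment only creates a second name for the SAME list,
# so clearing the buffer would also "empty" the alias
buffer = [1, 2, 3]

alias     = buffer     # same list, two names
real_copy = buffer[:]  # a real (shallow) copy

buffer.clear()
print(alias)      # [] -> emptied together with the buffer
print(real_copy)  # [1, 2, 3] -> the slice copy kept the data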

After creating a copy, we use pandas to transform the list into a data frame and save it to a csv file. Using mode='a' we tell pandas to append the new data to the existing .csv. The first time we try to save, the .csv does not exist yet, so pandas creates a new one, and only in that case do we write the column header.

def write_buffer_to_file(buffer, output_path):

    # Make a copy of the buffer and clear it
    buffer_copy = buffer[:]
    buffer.clear()

    # Define column names for the dataframe
    columns = ['time', 'L_X', 'L_Y', 'L_P', 'L_V',
               'R_X', 'R_Y', 'R_P', 'R_V', 'Event']

    # Convert buffer to DataFrame
    out = pd.DataFrame(buffer_copy, columns=columns)

    # Check if the file already exists
    file_exists = os.path.isfile(output_path)

    # Append the DataFrame to the csv file (write the header only once)
    out.to_csv(output_path, mode='a', index=False, header=not file_exists)
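If you want to convince yourself of the append/header behaviour, here is a throwaway test you can run; the file name and the sample values are made up for illustration (it assumes the imports and the function above):

# Throwaway test of write_buffer_to_file with two fake samples
fake_sample = (0.0, 960, 540, 3.1, 1, 962, 541, 3.2, 1, 'Test')

test_buffer = [fake_sample]
write_buffer_to_file(test_buffer, 'test_output.csv')  # creates the file and writes the header

test_buffer.append(fake_sample)
write_buffer_to_file(test_buffer, 'test_output.csv')  # appends the row, no second header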

Create the actual experiment

Now we have two functions: one to access the data and append it to a list, and one to save the data to a csv. Let's now see how to include these functions in our study.

Short recap of the paradigm

As we already mentioned, we will use the experimental design that we created in Getting started with Psychopy as a base, and we will add things to it to make it an eye-tracking study. If you don't remember the paradigm, please give it a quick look, as we will not go into much detail about each specific part of it.

Here is a very short summary of the design: after a fixation cross, one of two shapes is presented: a circle or a square. The circle indicates that a reward will appear on the right of the screen, while the square predicts the appearance of an empty cloud on the left.

Combine things

Let’s try to build together the experiment then.

Import and functions

To start, let’s import the libraries and define the two functions that we create before

import os
import glob
import pandas as pd

# Import some libraries from PsychoPy
from psychopy import core, event, visual, prefs
prefs.hardware['audioLib'] = ['PTB']
from psychopy import sound

import tobii_research as tr


#%% Functions

# This will be called every time there is new gaze data
def gaze_data_callback(gaze_data):
    global trigger
    global gaze_data_buffer
    global winsize

    if len(trigger) == 0:
        ev = ''
    else:
        ev = trigger
        trigger = ''  # reset the trigger to an empty string
    
    # Extract the data we are interested in
    t  = gaze_data.system_time_stamp / 1000.0
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ly = winsize[1] - gaze_data.left_eye.gaze_point.position_on_display_area[1] * winsize[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ry = winsize[1] - gaze_data.right_eye.gaze_point.position_on_display_area[1] * winsize[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity
        
    # Add gaze data to the buffer 
    gaze_data_buffer.append((t,lx,ly,lp,lv,rx,ry,rp,rv,ev))
    
        
def write_buffer_to_file(buffer, output_path):

    # Make a copy of the buffer and clear it
    buffer_copy = buffer[:]
    buffer.clear()
    
    # Define column names
    columns = ['time', 'L_X', 'L_Y', 'L_P', 'L_V', 
               'R_X', 'R_Y', 'R_P', 'R_V', 'Event']

    # Convert buffer to DataFrame
    out = pd.DataFrame(buffer_copy, columns=columns)
    
    # Check if the file already exists
    file_exists = os.path.isfile(output_path)

    # Append the DataFrame to the csv file (write the header only once)
    out.to_csv(output_path, mode='a', index=False, header=not file_exists)

Load the stimuli

Now we are going to configure a few settings, such as the screen size, create a PsychoPy window, load the stimuli, and prepare the trial definition. This is exactly the same as in the previous PsychoPy tutorial.

#%% Load and prepare stimuli

# Setting the directory of our experiment
os.chdir(r'C:\Users\tomma\OneDrive - Birkbeck, University of London\TomassoGhilardi\PersonalProj\BCCCD')

# Participant ID, used to name the output data file (example value)
Sub = 'S001'

# Winsize
winsize = (1920, 1080)

# create a window
win = visual.Window(size=winsize, fullscr=True, units="pix", pos=(0, 30), screen=1)

# Load images
fixation = visual.ImageStim(win, image='EXP\\Stimuli\\fixation.png', size = (200, 200))
circle   = visual.ImageStim(win, image='EXP\\Stimuli\\circle.png', size = (200, 200))
square   = visual.ImageStim(win, image='EXP\\Stimuli\\square.png', size = (200, 200))
winning   = visual.ImageStim(win, image='EXP\\Stimuli\\winning.png', size = (200, 200), pos=(560,0))
loosing  = visual.ImageStim(win, image='EXP\\Stimuli\\loosing.png', size = (200, 200), pos=(-560,0))

# Load sound
winning_sound = sound.Sound('EXP\\Stimuli\\winning.wav')
losing_sound = sound.Sound('EXP\\Stimuli\\loosing.wav')

# List of stimuli
cues = [circle, square] # put both cues in a list
rewards = [winning, loosing] # put both rewards in a list
sounds = [winning_sound,losing_sound] # put both sounds in a list

# Create list of trials in which 0 means winning and 1 means losing
Trials = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1 ]

Start recording

Now we are ready to look for the eye-trackers connected to the computer and select the first one we find. Before subscribing, we also initialise the trigger and the data buffer, since the callback needs both from its very first call. Then we subscribe our callback function to start collecting data.

#%% Record the data

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]

# Initialise the trigger and the data buffer before subscribing,
# because the callback reads and fills them from its very first call
trigger = ''
gaze_data_buffer = []

# Start recording
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

Present our stimuli

The eye-tracker is running! Let's show our participant something!

As you can see below, each time we flip our window (remember: flipping means we actually show what we drew), we set the trigger variable to a string that identifies the specific stimulus we are presenting. This will be picked up by our callback function.

#%% Trials

for trial in Trials:

    ### Present the fixation
    win.flip() # we flip to clean the window

        
    fixation.draw()
    win.flip()
    trigger = 'Fixation'
    core.wait(1)  # wait for 1 second


    ### Present the cue
    cues[trial].draw()
    win.flip()
    if trial == 0:
        trigger = 'Circle'
    else:
        trigger = 'Square'

    core.wait(3)  # wait for 3 seconds
    win.flip()

    ### Wait for saccadic latency
    core.wait(0.75)

    ### Present the reward
    rewards[trial].draw()
    win.flip()
    if trial == 0:
        trigger = 'Reward'
    else:
        trigger = 'NoReward'
    core.wait(2)  # wait for 2 seconds
    win.flip()    # we re-flip at the end to clean the window
    
    ### ISI
    clock = core.Clock()
    while clock.getTime() < 1:
        pass
    
    ### Check for closing experiment
    keys = event.getKeys() # collect list of pressed keys
    if 'escape' in keys:
        win.close()  # close window
        core.quit()  # stop study

As we said before in Save the data, it is best to save the data during our study to avoid any potential data loss, and it is best to do this when things of minor interest are happening, such as an ISI. If you remember, in the previous tutorial, Getting started with Psychopy, we created the ISI in a different way than a simple core.wait() and we said that this method would come in handy later on. This is the moment!

Our ISI starts a clock and checks whether 1 second has passed since the clock started. Why is this important? Because we can save the data right after starting the clock. Since saving takes a variable amount of time, we then simply check how much time has passed and wait (using the while clock.getTime() < 1: pass loop) until the full second is up. This ensures that the ISI lasts 1 second in total, including the time spent saving the data.

### ISI
clock = core.Clock()
write_buffer_to_file(gaze_data_buffer, 'DATA\\RAW\\'+ Sub +'.csv')
while clock.getTime() < 1:
    pass
Warning

Careful!
If saving the data takes more than 1 second, your ISI will also be longer. This should not happen in typical studies where trials are not too long, but it's always a good idea to keep an eye on it.
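If you want to check whether this is happening in your setup, you can time the save itself; a minimal sketch, reusing the ISI clock from above:

### ISI with an optional timing check
clock = core.Clock()
write_buffer_to_file(gaze_data_buffer, 'DATA\\RAW\\'+ Sub +'.csv')
if clock.getTime() > 1:
    print('Warning: saving took longer than the 1 s ISI!')
while clock.getTime() < 1:
    pass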

Stop recording

We’re almost there! We have imported our functions, started collecting data, sent the triggers, and saved the data. The last step will be stop data collection (or python will keep getting an endless amount of data from the eye tracker!). Do do that, We simply unsubscribe from the eye tracker to which we had subscribed to start of data collection:

win.close()
Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

Note that we also closed the PsychoPy window, so that the stimulus presentation is officially over as well. Well done!!! Now go and get your data!!! We'll see you back when it's time to analyze it.

END!!

Great job getting here!! It wasn't easy, but you did it. Here is all the code we made together.

import os
import glob
import pandas as pd

# Import some libraries from PsychoPy
from psychopy import core, event, visual, prefs
prefs.hardware['audioLib'] = ['PTB']
from psychopy import sound

import tobii_research as tr


#%% Functions

# This will be called every time there is new gaze data
def gaze_data_callback(gaze_data):
    global trigger
    global gaze_data_buffer
    global winsize

    if len(trigger) == 0:
        ev = ''
    else:
        ev = trigger
        trigger = ''  # reset the trigger to an empty string
    
    # Extract the data we are interested in
    t  = gaze_data.system_time_stamp / 1000.0
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ly = winsize[1] - gaze_data.left_eye.gaze_point.position_on_display_area[1] * winsize[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ry = winsize[1] - gaze_data.right_eye.gaze_point.position_on_display_area[1] * winsize[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity
        
    # Add gaze data to the buffer 
    gaze_data_buffer.append((t,lx,ly,lp,lv,rx,ry,rp,rv,ev))
    
        
def write_buffer_to_file(buffer, output_path):

    # Make a copy of the buffer and clear it
    buffer_copy = buffer[:]
    buffer.clear()
    
    # Define column names
    columns = ['time', 'L_X', 'L_Y', 'L_P', 'L_V', 
               'R_X', 'R_Y', 'R_P', 'R_V', 'Event']

    # Convert buffer to DataFrame
    out = pd.DataFrame(buffer_copy, columns=columns)
    
    # Check if the file already exists
    file_exists = os.path.isfile(output_path)

    # Append the DataFrame to the csv file (write the header only once)
    out.to_csv(output_path, mode='a', index=False, header=not file_exists)
    
    
    
#%% Load and prepare stimuli

# Setting the directory of our experiment
os.chdir(r'C:\Users\tomma\OneDrive - Birkbeck, University of London\TomassoGhilardi\PersonalProj\BCCCD')

# Participant ID, used to name the output data file (example value)
Sub = 'S001'

# Winsize
winsize = (1920, 1080)

# create a window
win = visual.Window(size=winsize, fullscr=True, units="pix", pos=(0, 30), screen=1)

# Load images
fixation = visual.ImageStim(win, image='EXP\\Stimuli\\fixation.png', size = (200, 200))
circle   = visual.ImageStim(win, image='EXP\\Stimuli\\circle.png', size = (200, 200))
square   = visual.ImageStim(win, image='EXP\\Stimuli\\square.png', size = (200, 200))
winning   = visual.ImageStim(win, image='EXP\\Stimuli\\winning.png', size = (200, 200), pos=(560,0))
loosing  = visual.ImageStim(win, image='EXP\\Stimuli\\loosing.png', size = (200, 200), pos=(-560,0))

# Load sound
winning_sound = sound.Sound('EXP\\Stimuli\\winning.wav')
losing_sound = sound.Sound('EXP\\Stimuli\\loosing.wav')

# List of stimuli
cues = [circle, square] # put both cues in a list
rewards = [winning, loosing] # put both rewards in a list
sounds = [winning_sound,losing_sound] # put both sounds in a list

# Create list of trials in which 0 means winning and 1 means losing
Trials = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1 ]



#%% Record the data

# Create an empty list we will append our data to
gaze_data_buffer = []

# Initialise the trigger
trigger = ''

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]

# Start recording (the buffer and trigger above must exist before we subscribe)
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

for trial in Trials:

    ### Present the fixation
    win.flip() # we flip to clean the window

        
    fixation.draw()
    win.flip()
    trigger = 'Fixation'
    core.wait(1)  # wait for 1 second


    ### Present the cue
    cues[trial].draw()
    win.flip()
    if trial == 0:
        trigger = 'Circle'
    else:
        trigger = 'Square'
    core.wait(3)  # wait for 3 seconds

    ### Wait for saccadic latency
    win.flip()
    core.wait(0.75)

    ### Present the reward
    rewards[trial].draw()
    win.flip()

    if trial == 0:
        trigger = 'Reward'
    else:
        trigger = 'NoReward'
    sounds[trial].play()
    core.wait(2)  # wait for 2 seconds

    ### ISI
    win.flip()    # we re-flip at the end to clean the window
    clock = core.Clock()
    write_buffer_to_file(gaze_data_buffer, 'DATA\\RAW\\'+ Sub +'.csv')
    while clock.getTime() < 1:
        pass
    
    ### Check for closing experiment
    keys = event.getKeys() # collect list of pressed keys
    if 'escape' in keys:
        win.close()  # close window
        Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback) # unsubscribe eyetracking
        core.quit()  # stop study
      
win.close() # close window
Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback) # unsubscribe eyetracking
core.quit() # stop study