BTG 2024
Bridging the Technological Gap Workshop
Hello hello!!! This page has been created to provide support and resources for the tutorial that will take place during the Bridging the Technological Gap Workshop.
What will you learn?
This workshop will focus on using Python to run and analyze an eye-tracking study.
We will focus on:
How to implement eye-tracking designs in Python
How to interact with an eye-tracker via Python
How to extract and visualize meaningful eye-tracking measures from the raw data
What will you need?
Python
In this tutorial, our primary tool will be Python!! There are lots of ways to install Python. In our tutorials we usually recommend Miniconda, but for this workshop the suggested way to install Python is Anaconda.
You might ask…. then which installation should I follow? Well, it doesn't really matter! Miniconda is a minimal installation of Anaconda: it lacks the GUI but has all the main features. So follow whichever one you prefer!
Once you have it installed, we need a few more things. For the Gaze Tracking & Pupillometry Workshop (the part we will be hosting) we will need some specific libraries and files. We have tried our best to make everything as simple as possible:
Libraries
We will be working with a conda environment (a self-contained directory that contains a specific collection of Python packages and dependencies, allowing you to manage different project requirements separately). To create this environment and install all the necessary libraries, all you need is this file:
Once you have downloaded the file, simply open the Anaconda/Miniconda terminal, type conda env create -f followed by a space, then drag and drop the downloaded file onto the terminal. This will paste the filename with its absolute path. In my case it looked something like this:
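conda env create -f C:\Users\tomma\Downloads\psychopy.yml

(Here we are assuming the environment file is called psychopy.yml and ended up in the Downloads folder; just use whatever absolute path the drag and drop pastes for you.)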
Now you will be asked to confirm a few things (by pressing Y), and after a while of downloading and installing you will have your new workshop environment called psychopy!
Now you should see a shortcut in your start menu called Spyder (psychopy); just click on it to open Spyder in our newly created environment. If you don't see it, reopen the Anaconda/Miniconda terminal, activate your new environment by typing conda activate psychopy, and then simply type spyder.
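In other words, once the environment exists you can always start Spyder from the terminal with these two commands:

conda activate psychopy
spyder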
Files
We also need some files if you want to run the examples with us. Here you can download a zip file with everything you need:
Once downloaded, simply extract the file by unzipping it. For our workshop we will work together in a folder that should look like this:
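Roughly, based on the paths used in the code below, the folder should contain something like this (only the files we reference are listed):

Files
├── EXP
│   └── Stimuli
│       ├── fixation.png
│       └── Video60.mp4
├── DATA
│   └── RAW
└── Calibration
    └── CalibrationStim
        ├── Attentiongrabber.mp4
        ├── audio.wav
        └── (calibration images, *.png)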
If you have a similar folder… you are ready to go!!!!
Q&A
We received many interesting questions during the workshop! We’ll try to add new tutorials and pages to address your queries. However, since this website is a side project, it may take some time.
In the meantime, we’ll share our answers here. They may be less precise and exhaustive than what we’ll have in the future, but they should still provide a good idea of how to approach things.
Videos
We received several questions about working with videos and PsychoPy while doing eye-tracking. It can be quite tricky, but here are some tips:
Make sure you're using the right codec (an H.264-encoded .mp4 usually plays well in PsychoPy).
If you need to change the codec of the video, you can re-encode it with a tool like Handbrake (remember to set a constant framerate in the video options).
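If you prefer the command line, the same re-encode can be done with HandBrake's CLI. Here is a minimal sketch (file names are placeholders; check HandBrakeCLI --help for the options available in your version):

HandBrakeCLI -i original_video.mp4 -o Video60.mp4 -e x264 -r 60 --cfr

-e x264 selects the H.264 encoder, while -r 60 --cfr forces a constant framerate of 60 fps.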
Below, you’ll find a code example that adapts our Create an eye-tracking experiment tutorial to work with a video file. The main differences are:
We’re showing a video after the fixation.
We're saving triggers to our eye-tracking data, and we're also saving the frame index of the video at each gaze sample (as a continuous numeric column).
import os
import glob
import pandas as pd
import numpy as np

# Import some libraries from PsychoPy
from psychopy import core, event, visual, prefs
prefs.hardware['audioLib'] = ['PTB']
from psychopy import sound

import tobii_research as tr

#%% Functions

# This will be called every time there is new gaze data
def gaze_data_callback(gaze_data):
    global trigger
    global gaze_data_buffer
    global winsize
    global frame_indx

    # Extract the data we are interested in
    t = gaze_data.system_time_stamp / 1000.0
    lx = gaze_data.left_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ly = winsize[1] - gaze_data.left_eye.gaze_point.position_on_display_area[1] * winsize[1]
    lp = gaze_data.left_eye.pupil.diameter
    lv = gaze_data.left_eye.gaze_point.validity
    rx = gaze_data.right_eye.gaze_point.position_on_display_area[0] * winsize[0]
    ry = winsize[1] - gaze_data.right_eye.gaze_point.position_on_display_area[1] * winsize[1]
    rp = gaze_data.right_eye.pupil.diameter
    rv = gaze_data.right_eye.gaze_point.validity

    # Add gaze data to the buffer and reset the trigger
    gaze_data_buffer.append((t, lx, ly, lp, lv, rx, ry, rp, rv, trigger, frame_indx))
    trigger = ''

def write_buffer_to_file(buffer, output_path):
    # Make a copy of the buffer and clear it
    buffer_copy = buffer[:]
    buffer.clear()

    # Define column names
    columns = ['time', 'L_X', 'L_Y', 'L_P', 'L_V',
               'R_X', 'R_Y', 'R_P', 'R_V', 'Event', 'FrameIndex']

    # Convert buffer to DataFrame
    out = pd.DataFrame(buffer_copy, columns=columns)

    # Check if the file already exists
    file_exists = os.path.isfile(output_path)

    # Append the DataFrame to a CSV file (write the header only the first time)
    out.to_csv(output_path, mode='a', index=False, header=not file_exists)

#%% Load and prepare stimuli
os.chdir(r'C:\Users\tomma\Desktop\EyeTracking\Files')

# Window size
winsize = (960, 540)

# Create a window
win = visual.Window(size=winsize, fullscr=False, units="pix", screen=0)

# Load image and video
fixation = visual.ImageStim(win, image='EXP\\Stimuli\\fixation.png', size=(200, 200))
Video = visual.MovieStim(win, filename='EXP\\Stimuli\\Video60.mp4', loop=False,
                         size=[600, 380], volume=0.4, autoStart=True)

# Define the trigger and frame index variables to pass to the gaze_data_callback
trigger = ''
frame_indx = np.nan

#%% Record the data

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]

# Start recording
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

# Create an empty list to append the data to
gaze_data_buffer = []

Trials_number = 10
for trial in range(Trials_number):

    ### Present the fixation
    win.flip()  # we flip to clean the window
    fixation.draw()
    win.flip()
    trigger = 'Fixation'
    core.wait(1)  # wait for 1 second

    ### Present the video
    Video.play()
    trigger = 'Video'
    while not Video.isFinished:
        # Draw the current video frame and flip the window
        Video.draw()
        win.flip()
        # Save which frame was just shown to the eye-tracking data
        frame_indx = Video.frameIndex

    Video.stop()
    frame_indx = np.nan  # reset the frame index once the video is over
    win.flip()

    ### ISI
    win.flip()  # we re-flip at the end to clean the window
    clock = core.Clock()
    write_buffer_to_file(gaze_data_buffer, 'DATA\\RAW\\Test.csv')
    while clock.getTime() < 1:
        pass

    ### Check for closing experiment
    keys = event.getKeys()  # collect list of pressed keys
    if 'escape' in keys:
        win.close()  # close window
        Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)  # unsubscribe eyetracking
        core.quit()  # stop study

# Close the window, unsubscribe from the eye-tracker and stop the study
win.close()
Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)
core.quit()
Calibration
We received a question about calibration: how can you change the focus time that the eye-tracker uses to record samples for each calibration point? Luckily, the function from the psychopy_tobii_infant repository accepts an additional argument that specifies how long we want the focus time to be (default = 0.5 s). Thus, you can simply run it with a different value.
Below we adapted the example from Calibrating eye-tracking by increasing focus_time to 2 s. You can increase or decrease it based on your needs!!
import os
import glob
from psychopy import visual, sound

# Import psychopy_tobii_infant
os.chdir(r"C:\Users\tomma\Desktop\EyeTracking\Files\Calibration")
from psychopy_tobii_infant import TobiiInfantController

#%% Window and stimuli
winsize = [1920, 1080]
win = visual.Window(winsize, fullscr=True, allowGUI=False, screen=1,
                    color="#a6a6a6", units='pix')

# Visual stimuli used during calibration
CALISTIMS = glob.glob("CalibrationStim\\*.png")

# Video used as attention grabber
VideoGrabber = visual.MovieStim(win, "CalibrationStim\\Attentiongrabber.mp4",
                                loop=True, size=[800, 450], volume=0.4, units='pix')

# Sound used during calibration
Sound = sound.Sound("CalibrationStim\\audio.wav")

#%% Connect to the eye-tracker
controller = TobiiInfantController(win)

#%% Center face - screen
# Set the attention grabber playing
VideoGrabber.setAutoDraw(True)
VideoGrabber.play()

# Show the relative position of the subject to the eyetracker
controller.show_status()

# Stop the attention grabber
VideoGrabber.setAutoDraw(False)
VideoGrabber.stop()

#%% Calibration
# Define the calibration points (normalized coordinates, converted to pixels)
CALINORMP = [(-0.4, 0.4), (-0.4, -0.4), (0.0, 0.0), (0.4, 0.4), (0.4, -0.4)]
CALIPOINTS = [(x * winsize[0], y * winsize[1]) for x, y in CALINORMP]

# Run the calibration with a 2 s focus time instead of the default 0.5 s
success = controller.run_calibration(CALIPOINTS, CALISTIMS, audio=Sound, focus_time=2)
win.flip()