imhr.eyetracking

@purpose: Module designed for working with eyetracking data.
@date: Created on Sat May 1 15:12:38 2019
@author: Semeon Risom

Classes

Eyelink(window, timer[, isPsychopy, subject]) Interface for the SR Research Eyelink eyetracking system.
ROI([image_path, output_path, …]) Generate regions of interest that can be used for data processing and analysis.

class imhr.eyetracking.Eyelink(window, timer[, isPsychopy, subject], **kwargs)[source]

Bases: imhr.eyetracking._eyelink.Eyelink

Interface for the SR Research Eyelink eyetracking system.

Parameters:
window : psychopy.visual.Window

PsychoPy window instance.

timer : psychopy.core.CountdownTimer

Psychopy timer instance.

isPsychopy : bool

Whether PsychoPy is being used. Default is True.

subject : int

Subject number.

**kwargs : str or None, optional

Here’s a list of available properties:

Property Description
isFlag : bool Bypass Eyelink flags (isRecording, isConnected) to run all functions without checking flags. Default is True.
isLibrary : bool Check if required packages have been installed. Default is False.
demo : bool Run demo mode, which includes region of interest highlighting and other testing methods. Default is False.
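
For illustration, a minimal sketch of passing these optional properties at initialization (assuming they are simply forwarded as keyword arguments; the values shown are arbitrary):

>>> eyetracking = imhr.eyetracking.Eyelink(window=window, timer=timer, subject=1,
...     isFlag=True, isLibrary=False, demo=False)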

Methods

calibration(self) Start calibration procedure.
connect(self[, calibration_type, …]) Connect to Eyelink.
drift_correction(self[, origin]) Starts drift correction.
finish_recording(self[, path]) Ends Eyelink recording.
gc(self, bound, min_[, max_]) Creates a gaze contingent event.
sample(self) Collects new gaze coordinates from Eyelink.
send_message(self, msg) Send message to Eyelink.
send_variable(self, variables) Send trial variable to Eyelink at the end of a trial.
set_eye_used(self, eye) Set dominant eye.
start_recording(self, trial, block) Starts recording of Eyelink.
stop_recording(self[, trial, block, variables]) Stops recording of Eyelink.

Notes

According to pylink.chm, the sequence of operations for implementing a trial is:
  1. Perform a DRIFT CORRECTION, which also serves as the pre-trial fixation target.
  2. Start recording, allowing 100 milliseconds of data to accumulate before the trial display starts.
  3. Draw the subject display, recording the time that the display appeared by placing a message in the EDF file.
  4. Loop until one of these events occurs: recording halts due to the tracker ABORT menu or an error; the maximum trial duration expires; ‘ESCAPE’ is pressed; the program is interrupted; or a button on the EyeLink button box is pressed.
  5. Add special code to handle gaze-contingent display updates.
  6. Blank the display, stop recording after an additional 100 milliseconds of data has been collected.
  7. Report the trial result, and return an appropriate error code.
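
As a rough, hedged sketch, this sequence maps onto the methods of this class approximately as follows (the loop condition, timing, and variable names are illustrative assumptions, not part of the module):

>>> eyetracking.drift_correction()                   # 1. pre-trial fixation / drift check
>>> eyetracking.start_recording(trial=1, block=1)    # 2. begin recording (~100 ms lead-in)
>>> eyetracking.send_message(msg="stimulus onset")   # 3. mark display onset in the EDF file
>>> while timer.getTime() > 0:                       # 4. loop until timeout or interrupt
...     gxy, ps, sample = eyetracking.sample()       # 5. gaze-contingent updates as needed
>>> eyetracking.stop_recording(trial=1, block=1)     # 6-7. stop recording, report trial result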

Examples

>>> eyetracking = imhr.eyetracking.Eyelink(window=window, timer=timer, subject=subject)
calibration(self)[source]

Start calibration procedure.

Returns:
isCalibration : bool

Message indicating status of calibration.

Examples

>>> eyetracking.calibration()
connect(self, calibration_type=13, automatic_calibration_pacing=1000, saccade_velocity_threshold=35, saccade_acceleration_threshold=9500, sound=True, select_parser_configuration=0, recording_parse_type='GAZE', enable_search_limits=True, ip='100.1.1.1')[source]

Connect to Eyelink.

Parameters:
ip : string

Host PC ip address.

calibration_type : int

Calibration type. Default is 13-point. [see Eyelink 1000 Plus User Manual, 3.7 Calibration]

automatic_calibration_pacing : int

Sets automatic calibration pacing: the delay in milliseconds between successive calibration or validation targets, if automatic target detection is active. [see pylink.chm]

saccade_velocity_threshold : int

Sets velocity threshold of saccade detector: usually 30 for cognitive research, 22 for pursuit and neurological work. Default is 35. Note: For EyeLink II and EyeLink 1000, select_parser_configuration should be used instead. [see EyeLink Programmer’s Guide, Section 25.9: Parser Configuration; Eyelink 1000 Plus User Manual, Section 4.3.5 Saccadic Thresholds]

saccade_acceleration_threshold : int

Sets acceleration threshold of saccade detector: usually 9500 for cognitive research, 5000 for pursuit and neurological work. Default is 9500. Note: For EyeLink II and EyeLink 1000, select_parser_configuration should be used instead. [see EyeLink Programmer’s Guide, Section 25.9: Parser Configuration; Eyelink 1000 Plus User Manual, Section 4.3.5 Saccadic Thresholds]

select_parser_configuration : int

Selects the preset standard parser setup (0) or more sensitive (1). These are equivalent to the cognitive and psychophysical configurations. Default is 0. [see EyeLink Programmer’s Guide, Section 25.9: Parser Configuration]

sound : bool

Should sound be used for calibration/validation/drift correction. Default is True.

recording_parse_type : str

Sets how velocity information for saccade detection is to be computed. Enter either ‘GAZE’ or ‘HREF’. Default is ‘GAZE’. [see Eyelink 1000 Plus User Manual, Section 4.4: File Data Types]

enable_search_limits : bool

Enables tracking of pupil to global search limits. Default is True. [see Eyelink 1000 Plus User Manual, Section 4.4: File Data Types]

Returns:
param : pandas.DataFrame

Returns dataframe of parameters for subject.

Examples

>>> param = eyetracking.connect(calibration_type=13)
drift_correction(self, origin='call')[source]

Starts drift correction. This can be done at any point after calibration, including before or after eyetracking.start_recording has been initiated.

Parameters:
origin : str

Origin of call, either manual (default) or from a gaze contingent event (gc).

Returns:
isDriftCorrection : bool

Message indicating status of drift correction.

Notes

Running drift_correction will end any active start_recording event in order to function properly. Once drift correction has occurred, it is safe to run start_recording again.

Examples

>>> eyetracking.drift_correction()
finish_recording(self, path=None)[source]

Ends Eyelink recording.

Parameters:
path : str or None

Path to save data. If None, path will be default from PsychoPy task.

Returns:
isFinished : bool

Message indicating status of Eyelink recording.

Notes

pylink.pumpDelay():
Does an unblocked delay using currentTime(). This is the preferred delay function when accurate timing is not needed. [see pylink.chm]
pylink.msecDelay():
During calls to pylink.msecDelay(), Windows is not able to handle messages. One result of this is that windows may not appear. This is the preferred delay function when accurate timing is needed. [see pylink.chm]
tracker.setOfflineMode():
Places EyeLink tracker in offline (idle) mode. Wait till the tracker has finished the mode transition. [see pylink.chm]
tracker.endRealTimeMode():
Returns the application to a priority slightly above normal, to end realtime mode. This function should execute rapidly, but there is the possibility that Windows will allow other tasks to run after this call, causing delays of 1-20 milliseconds. This function is equivalent to the C API void end_realtime_mode(void). [see pylink.chm]
tracker.receiveDataFile():
This receives a data file from the EyeLink tracker PC. Source filename and destination filename should be given. [see pylink.chm]

Examples

>>> #end recording session
>>> eyetracking.finish_recording()
gc(self, bound, min_, max_=None)[source]

Creates a gaze contingent event. This function needs to be run while recording.

Parameters:
bound : dict [str, int]:

Dictionary of the bounding box for each region of interest. Keys are each side of the bounding box and values are their corresponding coordinates in pixels.

min_ : int

Minimum duration (msec) during which gaze contingent capture collects data before allowing the task to continue.

max_ : int or None

Maximum duration (msec) before the task is forced to go into drift correction.

Examples

>>> # Collect samples within the center of the screen, for 2000 msec, 
>>> # with a max limit of 10000 msec.
>>> bound = dict(left=860, top=440, right=1060, bottom=640)
>>> eyetracking.gc(bound=bound, min_=2000, max_=10000)
sample(self)[source]

Collects new gaze coordinates from Eyelink.

Returns:
gxy : tuple

Gaze coordinates.

ps : tuple

Pupil size (area).

sample : EyeLink.getNewestSample

Eyelink newest sample.

Examples

>>> gxy, ps, sample = eyetracking.sample()
send_message(self, msg)[source]

Send message to Eyelink. This allows post-hoc processing of event markers (e.g., “stimulus onset”).

Parameters:
msg : str

Message to be received by Eyelink.

Examples

>>> eyetracking.send_message(msg="stimulus onset")
send_variable(self, variables)[source]

Send trial variable to Eyelink at the end of a trial.

Parameters:
variables : dict or None

Trial-related variables to be read by eyelink.
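
Examples

A hedged usage sketch; the variable names shown are illustrative assumptions.

>>> variables = dict(stimulus='face.png', event='stimulus')
>>> eyetracking.send_variable(variables=variables)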

set_eye_used(self, eye)[source]

Set dominant eye. This step is required for receiving gaze coordinates from Eyelink->Psychopy.

Parameters:
eye : str

Dominant eye (left, right). This will be used for outputting Eyelink gaze samples.

Examples

>>> dominant_eye = 'left'
>>> eye_used = eyetracking.set_eye_used(eye=dominant_eye)
start_recording(self, trial, block)[source]

Starts recording of Eyelink.

Parameters:
trial : str

Trial Number.

block : str

Block Number.

Returns:
isRecording : bool

Message indicating status of Eyelink recording.

Notes

tracker.beginRealTimeMode():
To ensure that no data is missed before the important part of the trial starts. The EyeLink tracker requires 10 to 30 milliseconds after the recording command to begin writing data. This extra data also allows the detection of blinks or saccades just before the trial start, allowing bad trials to be discarded in saccadic RT analysis. A “SYNCTIME” message later in the trial marks the actual zero-time in the trial’s data record for analysis. [see pylink.chm]
TrialID:
The “TRIALID” message is sent to the EDF file next. This message must be placed in the EDF file before the drift correction and before recording begins, and is critical for data analysis. The viewer will not parse any messages, events, or samples that exist in the data file prior to this message. The command identifier can be changed in the data loading preference settings. [see Data Viewer User Manual, Section 7: Protocol for EyeLink Data to Viewer Integration]
SYNCTIME:
Marks the zero-time in a trial. A number may follow, which is interpreted as the delay of the message from the actual stimulus onset. It is suggested that recording start 100 milliseconds before the display is drawn or unblanked at zero-time, so that no data at the trial start is lost. [see pylink.chm]

Examples

>>> eyetracking.start_recording(trial=1, block=1)
stop_recording(self, trial=None, block=None, variables=None)[source]

Stops recording of Eyelink. Also allows transmission of trial-level variables to Eyelink.

Parameters:
trial : int

Trial Number.

block : int

Block Number.

variables : dict or None

Dict of variables to send to eyelink (variable name, value).

Returns:
isRecording : bool

Message indicating status of Eyelink recording.

Notes

pylink.pumpDelay():
Does an unblocked delay using currentTime(). This is the preferred delay function when accurate timing is not needed. [see pylink.chm]
pylink.msecDelay():
During calls to pylink.msecDelay(), Windows is not able to handle messages. One result of this is that windows may not appear. This is the preferred delay function when accurate timing is needed. [see pylink.chm]
tracker.endRealTimeMode():
Returns the application to a priority slightly above normal, to end realtime mode. This function should execute rapidly, but there is the possibility that Windows will allow other tasks to run after this call, causing delays of 1-20 milliseconds. This function is equivalent to the C API void end_realtime_mode(void). [see pylink.chm]
TRIAL_VAR:
Lets users specify a trial variable and value for the given trial. One message should be sent for each trial condition variable and its corresponding value. If this command is used there is no need to use TRIAL_VAR_LABELS. The default command identifier can be changed in the data loading preference settings. Please note that the eye tracker can handle about 20 messages every 10 milliseconds. So be careful not to send too many messages too quickly if you have many trial condition messages to send. Add one millisecond delay between message lines if this is the case. [see pylink.chm]
TRIAL_RESULT:
Defines the end of a trial. The viewer will not parse any messages, events, or samples that exist in the data file after this message. The command identifier can be changed in the data loading preference settings. [see Data Viewer User Manual, Section 7: Protocol for EyeLink Data to Viewer Integration]

Examples

>>> variables = dict(stimulus='face.png', event='stimulus')
>>> eyetracking.stop_recording(trial=trial, block=block, variables=variables)
class imhr.eyetracking.ROI(image_path=None, output_path=None, metadata_path=None, shape='box', **kwargs)[source]

Bases: imhr.eyetracking._roi.ROI

Generate regions of interest that can be used for data processing and analysis.

Parameters:
isMultiprocessing : bool

Should the ROIs be generated using multiprocessing. Default is False.

detection : str {‘manual’, ‘haarcascade’}

How the regions of interest should be detected: either manually (manual), through the use of highlighting layers in photo-editing software, or automatically through feature detection using haarcascade classifiers from opencv. Default is manual.

image_path : str

Image directory path.

output_path : str

Path to save data.

roi_format : str {‘raw’, ‘dataviewer’, ‘both’}

Format to export ROIs. Either to ‘csv’ (raw), to Eyelink DataViewer ‘ias’ (dataviewer), or both (both). Default is both. Note: If roi_format = dataviewer, shape must be either circle, rotated, or straight.

metadata_source : str or None {‘path’, ‘embedded’}

Metadata source. If metadata is being read from a spreadsheet, metadata_source should be the path to the metadata file; if metadata is embedded within the image as a layer name, metadata_source = embedded. Default is embedded. For example:

>>> # if metadata is in PSD images
>>> metadata = 'embedded'
>>> # if metadata is an external xlsx file.
>>> metadata = 'roi/metadata.xlsx'

Although Photoshop PSD files don’t directly provide support for metadata, if each region of interest is stored as a separate layer within a PSD, the layer name can be used to store metadata. To do this, the layer name has to be written as delimited text. Our code can read this data and extract the relevant metadata. The delimiter can be ;, ,, |, tab, or space (the delimiter type must be identified when running this code using the delimiter parameter; the default is ;). Here’s an example using ; as a delimiter:

>>> imagename = "BM001"; roiname = 1; feature = "lefteye"

Note: whitespace should be avoided in each layer name, as it may cause errors during parsing.

shape : str {‘polygon’, ‘hull’, ‘circle’, ‘rotated’, ‘straight’}

Shape of machine readable boundaries for region of interest. Default is straight. polygon creates a Contour Approximation and will most closely match the original shape of the roi. hull creates a Convex Hull, which is similar to but not as complex as a Contour Approximation and will include bulges for areas that are convex. circle creates a minimum enclosing circle. Finally, both rotated and straight create a Bounding Rectangle, with the only difference being compensation for the minimum enclosing area for the box when using rotated.

roicolumn : str

The name of the label for the region of interest in your metadata. For example you may want to extract the column ‘feature’ from your metadata and use this as the label. Default is roi.

uuid : list or None

Create a unique id by combining a list of existing variables in the metadata. This is recommended if roi_format == dataviewer because of the limited variables allowed for ias files. Default is None.
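
For example, a hedged sketch combining roi_format, roicolumn, and uuid (the paths reuse the example below; the column names are assumptions about your metadata):

>>> roi = imhr.eyetracking.ROI(image_path='/dist/example/raw/', output_path='/dist/example/',
...     roi_format='dataviewer', shape='straight', roicolumn='feature',
...     uuid=['imagename', 'roiname', 'feature'])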

filetype : str {‘psd’, ‘tiff’, ‘dcm’, ‘png’, ‘bmp’, ‘jpg’}

The filetype extension of the image file. Case insensitive. Default is psd. If psd, tiff, or dcm (DICOM), the file can be read as multilayered.

**kwargs : str or None, optional

Additional properties are available to control how data is exported, how variables are named, and how images are exported:

These properties control additional core parameters for the API:

Property Description
cores : int (if isMultiprocessing == True) Number of cores to use. Default is total available cores - 1.
isLibrary : bool Check if required packages have been installed. Default is False.
isDebug : bool Allow flags to be visible. Default is False.
isDemo : bool Tests code with in-house images and metadata. Default is False.
save_data : bool Save coordinates. Default is True.
newcolumn : dict {str, str} or False Add an additional column to the metadata. This must be a dict of the form {key: value}. Default is False.
save_raw_image : bool Save images. Default is True.
append_output_name : bool or str Add appending name to all exported files (i.e. <’top_center’> IMG001_top_center.ias). Default is False.
save_contour_image : bool Save generated contours as images. Default is True.
scale : int If image is scaled during presentation, set scale. Default is 1.
offset : list [int] Center point of image, relative to screensize. Default is [960, 540].
screensize : list [int] Monitor size used for presentation. Default is [1920, 1080].
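
A hedged sketch of passing these core properties (the paths reuse the example below; the values are illustrative):

>>> roi = imhr.eyetracking.ROI(image_path='/dist/example/raw/', output_path='/dist/example/',
...     isMultiprocessing=True, cores=4, save_data=True,
...     scale=1, offset=[960, 540], screensize=[1920, 1080])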

These properties control how data is processed, including the type of haarcascade classifier used and the delimiters for metadata:

Property Description
delimiter : str {‘;’ , ‘,’ , ‘|’ , ‘tab’ , ‘space’} (if source == psd) How metadata is delimited. Default is ;.
classifiers : default or list of dict

(if detection == haarcascade) Trained classifiers to use. Default is {‘eye_tree_eyeglasses’, ‘eye’, ‘frontalface_alt_tree’, ‘frontalface_alt’, ‘frontalface_alt2’, ‘frontalface_default’, ‘fullbody’, ‘lowerbody’, ‘profileface’, ‘smile’, ‘upperbody’}. Parameters are stored here. If you want to use custom classifiers, you can pass a list of classifiers and their arguments using the following format:

>>> [{'custom_cascade': {
...     'file': 'haarcascade_eye.xml',
...     'type': 'eye',
...     'path': './haarcascade_eye.xml',
...     'minN': 5,
...     'minS': (100, 100),
...     'sF': 1.01}
... }]

You can also pass custom arguments by calling them after initiation:

>>> roi = imhr.eyetracking.ROI(detection='manual', ...)
>>> roi.default_classifiers['eye']['minNeighbors'] = 10

Here are properties specific to how images are exported after processing. The code can either use matplotlib or PIL as a backend engine:

Property Description
image_backend : str {‘matplotlib’, ‘PIL’} Backend for exporting image. Either matplotlib or PIL. Default is matplotlib.
RcParams : dict or None A dictionary object of validating functions defined and associated with rc parameters in matplotlib.RcParams. Default is None.
background_color : list Set background color (RGB) for exporting images. Default is [110, 110, 110].
dpi : int or None (if save_image == True) Quality of exported images, refers to ‘dots per inch’. Default is 300.
remove_axis : bool Remove axis from matplotlib.pyplot. Default is False.
tight_layout : bool Remove whitespace from matplotlib.pyplot. Default is False.
set_size_inches : bool Set size of matplotlib.pyplot according to screensize of ROI. Default is False.
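
A hedged sketch of the image-export options (the paths reuse the example below; the values are illustrative):

>>> roi = imhr.eyetracking.ROI(image_path='/dist/example/raw/', output_path='/dist/example/',
...     image_backend='matplotlib', dpi=300, background_color=[110, 110, 110],
...     remove_axis=True, tight_layout=True)
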
Attributes:
shape_d : str {‘ELLIPSE’, ‘FREEHAND’, ‘RECTANGLE’}

DataViewer ROI shape.

psd : psd_tools.PSDImage

Photoshop PSD/PSB file object. The file should include one layer for each region of interest.

retval, threshold : numpy.ndarray

Returns from cv2.threshold. The function applies a fixed-level thresholding to a multiple-channel array. retval provides an optimal threshold only if cv2.THRESH_OTSU is passed. threshold is an image after applying a binary threshold (cv2.THRESH_BINARY) removing all greyscale pixels < 127. The output matches the same image channel as the original image. See opencv and learnopencv for more information.

contours, hierarchy : numpy.ndarray

Returns from cv2.findContours. This function returns contours from the provided binary image (threshold). This is used here for later shape detection. contours are the detected contours, while hierarchy contains information about the image topology. See opencv for more information.

image_contours : numpy.ndarray

Returns from cv2.drawContours. This draws filled contours from the image.
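
These attributes roughly correspond to the following OpenCV calls. This is a hedged sketch for orientation only: the threshold value and contour-retrieval flags are assumptions, and the cv2.findContours return signature differs between OpenCV 3 and 4.

>>> import cv2
>>> import numpy as np
>>> gray = cv2.cvtColor(img_cv2, cv2.COLOR_BGR2GRAY)
>>> retval, threshold = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
>>> contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
>>> image_contours = cv2.drawContours(np.copy(img_cv2), contours, -1, (0, 255, 0), cv2.FILLED)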

Raises:
Exception

[description]

Exception

[description]

Methods

draw_contours(filepath[, img]) [summary]
export_data(df, path, filename[, uuid, …]) [summary]
extract_contours(image, imagename, roiname) [summary]
extract_metadata(imagename, imgtype, layer) Extract metadata for each region of interest.
finished(df[, errors]) Process bounds for all images.
format_contours(imagename, metadata, …) [summary]
format_image([image, imgtype, isRaw, …]) Resize image and reposition image, relative to screensize.
haarcascade(directory[, core, queue]) [summary]
manual_detection(directory[, core, queue]) [summary]
process() [summary]

Notes

Resources

Examples

>>> from imhr.eyetracking import ROI
>>> s = "/dist/example/raw/"; d = "/dist/example/"
>>> ROI(image_path=s, output_path=d, shape='box')
>>> img.save('/Users/mdl-admin/Desktop/roi/PIL.png') #DEBUG: save PIL
>>> cv2.imwrite('/Users/mdl-admin/Desktop/roi/cv2.png', img_cv2) #DEBUG: save cv2
>>> plt.imshow(img_np); plt.savefig('/Users/mdl-admin/Desktop/roi/matplotlib.png') #DEBUG: save matplotlib
classmethod draw_contours(filepath, img=None)[source]

[summary]

Parameters:
filepath : [type]

[description]

data : [type]

[description]

fig : [type]

[description]

source : str, optional

[description], by default ‘bounds’

classmethod export_data(df, path, filename, uuid=None, newcolumn=None, level='image')[source]

[summary]

Parameters:
df : [type]

Bounds.

path : [type]

[description]

filename : [type]

[description]

uuid : [type], optional

[description], by default None

newcolumn : [type], optional

[description], by default None

level : str {‘image’, ‘all’}

Nested order, either image or all. Default is image.

Returns:
[type]

[description]

classmethod extract_contours(image, imagename, roiname)[source]

[summary]

Parameters:
image : [type]

[description]

imagename : [type]

[description]

roiname : [type]

[description]

Returns:
[type]

[description]

Raises:
Exception

[description]

Exception

[description]

classmethod extract_metadata(imagename, imgtype, layer)[source]

Extract metadata for each region of interest.

Parameters:
imagename : [type]

[description]

imgtype : [type]

[description]

layer : [type]

[description]

Returns:
[type]

[description]

[type]

[description]

classmethod finished(df, errors=None)[source]

Process bounds for all images.

Parameters:
df : [type]

[description]

errors : [type], optional

[description], by default None

classmethod format_contours(imagename, metadata, roiname, roinumber, bounds, coords)[source]

[summary]

Parameters:
imagename : [type]

[description]

metadata : [type]

[description]

roiname : [type]

[description]

roinumber : [type]

[description]

roilabel : [type]

[description]

bounds_ : [type]

[description]

contours_ : [type]

[description]

Returns:
[type]

[description]

[type]

[description]

Raises:
Exception

[description]

classmethod format_image(image=None, imgtype='psd', isRaw=False, isPreprocessed=False, isNormal=False, isHaar=False)[source]

Resize and reposition image, relative to screensize.

Parameters:
image : psd_tools.PSDImage or None

Photoshop PSD/PSB file object. The file should include one layer for each region of interest. Default is None.

imgtype : str {‘psd’,’dcm’,’tiff’, ‘bitmap’}

Image type.

isRaw : bool, optional

If True, the image will be returned without resizing or placed on top of a background image. Default is False.

isPreprocessed : bool, optional

If True, the image will be returned with resizing and placed on top of a background image. Default is False.

Attributes:
image : PIL.Image.Image

PIL image object class.

Returns:
image, background : PIL.Image.Image

PIL image object class.
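
Examples

A hedged usage sketch, assuming a PSD file loaded with psd_tools ≥ 1.8 (older versions use PSDImage.load); the file path and variable names are illustrative.

>>> from psd_tools import PSDImage
>>> psd = PSDImage.open('roi/BM001.psd')
>>> image, background = ROI.format_image(image=psd, imgtype='psd', isPreprocessed=True)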

classmethod haarcascade(directory, core=0, queue=None)[source]

[summary]

Parameters:
directory : [type]

[description]

core : int, optional

[description], by default 0

queue : [type], optional

[description], by default None

Returns:
[type]

[description]

Raises:
Exception

[description]

classmethod manual_detection(directory, core=0, queue=None)[source]

[summary]

Parameters:
directory : list

[description]

core : int

(if isMultiprocessing) Core used for this function. Default is 0.

queue : queue.Queue

Constructor for a multiprocessing ‘first-in, first-out’ queue. Note: Queues are thread and process safe.

Returns:
[type]

[description]

classmethod process()[source]

[summary]

Returns:
[type]

[description]