meyelens.gaze
- class meyelens.gaze.ScreenPositions(screen_width, screen_height, random_points: bool = False, num_points: int = 5)[source]
Bases: object
Generate a shuffled sequence of calibration target positions on a screen.
The sequence always includes 5 fixed points:
center
top-left, top-right
bottom-left, bottom-right
Optionally, additional random points can be added within the screen bounds.
- Parameters:
screen_width (float) – Screen width in degrees (or in any coordinate unit you consistently use).
screen_height (float) – Screen height in degrees (or in any coordinate unit you consistently use).
random_points (bool, optional) – If True, add random positions in addition to the fixed ones.
num_points (int, optional) – Total number of points in the returned sequence (including the 5 fixed points). If smaller than 5, only the 5 fixed points will be used.
Notes
Positions are shuffled once at initialization.
Coordinates are returned as (x, y) where the origin is the center.
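The behavior described above could be sketched as follows. This is an illustrative stand-in, not the actual implementation: the function name `make_positions` and the exact corner ordering are assumptions; only the five fixed points, the optional in-bounds random points, the `num_points` semantics, and the one-time shuffle come from the docs.

```python
import random

def make_positions(screen_width, screen_height, random_points=False, num_points=5):
    """Sketch: five fixed calibration targets plus optional random ones,
    in a coordinate system whose origin is the screen center."""
    hw, hh = screen_width / 2, screen_height / 2
    points = [
        (0.0, 0.0),   # center
        (-hw, hh),    # top-left
        (hw, hh),     # top-right
        (-hw, -hh),   # bottom-left
        (hw, -hh),    # bottom-right
    ]
    if random_points:
        # num_points counts the 5 fixed points; fewer than 5 means fixed only
        for _ in range(max(0, num_points - 5)):
            points.append((random.uniform(-hw, hw), random.uniform(-hh, hh)))
    random.shuffle(points)  # shuffled once, as in ScreenPositions
    return points
```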
- class meyelens.gaze.GazeData(folder=None)[source]
Bases: object
Load, preprocess, and visualize gaze calibration recordings.
This class expects recordings in a folder as *.txt (or any extension) with semicolon-separated columns, including at least:
x, y: gaze coordinates
trg1: target identifier (used for plotting groups)
trg2, trg3: screen target coordinates (x, y) associated with each sample
By default, if no folder is provided, data is stored under:
~/Documents/GazeData
Notes
Missing x/y samples are linearly interpolated in both directions.
File discovery is based on a filename pattern used by your acquisition pipeline: the second dash-separated token must equal track_cal.txt.
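The filename filter and the interpolation rule described in these notes could look like the following sketch, assuming pandas is used for the tabular data (the helper names `is_calibration_file` and `interpolate_gaze` are illustrative, not the class's actual internals):

```python
import pandas as pd

def is_calibration_file(filename):
    """Sketch of the described filter: the second dash-separated
    token of the filename must equal 'track_cal.txt'."""
    tokens = filename.split("-")
    return len(tokens) > 1 and tokens[1] == "track_cal.txt"

def interpolate_gaze(df):
    """Fill missing x/y samples by linear interpolation in both
    directions, as the class docs describe."""
    df = df.copy()
    df[["x", "y"]] = df[["x", "y"]].interpolate(
        method="linear", limit_direction="both"
    )
    return df
```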
- get_last()[source]
Convenience method to load the most recent recording in the list.
- Returns:
(gaze_points, screen_positions) where each has shape (n_samples, 2).
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
Notes
This simply calls get() with i=-1 (the last element in the list).
- get(i: int = -1)[source]
Load a specific recording by index.
- Parameters:
i (int, optional) – Recording index (supports negative indexing).
- Returns:
(gaze_points, screen_positions) where each has shape (n_samples, 2).
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
- Raises:
IndexError – If i is out of range for the available recordings.
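Given the column layout described for the class, loading one recording into the documented return pair could be sketched like this (a minimal stand-in, assuming pandas for parsing; `load_recording` is an illustrative name, not the method itself):

```python
import numpy as np
import pandas as pd

def load_recording(path):
    """Sketch: parse one semicolon-separated recording into
    (gaze_points, screen_positions), each of shape (n_samples, 2)."""
    df = pd.read_csv(path, sep=";")
    # Missing x/y samples are linearly interpolated in both directions
    df[["x", "y"]] = df[["x", "y"]].interpolate(
        method="linear", limit_direction="both"
    )
    gaze_points = df[["x", "y"]].to_numpy()
    screen_positions = df[["trg2", "trg3"]].to_numpy()
    return gaze_points, screen_positions
```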
- get_all()[source]
Load and concatenate all recordings.
- Returns:
(all_gaze_points, all_screen_positions) concatenated across recordings. Each has shape (n_total_samples, 2).
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
Notes
Uses the same interpolation strategy as get().
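The concatenation step could be sketched with NumPy (illustrative helper, assuming each recording is already a (gaze, screen) pair of (n_samples, 2) arrays):

```python
import numpy as np

def concat_recordings(recordings):
    """Sketch: stack per-recording (gaze, screen) pairs into
    (all_gaze_points, all_screen_positions)."""
    all_gaze = np.concatenate([g for g, _ in recordings], axis=0)
    all_screen = np.concatenate([s for _, s in recordings], axis=0)
    return all_gaze, all_screen
```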
- plot(i: int = -1, skip_samples: int = 20) → None[source]
Plot gaze traces grouped by target identifier for a given recording.
- Parameters:
i (int, optional) – Recording index (supports negative indexing).
skip_samples (int, optional) – Number of samples to skip at the beginning of each target segment. This is useful to ignore the initial transient after a target switch.
- Return type:
None
Notes
Uses trg1 to group samples by target ID.
Plots y on the x-axis and x on the y-axis (matching your current convention), then inverts the y-axis.
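The plotting convention in these notes could be sketched as follows, assuming matplotlib and a pandas frame with x, y, and trg1 columns (`plot_by_target` is an illustrative name, not the method itself):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

def plot_by_target(df, skip_samples=20):
    """Sketch: group samples by trg1, drop the first skip_samples of
    each group (target-switch transient), plot y on the x-axis and
    x on the y-axis, then invert the y-axis."""
    fig, ax = plt.subplots()
    for target_id, group in df.groupby("trg1"):
        seg = group.iloc[skip_samples:]
        ax.plot(seg["y"], seg["x"], label=f"target {target_id}")
    ax.invert_yaxis()
    ax.legend()
    return fig, ax
```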
- class meyelens.gaze.GazeModelPoly[source]
Bases: object
Polynomial regression gaze calibration model.
This model learns a mapping from raw gaze coordinates (e.g., pupil/eye coordinates) to screen target coordinates using polynomial feature expansion and linear regression.
Workflow
Call train() with paired (gaze_points, screen_positions).
Call predict() to map new gaze points to calibrated screen positions.
Optionally call save()/load() to persist the trained model.
Notes
Both inputs and targets are standardized using sklearn.preprocessing.StandardScaler.
Training metrics (MSE and R²) are printed to stdout (no logging).
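The pipeline these notes describe (standardize both sides, polynomial feature expansion, linear regression, metrics in standardized space) could be sketched as below. This is a stand-in under those stated assumptions, not the class's actual code; the name `PolyCalibration` is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

class PolyCalibration:
    """Sketch: standardize inputs and targets, expand gaze features
    polynomially, fit a linear regression."""

    def train(self, gaze_points, screen_positions, degree=2):
        self.scaler_features = StandardScaler().fit(gaze_points)
        self.scaler_target = StandardScaler().fit(screen_positions)
        X = self.scaler_features.transform(gaze_points)
        y = self.scaler_target.transform(screen_positions)
        self.poly_features = PolynomialFeatures(degree=degree)
        X_poly = self.poly_features.fit_transform(X)
        self.calibration_model = LinearRegression().fit(X_poly, y)
        pred = self.calibration_model.predict(X_poly)
        # Metrics are reported in standardized units, as the notes state
        print(f"MSE={mean_squared_error(y, pred):.4f}  "
              f"R2={r2_score(y, pred):.4f}")

    def predict(self, new_eye_position):
        X = self.scaler_features.transform(new_eye_position)
        X_poly = self.poly_features.transform(X)
        pred = self.calibration_model.predict(X_poly)
        # Back to the original screen coordinate space
        return self.scaler_target.inverse_transform(pred)
```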
- train(gaze_points, screen_positions, degree: int = 2) → None[source]
Train the calibration model.
- Parameters:
gaze_points (numpy.ndarray) – Raw gaze points of shape (n_samples, 2).
screen_positions (numpy.ndarray) – Screen target positions of shape (n_samples, 2).
degree (int, optional) – Polynomial degree used by sklearn.preprocessing.PolynomialFeatures.
- Return type:
None
Notes
Training is performed in standardized space (both X and y). Reported MSE and R² are therefore in standardized units.
- predict(new_eye_position)[source]
Predict calibrated screen coordinates for new gaze points.
- Parameters:
new_eye_position (numpy.ndarray) – New gaze points of shape (n_samples, 2).
- Returns:
Predicted screen positions of shape (n_samples, 2), returned in the original (inverse-transformed) coordinate space.
- Return type:
numpy.ndarray
- Raises:
ValueError – If the model has not been trained (or loaded) yet.
- save(model_path: str | None = None) → None[source]
Save the trained model and preprocessing objects to disk using joblib.
- Parameters:
model_path (str or None, optional) – Output path. If None, defaults to gaze_models/gazemodel_poly.pkl and creates the gaze_models folder if needed.
- Return type:
None
Notes
The saved bundle includes:
calibration_model
scaler_features
scaler_target
poly_features
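Persisting the four objects listed above with joblib, including the default path and folder creation described in the Parameters, could look like this sketch (the helpers `save_bundle`/`load_bundle` are illustrative names mirroring save()/load(), not the actual methods):

```python
from pathlib import Path

import joblib

def save_bundle(model, model_path=None):
    """Sketch: dump the four documented objects to disk, defaulting to
    gaze_models/gazemodel_poly.pkl and creating the folder if needed."""
    path = Path(model_path or "gaze_models/gazemodel_poly.pkl")
    path.parent.mkdir(parents=True, exist_ok=True)
    joblib.dump(
        {
            "calibration_model": model.calibration_model,
            "scaler_features": model.scaler_features,
            "scaler_target": model.scaler_target,
            "poly_features": model.poly_features,
        },
        path,
    )

def load_bundle(model_path=None):
    """Sketch: read the bundle back from disk."""
    return joblib.load(model_path or "gaze_models/gazemodel_poly.pkl")
```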
- static load(model_path: str | None = None)[source]
Load a saved model bundle from disk.
- Parameters:
model_path (str or None, optional) – Path to the saved .pkl. If None, defaults to gaze_models/gazemodel_poly.pkl.
- Returns:
An instance populated with the loaded model and preprocessing objects.
- Return type: