meyelens package
Subpackages
Submodules
meyelens.camera module
camera.py
A small OpenCV-based camera wrapper with:
Optional calibration loading from a TOML file.
Optional frame undistortion (requires calibration).
Fixed resolution / framerate configuration.
Auto-exposure or manual exposure control.
Interactive ROI selection and optional cropping.
A simple preview window for quick diagnostics.
This module is written to be friendly to Sphinx autodoc. If you use NumPy-style
docstrings, enable sphinx.ext.napoleon in your conf.py.
Notes
OpenCV camera properties (auto-exposure, exposure units, gain, FPS) can behave differently across operating systems, backends (DirectShow/MSMF/V4L2), and camera drivers. The setters in this class attempt to apply requested values, but your hardware may ignore some properties.
Dependencies
opencv-python
numpy
toml
Example
>>> import cv2
>>> from meyelens.camera import Camera
>>> with Camera(camera_index=0, resolution=(640, 480), auto_exposure=True) as cam:
...     frame = cam.get_frame()
...     if frame is not None:
...         cam.show(frame, "One frame")
...         cv2.waitKey(0)
...         cv2.destroyAllWindows()
- class meyelens.camera.Camera(camera_index: int = 0, calibration_file: str | Path = 'camera_calibration.toml', undistort: bool = False, exposure: float = 0, framerate: float = 30, resolution: Tuple[int, int] = (640, 480), auto_exposure: bool = True, crop: Tuple[int, int, int, int] | None = None)[source]
Bases: object

High-level wrapper around cv2.VideoCapture.

- Parameters:
camera_index – Index passed to cv2.VideoCapture. Common values are 0 or 1. Some systems support special values (e.g. -1), but this is backend-dependent.
calibration_file – Path to a TOML file containing camera calibration parameters. Expected keys: camera_matrix (3x3 array-like) and distortion_coefficients (array-like, e.g. 1x5 or 1x8).
undistort – If True and calibration parameters are available, frames are undistorted on read.
exposure – Manual exposure value passed to OpenCV when auto-exposure is disabled. The numeric meaning is backend/driver-dependent.
framerate – Requested camera FPS. Note: many cameras ignore this, or it depends on resolution/exposure.
resolution – Requested camera resolution as (width, height).
auto_exposure – If True, attempt to enable auto-exposure; if False, attempt to disable auto-exposure and apply manual exposure/gain settings.
crop – Optional crop rectangle stored as (top, left, height, width). If provided, frames returned by get_frame() are cropped accordingly.
- cap
The underlying cv2.VideoCapture.
- camera_matrix
Loaded camera intrinsic matrix (or None if not available).
- dist_coeffs
Loaded distortion coefficients (or None if not available).
- crop
Crop rectangle in (top, left, height, width) format, or None.
- Raises:
RuntimeError – If the camera cannot be opened.
- load_calibration(calibration_file: str | Path) None[source]
Load camera calibration from a TOML file.
- Parameters:
calibration_file – TOML file path. Must contain at least the camera_matrix and distortion_coefficients keys.
Notes
If loading fails for any reason, the camera continues operating without calibration.
- set_resolution(resolution: Tuple[int, int]) None[source]
Attempt to set the capture resolution.
- Parameters:
resolution –
(width, height).
- set_framerate(framerate: float) None[source]
Attempt to set the capture framerate.
- Parameters:
framerate – Requested FPS.
- set_auto_exposure(enabled: bool) None[source]
Attempt to enable/disable auto-exposure.
- Parameters:
enabled – If True, attempt to enable auto-exposure. If False, attempt to disable auto-exposure and apply manual exposure settings.
Notes
OpenCV uses different conventions depending on backend:
Some backends expect CAP_PROP_AUTO_EXPOSURE to be 0.25 for manual and 0.75 for auto (common on V4L2); others accept 0/1.
This method tries a reasonable combination of these conventions while preserving the requested intent.
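The backend-dependent convention described above can be sketched as pure selection logic; a minimal illustration (the helper name is hypothetical, not part of the package), which would then be passed to `cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, value)`:

```python
def auto_exposure_value(enabled: bool, v4l2_style: bool) -> float:
    """Pick a CAP_PROP_AUTO_EXPOSURE value for the given backend convention."""
    if v4l2_style:
        # V4L2 commonly maps 0.75 -> auto, 0.25 -> manual.
        return 0.75 if enabled else 0.25
    # Other backends accept plain 1 (auto) / 0 (manual).
    return 1.0 if enabled else 0.0
```

In practice a wrapper may simply try both conventions, since an unsupported value is usually ignored by the driver.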
- set_exposure(exposure: float) bool[source]
Attempt to set manual exposure.
- Parameters:
exposure – Manual exposure value passed to OpenCV.
- Returns:
True if the camera is open and setting the property was attempted, False otherwise.
- Return type:
bool
Notes
Many drivers require auto-exposure to be disabled for this to take effect.
- get_frame(flip_vertical: bool = True, apply_crop: bool = True) numpy.ndarray | None[source]
Capture a frame.
- Parameters:
flip_vertical – If True, flip the frame vertically (OpenCV flipCode=0), matching the original behavior.
apply_crop – If True and crop is set, crop the returned frame.
- Returns:
BGR image (H x W x 3) if successful, otherwise None.
- Return type:
Optional[numpy.ndarray]
- static show(frame: numpy.ndarray, name: str = 'Frame') None[source]
Display a frame in an OpenCV window.
- Parameters:
frame – Frame to display.
name – Window name.
- wait_key(key: str = 'q', delay_ms: int = 1) bool[source]
Check whether a given key was pressed in the last OpenCV event loop iteration.
- Parameters:
key – Single-character key to detect (e.g. "q").
delay_ms – Delay in milliseconds passed to cv2.waitKey().
- Returns:
True if the key was pressed, else False.
- Return type:
bool
- preview(window_name: str = 'Camera Preview') None[source]
Open a live preview window with a framerate readout.
Controls
ESC: exit preview
‘o’: increase exposure by +1 (only meaningful if manual exposure is supported)
‘p’: decrease exposure by -1
- Parameters:
window_name – Name of the OpenCV window.
- close() None[source]
Release camera resources and close OpenCV windows.
Notes
Calling cv2.destroyAllWindows() is global (it closes all OpenCV windows), so if you manage multiple windows externally you may prefer to destroy specific windows yourself.
- select_roi(window_name: str = 'Select ROI') None[source]
Interactively select a rectangular ROI and store it in crop.

Workflow
Drag with the left mouse button to draw a rectangle.
Press 's' to save the selection.
Press 'r' to reset the selection.
Press ESC to exit without changing crop.

The selection is stored as (top, left, height, width) and is applied by get_frame() when apply_crop=True.
- Parameters:
window_name – OpenCV window name used for ROI selection.
meyelens.fileio module
- class meyelens.fileio.FileWriter(path_to_file, filename: str = '', append: bool = False, sep: str = ';')[source]
Bases: object

Simple synchronous text file writer.

This class creates a timestamped .txt file and exposes convenience methods to write either a single string (one line) or a list of values separated by a custom delimiter.

Notes
This writer is synchronous: each call writes directly to disk.
The filename is always timestamped to reduce accidental overwrites.
The file is opened immediately on initialization and must be closed by calling close().
- write(stringa: str) None[source]
Write a single line to the file.
- Parameters:
stringa (str) – Line content. A newline character is automatically appended.
- write_sv(lista) None[source]
Write a list of values as a separator-joined line.
- Parameters:
lista (iterable) – Values to serialize. Each element is converted to
str. The resulting line is written with a trailing newline.
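The write_sv() behavior described above amounts to joining str-converted values with the separator and appending a newline; a minimal sketch of that serialization (the helper name is illustrative, not part of the package):

```python
def to_sv_line(values, sep: str = ";") -> str:
    """Convert an iterable of values to a separator-joined line with a trailing newline."""
    return sep.join(str(v) for v in values) + "\n"

# Example: one data row for a pupillometry log
line = to_sv_line([0.033, 120, 64, 850.0])
# line == "0.033;120;64;850.0\n"
```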
- class meyelens.fileio.BufferedFileWriter(path_to_file, filename: str = '', buffer_size: int = 100, metadata=None, headers=None, sep: str = ';')[source]
Bases: object

Buffered asynchronous text file writer.

This writer uses an in-memory queue.Queue as a buffer and a background thread to flush lines to disk. This is useful for time-critical data acquisition loops where direct disk writes would introduce latency.

The file format is:
optional metadata lines (prefixed with #)
a header row (separator-joined)
data rows (one per buffered entry)

Notes
The background thread is started automatically at initialization.
Call close() to stop the thread and ensure all queued data is written.
If the buffer is full, new entries are discarded and a print warning is emitted (no logging is used).
- write(string: str) None[source]
Queue a pre-formatted line for writing.
- Parameters:
string (str) – The line to write (without a trailing newline). A newline will be appended by the writer thread.
Notes
If the buffer is full, the value is discarded and a warning is printed.
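The queue-plus-background-thread pattern described above can be illustrated in a few lines; a minimal self-contained sketch of the same design (class and method names are illustrative, not the package's actual implementation):

```python
import queue
import threading

class BufferedLineWriter:
    """Illustrative queue-plus-thread writer, mirroring the buffered pattern above."""

    def __init__(self, path, buffer_size=100):
        self._q = queue.Queue(maxsize=buffer_size)
        self._stop = threading.Event()
        self._file = open(path, "w")
        self._thread = threading.Thread(target=self._flush_loop, daemon=True)
        self._thread.start()

    def write(self, line: str) -> None:
        try:
            self._q.put_nowait(line)
        except queue.Full:
            # Matches the documented behavior: discard and warn via print.
            print("warning: buffer full, line discarded")

    def _flush_loop(self) -> None:
        # Keep draining until stop is requested AND the queue is empty.
        while not self._stop.is_set() or not self._q.empty():
            try:
                line = self._q.get(timeout=0.1)
            except queue.Empty:
                continue
            self._file.write(line + "\n")

    def close(self) -> None:
        self._stop.set()
        self._thread.join()
        self._file.close()
```

The key design point is that the acquisition loop only pays the cost of a queue insert; disk latency is absorbed by the background thread.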
meyelens.gaze module
- class meyelens.gaze.ScreenPositions(screen_width, screen_height, random_points: bool = False, num_points: int = 5)[source]
Bases: object

Generate a shuffled sequence of calibration target positions on a screen.
The sequence always includes 5 fixed points:
center
top-left, top-right
bottom-left, bottom-right
Optionally, additional random points can be added within the screen bounds.
- Parameters:
screen_width (float) – Screen width in degrees (or in any coordinate unit you consistently use).
screen_height (float) – Screen height in degrees (or in any coordinate unit you consistently use).
random_points (bool, optional) – If True, add random positions in addition to the fixed ones.
num_points (int, optional) – Total number of points in the returned sequence (including the 5 fixed points). If smaller than 5, only the 5 fixed points are used.
Notes
Positions are shuffled once at initialization.
Coordinates are returned as (x, y) where the origin is the center.
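The five fixed targets plus optional random padding described above can be sketched as follows; an illustration assuming center-origin (x, y) coordinates as documented (the function name is hypothetical):

```python
import random

def make_positions(width, height, random_points=False, num_points=5):
    """Five fixed targets (center + four corners), optionally padded with random ones."""
    w2, h2 = width / 2, height / 2
    points = [(0, 0), (-w2, h2), (w2, h2), (-w2, -h2), (w2, -h2)]
    if random_points:
        while len(points) < num_points:
            points.append((random.uniform(-w2, w2), random.uniform(-h2, h2)))
    random.shuffle(points)  # shuffled once, as documented
    return points
```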
- class meyelens.gaze.GazeData(folder=None)[source]
Bases: object

Load, preprocess, and visualize gaze calibration recordings.

This class expects recordings in a folder as *.txt (or any extension) with semicolon-separated columns, including at least:
x, y: gaze coordinates
trg1: target identifier (used for plotting groups)
trg2, trg3: screen target coordinates (x, y) associated with each sample

By default, if no folder is provided, data is stored under ~/Documents/GazeData.

Notes
Missing x/y samples are linearly interpolated in both directions.
File discovery is based on a filename pattern used by the acquisition pipeline: the second dash-separated token must equal track_cal.txt.
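The bidirectional linear interpolation of missing x/y samples mentioned above can be done with pandas; a small sketch (column names follow the documented file format; the exact call used internally is an assumption):

```python
import numpy as np
import pandas as pd

# Hypothetical gaze trace with dropped samples (NaN)
df = pd.DataFrame({"x": [np.nan, 1.0, np.nan, 3.0],
                   "y": [0.0, np.nan, np.nan, 6.0]})

# Linear interpolation in both directions fills interior gaps and edge NaNs
df[["x", "y"]] = df[["x", "y"]].interpolate(method="linear", limit_direction="both")
```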
- get_last()[source]
Convenience method to load the most recent recording in the list.
- Returns:
(gaze_points, screen_positions), where each has shape (n_samples, 2).
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
Notes
This simply calls get() with i=-1 (the last element in the list).
- get(i: int = -1)[source]
Load a specific recording by index.
- Parameters:
i (int, optional) – Recording index (supports negative indexing).
- Returns:
(gaze_points, screen_positions), where each has shape (n_samples, 2).
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
- Raises:
IndexError – If i is out of range for the available recordings.
- get_all()[source]
Load and concatenate all recordings.
- Returns:
(all_gaze_points, all_screen_positions), concatenated across recordings. Each has shape (n_total_samples, 2).
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
Notes
Uses the same interpolation strategy as
get().
- plot(i: int = -1, skip_samples: int = 20) None[source]
Plot gaze traces grouped by target identifier for a given recording.
- Parameters:
i (int, optional) – Recording index (supports negative indexing).
skip_samples (int, optional) – Number of samples to skip at the beginning of each target segment. This is useful to ignore the initial transient after a target switch.
- Return type:
None
Notes
Uses trg1 to group samples by target ID.
Plots y on the x-axis and x on the y-axis (matching the current convention), then inverts the y-axis.
- class meyelens.gaze.GazeModelPoly[source]
Bases: object

Polynomial regression gaze calibration model.
This model learns a mapping from raw gaze coordinates (e.g., pupil/eye coordinates) to screen target coordinates using polynomial feature expansion and linear regression.
Workflow
Call train() with paired (gaze_points, screen_positions).
Call predict() to map new gaze points to calibrated screen positions.
Optionally call save() / load() to persist the trained model.
Notes
Both inputs and targets are standardized using sklearn.preprocessing.StandardScaler.
Training metrics (MSE and R²) are printed to stdout (no logging).
- train(gaze_points, screen_positions, degree: int = 2) None[source]
Train the calibration model.
- Parameters:
gaze_points (numpy.ndarray) – Raw gaze points of shape (n_samples, 2).
screen_positions (numpy.ndarray) – Screen target positions of shape (n_samples, 2).
degree (int, optional) – Polynomial degree used by
sklearn.preprocessing.PolynomialFeatures.
- Return type:
None
Notes
Training is performed in standardized space (both X and y). Reported MSE and R² are therefore in standardized units.
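The standardize-expand-regress pipeline described above can be illustrated without sklearn; a numpy-only sketch of the same idea (standardization, degree-2 feature expansion, least squares, inverse transform), not the class's actual implementation:

```python
import numpy as np

def _features2(X):
    """Degree-2 polynomial features for 2D input: 1, x1, x2, x1^2, x1*x2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

def fit_poly2(gaze, screen):
    """Fit a degree-2 map from standardized gaze to standardized screen coordinates."""
    gm, gs = gaze.mean(0), gaze.std(0)
    sm, ss = screen.mean(0), screen.std(0)
    F = _features2((gaze - gm) / gs)
    W, *_ = np.linalg.lstsq(F, (screen - sm) / ss, rcond=None)
    return W, (gm, gs, sm, ss)

def predict_poly2(W, scalers, gaze):
    """Map new gaze points back to the original screen coordinate space."""
    gm, gs, sm, ss = scalers
    F = _features2((gaze - gm) / gs)
    return F @ W * ss + sm  # inverse-transform of the standardized prediction
```

Because training happens in standardized space, error metrics computed there (as the class does for MSE and R²) are in standardized units.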
- predict(new_eye_position)[source]
Predict calibrated screen coordinates for new gaze points.
- Parameters:
new_eye_position (numpy.ndarray) – New gaze points of shape (n_samples, 2).
- Returns:
Predicted screen positions of shape (n_samples, 2), returned in the original (inverse-transformed) coordinate space.
- Return type:
numpy.ndarray
- Raises:
ValueError – If the model has not been trained (or loaded) yet.
- save(model_path: str | None = None) None[source]
Save the trained model and preprocessing objects to disk using joblib.
- Parameters:
model_path (str or None, optional) – Output path. If None, defaults to gaze_models/gazemodel_poly.pkl and creates the gaze_models folder if needed.
- Return type:
None
Notes
The saved bundle includes:
calibration_model
scaler_features
scaler_target
poly_features
- static load(model_path: str | None = None)[source]
Load a saved model bundle from disk.
- Parameters:
model_path (str or None, optional) – Path to the saved .pkl. If None, defaults to gaze_models/gazemodel_poly.pkl.
- Returns:
An instance populated with the loaded model and preprocessing objects.
- Return type:
GazeModelPoly
meyelens.meye module
- class meyelens.meye.Meye(model=None)[source]
Bases: object

Pupil segmentation and basic shape extraction using a pre-trained neural network.

This class loads a Keras/TensorFlow segmentation model and exposes a predict() method that:
converts the input frame to grayscale (if needed)
resizes it to the model input size (hardcoded to 128x128 in this implementation)
runs inference to obtain a pupil mask
optionally performs morphological post-processing to isolate the pupil region
optionally fits an ellipse to estimate major/minor diameters and orientation
- model_path
Path to the Keras model file used for inference.
- Type:
str or pathlib.Path
- model
Loaded Keras model.
- Type:
tensorflow.keras.Model
- requiredFrameSize
Expected model input frame size (height, width) derived from the model input. Note: this implementation still resizes to 128x128 in predict().
- Type:
tuple[int, int]
- centroid
Centroid (row, col) of the largest detected pupil region after post-processing. Set to (np.nan, np.nan) when no pupil is found.
- Type:
tuple[float, float] or float
- pupil_size
Number of non-zero pixels in the resized pupil mask (in the original image size).
- Type:
float
- major_diameter
Major axis length from an ellipse fit (in pixels), if available.
- Type:
float
- minor_diameter
Minor axis length from an ellipse fit (in pixels), if available.
- Type:
float
- orientation
Ellipse orientation angle (degrees), if available.
- Type:
float
Notes
This class prints GPU availability on initialization (no logging).
The model is assumed to return two outputs (mask, info). Only mask is used.
- Coordinate conventions:
The centroid from skimage.measure.regionprops() is (row, col).
The recorders write the centroid as (x=col, y=row) by swapping indices.
- predict(img, post_proc: bool = True, morph: bool = True, fill_ellipse: bool = False)[source]
Predict a pupil mask and centroid from an input image.
- Parameters:
img (numpy.ndarray) – Input frame, grayscale (H, W) or BGR (H, W, 3).
post_proc (bool, optional) – If True, apply morphProcessing() to binarize, keep the largest connected component, and perform morphological closing. If False, the raw network output is used and the centroid is set to (0, 0).
morph (bool, optional) – If True, attempt ellipse fitting on the post-processed mask to estimate major/minor diameters and orientation.
fill_ellipse (bool, optional) – If True, replace the mask with a filled ellipse fitted to the contour (useful for smoothing irregular segmentations).
- Returns:
pupil_resized (numpy.ndarray) – Processed pupil mask resized back to the original image size. Pixel values are 0/255 when post-processing is enabled.
centroid (tuple[float, float]) – Centroid of the detected pupil region in (row, col) format.
Notes
The model input is normalized to [0, 1] and shaped as (1, H, W, 1).
This implementation resizes inputs to 128x128 regardless of requiredFrameSize.
- morphProcessing(sourceImg, thr: float = 0.8)[source]
Post-process the raw model output to isolate the pupil region.
Steps
Threshold the model output at thr
Label connected components
Keep only the largest component
Apply morphological closing with an elliptical kernel
- Parameters:
sourceImg (numpy.ndarray) – Raw model output mask (float array in [0, 1]).
thr (float, optional) – Threshold used to binarize the mask.
- Returns:
morph (numpy.ndarray) – Post-processed binary mask as uint8 with values 0 or 255.
centroid (tuple[float, float]) – Centroid of the largest component in (row, col) format. Returns (np.nan, np.nan) if no component is found.
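The thresholding and largest-component steps listed above can be sketched with scipy.ndimage (standing in for the OpenCV/skimage calls used internally; the function name is illustrative, and the closing step is omitted for brevity):

```python
import numpy as np
from scipy import ndimage

def largest_component_mask(prob_map, thr=0.8):
    """Binarize a probability map and keep only the largest connected component."""
    binary = prob_map > thr
    labels, n = ndimage.label(binary)
    if n == 0:
        # No component found: empty mask and NaN centroid, as documented.
        return np.zeros_like(binary, dtype=np.uint8), (np.nan, np.nan)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    biggest = 1 + int(np.argmax(sizes))
    mask = (labels == biggest).astype(np.uint8) * 255
    centroid = ndimage.center_of_mass(mask)  # (row, col)
    return mask, centroid
```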
- static fit_ellipse_and_fill(mask)[source]
Fit an ellipse to the pupil mask and return a filled ellipse mask.
- Parameters:
mask (numpy.ndarray) – Binary mask (uint8) typically with values 0/255.
- Returns:
New mask where the pupil is represented by a filled ellipse (0/255). If ellipse fitting is not possible, returns the input mask.
- Return type:
numpy.ndarray
Notes
Uses the convex hull of the largest contour to stabilize ellipse fitting.
OpenCV requires at least 5 points to fit an ellipse.
- static overlay_roi(mask, roi, ratios=(0.7, 0.3))[source]
Overlay a pupil mask over a region of interest (ROI) image.
- Parameters:
mask (numpy.ndarray) – Binary mask to overlay (expected 0/255).
roi (numpy.ndarray) – Base image (typically BGR) to overlay on.
ratios (tuple[float, float], optional) – Blending ratios for (roi, mask_color) passed to
cv2.addWeighted().
- Returns:
Blended image for visualization.
- Return type:
numpy.ndarray
- static mask2color(mask, channel: int = 1)[source]
Convert a single-channel mask into a 3-channel color image.
- Parameters:
mask (numpy.ndarray) – 2D mask.
channel (int, optional) – Channel to place the mask in: - 0: red - 1: green - 2: blue
- Returns:
3-channel image with the mask in the selected channel.
- Return type:
numpy.ndarray
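The channel-placement logic of mask2color() amounts to embedding a 2D mask into one plane of an otherwise black 3-channel image; a minimal numpy sketch (the function name is illustrative; channel indices follow the 0/1/2 mapping documented above):

```python
import numpy as np

def mask_to_color(mask, channel=1):
    """Place a 2D mask into the chosen channel of a black 3-channel image."""
    color = np.zeros((*mask.shape, 3), dtype=mask.dtype)
    color[:, :, channel] = mask
    return color
```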
- class meyelens.meye.MeyeRecorder(cam_ind=0, model=None, show_preview=False, filename='meye', folder_path='Data', sep=';')[source]
Bases: object

Synchronous frame-by-frame recorder using Meye and FileWriter.

This recorder captures frames from a Camera, runs pupil detection, and writes one line per frame to a semicolon-separated text file.

The output columns are:
time (seconds since start)
x, y (centroid coordinates; written as col, row)
pupil (mask area in pixels)
major_diameter, minor_diameter, orientation (ellipse fit; may be NaN)
trg1 … trg9 (user-defined trigger values)
- Parameters:
cam_ind (int, optional) – Camera index passed to Camera.
model (str or pathlib.Path or None, optional) – Path to the Keras model file. If None, the packaged model is used.
show_preview (bool, optional) – If True, display an overlay window during recording.
filename (str, optional) – Base filename used by FileWriter (timestamp is added automatically).
folder_path (str, optional) – Output folder where the text file is created.
sep (str, optional) – Column separator used in the output file.
Notes
This recorder writes synchronously to disk at each save_frame() call.
- save_frame(trg1=0, trg2=0, trg3=0, trg4=0, trg5=0, trg6=0, trg7=0, trg8=0, trg9=0) None[source]
Capture one frame, run pupil detection, and append a row to the output file.
- Parameters:
trg1 … trg9 (int, optional) – Trigger values to save alongside the measurements.
- Return type:
None
- get_data()[source]
Capture one frame and return instantaneous pupil metrics.
- Returns:
Dictionary containing:
centroid: (x, y) as (col, row)
size: pupil mask area in pixels
major_diameter: ellipse major axis in pixels (or NaN)
minor_diameter: ellipse minor axis in pixels (or NaN)
orientation: ellipse angle in degrees (or NaN)
- Return type:
dict
- class meyelens.meye.MeyeAsyncRecorder(cam_ind=0, model=None, show_preview=False, path_to_file='Data', filename='meye', buffer_size=100, sep=';', cam_crop=None)[source]
Bases: object

Asynchronous frame-by-frame recorder using Meye and BufferedFileWriter.

This recorder is similar to MeyeRecorder, but data rows are queued in memory and written to disk by a background thread, reducing I/O latency inside tight loops.

- Parameters:
cam_ind (int, optional) – Camera index passed to Camera.
model (str or pathlib.Path or None, optional) – Path to the Keras model file. If None, the packaged model is used.
show_preview (bool, optional) – If True, display an overlay window during recording.
path_to_file (str, optional) – Output folder where the text file is created.
filename (str, optional) – Base filename used by BufferedFileWriter (timestamp is added automatically).
buffer_size (int, optional) – Queue size for BufferedFileWriter. When full, new rows are discarded and a warning is printed by the writer (no logging).
sep (str, optional) – Column separator used in the output file.
cam_crop (list[int] or tuple[int, int, int, int] or None, optional) – Crop passed to Camera (implementation-dependent).
Notes
You must call stop() (or close_all()) to flush remaining queued data.
- start(metadata=None) None[source]
Start recording and initialize the asynchronous output writer.
- Parameters:
metadata (dict or None, optional) – Metadata passed to BufferedFileWriter and written as comment lines at the top of the file.
- Return type:
None
- save_frame(trg1=0, trg2=0, trg3=0, trg4=0, trg5=0, trg6=0, trg7=0, trg8=0, trg9=0) None[source]
Capture one frame, run pupil detection, and queue a row for disk writing.
- Parameters:
trg1 … trg9 (int, optional) – Trigger values to save alongside the measurements.
- Return type:
None
- get_data()[source]
Capture one frame and return instantaneous pupil metrics.
- Returns:
Dictionary containing:
centroid: (x, y) as (col, row)
size: pupil mask area in pixels
major_diameter: ellipse major axis in pixels (or NaN)
minor_diameter: ellipse minor axis in pixels (or NaN)
orientation: ellipse angle in degrees (or NaN)
- Return type:
dict
meyelens.meyelens_offlinegui module
Pupil analysis GUI (PyQt6) with interactive ROI
Loads a CNN model that outputs (mask, info)
Lets you choose a video and processing options in a GUI
- Shows preview inside the GUI:
full (flipped) frame with draggable square ROI
processed overlay preview (mask + centroid), based on cropped+resized input
- Run full analysis:
CSV with pupil size, centroid, eye/blink prob
Optional overlay video with prediction mask + centroid
- Assumptions:
model(input) -> (mask, info)
mask: (1, H, W, 1) probability map
info: (1, 2) [eyeProbability, blinkProbability]
- meyelens.meyelens_offlinegui.morphProcessing(sourceImg: numpy.ndarray, threshold: float, imclosing: int, meye_model: Meye | None)[source]
Binarize the prediction and keep the largest component using the core MEYE implementation when possible. For non-default closing sizes we fall back to the custom kernel logic so the UI control still works.
- meyelens.meyelens_offlinegui.preprocess_frame_for_model(frame_bgr: numpy.ndarray, settings: dict, requiredFrameSize: tuple[int, int]) numpy.ndarray[source]
- Apply processing in this order:
BGR -> gray -> flip (optional) -> crop (optional) -> resize -> invert (optional)
Returns gray uint8 frame of size requiredFrameSize.
- class meyelens.meyelens_offlinegui.ROIRectItem(*args: Any, **kwargs: Any)[source]
Bases: QGraphicsRectItem

Movable square ROI constrained inside the image.
- class meyelens.meyelens_offlinegui.ROIView(*args: Any, **kwargs: Any)[source]
Bases: QGraphicsView

QGraphicsView that shows the input frame and a draggable square ROI. Scene coordinates == image pixel coordinates.
- roiChanged
alias of
int
- class meyelens.meyelens_offlinegui.MainWindow(*args: Any, **kwargs: Any)[source]
Bases: QMainWindow

- on_crop_spin_changed(_value)[source]
Spinboxes changed -> update ROI rectangle (and thus crop if enabled).
- on_processing_param_changed(_value=None)[source]
Threshold / IMCLOSING / invert / cropEnabled changed.
meyelens.offline module
- class meyelens.offline.ExperimentReader(folder_path)[source]
Bases: object

Read back a recorded experiment folder (video + per-frame CSV + metadata).

This class is designed to mirror the output structure produced by FastVideoRecorder. It expects a folder containing:
a video file (default name used here: pupillometry.avi)
a CSV file named expinfo.csv with:
optional metadata lines starting with # in the form # key: value
a header row
per-frame rows containing at least a timestamp column

- Parameters:
folder_path (str or pathlib.Path) – Folder containing the recorded video and expinfo.csv.
- folder_path
Base folder of the recording.
- Type:
str
- video_path
Path to the video file.
- Type:
str
- csv_path
Path to the CSV file with timestamps/signals.
- Type:
str
- metadata
Metadata parsed from comment lines in the CSV.
- Type:
dict
- frame_info
Frame-by-frame table loaded from the CSV (comment lines ignored).
- Type:
pandas.DataFrame
- fps
Estimated FPS computed from timestamp differences.
- Type:
float
- cap
OpenCV video capture handle.
- Type:
cv2.VideoCapture
Notes
fps is estimated as the mean of 1 / diff(timestamp); this assumes timestamps are in seconds and monotonic.
This class does not automatically close the capture: call close().
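The FPS estimate described above is the mean of the reciprocal timestamp differences; a sketch of that computation (the helper name is illustrative; timestamps are assumed to be in seconds and monotonic):

```python
import numpy as np

def estimate_fps(timestamps):
    """Mean of 1/diff(timestamps); assumes seconds and monotonic values."""
    dt = np.diff(np.asarray(timestamps, dtype=float))
    return float(np.mean(1.0 / dt))
```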
- get_metadata()[source]
Return parsed metadata.
- Returns:
Metadata dictionary from the CSV comment lines.
- Return type:
dict
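Metadata comment lines of the form `# key: value` can be parsed back as in this sketch (the helper is illustrative, not the class's actual implementation):

```python
def parse_metadata_lines(lines):
    """Parse '# key: value' comment lines into a dict; stop at the first non-comment line."""
    meta = {}
    for line in lines:
        if not line.startswith("#"):
            break  # header row reached
        key, _, value = line[1:].partition(":")
        meta[key.strip()] = value.strip()
    return meta
```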
- get_frame_count()[source]
Get total number of frames in the video.
- Returns:
Number of frames according to OpenCV.
- Return type:
int
- get_frame(index)[source]
Retrieve a frame by index.
- Parameters:
index (int) – Frame index (0-based).
- Returns:
The frame (as returned by OpenCV), or None if reading fails.
- Return type:
numpy.ndarray or None
- play_video(delay: int = 30, repeat: bool = True)[source]
Play the recorded video with an overlay of the per-frame signal value.
- Parameters:
delay (int, optional) – Delay passed to cv2.waitKey() in milliseconds. Smaller values play faster.
repeat (bool, optional) – If True, loop the video when it ends.
Notes
Press q to quit playback.
- visualize_fps_stability()[source]
Plot instantaneous FPS over time derived from recorded timestamps.
This function computes:
dt = diff(timestamps)
instantaneous_fps = 1 / dt
Then plots instantaneous FPS against the mid-time between consecutive frames.
Notes
This method expects timestamp to be a numeric column in seconds.
A previous implementation iterated over self.frame_info as if it were a list of dicts (entry["timestamp"]); this version uses DataFrame columns directly.
- class meyelens.offline.FastVideoRecorder(name='experiment', dest_folder='.', fps=20.0, frame_size=(640, 480), metadata=None, filename='eye.avi')[source]
Bases: object

Simple video + CSV recorder for experiments.

This class writes:
a grayscale video file (MJPG codec)
a CSV file named expinfo.csv with optional metadata comment lines and one row per recorded frame
The output folder is created as:
<dest_folder>/<timestamp>-<name>/
- Parameters:
name (str, optional) – Name appended to the output folder.
dest_folder (str, optional) – Base destination directory.
fps (float, optional) – Target frames per second passed to OpenCV VideoWriter.
frame_size (tuple[int, int], optional) – Frame size (width, height) expected by OpenCV VideoWriter.
metadata (dict or None, optional) – Optional metadata written to the top of expinfo.csv as # key: value.
filename (str, optional) – Video filename inside the output folder.
- output_folder
Created output folder path.
- Type:
str
- video_path
Full path to the recorded video file.
- Type:
str
- timestamp_path
Full path to the per-frame CSV file (expinfo.csv).
- Type:
str
- frame_index
Incremented each time record_frame() is called.
- Type:
int
- record_frame(frame, signal='', trial_n='')[source]
Record a frame to video and append a row to expinfo.csv.
- Parameters:
frame (numpy.ndarray) – Frame to write. If BGR, it is converted to grayscale before writing.
signal (str or int, optional) – Signal/trigger value to store for this frame.
trial_n (str or int, optional) – Trial identifier to store for this frame.
Notes
Timestamps are recorded using time.time() (seconds since epoch).
frame_index starts at 0 and increments per call.
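The timestamped output layout and CSV structure described above can be sketched like this (folder and file names follow the documentation; the timestamp format, column names, and helper name are assumptions for illustration):

```python
import csv
import os
from datetime import datetime

def prepare_recording(dest_folder, name="experiment", metadata=None):
    """Create <dest_folder>/<timestamp>-<name>/ with an expinfo.csv skeleton."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")  # assumed timestamp format
    out = os.path.join(dest_folder, f"{stamp}-{name}")
    os.makedirs(out, exist_ok=True)
    csv_path = os.path.join(out, "expinfo.csv")
    with open(csv_path, "w", newline="") as f:
        # Optional metadata comment lines, then the header row
        for key, value in (metadata or {}).items():
            f.write(f"# {key}: {value}\n")
        csv.writer(f).writerow(["frame_index", "timestamp", "signal", "trial_n"])
    return out, csv_path
```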
- class meyelens.offline.FrameRateManager(fps, duration: float = 10)[source]
Bases: object

Utility class to help maintain a target frame rate in a polling loop.

Typical usage

>>> frm = FrameRateManager(fps=30, duration=5)
>>> frm.start()
>>> while not frm.is_finished():
...     if frm.is_ready():
...         # acquire frame / do work here
...         frm.set_frame_time()

- Parameters:
fps (float) – Target frames per second.
duration (float, optional) – Maximum loop duration in seconds for is_finished().
- fps
Target FPS.
- Type:
float
- interframe
Target inter-frame interval (seconds).
- Type:
float
- time_grab
Timestamp saved when a new frame cycle begins (set in is_ready()).
- Type:
float
- duration
Duration used to determine loop end.
- Type:
float
- framecount
Counts how many times the loop was “ready” (i.e., frames acquired).
- Type:
int
- is_ready() bool[source]
Check if it is time to process/acquire the next frame.
- Returns:
True if the current time has reached the next scheduled frame time. When True, internal counters are also updated.
- Return type:
bool
- set_frame_time(overhead: float = 0.0005)[source]
Schedule the next frame time based on processing overhead.
This method measures the time elapsed since is_ready() last set time_grab and subtracts it from the nominal inter-frame interval.
- Parameters:
overhead (float, optional) – Small constant to compensate for additional overhead (seconds).
- Return type:
None
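The scheduling arithmetic in set_frame_time() amounts to subtracting the measured processing time (plus the small overhead constant) from the nominal inter-frame interval; a sketch of that calculation (the function name is illustrative):

```python
def next_sleep(interframe, elapsed, overhead=0.0005):
    """Time to wait before the next frame, compensating for work already done."""
    return max(0.0, interframe - elapsed - overhead)

# At 30 FPS (interframe ~ 0.0333 s), after 10 ms of processing:
wait = next_sleep(1 / 30, 0.010)
```

Clamping at zero prevents negative waits when a frame's processing overruns its slot.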
meyelens.online module
Online recording helpers.
This module re-exports the canonical recorders from meyelens.meye to keep
the public API stable while avoiding duplicate implementations.
- class meyelens.online.MeyeRecorder(cam_ind=0, model=None, show_preview=False, filename='meye', folder_path='Data', sep=';')[source]
Bases:
objectSynchronous frame-by-frame recorder using
MeyeandFileWriter.This recorder captures frames from a
Camera, runs pupil detection, and writes one line per frame to a semicolon-separated text file.The output columns are:
time (seconds since start)
x, y (centroid coordinates; written as col, row)
pupil (mask area in pixels)
major_diameter, minor_diameter, orientation (ellipse fit; may be NaN)
trg1 … trg9 (user-defined trigger values)
- Parameters:
cam_ind (int, optional) – Camera index passed to Camera.
model (str or pathlib.Path or None, optional) – Path to the Keras model file. If None, the packaged model is used.
show_preview (bool, optional) – If True, display an overlay window during recording.
filename (str, optional) – Base filename used by FileWriter (timestamp is added automatically).
folder_path (str, optional) – Output folder where the text file is created.
sep (str, optional) – Column separator used in the output file.
Notes
This recorder writes synchronously to disk at each save_frame() call.
- save_frame(trg1=0, trg2=0, trg3=0, trg4=0, trg5=0, trg6=0, trg7=0, trg8=0, trg9=0) None[source]
Capture one frame, run pupil detection, and append a row to the output file.
- Parameters:
trg1 … trg9 (int, optional) – Trigger values to save alongside the measurements.
- Return type:
None
- get_data()[source]
Capture one frame and return instantaneous pupil metrics.
- Returns:
Dictionary containing:
centroid: (x, y) as (col, row)
size: pupil mask area in pixels
major_diameter: ellipse major axis in pixels (or NaN)
minor_diameter: ellipse minor axis in pixels (or NaN)
orientation: ellipse angle (degrees) (or NaN)
- Return type:
dict
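A fixed-duration recording loop around save_frame() might look like the sketch below. The `record_for` helper and its `trigger` argument are illustrative assumptions, not part of the package; only the `save_frame(trg1=...)` call follows the documented API:

```python
import time

def record_for(recorder, seconds, trigger=0):
    """Call recorder.save_frame() repeatedly for `seconds` seconds.

    Works with any object exposing the documented
    save_frame(trg1=..., ..., trg9=...) interface, such as
    meyelens.online.MeyeRecorder.
    """
    t0 = time.perf_counter()
    frames = 0
    while time.perf_counter() - t0 < seconds:
        recorder.save_frame(trg1=trigger)  # one output row appended per call
        frames += 1
    return frames
```

With a real recorder, e.g. `record_for(MeyeRecorder(cam_ind=0), 10.0)`, each iteration blocks on camera capture, detection, and the synchronous disk write, so the achieved row rate is bounded by the slowest of the three.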
- class meyelens.online.MeyeAsyncRecorder(cam_ind=0, model=None, show_preview=False, path_to_file='Data', filename='meye', buffer_size=100, sep=';', cam_crop=None)[source]
Bases: object
Asynchronous frame-by-frame recorder using Meye and BufferedFileWriter.
This recorder is similar to MeyeRecorder, but data rows are queued in memory and written to disk by a background thread, reducing I/O latency inside tight loops.
- Parameters:
cam_ind (int, optional) – Camera index passed to Camera.
model (str or pathlib.Path or None, optional) – Path to the Keras model file. If None, the packaged model is used.
show_preview (bool, optional) – If True, display an overlay window during recording.
path_to_file (str, optional) – Output folder where the text file is created.
filename (str, optional) – Base filename used by BufferedFileWriter (timestamp is added automatically).
buffer_size (int, optional) – Queue size for BufferedFileWriter. When full, new rows are discarded and a warning is printed by the writer (no logging).
sep (str, optional) – Column separator used in the output file.
cam_crop (list[int] or tuple[int, int, int, int] or None, optional) – Crop passed to Camera (implementation-dependent).
Notes
You must call stop() (or close_all()) to flush remaining queued data.
- start(metadata=None) None[source]
Start recording and initialize the asynchronous output writer.
- Parameters:
metadata (dict or None, optional) – Metadata passed to BufferedFileWriter and written as comment lines at the top of the file.
- Return type:
None
- save_frame(trg1=0, trg2=0, trg3=0, trg4=0, trg5=0, trg6=0, trg7=0, trg8=0, trg9=0) None[source]
Capture one frame, run pupil detection, and queue a row for disk writing.
- Parameters:
trg1 … trg9 (int, optional) – Trigger values to save alongside the measurements.
- Return type:
None
- get_data()[source]
Capture one frame and return instantaneous pupil metrics.
- Returns:
Dictionary containing:
centroid: (x, y) as (col, row)
size: pupil mask area in pixels
major_diameter: ellipse major axis in pixels (or NaN)
minor_diameter: ellipse minor axis in pixels (or NaN)
orientation: ellipse angle (degrees) (or NaN)
- Return type:
dict
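Because stop() must be called to flush queued rows (see the Notes above), wrapping a session in a context manager is a reasonable pattern. The `recording` helper below is an assumption for illustration, not part of meyelens; only `start(metadata=...)` and `stop()` follow the documented API:

```python
from contextlib import contextmanager

@contextmanager
def recording(recorder, metadata=None):
    """Guarantee recorder.stop() runs so the background writer flushes.

    Works with any object exposing the documented start(metadata=...)
    and stop() methods, such as meyelens.online.MeyeAsyncRecorder.
    """
    recorder.start(metadata=metadata)
    try:
        yield recorder
    finally:
        recorder.stop()  # flush remaining queued data, even on error
```

For example, `with recording(MeyeAsyncRecorder(cam_ind=0), metadata={"subject": "s01"}) as rec:` would guarantee the flush even if an exception interrupts the acquisition loop.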
meyelens.utils module
- class meyelens.utils.CountdownTimer(duration: float)[source]
Bases: object
Countdown timer based on PsychoPy’s high-precision clock.
This utility wraps psychopy.core.Clock to provide a simple countdown interface for time-limited tasks (e.g., stimulus display windows, response deadlines, trial timeouts).
- Parameters:
duration (float) – Total countdown duration in seconds.
- duration
Total countdown duration in seconds.
- Type:
float
- clock
PsychoPy clock (if available) or a perf_counter-based fallback.
- Type:
psychopy.core.Clock or _PerfCounterClock
- is_running
True when the countdown is active.
- Type:
bool
Notes
If the timer is not running, get_time_left() returns 0 rather than raising an exception; this is deliberate.
This class is intentionally minimal and does not use logging.
- start() None[source]
Start (or restart) the countdown.
Resets the internal clock and marks the timer as running.
- Return type:
None
- get_time_left() float[source]
Get the remaining time in the countdown.
- Returns:
Remaining time in seconds. If the timer is not running or the countdown has completed, returns 0.
- Return type:
float
- is_finished() bool[source]
Check whether the countdown has completed.
- Returns:
True if the remaining time is 0, otherwise False.
- Return type:
bool
- stop() None[source]
Stop (pause) the countdown.
This does not reset elapsed time; it simply marks the timer as inactive. With the current design, get_time_left() will return 0 while stopped.
- Return type:
None
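The documented semantics (get_time_left() returning 0 while stopped or finished) can be sketched with the perf_counter fallback mentioned above. This is an illustration only, not the packaged class, which prefers psychopy.core.Clock when available:

```python
import time

class SimpleCountdown:
    """Minimal perf_counter-based sketch of the CountdownTimer behavior."""

    def __init__(self, duration):
        self.duration = duration
        self.is_running = False
        self._t0 = 0.0

    def start(self):
        # Start (or restart): reset the clock and mark the timer running.
        self._t0 = time.perf_counter()
        self.is_running = True

    def get_time_left(self):
        if not self.is_running:
            return 0.0  # documented behavior while stopped
        return max(self.duration - (time.perf_counter() - self._t0), 0.0)

    def is_finished(self):
        return self.get_time_left() == 0.0

    def stop(self):
        # Pause: does not reset elapsed time, only marks the timer inactive.
        self.is_running = False
```

Note that, as in the documented class, a stopped timer reports is_finished() as True because its remaining time reads as 0.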