import React, { useState } from 'react';
function Help() {
const [clearMsg, setClearMsg] = useState('');
const clearAllHistory = async () => {
if (!window.confirm('Delete all saved sessions? My Records and My Achievement will reset.')) return;
setClearMsg('');
try {
const res = await fetch('/api/history', { method: 'DELETE' });
const data = await res.json().catch(() => ({}));
if (res.ok && data.status === 'success') {
setClearMsg('Session history cleared.');
} else {
setClearMsg(data.message || 'Could not clear history.');
}
} catch {
setClearMsg('Request failed.');
}
};
return (
<main id="page-f" className="page">
<h1 className="page-title">Help</h1>
<div className="help-container">
<section className="help-section">
<h2>How to Use Focus Guard</h2>
<ol>
<li>Navigate to the Focus page from the menu</li>
<li>Read the setup notes in the intro screen, then allow camera access when prompted</li>
<li>Click the green "Start" button to begin monitoring</li>
<li>Position yourself in front of the camera with good lighting</li>
<li>The system will track your focus in real-time using face mesh analysis</li>
<li>Use the model selector to switch between detection models (Hybrid, XGBoost, MLP, Geometric)</li>
<li>Optionally enable <strong>Eye Gaze</strong> tracking for additional gaze-based focus signals</li>
<li>Click "Stop" when you're done to save the session</li>
</ol>
</section>
<section className="help-section">
<h2>What is "Focused"?</h2>
<p>The system considers you focused when:</p>
<ul>
<li>Your face is detected and visible in the camera frame</li>
<li>Your head is oriented toward the screen (low yaw/pitch deviation)</li>
<li>Your eyes are open and gaze is directed forward</li>
<li>You are not yawning</li>
</ul>
<p>The system uses MediaPipe Face Mesh to extract 478 facial landmarks, then computes features like head pose, eye aspect ratio (EAR), gaze offset, PERCLOS, and blink rate to determine focus.</p>
</section>
<section className="help-section">
<h2>Available Models</h2>
<p><strong>Hybrid</strong> <em>(Recommended)</em>: Blends MLP predictions (30%) with geometric face/eye scoring (70%) using a weighted average tuned with LOPO evaluation. Most robust across different people. LOPO F1: 0.8409.</p>
<p><strong>XGBoost:</strong> Gradient-boosted tree model using 10 selected features. Highest raw accuracy (95.87% pooled, LOPO AUC 0.8695). Strong on tabular data with fast inference.</p>
<p><strong>MLP:</strong> Two-layer neural network (10→64→32 neurons) trained with PyTorch. Good balance of speed and accuracy (92.92% pooled, LOPO AUC 0.8624). Fastest inference.</p>
<p><strong>Geometric:</strong> Rule-based scoring using head pose and eye openness. No ML model needed; a lightweight fallback when model checkpoints are unavailable. LOPO F1: 0.8195.</p>
<p style={{ marginTop: '10px', color: '#667281', fontSize: '0.9rem' }}>
<strong>Tip:</strong> If you wear glasses or have unusual lighting, try different models to find the one that works best for your setup.
</p>
</section>
<section className="help-section">
<h2>Eye Gaze Tracking</h2>
<p>The <strong>Eye Gaze</strong> button enables L2CS-Net, a deep neural network that estimates your gaze direction from the eye region. It runs alongside your selected base model and can improve focus detection accuracy.</p>
<p style={{ marginTop: '8px' }}><strong>Performance note:</strong> Eye gaze tracking increases CPU usage and may reduce frame rate. If the system feels sluggish, disable it.</p>
<h3 style={{ marginTop: '14px', fontSize: '1rem' }}>Calibration</h3>
<p>For best accuracy, calibrate when Eye Gaze is active:</p>
<ol>
<li>Start a session, then click the <strong>"Recalibrate"</strong> button that appears in the model strip when Eye Gaze is on</li>
<li>Look directly at each dot as it appears on screen; there are <strong>9 calibration points</strong> across the screen</li>
<li>A final <strong>validation point</strong> confirms accuracy before calibration is applied</li>
</ol>
<p>You can repeat calibration at any time during a running session using the same "Recalibrate" button.</p>
</section>
<section className="help-section">
<h2>Adjusting Settings</h2>
<p><strong>Frame Rate:</strong> Controls how many frames per second are sent for analysis. Range: 10–30 FPS. A minimum of 10 FPS is enforced to keep temporal features (blink rate, PERCLOS) accurate.</p>
<p><strong>Model Selection:</strong> Switch models in real-time using the pill buttons above the timeline. The active model is highlighted. Different models may perform better depending on your lighting, setup, and whether you wear glasses.</p>
<p><strong>Floating Window:</strong> Opens a Picture-in-Picture window with your camera feed so you can keep the video visible while working in other apps.</p>
</section>
<section className="help-section">
<h2>Privacy &amp; Data</h2>
<p>Video frames are processed in real-time on the server and are never stored. Only focus status metadata (timestamps, confidence scores) is saved to the session database. View past runs under <strong>My Records</strong>; stats and badges live under <strong>My Achievement</strong>.</p>
<p style={{ marginTop: '12px' }}>
<button
type="button"
onClick={clearAllHistory}
style={{
padding: '8px 16px',
borderRadius: '8px',
border: '1px solid #c44',
background: 'transparent',
color: '#e88',
cursor: 'pointer',
fontSize: '14px'
}}
>
Clear all session history
</button>
{clearMsg && (
<span style={{ marginLeft: '12px', color: '#aaa', fontSize: '14px' }}>{clearMsg}</span>
)}
</p>
</section>
<section className="help-section">
<h2>FAQ</h2>
<details>
<summary>Why is my focus score low?</summary>
<p>Ensure good lighting so the face mesh can detect your landmarks clearly. Face the camera directly and avoid large head movements. Try switching to a different model if one isn't working well for your setup.</p>
</details>
<details>
<summary>Does wearing glasses affect accuracy?</summary>
<p>Yes, glasses can reduce accuracy, especially for eye-based features like EAR and gaze offset, because the lenses may distort landmark positions or cause reflections. If you wear glasses, try different models (e.g. Geometric or MLP may handle glasses better than XGBoost for some users). Avoid using Eye Gaze tracking with glasses, as it may significantly degrade results.</p>
</details>
<details>
<summary>Can I use this without a camera?</summary>
<p>No, camera access is required. The system relies on real-time face landmark detection to determine focus.</p>
</details>
<details>
<summary>Does this work on mobile?</summary>
<p>Yes, it works on mobile browsers that support camera access and WebSocket connections. Performance depends on your device and network speed. Eye Gaze tracking is not recommended on mobile due to performance constraints.</p>
</details>
<details>
<summary>Is my data private?</summary>
<p>Yes. No video frames are stored. Processing happens in real-time and only metadata (focused/unfocused status, confidence, timestamps) is saved.</p>
</details>
<details>
<summary>Why does the face mesh lag behind my movements?</summary>
<p>The face mesh overlay updates each time the server returns a detection result. The camera feed itself renders locally. Any visible lag depends on network latency and server processing time. Reducing the frame rate slider can help if lag is noticeable.</p>
</details>
<details>
<summary>The Hybrid model doesn't seem to work differently from MLP. Why?</summary>
<p>The Hybrid model blends MLP (30%) and geometric (70%) scores using a fixed weighted average. Because its geometric component dominates, it tends to be more conservative than raw MLP, especially when head pose or eye openness signals are borderline. It is tuned to be the most consistent across different people rather than the most aggressive.</p>
</details>
</section>
<section className="help-section">
<h2>Technical Info</h2>
<p><strong>Face Detection:</strong> MediaPipe Face Mesh (478 landmarks)</p>
<p><strong>Feature Extraction:</strong> Head pose (yaw/pitch/roll), EAR, MAR, gaze offset, PERCLOS, blink rate; 10 features selected via LOFO analysis</p>
<p><strong>ML Models:</strong> PyTorch MLP (10→64→32→2), XGBoost (600 trees), Geometric (rule-based), Hybrid (MLP 30% + Geometric 70% weighted blend)</p>
<p><strong>Eye Gaze:</strong> L2CS-Net (ResNet50 backbone, trained on Gaze360) with 9-point polynomial calibration</p>
<p><strong>Storage:</strong> SQLite database (sessions, events, settings)</p>
<p><strong>Framework:</strong> FastAPI + React (Vite) + WebSocket</p>
<p><strong>Evaluation:</strong> Leave-One-Person-Out (LOPO) cross-validation on 9 participants, 144K frames</p>
</section>
</div>
</main>
);
}
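// The Help text above describes a fixed 30/70 Hybrid blend and a PERCLOS
// temporal feature. The sketches below are illustrative only: they are not
// used by the app (the real computations run server-side), and the function
// names and the 0.2 PERCLOS threshold are assumptions, not the backend's API.

// Hybrid score: fixed weights from the model description (30% MLP, 70% geometric).
function hybridScoreSketch(mlpScore, geometricScore) {
  return 0.3 * mlpScore + 0.7 * geometricScore;
}

// PERCLOS: fraction of recent frames in which the eye appears closed,
// approximated here as EAR samples falling below a closed-eye threshold.
function perclosSketch(earSamples, closedThreshold = 0.2) {
  const closed = earSamples.filter((ear) => ear < closedThreshold).length;
  return closed / earSamples.length;
}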
export default Help;