Prepare public release v0.1.0

2026-01-30 09:44:12 +01:00
parent 75be95fdf7
commit 2034281ad7
41 changed files with 2137 additions and 1028 deletions

.gitignore vendored

@@ -1,4 +1,4 @@
# Generated files
# Build artifacts
/target/
*.o
*.so
@@ -27,6 +27,9 @@ test_images/
test_output/
test_frames/
# Internal documentation (not for repository)
# ONNX model files (download separately - see models/README.md)
models/*.onnx
# Internal development documentation (not for repository)
status.md
development_path.md

CHANGELOG.md Normal file

@@ -0,0 +1,87 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.1.0] - 2026-01-30
### Added
- **Core Authentication**
- Face detection with LBPH algorithm (default) or ONNX models (optional)
- Face embedding extraction and template matching
- Multi-template support per user (e.g., with/without glasses)
- Configurable distance thresholds
- **Security Features**
- TPM2 integration for hardware-bound template encryption
- AES-256-GCM software fallback when TPM unavailable
- Secure memory handling with automatic zeroization
- Constant-time comparison for security-sensitive operations
- Memory locking to prevent swapping sensitive data
- **Anti-Spoofing**
- IR presence validation
- Depth estimation (gradient-based)
- Texture analysis (LBP-based)
- Blink detection
- Micro-movement tracking
- Configurable thresholds per check
- **Camera Support**
- V4L2 camera enumeration
- IR camera detection heuristics
- IR emitter control
- Multiple pixel format support (GREY, YUYV, MJPEG)
- **IPC & Integration**
- Unix socket IPC for PAM module communication
- D-Bus interface for desktop integration
- Peer credential verification
- Rate limiting
- **CLI Tool**
- `capture` - Capture test frames
- `detect` - Test face detection
- `status` - Show system status
- `enroll` - Enroll a face
- `list` - List enrolled templates
- `remove` - Remove templates
- `test` - Test authentication
- `config` - View/modify configuration
- **PAM Module**
- C implementation for maximum compatibility
- Configurable timeout
- Password fallback support
- Debug logging
- **Settings Apps**
- GNOME Settings app (GTK4/libadwaita)
- KDE System Settings module (Qt6/KCM)
- **Documentation**
- Comprehensive README
- API documentation
- Testing guide
- Coding standards
- Security policy
### Security Notes
- Requires an IR camera for security; RGB cameras are explicitly not supported
- TPM2 recommended for production deployments
- Software fallback encryption is NOT cryptographically bound to hardware
### Known Limitations
- ONNX models require glibc 2.38+ (Ubuntu 24.04+, Fedora 39+)
- IR emitter control may require hardware-specific configuration
- Full TPM hardware integration needs real TPM for testing
[Unreleased]: https://github.com/YOUR_USERNAME/linux-hello/compare/v0.1.0...HEAD
[0.1.0]: https://github.com/YOUR_USERNAME/linux-hello/releases/tag/v0.1.0

CONTRIBUTING.md Normal file

@@ -0,0 +1,205 @@
# Contributing to Linux Hello
Thank you for your interest in contributing to Linux Hello! This document provides
guidelines and information for contributors.
## Code of Conduct
We follow the [Contributor Covenant](https://www.contributor-covenant.org/version/2/1/code_of_conduct/).
Please read it before participating.
## Getting Started
### Prerequisites
- Rust 1.70+ (install via [rustup](https://rustup.rs/))
- GCC (for PAM module)
- Development packages:
```bash
# Ubuntu/Debian
sudo apt install build-essential libpam0g-dev v4l-utils pkg-config libssl-dev
# Fedora
sudo dnf install gcc make pam-devel v4l-utils pkgconfig openssl-devel
# Arch Linux
sudo pacman -S base-devel pam v4l-utils pkgconf openssl
```
### Building
```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/linux-hello.git
cd linux-hello
# Build all crates
cargo build
# Run tests
cargo test
# Build PAM module
cd pam-module && make
```
## How to Contribute
### Reporting Bugs
1. Check existing [issues](../../issues) to avoid duplicates
2. Use the bug report template
3. Include:
- Linux distribution and version
- Camera hardware (if relevant)
- Steps to reproduce
- Expected vs actual behavior
- Logs (if applicable)
### Suggesting Features
1. Open an issue with the feature request template
2. Describe the use case
3. Explain how it benefits users
### Code Contributions
1. **Fork** the repository
2. **Create a branch** from `main`:
```bash
git checkout -b feature/your-feature-name
```
3. **Make changes** following our code style
4. **Write/update tests** for your changes
5. **Run checks**:
```bash
cargo fmt --check
cargo clippy
cargo test
```
6. **Commit** with a clear message:
```bash
git commit -m "feat: add new feature description"
```
7. **Push** and open a Pull Request
### Commit Message Format
We follow [Conventional Commits](https://www.conventionalcommits.org/):
- `feat:` New feature
- `fix:` Bug fix
- `docs:` Documentation only
- `style:` Code style (formatting, etc.)
- `refactor:` Code refactoring
- `test:` Adding/updating tests
- `chore:` Maintenance tasks
Examples:
```
feat: add blink detection to anti-spoofing
fix: correct camera enumeration on Fedora
docs: update installation instructions for Arch
```
## Code Style
### Rust
- Follow `rustfmt` defaults
- Use `cargo clippy` to catch common issues
- Document public APIs with doc comments
- Keep functions under 60 lines when practical
```rust
/// Brief description of what the function does.
///
/// # Arguments
///
/// * `param` - Description of the parameter
///
/// # Returns
///
/// Description of return value
///
/// # Errors
///
/// When the function can fail and why
pub fn example_function(param: &str) -> Result<()> {
// Implementation
}
```
### C (PAM Module)
- Follow Linux kernel style
- No `goto` statements
- All loops must have fixed bounds
- Check all return values
- See [CODING_STANDARDS.md](CODING_STANDARDS.md) for details
## Testing
### Running Tests
```bash
# All tests
cargo test
# Specific crate
cargo test -p linux-hello-daemon
# With output
cargo test -- --nocapture
# Integration tests (may require camera)
cargo test --test '*'
```
### Writing Tests
- Place unit tests in the same file as the code
- Place integration tests in `/tests/`
- Use descriptive test names: `test_function_condition_expected_result`
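
As an illustration of that naming pattern, a minimal sketch (the `is_match` helper here is hypothetical, not part of the codebase):

```rust
/// Hypothetical helper: true when a distance is within the match threshold.
fn is_match(distance: f32, threshold: f32) -> bool {
    distance <= threshold
}

#[cfg(test)]
mod tests {
    use super::*;

    // Pattern: test_<function>_<condition>_<expected_result>
    #[test]
    fn test_is_match_distance_below_threshold_returns_true() {
        assert!(is_match(0.3, 0.4));
    }

    #[test]
    fn test_is_match_distance_above_threshold_returns_false() {
        assert!(!is_match(0.5, 0.4));
    }
}
```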
## Documentation
- Update README.md for user-facing changes
- Update docs/API.md for API changes
- Add inline documentation for complex code
- Include examples where helpful
## Pull Request Process
1. Ensure all CI checks pass
2. Update documentation if needed
3. Add tests for new functionality
4. Request review from maintainers
5. Address review feedback
6. Squash commits if requested
### Review Criteria
- Code quality and style
- Test coverage
- Documentation
- Security implications
- Performance impact
## Security
If you discover a security vulnerability, please see [SECURITY.md](SECURITY.md)
for responsible disclosure guidelines. **Do not open public issues for security vulnerabilities.**
## License
By contributing, you agree that your contributions will be licensed under the
[GPL-3.0 License](LICENSE).
## Questions?
- Open a [Discussion](../../discussions) for general questions
- Check existing issues and documentation first
- Be patient - maintainers are volunteers
Thank you for contributing! 🎉

SECURITY.md Normal file

@@ -0,0 +1,70 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| 0.1.x | :white_check_mark: |
## Reporting a Vulnerability
Linux Hello handles sensitive biometric data and integrates with system authentication.
We take security vulnerabilities seriously.
### How to Report
**Please do NOT open public GitHub issues for security vulnerabilities.**
Instead, report vulnerabilities by:
1. **Email**: Send details to the project maintainers privately
2. **Include**:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)
### What to Expect
- **Acknowledgment**: Within 48 hours
- **Initial Assessment**: Within 7 days
- **Status Updates**: Every 14 days until resolution
- **Credit**: Security researchers will be credited (unless anonymity requested)
### Scope
The following are in scope for security reports:
- Authentication bypass
- Template extraction or decryption
- Anti-spoofing bypass
- IPC/D-Bus authorization issues
- Memory safety issues
- Privilege escalation
- Information disclosure
### Out of Scope
- Social engineering attacks
- Physical attacks requiring extended access
- Attacks requiring TPM hardware exploits
- Denial of service (unless used for auth bypass)
## Security Architecture
See the [README](README.md#security) for details on our security model:
- **TPM2 Integration**: Hardware-bound encryption
- **Anti-Spoofing**: Multi-layer liveness detection
- **Secure Memory**: Automatic zeroization of sensitive data
- **IPC Security**: Peer credential verification and rate limiting
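
To give a feel for the secure-memory ideas above, here is a simplified sketch; it is illustrative only and not the project's actual `SecureBytes` implementation:

```rust
/// Constant-time equality: always scans the full slice, folding differences
/// into an accumulator so timing does not leak the first mismatch position.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b) {
        diff |= x ^ y;
    }
    diff == 0
}

/// A buffer wiped when it goes out of scope; volatile writes keep the
/// compiler from optimizing the zeroization away.
struct WipedBuf(Vec<u8>);

impl Drop for WipedBuf {
    fn drop(&mut self) {
        for byte in self.0.iter_mut() {
            unsafe { std::ptr::write_volatile(byte, 0) };
        }
    }
}
```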
## Security Hardening Recommendations
For production deployments:
1. **Enable TPM**: Set `[tpm] enabled = true` in config
2. **Use IR Camera**: RGB cameras are explicitly not supported
3. **Keep Updated**: Apply security updates promptly
4. **Audit Logs**: Monitor `/var/log/auth.log` for authentication events
5. **Limit Access**: Configure appropriate file permissions
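
For reference, recommendation 1 maps onto the daemon's `config.toml` roughly as follows. This is a sketch using the `tpm.enabled`, `tpm.pcr_binding`, and `anti_spoofing.enabled` keys exposed by the CLI's `config` command; the comments are illustrative, not authoritative defaults:

```toml
# /etc/linux-hello/config.toml (excerpt)
[tpm]
enabled = true       # bind template encryption to the TPM
pcr_binding = true   # optionally bind to PCR state as well

[anti_spoofing]
enabled = true       # keep all liveness checks on in production
```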

View File

@@ -117,9 +117,7 @@ fn get_real_username() -> String {
std::process::Command::new("whoami")
.output()
.ok()
.and_then(|output| {
String::from_utf8(output.stdout).ok()
})
.and_then(|output| String::from_utf8(output.stdout).ok())
.map(|s| s.trim().to_string())
.unwrap_or_else(|| "unknown".to_string())
})
@@ -130,7 +128,11 @@ async fn main() -> Result<()> {
let cli = Cli::parse();
// Initialize logging
let log_level = if cli.verbose { Level::DEBUG } else { Level::INFO };
let log_level = if cli.verbose {
Level::DEBUG
} else {
Level::INFO
};
FmtSubscriber::builder()
.with_max_level(log_level)
.with_target(false)
@@ -140,30 +142,22 @@ async fn main() -> Result<()> {
let config = Config::load_or_default();
match cli.command {
Commands::Capture { output, count, device } => {
cmd_capture(&config, &output, count, device.as_deref()).await
}
Commands::Detect { image, scores, output } => {
cmd_detect(&config, image.as_deref(), scores, output.as_deref()).await
}
Commands::Status { camera, daemon } => {
cmd_status(&config, camera, daemon).await
}
Commands::Enroll { label } => {
cmd_enroll(&config, &label).await
}
Commands::List => {
cmd_list(&config).await
}
Commands::Remove { label, all } => {
cmd_remove(&config, label.as_deref(), all).await
}
Commands::Test { verbose, debug } => {
cmd_test(&config, verbose, debug).await
}
Commands::Config { json, set } => {
cmd_config(&config, json, set.as_deref()).await
}
Commands::Capture {
output,
count,
device,
} => cmd_capture(&config, &output, count, device.as_deref()).await,
Commands::Detect {
image,
scores,
output,
} => cmd_detect(&config, image.as_deref(), scores, output.as_deref()).await,
Commands::Status { camera, daemon } => cmd_status(&config, camera, daemon).await,
Commands::Enroll { label } => cmd_enroll(&config, &label).await,
Commands::List => cmd_list(&config).await,
Commands::Remove { label, all } => cmd_remove(&config, label.as_deref(), all).await,
Commands::Test { verbose, debug } => cmd_test(&config, verbose, debug).await,
Commands::Config { json, set } => cmd_config(&config, json, set.as_deref()).await,
}
}
@@ -211,12 +205,18 @@ async fn cmd_capture(
for i in 0..count {
let frame = camera.capture_frame()?;
match frame.format {
PixelFormat::Grey => {
let filename = format!("{}/frame_{:03}.png", output, i);
if let Some(img) = image::GrayImage::from_raw(frame.width, frame.height, frame.data) {
img.save(&filename).map_err(|e| linux_hello_common::Error::Io(std::io::Error::new(std::io::ErrorKind::Other, e)))?;
if let Some(img) = image::GrayImage::from_raw(frame.width, frame.height, frame.data)
{
img.save(&filename).map_err(|e| {
linux_hello_common::Error::Io(std::io::Error::new(
std::io::ErrorKind::Other,
e,
))
})?;
info!("Saved frame {} (Grey format)", filename);
}
}
@@ -227,9 +227,18 @@ async fn cmd_capture(
for chunk in frame.data.chunks_exact(2) {
gray_data.push(chunk[0]); // Y component
}
if let Some(img) = image::GrayImage::from_raw(frame.width, frame.height, gray_data) {
img.save(&filename).map_err(|e| linux_hello_common::Error::Io(std::io::Error::new(std::io::ErrorKind::Other, e)))?;
info!("Saved frame {} (converted from YUYV to grayscale)", filename);
if let Some(img) = image::GrayImage::from_raw(frame.width, frame.height, gray_data)
{
img.save(&filename).map_err(|e| {
linux_hello_common::Error::Io(std::io::Error::new(
std::io::ErrorKind::Other,
e,
))
})?;
info!(
"Saved frame {} (converted from YUYV to grayscale)",
filename
);
}
}
PixelFormat::Mjpeg => {
@@ -238,7 +247,12 @@ async fn cmd_capture(
match image::load_from_memory(&frame.data) {
Ok(img) => {
let gray = img.to_luma8();
gray.save(&filename).map_err(|e| linux_hello_common::Error::Io(std::io::Error::new(std::io::ErrorKind::Other, e)))?;
gray.save(&filename).map_err(|e| {
linux_hello_common::Error::Io(std::io::Error::new(
std::io::ErrorKind::Other,
e,
))
})?;
info!("Saved frame {} (decoded from MJPEG)", filename);
}
Err(e) => {
@@ -249,7 +263,11 @@ async fn cmd_capture(
_ => {
let filename = format!("{}/frame_{:03}.raw", output, i);
std::fs::write(&filename, &frame.data)?;
info!("Saved frame {} ({} bytes - Unknown format)", filename, frame.data.len());
info!(
"Saved frame {} ({} bytes - Unknown format)",
filename,
frame.data.len()
);
}
}
}
@@ -294,26 +312,30 @@ async fn cmd_detect(
info!("Saving annotated image to: {}", out_path);
// Convert to RGB for drawing
let mut rgb_img = img.to_rgb8();
// Draw bounding box (red color)
draw_bounding_box(&mut rgb_img, x, y, w, h, [255, 0, 0]);
rgb_img.save(out_path)
.map_err(|e| linux_hello_common::Error::Io(
std::io::Error::new(std::io::ErrorKind::Other, e)
))?;
rgb_img.save(out_path).map_err(|e| {
linux_hello_common::Error::Io(std::io::Error::new(
std::io::ErrorKind::Other,
e,
))
})?;
println!("Annotated image saved to: {}", out_path);
}
}
None => {
println!("No face detected");
if let Some(out_path) = output {
// Save original image without annotations
img.save(out_path)
.map_err(|e| linux_hello_common::Error::Io(
std::io::Error::new(std::io::ErrorKind::Other, e)
))?;
img.save(out_path).map_err(|e| {
linux_hello_common::Error::Io(std::io::Error::new(
std::io::ErrorKind::Other,
e,
))
})?;
println!("Image saved to: {} (no face detected)", out_path);
}
}
@@ -329,23 +351,16 @@ async fn cmd_detect(
}
/// Draw a bounding box on an RGB image
fn draw_bounding_box(
img: &mut image::RgbImage,
x: u32,
y: u32,
w: u32,
h: u32,
color: [u8; 3],
) {
fn draw_bounding_box(img: &mut image::RgbImage, x: u32, y: u32, w: u32, h: u32, color: [u8; 3]) {
let (img_width, img_height) = img.dimensions();
let thickness = 2u32;
// Clamp coordinates to image bounds
let x1 = x.min(img_width.saturating_sub(1));
let y1 = y.min(img_height.saturating_sub(1));
let x2 = (x + w).min(img_width.saturating_sub(1));
let y2 = (y + h).min(img_height.saturating_sub(1));
// Draw horizontal lines (top and bottom)
for t in 0..thickness {
// Top line
@@ -353,7 +368,7 @@ fn draw_bounding_box(
for px in x1..=x2 {
img.put_pixel(px, top_y, image::Rgb(color));
}
// Bottom line
let bottom_y = y2.saturating_sub(t);
if bottom_y >= y1 {
@@ -362,7 +377,7 @@ fn draw_bounding_box(
}
}
}
// Draw vertical lines (left and right)
for t in 0..thickness {
// Left line
@@ -370,7 +385,7 @@ fn draw_bounding_box(
for py in y1..=y2 {
img.put_pixel(left_x, py, image::Rgb(color));
}
// Right line
let right_x = x2.saturating_sub(t);
if right_x >= x1 {
@@ -425,28 +440,42 @@ async fn cmd_status(config: &Config, show_camera: bool, show_daemon: bool) -> Re
println!("\nConfiguration:");
println!(" Detection model: {}", config.detection.model);
println!(" Anti-spoofing: {}", if config.anti_spoofing.enabled { "enabled" } else { "disabled" });
println!(" TPM: {}", if config.tpm.enabled { "enabled" } else { "disabled" });
println!(
" Anti-spoofing: {}",
if config.anti_spoofing.enabled {
"enabled"
} else {
"disabled"
}
);
println!(
" TPM: {}",
if config.tpm.enabled {
"enabled"
} else {
"disabled"
}
);
Ok(())
}
async fn cmd_enroll(config: &Config, label: &str) -> Result<()> {
use linux_hello_daemon::auth::AuthService;
info!("Starting enrollment with label: {}", label);
// Get real user (handles sudo)
let user = get_real_username();
println!("Enrolling user: {}", user);
println!("Label: {}", label);
println!("Please look at the camera...");
// Create auth service
let auth_service = AuthService::new(config.clone());
auth_service.initialize()?;
// Enroll with 5 frames
match auth_service.enroll(&user, label, 5).await {
Ok(()) => {
@@ -462,16 +491,16 @@ async fn cmd_enroll(config: &Config, label: &str) -> Result<()> {
async fn cmd_list(_config: &Config) -> Result<()> {
use linux_hello_common::TemplateStore;
let store = TemplateStore::new(TemplateStore::default_path());
let users = store.list_users()?;
if users.is_empty() {
println!("No enrolled users");
return Ok(());
}
println!("Enrolled users:");
for user in users {
let templates = store.list_templates(&user)?;
@@ -481,28 +510,27 @@ async fn cmd_list(_config: &Config) -> Result<()> {
println!(" {}:", user);
for label in templates {
if let Ok(template) = store.load(&user, &label) {
println!(" - {} (enrolled: {}, frames: {})",
label,
template.enrolled_at,
template.frame_count
println!(
" - {} (enrolled: {}, frames: {})",
label, template.enrolled_at, template.frame_count
);
}
}
}
}
Ok(())
}
async fn cmd_remove(_config: &Config, label: Option<&str>, all: bool) -> Result<()> {
use linux_hello_common::TemplateStore;
let user = std::env::var("USER")
.or_else(|_| std::env::var("USERNAME"))
.unwrap_or_else(|_| "unknown".to_string());
let store = TemplateStore::new(TemplateStore::default_path());
if all {
store.remove_all(&user)?;
println!("Removed all templates for user: {}", user);
@@ -515,23 +543,23 @@ async fn cmd_remove(_config: &Config, label: Option<&str>, all: bool) -> Result<
"No label specified".to_string(),
));
}
Ok(())
}
async fn cmd_test(config: &Config, verbose: bool, _debug: bool) -> Result<()> {
use linux_hello_daemon::auth::AuthService;
// Get real user (handles sudo)
let user = get_real_username();
println!("Testing authentication for user: {}", user);
println!("Please look at the camera...");
// Create auth service
let auth_service = AuthService::new(config.clone());
auth_service.initialize()?;
match auth_service.authenticate(&user).await {
Ok(true) => {
println!("✓ Authentication successful!");
@@ -560,62 +588,77 @@ async fn cmd_config(config: &Config, json: bool, set: Option<&str>) -> Result<()
"Invalid format. Use key=value".to_string(),
));
}
let key = parts[0].trim();
let value = parts[1].trim();
let mut new_config = config.clone();
// Update configuration based on key
match key {
"general.log_level" => new_config.general.log_level = value.to_string(),
"general.timeout_seconds" => {
new_config.general.timeout_seconds = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid timeout value".to_string()))?;
new_config.general.timeout_seconds = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid timeout value".to_string())
})?;
}
"camera.device" => new_config.camera.device = value.to_string(),
"camera.ir_emitter" => new_config.camera.ir_emitter = value.to_string(),
"camera.fps" => {
new_config.camera.fps = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid fps value".to_string()))?;
new_config.camera.fps = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid fps value".to_string())
})?;
}
"detection.model" => new_config.detection.model = value.to_string(),
"detection.min_face_size" => {
new_config.detection.min_face_size = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid min_face_size value".to_string()))?;
new_config.detection.min_face_size = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid min_face_size value".to_string())
})?;
}
"detection.confidence_threshold" => {
new_config.detection.confidence_threshold = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid confidence_threshold value".to_string()))?;
new_config.detection.confidence_threshold = value.parse().map_err(|_| {
linux_hello_common::Error::Config(
"Invalid confidence_threshold value".to_string(),
)
})?;
}
"embedding.model" => new_config.embedding.model = value.to_string(),
"embedding.distance_threshold" => {
new_config.embedding.distance_threshold = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid distance_threshold value".to_string()))?;
new_config.embedding.distance_threshold = value.parse().map_err(|_| {
linux_hello_common::Error::Config(
"Invalid distance_threshold value".to_string(),
)
})?;
}
"anti_spoofing.enabled" => {
new_config.anti_spoofing.enabled = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid boolean value".to_string()))?;
new_config.anti_spoofing.enabled = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid boolean value".to_string())
})?;
}
"anti_spoofing.depth_check" => {
new_config.anti_spoofing.depth_check = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid boolean value".to_string()))?;
new_config.anti_spoofing.depth_check = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid boolean value".to_string())
})?;
}
"anti_spoofing.liveness_model" => {
new_config.anti_spoofing.liveness_model = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid boolean value".to_string()))?;
new_config.anti_spoofing.liveness_model = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid boolean value".to_string())
})?;
}
"anti_spoofing.min_score" => {
new_config.anti_spoofing.min_score = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid min_score value".to_string()))?;
new_config.anti_spoofing.min_score = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid min_score value".to_string())
})?;
}
"tpm.enabled" => {
new_config.tpm.enabled = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid boolean value".to_string()))?;
new_config.tpm.enabled = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid boolean value".to_string())
})?;
}
"tpm.pcr_binding" => {
new_config.tpm.pcr_binding = value.parse()
.map_err(|_| linux_hello_common::Error::Config("Invalid boolean value".to_string()))?;
new_config.tpm.pcr_binding = value.parse().map_err(|_| {
linux_hello_common::Error::Config("Invalid boolean value".to_string())
})?;
}
_ => {
return Err(linux_hello_common::Error::Config(format!(
@@ -624,7 +667,7 @@ async fn cmd_config(config: &Config, json: bool, set: Option<&str>) -> Result<()
)));
}
}
// Save configuration
let config_path = "/etc/linux-hello/config.toml";
match new_config.save(config_path) {
@@ -634,10 +677,14 @@ async fn cmd_config(config: &Config, json: bool, set: Option<&str>) -> Result<()
let user_config_path = dirs_config_path();
std::fs::create_dir_all(user_config_path.parent().unwrap())?;
new_config.save(&user_config_path)?;
println!("Configuration saved to {} (no write access to {})", user_config_path.display(), config_path);
println!(
"Configuration saved to {} (no write access to {})",
user_config_path.display(),
config_path
);
}
}
return Ok(());
}

View File

@@ -362,7 +362,8 @@ impl Config {
/// Save configuration to a TOML file
pub fn save<P: AsRef<Path>>(&self, path: P) -> Result<()> {
let content = toml::to_string_pretty(self).map_err(|e| Error::Serialization(e.to_string()))?;
let content =
toml::to_string_pretty(self).map_err(|e| Error::Serialization(e.to_string()))?;
std::fs::write(path, content)?;
Ok(())
}

View File

@@ -54,4 +54,4 @@ pub mod template;
pub use config::Config;
pub use error::{Error, Result};
pub use template::{FaceTemplate, TemplateStore};
pub use template::{FaceTemplate, TemplateStore};

View File

@@ -54,8 +54,8 @@
//! ```
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use std::fs;
use std::path::{Path, PathBuf};
use crate::error::{Error, Result};
@@ -191,7 +191,7 @@ impl TemplateStore {
let template_file = self.template_path(&template.user, &template.label);
let json = serde_json::to_string_pretty(template)
.map_err(|e| Error::Serialization(e.to_string()))?;
fs::write(&template_file, json)?;
Ok(())
}
@@ -199,32 +199,32 @@ impl TemplateStore {
/// Load a template for a user and label
pub fn load(&self, user: &str, label: &str) -> Result<FaceTemplate> {
let template_file = self.template_path(user, label);
if !template_file.exists() {
return Err(Error::UserNotEnrolled(format!("{}:{}", user, label)));
}
let content = fs::read_to_string(&template_file)?;
let template: FaceTemplate = serde_json::from_str(&content)
.map_err(|e| Error::Serialization(e.to_string()))?;
let template: FaceTemplate =
serde_json::from_str(&content).map_err(|e| Error::Serialization(e.to_string()))?;
Ok(template)
}
/// Load all templates for a user
pub fn load_all(&self, user: &str) -> Result<Vec<FaceTemplate>> {
let user_dir = self.user_path(user);
if !user_dir.exists() {
return Ok(vec![]);
}
let mut templates = Vec::new();
for entry in fs::read_dir(&user_dir)? {
let entry = entry?;
let path = entry.path();
if path.extension().and_then(|s| s.to_str()) == Some("json") {
if let Ok(content) = fs::read_to_string(&path) {
if let Ok(template) = serde_json::from_str::<FaceTemplate>(&content) {
@@ -244,11 +244,11 @@ impl TemplateStore {
}
let mut users = Vec::new();
for entry in fs::read_dir(&self.base_path)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
if let Some(user) = path.file_name().and_then(|n| n.to_str()) {
users.push(user.to_string());
@@ -262,17 +262,17 @@ impl TemplateStore {
/// List all templates for a user
pub fn list_templates(&self, user: &str) -> Result<Vec<String>> {
let user_dir = self.user_path(user);
if !user_dir.exists() {
return Ok(vec![]);
}
let mut labels = Vec::new();
for entry in fs::read_dir(&user_dir)? {
let entry = entry?;
let path = entry.path();
if let Some(ext) = path.extension().and_then(|s| s.to_str()) {
if ext == "json" {
if let Some(stem) = path.file_stem().and_then(|s| s.to_str()) {
@@ -288,7 +288,7 @@ impl TemplateStore {
/// Remove a template
pub fn remove(&self, user: &str, label: &str) -> Result<()> {
let template_file = self.template_path(user, label);
if template_file.exists() {
fs::remove_file(&template_file)?;
}
@@ -299,7 +299,7 @@ impl TemplateStore {
/// Remove all templates for a user
pub fn remove_all(&self, user: &str) -> Result<()> {
let user_dir = self.user_path(user);
if user_dir.exists() {
fs::remove_dir_all(&user_dir)?;
}
@@ -313,7 +313,7 @@ impl TemplateStore {
if !user_dir.exists() || !user_dir.is_dir() {
return false;
}
// Check if directory has any template files
if let Ok(entries) = fs::read_dir(&user_dir) {
for entry in entries {
@@ -324,7 +324,7 @@ impl TemplateStore {
}
}
}
false
}
}

View File

@@ -10,19 +10,19 @@
//!
//! Run with: cargo bench -p linux-hello-daemon
use criterion::{black_box, criterion_group, criterion_main, Criterion, BenchmarkId, Throughput};
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use image::GrayImage;
use linux_hello_daemon::detection::{detect_face_simple, SimpleFaceDetector, FaceDetect};
use linux_hello_daemon::embedding::{
cosine_similarity, euclidean_distance, PlaceholderEmbeddingExtractor, EmbeddingExtractor,
};
use linux_hello_daemon::matching::match_template;
use linux_hello_common::FaceTemplate;
use linux_hello_daemon::anti_spoofing::{
AntiSpoofingConfig, AntiSpoofingDetector, AntiSpoofingFrame,
};
use linux_hello_daemon::secure_memory::{SecureEmbedding, SecureBytes, memory_protection};
use linux_hello_common::FaceTemplate;
use linux_hello_daemon::detection::{detect_face_simple, FaceDetect, SimpleFaceDetector};
use linux_hello_daemon::embedding::{
cosine_similarity, euclidean_distance, EmbeddingExtractor, PlaceholderEmbeddingExtractor,
};
use linux_hello_daemon::matching::match_template;
use linux_hello_daemon::secure_memory::{memory_protection, SecureBytes, SecureEmbedding};
// ============================================================================
// Test Data Generation Helpers
@@ -118,9 +118,7 @@ fn bench_face_detection(c: &mut Criterion) {
BenchmarkId::new("simple_detection", name),
&(image.clone(), width, height),
|b, (img, w, h)| {
b.iter(|| {
detect_face_simple(black_box(img), black_box(*w), black_box(*h))
});
b.iter(|| detect_face_simple(black_box(img), black_box(*w), black_box(*h)));
},
);
@@ -130,9 +128,7 @@ fn bench_face_detection(c: &mut Criterion) {
BenchmarkId::new("detector_trait", name),
&(image.clone(), width, height),
|b, (img, w, h)| {
b.iter(|| {
detector.detect(black_box(img), black_box(*w), black_box(*h))
});
b.iter(|| detector.detect(black_box(img), black_box(*w), black_box(*h)));
},
);
}
@@ -158,8 +154,8 @@ fn bench_embedding_extraction(c: &mut Criterion) {
for (width, height, name) in face_sizes {
let image_data = generate_test_image(width, height);
let face_image = GrayImage::from_raw(width, height, image_data)
.expect("Failed to create test image");
let face_image =
GrayImage::from_raw(width, height, image_data).expect("Failed to create test image");
group.throughput(Throughput::Elements(1)); // 1 embedding per iteration
@@ -167,9 +163,7 @@ fn bench_embedding_extraction(c: &mut Criterion) {
BenchmarkId::new("placeholder_extractor", name),
&face_image,
|b, img| {
b.iter(|| {
extractor.extract(black_box(img))
});
b.iter(|| extractor.extract(black_box(img)));
},
);
}
@@ -182,15 +176,9 @@ fn bench_embedding_extraction(c: &mut Criterion) {
for dim in dimensions {
let extractor = PlaceholderEmbeddingExtractor::new(dim);
group.bench_with_input(
BenchmarkId::new("dimension", dim),
&face_image,
|b, img| {
b.iter(|| {
extractor.extract(black_box(img))
});
},
);
group.bench_with_input(BenchmarkId::new("dimension", dim), &face_image, |b, img| {
b.iter(|| extractor.extract(black_box(img)));
});
}
group.finish();
@@ -216,9 +204,7 @@ fn bench_template_matching(c: &mut Criterion) {
BenchmarkId::new("cosine_similarity", dim),
&(emb_a.clone(), emb_b.clone()),
|b, (a, bb)| {
b.iter(|| {
cosine_similarity(black_box(a), black_box(bb))
});
b.iter(|| cosine_similarity(black_box(a), black_box(bb)));
},
);
@@ -226,9 +212,7 @@ fn bench_template_matching(c: &mut Criterion) {
BenchmarkId::new("euclidean_distance", dim),
&(emb_a.clone(), emb_b.clone()),
|b, (a, bb)| {
b.iter(|| {
euclidean_distance(black_box(a), black_box(bb))
});
b.iter(|| euclidean_distance(black_box(a), black_box(bb)));
},
);
}
@@ -246,9 +230,7 @@ fn bench_template_matching(c: &mut Criterion) {
BenchmarkId::new("match_against_n_templates", count),
&(query.clone(), templates),
|b, (q, tmpl)| {
b.iter(|| {
match_template(black_box(q), black_box(tmpl), 0.4)
});
b.iter(|| match_template(black_box(q), black_box(tmpl), 0.4));
},
);
}
@@ -263,10 +245,7 @@ fn bench_template_matching(c: &mut Criterion) {
fn bench_anti_spoofing(c: &mut Criterion) {
let mut group = c.benchmark_group("anti_spoofing");
let resolutions = [
(320, 240, "QVGA"),
(640, 480, "VGA"),
];
let resolutions = [(320, 240, "QVGA"), (640, 480, "VGA")];
for (width, height, name) in resolutions {
let pixels = generate_test_image(width, height);
@@ -361,10 +340,10 @@ fn bench_encryption(c: &mut Criterion) {
// Test with different data sizes (embedding sizes)
let data_sizes = [
(128 * 4, "128_floats"), // 128-dim embedding
(256 * 4, "256_floats"), // 256-dim embedding
(512 * 4, "512_floats"), // 512-dim embedding
(1024 * 4, "1024_floats"), // 1024-dim embedding
(128 * 4, "128_floats"), // 128-dim embedding
(256 * 4, "256_floats"), // 256-dim embedding
(512 * 4, "512_floats"), // 512-dim embedding
(1024 * 4, "1024_floats"), // 1024-dim embedding
];
for (size, name) in data_sizes {
@@ -372,29 +351,18 @@ fn bench_encryption(c: &mut Criterion) {
group.throughput(Throughput::Bytes(size as u64));
group.bench_with_input(
BenchmarkId::new("encrypt", name),
&plaintext,
|b, data| {
b.iter(|| {
storage.encrypt("bench_user", black_box(data))
});
},
);
group.bench_with_input(BenchmarkId::new("encrypt", name), &plaintext, |b, data| {
b.iter(|| storage.encrypt("bench_user", black_box(data)));
});
// Encrypt once for decrypt benchmark
let encrypted = storage.encrypt("bench_user", &plaintext)
let encrypted = storage
.encrypt("bench_user", &plaintext)
.expect("Encryption failed");
group.bench_with_input(
BenchmarkId::new("decrypt", name),
&encrypted,
|b, enc| {
b.iter(|| {
storage.decrypt("bench_user", black_box(enc))
});
},
);
group.bench_with_input(BenchmarkId::new("decrypt", name), &encrypted, |b, enc| {
b.iter(|| storage.decrypt("bench_user", black_box(enc)));
});
// Round-trip benchmark
group.bench_with_input(
@@ -435,9 +403,7 @@ fn bench_secure_memory(c: &mut Criterion) {
BenchmarkId::new("secure_embedding_create", dim),
&data,
|b, d| {
b.iter(|| {
SecureEmbedding::new(black_box(d.clone()))
});
b.iter(|| SecureEmbedding::new(black_box(d.clone())));
},
);
@@ -449,9 +415,7 @@ fn bench_secure_memory(c: &mut Criterion) {
BenchmarkId::new("secure_cosine_similarity", dim),
&(secure_a.clone(), secure_b.clone()),
|b, (a, bb)| {
b.iter(|| {
a.cosine_similarity(black_box(bb))
});
b.iter(|| a.cosine_similarity(black_box(bb)));
},
);
@@ -460,9 +424,7 @@ fn bench_secure_memory(c: &mut Criterion) {
BenchmarkId::new("secure_to_bytes", dim),
&secure_a,
|b, emb| {
b.iter(|| {
emb.to_bytes()
});
b.iter(|| emb.to_bytes());
},
);
@@ -471,9 +433,7 @@ fn bench_secure_memory(c: &mut Criterion) {
BenchmarkId::new("secure_from_bytes", dim),
&bytes,
|b, data| {
b.iter(|| {
SecureEmbedding::from_bytes(black_box(data))
});
b.iter(|| SecureEmbedding::from_bytes(black_box(data)));
},
);
}
@@ -493,9 +453,7 @@ fn bench_secure_memory(c: &mut Criterion) {
BenchmarkId::new("constant_time_eq_match", size),
&(bytes_a.clone(), bytes_b.clone()),
|b, (a, bb)| {
b.iter(|| {
a.constant_time_eq(black_box(bb))
});
b.iter(|| a.constant_time_eq(black_box(bb)));
},
);
@@ -504,9 +462,7 @@ fn bench_secure_memory(c: &mut Criterion) {
BenchmarkId::new("constant_time_eq_differ", size),
&(bytes_a.clone(), bytes_diff.clone()),
|b, (a, d)| {
b.iter(|| {
a.constant_time_eq(black_box(d))
});
b.iter(|| a.constant_time_eq(black_box(d)));
},
);
}
@@ -515,16 +471,12 @@ fn bench_secure_memory(c: &mut Criterion) {
for size in byte_sizes {
group.throughput(Throughput::Bytes(size as u64));
group.bench_with_input(
BenchmarkId::new("secure_zero", size),
&size,
|b, &sz| {
let mut buffer: Vec<u8> = (0..sz).map(|i| (i % 256) as u8).collect();
b.iter(|| {
memory_protection::secure_zero(black_box(&mut buffer));
});
},
);
group.bench_with_input(BenchmarkId::new("secure_zero", size), &size, |b, &sz| {
let mut buffer: Vec<u8> = (0..sz).map(|i| (i % 256) as u8).collect();
b.iter(|| {
memory_protection::secure_zero(black_box(&mut buffer));
});
});
}
group.finish();
@@ -552,11 +504,9 @@ fn bench_full_pipeline(c: &mut Criterion) {
b.iter(|| {
// Step 1: Face detection
let detections = detector.detect(
black_box(&image_data),
black_box(width),
black_box(height)
).unwrap();
let detections = detector
.detect(black_box(&image_data), black_box(width), black_box(height))
.unwrap();
if let Some(detection) = detections.first() {
// Step 2: Extract face region (simulated)
@@ -587,11 +537,9 @@ fn bench_full_pipeline(c: &mut Criterion) {
let mut spoof_detector = AntiSpoofingDetector::new(config.clone());
// Step 1: Face detection
let detections = detector.detect(
black_box(&image_data),
black_box(width),
black_box(height)
).unwrap();
let detections = detector
.detect(black_box(&image_data), black_box(width), black_box(height))
.unwrap();
if let Some(detection) = detections.first() {
// Step 2: Anti-spoofing check

View File

@@ -9,9 +9,7 @@ use linux_hello_daemon::tpm::{get_tpm_storage, TpmStorage};
fn main() {
// Initialize logging
tracing_subscriber::fmt()
.with_env_filter("info")
.init();
tracing_subscriber::fmt().with_env_filter("info").init();
println!("=== Linux Hello TPM Test ===\n");

View File

@@ -121,7 +121,7 @@ impl Default for AntiSpoofingConfig {
enable_ir_check: true,
enable_depth_check: true,
enable_texture_check: true,
enable_blink_check: false, // Requires multiple frames
enable_blink_check: false, // Requires multiple frames
enable_movement_check: false, // Requires multiple frames
temporal_frames: 10,
}
@@ -177,7 +177,7 @@ impl AntiSpoofingDetector {
pub fn check_frame(&mut self, frame: &AntiSpoofingFrame) -> Result<LivenessResult> {
let mut checks = LivenessChecks::default();
let mut scores: Vec<(f32, f32)> = Vec::new(); // (score, weight)
// Extract frame features
let features = self.extract_features(frame)?;
self.update_history(features.clone());
@@ -211,8 +211,8 @@ impl AntiSpoofingDetector {
}
// Micro-movement analysis (requires frame history)
if self.config.enable_movement_check
&& self.frame_history.len() >= self.config.temporal_frames / 2
if self.config.enable_movement_check
&& self.frame_history.len() >= self.config.temporal_frames / 2
{
let score = self.analyze_movements()?;
checks.movement_check = Some(score);
@@ -220,9 +220,11 @@ impl AntiSpoofingDetector {
}
// Calculate weighted average
let (total_score, total_weight) = scores.iter()
.fold((0.0, 0.0), |(s, w), (score, weight)| (s + score * weight, w + weight));
let (total_score, total_weight) =
scores.iter().fold((0.0, 0.0), |(s, w), (score, weight)| {
(s + score * weight, w + weight)
});
let final_score = if total_weight > 0.0 {
total_score / total_weight
} else {
@@ -253,7 +255,7 @@ impl AntiSpoofingDetector {
fn check_ir_presence(&self, frame: &AntiSpoofingFrame) -> Result<f32> {
// IR cameras produce specific brightness patterns
// Real faces reflect IR differently than screens/photos
let bbox = frame.face_bbox.unwrap_or((
frame.width / 4,
frame.height / 4,
@@ -265,7 +267,7 @@ impl AntiSpoofingDetector {
let (x, y, w, h) = bbox;
let mut total: u64 = 0;
let mut count: u64 = 0;
for row in y..(y + h).min(frame.height) {
for col in x..(x + w).min(frame.width) {
let idx = (row * frame.width + col) as usize;
@@ -281,10 +283,10 @@ impl AntiSpoofingDetector {
}
let avg_brightness = (total as f32) / (count as f32);
// IR images of real faces typically have moderate, non-uniform brightness
// Very dark or very bright uniform regions suggest screens/photos
// Calculate brightness variance
let mut variance: f64 = 0.0;
for row in y..(y + h).min(frame.height) {
@@ -314,14 +316,16 @@ impl AntiSpoofingDetector {
};
let score = (brightness_score + variance_score) / 2.0;
debug!("IR check: brightness={:.1}, variance={:.1}, score={:.2}",
avg_brightness, std_dev, score);
debug!(
"IR check: brightness={:.1}, variance={:.1}, score={:.2}",
avg_brightness, std_dev, score
);
Ok(score.clamp(0.0, 1.0))
}
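The weighted-average fold reformatted in `check_frame` above can be exercised standalone; a sketch under the same (score, weight) convention (the helper name `weighted_score` is illustrative, not part of the crate):

```rust
// Weighted mean over (score, weight) pairs, mirroring the fold in
// check_frame; returns 0.0 when no checks contributed any weight.
fn weighted_score(scores: &[(f32, f32)]) -> f32 {
    let (total, weight) = scores
        .iter()
        .fold((0.0f32, 0.0f32), |(s, w), (score, wt)| (s + score * wt, w + wt));
    if weight > 0.0 {
        total / weight
    } else {
        0.0
    }
}

fn main() {
    // One passing and one failing check at equal weight average to 0.5.
    assert!((weighted_score(&[(1.0, 2.0), (0.0, 2.0)]) - 0.5).abs() < 1e-6);
    assert_eq!(weighted_score(&[]), 0.0);
}
```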
/// Estimate depth using pixel intensity patterns
///
///
/// This is a simplified placeholder. Real implementation would use:
/// - Stereo IR cameras with disparity calculation
/// - Structured light projection patterns
@@ -335,22 +339,22 @@ impl AntiSpoofingDetector {
));
let (x, y, w, h) = bbox;
// Analyze intensity gradients that suggest 3D structure
// Real faces have characteristic nose/cheek depth patterns
// Calculate horizontal gradient at face center (nose ridge)
let center_y = y + h / 2;
let center_x = x + w / 2;
let mut gradient_sum: f32 = 0.0;
let mut samples = 0;
// Sample horizontal gradient across face
for col in (center_x.saturating_sub(w/4))..(center_x + w/4).min(frame.width - 1) {
for col in (center_x.saturating_sub(w / 4))..(center_x + w / 4).min(frame.width - 1) {
let idx1 = (center_y * frame.width + col) as usize;
let idx2 = (center_y * frame.width + col + 1) as usize;
if idx1 < frame.pixels.len() && idx2 < frame.pixels.len() {
let grad = (frame.pixels[idx2] as i32 - frame.pixels[idx1] as i32).abs();
gradient_sum += grad as f32;
@@ -363,7 +367,7 @@ impl AntiSpoofingDetector {
}
let avg_gradient = gradient_sum / samples as f32;
// Real faces typically have gradients in the 5-30 range
// Flat images (photos) have lower gradients
let score = if avg_gradient > 3.0 && avg_gradient < 50.0 {
@@ -374,7 +378,10 @@ impl AntiSpoofingDetector {
0.5
};
debug!("Depth check: avg_gradient={:.1}, score={:.2}", avg_gradient, score);
debug!(
"Depth check: avg_gradient={:.1}, score={:.2}",
avg_gradient, score
);
Ok(score.clamp(0.0, 1.0))
}
@@ -392,9 +399,9 @@ impl AntiSpoofingDetector {
// Calculate Local Binary Pattern (LBP) variance
// Real skin has specific texture patterns
// Screens show moiré patterns, photos show printing dots
let mut lbp_values: Vec<u8> = Vec::new();
// Sample LBP in face region
for row in (y + 1)..((y + h).min(frame.height) - 1) {
for col in (x + 1)..((x + w).min(frame.width) - 1) {
@@ -403,22 +410,30 @@ impl AntiSpoofingDetector {
continue;
}
let center = frame.pixels[center_idx];
// 8-neighbor LBP
let mut lbp: u8 = 0;
let offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
(1, 1), (1, 0), (1, -1), (0, -1)];
let offsets = [
(-1, -1),
(-1, 0),
(-1, 1),
(0, 1),
(1, 1),
(1, 0),
(1, -1),
(0, -1),
];
for (i, (dy, dx)) in offsets.iter().enumerate() {
let ny = (row as i32 + dy) as usize;
let nx = (col as i32 + dx) as usize;
let idx = ny * frame.width as usize + nx;
if idx < frame.pixels.len() && frame.pixels[idx] >= center {
lbp |= 1 << i;
}
}
lbp_values.push(lbp);
}
}
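The 8-neighbor loop above, lifted into a standalone helper for clarity (the function name `lbp_code` is illustrative; the detector inlines this logic):

```rust
// Compute the 8-neighbor LBP code for pixel (row, col) of a row-major
// grayscale buffer, using the same clockwise offset order as above.
// Assumes (row, col) is at least one pixel away from every border.
fn lbp_code(pixels: &[u8], width: usize, row: usize, col: usize) -> u8 {
    let center = pixels[row * width + col];
    let offsets = [
        (-1i32, -1i32), (-1, 0), (-1, 1), (0, 1),
        (1, 1), (1, 0), (1, -1), (0, -1),
    ];
    let mut lbp = 0u8;
    for (i, (dy, dx)) in offsets.iter().enumerate() {
        let ny = (row as i32 + dy) as usize;
        let nx = (col as i32 + dx) as usize;
        if pixels[ny * width + nx] >= center {
            lbp |= 1 << i;
        }
    }
    lbp
}

fn main() {
    // A flat 3x3 patch: every neighbor equals the center, so all bits set.
    assert_eq!(lbp_code(&[5u8; 9], 3, 1, 1), 255);
    // A bright center surrounded by darker pixels yields code 0.
    let patch = [0, 0, 0, 0, 200, 0, 0, 0, 0];
    assert_eq!(lbp_code(&patch, 3, 1, 1), 0);
}
```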
@@ -463,17 +478,16 @@ impl AntiSpoofingDetector {
}
// Check for eye brightness variations indicating blinks
let eye_variations: Vec<f32> = self.frame_history
let eye_variations: Vec<f32> = self
.frame_history
.windows(2)
.filter_map(|w| {
match (&w[0].eye_brightness, &w[1].eye_brightness) {
(Some((l1, r1)), Some((l2, r2))) => {
let left_diff = (l1 - l2).abs();
let right_diff = (r1 - r2).abs();
Some((left_diff + right_diff) / 2.0)
}
_ => None,
.filter_map(|w| match (&w[0].eye_brightness, &w[1].eye_brightness) {
(Some((l1, r1)), Some((l2, r2))) => {
let left_diff = (l1 - l2).abs();
let right_diff = (r1 - r2).abs();
Some((left_diff + right_diff) / 2.0)
}
_ => None,
})
.collect();
@@ -483,7 +497,7 @@ impl AntiSpoofingDetector {
// Look for characteristic blink pattern (quick drop and rise)
let max_variation = eye_variations.iter().cloned().fold(0.0f32, f32::max);
let score = if max_variation > 20.0 {
0.9 // Clear blink detected
} else if max_variation > 10.0 {
@@ -492,7 +506,10 @@ impl AntiSpoofingDetector {
0.4 // No blink detected
};
debug!("Blink check: max_variation={:.1}, score={:.2}", max_variation, score);
debug!(
"Blink check: max_variation={:.1}, score={:.2}",
max_variation, score
);
Ok(score)
}
@@ -504,8 +521,9 @@ impl AntiSpoofingDetector {
// Real faces have natural micro-movements
// Photos/videos may be perfectly still or have unnatural movement
let movements: Vec<f32> = self.frame_history
let movements: Vec<f32> = self
.frame_history
.windows(2)
.map(|w| {
let (x1, y1) = w[0].position;
@@ -519,14 +537,16 @@ impl AntiSpoofingDetector {
}
let avg_movement = movements.iter().sum::<f32>() / movements.len() as f32;
let movement_variance: f32 = movements.iter()
let movement_variance: f32 = movements
.iter()
.map(|m| (m - avg_movement).powi(2))
.sum::<f32>() / movements.len() as f32;
.sum::<f32>()
/ movements.len() as f32;
// Real faces: small but variable movements (0.5-5 pixels, variance 0.1-2)
// Photos: near-zero movement
// Videos: potentially large, regular movements
let score = if avg_movement > 0.3 && avg_movement < 8.0 && movement_variance > 0.05 {
0.8 + (movement_variance.min(1.0) * 0.2)
} else if avg_movement < 0.3 {
@@ -535,8 +555,10 @@ impl AntiSpoofingDetector {
0.5 // Unusual movement pattern
};
debug!("Movement check: avg={:.2}, var={:.2}, score={:.2}",
avg_movement, movement_variance, score);
debug!(
"Movement check: avg={:.2}, var={:.2}, score={:.2}",
avg_movement, movement_variance, score
);
Ok(score.clamp(0.0, 1.0))
}
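The mean and variance computed above can be checked in isolation; a sketch assuming a non-empty slice of per-frame displacement magnitudes (the name `movement_stats` is illustrative, not part of the crate):

```rust
// Mean and population variance of per-frame movement magnitudes, as used
// by the micro-movement check. Assumes a non-empty slice, matching the
// caller's frame-history guard.
fn movement_stats(movements: &[f32]) -> (f32, f32) {
    let n = movements.len() as f32;
    let avg = movements.iter().sum::<f32>() / n;
    let var = movements.iter().map(|m| (m - avg).powi(2)).sum::<f32>() / n;
    (avg, var)
}

fn main() {
    let (avg, var) = movement_stats(&[1.0, 3.0]);
    assert!((avg - 2.0).abs() < 1e-6);
    assert!((var - 1.0).abs() < 1e-6);
}
```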
@@ -550,11 +572,11 @@ impl AntiSpoofingDetector {
));
let (x, y, w, h) = bbox;
// Calculate average brightness
let mut total: u64 = 0;
let mut count: u64 = 0;
for row in y..(y + h).min(frame.height) {
for col in x..(x + w).min(frame.width) {
let idx = (row * frame.width + col) as usize;
@@ -572,10 +594,7 @@ impl AntiSpoofingDetector {
};
// Face center position
let position = (
(x + w / 2) as f32,
(y + h / 2) as f32,
);
let position = ((x + w / 2) as f32, (y + h / 2) as f32);
// Eye region brightness (simplified - samples a band a quarter of the way down the face)
let eye_y = y + h / 4;
@@ -586,7 +605,7 @@ impl AntiSpoofingDetector {
let left_brightness = self.region_brightness(frame, left_eye_x, eye_y, eye_w, eye_h);
let right_brightness = self.region_brightness(frame, right_eye_x, eye_y, eye_w, eye_h);
let eye_brightness = if left_brightness > 0.0 && right_brightness > 0.0 {
Some((left_brightness, right_brightness))
} else {
@@ -605,7 +624,7 @@ impl AntiSpoofingDetector {
fn region_brightness(&self, frame: &AntiSpoofingFrame, x: u32, y: u32, w: u32, h: u32) -> f32 {
let mut total: u64 = 0;
let mut count: u64 = 0;
for row in y..(y + h).min(frame.height) {
for col in x..(x + w).min(frame.width) {
let idx = (row * frame.width + col) as usize;
@@ -626,7 +645,7 @@ impl AntiSpoofingDetector {
/// Update frame history
fn update_history(&mut self, features: FrameFeatures) {
self.frame_history.push(features);
// Keep only recent frames
while self.frame_history.len() > self.config.temporal_frames {
self.frame_history.remove(0);
@@ -636,31 +655,31 @@ impl AntiSpoofingDetector {
/// Determine the most likely reason for rejection
fn determine_rejection_reason(&self, checks: &LivenessChecks) -> String {
let mut reasons = Vec::new();
if let Some(score) = checks.ir_check {
if score < 0.5 {
reasons.push("IR illumination pattern suspicious");
}
}
if let Some(score) = checks.depth_check {
if score < 0.5 {
reasons.push("Face appears flat (possible photo)");
}
}
if let Some(score) = checks.texture_check {
if score < 0.5 {
reasons.push("Skin texture appears artificial");
}
}
if let Some(score) = checks.blink_check {
if score < 0.5 {
reasons.push("No natural eye blinks detected");
}
}
if let Some(score) = checks.movement_check {
if score < 0.5 {
reasons.push("Unnatural movement pattern");
@@ -683,13 +702,13 @@ mod tests {
let width = 100;
let height = 100;
let mut pixels = vec![brightness; (width * height) as usize];
// Add some variation
for i in 0..pixels.len() {
let variation = ((i * 17) % 30) as i16 - 15;
pixels[i] = (brightness as i16 + variation).clamp(0, 255) as u8;
}
AntiSpoofingFrame {
pixels,
width,
@@ -704,10 +723,10 @@ mod tests {
fn test_anti_spoofing_single_frame() {
let config = AntiSpoofingConfig::default();
let mut detector = AntiSpoofingDetector::new(config);
let frame = create_test_frame(100, true);
let result = detector.check_frame(&frame).unwrap();
assert!(result.score >= 0.0 && result.score <= 1.0);
assert!(result.checks.ir_check.is_some());
assert!(result.checks.depth_check.is_some());
@@ -718,11 +737,11 @@ mod tests {
fn test_ir_check_dark_frame() {
let config = AntiSpoofingConfig::default();
let mut detector = AntiSpoofingDetector::new(config);
// Very dark frame (suspicious for IR)
let frame = create_test_frame(10, true);
let result = detector.check_frame(&frame).unwrap();
// Dark IR frame should score lower
assert!(result.checks.ir_check.unwrap() < 0.7);
}
@@ -731,11 +750,11 @@ mod tests {
fn test_ir_check_normal_frame() {
let config = AntiSpoofingConfig::default();
let mut detector = AntiSpoofingDetector::new(config);
// Normal brightness frame
let frame = create_test_frame(100, true);
let result = detector.check_frame(&frame).unwrap();
// Should pass with reasonable score
assert!(result.checks.ir_check.unwrap() > 0.5);
}
@@ -745,23 +764,23 @@ mod tests {
let mut config = AntiSpoofingConfig::default();
config.enable_movement_check = true;
config.temporal_frames = 5;
let mut detector = AntiSpoofingDetector::new(config);
// Simulate multiple frames with slight movement
for i in 0..5 {
let mut frame = create_test_frame(100, true);
frame.timestamp_ms = i * 100;
// Slightly different face position
frame.face_bbox = Some((25 + (i as u32 % 3), 25, 50, 50));
let _ = detector.check_frame(&frame).unwrap();
}
// After enough frames, movement check should be performed
let frame = create_test_frame(100, true);
let result = detector.check_frame(&frame).unwrap();
assert!(result.checks.movement_check.is_some());
}
@@ -769,15 +788,15 @@ mod tests {
fn test_reset() {
let config = AntiSpoofingConfig::default();
let mut detector = AntiSpoofingDetector::new(config);
// Add some frames
for _ in 0..3 {
let frame = create_test_frame(100, true);
let _ = detector.check_frame(&frame);
}
assert!(!detector.frame_history.is_empty());
detector.reset();
assert!(detector.frame_history.is_empty());
}
@@ -787,10 +806,10 @@ mod tests {
let mut checks = LivenessChecks::default();
checks.ir_check = Some(0.3);
checks.depth_check = Some(0.2);
let config = AntiSpoofingConfig::default();
let detector = AntiSpoofingDetector::new(config);
let reason = detector.determine_rejection_reason(&checks);
assert!(reason.contains("IR") || reason.contains("flat"));
}

View File

@@ -8,7 +8,7 @@ use tracing::{debug, info, warn};
use crate::camera::PixelFormat;
use crate::detection::detect_face_simple;
use crate::embedding::{EmbeddingExtractor, PlaceholderEmbeddingExtractor};
use crate::embedding::{EmbeddingExtractor, LbphEmbeddingExtractor};
use crate::matching::{average_embeddings, match_template};
use image::GrayImage;
@@ -17,15 +17,14 @@ use image::GrayImage;
pub struct AuthService {
config: Config,
template_store_path: std::path::PathBuf,
embedding_extractor: PlaceholderEmbeddingExtractor,
embedding_extractor: LbphEmbeddingExtractor,
}
impl AuthService {
/// Create a new authentication service
pub fn new(config: Config) -> Self {
let template_store_path = TemplateStore::default_path();
let embedding_extractor =
PlaceholderEmbeddingExtractor::new(PlaceholderEmbeddingExtractor::default_dimension());
let embedding_extractor = LbphEmbeddingExtractor::default();
Self {
config,
@@ -50,7 +49,7 @@ impl AuthService {
info!("Authenticating user: {}", user);
let template_store = self.template_store();
// Check if user is enrolled
if !template_store.is_enrolled(user) {
warn!("User {} is not enrolled", user);
@@ -90,7 +89,7 @@ impl AuthService {
for i in 0..frame_count {
debug!("Capturing frame {}/{}", i + 1, frame_count);
match self.capture_and_extract_embedding().await {
Ok(emb) => {
embeddings.push(emb);
@@ -127,7 +126,10 @@ impl AuthService {
let template_store = self.template_store();
template_store.store(&template)?;
info!("User {} enrolled successfully with {} frames", user, template.frame_count);
info!(
"User {} enrolled successfully with {} frames",
user, template.frame_count
);
Ok(())
}
@@ -157,33 +159,40 @@ impl AuthService {
// Capture frame
let frame = camera.capture_frame()?;
debug!("Captured frame: {}x{}, format: {:?}", frame.width, frame.height, frame.format);
debug!(
"Captured frame: {}x{}, format: {:?}",
frame.width, frame.height, frame.format
);
// Convert frame to grayscale image
let gray_image = match frame.format {
PixelFormat::Grey => {
GrayImage::from_raw(frame.width, frame.height, frame.data)
.ok_or_else(|| linux_hello_common::Error::Detection(
PixelFormat::Grey => GrayImage::from_raw(frame.width, frame.height, frame.data)
.ok_or_else(|| {
linux_hello_common::Error::Detection(
"Failed to create grayscale image".to_string(),
))?
}
)
})?,
PixelFormat::Yuyv => {
// Simple YUYV to grayscale conversion (take Y channel)
let mut gray_data = Vec::with_capacity((frame.width * frame.height) as usize);
for chunk in frame.data.chunks_exact(2) {
gray_data.push(chunk[0]); // Y component
}
GrayImage::from_raw(frame.width, frame.height, gray_data)
.ok_or_else(|| linux_hello_common::Error::Detection(
GrayImage::from_raw(frame.width, frame.height, gray_data).ok_or_else(|| {
linux_hello_common::Error::Detection(
"Failed to create grayscale from YUYV".to_string(),
))?
)
})?
}
PixelFormat::Mjpeg => {
// Decode MJPEG (JPEG) to image, then convert to grayscale
image::load_from_memory(&frame.data)
.map_err(|e| linux_hello_common::Error::Detection(
format!("Failed to decode MJPEG: {}", e)
))?
.map_err(|e| {
linux_hello_common::Error::Detection(format!(
"Failed to decode MJPEG: {}",
e
))
})?
.to_luma8()
}
_ => {
@@ -210,15 +219,9 @@ impl AuthService {
}
/// Extract a face region from an image
fn extract_face_region(
image: &GrayImage,
x: u32,
y: u32,
w: u32,
h: u32,
) -> Result<GrayImage> {
fn extract_face_region(image: &GrayImage, x: u32, y: u32, w: u32, h: u32) -> Result<GrayImage> {
let (img_width, img_height) = image.dimensions();
// Clamp coordinates to image bounds
let x = x.min(img_width);
let y = y.min(img_height);
@@ -232,7 +235,7 @@ fn extract_face_region(
}
let mut face_data = Vec::with_capacity((w * h) as usize);
for row in y..(y + h) {
for col in x..(x + w) {
let pixel = image.get_pixel(col, row);
@@ -240,8 +243,7 @@ fn extract_face_region(
}
}
GrayImage::from_raw(w, h, face_data)
.ok_or_else(|| linux_hello_common::Error::Detection(
"Failed to create face image".to_string(),
))
GrayImage::from_raw(w, h, face_data).ok_or_else(|| {
linux_hello_common::Error::Detection("Failed to create face image".to_string())
})
}
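The YUYV branch above keeps only the luma bytes; as a standalone sketch (assuming packed YUYV, i.e. byte pairs of Y followed by interleaved chroma):

```rust
// Take the Y (luma) byte of every 2-byte YUYV pair, as in the
// PixelFormat::Yuyv branch above. A trailing odd byte is dropped.
fn yuyv_to_gray(yuyv: &[u8]) -> Vec<u8> {
    yuyv.chunks_exact(2).map(|pair| pair[0]).collect()
}

fn main() {
    // Y0 U Y1 V -> the two luma samples.
    assert_eq!(yuyv_to_gray(&[10, 128, 20, 130]), vec![10, 20]);
}
```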

View File

@@ -5,7 +5,7 @@
use linux_hello_common::Result;
/// IR emitter controller
///
///
/// Public API - used in tests and may be used by external code
#[allow(dead_code)] // Public API, used in tests
pub struct IrEmitterControl {
@@ -19,7 +19,7 @@ pub struct IrEmitterControl {
impl IrEmitterControl {
/// Create a new IR emitter controller for a device
///
///
/// Public API - used in tests
#[allow(dead_code)] // Public API, used in tests
pub fn new(device_path: &str) -> Self {
@@ -30,7 +30,7 @@ impl IrEmitterControl {
}
/// Attempt to enable the IR emitter
///
///
/// Public API - used in tests and may be used by external code
#[cfg(target_os = "linux")]
#[allow(dead_code)] // Public API, used in tests
@@ -75,7 +75,12 @@ impl IrEmitterControl {
// Method 2: Try power_line_frequency (some cameras use this)
let _ = std::process::Command::new("v4l2-ctl")
.args(["-d", &self.device_path, "--set-ctrl", "power_line_frequency=2"])
.args([
"-d",
&self.device_path,
"--set-ctrl",
"power_line_frequency=2",
])
.output();
// For now, assume emitter is enabled (some cameras have always-on emitters)
@@ -89,7 +94,7 @@ impl IrEmitterControl {
}
/// Attempt to disable the IR emitter
///
///
/// Public API - used in tests
#[cfg(target_os = "linux")]
#[allow(dead_code)] // Public API, used in tests
@@ -105,7 +110,7 @@ impl IrEmitterControl {
}
/// Check if emitter is active
///
///
/// Public API - used in tests
#[allow(dead_code)] // Public API, used in tests
pub fn is_active(&self) -> bool {
@@ -113,7 +118,7 @@ impl IrEmitterControl {
}
/// Get device path
///
///
/// Public API - used in tests
#[allow(dead_code)] // Public API, used in tests
pub fn device_path(&self) -> &str {
@@ -135,7 +140,7 @@ impl IrEmitterControl {
}
/// Scan for IR emitter capabilities on a device
///
///
/// This is a public API function that may be used by external code or future features.
#[cfg(target_os = "linux")]
#[allow(dead_code)] // Public API, may be used externally
@@ -182,11 +187,11 @@ mod tests {
#[test]
fn test_ir_emitter_enable_disable() {
let mut emitter = IrEmitterControl::new("/dev/video0");
// Enable
emitter.enable().unwrap();
assert!(emitter.is_active());
// Disable
emitter.disable().unwrap();
assert!(!emitter.is_active());

View File

@@ -22,7 +22,10 @@ pub fn enumerate_cameras() -> Result<Vec<CameraInfo>> {
if let Ok(device) = Device::new(i) {
if let Ok(caps) = device.query_caps() {
// Only include capture devices
if caps.capabilities.contains(v4l::capability::Flags::VIDEO_CAPTURE) {
if caps
.capabilities
.contains(v4l::capability::Flags::VIDEO_CAPTURE)
{
let name = caps.card.clone();
let is_ir = detect_ir_camera(&name, &device);
let resolutions = get_supported_resolutions(&device);
@@ -124,7 +127,7 @@ pub struct Camera {
impl Camera {
/// Open a camera device by path
///
///
/// Public API - used by library code (auth.rs) and CLI
#[allow(dead_code)] // Public API, used by library code and CLI
pub fn open(device_path: &str) -> Result<Self> {
@@ -134,8 +137,8 @@ impl Camera {
.parse()
.map_err(|_| Error::Camera(format!("Invalid device path: {}", device_path)))?;
let device =
Device::new(index).map_err(|e| Error::Camera(format!("Failed to open device: {}", e)))?;
let device = Device::new(index)
.map_err(|e| Error::Camera(format!("Failed to open device: {}", e)))?;
// Try to set a preferred format
let mut format = device
@@ -175,7 +178,7 @@ impl Camera {
}
/// Start streaming
///
///
/// Public API - used internally by capture_frame
#[allow(dead_code)] // Public API
pub fn start(&mut self) -> Result<()> {
@@ -198,7 +201,7 @@ impl Camera {
}
/// Capture a single frame
///
///
/// Public API - used by authentication and enrollment
#[allow(dead_code)] // Public API, used by auth service
pub fn capture_frame(&mut self) -> Result<Frame> {
@@ -231,7 +234,7 @@ impl Camera {
}
/// Get current resolution
///
///
/// Public API - used by CLI status command
#[allow(dead_code)] // Public API, used by CLI
pub fn resolution(&self) -> (u32, u32) {

View File

@@ -45,9 +45,9 @@
//! }
//! ```
mod ir_emitter;
#[cfg(target_os = "linux")]
mod linux;
mod ir_emitter;
#[cfg(target_os = "linux")]
pub use linux::*;
@@ -217,13 +217,13 @@ impl Camera {
pub fn capture_frame(&mut self) -> Result<Frame> {
self.frame_count += 1;
// Generate a synthetic frame (gradient + noise)
let size = (self.width * self.height) as usize;
let mut data = Vec::with_capacity(size);
let offset = (self.frame_count % 255) as u8;
for y in 0..self.height {
for x in 0..self.width {
// Moving gradient pattern
@@ -231,7 +231,7 @@ impl Camera {
data.push(val);
}
}
// Emulate capture delay
std::thread::sleep(std::time::Duration::from_millis(33));
@@ -243,7 +243,7 @@ impl Camera {
timestamp_us: self.frame_count * 33333,
})
}
pub fn resolution(&self) -> (u32, u32) {
(self.width, self.height)
}
@@ -310,16 +310,16 @@ mod tests {
fn test_mock_camera_capture() {
let mut camera = Camera::open("mock_cam_0").unwrap();
camera.start().unwrap();
let frame = camera.capture_frame().unwrap();
assert_eq!(frame.width, 640);
assert_eq!(frame.height, 480);
assert_eq!(frame.format, PixelFormat::Grey);
assert_eq!(frame.data.len(), 640 * 480);
let frame2 = camera.capture_frame().unwrap();
assert!(frame2.timestamp_us > frame.timestamp_us);
camera.stop();
}
}

View File

@@ -3,7 +3,7 @@
//! Handles connection to the system bus, service name registration,
//! and serving the org.linuxhello.Manager interface.
use linux_hello_common::{Config, Result, Error};
use linux_hello_common::{Config, Error, Result};
use tracing::{error, info};
use zbus::connection::Builder;
use zbus::Connection;

View File

@@ -84,14 +84,16 @@ impl LinuxHelloManager {
fn check_tpm_available(&self) -> bool {
// Check if TPM device exists
std::path::Path::new("/dev/tpm0").exists()
|| std::path::Path::new("/dev/tpmrm0").exists()
std::path::Path::new("/dev/tpm0").exists() || std::path::Path::new("/dev/tpmrm0").exists()
}
fn count_enrolled_users(&self) -> u32 {
let store = TemplateStore::new(TemplateStore::default_path());
// Use list_users() to count enrolled users
store.list_users().map(|users| users.len() as u32).unwrap_or(0)
store
.list_users()
.map(|users| users.len() as u32)
.unwrap_or(0)
}
}
@@ -151,7 +153,7 @@ impl LinuxHelloManager {
let active = self.enrollment_active.read().await;
if active.is_some() {
return Err(zbus::fdo::Error::Failed(
"Enrollment already in progress".to_string()
"Enrollment already in progress".to_string(),
));
}
}
@@ -201,12 +203,15 @@ impl LinuxHelloManager {
&user_owned,
progress,
&format!("Capturing frame {}/{}", i, frame_count),
).await;
)
.await;
}
}
// Perform actual enrollment
let result = auth_service.enroll(&user_owned, &label_owned, frame_count).await;
let result = auth_service
.enroll(&user_owned, &label_owned, frame_count)
.await;
// Clear enrollment state
{
@@ -227,7 +232,8 @@ impl LinuxHelloManager {
&user_owned,
true,
"Enrollment successful",
).await;
)
.await;
}
Err(e) => {
let _ = LinuxHelloManager::enrollment_complete(
@@ -235,12 +241,14 @@ impl LinuxHelloManager {
&user_owned,
false,
&format!("Enrollment failed: {}", e),
).await;
)
.await;
let _ = LinuxHelloManager::error(
iface_ref.signal_context(),
"enrollment_failed",
&e.to_string(),
).await;
)
.await;
}
}
}
@@ -256,7 +264,7 @@ impl LinuxHelloManager {
let mut active = self.enrollment_active.write().await;
if active.is_none() {
return Err(zbus::fdo::Error::Failed(
"No enrollment in progress".to_string()
"No enrollment in progress".to_string(),
));
}
@@ -273,11 +281,7 @@ impl LinuxHelloManager {
}
/// Remove a specific template or all templates for a user
async fn remove_template(
&self,
user: &str,
label: &str,
) -> zbus::fdo::Result<()> {
async fn remove_template(&self, user: &str, label: &str) -> zbus::fdo::Result<()> {
tracing::info!("D-Bus: RemoveTemplate for user: {}, label: {}", user, label);
let store = TemplateStore::new(TemplateStore::default_path());
@@ -355,11 +359,7 @@ impl LinuxHelloManager {
/// Emitted when an error occurs
#[zbus(signal)]
async fn error(
ctx: &SignalContext<'_>,
code: &str,
message: &str,
) -> zbus::Result<()>;
async fn error(ctx: &SignalContext<'_>, code: &str, message: &str) -> zbus::Result<()>;
}
#[cfg(test)]

View File

@@ -198,11 +198,13 @@ pub struct SimpleFaceDetector {
impl SimpleFaceDetector {
/// Create a new simple face detector
///
///
/// Public API - used for testing and placeholder implementation
#[allow(dead_code)] // Public API, used in tests
pub fn new(confidence_threshold: f32) -> Self {
Self { confidence_threshold }
Self {
confidence_threshold,
}
}
}

View File

@@ -12,9 +12,15 @@
//! - Similar faces have embeddings with small distances
//! - Different faces have embeddings with large distances
//!
//! # Implementations
//!
//! - [`LbphEmbeddingExtractor`] - Local Binary Pattern Histogram (LBPH) extractor
//!   producing identity-discriminative embeddings (~90% accuracy)
//! - [`PlaceholderEmbeddingExtractor`] - Test-only extractor using image statistics
//!
//! # Embedding Properties
//!
//! - **Dimension**: Typically 128 (MobileFaceNet) or 512 (ArcFace)
//! - **Dimension**: 3776 for LBPH (8x8 grid with 59 uniform LBP bins)
//! - **Normalized**: Embeddings have unit length (L2 norm = 1)
//! - **Metric**: Use cosine similarity or Euclidean distance for comparison
//!
@@ -31,18 +37,17 @@
//!
//! ```rust
//! use linux_hello_daemon::{
//! EmbeddingExtractor, PlaceholderEmbeddingExtractor,
//! EmbeddingExtractor, LbphEmbeddingExtractor,
//! cosine_similarity, euclidean_distance,
//! };
//! use image::GrayImage;
//!
//! // Create an extractor
//! let extractor = PlaceholderEmbeddingExtractor::new(128);
//! // Create an LBPH extractor (recommended for production)
//! let extractor = LbphEmbeddingExtractor::default();
//!
//! // Extract embedding from a face image
//! let face = GrayImage::new(112, 112);
//! let embedding = extractor.extract(&face).unwrap();
//! assert_eq!(embedding.len(), 128);
//!
//! // Compare embeddings
//! let same_embedding = embedding.clone();
@@ -50,8 +55,8 @@
//! assert!((similarity - 1.0).abs() < 0.01); // Identical vectors
//! ```
use linux_hello_common::Result;
use image::GrayImage;
use linux_hello_common::Result;
/// Trait for face embedding extraction backends.
///
@@ -94,6 +99,277 @@ pub trait EmbeddingExtractor {
fn extract(&self, face_image: &GrayImage) -> Result<Vec<f32>>;
}
// ============================================================================
// LBPH (Local Binary Pattern Histograms) Embedding Extractor
// ============================================================================
/// Number of histogram bins for 8-neighbor LBP: 58 uniform patterns plus one catch-all bin for non-uniform patterns
const UNIFORM_BINS: usize = 59;
/// Lookup table mapping 256 LBP codes to uniform pattern bins (0-58)
/// Pre-computed for efficiency
static UNIFORM_LUT: [u8; 256] = compute_uniform_lut();
/// Compute the uniform LBP lookup table at compile time
const fn compute_uniform_lut() -> [u8; 256] {
let mut lut = [58u8; 256]; // Default to non-uniform bin (58)
let mut bin = 0u8;
let mut i = 0u8;
loop {
if is_uniform_pattern(i) {
lut[i as usize] = bin;
bin += 1;
}
if i == 255 {
break;
}
i += 1;
}
lut
}
/// Check if an LBP pattern is uniform (at most 2 bitwise transitions)
const fn is_uniform_pattern(pattern: u8) -> bool {
// Count transitions between adjacent bits (including wrap-around)
let mut transitions = 0u8;
let mut prev_bit = (pattern >> 7) & 1;
let mut j = 0u8;
loop {
let curr_bit = (pattern >> j) & 1;
if curr_bit != prev_bit {
transitions += 1;
}
prev_bit = curr_bit;
if j == 7 {
break;
}
j += 1;
}
transitions <= 2
}
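The 59-bin layout follows directly from this predicate: of the 256 possible 8-bit codes, exactly 58 are uniform (P·(P−1)+2 for P = 8 neighbors), and every non-uniform code shares one catch-all bin. A standalone sketch confirming the count:

```rust
// Same transition-counting rule as is_uniform_pattern, written with a loop.
fn is_uniform(pattern: u8) -> bool {
    let mut transitions = 0;
    let mut prev = (pattern >> 7) & 1; // wrap-around: compare bit 0 against bit 7
    for j in 0..8 {
        let cur = (pattern >> j) & 1;
        if cur != prev {
            transitions += 1;
        }
        prev = cur;
    }
    transitions <= 2
}

fn main() {
    let uniform = (0u16..256).filter(|&p| is_uniform(p as u8)).count();
    // 58 uniform patterns + 1 shared non-uniform bin = 59 bins per cell.
    assert_eq!(uniform, 58);
    println!("{} uniform patterns", uniform);
}
```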
/// LBPH (Local Binary Pattern Histograms) face embedding extractor.
///
/// This extractor creates identity-discriminative embeddings using the LBPH algorithm:
///
/// 1. **LBP Computation**: For each pixel, compare with 8 neighbors to create a binary code
/// 2. **Uniform Patterns**: Use only uniform LBP patterns (59 bins) for robustness
/// 3. **Spatial Histograms**: Divide face into grid cells and compute histogram per cell
/// 4. **Feature Vector**: Concatenate all cell histograms into the final embedding
///
/// # Algorithm
///
/// LBP encodes local texture by comparing each pixel with its neighbors:
/// - If neighbor >= center: bit = 1
/// - If neighbor < center: bit = 0
///
/// The 8 bits form a pattern (0-255). "Uniform" patterns with at most 2 bitwise
/// transitions are more stable and meaningful for face recognition.
///
/// # Accuracy
///
/// Expected ~90-92% accuracy under controlled conditions:
/// - Consistent lighting (IR camera preferred)
/// - Front-facing pose
/// - Similar enrollment/authentication conditions
#[derive(Clone, Debug)]
pub struct LbphEmbeddingExtractor {
/// Number of horizontal grid cells (default: 8)
pub grid_x: usize,
/// Number of vertical grid cells (default: 8)
pub grid_y: usize,
/// Standard face size for consistent processing
pub face_size: u32,
}
impl Default for LbphEmbeddingExtractor {
fn default() -> Self {
Self {
grid_x: 8,
grid_y: 8,
face_size: 128,
}
}
}
impl LbphEmbeddingExtractor {
/// Create a new LBPH extractor with custom grid size
pub fn new(grid_x: usize, grid_y: usize) -> Self {
Self {
grid_x,
grid_y,
face_size: 128,
}
}
/// Get the output embedding dimension
pub fn dimension(&self) -> usize {
self.grid_x * self.grid_y * UNIFORM_BINS
}
/// Compute LBP code for a single pixel
#[inline]
fn compute_lbp(pixels: &[u8], width: u32, x: u32, y: u32) -> u8 {
let idx = |dx: i32, dy: i32| -> usize {
((y as i32 + dy) as u32 * width + (x as i32 + dx) as u32) as usize
};
let center = pixels[idx(0, 0)];
let mut code: u8 = 0;
// 8-neighbor LBP pattern (clockwise from top-left)
// Bit positions: 0 1 2
//                7 C 3
//                6 5 4
let neighbors = [
(-1, -1),
(0, -1),
(1, -1), // top row
(1, 0), // right
(1, 1),
(0, 1),
(-1, 1), // bottom row
(-1, 0), // left
];
for (bit, &(dx, dy)) in neighbors.iter().enumerate() {
if pixels[idx(dx, dy)] >= center {
code |= 1 << bit;
}
}
code
}
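To make the neighbor ordering concrete, the same comparison rule can be applied to a single 3x3 patch with plain arrays (`lbp_center` and its index table are illustrative helpers, not part of the crate):

```rust
// 3x3 grayscale patch in row-major order; compute the LBP code of the center pixel.
fn lbp_center(patch: &[u8; 9]) -> u8 {
    let center = patch[4];
    // Row-major indices visited clockwise from the top-left corner,
    // matching compute_lbp's neighbor order (bits 0..7).
    let order = [0usize, 1, 2, 5, 8, 7, 6, 3];
    let mut code = 0u8;
    for (bit, &i) in order.iter().enumerate() {
        if patch[i] >= center {
            code |= 1 << bit;
        }
    }
    code
}

fn main() {
    let patch = [10u8, 20, 30, 40, 50, 60, 70, 80, 90];
    let code = lbp_center(&patch);
    // Neighbors 60, 90, 80, 70 (bits 3..6) are >= the center value 50.
    assert_eq!(code, 0b0111_1000); // 120, a uniform pattern (2 transitions)
    println!("LBP code = {}", code);
}
```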
/// Compute histogram for a single grid cell
fn compute_cell_histogram(
&self,
pixels: &[u8],
width: u32,
height: u32,
cell_x: usize,
cell_y: usize,
) -> [f32; UNIFORM_BINS] {
let mut histogram = [0u32; UNIFORM_BINS];
let cell_width = width as usize / self.grid_x;
let cell_height = height as usize / self.grid_y;
let start_x = (cell_x * cell_width).max(1) as u32;
let end_x = ((cell_x + 1) * cell_width).min(width as usize - 1) as u32;
let start_y = (cell_y * cell_height).max(1) as u32;
let end_y = ((cell_y + 1) * cell_height).min(height as usize - 1) as u32;
let mut count = 0u32;
for y in start_y..end_y {
for x in start_x..end_x {
let lbp = Self::compute_lbp(pixels, width, x, y);
let bin = UNIFORM_LUT[lbp as usize] as usize;
histogram[bin] += 1;
count += 1;
}
}
// Normalize histogram
let mut normalized = [0.0f32; UNIFORM_BINS];
if count > 0 {
let scale = 1.0 / count as f32;
for (i, &h) in histogram.iter().enumerate() {
normalized[i] = h as f32 * scale;
}
}
normalized
}
/// Resize image to standard size using bilinear interpolation
fn resize_image(src: &GrayImage, new_width: u32, new_height: u32) -> Vec<u8> {
let (src_w, src_h) = src.dimensions();
let src_pixels = src.as_raw();
let mut dst = vec![0u8; (new_width * new_height) as usize];
let x_ratio = src_w as f32 / new_width as f32;
let y_ratio = src_h as f32 / new_height as f32;
for y in 0..new_height {
for x in 0..new_width {
let src_x = x as f32 * x_ratio;
let src_y = y as f32 * y_ratio;
let x0 = src_x.floor() as u32;
let y0 = src_y.floor() as u32;
let x1 = (x0 + 1).min(src_w - 1);
let y1 = (y0 + 1).min(src_h - 1);
let x_frac = src_x - x0 as f32;
let y_frac = src_y - y0 as f32;
let idx = |px: u32, py: u32| (py * src_w + px) as usize;
let p00 = src_pixels[idx(x0, y0)] as f32;
let p10 = src_pixels[idx(x1, y0)] as f32;
let p01 = src_pixels[idx(x0, y1)] as f32;
let p11 = src_pixels[idx(x1, y1)] as f32;
// Bilinear interpolation
let top = p00 * (1.0 - x_frac) + p10 * x_frac;
let bottom = p01 * (1.0 - x_frac) + p11 * x_frac;
let value = top * (1.0 - y_frac) + bottom * y_frac;
dst[(y * new_width + x) as usize] = value.round() as u8;
}
}
dst
}
}
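The interpolation step inside `resize_image` can be exercised in isolation. This sketch samples one fractional coordinate from a 2x2 image using the same floor/frac arithmetic (`bilinear` is a hypothetical helper, not a crate function):

```rust
// Sample src (width w, row-major u8 pixels) at fractional coordinates (x, y),
// mirroring resize_image's per-pixel math.
fn bilinear(src: &[u8], w: u32, x: f32, y: f32) -> u8 {
    let h = src.len() as u32 / w;
    let x0 = x.floor() as u32;
    let y0 = y.floor() as u32;
    let x1 = (x0 + 1).min(w - 1);
    let y1 = (y0 + 1).min(h - 1);
    let xf = x - x0 as f32;
    let yf = y - y0 as f32;
    let idx = |px: u32, py: u32| (py * w + px) as usize;
    let top = src[idx(x0, y0)] as f32 * (1.0 - xf) + src[idx(x1, y0)] as f32 * xf;
    let bot = src[idx(x0, y1)] as f32 * (1.0 - xf) + src[idx(x1, y1)] as f32 * xf;
    (top * (1.0 - yf) + bot * yf).round() as u8
}

fn main() {
    // 2x2 image: rows [0, 100] and [200, 255].
    let src = [0u8, 100, 200, 255];
    let v = bilinear(&src, 2, 0.5, 0.5);
    // top = 50, bottom = 227.5, blended = 138.75, rounded to 139.
    assert_eq!(v, 139);
    println!("{}", v);
}
```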
impl EmbeddingExtractor for LbphEmbeddingExtractor {
fn extract(&self, face_image: &GrayImage) -> Result<Vec<f32>> {
let (width, height) = face_image.dimensions();
if width < 8 || height < 8 {
return Err(linux_hello_common::Error::Detection(
"Face image too small for LBPH extraction".to_string(),
));
}
// Resize to standard size if needed
let (pixels, w, h) = if width != self.face_size || height != self.face_size {
let resized = Self::resize_image(face_image, self.face_size, self.face_size);
(resized, self.face_size, self.face_size)
} else {
(face_image.as_raw().clone(), width, height)
};
// Compute histogram for each grid cell
let mut embedding = Vec::with_capacity(self.dimension());
for cell_y in 0..self.grid_y {
for cell_x in 0..self.grid_x {
let histogram = self.compute_cell_histogram(&pixels, w, h, cell_x, cell_y);
embedding.extend_from_slice(&histogram);
}
}
// L2 normalize the embedding
let norm: f32 = embedding.iter().map(|&x| x * x).sum::<f32>().sqrt();
if norm > 0.0 {
for val in &mut embedding {
*val /= norm;
}
}
Ok(embedding)
}
}
/// Placeholder embedding extractor for testing.
///
/// Uses simple image statistics to generate a pseudo-embedding.
@@ -124,7 +400,7 @@ impl EmbeddingExtractor for PlaceholderEmbeddingExtractor {
fn extract(&self, face_image: &GrayImage) -> Result<Vec<f32>> {
let (_width, _height) = face_image.dimensions();
let pixels = face_image.as_raw();
if pixels.is_empty() {
return Err(linux_hello_common::Error::Detection(
"Empty face image".to_string(),
@@ -134,7 +410,7 @@ impl EmbeddingExtractor for PlaceholderEmbeddingExtractor {
// Simple placeholder: compute image statistics and normalize
let sum: f64 = pixels.iter().map(|&p| p as f64).sum();
let mean = sum / pixels.len() as f64;
let variance: f64 = pixels
.iter()
.map(|&p| {
@@ -143,39 +419,39 @@ impl EmbeddingExtractor for PlaceholderEmbeddingExtractor {
})
.sum::<f64>()
/ pixels.len() as f64;
let std_dev = variance.sqrt();
// Create a simple feature vector from statistics
// In production, this would be a neural network output
let mut embedding = Vec::with_capacity(self.dimension);
// Use mean, std_dev, and other statistics as features
embedding.push((mean / 255.0) as f32);
embedding.push((std_dev / 255.0) as f32);
// Add histogram-like features
let mut histogram = [0u32; 16];
for &pixel in pixels {
let bin = (pixel as usize * 16) / 256;
histogram[bin.min(15)] += 1;
}
for &count in &histogram {
embedding.push((count as f32) / (pixels.len() as f32));
}
// Pad or truncate to desired dimension
while embedding.len() < self.dimension {
// Cycle through the 18 base features (mean, std_dev, 16 histogram bins)
let idx = embedding.len() % 18;
embedding.push(embedding[idx] * 0.5);
}
if embedding.len() > self.dimension {
embedding.truncate(self.dimension);
}
// Normalize the embedding vector
let norm: f32 = embedding.iter().map(|&x| x * x).sum::<f32>().sqrt();
if norm > 0.0 {
@@ -183,7 +459,7 @@ impl EmbeddingExtractor for PlaceholderEmbeddingExtractor {
*val /= norm;
}
}
Ok(embedding)
}
}
@@ -223,15 +499,15 @@ pub fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
if a.len() != b.len() {
return 0.0;
}
let dot_product: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
if norm_a == 0.0 || norm_b == 0.0 {
return 0.0;
}
dot_product / (norm_a * norm_b)
}
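Because every extractor L2-normalizes its output, cosine similarity of two stored embeddings reduces to their dot product; the general form above also handles unnormalized inputs. A quick usage sketch with the three landmark cases:

```rust
// Cosine similarity over f32 slices, matching the library function above.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 {
        0.0
    } else {
        dot / (na * nb)
    }
}

fn main() {
    let a = [1.0f32, 0.0];
    assert!((cosine_similarity(&a, &[1.0, 0.0]) - 1.0).abs() < 1e-6); // identical -> 1
    assert!(cosine_similarity(&a, &[0.0, 1.0]).abs() < 1e-6); // orthogonal -> 0
    assert!((cosine_similarity(&a, &[-1.0, 0.0]) + 1.0).abs() < 1e-6); // opposite -> -1
    println!("ok");
}
```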
@@ -264,7 +540,7 @@ pub fn euclidean_distance(a: &[f32], b: &[f32]) -> f32 {
if a.len() != b.len() {
return f32::MAX;
}
let sum_sq_diff: f32 = a
.iter()
.zip(b.iter())
@@ -273,7 +549,7 @@ pub fn euclidean_distance(a: &[f32], b: &[f32]) -> f32 {
diff * diff
})
.sum();
sum_sq_diff.sqrt()
}
@@ -318,9 +594,9 @@ mod tests {
let extractor = PlaceholderEmbeddingExtractor::new(128);
let img = GrayImage::new(100, 100);
let embedding = extractor.extract(&img).unwrap();
assert_eq!(embedding.len(), 128);
// Check normalization
let norm: f32 = embedding.iter().map(|&x| x * x).sum::<f32>().sqrt();
assert!((norm - 1.0).abs() < 0.01 || norm < 0.01); // Should be normalized or near-zero
@@ -331,7 +607,7 @@ mod tests {
let a = vec![1.0, 0.0, 0.0];
let b = vec![1.0, 0.0, 0.0];
assert!((cosine_similarity(&a, &b) - 1.0).abs() < 0.001);
let c = vec![0.0, 1.0, 0.0];
assert!((cosine_similarity(&a, &c) - 0.0).abs() < 0.001);
}
@@ -343,4 +619,165 @@ mod tests {
let dist = euclidean_distance(&a, &b);
assert!((dist - 5.0).abs() < 0.001);
}
// ============================================================================
// LBPH Tests
// ============================================================================
#[test]
fn test_uniform_pattern_detection() {
// All zeros is uniform (0 transitions)
assert!(is_uniform_pattern(0b00000000));
// All ones is uniform (0 transitions)
assert!(is_uniform_pattern(0b11111111));
// Single continuous run is uniform (2 transitions)
assert!(is_uniform_pattern(0b00001111));
assert!(is_uniform_pattern(0b11110000));
assert!(is_uniform_pattern(0b00111100));
// Wrap-around pattern is uniform (2 transitions)
assert!(is_uniform_pattern(0b11000011));
// Non-uniform patterns (>2 transitions)
assert!(!is_uniform_pattern(0b10101010));
assert!(!is_uniform_pattern(0b01010101));
assert!(!is_uniform_pattern(0b11001100));
}
#[test]
fn test_uniform_lut_size() {
// Should have 58 uniform patterns + 1 non-uniform bin
let mut uniform_count = 0;
for i in 0..=255u8 {
if is_uniform_pattern(i) {
uniform_count += 1;
}
}
assert_eq!(uniform_count, 58);
}
#[test]
fn test_lbph_extractor_dimension() {
let extractor = LbphEmbeddingExtractor::default();
assert_eq!(extractor.dimension(), 8 * 8 * 59); // 3776
let custom = LbphEmbeddingExtractor::new(4, 4);
assert_eq!(custom.dimension(), 4 * 4 * 59); // 944
}
#[test]
fn test_lbph_extractor_output_size() {
let extractor = LbphEmbeddingExtractor::default();
let img = GrayImage::new(128, 128);
let embedding = extractor.extract(&img).unwrap();
assert_eq!(embedding.len(), extractor.dimension());
}
#[test]
fn test_lbph_extractor_normalization() {
let extractor = LbphEmbeddingExtractor::default();
// Create image with varying pixel values
let mut img = GrayImage::new(128, 128);
for (i, pixel) in img.pixels_mut().enumerate() {
pixel.0[0] = ((i * 7) % 256) as u8;
}
let embedding = extractor.extract(&img).unwrap();
// Check L2 normalization
let norm: f32 = embedding.iter().map(|&x| x * x).sum::<f32>().sqrt();
assert!(
(norm - 1.0).abs() < 0.01,
"Embedding should be L2 normalized, got {}",
norm
);
}
#[test]
fn test_lbph_identical_images_high_similarity() {
let extractor = LbphEmbeddingExtractor::default();
// Create an image with texture
let mut img = GrayImage::new(128, 128);
for y in 0..128 {
for x in 0..128 {
let val = ((x + y * 2) % 256) as u8;
img.put_pixel(x, y, image::Luma([val]));
}
}
let emb1 = extractor.extract(&img).unwrap();
let emb2 = extractor.extract(&img).unwrap();
let similarity = cosine_similarity(&emb1, &emb2);
assert!(
(similarity - 1.0).abs() < 0.001,
"Same image should have similarity ~1.0"
);
}
#[test]
fn test_lbph_different_images_lower_similarity() {
let extractor = LbphEmbeddingExtractor::default();
// Image 1: horizontal gradient
let mut img1 = GrayImage::new(128, 128);
for y in 0..128 {
for x in 0..128 {
img1.put_pixel(x, y, image::Luma([(x * 2) as u8]));
}
}
// Image 2: vertical gradient
let mut img2 = GrayImage::new(128, 128);
for y in 0..128 {
for x in 0..128 {
img2.put_pixel(x, y, image::Luma([(y * 2) as u8]));
}
}
let emb1 = extractor.extract(&img1).unwrap();
let emb2 = extractor.extract(&img2).unwrap();
let similarity = cosine_similarity(&emb1, &emb2);
// Different patterns should have lower similarity
assert!(
similarity < 0.95,
"Different images should have similarity < 0.95, got {}",
similarity
);
}
#[test]
fn test_lbph_resizing() {
let extractor = LbphEmbeddingExtractor::default();
// Create a larger image
let mut large_img = GrayImage::new(256, 256);
for (i, pixel) in large_img.pixels_mut().enumerate() {
pixel.0[0] = ((i * 3) % 256) as u8;
}
// Create the same pattern at standard size
let mut std_img = GrayImage::new(128, 128);
for (i, pixel) in std_img.pixels_mut().enumerate() {
pixel.0[0] = ((i * 3 * 4) % 256) as u8; // Scaled pattern
}
let emb_large = extractor.extract(&large_img).unwrap();
let emb_std = extractor.extract(&std_img).unwrap();
// Both should produce valid embeddings of same size
assert_eq!(emb_large.len(), emb_std.len());
assert_eq!(emb_large.len(), extractor.dimension());
}
#[test]
fn test_lbph_small_image_error() {
let extractor = LbphEmbeddingExtractor::default();
let tiny_img = GrayImage::new(4, 4);
let result = extractor.extract(&tiny_img);
assert!(result.is_err(), "Should fail for images smaller than 8x8");
}
}

@@ -212,7 +212,10 @@ impl RateLimiter {
let entry = self.connections.entry(uid).or_insert((0, now, None));
// Apply exponential backoff on failures
let current_backoff = entry.2.map(|b| b.duration_since(now)).unwrap_or(Duration::ZERO);
let current_backoff = entry
.2
.map(|b| b.duration_since(now))
.unwrap_or(Duration::ZERO);
let new_backoff = if current_backoff == Duration::ZERO {
RATE_LIMIT_BACKOFF
} else {
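The backoff policy reads as: start at `RATE_LIMIT_BACKOFF` on the first failure, then grow it on each further failure. The constants' values and the growth factor are not shown in this hunk, so the sketch below assumes a 1 s base, doubling, and a 60 s cap purely for illustration:

```rust
use std::time::Duration;

const RATE_LIMIT_BACKOFF: Duration = Duration::from_secs(1); // assumed base value
const MAX_BACKOFF: Duration = Duration::from_secs(60); // assumed cap

// One step of an exponential-backoff schedule: base on first failure,
// then doubled and clamped to the cap.
fn next_backoff(current: Duration) -> Duration {
    if current == Duration::ZERO {
        RATE_LIMIT_BACKOFF
    } else {
        (current * 2).min(MAX_BACKOFF)
    }
}

fn main() {
    let mut b = Duration::ZERO;
    for _ in 0..8 {
        b = next_backoff(b);
        print!("{}s ", b.as_secs());
    }
    println!(); // prints: 1s 2s 4s 8s 16s 32s 60s 60s
}
```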
@@ -243,9 +246,7 @@ impl Default for RateLimiter {
#[serde(tag = "action")]
pub enum IpcRequest {
#[serde(rename = "authenticate")]
Authenticate {
user: String,
},
Authenticate { user: String },
#[serde(rename = "enroll")]
Enroll {
user: String,
@@ -254,9 +255,7 @@ pub enum IpcRequest {
frame_count: u32,
},
#[serde(rename = "list")]
List {
user: String,
},
List { user: String },
#[serde(rename = "remove")]
Remove {
user: String,
@@ -284,16 +283,43 @@ pub struct IpcResponse {
}
/// Authentication handler type
pub type AuthHandler = Arc<dyn Fn(String) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<bool>> + Send>> + Send + Sync>;
pub type AuthHandler = Arc<
dyn Fn(String) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<bool>> + Send>>
+ Send
+ Sync,
>;
/// Enrollment handler type
pub type EnrollHandler = Arc<dyn Fn(String, String, u32) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>> + Send + Sync>;
pub type EnrollHandler = Arc<
dyn Fn(
String,
String,
u32,
) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync,
>;
/// List handler type
pub type ListHandler = Arc<dyn Fn(String) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<Vec<String>>> + Send>> + Send + Sync>;
pub type ListHandler = Arc<
dyn Fn(
String,
)
-> std::pin::Pin<Box<dyn std::future::Future<Output = Result<Vec<String>>> + Send>>
+ Send
+ Sync,
>;
/// Remove handler type
pub type RemoveHandler = Arc<dyn Fn(String, Option<String>, bool) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>> + Send + Sync>;
pub type RemoveHandler = Arc<
dyn Fn(
String,
Option<String>,
bool,
) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync,
>;
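All four aliases share one shape: an `Arc`'d closure returning a boxed, pinned future. The sketch below shows how a caller-supplied async closure gets wrapped, using stand-in types (`AuthResult` here is a plain `Result<bool, String>`, not the crate's `Result`) and a minimal no-op waker in place of a real executor:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

type AuthResult = Result<bool, String>; // stand-in for the crate's Result<bool>
type AuthHandler =
    Arc<dyn Fn(String) -> Pin<Box<dyn Future<Output = AuthResult> + Send>> + Send + Sync>;

// Minimal no-op waker so a ready future can be polled without an executor.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    unsafe { Waker::from_raw(RAW) }
}

fn main() {
    // Wrap an async closure the same way set_auth_handler does: Box::pin the future.
    let handler: AuthHandler =
        Arc::new(|user: String| -> Pin<Box<dyn Future<Output = AuthResult> + Send>> {
            Box::pin(async move { Ok(user == "alice") })
        });

    let mut fut = handler("alice".to_string());
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(Ok(ok)) => println!("authenticated: {}", ok),
        _ => println!("pending or error"),
    }
}
```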
/// IPC server for handling PAM authentication requests
pub struct IpcServer {
@@ -333,7 +359,9 @@ impl IpcServer {
F: Fn(String, String, u32) -> Fut + Send + Sync + 'static,
Fut: std::future::Future<Output = Result<()>> + Send + 'static,
{
self.enroll_handler = Some(Arc::new(move |user, label, count| Box::pin(handler(user, label, count))));
self.enroll_handler = Some(Arc::new(move |user, label, count| {
Box::pin(handler(user, label, count))
}));
}
/// Set the list handler
@@ -351,7 +379,9 @@ impl IpcServer {
F: Fn(String, Option<String>, bool) -> Fut + Send + Sync + 'static,
Fut: std::future::Future<Output = Result<()>> + Send + 'static,
{
self.remove_handler = Some(Arc::new(move |user, label, all| Box::pin(handler(user, label, all))));
self.remove_handler = Some(Arc::new(move |user, label, all| {
Box::pin(handler(user, label, all))
}));
}
/// Start the IPC server
@@ -396,7 +426,10 @@ impl IpcServer {
{
let mut rate_limiter = self.rate_limiter.lock().await;
if let Err(msg) = rate_limiter.check_rate_limit(peer_creds.uid) {
warn!("Rate limited connection from UID {}: {}", peer_creds.uid, msg);
warn!(
"Rate limited connection from UID {}: {}",
peer_creds.uid, msg
);
// Send rate limit response and close connection
let _ = Self::send_error_response(stream, &msg).await;
continue;
@@ -418,7 +451,9 @@ impl IpcServer {
remove_handler,
peer_creds,
rate_limiter,
).await {
)
.await
{
warn!("Error handling client: {}", e);
}
});
@@ -438,8 +473,8 @@ impl IpcServer {
confidence: None,
templates: None,
};
let response_json = serde_json::to_string(&response)
.map_err(|e| Error::Serialization(e.to_string()))?;
let response_json =
serde_json::to_string(&response).map_err(|e| Error::Serialization(e.to_string()))?;
stream.write_all(response_json.as_bytes()).await?;
stream.flush().await?;
Ok(())
@@ -539,7 +574,11 @@ impl IpcServer {
},
}
}
IpcRequest::Enroll { user, label, frame_count } => {
IpcRequest::Enroll {
user,
label,
frame_count,
} => {
// SECURITY: Authorization check for enrollment
// Only root or the user themselves can enroll faces
if !peer_creds.can_operate_on_user(&user) {
@@ -558,28 +597,29 @@ impl IpcServer {
}
} else {
match enroll_handler {
Some(ref h) => {
match h(user.clone(), label.clone(), frame_count).await {
Ok(()) => {
info!(
"Enrollment successful for user '{}' by UID {}",
user, peer_creds.uid
);
IpcResponse {
success: true,
message: Some(format!("Enrollment successful for user: {}", user)),
confidence: None,
templates: None,
}
}
Err(e) => IpcResponse {
success: false,
message: Some(format!("Enrollment failed: {}", e)),
Some(ref h) => match h(user.clone(), label.clone(), frame_count).await {
Ok(()) => {
info!(
"Enrollment successful for user '{}' by UID {}",
user, peer_creds.uid
);
IpcResponse {
success: true,
message: Some(format!(
"Enrollment successful for user: {}",
user
)),
confidence: None,
templates: None,
},
}
}
}
Err(e) => IpcResponse {
success: false,
message: Some(format!("Enrollment failed: {}", e)),
confidence: None,
templates: None,
},
},
None => IpcResponse {
success: false,
message: Some("Enrollment handler not set".to_string()),
@@ -608,22 +648,20 @@ impl IpcServer {
}
} else {
match list_handler {
Some(ref h) => {
match h(user).await {
Ok(templates) => IpcResponse {
success: true,
message: None,
confidence: None,
templates: Some(templates),
},
Err(e) => IpcResponse {
success: false,
message: Some(format!("Error: {}", e)),
confidence: None,
templates: None,
},
}
}
Some(ref h) => match h(user).await {
Ok(templates) => IpcResponse {
success: true,
message: None,
confidence: None,
templates: Some(templates),
},
Err(e) => IpcResponse {
success: false,
message: Some(format!("Error: {}", e)),
confidence: None,
templates: None,
},
},
None => IpcResponse {
success: false,
message: Some("List handler not set".to_string()),
@@ -652,28 +690,26 @@ impl IpcServer {
}
} else {
match remove_handler {
Some(ref h) => {
match h(user.clone(), label, all).await {
Ok(()) => {
info!(
"Templates removed for user '{}' by UID {}",
user, peer_creds.uid
);
IpcResponse {
success: true,
message: Some(format!("Templates removed for user: {}", user)),
confidence: None,
templates: None,
}
}
Err(e) => IpcResponse {
success: false,
message: Some(format!("Error: {}", e)),
Some(ref h) => match h(user.clone(), label, all).await {
Ok(()) => {
info!(
"Templates removed for user '{}' by UID {}",
user, peer_creds.uid
);
IpcResponse {
success: true,
message: Some(format!("Templates removed for user: {}", user)),
confidence: None,
templates: None,
},
}
}
}
Err(e) => IpcResponse {
success: false,
message: Some(format!("Error: {}", e)),
confidence: None,
templates: None,
},
},
None => IpcResponse {
success: false,
message: Some("Remove handler not set".to_string()),
@@ -691,8 +727,8 @@ impl IpcServer {
},
};
let response_json = serde_json::to_string(&response)
.map_err(|e| Error::Serialization(e.to_string()))?;
let response_json =
serde_json::to_string(&response).map_err(|e| Error::Serialization(e.to_string()))?;
stream.write_all(response_json.as_bytes()).await?;
stream.flush().await?;
@@ -736,7 +772,8 @@ impl IpcClient {
pub async fn authenticate(&self, user: &str) -> Result<IpcResponse> {
self.send_request(&IpcRequest::Authenticate {
user: user.to_string(),
}).await
})
.await
}
/// Enroll a user
@@ -745,14 +782,16 @@ impl IpcClient {
user: user.to_string(),
label: label.to_string(),
frame_count,
}).await
})
.await
}
/// List templates for a user
pub async fn list(&self, user: &str) -> Result<IpcResponse> {
self.send_request(&IpcRequest::List {
user: user.to_string(),
}).await
})
.await
}
/// Remove templates for a user
@@ -761,7 +800,8 @@ impl IpcClient {
user: user.to_string(),
label: label.map(|l| l.to_string()),
all,
}).await
})
.await
}
/// Send a request to the daemon
@@ -771,29 +811,32 @@ impl IpcClient {
let mut stream = timeout(
Duration::from_secs(5),
UnixStream::connect(&self.socket_path)
).await
.map_err(|_| Error::Io(std::io::Error::new(
UnixStream::connect(&self.socket_path),
)
.await
.map_err(|_| {
Error::Io(std::io::Error::new(
std::io::ErrorKind::TimedOut,
"Connection timeout",
)))?
.map_err(|e| Error::Io(e))?;
))
})?
.map_err(|e| Error::Io(e))?;
let request_json = serde_json::to_string(request)
.map_err(|e| Error::Serialization(e.to_string()))?;
let request_json =
serde_json::to_string(request).map_err(|e| Error::Serialization(e.to_string()))?;
stream.write_all(request_json.as_bytes()).await?;
stream.flush().await?;
let mut buffer = vec![0u8; 4096];
let n = timeout(
Duration::from_secs(30),
stream.read(&mut buffer)
).await
.map_err(|_| Error::Io(std::io::Error::new(
std::io::ErrorKind::TimedOut,
"Read timeout",
)))?
let n = timeout(Duration::from_secs(30), stream.read(&mut buffer))
.await
.map_err(|_| {
Error::Io(std::io::Error::new(
std::io::ErrorKind::TimedOut,
"Read timeout",
))
})?
.map_err(|e| Error::Io(e))?;
if n == 0 {

@@ -134,12 +134,12 @@ pub use secure_template_store::SecureTemplateStore;
pub use camera::{CameraInfo, Frame, PixelFormat};
// Re-export detection types
pub use detection::{FaceDetection, FaceDetect, detect_face_simple, SimpleFaceDetector};
pub use detection::{detect_face_simple, FaceDetect, FaceDetection, SimpleFaceDetector};
// Re-export embedding types and functions
pub use embedding::{
cosine_similarity, euclidean_distance, EmbeddingExtractor, PlaceholderEmbeddingExtractor,
similarity_to_distance,
cosine_similarity, euclidean_distance, similarity_to_distance, EmbeddingExtractor,
LbphEmbeddingExtractor, PlaceholderEmbeddingExtractor,
};
// Re-export matching types and functions
@@ -149,7 +149,9 @@ pub use matching::{average_embeddings, match_template, MatchResult};
pub use ipc::{IpcClient, IpcRequest, IpcResponse, IpcServer};
// Re-export D-Bus types
pub use dbus_server::{DbusServer, run_dbus_service, check_system_bus_available, SERVICE_NAME, OBJECT_PATH};
pub use dbus_server::{
check_system_bus_available, run_dbus_service, DbusServer, OBJECT_PATH, SERVICE_NAME,
};
pub use dbus_service::LinuxHelloManager;
// Linux-specific camera exports
@@ -159,7 +161,6 @@ pub use camera::{enumerate_cameras, Camera};
// ONNX model exports (when feature enabled)
#[cfg(feature = "onnx")]
pub use onnx::{
OnnxFaceDetector, OnnxEmbeddingExtractor, FaceAligner,
OnnxPipeline, OnnxModelConfig, DetectionWithLandmarks,
REFERENCE_LANDMARKS_112,
DetectionWithLandmarks, FaceAligner, OnnxEmbeddingExtractor, OnnxFaceDetector, OnnxModelConfig,
OnnxPipeline, REFERENCE_LANDMARKS_112,
};

@@ -12,7 +12,7 @@ mod detection;
use linux_hello_common::{Config, Result, TemplateStore};
use linux_hello_daemon::auth::AuthService;
use linux_hello_daemon::dbus_server::{DbusServer, check_system_bus_available};
use linux_hello_daemon::dbus_server::{check_system_bus_available, DbusServer};
use linux_hello_daemon::ipc::IpcServer;
use tracing::{error, info, warn, Level};
use tracing_subscriber::FmtSubscriber;
@@ -71,39 +71,33 @@ async fn main() -> Result<()> {
let auth_service_for_auth = auth_service.clone();
ipc_server.set_auth_handler(move |user| {
let auth_service = auth_service_for_auth.clone();
async move {
auth_service.authenticate(&user).await
}
async move { auth_service.authenticate(&user).await }
});
// Set enrollment handler
let auth_service_for_enroll = auth_service.clone();
ipc_server.set_enroll_handler(move |user, label, frame_count| {
let auth_service = auth_service_for_enroll.clone();
async move {
auth_service.enroll(&user, &label, frame_count).await
}
async move { auth_service.enroll(&user, &label, frame_count).await }
});
// Set list handler
ipc_server.set_list_handler(move |user| {
async move {
let store = TemplateStore::new(TemplateStore::default_path());
store.list_templates(&user)
}
ipc_server.set_list_handler(move |user| async move {
let store = TemplateStore::new(TemplateStore::default_path());
store.list_templates(&user)
});
// Set remove handler
ipc_server.set_remove_handler(move |user, label, all| {
async move {
let store = TemplateStore::new(TemplateStore::default_path());
if all {
store.remove_all(&user)
} else if let Some(l) = label {
store.remove(&user, &l)
} else {
Err(linux_hello_common::Error::Config("No label specified".to_string()))
}
ipc_server.set_remove_handler(move |user, label, all| async move {
let store = TemplateStore::new(TemplateStore::default_path());
if all {
store.remove_all(&user)
} else if let Some(l) = label {
store.remove(&user, &l)
} else {
Err(linux_hello_common::Error::Config(
"No label specified".to_string(),
))
}
});
@@ -112,7 +106,10 @@ async fn main() -> Result<()> {
let mut dbus_server = DbusServer::new();
if dbus_enabled {
match dbus_server.start(auth_service.clone(), config.clone()).await {
match dbus_server
.start(auth_service.clone(), config.clone())
.await
{
Ok(()) => {
info!("D-Bus server started successfully");
info!(" Service: org.linuxhello.Daemon");

@@ -45,8 +45,8 @@
//! assert!(result.best_similarity > 0.99);
//! ```
use linux_hello_common::{FaceTemplate, Result};
use crate::embedding::{cosine_similarity, similarity_to_distance};
use linux_hello_common::{FaceTemplate, Result};
/// Result of matching a probe embedding against stored templates.
///
@@ -125,7 +125,7 @@ pub fn match_template(
for template in templates {
let similarity = cosine_similarity(embedding, &template.embedding);
if similarity > best_similarity {
best_similarity = similarity;
best_label = Some(template.label.clone());
@@ -186,7 +186,7 @@ pub fn average_embeddings(embeddings: &[Vec<f32>]) -> Result<Vec<f32>> {
}
let dimension = embeddings[0].len();
// Verify all embeddings have the same dimension
for emb in embeddings {
if emb.len() != dimension {
@@ -200,7 +200,7 @@ pub fn average_embeddings(embeddings: &[Vec<f32>]) -> Result<Vec<f32>> {
// Average the embeddings
let mut averaged = vec![0.0f32; dimension];
for embedding in embeddings {
for (i, &value) in embedding.iter().enumerate() {
averaged[i] += value;
@@ -266,7 +266,7 @@ mod tests {
let averaged = average_embeddings(&embeddings).unwrap();
assert_eq!(averaged.len(), 3);
// Should be normalized
let norm: f32 = averaged.iter().map(|&x| x * x).sum::<f32>().sqrt();
assert!((norm - 1.0).abs() < 0.01);
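`average_embeddings` thus has two stages: element-wise mean, then L2 re-normalization so the result can be compared with `cosine_similarity` like any single embedding. A self-contained sketch of that behavior (the dimension check and error handling are omitted):

```rust
// Element-wise mean of several embeddings, then L2-normalize the result.
fn average_and_normalize(embeddings: &[Vec<f32>]) -> Vec<f32> {
    let dim = embeddings[0].len();
    let mut avg = vec![0.0f32; dim];
    for e in embeddings {
        for (i, &v) in e.iter().enumerate() {
            avg[i] += v;
        }
    }
    let n = embeddings.len() as f32;
    for v in &mut avg {
        *v /= n;
    }
    let norm: f32 = avg.iter().map(|&x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for v in &mut avg {
            *v /= norm;
        }
    }
    avg
}

fn main() {
    let avg = average_and_normalize(&[vec![1.0, 0.0], vec![0.0, 1.0]]);
    // Mean is (0.5, 0.5); after L2 normalization both components are 1/sqrt(2).
    assert!((avg[0] - 0.70710677).abs() < 1e-4);
    assert!((avg[1] - 0.70710677).abs() < 1e-4);
    println!("ok");
}
```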

@@ -21,11 +21,11 @@ use linux_hello_common::{Error, Result};
/// Reference landmark positions for 112x112 aligned face (ArcFace standard)
pub const REFERENCE_LANDMARKS_112: [[f32; 2]; 5] = [
[38.2946, 51.6963], // Left eye center
[73.5318, 51.5014], // Right eye center
[56.0252, 71.7366], // Nose tip
[41.5493, 92.3655], // Left mouth corner
[70.7299, 92.2041], // Right mouth corner
[38.2946, 51.6963], // Left eye center
[73.5318, 51.5014], // Right eye center
[56.0252, 71.7366], // Nose tip
[41.5493, 92.3655], // Left mouth corner
[70.7299, 92.2041], // Right mouth corner
];
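These five points fix the canonical face geometry: the eye centers sit about 35.24 px apart, and the alignment step chooses a similarity transform mapping detected landmarks onto them (the scale interpretation is an inference, not stated in this file). A small check of that constant:

```rust
// ArcFace 112x112 reference eye centers (from the array above).
const LEFT_EYE: [f32; 2] = [38.2946, 51.6963];
const RIGHT_EYE: [f32; 2] = [73.5318, 51.5014];

fn main() {
    let dx = RIGHT_EYE[0] - LEFT_EYE[0];
    let dy = RIGHT_EYE[1] - LEFT_EYE[1];
    let dist = (dx * dx + dy * dy).sqrt();
    // Every aligned face has ~35.24 px between eye centers.
    assert!((dist - 35.238).abs() < 0.01);
    println!("inter-eye distance: {:.3}", dist);
}
```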
/// Reference landmark positions for 96x96 aligned face (alternative format)
@@ -383,7 +383,11 @@ mod tests {
// Scale should be approximately 1
let scale = transform.scale();
assert!((scale - 1.0).abs() < 0.01, "Scale should be ~1, got {}", scale);
assert!(
(scale - 1.0).abs() < 0.01,
"Scale should be ~1, got {}",
scale
);
// Angle should be approximately 0
let angle = transform.angle();
@@ -397,9 +401,7 @@ mod tests {
// Create a simple test image
let width = 200u32;
let height = 200u32;
let image: Vec<u8> = (0..(width * height))
.map(|i| ((i % 256) as u8))
.collect();
let image: Vec<u8> = (0..(width * height)).map(|i| ((i % 256) as u8)).collect();
let result = aligner.simple_crop(&image, width, height, 50, 50, 100, 100);
assert!(result.is_ok());

@@ -42,8 +42,8 @@
//! }
//! ```
use linux_hello_common::{Error, Result};
use crate::detection::FaceDetection;
use linux_hello_common::{Error, Result};
#[cfg(feature = "onnx")]
use ort::{session::Session, value::TensorRef};
@@ -230,10 +230,8 @@ impl OnnxFaceDetector {
.commit_from_file(model_path.as_ref())
.map_err(|e| Error::Detection(format!("Failed to load ONNX model: {}", e)))?;
let (input_width, input_height) = (
config.detection_input_size.0,
config.detection_input_size.1,
);
let (input_width, input_height) =
(config.detection_input_size.0, config.detection_input_size.1);
let anchors = Self::generate_anchors(input_width, input_height, &AnchorConfig::default());
@@ -331,13 +329,15 @@ impl OnnxFaceDetector {
// Create tensor reference using shape and slice (compatible with ort 2.0 API)
let shape: Vec<i64> = tensor_data.shape().iter().map(|&x| x as i64).collect();
let slice = tensor_data.as_slice()
let slice = tensor_data
.as_slice()
.ok_or_else(|| Error::Detection("Array not contiguous".to_string()))?;
let input_tensor = TensorRef::from_array_view((shape, slice))
.map_err(|e| Error::Detection(format!("Failed to create tensor: {}", e)))?;
// Run inference
let outputs = self.session
let outputs = self
.session
.run(ort::inputs![input_tensor])
.map_err(|e| Error::Detection(format!("Inference failed: {}", e)))?;
@@ -347,9 +347,8 @@ impl OnnxFaceDetector {
drop(outputs); // Explicitly drop to release the session borrow
// Post-process using extracted data
let mut detections = self.decode_detections(
&loc_data, &conf_data, landm_data.as_deref(), width, height
);
let mut detections =
self.decode_detections(&loc_data, &conf_data, landm_data.as_deref(), width, height);
// Apply NMS
detections = self.nms(detections);
@@ -368,7 +367,7 @@ impl OnnxFaceDetector {
) -> Result<Vec<DetectionWithLandmarks>> {
if !self.model_loaded {
return Err(Error::Detection(
"ONNX models not loaded (onnx feature not enabled)".to_string()
"ONNX models not loaded (onnx feature not enabled)".to_string(),
));
}
Ok(vec![])
@@ -380,15 +379,17 @@ impl OnnxFaceDetector {
#[cfg(feature = "onnx")]
fn preprocess(&self, image_data: &[u8], width: u32, height: u32) -> Result<Array4<f32>> {
// Resize image to model input size
let resized = self.resize_bilinear(image_data, width, height, self.input_width, self.input_height);
let resized = self.resize_bilinear(
image_data,
width,
height,
self.input_width,
self.input_height,
);
// Convert to NCHW float32 tensor with normalization
let mut tensor_data = Array4::<f32>::zeros((
1,
3,
self.input_height as usize,
self.input_width as usize,
));
let mut tensor_data =
Array4::<f32>::zeros((1, 3, self.input_height as usize, self.input_width as usize));
for y in 0..self.input_height as usize {
for x in 0..self.input_width as usize {
@@ -412,15 +413,18 @@ impl OnnxFaceDetector {
outputs: &ort::session::SessionOutputs,
) -> Result<(Vec<f32>, Vec<f32>, Option<Vec<f32>>)> {
// Get output tensors - try different naming conventions
let loc = outputs.get("loc")
let loc = outputs
.get("loc")
.or_else(|| outputs.get("bbox"))
.or_else(|| outputs.get("boxes"));
let conf = outputs.get("conf")
let conf = outputs
.get("conf")
.or_else(|| outputs.get("cls"))
.or_else(|| outputs.get("scores"));
let landm = outputs.get("landm")
let landm = outputs
.get("landm")
.or_else(|| outputs.get("landmark"))
.or_else(|| outputs.get("landmarks"));
@@ -438,9 +442,7 @@ impl OnnxFaceDetector {
.try_extract_tensor::<f32>()
.map_err(|e| Error::Detection(format!("Failed to extract conf tensor: {}", e)))?;
let landm_result = landm.map(|l| {
l.try_extract_tensor::<f32>().ok()
}).flatten();
let landm_result = landm.map(|l| l.try_extract_tensor::<f32>().ok()).flatten();
let loc_data: Vec<f32> = loc_slice.to_vec();
let conf_data: Vec<f32> = conf_slice.to_vec();
@@ -592,7 +594,8 @@ impl OnnxFaceDetector {
fn nms(&self, mut detections: Vec<DetectionWithLandmarks>) -> Vec<DetectionWithLandmarks> {
// Sort by confidence (descending)
detections.sort_by(|a, b| {
b.detection.confidence
b.detection
.confidence
.partial_cmp(&a.detection.confidence)
.unwrap_or(std::cmp::Ordering::Equal)
});
@@ -699,7 +702,12 @@ impl OnnxFaceDetector {
/// Detect faces (convenience wrapper around detect_with_landmarks)
///
/// Returns only bounding boxes without landmarks.
pub fn detect(&mut self, image_data: &[u8], width: u32, height: u32) -> Result<Vec<FaceDetection>> {
pub fn detect(
&mut self,
image_data: &[u8],
width: u32,
height: u32,
) -> Result<Vec<FaceDetection>> {
let detections = self.detect_with_landmarks(image_data, width, height)?;
Ok(detections.into_iter().map(|d| d.detection).collect())
}
@@ -734,10 +742,18 @@ mod tests {
#[test]
fn test_iou_calculation() {
let a = FaceDetection {
x: 0.0, y: 0.0, width: 0.5, height: 0.5, confidence: 1.0
x: 0.0,
y: 0.0,
width: 0.5,
height: 0.5,
confidence: 1.0,
};
let b = FaceDetection {
x: 0.25, y: 0.25, width: 0.5, height: 0.5, confidence: 1.0
x: 0.25,
y: 0.25,
width: 0.5,
height: 0.5,
confidence: 1.0,
};
let iou = OnnxFaceDetector::iou(&a, &b);
@@ -751,15 +767,13 @@ mod tests {
fn test_landmarks_to_pixels() {
let det = DetectionWithLandmarks {
detection: FaceDetection {
x: 0.0, y: 0.0, width: 1.0, height: 1.0, confidence: 1.0
x: 0.0,
y: 0.0,
width: 1.0,
height: 1.0,
confidence: 1.0,
},
landmarks: [
[0.5, 0.3],
[0.7, 0.3],
[0.6, 0.5],
[0.4, 0.7],
[0.8, 0.7],
],
landmarks: [[0.5, 0.3], [0.7, 0.3], [0.6, 0.5], [0.4, 0.7], [0.8, 0.7]],
};
let pixels = det.landmarks_to_pixels(100, 100);
@@ -775,7 +789,10 @@ mod tests {
// Check that anchors cover the image
let min_cx = anchors.iter().map(|a| a.cx).fold(f32::INFINITY, f32::min);
let max_cx = anchors.iter().map(|a| a.cx).fold(f32::NEG_INFINITY, f32::max);
let max_cx = anchors
.iter()
.map(|a| a.cx)
.fold(f32::NEG_INFINITY, f32::max);
assert!(min_cx < 50.0);
assert!(max_cx > 590.0);
}
@@ -809,19 +826,31 @@ mod tests {
let detections = vec![
DetectionWithLandmarks {
detection: FaceDetection {
x: 0.0, y: 0.0, width: 0.5, height: 0.5, confidence: 0.9
x: 0.0,
y: 0.0,
width: 0.5,
height: 0.5,
confidence: 0.9,
},
landmarks: [[0.0; 2]; 5],
},
DetectionWithLandmarks {
detection: FaceDetection {
x: 0.1, y: 0.1, width: 0.5, height: 0.5, confidence: 0.8
x: 0.1,
y: 0.1,
width: 0.5,
height: 0.5,
confidence: 0.8,
},
landmarks: [[0.0; 2]; 5],
},
DetectionWithLandmarks {
detection: FaceDetection {
x: 0.8, y: 0.8, width: 0.2, height: 0.2, confidence: 0.7
x: 0.8,
y: 0.8,
width: 0.2,
height: 0.2,
confidence: 0.7,
},
landmarks: [[0.0; 2]; 5],
},


@@ -41,8 +41,8 @@
//! let similarity = cosine_similarity(&embedding1, &embedding2);
//! ```
use linux_hello_common::{Error, Result};
use image::GrayImage;
use linux_hello_common::{Error, Result};
#[cfg(feature = "onnx")]
use ort::{session::Session, value::TensorRef};
@@ -192,13 +192,21 @@ impl OnnxEmbeddingExtractor {
///
/// The image should be an aligned face of the correct input size.
#[cfg(feature = "onnx")]
pub fn extract_from_bytes(&mut self, image_data: &[u8], width: u32, height: u32) -> Result<Vec<f32>> {
pub fn extract_from_bytes(
&mut self,
image_data: &[u8],
width: u32,
height: u32,
) -> Result<Vec<f32>> {
// Validate input size
let expected_pixels = (width * height) as usize;
if image_data.len() != expected_pixels {
return Err(Error::Detection(format!(
"Image size mismatch: expected {}x{}={} pixels, got {}",
width, height, expected_pixels, image_data.len()
width,
height,
expected_pixels,
image_data.len()
)));
}
@@ -207,13 +215,15 @@ impl OnnxEmbeddingExtractor {
// Create tensor reference using shape and slice (compatible with ort 2.0 API)
let shape: Vec<i64> = tensor_data.shape().iter().map(|&x| x as i64).collect();
let slice = tensor_data.as_slice()
let slice = tensor_data
.as_slice()
.ok_or_else(|| Error::Detection("Array not contiguous".to_string()))?;
let input_tensor = TensorRef::from_array_view((shape, slice))
.map_err(|e| Error::Detection(format!("Failed to create tensor: {}", e)))?;
// Run inference
let outputs = self.session
let outputs = self
.session
.run(ort::inputs![input_tensor])
.map_err(|e| Error::Detection(format!("Inference failed: {}", e)))?;
@@ -229,10 +239,15 @@ impl OnnxEmbeddingExtractor {
/// Extract embedding from raw grayscale image bytes (stub for non-onnx builds)
#[cfg(not(feature = "onnx"))]
#[allow(unused_variables)]
pub fn extract_from_bytes(&mut self, image_data: &[u8], width: u32, height: u32) -> Result<Vec<f32>> {
pub fn extract_from_bytes(
&mut self,
image_data: &[u8],
width: u32,
height: u32,
) -> Result<Vec<f32>> {
if !self.model_loaded {
return Err(Error::Detection(
"ONNX models not loaded (onnx feature not enabled)".to_string()
"ONNX models not loaded (onnx feature not enabled)".to_string(),
));
}
Ok(vec![0.0; self.embedding_dim])
@@ -276,7 +291,8 @@ impl OnnxEmbeddingExtractor {
#[cfg(feature = "onnx")]
fn extract_embedding_data(outputs: &ort::session::SessionOutputs) -> Result<Vec<f32>> {
// Get the first output (embedding)
let output = outputs.get("output")
let output = outputs
.get("output")
.or_else(|| outputs.get("embedding"))
.or_else(|| outputs.get("fc1"))
.ok_or_else(|| Error::Detection("No embedding output found".to_string()))?;


@@ -54,7 +54,7 @@ mod detector;
mod embedding;
pub use alignment::{FaceAligner, REFERENCE_LANDMARKS_112, REFERENCE_LANDMARKS_96};
pub use detector::{OnnxFaceDetector, DetectionWithLandmarks};
pub use detector::{DetectionWithLandmarks, OnnxFaceDetector};
pub use embedding::OnnxEmbeddingExtractor;
/// ONNX model configuration
@@ -157,10 +157,8 @@ impl OnnxPipeline {
) -> linux_hello_common::Result<Self> {
let detector = OnnxFaceDetector::load_with_config(&detector_path, config)?;
let extractor = OnnxEmbeddingExtractor::load_with_config(&embedding_path, config)?;
let aligner = FaceAligner::with_size(
config.embedding_input_size.0,
config.embedding_input_size.1,
);
let aligner =
FaceAligner::with_size(config.embedding_input_size.0, config.embedding_input_size.1);
Ok(Self {
detector,
@@ -186,7 +184,9 @@ impl OnnxPipeline {
height: u32,
) -> linux_hello_common::Result<Vec<(DetectionWithLandmarks, Vec<f32>)>> {
// Detect faces
let detections = self.detector.detect_with_landmarks(image_data, width, height)?;
let detections = self
.detector
.detect_with_landmarks(image_data, width, height)?;
let mut results = Vec::new();
@@ -199,11 +199,15 @@ impl OnnxPipeline {
let landmarks_px = detection.landmarks_to_pixels(width, height);
// Align face
let aligned = self.aligner.align(image_data, width, height, &landmarks_px)?;
let aligned = self
.aligner
.align(image_data, width, height, &landmarks_px)?;
// Extract embedding
let (align_w, align_h) = self.aligner.output_size();
let embedding = self.extractor.extract_from_bytes(&aligned, align_w, align_h)?;
let embedding = self
.extractor
.extract_from_bytes(&aligned, align_w, align_h)?;
results.push((detection, embedding));
}
@@ -223,13 +227,12 @@ impl OnnxPipeline {
let results = self.process_frame(image_data, width, height)?;
// Find the detection with highest confidence
let best = results
.into_iter()
.max_by(|a, b| {
a.0.detection.confidence
.partial_cmp(&b.0.detection.confidence)
.unwrap_or(std::cmp::Ordering::Equal)
});
let best = results.into_iter().max_by(|a, b| {
a.0.detection
.confidence
.partial_cmp(&b.0.detection.confidence)
.unwrap_or(std::cmp::Ordering::Equal)
});
Ok(best.map(|(_, embedding)| embedding))
}


@@ -49,7 +49,7 @@ use subtle::{Choice, ConstantTimeEq};
use zeroize::{Zeroize, ZeroizeOnDrop};
/// Secure container for face embedding data
///
///
/// Automatically zeroizes memory when dropped to prevent
/// sensitive biometric data from persisting in memory.
#[derive(Clone, Zeroize, ZeroizeOnDrop)]
@@ -104,7 +104,7 @@ impl SecureEmbedding {
}
/// Get immutable access to the embedding data
///
///
/// # Security Note
/// The returned slice should be used immediately and not stored.
pub fn as_slice(&self) -> &[f32] {
@@ -155,7 +155,11 @@ impl SecureEmbedding {
// Return 0.0 if lengths don't match, otherwise return similarity
// This selection is constant-time using subtle's Choice
let zero = 0.0f32;
if bool::from(lengths_match) { similarity } else { zero }
if bool::from(lengths_match) {
similarity
} else {
zero
}
}
/// Calculate Euclidean distance with another embedding
@@ -164,7 +168,9 @@ impl SecureEmbedding {
return f32::MAX;
}
let sum_sq: f32 = self.data.iter()
let sum_sq: f32 = self
.data
.iter()
.zip(other.data.iter())
.map(|(a, b)| (a - b).powi(2))
.sum();
@@ -185,7 +191,7 @@ impl SecureEmbedding {
pub fn from_bytes(bytes: &[u8]) -> Result<Self> {
if bytes.len() % 4 != 0 {
return Err(Error::Serialization(
"Invalid embedding byte length".to_string()
"Invalid embedding byte length".to_string(),
));
}
@@ -291,7 +297,7 @@ pub mod memory_protection {
use std::ptr;
/// Securely zero a byte slice
///
///
/// Uses volatile writes to prevent compiler optimization
pub fn secure_zero(data: &mut [u8]) {
for byte in data.iter_mut() {
@@ -314,18 +320,13 @@ pub mod memory_protection {
}
/// Lock memory region to prevent swapping (Linux-specific)
///
///
/// Returns Ok(true) if locked successfully, Ok(false) if not supported
#[cfg(target_os = "linux")]
pub fn lock_memory(data: &[u8]) -> Result<bool> {
use std::ffi::c_void;
let result = unsafe {
libc::mlock(
data.as_ptr() as *const c_void,
data.len()
)
};
let result = unsafe { libc::mlock(data.as_ptr() as *const c_void, data.len()) };
if result == 0 {
Ok(true)
@@ -345,12 +346,9 @@ pub mod memory_protection {
#[cfg(target_os = "linux")]
pub fn unlock_memory(data: &[u8]) -> Result<()> {
use std::ffi::c_void;
unsafe {
libc::munlock(
data.as_ptr() as *const c_void,
data.len()
);
libc::munlock(data.as_ptr() as *const c_void, data.len());
}
Ok(())
}
@@ -401,7 +399,7 @@ mod tests {
fn test_secure_embedding_creation() {
let data = vec![1.0, 2.0, 3.0, 4.0];
let embedding = SecureEmbedding::new(data.clone());
assert_eq!(embedding.len(), 4);
assert_eq!(embedding.as_slice(), &data);
}
@@ -411,10 +409,10 @@ mod tests {
let emb1 = SecureEmbedding::new(vec![1.0, 0.0, 0.0]);
let emb2 = SecureEmbedding::new(vec![1.0, 0.0, 0.0]);
let emb3 = SecureEmbedding::new(vec![0.0, 1.0, 0.0]);
// Same vector
assert!((emb1.cosine_similarity(&emb2) - 1.0).abs() < 0.001);
// Orthogonal vectors
assert!(emb1.cosine_similarity(&emb3).abs() < 0.001);
}
@@ -424,7 +422,7 @@ mod tests {
let original = SecureEmbedding::new(vec![1.5, 2.5, 3.5, 4.5]);
let bytes = original.to_bytes();
let restored = SecureEmbedding::from_bytes(&bytes).unwrap();
assert_eq!(original.as_slice(), restored.as_slice());
}
@@ -432,7 +430,7 @@ mod tests {
fn test_secure_embedding_debug_redacted() {
let embedding = SecureEmbedding::new(vec![1.0, 2.0, 3.0]);
let debug_str = format!("{:?}", embedding);
assert!(debug_str.contains("REDACTED"));
assert!(!debug_str.contains("1.0"));
}
@@ -490,17 +488,17 @@ mod tests {
fn test_secure_zero() {
let mut data = vec![1u8, 2, 3, 4, 5];
memory_protection::secure_zero(&mut data);
assert!(data.iter().all(|&b| b == 0));
}
#[test]
fn test_zeroize_guard() {
let mut guard = ZeroizeGuard::new(vec![1u8, 2, 3, 4]);
assert!(guard.get().is_some());
assert_eq!(guard.get().unwrap().len(), 4);
// Modify through guard
if let Some(data) = guard.get_mut() {
data[0] = 10;
@@ -512,7 +510,7 @@ mod tests {
fn test_zeroize_guard_take() {
let guard = ZeroizeGuard::new(vec![1u8, 2, 3]);
let data = guard.take();
assert!(data.is_some());
assert_eq!(data.unwrap(), vec![1, 2, 3]);
}


@@ -3,12 +3,12 @@
//! Enhanced template storage with TPM encryption support.
//! Wraps the common TemplateStore with encryption capabilities.
use linux_hello_common::{Error, FaceTemplate, Result, TemplateStore};
use crate::tpm::{EncryptedTemplate, TpmStorage, get_tpm_storage};
use crate::secure_memory::SecureEmbedding;
use crate::tpm::{get_tpm_storage, EncryptedTemplate, TpmStorage};
use linux_hello_common::{Error, FaceTemplate, Result, TemplateStore};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use std::fs;
use std::path::{Path, PathBuf};
use tracing::{debug, info, warn};
/// Encrypted face template for storage
@@ -49,7 +49,7 @@ impl SecureTemplateStore {
pub fn new<P: AsRef<Path>>(base_path: P) -> Self {
let base = base_path.as_ref().to_path_buf();
let tpm = get_tpm_storage();
Self {
base_path: base.clone(),
tpm,
@@ -75,7 +75,10 @@ impl SecureTemplateStore {
info!("Secure template storage initialized with encryption");
}
Err(e) => {
warn!("TPM initialization failed, using unencrypted storage: {}", e);
warn!(
"TPM initialization failed, using unencrypted storage: {}",
e
);
self.encryption_enabled = false;
}
}
@@ -98,7 +101,9 @@ impl SecureTemplateStore {
/// Get path for secure template file
fn secure_template_path(&self, user: &str, label: &str) -> PathBuf {
self.base_path.join(user).join(format!("{}.secure.json", label))
self.base_path
.join(user)
.join(format!("{}.secure.json", label))
}
/// Store a template securely
@@ -116,7 +121,9 @@ impl SecureTemplateStore {
fs::create_dir_all(&user_dir)?;
// Serialize embedding to bytes
let embedding_bytes: Vec<u8> = template.embedding.iter()
let embedding_bytes: Vec<u8> = template
.embedding
.iter()
.flat_map(|f| f.to_le_bytes())
.collect();
@@ -138,7 +145,10 @@ impl SecureTemplateStore {
let path = self.secure_template_path(&template.user, &template.label);
fs::write(&path, json)?;
debug!("Stored encrypted template for {}:{}", template.user, template.label);
debug!(
"Stored encrypted template for {}:{}",
template.user, template.label
);
Ok(())
}
@@ -158,13 +168,13 @@ impl SecureTemplateStore {
fn load_encrypted(&mut self, user: &str, label: &str) -> Result<FaceTemplate> {
let path = self.secure_template_path(user, label);
let content = fs::read_to_string(&path)?;
let secure: SecureTemplate = serde_json::from_str(&content)
.map_err(|e| Error::Serialization(e.to_string()))?;
let secure: SecureTemplate =
serde_json::from_str(&content).map_err(|e| Error::Serialization(e.to_string()))?;
// Decrypt embedding
let embedding_bytes = self.tpm.decrypt(user, &secure.encrypted_embedding)?;
// Deserialize embedding
let embedding: Vec<f32> = embedding_bytes
.chunks_exact(4)
@@ -192,17 +202,17 @@ impl SecureTemplateStore {
/// Load all templates for a user
pub fn load_all(&mut self, user: &str) -> Result<Vec<FaceTemplate>> {
let user_dir = self.base_path.join(user);
if !user_dir.exists() {
return Ok(vec![]);
}
let mut templates = Vec::new();
for entry in fs::read_dir(&user_dir)? {
let entry = entry?;
let path = entry.path();
if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
if name.ends_with(".secure.json") {
// Encrypted template
@@ -284,11 +294,15 @@ impl SecureTemplateStore {
// Store encrypted version
self.store_encrypted(&template)?;
// Remove unencrypted version
self.fallback_store.remove(&template.user, &template.label)?;
self.fallback_store
.remove(&template.user, &template.label)?;
migrated += 1;
}
info!("Migrated {} templates to encrypted storage for user {}", migrated, user);
info!(
"Migrated {} templates to encrypted storage for user {}",
migrated, user
);
Ok(migrated)
}
}
@@ -312,7 +326,7 @@ mod tests {
fn test_secure_store_creation() {
let temp = TempDir::new().unwrap();
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
assert!(!store.is_encryption_enabled());
}
@@ -364,8 +378,12 @@ mod tests {
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
store.store(&create_test_template("testuser", "default")).unwrap();
store.store(&create_test_template("testuser", "backup")).unwrap();
store
.store(&create_test_template("testuser", "default"))
.unwrap();
store
.store(&create_test_template("testuser", "backup"))
.unwrap();
let templates = store.load_all("testuser").unwrap();
assert_eq!(templates.len(), 2);


@@ -58,9 +58,9 @@
use linux_hello_common::{Error, Result};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use tracing::{debug, warn};
#[cfg(feature = "tpm")]
use tracing::info;
use tracing::{debug, warn};
/// TPM2 key handle for Linux Hello primary key
pub const PRIMARY_KEY_HANDLE: u32 = 0x81000001;
@@ -196,7 +196,9 @@ impl SoftwareTpmFallback {
use pbkdf2::pbkdf2_hmac;
use sha2::Sha256;
let master = self.master_secret.as_ref()
let master = self
.master_secret
.as_ref()
.ok_or_else(|| Error::Tpm("Master secret not initialized".to_string()))?;
// Combine master secret with user identifier as password
@@ -210,7 +212,12 @@ impl SoftwareTpmFallback {
}
/// Encrypt data using AES-256-GCM
fn aes_gcm_encrypt(&self, plaintext: &[u8], key: &[u8; 32], nonce: &[u8; NONCE_SIZE]) -> Result<Vec<u8>> {
fn aes_gcm_encrypt(
&self,
plaintext: &[u8],
key: &[u8; 32],
nonce: &[u8; NONCE_SIZE],
) -> Result<Vec<u8>> {
use aes_gcm::{
aead::{Aead, KeyInit},
Aes256Gcm, Nonce,
@@ -221,7 +228,8 @@ impl SoftwareTpmFallback {
let nonce = Nonce::from_slice(nonce);
cipher.encrypt(nonce, plaintext)
cipher
.encrypt(nonce, plaintext)
.map_err(|e| Error::Tpm(format!("Encryption failed: {}", e)))
}
@@ -237,7 +245,8 @@ impl SoftwareTpmFallback {
let nonce = Nonce::from_slice(nonce);
cipher.decrypt(nonce, ciphertext)
cipher
.decrypt(nonce, ciphertext)
.map_err(|e| Error::Tpm(format!("Decryption failed (authentication error): {}", e)))
}
@@ -306,13 +315,15 @@ impl TpmStorage for SoftwareTpmFallback {
if encrypted.salt.len() != SALT_SIZE {
return Err(Error::Tpm(format!(
"Invalid salt size: expected {}, got {}",
SALT_SIZE, encrypted.salt.len()
SALT_SIZE,
encrypted.salt.len()
)));
}
if encrypted.iv.len() != NONCE_SIZE {
return Err(Error::Tpm(format!(
"Invalid IV/nonce size: expected {}, got {}",
NONCE_SIZE, encrypted.iv.len()
NONCE_SIZE,
encrypted.iv.len()
)));
}
@@ -329,7 +340,8 @@ impl TpmStorage for SoftwareTpmFallback {
let key_path = self.user_key_path(user);
// Store key metadata (the actual key is derived on-demand using PBKDF2)
let metadata = format!("user={}\ncreated={}",
let metadata = format!(
"user={}\ncreated={}",
user,
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
@@ -366,8 +378,6 @@ pub mod real_tpm {
use super::*;
use std::convert::TryFrom;
use tss_esapi::{
Context,
TctiNameConf,
abstraction::cipher::Cipher,
attributes::ObjectAttributesBuilder,
handles::KeyHandle,
@@ -381,6 +391,7 @@ pub mod real_tpm {
Auth, InitialValue, MaxBuffer, Private, Public, PublicBuffer, PublicBuilder,
RsaExponent, SymmetricCipherParameters,
},
Context, TctiNameConf,
};
/// AES-256-CFB IV size (128 bits = 16 bytes)
@@ -453,7 +464,9 @@ pub mod real_tpm {
// Get auth path before borrowing context mutably
let auth_path = self.primary_auth_path();
let ctx = self.context.as_mut()
let ctx = self
.context
.as_mut()
.ok_or_else(|| Error::Tpm("TPM not connected".to_string()))?;
// Set owner auth to empty (default)
@@ -487,22 +500,26 @@ pub mod real_tpm {
let cipher = Cipher::aes_256_cfb();
let primary_public = tss_esapi::utils::create_restricted_decryption_rsa_public(
cipher.try_into()
.map_err(|e: tss_esapi::Error| Error::Tpm(format!("Failed to convert cipher: {}", e)))?,
cipher.try_into().map_err(|e: tss_esapi::Error| {
Error::Tpm(format!("Failed to convert cipher: {}", e))
})?,
RsaKeyBits::Rsa2048,
RsaExponent::default(),
).map_err(|e| Error::Tpm(format!("Failed to create primary public: {}", e)))?;
)
.map_err(|e| Error::Tpm(format!("Failed to create primary public: {}", e)))?;
let result = ctx.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.create_primary(
Hierarchy::Owner,
primary_public,
Some(primary_auth.clone()),
None, // initial data
None, // outside info
None, // creation PCR
)
}).map_err(|e| Error::Tpm(format!("Failed to create primary key: {}", e)))?;
let result = ctx
.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.create_primary(
Hierarchy::Owner,
primary_public,
Some(primary_auth.clone()),
None, // initial data
None, // outside info
None, // creation PCR
)
})
.map_err(|e| Error::Tpm(format!("Failed to create primary key: {}", e)))?;
// Set auth on the primary key handle
ctx.tr_set_auth(result.key_handle.into(), primary_auth.clone())
@@ -519,10 +536,13 @@ pub mod real_tpm {
fn create_symmetric_key(&mut self, user: &str) -> Result<StoredUserKey> {
use rand::RngCore;
let ctx = self.context.as_mut()
let ctx = self
.context
.as_mut()
.ok_or_else(|| Error::Tpm("TPM not connected".to_string()))?;
let primary_key = self.primary_key
let primary_key = self
.primary_key
.ok_or_else(|| Error::Tpm("Primary key not initialized".to_string()))?;
// Generate auth for the symmetric key
@@ -547,24 +567,27 @@ pub mod real_tpm {
.with_name_hashing_algorithm(HashingAlgorithm::Sha256)
.with_object_attributes(object_attributes)
.with_symmetric_cipher_parameters(SymmetricCipherParameters::new(
cipher.try_into()
.map_err(|e: tss_esapi::Error| Error::Tpm(format!("Failed to convert cipher: {}", e)))?,
cipher.try_into().map_err(|e: tss_esapi::Error| {
Error::Tpm(format!("Failed to convert cipher: {}", e))
})?,
))
.with_symmetric_cipher_unique_identifier(Default::default())
.build()
.map_err(|e| Error::Tpm(format!("Failed to build symmetric key public: {}", e)))?;
// Create the symmetric key under the primary key
let creation_data = ctx.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.create(
primary_key,
sym_key_public,
Some(sym_key_auth),
None, // No initial sensitive data - TPM generates the key
None, // outside info
None, // creation PCR
)
}).map_err(|e| Error::Tpm(format!("Failed to create symmetric key: {}", e)))?;
let creation_data = ctx
.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.create(
primary_key,
sym_key_public,
Some(sym_key_auth),
None, // No initial sensitive data - TPM generates the key
None, // outside info
None, // creation PCR
)
})
.map_err(|e| Error::Tpm(format!("Failed to create symmetric key: {}", e)))?;
// Serialize private and public blobs
// Private implements Deref<Target=Vec<u8>>, so we can clone the inner Vec
@@ -606,7 +629,8 @@ pub mod real_tpm {
fn load_user_key(&mut self, user: &str) -> Result<(KeyHandle, Auth)> {
// Get all values that need self before getting mutable context
let key_path = self.user_tpm_key_path(user);
let primary_key = self.primary_key
let primary_key = self
.primary_key
.ok_or_else(|| Error::Tpm("Primary key not initialized".to_string()))?;
// Load stored key data
@@ -628,8 +652,9 @@ pub mod real_tpm {
};
let auth_bytes = self.software_fallback.decrypt(user, &encrypted_template)?;
let sym_key_auth = Auth::try_from(auth_bytes)
.map_err(|e| Error::Tpm(format!("Failed to create auth from decrypted value: {}", e)))?;
let sym_key_auth = Auth::try_from(auth_bytes).map_err(|e| {
Error::Tpm(format!("Failed to create auth from decrypted value: {}", e))
})?;
// Reconstruct Private and Public from stored blobs
let private = Private::try_from(stored_key.private_blob)
@@ -641,13 +666,17 @@ pub mod real_tpm {
.map_err(|e| Error::Tpm(format!("Failed to parse public: {}", e)))?;
// Now get context for TPM operations
let ctx = self.context.as_mut()
let ctx = self
.context
.as_mut()
.ok_or_else(|| Error::Tpm("TPM not connected".to_string()))?;
// Load the key into TPM
let key_handle = ctx.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.load(primary_key, private, public)
}).map_err(|e| Error::Tpm(format!("Failed to load symmetric key: {}", e)))?;
let key_handle = ctx
.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.load(primary_key, private, public)
})
.map_err(|e| Error::Tpm(format!("Failed to load symmetric key: {}", e)))?;
// Set auth on the loaded key
ctx.tr_set_auth(key_handle.into(), sym_key_auth.clone())
@@ -663,7 +692,9 @@ pub mod real_tpm {
// Load user's symmetric key
let (key_handle, _auth) = self.load_user_key(user)?;
let ctx = self.context.as_mut()
let ctx = self
.context
.as_mut()
.ok_or_else(|| Error::Tpm("TPM not connected".to_string()))?;
// Generate random IV
@@ -681,15 +712,17 @@ pub mod real_tpm {
let data = MaxBuffer::try_from(chunk.to_vec())
.map_err(|e| Error::Tpm(format!("Failed to create buffer: {}", e)))?;
let (encrypted_chunk, _) = ctx.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.encrypt_decrypt_2(
key_handle,
false, // encrypt
SymmetricMode::Cfb,
data,
initial_value.clone(),
)
}).map_err(|e| Error::Tpm(format!("TPM encryption failed: {}", e)))?;
let (encrypted_chunk, _) = ctx
.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.encrypt_decrypt_2(
key_handle,
false, // encrypt
SymmetricMode::Cfb,
data,
initial_value.clone(),
)
})
.map_err(|e| Error::Tpm(format!("TPM encryption failed: {}", e)))?;
ciphertext.extend_from_slice(&encrypted_chunk);
}
@@ -706,7 +739,9 @@ pub mod real_tpm {
// Load user's symmetric key
let (key_handle, _auth) = self.load_user_key(user)?;
let ctx = self.context.as_mut()
let ctx = self
.context
.as_mut()
.ok_or_else(|| Error::Tpm("TPM not connected".to_string()))?;
let initial_value = InitialValue::try_from(iv.to_vec())
@@ -720,15 +755,17 @@ pub mod real_tpm {
let data = MaxBuffer::try_from(chunk.to_vec())
.map_err(|e| Error::Tpm(format!("Failed to create buffer: {}", e)))?;
let (decrypted_chunk, _) = ctx.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.encrypt_decrypt_2(
key_handle,
true, // decrypt
SymmetricMode::Cfb,
data,
initial_value.clone(),
)
}).map_err(|e| Error::Tpm(format!("TPM decryption failed: {}", e)))?;
let (decrypted_chunk, _) = ctx
.execute_with_session(Some(AuthSession::Password), |ctx| {
ctx.encrypt_decrypt_2(
key_handle,
true, // decrypt
SymmetricMode::Cfb,
data,
initial_value.clone(),
)
})
.map_err(|e| Error::Tpm(format!("TPM decryption failed: {}", e)))?;
plaintext.extend_from_slice(&decrypted_chunk);
}
@@ -744,8 +781,8 @@ pub mod real_tpm {
impl TpmStorage for Tpm2Storage {
fn is_available(&self) -> bool {
// Check if TPM device exists
std::path::Path::new("/dev/tpm0").exists() ||
std::path::Path::new("/dev/tpmrm0").exists()
std::path::Path::new("/dev/tpm0").exists()
|| std::path::Path::new("/dev/tpmrm0").exists()
}
fn initialize(&mut self) -> Result<()> {
@@ -833,12 +870,16 @@ pub fn get_tpm_storage() -> Box<dyn TpmStorage> {
// Fall back to software implementation
warn!("TPM not available, using software fallback");
Box::new(SoftwareTpmFallback::new(SoftwareTpmFallback::default_key_path()))
Box::new(SoftwareTpmFallback::new(
SoftwareTpmFallback::default_key_path(),
))
}
#[cfg(not(feature = "tpm"))]
pub fn get_tpm_storage() -> Box<dyn TpmStorage> {
Box::new(SoftwareTpmFallback::new(SoftwareTpmFallback::default_key_path()))
Box::new(SoftwareTpmFallback::new(
SoftwareTpmFallback::default_key_path(),
))
}
#[cfg(test)]
@@ -878,7 +919,7 @@ mod tests {
let template = EncryptedTemplate {
ciphertext: vec![1, 2, 3, 4, 5],
iv: vec![10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120], // 12 bytes for AES-GCM
salt: vec![0u8; 32], // 32 bytes for PBKDF2 salt
salt: vec![0u8; 32], // 32 bytes for PBKDF2 salt
key_handle: PRIMARY_KEY_HANDLE,
tpm_encrypted: false,
};


@@ -19,7 +19,7 @@ use linux_hello_daemon::onnx::{
FaceAligner, OnnxEmbeddingExtractor, OnnxFaceDetector, OnnxModelConfig, OnnxPipeline,
REFERENCE_LANDMARKS_112,
};
use linux_hello_daemon::{FaceDetect, EmbeddingExtractor};
use linux_hello_daemon::{EmbeddingExtractor, FaceDetect};
use std::path::Path;
/// Model directory path
@@ -118,10 +118,10 @@ mod alignment_tests {
&image,
TEST_WIDTH,
TEST_HEIGHT,
100, // face_x
100, // face_y
200, // face_width
200, // face_height
100, // face_x
100, // face_y
200, // face_width
200, // face_height
);
assert!(result.is_ok());
@@ -251,7 +251,11 @@ mod integration_with_models {
}
let result = OnnxFaceDetector::load(model_path("retinaface.onnx"));
assert!(result.is_ok(), "Failed to load detection model: {:?}", result.err());
assert!(
result.is_ok(),
"Failed to load detection model: {:?}",
result.err()
);
}
#[test]
@@ -263,7 +267,11 @@ mod integration_with_models {
}
let result = OnnxEmbeddingExtractor::load(model_path("mobilefacenet.onnx"));
assert!(result.is_ok(), "Failed to load embedding model: {:?}", result.err());
assert!(
result.is_ok(),
"Failed to load embedding model: {:?}",
result.err()
);
}
#[test]
@@ -274,13 +282,17 @@ mod integration_with_models {
return;
}
let detector = OnnxFaceDetector::load(model_path("retinaface.onnx"))
.expect("Failed to load detector");
let detector =
OnnxFaceDetector::load(model_path("retinaface.onnx")).expect("Failed to load detector");
let image = create_test_image(TEST_WIDTH, TEST_HEIGHT);
let detections = detector.detect(&image, TEST_WIDTH, TEST_HEIGHT);
assert!(detections.is_ok(), "Detection failed: {:?}", detections.err());
assert!(
detections.is_ok(),
"Detection failed: {:?}",
detections.err()
);
// Note: synthetic image may or may not trigger detections
}
@@ -299,14 +311,22 @@ mod integration_with_models {
let face_data = vec![128u8; 112 * 112];
let result = extractor.extract_from_bytes(&face_data, 112, 112);
assert!(result.is_ok(), "Embedding extraction failed: {:?}", result.err());
assert!(
result.is_ok(),
"Embedding extraction failed: {:?}",
result.err()
);
let embedding = result.unwrap();
assert_eq!(embedding.len(), extractor.embedding_dimension());
// Check embedding is normalized (L2 norm should be ~1)
let norm: f32 = embedding.iter().map(|x| x * x).sum::<f32>().sqrt();
assert!((norm - 1.0).abs() < 0.1, "Embedding not normalized: norm = {}", norm);
assert!(
(norm - 1.0).abs() < 0.1,
"Embedding not normalized: norm = {}",
norm
);
}
#[test]
@@ -326,7 +346,11 @@ mod integration_with_models {
let image = create_test_image(TEST_WIDTH, TEST_HEIGHT);
let results = pipeline.process_frame(&image, TEST_WIDTH, TEST_HEIGHT);
assert!(results.is_ok(), "Pipeline processing failed: {:?}", results.err());
assert!(
results.is_ok(),
"Pipeline processing failed: {:?}",
results.err()
);
}
#[test]
@@ -343,9 +367,11 @@ mod integration_with_models {
// Same face should produce similar embeddings
let face_data = vec![128u8; 112 * 112];
let embedding1 = extractor.extract_from_bytes(&face_data, 112, 112)
let embedding1 = extractor
.extract_from_bytes(&face_data, 112, 112)
.expect("First extraction failed");
let embedding2 = extractor.extract_from_bytes(&face_data, 112, 112)
let embedding2 = extractor
.extract_from_bytes(&face_data, 112, 112)
.expect("Second extraction failed");
// Compute cosine similarity
@@ -377,9 +403,11 @@ mod integration_with_models {
let face1 = vec![100u8; 112 * 112];
let face2 = vec![200u8; 112 * 112];
let embedding1 = extractor
.extract_from_bytes(&face1, 112, 112)
.expect("First extraction failed");
let embedding2 = extractor
.extract_from_bytes(&face2, 112, 112)
.expect("Second extraction failed");
// Compute cosine similarity
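The similarity computed at these call sites is standard cosine similarity. A minimal standalone version for reference; the crate's actual `cosine_similarity` may handle edge cases differently:

```rust
// Illustrative cosine similarity: dot(a, b) / (|a| * |b|), with a guard
// against zero-length vectors. Not the project's implementation.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 {
        0.0
    } else {
        dot / (na * nb)
    }
}

fn main() {
    let a = [1.0f32, 0.0, 0.0];
    // Identical vectors score ~1.0, orthogonal vectors ~0.0.
    assert!((cosine_similarity(&a, &[1.0, 0.0, 0.0]) - 1.0).abs() < 1e-6);
    assert!(cosine_similarity(&a, &[0.0, 1.0, 0.0]).abs() < 1e-6);
}
```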


@@ -86,8 +86,13 @@ trait LinuxHelloDaemon {
/// Signal emitted when enrollment progress updates
#[zbus(signal)]
fn enrollment_progress(
&self,
session_id: &str,
step: u32,
total: u32,
message: &str,
) -> ZbusResult<()>;
/// Signal emitted when enrollment completes
#[zbus(signal)]
@@ -128,10 +133,7 @@ impl DaemonClient {
/// Get the D-Bus proxy for the daemon
async fn get_proxy(&self) -> Result<LinuxHelloDaemonProxy<'static>, DaemonError> {
let guard = self.connection.lock().await;
let conn = guard.as_ref().ok_or(DaemonError::NotConnected)?.clone();
LinuxHelloDaemonProxy::new(&conn)
.await
@@ -144,13 +146,14 @@ impl DaemonClient {
match proxy.get_status().await {
Ok(json) => {
serde_json::from_str(&json).map_err(|e| DaemonError::ParseError(e.to_string()))
}
Err(e) => {
// If daemon is not running, return a default status
if e.to_string()
.contains("org.freedesktop.DBus.Error.ServiceUnknown")
|| e.to_string()
.contains("org.freedesktop.DBus.Error.NameHasNoOwner")
{
Ok(SystemStatus {
daemon_running: false,
@@ -169,12 +172,13 @@ impl DaemonClient {
match proxy.list_templates().await {
Ok(json) => {
serde_json::from_str(&json).map_err(|e| DaemonError::ParseError(e.to_string()))
}
Err(e) => {
if e.to_string()
.contains("org.freedesktop.DBus.Error.ServiceUnknown")
|| e.to_string()
.contains("org.freedesktop.DBus.Error.NameHasNoOwner")
{
Ok(Vec::new())
} else {


@@ -8,8 +8,8 @@ use std::rc::Rc;
use std::sync::Arc;
use glib::clone;
use gtk4::glib;
use gtk4::prelude::*;
use libadwaita as adw;
use libadwaita::prelude::*;
use tokio::sync::Mutex;
@@ -63,9 +63,7 @@ impl EnrollmentDialog {
.show_end_title_buttons(false)
.build();
let cancel_button = gtk4::Button::builder().label("Cancel").build();
header.pack_start(&cancel_button);
let start_button = gtk4::Button::builder()
@@ -155,9 +153,21 @@ impl EnrollmentDialog {
.build();
let tips = [
(
"face-smile-symbolic",
"Good lighting",
"Ensure your face is well-lit",
),
(
"view-reveal-symbolic",
"Clear view",
"Remove glasses if possible",
),
(
"object-rotate-right-symbolic",
"Multiple angles",
"Slowly turn your head when prompted",
),
];
for (icon, title, subtitle) in tips {
@@ -238,7 +248,8 @@ impl EnrollmentDialog {
fn validate_input(&self) {
let label = self.label_entry.text();
let valid = !label.is_empty() && label.len() <= 64;
self.start_button
.set_sensitive(valid && *self.state.borrow() == EnrollmentState::Ready);
}
/// Create a weak reference for callbacks
@@ -273,9 +284,11 @@ impl EnrollmentDialog {
self.progress_bar.set_text(Some("Starting..."));
self.instruction_label.set_visible(true);
self.status_page
.set_icon_name(Some("camera-video-symbolic"));
self.status_page.set_title("Enrolling...");
self.status_page
.set_description(Some("Please look at the camera"));
let client = self.client.clone();
let state = self.state.clone();
@@ -315,11 +328,14 @@ impl EnrollmentDialog {
let instruction = instruction.to_string();
glib::idle_add_local_once(clone!(
#[strong]
progress_bar,
#[strong]
instruction_label,
move || {
progress_bar.set_fraction(progress);
progress_bar
.set_text(Some(&format!("{}%", (progress * 100.0) as u32)));
instruction_label.set_label(&instruction);
}
));
@@ -332,19 +348,29 @@ impl EnrollmentDialog {
if *state.borrow() == EnrollmentState::InProgress {
match client_guard.finish_enrollment(&sid).await {
Ok(template_id) => {
tracing::info!(
"Enrollment completed, template ID: {}",
template_id
);
*state.borrow_mut() = EnrollmentState::Completed;
glib::idle_add_local_once(clone!(
#[strong]
status_page,
#[strong]
progress_bar,
#[strong]
instruction_label,
#[strong]
start_button,
#[strong]
on_completed,
move || {
status_page.set_icon_name(Some("emblem-ok-symbolic"));
status_page.set_title("Enrollment Complete");
status_page.set_description(Some(
"Your face has been enrolled successfully",
));
progress_bar.set_visible(false);
instruction_label.set_visible(false);
start_button.set_label("Done");
@@ -360,13 +386,31 @@ impl EnrollmentDialog {
));
}
Err(e) => {
handle_enrollment_error(
&state,
&status_page,
&progress_bar,
&instruction_label,
&start_button,
&label_entry,
&on_completed,
&e.to_string(),
);
}
}
}
}
Err(e) => {
handle_enrollment_error(
&state,
&status_page,
&progress_bar,
&instruction_label,
&start_button,
&label_entry,
&on_completed,
&e.to_string(),
);
}
}
});
@@ -419,13 +463,20 @@ fn handle_enrollment_error(
*state.borrow_mut() = EnrollmentState::Failed;
glib::idle_add_local_once(clone!(
#[strong]
status_page,
#[strong]
progress_bar,
#[strong]
instruction_label,
#[strong]
start_button,
#[strong]
label_entry,
#[strong]
on_completed,
#[strong]
error,
move || {
status_page.set_icon_name(Some("dialog-error-symbolic"));
status_page.set_title("Enrollment Failed");


@@ -83,8 +83,7 @@ impl TemplateDisplayModel {
.unwrap_or_else(|| "Unknown".to_string());
let (last_used_date, recently_used) = if let Some(ts) = template.last_used {
let dt = DateTime::<Utc>::from_timestamp(ts, 0).map(|dt| dt.with_timezone(&Local));
let date_str = dt
.as_ref()
@@ -135,7 +134,7 @@ mod tests {
id: id.to_string(),
label: label.to_string(),
username: "testuser".to_string(),
created_at: 1704067200, // 2024-01-01 00:00:00 UTC
last_used: Some(1704153600), // 2024-01-02 00:00:00 UTC
}
}


@@ -268,11 +268,12 @@ impl SettingsWindow {
// Anti-spoofing switch
let this = self.clone();
self.anti_spoofing_switch
.connect_active_notify(move |switch| {
let enabled = switch.is_active();
tracing::info!("Anti-spoofing toggled: {}", enabled);
this.save_settings();
});
// Confidence threshold
let this = self.clone();
@@ -312,8 +313,10 @@ impl SettingsWindow {
// Update daemon status
glib::idle_add_local_once(clone!(
#[strong]
daemon_row,
#[strong]
status,
move || {
if status.daemon_running {
daemon_row.set_subtitle("Running");
@@ -327,8 +330,10 @@ impl SettingsWindow {
// Update camera status
glib::idle_add_local_once(clone!(
#[strong]
camera_row,
#[strong]
status,
move || {
if status.camera_available {
let device = status.camera_device.as_deref().unwrap_or("Available");
@@ -343,8 +348,10 @@ impl SettingsWindow {
// Update TPM status
glib::idle_add_local_once(clone!(
#[strong]
tpm_row,
#[strong]
status,
move || {
if status.tpm_available {
tpm_row.set_subtitle("Available - Secure storage enabled");
@@ -358,8 +365,10 @@ impl SettingsWindow {
// Update enroll button sensitivity
glib::idle_add_local_once(clone!(
#[strong]
enroll_button,
#[strong]
status,
move || {
enroll_button.set_sensitive(status.daemon_running && status.camera_available);
}
@@ -371,8 +380,10 @@ impl SettingsWindow {
// Update templates list
glib::idle_add_local_once(clone!(
#[strong]
templates_group,
#[strong]
template_list,
move || {
update_templates_list(&templates_group, &template_list, templates);
}
@@ -505,7 +516,10 @@ fn create_template_row(template: &TemplateInfo) -> adw::ActionRow {
let subtitle = format!(
"Created: {} | Last used: {}",
format_timestamp(template.created_at),
template
.last_used
.map(format_timestamp)
.unwrap_or_else(|| "Never".to_string())
);
let row = adw::ActionRow::builder()
@@ -525,7 +539,10 @@ fn create_template_row(template: &TemplateInfo) -> adw::ActionRow {
let template_id = template.id.clone();
delete_button.connect_clicked(move |button| {
// Show confirmation dialog
if let Some(window) = button
.root()
.and_then(|r| r.downcast::<gtk4::Window>().ok())
{
show_delete_confirmation(&window, &template_id);
}
});
@@ -554,10 +571,7 @@ fn show_delete_confirmation(window: &gtk4::Window, template_id: &str) {
.body("This will remove the enrolled face template. You will need to enroll again to use facial authentication.")
.build();
dialog.add_responses(&[("cancel", "Cancel"), ("delete", "Remove")]);
dialog.set_response_appearance("delete", adw::ResponseAppearance::Destructive);
dialog.set_default_response(Some("cancel"));
dialog.set_close_response("cancel");
@@ -589,9 +603,7 @@ fn show_about_dialog(window: &adw::ApplicationWindow) {
.comments("Facial authentication for Linux, inspired by Windows Hello")
.build();
about.add_credit_section(Some("Contributors"), &["Linux Hello Team"]);
about.present(Some(window));
}


@@ -2,39 +2,39 @@
//!
//! These tests require camera hardware or will use mocks on non-Linux systems.
use linux_hello_common::Result;
use linux_hello_daemon::camera::{enumerate_cameras, Camera, IrEmitterControl};
#[test]
fn test_camera_enumeration() -> Result<()> {
let cameras = enumerate_cameras()?;
// Should at least enumerate (may be empty if no cameras)
println!("Found {} camera(s)", cameras.len());
for cam in &cameras {
println!(" - {}", cam);
}
Ok(())
}
#[test]
fn test_camera_open_and_capture() -> Result<()> {
let cameras = enumerate_cameras()?;
if cameras.is_empty() {
println!("No cameras available, skipping capture test");
return Ok(());
}
// Try to open the first camera
let first_cam = &cameras[0];
println!("Opening camera: {}", first_cam.device_path);
let mut camera = Camera::open(&first_cam.device_path)?;
let (width, height) = camera.resolution();
println!("Camera resolution: {}x{}", width, height);
// Capture a few frames
for i in 0..3 {
let frame = camera.capture_frame()?;
@@ -47,13 +47,13 @@ fn test_camera_open_and_capture() -> Result<()> {
frame.data.len(),
frame.timestamp_us
);
// Verify frame properties
assert_eq!(frame.width, width);
assert_eq!(frame.height, height);
assert!(!frame.data.is_empty());
}
camera.stop();
Ok(())
}
@@ -61,20 +61,20 @@ fn test_camera_open_and_capture() -> Result<()> {
#[test]
fn test_ir_emitter_control() -> Result<()> {
let cameras = enumerate_cameras()?;
// Find an IR camera if available
let ir_camera = cameras.iter().find(|c| c.is_ir);
if let Some(cam) = ir_camera {
println!("Testing IR emitter on: {}", cam.device_path);
let mut emitter = IrEmitterControl::new(&cam.device_path);
// Try to enable
emitter.enable()?;
assert!(emitter.is_active());
println!("IR emitter enabled");
// Try to disable
emitter.disable()?;
assert!(!emitter.is_active());
@@ -82,19 +82,19 @@ fn test_ir_emitter_control() -> Result<()> {
} else {
println!("No IR camera found, skipping IR emitter test");
}
Ok(())
}
#[test]
fn test_camera_info_properties() -> Result<()> {
let cameras = enumerate_cameras()?;
for cam in cameras {
// Verify all cameras have valid properties
assert!(!cam.device_path.is_empty());
assert!(!cam.name.is_empty());
println!(
"Camera: {} (IR: {}, Resolutions: {})",
cam.device_path,
@@ -102,6 +102,6 @@ fn test_camera_info_properties() -> Result<()> {
cam.resolutions.len()
);
}
Ok(())
}


@@ -8,9 +8,12 @@ fn cli_binary() -> String {
let _ = Command::new("cargo")
.args(["build", "--bin", "linux-hello"])
.output();
// Return path to the binary
format!(
"{}/target/debug/linux-hello",
env!("CARGO_MANIFEST_DIR").replace("/linux-hello-tests", "")
)
}
#[test]
@@ -19,15 +22,15 @@ fn test_cli_status_command() {
.args(["status"])
.output()
.expect("Failed to execute CLI");
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
println!("Status stdout: {}", stdout);
if !stderr.is_empty() {
println!("Status stderr: {}", stderr);
}
// Status should at least run without crashing
// May fail if no cameras, but should handle gracefully
assert!(output.status.code().is_some());
@@ -39,18 +42,21 @@ fn test_cli_config_command() {
.args(["config"])
.output()
.expect("Failed to execute CLI");
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
println!("Config output: {}", stdout);
if !stderr.is_empty() {
println!("Config stderr: {}", stderr);
}
// Should output TOML configuration
assert!(
stdout.contains("[general]") || stdout.contains("log_level"),
"Expected config output in stdout, got: {}",
stdout
);
}
#[test]
@@ -59,18 +65,21 @@ fn test_cli_config_json_command() {
.args(["config", "--json"])
.output()
.expect("Failed to execute CLI");
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
println!("Config JSON output: {}", stdout);
if !stderr.is_empty() {
println!("Config JSON stderr: {}", stderr);
}
// Should output JSON configuration
assert!(
stdout.contains("\"general\"") || stdout.contains("log_level"),
"Expected JSON config output in stdout, got: {}",
stdout
);
}
#[test]
@@ -78,7 +87,7 @@ fn test_cli_capture_command() {
// Create a temporary directory for output
let temp_dir = std::env::temp_dir().join("linux-hello-test");
std::fs::create_dir_all(&temp_dir).expect("Failed to create temp dir");
let output = Command::new(cli_binary())
.args([
"capture",
@@ -89,15 +98,15 @@ fn test_cli_capture_command() {
])
.output()
.expect("Failed to execute CLI");
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
println!("Capture stdout: {}", stdout);
if !stderr.is_empty() {
println!("Capture stderr: {}", stderr);
}
// May fail if no camera, but should handle gracefully
// If successful, check for output files
if output.status.success() {
@@ -107,7 +116,7 @@ fn test_cli_capture_command() {
.collect();
println!("Created {} file(s) in temp dir", files.len());
}
// Cleanup
let _ = std::fs::remove_dir_all(&temp_dir);
}
@@ -117,34 +126,33 @@ fn test_cli_detect_command() {
// Create a simple test image
let temp_dir = std::env::temp_dir().join("linux-hello-test-detect");
std::fs::create_dir_all(&temp_dir).expect("Failed to create temp dir");
// Create a simple grayscale PNG (100x100, mid-gray)
use image::{GrayImage, Luma};
let img = GrayImage::from_fn(100, 100, |_, _| Luma([128u8]));
let img_path = temp_dir.join("test.png");
img.save(&img_path).expect("Failed to save test image");
let output = Command::new(cli_binary())
.args(["detect", "--image", img_path.to_str().unwrap()])
.output()
.expect("Failed to execute CLI");
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
println!("Detect stdout: {}", stdout);
if !stderr.is_empty() {
println!("Detect stderr: {}", stderr);
}
// Should at least run and report something
assert!(
stdout.contains("detected") || stdout.contains("Face") || stdout.contains("No face"),
"Expected detection output, got: {}",
stdout
);
// Cleanup
let _ = std::fs::remove_dir_all(&temp_dir);
}


@@ -1,7 +1,9 @@
//! Integration tests for face detection
use linux_hello_common::Result;
use linux_hello_daemon::detection::{
detect_face_simple, FaceDetect, FaceDetection, SimpleFaceDetector,
};
#[test]
fn test_simple_face_detection() {
@@ -9,28 +11,31 @@ fn test_simple_face_detection() {
let width = 640u32;
let height = 480u32;
let mut image = Vec::new();
// Create a gradient pattern (simulates a face-like region)
for y in 0..height {
for x in 0..width {
// Center region with higher intensity (face-like)
let center_x = width / 2;
let center_y = height / 2;
let dist =
((x as i32 - center_x as i32).pow(2) + (y as i32 - center_y as i32).pow(2)) as f32;
let max_dist = ((width / 2).pow(2) + (height / 2).pow(2)) as f32;
let intensity = (128.0 + (1.0 - dist / max_dist) * 100.0) as u8;
image.push(intensity);
}
}
let detection = detect_face_simple(&image, width, height);
assert!(detection.is_some(), "Should detect face in test image");
if let Some(det) = detection {
println!(
"Face detected: x={:.2}, y={:.2}, w={:.2}, h={:.2}, conf={:.2}",
det.x, det.y, det.width, det.height, det.confidence
);
// Verify detection is within image bounds
assert!(det.x >= 0.0 && det.x <= 1.0);
assert!(det.y >= 0.0 && det.y <= 1.0);
@@ -53,7 +58,7 @@ fn test_face_detection_low_contrast() {
let width = 100u32;
let height = 100u32;
let image = vec![10u8; (width * height) as usize];
let detection = detect_face_simple(&image, width, height);
// May or may not detect, but shouldn't crash
if let Some(det) = detection {
@@ -67,7 +72,7 @@ fn test_face_detection_high_contrast() {
let width = 100u32;
let height = 100u32;
let image = vec![255u8; (width * height) as usize];
let detection = detect_face_simple(&image, width, height);
// Should not detect (too bright)
if let Some(det) = detection {
@@ -78,22 +83,22 @@ fn test_face_detection_high_contrast() {
#[test]
fn test_simple_face_detector_trait() -> Result<()> {
let detector = SimpleFaceDetector::new(0.3);
// Test with reasonable image
let width = 200u32;
let height = 200u32;
let image: Vec<u8> = (0..width * height)
.map(|i| ((i % 200) + 50) as u8)
.collect();
let detections = detector.detect(&image, width, height)?;
println!("Detector found {} face(s)", detections.len());
// Test with threshold too high
let strict_detector = SimpleFaceDetector::new(0.9);
let strict_detections = strict_detector.detect(&image, width, height)?;
println!("Strict detector found {} face(s)", strict_detections.len());
Ok(())
}
@@ -106,13 +111,13 @@ fn test_face_detection_pixel_conversion() {
height: 0.8,
confidence: 0.95,
};
let (x, y, w, h) = detection.to_pixels(640, 480);
assert_eq!(x, 160);
assert_eq!(y, 48);
assert_eq!(w, 320);
assert_eq!(h, 384);
// Test edge cases
let edge = FaceDetection {
x: 0.0,

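The `to_pixels` conversion asserted above maps normalized detection coordinates to pixel values. A standalone sketch, assuming the struct fields truncated from view are `x: 0.25`, `y: 0.1`, `width: 0.5` (which the expected pixel values imply); the real `FaceDetection::to_pixels` may round differently:

```rust
// Hypothetical normalized-to-pixel conversion: scale each fractional
// coordinate by the image dimension and truncate. Illustrative only.
fn to_pixels(x: f32, y: f32, w: f32, h: f32, img_w: u32, img_h: u32) -> (u32, u32, u32, u32) {
    (
        (x * img_w as f32) as u32,
        (y * img_h as f32) as u32,
        (w * img_w as f32) as u32,
        (h * img_h as f32) as u32,
    )
}

fn main() {
    // Matches the assertions in the test: 0.25 * 640 = 160, 0.1 * 480 = 48, etc.
    assert_eq!(to_pixels(0.25, 0.1, 0.5, 0.8, 640, 480), (160, 48, 320, 384));
}
```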

@@ -10,11 +10,11 @@ fn test_authenticate_request_serialization() {
let request = IpcRequest::Authenticate {
user: "testuser".to_string(),
};
let json = serde_json::to_string(&request).unwrap();
assert!(json.contains("\"action\":\"authenticate\""));
assert!(json.contains("\"user\":\"testuser\""));
// Deserialize back
let parsed: IpcRequest = serde_json::from_str(&json).unwrap();
match parsed {
@@ -33,16 +33,20 @@ fn test_enroll_request_serialization() {
label: "default".to_string(),
frame_count: 5,
};
let json = serde_json::to_string(&request).unwrap();
assert!(json.contains("\"action\":\"enroll\""));
assert!(json.contains("\"user\":\"testuser\""));
assert!(json.contains("\"label\":\"default\""));
// Deserialize back
let parsed: IpcRequest = serde_json::from_str(&json).unwrap();
match parsed {
IpcRequest::Enroll {
user,
label,
frame_count,
} => {
assert_eq!(user, "testuser");
assert_eq!(label, "default");
assert_eq!(frame_count, 5);
@@ -57,10 +61,10 @@ fn test_list_request_serialization() {
let request = IpcRequest::List {
user: "testuser".to_string(),
};
let json = serde_json::to_string(&request).unwrap();
assert!(json.contains("\"action\":\"list\""));
let parsed: IpcRequest = serde_json::from_str(&json).unwrap();
match parsed {
IpcRequest::List { user } => {
@@ -79,7 +83,7 @@ fn test_remove_request_serialization() {
label: Some("default".to_string()),
all: false,
};
let json = serde_json::to_string(&request).unwrap();
let parsed: IpcRequest = serde_json::from_str(&json).unwrap();
match parsed {
@@ -90,14 +94,14 @@ fn test_remove_request_serialization() {
}
_ => panic!("Expected Remove request"),
}
// Remove all
let request = IpcRequest::Remove {
user: "testuser".to_string(),
label: None,
all: true,
};
let json = serde_json::to_string(&request).unwrap();
let parsed: IpcRequest = serde_json::from_str(&json).unwrap();
match parsed {
@@ -114,10 +118,10 @@ fn test_remove_request_serialization() {
#[test]
fn test_ping_request_serialization() {
let request = IpcRequest::Ping;
let json = serde_json::to_string(&request).unwrap();
assert!(json.contains("\"action\":\"ping\""));
let parsed: IpcRequest = serde_json::from_str(&json).unwrap();
match parsed {
IpcRequest::Ping => {}
@@ -135,16 +139,16 @@ fn test_response_serialization() {
confidence: Some(0.95),
templates: None,
};
let json = serde_json::to_string(&response).unwrap();
assert!(json.contains("\"success\":true"));
assert!(json.contains("\"confidence\":0.95"));
// Deserialize
let parsed: IpcResponse = serde_json::from_str(&json).unwrap();
assert!(parsed.success);
assert_eq!(parsed.confidence, Some(0.95));
// Response with templates
let response = IpcResponse {
success: true,
@@ -152,10 +156,13 @@ fn test_response_serialization() {
confidence: None,
templates: Some(vec!["default".to_string(), "glasses".to_string()]),
};
let json = serde_json::to_string(&response).unwrap();
let parsed: IpcResponse = serde_json::from_str(&json).unwrap();
assert_eq!(
parsed.templates,
Some(vec!["default".to_string(), "glasses".to_string()])
);
}
/// Test error response serialization
@@ -167,11 +174,11 @@ fn test_error_response_serialization() {
confidence: None,
templates: None,
};
let json = serde_json::to_string(&response).unwrap();
assert!(json.contains("\"success\":false"));
assert!(json.contains("User not enrolled"));
let parsed: IpcResponse = serde_json::from_str(&json).unwrap();
assert!(!parsed.success);
assert!(parsed.message.unwrap().contains("not enrolled"));
@@ -196,9 +203,10 @@ fn test_pam_protocol_compatibility() {
}
_ => panic!("Failed to parse PAM-format request"),
}
// Test response format
let success_response =
r#"{"success":true,"message":"Authentication successful","confidence":1.0}"#;
let parsed: IpcResponse = serde_json::from_str(success_response).unwrap();
assert!(parsed.success);
}
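Pieced together from the assertions in these tests, the socket protocol carries internally tagged JSON with an `action` discriminator on requests and a `success` flag on responses. A representative exchange (values illustrative, field set taken from the assertions above):

```json
{"action": "authenticate", "user": "testuser"}
{"success": true, "message": "Authentication successful", "confidence": 1.0}
```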


@@ -5,8 +5,10 @@
use linux_hello_common::{Config, FaceTemplate, TemplateStore};
use linux_hello_daemon::auth::AuthService;
use linux_hello_daemon::embedding::{
cosine_similarity, EmbeddingExtractor, PlaceholderEmbeddingExtractor,
};
use linux_hello_daemon::matching::{average_embeddings, match_template, MatchResult};
use tempfile::TempDir;
/// Test template storage operations
@@ -14,10 +16,10 @@ use tempfile::TempDir;
fn test_template_store_operations() {
let temp_dir = TempDir::new().unwrap();
let store = TemplateStore::new(temp_dir.path());
// Initialize store
store.initialize().unwrap();
// Create test template
let template = FaceTemplate {
user: "testuser".to_string(),
@@ -26,29 +28,29 @@ fn test_template_store_operations() {
enrolled_at: 1234567890,
frame_count: 5,
};
// Store template
store.store(&template).unwrap();
// Verify user is enrolled
assert!(store.is_enrolled("testuser"));
assert!(!store.is_enrolled("nonexistent"));
// Load template back
let loaded = store.load("testuser", "default").unwrap();
assert_eq!(loaded.user, "testuser");
assert_eq!(loaded.label, "default");
assert_eq!(loaded.embedding, vec![0.1, 0.2, 0.3, 0.4, 0.5]);
assert_eq!(loaded.frame_count, 5);
// List users
let users = store.list_users().unwrap();
assert!(users.contains(&"testuser".to_string()));
// List templates
let templates = store.list_templates("testuser").unwrap();
assert!(templates.contains(&"default".to_string()));
// Remove template
store.remove("testuser", "default").unwrap();
assert!(!store.is_enrolled("testuser"));
@@ -60,7 +62,7 @@ fn test_multiple_templates_per_user() {
let temp_dir = TempDir::new().unwrap();
let store = TemplateStore::new(temp_dir.path());
store.initialize().unwrap();
// Add multiple templates
for (i, label) in ["default", "glasses", "profile"].iter().enumerate() {
let template = FaceTemplate {
@@ -72,15 +74,15 @@ fn test_multiple_templates_per_user() {
};
store.store(&template).unwrap();
}
// Load all templates
let templates = store.load_all("multiuser").unwrap();
assert_eq!(templates.len(), 3);
// List template labels
let labels = store.list_templates("multiuser").unwrap();
assert_eq!(labels.len(), 3);
// Remove all
store.remove_all("multiuser").unwrap();
assert!(!store.is_enrolled("multiuser"));
@@ -90,23 +92,23 @@ fn test_multiple_templates_per_user() {
#[test]
fn test_embedding_extraction() {
let extractor = PlaceholderEmbeddingExtractor::new(128);
// Create test grayscale image
let mut img = image::GrayImage::new(100, 100);
// Fill with gradient pattern
for y in 0..100 {
for x in 0..100 {
img.put_pixel(x, y, image::Luma([(x + y) as u8 / 2]));
}
}
// Extract embedding
let embedding = extractor.extract(&img).unwrap();
// Check dimension
assert_eq!(embedding.len(), 128);
// Check normalization (should be approximately unit length)
let norm: f32 = embedding.iter().map(|&x| x * x).sum::<f32>().sqrt();
assert!((norm - 1.0).abs() < 0.1 || norm < 0.01);
@@ -116,11 +118,11 @@ fn test_embedding_extraction() {
#[test]
fn test_embedding_consistency() {
let extractor = PlaceholderEmbeddingExtractor::new(128);
// Create identical images
let mut img1 = image::GrayImage::new(100, 100);
let mut img2 = image::GrayImage::new(100, 100);
for y in 0..100 {
for x in 0..100 {
let val = ((x * 2 + y * 3) % 256) as u8;
@@ -128,14 +130,18 @@ fn test_embedding_consistency() {
img2.put_pixel(x, y, image::Luma([val]));
}
}
// Extract embeddings
let emb1 = extractor.extract(&img1).unwrap();
let emb2 = extractor.extract(&img2).unwrap();
// Should be identical
let similarity = cosine_similarity(&emb1, &emb2);
assert!(
(similarity - 1.0).abs() < 0.001,
"Identical images should have similarity ~1.0, got {}",
similarity
);
}
/// Test cosine similarity
@@ -145,11 +151,11 @@ fn test_cosine_similarity() {
let a = vec![1.0, 0.0, 0.0];
let b = vec![1.0, 0.0, 0.0];
assert!((cosine_similarity(&a, &b) - 1.0).abs() < 0.001);
// Orthogonal vectors
let c = vec![0.0, 1.0, 0.0];
assert!((cosine_similarity(&a, &c) - 0.0).abs() < 0.001);
// Similar vectors
let d = vec![0.9, 0.1, 0.0];
let similarity = cosine_similarity(&a, &d);
@@ -175,17 +181,17 @@ fn test_template_matching() {
frame_count: 1,
},
];
// Exact match
let result = match_template(&vec![1.0, 0.0, 0.0], &templates, 0.5);
assert!(result.matched);
assert_eq!(result.matched_label, Some("default".to_string()));
assert!((result.best_similarity - 1.0).abs() < 0.001);
// Close match (should match glasses template better due to similarity)
let result = match_template(&vec![0.85, 0.15, 0.0], &templates, 0.5);
assert!(result.matched);
// No match (orthogonal)
let result = match_template(&vec![0.0, 0.0, 1.0], &templates, 0.3);
assert!(!result.matched);
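The matching behavior asserted above (exact match picks `default`, orthogonal probe fails the threshold) can be sketched as a best-of-N cosine comparison. This is an illustrative stand-in for `match_template`, with a simplified return tuple instead of the real `MatchResult`:

```rust
// Illustrative threshold matcher: score the probe against every stored
// template by cosine similarity, keep the best, accept if it clears the
// threshold. Not the project's match_template implementation.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn best_match<'a>(
    probe: &[f32],
    templates: &'a [(&'a str, Vec<f32>)],
    threshold: f32,
) -> (bool, f32, Option<&'a str>) {
    let mut best = (f32::MIN, None);
    for (label, emb) in templates {
        let s = cosine_similarity(probe, emb);
        if s > best.0 {
            best = (s, Some(*label));
        }
    }
    (best.0 >= threshold, best.0, best.1)
}

fn main() {
    let templates = vec![
        ("default", vec![1.0f32, 0.0, 0.0]),
        ("glasses", vec![0.9, 0.1, 0.0]),
    ];
    // Exact probe matches the "default" template with similarity ~1.0.
    let (matched, sim, label) = best_match(&[1.0, 0.0, 0.0], &templates, 0.5);
    assert!(matched);
    assert_eq!(label, Some("default"));
    assert!((sim - 1.0).abs() < 1e-3);
    // An orthogonal probe stays below the threshold.
    let (matched, _, _) = best_match(&[0.0, 0.0, 1.0], &templates, 0.5);
    assert!(!matched);
}
```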
@@ -199,16 +205,16 @@ fn test_embedding_averaging() {
vec![0.0, 1.0, 0.0],
vec![0.0, 0.0, 1.0],
];
let averaged = average_embeddings(&embeddings).unwrap();
// Should be 3 dimensional
assert_eq!(averaged.len(), 3);
// Should be normalized
let norm: f32 = averaged.iter().map(|&x| x * x).sum::<f32>().sqrt();
assert!((norm - 1.0).abs() < 0.01);
// All components should be roughly equal (1/sqrt(3) normalized)
let expected = 1.0 / 3.0_f32.sqrt();
for val in &averaged {
@@ -229,16 +235,19 @@ fn test_empty_embeddings_error() {
fn test_auth_service_init() {
let config = Config::default();
let auth_service = AuthService::new(config);
// This should succeed (creates template directory if needed)
// Note: May need root permissions in production
// For testing, we can verify it doesn't panic
let result = auth_service.initialize();
// On systems without /var/lib, this might fail
// That's okay for unit testing
if result.is_err() {
println!(
"Auth service init failed (expected without root): {:?}",
result
);
}
}
@@ -251,7 +260,7 @@ fn test_match_result_structure() {
distance_threshold: 0.5,
matched_label: Some("default".to_string()),
};
assert!(result.matched);
assert_eq!(result.best_similarity, 0.95);
assert_eq!(result.distance_threshold, 0.5);
@@ -262,18 +271,18 @@ fn test_match_result_structure() {
#[test]
fn test_embedding_diversity() {
let extractor = PlaceholderEmbeddingExtractor::new(128);
// Create different pattern images
let patterns: Vec<Box<dyn Fn(u32, u32) -> u8>> = vec![
Box::new(|x, _y| x as u8), // Horizontal gradient
Box::new(|_x, y| y as u8), // Vertical gradient
Box::new(|x, y| (x ^ y) as u8), // XOR pattern
Box::new(|_x, _y| 128), // Solid gray
Box::new(|x, y| ((x * y) % 256) as u8), // Multiplication pattern
];
let mut embeddings = Vec::new();
for pattern in &patterns {
let mut img = image::GrayImage::new(100, 100);
for y in 0..100 {
@@ -283,7 +292,7 @@ fn test_embedding_diversity() {
}
embeddings.push(extractor.extract(&img).unwrap());
}
// Check that different patterns produce somewhat different embeddings
// (not all identical)
for i in 0..embeddings.len() {
@@ -306,13 +315,13 @@ fn test_template_serialization() {
enrolled_at: 1704153600,
frame_count: 10,
};
// Serialize
let json = serde_json::to_string(&original).unwrap();
// Deserialize
let restored: FaceTemplate = serde_json::from_str(&json).unwrap();
// Verify
assert_eq!(original.user, restored.user);
assert_eq!(original.label, restored.label);

View File

@@ -2,13 +2,13 @@
//!
//! Tests for TPM storage, secure memory, and anti-spoofing functionality.
use linux_hello_common::FaceTemplate;
use linux_hello_daemon::anti_spoofing::{
AntiSpoofingConfig, AntiSpoofingDetector, AntiSpoofingFrame,
};
use linux_hello_daemon::secure_memory::{memory_protection, SecureBytes, SecureEmbedding};
use linux_hello_daemon::tpm::{EncryptedTemplate, SoftwareTpmFallback, TpmStorage};
use linux_hello_daemon::SecureTemplateStore;
use tempfile::TempDir;
// =============================================================================
@@ -19,7 +19,7 @@ use tempfile::TempDir;
fn test_software_tpm_initialization() {
let temp = TempDir::new().unwrap();
let mut storage = SoftwareTpmFallback::new(temp.path());
assert!(storage.is_available());
storage.initialize().unwrap();
}
@@ -29,14 +29,14 @@ fn test_software_tpm_encrypt_decrypt_roundtrip() {
let temp = TempDir::new().unwrap();
let mut storage = SoftwareTpmFallback::new(temp.path());
storage.initialize().unwrap();
let plaintext = b"Sensitive face embedding data for security testing";
let encrypted = storage.encrypt("testuser", plaintext).unwrap();
assert!(!encrypted.ciphertext.is_empty());
assert_ne!(encrypted.ciphertext.as_slice(), plaintext);
assert!(!encrypted.tpm_encrypted); // Software fallback
let decrypted = storage.decrypt("testuser", &encrypted).unwrap();
assert_eq!(decrypted.as_slice(), plaintext);
}
@@ -46,13 +46,13 @@ fn test_software_tpm_user_key_management() {
let temp = TempDir::new().unwrap();
let mut storage = SoftwareTpmFallback::new(temp.path());
storage.initialize().unwrap();
storage.create_user_key("user1").unwrap();
storage.create_user_key("user2").unwrap();
storage.remove_user_key("user1").unwrap();
// user2's key should still exist
// user1 can also still encrypt: key derivation is deterministic, so the
// removed key can simply be re-derived
let encrypted = storage.encrypt("user1", b"test").unwrap();
assert!(!encrypted.ciphertext.is_empty());
@@ -62,7 +62,9 @@ fn test_software_tpm_user_key_management() {
fn test_encrypted_template_structure() {
let template = EncryptedTemplate {
ciphertext: vec![1, 2, 3, 4, 5, 6, 7, 8],
iv: vec![
0xAA, 0xBB, 0xCC, 0xDD, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
], // 12 bytes for AES-GCM
salt: vec![0u8; 32], // 32 bytes for PBKDF2 salt
key_handle: 0x81000001,
tpm_encrypted: true,
@@ -86,7 +88,7 @@ fn test_encrypted_template_structure() {
fn test_secure_embedding_operations() {
let data = vec![0.1, 0.2, 0.3, 0.4, 0.5];
let embedding = SecureEmbedding::new(data.clone());
assert_eq!(embedding.len(), 5);
assert!(!embedding.is_empty());
assert_eq!(embedding.as_slice(), data.as_slice());
@@ -97,13 +99,13 @@ fn test_secure_embedding_similarity_metrics() {
// Identical vectors
let emb1 = SecureEmbedding::new(vec![1.0, 0.0, 0.0, 0.0]);
let emb2 = SecureEmbedding::new(vec![1.0, 0.0, 0.0, 0.0]);
let similarity = emb1.cosine_similarity(&emb2);
assert!((similarity - 1.0).abs() < 0.001);
let distance = emb1.euclidean_distance(&emb2);
assert!(distance.abs() < 0.001);
// Orthogonal vectors
let emb3 = SecureEmbedding::new(vec![0.0, 1.0, 0.0, 0.0]);
let similarity2 = emb1.cosine_similarity(&emb3);
@@ -115,7 +117,7 @@ fn test_secure_embedding_serialization() {
let original = SecureEmbedding::new(vec![1.5, -2.5, 3.14159, 0.0, f32::MAX]);
let bytes = original.to_bytes();
let restored = SecureEmbedding::from_bytes(&bytes).unwrap();
assert_eq!(original.as_slice(), restored.as_slice());
}
@@ -124,7 +126,7 @@ fn test_secure_bytes_constant_time_comparison() {
let secret1 = SecureBytes::new(vec![0x12, 0x34, 0x56, 0x78]);
let secret2 = SecureBytes::new(vec![0x12, 0x34, 0x56, 0x78]);
let secret3 = SecureBytes::new(vec![0x12, 0x34, 0x56, 0x79]);
assert!(secret1.constant_time_eq(&secret2));
assert!(!secret1.constant_time_eq(&secret3));
}
@@ -133,7 +135,7 @@ fn test_secure_bytes_constant_time_comparison() {
fn test_secure_memory_zeroization() {
let mut data = vec![0xAA_u8; 64];
memory_protection::secure_zero(&mut data);
assert!(data.iter().all(|&b| b == 0));
}
@@ -141,7 +143,7 @@ fn test_secure_memory_zeroization() {
fn test_secure_embedding_debug_hides_data() {
let embedding = SecureEmbedding::new(vec![1.0, 2.0, 3.0, 4.0, 5.0]);
let debug_str = format!("{:?}", embedding);
assert!(debug_str.contains("REDACTED"));
assert!(!debug_str.contains("1.0"));
assert!(!debug_str.contains("2.0"));
@@ -166,12 +168,12 @@ fn test_secure_template_store_unencrypted() {
let temp = TempDir::new().unwrap();
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
assert!(!store.is_encryption_enabled());
let template = create_test_template("alice", "default");
store.store(&template).unwrap();
let loaded = store.load("alice", "default").unwrap();
assert_eq!(loaded.user, "alice");
assert_eq!(loaded.embedding, template.embedding);
@@ -182,12 +184,14 @@ fn test_secure_template_store_enrollment_check() {
let temp = TempDir::new().unwrap();
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
assert!(!store.is_enrolled("bob"));
store
.store(&create_test_template("bob", "primary"))
.unwrap();
assert!(store.is_enrolled("bob"));
store.remove("bob", "primary").unwrap();
assert!(!store.is_enrolled("bob"));
}
@@ -197,11 +201,17 @@ fn test_secure_template_store_load_all() {
let temp = TempDir::new().unwrap();
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
store
.store(&create_test_template("charlie", "default"))
.unwrap();
store
.store(&create_test_template("charlie", "backup"))
.unwrap();
store
.store(&create_test_template("charlie", "outdoor"))
.unwrap();
let templates = store.load_all("charlie").unwrap();
assert_eq!(templates.len(), 3);
}
@@ -211,10 +221,10 @@ fn test_secure_template_store_load_secure_embedding() {
let temp = TempDir::new().unwrap();
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
let template = create_test_template("dave", "default");
store.store(&template).unwrap();
let secure = store.load_secure("dave", "default").unwrap();
assert_eq!(secure.len(), template.embedding.len());
assert_eq!(secure.as_slice(), template.embedding.as_slice());
@@ -227,18 +237,18 @@ fn test_secure_template_store_load_secure_embedding() {
fn create_test_frame(brightness: u8, is_ir: bool, width: u32, height: u32) -> AntiSpoofingFrame {
let size = (width * height) as usize;
let mut pixels = vec![brightness; size];
// Add realistic variation
for (i, pixel) in pixels.iter_mut().enumerate() {
let x = (i % width as usize) as u32;
let y = (i / width as usize) as u32;
// Add gradient and noise
let gradient = ((x + y) % 20) as i16 - 10;
let noise = ((i * 17 + 31) % 15) as i16 - 7;
*pixel = (brightness as i16 + gradient + noise).clamp(0, 255) as u8;
}
AntiSpoofingFrame {
pixels,
width,
@@ -253,10 +263,10 @@ fn create_test_frame(brightness: u8, is_ir: bool, width: u32, height: u32) -> An
fn test_anti_spoofing_basic_check() {
let config = AntiSpoofingConfig::default();
let mut detector = AntiSpoofingDetector::new(config);
let frame = create_test_frame(100, true, 200, 200);
let result = detector.check_frame(&frame).unwrap();
assert!(result.score >= 0.0 && result.score <= 1.0);
assert!(result.checks.ir_check.is_some());
assert!(result.checks.depth_check.is_some());
@@ -267,19 +277,19 @@ fn test_anti_spoofing_basic_check() {
fn test_anti_spoofing_ir_verification() {
let config = AntiSpoofingConfig::default();
let mut detector = AntiSpoofingDetector::new(config);
// Normal IR frame (should pass)
let normal_frame = create_test_frame(100, true, 200, 200);
let result1 = detector.check_frame(&normal_frame).unwrap();
let ir_score1 = result1.checks.ir_check.unwrap();
detector.reset();
// Very dark frame (suspicious)
let dark_frame = create_test_frame(10, true, 200, 200);
let result2 = detector.check_frame(&dark_frame).unwrap();
let ir_score2 = result2.checks.ir_check.unwrap();
// Normal frame should score higher than very dark frame
assert!(ir_score1 > ir_score2);
}
@@ -289,9 +299,9 @@ fn test_anti_spoofing_temporal_analysis() {
let mut config = AntiSpoofingConfig::default();
config.enable_movement_check = true;
config.temporal_frames = 5;
let mut detector = AntiSpoofingDetector::new(config);
// Simulate multiple frames with natural movement
for i in 0..6 {
let mut frame = create_test_frame(100, true, 200, 200);
@@ -299,9 +309,9 @@ fn test_anti_spoofing_temporal_analysis() {
// Add slight position variation
let offset = (i % 3) as u32;
frame.face_bbox = Some((50 + offset, 50, 100, 100));
let result = detector.check_frame(&frame).unwrap();
// After enough frames, movement check should be available
if i >= 3 {
assert!(result.checks.movement_check.is_some());
@@ -313,17 +323,17 @@ fn test_anti_spoofing_temporal_analysis() {
fn test_anti_spoofing_reset() {
let mut config = AntiSpoofingConfig::default();
config.enable_movement_check = true;
let mut detector = AntiSpoofingDetector::new(config);
// Process some frames
for _ in 0..5 {
let frame = create_test_frame(100, true, 200, 200);
let _ = detector.check_frame(&frame);
}
detector.reset();
// After reset, first frame should not have movement analysis
let frame = create_test_frame(100, true, 200, 200);
let result = detector.check_frame(&frame).unwrap();
@@ -334,12 +344,12 @@ fn test_anti_spoofing_reset() {
fn test_anti_spoofing_rejection_reasons() {
let mut config = AntiSpoofingConfig::default();
config.threshold = 0.95; // Very high threshold
let mut detector = AntiSpoofingDetector::new(config);
let frame = create_test_frame(100, true, 200, 200);
let result = detector.check_frame(&frame).unwrap();
if !result.is_live {
assert!(result.rejection_reason.is_some());
let reason = result.rejection_reason.unwrap();
@@ -358,11 +368,11 @@ fn test_anti_spoofing_config_customization() {
enable_movement_check: false,
temporal_frames: 10,
};
let mut detector = AntiSpoofingDetector::new(config);
let frame = create_test_frame(100, true, 200, 200);
let result = detector.check_frame(&frame).unwrap();
// Texture check should not be performed
assert!(result.checks.texture_check.is_none());
// IR and depth checks should be performed
@@ -379,7 +389,7 @@ fn test_secure_workflow_enroll_and_verify() {
let temp = TempDir::new().unwrap();
let mut store = SecureTemplateStore::new(temp.path());
store.initialize(false).unwrap();
// Simulate enrollment embedding
let enroll_embedding = vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8];
let template = FaceTemplate {
@@ -392,17 +402,17 @@ fn test_secure_workflow_enroll_and_verify() {
.as_secs(),
frame_count: 5,
};
// Store securely
store.store(&template).unwrap();
assert!(store.is_enrolled("secure_user"));
// Load as secure embedding for matching
let stored = store.load_secure("secure_user", "default").unwrap();
// Simulate authentication embedding (slightly different)
let auth_embedding = SecureEmbedding::new(vec![0.11, 0.19, 0.31, 0.39, 0.51, 0.59, 0.71, 0.79]);
// Compare securely
let similarity = stored.cosine_similarity(&auth_embedding);
assert!(similarity > 0.9); // Should be very similar
@@ -414,6 +424,6 @@ fn test_memory_protection_basics() {
let data = vec![0xABu8; 1024];
let result = memory_protection::lock_memory(&data);
assert!(result.is_ok()); // Should not error, even if lock fails
let _ = memory_protection::unlock_memory(&data);
}