Job Overview
This position owns quality assurance for Android AI products. Its core mission is to drive the evolution of our automated testing frameworks through disciplined engineering. You will participate directly in system architecture migration, solve adaptation challenges across multiple chipset platforms, and maintain the infrastructure that supports large-scale AI testing.
Core Responsibilities
1. Test Engineering & Migration
Drive Architecture Upgrades: Participate in the maintenance and new-feature development of the ARTS framework, leading configuration unification and the migration from legacy versions to the new version.
Cross-Platform Integration: Perform test integration for various underlying chipset frameworks and solve technical challenges arising from hardware differences.
Develop Specialized Test Modes: Implement new features such as Non-LLM execution modes and audio testing capabilities.
2. Device Lab Management
High Availability Management: Operate and maintain the lab automation platform, ensuring the server achieves a 99.9% uptime target.
Hardware Resource Optimization: Proactively identify and resolve laboratory hardware bottlenecks and manage the configuration of diverse AI test devices.
3. Model Quality & Visibility
Automated Monitoring Development: Build and maintain data-visualization dashboards that track KPIs such as evaluation scores and runtimes for each model.
Metric Benchmarking: Compute advanced model-evaluation metrics and compare results against established baselines.
4. Non-Functional Validation
Hardware Performance Evaluation: Validate on-device resource utilization (memory, CPU, and thermal behavior) during AI tasks to ensure test stability under high-load scenarios; see the illustrative sketch below.
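For illustration only, here is a minimal sketch of what this kind of on-device resource validation can look like: it polls a connected device over adb for per-process memory, system-wide CPU load, and thermal status while an AI workload runs. The package name, sampling interval, and choice of dumpsys services are assumptions made for the example, not part of this role's actual tooling.

```java
// Illustrative sketch only: periodically samples memory, CPU, and thermal data
// from a connected Android device over adb during an AI workload.
// The package name and sampling interval below are hypothetical placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DeviceResourceSampler {

    // Runs an adb shell command and returns its combined output as text.
    private static String adbShell(String... shellArgs) throws Exception {
        String[] cmd = new String[2 + shellArgs.length];
        cmd[0] = "adb";
        cmd[1] = "shell";
        System.arraycopy(shellArgs, 0, cmd, 2, shellArgs.length);
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String pkg = "com.example.ai.app"; // hypothetical package under test
        int samples = 5;                   // number of snapshots to collect
        long intervalMs = 10_000;          // sampling interval between snapshots

        for (int i = 0; i < samples; i++) {
            // Per-process memory footprint (PSS breakdown).
            System.out.println(adbShell("dumpsys", "meminfo", pkg));
            // System-wide CPU load per process.
            System.out.println(adbShell("dumpsys", "cpuinfo"));
            // Thermal status; availability varies by Android version and OEM.
            System.out.println(adbShell("dumpsys", "thermalservice"));
            Thread.sleep(intervalMs);
        }
    }
}
```

In practice the raw dumpsys output would be parsed and logged alongside the test results so that memory, CPU, and thermal trends can be correlated with test stability under load.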
Requirements
Automation Framework Maintenance & Development: Experience contributing to or maintaining automated test frameworks, including implementing new features and handling day-to-day technical maintenance. Proficiency in C++ and Java is required.
Android System & AI Fundamentals: Familiarity with Android AI system architecture and an understanding of basic model performance metrics (e.g., Perplexity; see the sketch after this list).
Cross-Platform Hardware Integration: Technical background in handling integration challenges across different chipsets to unify and refine test configurations.
Data Visualization & KPI Tracking: Ability to develop and maintain dashboards to effectively monitor key performance indicators for product releases.
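As a point of reference for the Perplexity metric mentioned above, the following is a minimal, hypothetical sketch of how perplexity is derived from per-token log-probabilities. The sample values are invented purely for illustration and do not reflect any specific model or test suite used by the team.

```java
// Minimal sketch: perplexity computed from per-token natural-log probabilities.
// A real evaluation would use log-probabilities produced by the model under test
// on a held-out reference text; the values below are made up for illustration.
public class PerplexityExample {

    // Perplexity = exp(-(1/N) * sum of per-token log probabilities).
    static double perplexity(double[] tokenLogProbs) {
        double sum = 0.0;
        for (double lp : tokenLogProbs) {
            sum += lp;
        }
        return Math.exp(-sum / tokenLogProbs.length);
    }

    public static void main(String[] args) {
        double[] logProbs = {-1.2, -0.4, -2.3, -0.9}; // hypothetical per-token log p
        System.out.printf("Perplexity: %.3f%n", perplexity(logProbs));
        // Lower perplexity means the model assigns higher probability to the
        // reference text, i.e. better language-modeling quality on that data.
    }
}
```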
Plus
Large-Scale Architecture Migration: Experience leading or participating in test framework version migrations.
Deep AI Model Evaluation: Familiarity with both LLM and Non-LLM execution modes.
International Project Collaboration: Ability to manage task handoffs and collaborate with teams in the US and other overseas regions to keep projects on schedule.