Unexpected Issues During Pilot Production

    During pilot production and production line ramp-up, many issues do not appear in the way teams initially expect. Sometimes it starts with a small fluctuation at a test station, or a comment from a line engineer saying, “This result looks a bit unusual.”
    However, when takt time, yield targets, and delivery milestones are all under pressure, these seemingly minor anomalies can quickly be amplified and begin to affect the overall production rhythm.

    We have been working with Huaqin as a long-term partner. As projects progressed, the challenges encountered on the production line became increasingly complex. On site, our role gradually extended from basic production test support to problem analysis and cross-team coordination during pilot production. In many cases, the focus was not simply on whether a test station was functioning, but on how to absorb uncertainties early and prevent them from disrupting delivery schedules.

    The following two experiences both took place during the pilot production phase of Huaqin projects. They are not exceptional cases. On the contrary, they represent the kind of everyday issues that most accurately reflect the realities of production line delivery.

    Airtightness Testing Issues in Project α

    During the pilot ramp-up of Project α, the airtightness test station for the audio microphone showed clear instability. For the same batch of products, pass rates fluctuated noticeably across repeated tests, frequently interrupting the station’s operating rhythm. Initial troubleshooting naturally focused on the test system itself, including software logic, equipment status, and basic parameter settings. It soon became clear, however, that the issue did not originate from these areas.

    As on-site verification continued, we gradually confirmed that the anomaly was more closely related to the product’s mechanical structure and material characteristics. This model used a relatively uncommon combination of materials. A sealing solution that had worked well in previous projects could not maintain consistency during actual compression. Even slight variations in applied pressure were enough to influence test results.

    Once the direction of the problem was clarified, the on-site approach shifted accordingly. Rather than repeatedly adjusting the existing solution, we returned to verifying the compatibility between materials and structure. Over the following period, we worked together with the customer’s engineering team on the production line, testing multiple material options. This included different types of silicone and cushioning materials, variations in silicone hardness, and adjustments to plug compression methods. Each step was evaluated based on real test results before moving forward.

    The process was not fast, nor was it particularly clever. In essence, it came down to repeatedly confirming one question: could this solution run stably under real production line conditions?
    Ultimately, by introducing a customized soft silicone gasket and making fine parameter adjustments, the airtightness test results gradually stabilized. The station was able to run continuously, and the pilot production rhythm was restored.

    Figure 1. Test Fixture Diagram

    Noise Floor Issues in Project β

    Compared with the airtightness issue in Project α, the noise floor anomaly encountered during pilot production in Project β was more complex to diagnose.

    During headphone pilot production for Project β at Huaqin’s Nanchang site, the noise floor test station repeatedly triggered alarms. Test data showed that measured noise levels consistently exceeded specification limits, significantly impacting the pilot production schedule. This model used high-sensitivity drivers along with a new circuit design, making the potential noise sources inherently more complex. It was not a problem that could be resolved by simply adjusting a single parameter.

    Rather than focusing solely on the test station, we worked with the customer’s audio team to investigate the issue from a system-level signal chain perspective. The process involved sequentially testing different shielding cables, adjusting grounding strategies, evaluating various Bluetooth dongle connection methods, and isolating potential power supply and electromagnetic interference sources within the test environment.

    Through continuous spectrum analysis and comparative testing, the scope of the issue was gradually narrowed. It was ultimately confirmed that the elevated noise floor was primarily related to power interference from the Bluetooth dongle, combined with differences in product behavior across operating states. After this conclusion was reached, relevant configurations were adjusted and validated on site. As a result, noise floor measurements returned to a stable and controllable range, allowing pilot production to proceed.
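    The spectrum-comparison approach described above can be illustrated with a short sketch. This is not the analysis actually used on site; it is a minimal example with synthetic signals (a quiet baseline capture and a suspect capture containing an interfering tone), showing how comparing averaged magnitude spectra narrows interference down to a frequency band.

```python
import numpy as np

def band_spectrum_db(signal, fs, nfft=4096):
    """Frame-averaged magnitude spectrum in dB (Welch-style averaging)."""
    window = np.hanning(nfft)
    frames = [signal[i:i + nfft] * window
              for i in range(0, len(signal) - nfft + 1, nfft // 2)]
    avg = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return freqs, 20 * np.log10(avg + 1e-12)

def localize_interference(freqs, baseline_db, suspect_db, delta_db=10.0):
    """Frequencies where the suspect capture rises well above the baseline."""
    return freqs[(suspect_db - baseline_db) > delta_db]

# Synthetic demo: broadband floor, plus a 1 kHz interference tone
# that appears only in the suspect capture.
fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
baseline = 1e-3 * rng.standard_normal(fs)
suspect = baseline + 0.05 * np.sin(2 * np.pi * 1000 * t)

freqs, base_db = band_spectrum_db(baseline, fs)
_, susp_db = band_spectrum_db(suspect, fs)
hits = localize_interference(freqs, base_db, susp_db)
print(f"interference concentrated near {np.median(hits):.0f} Hz")
```

    In a real investigation the "baseline" would be a capture with the suspected source (for example, the Bluetooth dongle's power supply) disconnected, so that any band exceeding the threshold points directly at that source.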

    Figure 2. Working with the customer’s engineers to resolve the issue on site

    Common Characteristics of Pilot Production Issues

    Looking back at these two pilot production experiences, it becomes clear that despite their different manifestations, the underlying diagnostic processes were quite similar. Whether dealing with airtightness instability or excessive noise, the root cause could not be isolated to a single module. Effective resolution required on-site evaluation across mechanical structure, materials, system operating states, and test conditions.

    During pilot production, issues of this nature rarely come with ready-made answers. They are also unlikely to be resolved through a single verification cycle. More often, progress is made through repeated trials, comparisons, and eliminations, gradually converging on a solution that is genuinely suitable for long-term production line operation.

    Production line delivery rarely follows a perfectly smooth path. In many cases, what ultimately determines whether a project can move forward as planned are the unexpected issues that must be addressed immediately when they arise. In our long-term collaboration with customers, our work often takes place at these critical moments: working alongside engineering teams to stabilize processes, maintain momentum, and keep projects moving forward step by step. If you would like CRYSOUND to support your production line as well, please fill out the Get in Touch form below.

    Abnormal Noise Detection: From Human Ears to AI

    With the rapid growth of consumer audio products such as headphones, loudspeakers, and wearables, users’ expectations for “good sound” have moved far beyond simply being able to hear clearly. Listeners now want sound that is comfortable, clean, and free from any extra rustling, clicking, or scratching noises. In most factories, however, abnormal noise testing still relies heavily on human listening. Shift schedules, subjective differences between operators, fatigue, and emotional state all directly affect yield rate and brand reputation. In this article, based on CRYSOUND’s real project experience with AI listening inspection for TWS earbuds, we discuss how to use AI to “free human ears” from the production line and make listening tests truly stable, efficient, and repeatable.

    Why Is the Audio Listening Test So Labor-Intensive?

    In traditional setups, the production line usually follows this pattern: automatic electro-acoustic test + manual listening recheck. The pain points of manual listening are very clear:

    - Strong subjectivity: Different listeners have different sensitivity to noises such as “rustling” or “scratching”. Even the same person may judge inconsistently between morning and night shifts.
    - Poor scalability: Human listening requires intense concentration, and fatigue sets in quickly over long periods, making it hard to support high UPH in mass production.
    - High training cost: A qualified listener needs systematic training and long-term experience accumulation, and it takes time for new operators to get up to speed.
    - Results hard to trace: Subjective judgments are difficult to turn into quantitative data and history, which makes later quality analysis and improvement more challenging.
    That’s why the industry has long been looking for a way to use automation and algorithms to handle this work more stably and economically, without sacrificing the sensitivity of the “human ear.”

    From “Human Ears” to “AI Ears”: CRYSOUND’s Overall Approach

    CRYSOUND’s answer is a standardized test platform built around the CRYSOUND abnormal noise test system, combined with AI listening algorithms and dedicated fixtures to form a complete, integrated hardware–software solution. Key characteristics of the solution:

    - Standardized, multi-purpose platform: Modular design that supports both conventional SPK audio/noise tests and abnormal noise/AI listening tests.
    - 1-to-2 parallel testing: A single system can test two earbuds at the same time. In typical projects, UPH can reach about 120 pcs.
    - AI listening analysis module: By collecting good-unit data to build a model, the system automatically identifies units with abnormal noise, significantly reducing manual listening stations.
    - Low-noise test environment: A high-performance acoustic chamber plus an inner-box structure control the background noise to around 12 dBA, providing a stable acoustic environment for the AI algorithm.

    In simple terms, the solution is: one standardized test bench + one dedicated fixture + one AI listening algorithm.

    Typical Test Signal Path

    Centered on the test host, the unified “lab + production line” chain looks like this:

    1. PC host → CRY576 Bluetooth Adapter → TWS earphones
    2. The earphones output sound, which is captured by the CRY718-S01 Ear Simulator
    3. The signal is acquired and analyzed by the CRY6151B Electroacoustic Analyzer
    4. The software calls the AI listening algorithm module, performs automatic analysis on the WAV data, and outputs a PASS/FAIL result

    Fixtures and Acoustic Chamber: Minimizing Station-to-Station Variation

    Product placement posture and coupling conditions often determine test consistency.
    The solution reduces test variation through fixture and chamber design, fixing the test conditions as much as possible:

    - Fixture: soft rubber shaped recess. The shaped recess ensures that the earbud is always placed against the artificial ear in the same posture, reducing position errors and test variation. The soft rubber improves sealing and prevents mechanical damage to the earphones.
    - Acoustic box: inner-box damping and acoustic isolation. This reduces the impact of external mechanical vibration and environmental noise on the measurement results.

    Professional-Grade Acoustic Hardware (Example Configuration)

    - CRY6151B Electroacoustic Analyzer: frequency range 20 Hz–20 kHz, low background noise and high dynamic range, integrating both signal output and measurement input.
    - CRY718-S01 Ear Simulator Set: meets relevant IEC/ITU requirements. Under appropriate configurations and conditions, the system’s own noise can reach the 12 dBA level.
    - CRY725D Shielded Acoustic Chamber: integrates RF shielding and acoustic isolation, tailored for TWS test scenarios.

    AI Algorithm: How Unsupervised Anomaly Detection “Recognizes the Abnormal”

    Training Flow: Only “Good” Earphones Are Needed

    CRYSOUND’s AI listening solution uses an unsupervised anomalous sound detection algorithm. Its biggest advantage is that it does not require collecting many abnormal samples in advance; only normal, good units are needed to train a model that “understands good sound”. In real projects, the typical steps are as follows:

    1. Prepare no fewer than 100 good units.
    2. Under the same conditions as mass production testing, collect WAV data from these units.
    3. Train the model using the good-unit data (for example, 100 samples of 10 seconds each; training usually takes less than 1 minute).
    4. Use the model to test both good and defective samples, compare the distribution of the results, and set the decision threshold.
    5. After training, the model can be used directly in mass production.
    Prediction time for a single sample is under 0.5 seconds. Engineers do not need to manually label each type of abnormal noise, which greatly lowers the barrier to introducing the system into a new project.

    Principle in Brief: Let the Model “Retell” a Normal Sound First

    Roughly speaking, the algorithm works in three steps:

    1. Time-frequency conversion: convert the recorded waveform into a time-frequency spectrogram (a “picture of the sound”).
    2. Deep-learning-based reconstruction: use a deep learning model trained on normal earphones to reconstruct the spectrogram. For normal samples, the model can largely “reproduce” the original spectrogram; for samples containing abnormal noise, the abnormal parts are difficult to reconstruct.
    3. Difference analysis: compare the original spectrogram with the reconstructed one and calculate the difference along the time and frequency axes to obtain two difference curves. Abnormal samples show prominent peaks or concentrated energy areas on these curves.

    In this way, the algorithm develops a strong fit to the “normal” pattern and becomes naturally sensitive to any deviation from it, without needing a separate model for each type of abnormal noise. In actual projects, this algorithm has been verified in more than 10 different projects, achieving a defect detection rate of up to 99.9%.

    Practical Advantages of AI Listening

    - No dependence on abnormal samples: no need to spend enormous effort collecting various “scratching” or “electrical” noise examples.
    - Adapts to new abnormalities: even if a new type of abnormal sound appears that was not present during training, the algorithm can still detect it as long as it differs significantly from the normal pattern.
    - Continuous learning: new good-unit data can be added over time so that the model adapts to small drifts in the line and environment.
    - Greatly reduced manual workload: instead of “everyone listening,” you move to “AI scanning + small-batch sampling inspection,” freeing people to focus on higher-value analysis and optimization work.

    A Typical Deployment Case: Real-World Practice on an ODM TWS Production Line

    On one ODM’s TWS production line, the daily output per line is on the order of thousands of sets. To improve yield and reduce the burden of manual listening, the customer introduced the AI abnormal-noise test solution:

    Item | Before the AI abnormal-noise test solution | After the AI abnormal-noise test solution
    Test method | 4 manual listening stations; abnormal noises judged purely by human listeners | 4 AI listening test systems, each testing one pair of earbuds
    Manpower configuration | 4 operators (full-time listening) | 2 operators (loading/unloading + rechecking abnormal units)
    Quality risk | Missed defects and escapes due to subjectivity and fatigue | During pilot runs, AI system results matched manual sampling; stability improved significantly
    Work during pilot stage | Define manual listening procedures | Collect samples, train the AI model, set thresholds, and validate feasibility via manual sampling
    Daily line capacity (per line) | Limited by the pace of manual testing | About 1,000 pairs of earbuds per day
    Abnormal-noise detection rate | Missed defects existed, not quantified | ≈ 99.9%
    False-fail rate (good units misjudged) | Affected by subjectivity and fatigue, not quantified | ≈ 0.2%

    On this line, AI listening has essentially taken over the original manual listening tasks. Not only has the listening headcount been cut in half, but the risk of missed defects has been significantly reduced, providing data support for scaling the solution across more production lines in the future.
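    The train-on-good-units flow described in the sections above can be sketched in a few lines. This is only an illustration of the general idea, not CRYSOUND’s production algorithm: the deep learning reconstruction model is replaced here with a simple PCA reconstruction over spectrogram frames, the signals are synthetic, and all function names are hypothetical.

```python
import numpy as np

def spectrogram(wave, nfft=256, hop=128):
    """Magnitude time-frequency representation (time x frequency)."""
    window = np.hanning(nfft)
    frames = [wave[i:i + nfft] * window
              for i in range(0, len(wave) - nfft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def fit_good_model(good_waves, n_components=8):
    """'Learn good sound': a PCA basis over spectrogram frames of good units."""
    frames = np.vstack([spectrogram(w) for w in good_waves])
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_score(wave, model):
    """Reconstruction error: frames the basis cannot 'retell' score high."""
    mean, basis = model
    frames = spectrogram(wave) - mean
    recon = (frames @ basis.T) @ basis
    return float(np.mean((frames - recon) ** 2))

# Synthetic demo: good units are a clean tone; the defect adds periodic clicks.
fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
good_units = [np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(fs)
              for _ in range(20)]
model = fit_good_model(good_units)

defective = np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(fs)
defective[::1600] += 1.0  # impulsive clicks: an "abnormal noise"

scores_good = [anomaly_score(w, model) for w in good_units]
threshold = max(scores_good) * 1.5  # margin above the worst good unit
print(anomaly_score(defective, model) > threshold)  # defective unit flagged
```

    The key property carried over from the real system is that the model is fitted only to good units, so any sound that deviates from the learned “normal” pattern, including defect types never seen before, produces a high reconstruction error.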
    Deployment Recommendations: How to Get the Most Out of This Solution

    If you are considering introducing AI-based abnormal-noise testing, you can start from the following aspects:

    - Plan sample collection as early as possible. Begin accumulating “confirmed no abnormal noise” good-unit waveforms during the trial build / small pilot stage, so you can get a head start on AI training later.
    - Minimize environmental interference. The AI listening test station should be placed away from high-noise equipment such as dispensing machines and soldering machines. Turning off alarm buzzers, defining material-handling aisles that avoid the test stations, and reducing floor vibration all help lower false-detection rates.
    - Keep test conditions consistent. Use the same isolation chamber, artificial ear, fixtures, and test sequence in both the training and mass-production phases to avoid model transfer issues caused by environmental differences.
    - Maintain a period of human–machine coexistence. In the early stage, adopt a “100% AI + manual sampling” strategy, then gradually transition to “100% AI + a small amount of DOA recheck” to minimize deployment risk.

    Conclusion: Let Testing Return to “Looking at Data” and Put People Where They Create More Value

    AI listening tests are, at their core, an industrial upgrade from experience-based human listening to data- and algorithm-driven testing. With standardized CRYSOUND test platforms, professional acoustic hardware, product-specific fixtures, and AI algorithms, CRYSOUND is helping more and more customers transform time-consuming, labor-intensive, and subjective manual listening into something stable, quantifiable, and reusable. If you would like to learn more about abnormal-noise testing for earphones, or are planning to try AI listening on your next-generation production line, please use the “Get in touch” form below.
Our team can share recommended settings and an on-site workflow tailored to your production conditions.

    AR Glasses Production-Line Testing Upgrade – Multi-Station Audio & VPU Solution

    As the AR glasses market transitions from proof of concept to large-scale commercialization, product capabilities in audio and haptic interaction continue to expand, driving increased demands for production-line testing. With key modules such as audio and the VPU (Vibration Processing Unit), AR glasses production-line testing is evolving from simple functional validation to consistency control aimed at enhancing real-world user experience. Based on actual mass production project experience, this article introduces audio and VPU testing solutions for different workstations, with a focus on free-field audio testing, VPU deployment, and fixture design, providing a practical reference for scaling AR glasses manufacturing.

    Accelerating Market Expansion of AR Glasses and New Trends in Production-Line Testing

    As smart glasses products mature, their functional boundaries are expanding rapidly. According to various industry reports, the shipment volume and investment scale of AR glasses continue to increase, with the market shifting from concept validation to commercialization. Products driven by companies like Meta increasingly support voice interaction, calls, notifications, and recording, supplementing functions traditionally carried out by smartphones and earphones. This shift has transformed AR glasses from a low-frequency conceptual product into a high-frequency wearable interaction terminal.

    Consequently, audio capabilities have become a core component of the smart glasses experience, directly impacting voice interaction and call quality. At the same time, vibration and haptic feedback have been introduced to enhance interaction confirmation and user perception. As these capabilities become commonplace in mass-produced products, production-line testing is no longer focused only on whether basic functions work; it must now handle multiple critical capabilities, such as audio and VPU, simultaneously.
    This shift presents new challenges for upgrading production-line testing solutions.

    Audio Testing Solutions for Multi-Station Production Lines

    Audio is one of the functions that most directly influences the user experience of AR glasses, and its production-line testing needs to balance accuracy, consistency, and production efficiency. In a multi-station production environment, audio testing is often distributed across several workstations depending on the assembly phase. At the temple or frame workstations, audio testing focuses on validating the basic performance of individual microphones or speakers, ensuring that key components meet requirements early in the assembly process and avoiding costly rework later. At the final assembly workstation, the focus shifts to overall audio performance and system-level coordination.

    While different workstations focus on different aspects, the fixture positioning, acoustic environment control, and testing process design need to maintain consistent logic throughout. CRYSOUND’s AR glasses audio testing solutions are designed to address this need, with a unified testing architecture that allows flexible deployment across different workstations while maintaining stable and consistent results. The solutions can be divided into the following two types, meeting the aesthetic and UPH requirements of different production lines.
    Drawer-Type Single-Unit (1-to-1)

    - Easy automation integration
    - Standing operation for convenient loading and unloading
    - Simultaneous testing of SPK and MIC (airtightness), supporting multi-MIC scenarios
    - Serial testing for left and right SPK, parallel testing for multiple MICs
    - Supports Bluetooth, USB ADB, and Wi-Fi ADB communication
    - Average cycle time (CT): 100 s | UPH: 36

    Clamshell Dual-Unit (1-to-2)

    - Parallel dual-unit testing for improved efficiency
    - Ergonomic seated operation design
    - Simultaneous testing of SPK and MIC (airtightness), supporting multi-MIC scenarios
    - Serial testing for left and right SPK (single box), parallel testing for multiple MICs
    - Supports Bluetooth, USB ADB, and Wi-Fi ADB communication
    - Average cycle time (CT): 150 s | UPH: 70

    Speaker EQ in AR Glasses: From Pressure Field to Free Field

    In traditional earphone products, speaker EQ is usually built in a relatively stable pressure-field environment, where ear coupling and wearing style have a well-controlled impact on the acoustic environment. In contrast, AR glasses typically use open speaker structures, with no sealed cavity between the driver and the ear, making their acoustic performance closer to free-field characteristics. This structural difference makes the frequency response of AR glasses speakers more sensitive to sound radiation direction, structural reflections, and wearing posture, and means their EQ strategy cannot simply follow earphone product experience.

    In the production-line testing and tuning process, the speaker EQ for AR glasses needs to be evaluated and validated under free-field conditions. Due to the open acoustic structure, the frequency response is more susceptible to structural reflections, assembly tolerances, and variations in wearing posture, making it difficult to rely solely on hardware consistency to ensure stable listening across different products.
    By introducing EQ tuning, these systematic deviations can be compensated without changing the structural design, improving the consistency of audio performance during mass production. The focus of the testing solution is not to pursue idealized sound quality, but to capture real acoustic differences under stable and repeatable free-field testing conditions, providing reliable data for EQ parameter validation. CRYSOUND supports customized EQ algorithms. In one mass production project, speaker EQ calibration was introduced at the final test station under free-field conditions, and the results were accepted by the customer, validating the applicability and practical value of this approach for glasses products.

    VPU Testing Solutions for AR/Smart Glasses

    Why AR Glasses Include a VPU (Vibration Processing Unit)

    As AR/smart glasses increasingly support voice interaction, calls, and notifications, relying on audio feedback alone is no longer enough. In noisy environments, privacy-sensitive scenarios, or with low-volume prompts, users need a feedback method that does not disturb others but is sufficiently clear. This is where the VPU comes in. Unlike traditional earphones, glasses are not always tightly coupled to the ear, making audio prompts more susceptible to environmental noise. By using vibration or haptic feedback, the system can convey status confirmations, interaction responses, or notifications without increasing volume or relying on screens. The VPU therefore becomes a key component for supplementing, or even replacing, some audio feedback in AR glasses.

    Primary Roles of the VPU in AR Glasses

    In current mass-produced smart glasses designs, the VPU typically serves the following functions:

    - Interaction confirmation feedback: successful voice wake-up, completed command recognition, or the start/stop of recording or photo taking.
    - Silent notifications: vibration feedback in scenarios where audio prompts are unsuitable.
    - Enhanced experience: boosting interaction certainty and immersion when combined with audio feedback.

    These functions have made the VPU an essential capability in the AR glasses interaction experience, rather than just an optional feature.

    Typical VPU Placement in AR Glasses (Why the Nose Bridge/Pads)

    Structurally, the VPU is typically located near the nose bridge or nose pads for three main reasons:

    - Proximity to sensitive body areas: the nose bridge is sensitive to small vibrations, providing high feedback efficiency.
    - Stable and consistent coupling: compared with the temples, the nose bridge has more stable and consistent contact with the face, ensuring better vibration transmission.
    - No interference with the audio layout: the placement avoids interference with speakers and microphones in the temple region.

    Therefore, during production-line testing, the VPU is often tested as an independent target, requiring dedicated verification at the frame or final assembly stage.

    VPU Testing Implementation and Consistency Control on the Production Line

    Based on the functional positioning and structural characteristics of the VPU in AR glasses, VPU testing is typically scheduled according to the product form and assembly progress in mass production. In some cases, testing may even be moved earlier in the process to identify potential VPU issues before they are exacerbated in subsequent assembly stages.

    It is important to note that production-line testing environments differ fundamentally from laboratory validation environments. In laboratory testing, the VPU is typically tested as a standalone component under simplified conditions and higher excitation levels (e.g., 1 g). In production-line environments, however, the VPU is already integrated into the frame or complete product, requiring excitation conditions that closely mimic real-world wearing scenarios.
    In practice, production-line VPU testing typically uses an excitation range of 0.1–0.2 g over 100 Hz–2 kHz, verifying the consistency of VPU performance under realistic physical conditions.

    CRYSOUND’s AR glasses VPU production-line testing solution uses the CRY6151B Electroacoustic Analyzer as the testing and analysis platform. A vibration table provides stable excitation, and the product VPU’s vibration response is captured in sync with a reference accelerometer. Software analysis then evaluates key parameters such as frequency response (FR) and total harmonic distortion (THD). This test architecture balances testing effectiveness and production-line throughput, meeting the deployment needs for VPU testing at different stations.

    Compared with audio testing, VPU testing is more sensitive to test configuration and fixture design, with less room for error and greater difficulty in consistency control. Based on experience from multiple projects, fixture design must fully account for structural differences in locations such as the nose bridge and nose pads. It is important to prioritize materials and contact methods that facilitate vibration transmission, and to design standardized fixture shapes that keep the fixture’s center of gravity aligned with the vibration table’s working plane, minimizing additional variables at the structural level. Following these design principles improves the stability and repeatability of VPU test results in a production-line environment, providing reliable support for validating the product’s VPU capabilities.

    From Functional Testing to Experience Constraints

    In AR glasses production lines, the role of testing is evolving. In the past, audio or vibration modules were more likely to be treated as independent functions, with the goal of confirming whether they were “functional.”
    With the current form of the product, however, these modules directly influence voice interaction, wearing comfort, and overall experience, and test results now serve as a prerequisite for overall product performance. Audio and VPU modules are no longer just performance verification items; they play a role in the consistency control of the user experience. The interaction between audio performance, vibration feedback, and structural assembly means that production-line testing needs to identify potential experience-affecting issues in advance, rather than just filtering out problems at the final inspection stage. This change is pushing test strategies from “functional pass” to “experience control.” If you would like to learn more about AR glasses audio testing solutions, or to discuss your test process and inspection targets, please use the “Get in touch” form below. Our team can share recommended settings and an on-site workflow tailored to your production conditions.
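    As a rough illustration of the THD evaluation mentioned for the VPU station, the sketch below computes THD from a captured response by extracting the fundamental and harmonic peaks from an FFT. It is a minimal stand-in for the analyzer software, using a synthetic 500 Hz response with 1% injected second-harmonic distortion; the window sizes and harmonic count are arbitrary choices.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """THD from an FFT: harmonic RMS over fundamental RMS, in percent."""
    nfft = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(nfft)))
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)

    def peak(f):
        # strongest bin within a small window around the target frequency
        idx = np.where((freqs > f * 0.98) & (freqs < f * 1.02))[0]
        return spectrum[idx].max()

    fund = peak(f0)
    harmonics = [peak(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / fund

# Synthetic demo: a 500 Hz vibration response with 1% second-harmonic distortion.
fs = 48000
t = np.arange(fs) / fs  # 1 s capture
response = np.sin(2 * np.pi * 500 * t) + 0.01 * np.sin(2 * np.pi * 1000 * t)
print(round(thd_percent(response, fs, 500.0), 2))  # → 1.0
```

    On a real station the same computation would be repeated at each excitation frequency in the 100 Hz–2 kHz sweep, with the reference accelerometer used to normalize the excitation level before the FR and THD limits are applied.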

    Get in touch

    If you are interested in our products or have questions about them, book a demo and we will be glad to show you how they work, which solutions they can be part of, and how they might fit your needs and organization.
