This integrated single-station EoL test solution enables automotive HVAC air vent suppliers to perform NVH (noise/BSR) testing, motor electrical testing, and vane presence detection in a single inspection step, helping to improve overall test efficiency and reduce labor dependency.

System Block Diagram of the Automotive HVAC Air Vent Test Solution

Modern automotive HVAC air vent assemblies increasingly integrate multiple drive motors, multi-row vanes (louvers), and smart features such as automatic airflow control and voice interaction. As a result, upstream process variation or assembly defects can translate directly into vehicle-level concerns—typically perceived as abnormal noise, buzz/squeak/rattle (BSR), airflow direction mismatch, or reduced airflow caused by missing or misassembled vanes. To reduce rework and prevent customer complaints, suppliers increasingly require 100% end-of-line (EoL) testing on the production line, covering NVH (noise/BSR), motor electrical testing, and vane presence detection.

CRYSOUND Single-Station EoL Test Solution

CRYSOUND's automotive HVAC air vent EoL test solution enables customers to perform single-station, 100% testing of noise/BSR, motor electrical characteristics, and vane presence. The solution integrates CRYSOUND's in-house hardware and software (the CRY3203-S01 measurement microphone set, SonoDAQ, the CRY7869 acoustic test box, and OpenTest) and combines electroacoustic measurement with abnormal noise analysis (sound quality and AI-based algorithms) to identify noise/BSR issues that FFT and Leq may miss. It also integrates motor electrical testing and vane presence detection, enabling one-time clamping and a single OK/NG decision within the same sound-insulated EoL station.

Schematic of the HVAC Air Vent Test Fixture

Customer Results: Efficiency, Labor, and Quality Gains

- Replaced manual listening with machine-based detection, enabling unified criteria with quantitative, traceable results.
- One fixture, three test positions: supports parallel or mixed testing of left/center/right dashboard air vents, improving efficiency by more than 100%.
- Variant support via fixture changeover: reuse the same test station across different products, reducing repeated capital investment.
- One-operator, one-click inspection: a single line can save 1–2 long-term operators.

EoL Test Equipment for Automotive HVAC Air Vents

Typical Target Users

This solution is designed for suppliers of motorized air vents and other motor-driven interior components, such as Valeo S.A., Ningbo Joysonquin Automotive Systems Co., Ltd., and Jiangsu Xinquan Automotive Trim Co., Ltd.

Main Hardware and Software Configuration

| Product | Qty. | Note |
| --- | --- | --- |
| CRY3203-S01 Measurement Microphone Set | 1 | Measurement microphone set |
| CRY5820 SonoDAQ Pro | 1 | Audio analyzer |
| CRY7869 Acoustic Test Box | 1 | Test environment |
| OpenTest (http://www.opentest.com) | 1 | Software |
| Fixture | 1 | Customizable |
| PC & Monitor | 1 | Optional |

Feel free to fill in the form below ↓ to contact us. Our team can share application-specific EoL testing recommendations based on your automotive HVAC air vent requirements.
In industrial production and environmental monitoring, excessive noise implies compliance risks or potential complaint disputes. To handle this, you need a professional sound level meter (SLM) that provides "credible, traceable, and analyzable data." Faced with price differences ranging from hundreds to tens of thousands of dollars, and a complex array of parameters, how do you choose without making costly mistakes? We have distilled the complex selection process into a "4-Step Decision Method" to help you quickly find the balance between your budget and your needs.

Step 1: Define the "Purpose" — Does the data need to be externally accountable?

This is the first watershed in selection, directly determining the equipment's accuracy class.

Scenario A: Data must be "externally accountable"
- Typical use cases: environmental law enforcement, third-party testing, laboratory R&D, legal arbitration.
- Must choose: a Class 1 sound level meter.
- Key reason: the difference between Class 1 and Class 2 goes beyond reading errors; the core difference lies in the frequency response range. Class 1 devices (e.g., CRY2851) typically cover a wide band of 10 Hz – 20 kHz, capturing extremely low-frequency vibrations and ultra-high-frequency noise, fully meeting strict standards such as IEC 61672-1:2013 Class 1. Class 2 devices usually have a narrower frequency range (e.g., 20 Hz – 8 kHz) with potential attenuation at the high or low end, making them unsuitable for strict metering or certification scenarios.

Scenario B: Data used only for "internal management"
- Typical use cases: workshop inspections, equipment spot checks, community surveys, internal process comparisons.
- Recommended: a Class 2 sound level meter.
- Core advantage: it meets the vast majority of industrial and environmental noise measurement needs and is the ideal choice for internal control.

Step 2: Clarify the "Indicators" — What exactly are you measuring?

Selecting the wrong indicators renders the data useless.
Focus on the following two points:

Frequency Weighting (A, C, Z): Which one to use?
- A-weighting (most common): simulates the human ear's response (insensitive to low frequencies). Must be used for environmental noise evaluation and occupational health assessments (e.g., 85 dB(A) limits).
- C-weighting: less attenuation at low frequencies, reflecting the total energy of the sound more faithfully. Often used for mechanical noise and impact sound with rich low-frequency content.
- Z-weighting (zero weighting): flat response across the entire frequency range with no attenuation. Must be used when you need spectrum analysis or deep research into noise components, to preserve the original signal.

"Instantaneous Value" or "Statistical Value"?
- For quick site checks: focus on Lp (instantaneous sound pressure level) and Lmax (maximum sound level).
- For scientific assessment or reporting: you must have Leq (equivalent continuous sound level), the core metric for evaluating noise energy over a period of time. Professional equipment (such as the CRY2850/2851) comes standard with integrating functions to automatically calculate Leq.

Figure 1. Software Interface Diagram

Step 3: Confirm whether "Analysis" is needed — Do you need to find the noise source?

This distinguishes a "regular noise meter" from a "professional sound level meter." A total value (e.g., 85 dB) only tells you "it's noisy here"; the spectrum tells you "where it's noisy."

When do you need spectrum analysis (1/1 octave, 1/3 octave, or FFT)?
- Noise control: determining whether noise comes from a fan (aerodynamic noise) or a motor (electromagnetic noise).
- R&D: comparing sound quality differences between competing products or iterations.
- Diagnostics: distinguishing between high-frequency bearing squeal and low-frequency structural resonance.

Selection advice: taking the CRY2851 as an example, it supports both OCT analysis and FFT analysis.
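To see what a spectrum adds over a single total value, here is a minimal Python sketch: a synthetic signal (made up purely for illustration, not a real measurement) whose broadband level looks unremarkable still reveals its dominant 120 Hz hum immediately in the FFT.

```python
import numpy as np

fs = 8000                      # sample rate (Hz), one second of data
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

# Synthetic "measurement": broadband noise plus a strong 120 Hz hum
signal = rng.normal(0, 0.05, fs) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Magnitude spectrum and its frequency axis (1 Hz resolution here)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The frequency of the strongest non-DC component points at the source
dominant = freqs[np.argmax(spectrum[1:]) + 1]
```

With the hum present, `dominant` comes out at 120 Hz, which is the kind of directly actionable information (motor order, line hum, blade-pass frequency) that a single dB value can never give you.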
If your goal is to "solve problems" rather than just "record numbers," be sure to choose a device with spectrum functions.

Figure 2. Measurement Demonstration

Step 4: Plan the Measurement "Mode" — Single measurement or long-term monitoring?

Many projects fail because the device "measures accurately, but is hard to use."

Dynamic Range: Say Goodbye to "Manual Gear Shifting"
Old equipment requires manual range switching, which is prone to errors. Modern sound level meters (such as the CRY2851) feature a >120 dB wide dynamic range, covering everything from whispers to roaring engines without switching ranges—preventing errors and improving efficiency.

Data Export: Ensure Data Is "Portable and Usable"
Ensure the device supports automatic storage to an SD card or internal memory and exports in universal formats (such as CSV). Avoid the trap of measuring data but failing to record it.

Remote Monitoring Capability (Essential for Outdoor/Long-Term Use)
For long-term scenarios such as construction sites or traffic monitoring, the device must have:
- Communication functions (LAN/serial port) for real-time remote data transmission.
- Outdoor protection (e.g., pairing with the NA41 outdoor kit, IP65 rated) to withstand rain and dust; otherwise, the equipment is easily damaged.

Quick Selection Cheat Sheet

To help you decide quickly, we have summarized three typical application scenarios based on the four-step method above:

Figure 3. Handheld Measurement Operation

The "Avoid Pitfalls" Checklist: Check These 5 Points Last
- Check the standard: confirm compliance with the latest IEC 61672-1:2013 standard.
- Check bandwidth: even for Class 2 meters, ensure the frequency range covers your main noise sources to avoid missed detections.
- Check calibration: a Class 1 SLM requires a Class 1 sound calibrator (e.g., CRY563A); otherwise, system accuracy is downgraded.
- Check range: prefer wide-dynamic-range or auto-ranging devices; refuse manual gear shifting.
- Check accessories: windscreens and protective cases are mandatory for outdoor use.

Selecting a sound level meter is essentially balancing risk against cost. If you still have doubts about "Class 1 vs. Class 2" or whether spectrum analysis is needed, CRYSOUND is ready to provide full-lifecycle support:
- Pre-sales: our application engineers provide one-on-one scenario consulting to help you match precisely and avoid wasting money.
- After-sales: we offer a full suite of services from calibration and training to long-term technical support, ensuring a complete chain of evidence.

Instead of struggling with parameters alone, get in touch with our team using the form below to receive a configuration plan tailored to your application.
Sound Quality
Learn how to measure Loudness (ISO 532-1), Sharpness, and Tonality (ECMA-74) with OpenTest — free, open-source software. A step-by-step guide for automotive NVH, consumer electronics, and home appliance engineers.

This article is for engineers working in acoustics and vibration testing. It introduces how to perform sound quality measurements in OpenTest based on the ISO 532 loudness standard and the ECMA-74 tonality evaluation methods. By measuring and comparing three key psychoacoustic metrics — Loudness, Sharpness, and Prominence (Tonality) — teams in consumer electronics, automotive NVH, home appliances, and IT equipment can turn "how good or bad it sounds" into quantitative engineering data, and complete a standardized sound quality workflow on a single platform, from data acquisition through analysis to reporting.

Why Sound Quality Measurements Matter

In traditional noise testing, we usually rely on dB values to describe how "loud" a device is. But a growing body of studies and real-world projects reminds engineers that loudness is only part of the story. In automotive NVH, home appliances, IT equipment, and consumer electronics, user acceptance of product sound depends much more on whether it sounds pleasant, sharp, tiring, or annoying than on the overall sound pressure level. Industry surveys also show that most manufacturers now treat "how good it sounds" as being just as important as "how quiet it is," and they start paying attention to sound quality in early design phases. At the same sound level, poor sound quality can significantly drag down overall product satisfaction.
This is exactly why Sound Quality exists as a discipline: through a set of psychoacoustic metrics such as Loudness, Sharpness, and Tonality/Prominence, it turns subjective impressions like "sharp," "boomy," "harsh," or "smooth" into data that is measurable, comparable, and traceable, so engineering teams can go beyond noise control and truly design and optimize product sound around the listening experience.

Key Metrics in Sound Quality Measurement

In engineering practice, sound quality is not a single number but a set of psychoacoustic quantities. Commonly used metrics include Loudness, Sharpness, Roughness, Fluctuation Strength, and Prominence/Tonality.

Figure 1 – Key metrics in sound quality measurement

Loudness (ISO 532-1)

Loudness and loudness level describe how loud a sound is perceived by the human ear, rather than just its sound pressure level in dB. Internationally, the ISO 532-1:2017 standard based on the Zwicker method is widely used for loudness calculation. It can handle both stationary and time-varying sounds and correlates well with subjective perception in many technical noise applications. From an engineering point of view, loudness has clear advantages over A-weighted SPL:
- It accounts for the ear's different sensitivity across frequency (human hearing is more sensitive in the mid-high range).
- At the same dB level, loudness often tracks "does it feel loud or not?" more accurately.

Sharpness (DIN 45692)

Sharpness reflects whether a sound is perceived as sharp or piercing. When high-frequency content has a higher proportion, people tend to perceive the sound as more "sharp" or "edgy." Sharpness was standardized in DIN 45692:2009 and is typically calculated from the specific loudness distribution of a loudness model, applying additional weighting in the higher Bark bands. The result is expressed in acum.
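To make the idea concrete, here is a minimal Python sketch of the sharpness integral: a weighted first moment of specific loudness over the Bark axis. It assumes you already have a specific-loudness distribution N'(z) from an ISO 532-1 loudness model; the weighting function g(z) shown is one commonly cited form of the DIN 45692 emphasis, not a substitute for the standard itself.

```python
import numpy as np

def din45692_weight(z):
    """High-Bark emphasis g(z); one commonly cited form of the DIN 45692 weighting."""
    return np.where(z < 15.8, 1.0, 0.15 * np.exp(0.42 * (z - 15.8)) + 0.85)

def sharpness_acum(specific_loudness, dz=0.1):
    """Sharpness (acum) from a specific-loudness distribution N'(z) over Bark bands."""
    z = (np.arange(len(specific_loudness)) + 0.5) * dz   # band centres in Bark
    total = np.sum(specific_loudness) * dz               # total loudness (sone)
    if total == 0:
        return 0.0
    weighted = np.sum(specific_loudness * din45692_weight(z) * z) * dz
    return 0.11 * weighted / total

# Same total loudness concentrated low vs. high on the Bark axis (toy inputs):
low_band = np.zeros(240);  low_band[40:60] = 1.0    # energy around 4-6 Bark
high_band = np.zeros(240); high_band[200:220] = 1.0  # energy around 20-22 Bark
```

Running both toy distributions through `sharpness_acum` shows the intended behaviour: the high-Bark distribution yields a much larger acum value even though its total loudness is identical, which is exactly why sharpness reacts to spectral balance rather than level.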
In applications such as fans, compressors, and e-drive whine, reducing sharpness often improves subjective comfort more effectively than just lowering the overall dB level.

Roughness (asper)

Roughness corresponds roughly to fast amplitude modulation in the 15–300 Hz range, which gives a "raspy, vibrating" impression — for example, certain inverter whines or gear whines where the sound feels like it is "shaking."
- Unit: asper
- Classical definition: 1 asper corresponds to a 1 kHz, 60 dB pure tone amplitude-modulated at about 70 Hz with 100% modulation depth.
- The deeper the modulation and the closer the modulation frequency is to the sensitive region (around 70 Hz), the higher the perceived roughness.

In engineering, roughness is often used to describe how much a sound feels like it is "buzzing" or "scratching," and it is particularly relevant for subjective evaluation of technical noise in e-drive systems, gearboxes, and compressors.

Fluctuation Strength (vacil)

Fluctuation Strength captures slower amplitude fluctuations — amplitudes that rise and fall in the range of roughly 0.5–20 Hz, perceived as "pulsing" or "breathing," with a typical peak sensitivity around 4 Hz.
- Unit: vacil
- A classical definition of 1 vacil: a 1 kHz, 60 dB pure tone with 4 Hz, 100% amplitude modulation.
- In cabin idle "breathing noise," or fans whose level periodically rises and falls, fluctuation strength is a key descriptor.

You can think of Fluctuation Strength and Roughness as two sides of the same "modulation" coin:
- Fluctuation Strength: slow modulation (a few Hz), perceived as "breathing" or "pulsing."
- Roughness: faster modulation (tens of Hz), perceived as "vibrating, raspy, grainy."

Prominence / Tonality (ECMA-74)

Many devices are not particularly loud overall, yet become extremely annoying because of one or two narrowband tonal components. These "sticking-out" tones are usually quantified by Tonality/Prominence.
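Stepping back to the two modulation metrics above: the difference between roughness and fluctuation strength is easy to hear for yourself by synthesizing the classical reference stimuli. A small Python sketch (signal levels are not calibrated to 60 dB here, so this illustrates the modulation rates only, not the exact 1 asper / 1 vacil references):

```python
import numpy as np

FS = 48000  # sample rate in Hz

def am_tone(fc, fm, depth, duration=1.0, fs=FS):
    """Pure tone at fc, amplitude-modulated at fm with modulation depth in [0, 1]."""
    t = np.arange(int(duration * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# 1 kHz carrier, 100% modulation depth, two modulation rates:
rough_like = am_tone(1000, 70, 1.0)  # ~70 Hz modulation: the "roughness" region
fluct_like = am_tone(1000, 4, 1.0)   # ~4 Hz modulation: "fluctuation strength" region
```

Played back, the 70 Hz version buzzes and rasps while the 4 Hz version audibly "breathes," even though both have identical carrier, depth, and energy; that perceptual split is exactly what the two metrics quantify.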
In information technology equipment noise, ECMA-74 specifies methods based on the Tone-to-Noise Ratio (TNR) and the Prominence Ratio (PR) to evaluate tonal prominence and to determine whether a spectral line is a "prominent tone." Historically, these metrics come from psychoacoustic research and are now widely used in automotive, aerospace, home appliance, and IT equipment applications to predict and optimize annoyance. For example, studies have shown that, with loudness controlled, Sharpness, Tonality, and Fluctuation Strength are important predictors of the annoyance of helicopter noise.

Why Sound Quality Is More Useful Than Just "Watching dB"

In many projects, you may have already seen questions like these:
- Two fan designs have similar sound power levels, but one "sounds smooth" while the other has a clear whine.
- After noise reduction, the overall SPL is a few dB lower, but user feedback hardly improves.
- On the production line, A-weighted SPL is used as the only criterion, and some "bad-sounding" units still slip through.

Fundamentally, that is because:
- Sound pressure level / sound power = "how much energy is there"
- Sound quality metrics = "how the ear feels about it"

With metrics like Loudness, Sharpness, Roughness, Fluctuation Strength, and Prominence, you can decompose vague complaints like "it just sounds uncomfortable" into:
- Which frequency region has too much energy (leading to high sharpness)
- Whether there is strong amplitude modulation (causing high roughness or fluctuation strength)
- Whether any tonal component sticks out clearly above its surroundings (high tonality/prominence)

In engineering iteration, these metrics map directly to:
- Structural optimization (stiffness, modes, blade shape, etc.)
- Control strategies (e.g., PWM frequency, fan speed curves and transitions)
- Material and noise treatment / isolation choices

This gives you much clearer and more actionable directions than "just reduce dB."
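As a rough illustration of the Prominence Ratio idea described above, the following Python sketch compares the power in the critical band around a candidate tone with the mean power of the two adjacent critical bands. It is deliberately simplified: the full ECMA-74 procedure adds band-edge rules, low-frequency corrections, and frequency-dependent prominence criteria, so treat this only as the core intuition.

```python
import numpy as np

def critical_bandwidth(f):
    """Approximate critical bandwidth (Hz) at centre frequency f (Zwicker formula)."""
    return 25 + 75 * (1 + 1.4 * (f / 1000.0) ** 2) ** 0.69

def prominence_ratio(psd, freqs, f_tone):
    """Simplified PR-style figure: critical-band power around f_tone vs. the
    mean power of the two adjacent critical bands, in dB."""
    bw = critical_bandwidth(f_tone)
    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].sum()
    x_mid = band_power(f_tone - bw / 2, f_tone + bw / 2)
    x_low = band_power(f_tone - 3 * bw / 2, f_tone - bw / 2)
    x_up = band_power(f_tone + bw / 2, f_tone + 3 * bw / 2)
    return 10 * np.log10(x_mid / (0.5 * (x_low + x_up)))

# Toy spectrum: flat background with one strong narrowband line at 1 kHz
freqs = np.arange(0.0, 4000.0, 5.0)
noise_psd = np.ones_like(freqs)
tonal_psd = noise_psd.copy()
tonal_psd[np.argmin(np.abs(freqs - 1000.0))] += 100.0  # inject the tone
```

On the flat spectrum the PR-style figure sits near 0 dB; injecting the 1 kHz line pushes it well above it, which is how a tone that barely changes the total level can still be flagged as prominent.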
Sound Quality Analysis in OpenTest

As a platform for acoustics and vibration testing, OpenTest supports a complete sound quality workflow from acquisition → analysis → reporting. Fill in the form at the bottom ↓ of this page to contact us and get an OpenTest demo.

Example Device: Office PC Fan Noise

To make the process concrete, we use a very accessible device as our example: a typical office PC. The test objective is to evaluate sound quality metrics of its fan noise under different operating conditions, in order to:
- Compare the subjective noise performance of different cooling and fan control strategies
- Provide quantitative input to NVH reviews (e.g., does loudness exceed the target? is sharpness too high?)
- Build a foundation for further sound quality optimization (e.g., suppressing whine frequencies, smoothing speed transitions)

Test environments might be:
- A semi-anechoic room / low-noise lab (recommended); or
- A quiet office environment for early-stage, comparative evaluation

Measurement System: SonoDAQ + OpenTest Sound Quality Module

On the hardware side, we use a CRYSOUND SonoDAQ multi-channel data acquisition system (for detailed model information, please contact us), together with one or more measurement microphones placed near the PC fan or at the listening position, according to the test requirements.

Figure 2 – SonoDAQ Pro multi-channel data acquisition system

OpenTest also supports connection via openDAQ, ASIO, WASAPI, and other mainstream audio interfaces, so you can reuse existing DAQ devices or audio interfaces where appropriate.

On the software side, the Sound Quality module is one of OpenTest's measurement modules. Combined with FFT analysis, octave analysis, and sound level analysis, it covers most standard audio and vibration test needs.

Configuring Measurement Parameters

After creating a new project in OpenTest, proceed as follows:

1. Channel configuration and calibration
- In Channel Setup, select the microphone channels to be used and set sensitivity, sampling rate, and frequency weighting as required.
- Use a sound calibrator (e.g., 1 kHz, 94 dB SPL) to calibrate the measurement microphones, ensuring that loudness and related metrics have a reliable absolute reference.

2. Switch to the "Measure > Sound Quality" module
- Select the metrics to be calculated: Loudness, Sharpness, Prominence.
- Set the analysis bandwidth, frequency resolution, and time-averaging modes.
- Optionally configure the test duration and labels for different operating conditions.

Essentially, this step turns the "calculation definitions" in ISO 532, DIN 45692, and ECMA-74 into a reusable OpenTest sound quality scenario template.

Acquiring Sound Data for Different Operating Conditions

Once the test environment is set up and the parameters are configured, click Start to measure sound quality data under each operating condition. Each test record is saved automatically for later analysis. Because sound quality focuses on how a product sounds during real use, it is recommended to record several typical conditions, for example:
- Idle / standby (fan off or low speed)
- Typical office load (documents, multi-tab browsing, etc.)
- High load / stress test (CPU/GPU at full load)

With this breakdown, engineers can clearly manage which sound quality result corresponds to which operating condition.

Figure 3 – Overlaying multiple sound quality test records in OpenTest

From Multiple Measurements to One Sound Quality Report

After measuring multiple operating conditions (e.g., idle, typical office, and full-load stress test), you can do the following in OpenTest.
In the data set list, select the records you want to compare and overlay:
- Compare loudness curves under different conditions
- See whether sharpness spikes during acceleration or speed transitions
- Identify conditions where prominent narrowband tones appear (high prominence)

In the Data Selector, save the associated waveforms and analysis results:
- Export .wav files for later listening tests or subjective evaluations
- Export .csv / Excel for further statistics or modelling

Click the Report button in the toolbar:
- Enter project, DUT, and operating condition information
- Select the sound quality metrics and plots to include (e.g., loudness vs. time, bar charts of sharpness, spectra with marked tonal prominence)
- Generate a sound quality report with one click for internal review or customer submission

Figure 4 – Example of a sound quality report in OpenTest

The generated report includes the measurement conditions and operating modes, key sound quality metrics such as Loudness, Sharpness, and Prominence, and a comparison with traditional acoustic metrics (sound pressure level, 1/3-octave spectra, sound power, etc.), making it easier for project teams to discuss using a set of metrics that are both objective and closely related to perceived sound.

Typical Application Scenarios

You can build different sound quality test scenarios in OpenTest for different businesses, for example:

Consumer electronics / IT equipment (laptops, routers, fans, etc.)
- Use loudness + sharpness + (where applicable) roughness to evaluate the "subjective comfort" of different thermal/fan strategies
- Compare sound quality across different speed curves or PWM schemes

Automotive NVH / e-drive systems
- Use multi-channel acquisition to record interior noise and speed signals synchronously
- Combine order analysis with sound quality metrics to see how "sharp" an e-drive whine is and whether there is pronounced modulation causing roughness

Home appliances and industrial equipment
- When sound power already meets the relevant standards, use sound quality metrics to further screen for "annoying noise," instead of relying only on dB

If you are building or upgrading your sound quality testing capabilities, you can use ISO 532 and ECMA-74 as the backbone and let OpenTest connect environment, acquisition, analysis, and reporting into a repeatable chain. That way, each sound quality test is clearly traceable and far more likely to evolve from a single experiment into a long-term engineering asset.

Fill in the form below ↓ to contact us and book a demo and trial of the OpenTest Sound Quality module. You can also visit the OpenTest website at www.opentest.com to learn more about its features and application cases.
Measurement microphones are used in acoustic metrology, type-approval testing, and engineering measurements. Unlike general audio capture applications, measurement scenarios place far greater emphasis on consistency and traceability: the same microphone should deliver stable output when re-tested over time; variation within a production lot should be sufficiently small; and performance fluctuations between lots should remain controllable.

In these applications, tiny contaminants introduced during manufacturing may not cause immediate "failure," but can accumulate over time as increased self-noise, subtle shifts in frequency response, changes in insulation leakage, or long-term drift—ultimately increasing measurement uncertainty and recalibration costs. Therefore, completing critical component assembly and sealing steps inside a controlled clean environment (a cleanroom) is a common engineering approach to achieving stable performance and batch-to-batch consistency for measurement-grade microphones.

This article starts with measurement microphone structures and traceability requirements, then explains how particulate and molecular contamination affects noise, response, and drift. It next outlines the cleanroom controls (cleanliness class, environment, people/material flow) that reduce this risk, and finally summarizes the benefits for consistency and recalibration cost.

Figure 1. Precision Assembly in a Cleanroom

Critical Structure and Measurement-Grade Requirements

Taking a condenser measurement microphone as an example, its core structure consists of the diaphragm, the backplate, an extremely small gap between them, and acoustic pathways. The dimensions and surface conditions of these structures directly affect sensitivity, frequency response, phase characteristics, and self-noise. Measurement microphones typically need to meet standardized geometric and electroacoustic requirements and support a traceable calibration chain.
For example, the IEC 61094 series specifies requirements related to measurement microphone specifications and calibration, helping ensure comparability and consistency when microphones are used as metrology instruments and transfer standards.

How Contamination Affects Performance

Contamination typically falls into two categories: particulate contamination (dust, fibers, skin flakes, metal debris, etc.) and molecular contamination (oil mist, residual volatile organic compounds, cleaning-agent residues, etc.). For measurement microphones, both can alter the boundary conditions of diaphragm motion, acoustic damping, or electrical insulation.

Particulate Contamination: Self-Noise, Nonlinearity, and Response Deviation

When particles enter critical gaps or adhere near the diaphragm, they may introduce localized friction and changes in damping, raising self-noise and reducing the effective dynamic range for low-level measurements. In more extreme cases, particles can cause intermittent contact or restricted motion, resulting in nonlinear distortion and poorer repeatability.

Figure 2. Microphone Cross-sectional Structure

Molecular Contamination: Changes in Insulation and Charge Stability

Molecular contamination often appears as thin-film deposits on surfaces. Such films may change the surface resistance of insulating parts, altering leakage currents and therefore affecting effective polarization conditions and low-frequency stability, potentially increasing electrical noise. For measurement chains requiring long-term stability, issues caused by molecular contamination are more subtle and often manifest as slow drift.

Moisture Absorption/Migration and Batch Variation: Long-Term Stability and Consistency

Some contaminants are hygroscopic or migratory. Under temperature and humidity cycling and long-term aging, their distribution and surface state may keep changing, causing gradual drift in sensitivity and frequency response.
Meanwhile, contamination events are inherently random: the location and amount of particle deposition are hard to reproduce, which can amplify within-lot dispersion and lead to yield fluctuations—ultimately increasing the workload for system-level calibration and consistency control.

The Engineering Value of a Cleanroom: Bringing Contamination Risk Under Process Control

A cleanroom keeps particulate and molecular contamination within a verifiable range and stabilizes environmental parameters such as temperature, humidity, and pressure differential. Cleanroom classification commonly references ISO 14644-1, which uses airborne particle concentration as the primary metric. For measurement microphones, the key is to bring the contamination risk in assembly, sealing, and packaging steps under process control:
- Completing critical assembly and sealing in a low-particle environment reduces the likelihood of random dust and fiber contamination.
- Controlling temperature/humidity and pressure differential, and implementing electrostatic management, reduces risks from adsorption and secondary deposition.
- Following standardized protocols for personnel/material entry and tool maintenance—and maintaining clean packaging—helps preserve a consistent "as-shipped" condition.

At CRYSOUND, critical assembly and sealing are performed in a Class 1,000 cleanroom, equivalent to ISO Class 6 under ISO 14644-1. This reduces particulate contamination risk during mass production while keeping process conditions stable.

Figure 3. Cleanroom Manufacturing Area

Cleanrooms and Calibration: Complementary, Not a Substitute

A cleanroom controls contamination variables during manufacturing to reduce the risks of performance dispersion and drift. Calibration establishes traceability and provides parameters such as sensitivity under specified conditions.
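For reference, the ISO 14644-1 class limits mentioned above follow a simple closed-form rule. A quick Python sketch (note that the published standard tabulates rounded values, so computed figures differ slightly from the table):

```python
def iso14644_max_particles(iso_class, particle_size_um):
    """ISO 14644-1 maximum permitted airborne concentration (particles per m^3):
    C_n = 10^N * (0.1 / D)^2.08, with particle size D in micrometres."""
    return 10 ** iso_class * (0.1 / particle_size_um) ** 2.08

# ISO Class 6 (roughly the traditional "Class 1,000") at 0.5 um particle size:
limit_class6 = iso14644_max_particles(6, 0.5)  # ~35,200 particles per cubic metre
```

The same formula shows why each ISO class step is a factor of ten: an ISO Class 5 room is permitted only about a tenth of that concentration at the same particle size.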
Clean manufacturing cannot replace calibration, but it can improve re-test consistency and reduce the impact of drift on calibration intervals and uncertainty.

Figure 4. Cleanroom Manufacturing

Direct Value for End Applications

Once contamination variables are controlled, self-noise levels and response characteristics become more stable, and batch-to-batch differences are easier to manage. In multi-channel systems, acoustic imaging measurements, and production-line consistency monitoring, sensor interchangeability is easier to achieve—and it also becomes easier to define appropriate recalibration and periodic verification strategies.

A clean, controlled environment provides stable contamination-control conditions for the key manufacturing steps of measurement microphones, helping reduce the risks of elevated self-noise, response deviation, and long-term drift. Combined with standardized design, in-process inspection, and traceable calibration, reliable measurement results can be maintained throughout the product lifecycle.

You are welcome to learn more about microphone functions and hardware solutions on our website and use the "Get in touch" form to contact the CRYSOUND team.
Before you begin any formal data acquisition work, one critical step is connecting the DAQ front end to the PC. In day-to-day engineering, the most common options include USB direct connection, Wi-Fi wireless, Ethernet, and PXIe. This article introduces these four common connection methods from several angles—how they differ, where each one shines, and their practical limitations—to help you build a deeper, more intuitive understanding of DAQ connectivity.

Ethernet Connection

An Ethernet connection means the front end joins a local area network (LAN) through its network port, and the PC accesses the device over IP. A typical data path looks like this: Sensor → front-end sampling → Ethernet transport (TCP/UDP, etc.) → PC/server storage and processing. This topology ranges from very simple to quite complex, for example:
- Front end ↔ PC (point-to-point direct link)
- Multiple front ends → switch → PC/server (distributed)

Figure 1. Ethernet Connection

Advantages of Ethernet Connections
- Flexible topology: single-node, multi-node, and distributed setups are all easy to organize.
- Comfortable distance and cabling: copper Ethernet or fiber makes it easier to deploy across rooms, floors, or even buildings, and routing can be more standardized.
- Mature infrastructure and strong maintainability: switches, cables, transceivers, fiber, and rack accessories are widely available, and issues are usually easier to locate and troubleshoot.

Limitations of Ethernet Connections
- The network introduces uncertainty: topology, switch performance, port congestion, broadcast storms, and link errors can all cause throughput/latency fluctuations.
- With multiple devices/nodes, the need for network planning rises quickly: IP addressing, subnetting, whether to use DHCP, routing across subnets, switch cascade depth, etc. As the system grows, things can get messy without a plan.
• Cable quality, shielding/grounding, routing close to high-power lines, poor port contact, or switch power instability may show up as packet loss, retransmissions, or speed-negotiation anomalies.
For engineers, Ethernet is straightforward on the test floor: in many setups, a single cable is enough to bring the DAQ front end online with the PC—parameter setup, start/stop, live monitoring, and logging all feel smooth. When the distance grows, you can extend the copper run or switch to fiber to keep transmission stable. In cross-floor or multi-room environments—or where noise/safety constraints make it inconvenient to stay near the rig—data can be acquired and monitored from an office or control room over the network. Of course, very long cable runs can be a headache in their own right.
SonoDAQ Pro comes standard with two Gigabit LAN ports (GLAN, daisy-chain capable, supporting 90 W PoE++ power delivery) and also provides a USB-C port with gigabit-class throughput, giving users more flexible network-style connection options.
Figure 2. SonoDAQ Rear Panel
Wi-Fi Connection
Wi-Fi DAQ means the acquisition node communicates with a PC or a LAN over a wireless network. Unlike simply “replacing the cable with wireless,” Wi-Fi DAQ systems typically have two working modes:
• Real-time streaming: after sampling, data is sent to the PC over Wi-Fi in real time;
• Local buffering/storage: data is first buffered or stored on the front end; Wi-Fi is used mainly for control, preview, transferring selected segments, or exporting after the run.
Two common networking setups are:
• The DAQ front end joins an on-site access point (STA mode);
• The PC creates a hotspot and the DAQ front end connects to it.
In short, the front end must support Wi-Fi, and it must be on the same LAN as the PC.
Figure 3.
Wi-Fi Connection
Advantages of Wi-Fi Connections
• No cabling: when wiring is difficult or not allowed, the DAQ can be placed close to the measurement point and controlled over Wi-Fi;
• Flexible remote acquisition: by mapping the DAQ’s IP to the public Internet, the PC can access the DAQ by IP address for ultra-long-distance remote control.
Limitations of Wi-Fi Connections
• Uncertainty for sustained high-volume transfers: available wireless bandwidth can change at any time, so long, continuous acquisitions are more likely to expose packet loss, retransmissions, and buffer overflows—the heavier the data load, the more obvious this becomes;
• Stability depends heavily on the environment: multipath, co-channel interference, AP congestion, and movement (changing the RF path) can all cause throughput swings and higher latency/jitter, showing up as choppy live plots or occasional disconnect/reconnect events.
In real projects, Wi-Fi is most often used when cabling is inconvenient or prohibited, or when remote/off-site acquisition is required but running Ethernet is impractical. Engineers can configure parameters remotely, start/stop acquisition, monitor key metrics, or pull specific segments. For larger datasets or long-duration logging, it’s common to pair Wi-Fi with front-end buffering/local storage—Wi-Fi keeps things visible and controllable, while the front end protects data integrity.
USB Connection
A USB DAQ device typically means sampling happens in an external front end (with built-in ADCs, signal conditioning, clocks, etc.). The PC handles configuration, visualization/analysis, and data storage, while USB “moves” the data into the computer. In this relationship, the PC acts as the USB host and the front end acts as the USB device.
Figure 4.
USB Connection
Advantages of USB Connections
• Low barrier and quick to start: no IP setup and no dependency on network infrastructure—plug it in, install the driver/software, and you can usually start acquiring;
• Highly portable: an external box plus a laptop is a common combo, well suited to field work, customer sites, and temporary setups;
• Ubiquitous interface: cables, adapters, mounting clips, and docks are easy to source.
Limitations of USB Connections
• Scalability is generally less “natural” than network/platform approaches. When a system grows from a single front end to multiple front ends and coordinated multi-point measurements, cabling, device management, and synchronization depend more on the specific implementation;
• If multiple high-throughput devices share the same USB controller (DAQ front end, external SSD, camera, etc.), you may see throughput fluctuations, buffer warnings, and occasional stuttering. USB controllers, driver stacks, system load, and power-management policies vary from PC to PC, so the same device can behave differently on different hosts.
Most USB front ends are portable external devices. They often integrate a reasonably complete set of general-purpose measurement interfaces—analog inputs/outputs, digital I/O, counters/encoders, etc. With a single USB cable, you get both connection and control to the PC for acquisition, display, and storage. As a result, USB is widely used for temporary measurements in the field or at customer sites, rapid R&D bring-up and debugging, and small-channel, short-duration tests.
PXIe Interface
PXIe is a platform form factor built around a chassis, backplane, and modules. Measurement/instrument modules plug into the chassis and interconnect through the backplane; the chassis then works with a controller or an external link to a PC workstation. Compared with a single external DAQ box, PXIe is more platform-oriented, modular, and capable of system-level composition.
If a PXIe controller is installed in the chassis, the chassis effectively becomes the host and can run acquisitions independently. Without a PXIe controller, a PXIe chassis is typically not connected to a PC via a standard Ethernet port. Instead, it uses a remote-control link that essentially “extends the PCIe bus” so an external PC can see the chassis modules as if they were local PCIe devices. In practice, the two most common options are MXI-Express (a host interface card in the PC plus a remote-control module in the chassis, linked with a dedicated cable) and Thunderbolt. A typical data path looks like this: Sensor → PXIe module sampling/processing → chassis backplane → controller/link → PC/storage
Figure 5. PXIe Interface
Advantages of PXIe Interface
• You can populate the chassis with the functional modules you need (analog, digital, bus interfaces, switch matrices, etc.). System capability comes from the “module mix,” and adding or swapping modules later is straightforward;
• High level of engineering integration: power, cooling, and mechanical form factor feel more like a test platform. In rack/bench systems, cabling, maintenance, and spare-parts management are easier to standardize;
• When a test system is expected to evolve—more channels, more functions, module upgrades over time—the platform’s long-term scalability is a strong advantage.
Limitations of PXIe Interface
• Higher cost and larger footprint: a chassis + module ecosystem is typically a bigger investment than “PC + single card/box,” and it tends to be a fixed installation;
• Less friendly for mobile/field work: for scenarios that require frequent transport and rapid setup, PXIe’s platform advantages can become a burden;
• Higher system-build complexity: it’s more like building a test system, where rack layout, harness management, thermal design, power headroom, and grounding all need to be considered.
In practice, SonoDAQ Pro adopts a PCIe-based modular backplane architecture.
Each functional module connects to the main control platform (ARM) through the backplane for high‑speed data uplink/downlink, synchronization, and power distribution. We call this internal interconnect “Trilink.” While enabling modular expansion, SonoDAQ Pro also supports external communication interfaces such as GLAN, Wi‑Fi, and USB‑C, significantly improving deployment flexibility. For a more hands‑on view of how SonoDAQ works over different connection methods (USB / Wi‑Fi / GLAN)—including real usage workflows, representative scenarios, and common configuration checklists—please fill out the Get in touch form below and we’ll reach out shortly.
CRY580 A²B Interface is a bidirectional bridge designed to connect the A²B (Automotive Audio Bus) ecosystem with standard test & measurement setups (e.g., SonoDAQ, CRY6151B, Audio Precision). This article explains what makes A²B testing challenging—most analyzers don’t have a native A²B interface—and how CRY580 solves it by encoding/decoding A²B streams and converting them into measurable Analog or S/PDIF outputs, while supporting multi-channel I²S/TDM audio paths for fast, repeatable validation.
Faster Automotive Audio Testing with CRY580
One bidirectional A²B bridge for testing: apply an analog/digital test stimulus for A²B amplifier testing, and bring A²B microphone or accelerometer sensor streams out as analog or S/PDIF for measurement.
The A²B Audio Bus Is Reshaping In-Vehicle Audio
A²B technology enables cost-effective audio data transport over long distances, combining multichannel audio (I²S/TDM), control (I²C), and power delivery over affordable cabling.
• Bidirectional data transfer at 50 Mbps bandwidth
• Low and deterministic latency (50 µs)
• System-level diagnostics
• Slave nodes can be locally-powered or bus-powered
• Programmable using ADI's SigmaStudio® GUI
• Uses cost-effective cables (unshielded twisted pair)
The Testing Pain: A²B Adds Performance—And Complexity
Traditional audio analyzers do not include A²B interfaces, making it impossible to directly test A²B devices. To perform accurate testing, a dedicated A²B codec is required to decode and convert A²B audio signals into standard analog or digital formats for measurement and analysis.
How Bridging to Measurements Works in Practice
How A²B Technology and Digital Microphones Enable Superior Performance in Emerging Automotive Applications (figure: A²B Microphone, A²B Accelerometer, A²B Amplifier)
“Bridging” in practice means converting A²B audio signals into standard analog or digital formats for testing: for A²B amplifier testing, injecting analog/digital stimulus into the A²B bus; and for A²B sensor testing, extracting A²B audio data to analog or S/PDIF for measurement. The CRY580 serves as the ideal bidirectional test bridge, facilitating seamless conversion and measurement in both directions.
Introducing CRY580: An A²B Interface Built for Automotive Testing
The CRY580 is a versatile A²B interface designed to seamlessly bridge A²B networks with testing equipment. It provides both decoding and encoding capabilities, allowing for the efficient transfer of audio data between A²B devices and standard measurement systems. Whether you're testing A²B microphones, amplifiers, or sensors, the CRY580 enables smooth and reliable testing workflows, ensuring accurate results across a range of automotive audio applications.
Who Buys CRY580 and What They Test
• OEM / Tier1 Audio Teams: Integration, debugging, and acceptance testing across A²B networks.
• A²B Microphone & Mic-Array Suppliers: Sensitivity, frequency response (FR), and phase consistency checks.
• A²B Amplifier / Audio Processor Suppliers: Amplifier testing with injected stimuli, as well as mapping and performance verification.
• Test Labs: Standardized A²B measurement processes and delivery.
• Manufacturing / EOL QC: Repeatable pass/fail testing with faster fault isolation.
Typical Test Setups: More Than Just an Interface
At CRYSOUND, we provide more than just the CRY580 A²B interface.
We offer a full automotive audio testing solution, including audio acquisition cards, microphones and sensors, acoustic sources, custom fixtures, acoustic test boxes, and vibration shakers, delivering a complete and streamlined testing experience. Here’s a description of the testing block diagram, including the use of the latest OpenTest Audio Test & Measurement Software https://opentest.com. The CRY580 A²B Interface can also be used in conjunction with Audio Precision analyzers (figure: Digital Interface, Analog Interface).
“Performing A²B microphone performance tests (Frequency Response, THD+N, Phase, SNR, AOP) in an anechoic chamber, using the CRY5820 SonoDAQ Pro, CRY580 A²B Interface, and other equipment.”
Why CRYSOUND: A Complete Automotive Audio Test Ecosystem
The value of end-to-end delivery: reducing system integration time and minimizing coordination costs between multiple suppliers. We cover everything from R&D to production line testing.
BOM list of the solution
CRY580 bridges A²B to mainstream test & measurement setups in both directions, turning complex in-vehicle audio validation into a faster, repeatable workflow from R&D to end-of-line production. To discuss your use case, system configuration, or a demo, please fill out the Get in touch form below and we’ll reach out shortly.
In audio and vibration testing, FFT analysis (Fast Fourier Transform) is one of the tools almost every engineer uses sooner or later:
• Loudspeaker frequency response
• Headphone distortion
• NVH diagnostics
• Structural resonance troubleshooting
• Production noise and “mysterious tone” hunting
A lot of practical questions are actually asking the same few things: Where is the energy concentrated in frequency? Is it dominated by one tone or a bunch of harmonics? How high is the noise floor? Are there any resonance peaks? FFT is the most universal entry point to answer these questions. This article will help you clarify three things from an engineering perspective:
• What FFT analysis is
• How FFT works conceptually
• How to use FFT correctly and efficiently in practice
What Is FFT?
In the time domain, a signal is just a waveform changing over time – all components “stacked together” in one trace. You can see it, but it’s hard to tell which frequencies are inside. FFT (Fast Fourier Transform) decomposes a time-domain signal into a sum of sinusoids at different frequencies. In the frequency domain, the signal is represented by frequency + amplitude + phase. In simple terms:
• Time domain: how the signal moves over time
• Frequency domain: what frequency components it contains, which are strongest, and how they relate to each other
Historically, Fourier’s key idea (early 19th century) was that a complex periodic function can be expressed as a sum of sines and cosines. This evolved into the continuous-time Fourier transform, mapping signals onto a continuous frequency axis. In the computer age, things changed: engineers work with sampled data and typically only have a finite-length record of N samples. That leads to the DFT (Discrete Fourier Transform), which maps N time samples to N discrete frequency bins. FFT (Fast Fourier Transform) is not a different transform.
It is a family of algorithms that compute the exact same DFT much more efficiently:
• Direct DFT: complexity ~ O(N²)
• FFT: complexity ~ O(N log N)
The output X[k] is identical to the DFT result – FFT just gets there far faster by exploiting symmetry and divide-and-conquer.
What FFT Is Good at – and What It Isn’t
FFT is very good at:
• Finding deterministic narrowband components: fundamental tones, harmonics, switching frequencies, whistle tones, speed-related lines
• Looking at broadband distributions: noise floor, 1/f slopes, in-band power, SNR
• Characterizing system behavior: transfer functions, resonances / anti-resonances, coherence, delay estimation
• Serving as the foundation of time–frequency analysis: STFT, spectrograms, etc.
FFT is not good at (or not sufficient on its own for):
• Strongly non-stationary signals and “instantaneous frequency”: for chirps and rapidly changing content, you need STFT, wavelets, or other time–frequency methods, not a single FFT on a long record
• Separating two extremely close tones below your frequency resolution: if the spacing is smaller than your bin resolution (set by N), no algorithm will magically resolve them
• Turning short data into “long measurements”: zero padding only interpolates the spectrum visually; it does not add new information
Before Using FFT: Key Concepts to Get Right
To use FFT well, you need to be confident about a few fundamentals:
• Sampling rate
• DFT and its interpretation
• What you actually plot (magnitude, amplitude, power, PSD)
• Windowing and spectral leakage
• Averaging
Sampling Rate: How High in Frequency You Can See
Before FFT, you already made one crucial decision: sampling. A continuous-time signal x(t) is turned into a discrete sequence x[n] = x(n/fs). The sampling rate fs determines the highest frequency you can observe without aliasing: the Nyquist frequency, fs/2. If the analog signal contains energy above fs/2, it does not disappear – it folds back into the band below Nyquist as aliasing.
Once aliasing happens, FFT cannot “undo” it; the information is irretrievably mixed. In practice, you must use an anti-alias filter before the ADC (or before any resampling) to suppress components above Nyquist. Example: A 900 Hz sine sampled at fs = 1 kHz will appear at 100 Hz in the discrete spectrum – a classic aliasing artifact.
DFT Computation and Interpretation
Given N samples x[0]..x[N−1], the DFT is defined as:
X[k] = Σ_{n=0}^{N−1} x[n] · e^(−j2πkn/N), k = 0, 1, …, N−1
The inverse transform (IDFT) reconstructs the time signal:
x[n] = (1/N) Σ_{k=0}^{N−1} X[k] · e^(j2πkn/N), n = 0, 1, …, N−1
Intuitively, X[k] tells you how strongly the signal correlates with a complex exponential at that bin’s frequency.
• The magnitude |X[k]| indicates “how much” of that frequency component exists
• The phase encodes time alignment relative to other components
What Are You Plotting? Magnitude, Amplitude, Power, PSD
From one set of FFT results X[k], you can create many different “spectra” that look similar but represent different physical quantities. This is where confusion between tools and platforms often arises. Common variants include:
• Magnitude spectrum |X[k]|: units depend on normalization (e.g., “V·samples”); useful for locating peaks, harmonics, and general spectral shape
• Amplitude spectrum: properly scaled magnitude, in physical units (e.g. V); appropriate for reading off sinusoid amplitudes and doing calibrated measurements
• Power spectrum |X[k]|²: again, scaling dependent; often used for power/energy comparisons when conventions are fixed
• Power Spectral Density (PSD) Sxx(f): units like V²/Hz or Pa²/Hz; used for noise analysis, band power, and comparisons across different FFT lengths
If you want to compare noise levels across different FFT sizes, windows, or tools, use PSD (or amplitude spectral density). Raw |X| or |X|² values are rarely directly comparable.
A Concrete Example: Two Tones in Time and Frequency
Imagine a signal consisting of two sinusoids at different frequencies. In the time domain, their sum may look like a “wobbly” waveform.
In the frequency domain (FFT/PSD), you will see two distinct narrow peaks at the corresponding frequencies. In OpenTest’s FFT analysis, you can visualise both the spectrum and PSD/ASD side by side, making it easy to:
• Identify tonal components
• Inspect noise distribution
• Compare different operating conditions on the same frequency grid
Try it yourself: Download the free OpenTest edition and run an FFT on a simple two-tone signal to see both peaks clearly separated.
Window Functions and Spectral Leakage: Cleaning Up Spectra
In theory, FFT assumes the sampled block contains an integer number of periods and is then repeated periodically. In reality, the record almost never lines up perfectly with an integer number of cycles. When you repeat that block, you get discontinuities at the boundaries, which causes energy to spread into neighboring bins — this is spectral leakage. To reduce leakage, we typically apply a window function to the time record before doing FFT. A window simultaneously affects:
• Main lobe width: a wider main lobe means broader peaks, so it’s harder to separate close tones
• Side lobe height: lower side lobes make it easier to see small peaks near a large one (better dynamic range)
• Amplitude/energy scaling: windows change the relationship between a pure tone’s true amplitude and the observed peak, as well as the noise floor level
Some practical guidelines:
• Rectangular window: only use when you can ensure coherent sampling (an integer number of periods in the record) and you want the narrowest possible main lobe
• Hanning (Hann) window: a very robust default choice for general acoustics and vibration work; widely used with Welch/PSD methods
• Hamming: similar to Hann, with slightly different side-lobe behavior; common in communications
• Blackman / Blackman–Harris: lower side lobes, useful when you need to see small peaks next to big ones, at the cost of a wider main lobe
In OpenTest, you can switch between different window functions in the FFT analysis module and immediately see
the impact on peak width, side lobes, and noise floor.
Averaging: Making Spectra More Stable
For noisy or non-stationary signals, a single FFT can look very “spiky” or unstable. By averaging multiple spectra, you obtain a smoother, more repeatable result. Common averaging types include:
• Linear averaging: a simple arithmetic mean of several FFT results
• Exponential averaging: recent data gets more weight; good for live monitoring when the spectrum should react but not jump wildly
• Energy (power) averaging: based on power; ensures power-related quantities remain consistent
A good averaging configuration strikes a balance between suppressing random fluctuations and preserving genuine changes in the signal.
Where Do We Use FFT in Practice?
Audio and Acoustics
Typical applications include:
• Finding feedback frequencies, harmonic distortion, and device noise floors
• Frequency response (transfer function) measurement
• Room modes / resonance analysis
• Spectrograms of speech, music, and equipment noise
In audio/acoustics, you must be clear about units and conventions: dB SPL, A-weighting, 1/3-octave bands, etc. FFT is the engine; the reporting convention (reference, weighting, bandwidth) must be clearly defined.
Vibration and Rotating Machinery
• Identifying speed-related peaks (1X, 2X, gear mesh frequencies)
• Structural resonances and mode behavior under different operating conditions
• Bearing diagnostics, gear whine, imbalance, misalignment
For bearing and gearbox analysis, envelope detection/demodulation is often used: band-pass filter the signal, then demodulate and perform FFT on the envelope to reveal fault frequencies. If the rotational speed is changing, a simple FFT will “smear” peaks. In that case, order tracking or synchronous resampling is more appropriate, turning the axis from “frequency” into “order”.
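Two of the points discussed earlier, the 900 Hz aliasing example and the effect of a Hann window on leakage, can be verified in a few lines of NumPy. This is a quick sketch, not tied to any particular analyzer:

```python
import numpy as np

fs = 1000            # sampling rate (Hz)
N = 1000             # record length: 1 s, so rfft bins are exactly 1 Hz apart
t = np.arange(N) / fs

# --- Aliasing: a 900 Hz tone sampled at 1 kHz folds back to 100 Hz ---
x_alias = np.sin(2 * np.pi * 900 * t)
spectrum = np.abs(np.fft.rfft(x_alias))
peak_hz = int(np.argmax(spectrum))   # bin index equals frequency in Hz here
print(peak_hz)                       # -> 100, not 900

# --- Leakage: a non-coherent 100.5 Hz tone (not an integer number of
#     cycles in the record) smears with a rectangular window; a Hann
#     window pushes the far side lobes down dramatically ---
x = np.sin(2 * np.pi * 100.5 * t)
X_rect = np.abs(np.fft.rfft(x))                  # rectangular (no window)
X_hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hann window

# Far from the 100.5 Hz peak, the Hann spectrum sits well below the
# rectangular one (much lower leakage floor)
print(X_rect[120:400].max() > 10 * X_hann[120:400].max())
```

Running an experiment like this next to a real measurement is a cheap way to sanity-check which artifacts come from the signal and which come from the analysis settings.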
Power Electronics and Power Quality
• Line frequency harmonics (50/60 Hz and multiples), THD, ripple, switching spikes
• Pre-compliance EMI checks: spectral lines, noise floor, in-band power
In power systems, non-coherent sampling is a common issue: if the record length is not an integer number of mains cycles, leakage affects harmonic accuracy. Solutions include synchronous sampling, integer-cycle windows, or specialized harmonic analyzers.
RF and Communications (Baseband View)
• Modulated signal spectra and spectral masks
• OFDM and multi-carrier spectral analysis, adjacent channel leakage
Here, consistency is paramount: same units, same bandwidth (RBW), same window, detector, and averaging style. FFT itself is straightforward; turning it into comparable power measurements requires tightly defined settings.
Imaging and 2D Filtering
2D FFT extends the same idea to images:
• Edges correspond to high spatial frequencies; smooth areas to low frequencies
• Low-pass / high-pass filtering, removal of periodic noise, convolution acceleration in the frequency domain
The same periodic extension assumption now applies in 2D: discontinuities at image borders produce strong artifacts in the frequency domain. Padding, mirrored borders, or 2D windows are common ways to mitigate this.
Turning FFT into an Everyday Engineering Tool
From a mathematical standpoint, FFT is not particularly “lightweight”. But in engineering use, the goal is actually simple: see what’s hidden inside the signal more clearly and much faster. When you understand:
• What FFT really computes
• How sampling, windowing, scaling, and averaging affect the result
• When to use spectra vs PSD, and which settings matter for your use case
…then FFT stops being an abstract math topic and becomes a practical, everyday tool for acoustics and vibration work – from R&D and validation all the way to production testing. Download and get started now -> or fill out the form below ↓ to schedule a live demo.
Explore more features and application stories at www.opentest.com.
In acoustic measurements (SPL, frequency response, noise, reverberation, etc.), large errors often come not from instrument accuracy, but from a mismatch between the assumed sound field and the actual one. What a microphone reads as sound pressure is not strictly equivalent across different fields—especially at mid and high frequencies, where the microphone dimensions become comparable to the acoustic wavelength. Measurement microphones are commonly categorized by the field for which their calibration/compensation is defined: Free-field, Pressure-field, and Diffuse-field (Random incidence). This article uses engineering-oriented comparison tables and common-pitfall checklists to explain the differences among the three sound-field types, their typical application scenarios, and key usage considerations. It also provides selection rules that can be directly incorporated into test plans, helping to improve measurement repeatability and comparability. Build Intuition With One Picture The following diagrams illustrate the three typical sound-field assumptions used in microphone calibration and selection. 
Figure 1. Free field: reflections negligible, wave incident mainly from one direction
Figure 2. Pressure field: coupler/cavity measurement focusing on diaphragm surface pressure
Figure 3. Diffuse (random-incidence) field: energy arrives from many directions (statistical sense)
Quick Comparison for Engineering Selection
• Free-field microphone. Field assumption: reflections negligible; primarily single-direction incidence (often 0°). Typical scenarios: anechoic measurements; on-axis loudspeaker response; front-field SPL. Placement / orientation: aim at the source (0°). Main error drivers: angle deviation; unintended reflections; fixture scattering.
• Pressure-field microphone. Field assumption: measure the true pressure at the diaphragm surface (often in small cavities). Typical scenarios: couplers; ear simulators; boundary/flush measurements. Placement / orientation: flush-mounted or connected to a coupler. Main error drivers: leaks; cavity resonances; coupling repeatability.
• Diffuse-field (random-incidence) microphone. Field assumption: energy arrives from all directions with equal probability (statistical). Typical scenarios: reverberation rooms; highly reflective enclosures; diffuse-field tests. Placement / orientation: orientation less critical, but mounting must be controlled. Main error drivers: real rooms are not truly diffuse; local blockage/reflections.
Free Field: Estimate the Undisturbed Sound Pressure
A free field is an environment where reflections are negligible and sound arrives mainly from a defined direction (commonly 0° to the microphone axis). Because the microphone body perturbs the field, a free-field microphone typically includes free-field compensation, so the indicated pressure better represents the pressure that would exist without the microphone in place.
Typical Use Cases
• Anechoic or quasi-free-field SPL measurements
• On-axis loudspeaker frequency response and source characterization
• Tests with a strictly defined incidence direction
Practical Notes
• Keep 0° incidence when specified; off-axis angles can cause significant high-frequency deviations.
• Minimize scattering from fixtures (stands, adaptors, cables, windscreens).
• Control nearby reflective surfaces that break the free-field assumption.
Pressure Field: Measure Diaphragm Surface Pressure
A pressure field is commonly associated with small enclosed volumes (couplers/cavities). Here, the quantity of interest is the true pressure at the diaphragm surface. The microphone often becomes part of the cavity boundary.
Typical Use Cases
• Pistonphone/coupler calibration and cavity measurements
• IEC ear simulators and couplers for headphone and in-ear testing
• Flush/boundary pressure measurements
Practical Notes
• Seal and coupling are critical; small leaks can strongly affect low and mid frequencies.
• Cavity resonances can shape high-frequency response; follow the applicable standard or method.
• Maintain consistent mounting force and assembly for repeatability.
Diffuse Field: An Average Over Angles
A diffuse field (random incidence) assumes that sound energy arrives from all directions with equal probability, in a statistical sense. This is approached in reverberation rooms or highly reflective enclosures. Diffuse-field microphones are designed so their response better matches the average over many incidence angles.
Typical Use Cases
• Reverberation-room measurements and room acoustics
• Noise and SPL measurements in reflective cabins (vehicle or enclosure)
• Statistical measurements where multi-direction incidence dominates
Practical Notes
• A normal room is not necessarily diffuse; strong direct sound breaks the assumption.
• Proper installation and operation remain essential: large fixtures, mounting brackets, and obstructions can alter the characteristics of the local acoustic field.
• Keep measurement locations consistent; position changes alter modal and reverberant contributions.
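The reason field type matters mostly at mid and high frequencies, as noted in the introduction, is that the microphone dimensions become comparable to the acoustic wavelength there. A quick back-of-the-envelope check makes this concrete (assuming c ≈ 343 m/s in air and a 1/2-inch capsule; the exact numbers are illustrative):

```python
# Compare the acoustic wavelength with a typical 1/2" (12.7 mm) capsule diameter.
c = 343.0          # speed of sound in air, m/s (approx., at room temperature)
d_mic = 0.0127     # 1/2-inch capsule diameter, m

for f in (1_000, 4_000, 10_000, 20_000):
    wavelength = c / f                      # lambda = c / f
    print(f"{f:>6} Hz: wavelength = {wavelength * 1000:5.1f} mm, "
          f"d/lambda = {d_mic / wavelength:.2f}")
```

At 1 kHz the capsule is a few percent of the wavelength and the field type barely matters; by 20 kHz the capsule diameter is roughly three quarters of a wavelength, which is exactly where free-field, pressure, and diffuse-field responses diverge.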
Rule of Thumb: Write the Field Assumption into the Test Plan
• Quasi-anechoic, direction defined → choose a free-field microphone
• Coupler/cavity/boundary pressure → choose a pressure-field microphone
• Highly reflective, multi-direction incidence → choose a diffuse-field microphone
When the field is uncertain, define the geometry first (direct-to-reverberant ratio, incidence direction, distance), then apply an appropriate calibration or correction strategy to control the dominant error sources.
Common Pitfalls
• Using a free-field microphone in a coupler/cavity: high-frequency deviations are often exaggerated.
• Free-field testing without controlling angle: off-axis error grows at mid and high frequencies.
• Treating a normal room as diffuse: if direct sound dominates, the diffuse-field assumption fails.
Conclusion
Free field, pressure field, and diffuse field are not marketing terms—they tie microphone design and calibration assumptions to specific acoustic models. By explicitly documenting the assumed field (geometry, angle, reflections, calibration and corrections) in your test plan, you can significantly improve repeatability and comparability across measurements. To learn more about microphone functions and measurement hardware solutions, visit our website—and if you’d like to talk to the CRYSOUND team, please fill out the “Get in touch” form.
The Acoustic Imaging Leak Detection System is developed by CRYSOUND and has already been deployed in multiple coal chemical, petrochemical and natural gas facilities. It is used for online leak monitoring in high‑risk areas. This article is written by the Acoustic Imaging Leak Detection System project team at CRYSOUND based on real‑world deployment and operation experience. In a straightforward way, we will explain why such a system is needed, how it works in principle, what actually changes after it is put into service on site, and what it can and cannot do. Why is traditional leak inspection so difficult? In petrochemical plants, natural gas stations, coal chemical complexes and hazardous chemical storage yards, everyone understands how sensitive the word “leak” is. What really makes life hard is that many critical points are located high above ground, on pipe racks or at the tops of towers. In the past, finding a small leak at height usually meant going through a process like this: • Erect scaffolding or use a man‑lift and spend hours going up and down; • Climb around the pipe racks with soap solution or portable instruments in hand; • In winter, hands are frozen stiff; in summer, clothes are soaked with sweat, and even after checking a full round, people still worry: “There are so many valves and flanges, did we miss something?” To sum up, traditional leak inspection at such sites has several persistent pain points: • High locations: pipe racks at 20 meters or tower tops are hard to reach. Temporary access equipment is costly and high‑risk to use. • Very quiet leaks: the ultrasonic signals generated by small leaks are drowned in the noise of pumps and fans, and are practically impossible to hear with the human ear. • Invisible leaks: in the early stage, leak flow is tiny. Soap solution doesn’t bubble, and the smell is faint. By the time you actually see stains or smell gas, the leak has usually spread. 
• Low efficiency: a single process area can easily have thousands of monitoring points. Manual "up and down" inspection is mostly spot-checking, and truly continuous, full coverage is very hard to achieve.

Traditional electrochemical, infrared and laser-based detection methods are essentially point or line monitoring:

• Measuring at a fixed point to see whether the concentration exceeds a threshold;
• Watching along a single optical path to see whether any gas crosses it.

What operators actually want, however, is not only to know whether a leak exists, but also to see clearly, over a wide area, exactly where the leak is occurring. That is precisely the problem the ultrasonic Acoustic Imaging Leak Detection System is designed to solve.

Acoustic Imaging Leak Detection System: turning "inaudible leak noise" into a colorful sound map on the screen

Basic principle: pressurized gas leak → ultrasonic signal → colorful sound map on the image

When pressurized gas escapes through valve gaps, tiny flange cracks or weld defects, it interacts with the surrounding air and produces intense turbulence, creating ultrasonic signals with distinct characteristics:

• The greater the leak rate, the stronger the ultrasonic signal;
• The higher the pressure difference, the more pronounced the acoustic signature;
• These signals are quite different from the lower-frequency mechanical noise of motors and pumps, which makes it possible to pick them out from the background.
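The band separation described above, a loud low-frequency pump tone versus a faint ultrasonic leak tone, can be sketched with a simple FFT-domain band-pass filter. This is an illustrative toy, not the system's actual front-end DSP; the sampling rate, tone frequencies, and levels are made-up assumptions:

```python
import numpy as np

# Illustrative sketch (not CRYSOUND's actual signal chain): isolate the
# ultrasonic leak band (20-40 kHz) from low-frequency machinery noise with
# a simple FFT-domain band-pass filter. Frequencies and levels are made up.

FS = 192_000            # sampling rate, Hz (must exceed 2 x 40 kHz)
t = np.arange(FS) / FS  # one second of samples

pump_noise = 1.0 * np.sin(2 * np.pi * 500 * t)      # loud 500 Hz pump tone
leak_tone = 0.05 * np.sin(2 * np.pi * 30_000 * t)   # faint 30 kHz leak hiss
mic = pump_noise + leak_tone

def bandpass_fft(x, fs, f_lo, f_hi):
    """Zero all FFT bins outside [f_lo, f_hi] and transform back."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0
    return np.fft.irfft(spectrum, n=len(x))

filtered = bandpass_fft(mic, FS, 20_000, 40_000)

# After filtering, the strongest spectral peak is the 30 kHz leak tone,
# not the much louder 500 Hz pump noise.
spec = np.abs(np.fft.rfft(filtered))
peak_hz = np.fft.rfftfreq(len(filtered), 1 / FS)[np.argmax(spec)]
print(round(peak_hz))  # -> 30000
```

A real front end would use analog anti-aliasing plus digital filtering, but the effect is the same: after filtering, the 30 kHz component dominates even though the pump noise was twenty times louder.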
What the Acoustic Imaging Leak Detection System does is convert this "inaudible sound" into "visible images":

• A multi-channel ultrasonic sensor array acquires ultrasonic signals simultaneously from multiple directions;
• At the front end, amplification, filtering and denoising remove electromagnetic interference and low-frequency background noise as far as possible;
• Phase and amplitude differences between channels are analyzed to estimate the spatial distribution of sound energy and to infer from which direction and area the leak noise is coming;
• The sound energy distribution is mapped into a two-dimensional "heat map" and overlaid onto the live video image from the field.

In the end, the location with the strongest leak signal appears as a red-yellow-green "cloud" on the display. For operators, the effect is very intuitive: wherever a cloud appears on the image, that is where something looks suspicious.

Engineering parameters: how far and how small can it detect?

Based on field tests and joint calibration results from multiple online projects, the Acoustic Imaging Leak Detection System exhibits the following typical capabilities in engineering applications:

• Recommended detection distance: 0.5–50 m. Within roughly 1–30 m, the system achieves better signal-to-noise ratio and imaging performance for small leaks.
• Operating frequency range: the system operates in the ultrasonic band (above 20 kHz). A band-pass filter selects the leakage characteristic band (typically 20–40 kHz), effectively suppressing audible-range and low-frequency mechanical noise.
• Minimum detectable leak rate / orifice size (typical conditions): under a minimum pressure difference of about 0.6 MPa, the system can visually detect early-stage leaks at around the 0.1 mm scale at valve gaps and flange micro-cracks.
The actual sensitivity varies with gas type, pressure, background noise and sensor placement.

• Localization accuracy: within the recommended detection distance, the system provides leak localization with approximately centimeter-level accuracy. Combined with the video image, it can effectively point to a specific piece of equipment or flange area on the screen.

These values are not rigid, unchanging limits, but rather typical engineering-level performance verified across multiple real-world projects.

• Protection rating: the system has passed Ex ib IIC T4 Gb explosion-proof certification and IP66 ingress protection tests, making it suitable for long-term deployment in typical hazardous areas.

System architecture: more than a single sensor—it is a complete online system

The Acoustic Imaging Leak Detection System is not just a "smart sensor". It is a complete online monitoring system that can roughly be broken down into three layers:

• Front-end sensing layer: pan-tilt ultrasonic imaging leak detectors deployed on site. They "listen" for leaks, capture the video image, and output the colored acoustic image. The pan-tilt unit can rotate and tilt to scan a wide area.
• Mid-tier storage layer: NVR and other storage equipment receive data from the front-end devices, storing video, acoustic images and alarm records completely for later playback and incident analysis.
• Back-end management layer: VMS and other management platforms connect to multiple front-end devices, performing unified device management, detection control, alarm display and report generation, and presenting all data centrally on the control room video wall.

In short:

• The front end "sees" the leak point;
• The mid-tier "remembers" the process;
• The back end "manages the whole site on one screen."

A typical site: from climbing pipe racks to watching colored clouds

Let us take a typical coal chemical unit in Ningxia as an example.
In this facility, 11 Acoustic Imaging Leak Detection System units have been installed, covering gasifiers, heaters, tank farms and pipe racks. Let us look at how day-to-day work changed after the system was introduced.

Before the retrofit: six people climbing for half a day and still feeling unsure

In a typical gasifier area, there are many high-temperature, high-pressure pipelines, valves and flanges, and many key points are located around 20 meters above ground. The media are mostly flammable or toxic gases, so any leak not only wastes feedstock but also poses risks to personnel safety and plant stability. Previously, inspection was carried out roughly as follows:

• Several inspectors and maintenance technicians would be assigned, scaffolding or access platforms prepared, and then they would go up onto the pipe racks;
• With soap solution and portable detectors in hand, they would walk along the racks and platforms, checking each flange and valve one by one;
• A single round could easily take half a day. During major inspections or special campaigns, they might have to repeat this work for days in a row.

Front-line staff described this mode in three words: "tiring, slow, and worrying."

• Tiring: repeatedly climbing at height and twisting into awkward positions to look and listen close to equipment;
• Slow: in an area with dozens or hundreds of points, checking each one by one takes a long time;
• Worrying: with high background noise and many points, people always feel that eyes and ears alone may miss subtle issues.
During the retrofit: letting the pan-tilt unit "sweep the area" every day

After assessing leak risks and inspection workload, we worked with the client to deploy several pan-tilt ultrasonic imaging leak detectors at different platform elevations and connect them to the Acoustic Imaging Leak Detection System:

• High-level pan-tilt units cover key areas such as gasifier heads and pulverized coal lines;
• Mid-level units cover lock hoppers, heat-tracing lines, and dense clusters of flanges and valves;
• Low-level units cover feed tanks and ground-level pipelines.

Setting patrol routes and presets: for each pan-tilt unit, several preset views are configured—for example, along a specific pipe rack, a group of flanges, or a particular platform area. Patrol cycles are set according to process sections and risk levels, with higher-risk areas scanned more frequently.

Connecting to the central control system: all acoustic images and alarm information from the front-end devices are fed into the Acoustic Imaging Leak Detection System management platform. On the control room video wall, operators can see an overview of the unit, the colored cloud images, and the alarm list at the same time.

From then on, the devices basically follow the configured strategy and automatically "sweep the area" every day:

• Each pan-tilt unit rotates and tilts along its preset route, scanning key areas at each elevation;
• Once characteristic ultrasonic leak signals appear at a certain location, a cloud pops up at the corresponding position on the screen;
• When operators in the control room see an abnormal cloud, they can immediately notify maintenance, who go straight to the indicated valve or flange to verify and fix the problem.
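The patrol logic described above, preset views scanned on risk-weighted cycles, can be sketched in a few lines. Everything here (the class name, fields, and intervals) is a hypothetical illustration, not the actual platform's configuration format or API:

```python
from dataclasses import dataclass

# Hypothetical sketch of risk-weighted patrol scheduling, as described in
# the text: each preset view gets a scan interval based on its risk level.
# The class, fields, and interval values are illustrative assumptions.

SCAN_INTERVAL_MIN = {"high": 10, "medium": 30, "low": 60}  # minutes between scans

@dataclass
class Preset:
    name: str   # e.g. a pipe rack, flange group, or platform area
    pan: float  # pan angle, degrees
    tilt: float # tilt angle, degrees
    risk: str   # "high" | "medium" | "low"

def build_patrol(presets):
    """Return (preset, interval) pairs; highest-risk views are scanned most often."""
    order = {"high": 0, "medium": 1, "low": 2}
    ranked = sorted(presets, key=lambda p: order[p.risk])
    return [(p.name, SCAN_INTERVAL_MIN[p.risk]) for p in ranked]

presets = [
    Preset("feed tank area", 200.0, -10.0, "low"),
    Preset("gasifier head", 45.0, 30.0, "high"),
    Preset("lock hopper flanges", 120.0, 5.0, "medium"),
]

patrol = build_patrol(presets)
print(patrol)
# -> [('gasifier head', 10), ('lock hopper flanges', 30), ('feed tank area', 60)]
```

The point of the sketch is the ordering: high-risk views such as the gasifier head are visited on the shortest cycle, while low-risk ground-level areas are swept less often.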
After the retrofit: from "people hunting for problems" to "problems showing up on their own"

After a period of operation, feedback from the site has focused on three aspects:

• Fewer high-level work operations: where previously 2–3 comprehensive high-level inspection rounds per month were needed, these have been reduced to seasonal campaigns plus on-demand checks when abnormal clouds appear. High-level work is much more focused on specific issues, and its overall frequency has clearly dropped.
• Problems are found earlier and at a smaller scale: in the past, many small leaks were only noticed when people smelled something or saw visible signs. Now, as soon as a leak reaches the detectable threshold, anomalies appear on the cloud image in advance, allowing corrective action to be taken earlier.
• More efficient maintenance: previously, when someone reported "it smells like gas in that area," maintenance teams had to check dozens of flanges and valves one by one. Now the system marks on the screen which piece of equipment shows a strong acoustic anomaly, so technicians can take their work orders and go straight to the target region.

Front-line staff came up with a vivid summary: "In the past, we went around looking for problems; now, the problems show up on the screen by themselves." This, in essence, is the change from climbing pipe racks to watching colored clouds.

What can the Acoustic Imaging Leak Detection System do—and what can it not do?

From a safety and engineering perspective, understanding the system's boundaries is very important—this is being responsible both to the plant and to the system itself.
What the Acoustic Imaging Leak Detection System is particularly good at

• Wide-area online monitoring of high-level and high-risk zones: by combining pan-tilt units with sensor arrays, the system can perform area coverage scans within approximately 0.5–50 m, making it especially suitable for 20 m pipe racks, tower tops and other locations where frequent manual access is difficult.
• Visual localization: the system not only tells you that "there is a leak", but also shows a cloud directly on the image to indicate where it is. With centimeter-level localization accuracy, it can quickly narrow the search down to a specific piece of equipment or flange area.
• Around-the-clock monitoring: the system can operate online 24/7, greatly reducing the dependence on "someone just happening to walk by that point" at the right time. Compared with methods that rely on gas concentration build-up, it is less affected by wind dispersing the gas, because it focuses on the ultrasonic signal generated by the jet itself rather than on concentration readings at a single point.
• Reducing high-level work and repetitive inspections: by shifting from "frequent high-level inspections" to "going up only when an abnormal cloud appears," the system helps reduce the workload and risk of working at height while improving overall inspection efficiency.

What the Acoustic Imaging Leak Detection System cannot do: limitations we need to acknowledge honestly

• It cannot "see" leaks that are completely blocked: the ultrasonic leakage signal can only be effectively detected and imaged when it can propagate to the sensor array. If the leak source is completely blocked by structural components or thick-walled shells along the path, the array will receive a much weaker signal, or none at all.
Such areas need to be compensated for by careful sensor placement, multi-angle coverage or other complementary detection methods.

• Strong ultrasonic interference sources require special design: examples include process blow-off points, steam vents that are open for long periods, and high-frequency pneumatic devices, all of which can generate ultrasonic signatures similar to leaks. For these points, on-site noise spectrum analysis is usually carried out during project design, and measures such as regional masking or logic filtering are introduced.
• The system is not a universal replacement, but a powerful complement: for scenarios where gas concentration itself must be monitored—such as toxic gas alarms in occupied areas—electrochemical, infrared and laser-based sensors are still necessary. The Acoustic Imaging Leak Detection System is better suited to building a "sonic radar network" that lights up leak risks on the screen as early as possible.

If we think of the entire leak-monitoring setup as a team:

• Concentration sensors are responsible for "defending the bottom line" (whether concentration exceeds the limit);
• The Acoustic Imaging Leak Detection System acts as an "early scout," indicating where suspicious jets may be occurring and reminding you to take a closer look.

Conclusion: let the system see the problem first so people can solve it more safely

With an ultrasonic imaging leak detection system like this in place, the way work is done can change fundamentally:

• The system scans the unit along preset routes every day;
• Once a colored cloud appears on the display, personnel take their work orders and go up in a targeted way to deal with the issue;
• High-level work becomes more focused and less frequent, and many leaks can be resolved before they cause noticeable impact.
For industries such as petrochemicals, natural gas and coal chemicals, Acoustic Imaging Leak Detection System is not a flashy new gadget, but a way to identify leaks earlier, organize inspections more safely and manage risk more systematically. It is important to emphasize that Acoustic Imaging Leak Detection System is not a replacement for all traditional detection techniques, but an important piece of the puzzle. In actual projects, we usually combine Acoustic Imaging Leak Detection System with concentration detection, process interlocks and manual inspections, using a layered defense approach to improve overall leak‑control capability. If your site is facing issues such as many high‑level points with frequent scaffolding, late detection and slow troubleshooting of small leaks, or heavy inspection pressure at night and in bad weather, you may want to consider deploying an ultrasonic imaging leak detection system like Acoustic Imaging Leak Detection System—letting problems first appear clearly on the screen so that people can address them more calmly and safely. To discuss your application or see whether Acoustic Imaging Leak Detection System is a fit, please get in touch via our Get in Touch form.
A complete engineer's guide to DAQ systems: PCIe/PXI cards, USB/Ethernet recorders, and modular multi-channel systems. It covers dynamic range, PTP sync, IEPE, and how to select the right DAQ for NVH, vibration and acoustic testing.

A data acquisition system (DAQ) is the measurement front end: it converts analog sensor outputs—such as voltage, current, and charge—into digital data. The signal is first conditioned (amplification, filtering, isolation, IEPE excitation, etc.) and then fed to an ADC, where it is digitized at the specified sampling rate and resolution; software subsequently handles visualization, storage, and analysis. This article systematically reviews common DAQ form factors, including PCIe/PXI plug-in cards, external USB/Ethernet/Thunderbolt devices, integrated data recorders, and modular distributed systems. It also summarizes key selection criteria—signal compatibility, channel headroom and scalability, sampling rate and anti-aliasing filtering, dynamic range, THD+N, clock synchronization and inter-channel delay, as well as delivery and after-sales support—to help readers quickly build a clear understanding of DAQ systems.

Why Data Acquisition Matters

In the real world, physical stimuli such as temperature, sound, and vibration are everywhere. We can sense them directly; in a sense, the human body itself is a "data acquisition system": our senses act like sensors that capture signals, the nervous system handles transmission and encoding, the brain fuses and analyzes the information to make decisions, and muscles execute actions—forming a closed feedback loop. Progress in science and engineering ultimately comes from observing, understanding, and validating the world with more reliable methods. Physical quantities such as temperature, sound pressure, vibration, stress, and voltage are the primary carriers of information.
However, human perception is subjective and cannot quantify these changes accurately and repeatably; and in high-current, high-temperature, high-stress, or high-SPL environments, direct exposure can even cause irreversible harm. To enable measurement that is quantifiable, recordable, and safer, data acquisition systems (DAQ) came into being. Put simply, a DAQ is an analog front end that converts a sensor's analog output (voltage, current, charge, etc.) into digital data at a defined sampling rate and resolution, and hands it to software for display, logging, and analysis (typically with the required signal conditioning). It helps engineers see problems more clearly—and solve them.

In today's development cycles—from cars and aircraft to consumer electronics—it's difficult to validate performance, safety, and reliability efficiently without data acquisition. In durability testing, DAQ records cyclic load and strain for fatigue-life analysis; in noise control, synchronous multi-point acquisition of vibration and sound pressure helps identify noise sources and transmission paths. This quantitative capability is what provides a scientific basis for engineering improvements.

DAQ applications span a wide range of fields:

• Automotive NVH and mechanical vibration testing: acquiring body vibration, noise, engine balance, structural modal data, and more—helping engineers improve vehicle ride comfort.
• Audio testing: in the development and production of speakers, microphones, headphones, and other audio devices, DAQ is used to measure frequency response, SPL, distortion, and more, to verify acoustic performance.
• Industrial automation and monitoring: DAQ is widely used for process monitoring, condition monitoring, and industrial control. For example, it acquires temperature, pressure, flow, and torque sensor signals to enable real-time monitoring and alarms, and it often must run continuously with high stability and strong immunity to interference.
• Research labs and education: from physics and biology experiments to seismic monitoring and weather observation, DAQ is a basic tool for capturing raw data. It makes data recording automated and digital, which simplifies downstream processing.

As quality and performance requirements continue to rise across industries, DAQ has become an indispensable set of "eyes and ears," giving engineers the ability to observe and interpret complex phenomena.

Common DAQ Form Factors

Depending on interface, level of integration, and the application, DAQ hardware comes in several common forms. Below are a few typical DAQ card/system categories:

| Type | Form factor / Interface | Advantages | Limitations | Typical application |
| --- | --- | --- | --- | --- |
| Plug-in DAQ card | PCIe / PXI / PXIe | Low latency; high throughput; strong real-time performance | Not portable; requires chassis/industrial PC; expansion limited by platform | Fixed labs; rack systems; high-throughput acquisition |
| External DAQ device | USB / Ethernet / Thunderbolt | Portable; fast setup; laptop-friendly | Bandwidth/latency depends on interface; driver stability is critical; mind power and cabling | Field testing; mobile measurements; general-purpose DAQ |
| Integrated data recorder | Built-in battery/storage/display (standalone) | Ready out of the box; easy in the field; straightforward offline logging | Channel count/algorithms often limited; weaker expandability; post-processing depends on export | Patrol inspection; quick diagnostics; long-duration offline logging |
| Modular distributed system | Mainframe + modules; network expansion (synchronized) | Mix signal types as needed; easy channel scaling; strong synchronization | Planning matters: sync/clock/cabling; system design becomes more important at scale | Synchronized multi-physics measurement; high channel-count scalability; distributed, multi-site testing |

Plug-in DAQ cards (internal): these are boards installed inside a computer, with typical interfaces such as PCI, PCIe, and PXI (CompactPCI).
They plug directly into the PC/chassis bus and are powered and controlled by the host, providing high bandwidth and strong real-time performance for high-throughput applications in desktop or industrial PC environments. The trade-off is portability—these are usually used in fixed labs or rack systems. External DAQ devices (modules): DAQ hardware that connects to a computer via USB, Ethernet, Thunderbolt, and similar interfaces. USB DAQ is common—compact, plug-and-play, and well-suited to laptops and field testing. Ethernet/network DAQ enables longer cable runs and multi-device connections. External units are generally portable with their own enclosure, but high-end models may be somewhat limited in real-time performance by interface bandwidth (USB latency is typically higher than PCIe). Portable / integrated data recorders: These integrate the DAQ hardware with an embedded computer, display, and storage to form a standalone instrument. They’re convenient in the field and can acquire, log, and do basic analysis without an external PC. Examples include portable vibration acquisition/analyzer units with tablet-style displays and handheld multi-channel recorders. They are typically optimized for specific applications, ready to use out of the box, and well-suited for mobile measurements or quick on-site diagnostics. Modular distributed DAQ system platform: Built from multiple acquisition modules and a main controller/chassis, allowing flexible channel scaling and mixing of different function modules. Each module handles a certain signal type or channel count and connects to the controller (or directly to a PC) over a high-speed, time-synchronized network (e.g., EtherCAT, Ethernet/PTP). This architecture offers very high scalability and distributed measurement capability; modules can be placed close to the test article to reduce sensor cabling. 
For example, CRYSOUND's SonoDAQ is a modular platform: each mainframe supports multiple modules and can be expanded via daisy-chain or star topology to thousands of channels. Modular systems are a strong fit for large-scale, cross-area synchronized measurement.

What Makes Up a DAQ System?

A complete data acquisition system typically includes the following key building blocks:

• Sensors: the front end that converts physical phenomena into electrical signals—for example, microphones that convert sound pressure to voltage, accelerometers that convert acceleration to charge/voltage, strain gauges that convert force to resistance change, and thermocouples for temperature measurement.
• Signal conditioning: electronics between the sensor and the DAQ ADC that adapt and optimize the signal. Typical functions include gain/attenuation (scaling signal amplitude into the ADC input range), filtering (e.g., anti-aliasing low-pass filtering to remove noise and high-frequency content), isolation (signal/power isolation for noise reduction and protection), and sensor excitation (providing power to active sensors, such as constant-current sources for IEPE sensors).
• Analog-to-digital converter (ADC): the core component that converts continuous analog signals into discrete digital samples at the configured sampling rate and resolution. Sampling rate sets the usable bandwidth (it must satisfy Nyquist and include margin for the anti-aliasing filter transition band), while resolution (bit depth) affects quantization step size and usable dynamic range. Many DAQ products use 16-bit or 24-bit ADCs; in high-dynamic-range acoustic/vibration front ends (such as platforms like SonoDAQ), you may also see 32-bit data output/processing paths to better cover wide ranges and weak signals (depending on the specific implementation and how the specs are defined).
• Data interface and storage: the ADC's digital data must be delivered to a computer or storage media.
Plug-in DAQ writes directly into host memory over the system bus, while USB/Ethernet DAQ streams data to PC software through a driver. In addition to USB/Ethernet/wireless data transfer, SonoDAQ also supports real-time logging to an onboard SD card, allowing standalone recording without a PC—useful as protection against link interruptions or for long-duration unattended acquisition.

• Host PC and software: the back end of a DAQ system. Most modern DAQ relies on a computer and software for visualization, logging, and analysis. Acquisition software sets sampling parameters, controls the measurement, displays waveforms in real time, and processes data for results and reporting. Different vendors provide their own platforms (e.g., OpenTest, NI LabVIEW/DAQmx, DewesoftX, HBK BK Connect), and software usability and capability directly impact productivity. In addition, CRYSOUND's OpenTest supports protocols such as openDAQ and ASIO, enabling it to work with multiple DAQ systems.

What Specs Matter When Selecting a DAQ?

Three common selection pitfalls:

• Focusing only on "sampling rate / bit depth" while ignoring front-end noise, range matching, anti-aliasing filtering, and synchronization metrics: the data may "look like it's there," but the analysis is unstable and not repeatable.
• Sizing channel count to "just enough" with no headroom: once you add measurement points, you're forced to replace the whole system or stack a second system—increasing cost and integration effort.
• Focusing only on hardware while ignoring software and workflow: configuration, real-time monitoring, batch testing, report export, and protocol compatibility (openDAQ/ASIO, etc.) directly determine throughput.

What you should evaluate:

• Signal types to acquire: clearly defining your signal types is the first step in selection. Acoustic/vibration measurements are very different from stress, temperature, and voltage measurements.
Traditional systems often support only a subset of signal types—for example, only sound pressure and acceleration—so when the requirement expands to temperature, you may need a second system, which increases budget and adds integration/synchronization complexity. SonoDAQ takes a modular platform approach: by inserting the required signal-type modules, you can expand capability within one system and run synchronized multi-physics tests, configuring everything you need on one platform.

• Channel count and scalability: first determine how many signals you need to acquire and choose a DAQ with enough analog input channels (or a system that can expand). It's best to leave some margin for future points—for example, if you need 12 channels today, consider 16 or more. Equally important is scalability: SonoDAQ can be synchronized across multiple units to scale to hundreds or even thousands of channels while maintaining inter-channel acquisition skew below 100 ns, which suits large-scale testing. By contrast, fixed-channel devices cannot be expanded once you exceed capacity, forcing a replacement and increasing cost.

• Match sampling rate to signal bandwidth: start with the highest frequency/bandwidth of interest. The baseline is Nyquist (sampling rate > 2× the highest frequency). In practice, you also need margin for the anti-aliasing filter transition band, so many projects start at 2.5–5× bandwidth and then fine-tune based on the analysis method (FFT, octave bands, order tracking, etc.). For example, if engine vibration content tops out at 1 kHz, you might start at 5.12 kS/s or higher; for speech/acoustics that needs to cover 20 kHz, common choices are 51.2 kS/s or 96 kS/s. In short: base it on the spectrum, keep some margin, and align it with your filtering and analysis.
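The sizing rule above (Nyquist as the baseline, 2.5–5× bandwidth as the practical margin) is easy to capture in a small helper. This is an illustrative sketch, not vendor tooling; the function name, the default margin, and the fixed list of standard rates are assumptions:

```python
# Illustrative helper for the sampling-rate rule of thumb described above:
# satisfy Nyquist, apply a practical margin (2.5-5x bandwidth), then round
# up to a standard rate. The margin default and rate list are assumptions.

STANDARD_RATES_KSPS = [5.12, 12.8, 25.6, 51.2, 96.0, 102.4, 192.0, 204.8]

def recommend_rate_ksps(fmax_khz, margin=2.5):
    """Smallest standard rate >= margin * fmax (margin >= 2 to satisfy Nyquist)."""
    if margin < 2:
        raise ValueError("margin below 2 violates Nyquist")
    target = margin * fmax_khz
    for rate in STANDARD_RATES_KSPS:
        if rate >= target:
            return rate
    raise ValueError("bandwidth exceeds the supported rate list")

# Engine vibration content up to 1 kHz -> 5.12 kS/s (the article's example)
print(recommend_rate_ksps(1.0))   # -> 5.12
# Acoustics covering 20 kHz -> 51.2 kS/s
print(recommend_rate_ksps(20.0))  # -> 51.2
```

The final choice should still be tuned to the analysis method (FFT, octave bands, order tracking) and the anti-aliasing filter's transition band, as the text notes.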
• Measurement accuracy and dynamic range: if your application needs to resolve weak signals while also covering large signal swings—for example, NVH tests often need to capture very low noise in quiet conditions and also record high SPL under strong excitation—you need a high-dynamic-range, high-resolution DAQ (24-bit ADC or higher, dynamic range > 120 dB). For audio testing, where distortion and noise floor matter and you want the DAQ's self-noise to be well below the DUT's, choose a low-noise, high-SNR front end and check vendor specs such as THD+N.

• Environment and use constraints: think about where the DAQ will be used: on a lab bench, on the factory floor, or outdoors in the field. If you need to travel frequently or test on a vehicle, a portable/rugged DAQ is usually a better fit. For scenarios without stable power for long periods, built-in battery capability and battery runtime become critical.

• Lead time and after-sales support: after you define the procurement need, delivery lead time is a practical factor you can't ignore. If your schedule is tight, a 2–3 month lead time can directly delay project kickoff and execution, so evaluate the supplier's delivery commitment. Support is equally important: training, responsiveness when issues occur, and whether remote or on-site assistance is available. Also review warranty terms, software upgrade policy, and support response mechanisms—these directly affect long-term system stability and overall project efficiency.

With the above steps, you can narrow down the DAQ characteristics that fit your application and make a defensible choice from a crowded product list. In short: start from requirements, focus on the key specs, plan for future expansion, and don't ignore vendor maturity and support. Choose the right tool, and testing becomes far more efficient.

FAQ

Q: Can I use a sound card as a DAQ?
A: For a small number of audio channels where synchronization/range/calibration requirements are not strict, a sound card can "work" at a basic level. But in engineering test work, common issues are: no IEPE excitation, insufficient input range and noise floor, uncontrolled channel-to-channel sync, and driver latency that is high and unstable. If you need repeatable, traceable test data, use a professional DAQ front end.

Q: What's the difference between a DAQ and an oscilloscope?
A: An oscilloscope is more of an electronics debugging tool—great for capturing transients and doing quick troubleshooting. A DAQ is more of a long-duration, multi-channel, time-synchronized acquisition and analysis system, with an emphasis on channel scalability, synchronization consistency, long-term stability, and data management.

Q: How do I choose the sampling rate?
A: Start from the highest frequency/bandwidth of interest and meet Nyquist (>2× fmax) as a baseline. In practice, also account for the anti-aliasing filter transition band and your analysis method; starting at 2.5–5× bandwidth is usually safer. If you're unsure, prioritize proper filtering and dynamic range first, then optimize sampling rate.

Q: What is IEPE, and when do I need it?
A: IEPE is a constant-current excitation scheme used by sensors such as accelerometers and IEPE measurement microphones, with power and signal on the same cable. If you use IEPE sensors, your DAQ front end must support IEPE excitation, an appropriate isolation/grounding strategy, and suitable input range and bandwidth.

Q: What should I check for multi-channel / multi-device synchronization?
A: Focus on three things: a common clock source (external clock/PTP/GPS, etc.), channel-to-channel sampling skew/delay, and trigger/alignment strategy. For NVH, array measurements, and structural modal testing, sync performance often matters more than single-channel specs.

Q: How do I estimate channel count—and should I leave headroom?
A: List the “must-measure” signals and points first, then add auxiliary channels such as tach/trigger/temperature. A good rule is to reserve at least 20%–30% headroom, or choose a modular platform that scales, so you’re not forced to replace the system when points get added.

If you’d like to learn more about SonoDAQ, the latest intelligent sound & vibration data acquisition system from CRYSOUND, including its key features, typical application scenarios, and common configuration options, please fill out the Get in touch form below to contact the CRYSOUND team. Based on your constraints—such as signal types, channel count, sampling rate/bandwidth, synchronization requirements, and on-site environmental conditions—we can provide a product demo and practical configuration recommendations.
As the AR glasses market transitions from proof-of-concept to large-scale commercialization, product capabilities in audio and haptic interaction continue to expand, driving increased demands for production-line testing. With key modules such as audio and the VPU (Vibration Processing Unit), AR glasses production-line testing is evolving from simple functional validation to consistency control aimed at enhancing real-world user experience. Based on actual mass production project experience, this article introduces audio and VPU testing solutions for different workstations, with a focus on free-field audio testing, VPU deployment, and fixture design, providing a practical reference for scaling AR glasses manufacturing.

Accelerating Market Expansion of AR Glasses and New Trends in Production-Line Testing

As smart glasses products mature, their functional boundaries are expanding rapidly. According to various industry reports, the shipment volume and investment scale of AR glasses continue to increase, with the market shifting from concept validation to commercialization. Products driven by companies like Meta are increasingly capable of supporting voice interaction, calls, notifications, and recording, supplementing functions traditionally carried out by smartphones and earphones. This shift has transformed AR glasses from a low-frequency conceptual product into a high-frequency wearable interaction terminal. Consequently, audio capabilities have become a core component of the smart glasses experience, directly impacting voice interaction and call quality. At the same time, vibration and haptic feedback have been introduced to enhance interaction confirmation and user perception. As these capabilities become commonplace in mass-produced products, production-line testing is no longer just focused on whether basic functions work but is now required to handle multiple critical capabilities, such as audio and VPU, simultaneously.
This shift presents new challenges for upgrading production-line testing solutions.

Audio Testing Solutions for Multi-Station Production Lines

Audio is one of the functions that most directly influences the user experience of AR glasses, and its production-line testing needs to balance accuracy, consistency, and production efficiency. In a multi-station production environment, audio testing is often distributed across several workstations depending on the assembly phase. At the temple or frame workstations, audio testing focuses more on validating the basic performance of individual microphones or speakers, ensuring that key components meet the requirements early in the assembly process and avoiding costly rework later in the process. At the final assembly workstation, the focus shifts to overall audio performance and system-level coordination. While different workstations focus on different aspects, the fixture positioning, acoustic environment control, and testing process design need to maintain consistent logic throughout. CRYSOUND’s AR glasses audio testing solutions are designed to address this need, with a unified testing architecture that allows flexible deployment across different workstations while maintaining stable and consistent results. The solutions can be divided into the following two types, meeting the aesthetic and UPH requirements of different production lines.
Drawer-Type Single-Unit (1-to-1)
- Easy automation integration
- Standing operation for convenient loading and unloading
- Simultaneous testing of SPK and MIC (airtightness), supporting multi-MIC scenarios
- Serial testing for left and right SPK, parallel testing for multiple MICs
- Supports Bluetooth, USB ADB, and Wi-Fi ADB communication
- Average cycle time (CT): 100 s | UPH: 36

Clamshell Dual-Unit (1-to-2)
- Parallel dual-unit testing for improved efficiency
- Ergonomic seated operation design
- Simultaneous testing of SPK and MIC (airtightness), supporting multi-MIC scenarios
- Serial testing for left and right SPK (single box), parallel testing for multiple MICs
- Supports Bluetooth, USB ADB, and Wi-Fi ADB communication
- Average cycle time (CT): 150 s | UPH: 70

Speaker EQ in AR Glasses: From Pressure Field to Free Field

In traditional earphone products, speaker EQ is usually built in a relatively stable pressure-field environment, where ear coupling and wearing style have a well-controlled impact on the acoustic environment. In contrast, AR glasses typically use open structures for the speakers, with no sealed cavity between the driver and the ear, making their acoustic performance closer to free-field characteristics. This structural difference makes the frequency response of AR glasses speakers more sensitive to sound radiation direction, structural reflections, and wearing posture, and dictates that their EQ strategy cannot simply follow earphone product experience. In the production-line testing and tuning process, the speaker EQ for AR glasses needs to be evaluated and validated under free-field conditions. Due to the open acoustic structure, the frequency response is more susceptible to structural reflections, assembly tolerances, and variations in wearing posture, making it difficult to rely solely on hardware consistency to ensure stable listening across different products.
By introducing EQ tuning, these systemic deviations can be compensated without changing the structural design, improving the consistency of audio performance during mass production. The focus of the testing solution is not to pursue idealized sound quality, but rather to capture real acoustic differences under stable and repeatable free-field testing conditions, providing reliable data for EQ parameter validation. CRYSOUND supports customized EQ algorithms. In one mass production project, speaker EQ calibration was introduced at the final test station under free-field conditions, and the results were accepted by the customer, validating the applicability and practical significance of this solution for glasses products.

VPU Testing Solutions for AR/Smart Glasses

Why AR Glasses Include VPU (Vibration Processing Unit)

As AR/smart glasses increasingly support voice interaction, calls, and notifications, relying on audio feedback alone is no longer enough. In noisy environments, privacy-sensitive scenarios, or with low-volume prompts, users need a feedback method that does not disturb others but is sufficiently clear. This is where the VPU is introduced. Unlike traditional earphones, glasses are not always tightly coupled to the ear, making audio prompts more susceptible to environmental noise. By utilizing vibration or haptic feedback, the system can convey status confirmations, interaction responses, or notifications to users without increasing volume or relying on screens. Therefore, the VPU becomes a key component for supplementing or even replacing some audio feedback in AR glasses.

Primary Roles of VPU in AR Glasses

In current mass-produced smart glasses designs, the VPU typically serves the following functions:
- Interaction confirmation feedback: such as successful voice wake-up, completed command recognition, or the start/stop of recording or photo taking.
- Silent notifications: vibrational feedback in scenarios where audio prompts are unsuitable.
- Enhanced experience: boosting interaction certainty and immersion when combined with audio feedback.

These functions have made the VPU an essential capability in the AR glasses interaction experience, rather than just an optional feature.

Typical VPU Placement in AR Glasses (Why in the Nose Bridge/Pads)

Structurally, the VPU is typically located near the nose bridge or nose pads for three main reasons:
- Proximity to sensitive body areas: The nose bridge is sensitive to small vibrations, providing high feedback efficiency.
- Stable and consistent coupling: Compared to the temples, the nose bridge has a more stable and consistent contact with the face, ensuring better vibration transmission.
- No interference with the audio device layout: Avoids interference with speakers and microphones in the temple region.

Therefore, during production-line testing, the VPU is often tested as an independent target, requiring dedicated verification at the frame or final assembly stage.

VPU Testing Implementation and Consistency Control on the Production Line

Based on the functional positioning and structural characteristics of the VPU in AR glasses, VPU testing is typically scheduled based on the product form and assembly progress in mass production. In some cases, testing may even be moved earlier in the process to identify potential VPU issues before they are exacerbated in subsequent assembly stages. It is important to note that production-line testing environments differ fundamentally from laboratory validation environments. In laboratory testing, the VPU is typically tested as a standalone component under simplified conditions and higher excitation levels (e.g., 1 g). However, in production-line environments, the VPU is already integrated into the frame or complete product, requiring excitation conditions that closely mimic those of real-world wearing scenarios.
In practice, production-line VPU testing typically takes place in the 0.1 g–0.2 g, 100 Hz–2 kHz excitation range, verifying consistency in VPU performance under realistic physical conditions. CRYSOUND’s AR glasses VPU production-line testing solution uses the CRY6151B Electro-Acoustic Analyzer as the testing and analysis platform. The vibration table provides stable excitation, the product’s VPU response is acquired in sync with a reference accelerometer, and software analysis evaluates key parameters such as frequency response (FR) and total harmonic distortion (THD). This test architecture balances testing effectiveness and production-line throughput, meeting the deployment needs for VPU testing at different stations. Compared to audio testing, VPU testing is more sensitive to testing configurations and fixture design, with less room for error and greater difficulty in consistency control. Based on experience from multiple projects, fixture design must fully account for structural differences in locations such as the nose bridge and nose pads. It is important to prioritize materials and contact methods that facilitate vibration transmission, and to design standardized fixture shapes that keep the fixture's center of gravity aligned with the vibration table's working plane, minimizing the introduction of additional variables at the structural level. By following these design principles, the stability and repeatability of VPU test results can be improved in a production-line environment, providing reliable support for validating the product's VPU capabilities.

From Functional Testing to Experience Constraints

In AR glasses production lines, the role of testing is evolving. In the past, audio or vibration modules were more likely to be treated as independent functions, with the goal of confirming whether they were "functional."
However, with the current form of the product, these modules directly influence voice interaction, wearing comfort, and overall experience. As a result, test results now serve as a prerequisite for overall product performance. For example, audio and VPU modules are no longer just performance verification items; they now play a role in the consistency control of the user experience. The interaction between audio performance, vibration feedback, and structural assembly means that production-line testing needs to identify potential issues that could affect the experience in advance, rather than just filtering out problems at the final inspection stage. This change is pushing test strategies from "functional pass" to "experience control." If you’d like to learn more about AR glasses audio and VPU testing solutions—or discuss your production process and inspection targets—please use the “Get in touch” form below. Our team can share recommended settings and an on-site workflow tailored to your production conditions.
Octave-band analysis can be implemented in two fundamentally different ways: FFT binning (integrating PSD/FFT bins into 1/1- and 1/3-octave bands) and a true octave filter bank (standards-oriented bandpass filters + RMS/Leq averaging). In this post, we compare how the two methods work, where their results match, where they diverge (scaling, window ENBW, band-edge weighting, latency, transient response), and how OpenTest supports both for acoustics, NVH, and compliance measurement. For a detailed explanation of the concepts, read this → Octave-Band Analysis: The Mathematical and Engineering Rationale

Octave-band filter banks (true octave / CPB filter bank)

Parallel bandpass filters + energy detector + time averaging

A filter-bank (true octave) analyzer typically:
- Designs a bandpass filter H_b(z) (or H_b(s)) for each band center frequency.
- Runs the filters in parallel to obtain band signals y_b(t).
- Computes band mean-square/power and applies time averaging to output band levels.

To be comparable across instruments, filter magnitude responses must satisfy IEC/ANSI tolerance masks (class) for the specified filter set. [1][3]

IIR vs FIR: why IIR (cascaded biquads) is common in practice
- IIR advantages: lower order for a given roll-off, lower compute, good for real-time/embedded; stable when implemented as SOS/biquads.
- FIR advantages: linear phase is possible (useful when waveform shape matters); design/verification can be more straightforward.
For band-level outputs, phase is usually not the primary concern, so IIR filter banks are common.

Multirate processing: the “secret weapon” of CPB filter banks

Low-frequency CPB bands are very narrow. Implementing them at the full sampling rate is inefficient. A common strategy is to group bands by octave and downsample for low-frequency groups:
- Low-pass then decimate (e.g., by 2 per octave) for lower-frequency groups.
- Implement the corresponding bandpass filters at the reduced sampling rate.
- Ensure adequate anti-aliasing before decimation.
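The filter-bank pipeline described above (parallel bandpasses, then an RMS energy detector) can be sketched in a few lines of Python with SciPy. This is a minimal sketch, not a compliant implementation: the function names are ours, and Butterworth skirts only approximate the IEC 61260-1 tolerance mask, so a real design must still be verified against the class limits.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def third_octave_bank(fs, f_centers, order=4):
    """One Butterworth SOS bandpass per 1/3-octave band (illustrative only)."""
    G = 10 ** (3 / 10)          # base-10 octave ratio per IEC 61260-1
    half = G ** (1 / 6)         # edge multiplier G^(1/(2b)) with b = 3
    bank = []
    for fm in f_centers:
        # Edges at fm/half and fm*half; Butterworth skirts are NOT the IEC mask.
        sos = butter(order, [fm / half, fm * half], btype="bandpass",
                     fs=fs, output="sos")
        bank.append(sos)
    return bank

def band_rms(x, bank):
    """Energy detector: RMS of each bandpass output (no time weighting here)."""
    return [float(np.sqrt(np.mean(sosfilt(sos, x) ** 2))) for sos in bank]

fs = 48000
centers = [500.0, 630.0, 800.0, 1000.0, 1250.0]   # nominal labels, exact edges
bank = third_octave_bank(fs, centers)
t = np.arange(fs) / fs
levels = band_rms(np.sin(2 * np.pi * 1000 * t), bank)  # 1 kHz unit-amplitude tone
```

For a unit-amplitude 1 kHz tone, the 1 kHz band should report an RMS near 1/√2 ≈ 0.707, while adjacent bands are strongly attenuated.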
Time averaging / time weighting: band levels are statistics, not instantaneous values

Band levels typically require time averaging. Common options include block RMS, exponential averaging, or Leq (energy-equivalent level). In sound level meter contexts, IEC 61672-1 defines Fast/Slow time weightings (Fast ~125 ms, Slow ~1 s). [5][6] Engineering implication: different time constants produce different readings, so the time weighting must be stated in reports.

How to validate that a filter bank behaves “like the standard”
- Sine sweep: verify passband behavior and adjacent-band isolation; observe time delay effects.
- Pink/white noise: verify average band levels and variance/stabilization time; check effective bandwidth behavior.
- Impulse/step: examine ringing and time response (critical for transient use).
- Cross-check against a known compliant reference instrument/implementation.

From band definitions to compliant digital filters: an end-to-end workflow (conceptual)
- Choose the band system: base-10/base-2, the fraction 1/b (commonly b=3); generate exact fm and f1/f2.
- Choose the performance target: which standard edition and which class/mask tolerance?
- Choose the filter structure: IIR SOS for real-time; FIR or forward-backward filtering if phase/zero-phase is required.
- Design each bandpass: map f1/f2 into the digital domain correctly (e.g., pre-warp for the bilinear transform).
- Implement multirate if needed: decimate for low-frequency groups with sufficient anti-alias filtering.
- Verify: magnitude response vs mask; noise tests for effective bandwidth; sweep/impulse tests for time response.
- Calibrate and report: units and reference quantities, averaging/time weighting, method details.

Time response explained: group delay, ringing, and averaging all shape readings

A band-level analyzer is a time-domain system (filter → energy detector → smoother), so readings are governed by multiple time scales:
- Filter group delay: how late events appear in each band.
- Filter ringing/decay: how long a short pulse “rings” within a band.
- Energy averaging/time weighting: the time resolution vs fluctuation of the output level.

Thus, for transients (impacts, start/stop events, sweeps), different compliant implementations can yield different peak levels and time tracks—consistent with ANSI’s caution. [3] Rule of thumb: for steady-state contributions, use longer averaging for stability; for transient localization, shorten averaging but accept higher variability and lock down algorithm details.

Common real-time pitfalls
- Forgetting anti-aliasing in the decimation chain: low-frequency bands become contaminated by aliasing.
- Numerical instability of high-Q low-frequency IIR sections: use SOS/biquads and sufficient precision.
- Averaging in dB: always average in energy/mean-square, then convert to dB.
- Assuming band energies must sum exactly to total energy: standard filters are not necessarily power-complementary; verify using standard-consistent criteria instead.

Octave-Band Filter Bank Analysis in OpenTest

OpenTest supports octave-band analysis using a filter-bank approach:
1) Connect the device, such as SonoDAQ Pro.
2) Select the channels and adjust the parameter settings. For an external microphone, enable IEPE and switch to acoustic signal measurement.
3) In the Octave-Band Analysis section under Measurement Mode, choose the IEC 61260-1 algorithm. It supports real-time analysis, linear averaging, exponential averaging, and peak hold.
4) After configuring the parameters, click the Test button to start the measurement.
5) A single recording can be analyzed simultaneously in 1/1-octave, 1/3-octave, 1/6-octave, 1/12-octave, and 1/24-octave bands.

Figure 1: Octave-Band Filter Bank Analysis in OpenTest

FFT binning and FFT synthesis

FFT binning: convert a narrowband spectrum into CPB band integrals
- Estimate the spectrum (single FFT, Welch PSD, or STFT).
- Integrate/sum within each octave/fractional-octave band to obtain band power.
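The two steps above can be sketched in Python as follows. This is a minimal sketch with several stated assumptions: a Hann window, periodogram normalization by the window’s power, and overlap-based partial-bin weighting at the band edges; the function names are illustrative.

```python
import numpy as np

def psd_one_sided(x, fs):
    """Hann-windowed periodogram, normalized by sum(w^2); one-sided scaling
    doubles all bins except DC and Nyquist (assumes even-length x)."""
    N = len(x)
    w = np.hanning(N)
    X = np.fft.rfft(x * w)
    psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))
    psd[1:-1] *= 2
    return np.fft.rfftfreq(N, 1 / fs), psd

def band_power(freqs, psd, f1, f2):
    """Integrate PSD over [f1, f2]; each bin contributes in proportion to
    the overlap of its width with the band (partial-bin weighting)."""
    df = freqs[1] - freqs[0]
    overlap = np.clip(np.minimum(freqs + df / 2, f2)
                      - np.maximum(freqs - df / 2, f1), 0.0, df)
    return float(np.sum(psd * overlap))

fs, N = 48000, 48000
t = np.arange(N) / fs
freqs, psd = psd_one_sided(np.sin(2 * np.pi * 1000 * t), fs)
k = 10 ** (1 / 20)                                  # 1/3-octave edge multiplier
p_1k = band_power(freqs, psd, 1000 / k, 1000 * k)   # expect ~A^2/2 = 0.5
```

This also doubles as the “pure tone” scaling self-check: integrating the one-sided PSD over the 1 kHz band recovers the tone’s mean-square power A²/2.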
This is common in software/offline work because a single FFT provides a high-resolution spectrum that can be re-binned into any band system (1/1, 1/3, 1/12, …).

Key challenge #1: FFT scaling and window corrections

After an FFT, scaling depends on your definitions: 1/N normalization, amplitude vs power vs PSD, one-sided vs two-sided spectrum, and windowing. For noise measurements, ENBW is crucial; ignoring it can introduce systematic offsets. [7] A practical PSD normalization (periodogram form): convert to a one-sided PSD by multiplying by 2 except at DC (and Nyquist, if present). This yields PSD in units of (input unit)²/Hz and supports energy consistency checks by integrating PSD over frequency.

Two quick self-checks for scaling
- White noise check: generate noise with known variance σ²; integrating the one-sided PSD over 0..fs/2 should recover ≈σ² (accounting for the ×2 rule).
- Pure tone check: generate a sine with amplitude A (RMS = A/√2); integrating spectral energy should recover ≈A²/2 (subject to leakage and window choice).
If both checks pass, your FFT scaling is likely correct; then partial-bin weighting and octave binning become meaningful.

Key challenge #2: band edges rarely align to bins → partial-bin weighting

Hard include/exclude decisions at band edges cause step-like errors, especially at low frequency where bands are narrow. Use overlap-based weighting for the boundary bins: weight each edge bin by the fraction of its width that falls inside the band.

Does zero-padding solve edge misalignment? (common misconception)

Zero-padding interpolates the displayed spectrum but does not improve true frequency resolution (which is set by the original window length). It can reduce visual stair-stepping but cannot turn 1–2-bin low-frequency bands into reliable band-level estimates. Fundamental fixes are longer windows or multirate processing/filter banks.

Key challenge #3: time–frequency trade-off (window length sets low-frequency accuracy and delay)

FFT resolution is Δf = fs/N.
Low-frequency 1/3-octave bands can be only a few Hz wide, so achieving enough bins per band requires very large N, increasing latency and smoothing transients.

Root cause: 1/3 octave is constant-Q, but STFT uses constant-Δf bins

In CPB, band width scales with frequency (Δf_band ∝ f, constant-Q). In STFT, bin spacing is constant (Δf_bin constant). Therefore low-frequency CPB needs extremely fine Δf_bin (long windows), while high frequency is over-resolved.

Solution routes: long-window STFT vs multirate STFT vs CQT/wavelets
- Long-window STFT: simplest, but high latency and transient smearing.
- Multirate STFT: downsample low-frequency content and FFT at lower fs, similar in spirit to multirate filter banks.
- Constant-Q transform (CQT) / wavelets: naturally logarithmic resolution, but matching IEC/ANSI masks requires extra calibration/validation. [4]
For compliance measurements, standards-oriented filter banks are preferred; for research/feature extraction, CQT/wavelets can be attractive.

FFT synthesis: constructing per-band filtering in the frequency domain

FFT synthesis pushes the FFT approach closer to a filter bank:
- Define a frequency-domain weight W_b[k] per band (brick-wall or smooth/mask-like).
- Compute Y_b[k] = X[k]·W_b[k] and IFFT to get y_b[n].
- Compute band RMS/averages from y_b[n].
It can easily implement zero-phase (non-causal) filtering. For strict IEC/ANSI matching, W_b and the normalization must be carefully designed and validated.

Making FFT synthesis stream-like: OLA, dual windows, and amplitude normalization

To output continuous time signals per band, use overlap-add (OLA): frame, window, FFT, apply W_b, IFFT, synthesis window, and OLA. Choose analysis/synthesis windows to satisfy COLA (constant overlap-add) conditions (e.g., Hann with 50% overlap) to avoid periodic level modulation.

If the goal is to match standard filters, how should W_b be chosen?

W_b[k] depends on what you want to match:
- Match brick-wall integration: W_b is hard 0/1 within [f1, f2].
- Match IEC/ANSI filter behavior: |W_b(f)| approximates the standard mask and effective bandwidth (matches ∫|W_b|²).
- Match energy complementarity for reconstruction: design Σ_b |W_b(f)|² ≈ 1.
You typically cannot satisfy all three perfectly at once; define your priority (compliance vs decomposition/reconstruction) up front.

Energy-conserving frequency-domain filter banks: why Σ|W_b|² matters

If you want band energies to sum to total energy (within numerical error), a common design aims for approximate power complementarity. IEC/ANSI masks do not necessarily enforce strict complementarity, so don’t assume exact additivity in compliance contexts.

Welch/averaging strategies: how to make FFT band levels stable
- Use Welch averaging (segment, window, overlap, average power spectra).
- Average in the power domain (|X|² or PSD), then convert to dB.
- For non-stationary signals, consider STFT to obtain time–band matrices.
- Report window type, overlap, averaging count, and ENBW/CG treatment.

FFT-Binning Analysis in OpenTest

OpenTest supports octave-band analysis based on FFT binning:
1) Connect the device, such as SonoDAQ Pro.
2) Select the channels and adjust the parameter settings. For an external microphone, enable IEPE and switch to acoustic signal measurement.
3) In the Octave-Band Analysis section under Measurement Mode, choose the FFT-based algorithm.
4) A single recording can be analyzed simultaneously in 1/1-octave, 1/3-octave, 1/6-octave, 1/12-octave, and 1/24-octave bands.

Figure 2: FFT-Binning Octave-Band Analysis in OpenTest

Filter-bank vs FFT/FFT synthesis: differences, equivalence conditions, and trade-offs

A comparison table

| Dimension | Filter-bank (true octave / CPB) | FFT binning / FFT synthesis |
| --- | --- | --- |
| Standards compliance | Easier to match IEC/ANSI magnitude masks; mainstream for hardware instruments. [1][3] | Hard binning behaves like band integration; matching masks requires extra weighting or standard-compliant digital filters. |
| Real-time / latency | Causal real-time possible; latency set by filter order and averaging. | Block processing adds at least one window length of delay; low-frequency resolution often forces longer windows. |
| Transient response | Continuous output, but affected by group delay/ringing; different compliant implementations may differ. [3] | Set by STFT windowing; transients are smeared by windows and sensitive to window type/length. |
| Leakage & corrections | Controlled via filter design; leakage can be managed. | Strongly depends on window and ENBW/scaling; edge-bin misalignment needs partial weighting. [7] |
| Interpretability | RMS after bandpass filtering—aligned with sound level meters and analyzers. | Spectrum estimation + binning—more statistical; interpretation depends on window/averaging settings. |
| Computation | Many filters in parallel; multirate can reduce cost. | One FFT can serve all bands; efficient for offline/batch. |
| Phase & reconstruction | IIR is typically nonlinear phase (fine for levels). | Frequency weights can be zero-phase; reconstruction needs attention to complementarity and transitions. |

When do both methods give (almost) the same answers?

Band-averaged results typically agree closely when:
- You compare averaged band levels (not transient peak tracks).
- The signal is approximately stationary and the observation time is long enough.
- FFT resolution is fine enough that each band contains enough bins (especially at the lowest band).
- FFT scaling is correct (one-sided handling, Δf, window U, ENBW/CG where needed).
- Partial-bin weighting is used at band edges.

Why differences grow for transients and short events

Differences are driven by mismatched time scales: filter banks have band-dependent group delay and ringing but continuous output; STFT uses a fixed window that sets both frequency resolution and time smoothing.
If the event duration is comparable to the window length or filter impulse response, results depend strongly on implementation details.

Error budget: where mismatches usually come from (and how to locate them quickly)
- Wrong averaging/combination in dB: you must average and sum in the energy domain.
- Inconsistent FFT scaling: 1/N conventions, one-sided vs two-sided, Δf, window normalization U.
- Missing window corrections: ENBW for noise; coherent gain/leakage for tones.
- Using nominal frequencies to compute edges instead of exact definitions.
- No partial-bin weighting at band boundaries (especially harmful at low frequency).
- Multirate/anti-alias issues in filter banks.
- Different averaging time constants/windows between methods.
- True method differences: brick-wall binning vs standard filter skirts/roll-off imply systematic offsets.
A strong debugging approach: first match the total mean-square using white noise (scaling/ENBW/partial-bin), then validate band centers and adjacent-band isolation using swept sines or tones.

Engineering checklist: make 1/3-octave analysis correct, stable, and reproducible

Choose a method: compliance → filter bank; offline statistics → FFT binning
- For regulations/type testing/instrument comparability: prefer IEC/ANSI-compliant filter banks and report the standard edition and class. [1][3]
- For offline processing, large datasets, or flexible band definitions: FFT binning can be efficient, but scaling and boundary weighting must be rigorous.
- If you need per-band time-domain signals (modulation, envelope, etc.): consider FFT synthesis or explicit filter banks.

Selecting FFT parameters from the lowest band (example)

Example: fs = 48 kHz; the lowest band of interest is 20 Hz (1/3 octave), whose bandwidth is only a few Hz. If you want at least M = 10 bins per band, you need Δf_bin ≤ bandwidth/10, implying a very large N (~100k points; 2^17 = 131072). This illustrates why real-time compliance often favors filter banks.
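The arithmetic in this example can be packaged as a small helper. This is a sketch; the function name and the 10-bins-per-band target are our own choices, not prescribed by any standard.

```python
import math

def min_fft_size(fs, fm, b=3, bins_per_band=10):
    """Smallest power-of-two FFT length giving at least bins_per_band bins
    in the fractional-octave band centered at fm (base-10 ratio)."""
    G = 10 ** (3 / 10)
    # Exact band width in Hz: fm * (G^(1/(2b)) - G^(-1/(2b)))
    bw = fm * (G ** (1 / (2 * b)) - G ** (-1 / (2 * b)))
    # Need bin spacing df = fs/N <= bw / bins_per_band
    n = math.ceil(fs / (bw / bins_per_band))
    return 2 ** math.ceil(math.log2(n))
```

For fs = 48 kHz and the 20 Hz 1/3-octave band this reproduces the 2^17 = 131072 figure from the example, while the 1 kHz band needs only a few thousand points.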
Typical mistakes that prevent results from matching
- Summing magnitude |X| instead of power |X|² or PSD.
- Averaging in dB instead of in linear power/mean-square.
- Ignoring ENBW/window scaling for noise. [7]
- Computing band edges from nominal frequencies.
- Not stating time weighting/averaging conventions (Fast/Slow/Leq). [5][6]

Recommended validation flow (regardless of implementation)
- Tone-at-center test (or sweep): verify that energy peaks in the correct band and adjacent-band rejection behaves as expected.
- White/pink noise: verify the expected spectral shape in band levels and assess stability/averaging time.
- Cross-implementation comparison: compare your implementation with a known reference on identical signals; isolate scaling vs definition vs filter-skirt differences.
- Record and freeze parameters (band definition, windowing, averaging) in the test report.

Reproducibility checklist: include these in reports so others can recompute your levels
- Band definition: base-10 or base-2? b in 1/b? exact vs nominal used for computation? reference frequency fr?
- Implementation: standard filter bank (IIR/FIR, multirate) vs FFT binning/synthesis; software/library versions.
- Sampling/preprocessing: fs, detrending/DC removal, anti-alias filtering, resampling.
- Time averaging: Leq / block RMS / exponential; time constants, block size, overlap, averaging frames; Fast/Slow context if relevant.
- FFT details (if used): window type, N, hop, zero-padding, PSD normalization, one-sided handling, ENBW/CG, partial-bin weighting.
- Calibration/units: input units and reference quantities (e.g., 20 µPa), sensor calibration factors and dates.
- Output definition: RMS vs peak vs band power; 10log vs 20log conventions; any band aggregation steps.

If you remember one line: document “band definition + time averaging + FFT scaling/window treatment (if any)”. Most disputes disappear.
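The dB-averaging mistake is easy to demonstrate numerically. A minimal sketch, averaging two levels of 60 and 80 dB (function name is ours):

```python
import math

def energy_average_db(levels_db):
    """Average in the linear mean-square (energy) domain, then convert to dB."""
    powers = [10 ** (L / 10) for L in levels_db]
    return 10 * math.log10(sum(powers) / len(powers))

naive = sum([60.0, 80.0]) / 2              # 70.0 dB: wrong for energy quantities
correct = energy_average_db([60.0, 80.0])  # ~77.0 dB: the louder level dominates
```

The 7 dB gap between the two answers is exactly the kind of mismatch that appears when one implementation averages band levels in dB and another averages in power.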
Quick formulas and numeric example (ready for code/report)

Base-10 one-third-octave constants:
G = 10^(3/10) ≈ 1.995262 (octave frequency ratio)
r = 10^(1/10) ≈ 1.258925 (adjacent center-frequency ratio)
k = 10^(1/20) ≈ 1.122018 (edge multiplier about the center)
f1 = fm / k
f2 = fm * k

Example: the 1 kHz one-third-octave band
fm = 1000 Hz
f1 = 1000 / 1.122018 ≈ 891.25 Hz
f2 = 1000 × 1.122018 ≈ 1122.02 Hz
Δf ≈ 230.77 Hz
Q = fm/Δf ≈ 4.33

OpenTest integrates both methods. Download and get started now -> or fill out the form below ↓ to schedule a live demo. Explore more features and application stories at www.opentest.com.

References
[1] IEC 61260-1:2014, Electroacoustics – Octave-band and fractional-octave-band filters – Part 1: Specifications: https://cdn.standards.iteh.ai/samples/13383/3c4ae3e762b540cc8111744cb8f0ae8e/IEC-61260-1-2014.pdf
[3] ANSI S1.11-2004, Octave-Band and Fractional-Octave-Band Analog and Digital Filters: https://webstore.ansi.org/preview-pages/ASA/preview_ANSI%2BS1.11-2004.pdf
[4] HEAD acoustics Application Note: FFT – 1/n-Octave Analysis – Wavelet: https://cdn.head-acoustics.com/fileadmin/data/global/Application-Notes/SVP/FFT-nthOctave-Wavelet_e.pdf
[5] IEC 61672-1:2013, Electroacoustics – Sound level meters – Part 1: Specifications: https://webstore.iec.ch/en/publication/5708
[6] NTi Audio Know-how: Fast/Slow time weighting (IEC 61672-1 context): https://www.nti-audio.com/en/support/know-how/fast-slow-impulse-time-weighting-what-do-they-mean
[7] MathWorks: enbw (equivalent noise bandwidth): https://www.mathworks.com/help/signal/ref/enbw.html
Octave-band analysis converts detailed spectra into standardized 1/1- and 1/3-octave bands using constant-percentage bandwidth on a logarithmic frequency axis. In this post, we explain the mathematical basis of CPB, why IEC 61260-1 and ANSI S1.11 define octave bands the way they do, and how band levels are computed in practice (FFT binning vs. filter-bank RMS). The goal: repeatable, comparable results for acoustics, NVH, and compliance measurements.

What is octave-band analysis, and what problem does it solve?

Octave-band analysis is a family of spectrum analysis methods that partition the frequency axis on a logarithmic scale into band-pass bands. Each band has a constant ratio between its upper and lower cut-off frequencies (constant percentage bandwidth, CPB). Within each band we ignore fine line-spectrum details and focus on the total energy / RMS (or power) in that band. In other words, it is not “what happens at every 1 Hz,” but “how energy is distributed across equal relative bandwidths.” This representation naturally matches human hearing and many engineering systems, whose frequency resolution is often closer to a relative (log) scale than a fixed-Hz scale. It is also a common reporting format required by many standards: room acoustics parameters, sound insulation ratings, environmental noise, machinery noise, wind/road noise, etc., often use 1/3-octave bands.

From linear Hz to log frequency: why CPB looks more like an engineering language

Using equal-width frequency bins (e.g., every 10 Hz) to accumulate energy leads to inconsistent behavior across the spectrum:
- At low frequencies, a 10 Hz bin may be too wide and can smear details.
- At high frequencies, a 10 Hz bin may be too narrow, giving higher variance and less stable estimates for random noise.
In contrast, CPB bandwidth grows with frequency (Δf ∝ f). Each band covers a similar relative change, improving stability and repeatability—important for standardized testing.
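The Δf ∝ f property can be checked in a few lines. A sketch, assuming the base-10 octave ratio G = 10^(3/10) used by IEC 61260-1 (the function name is ours):

```python
def cpb_bandwidth(fm, b=3):
    """Constant-percentage bandwidth: edges at fm * G^(±1/(2b)), so Δf ∝ fm."""
    G = 10 ** (3 / 10)          # base-10 octave frequency ratio
    return fm * (G ** (1 / (2 * b)) - G ** (-1 / (2 * b)))

low, mid, high = cpb_bandwidth(100.0), cpb_bandwidth(1000.0), cpb_bandwidth(10000.0)
# The 1/3-octave band at 10 kHz is exactly 100x wider than the one at 100 Hz.
```

A 10 Hz fixed bin is roughly half of the 1/3-octave bandwidth at 100 Hz but only a tiny fraction of it at 10 kHz, which is the inconsistency the bullets above describe.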
A visual intuition: bandwidth increases on a linear axis, but is uniform on a log axis

Figure 1: the same 1/3-octave bands plotted on a linear frequency axis—bandwidth appears larger at high frequencies

Each horizontal segment represents a 1/3-octave band [f1, f2]; the short vertical mark is the band center frequency fm. On a linear axis, higher-frequency bands look wider.

Figure 2: the same bands on a logarithmic frequency axis—bands become evenly spaced (the essence of CPB)

Once the horizontal axis is logarithmic, these bands appear equally wide and equally spaced; this is exactly what "constant percentage bandwidth" means. These two figures capture the core idea: octave-band analysis uses equal steps on a log-frequency scale, not equal steps in Hz.

Standards and terminology: what do IEC/ANSI/ISO systems actually specify?

In practice, "doing 1/3-octave analysis" is constrained by more than just band edges. Standards specify (or strongly imply): how center frequencies are defined (exact vs. nominal), the octave ratio definition (base-10 vs. base-2), filter tolerances/classes, and even the measurement/averaging conventions used to form band levels.

IEC 61260-1:2014 highlights: base-10 ratio, reference frequency, and center-frequency formulas

IEC 61260-1:2014 is a key specification for octave-band and fractional-octave-band filters. It adopts a base-10 design: the octave frequency ratio is G = 10^(3/10) ≈ 1.99526 (very close to 2, but not exactly 2). The reference frequency is fr = 1000 Hz. It provides formulas for the exact mid-band (center) frequencies and specifies that the geometric mean of the band-edge frequencies equals the center frequency. [1]

Key formulas (rearranged from the standard), with integer band index x: [1]

If the fractional denominator b is odd (e.g., 1, 3, 5, ...):

fm = fr * G^(x/b)

If b is even (e.g., 2, 4, 6, ...):

fm = fr * G^((2x+1)/(2b))

And always:

fm = sqrt(f1 * f2)

Why does the even-b case look "half-step shifted"? Intuitively, the center-frequency grid is evenly spaced on log(f).
When b is even, IEC chooses a half-step offset relative to fr so that band edges align more neatly in common reporting conventions. In practice, a robust implementation is to generate the exact fm sequence using the standard's formula, then compute the edges via f1 = fm / G^(1/(2b)) and f2 = fm * G^(1/(2b)), and only then label bands with the usual nominal frequencies.

View the data with OpenTest (IEC 61260-1 Octave-Band Analysis) ->

Band edges, center frequency, and the bandwidth designator b

Standards commonly use 1/b as the "bandwidth designator": 1/1 is one octave, 1/3 is one-third octave, etc. [1] Once (G, b, fr) are chosen, the entire band set (centers and edges) is fixed mathematically.

Exact vs. nominal: why two "center frequencies" appear for the same band

"Exact" center frequencies are used for mathematically consistent definitions and filter design; "nominal" values are used for labeling and reporting. [1] ISO 266:1997 defines preferred frequencies for acoustics measurements based on the ISO 3 preferred-number series (R10), referenced to 1000 Hz. [2] As a result, the exact geometric sequence is typically labeled with familiar nominal values such as: 20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, ..., 1k, 1.25k, 1.6k, 2k, 2.5k, 3.15k, ..., 20k.

Implementation tip: compute edges from exact frequencies; only round/display as nominal. This avoids drifting away from the standard.

Base-10 vs. base-2: why standards don't insist on an exact 2:1 octave

Although an "octave" is often thought of as 2:1, IEC 61260-1 specifies base-10 (G = 10^(3/10)) rather than G = 2. Key motivations include:

- Alignment with the decimal preferred-number series (ISO 266 is tied to R10). [2]
- International consistency: IEC 61260-1:2014 specifies base-10 and notes that base-2 designs are less likely to remain compliant far from the reference frequency.
[1] In base-10, one-third octave corresponds to a ratio of 10^(1/10) ≈ 1.258925 (also interpretable as 1/10 of a decade), which yields a clean mapping: 10 one-third-octave bands per decade.

"10 one-third-octave bands = 1 decade": why this matters

With base-10 one-third-octave spacing, each step multiplies frequency by r = 10^(1/10). Therefore, 10 consecutive 1/3-octave steps multiply frequency by exactly 10 (one decade). This matches ISO 266/R10 conventions and simplifies tables, plotting, and communication. Standardization values readability and consistency as much as raw mathematical purity.

Figure 3: Base-10 one-third-octave spacing—10 equal ratio steps per decade (×10 in frequency)

ANSI S1.11 / ANSI/ASA S1.11: tolerance classes and a transient-signal caution

ANSI S1.11 (and the later ANSI/ASA adoptions aligned with IEC 61260-1) specifies performance requirements for filter sets and analyzers, including tolerance classes (often class 0/1/2, depending on edition). [3][4] A practical caution in the ANSI documents: for transient signals, different compliant implementations can produce different results. [3] This highlights that time response (group delay, ringing, averaging time constants) matters for transient analysis.

What do class/mask/effective bandwidth actually control?

"I used 1/3-octave bands" is not just a statement about nominal band edges. Standards aim to ensure that different instruments and algorithms yield comparable results by constraining:

- Frequency spacing: the center-frequency sequence and edge definitions (base-10, exact/nominal, f1/f2).
- Magnitude response tolerance (mask): allowable ripple near the passband and required attenuation away from the center.
- Energy consistency for broadband noise: constraints on effective bandwidth so band levels are comparable across implementations.

Effective bandwidth matters because real filters are not ideal brick walls. For broadband noise, the output energy depends on ∫ |H(f)|² S(f) df. Differences in passband ripple, skirts, and roll-off can cause systematic offsets.
Standards constrain effective bandwidth to keep such offsets within acceptable limits. [1][3][4] The transient caution is not a contradiction: masks mainly constrain steady-state frequency-domain behavior, while transients depend on phase/group delay, ringing, and time averaging. [3]

Mathematics: band definitions, bandwidth, Q, and band indexing

CPB and equal spacing on a log axis

CPB is equivalent to equal-width spacing in log frequency. If u = log(f), then every band spans a fixed Δu. Many spectra (e.g., 1/f-type) look smoother and statistically more stable in log frequency.

Band-edge formulas from the geometric-mean definition (general 1/b form)

IEC defines the center frequency as the geometric mean of the edges: fm = sqrt(f1 * f2). [1] For 1/b-octave bands, the edge ratio is f2/f1 = G^(1/b), where G is the octave ratio. Then:

f1 = fm / G^(1/(2b)),  f2 = fm * G^(1/(2b))

For base-10 one-third octave (b = 3): G = 10^(3/10). The adjacent center ratio is r = G^(1/3) = 10^(1/10) ≈ 1.258925; the edge multiplier is k = 10^(1/20) ≈ 1.122018.

Q-factor and resolution: octave analysis is constant-Q analysis

Define Q = fm / (f2 − f1). For CPB bands, Δf = f2 − f1 scales with fm, so Q depends only on b and G (not on frequency). Quick reference (base-10, fr = 1000 Hz):

Fractional octave | Band ratio f2/f1 | Relative bandwidth Δf/fm | Q = fm/Δf
1/1  | 1.995262 | 0.704592 | 1.419
1/2  | 1.412538 | 0.347107 | 2.881
1/3  | 1.258925 | 0.230768 | 4.333
1/6  | 1.122018 | 0.115193 | 8.681
1/12 | 1.059254 | 0.057573 | 17.369

Interpretation: for 1/3 octave, Q ≈ 4.33 and each band is about 23% wide relative to its center. Finer bands (1/6, 1/12) give higher resolution but higher variance for random noise, and they typically require longer averaging.

Band numbering (integer index) and formulaic enumeration

Implementations often use an integer band index x. In IEC, x appears directly in the center-frequency formula: fm = fr * G^(x/b). [1] This provides a stable way to enumerate all bands covering a target frequency range and ensures contiguous, standard-consistent edges.
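This enumeration can be sketched in a few lines of Python (helper names are my own, not from the standard): generate exact centers from the integer index, derive edges from the geometric-mean relation, and invert a frequency back to its band index.

```python
import math

G = 10 ** (3 / 10)   # base-10 octave ratio (IEC 61260-1)
b = 3                # 1/3-octave bands (odd b, so fm = fr * G^(x/b))
fr = 1000.0          # reference frequency, Hz

def center(x):
    """Exact mid-band frequency for integer band index x."""
    return fr * G ** (x / b)

def edges(fm):
    """Band edges satisfying the geometric-mean relation fm = sqrt(f1 * f2)."""
    return fm / G ** (1 / (2 * b)), fm * G ** (1 / (2 * b))

def index_of(f):
    """Invert fm = fr * G^(x/b): nearest integer band index for frequency f."""
    return round(b * math.log10(f / fr) / math.log10(G))

# Enumerate all bands whose exact centers fall near 20 Hz .. 20 kHz
lo, hi = index_of(20.0), index_of(20000.0)
bands = [(x, center(x), *edges(center(x))) for x in range(lo, hi + 1)]
print(bands[0])   # lowest band: index -17, exact center ~19.95 Hz
```

Because consecutive exact centers share an edge, the generated bands are contiguous by construction; rounding to nominal labels should happen only at display time.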
For base-10 (G = 10^(3/10)):

fm = fr * 10^(3x/(10b))

so for 1/3 octave (b = 3) the centers are fm = fr * 10^(x/10), and you can invert as:

x = (10b/3) * log10(fm / fr)

Figure 4: Q factor for common fractional-octave bandwidths (base-10 definition)

Two meanings of "1/3 octave": base-2 vs. base-10—do not mix them

Some literature uses base-2: adjacent centers are spaced by 2^(1/3). IEC 61260-1 and much of modern acoustics practice use base-10: adjacent centers are spaced by 10^(1/10). A quick check: if nominal centers look like 1.0k → 1.25k → 1.6k → 2.0k (R10 style), it is likely base-10.

Mathematical definition of band levels: from PSD integration to dB reporting

Continuous-frequency view: integrate the PSD within the band

An octave-band level is essentially the integral of the power spectral density over a frequency band. For sound pressure p(t) with PSD S_pp(f):

band mean-square = ∫[f1..f2] S_pp(f) df
L_band = 10 * log10(band mean-square / p0²),  with p0 = 20 µPa

For vibration (velocity/acceleration), the same logic applies with different units and reference quantities. Key point: because dB is logarithmic, any summation or averaging must be performed in the linear power/mean-square domain first.

Two discrete implementations: filter-bank RMS vs. FFT/PSD binning

- Filter-bank method: y_b(t) = BandPass_b{x(t)}, then compute mean(y_b²) as the band mean-square (optionally with time averaging).
- FFT/PSD binning method: estimate S_pp(f) (e.g., via periodogram/Welch), then numerically integrate/sum the bins within [f1, f2].

For long, stationary signals, averaged results can be very close. For transients, sweeps, and short events, they often differ.

Be explicit about what spectrum you have: magnitude, power, PSD (and dB/Hz)

- Magnitude spectrum |X(f)|: amplitude units (e.g., Pa), useful for tones/harmonics.
- Power spectrum |X(f)|²: mean-square units (Pa²).
- Power spectral density (PSD): mean-square per Hz (Pa²/Hz), most common for noise.

Because octave-band levels represent band mean-square/power, you must end up integrating/summing in Pa² (or the analogous unit) regardless of the starting representation.

Frequency resolution and one-sided spectra: Δf, 0..fs/2, and the "×2" rule

FFT bin spacing is Δf = fs/N.
A typical discrete approximation is:

band mean-square ≈ Σ S_pp(f_k) * Δf, summed over bins with f1 ≤ f_k ≤ f2

If you use a one-sided spectrum (0..fs/2), to conserve energy you typically multiply all non-DC and non-Nyquist bins by 2 (because negative-frequency power is folded into the positive side). Different software handles these conventions differently, so align definitions before comparing results.

Window corrections: coherent gain (tones) vs. ENBW (noise) are different

Windowing reduces spectral leakage but changes scaling:

- For tone amplitude: correct by the coherent gain (CG), often CG = sum(w)/N.
- For broadband noise/PSD: correct by the equivalent noise bandwidth (ENBW), e.g., ENBW = fs * sum(w²) / (sum(w))². [9]

CG controls peak amplitude; ENBW controls the average noise-floor area. Octave-band levels are energy statistics and are more sensitive to ENBW.

Window      | Coherent Gain (CG) | ENBW (bins)
Rectangular | 1.000              | 1.000
Hann        | 0.500              | 1.500
Hamming     | 0.540              | 1.363
Blackman    | 0.420              | 1.727

Partial-bin weighting: what to do when band edges do not align with FFT bins

Band edges rarely land exactly on bin frequencies. Treat the PSD as approximately constant within each bin of width Δf, and weight boundary bins by their overlap fraction:

band mean-square ≈ Σ w_k * S_pp(f_k) * Δf, where w_k ∈ [0, 1] is the fraction of bin k lying inside [f1, f2]

This produces smoother, more physically consistent band levels when N or the band edges change.

Figure 5: Partial-bin weighting schematic when band edges do not align with FFT bins

A unifying formula: both methods compute ∫ |H_b(f)|² S_xx(f) df

Both filter-bank and PSD binning can be written as:

band mean-square = ∫ |H_b(f)|² S_xx(f) df

Brick-wall binning corresponds to |H_b|² being 1 inside [f1, f2] and 0 outside. A true standards-compliant filter has roll-off and ripple, which is why standards constrain masks and effective bandwidth.

Band aggregation: composing 1-octave from 1/3-octave, and forming total levels

Under ideal partitioning and energy accounting:

- Three adjacent 1/3-octave bands can be combined to approximate one full octave band.
- Summing all band energies over a covered range yields the total energy.

Always combine in the energy domain.
If L_i are band levels in dB, the corresponding energies are E_i = 10^(L_i/10). Then:

L_total = 10 * log10( Σ 10^(L_i/10) )

IEC 61260-1 notes that fractional-octave results can be combined to form wider-band levels. [1]

Effective bandwidth: why standards specify it

Real filters are not ideal rectangles. For white noise (constant PSD S0), the output mean-square is:

mean-square = S0 * ∫ |H(f)|² df = S0 * B_eff

For non-white spectra such as pink noise (PSD ∝ 1/f), standards may define a normalized effective bandwidth with weighting to maintain comparability across typical engineering noise spectra. [1]

Practical implication: FFT "hard binning" implicitly assumes a brick-wall filter with B_eff = f2 − f1. A compliant octave filter has skirts, so B_eff can differ slightly (and by class). To match results, either approximate the standard's |H(f)|² in the frequency domain or document the methodological difference.

Why 1/3 octave is favored (math + perception + engineering trade-offs)

Information density is "just right": finer than 1 octave, steadier than very fine fractions

A single octave band can be too coarse and hide spectral shape; very fine fractions (e.g., 1/12, 1/24) can be unstable and expensive:

- Higher estimator variance for random noise (each band captures less energy).
- More computation and a higher reporting burden.
- Often more detail than regulations or rating schemes need.

One-third octave is the classic compromise: enough resolution for engineering insight, stable enough for standardized measurements, and broadly supported by instruments and software.

Psychoacoustics: critical bands in the mid frequencies are close to 1/3 octave

Many psychoacoustics references describe ~24 critical bands across the audible range, and in the mid-frequency region the critical bandwidth is often similar to a 1/3-octave bandwidth. [7][8] This makes 1/3 octave a natural intermediate representation for problems tied to perceived sound, while still being more standardized than Bark/ERB scales.
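The band-level computation and the energy-domain combination described above can be sketched together in Python. This is a hedged illustration (NumPy; function names are my own), using PSD binning with partial-bin weighting and a flat white-noise PSD so the result is easy to check against S0 * (f2 − f1):

```python
import numpy as np

K = 10 ** (1 / 20)  # base-10 third-octave edge multiplier about the center

def band_mean_square(freqs, psd, f1, f2):
    """Integrate a one-sided PSD (units^2/Hz) over [f1, f2].

    Each bin is treated as a rectangle of width df centered on its bin
    frequency; boundary bins are weighted by their overlap fraction.
    """
    df = freqs[1] - freqs[0]
    lo, hi = freqs - df / 2, freqs + df / 2              # bin edges
    overlap = np.clip(np.minimum(hi, f2) - np.maximum(lo, f1), 0.0, df)
    return float(np.sum(psd * overlap))                  # units^2

def third_octave_level(freqs, psd, fm, ref=20e-6):
    """Third-octave band SPL in dB re 20 uPa for a pressure PSD in Pa^2/Hz."""
    ms = band_mean_square(freqs, psd, fm / K, fm * K)
    return 10 * np.log10(ms / ref ** 2)

# White-noise example: flat PSD S0 -> band mean-square = S0 * (f2 - f1)
fs, N, S0 = 48000, 4800, 1e-8
freqs = np.arange(N // 2 + 1) * fs / N                   # df = 10 Hz
psd = np.full_like(freqs, S0)

# Three adjacent 1/3-octave levels (nominal centers), then the octave level
L = [third_octave_level(freqs, psd, fm) for fm in (800.0, 1000.0, 1250.0)]
L_oct = 10 * np.log10(np.sum(10 ** (np.array(L) / 10)))  # combine energies
```

Note the last line: the three dB values are converted back to energies, summed, and only then converted to dB, exactly as the combination rule requires.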
Direct standards/application pull: many workflows mandate 1/3-octave I/O

Once major standards define inputs/outputs in 1/3 octave, ecosystems (instruments, software, reporting templates) converge around it. Examples:

- Building acoustics ratings: ISO 717-1 references one-third-octave bands for single-number quantity calculations. [5]
- Room acoustics parameters (e.g., reverberation time) are commonly reported in octave/one-third-octave bands (ISO 3382 series). [6]

Extra base-10 benefits: R10 tables, 10 bands/decade, readability

- 10 bands per decade: multiplying frequency by 10 corresponds to exactly 10 one-third-octave steps (very clean for log plots).
- R10 preferred numbers: 1.00, 1.25, 1.60, 2.00, 2.50, 3.15, 4.00, 5.00, 6.30, 8.00 (×10^n) are widely recognized and easy to communicate.
- Compared with base-2, decimal labeling is less awkward and cross-standard ambiguity is reduced.

Octave-band analysis is typically implemented using either FFT binning or a filter bank. Keep reading -> Octave-Band Analysis Guide: FFT Binning vs. Filter Bank

OpenTest integrates both methods. Download and get started now -> or fill out the form below ↓ to schedule a live demo. Explore more features and application stories at www.opentest.com.
References

[1] IEC 61260-1:2014 PDF sample (iTeh): https://cdn.standards.iteh.ai/samples/13383/3c4ae3e762b540cc8111744cb8f0ae8e/IEC-61260-1-2014.pdf
[2] ISO 266:1997, Acoustics - Preferred frequencies (ISO): https://www.iso.org/obp/ui/
[3] ANSI S1.11-2004 preview PDF (ASA/ANSI): https://webstore.ansi.org/preview-pages/ASA/preview_ANSI%2BS1.11-2004.pdf
[4] ANSI/ASA S1.11-2014/Part 1 / IEC 61260-1:2014 preview: https://webstore.ansi.org/preview-pages/ASA/preview_ANSI%2BASA%2BS1.11-2014%2BPart%2B1%2BIEC%2B61260-1-2014%2B%28R2019%29.pdf
[5] ISO 717-1:2020 abstract (mentions one-third-octave usage): https://www.iso.org/standard/77435.html
[6] ISO 3382-2:2008 abstract (room acoustics parameters): https://www.iso.org/standard/36201.html
[7] Ansys Help: Bark scale and critical bands (mentions midrange close to third octave): https://ansyshelp.ansys.com/public/Views/Secured/corp/v252/en/Sound_SAS_UG/Sound/UG_SAS/bark_scale_and_critical_bands_179506.html
[8] Simon Fraser University Sonic Studio Handbook: Critical Band and Critical Bandwidth: https://www.sfu.ca/sonic-studio-webdav/cmns/Handbook5/handbook/Critical_Band.html
[9] MathWorks: ENBW definition example: https://www.mathworks.com/help/signal/ref/enbw.html
In real DAQ use, enclosure durability and scratch resistance directly affect service life and maintenance cost. This article shares a pencil hardness scratch test on the SonoDAQ top cover (PC + carbon fiber) and compares it with a typical laptop enclosure. The results show how the enclosure performs from 2H to 5H and why the surface finish helps it hold up in daily handling.

How Scratch Resistance Affects DAQ Use

When choosing a DAQ front end, engineers usually look first at the specs—sample rate, dynamic range, synchronization accuracy, channel count. But after a few years of real use, many realize that enclosure reliability and scratch resistance can be just as important to the system's service life and day-to-day experience. For sound and vibration test equipment, this is even more obvious. Typical SonoDAQ applications include NVH road tests, on-site industrial measurements, and long-term outdoor or semi-outdoor acquisition, where the device often has to:

- be carried frequently, loaded into vehicles, or fixed on fixtures or test benches;
- be moved between lab desks, instrument carts, and tool cases;
- remain in close contact with other metal equipment, screwdrivers, laptops, and more.

In such environments, a housing that scratches easily not only looks worn, but can also drive up maintenance and replacement costs. To better reflect daily handling, we ran a pencil-hardness scratch test on the SonoDAQ front-end upper cover and used a common laptop enclosure as a reference.

Test Setup

The test was performed strictly in accordance with ISO 15184:2020 and was intended to evaluate the scratch resistance of the UV-cured coating on the outer surface of the SonoDAQ front-end upper cover.

Samples

Sample | Description
A — SonoDAQ top cover | PC + carbon-fiber plate (top/bottom covers), with an internal aluminum frame and corner protection.
B — Typical laptop enclosure | Plastic/metal housing with a sprayed coating.

This test follows the pencil hardness test approach.
Pencils of different hardness grades were used to scratch the enclosure surface under consistent contact conditions, and the surface was inspected for any scratches visible to the naked eye.

Test Tools

- Pencil hardness tester; additional weights can be added as required.
- Pencils: hardness grades 2H, 3H, 4H, and 5H.

Procedure

1. Insert the pencil into the pencil hardness tester at a 45° angle, with a total load of 750 g (roughly 7.4 N applied to the coating surface).
2. For each pencil hardness grade, scratch the enclosure surface three times and check whether any visible scratches appear.
3. Keep the scratch length and applied force as consistent as possible to ensure comparability across hardness grades.

Criteria

- Whether visible scratches appear;
- Whether the surface gloss changes noticeably.

Results

The front-end enclosure showed different levels of scratch resistance under different pencil grades. To further validate durability, we ran the same pencil hardness test on a typical laptop enclosure. Laptop housings are usually plastic or metal and also have a painted surface. We used the same method as for the DAQ unit.

2H Pencil: SonoDAQ Pro / Typical Laptop

Conclusion: Neither the SonoDAQ enclosure nor the laptop enclosure showed any obvious scratches; visually there was almost no change.

3H Pencil: SonoDAQ Pro / Typical Laptop

Conclusion: Neither the SonoDAQ enclosure nor the laptop enclosure showed any obvious scratches; visually there was almost no change.

4H Pencil: SonoDAQ Pro / Typical Laptop

Conclusion: At 4H, the SonoDAQ enclosure still showed no visible scratches; in contrast, the laptop enclosure exhibited clearly visible scuffs, essentially reaching the upper limit of its scratch resistance.

5H Pencil: SonoDAQ Pro

Conclusion: At 5H, light scratches began to appear on the SonoDAQ enclosure, indicating it was approaching its scratch-resistance limit.
Note that the pencil hardness test is primarily a relative comparison of scratch resistance between enclosures; it does not represent a material's absolute hardness or long-term wear life. However, for assessing whether a surface is "easy to scratch" in everyday use, it is a very direct method. If we translate the pencil grades into typical real-world scenarios:

- Accidental rubbing from most keys, equipment edges, and tools usually falls in the 2H-3H range;
- 4H-5H corresponds to harder, sharper, and more forceful scratching—often with some deliberate pressure.

At 4H, the SonoDAQ enclosure is still difficult to mark, and it only shows slight scratching at 5H. This means that during normal handling, loading, installation, and daily use, the enclosure is not easy to scratch.

Why It Holds Up

The SonoDAQ front-end enclosure uses a PC + carbon-fiber composite, which provides good mechanical strength and toughness. On top of that, the surface is finished with a spray-and-bake paint process plus a UV-cured top layer, which plays a key role in:

- increasing surface hardness and improving scratch resistance;
- improving corrosion resistance and environmental robustness;
- balancing durability with a premium look and feel.

For instrumentation, "harder" is not always "better." The right design balances scratch resistance, impact resistance, weight, and long-term reliability. As the results show, SonoDAQ's enclosure is durable enough for real-world use. For more information on SonoDAQ features, application scenarios, and typical configurations, please fill out the Get in touch form below to contact the CRYSOUND team. We will provide selection recommendations and support based on your test requirements.
Across acoustics testing, product R&D, environmental noise monitoring, and NVH analysis, simply "capturing sound" isn't the goal—accurate sound measurement is. A measurement microphone is engineered for repeatable, traceable, and quantifiable results, so your data stays comparable across devices, labs, and time. In this post, we explain what a measurement microphone is and how it differs from a regular microphone, based on real-world acoustic measurement workflows.

What Is a Measurement Microphone?

A measurement microphone is a high-precision acoustic transducer designed to measure sound pressure accurately. Its purpose is not to make audio "sound good," but to be truthful, calibratable, and repeatable. A typical measurement microphone is engineered to provide:

- Known and stable sensitivity (e.g., mV/Pa), so its electrical output can be converted into sound pressure (Pa) or sound pressure level (dB).
- Controlled, near-ideal frequency response (as flat as possible under specified sound-field conditions) for accurate multi-band measurement.
- Excellent linearity and wide dynamic range, maintaining low distortion from very low noise floors to high-SPL environments.
- Traceable calibration capability, working with acoustic calibrators or pistonphones to manage measurement uncertainty and maintain a reliable measurement chain.
- Environmental stability, minimizing drift due to temperature, humidity, static pressure, and long-term aging—critical for both lab and field use.

In short: a measurement microphone is the front-end sensor of a metrology-grade measurement chain, where the output must meaningfully represent the true sound pressure in a defined sound field.

What Is a Regular Microphone?

Most microphones people encounter daily—conference mics, phone mics, streaming mics, stage mics, and studio mics—are built for audio capture and production.
They typically prioritize:

- Speech clarity and pleasing timbre
- Wind/plosive resistance and usability
- Directivity and feedback control
- System compatibility, size, durability, and cost

Many regular microphones are intentionally not flat. For example, they may boost the vocal presence band, roll off low frequencies, or apply built-in processing such as noise reduction, AGC (automatic gain control), and limiting. These features can be great for "good sound," but they can severely compromise measurement accuracy.

The Core Difference: Different Goals, Different Design Philosophy

Measurement Accuracy vs. Pleasant Sound

Measurement microphones aim to represent true sound pressure with accuracy, repeatability, and traceability. Regular microphones aim to produce usable or pleasant audio, where tonal shaping is often desired.

Calibration and Traceability: Quantifiable vs. Hard to Quantify

Measurement microphones are designed to support periodic calibration. Regular microphones are typically treated as functional audio devices—specs may be provided, but traceable metrology calibration is rarely central to their usage.

Quick Comparison Table

Dimension | Measurement Microphone | Regular Microphone
Primary goal | Accurate, traceable measurement | Audio capture and sound quality
Frequency response | Controlled & defined (free/pressure/diffuse field) | Tuned for the application; may be intentionally shaped
Calibration | Designed for calibration and uncertainty management | Typically not traceable or routinely calibrated
Linearity/dynamic range | Emphasizes wide range, low distortion | Often limited/compressed/processed
Key specs | Sensitivity, equivalent noise, max SPL, phase, drift | Sensitivity, directivity, timbre, ease of use
Typical use cases | Acoustics testing, compliance, R&D, NVH, monitoring | Meetings, streaming, recording, stage, calls

Why Do You Need a Measurement Microphone?
If your work involves any of the following, a measurement microphone is often essential:

- Acoustic product development: loudspeaker/headphone response & distortion, spatial acoustics, array localization
- NVH engineering: cabin noise, transfer path analysis, order tracking
- Environmental/industrial noise monitoring: long-term stability and verifiable SPL logging
- Standards and compliance testing: traceable results and reproducible procedures across labs
- Acoustic material and silencer evaluation: impedance tubes, reverberation chambers, anechoic measurements

In these scenarios, the real problem is rarely "can you record sound?" The real question is: can you trust the dB value? CRYSOUND's measurement microphones are designed for exactly these high-standard applications, delivering stable, reliable, and consistent measurement data.

Conclusion: Measurement Turns Sound into Reliable Data

A regular microphone helps you hear. A measurement microphone helps you verify. When you need to put acoustics into engineering reports, standards, and closed-loop product improvement, a measurement microphone is the foundation that makes results defensible. To learn more about microphone functions and measurement hardware solutions, visit our website—and if you'd like to talk to the CRYSOUND team, please fill out the "Get in touch" form.
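To make the sensitivity point above concrete: a calibrated sensitivity (mV/Pa) is what lets a voltage reading become a sound pressure level. A minimal sketch with illustrative numbers (the helper name and the 50 mV/Pa figure are examples, not a CRYSOUND spec):

```python
import math

def spl_from_voltage(v_rms, sensitivity_mv_per_pa, p_ref=20e-6):
    """Convert a microphone's RMS output voltage to dB SPL.

    Divide by the calibrated sensitivity to get pascals, then express
    the result relative to the standard 20 uPa reference pressure.
    """
    pressure_pa = v_rms / (sensitivity_mv_per_pa / 1000.0)
    return 20 * math.log10(pressure_pa / p_ref)

# Example: a 50 mV/Pa microphone reading 5 mV RMS -> 0.1 Pa, about 74 dB SPL
print(round(spl_from_voltage(0.005, 50.0), 1))  # 74.0
```

A useful sanity check: at 1 Pa (94 dB SPL, the usual calibrator level), a 50 mV/Pa microphone should output 50 mV RMS; this is exactly what periodic calibration verifies.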
CRYSOUND’s PCBA testing solution integrates RF and audio performance validation within a 1-to-8 parallel architecture, enabling synchronized electrical, RF, audio, and power testing. This unified platform enhances PCBA test efficiency and adaptability for TWS, smart speakers, and wearables, driving cost-effective, high-volume production with streamlined integration.

Industry Pain Points: Challenges of Traditional PCBA Testing in Multi-Category Production

As smart hardware products diversify and iteration cycles shorten, traditional automated testing equipment increasingly exposes its limitations—especially in cross-category production scenarios:

- Low space utilization: Traditional testers are typically customized for a single product category. Power testing for smart speakers, low-power testing for smart glasses, and RF testing for earbuds often require separate dedicated equipment, leading to excessive floor-space usage and high expansion costs.
- High labor costs: Single-board testing systems require dedicated operators for calibration and supervision. Different operating logics across devices increase training costs, while peak production periods often rely on temporary staffing, causing labor costs to scale directly with output.
- Low production efficiency: Testing processes are largely serial. Panelized boards must be transferred between multiple stations, and special procedures—such as multi-channel audio testing for smart speakers—further extend cycle times, making it difficult to meet delivery demands.

These issues ultimately trap manufacturers in an operational dilemma where higher output means higher costs and every product change means line downtime, limiting responsiveness and profit growth.

Core Advantages: An Integrated Solution for Multi-Scenario Applications

Leveraging a mature technical architecture and extensive industry experience, the CRYSOUND panelized PCBA testing solution abandons the traditional "single-function, single-application" design philosophy.
Instead, it addresses real-world multi-category production needs to optimize both testing efficiency and cost control.

Fully Integrated Design with Over 50% Space Optimization

The solution integrates key testing functions—including electrical performance, RF validation, audio inspection, and power stability testing—into a single system, forming a one-stop testing workflow:

- Smart speaker applications: Integrated multi-channel audio testing and high-power stability modules eliminate the need for separate acoustic chambers and power validation benches. The system occupies only 25 m², saving 58% of the space required by traditional distributed layouts.
- Smart glasses applications: Designed for compact PCBA form factors, the system focuses on precise low-power current measurement and short-range RF validation, reducing damage risks caused by multi-station transfers.
- TWS/OWS earbud applications: RF, audio, and current parameter testing are completed within a single station. The 8-channel parallel testing architecture supports efficient panelized testing cycles.

Through functional integration, a single system can replace 3–4 traditional dedicated testers, significantly improving workshop space utilization and enabling flexible capacity expansion.

Intelligent Operations and Maintenance: Approximately 60% Labor Cost Reduction

With a standardized user interface, the solution supports semi-unattended testing operations:

- Automated process control: After manual loading, the system automatically completes barcode registration, synchronized multi-module testing, and real-time data uploads. Abnormal conditions trigger tiered alarm mechanisms without requiring full-time supervision.
- Unified operating logic: All systems use a standardized human–machine interface. Operators can manage multi-category testing after a single training session, significantly reducing training costs and operational errors.
- Improved maintenance efficiency: One technician can manage four systems simultaneously, compared with the traditional ratio of one operator for two machines—doubling per-technician machine coverage.

Parallel Testing Architecture: Doubling Production Throughput

By breaking through the bottleneck of serial testing, the multi-channel parallel testing design allows different test modules to operate simultaneously, dramatically reducing panelized-board test cycles:

- Smart speakers: Parallel multi-channel audio and RF testing increases throughput from approximately 150 boards/hour to 300 boards/hour or more.
- TWS/OWS earbuds: The 8-channel parallel configuration achieves stable throughput of over 400 boards/hour, an efficiency improvement of approximately 150% over traditional single-channel systems.

This approach eliminates the need to "add more machines to increase capacity," enabling manufacturers to meet peak-order demands while optimizing cost efficiency.

Standardized Technical Assurance: Precision and Reliability

All core test modules undergo strict calibration and validation, meeting recognized industry standards:

- Equipped with RF test modules, MBT electrical performance modules, and audio loopback closed-loop testing units, supporting precise testing of mainstream chipsets from Qualcomm, BES, JieLi, and others.
- Testing accuracy complies with IPC-A-610 PCBA acceptability standards. RF shielding effectiveness reaches ≥70 dB within 700 MHz–6 GHz, audio distortion remains <1.5% within 100 Hz–10 kHz, and electrical measurement accuracy is controlled within ±0.5% of full scale.
- Test data can be stored in multiple formats, enabling full traceability from pre-test to post-test stages and meeting ISO 9001 quality management system requirements.
Cost Advantages: Quantified Results Across Multiple Dimensions

The CRYSOUND solution delivers sustainable cost advantages across equipment procurement, operations, and quality control:

- Equipment investment: Integrated design reduces the number of dedicated testers required, lowering initial equipment investment by over 30% for multi-category production.
- Operational costs: Optimized space utilization and reduced staffing requirements lower rental and labor expenses, saving RMB 150,000–300,000 per system annually.
- Quality costs: Integrated testing minimizes handling damage during panel transfers. For lightweight boards such as those used in smart glasses, damage rates drop by 30%, while precise testing and data traceability keep defect rates below 2%, a 40%+ reduction compared with traditional approaches.

Case Studies: Efficiency Upgrades in Multi-Category Production

The following cases are based on anonymized production data from real customers and demonstrate actual deployment results.

Case 1: Mid-Sized TWS Earphone ODM (Monthly Output: 500,000 Units)

Initial challenges: Four traditional test lines deployed in an 800 m² workshop, each requiring four operators. Single-line throughput was approximately 200 boards/hour, creating delivery pressure during peak seasons.

Results after implementation: Four traditional lines were consolidated into two CRYSOUND test lines, freeing 200 m² of space for expansion. Each line required only 1.5 operators, saving RMB 45,000 per month in labor costs. Throughput per line increased to 400 boards/hour, doubling total monthly capacity to 1 million units, while delivery cycles shortened from 15 days to 10 days.

Core value: Space utilization improved by 25%, labor costs reduced by 37.5%, and capacity doubled.

Case 2: Smart Speaker Brand Factory (Monthly Output: 150,000 Units)

Initial challenges: Multi-channel audio testing and RF testing were separated into two stations, occupying 60 m².
High-power testing defect rates reached 1.2%, mainly due to board damage during transfers.

Results after implementation: The integrated system occupied only 25 m², saving 35 m² of production space. Eliminating multi-station transfers reduced handling-related defect rates to 0.5%, preventing the loss of approximately 1,000 units per month.

Core value: Space usage reduced by 50%, changeover efficiency improved by 25%, and transfer-related defect rates decreased by 31.8%.

The solution is now running stably across 10+ factories and 30+ production lines.

Key Differences vs. Traditional Automated Test Equipment

| Comparison Dimension | Traditional Automated Equipment | CRYSOUND Integrated Testing Solution |
| --- | --- | --- |
| Functional adaptability | Single-category customization; multiple systems required for cross-category production | Integrated multi-scenario testing covering earbuds, speakers, and glasses |
| Changeover efficiency | No standardized process; line downtime up to 32 hours | Parameterized configuration; downtime reduced to 4 hours |
| Space utilization | Dispersed single-function layouts with low efficiency | Integrated design saving 50%+ space |
| Initial investment | High due to multiple equipment purchases | Over 30% savings through integration |

CRYSOUND replaces the traditional “function-driven equipment” model with a “production-driven system” approach, enabling a shift from “adapting production to equipment” to “designing equipment around production.”

Choose CRYSOUND Panelized PCBA Testing for Certainty in Quality and Efficiency

As competition in smart wearable and consumer electronics markets intensifies, quality consistency and delivery speed are decisive factors. The CRYSOUND 1-to-8 PCBA comprehensive testing system is more than a piece of equipment—it is a complete solution for strengthening production-line competitiveness.
By ensuring reliable wireless performance, optimized power consumption, and built-in safety validation for every PCBA leaving the factory, CRYSOUND helps manufacturers maintain full confidence and control over product quality, even at large-scale production volumes.

If you’d like to learn more about PCBA testing—or discuss your panelized PCBA process and inspection targets—please use the “Get in touch” form below. Our team can share recommended settings and an on-site workflow tailored to your production conditions.
Negative-pressure airtightness is critical for high-speed train car bodies, and even minor leaks can lead to rework or delivery risks. This article presents a case from Changchun where CRYSOUND’s CRY8124 Acoustic Imaging Camera was used to quickly, intuitively, and verifiably pinpoint leaks on a carbon-fiber train car body shell, showcasing the CRY8124’s application in vacuum leak detection for carbon-fiber high-speed train car bodies.

Case Snapshot

- Year: 2025
- Location: Changchun
- Workpiece: Carbon-fiber train car body shell
- Test condition: Vacuum/negative-pressure setting; 15-minute pressure-hold test
- Sample size: 4 units
- Coverage: Scanned 6 key areas (car-body section joints/seams, structural interfaces, process holes, corners/curved transition areas, edges of cover film, around embedded components, etc.)
- Participants: CRYSOUND’s technical engineers
- Deliverables: Acoustic imaging heatmap images/videos + report

Project Background: Vacuum Leaks Are “Hard to Find, Time-Consuming, and Easy to Miss”

Carbon-fiber car body shells feature complex structures with numerous joints and interfaces. When a leak exists during negative-pressure testing, traditional methods often face three common challenges:

- Experience-dependent localization: Requires repeated “listen–feel–try” steps and depends heavily on operator skill and experience.
- High interference: Background noise from workshop fans, tools, friction, and impacts can mask weak leak signals.
- Inconsistent efficiency: Troubleshooting time varies significantly between operators for the same issue, making verification difficult.

On-Site Approach: Pinpointing Leaks with “Visible Sound”

In this project, the CRY8124 Acoustic Imaging Camera was used to perform scan-based inspections across key areas of the shell.
The core value of acoustic imaging lies in making the sound source generated by a leak visible on the screen—turning leak localization from “guessing” into “seeing.”

On-Site Inspection Procedure

1. Maintain the negative-pressure condition: Troubleshooting was performed under the customer’s specified negative-pressure (vacuum gauge pressure approx. -100 kPa) test state.
2. Select the frequency range: Based on on-site verification, 20–40 kHz was selected, offset from the dominant background-noise frequencies to provide better contrast for leak sources.
3. Select the imaging threshold: Based on on-site verification, an imaging threshold of -40 dB was selected.
4. Scan and locate: Move the device along high-risk areas such as seams, interfaces, corners, and the edges of cover films.
5. Verify each point: Re-test suspected sound-source points at close range and mark them; adjust angles as needed for confirmation (strong airflow, film vibration, or strong reflections may create false leak indications, so multi-angle rechecks are required).
6. Output evidence: Save images/videos with acoustic heatmap overlays to support on-site closure and quality documentation. Reports can later be generated using CRYSOUND’s second-generation analysis software.

Inspection Results: Multiple Leaks Quickly Identified

Under the customer’s specified negative-pressure test conditions at a train manufacturing site in Changchun, acoustic imaging scan inspections were carried out on a carbon-fiber train car body shell.

- Multiple vacuum leak points identified: A total of three suspected leak points were marked. Rechecks were performed using a temporary sealing (blocking) comparison method. After the suspected points were sealed, there was no measurable pressure drop, confirming three leak points.
- All confirmed points were marked on-site, and images/videos with the leak heatmap overlays were saved for quality documentation and verification.
- Efficiency: On average, the total inspection time per component—from “start scanning” to “finish inspection, marking, and saving evidence / completing verification”—was under 10 minutes.
- Closed-loop validation: After corrective actions, a re-inspection was performed under the same conditions. The leak heatmap disappeared, and the workpiece passed the customer’s pressure-hold specification.

From the on-site inspection visuals, different leak points consistently appeared as stable acoustic heatmap overlays on the device interface.

Why Is Acoustic Imaging Well Suited for This Process?

From the perspective of airtightness testing for composite structures, vacuum leak detection is not short of methods that can “find a problem.” The real challenge is achieving results that are fast, accurate, visual, and verifiable. In composite car-body applications, the advantages of acoustic imaging mainly include:

- Visual localization: Leak points are overlaid directly onto the surface of the structure as acoustic heatmaps, making the leak location visible and reducing communication and handoff costs.
- Stronger resistance to environmental interference: Selecting an appropriate frequency range and setting the imaging threshold improves the contrast between leak sources and background noise, minimizing the impact of ambient interference on results.
- More controllable efficiency: As a handheld tool, the cycle time is more consistent, making it suitable for batch inspections and production-line management.
- Traceable evidence: Images and videos can be retained for review, quality traceability, and training purposes.

Practical Tips: How to Be “Faster and More Accurate” On Site

Based on our on-site experience in Changchun, here are three actionable recommendations:

- Prioritize high-risk geometries: seams, hole edges, corners, cover-film edges, and interface transition areas.
- Image first, then verify up close: use the device to identify suspected leak points first, then confirm them at close range and from multiple angles.
- Standardize the documentation template: save images/videos for every point to support corrective actions, test report writing, and follow-up verification.

Conclusion: Turning Troubleshooting from “Experience-Based Work” into a “Standardized Process”

In vacuum leak detection for carbon-fiber train car body shells, the CRY8124 Acoustic Imaging Camera upgrades “listening for leaks” into visualized localization, delivering a closed-loop outcome with higher efficiency, clearer pinpointing, and retained evidence—while significantly reducing reliance on individual experience.

If you’d like to learn more about the application of the CRY8124 Acoustic Imaging Camera for vacuum leak testing, or discuss a detection solution better suited to your composite-material process and acceptance criteria, please contact us via the form below. Our sales or technical support engineers will get in touch with you.
In acoustic testing, sensor calibration, electroacoustics, and NVH, gain, input range, and quantization directly determine the quality of the data you capture. This article explains these three factors from an engineering perspective. Using typical CRYSOUND setups—measurement microphones, preamps, acoustic imaging systems, and DAQ systems such as SonoDAQ Pro with OpenTest—it shows how to configure them correctly in practice.

From the Test Floor: When “Weird Waveforms” Are Caused by Quantization

In real acoustic test environments, engineers often encounter situations like these:

- On a production line, waveforms from a batch of MEMS microphones suddenly look stair-stepped, and the spectrum becomes rough.
- In NVH or fan noise tests, low-level waveform sections appear grainy, with details barely visible.
- In acoustic imaging systems, signals from distant leakage points are audible but unstable, with jittery image edges.

Figure 1: Data with poor quantization quality often appears noisy or blurred.

Many engineers initially attribute these issues to excessive noise. In practice, a large portion of them result from signals that are too small relative to an overly large input range, causing most quantization levels to be wasted. If a signal does not sufficiently occupy the system’s dynamic range, even a high-resolution ADC cannot deliver meaningful data quality.

Three Core Concepts Explained in Engineering Terms

Gain: Bringing the Signal into the Right Zone

In CRYSOUND acoustic measurement chains, gain is typically applied in the following stages:

- Measurement microphone and preamplifier stages
- Electroacoustic analyzers or DAQ front ends such as SonoDAQ Pro

Figure 2: Left: a 5 V signal. Right: applying a gain of 2 to the 5 V signal, resulting in a 10 V signal.

The purpose of gain is straightforward: amplify signals that may only be tens or hundreds of millivolts so they approach the DAQ’s full-scale input and can be properly digitized by the ADC.
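The effect of leaving a small signal in a large range can be demonstrated numerically. The sketch below is a minimal illustration: the ±10 V range, 16-bit resolution, and signal levels are assumed example values, not the specifications of any particular CRYSOUND device. It counts how many distinct ADC codes a 50 mV signal occupies before and after a gain of 100.

```python
import numpy as np

def used_levels(signal_v, full_scale_v=10.0, bits=16):
    """Quantize a signal on a ±full_scale_v range and count the
    distinct ADC codes it actually occupies."""
    lsb = 2 * full_scale_v / 2**bits          # volts per quantization step
    codes = np.round(signal_v / lsb).astype(int)
    return len(np.unique(codes))

t = np.linspace(0, 1e-3, 4096)                # one cycle of a 1 kHz tone
small = 0.05 * np.sin(2 * np.pi * 1000 * t)   # 50 mV mic-level signal
amplified = 100 * small                       # 5 V after a gain of 100

print(used_levels(small), used_levels(amplified))
```

Out of the 65,536 codes a 16-bit converter offers, the unamplified signal touches only a few hundred; after gain it spans thousands, so the same waveform is represented far more finely.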
Range: The Window Through Which the System Sees the Signal

Input range defines both the maximum signal amplitude a system can accept and the voltage step corresponding to each quantization bit at a given ADC resolution. For high-precision devices such as CRYSOUND measurement microphones and sound level meters like the CRY2851, selecting an appropriate range that keeps the signal within the linear operating region is essential for stable measurements.

Figure 3: Left: input range set to 10 V. Right: input range set to 0.01 V.

Figure 4: Number of available bins used for signal quantization.

Quantization: Translating the Analog World into Digital Data

Quantization is the process by which an ADC converts continuous analog signals into discrete digital values. When more quantization levels are effectively used, the digital signal represents the analog waveform more faithfully. When fewer levels are used, stair-step waveforms and low-level jitter become apparent.

Figure 5: During quantization, the signal amplitude is divided into discrete levels.

How Gain and Range Work Together in CRYSOUND Systems

The interaction between gain, range, and quantization becomes clearer when viewed through real CRYSOUND application scenarios.

1. Sensors and Electroacoustic Testing

CRYSOUND measurement microphones, preamplifiers, and electroacoustic analyzers (e.g., CRY6151B) are commonly used for:

- Microphone capsule testing;
- Production-line and laboratory testing of headphones, loudspeakers, and other electroacoustic components.

In these systems, the typical best practice is:

- Estimate the signal level based on the DUT sensitivity and the expected sound pressure level (SPL);
- Set an appropriate gain on the front-end amplifier or analyzer so the signal reaches about 60–80% of full scale;
- Select a matching input range to avoid clipping while also preserving as much dynamic range as possible.
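The “window” metaphor can be made concrete: the smallest voltage difference a range can resolve is simply the span divided by the number of codes. A minimal sketch, where the 24-bit resolution is an assumed example value:

```python
def lsb_volts(v_min, v_max, bits):
    """Smallest voltage step one ADC code can represent on this range."""
    return (v_max - v_min) / 2**bits

wide = lsb_volts(-10.0, 10.0, 24)     # ±10 V range: ~1.2 µV per step
narrow = lsb_volts(-0.01, 0.01, 24)   # ±0.01 V range: ~1.2 nV per step
print(wide, narrow)
```

Shrinking the range by a factor of 1000 makes each step 1000 times finer, which is why a millivolt-level signal measured on a ±10 V range wastes most of the converter’s resolution.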
This approach delivers low distortion while making full use of the ADC’s effective bits, reducing quantization noise.

2. Acoustic Imaging and Array Measurements

In CRYSOUND acoustic imaging products (e.g., acoustic imaging cameras based on high-performance microphone arrays), the system often processes wideband signals from many synchronized channels, then applies localization and imaging algorithms. In this scenario:

- If the signal level from a given direction is far below the lower limit of the overall range, that area may suffer from insufficient quantization resolution, resulting in more image speckle/noise;
- Properly setting the overall array gain and the input range of each front-end module helps balance weak far-field signals against strong near-field signals.

That is why, for gas leak detection, partial discharge identification, or mechanical degradation monitoring, a reliable acoustic imaging system depends not only on algorithms but also on the underlying quantization quality.

3. DAQ Systems and Repeatable Workflows

For acoustic and vibration acquisition, CRYSOUND provides modular DAQ hardware (e.g., the SonoDAQ series) and the OpenTest software platform, enabling end-to-end workflows from measurement and analysis to automated test sequences. On these platforms, engineers can:

- Configure per-channel sensor gain, range, and sampling rate directly in the channel settings;
- Save a validated configuration as a template and reuse it across different products or projects;
- Use wizard-style interfaces in applications such as sound power, noise, and vibration to ensure parameter settings remain aligned with relevant standards.

In other words: gain, range, and quantization—these “low-level details”—can be captured in software scenario templates and turned into shared, auditable testing assets for the team, instead of living only in one engineer’s experience.
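The 60–80% rule described above can be automated when a front end offers a fixed ladder of gain settings. This is a hypothetical sketch only: the gain steps and the 70% target are illustrative assumptions, not an OpenTest or SonoDAQ API.

```python
def recommend_gain(peak_v, full_scale_v, target=0.7,
                   gain_steps=(1, 2, 5, 10, 20, 50, 100)):
    """Pick the largest gain step that keeps the amplified peak at or
    below the target fraction of full scale (avoiding clipping)."""
    best = 1
    for g in sorted(gain_steps):
        if peak_v * g <= target * full_scale_v:
            best = g
    return best

# A 50 mV mic-level peak into a ±10 V front end: a gain of 100 lands
# the peak at 5 V, i.e. 50% of full scale, safely below the target.
print(recommend_gain(0.05, 10.0))
```

If no step fits under the target (the signal is already near full scale), the function falls back to unity gain rather than risking overload.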
A Quick Cheat Sheet for CRYSOUND Users

Whether you are using CRYSOUND measurement microphones, sound level meters, electroacoustic test systems, or a DAQ + OpenTest platform, the checklist below can serve as a quick pre-test verification in daily work.

- Confirm the expected signal range: Estimate the maximum signal amplitude using experience or a short trial capture.
- Set an appropriate front-end gain: Under typical operating conditions, waveform peaks should reach about 60–80% of full-scale input.
- Select a matching input range: Avoid defaulting to ±10 V; if the signal level is clearly lower, consider using a smaller range.
- Check for clipping: Flat-topped waveforms or abnormally elevated spectral lines usually indicate overload.
- Save and reuse configurations: In CRYSOUND platforms, save channel, gain, and range settings as project templates to reduce human error.

Closing: Accuracy Comes from the Entire System

In real acoustic measurement systems, data quality is never determined by a single ADC alone. Instead, it is the result of the entire signal chain working together:

Sensors → Amplification → Range → Quantization → Software Algorithms

As an acoustic testing specialist, CRYSOUND aims to help engineers address these fundamental issues—gain, range, and quantization—through a complete product portfolio, from sensors and front-end hardware to acoustic imaging, electroacoustic testing, data acquisition, and software platforms. This provides a reliable data foundation for subsequent analysis and decision-making.

If you’d like help choosing the right setup or validating your configuration, please fill out the Get in touch form and we’ll contact you.
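The clipping check from the cheat sheet can also be scripted on captured data. A minimal heuristic sketch (the thresholds are illustrative assumptions): a flat-topped waveform concentrates an unusually large share of samples near the rails, which is easy to detect.

```python
import numpy as np

def looks_clipped(samples, full_scale, rail_frac=0.995, max_share=0.01):
    """Flag a capture if more than max_share of its samples sit within
    rail_frac of full scale, i.e. the waveform is flat-topped."""
    near_rail = np.abs(samples) >= rail_frac * full_scale
    return bool(near_rail.mean() > max_share)

t = np.linspace(0, 0.01, 4800, endpoint=False)
clean = 7.0 * np.sin(2 * np.pi * 1000 * t)                   # 70% of ±10 V
overdriven = np.clip(12.0 * np.sin(2 * np.pi * 1000 * t), -10.0, 10.0)

print(looks_clipped(clean, 10.0), looks_clipped(overdriven, 10.0))
# prints: False True
```

A spectral cross-check (abnormally elevated odd harmonics) is a useful complement, since mild clipping can leave only a small fraction of samples exactly at the rails.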