An auditor asks for the extractables profile of a packaging component. The data exists – somewhere. It is in a report from a contract lab, cross-referenced to a stability study in a different system, linked to a material specification in a third. Assembling the complete picture takes hours.
This is life in regulated industries: the testing happens, but the evidence does not connect.
What regulated industries have in common
Pharmaceutical packaging, medical devices, food-contact materials, and drinking water infrastructure operate under different regulations but face the same fundamental challenge: proving that products are safe, consistent, and traceable.
The testing requirements are substantial:
- Extractables and leachables (E&L) for anything in contact with drugs or food
- Biocompatibility testing per ISO 10993 for medical devices
- Migration testing per EU 10/2011 or FDA guidelines for food contact
- Container closure integrity (CCI) for pharmaceutical packaging
- Stability studies across temperature and humidity conditions
- Microplastics detection for drinking water and aquatic environments
Each test type generates rich data: chromatograms, spectra, particle counts, dose-response curves. The problem is not data scarcity – it is data fragmentation.
The silo problem
Different test types live in different systems. E&L data comes from a contract analytical lab in its own reporting format. Biocompatibility results come from a different CRO. Stability samples run on in-house instruments with their own software. Process parameters sit in a manufacturing execution system.
When someone asks "show me all the evidence for this product," the assembly is manual. When a deviation occurs, root cause analysis requires pulling data from multiple sources and hoping the batch numbers match. When audits come, preparation time correlates with documentation chaos.
The testing itself is fine. The data infrastructure is the bottleneck.
Application: pharmaceutical packaging
Pharmaceutical packaging must protect the drug product without introducing contamination. This means E&L studies to identify what the packaging releases, CCI testing to confirm seal integrity, and stability studies to verify protection over shelf life.
E&L studies alone generate complex data: GC-MS chromatograms identifying volatile extractables, LC-MS data for semi-volatiles and non-volatiles, ICP-MS for elemental impurities. Each technique produces data in its own format. Interpretation requires comparing across techniques and linking to toxicological evaluation.
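The cross-technique comparison above can be sketched in a few lines. This is a minimal illustration, not a real workflow: the compound names, concentrations, and the analytical evaluation threshold (AET) value are all invented for the example.

```python
# A minimal sketch: merge per-technique extractables hits into one profile
# and flag compounds above an analytical evaluation threshold (AET).
# All names, concentrations, and the AET value are illustrative.

AET_UG_PER_ML = 0.1  # hypothetical reporting threshold

hits = {
    "GC-MS":  [("antioxidant fragment", 0.04), ("styrene", 0.22)],
    "LC-MS":  [("erucamide", 0.15)],
    "ICP-MS": [("zinc", 0.03)],
}

# Merge into one profile, keeping track of which technique reported each hit.
profile = [
    {"compound": name, "conc_ug_ml": conc, "technique": tech}
    for tech, rows in hits.items()
    for name, conc in rows
]

# Everything at or above the AET goes forward for toxicological evaluation.
for_tox_review = [r for r in profile if r["conc_ug_ml"] >= AET_UG_PER_ML]

for r in sorted(for_tox_review, key=lambda r: -r["conc_ug_ml"]):
    print(f'{r["compound"]}: {r["conc_ug_ml"]} ug/mL ({r["technique"]})')
```

The point of the merged structure is that the toxicologist sees one profile, not three instrument exports.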
Container closure integrity testing (helium leak, high-voltage leak detection, vacuum decay) produces pass/fail results that need context: which seal parameters were used, which material lot, which sterilization cycle?
The opportunity is to link all this data by product, batch, and component. When CCI fails, you want to see not just the failure, but the seal process parameters, the film lot properties, and whether other batches from the same material showed similar behavior.
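Once the data is consolidated, that failure-in-context view is a single query. The sketch below assumes the records have been pulled into one relational store; the table names, columns, and values are illustrative.

```python
# A minimal sketch of tracing a CCI failure back to its seal parameters
# and film lot, then finding sibling batches from the same lot.
# Schema and data are illustrative, not a real system.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE cci_results (batch TEXT, result TEXT);
CREATE TABLE seal_params (batch TEXT, temp_c REAL, dwell_s REAL);
CREATE TABLE batches     (batch TEXT, film_lot TEXT);

INSERT INTO cci_results VALUES ('B-101','pass'), ('B-102','fail'), ('B-103','pass');
INSERT INTO seal_params VALUES ('B-101',145,1.2), ('B-102',131,1.2), ('B-103',146,1.1);
INSERT INTO batches     VALUES ('B-101','F-77'), ('B-102','F-77'), ('B-103','F-78');
""")

# One join answers "show me the failure in context": the seal settings
# used and the film lot involved.
row = db.execute("""
    SELECT c.batch, s.temp_c, s.dwell_s, b.film_lot
    FROM cci_results c
    JOIN seal_params s ON s.batch = c.batch
    JOIN batches     b ON b.batch = c.batch
    WHERE c.result = 'fail'
""").fetchone()

# Other batches made from the same film lot, for comparison.
siblings = db.execute(
    "SELECT batch FROM batches WHERE film_lot = ? AND batch != ?",
    (row[3], row[0]),
).fetchall()
```

When the same records live in three disconnected systems, this "query" is a person with a spreadsheet and an afternoon.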
Application: medical devices
Medical device compliance under ISO 10993 requires extensive biocompatibility testing: cytotoxicity, sensitization, irritation, systemic toxicity, and more, depending on device classification and contact duration.
Chemical characterization supports the biological testing: FTIR and Raman for polymer identification, GC-MS and LC-MS for extractables, particle characterization for implantable devices.
The challenge is demonstrating that a device made with one material lot is equivalent to a device made with another. Supplier changes, process modifications, and manufacturing site transfers all require evidence that the change does not affect biocompatibility.
This evidence case requires connecting material fingerprints (spectroscopy), extractables profiles (chromatography), and biological test results (toxicology) into a coherent package. When that data lives in separate silos, building the case is painful. When it is connected, equivalence arguments become straightforward.
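One piece of such an equivalence case can be sketched as a simple profile comparison: flag anything that is new in the candidate lot or markedly increased versus the reference. The compounds, concentrations, and tolerance here are illustrative assumptions.

```python
# A minimal sketch of an equivalence screen between two material lots,
# assuming each lot's extractables profile is a compound -> concentration
# mapping. Data and tolerance are illustrative.

def equivalence_findings(reference, candidate, rel_tol=0.2):
    """Return compounds that are new in the candidate lot, or that
    increased by more than rel_tol relative to the reference lot."""
    findings = []
    for compound, conc in candidate.items():
        ref = reference.get(compound)
        if ref is None:
            findings.append((compound, "new in candidate"))
        elif conc > ref * (1 + rel_tol):
            findings.append((compound, f"increased {conc / ref:.1f}x"))
    return findings

lot_a = {"erucamide": 0.12, "styrene": 0.05}
lot_b = {"erucamide": 0.13, "styrene": 0.18, "oleamide": 0.07}

for compound, reason in equivalence_findings(lot_a, lot_b):
    print(compound, "-", reason)
```

A real equivalence argument needs more than a concentration diff, but the diff is the part that is impossible when the two lots' data sit in different CRO report PDFs.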
Application: food-contact materials
Food-contact compliance under EU 10/2011 or FDA 21 CFR 177/178 requires demonstrating that materials do not transfer unsafe substances to food. Migration testing uses food simulants under time-temperature conditions that mimic intended use.
Overall migration limits (OML) provide a first screen. Specific migration limits (SML) for individual substances require identification and quantification. Non-intentionally added substances (NIAS) add another layer of complexity – you need to screen for things you did not add on purpose.
The data challenge: migration results depend on material composition, contact conditions, and analytical methods. A migration value only makes sense with context. Did the material contain the same additive package? Was the simulant the same? Were conditions comparable?
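That context check can be made explicit in code: refuse to compare two migration results unless their conditions match. The field names and values below are illustrative assumptions.

```python
# A minimal sketch of guarding a migration comparison with its method
# context, assuming each result record carries its simulant and test
# conditions. Field names and values are illustrative.

def comparable(a, b):
    """Two migration results are only comparable when simulant,
    contact time, and temperature all match."""
    context = ("simulant", "time_days", "temp_c")
    return all(a[k] == b[k] for k in context)

r1 = {"simulant": "3% acetic acid", "time_days": 10, "temp_c": 40, "mg_dm2": 4.2}
r2 = {"simulant": "3% acetic acid", "time_days": 10, "temp_c": 40, "mg_dm2": 6.8}
r3 = {"simulant": "95% ethanol",    "time_days": 10, "temp_c": 40, "mg_dm2": 6.8}

print(comparable(r1, r2))  # same conditions: the trend is meaningful
print(comparable(r1, r3))  # different simulant: not a valid comparison
```

The discipline this enforces is the whole point: a migration value without its conditions is not data, it is a number.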
Structured data turns migration testing from a compliance checkbox into a knowledge base. Historical results inform new product development. Trends across suppliers or lots become visible.
Application: microplastics in drinking water
Microplastics detection is emerging as a regulatory requirement and public concern. Drinking water utilities, bottled water producers, and pipe system operators are all facing questions about particle contamination.
The analytical challenge is significant: isolate particles from water, identify them by polymer type (usually FTIR or Raman), count and size them. The result is a complex dataset linking particle counts to polymer identifications.
But detection alone is not enough. When particles are found, the next question is: where do they come from? Source tracing requires comparing detected particle spectra to potential source materials – pipe materials, gaskets, coatings, treatment chemicals.
This is where fingerprint libraries become valuable. If you can match detected particles to known material spectra, you can investigate the source. If your water infrastructure materials are characterized in a structured database, source tracing becomes systematic rather than guesswork.
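The matching step can be sketched as a similarity search over a spectral library. Real FTIR or Raman matching involves preprocessing and far longer vectors; the short intensity vectors and material names below are purely illustrative.

```python
# A minimal sketch of ranking candidate source materials by spectral
# similarity (cosine similarity) against a fingerprint library.
# Spectra are short illustrative vectors, not real instrument data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

library = {
    "PVC pipe":      [0.9, 0.1, 0.4, 0.0, 0.2],
    "PE gasket":     [0.1, 0.8, 0.1, 0.6, 0.0],
    "epoxy coating": [0.3, 0.2, 0.9, 0.1, 0.5],
}

detected = [0.85, 0.15, 0.35, 0.05, 0.25]  # spectrum of a found particle

# Rank library materials by similarity to the detected particle.
ranked = sorted(library.items(), key=lambda kv: cosine(detected, kv[1]),
                reverse=True)
best, spectrum = ranked[0]
print(best, round(cosine(detected, spectrum), 3))
```

The ranking only means something if the library actually contains the materials in your infrastructure, which is exactly why structured characterization of those materials pays off.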
Connecting characterization to compliance evidence
The pattern across all these applications is the same:
- Characterize materials with spectroscopy (FTIR, Raman, NIR) to establish fingerprints
- Link characterization to test outcomes (E&L, migration, biocompatibility, stability)
- Preserve method context (conditions, instruments, standards) so comparisons are valid
- Track by batch and lot so changes can be traced
- Generate evidence packages when audits, submissions, or investigations require them
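The last step in the pattern, assembling an evidence package, reduces to a key lookup once every record carries product and batch identifiers. The "systems" and records below are illustrative stand-ins for real silos.

```python
# A minimal sketch of assembling an evidence package for one batch from
# records held in separate systems, assuming every record carries the
# same product/batch identifiers. All names and files are illustrative.

el_lab    = [{"product": "P1", "batch": "B7", "type": "E&L",
              "file": "el_report.pdf"}]
stability = [{"product": "P1", "batch": "B7", "type": "stability",
              "file": "stab_40C_75RH.csv"},
             {"product": "P1", "batch": "B8", "type": "stability",
              "file": "stab_25C_60RH.csv"}]
specs     = [{"product": "P1", "batch": "B7", "type": "spec",
              "file": "film_spec_rev3.pdf"}]

def evidence_package(product, batch, *sources):
    """Collect every record matching the product/batch key."""
    return [r for src in sources for r in src
            if r["product"] == product and r["batch"] == batch]

package = evidence_package("P1", "B7", el_lab, stability, specs)
for record in package:
    print(record["type"], "->", record["file"])
```

The hard part in practice is not the lookup, it is getting every system to agree on the identifiers; that is what the batch and lot tracking step above buys you.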
When this data infrastructure exists, compliance becomes a byproduct of normal operations rather than a periodic scramble.
What changes
With connected data:
- Faster audit preparation: Evidence packages assemble from structured records, not manual file searches
- Quicker deviation investigation: Root cause analysis traces from outcome to inputs
- Streamlined change control: Equivalence cases build from comparable historical data
- Proactive risk identification: Trends across batches or suppliers become visible before failures
The testing burden does not decrease. What changes is the value extracted from that testing.
How PolyCore helps
PolyCore creates one record per product, component, or material. E&L data, migration results, biocompatibility outcomes, stability readings, and characterization spectra all link to that record. Method context stays attached, so comparisons remain valid across labs, instruments, and time.
The platform supports regulated workflows: audit-ready evidence retrieval, traceable change documentation, and risk-based decision support grounded in actual test data.
Interested in exploring this?
If compliance documentation is consuming more time than testing itself, or if audit preparation feels like a recurring emergency, we can design a pilot that connects your existing test data into a coherent evidence system without changing your instruments or methods.