After infrastructure cost and data readiness, the most underestimated, and often the most difficult, barrier to modeling and simulation (M&S) adoption is company culture.
Unlike software licenses or data pipelines, the “right” culture cannot be purchased or installed.
Most companies genuinely believe they are “supportive of predictive modeling,” endorsing it in presentations and long‑term vision statements while resisting it in practice.
Experimental Bias: “Show Me the Data”
Pharmaceutical development has been built on tangible experimentation for decades. Physical experiments carry an implicit legitimacy: what we see in the lab feels more real than what a model predicts. This bias runs deep, even when wet experiments are slow, expensive, or incomplete, and even though manually collected data can be inaccurate due to operator error, inappropriate test protocols, or faulty instrumentation.
In these scenarios, models are applied after data collection, as explanatory tools rather than decision drivers. Modeling teams are brought in late, asked to “fit the data,” validate what is already known, or explain why an experiment behaved the way it did, rather than guiding which experiments should be run in the first place.
This relegates M&S to a reactive role and severely limits its value.
True adoption requires a cultural shift from experiment‑led to decision‑led development, where models inform experimental design, reduce trial‑and‑error, and quantify risk before material is consumed. That shift is as much about trust and mindset as it is about math.
Risk Aversion and Regulatory Anxiety
Pharmaceutical organizations are, by necessity, risk‑averse. Patient safety, regulatory scrutiny, and stringent quality expectations create strong incentives to avoid new approaches, especially those perceived as “black boxes.” In regulated environments, predictability and traceability matter more than novelty, and any new methodology must earn trust across technical, quality, and regulatory stakeholders.
Even though regulators increasingly encourage model‑informed development, internal teams often hesitate to rely on M&S for primary decisions. There remains a lingering fear that “if it’s not physically measured, it won’t be accepted,” or that simulation results will be difficult to defend during audits, filings, or inspections. As a result, modeling is frequently positioned as supporting evidence at best, rather than as a foundational decision‑making tool.
Ironically, this cultural resistance persists even when traditional experimental approaches are statistically weak, incomplete, or poorly scalable. Limited batch counts, narrow operating ranges, and confounded experiments are routinely accepted as sufficient, simply because they are physical. In contrast, well‑constructed models, especially mechanistic ones, are subjected to a much higher bar, regardless of their ability to provide broader insight and predictive power.
This is where Quality by Design (QbD) fundamentally changes the conversation. QbD is not centered on testing quality into a product, but on knowledge‑based process and product development. Its core principles naturally align with mechanistic and hybrid modeling: understanding parameter relationships, establishing causality, identifying critical quality attributes (CQAs), and defining a scientifically justified design space.
Mechanistic, physics‑based models are uniquely suited to support QbD because they explicitly encode cause‑and‑effect relationships. They make assumptions visible, enforce mass and energy balances, and capture scale‑dependent phenomena that experiments alone often cannot. Rather than producing opaque correlations, these models generate explainable knowledge: how inputs propagate through a process, why certain parameters matter more than others, and under what conditions failure modes emerge.
When used correctly, modeling becomes a vehicle for building and demonstrating process understanding: exactly what QbD demands. Models can systematically explore multidimensional design spaces that would be impractical or prohibitively expensive to cover experimentally, helping teams justify operating ranges, assess robustness, and proactively manage risk.
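To make this concrete, here is a minimal sketch of a design‑space sweep with a toy mechanistic model: consecutive first‑order reactions (desired product plus a degradation impurity) with Arrhenius kinetics. Every kinetic parameter and CQA limit below is an invented illustration, not data from any real process; the point is only to show how a mechanistic model lets a team screen temperature–time combinations against CQA targets before consuming material.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(pre_exp, activation_energy, temp_k):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return pre_exp * math.exp(-activation_energy / (R * temp_k))

def simulate(temp_k, time_h):
    """Analytic solution of consecutive first-order reactions A -> B -> C.

    Returns (conversion of A, impurity fraction C/A0).
    Kinetic parameters are illustrative placeholders, not fitted values.
    """
    k1 = arrhenius(3e10, 70e3, temp_k)   # desired reaction A -> B, 1/h
    k2 = arrhenius(4e15, 120e3, temp_k)  # degradation B -> C, 1/h
    conversion = 1.0 - math.exp(-k1 * time_h)
    impurity = 1.0 + (k1 * math.exp(-k2 * time_h)
                      - k2 * math.exp(-k1 * time_h)) / (k2 - k1)
    return conversion, impurity

def design_space(temps_k, times_h, min_conversion=0.95, max_impurity=0.02):
    """Return the (T, t) grid points that meet both CQA targets."""
    feasible = []
    for temp in temps_k:
        for t in times_h:
            conv, imp = simulate(temp, t)
            if conv >= min_conversion and imp <= max_impurity:
                feasible.append((temp, t))
    return feasible

# Sweep a 2D grid: 330-390 K in 10 K steps, 0.5-8 h in 0.5 h steps.
temps = range(330, 391, 10)
times = [0.5 * i for i in range(1, 17)]
feasible_region = design_space(temps, times)
```

Even in this toy form, the model shows the QbD logic: the feasible region emerges from encoded cause‑and‑effect (rate laws), not from curve‑fitting a handful of batches, and each excluded grid point comes with a mechanistic reason (insufficient conversion or excess degradation).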
This only works if organizations are culturally prepared to treat models as first‑class engineering artifacts, not optional extras or post‑hoc justifications. Validation, documentation, and lifecycle management must be approached with the same rigor applied to experimental methods. When that happens, models are no longer perceived as regulatory liabilities, but as assets that strengthen submissions and reduce uncertainty.
In this context, M&S is not in conflict with regulatory expectations; it is one of the most effective ways to meet them.
The Silo Problem: Modeling as a Specialist Function
Another cultural hurdle is organizational structure. In many companies, M&S exists in a small, specialized group, separate from R&D, process development, manufacturing, or tech transfer.
While this centralization may seem efficient, it often reinforces the perception that modeling is someone else’s job. Process engineers focus on experiments. Manufacturing focuses on execution. Modeling teams wait for requests.
When M&S is culturally siloed, it becomes transactional: a service function rather than an integrated capability. Requests arrive late, constraints are unclear, and results don’t influence real decisions because the teams responsible for execution were not part of the modeling journey from the start.
Adoption stalls not because models fail, but because ownership is unclear.
Incentives That Work Against Modeling
Culture is shaped by incentives, and many organizations unknowingly discourage M&S adoption through how success is measured.
Experimental throughput, batch count, and on‑time milestone delivery are often rewarded more visibly than insight generation or risk reduction. Running “one more experiment” feels safer than trusting a model, especially when individual career incentives are tied to tangible lab outputs.
In this environment, modeling can be perceived as slowing things down, adding review steps, or introducing uncertainty, even when it ultimately saves time and cost at the program level.
Without leadership explicitly rewarding model‑informed decisions, M&S struggles to gain traction beyond early adopters and internal champions.
Why Outsourcing Changes the Cultural Equation
This is where Simulation‑as‑a‑Service fundamentally alters not just cost structures, but culture.
External modeling partners like Procegence bring more than tools and expertise. They introduce neutrality. An external team is not competing with internal experiments, budgets, or headcount. Their role is clearly defined: to support decisions.
Outsourcing also lowers the internal barrier to entry. Teams are more willing to experiment with modeling when it does not require organizational restructuring, long‑term hiring commitments, or internal ownership battles. Projects can start small, prove value, and scale based on outcomes rather than promises.
Perhaps most importantly, external partners help normalize M&S as a standard part of decision‑making, not a niche specialty. When modeling delivers clear, project‑based impact (fewer experiments, faster scale‑up, and smoother tech transfer), it earns credibility organically.
Building a Decision‑Centered Culture
Successful M&S adoption does not begin with software or data. It begins with a cultural commitment to better decisions.
That means:
- Involving modeling early, not as validation but as guidance
- Treating models as evolving knowledge assets, not final answers
- Rewarding teams for reduced uncertainty, not just experimental output
- Embracing hybrid, physics‑informed approaches that balance rigor with practicality
Mechanistic and hybrid models fit naturally into this cultural shift. They align with how engineers and scientists think, provide explainability, and scale beyond the data immediately available.
Moving Forward
The irony remains: companies invest in M&S to reduce cost, risk, and time, yet cultural resistance often prevents them from realizing those very benefits.
Overcoming the cultural barrier does not require abandoning experimentation or taking reckless risks. It requires reframing modeling not as a replacement for experiments, but as a force multiplier for them.
Organizations that succeed will be those that treat M&S not as an optional capability, but as a core decision discipline, supported by the right tools, data, and partners.
In a world where timelines are tighter, margins are thinner, and uncertainty is unavoidable, culture, not technology, will determine who truly benefits from M&S. Let us know how we can help you get started.
In the fourth article in this series, we will explore expectations around validation and trust in the adoption of M&S.