I need to say something that most diagnostics companies don't want to hear: your product's sensitivity and specificity don't matter as much as you think they do.

That's not because the science is irrelevant. It's because every IVD manufacturer leads with the same metrics. Sensitivity. Specificity. Turnaround time. Throughput. Limit of detection. These are table stakes — the minimum requirements to be considered. They are not differentiators. And when every product in a competitive set leads with essentially the same performance data, the buyer defaults to the only remaining variable: price.

I've watched this happen more times than I can count. A diagnostics company invests years and millions in developing a genuinely superior assay. They achieve regulatory clearance. They create a beautiful data sheet with impressive analytical performance metrics. They present at conferences. And then they wonder why procurement is grinding them on price as if their product were a commodity. The answer is almost always the same: they positioned on features instead of value.

The Feature Trap: How Smart Scientists Make Bad Marketers

There's a reason diagnostics companies default to feature-based positioning, and it's not stupidity. It's professional training.

Scientists — and most people in IVD companies are scientists at heart — are trained to present data objectively. You state your methods, report your results, and let the data speak for itself. This is excellent science and terrible marketing. In a commercial context, "letting the data speak for itself" means hoping that a procurement manager with 47 other products to evaluate will independently deduce that your 99.2% sensitivity versus the competitor's 98.8% sensitivity translates into meaningful clinical benefit. They won't. They don't have time, and frankly, it's not their job.

The result is what I call the "spec sheet stalemate." Every company publishes their performance data. Every company claims superior or equivalent performance. The lab director looks at five virtually identical spec sheets and asks purchasing to get the best price. Your R&D team is horrified. Your sales team is frustrated. And your product — which may genuinely be the best option for the clinical application — is being commoditised because nobody translated the science into a commercial narrative.

What Positioning Actually Means (And What It Doesn't)

Let me be specific about what I mean by positioning, because the word gets thrown around so loosely in life sciences that it's almost lost its meaning.

Positioning is not your tagline. It's not your value proposition (though it informs it). It's not what you print on the booth banner at AACC. Positioning is the mental space you occupy in your customer's mind relative to every alternative — including doing nothing.

That last part is crucial. In diagnostics, your biggest competitor is almost never another product. It's the current way of doing things. It's the test that's already on the menu, the workflow that's already established, the reagent that's already stocked, the protocol that technicians already know. Switching costs in a diagnostic lab are real — not just financial costs, but operational disruption, retraining, revalidation, and the cognitive cost of change.

This means your positioning needs to answer a fundamentally different question than "why is our assay better?" It needs to answer: "why is switching worth the disruption?"

The Three Positioning Mistakes I See Most Often

Mistake 1: Positioning for the Wrong Audience

Most IVD positioning is written by scientists for scientists. The problem is that the buying decision in a modern healthcare system involves a minimum of four stakeholders, and usually more.

The laboratory director or section head evaluates analytical and clinical performance. The lab manager evaluates operational fit — workflow integration, hands-on time, maintenance requirements, staff training. The clinician evaluates clinical utility — what decisions this test enables that they couldn't make before. The procurement or finance team evaluates economic impact — total cost of ownership, reimbursement, and budget impact.

In the NHS, you can add ICB commissioners, who evaluate population health impact and alignment with national priorities. In the US, you might add the hospital CFO, who evaluates revenue implications under various reimbursement scenarios.

Each of these stakeholders has different decision criteria, different language, and different pain points. Positioning that resonates with a molecular biologist will bounce off a procurement manager. A single positioning statement aimed at "diagnostics professionals" will connect deeply with nobody.

The solution isn't to create a separate positioning for each stakeholder — that's how you end up with an incoherent brand. It's to build a positioning architecture with a core narrative (the "why") and audience-specific proof points (the "how" and "what") that ladder up to the same central proposition.

Mistake 2: Confusing Regulatory Claims with Commercial Claims

What you can say on your IFU (Instructions for Use) and what you should lead with commercially are fundamentally different things. I'm always surprised by how many companies treat their intended use statement as their positioning statement. Regulatory language is designed to be precise, defensible, and conservative. Commercial language needs to be compelling, memorable, and actionable.

Here's an example. A regulatory claim might read: "Indicated for the qualitative detection of mutations in the EGFR gene in NSCLC tumour tissue." Technically accurate. Commercially useless. The commercial narrative should be something closer to: "Identifies the 60% of NSCLC patients who could benefit from targeted therapy — enabling treatment decisions in 48 hours instead of two weeks." Same science. Completely different impact.

Obviously, commercial claims must be substantiated and compliant. I'm not suggesting you overstate your evidence. But there's a wide gap between regulatory conservatism and commercial persuasion, and most IVD companies don't exploit it. They default to the regulatory language because it feels "safe," and then wonder why their marketing materials read like package inserts.

Mistake 3: Positioning on Technology Instead of Outcome

This is particularly common in genomics and molecular diagnostics, where the technology itself is genuinely exciting. Companies fall in love with their platform — the chemistry, the bioinformatics pipeline, the novel detection method — and position on the technology rather than the clinical or operational outcome it enables.

Nobody outside your R&D team cares about your proprietary amplification chemistry. They care about whether the test gives them an answer they can act on, quickly enough to make a difference, at a cost that works within their budget. The technology is how you deliver the outcome. The outcome is what you sell.

I once worked with a company that had developed a genuinely innovative liquid biopsy panel. Their positioning led with the technology — "proprietary cell-free DNA enrichment methodology." The clinicians they were selling to had one question: "does this tell me something I can't already get from a tissue biopsy?" When the answer was reframed as "yes — actionable results from a blood draw in cases where tissue is insufficient or unavailable, with concordance data showing equivalent clinical utility" — the conversation changed entirely. Same product, same science, completely different commercial trajectory.

A Positioning Framework That Actually Works

After years of doing this across diagnostics, genomics, and research tools, I've landed on a framework that works consistently. It's not complicated — but it requires discipline, because the temptation to revert to feature-listing is strong.

Layer 1: Clinical Value

What clinical decision does this test enable? Not "what does it detect" — that's a feature. What does the clinician do differently with this result? Does it change treatment selection? Does it identify patients who would otherwise be missed? Does it enable earlier intervention? Does it reduce unnecessary procedures?

Clinical value is the most powerful positioning layer because it connects directly to patient outcomes. But it requires evidence — ideally clinical utility studies showing that test results changed clinical management and improved outcomes. If you don't have this evidence yet, your positioning should be built around generating it.

Layer 2: Operational Value

How does this test improve the way the lab operates? Faster turnaround? Lower hands-on time? Better batch flexibility? Integration with the existing LIS? Reduced need for send-outs?

Operational value resonates most strongly with lab managers and department heads — the people who have to make your product work within an existing operation. In my experience, operational value is often the tiebreaker when clinical performance is comparable across products. A test that saves 30 minutes of technician time per run, across 250 runs per year, is saving 125 hours of skilled labour a year. That's a compelling argument for a lab manager staring at a staffing shortage.
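If you want to turn that argument into numbers for a business case, the arithmetic is simple. Here is a minimal sketch using the illustrative figures above; the hourly labour rate is an assumed placeholder for demonstration, not a benchmark.

```python
# Back-of-the-envelope estimate of technician time freed up by a faster workflow.
# Inputs are the illustrative figures from the text; the labour rate is an
# assumed placeholder, not a benchmark.
minutes_saved_per_run = 30
runs_per_year = 250
assumed_hourly_labour_cost = 35.0  # GBP/hour, fully loaded - purely illustrative

hours_saved_per_year = minutes_saved_per_run * runs_per_year / 60  # = 125 hours
labour_value_per_year = hours_saved_per_year * assumed_hourly_labour_cost

print(f"Skilled labour hours saved per year: {hours_saved_per_year:.0f}")
print(f"Indicative labour value per year: £{labour_value_per_year:,.0f}")
```

The point isn't the precise figure; it's that a quantified operational claim gives the lab manager something they can take into a staffing or budget conversation.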

Layer 3: Economic Value

What is the financial case? This goes beyond price per test — it encompasses total cost of ownership, health economic impact, and reimbursement potential.

In the NHS, the economic case often needs to be framed as budget impact: what is the net effect on the trust's finances over 3-5 years, accounting for device costs, consumables, downstream clinical savings, and any reduction in other test usage? In the US, reimbursement strategy is its own discipline — CPT coding, coverage determination, payer negotiations. In both cases, the economic positioning needs to be specific, quantified, and defensible.
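For illustration only, here is a minimal sketch of how a simple budget impact model might be structured over a five-year horizon. Every figure is a placeholder; a real model would use the trust's own device, consumable, activity, and savings data, and would be built with the finance team.

```python
# Skeleton of a simple budget impact calculation over a multi-year horizon.
# All figures are placeholders for illustration, not real costs or savings.
HORIZON_YEARS = 5
DEVICE_COST = 40_000                  # one-off instrument/placement cost, year 1 only
CONSUMABLES_PER_YEAR = 25_000         # reagents, controls, service
DOWNSTREAM_SAVINGS_PER_YEAR = 30_000  # e.g. avoided send-outs, shorter pathways
DISPLACED_TESTING_PER_YEAR = 8_000    # spend on tests the new assay replaces

cumulative = 0
for year in range(1, HORIZON_YEARS + 1):
    capital = DEVICE_COST if year == 1 else 0
    net = capital + CONSUMABLES_PER_YEAR - DOWNSTREAM_SAVINGS_PER_YEAR - DISPLACED_TESTING_PER_YEAR
    cumulative += net
    print(f"Year {year}: net budget impact £{net:+,}")

print(f"Cumulative impact over {HORIZON_YEARS} years: £{cumulative:+,}")
```

Whatever the structure, the discipline is the same: specific, quantified, defensible, and framed in the numbers the finance stakeholder actually has to manage.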

Layer 4: Strategic Value

This is the layer most companies forget, and it's increasingly important. Does your product align with strategic priorities — national screening programmes, NHS Long Term Plan objectives, precision medicine initiatives, health equity goals? A product that helps an NHS trust demonstrate compliance with a NICE guideline has strategic value beyond its clinical and economic merits.

The Status Quo Is Your Real Competitor

I'll finish with the most underestimated insight in diagnostics positioning. Your fiercest competitor is not the product with the booth next to yours at a congress. It's inertia.

Diagnostic labs are conservative environments — for good reason. Switching tests involves validation, training, IT integration, quality system updates, and operational risk. The clinical team needs to trust the new results. The laboratory needs to be confident in the workflow. Procurement needs to see the business case. All of this creates a powerful bias towards the status quo.

Your positioning needs to overcome this bias by making the cost of not switching more compelling than the cost of switching. What are patients missing because the current test isn't sensitive enough? What clinical decisions are being delayed because the current turnaround time is too long? What money is being wasted on unnecessary confirmatory testing because the current assay's specificity isn't adequate?

When you can articulate the clinical, operational, and economic cost of the status quo more vividly than the disruption cost of switching, you've found your positioning. Everything else is spec sheets.


If your diagnostic product has strong science but a weak commercial story, let's fix the positioning.
