Ask any insurance product team about the biggest barrier to their telematics rollout and privacy will usually be somewhere in the top three. Sometimes it’s at the top. It’s rarely questioned, and it’s rarely wrong, but it is almost always incomplete.
The research bears out the concern. Pew found 81% of US adults are worried about how companies use their data.1 A 2024 study of drivers specifically identified 68% as concerned about telematics tracking.2 In Germany, a Bitkom survey from February 2024 put the figure at 98% of respondents wanting full visibility into what their vehicle collects.3 If privacy felt like a niche worry ten years ago, it isn’t one now.
What the same research shows less often, though, is that privacy concern and privacy refusal aren’t the same thing. Over half of the drivers in the US study said they’d opt in to telematics if insurers promised not to sell the data, offered transparency about what was collected, and gave them control over deletion.2 That isn’t a privacy problem. That’s a design problem.
Insurance telematics has had a version of this conversation for two decades. The answers have become sharper. Data-protection law is clearer, the EDPB’s connected-vehicle guidelines are explicit, and enforcement in neighbouring sectors is already in motion.4 The market opportunity is real: European UBI was valued at roughly $10 billion in 2024 and is still doubling about every four years.5 But the drivers inside that market are making one decision at a time, and they’re making it based on what they can understand.
A note on where we sit in this conversation before we go further. Dolphin builds telematics across the full architectural spectrum: fully anonymous programmes, Chinese-wall architectures that follow EDPB guidance to the letter, and programmes where the insurer wants a conventional data flow with the user’s knowledge and consent. We have preferences, and this page sets them out. We also have an order book, and our preferences are not a description of every programme we’ve ever shipped. What follows is what we see working when the goal is adoption, not compliance-as-an-end.
This page lays out how to think about the decision every telematics user makes, often before they ever see a premium quote. It covers what the law requires, where the law isn’t enough, and what European insurers who’ve made this work have actually done. The short version: users rarely refuse telematics because of privacy. They refuse it because the value isn’t visible.
The three questions every user actually asks
When a driver opens a telematics app for the first time, they aren’t reading the privacy policy. They’re running a calculation. It has three inputs, and the inputs are almost always the same, regardless of market.
What am I giving up? The question isn’t really about data categories. Nobody opens an app and thinks in terms of “motion vectors” or “GNSS coordinates.” They think about two things: whether the app follows them around in some diffuse sense, and whether anyone they know will see the data. Both concerns are addressable, but only if they’re addressed directly. Telematics programmes that open with a list of sensors the app uses lose the user before the list is finished. Programmes that open with “this service detects a crash and alerts help — it does not share your location with anyone else” have already answered the question that mattered.
What do I get back? This is the question most privacy discussions skip. The academic research is clear on its weight: Gerber and colleagues’ 2018 review of the privacy paradox found perceived benefit to be one of the strongest predictors of data-disclosure willingness, alongside habit and concrete utility.6 Capgemini’s 2023 study of 3,000+ European vehicle owners found only around a third willing to share vehicle data today — and concluded that the lever for change is concrete, purpose-specific benefit, not additional legal text.7 In insurance telematics, that benefit is usually one of three things: help when something goes wrong (crash detection, roadside assistance), recognition when something goes right (premium discounts, rewards, score improvements), or insight the user finds valuable on its own terms (driving feedback, safer-route suggestions). Programmes that lead with one of these and keep it concrete see higher opt-in rates, measurably and consistently. Programmes that lead with pricing logic rarely do.
Do I still have a choice tomorrow? This is the question that separates “consent” from “trust.” Even users who happily accept on day one will withdraw consent quietly if they later feel they can’t withdraw it openly. The practical consequence is the visible control surface: a dashboard that shows what the app currently tracks, a one-tap pause, a clear account deletion flow. Research on defaults is relevant here. Johnson and Goldstein’s 2003 Science paper on organ-donor consent established that defaults anchor behaviour strongly even when stated preferences are similar;8 in telematics, the analogue is that a program’s default posture (opt-in per service versus bundled, visible controls versus buried) shapes both consent rate and subsequent engagement. A program that makes control visible almost never needs to use it.
Three questions, three answers, in that order. When the answers are clear, adoption follows naturally. When any of the three is vague, the user treats the whole service as a gamble, and most people don’t gamble with data.
The concrete example that makes this click in most insurer conversations is automatic crash detection. Crash detection is arguably the most data-intensive telematics service on the market: it requires continuous motion sensing, contextual interpretation of driving patterns, and precise location at the moment of impact. If privacy concern were primarily about data intensity, crash detection would see low opt-in. It doesn’t. Across European programmes we work with, acceptance is consistently high, because all three questions have one-sentence answers. What am I giving up: my approximate motion and location, only while driving. What do I get back: fast help if I crash. Do I still have a choice: yes, any time. The data is heavy; the decision is light.
None of this is theoretical for us. We see the inverse pattern too: programmes where the three questions are left ambiguous, where the architecture is technically compliant but the user-facing story is vague, where opt-in numbers stay under 10% and nobody’s sure why. The answer is usually the same. Not that the data collection was wrong, but that the explanation was missing. The rest of this page walks through how to close that gap — what design moves reliably tighten the answers, and where the three Dolphin offerings sit across the spectrum of choices an insurer actually has.
Why privacy is the top adoption barrier insurers hear about
Privacy concern among drivers is broad and well-documented. Pew’s 2023 US survey found 81% of adults worried about how companies use their data, with two-thirds saying they understood little or nothing about what companies actually do with it.1 Narrowed to telematics specifically, a 2024 study of US drivers put the share concerned about tracking at 68%.2 In Germany, Bitkom surveyed over 1,000 consumers in early 2024 and found 98% wanting full visibility into what their vehicle collects and 97% wanting the ability to stop data transmission at will.3
The gap between that concern and actual adoption is the number that matters. Consumer Reports surveyed more than 40,000 US policyholders in 2024 and found only 14% enrolled in a telematics programme — and only 28% even aware of their insurer offering one.9 In the DACH market, Germany’s largest insurer HUK-Coburg grew its Telematik Plus programme to around 670,000 users in 2024, impressive in absolute terms but still only about 5% of its 13.3 million-customer book.10
Two things follow. First, the addressable market is large and mostly uninformed rather than actively opposed. Second, the barrier between “aware” and “enrolled” is where programme design makes the difference. Programmes that open by talking about data tend to stay small. Programmes that open by talking about what the service actually does for the user tend to grow.
What a telematics app actually collects (and what it doesn’t need to)
Before listing categories, one reality check worth naming. A driver who installs a telematics app is not discovering for the first time that the app collects location data. The privacy question they’re asking is not “do you collect data,” it’s “what exactly, for what, with whom, and for how long.” Data-protection law is written to protect users who haven’t asked those questions, including the weakest links in the chain, which is right and necessary. Running a telematics service, though, means actually answering the questions for the users who do ask. The list below is the honest version of what a modern smartphone-based telematics app collects, and what it deliberately doesn’t touch.
The functional data breaks into four categories:
- Motion — accelerometer and gyroscope readings used to detect trips, braking, cornering, acceleration, and impact events.
- Location — GPS or equivalent positioning, sampled as often as the service needs and no more often. For crash detection and trip summaries this is relatively light; for turn-by-turn coaching it’s heavier.
- Trip state — derived signals like trip start/end, mode of transport (driving a car vs. riding as a passenger vs. public transport), and phone-use-while-driving events.
- Device context — operating system, app version, permissions state, and similar metadata needed to run the service reliably across thousands of handset models.
Identity sits in a separate bucket. In a well-designed telematics architecture, the driving data above is linked to a user only by a hashed identifier that the telematics provider can’t reverse. The personal information that makes a user identifiable (name, address, phone number, email, policy number) is held separately by the insurer, with a one-way mapping between the two. This is the “Chinese wall” architecture the EDPB describes, and it’s the structural reason telematics providers don’t need to see, and usually can’t see, who their data belongs to.
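The separation can be sketched in a few lines of code. This is an illustrative Python sketch only, not Dolphin’s implementation — `pseudonym`, `insurer_db`, and `telematics_db` are invented names. It uses a keyed HMAC rather than a plain hash so the pseudonym cannot be recomputed by anyone who lacks the insurer-side key:

```python
import hashlib
import hmac
import os

# Illustrative sketch of the identity-separation idea — names and schema
# are ours, not a Dolphin API. The insurer holds a secret key; the
# telematics backend only ever sees the resulting one-way pseudonym.

SECRET_KEY = os.urandom(32)  # held on the insurer's side of the wall

def pseudonym(policy_number: str) -> str:
    """Keyed one-way hash: the telematics provider cannot reverse it."""
    return hmac.new(SECRET_KEY, policy_number.encode(), hashlib.sha256).hexdigest()

# Insurer's policy system: personal data, keyed by policy number.
insurer_db = {"POL-123": {"name": "A. Driver", "email": "a@example.com"}}

# Telematics backend: driving data, keyed by the pseudonym only.
telematics_db = {pseudonym("POL-123"): {"trips": 42, "score": 87}}

# The telematics side holds no direct identifier at all.
assert "POL-123" not in telematics_db
assert pseudonym("POL-123") in telematics_db
```

The design point the sketch captures: the mapping only runs one way unless someone on the insurer’s side of the wall deliberately applies the key.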
What a telematics app doesn’t need — and what a well-built one doesn’t ask for — is almost everything else on the phone. Contacts, text messages, call logs, photos, browsing history, unrelated app usage: none of these have a role in measuring how a vehicle is driven or detecting a crash. Asking for them erodes trust without adding service value. Most of the real design work is in the opposite direction: using the minimum of what’s listed above to deliver each feature, and being transparent about which data goes where.
What European regulation actually requires
The legal frame for EU telematics is clearer than it’s often made out to be. The core document is the EDPB’s Guidelines 01/2020 on processing personal data in the context of connected vehicles, adopted in March 2021.4 It’s a readable text for a regulatory document. Its headline position: consent is the default legal basis for storing or accessing data on in-vehicle terminal equipment under the ePrivacy Directive, and the word consent appears eighty-six times in the document.
Two principles from those guidelines shape how telematics programmes should be architected.
The first is data minimisation, which is best read as proportionate to the stated purpose rather than minimal in absolute terms. A telematics service that detects crashes needs continuous motion sensing and precise location at the moment of impact; a service that scores driving behaviour needs sustained motion data and trip-level location context. Both are proportionate to what they actually do, and both are what EU law allows when the service and the data flow are designed to match. The design question the regulator will ask is whether the data the programme collects is what the stated service needs — not whether it’s “as little as possible.”
The second is purpose limitation. The EDPB explicitly states that telemetry collected for vehicle maintenance purposes cannot be forwarded to an insurer to build behaviour-based products without fresh consent specific to the new purpose.11 That distinction is increasingly load-bearing as car manufacturers accumulate connected-vehicle data and consider new commercial uses for it. An insurer relying on OEM data flows, rather than its own consented telematics programme, needs purpose-specific consent from the user — not the original vehicle-sale consent. An insurer running a consented telematics app under its own brand has the easier story to tell.
Looking forward, in March 2025 CNIL opened public consultation on a new draft recommendation specifically governing connected-vehicle location data, which signals where the next wave of European supervisory attention is heading.12
For contrast, the US picture is less harmonised but more litigious — and the cases that have surfaced recently are about consent architectures very different from the European telematics-app norm. In January 2025 the Texas Attorney General sued Allstate and its subsidiary Arity for covertly collecting driving data from roughly 45 million Americans through SDKs embedded in third-party apps — the first US enforcement action under a state comprehensive privacy law.13 In April 2025 a class action against Toyota and Progressive followed, alleging vehicle telemetry was shared via a data broker without informed consent.14 The specific architectures under challenge in both cases (buried consent, third-party SDK chains, no clear purpose limitation) are the ones European practice has discouraged for years and the ones a properly designed insurance telematics app rules out by construction.
No EU data-protection authority has yet fined a motor insurer specifically for its telematics programme. That gap won’t last forever, but it does reflect the fact that European telematics has, broadly, been built on the consent-and-purpose-specific patterns the regulators care about. Programmes that keep doing that — and that are explicit with users about what their app does and why — should expect to stay on the right side of where enforcement lands next.
Why “GDPR-compliant” isn’t a go-to-market message
Being legally compliant protects the insurer. Explaining the service to the user is a different job.
The academic literature on privacy decisions has been consistent on this for a decade. Acquisti, Taylor and Wagman’s 2016 review in the Journal of Economic Literature describes the consumer’s position as one of “imperfect or asymmetric information regarding when their data is collected, for what purposes, and with what consequences.”15 No privacy policy text, however well-drafted, closes that asymmetry by itself. Kirsten Martin’s 2020 work on the privacy paradox extends the point: once a firm asks for data, it takes on an associated duty of care whose breach creates distrust comparable to an outright security failure.16
In practical terms this means a telematics programme that has obtained consent can still fail on trust. A user who ticks the box on a 40-page terms-of-service does not then remember what they agreed to, and the first time they see a feature behaving in a way they didn’t expect, the consent no longer feels valid to them — legally binding or not. The programmes that avoid this outcome are the ones where consent is not the event that authorises everything afterwards, but a running conversation between the app and the user.
That’s the boundary where compliance ends and design begins. The next section is about the design.
Design choices we recommend across the spectrum
The specific architectural and interaction choices that correlate with higher opt-in sit at three layers.
Architecture. Separate the identity of the user from the driving data — the telematics backend sees a hash, the insurer’s policy system sees a name, and the bridge between the two is narrow and auditable. Process what can be processed on-device rather than sending everything to a server. Retain only what the service needs for as long as the service needs it, and define that retention period explicitly rather than leaving it open-ended. None of this prevents an insurer from running an attributed programme when that’s what the product calls for. It sets the defaults so the burden of justification shifts toward collecting more rather than toward collecting less.
Interaction. Ask for each purpose separately rather than bundling consents. Give the user a visible dashboard that shows what the app is currently doing, not just what the privacy policy says it may do. Make pausing the service one tap, and make account deletion a clear path rather than a hidden email address. Users rarely use pause or deletion when the features exist; they leave the programme entirely when the features don’t.
Messaging. The single highest-leverage copy change we see across programmes is the switch from monitoring language to service language. “We track your driving” lands differently from “We detect a crash and alert help.” Both describe an overlapping set of technical capabilities, but the second one answers the user’s actual question. Coaching language beats grading language for the same reason. Programmes that frame telematics as a feedback tool rather than a surveillance tool see materially higher sustained engagement.
These choices are the recommendation. The next section is where each of Dolphin’s three product offerings sits against them, and how configurable they are in practice.
The three Dolphin offerings and where each sits on the privacy spectrum
Dolphin’s product portfolio is built around one telematics engine with three different packaging and operational models. They sit at different places on the privacy spectrum by default, and each is configurable to move along the spectrum if the insurer’s programme requires it.
MOVE Score app. Our own-brand app, available directly to end users. Registration is hash-based and anonymous by default: we don’t ask for name, email, or insurance status at sign-up, and the MOVE Score itself is a risk signal derived from driving behaviour and exposure without needing the user’s identity. This is the most privacy-forward end of the spectrum, and it exists to let insurers test a telematics programme, or run a “try-before-you-buy” acquisition funnel, without onboarding personal data they don’t yet need. The app is also useful as a benchmark: it’s the reference implementation of the design principles above.
MOVE SDK. The engine on its own, integrated into an insurer’s own app (or any other app). The SDK ships with the identity-separation architecture already built in, and it supports the full range of telematics features, from trip detection and behaviour scoring through to automatic crash detection. What it does not do is force a particular data-flow architecture on the integrating app. An insurer can run MOVE SDK in a fully anonymous configuration, in a Chinese-wall configuration that maps hashes to identities only when genuinely needed, or in a conventionally attributed configuration where the insurer sees user-level driving data from the start. All three are supported; the choice is the insurer’s.
White-label apps. We build the app, typically on top of MOVE SDK and MOVE Score, for insurers who’d rather launch quickly than build a dedicated mobile team. The product and onboarding are designed with the insurer’s brand and programme logic. The underlying privacy posture is the same configurable spectrum as the SDK: anonymous, Chinese-wall, or attributed, per the insurer’s chosen architecture.
| Offering | Default identity posture | What’s configurable | Example |
|---|---|---|---|
| MOVE Score app | Hash-based, anonymous | Can be linked to an insurer identity on opt-in | Our direct-to-consumer MOVE Score product |
| MOVE SDK | Architecture-agnostic, ships with separation primitives | Full spectrum: anonymous ↔ Chinese-wall ↔ attributed | Integrated into an insurer’s existing app |
| White-label app | Whatever the insurer specifies | Same as SDK | Generali Austria Mobility, Porsche Smart Driver, UNION Drivello |
The recommendations in the previous section are the pattern we see correlating with adoption. They are not a mandate. An insurer with a specific programme design, regulatory environment, or existing customer relationship sometimes has good reasons for a different configuration, and when they do, we build to that configuration rather than arguing against it. What we do is tell them, honestly, where we think their opt-in rate is likely to land.
How three European insurers have put this into practice
The three case studies on this site cover most of the spectrum that insurers operate on. Each one is live, each one is open to users beyond the insurer’s own policyholder base, and each one combines the design principles above with a distinct commercial logic.
Generali Austria — Mobility App, launched May 2022. The earliest of the three, and the longest-running proof point. Generali positioned the app as a mobility programme rather than an insurance tool: safer driving and greener travel choices earn in-app rewards, sustained engagement beyond the renewal touchpoint. Privacy-first architecture underpinned that positioning; the app was made available to all drivers in Austria, not only Generali policyholders, which would be harder to justify in a more conventional attributed data model.
Porsche Versicherung Austria — Smart Driver, launched October 2023. Sits closer to the classical UBI pattern in commercial terms — safe driving translates into up to 20% off monthly insurance premiums — while using the same identity-separation architecture. Retention was the design priority: a gamified driving experience that keeps the user opening the app between policy events. The case study documents the specific design choices that went into that retention curve.
UNION Biztosító (VIG) — Drivello, launched November 2025. The most recent, and the clearest example of the “trust-first” end of the spectrum. Drivello gives every user one year of free roadside assistance from the day after registration, regardless of whether they take out a UNION policy, and uses the MOVE Score to support discount eligibility on mandatory liability insurance. Driving data is explicitly used for coaching and discount calculation, not for premium increases or claims decisions — a commitment that’s visible to users rather than buried in terms. Drivello is the programme that most closely mirrors the three-questions framework in its UX decisions.
Across the three programmes we’ve seen opt-in, retention, and Net Promoter figures that none of them would have hit with a classical “download our app to lower your premium” value proposition. The differences in design, not the differences in data collection, are what explain the curves.
→ Read the full case studies: Generali, Porsche, UNION Drivello.
Questions insurance product teams actually ask us
Which legal basis should we rely on for telematics processing — consent, contract, or legitimate interest?
For most user-facing telematics features under EU law, consent is the conservative default. The EDPB’s Guidelines 01/2020 position it that way explicitly, and for features that touch sensitive data (continuous location, driving-style profiling) it’s the only option that reliably holds up to regulator scrutiny. Insurance Europe has argued that Article 6(1)(b) GDPR (contract necessity) should also be available for telematics-based products,17 but consent remains the posture that survives the most enforcement scenarios. The right answer for any given programme depends on the feature set, the jurisdiction, and the insurer’s wider privacy strategy — a conversation worth having with a DPO before architecture is fixed.
How do we choose between an anonymous-by-design architecture and a Chinese-wall architecture?
The decision usually comes down to what the programme needs the user identity for. If the insurer needs to link driving data to a specific policy (premium discount, claims cross-referencing, renewal messaging), an anonymous-by-design architecture doesn’t fit, and a Chinese-wall architecture is the correct structural answer. If the programme is primarily a customer-acquisition or engagement play — a standalone driving-insights service, a try-before-you-buy funnel — anonymous-by-design is both simpler and stronger from a privacy standpoint. Dolphin’s MOVE Score app sits in the second category; most white-label and SDK integrations sit in the first.
What opt-out mechanics should our telematics app offer, and how granular should they be?
At minimum: a one-tap pause for the full service, purpose-level toggles for distinct features (trip tracking vs. crash detection vs. location-based services), and a clear path to account deletion with a defined data-retention tail. Granular opt-out is sometimes seen as a threat to service integrity; in practice, programmes that provide it see lower hard opt-out rates because users retain the sense of control that would otherwise drive them out of the programme entirely.
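As a concrete sketch of those mechanics — illustrative only, with invented names rather than a Dolphin API — a global pause plus purpose-level toggles can be modelled as a single consent state that is checked before any collection happens:

```python
from dataclasses import dataclass, field

# Hypothetical model of granular opt-out: a one-tap global pause and
# per-purpose toggles. Purpose names and defaults are illustrative.

@dataclass
class ConsentState:
    paused: bool = False
    purposes: dict = field(default_factory=lambda: {
        "trip_tracking": True,
        "crash_detection": True,
        "location_services": False,  # off until the user explicitly opts in
    })

    def may_collect(self, purpose: str) -> bool:
        # Collection is allowed only if the service isn't paused AND the
        # specific purpose has an explicit opt-in; unknown purposes are denied.
        return not self.paused and self.purposes.get(purpose, False)

state = ConsentState()
assert state.may_collect("crash_detection")
assert not state.may_collect("location_services")

state.paused = True  # the one-tap pause stops every purpose at once
assert not state.may_collect("crash_detection")
```

The structural point: every collection path runs through `may_collect`, so a toggle in the dashboard is guaranteed to mean what the user thinks it means.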
How should our onboarding explain data collection without triggering refusal?
Explain features first, data second. Describe the service the user gets, then describe what the service uses, then show the controls that govern it. Keep each step short and specific. Avoid generic privacy-policy language in the onboarding flow itself — the 40-page policy can live where it lives, but onboarding should answer the three questions the user is actually asking: what am I giving up, what do I get back, can I still change my mind later.
What retention period should our telematics programme use?
Defined, specific, and tied to a stated purpose for each data category. Raw GPS traces should usually be retained for weeks rather than years; aggregated driving scores can be retained longer where the programme offers historical feedback or trend analysis. Open-ended retention is the configuration most likely to draw regulator attention, and the one hardest to justify under GDPR’s storage-limitation principle.
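One way to make that concrete — a hypothetical configuration sketch, with category names and windows chosen purely for illustration — is to pin each data category to a stated purpose and a defined retention window in one place, so nothing is retained open-endedly by default:

```python
from datetime import timedelta

# Hypothetical retention schedule: every category carries an explicit
# purpose and a bounded window, per GDPR's storage-limitation principle.
RETENTION_SCHEDULE = {
    "raw_gps_trace":    {"purpose": "crash reconstruction", "keep": timedelta(weeks=6)},
    "trip_summary":     {"purpose": "driving feedback",     "keep": timedelta(days=365)},
    "aggregated_score": {"purpose": "discount calculation", "keep": timedelta(days=3 * 365)},
}

def is_expired(category: str, age_days: int) -> bool:
    """True if a record of this category and age should now be deleted."""
    return timedelta(days=age_days) > RETENTION_SCHEDULE[category]["keep"]

# Raw traces live for weeks; aggregated scores for years.
assert is_expired("raw_gps_trace", age_days=60)
assert not is_expired("aggregated_score", age_days=60)
```

A schedule like this also doubles as the answer to a regulator’s first question: each retention period is tied to a purpose, not to convenience.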
Does hashing actually hold up to re-identification risk in a telematics dataset?
Hashing a user identifier is necessary but not sufficient. Telematics data is high-dimensional — trip patterns, home and work location, daily routine — and can be re-identified by someone with access to auxiliary information even when direct identifiers are removed. The architectural answer is to keep the hashed driving data and the identity mapping in separate systems, with access controls that make the two being combined a deliberate, logged action rather than an ambient capability. That’s the structural property that holds under pressure; hashing alone doesn’t.
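The “deliberate, logged action” property can be sketched as follows — purely illustrative, with hypothetical names throughout. The point is that resolving a pseudonym to an identity is an explicit call that always leaves an audit record, never an ambient lookup:

```python
from datetime import datetime, timezone

audit_log = []

# Held in a separate, access-controlled system — never co-located with
# the driving data itself. Mapping contents here are illustrative.
IDENTITY_MAPPING = {"hash-9f3a": "POL-123"}

def resolve_identity(pseudonym: str, requester: str, reason: str) -> str:
    """The narrow bridge: every resolution is appended to an audit trail."""
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": requester,
        "pseudonym": pseudonym,
        "reason": reason,
    })
    return IDENTITY_MAPPING.get(pseudonym, "unknown")

policy = resolve_identity("hash-9f3a", requester="claims-handler-7", reason="claim #881")
assert policy == "POL-123"
assert len(audit_log) == 1  # the join left a trace
```

In a production system the mapping would sit behind its own access controls and the log would be append-only; the sketch shows only the structural shape that survives re-identification pressure.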
What’s the right way to brief a regulator on our telematics programme before launch?
Pre-briefing (where the regulator’s posture supports it) is usually worth the investment. Bring the architecture, the data flows, the retention schedule, and the user-facing consent mechanics; expect questions on data-minimisation rationale and on the legal basis chosen per processing activity. Programmes that engage regulators early tend to ship faster, not slower — the concerns raised are ones the programme would have hit eventually, and hitting them before launch is cheaper than hitting them in remediation.
Where this goes next
CNIL’s 2025 draft recommendation on connected-vehicle location data12 is the most visible near-term regulatory development, and similar guidance from other EU data-protection authorities is likely within the next 12–24 months. The first EU DPA fine against a motor insurer specifically for its telematics programme is not a question of if but when. Programmes that treat privacy as product design now — that bake the three-question answers into architecture and UX rather than into terms-and-conditions — will be the ones unaffected when that enforcement lands.
→ Related reading: Privacy, trust and telematics · What about privacy in insurance telematics? (podcast) · Glossary: UBI, PAYD, PHYD.
About the author
Harald Trautsch is Co-Founder and CEO of Dolphin Technologies, the Vienna-based telematics company he started in 2001 after a decade spent building automotive security and crash-detection systems. One of those early crash-detection devices alerted emergency services to a couple whose car had gone over a roadside embankment; both were rescued. The incident set Dolphin’s direction for the next two decades: telematics as a tool for prevention and help, not just for pricing.
Between 2008 and 2012 Harald led international market development for Octo Telematics, and negotiated the EU eCall Directive in Brussels on behalf of the Austrian Ministry of Transport — work that shaped how connected-vehicle data is treated in European law today. He returned to Dolphin in 2014 following a management buy-out and has run the company internationally since.
He hosts the Insurance Telematics podcast, speaks regularly at European insurance and insurtech conferences, and has worked with insurers including Generali, Porsche Versicherung, Vienna Insurance Group, and Kooperativa.
LinkedIn: linkedin.com/in/trautsch
References
1. Pew Research Center, “How Americans View Data Privacy” (October 2023) — source
2. AutoInsurance.com, “Many Drivers Skip Auto Insurance Tracking Apps Over Privacy” (2024) — source
3. Bitkom, connected-car survey (February 2024) — source
4. EDPB, Guidelines 01/2020 on processing personal data in the context of connected vehicles (2021) — source
5. Data Bridge Market Research, Europe Usage-Based Insurance Market Report (2024/2025) — source
6. Gerber, N., Gerber, P., & Volkamer, M. (2018), “Explaining the privacy paradox” — source
7. Capgemini Research Institute, “Monetizing Vehicle Data” (2023) — source
8. Johnson, E.J. & Goldstein, D. (2003), “Do Defaults Save Lives?”, Science 302 — source
9. Consumer Federation of America / Consumer Reports telematics study (2024) — source
10. Autohaus, “Bilanz 2024: HUK-Coburg hat jetzt 14 Mio. Fahrzeuge unter Vertrag” (2025) — source
11. EDPB, Guidelines 01/2020, Section 1.5 (repurposing of telemetry) — source
12. Covington Inside Privacy, CNIL 2025 draft recommendation on connected-vehicle location data (2025) — source
13. Texas Attorney General press release, Paxton v. Allstate/Arity (January 2025) — source
14. Insurance Journal, coverage of the Toyota/Progressive class action (April 2025) — source
15. Acquisti, A., Taylor, C.R., & Wagman, L. (2016), “The Economics of Privacy,” Journal of Economic Literature — source
16. Martin, K. (2020), “Breaking the Privacy Paradox,” Business Ethics Quarterly — source
17. Insurance Europe, response to the EDPB connected-vehicle consultation (2020) — source