Rethinking Customer Data: AI and Privacy-Preserving Solutions

Tags: verifiable proofs, zkTLS, personhood credentials, online identity, data privacy, AI deception, customer data collection
Lalit Choda

Founder & CEO @ Non-Human Identity Mgmt Group

October 29, 2025 · 10 min read

TL;DR

This article explains the shift from storing customer data to requesting verifiable proofs, highlighting zkTLS and personhood credentials. These technologies enable verification without data hoarding, reducing privacy risks and compliance burdens. Discover how this approach enhances security, speeds up processes, and combats AI-driven deception online.

Rethinking Customer Data Collection with Verifiable Proofs

Online identity is shifting from storing customer data to requesting verifiable proofs. The internet already holds what onboarding and risk teams need, such as degrees, loyalty tiers, and proofs of payment; the challenge lies in verifying these facts reliably without compromising privacy. For founders, every onboarding form, fraud check, and compliance workflow means verifying users without creating a data honeypot. So what if verification didn't require storage at all? Data is meant for verification, not hoarding.

Closing the Trust Gap

Data breaches are expensive: IBM estimates the 2025 global average cost at around $4.4 million. Automation amplifies the threat, with malicious bots accounting for roughly 37% of internet traffic. A 2025 investigation found over 30 data brokers hiding opt-out and deletion pages from search, prompting federal pressure and state scrutiny, and California's privacy regulator has advanced a unified deletion mechanism called DROP under the Delete Act. All of this reinforces the shift from storing customer data to requesting verifiable proofs: teams that evolve their verification methods reduce liability and speed up compliance.

zkTLS Simplified

Zero-Knowledge Transport Layer Security (zkTLS) generates a cryptographic proof during a TLS session attesting that a specific fact appeared on a site, without revealing the page or the underlying data. In practice, you verify a fact without ever storing the document that contains it. No password sharing is required, and it isn't screen scraping: the evidence is derived from the session itself. Implementations use a witness or proxy model that attests to access to the domain and its content, producing a verifiable proof. Verification becomes a yes-or-no proof anchored to a real TLS session rather than a document upload, which is exactly the proof-over-storage goal.
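
As a mental model, the sketch below shows roughly what consuming such an attestation could look like. The ZkTlsAttestation shape, the witness_attest and verify helpers, and the HMAC tag standing in for the witness's signature over the session are all illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch: a witness attests to one fact from a TLS session, and a
# relying party verifies the attestation without ever seeing the page.
# The HMAC is a stand-in for the real witness signature scheme.
import hmac
import hashlib
from dataclasses import dataclass

@dataclass
class ZkTlsAttestation:
    domain: str         # site the session was established with
    claim: str          # the single fact being attested, e.g. "loyalty_tier=gold"
    witness_tag: bytes  # witness's MAC over (domain, claim)

def witness_attest(witness_key: bytes, domain: str, claim: str) -> ZkTlsAttestation:
    """Witness observes the TLS session and attests to one claim."""
    msg = f"{domain}|{claim}".encode()
    return ZkTlsAttestation(domain, claim, hmac.new(witness_key, msg, hashlib.sha256).digest())

def verify(witness_key: bytes, att: ZkTlsAttestation, expected_domain: str) -> bool:
    """Relying party checks the attestation, not the underlying document."""
    if att.domain != expected_domain:
        return False
    msg = f"{att.domain}|{att.claim}".encode()
    return hmac.compare_digest(att.witness_tag, hmac.new(witness_key, msg, hashlib.sha256).digest())

key = b"shared-demo-key"
att = witness_attest(key, "bank.example.com", "balance_at_least_500=true")
assert verify(key, att, "bank.example.com")
```

Note what the verifier stores at the end: a yes-or-no answer bound to a domain, not a statement or screenshot.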

The Difference It Makes

Proof over storage reduces risk and speeds decisions without expanding the data you hold. zkTLS provides assurance of verification while minimizing the attack surface: only the minimum proof required is requested, which means fewer honeypots, simpler reviews, and a faster user experience. It also keeps incentive programs and communities human, since proofs are bound to a durable identity while personal information stays private. Instead of storing identities, you verify them and move on.

zkTLS in Practice

Humanity Protocol uses zkTLS to turn Web2 facts into reusable proofs that apps can verify without seeing the underlying page. Users generate proof of a specific claim when visiting a trusted site and attach the claim to their “Human ID,” which apps can then verify while the underlying page and any irrelevant data stay private. The approach has been scaled across employment, education, and travel loyalty, replacing document collection with a minimal proof that answers only the question at hand.

Business Cases

If your business policy has a specific threshold, request proof of the threshold. For instance, verify “balance meets X” instead of gathering full statements, reducing backlogs, lowering data-retention risk, and fast-tracking onboarding through selective disclosure. Another use case is loyalty status: users confirm their tier without sharing their data, unlocking seamless sign-in without yesterday's manual processes. Sybil-resistant growth loops make this resilient by verifying human reputation rather than personally identifiable information (PII). zkTLS can also verify employment in minutes through proofs from official portals, avoiding the ping-pong of emails and documents and keeping the focus on candidates rather than paperwork. Each claim replaces a document, cutting storage risk and speeding decisions.
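
As a toy illustration of selective disclosure for a threshold check, the snippet below shows what the relying party would and would not see. The salted hash commitment only hints at the binding a real zero-knowledge range proof would provide; the field names and threshold are hypothetical.

```python
# Toy illustration: request a predicate, not a document. In production the
# "result" would be backed by a zero-knowledge range proof bound to a zkTLS
# session; here the commitment just marks where that proof would attach.
import hashlib
import secrets

def commit(balance_cents: int, salt: bytes) -> str:
    """Salted hash commitment to the hidden value."""
    return hashlib.sha256(salt + str(balance_cents).encode()).hexdigest()

# Prover side: holds the raw balance, discloses only the predicate result.
balance = 123_456                    # never leaves the user's device
salt = secrets.token_bytes(16)
proof = {
    "commitment": commit(balance, salt),     # binds the claim to a hidden value
    "predicate": "balance_cents >= 50000",
    "result": balance >= 50_000,             # the only fact the business learns
}

# Relying party side: records a yes/no answer, not a bank statement.
decision = "approve" if proof["result"] else "decline"
print(decision, proof["predicate"])
```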

What Implementation Would Look Like

Businesses should start by identifying a single claim that can build trust or cut funnel cost, with definable success metrics such as conversion lift or reduced review time. With the claim identified, be selective about required disclosures and treat excess data as a liability. Build on a familiar user experience on sites users already visit, with consent prompts to generate proofs in their browsers that show what will be checked and what will never be seen. Maintain fallback methods, such as manual review, for those who can't generate proofs. Verified attributes can then be reused across product lines and, as businesses scale, across partner ecosystems, creating portability that compounds trust and minimizing repeated friction for customers.
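
A minimal sketch of such a flow, assuming a hypothetical request_proof step that wraps the browser-side proof generation, might look like this; the claim string and outcomes are placeholders, not a real SDK.

```python
# Sketch of an onboarding decision with an explicit fallback path.
# request_proof is a placeholder; swap in your zkTLS provider's SDK call.
from enum import Enum

class Outcome(Enum):
    VERIFIED = "verified"
    MANUAL_REVIEW = "manual_review"

REQUIRED_CLAIM = "employer_verified=true"   # one narrowly scoped claim

def request_proof(user_id: str) -> str | None:
    """Placeholder for browser-side proof generation and transport."""
    ...

def onboard(user_id: str) -> Outcome:
    proof = request_proof(user_id)
    if proof == REQUIRED_CLAIM:
        return Outcome.VERIFIED        # store the outcome, not the documents
    return Outcome.MANUAL_REVIEW       # fallback for users who can't generate proofs
```

The point of the structure is the last line: every proof path needs a human-reviewable alternative so no eligible customer is locked out.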

Risk, Compliance, and Governance

The less data companies hold, the smaller the blast radius when incidents occur. The industry advises minimal data collection and storage; hiding or complicating deletion and opt-out choices is under active scrutiny in 2025, and California's DROP system will centralize broker deletions. Anchor your program to transparent consent and easy revocation. The evolution of online identity won't be measured in the size of databases, but in proofs. zkTLS translates Web2 trust signals into portable, privacy-first credentials that customers control and systems verify. Start with one attribute, measure its impact, and explore how it can scale.

Personhood Credentials: Distinguishing Real People Online

Malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With increasingly capable AI, these actors can amplify their operations, intensifying the challenge of balancing anonymity and trustworthiness online. A new tool to address this challenge is “personhood credentials” (PHCs): digital credentials that empower users to demonstrate they are real people, not AIs, to online services without disclosing personal information. These credentials can be issued by trusted institutions, and a PHC system could be local or global and need not be biometrics-based.

Two trends in AI contribute to the urgency of this challenge: AI's increasing indistinguishability from people online (lifelike content and avatars, agentic activity) and AI's increasing scalability (cost-effectiveness, accessibility). Drawing on research into anonymous credentials and "proof-of-personhood" systems, personhood credentials give people a way to signal their trustworthiness on online platforms and offer service providers new tools for reducing misuse by bad actors. Existing countermeasures, like CAPTCHAs, are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use cases.

AI's Increasing Indistinguishability and Scalability

Distinguishing AI-powered users is becoming increasingly difficult as AI advances in its ability to:

  • Generate human-like content that expresses human-like experiences or points of view.
  • Create human-like avatars through photos, videos, and audio.
  • Take human-like actions across the Internet, such as browsing websites, making sophisticated plans, and solving CAPTCHAs.

AI-powered deception is increasingly scalable because of:

  • Decreasing costs at all capability levels.
  • Increasing accessibility, for example, via open-weights deployments, where scaled misuse is harder to prevent.

These trends suggest that AI may make deceptive activity more convincing and easier to carry out.

Personhood Credentials (PHCs) as a Solution

Personhood credentials (PHCs) empower their holder to demonstrate to providers of digital services that they are a person without revealing anything further. Building on concepts like proof-of-personhood and anonymous credentials, these credentials can be stored digitally and verified through zero-knowledge proofs, without revealing the individual’s specific credential or identity aspects. PHCs offer a promising solution to pervasive deception.

To counter scalable deception while maintaining user privacy, PHC systems must meet two foundational requirements:

  1. Credential limits: The issuer of a PHC gives at most one credential to an eligible person.
  2. Unlinkable pseudonymity: PHCs let a user interact with services anonymously through a service-specific pseudonym; the user’s digital activity is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude.

These properties let service providers offer services on a per-person basis and prevent the return of users who violate the service's rules.
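
To illustrate the unlinkability property only, the toy sketch below derives a per-service pseudonym with a keyed hash. A real PHC scheme would perform this derivation inside a zero-knowledge proof so that issuers and services cannot link accounts even when colluding; the function and identifiers here are illustrative.

```python
# Toy sketch of service-specific pseudonyms: the same secret yields a stable
# handle within one service, and handles across services look unrelated to
# anyone who does not hold the secret.
import hashlib

def pseudonym(user_secret: bytes, service_id: str) -> str:
    """Deterministic per-service handle derived from the holder's secret."""
    return hashlib.sha256(user_secret + service_id.encode()).hexdigest()[:16]

secret = b"held-only-by-the-credential-holder"
print(pseudonym(secret, "forum.example"))   # stable within one service
print(pseudonym(secret, "market.example"))  # unlinkable to the first without the secret
```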

PHC System Design and Issuers

There are many effective ways to design a PHC system, and various organizations, governmental or otherwise, can serve as issuers. In one possible implementation, states could offer a PHC to any holder of their state's tax identification number. Multiple trusted issuers within a single ecosystem promote choice: people can select systems built on their preferred root of trust (government IDs, social graphs, biometrics) with affordances that best align with their preferences. This reduces the risks of a single centralized issuer while preserving the ecosystem's integrity by limiting the total number of credentials. The aim here is not to advocate for or against any specific design, but to establish the value of PHCs in general while highlighting challenges that any design must take into account.

Benefits of PHCs

PHCs are not forgeable by AI systems, and it is difficult for malicious actors to obtain many of them. By combining verification techniques that have an offline component (e.g., appearing in-person, validating a physical document) and secure cryptography, these credentials are issued only to people and cannot be convincingly faked thereafter. They help counter the problem of indistinguishability by creating a credential only people can acquire, and help counter the problem of scalability by enabling per-credential rate limits on activities.
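
As one way to picture the rate-limit benefit, here is a small sketch of a per-credential limiter keyed by a PHC pseudonym; the window length and action budget are arbitrary illustrative values.

```python
# Sketch of a per-credential rate limit: one PHC pseudonym gets a bounded
# number of actions per window, which caps what any single actor can automate
# regardless of how many accounts or bots they operate behind it.
import time
from collections import defaultdict

WINDOW_SECONDS = 3600
MAX_ACTIONS = 20

_actions: dict[str, list[float]] = defaultdict(list)

def allow(pseudonym: str) -> bool:
    """Admit the action only if this credential has budget left in the window."""
    now = time.monotonic()
    recent = [t for t in _actions[pseudonym] if now - t < WINDOW_SECONDS]
    _actions[pseudonym] = recent
    if len(recent) >= MAX_ACTIONS:
        return False                  # credential has spent its budget
    _actions[pseudonym].append(now)
    return True
```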

PHCs give digital services a tool to reduce the efficacy and prevalence of deception, especially in the form of:

  1. Sockpuppets: deceptive actors purporting to be “people” that do not actually exist.
  2. Bot attacks: networks of bots controlled by malicious actors to carry out automated abuse (e.g., breaking site rules and evading suspension by creating new accounts).
  3. Misleading agents: AI agents misrepresenting whose goals they serve.

PHCs offer people a tool to credibly signal that they are real people operating authentic accounts, without conveying their identity. They also help service providers spot deceptive accounts, which may lack such a signal.

Limitations of Existing Countermeasures

PHCs improve on and complement existing approaches to countering AI-powered deception online. For example, the following approaches are often not robust to highly capable AI, insufficiently inclusive, and/or not privacy-preserving:

  1. Behavioral filters, e.g., CAPTCHAs, JavaScript browser challenges, anomaly detection.
  2. Economic barriers, e.g., paid subscriptions, credit card verification.
  3. AI content detection, e.g., watermarking, fingerprinting, metadata provenance.
  4. Appearance- and document-based verification, e.g., selfie checks with ID, live video calls.
  5. Digital and hardware identifiers, e.g., phone numbers, email addresses, hardware security keys.

Managing Impacts of PHCs

To achieve their benefits, PHC systems must be designed and implemented with care. We discuss four areas in which PHCs’ impacts must be carefully managed:

  1. Equitable access to digital services that use PHCs.
  2. Free expression supported by confidence in the privacy of PHCs.
  3. Checks on power of service providers and PHC issuers.
  4. Robustness to attack and error by different actors in the PHC ecosystem.

Next Steps

In close collaboration with the public, governments, technologists, and standards bodies are encouraged to invest in the development, piloting, and adoption of personhood credentials as a key tool in addressing scalable deception online:

  1. Invest in development and piloting of personhood credentialing systems, e.g., explore building PHCs incrementally atop existing credentials such as digital driver’s licenses.
  2. Encourage adoption of personhood credentials, e.g., determine services for which PHCs ought to be substitutable for ID verification.

It is also important that these groups accelerate their preparations for AI’s impact more generally by adapting existing digital systems:

  1. Reexamine standards for remote identity verification and authentication, e.g., reconsider confidence in selfie-based identity verification, absent supplemental factors to reduce AI-enabled spoofing.
  2. Study the impact and prevalence of deceptive accounts across major communications platforms, e.g., develop standardized methods for measuring the prevalence of fake accounts on social media.
  3. Establish norms and standards to govern agentic AI users of the Internet, e.g., explore new forms of trust infrastructure for AI agents, akin to HTTPS for websites.


Lalit Choda

Founder & CEO @ Non-Human Identity Mgmt Group


NHI Evangelist: with 25+ years of experience, Lalit Choda is a pioneering figure in Non-Human Identity (NHI) Risk Management and the Founder & CEO of NHI Mgmt Group. His expertise in identity security, risk mitigation, and strategic consulting has helped global financial institutions build resilient and scalable systems.
