The Elastic Definition of “Consent”: How Law Keeps Adapting to a Connected World

“Consent” sounds simple: say yes or no. In law, however, that tiny word carries enormous weight. From data privacy to medical treatment to AI, the question of what counts as genuine consent has become one of the hardest and most important issues of the digital age. This article explores how legal systems around the world are redefining what it means to agree.


Few legal ideas are as foundational as consent. It separates a contract from coercion, a medical procedure from assault, and lawful data processing from surveillance. The entire architecture of modern rights depends on it.

Yet the more connected life becomes, the less clear consent seems. Every time someone downloads an app, visits a website, or steps into a hospital, they are asked to agree to something they may not fully understand. Lawmakers are now asking whether consent given by clicking a box or signing a form can still be considered meaningful.

Consent in data and privacy law

The digital world has turned consent into a ritual rather than a choice. Under the EU General Data Protection Regulation (GDPR), consent must be “freely given, specific, informed and unambiguous.” In theory, that empowers users. In practice, it has produced endless pop-up banners that few people read.

Courts and regulators are pushing back. The Court of Justice of the European Union (CJEU) has ruled that pre-ticked boxes or vague wording do not amount to valid consent. Companies must explain clearly what data they collect and how they use it.

The UK Information Commissioner’s Office and the Irish Data Protection Commission have fined major tech platforms for manipulating design to nudge users toward acceptance. “Consent fatigue” has become a recognised problem: when everything requires a yes, the word loses meaning.

Beyond privacy: medical and contractual consent

The issue goes far beyond digital rights. In medicine, informed consent is a cornerstone of ethical practice. Patients must understand the nature and risks of treatment before agreeing. But what counts as understanding?

Courts have shifted from a paternalistic model to a patient-centred one. In Montgomery v Lanarkshire Health Board (2015), the UK Supreme Court held that doctors must disclose any risk that a reasonable patient in that position would consider significant. The decision redefined consent as a process of dialogue, not a formality.

Contract law tells a similar story. Classical theory assumes that signing a document signifies consent, even if one party never read it. But as contracts have become longer and more complex — especially online — that assumption is under strain. Some US courts have begun questioning the enforceability of “browse-wrap” agreements that users technically accepted by visiting a website. The idea of meaningful consent is quietly reshaping even commercial relationships.

The illusion of choice

The biggest challenge for consent today is power imbalance. When the weaker party has little real alternative, can their agreement ever be free? Clicking “I agree” may be unavoidable if refusing means losing access to essential services, employment, or healthcare.

Legal scholars call this “structural coercion.” The individual seems to consent, but the surrounding circumstances make refusal unrealistic. Legislators are starting to recognise this problem. The EU’s forthcoming AI Act and Data Act both place greater emphasis on fairness and transparency, limiting when consent can legitimise data use. In other words, regulators are saying: some things are too important to leave to fine print.

Consent and technology’s grey zones

Artificial intelligence complicates consent further. Algorithms can infer sensitive traits — health, religion, political views — from seemingly harmless data. If you never provided that data directly, did you consent to its use?

Facial recognition, biometric monitoring, and predictive analytics all operate in spaces where traditional consent is nearly impossible. People may not even know they are being observed. That is why many privacy authorities, including the UK’s and Canada’s, argue that such systems should require explicit legal authorisation rather than individual consent. The responsibility shifts from the user to the provider.

At the same time, emerging technologies like generative AI raise a new question: can machines consent? When an AI model “agrees” to terms of service or automatically transfers data, the fiction of agency collides with the legal reality that only humans can give valid consent. Courts will soon have to decide how far automation can go before accountability breaks down.

Cultural and regional differences

Different legal cultures treat consent differently. Common-law systems tend to prioritise individual choice and contractual autonomy. Civil-law jurisdictions place greater weight on fairness and public policy. In parts of Asia and Africa, collective or community consent can override individual preference in matters such as medical research or land use.

International law reflects this diversity. The OECD AI Principles and the UNESCO Recommendation on the Ethics of Artificial Intelligence both stress informed consent but acknowledge that what counts as “informed” depends on context. The challenge is to harmonise standards without erasing cultural nuance.

From formality to relationship

The most promising shift is conceptual. Lawyers and ethicists are starting to treat consent not as a single act, but as an ongoing relationship built on trust. Rather than seeking one-off permission, organisations are expected to maintain transparency and give users continuing control.

That means clearer interfaces, genuine opt-outs, and mechanisms for withdrawal. It also means explaining consequences honestly. Consent should be an invitation to participate, not a shield for liability. When people understand and trust the process, they are more likely to say yes, and to mean it.

The limits of consent

Despite its moral appeal, consent cannot do all the work. Some harms are unacceptable even if people agree to them. That is why many legal systems restrict contracts that waive fundamental rights or permit exploitation. Regulators are learning that informed consent is not a substitute for responsible design.

In the coming years, expect more rules that shift duties away from the individual toward institutions. Transparency and accountability will join consent as co-equal pillars of digital ethics.

Consent remains one of the law’s most elegant ideas, but also one of its most fragile. It works only when the person giving it understands, chooses freely, and has power to refuse. In the digital and automated world, those conditions are harder than ever to guarantee.

The law is adapting, slowly turning consent from a checkbox into a conversation. Its future lies not in longer forms or louder pop-ups, but in genuine respect for autonomy — a reminder that agreement means little unless people truly have a choice.

The Legal Integrity Project Editorial Team

We are a group of lawyers interested in how legal definitions shift over time. We aim to explain those definitions in clear, concise language for readers of all backgrounds.
