People Are Security’s Biggest Asset

Every photo we upload. Every item we buy. Every message we send.
Every movement we make.

All of it is being captured, categorized, correlated, and fed into increasingly powerful AI systems.

These systems don’t just predict our future anymore. They increasingly shape it, limit it, and decide it.

This is not a distant scenario. It is not science fiction. And it is not confined to one part of the world. It is already happening.

And most people never notice it. There is no single moment of rupture. No clear line being crossed. Just gradual normalization. One system at a time. One convenience at a time. One trade-off justified as temporary. Until surveillance is no longer a tool, but the environment we operate in.

When Surveillance Becomes Infrastructure 

In China, AI-powered “city brain” platforms are deployed across major metropolitan areas. These systems integrate data from cameras, sensors, mobile devices, payment platforms, and public services to create real-time behavioral profiles of citizens. 

They track: 

  • Where people live 
  • Who they interact with 
  • How often they travel 
  • How they behave in public spaces 
  • How they dispose of their garbage 
  • Whether they comply with rules (or don’t)

Break a rule? 

A camera catches you. From multiple angles. Your score drops automatically. Not a credit score, but a citizen score. And when that score collapses, so does your access to society: transportation, employment, housing, education, financial services.

An algorithm decides what you are allowed to do next. 

This Is Not a Warning. It’s a Reality. 

Edward Snowden put it bluntly: 

“If any of your activities differ from what the government wants, you won’t get on a train. You won’t board a plane. You won’t get a job. An algorithm will decide your fate. And what they are selling… is us.”

He was even clearer later: 

“This isn’t science fiction. It isn’t a distant warning. It’s happening today in Shanghai, Beijing, and dozens of other cities under full-spectrum digital governance.” 

And the need for governance isn’t just about surveillance; it’s about trust and accountability in AI. As Clem Delangue, co-founder of Hugging Face, puts it:

“I think trust comes from transparency and control. You want to see the datasets that these models have been trained on. … It’s really hard to trust something that you don’t understand.”

Why This Matters Outside China 

The uncomfortable truth is this: elements of this model are quietly moving West.

Not as a single system, or under a single label. But as fragments that, when combined, start to resemble the same architecture.

We already see:

  • Digital ID programs
  • AI-assisted policing and surveillance
  • Automated “misinformation” detection and scoring
  • Predictive risk models tied to travel, banking, insurance, employment
  • Behavioral analytics embedded in public and private services

The Core Risk: Technology Outpacing Governance 

AI is not neutral. It reflects the data it is trained on, the incentives behind it and the assumptions of those who design and deploy it.

The problem we face today is not that AI exists. It’s that the technology is advancing faster than the guardrails: faster than regulation, ethical frameworks, transparency requirements, accountability mechanisms and public understanding.

Once AI-driven decision-making becomes embedded into infrastructure, reversing it becomes extremely difficult. At that point, “opting out” is no longer an option.

Security Is No Longer Just a Technical Problem 

Traditionally, security has been framed as firewalls, encryption, access control, threat detection and incident response. All of that still matters. But it’s no longer enough, because the biggest vulnerability is no longer just the system. It’s the people.

Not because people are careless, but because:

  • Their data is harvested at scale
  • Their behavior is constantly monitored
  • Their digital identity is fragmented across platforms
  • Their decisions are increasingly influenced or constrained by algorithms

In this context, security becomes a human rights and societal issue.

From Cybersecurity to Digital Responsibility 

Organizations, especially those building, integrating, or managing digital systems, have a choice. 

They can: 

  • blindly deploy technology because it’s powerful 
  • optimize only for efficiency and control

Or they can: 

  • demand transparency
  • design for accountability
  • put humans back in the loop
  • treat privacy and security as foundational, not optional

This applies to: 

  • Governments
  • Enterprises
  • Technology providers
  • System integrators

Including companies like Jolera.

What “People-Centered Security” Really Means 

Putting people at the center of security is not a slogan. It means:

  • knowing where data comes from
  • understanding how it is used
  • being clear about who makes decisions: humans or algorithms
  • ensuring systems can be audited, challenged, and corrected
  • designing architectures that respect privacy by default
  • recognizing that not everything that can be automated should be 

It also means educating organizations and users alike, not just deploying tools. 

The Question We Can No Longer Avoid 

The real question is no longer whether AI will shape society. It already does.

The question is: do we still get to shape AI?

Or will we quietly accept systems that score us, rank us, restrict us and define us without visibility, accountability or recourse?

A Call for Transparency, Accountability, and Human-Centered Design 

This is the moment to ask harder questions.

To demand: 

→ transparency in AI-driven systems 

→ accountability when algorithms cause harm 

→ security architectures that protect people, not just data 

→ digital governance rooted in democratic values and human rights 

Because once these systems are fully normalized, it may be too late to ask for consent.

What do you think? 

Do you believe this is the moment to demand transparency, accountability, and human rights at the core of every digital governance system?

At Jolera, we explore these challenges through our Data & AI approach, where security, governance, and responsible use of technology are foundational, not afterthoughts.

Discover how we help organizations design, manage, and scale data and AI systems with people at the center. 
