Written by Rachel Murphy
Take Action
Take a moment today to connect with someone—whether it’s commenting on a meaningful post, reaching out to a friend, or supporting a creator who inspires you. Building community starts with small, intentional actions.
The Story
TikTok. DeepSeek. Huawei.
What do they have in common?
Each of these companies has faced scrutiny over data security, foreign influence, and national security risks. Some, like TikTok, have been threatened with bans. Others, like Huawei, have been cut off from key U.S. markets. Now, a new name is in the spotlight: DeepSeek, a Chinese AI company whose model rivals OpenAI’s GPT-4.
But DeepSeek isn’t just another advanced AI system. Since its launch, many experts have pointed out security concerns. Now, a massive data leak that exposed over one million users has further fueled global debates over security, privacy, and how governments should regulate foreign tech.
But the real issue isn’t DeepSeek.
AI companies—regardless of origin—collect and use user data with little accountability, while governments fail to protect consumers.
Right now, the burden is on users to figure out:
✅ Which AI tools are safe
✅ What’s buried in vague privacy policies
✅ How to avoid being exploited
The problem? AI shouldn’t be built in a way that puts users at risk in the first place.
🔍 What Happened? The DeepSeek Data Leak Explained
DeepSeek suffered a major security breach due to a simple but critical mistake—its cloud storage was left exposed with no password or access controls. This meant anyone could access sensitive data, including user conversations, login credentials, and internal system details.
Timeline of the Breach
📅 Jan. 29 – Wiz Research discovered the exposed database and notified DeepSeek.
🔒 Same day – DeepSeek secured the database within an hour.
⚠️ But the damage was already done—it’s unclear who else accessed the data or how it was used.
This wasn’t a sophisticated hack, just a careless mistake. And it’s not just DeepSeek. Preventable breaches like this keep happening across the AI industry, with little to no consequences for the companies responsible.
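To make “no password or access controls” concrete, here is a minimal sketch in Python of the kind of check a security researcher might run. The endpoint URL and query are hypothetical and purely for illustration; this is not DeepSeek’s actual infrastructure, nor Wiz Research’s tooling. The point is how little it takes to discover a database that answers queries without credentials.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint, for illustration only: a database-style HTTP
# interface that should require authentication before running queries.
ENDPOINT = "https://db.example.com:8123/"
QUERY = {"query": "SHOW TABLES"}

def check_unauthenticated_access(url: str) -> bool:
    """Return True if the endpoint answers a query with no credentials attached."""
    try:
        resp = requests.get(url, params=QUERY, timeout=5)
    except requests.RequestException as exc:
        print(f"Could not reach {url}: {exc}")
        return False

    if resp.ok:
        # A properly configured database would reply 401/403 here.
        print("WARNING: endpoint answered an unauthenticated query")
        print(resp.text[:200])  # preview of whatever the server exposed
        return True

    print(f"Endpoint rejected the request (HTTP {resp.status_code})")
    return False

if __name__ == "__main__":
    check_unauthenticated_access(ENDPOINT)
```

If a check this simple can confirm the problem, requiring authentication before launch is not an unreasonable bar to expect from any AI company.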
🚨 The Real Risk: AI Uses Your Data Against You
Most people think of a data breach as stolen passwords or financial information, but with AI, the risks go much further.
AI models don’t just store data—they learn from it. When a company like DeepSeek loses control of user data, the impact isn’t just a one-time event. That information can be repurposed, exploited, or weaponized in ways users never intended.
How AI Data Can Be Used Against You
❌ Voice Cloning & Deepfake Scams – AI-generated voices have been used to impersonate family members, tricking people into thinking a loved one is in danger.
❌ Identity Theft at Scale – AI-powered fraud can create convincing fake identities, making scams harder to detect.
❌ Psychological Manipulation – Companies can mine interactions to profile users and influence behaviors, purchases, and even emotions.
These aren’t hypothetical risks; they’re already happening. The problem isn’t just poor security. AI is being built in a way that puts users at risk, and banning individual apps won’t fix that.
The “Whack-a-Mole” Strategy: A Global Problem
Instead of setting clear, universal rules that all AI companies must follow, governments around the world have been using what I call a “whack-a-mole strategy.”
🔹 A security issue pops up in one app? Ban it.
🔹 A different AI tool leaks data? Ban that one too.
🔹 Meanwhile, nothing changes for the next company that makes the same mistakes.
This approach doesn’t fix the root cause—it just forces everyone to figure things out on their own.
It’s not just consumers struggling to navigate AI risks. State governments, federal agencies, and even the military are all being left to decide which tools to ban, restrict, or allow—without consistent standards to guide them.
DeepSeek’s security risks have triggered bans and restrictions worldwide, but the responses have been inconsistent, reinforcing the whack-a-mole approach on a global scale.
🌍 Australia banned DeepSeek over security risks, South Korea temporarily blocked access due to cybersecurity threats, and Taiwan prohibited government employees from using it—each making separate decisions instead of working from a unified global standard.
In the U.S., the same pattern of patchwork responses is happening:
🚫 The U.S. Navy – Restricted DeepSeek from government devices because there’s no clear federal AI security policy.
🚫 Texas – Governor Greg Abbott banned DeepSeek from state government devices instead of waiting for national action.
These bans are not solutions—they’re reactions. They prove that governments and institutions recognize the problem but lack the shared standards they need to handle it effectively.
Without real protections, AI will continue to be built in ways that put users, institutions, and governments at risk—forcing them to navigate dangers they should never have to deal with in the first place.
What Needs to Change? The Case for an AI Bill of Rights
The whack-a-mole approach isn’t working. Governments keep banning apps after something goes wrong, but there’s no clear standard to prevent these issues in the first place.
🔹 AI companies should not get to decide the rules for themselves.
🔹 Users should not have to figure out how to protect themselves alone.
🔹 Governments should not be scrambling to react every time AI poses a new risk.
We need an AI Bill of Rights that ensures real protections for users, institutions, and governments.
💡 A clear set of rules that all AI companies must follow:
✅ AI must be designed to protect users from harm—not expose them to security risks.
✅ Your data cannot be collected, sold, or shared without clear consent.
✅ AI companies must be transparent about how they use your data and who has access to it.
✅ If a company misuses data, there must be real consequences.
Right now, AI is advancing faster than the rules meant to keep it in check. A patchwork of bans and restrictions isn’t enough.
We need consistent, enforceable standards—not just for DeepSeek, but for all AI companies, no matter where they’re from.
Final Thoughts
If you’re considering using DeepSeek AI, I don’t recommend it—at least not yet. Even if you want to try it, I strongly suggest waiting. The security risks are real, and with new regulations on the horizon, access to DeepSeek may soon be restricted or even penalized.
But this debate isn’t just about one company. The bigger issue remains:
💡 Why do users have to navigate all this alone?
💡 Why isn’t security built into AI from the start?
💡 Why don’t we have protections that prevent this from happening in the first place?
This isn’t just about DeepSeek, TikTok, or the next AI tool that raises red flags. It’s about creating protections that work for everyone, everywhere.
🔹 What do you think? Should AI companies be held accountable? Let’s discuss.

