There's rarely a quiet week in data protection — and this one was no exception. Below are three developments from the past seven days that caught my eye.

Story #1: The rise in criminals' use of AI

In late January, the National Cyber Security Centre — the UK's technical authority for cyber threats — released a report (here) warning that criminals' use of artificial intelligence will increase over the next two years. (If you've suffered a serious data breach in the UK you'll likely be familiar with the NCSC, which can provide useful guidance and support on dealing with the fallout of an incident.)

The NCSC's conclusion on AI is accurate but not exactly a contrarian bet. We've seen bad actors increasingly rely on AI over the last 18 months, and it would be wishful thinking to conclude anything other than that criminals' reliance on AI will grow — rapidly — as these technologies become more sophisticated (and more available).

In many ways we're already there. The sort of blunt force, analogue scams that have been used for the past 30 years (variants of the so-called Nigerian Prince emails being the most common — and successful) are being replaced by a playbook of realistic techniques that can be extremely difficult to identify. These include:

  • Deep fakes and disinformation (e.g., voice replication and generative AI content).
  • Phishing and social engineering attacks (e.g., highly plausible emails and texts).
  • Malware (e.g., rapid creation of software variants).
  • Automation (e.g., life-like interactions with banks and other customer-facing organisations).

Most interestingly, the NCSC's report highlights what I've said before is a natural entry point for organisations thinking about how to use artificial intelligence: security solutions that rely, wholly or in part, on AI. What this looks like in practice is something akin to a battle of the machines, a la Terminator 3 (but, hopefully, without the threat of Armageddon). But many companies — whether or not they realise it — are already using IT- and cyber-enabled products and services that involve AI, and that's going to increase in tandem with the bad guys' reliance on AI-enabled attacks.

But, as always, humans are the great levellers. The increased sophistication of attacks means that your staff need to know what they need to know — now.

And what they need to know is to be on alert for (1) how criminals are using AI, and (2) what that looks like in practice. Employees may not be thrilled to attend another round of IT security training — but the ability to pinpoint an attack to a specific individual (e.g., who opened a phishing email) should help to focus their minds.

Story #2: When BCC emails go wrong

Who among us hasn't received an email where the recipients were supposed to be blind copied — but weren't? I certainly have.

Earlier this week, the UK Information Commissioner's Office posted a reminder on how BCC emails can go wrong — and how to avoid that happening. It has also previously issued guidance on the topic, which is worth checking out (here). The ICO gets a fair amount of stick for its approach to enforcement, and not unfairly so, but its guidance is usually easy to understand, actionable and well worth a read.

Using BCC incorrectly is among the breaches most commonly reported to the ICO (and presumably to its European counterparts). What I find interesting about this aspect of data protection is that, although it may seem banal, it's widespread and can lead to extremely serious consequences for the affected individuals.

One only needs to look at ICO enforcement actions (albeit against public sector bodies) to see the real-world consequences that these errors can have. Recent cases involving inappropriate use of CC/BCC have included (1) individuals seeking relocation to the UK after the Taliban took control of Afghanistan, and (2) victims of child sexual abuse.

The maddening — but encouraging — thing is that this can be one of the easier aspects of compliance to get under control. I won't repeat the ICO's tips, other than to flag three things that I've seen work well in practice. These may not solve the issue entirely, as there's no accounting for humans, but they should help.

  • Install a prompt on your email client that asks anyone sending an email with the CC/BCC fields populated to confirm that the message is appropriate to send in that way, and that recipients are listed in the right field (i.e., BCC rather than CC, or vice versa). A minimal sketch of the sort of check involved follows this list.
  • Remind employees about your policies on email best practices. Do they know when bulk emails, particularly using BCC, are appropriate? Some high-risk data categories are (hopefully) obvious — sensitive and confidential data, for example. But think about the wider context, too.
  • Most people like to think that they can be trusted, but there's no harm in periodically putting that trust to the test by way of internal phishing emails.
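For what it's worth, the logic behind such a prompt is simple enough to sketch. The short Python example below is purely illustrative (not a description of any particular email client or add-in): it parses a saved draft, counts the visible recipients and prints a warning where a bulk message looks as though it should have used BCC. The "draft.eml" filename and the five-recipient threshold are assumptions made for the example.

    # Illustrative sketch only: the kind of pre-send check an email client
    # prompt or add-in might perform. The filename, threshold and wording
    # are assumptions for this example, not any real product's behaviour.
    from email import policy
    from email.parser import BytesParser
    from email.utils import getaddresses

    VISIBLE_RECIPIENT_LIMIT = 5  # assumed threshold for a "bulk" message

    def check_draft(path: str) -> list[str]:
        """Return warnings for a draft .eml file before it is sent."""
        with open(path, "rb") as fh:
            msg = BytesParser(policy=policy.default).parse(fh)

        visible = getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))
        hidden = getaddresses(msg.get_all("Bcc", []))

        warnings = []
        if len(visible) > VISIBLE_RECIPIENT_LIMIT:
            warnings.append(
                f"{len(visible)} addresses are visible in To/CC; should they be in BCC?"
            )
        if hidden and visible:
            warnings.append("Message mixes BCC with visible recipients; is that intended?")
        return warnings

    if __name__ == "__main__":
        for warning in check_draft("draft.eml"):  # hypothetical saved draft
            print("WARNING:", warning)

In practice, most organisations would get this from their email platform's built-in warnings or data loss prevention features rather than a script, but the underlying check is much the same.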

Story #3: Is surveillance in toilets ever appropriate?

"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." Jeff Goldblum's maxim in Jurassic Park came to mind earlier this week when I read a story about schools in the UK installing sensors in the toilets that listen to pupils in an attempt to reduce fighting, vaping and bullying.

You will almost certainly have an immediate reaction to that story. Either you think it's (1) invasive but ultimately justified, given the tragic consequences that bullying can have, or (2) an unjustifiable intrusion into children's privacy. You may even see both sides (I do).

The story also reminded me that one of the things that attracts many people to working in data protection is that it requires you to adopt a mindset in which law, ethics and common sense all play important roles. Sometimes it's clear that the solution predominantly involves a legal analysis, but there are often cases where you're required to channel a combination of Ruth Bader Ginsburg, Plato and your most sensible friend.

For example, it may arguably be lawful to install cameras in the toilets of a heavy machinery plant where drug taking is rife and individuals have, as a result, been seriously injured. The same may also be true, in certain limited contexts, in schools. But is it the right thing to do? That's a really difficult question — and people will come to different conclusions.

What makes these situations doubly hard is that there's usually no off-the-shelf solution — meaning that you have to go back to first principles (of law, risk and ethics). There's much more to consider than I have the space to set out here, but as a starter for three:

  • Is what you're proposing to do the option of last resort? In other words, would you be able to achieve your objective in a way that would involve less intrusive processing and/or reduce the risks for data subjects?
  • Is what you're planning to do reflective of what you want to represent as an organisation? This isn't me making a value judgment, but it's important to consider because people will form an opinion of the organisation based on how it processes personal data. And most people won't look at this through a rational, legal lens; their reaction will be based on gut feel and whether it is the "right" thing to do.
  • If you're going to proceed, have you addressed your legal obligations in a way that leaves no room for doubt? What risk assessments have you conducted — and how are you addressing (or mitigating) those risks? What (and how) have you told data subjects about how you'll process their data? Who will have access to the data? And how long (and how) will you store personal data?
