
Why Using Microsoft Copilot Could Amplify Existing Data Quality and Privacy Issues

Microsoft provides an easy and logical first step into GenAI for many organizations, but beware of the pitfalls.


According to analyst firm Gartner, some 55% of organizations have implemented or are piloting Generative AI. For many of them, Copilot for Microsoft 365 is an obvious starting point, since it is an easy add-on to services millions of organizations already use, such as Microsoft 365 and Office 365. Beyond the ease of purchase, implementation is also simplified because Copilot can draw on the wealth of organizational data already held in those Microsoft services.

Working with a global technology and security giant also brings an element of trust: there are some 10,000+ apps competing for a share of the AI space, most of which have unclear or unproven data policies. Unlike most of these, Microsoft touts Copilot's 'commercial data protection' and states that 'Copilot doesn't save prompts or answers, nor use them to train the AI model.'

Banned by the US Congress, with Gartner urging caution

So all well and good then? Not quite. The US Congress recently banned its use by staff members, stating that "the Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services."

Gartner has also urged caution, stating that: “using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally, because it supports easy, natural-language access to unprotected content. Internal exposure of insufficiently protected sensitive information is a serious and realistic threat. External web queries that go outside the Microsoft Service Boundary also present risks that can’t be monitored.”

Copilot could supercharge existing flaws in data security

Most of the issues identified around the safe use of Copilot are not new, but GenAI can amplify existing problems beyond control. Permissions are a key area. According to Microsoft's own 2023 State of Cloud Permissions Risks Report (PDF), more than 50% of identities are 'super admins' and more than half of permissions are considered 'high risk'.

Even more revealing is that only 1% of permissions granted are actually used. Because the permissions that already exist within Microsoft services are simply carried over into Copilot, they open up the potential for significant security incidents. With too much access, for example, an employee may be able to extract information about the salaries of senior executives; indeed, one of the most frequently asked questions put to Copilot is "What is the CEO's salary?"


Furthermore, if an attacker were to take over one of these accounts, or a malicious insider were at work, AI would greatly accelerate their discovery of data. In fact, Gartner found that "among organizations who have faced an AI security or privacy incident, 60% reported data compromise by an internal party."
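Reviewing privileged access before a rollout is therefore a sensible first control. As a minimal sketch of what that can look like, the Python snippet below calls the Microsoft Graph API to list who holds a handful of highly privileged directory roles; the Graph endpoints are real, but the access token handling, the specific roles flagged, and the plain-text report are illustrative assumptions rather than a prescribed approach.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: you already hold an access token with the
# RoleManagement.Read.Directory (or Directory.Read.All) permission.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Roles worth reviewing before a Copilot rollout (illustrative list).
PRIVILEGED_ROLES = {"Global Administrator", "SharePoint Administrator",
                    "Exchange Administrator"}

def get(url):
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# List directory roles currently activated in the tenant, then their holders.
for role in get(f"{GRAPH}/directoryRoles")["value"]:
    if role["displayName"] not in PRIVILEGED_ROLES:
        continue
    members = get(f"{GRAPH}/directoryRoles/{role['id']}/members")["value"]
    print(f"{role['displayName']}: {len(members)} member(s)")
    for m in members:
        # Flag every holder so owners can confirm the assignment is still needed.
        print("  -", m.get("displayName"), m.get("userPrincipalName", ""))
```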

Data labeling is another issue. Copilot relies on the existing data labeling practices of other Microsoft products, which again introduces the potential for inherited errors. These include inconsistent labeling and the categorization of sensitive material that may have been downgraded for ease of use within the organization. The power of AI makes it easier to find and misuse this data.

Garbage in, garbage out – never truer than in a world of GenAI

The mantra 'garbage in, garbage out' has long held true, and AI gives it a new dimension. In any organization there will be terabytes of outdated and irrelevant data that could lead Copilot to suggest irrelevant or misleading content. Long-forgotten notes from a brainstorm or discarded decisions can skew Copilot's understanding of an organization, and missing information can create blind spots that hinder its ability to provide comprehensive answers. This is why organizations need to be particularly careful about what they share with Copilot, ensuring it can only analyze data that has meaning.
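One practical way to act on this is to inventory stale content before Copilot can index it. The sketch below, assuming a Microsoft Graph access token with Files.Read.All and the ID of the document library under review, walks a OneDrive or SharePoint drive and flags files that have not been modified in two years as candidates to archive or exclude; the cutoff and the report format are illustrative choices, not Microsoft guidance.

```python
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"    # assumption: token with Files.Read.All
DRIVE_ID = "<drive-id>"     # assumption: the document library to review
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
CUTOFF = datetime.now(timezone.utc) - timedelta(days=2 * 365)  # illustrative

def children(item_path):
    """Yield the child driveItems of a folder, following paging links."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/{item_path}/children"
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

def walk(item_path="root"):
    """Recursively yield every file in the drive."""
    for item in children(item_path):
        if "folder" in item:
            yield from walk(f"items/{item['id']}")
        else:
            yield item

# Flag files untouched since the cutoff as candidates to archive or exclude.
for item in walk():
    modified = datetime.fromisoformat(
        item["lastModifiedDateTime"].replace("Z", "+00:00"))
    if modified < CUTOFF:
        print(f"STALE  {item['name']}  (last modified {modified:%Y-%m-%d})")
```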

Yet despite all of the above, Copilot promises a significant leap in AI-driven workplace productivity within the Microsoft 365 suite. It brings the insights of the GPT-4 and DALL-E 3 models within the comparative safety of the Microsoft brand, and the early signs are good: Microsoft claims that, on average, Copilot users save 14 minutes a day, or around five hours a month, mostly by removing mundane tasks to boost employees' productivity and creativity.

In my view, these are the six essential components of Copilot readiness:

Start small – pick just three to six use cases to start with and engage a small user base

Solidify an AI policy early – work with internal stakeholders to craft an AI usage policy that caters to employee use cases. Remember, Copilot is just one GenAI tool your employees are using.

Ensure data is safe and consistent – determine where the core datasets are and only train Copilot on those that have meaningful information.

Review existing controls – be aware of data classifications, sharing policies, and access management (a minimal sketch of one such check follows this list).

Ensure there is continuous user training – empower users to unlock Copilot's full potential while staying up to date on the safe use of its capabilities. Keep training scenario- and use-case-specific.

Report and benchmark – build trust and track performance of Copilot against defined use cases.
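On the controls point referenced above, one lightweight check is to surface files shared via organization-wide or anonymous links, since those are precisely the items Copilot can return to any user who asks. The sketch below uses the Microsoft Graph permissions endpoint on drive items; the token, drive ID, and the decision to flag only 'anonymous' and 'organization' link scopes are assumptions for illustration, and a real review would recurse into folders rather than stop at the drive root.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"    # assumption: token with Files.Read.All / Sites.Read.All
DRIVE_ID = "<drive-id>"     # assumption: the document library to review
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
RISKY_SCOPES = {"anonymous", "organization"}   # illustrative: link scopes to flag

def get(url):
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# For brevity this only inspects items in the drive root; a fuller review
# would recurse into folders as in the stale-data sketch above.
for item in get(f"{GRAPH}/drives/{DRIVE_ID}/root/children")["value"]:
    perms = get(f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions")["value"]
    for p in perms:
        link = p.get("link")
        if link and link.get("scope") in RISKY_SCOPES:
            print(f"OVERSHARED  {item['name']}  "
                  f"({link.get('scope')} link, {link.get('type')} access)")
```

Findings from a check like this also feed naturally into the report-and-benchmark step, giving a baseline for how exposure shrinks as sharing policies are tightened.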

Learn More at SecurityWeek’s AI Risk Summit – June 24-25, 2024, Ritz-Carlton, Half Moon Bay

Written By

Alastair Paterson is the CEO and co-founder of Harmonic Security, enabling companies to adopt Generative AI without risk to their sensitive data. Prior to this he co-founded and was CEO of the cyber security company Digital Shadows from its inception in 2011 until its acquisition by ReliaQuest/KKR for $160m in July 2022. Alastair led the company to become an international, industry-recognised leader in threat intelligence and digital risk protection.
