According to analyst firm Gartner, some 55% of organizations have implemented or are piloting generative AI. For many of them, Copilot for Microsoft 365 is an obvious starting point: it’s an easy add-on to services millions of organizations already use, such as Microsoft 365 and Office 365. Beyond the ease of purchase, implementation is also simplified, since Copilot can draw on the organizational data already held in Microsoft services.
Working with a global tech and security giant also brings an element of trust: there are some 10,000+ apps all competing for a share of the AI space, most of which have unclear or unproven data policies. Unlike most of these, Microsoft touts Copilot’s ‘commercial data protection’ and states that ‘Copilot doesn’t save prompts or answers, nor use them to train the AI model.’
Banned by the US Congress, while Gartner urges caution
So all well and good then? Not quite. The US Congress recently banned its use by staff members, stating that “the Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services.”
Gartner has also urged caution, stating that: “using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally, because it supports easy, natural-language access to unprotected content. Internal exposure of insufficiently protected sensitive information is a serious and realistic threat. External web queries that go outside the Microsoft Service Boundary also present risks that can’t be monitored.”
Copilot could supercharge existing flaws in data security
Most of the issues identified with using Copilot safely are not new, but GenAI can amplify existing problems beyond control. Permissions are a key area: according to Microsoft’s own 2023 State of Cloud Permissions Risks Report, more than 50% of identities are ‘super admins’ and more than half of permissions are considered ‘high risk’.
Even more revealing is that only 1% of granted permissions are actually used. Because the permissions that already exist within Microsoft services are simply carried over into Copilot, they open up the potential for significant security incidents. With too much access, for example, an employee may be able to extract information about the salaries of senior executives; indeed, one of the top-asked questions to Copilot is “What is the CEO’s salary?”
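Before a Copilot rollout, this kind of permissions review can be done programmatically. The sketch below is purely illustrative: the `Grant` record, the example permission names, the risk labels, and the 90-day threshold are all hypothetical stand-ins for whatever a real entitlement-management export would contain.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    identity: str     # who holds the permission
    permission: str   # what it allows (hypothetical names)
    risk: str         # "high" or "low" -- assumed classification
    days_unused: int  # days since the permission was last exercised

# Hypothetical export of permission grants from an entitlement tool
grants = [
    Grant("alice@example.com", "Sites.Read.All", "high", 120),
    Grant("bob@example.com", "Files.Read", "low", 3),
    Grant("carol@example.com", "Directory.ReadWrite.All", "high", 400),
]

def stale_high_risk(grants, threshold_days=90):
    """Flag high-risk permissions that have gone unused for longer than
    the threshold -- prime candidates for removal before Copilot inherits
    them."""
    return [g for g in grants
            if g.risk == "high" and g.days_unused > threshold_days]

for g in stale_high_risk(grants):
    print(f"REVIEW: {g.identity} holds unused high-risk permission {g.permission}")
```

Pruning these grants first narrows what Copilot can surface on a user’s behalf, regardless of what they ask it.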
Furthermore, if an attacker were to take over one of these accounts, or a malicious insider abused their access, AI would greatly accelerate their discovery of data. In fact, Gartner found that “among organizations who have faced an AI security or privacy incident, 60% reported data compromise by an internal party.”
Data labeling is another issue. Copilot uses existing data labeling practices from other Microsoft products which again introduces the potential for inherited errors. This includes inconsistent labeling and categorization of sensitive material which may have been downgraded to enable ease of use within the organization. The power of AI makes it easier to find and misuse this data.
Garbage in, garbage out – never truer in a world of GenAI
The mantra ‘garbage in, garbage out’ has long held true, and AI gives it a new dimension. Any organization holds terabytes of outdated and irrelevant data that could lead Copilot to suggest irrelevant or misleading content. Long-forgotten notes from a brainstorm or discarded decisions can skew Copilot’s understanding of an organization, while missing information can create blind spots that hinder its ability to provide comprehensive answers. This is why organizations need to be particularly careful about what they share with Copilot, ensuring it can only analyze data that has meaning.
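One simple hygiene step before letting Copilot index a file share is to surface content nobody has touched in years. The sketch below, a minimal illustration only, walks a directory tree and flags files whose last-modified time exceeds an assumed two-year cutoff; the threshold and the idea of excluding such files from Copilot’s scope are the author’s assumptions, not a Microsoft-documented procedure.

```python
import os
import time

def stale_files(root, max_age_days=730):
    """List files untouched for longer than max_age_days -- candidates
    to archive or exclude before an AI assistant indexes the share."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale
```

Run against a candidate share, the output gives reviewers a concrete list to archive or exclude, rather than a vague instruction to “clean up old data.”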
Yet despite all of the above, Copilot promises a significant leap in AI-driven workplace productivity within the Microsoft 365 suite. It delivers insights from the GPT-4 and DALL-E 3 models within the comparative safety of the Microsoft brand, and the early promise has been good: Microsoft claims that, on average, Copilot users save 14 minutes a day, or about five hours a month, mostly by removing mundane tasks to boost employees’ productivity and creativity.
In my view, these are the five essential components of Copilot readiness:
Start small – pick just three to six use cases to start with and engage a small user base.
Solidify an AI policy early – work with internal stakeholders to craft an AI usage policy that caters to employee use cases. Remember, Copilot is just one GenAI tool your employees are using.
Ensure data is safe and consistent – determine where the core datasets are and only train Copilot on those that have meaningful information.
Review existing controls – be aware of data classifications, sharing policies, and access management.
Ensure there is continuous user training – empower users to unlock Copilot’s full potential while staying up to date on the safe use of its capabilities. Keep training scenario- and use-case-specific.
Report and benchmark – build trust and track performance of Copilot against defined use cases.