
Is Generative AI Secure Enough for People Analytics?

Generative AI innovations like digital assistants are changing the way we work—but are they secure enough to use with people data? Let's find out.


Vee, Visier’s new generative AI digital assistant for people analytics, is designed to help more people across the organization get the workforce insights they need, quickly and easily, by asking questions in natural language. Understandably, some people are wondering: Can Vee ensure that the sensitive nature of people data is respected? How does Vee make sure that everyone using it can only access the data they’re privy to?

The significant security challenges of generative AI

The generative AI tools that have so dramatically captured our attention over the past few months, such as OpenAI’s ChatGPT and Google’s Gemini, are built on large language models (LLMs). They’re astonishingly good at understanding user queries and presenting answers to those questions, all in natural language. 

LLMs are designed to be as flexible as possible and can be put to use on a bewildering variety of use cases. People data, however, is not one of them.

LLMs are general purpose tools, built to be used with public data. They simply aren’t designed to handle sensitive, private, or privileged information. 

Those who try to put LLMs to use with people data will face two major obstacles: 

First, the current crop of LLMs is designed to treat all data, and all users, equally. They have no method of understanding and validating a user’s identity, nor any way of restricting access to information based on a person’s permission level.

So even if you could create, train, and deliver an LLM on a specific company’s people data (within a corporate firewall, for example), the generative AI assistant would likely end up revealing private information—think salaries, performance, and manager feedback—to anyone who asked. It simply wouldn’t know any better.
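To make the gap concrete, here is a minimal sketch of the kind of access gate that has to sit outside the model. Everything in it (the field names, permission labels, and functions) is our own invention for illustration, not any real product’s API: data is checked against the requesting user’s permissions before the LLM could ever see it.

```python
# A minimal sketch (hypothetical names) of an access gate that sits outside
# the model: data is filtered against the user's permissions *before* it
# can reach the LLM.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    permissions: set[str]  # e.g., {"headcount", "compensation"}

# Sensitive fields and the permission each one requires (illustrative only).
REQUIRED_PERMISSION = {"salary": "compensation", "manager_feedback": "performance"}

def fetch_for_user(user: User, field: str, records: list[dict]) -> list:
    """Return a field's values only if this user is permitted to see it."""
    required = REQUIRED_PERMISSION.get(field, field)
    if required not in user.permissions:
        raise PermissionError(f"{user.user_id} may not view '{field}'")
    return [r[field] for r in records if field in r]

# An LLM trained directly on the raw records has no such checkpoint: once
# salaries are baked into its weights, a clever prompt from anyone might
# elicit them.
```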


Second, because the current general-purpose LLMs weren’t built with privacy in mind, they don’t have a way of handling personal information in compliance with GDPR or other privacy standards. 

For example, GDPR specifically requires that systems respect a user’s “right to be forgotten”. Current LLMs can’t selectively “forget” specific data from their training. If a user requested that their personal information be removed from the system, the AI model would have to be completely retrained to exclude it.

This would demand inordinate amounts of computing power and be incredibly costly. It would be like wiping the entire memory of your smartphone because you needed to erase one contact.
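To see why the two cases are so different, compare them side by side. The toy schema below is invented for illustration; it stands in for any conventional datastore.

```python
# A toy illustration (invented schema) of why erasure is easy in a
# conventional datastore and hard in a trained model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_id TEXT PRIMARY KEY, salary REAL)")
conn.execute("INSERT INTO employees VALUES ('emp-001', 95000)")

# Honoring a GDPR erasure request here is a single statement:
conn.execute("DELETE FROM employees WHERE employee_id = ?", ("emp-001",))
conn.commit()

# A trained LLM has no row to delete: the record is diffused across billions
# of weights, so truly "forgetting" one person means retraining the model.
```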

These are two of the reasons most organizations are restricting or outright blocking any direct use of these technologies on their people data.

The Visier solution: building on our strengths

When developing Vee, we recognized both the power and limitations of public LLMs like ChatGPT. We designed it to harness the power of the LLMs, while still remaining true to Visier’s focus on enterprise privacy and security. 

Vee is not trained on customer data. Part of what has always made Visier stand out is our robust and reliable security model, which is designed to handle the intricacies of people data with enterprise-grade security, privacy, and user access. 

Vee keeps this concept at its core. 

Vee uses large language models (e.g., GPT-4) for the user-facing side of what it does. The LLMs help Vee understand users’ questions, which are written in natural language and are often abstract, vague, or context-dependent, and then translate each question into a specific, relevant data query.

Another significant Visier capability that comes with Vee is our massive body of anonymized data. We have over 250 million normalized, well-structured, anonymized data points, 2,000 business metrics, and tens of thousands of common people analytics questions. We know the “shape” of the types of information people seek to get out of a people analytics system, and what answers look like. We use anonymized data—not actual customer data—to further train the LLMs on how to ask Visier a question. 
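As a rough illustration of the general technique, anonymized question-and-query pairs can be assembled into a few-shot prompt that teaches a model the mapping. The pairs and query syntax below are invented for this example; they are not Visier’s actual training data or query language.

```python
# A rough sketch of few-shot prompting with anonymized examples. The pairs
# and the query syntax are invented; they are not Visier's actual training
# data or query language.
ANONYMIZED_EXAMPLES = [
    ("How many people left last quarter?", "METRIC(exits) PERIOD(last_quarter)"),
    ("What is headcount by department?", "METRIC(headcount) GROUP_BY(department)"),
]

def build_translation_prompt(question: str) -> str:
    """Assemble a prompt that teaches the model the question -> query mapping."""
    shots = "\n".join(f"Q: {q}\nQuery: {s}" for q, s in ANONYMIZED_EXAMPLES)
    return (
        "Translate each question into a metrics query.\n"
        f"{shots}\n"
        f"Q: {question}\nQuery:"
    )

print(build_translation_prompt("What was turnover in engineering?"))
```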

The actual sourcing of the answer from the customer’s data is done by the Visier platform, not the public LLMs. Data security is handled just the same as if someone were using Visier’s classic interface. 

In addition to being secure, Vee provides only factual answers: no guesses, no hallucinations. Whereas ChatGPT can be wrong up to 52% of the time, with Vee there is no risk of incorrect headcounts, reporting structures, or diversity data.

In essence, the LLMs are used as a translation service: they translate a user’s query from natural language into the Visier query language, and the answer back again. The sketch below illustrates this division of labor:
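All function names and the toy query syntax here are our own stand-ins for illustration, not Visier’s actual API; the point is the separation, in which the LLM only ever sees the question text while the platform alone touches the data.

```python
# A self-contained sketch of the division of labor. Names and the toy query
# syntax are stand-ins for illustration, not Visier's actual API.

def llm_translate_to_query(question: str) -> str:
    """Stand-in for the LLM call: natural language in, structured query out."""
    if "headcount" in question.lower():
        return "METRIC(headcount)"
    raise ValueError("question not understood")

def platform_execute(query: str, permissions: set[str]) -> int:
    """Stand-in for the analytics platform: it alone touches the data,
    enforcing the same permissions as the classic interface."""
    metric = query.removeprefix("METRIC(").removesuffix(")")
    if metric not in permissions:
        raise PermissionError(f"user may not query '{metric}'")
    return {"headcount": 4120}[metric]  # toy data

# The LLM never sees the data; the platform never sees raw language.
query = llm_translate_to_query("What is our headcount?")
print(platform_execute(query, permissions={"headcount"}))  # 4120
```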

Assuming the user has the right access level, Visier will produce an answer to their question. At this point, Vee leverages the LLMs once more, this time to format and explain the answer in natural language. The LLM does not retain the information. 
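That final step can be sketched the same way (names invented): the numeric result is handed to the LLM once, in a stateless call, purely to be phrased in plain language.

```python
# A hedged sketch of the explanation step (names invented). The result is
# passed into a single, stateless prompt; nothing is stored between requests.
def explain_answer(question: str, metric: str, value: int) -> str:
    """Stand-in for a stateless LLM completion that phrases the result."""
    prompt = (
        f"User asked: {question}\n"
        f"Result: {metric} = {value}\n"
        "Explain this result in one plain-language sentence."
    )
    # A real integration would send `prompt` to an LLM API configured with no
    # conversation memory; here we template the reply for illustration.
    return f"Your current {metric} is {value:,}."

print(explain_answer("What is our headcount?", "headcount", 4120))
```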

Beyond just answering what it’s asked, Vee can also add value: it can suggest relevant questions and provide the supporting knowledge to explain and qualify its answers. For example, Vee might suggest that a user ask about retention risk. If they do, Vee provides the answer, along with an explanation of how it was calculated and how the information could be used to create business impact.

Throughout this process, Vee is bound by the same security model as if the user had logged into Visier in the traditional way. And of course, this all takes place instantaneously, or as we like to say, at the speed of thought.

A security-forward digital assistant

Because we’ve designed Vee with a security-first approach, organizations using Visier can deploy Vee with confidence. It reliably and securely provides any user within a company with self-serve people insights, with no analytical background or special training required. We’re looking forward to our customers using Vee to create more impact at scale.

Looking to understand more about how Visier’s data security model works? Check out this post. Got additional questions about Vee? Check out our FAQ here, or request a demo.

GET A DEMO OF VEE, VISIER'S GEN AI ASSISTANT FOR PEOPLE INSIGHTS

