Taking the AI Leap

Peek behind the curtain to understand Pliancy’s approach to AI. Plus, get our tips for integrating and evaluating AI tools at your organization.


[Image: a long hallway leading to infinite possibilities]

The desire to create artificial intelligence is nothing new. From Frankenstein’s monster to Discovery One’s HAL 9000, humans have always been fascinated by bringing the inanimate to life. Today, technology is pushing us closer and closer to science fiction.

Whether you already use AI daily or are looking for guidance, we (Zach Brak, senior solutions engineer; Brian Zabeti, security and compliance manager) are here to give you an inside look at Pliancy’s journey with AI and advice for rolling it out at your organization.

The Rapid Ascent of AI

AI technology has made huge strides since 2020, and our hunger for it has grown in parallel. Between 2020 and 2024, annual global private investment in AI more than tripled (+204%) to $150 billion, according to Quid data shared in Stanford HAI’s 2025 AI Index Report. It feels like every company—both B2B and B2C—is capitalizing on this wave, lining up for a slice of the pie. (Our publication of this blog post shows we’re not immune, either.) So no, you aren’t imagining it: AI is everywhere these days.

AI at Pliancy

Pliancy employees love technology; we work at an IT company, after all. It should come as no surprise that we’ve been finding ways to integrate AI into our workflows safely and efficiently. Not only does this allow us to boost our own efforts, but it also gives us the hands-on experience we need to advise our clients on the advantages and pitfalls of various AI-powered tools.

Pliancy has integrated AI resources for our team in a few different ways:

– Gemini, Google’s generative AI chatbot
– NotebookLM, a virtual research assistant also from Google
– Fellow’s meeting recorder & AI notetaker
– AI-assisted search within our knowledge base, Guru
– AI-assisted and augmented security event analysis 

We’ve also instituted an internal AI safe use policy and founded an internal AI safety committee. Our recommendations stem from our experience implementing these tools and safeguards.

What Can AI Do for You?

AI presents a world of possibilities, but we’ve found that “What can AI do for me?” isn’t the right question. There are dozens of ways you could shoehorn AI into your daily life and your workday—but would they actually make a difference?

Instead, ask, “What problem am I trying to solve?” Centering your problem is the best way to ensure AI tools feel seamless, not tacked-on. If a tool’s AI features aren’t solving an actual problem for you, they may be more trouble than they’re worth.

Example: Adopting an AI Notetaker

Up until a few months ago, Pliancy team members used a variety of meeting transcription, recording, and notetaking tools. People who wanted these services used what they liked; people who didn’t feel strongly had no exposure to new tools. There was no standardization of use. Our question was, “How can we standardize a safe notetaking process and encourage wider adoption?”

After testing out new and existing options, we went all-in on Fellow’s new AI notetaker tool, upgrading subscriptions across the org so it would be accessible to everyone. Fellow has helped Pliancy team members circulate agendas, keep track of discussions, and assign action items for years (and they know it—read the case study).

Their new AI notetaker feature checked a few important boxes:

Efficacy. We found that the format of Fellow’s notes was the most effective for our use. Action items route directly to Asana, notes recap decisions made within meetings, and summaries omit non-business-related small talk.

Security. Fellow’s tool allowed us to customize access to specific meetings and organize them in specific shared channels. Other tools we tested had either wide-open permissions (high risk) or very closed structures (no collaboration).

Availability. Users were already familiar with and regularly logged into the platform. Layering a recording feature into an existing tool set us up for success.

Without a clear objective, it would have been easy to get overwhelmed by the options. Our heads might still be spinning trying to compare various pros, cons, and capabilities. Zeroing in on a specific outcome helped narrow down our options based on whether each tool presented a functional solution to our problem.

How Pliancy Stays Safe

Guardrails must go hand in hand with any emerging technology, especially one with as much risk potential as AI. To protect our data, our users, and our business, Pliancy has instituted two safeguards: an AI safe use policy and an AI safety committee that proactively assesses AI risks throughout each tool’s lifecycle.

All employees are required to review and sign our AI safe use policy. The highlights of this policy include our approach to AI-related risk, rules around data security, ethical usage, and prohibited uses. This document clearly states how employees should and should not interact with AI tools, limiting our risks and liability.

We’ve also founded an AI safety committee to assess future tools. This group’s diverse experience in security, technology, and operations ensures we consider issues from multiple perspectives before approving new AI tools and vendors. Decisions are based on evaluation criteria such as data sensitivity, data protection, compliance, and vendor maturity. 

With tools evolving at such a rapid rate, we want to be sure we keep pace with change. An AI safety committee offers a clear pathway for evaluating new resources in a systematic, logical way.

Understanding the Risks of AI

Whether in technology, fashion, music, or any other field, most early adopters are driven by excitement. They’re thrilled by novelty and by cutting-edge developments. Their passion is what drives later adopters toward change. Unchecked, however, that passion can be blinding.

→ Imperfect Results

Generative models have become increasingly advanced, but they’re far from perfect. Though the idea of AI delegation seems liberating in theory, you can’t leave your best judgment at the door. AI tools can return false or incomplete information. They may miss (or misinterpret) vital nuances. Today’s models are still prone to hallucination and context contamination or pollution.

Models have what’s called a context window. Like a person, for better or for worse, a model can only consider so much at one time. If you push a lot of information through the system, the model may falter, forgetting key details and fundamentally altering the quality of its responses. (Read more about context engineering and why it matters.) Whether through research or testing, you should know the limits of each tool, then add cross-checks and other precautions.
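To make the idea concrete, here’s a minimal Python sketch of staying within a context window by chunking a long document before sending it to a model. The 8,000-token window and the four-characters-per-token estimate are illustrative assumptions, not the behavior of any particular model:

```python
# Minimal sketch: split a long document into chunks that fit a model's
# context window, leaving headroom for instructions and the reply.
# The 8,000-token window and ~4-characters-per-token estimate are
# illustrative assumptions, not properties of any specific model.

MAX_CONTEXT_TOKENS = 8_000
RESERVED_TOKENS = 2_000          # room for the prompt and the response
CHUNK_BUDGET = MAX_CONTEXT_TOKENS - RESERVED_TOKENS


def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token in English)."""
    return max(1, len(text) // 4)


def chunk_text(text: str, budget: int = CHUNK_BUDGET) -> list[str]:
    """Split text on paragraph boundaries so each chunk fits the budget.

    Note: a single oversized paragraph still becomes its own chunk.
    """
    chunks, current, used = [], [], 0
    for paragraph in text.split("\n\n"):
        cost = estimate_tokens(paragraph)
        if current and used + cost > budget:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(paragraph)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks


if __name__ == "__main__":
    long_report = "Section text goes here...\n\n" * 500
    for i, chunk in enumerate(chunk_text(long_report), start=1):
        print(f"chunk {i}: ~{estimate_tokens(chunk)} tokens")
```

In practice, you’d pair this kind of budgeting with cross-checks, like spot-verifying a model’s summary of each chunk against the source before relying on it.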

→ Data Protection

Beyond model accuracy, robust data protection is a cornerstone of trustworthy AI. While an AI safe use policy provides the first line of defense by guiding users on appropriate data handling, organizational due diligence must go much deeper. It’s no longer enough to ask if your data is used for training. You must scrutinize the entire data processing lifecycle, especially when third-party vendors are involved. Key questions include:

– How is your data logically isolated from other tenants?
– What controls are in place to prevent data commingling?

Furthermore, it’s critical to understand the downstream data flow. Many AI providers now use a mix of in-house and third-party models, creating a complex chain of sub-processors. Your diligence must extend to these fourth and Nth parties, ensuring that protections for data residency, retention, and use are maintained consistently throughout the entire service delivery chain.
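To make that diligence easier to track, here’s a hedged sketch of how you might record answers for a vendor and each of its sub-processors. The field names and the 30-day retention threshold are our own illustration, not a standard schema:

```python
# Sketch of a due-diligence record covering a vendor and its downstream
# sub-processors. Field names and thresholds are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Processor:
    name: str
    role: str                 # e.g., "primary vendor", "third-party model host"
    data_residency: str       # where prompts and outputs are stored/processed
    retention_days: int       # how long prompts and outputs are kept
    used_for_training: bool   # is customer data used to train models?
    tenant_isolation: str     # how data is separated from other tenants


@dataclass
class DueDiligenceRecord:
    tool: str
    processors: list[Processor] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """Flag any link in the chain that weakens the protections."""
        issues = []
        for p in self.processors:
            if p.used_for_training:
                issues.append(f"{p.name}: customer data used for training")
            if p.retention_days > 30:
                issues.append(f"{p.name}: retention exceeds 30 days")
        return issues


record = DueDiligenceRecord(
    tool="Example AI Notetaker",
    processors=[
        Processor("Vendor Inc.", "primary vendor", "US", 30, False,
                  "per-tenant encryption"),
        Processor("ModelHost LLC", "third-party model host", "US", 90, False,
                  "logical isolation"),
    ],
)
print(record.open_questions())
```

Capturing every processor in the chain in one record makes it obvious where protections for residency, retention, and training use start to slip.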

Evaluating New AI Tools

It’s important to bring a skeptic’s eye to the hype. Any company can slap a “Now with AI!” sticker on their product. But how do you evaluate what’s in front of you?

A structured evaluation process ensures you adopt tools that add real value without introducing unacceptable risk. The key is to match your diligence to the situation.

First, consider your company’s appetite for innovation and your strategic priorities. Is the goal to gain a competitive edge with cutting-edge technologies, or to ensure stability with mature, proven platforms? This decision is a key factor that, along with your company’s risk tolerance, sets the stage for evaluation.

Next, apply a risk-based framework. Not all AI tools require the same level of scrutiny.

– For low-risk tools that won’t handle sensitive data, the focus should be on a quick assessment of value and feasibility. Does it solve a real problem, and is it a viable long-term solution?
– For high-risk tools that will be integrated into core operations or handle sensitive data, much deeper due diligence is necessary. This requires a thorough review of the vendor’s security posture, data governance practices, product roadmap, and the transparency of their entire AI supply chain.

By tailoring your evaluation, you can foster innovation responsibly—moving quickly on promising, low-risk tools while ensuring high-risk solutions are safe, secure, and ready for your business.
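As a rough illustration of that triage, here’s a short Python sketch. The review steps mirror the questions above, but the two tiers and their criteria are an illustration of the idea, not a formal framework:

```python
# Illustrative sketch of risk-based triage for a proposed AI tool.
# The tiers, criteria, and review steps are examples, not a formal framework.

def triage(handles_sensitive_data: bool, core_operations: bool) -> dict:
    """Return a review tier and the checks that tier calls for."""
    if handles_sensitive_data or core_operations:
        return {
            "tier": "high-risk",
            "review": [
                "vendor security posture",
                "data governance practices",
                "product roadmap",
                "transparency of the AI supply chain",
            ],
        }
    return {
        "tier": "low-risk",
        "review": [
            "does it solve a real problem?",
            "is it a viable long-term solution?",
        ],
    }


print(triage(handles_sensitive_data=True, core_operations=False))
```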

Building Your AI Baseline

Ignoring AI is not a feasible option for businesses that want to thrive in the coming decade and beyond. If you’ve been waiting to take the plunge, it’s not too late. These are the current table stakes you need to meet:

→ Create an AI Safe Use Policy

First and foremost, you need to establish and enforce an AI safe use policy. Your policy should clearly outline your rules and regulations for AI-powered tools. This also signals to your team members that you take AI-related risks seriously (and they should, too).

→ Establish an Internal AI Chatbot

Second, you need to have a safe chatbot for your organization. Why? Because people are going to use one anyway. Without an approved, enterprise-ready option, you have no idea what company information may be fed into a free version of ChatGPT, Claude, or Gemini—and then kept on file, used for training purposes, or reviewed by human eyes. Rolling out an approved in-house option means selecting a tool that isolates your data and keeps it secure.

→ Form an AI Working Group

Third, you need to form a dedicated group to champion AI growth. Whether you call it a safety committee, an experimentation team, or a working group, this group’s purpose is to explore and evaluate new tools within the context of your business. For this group to be effective, it must include diverse perspectives. A balanced team should have representation from key functions across your organization. This ensures that any AI adoption is strategically aligned with your business objectives.

The Great AI Transition

There’s no denying it: AI has already changed how we communicate, learn, create, consume, and more. It’s more than a passing fad. It’s clear that this is a defining moment in technology history.

No one knows exactly how the dust will settle, but we can take cues from the leaps we’ve lived through before. Tools will get better. More people will get comfortable with them. The shine will, to some degree, wear off. Regulatory agencies will play catch-up. Eventually we’ll wonder how we ever survived before—then the next thing will come along, and the cycle will begin again.


One Last Thing

Listen to this conversation generated with NotebookLM, using the blog post you’ve just read as source data.
