
How to Get Your Org Started with AI (even if you don’t want to)

Completely, 100% written by a human.

Short version: An intentional approach to AI will help your org introduce it effectively. Jump ahead to:
– The Prep Work
– The Policy Work
– The AI Planning Work

If you’re feeling the pressure of needing to make AI happen at your organization — along with the dread of actually putting those gears in motion — know that you’re not alone.

The tension is normal. AI is advancing faster than most of us can keep pace with, technologists included. And when you can’t even wrap your head around how the underlying tech really works, or what it means for your organization’s data, it can feel like you’re staring at the top of an impossible mountain.

But just like with any big task, you want to break this down into smaller, manageable chunks. An AI implementation need not be any different, no matter what anyone else tells you about how fast you need to be moving.

The Prep Work

First, there’s some valuable pre-work to be done before you start signing up your team for any AI accounts. (Yes, I just bought you some time!)

Like with any new program or focus, you want to get aligned on what it is that you want for your organization when it comes to AI. Questions you’ll want to broach are things like:

  • Which challenges do we think AI is best positioned to help us solve?

  • What do we want our AI approach to look like, both on a strategic level (ex. how does this tie into our values, or propel specific plans forward?) and in our team’s day-to-day?

  • Which “AI Track” makes the most sense for us? (ex. “borrowed” AI in the form of leveraging the major LLMs like Claude and ChatGPT, vs. building your own LLMs using your own data)

  • What do we need to consider when it comes to privacy, security and accuracy with our intended AI use?

  • What do we need to consider when it comes to the AI partners and vendors we choose to engage?

  • How do we want to make sure constituents are represented and kept in the loop about our use of AI? What consent do we need to obtain?

  • What are our concerns broadly, and what are some ways we can mitigate those?

  • What’s our overall readiness?

By no means is this an exhaustive list of everything your org ought to consider. But it’s a starting point that’s going to determine various specifics around your AI pilot: like your timeline, final vendors and use policy. Which brings us to the next point.

The Policy Work

If your team already has a data policy, start there. But if this is your first go, then a data and AI use policy is going to go a long way in 1) safeguarding your org’s data and 2) providing clear bounds to staff for what qualifies as acceptable AI use.

Now, data safeguarding is obvious. This should be important to any responsible organization that holds sensitive or personally identifiable data for any living person.

Defining acceptable AI use is part of the safety equation. But even more, it provides effective guidance to your staff no matter where they sit on the AI enthusiasm spectrum. This includes those who are squeamish about jumping in with AI, and those who are gung ho and prepared to feed it all the data possible to revolutionize their workflows.

You can easily find resources for how to develop a mission-minded AI policy online (like the AI Resource Hub available via NTEN). You can technically even ask AI to help you generate one to start.

I don’t necessarily recommend that last one. Instead, I’d suggest an inside-out approach: generating a distinct AI policy that’s truly bred from your organization’s DNA. To get there:

  • Start by addressing those questions in full from the prep step

  • Drill down into your team’s AI Values. These will likely include some of your broader org values, but are intended to answer ‘What kind of AI organization do we want to be?’ (For example: what’s your equity approach, if any? What will you do to mitigate potential bias in AI use?)

  • Your values become the skeleton. From there, you can drill down into the specific policy points you want institutionalized at your organization. This is where external research can come in.

By this point, you’ll have less anxiety about AI becoming a wild west at your org – because you’ve already spelled out the role it should play, and you will have taken a first pass at clarifying all the expectations surrounding it.

The AI Planning Work

The final step for getting started with AI is laying out the roadmap for how to get there. And guess what? You have more options here than you might think.

An “experimentation” roadmap

Typically when we talk tech implementations, we’re talking about a significant investment of time and resources into new tools or strategy. This often takes weeks or months of discovery, careful consideration, and a hard sell to your finance team to get sign-off on that contract amount.

This approach can work for orgs who seek to use AI in a systematic and structured way. It certainly helps protect against speed-driven project mistakes that can create waste and erode trust.

But it’s also true that AI is “expensive” in a way that’s hard to quantify right now. While some of the largest providers do offer nonprofit tiers — like Claude for Nonprofits — orgs still need to be prepared for those license costs. (That is, if your impact org even qualifies as a nonprofit).

Then, once you’re in, there’s the question of how far your team’s tokens will actually stretch. (The more complex the task is for an AI tool, the more “tokens” get consumed, and your licenses only include a set number of them.) Workflows that burn through tokens quickly can easily put your team in an awkward spot and jeopardize the actual work.
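To make that stretch concrete, here’s a back-of-envelope sketch of how a token budget can evaporate. Every number in it is a made-up assumption for illustration, not real vendor pricing or quota, so plug in the figures from your actual plan:

```python
# Rough token budgeting sketch. All numbers below are hypothetical
# assumptions for illustration -- check your real plan's limits.

MONTHLY_TOKEN_QUOTA = 1_000_000  # hypothetical per-seat monthly allowance


def quota_share(tasks_per_week: int, tokens_per_task: int) -> float:
    """Estimate what fraction of the monthly quota a workflow consumes."""
    weekly_tokens = tasks_per_week * tokens_per_task
    monthly_tokens = weekly_tokens * 4  # rough 4-week month
    return monthly_tokens / MONTHLY_TOKEN_QUOTA


# A light task (short summaries) vs. a heavy one (multi-document analysis):
print(f"Light workflow: {quota_share(50, 2_000):.0%} of quota")
print(f"Heavy workflow: {quota_share(50, 40_000):.0%} of quota")
```

Under these invented numbers, the light workflow uses less than half the monthly quota, while the heavy one blows past it eight times over. The point isn’t the specific figures; it’s that complexity, not task count, is usually what drains the budget.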

So if your org isn’t prepared to take that traditional tech approach I mentioned above, that’s understandable. You may be better off taking a lighter, more experimental approach, where you:

  • Make easy use of AI tools that are already available to your team. For example, Google’s Gemini models perform well enough and come already baked into your team’s Google Workspace suite.

  • Make use of new AI tools, without inputting personal data, sensitive data, or intellectual property. There are plenty of generative AI use cases that don’t rely on sensitive info – like online research, or certain types of data aggregation. Identifying those opportunities with your team can help them individually make good use of AI, while building more of the case for going all in.

  • Establish a culture of feedback and learning around your team’s AI use. Give space for your team to share the ways that they’ve found AI helpful for their workflows, or where they have questions about how to safely introduce it.

Ultimately, remember that your best friend here is the actual plan. Decide which approach to take, document it, and map out your milestones and desired progress as if it were any other tech rollout.

To Wrap This Up

An AI plan isn’t likely to assuage everyone’s hang-ups about this revolutionary technology. But a plan grounded in intentional prep work and a guiding policy framework should help relieve some of the stress.

