Nonnecke recently transitioned from her role as founding director of the CITRIS Policy Lab at the University of California, Berkeley, to senior director of policy at Americans for Responsible Innovation, a bipartisan nonprofit focused on responsible technology governance. She was appointed to the California Privacy Protection Agency board in 2025 and hosts TecHype, a podcast focused on emerging technology policy. She spoke with PW about third-party vendors of AI tools, practical forms of machine learning, and why humans are still relevant.
What are the first questions librarians and other tech users should ask when procuring AI tools?
When you have a third-party vendor coming to you saying, “We know that you have this significant challenge. You lack resources, you lack staffing to be able to address it—don’t you worry! We have this newfangled magical tool that’s going to erase all of your problems,” you need to ask yourself, What are the potential benefits, and what are the potential risks? At Berkeley, we were the first institution of higher education to implement our own responsible AI strategy, and this was well before ChatGPT came out. Our first principle when we’re thinking about AI is appropriateness. Do you really need to use an AI-enabled tool? Or is the process itself clunky and outdated, and it just needs to change? Do you really need to use AI, or are you falling for FOMO?
What practical steps can organizations take when evaluating AI vendors?
We have a lot more power as procuring entities, especially when we band together. That’s particularly true of small libraries, which, together, can assert themselves and say, We’re only going to procure this technology if this vendor is transparent with us. Remember, AI vendors need libraries more than libraries likely need them. They have a product they want to sell. At the UC, we banded together all 10 campuses, which gave us significant purchasing power and allowed us to control the process. We said, You want us to buy your service—these are our conditions.
How can aspiring AI users make smart choices about whom to work with and what to implement?
We’re seeing an emerging third-party sector of AI auditors, certifiers, and licensors that say they will serve as ombudsmen to audit those systems. My concern, as somebody who’s worked in this space for over a decade, is that we lack standards for what a third-party audit, certification, and licensing system actually looks like. At the end of the day, if I implement a tool that causes harm—that creates reputational, financial, or legal liability—who then is responsible?
On your podcast TecHype, you often debunk myths about AI. What misconceptions do you encounter?
Not all AI is generative. There are very simple forms of machine learning. If you think back to your statistics classes, logistic and linear regression and principal component analysis are statistical methods used in machine learning. Also, AI is not new. We’ve been using machine learning algorithms in the public and private sectors for decades.
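To make that concrete, here is a minimal sketch, using invented numbers, of the kind of plain linear regression Nonnecke is referring to: the model "learns" a line from past data and uses it to predict a new value, with nothing generative involved.

```python
# Minimal illustration (not from the interview) of simple, non-generative machine learning.
import numpy as np

# Hypothetical data: hours of staff training vs. patron questions resolved.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
resolved = np.array([12.0, 19.0, 31.0, 38.0, 52.0])

# Fit y = slope * x + intercept by ordinary least squares.
slope, intercept = np.polyfit(hours, resolved, deg=1)

# Predict for an unseen value: the same learn-from-data idea that underlies
# far more complex AI systems.
print(f"Predicted resolutions at 6 hours: {slope * 6 + intercept:.1f}")
```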
Should information science specialists—and workers in general—be concerned about AI coming for their jobs?
AI is not necessarily a tool that will replace workers, but workers who have the skills to use the tools in a meaningful way will replace those who do not. So there’s definitely a role for libraries to provide AI training, both for themselves and their communities. This is especially true because libraries serve as a hub for people seeking employment and upskilling. Also, librarians need to ask themselves, If I could offload some of my work to these tools, what other things can I do that are meaningful? Community engagement tasks, face-to-face work—things computers cannot do. That’s the upside.
Brandie Nonnecke will present “Demystifying AI: Navigating the Laws and Policies Shaping Our Digital Future” on Saturday, June 28, room 118 C, 10:30 a.m.–noon.