Infrastructure Tech Procurements: Adapting to a Shifting Policy and Technology Landscape Webinar Key Takeaways
April 3, 2025

Speakers:

  • Santiago Garces, Chief Information Officer, City of Boston, Massachusetts
  • Waldo Jaquith, Government Delivery Manager, US Digital Response
  • Katy Ruckle, Chief Privacy Officer, State of Washington

Moderator:

  • Tina Walha, Interim CEO, US Digital Response

The following key takeaways and transcript cover the Tech and Innovation Center’s Local Infrastructure Series session focused on how cities can navigate technology procurements amid rapid changes in AI and policy landscapes. Speakers Waldo Jaquith (USDR), Katy Ruckle (Washington State CPO), and Santiago Garces (Boston CIO) discuss recent market disruptions, including DeepSeek’s emergence and federal policy shifts. They emphasize cautious approaches to AI adoption: understanding what you’re buying, monitoring system outputs continuously, and testing free versions before committing. The panel highlights that government AI implementation should prioritize resident outcomes and employee augmentation rather than replacement, with procurement strategies focused on specific use cases rather than technology for its own sake.

Opening Framework

Walha outlined four guiding values: technology driven by shared goals, not technology for technology’s sake; designing for diverse needs; commitment to sustainable resourcing; and building trust through transparent, secure solutions.

Recent Market Disruptions

Jaquith identified two major shifts: the emergence of DeepSeek’s AI model (reportedly trained on ChatGPT output, potentially disrupting pricing models) and the new administration taking office. These changes have increased uncertainty for government partners, with most responding through cautious waiting. He noted established vendors are integrating AI into existing software rather than requiring standalone contracts, similar to how Apple has incorporated AI features.

State-Level Response

Despite federal policy shifts, Washington State’s AI guidelines remain consistent, focusing on responsible procurement. Ruckle explained they’ve incorporated “public purpose and social benefit” principles and emphasized practical considerations like public records management for AI tools. The state is creating evaluation frameworks for agencies and establishing sandbox environments for testing.

Municipal Perspective

Garces explained that LLMs require massive computational resources, making them accessible primarily through cloud providers via existing procurement vehicles like GSA or NASPO ValuePoint contracts. He illustrated Boston’s practical approach with its SMART grant project for parking signage, where the city is evaluating traditional machine learning against generative AI based on effectiveness and cost efficiency, not just technology novelty.

Size Matters

The panel emphasized different challenges for small versus large municipalities, with smaller entities often wanting simple procurement options rather than becoming AI experts. All stressed matching the right tool to the right problem and maintaining equity in service delivery.

Future Outlook

The panel anticipates:

  • Continued market disruption with potentially reduced costs
  • More formal state-level policy requirements
  • Incremental improvements in AI models with better open-source options
  • Development of more specialized, domain-specific models

Practical Advice

  1. Understand what you’re buying (Jaquith)
  2. Implement continuous monitoring of outputs post-procurement (Ruckle)
  3. Experiment with free versions before making procurement commitments (Garces)

The discussion consistently emphasized that technology procurement should focus on specific problems rather than trends, aiming to improve resident services and enhance employee effectiveness rather than replace workers.

Condensed Transcript

Tina Walha: Hello and welcome to our 23rd session of the Tech and Innovation Center Series, part of the Local Infrastructure Hub. I am Tina Walha and I’m Interim CEO of US Digital Response, which is a nonpartisan nonprofit that provides technology support to governments. This series aims to help America’s towns and cities take full advantage of Federal infrastructure funding by creating competitive winning bids that deliver real results for residents.

We’re guided by four key values: First, we’re driven by shared goals – we don’t believe in technology for technology’s sake. Second, we design for people – technology, processes, and products must reflect the needs of diverse groups. Third, we’re committed to resourcing this work with long-term sustainability in mind. Fourth, we want to build trust through solutions that are transparent, secure, available, reliable, durable, and mindful of privacy.

Today we’re focusing on how cities can anticipate and adapt to disruption from technology and policy shifts. We’ll discuss implications of Federal infrastructure and AI policy changes, making decisions about AI procurements, and how cities can reduce risk.

We have expert speakers to guide you: Santiago Garces (CIO, City of Boston), Katy Ruckle (Chief Privacy Officer, Washington State), and Waldo Jaquith (Government Delivery Manager, US Digital Response).

Waldo Jaquith: In the past month, we’ve seen two big changes. First, DeepSeek’s model introduction has shaken things up, much as OpenAI, Google, and Anthropic did previously. It had been starting to look like a settled market, reminding me of what happened with cloud services, where AWS got a head start before Microsoft entered, with some Google presence and Oracle mostly processing contracts.

DeepSeek trained their model on ChatGPT, reducing their training costs dramatically. This introduces uncertainty – could these services be much cheaper with more competition? The message for cities is that signing multi-year LLM contracts is risky since you could lock into inferior or overpriced products in a rapidly changing market.

Second, at the Federal level, the new administration is attempting to replace tens of thousands of employees with AI. This likely won’t work and will make it reputationally risky to use AI in government. We’ll probably see big failures that damage AI’s reputation, potentially leading to overcorrection away from its use entirely.

The main response I’m seeing from partners is cautious waiting. There are few cases where it’s truly urgent to adopt AI tools right now to solve problems that only LLMs can address.

Tina: What remains consistent in your procurement recommendations despite these shifts?

Waldo: There’s been a protective but correct message from incumbent technology providers: you don’t need separate contracts with ChatGPT; we’ll add that functionality to existing software. Look at how Apple integrates AI – not through standalone chat interfaces but through discrete features like notification summaries. However, Apple’s implementation has had embarrassing issues with inaccurate summaries.

Government employees may not need chat interfaces for AI. “AI procurement” will look different once functionalities are integrated into standard software. Let’s hold off and let vendors add new capabilities to existing tools.

Tina: Katy, as Washington’s Chief Privacy Officer, has the Federal policy uncertainty affected what you’re doing? The AI executive order was revoked on day one of the new administration.

Katy Ruckle: We’re seeing AI integrated into products and contracts we already have rather than separate AI procurements. The revocation of the Federal executive order hasn’t changed our approach, as it was primarily focused on high-risk activities and red-teaming compute-intensive models.

Our Governor’s executive order created guidance for state agencies based on the NIST AI risk management framework. We’ve added considerations of public purpose and social benefit to our principles because we see that as a responsibility for state government when procuring AI.

Tina: At the NASCIO conference, you shared practical guidance about AI-enabled procurements, including unsexy but important topics like public records. What advice can you offer?

Katy: Public records are indeed a hot topic in Washington. There’s debate about what’s exempt from disclosure and high interest in how AI is being used. Chat tools have gotten attention because some were being deleted as transitory records. We’re now pausing to ensure responsible use and proper record-keeping.

We’re addressing whether prompts are transitory or public records requiring preservation, and considering audit logs associated with these technologies. We’re helping the workforce be mindful about recording and transcribing meetings, which creates public records that may need redaction of sensitive material – a workload issue for records officers.

Santiago Garces: When thinking about large language models, we’re essentially talking about accessing mathematical equations. LLMs require enormous computational resources – GPT-4 needs approximately 10,000 GPUs just to load the model, which is why there’s massive capital investment in data centers. Project Stargate, OpenAI’s data center build-out with Oracle, came about because Microsoft couldn’t build capacity fast enough.

For governments, accessing these services happens through cloud providers. Most cloud computing services are available through GSA contracts or NASPO ValuePoint contracts, allowing any municipality or state to access these tools without an RFP. Vendors adding AI features to products are likely using these same back-end services.

Usage typically shows a long-tail distribution: a few employees use AI frequently, most use it occasionally, and many try it once and forget it exists. The technology and business models are evolving rapidly – Google recently shifted from charging $20/user/month for Gemini to $2/month across all licenses.

Tina: For a city like Boston, how is this uncertainty affecting your procurement planning?

Santiago: We received a $2M SMART grant to improve our confusing parking signage system. The main uncertainty is whether we still have the funding after recent freezes. We’re comparing two approaches: a classical machine learning method using segmentation to isolate and interpret signs, or a generative AI approach using multimodal models to analyze sign images directly.

Our experiments show the generative AI approach works and is simpler to implement, though potentially more expensive per analysis. Either way, we’d use cloud resources rather than buying infrastructure. Once we confirm our funding status, we’ll validate effectiveness and cost efficiency at a smaller scale before deciding which approach to pursue.
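
[Editor’s note: To make the generative approach concrete, here is a minimal sketch of the pattern Garces describes – sending a sign photo to a multimodal model and asking for a plain-language reading. It assumes an OpenAI-style vision API; the model name, prompt, and file path are illustrative, not Boston’s actual pipeline.]

    import base64
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def read_parking_sign(image_path: str) -> str:
        """Ask a multimodal model to interpret a photo of a parking sign."""
        with open(image_path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("utf-8")
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any vision-capable model would do
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Restate this parking sign as plain-language rules: "
                             "who may park, when, and for how long."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    print(read_parking_sign("sign.jpg"))  # hypothetical image file

[The classical alternative would replace this single API call with a segmentation model plus OCR – more implementation effort for a lower per-image cost, which is exactly the tradeoff Boston describes validating at a smaller scale.]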

Waldo: There’s an enormous difference between large and small municipalities in AI maturity. With smaller governments, I try to determine what they actually want to accomplish before they commit to expensive solutions. Sometimes I suggest starting with a $25/month ChatGPT account for experimentation, but usually they want the “easy button” despite costs.

They shouldn’t need to become AI experts – procurement should be simple through GSA or NASPO. I emphasize that this doesn’t have to be complicated, but also discourage $10,000/month contracts or long-term commitments when they could pay per query instead.

Santiago: DeepSeek shows that smaller models are improving. LLMs today are like early 1990s cell phones – big, clunky, and unreliable, but occasionally useful. Rescinding the AI executive order seems shortsighted, since requirements around bias and fairness are technical quality issues. A Tesla that only works in sunny weather, or a building security system that can’t detect people with dark skin, is simply inferior technology.

Katy: Washington is assessing cost-benefit ratios and testing use cases in sandbox environments before making large commitments. Many applications focus on environmental purposes like scanning eelgrass or wildfire detection. My privacy expertise becomes relevant when discussing whether vendors can use our data to train their models versus segregating it for our purposes only.

Anthony Townsend: What about simpler technologies like risk scoring algorithms and facial detection that may be problematic but are already baked into infrastructure projects?

Katy: We’re actively discussing risk scoring in state services, evaluating false positives and negatives to ensure people can access needed services. Washington has strict facial recognition laws requiring API testing and community consultation before deployment. These high-risk cases receive thorough impact assessment.

Santiago: It’s about finding the right tool for the job and applying consistent standards. We’ve improved translation services for Cape Verdean and Haitian Creole by fine-tuning LLMs with our existing data and verifying results with native speakers, achieving better results than commercial offerings.
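
[Editor’s note: As an illustration of the fine-tuning workflow Garces mentions, here is a hedged sketch of assembling a city’s verified translation pairs into the JSONL chat format that hosted fine-tuning services such as OpenAI’s accept. The sentence pair and file name are placeholders, not Boston’s data.]

    import json

    # Hypothetical parallel corpus: (English, Haitian Creole) pairs drawn
    # from translations already verified by native speakers.
    pairs = [
        ("Trash pickup is delayed by one day this week.",
         "<verified Haitian Creole translation>"),
        # ... more verified pairs ...
    ]

    with open("train.jsonl", "w", encoding="utf-8") as f:
        for english, creole in pairs:
            record = {
                "messages": [
                    {"role": "system",
                     "content": "Translate City of Boston notices from "
                                "English to Haitian Creole."},
                    {"role": "user", "content": english},
                    {"role": "assistant", "content": creole},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

[Holding some verified pairs out of the training file gives native-speaker reviewers a fixed evaluation set – the kind of verification Garces describes.]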

Anthony: What developments do you anticipate over the next 12 months?

Waldo: More change, hopefully with reduced power requirements, carbon emissions, and costs. DeepSeek breached a wall, and I hope we’ll see continued disruption and simplification.

Katy: From the policy side, I expect more formal requirements in legislation and state policy around AI adoption and use.

Santiago: The trend of adding more training data seems to be plateauing in performance gains. We’ll likely see better open-source models, smaller efficient models, and domain-specific improvements rather than revolutionary advances. We must maintain our values and remember we’re building technology to help employees be more effective, not replace them.

Tina: One piece of practical advice for cities with active AI procurements?

Waldo: Understand what you’re buying, otherwise you’re an easy mark.

Katy: Focus on ongoing monitoring of outputs and how you’ll fine-tune as models drift over time.

Santiago: Try free options with low-risk documents first to better understand how the technology works before making larger commitments. Being an informed customer is essential for governments to ensure they secure the right technology. 
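
[Editor’s note: Ruckle’s advice about monitoring outputs can be operationalized as a recurring regression check – re-running a fixed set of prompts with known-good expectations and alerting when agreement drops. A minimal sketch follows; call_model is a hypothetical stand-in for whatever procured service is in use, and the test cases and threshold are illustrative.]

    from datetime import date

    def call_model(prompt: str) -> str:
        """Hypothetical stand-in for the procured AI service's API."""
        return "Bulk trash pickup is the first Monday of each month."

    # Fixed regression set curated at acceptance time: each prompt is
    # paired with a phrase its answer is expected to contain.
    TEST_CASES = [
        ("When is bulk trash pickup in District 4?", "first monday"),
        ("What do I need for a resident parking permit?", "proof of residency"),
    ]

    def run_drift_check(threshold: float = 0.9) -> None:
        """Score the model against the regression set and flag drops."""
        passed = sum(
            expected in call_model(prompt).lower()
            for prompt, expected in TEST_CASES
        )
        score = passed / len(TEST_CASES)
        print(f"{date.today()}: {score:.0%} of checks passed")
        if score < threshold:
            print("ALERT: output quality dropped; pause and review")

    run_drift_check()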
