Technology now shapes core police work: dispatch and deployment, investigations, supervision, and administrative production (for example, reports and data capture). Over the last decade, agencies have also faced a structural change in how they pay for software. Many tools that were once purchased as one-time “capital” projects are now sold as subscriptions, shifting costs into recurring operating budgets.
That shift creates a practical problem for police executives: Subscription spending competes directly with staffing, overtime, and other routine operating needs. The procurement question is no longer only “Does this tool work?” It is also “Does it work well enough, for our problem, to justify a recurring obligation?” This short brief summarizes (1) what is changing in police technology costs, (2) why evaluation should precede scale-up, and (3) practical budgeting steps that reduce fiscal and governance risk.
The Landscape of Police Technology Costs
Police spending has trended upward even after the “defund” debate of the early 2020s.1 The drivers are mundane but consequential: higher personnel costs, pressure to compete with private-sector wages, and the steady layering of new technology. In inflation-adjusted terms, Bureau of Justice Statistics data show local government police-protection expenditures rose 26 percent per capita from 2000 to 2017.2 An Urban Institute analysis reports that from 1977 to 2021, state and local spending on police increased 189 percent.3 These are descriptive trends, but they set the fiscal context in which technology decisions now occur.
Within those larger budgets, personnel costs still dominate. Most spending pays for salaries and benefits, while roughly 5 percent is allocated to capital expenditures, the budget category that historically covered major technology purchases.4 That historical arrangement mattered because many systems (for example, records management) were bought under enterprise pricing: a defined up-front cost, often financed through bonds, followed by years of use until replacement. Enterprise purchasing fit police finance because these systems behaved like other capital assets, with periodic, predictable replacement cycles.
That alignment has weakened. Since the mid-2010s, many vendors have shifted to software as a service (SaaS), which converts technology from a one-time capital purchase into a recurring subscription. In practical terms, technology now competes directly with staffing, overtime, fuel, and other operating needs.
Contracts often make the problem worse.5 Police budgets are usually built on an annual cycle and anchored in historical line items. SaaS agreements are commonly multiyear, include auto-renewal clauses, and embed price escalators. When renewal dates and notice periods do not line up with the budget calendar, costs can rise with limited deliberation. In tight fiscal periods, that misalignment can become a governance problem and, quickly, a political one.
This matters because policing technology is expensive at scale. Annual costs for major systems are large enough to absorb meaningful shares of discretionary spending. The practical implication is straightforward: Stewardship now requires more than “buy and deploy.” It requires credible evaluation before agencies lock themselves into recurring obligations.
The Need for Proper Evaluation of Policing Technology
Walk the exposition floor at an IACP conference and the market signal is unmistakable: policing is being sold technology at scale. Most vendors in the exhibit hall are offering some form of software, analytics, sensors, or integrated platforms, all competing for agency contracts. The products are often impressive in demonstration. Leaders also face a public that is broadly supportive of, and increasingly expects, agency use of these technologies for both internal management and external crime fighting.6 The claims are familiar: reduced crime, faster investigations, improved efficiency, and safer officers.
The core problem is evidentiary, not aesthetic. Many vendor claims are not backed by evidence that would survive even minimal scrutiny, much less independent evaluation.7 When credible studies exist, effects are frequently contingent on setting, implementation, and baseline conditions. A tool that produces measurable gains in one jurisdiction may do little—or create new costs—in another. For practitioners, the implication is simple: agencies should not infer effectiveness from marketing materials or peer adoption. They should treat each major purchase as a testable intervention and ask how it performs in their own operating environment.
Sherman’s Targeting, Testing, and Tracking framework (the “Triple-T”) formalizes that commonsense approach for policing strategy, including technology.8 It is especially relevant under SaaS contracting, where agencies do not just buy a tool; they take on a recurring obligation. Triple-T forces discipline at three points that matter for budgets and outcomes: target technology to a specific, high-priority problem; test whether it produces measurable improvement relative to business as usual; and track performance over time to decide whether continued payment is justified.
The sections that follow apply Triple-T, along with current research and practitioner experience, to a procurement process that is rigorous enough to be credible and simple enough to be used by agencies of any size.
Targeting: Start with the Problem, Not the Product
Technology procurement should be treated as problem-solving, not shopping. The unit of analysis is the problem the agency is trying to solve, not the attractiveness of a vendor’s tool. Leaders should define the problem first, then assess whether the constraint is primarily technical (amenable to a tool) or adaptive (driven by supervision, training, incentives, workflow, or policy). Only after that diagnostic step should the agency ask whether technology is a plausible lever.
When agencies skip this step, they do not just risk buying the wrong product. They risk committing operating dollars to a recurring contract that produces little public safety or administrative value. Targeting is therefore the first, least costly, and often most consequential stage of procurement.
Before considering any purchase, leadership should be able to answer four questions in plain language:
- What problem are we trying to solve?
- How large or frequent is the problem?
- What data show the problem exists (and where it is concentrated)?
- Is technology a realistic mechanism for improvement, given staffing, policies, and operating environment?
Problem-oriented policing frameworks make this step tractable because they force specificity. For public safety problems, Jerry Ratcliffe’s VOLTAGE framework is a practical scaffold: It directs attention to victims, offenders, locations, times, attractors, groups or gangs, and enhancers. The point is not terminology. The point is disciplined diagnosis: Identify who is harmed, who is driving the harm, where and when it concentrates, what draws activity to the area, and what conditions amplify risk. Administrative problems require the same logic. Define the bottleneck, measure baseline performance, and identify the operational source of delay or error before proposing a technological fix.
Targeting pays off in three ways. First, it increases the probability that implementation succeeds because the tool is matched to a specific mechanism. Second, it reduces misaligned purchases by clarifying what success would look like and what would count as failure. Third, it strengthens the agency’s ability to explain the expenditure to funders and taxpayers in terms that can be evaluated. Under SaaS contracting, that discipline matters even more: Multi-year agreements with renewal clauses are harder to unwind than past capital purchases. The best time to avoid a costly, low-value contract is before the first signature.
Testing: What Does the Research Say and Can You Test It?
Once the agency has defined the problem and technology appears to be a plausible mechanism, the next step is testing. That means two things: (1) read what credible research exists, and (2) run a local evaluation before committing real money. Practitioner-oriented reviews caution that artificial intelligence (AI) systems can shift error patterns, amplify implementation variation, and create governance burdens that are easy to miss if agencies evaluate only vendor demonstrations rather than measurable outcomes.9
The evidence base for police technology is growing, but it is uneven. For newer tools, peer-reviewed studies may be sparse. Even when studies exist, results often do not travel cleanly across agencies because implementation quality, baseline conditions, policies, and community context vary. A second complication is that published evaluations sometimes test a tool against an outcome different from the one the purchasing agency cares about. Leaders should therefore ask not only “Is there evidence?” but “Is there evidence on our problem and our outcome?”
Testing matters because research results and vendor claims often diverge. Vendors sell products; they do not run neutral evaluations. Impacts can be overstated, and many technologies show mixed results across public safety, investigative, and administrative outcomes. A procurement process that treats marketing as evidence is a recipe for recurring costs with ambiguous returns.
The distinction between tools and outcomes is not academic. It is the difference between buying a capability and buying a solution. Consider gunshot detection. If the agency’s primary objective is reducing gun violence homicides, existing research suggests the technology by itself is unlikely to produce large reductions.10 If the objective is faster notification and better evidence recovery at scenes, the same technology may yield measurable operational value. The technology did not change; the outcome and the mechanism did.
The same logic applies to newer AI applications. In one large, rigorous evaluation of AI tools for police report writing, the technology did not improve report-writing speed.11 In contrast, AI-assisted review of body-worn camera footage, when paired with officer self-assessment or supervisor mediation, was associated with fewer negative community-police encounters in a large urban agency and more positive encounters in a rural sheriff’s office.12 The lesson is not that “AI works” or “AI fails.” The lesson is that the effects depend on what the tool is being used to change and how it is integrated into supervision and workflow.
How to Test in Your Agency
Agencies should go beyond having a few early adopters try a tool and report impressions. User feedback helps with implementation details, but it does not estimate impact. A useful evaluation does not need to be complex. It needs to be structured.
A practical evaluation plan has seven steps (a worked analysis sketch follows the list):
- Define the problem and the outcome(s) the agency intends to change.
- Specify a treatment group (where the tool will be used) and a comparison group (business as usual).
- Ensure the groups are similar enough for an apples-to-apples comparison.
- Pre-plan data collection (what will be measured, by whom, and when).
- Collect and analyze the data.
- Interpret results against the original problem and mechanism.
- Document findings and decision rules.
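For agencies with access to an analyst, the comparison in steps 2 through 5 does not require specialized software. The sketch below is a minimal illustration in Python, not a prescribed method; the file name, column names, and outcome measure are assumptions standing in for whatever the agency defined in step 1. It computes a simple difference-in-differences estimate: how much the treatment group changed from baseline to pilot, beyond the change in the comparison group over the same period.

```python
import pandas as pd

# Hypothetical input: one row per unit (e.g., district or squad) per month, with
#   group   - "treatment" (tool in use) or "comparison" (business as usual)
#   period  - "pre" (baseline months) or "post" (pilot months)
#   outcome - the measure defined in step 1 (e.g., clearances, response minutes)
df = pd.read_csv("pilot_outcomes.csv")

# Mean outcome for each group in each period
means = df.groupby(["group", "period"])["outcome"].mean()

# Change from baseline to pilot within each group
treat_change = means[("treatment", "post")] - means[("treatment", "pre")]
comp_change = means[("comparison", "post")] - means[("comparison", "pre")]

# Difference-in-differences: the change in the treatment group beyond
# what the comparison group experienced over the same months
effect = treat_change - comp_change
print(f"Treatment change:  {treat_change:+.2f}")
print(f"Comparison change: {comp_change:+.2f}")
print(f"Estimated effect:  {effect:+.2f}")
```

Whether the estimated effect is large enough to justify the subscription is a leadership judgment, but a structured comparison of this kind keeps that judgment anchored to the outcome the agency defined before the pilot began.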
Partnerships can make this feasible. Universities are often more accessible than police leaders assume, and the National Institute of Justice’s Law Enforcement Advancing Data and Science (LEADS) Scholar program is another pathway for practitioner-researcher collaboration.13 The goal is not academic publication. The goal is credible internal evidence for a real budget decision.
Turning Results into Decisions
Evidence is useful only if it feeds a decision. Policing researcher Dr. Scott M. Mourtgos has proposed an executive framework that emphasizes estimating the probability a strategy will succeed in a specific context rather than forcing a binary “works/does not work” conclusion.14 Applied to technology, the idea is straightforward: establish a baseline, run a pilot alongside a comparison condition, estimate the difference, and translate that difference into a decision about whether to proceed, adjust, stall, or scrap (PASS). By applying this framework, executives get a confidence-backed assessment of whether the tool is likely to produce the intended outcome in their environment, at their scale, under their constraints.
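To make that translation concrete, the sketch below is an illustrative approximation, not Mourtgos’s published method; the pilot numbers, the minimum worthwhile improvement, and the decision thresholds are all assumptions an agency would set for itself. It resamples pilot results to estimate the probability that the tool produced at least a worthwhile improvement, then maps that probability onto a PASS-style recommendation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pilot results: the targeted outcome per unit-month in each condition
treatment = np.array([14, 11, 16, 12, 15, 13, 17, 12])   # tool in use
comparison = np.array([11, 10, 12, 13, 9, 11, 12, 10])   # business as usual
min_worthwhile = 1.0  # smallest improvement leadership considers worth paying for

# Bootstrap the difference in means to estimate the probability of a
# worthwhile benefit in this agency's data (not a universal "does it work")
diffs = [
    rng.choice(treatment, len(treatment)).mean()
    - rng.choice(comparison, len(comparison)).mean()
    for _ in range(10_000)
]
p_benefit = np.mean(np.array(diffs) >= min_worthwhile)

# Illustrative decision thresholds (assumptions, to be set by the agency)
if p_benefit >= 0.75:
    decision = "Proceed"
elif p_benefit >= 0.50:
    decision = "Adjust (refine deployment, keep piloting)"
elif p_benefit >= 0.25:
    decision = "Stall (hold off; gather more data before renewal)"
else:
    decision = "Scrap"
print(f"P(benefit >= {min_worthwhile}): {p_benefit:.2f} -> {decision}")
```

The exact thresholds matter less than the shift in framing: instead of asking whether the tool “works” in the abstract, leadership asks how likely it is to deliver a benefit worth paying for in this agency.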
A tough, disciplined test improves return-on-investment thinking. At the end of the pilot, leaders can ask a hard but necessary question: Does the measured value justify the full cost, including training, staffing time, storage, and the recurring subscription?
Vendors often offer trials, and short-term pilot contracts are common. Even when a pilot ends with “no,” that outcome is usually far cheaper than locking the agency into a multi-year contract without credible evidence of benefit. The key is discipline: define success up front and keep the criteria fixed. Redefining success midstream converts evaluation into rationalization and erodes trust in procurement.
Tracking Performance Over Time
Under SaaS contracting, technology is no longer a one-time purchase followed by a long useful life. It is a recurring operating obligation that typically grows over time. The practical implication is simple: Agencies should treat technology funding as conditional on continued performance. If a tool no longer produces measurable value, sunsetting it is often the fiscally responsible choice.
Tracking works only if metrics are defined before implementation. Those metrics should map directly to the original problem the agency targeted, while also capturing predictable spillovers such as legal risk, compliance burden, and community concerns. Agencies should monitor both outputs (use and compliance) and outcomes (value). A workable set of key performance indicators answers five questions:
- Use: Is the technology being used, and, if so, how frequently?
- Impact: In how many cases does the technology change decisions, productivity, or results in a measurable way?
- Legal exposure: Has the technology’s use been accepted in criminal proceedings, and what discovery or evidentiary burdens does it create?
- Policy compliance: Is the technology being used appropriately and consistently with training and policy?
- Legitimacy: What community concerns, complaints, or reputational risks have emerged?
Routine review of these indicators is not bureaucratic overhead. It is the mechanism by which an agency can justify ongoing costs and identify low-value subscriptions before they become entrenched line items.
Tracking should also include contract governance. Agencies should maintain a complete inventory of SaaS tools, with renewal dates, notice periods, and escalation terms. Auto-renewals often occur with limited visibility and can hit mid-budget cycle. A simple alert system that flags renewals months in advance gives leadership time to review performance data and decide whether renewal is justified.
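Little infrastructure is required. The sketch below is a minimal illustration in Python; the inventory file, column names, and alert window are assumptions, and a shared spreadsheet with the same fields serves the same purpose. It flags any subscription whose non-renewal notice deadline falls within the chosen planning window.

```python
from datetime import date, timedelta
import csv

ALERT_WINDOW_DAYS = 180  # how far ahead leadership wants warning (an assumption)

# Hypothetical inventory: one row per SaaS contract with columns
#   tool, vendor, renewal_date (YYYY-MM-DD), notice_days, annual_cost
with open("saas_inventory.csv", newline="") as f:
    contracts = list(csv.DictReader(f))

today = date.today()
for c in contracts:
    renewal = date.fromisoformat(c["renewal_date"])
    # The operative deadline is the last day the agency can give notice
    # of non-renewal, not the renewal date itself
    notice_deadline = renewal - timedelta(days=int(c["notice_days"]))
    if today <= notice_deadline <= today + timedelta(days=ALERT_WINDOW_DAYS):
        print(
            f"REVIEW: {c['tool']} ({c['vendor']}): notice due {notice_deadline}, "
            f"renews {renewal}, annual cost ${float(c['annual_cost']):,.0f}"
        )
```

Pairing each flagged contract with the tracking metrics above gives command staff what they need to decide, before the notice deadline passes, whether renewal is justified.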
Budget analysts can track dates; they cannot decide whether a tool remains worth paying for. That responsibility sits with command staff. Clear communication between leadership and finance is what turns renewal management into deliberate decision-making rather than default spending.
Budgeting for Success
Many emerging technologies offer real capability. The issue is not whether tools can do things. The issue is whether they solve the agency’s prioritized problem at a cost the organization can sustain. A clear understanding of payment models, paired with targeting, testing, and tracking, should precede any decision about long-run funding.
When a tool demonstrably delivers value on the outcomes the agency targeted, budgeting it as an operating expense is appropriate. That choice matches the reality of SaaS and reduces the temptation to fund recurring obligations with one-time money. The condition is ongoing tracking: Recurring costs should be paired with recurring performance review.
Agencies should also look for ways to reduce fiscal exposure through shared funding when the operational benefits are shared. Integrated video systems that support investigations, for example, may also support school safety functions such as threat response. In those cases, a formal cost-sharing agreement among police, schools, and other city agencies can align payment with benefits. Regional cooperation is another option. When a capability can be used across jurisdictions, shared procurement and shared services can reduce per-agency cost while expanding coverage.
Even with cost-sharing, many technologies will remain operating expenses. In those cases, Triple-T is the defensible fiscal story: The agency can show it bought technology for a clearly defined purpose, tested it against measurable outcomes, and continues to fund it only when performance justifies the cost. That is what accountability looks like in practice, both for funders and for the public.
Conclusion
Technology procurement is now inseparable from fiscal governance. Under subscription contracting, the default outcome is not a single purchase but a standing obligation that grows unless leadership actively manages it. The practical response is discipline, not pessimism. Agencies should treat each major tool as a problem-specific intervention: define the problem precisely, test whether the tool changes the outcomes that matter in the local setting, and track performance so renewals are earned rather than automatic. Done well, Triple-T does two things that police executives need most. It reduces the odds of locking the agency into high-cost systems with weak returns, and it produces a defensible narrative for funders and the public that links spending to measurable performance rather than to promises.
Notes:
1Tate Fegley and Ilia Murtazashvili, “From Defunding to Refunding Police: Institutions and the Persistence of Policing Budgets,” Public Choice 196, no. 1 (2023): 123–140.
2Emily Buehler and Kevin Scott, State and Local Government Expenditures on Police Protection in the U.S., 2000–2017, Statistical Brief (Bureau of Justice Statistics, 2020).
3“Criminal Justice Expenditures: Police, Corrections, and Courts,” State and Local Backgrounders (Urban Institute).
4“Criminal Justice Expenditures”; Lindsay Miller, Jessica Toliver, and Police Executive Research Forum, Implementing a Body-Worn Camera Program: Recommendations and Lessons Learned (Office of Community Oriented Policing Services, 2014).
5Recent coverage suggests this problem is felt in the fire safety sector as well. Mike Baker, “Private Equity Finds a New Source of Profit: Volunteer Fire Departments,” The New York Times, December 14, 2025.
6Kaylyn Jackson Schiff et al., “Institutional Factors Driving Citizen Perceptions of AI in Government: Evidence from a Survey Experiment on Policing,” Public Administration Review 85, no. 2 (2025): 451–467.
7Schiff et al., “Institutional Factors Driving Citizen Perceptions of AI in Government.”
8Lawrence W. Sherman, “Targeting, Testing, and Tracking: The Cambridge Assignment Management System of Evidence Based Police Assignment,” in Evidence Based Policing: An Introduction, eds. Renée Mitchell and Laura Huey (Policy Press, 2018), 15–28.
9Council on Criminal Justice, The Implications of AI for Criminal Justice: Key Takeaways from a Convening of Leading Stakeholders (2024).
10Eric L. Piza et al., The Impact of Gunshot Detection Technology on Gun Violence in Kansas City and Chicago: A Multi-Pronged Evaluation (National Institute of Justice, 2024); Jerry H. Ratcliffe et al., “A Partially Randomized Field Experiment on the Effect of an Acoustic Gunshot Detection System on Police Incident Reports,” Journal of Experimental Criminology 15, no. 1 (2019): 67–76.
11Ian T. Adams et al., “No Man’s Hand: Artificial Intelligence Does Not Improve Police Report Writing Speed,” Journal of Experimental Criminology 22, no. 1 (2026): 137–154.
12Ian T. Adams, “Automation and Artificial Intelligence in Police Body-Worn Cameras: Experimental Evidence of Impact on Perceptions of Fairness Among Officers,” CrimRxiv, February 10, 2024.
13“NIJ’s Law Enforcement Advancing Data and Science (LEADS) Scholars Programs,” National Institute of Justice.
14Scott M. Mourtgos, “Probabilities Over p-Values: A Decision Framework for Evidence-Based Policing,” Justice Evaluation Journal (2026): 1–19.
Please cite as
Matthew Barter and Ian Adams, “Technology, Trust, and Taxpayer Dollars: A Smarter Approach to Police Technology Procurement,” Police Chief Online, April 15, 2026.


