AGA Presentation

Kirby Arinder
2/15/2017

Introduction

Thanks for having me!

I was told I had wide latitude for my topic.

So…

Westphalia, the Enlightenment, and the Twenty-First Century Global Order

Contemporary geopolitics can be understood as the playing out of a dialogue, whose participants might be:

  • Locke and Richelieu
  • Rights and interests
  • States and individuals
  • (Moral) realism and pragmatism

But seriously, folks.

I assume if you're asking me, you want to talk about data!

In particular: data used for public evaluation.

So let's take two…

Data-Driven Metrics: Holding Your Agency Accountable in a World of Performance-Based Budgeting

MS Code § 27-103-159 (2014) revitalized a 1994 performance-based budgeting initiative.

The goal of that initiative was to usher in a new age of data-driven decision-making in government…

But in the short term, it means a lot of measurement and reporting.

Wait, that sounds like work.

Well, yes. And often work for you, or someone like you.

So today I want to talk about this task!

Specifically...

I'm going to try to cover:

  • Why is the effort worth it?
  • What things do we measure?
  • What measurements do we use?
  • How do we know when we're successful?

So let's get started!

I. Why do we want to do this, anyway?

It shouldn't be news: Government costs money, and it is at least intended to do stuff.

As both citizens and civil servants, we want to know that it's doing what it's supposed to!

How do we do that?

With data!

Which is to say, relevant, actionable information about your agency.

Ultimately, that's what the PBB initiative wants from you!

Where will I find the time?

(Or staff, or resources?)

Well, that's a trick question. The summative interests of the legislature and the formative interests of your agency align here!

Gathering, maintaining, and evaluating relevant data is not an option for the good times; it's a core function.

Section I summary

Your agency has a purpose, and costs money.

In order to know whether it's achieving its purpose at a reasonable cost, you must monitor relevant data.

So the PBB initiative simply formalizes goals and centralizes processes to which you're already committed qua government agency.

II. So what things should we measure?

We've said we want relevant, actionable information about your agency.

For now, let's focus on the second part: actionability.

The need for your data to inform action dictates its structure.

Actionable data and specificity

Actionable data enables you to:

  • correct problems and
  • perpetuate successes.

This requires specificity; you need to measure not just your whole agency, but its parts.

New Year's resolutions and your agency

Think of it like New Year's resolutions. I want to be healthier, so I:

  • start cross-training
  • eat only organic food
  • start a regimen of nutritional supplements.

All of which is great, but expensive!

New Year's resolutions and your agency

If my budget can't handle all that – or even if it can, but I want to maximize the effectiveness of my personal spending – I need some way to track my results not just collectively, but by resolution.

And the same applies to your agency! Which is why the performance-based budgeting initiative emphasizes the program inventory.

Program inventory

So what does this inventory contain?

It's essentially a mechanism for tracking how dollars turn into results.

Which, in turn, dictates its level of detail!

Program inventory

Your program inventory should follow the flow of money into your organization.

Every time money splits, your inventory does too – like an outline! (A short sketch follows the list below.)

It stops splitting where:

  • money is spent on
  • discrete activities that should lead to
  • discrete results.
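
To make this concrete, here's a minimal Python sketch of such an inventory. Everything in it (the agency, the divisions, the dollar figures) is hypothetical, invented purely for illustration:

    # A program inventory as a tree that follows the flow of money.
    # All names and dollar figures below are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Program:
        name: str
        budget: float                  # dollars flowing into this node
        children: list["Program"] = field(default_factory=list)

        def is_leaf(self) -> bool:
            # A leaf is where the money stops splitting: a discrete
            # activity that should lead to a discrete result.
            return not self.children

    # Every time money splits, the inventory splits, like an outline.
    inventory = Program("Hypothetical Agency", 1_000_000, [
        Program("Licensing Division", 600_000, [
            Program("License processing", 450_000),
            Program("Applicant outreach", 150_000),
        ]),
        Program("Administrative Support", 400_000),
    ])

    def leaves(program: Program):
        """Yield the leaf programs: the level at which we measure."""
        if program.is_leaf():
            yield program
        else:
            for child in program.children:
                yield from leaves(child)

    for leaf in leaves(inventory):
        print(f"{leaf.name}: ${leaf.budget:,.0f}")

The leaves are exactly where discrete activities should produce discrete results, and that's where your measurements will attach.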

Section II summary

So: Actionable data demands specificity.

  • Specificity means a program inventory at the level where money becomes discrete outcomes.

Looking backward:

  • You already want one of these, for the reasons mentioned in I.

Looking forward:

  • You need more than just an inventory; you need good measures.

III. What kind of measurements do we need?

So now we have an inventory of programs. How do we measure them?

A truism that's still worth repeating:

What measurements are appropriate for a program depends on what kind of program it is!

So what kinds of programs are there?

Let's draw some distinctions. First: between

  • Direct programs and
  • Support programs

Direct programs aim, well, directly at achieving an organizational goal!

Support programs are (theoretically) necessary for that achievement, but only instrumentally.

And a second distinction:

Between types of direct programs:

  • Intervention programs and
  • Deterministic programs

Intervention programs aim to bring about a goal by means of some causally effective set of actions.

Deterministic programs achieve a goal by the very fact of their existence; their actions are the goal, effectively.

And one more distinction (not about programs)...

Among types of measurements:

  • inputs
  • outputs
  • outcomes
  • efficiencies

Definitions

Inputs: Resources consumed to carry out program activities. Usually convertible to dollars.

Outputs: Activities carried out and goods and services delivered by an agency to meet its goals.

Outcomes: The effects of outputs on the agency's goals.

Efficiencies: Ratios of inputs to outputs or (indirectly) outcomes.
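
To see how these four fit together, here's a tiny worked example in Python. All of the figures (and the hypothetical licensing program they describe) are invented for illustration:

    # The four measurement types for one hypothetical program.
    # Every figure below is invented.

    inputs = 450_000    # resources consumed, expressed in dollars
    outputs = 9_000     # e.g., license applications processed
    outcomes = 8_100    # e.g., applications resolved correctly (the goal)

    # Efficiencies: ratios of inputs to outputs or (indirectly) outcomes.
    cost_per_output = inputs / outputs     # $50.00 per application processed
    cost_per_outcome = inputs / outcomes   # about $55.56 per correct resolution

    print(f"Cost per output:  ${cost_per_output:,.2f}")
    print(f"Cost per outcome: ${cost_per_outcome:,.2f}")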

How does all this relate to measurements?

Only some of what you do is directly related to your agency's goals. A large portion may be instrumental activity, and that's okay.

Direct programs should be measured by outcomes – not outputs alone! Efficiencies are also important.

Intervention and deterministic programs measure efficiencies differently: an intervention's efficiency (indirectly) relates inputs to the outcomes its actions cause, while a deterministic program's actions are its results, so input/output ratios suffice.

Support programs are measured primarily by input/output efficiencies.

Section III summary

So the way you measure a program, after you've identified it, depends on what kind of program it is!

Classifying a program should be straightforward with your inventory from II.

IV. How do we know when we've succeeded?

Now we get to the hard part.

Your self-assessment shouldn't just present numbers in a vacuum – even if they're good numbers!

You need benchmarks – goals for those numbers.

But be careful...

Unrealistic benchmarks are worse than useless; they're harmful.

An arbitrary benchmark set too high means false negatives, and wastes resources chasing rainbows.

An arbitrary benchmark set too low means false positives, and underserves your agency's goals.

So how do we avoid the problem?

Well, this really needs to be handled case-by-case.

But there's one piece of advice I can't overemphasize:

Benchmarks should be discovered or derived, not created.

What does that mean?

Your benchmark shouldn't come from expert opinion alone.

It should be measured! Possibly by:

  • Rigorous statistical extrapolation
  • Meaningful real-world comparisons
  • Predetermined superordinate goals

For external use, your benchmark should be presented alongside or incorporated into your program performance measure.
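
As one illustration of "discovered or derived, not created": here's a minimal sketch that sets next year's target by extrapolating a fitted trend from entirely hypothetical historical data, using statistics.linear_regression from Python 3.10+:

    # Deriving a benchmark by statistical extrapolation.
    # The historical outcome figures below are hypothetical.

    from statistics import linear_regression  # Python 3.10+

    years = [2012, 2013, 2014, 2015, 2016]
    outcomes = [7200, 7500, 7700, 8000, 8100]  # e.g., correct resolutions per year

    # Fit a linear trend to the history...
    slope, intercept = linear_regression(years, outcomes)

    # ...and extrapolate one year forward as the benchmark, instead of
    # inventing a figure that risks false positives or false negatives.
    benchmark = slope * 2017 + intercept
    print(f"Derived 2017 benchmark: {benchmark:,.0f}")  # 8,390

A real derivation would demand more data and more model checking, of course; the point is just that the target falls out of the measurements, not the other way around.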

Section IV summary

Performance measures need benchmarks.

The achievement of a benchmark should indicate a measurable, relevant real-world phenomenon.

Overall summary

I. This assessment should be part of your ordinary tasks, not an extra task.

II. It demands specificity – and thus a program inventory.

III. Direct programs are measured primarily by outcomes and efficiencies, support programs by input/output efficiencies alone!

IV. You need non-arbitrary performance benchmarks.

That's all, folks!

Questions?