Kirby Arinder
2/15/2017
Thanks for having me!
I was told I had wide latitude for my topic.
So…
Contemporary geopolitics can be understood as the playing out of a dialogue whose participants might be…
I assume if you're asking me, you want to talk about data!
In specific: Data used for public evaluation.
So let's take two…
MS Code § 27-103-159 (2014) revitalized a 1994 performance-based budgeting initiative.
The goal of that initiative was to usher in a new age of data-driven decision-making in government…
But in the short term, it means a lot of measurement and reporting.
Well, yes. And that often means work for you, or someone like you.
So today I want to talk about this task!
I'm going to try to cover:
I. Why this assessment is a core function, not an extra task
II. Building a program inventory
III. Matching measures to program types
IV. Setting non-arbitrary benchmarks
So let's get started!
It shouldn't be news: Government costs money, and it is at least intended to do stuff.
As both citizens and civil servants, we want to know that it's doing what it's supposed to!
With data!
Which is to say, relevant, actionable information about your agency.
Ultimately, that's what the PBB initiative wants from you!
(Or your staff? Or your resources?)
Well, that's a trick question. The summative interests of the legislature (did it work?) and the formative interests of your agency (how can it work better?) align here!
Gathering, maintaining, and evaluating relevant data is not an option for the good times; it's a core function.
Your agency has a purpose, and costs money.
In order to know whether it's achieving its purpose at a reasonable cost, you must monitor relevant data.
So the PBB initiative simply formalizes goals and centralizes processes to which you're already committed qua government agency.
We've said we want relevant, actionable information about your agency.
For now, let's focus on the second part: actionability.
The need for your data to inform action dictates its structure.
Actionable data enables you to:
- See which parts of your operation are working and which aren't
- Direct resources toward what works and away from what doesn't
This requires specificity; you need to measure not just your whole agency, but its parts.
Think of it like New Year's resolutions. I want to be healthier, so I:
- Join a gym
- Buy better groceries
- Sign up for a fitness class
All of which is great, but expensive!
If my budget can't handle all that (and even if it can, I want to maximize the effectiveness of my personal spending), I need some way to track my results not just collectively, but resolution by resolution.
And the same applies to your agency! Which is why the performance-based budgeting initiative emphasizes the program inventory.
So what does this inventory contain?
It's essentially a mechanism for tracking how dollars turn into results.
Which in turn, dictates its level of detail!
Your program inventory should follow the flow of money into your organization.
Every time money splits, your inventory does too – like an outline!
It stops splitting where:
- Further division wouldn't inform any decision, or
- The dollars fund a single, unified activity.
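Purely as illustration, here's a minimal sketch of that outline structure in code. Everything in it (the language choice, the program names, the dollar figures) is hypothetical:

```python
# Hypothetical sketch: a program inventory as a tree that follows the
# flow of money. Names and figures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Program:
    name: str
    budget: float                          # dollars flowing into this node
    children: list["Program"] = field(default_factory=list)

    def split(self, *parts: "Program") -> "Program":
        # Every time money splits, the inventory splits too.
        assert sum(p.budget for p in parts) == self.budget, \
            "children must account for all of the parent's dollars"
        self.children.extend(parts)
        return self

# One agency's dollars split into two programs; one splits again.
# Leaves are where further division wouldn't inform any decision.
agency = Program("Agency", 1_000_000).split(
    Program("Licensing", 400_000),
    Program("Inspections", 600_000).split(
        Program("Field inspections", 450_000),
        Program("Lab analysis", 150_000),
    ),
)
```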
So: Actionable data demands specificity.
Looking backward: every program traces to the dollars that fund it.
Looking forward: every program is a unit whose results you can measure.
So now we have an inventory of programs. How do we measure them?
A truism that's still worth repeating:
What measurements are appropriate for a program depends on what kind of program it is!
Let's draw some distinctions. First: between direct and support programs.
Direct programs aim, well, directly at achieving an organizational goal!
Support programs are (theoretically) necessary for that achievement, but only instrumentally.
Second: between two types of direct programs:
Intervention programs aim to bring about a goal by means of some causally effective set of actions.
Deterministic programs achieve a goal by the very fact of their existence; their actions are the goal, effectively.
Third: among types of measurements:
Inputs: Resources consumed to carry out program activities. Usually convertible to dollars.
Outputs: Activities carried out and goods and services delivered by an agency in pursuit of its goals.
Outcomes: The effects of those outputs on the goals themselves.
Efficiencies: Ratios of inputs to outputs or (indirectly) outcomes.
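To make those four measure types concrete, here's a back-of-the-envelope sketch for one hypothetical program; every figure is invented:

```python
# Hypothetical figures for one program, one fiscal year.
inputs = 250_000           # Inputs: resources consumed, in dollars
outputs = 1_250            # Outputs: inspections actually performed
outcomes = 900             # Outcomes: violations corrected as a result

# Efficiencies: ratios of inputs to outputs, or to outcomes.
cost_per_output = inputs / outputs     # $200 per inspection
cost_per_outcome = inputs / outcomes   # ~$278 per violation corrected
```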
Only some of what you do is directly related to your agency's goals. A large portion may be instrumental activity, and that's okay.
Direct programs should be measured by outcomes – not outputs alone! Efficiencies are also important.
Interventions and deterministic programs measure efficiencies differently: an intervention's key ratio is inputs to outcomes, while a deterministic program's outputs just are its outcomes, so inputs to outputs will do.
Support programs are measured primarily by input/output efficiencies.
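One way to restate those pairings is as a lookup table. The categories are this talk's; the code is just my paraphrase:

```python
# Which measures matter most, by program type (paraphrasing the talk).
PRIMARY_MEASURES = {
    "direct: intervention": ["outcomes", "input-to-outcome efficiency"],
    "direct: deterministic": ["outputs (which are the outcomes)",
                              "input-to-output efficiency"],
    "support": ["input-to-output efficiency"],
}
```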
So the way you measure a program, after you've identified it, depends on what kind of program it is!
Classifying a program should be straightforward with your inventory from II.
Now we get to the hard part.
Your self-assessment shouldn't just present numbers in a vacuum – even if they're good numbers!
You need benchmarks – goals for those numbers.
Unrealistic benchmarks are worse than useless; they're harmful.
An arbitrary benchmark set too high means false negatives, and wastes resources chasing rainbows.
An arbitrary benchmark set too low means false positives, and underserves your agency's goals.
Well, this really needs to be handled case-by-case.
But there's one piece of advice I can't overemphasize:
Benchmarks should be discovered or derived, not created.
Your benchmark shouldn't come from expert opinion alone.
It should be measured! Possibly by:
- Baselining your own historical performance
- Comparing with similar agencies or programs
- Drawing on published research
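For instance, here's a minimal sketch of a benchmark derived from your own historical measurements. The method (a median of five prior years) and every number are hypothetical:

```python
import statistics

# Hypothetical history: cost per outcome for the last five fiscal years.
history = [310.0, 295.0, 288.0, 301.0, 279.0]

# Derived, not created: the benchmark comes from measurement.
benchmark = statistics.median(history)   # 295.0

def meets_benchmark(measured_cost: float) -> bool:
    # Lower cost per outcome is better in this example.
    return measured_cost <= benchmark

print(meets_benchmark(283.0))   # True: this year beat the derived benchmark
```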
For external use, your benchmark should be presented alongside or incorporated into your program performance measure.
Performance measures need benchmarks.
The achievement of a benchmark should indicate a measurable, relevant real-world phenomenon.
I. This assessment should be part of your ordinary tasks, not an extra task.
II. It demands specificity – and thus a program inventory.
III. Direct programs are measured primarily by outcomes and efficiencies, support programs primarily by efficiencies alone!
IV. You need non-arbitrary performance benchmarks.
Questions?