Too often we spend time coming up with corporate and organizational values that sound nice but are never really used. I recently had the chance to interview Alex Bard, the CEO of Campaign Monitor. Under his leadership, the company actively uses its values to drive both day-to-day and strategic decision-making. Read more about how here.
Performance metrics matter. We know this and yet we continue to track things that have little to do with our goals. The better alternative is to make sure that you're measuring progress against the things that matter most to you and your future. Here are four steps to think through that process.
I'm trying to move away from outputs-- such as number of contracts-- and focus more on outcomes. Did I create a connection with a client? Did a client come back after the project to ask more questions or start another task? Would we both want to work together again?
What do you track on a regular basis?
INCREASING BUY-IN BY SHIFTING FROM ROLE-BASED TO ATTITUDE-BASED EMPLOYEE ENGAGEMENT
Gaining buy-in is something we often treat like a "check the box" exercise when, in fact, it's much more complex and costly.
Whether you think in terms of effort or dollars or both, what makes some projects so hard is the anticipation of resistance you will meet along the way. But take a breath, get some coffee and consider how you might apply these three things to improve communication and buy-in today.
What’s the end objective? Shift from role-based (project manager, engineer, HR Director, etc.) to attitude-based messages to increase buy-in by more precisely addressing each group’s unique concerns and challenges.
How might this shift from role-based to attitude-based outreach work?
Role-based communication is critical when communicating job requirements, but it is not precise enough to address an individual’s unique concerns.
To more precisely target outreach efforts, program managers should segment their stakeholder community by creating attitude categories. The matrix above includes two dimensions: 1) perceived program value (high or low) and 2) implementation pace (early adopter, passive supporter, or resistant).
- Refine the categories to mirror the categories of concerns
- Estimate the percentage of staff within the broad stakeholder community that fall into each bucket. This will provide some focus and sense of areas of importance
- Develop messages and outreach opportunities that match the needs of each attitude category
- Roll-out approach, recognizing there will be multiple messages released in parallel
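The segmentation steps above can be sketched in code. This is a minimal, hypothetical illustration in Python: the two dimensions come from the matrix described in the text, while the sample stakeholder data and the combined category labels are my own assumptions for demonstration.

```python
# Hypothetical sketch of attitude-based stakeholder segmentation.
# The dimension values mirror the matrix in the text; the sample
# stakeholder data below is illustrative, not real program data.

from collections import Counter

VALUES = ("high", "low")                                   # perceived program value
PACES = ("early adopter", "passive supporter", "resistant")  # implementation pace

def attitude_category(perceived_value, pace):
    """Combine the two dimensions into a single attitude bucket."""
    assert perceived_value in VALUES and pace in PACES
    return f"{perceived_value} value / {pace}"

# Illustrative stakeholder community (assumed data)
stakeholders = [
    ("high", "early adopter"),
    ("high", "passive supporter"),
    ("low", "resistant"),
    ("high", "early adopter"),
    ("low", "passive supporter"),
]

# Estimate the share of staff falling into each bucket
buckets = Counter(attitude_category(v, p) for v, p in stakeholders)
total = sum(buckets.values())
for bucket, count in buckets.most_common():
    print(f"{bucket}: {count / total:.0%} of community")
```

Even a rough tally like this gives program managers a sense of where the bulk of the community sits before messages are drafted.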
EXAMPLES OF ATTITUDE-BASED CATEGORIES AND SAMPLE MESSAGES
As shown above, there are two broad dimensions of perceived program value and pace of adoption that help define attitude categories. Using the combinations, attitude groups emerge that can help inform targeted outreach efforts.
To implement this approach, program managers have to morph the traditional thinking on outreach. Specifically, key assumptions include:
- The believers, or people who rate the program’s perceived value as high and are early adopters, are critically important. These people are continually looking for ways to improve and advance the program within their part of the organization. They might be frustrated by negative feedback because “it’s working just fine for them.” Confident program managers should encourage this key group to run with their ideas to push the program forward.
- Reaching out and trying to convince the most stubborn, resistant staff (depending on their role) should be a lesser priority or not done at all.
- Outreach efforts should be focused on the top and middle tiers with the belief that they’ll create the momentum and have the most influence.
- Every opportunity to highlight accomplishments should be seized upon. Amplifying the positive leaves less time and attention for the more negative, counter-productive attitudes.
In sum, an attitude-based approach will help target messages—regardless of role within the organization—and more precisely address each group’s issues and needs for better buy-in.
I'm working with a client at a major crossroads with their program and the supporting system. We've been soliciting a ton of feedback, working through various communication approaches, and refreshing guidance to reflect a more user-centered approach. They'll pull through, but the broader questions raised about the return on investment-- particularly for the supporting IT system-- have me thinking about other federal programs in a similar position.
"It so happens that the work which is likely to be our most durable monument, and to convey some knowledge of us to the most remote posterity, is a work of bare utility; not a shrine, not a fortress, not a palace, but a bridge."
Montgomery Schuyler expressed this sentiment the day the Brooklyn Bridge opened for traffic in May 1883. That milestone moment marked the end of more than a decade of planning and construction and the beginning of a new era for commerce and economic growth for the region. Flash forward to 2014. Our infrastructure has been revolutionized several times over and we now count our reliance on information technology systems among our critical infrastructure. In the case of physical infrastructure, a lasting concrete and steel monument is a testament to the tremendous effort expended. For our core information technology systems, the results can be equally transformative.
As it was then, careers and lives are defined and shaped around these massive, multi-year structural efforts. Years are invested in planning, design, and implementation and, ultimately, success is made possible by the complex choreography that mobilizes the right skills at the right time. After all of the blood, sweat, and occasional tears, we do not often take the opportunity to reflect on what has been accomplished.
Many agencies are nearing the end of a massive investment and undertaking in the implementation of enterprise-wide systems (many of them financial systems). As the final milestones are reached, leadership has a unique opportunity to proactively answer the following questions.
- Does the new (insert cryptic but sometimes clever acronym here) live up to the promise for a more comprehensive, efficient, effective system?
- Have we actually consolidated more of our legacy systems?
- Have we closed any data centers?
- Have we united staff around the core mission and armed them with the tools and information needed to carry out mission work?
Answering these questions takes leadership and guts—a willingness to take a hard look at the investment and outcomes to celebrate what worked and acknowledge what did not.
Federal agencies face a massive undertaking in strengthening and consolidating their IT processes and workforce to counter the threats in a rapidly evolving sector. After all, that’s the concern driving implementation of the Federal Information Technology Acquisition Reform Act. FITARA was crafted on the premise that the current operating model isn’t resilient enough to deflect a large-scale attack or efficient enough to withstand growing budget scrutiny.
After it became law in December 2014, FITARA sparked numerous planning and consolidation efforts for agency information officers, chiefs of contracting, and budget staff. FITARA gives agency CIOs the approval power and oversight responsibility for technology acquisitions. The purpose is to adapt the federal IT acquisition process to major industry trends while ensuring greater transparency and accountability. Key changes resulting from FITARA include enhancing CIO authority, improving risk management, increasing IT portfolio review visibility, establishing a stronger role for acquisitions staff and maximizing strategic sourcing through greater governmentwide software purchasing.
Because federal IT staff are on the hook for making all of these requirements a reality, agencies are now dusting off (or beginning to develop) strategic IT workforce plans to address the “people piece” of this complex equation. To me, the strategic IT workforce component—how these major changes are done with the current federal staff and what skill sets are needed for the future—is especially compelling, but surely won’t be easy.
It’s not surprising that IT is a prime target for improving efficiency and transparency. Capital planning, budget execution, acquisition, and the workforce supporting all of this are all critical to operations and are equally expensive.
FITARA requires upfront action and has a long tail, meaning that implementation efforts will be ongoing for years to come. Agencies must work with their component bureaus, offices, divisions, and units to undergo a common planning effort and move together towards a more unified future.
Despite FITARA’s common goals, a one-size-fits-all approach won’t work. This is true not only because of the varying levels of IT program maturity and internal controls, but also because of the unique missions and cultures of each agency and sub-organization. Further, developing IT workforce plans will spark resistance if the primary focus is on consolidation instead of developing a menu of solutions (consolidation, strengthened communications, and increased oversight, for example) that can be mixed and matched to more precisely meet the sub-organization’s needs.
Of the federal IT executives I’ve talked to, each would like to take lessons from past IT workforce planning efforts that may have stalled or stopped. But beware the hazards of such an approach.
Why IT Workforce Plans Fail
There are three main reasons why your agency’s FITARA IT workforce plan could fail.
Incomplete data. There is a widespread belief that unless your agency has perfect or complete data, it’s not worth conducting any analysis. The alternative to this “all or nothing” approach is to get what staffing data you can (relatively easily) and document the context when presenting the results. You can build on the data and expand the analysis later. Generate interest by showing people preliminary findings on the existing staff because it gets them thinking about what’s missing or what specific question about the workforce they’d like answered. The data will be incomplete but don’t let that hold you back from doing something.
Overly-complicated analysis. Leaders and staff alike start to get excited and let the “wouldn’t it be nice to knows” run away with the analysis. Before you know it, you’re collecting an amazing but untenable amount of detail. Capturing these shades of gray would take an enormous amount of time and effort for what is, in the end, a limited return on investment. Simple is better—even if it doesn’t reflect all of the specific nuances of hiring, training, staffing, and personnel advancement trends. Especially when analyzing IT staff, each individual is unique and so is their situation.
Inexact modeling. Output from staffing models can seem too high or too low. Unfortunately, it’s very difficult to develop a model with the level of precision that everyone will be satisfied with. However, if every sub-organization uses the same model, this imprecision should be less of a concern. The output isn’t a declaration that you need 1,000 new IT positions; rather, it gives each sub-organization a sense of where to focus recruiting and hiring for vacant or new positions in the future. A simpler model developed with in-house experience is a better, faster alternative to coming up with projections based on the IT footprint.
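To make the "simpler model" point concrete, here is a hedged sketch of what an in-house headcount projection might look like in Python. The `project_headcount` function, the attrition rate, and the workload-growth rate are all illustrative assumptions, not an actual agency model.

```python
# A deliberately simple staffing projection of the kind described above.
# The rates and starting headcount are illustrative assumptions.

def project_headcount(current_staff, annual_attrition_rate,
                      annual_workload_growth, years):
    """Estimate expected staff, estimated need, and the gap per year."""
    projections = []
    staff = current_staff
    needed = current_staff
    for year in range(1, years + 1):
        staff = staff * (1 - annual_attrition_rate)       # staff lost to attrition
        needed = needed * (1 + annual_workload_growth)    # demand grows with workload
        projections.append({
            "year": year,
            "expected_staff": round(staff),
            "estimated_need": round(needed),
            "gap": round(needed - staff),
        })
    return projections

# Example: 200 IT staff, 6% attrition, 3% workload growth, 3-year horizon
for row in project_headcount(200, 0.06, 0.03, 3):
    print(row)
```

A model this simple won't satisfy everyone, but if every sub-organization runs the same one, the results are at least comparable, which is the point the text makes.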
Federal IT executives and other agency leaders can assist in developing good, implementable plans by helping manage expectations: the output won’t be perfect.
IT Workforce Plan Outcomes
The outcomes of a good strategic IT workforce plan include the following:
- A document to demonstrate compliance with FITARA’s requirement to develop and implement a strategic IT workforce plan.
- Increased confidence among agency leadership that the current IT workforce is identified, the gaps are known, and that there is a plan in place to fill critically needed positions.
- A standardized approach to evaluating the current workforce (staff count, grade, cost, skills, etc.) and a common model for estimating the needed workforce.
- An implementable strategy to strengthen communications and oversight.
- A strengthened working relationship among sub-organizations as a result of undertaking a collaborative, inclusive planning process.
Strategic IT workforce planning will never be simple, but it can be more effective if your agency focuses on avoiding the potential pitfalls. Implementing the plan will result in a stronger, more unified, and more efficient workforce.
Data is everywhere and, for many, is everything. It’s put on a pedestal where it is both loved and admired. It’s protected and cared for by smart, devoted people. And, it’s both the question and the answer when we cross paths with the OMB or Congress.
I started to write that federal programs “these days” have become synonymous with massive data collection, analysis and reporting exercises, but we all know that’s not 100 percent true. For as long as we have been a nation, we (the public) have demanded from our government (and our contractors) a thorough accounting for the money spent and the accomplishments achieved. In today’s government, that means data and that’s all good.
The issue? A little taste for good data leads to cravings for more. Without realizing it, we’ve become zombies with insatiable appetites for fleshy spreadsheets. That’s not so good.
Why? Data is expensive—in fact, far more expensive than we like to acknowledge. In government and in contracting, data is the fancy, bubbly, bottled water that we treat like tap.
We rarely talk about the cost of data because we believe data makes us smarter and better organizations. Federal executives want insights and answers—so they ask for more analysis. Program managers with their staffs and consultants want to be responsive and to show their program’s value—so they collect data to power such analysis and start crunching. They might wince a little in the process of getting there, but ultimately they move heaven and earth to produce polished, data-rich reports.
Yet for the dozens of well-purposed, well-intentioned federal programs, such as the Federal Real Property Profile (FRPP), the Employer Information Report, the Energy Review, and federal information technology (IT) investments, data requests like these are not making them smarter or better. They are merely creating a data collection burden on both federal employees and their contractors.
Each of these programs (and the many others like them) has a purpose. And all the data they produce has potential value. But how much did it cost to obtain it? Where is that data now? And who is using it?
The FRPP exemplifies a good idea gone wrong. If you’re not familiar with this program, the FRPP is a skim of a federal agency’s real property data and metrics. Agency staff collect the data and send it to the GSA. Using that submitted data, the GSA then produces an annual summary report of the federal government’s footprint.
And then not much else happens.
Now, many important things are happening in real property at each of the agencies that report data—but the FRPP data at the GSA is not reflective of that activity. In fact, the rules around FRPP data make it too hard to easily snag data from other systems and too superficial to do much interesting, useful analysis with it on its own. Yet agencies still bear the cost of gathering the required data, checking it, and submitting it for no return on their data collection investment—except for a check in the compliance box.
In their book The Agile Culture, Pollyanna Pixton, Paul Gibson, and Niel Nickolaisen advise, “Always ensure that the cost of collecting the metric is significantly less than the value that it can deliver. And we do mean significantly less.” Recognize that the cost of collecting data is not zero and can sometimes be very high, especially if it involves continuous action by the delivery team members. This cost is often ignored and can have a huge negative impact on team productivity.
Even good programs can grow increasingly expensive because we need—and want—to understand broad, complex problems. There is no doubt that our pursuit of data and answers has yielded some insights, avoided some crises and enabled some right choices being made the first time. Yet is it right, or even sustainable, to pursue data without periodically asking ourselves how much it is going to cost to obtain the data?
How do we think the data we seek is going to answer an unanswered question? Is the tradeoff between the expense to collect the data and the insight gained worth it? If the answer to this quick “gut check” is yes, then by all means, pursue that data with purpose and seriousness. However, if the answer is less than a resounding “yes” or we’re struggling year after year to ensure collection compliance, maybe it’s time to admit that the data isn’t needed or as valuable as once thought. Seeking data at all costs isn’t a wise approach (or a sustainable one, from a political or budgetary perspective).
The alternative is to treat data like other investments and to measure its return on investment. Keep collection and refinement efforts in check by continuously weighing the costs and benefits.
Blinded by our love of metrics, we pursue data. Yet, as we recently explored, the cost of that pursuit is often high. Staff time is assigned, support contracts are signed, information technology systems are built and secure storage facilities are negotiated.
We are also blinded by an assumption: that data is critical to anticipating the future and investing public funds responsibly. We’re intrigued, entranced, obsessed. We worry about the gaps and suspected errors so we dismiss any extrapolated insights as flawed. And, then we come right back because—like all codependent relationships—we believe we need it (data) to exist.
In spite of this conundrum, we get sucked into the data game like we’re at a carnival. We go back time after time trying to get enough tickets to buy the big prize—but suspecting somewhere in the back of our minds that the stuffed animal is worth a fraction of what we spent to “win” it. And despite working in a time of notable budget shortfalls, the cost to collect data is rarely scrutinized, and collection efforts are rarely cut back once they’re started.
Our data appetite is insatiable. Why? Because we’re desperate for answers. When we’re not exactly sure what the questions are, we believe data is the first step on the right path to getting there. In fact, you’d be hard-pressed to find a federal program manager (or their consultant behind the scenes egging them on) who didn’t think having more data was a good thing. Perspectives like that of Kevin Cincotta in Government Executive make so much sense on the surface that data collection efforts often go unchallenged.
Our data appetite convinces us that we’ll use it if we have it. In reality, nothing seems further from the truth. Forrester reported last year that a meager 12 percent of the data collected is ever analyzed. Yikes! This number might be startling at first glance, but it won’t really surprise many federal employees who live this reality. Within our federal programs, there is limited awareness of the data available, concern about data quality, a lack of analytical skill available to analyze it, and just a plain old lack of time. The time factor, to me, is the single biggest downfall with collected data. It’s expensive to collect and sadly, much of it goes unused. Plain and simple, it’s a waste.
So, we’re stuck, right? Maybe not. Responsible and assertive leaders must chart another path—up and around the hurdle created by the need for data. This path consists of three steps:
1. Build awareness
To start, agency leaders need a clear understanding of the data collection efforts underway within their organizations. This doesn’t require an exhaustive inventory, but it does require a broad understanding of the top five or so programs requiring data collection and maintenance: what directives staffs are working under and what the response looks like—the number of staff assigned, the rough value of support contracts, and related systems. This exercise is necessarily conducted at the leadership level—it’s not a program-by-program problem.
Next, look at the internal and external reports being produced. Stretch beyond what information is being conveyed on the surface and dig deep to determine what (if any) insights are being gained. Which data elements do staffs depend on regularly to make decisions about the direction of the organization?
Based on the understanding gained by agency leaders in step 1, plot the big data collection efforts on a simple 2×2 matrix. Label one axis “federal mandate” (yes, no, or sort of) and the other “high mission utility” (yes, no, or sort of). Obviously, two combined “yeses” earn a green light, while two combined “noes” mean an effort is phased out. A “sort of” rating means program managers and leadership have work to do to move the effort solidly into one box or the other.
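As a minimal sketch, the 2×2 triage above could be captured in a few lines of Python. The quadrant rules come from the text; the `triage` function name and the example efforts are illustrative assumptions.

```python
# Sketch of the 2x2 data-collection triage described above.
# The example efforts below are hypothetical.

def triage(federal_mandate, mission_utility):
    """Map an effort's two ratings ("yes"/"no"/"sort of") to a disposition."""
    if federal_mandate == "yes" and mission_utility == "yes":
        return "green light"
    if federal_mandate == "no" and mission_utility == "no":
        return "phase out"
    # Everything else needs leadership attention to land in one box or the other
    return "needs work: move it solidly into one box or the other"

efforts = {
    "Real property inventory": ("yes", "yes"),
    "Legacy status report": ("no", "no"),
    "Quarterly widget census": ("sort of", "yes"),
}

for name, (mandate, utility) in efforts.items():
    print(f"{name}: {triage(mandate, utility)}")
```

The value of encoding the rules, even informally, is that every effort gets run through the same test rather than judged ad hoc.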
2. Vigorously push back
For any data collection efforts falling into the “yes it’s a federal mandate, but has no mission utility” quadrant, agency leadership should vigorously push back, request an exception, or work to change the requirement so that it better fits the agency’s purpose. Agencies (and the government as a whole) simply don’t have the discretionary budget it takes to apply blanket rules to a deeply nuanced environment.
3. Reenergize analytics
The biggest thing leaders can do to get more value out of the time invested in data collection is to ask for the reports. Ask program managers to share with you what they think is important. Be careful not to ask too loudly about what else would be nice to know—remember that anything you ask for has a trickle-down effect that costs more money.
Responsible leaders know the costs of data collection and continuously weigh the benefits. They also ask the right questions in the pursuit of data: Are we getting the insights we need? Are we getting the return on insights for the investment in collection? Would the public agree?
This article originally appeared in Bloomberg Government.
Many of us spend our days on one call after another. It's almost the default for how work gets done. Meet during the day, then squeeze "actual" work in before and after hours.
Some of these meetings are fine, some are terrible-- few are actually good and memorable. I read a stat recently on post-meeting recall. It's something abysmal. Few of us can remember, even 20 minutes after a meeting, what the main points were or what key decisions were made. Actually, maybe I heard the stat in a meeting and now I can't remember. Anyway... Keith Ferrazzi shared this helpful piece on How to Run a Great Virtual Meeting on Harvard Business Review.
WHY I LOVE IT
It's practical and takes the tried-and-true advice shared in most articles a step further. He makes an unmissable point about doing anything you can to stop multitasking. This is so important and so difficult to enforce-- even on ourselves. The temptations are too great. I participated in an all-day meeting last week where everyone was in the room but one person, who called in from Arizona. There are about 1 million things I'd personally rather do than call in to an all-day meeting, but she was game. She actually sat on her couch all day-- away from her computer-- so that she would force herself to listen and participate. It seemed to work pretty well because she was chiming in at appropriate points during the day.
HOW YOU MIGHT USE IT
I took some of his points and added a few of my own to create this little printable reminder that you can keep near your desk phone. So, print this.
Then, without telling anyone, just start using these techniques in advance of your next meeting. There are a couple of things that (to me) make the difference between a good and a totally awful virtual meeting experience. If you do nothing else, I'd recommend banning the "around the horn" brief-outs. It's an invitation for people to disengage. Prereads and an agenda focused on gathering feedback and brainstorming are the way to go. The other thing is to reserve a little bit of time at the end for people (while they're still technically together on the call) to break the multitasking rule and take one step towards the actions agreed upon during the meeting. I find that there is an energy spike that happens right near the end that should be seized to propel the group forward. Even 10 minutes after, the action seems harder and is more likely to be put off or go undone.
After an insightful conversation with performance measures guru, Gary Better, I’m sold on logic models as a method for reorienting your measures from crappy, boring, and onerous to awesome—if such a change appeals to you. Of course, as any good guru will do, Gary would tell you that logic models are part of a more holistic approach to setting up a performance management program. Indeed. However, the logic model is a simple, straightforward way to figure out what makes sense to add given your program’s maturity and available data.
Once you inventory all of your measures and bucket them according to your program's objectives, you’re ready to tackle the logic model.
Why use logic models? For me, the number one reason is that they get you (and by extension your management team) focused on what is strategically important (outcomes) rather than simply what is easily measured (data, process, and output). Every program and organization needs a blend of each of these four categories.
The problem is that we tend to load up on measures at the front end of this process at the expense of looking closely at the actual outcomes we’re achieving. Though we all do it, it’s totally lame and there is a better way.
Here’s one version of a performance measure logic model that works for all types of programs.
Performance Measurement Logic Model
The idea is that you’d walk through each step backwards (outcome, output, process, then data) in the simple flow chart and ask yourself, your management team—and maybe, if you’re feeling crazy, a customer or two—the following questions…
What outcomes are we trying to achieve? List a couple and by when, if possible. An example of an outcome might be something like, “A safer and healthier work place.”
What program outputs are ongoing or completed? An example could be “Number of safety issues addressed through completed projects.”
What processes are being tested? A process example is, “Health and safety inspections completed with work orders generated in the system.”
What supporting data is needed? To close out this example, the data might include number of buildings with asbestos or estimated remediation cost.
Note: It helps to have your desired outcomes handy. These should be in your strategic plan—though they might be a little vague. That’s ok. Spend (a little) time tweaking, if needed.
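The backwards walk above can be represented as a simple ordered structure. This is a minimal sketch: the example entries are taken from the questions in the text, while the dictionary representation itself is an assumption for illustration.

```python
# Minimal representation of the performance-measure logic model above.
# Entries come from the workplace-safety example in the text.

logic_model = {
    "outcome": "A safer and healthier work place",
    "output": "Number of safety issues addressed through completed projects",
    "process": "Health and safety inspections completed with work orders "
               "generated in the system",
    "data": "Number of buildings with asbestos; estimated remediation cost",
}

# Walk the chart backwards, as the text suggests: outcome first, data last.
for step in ("outcome", "output", "process", "data"):
    print(f"{step.upper()}: {logic_model[step]}")
```

Starting from the outcome and working back to the data is what keeps the exercise anchored to what is strategically important rather than what is easily measured.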
While not necessarily simple, this is a straightforward approach to rebooting your measures and getting everyone focused on the desired outcomes-- a big win for our programs.
A client asked me this morning to review and comment on his organization’s strategic plan template. A template? What? The thoughts racing through my mind went something like... "You can’t template a strategic plan. I mean, sure, there are a handful of broad headers that you could type up but to what end? Creating a template would exacerbate one of the biggest problems with our typical approaches to strategic planning and is not something I could get behind. The focus seems more on the end document than the process of discovery, creative thinking, and cross-discipline input. Grrrr."
He elaborated that he was looking for some thought-provoking questions to include in an annotated outline. Sigh. Ok, now, that makes a lot more sense. Based on his request, I jotted down some basic strategic planning guidelines and the kind of questions you should be asking of your team.
Before doing “save as” and creating your spanking new strategic plan file, set some time parameters. As arbitrary as they might seem, establishing short but reasonable boundaries around the effort is tremendously helpful. Without sideboards, strategic planning can and will go on forever, quickly lose momentum, cause the team to question your leadership and the organization's direction, and result in a reassignment to organizing the supply closet. Definitely not a strategic career move. So, I’d suggest that you set a deadline of about a week-- 2 max. No joke. You really do not need more time. Simple and clear beats perfect and polished every time.
Ok, here you go...
- What are you “strategically” trying to solve? Don’t get too hung up on the language and meaning of strategic. I’d say that a plan is strategic if it has taken into account multiple viewpoints and approaches and selected the best possible path given the information available at the time. A project plan would describe the step-by-step execution once this approach is finalized.
Current State and Desired End State
- Briefly describe where the organization stands today in the face of this problem.
- In an ideal world, where would you be and by when? Personally, I recommend that you keep the goals modest and the timelines relatively near-term. Multi-year strategic plans have very limited practical value.
Stakeholders and Customers
- Besides you, who cares about this outcome? These are your stakeholders who should be asked for input.
- Who are you trying to please, support, engage, or help? These are your customers (even if you’re not “selling” anything.) Spend most of the time you have talking about this group then take another pass through your decisions and tweak from the customer’s point of view.
Opportunities and Limitations
- What events can you reasonably anticipate in the set timeframe that you want to take advantage of or avoid? Keep it short and snappy.
So what’s the path?
- Given all the thoughts above, provide some sense of the range of options considered. Which path best takes advantage of all of the resources at your disposal? This is your strategic path. Write this down—on paper if that’s easiest.
- What logical evaluation points along the path exist? Mark these roughly on your calendar and commit to a quick (less than 1 meeting) evaluation of how you’re doing.
Have fun with it. To me, one of the most commonly missed opportunities with strategic planning is that we all take it too seriously and limit input to only the coolest kids in the office. Lame. Instead, even the most modest effort to make it interesting, take some guesses, accept some risk, and integrate as many viewpoints as possible will make this different from the last time.
Strategic plans both ground us and free us to move forward. Developing a plan creates a common understanding of where we're going so that we can tell others and invite them to join in. Plans provide a ready answer to the daily question that pops up, "This is cool. Should we do it?" Without plans, we wander.
Plans are often very good things, but writing plans down can be a tremendously onerous and frustrating process for organizations. Why? Because we worry about whether we're doing the right thing, what other people will think, whether it will work, or if we have what it takes. We often don't know how to articulate and incorporate the input received from our teams in a way that makes them feel included and inspire ownership. It's also hard to know when to start planning or when to stop.
Traditional approaches to strategic planning are process-laden and lengthy. The typical process for strategic planning has, in fact, earned a bad rap for precisely these reasons. Such plans require a tremendous amount of time and, in the end, no one is really sure what they got out of the process. What's worse is when those plans we agonized over sit in a network folder and are rarely referenced.
The alternative is an ultralight approach to discovering and documenting a strategic objective and pulling out the key actions needed to achieve that goal. An objective many teams strive for is to develop an actionable plan that allows them to move together toward a common goal. Here are 8 ways to inspire action with your strategic plan:
- Include as many people and as many diverse perspectives as can reasonably be accommodated in the physical space.
- Tell the group developing the plan that they own the process, the plan, and the outcome. Planning and completing strategic activities is everyone's job.
- Avoid writing or refining mission and vision statements. It's a waste of time.
- Ask participants to articulate their own purpose in doing this work and what excites them about the future of the business.
- Conduct an exercise to list and generally agree on what you (the organization) do, as well as what you don't do.
- Stop writing when you hit 3. Fewer words have more power--if for no other reason than that the likelihood of someone actually reading them goes up.
- Know that good enough is actually pretty good. Make this point as often as needed. The Atlantic's article entitled "The Power of 'Good Enough'" hit home for me.
- End on a high note. Generating a feeling of community, collaboration, and being a team builds momentum and creates an eagerness to reconnect.
The difference in this ultralight approach to strategic planning is in its ability to inspire. Its duration is deliberately short, and efforts to wordsmith and smooth over the language to the point of meaninglessness are eliminated. All participants are involved, and the team leaves clear, motivated, and energized around a common objective.