Lately I've been doing a deep dive into the Enterprise Architecture literature (a long-standing but largely latent interest). I've been struck by the fact that the EA process model is predominantly top-down: a cycle that begins with getting buy-in and budget from top management, proceeds through planning and doing, and ends with measuring how well you did, before iterating the same cycle until done. Measuring is the last step in each cycle. Shouldn't it be (one of) the first?
Obviously the goal of any EA effort is making a lasting improvement to an organization. That means influencing the decisions of people outside the EA team, who are by definition elsewhere in the organization. Centrally planned dictates from on high are notoriously ineffective at that, if only because effective planning relies on tacit local knowledge that is hard to acquire centrally. For the academically minded, Friedrich Hayek wrote at length about this in *The Use of Knowledge in Society*. For the more concrete-minded, the lack of tacit knowledge at the center is one of the main reasons that Russia's planned economy failed.
By contrast, the software development community has been shedding its centrally planned roots (the "waterfall model") in favor of Agile methodologies. Among other things, these push decision-making as far down the hierarchy as possible. Can such approaches work higher up, not just for software development but for EA? I'm not talking either-or but hybrid: getting top-down and bottom-up working together for improvement.
What about making the "last" step (metrics) one of the first? That is, instead of spending the first iteration exclusively in the executive suite, what about devoting some of that first iteration to measuring (and *reporting* on) the as-is system? Up front, at the beginning, instead of waiting until the first round of improvements is in place. You'll need as-is numbers anyway to measure the improvement, so why not collect as-is performance numbers at the beginning, as one of the first steps of an EA effort?
The lesser and more obvious reason for collecting as-is metrics at the beginning is that you're going to need them at the end of the first iteration to know how well the changes actually worked. But the larger reason is that a concrete goal like "decrease widget-making cost" is more understandable to folks on the factory floor than the more abstract goals of the executive suite, "increase market share" and the like.