Sep 14, 2013

A common-sense approach to Big Data

Big data is heavily hyped these days and everyone wants to get in on the fun. This post continues this blog's "Belaboring the Obvious" theme by describing a common-sense, hype-free approach for deciding whether you need to go there and how to get started if you do.

The main advice here is to take the steps in the order presented, particularly the first ones, to avoid some of the train wrecks I've seen at companies that have tried doing this the wrong way.

1- What is the business problem to be solved?

The first step is the most crucial because it is the compass that guides all other steps and the financial fuel to get you there. Big data is neither easy nor cheap and requires far more effort and cost than the hype would lead you to believe.

Start by documenting the business problem you hope to solve and get official buy-in from those that you expect to cover the cost. One business problem is enough to get started as this will help you stay focused. More can always be added once you succeed with the first one.

2- What data can we use to solve it?

With the goal decided and approved, the search for data that can support it can begin. This often involves getting permission to use it. If the data is privacy sensitive, custom development may be needed to anonymize it. And you'll need to develop a way of regularly moving it from the collection environment to the processing environment.

3- Is the data set truly large?

This is a critical question and the answer is not necessarily what the big data hype implies. The boundary is not fixed, but for current purposes, we'll define large as "bigger than most computers can handle easily"; typically a few tens of gigabytes. If your data set is smaller than this, or can be reduced to this size through filtering and cleaning, and is not likely to grow beyond this limit in the lifetime of your project, count yourself lucky. You have a small data problem and can start learning to swim in the shallow end of the pool.

4- Choose suitable tools; build as required

There are an overwhelming number of ready-to-use tools for both small and large data, so you'll rarely need to build your own advanced analysis logic. You'll still need lots of custom coding, but most of this will be "glue" code for anonymizing sensitive data, converting your data into formats that the off-the-shelf tools require, transferring it between machines, integrating it with other data sources, and so forth.

For small data that is likely to remain small for the lifetime of the project, avoid the heavily hyped but less mature cluster-based tools like Hadoop by focusing on the much older data mining environments like R, RapidMiner, KNIME, Weka and others. R seems to be the oldest, most complete and most heavily used (at least in academic circles) but is the least graphical of the lot. They all support the common data analysis tasks, or can be extended to do so via plugins, and have the considerable advantage of running on ordinary file systems, which are downright trouble-free.

If your data set is still small but will grow too large for an ordinary file system in your project's lifetime, you'll need to design your project around a distributed (aka "clustered") file system like Hadoop's. This doesn't mean you need to dive right into the deep end by installing your own distributed cluster. Hadoop supports a "local" mode that lets tutorials run on a single PC, often downloading a data set to demonstrate the tutorial's features. The tutorials are indispensable for learning the many parts of the Hadoop Ecosystem: Avro, Cassandra, Chukwa, HBase, Hive, Mahout, Pig, ZooKeeper, Impala and Accumulo are just a few. These roughly approximate the capabilities of the non-distributed toolkits but, as a rule, aren't nearly as easy to use or as comprehensive.
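Hadoop's value is in distributing this style of computation across a cluster, but the programming model itself can be previewed in plain Java. The following sketch shows the map/group/reduce shape of a word count, the canonical Hadoop tutorial example. It uses no Hadoop APIs, and the class and method names are my own:

```java
import java.util.*;
import java.util.stream.*;

// A plain-Java sketch of the map/reduce pattern that Hadoop distributes
// across a cluster. Map each record to (key, value) pairs, group by key,
// then reduce each group -- here, counting words per line of text.
public class WordCountSketch {
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                // "map" phase: split each line into lowercase words
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
                .filter(w -> !w.isEmpty())
                // "shuffle" and "reduce" phases: group identical words, count each group
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("big data is hyped", "big data is big");
        System.out.println(wordCount(lines).get("big")); // 3
    }
}
```

In Hadoop the map and reduce steps become separate classes whose instances run on different machines, with the framework handling the grouping in between; the logic, however, keeps this same shape.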

If your data set is truly large, you have no option but to dive right in by building your own distributed processing cluster. The easiest way I know of is to use a virtual hosting service like Amazon EC2 and use that to host a Hadoop distribution from Cloudera or perhaps Hortonworks (I've no direct experience with the latter). I recommend avoiding Apache's own distributions, which are difficult to install and hard to understand without considerable experience. Cloudera invests heavily in testing and integrating the fast-changing parts of the Hadoop Ecosystem. Their distribution includes Cloudera Manager, which comes remarkably close to providing a trouble-free install experience, given the complexity of getting the numerous Hadoop components to work right in combination. My main advice here is to avoid experimenting with any options that you don't yet fully understand. Cloudera Manager is not entirely reliable at undoing experimental changes, so it is still all too easy to wind up with an unusable cluster that you won't be able to repair short of starting over from scratch (and losing all your data).

5- Evaluate, replan and adjust

From there it's simply wash, rinse, and repeat.

Dec 13, 2012

GPU Maven Plugin

I've been experimenting with using the GPU to accelerate Java code lately and wound up writing a Maven plugin to make the build process manageable. Download it from the distro site and compile it with "mvn install"; I haven't published it to public repositories yet.

The GPU Maven Plugin compiles hand-selected Java kernels to CUDA code that can run on any NVIDIA GPU of compute capability 2.0 or higher. It encapsulates the build process so that building GPU code is as easy as compiling ordinary Java code with Maven. The plugin relies on the NVIDIA CUDA toolkit, which must be installed separately.

The plugin source includes forks of Rootbeer and Soot with no modifications except essential bug repairs. Their author is attached to command-line tools and idiosyncratic build conventions and I couldn't wait for him any longer.

How it works

You write ordinary Java that designates code to run on the GPU by enclosing it in a class that implements the Rootbeer "Kernel" interface. You use ordinary Java compilers to compile this into a jar of byte codes that the Java virtual machine can run. This plugin steps in at that point to turn the original jar into a new jar that contains CUDA kernels that will run on the GPU when requested to do so by the non-kernel parts of the program. Only the kernels are converted to CUDA; the rest of the program remains as Java byte codes.

Byte code is a stack-based format that is good for execution but not for the code analysis and translation steps to follow. Rootbeer uses Soot to find Kernel classes in the jar, to locate their dependencies and to translate them to Jimple, a 3-address format that Rootbeer translates into CUDA-compatible C++ source code. Finally, the NVidia tool chain compiles the generated source code to CUDA binaries and links them into a binary kernel that the original Java can run on the GPU.

The plugin handles these steps automatically so the build process looks like an ordinary Java compile to its users.
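To give a feel for what kernel code looks like, here is a minimal sketch in Rootbeer's style. The Kernel interface is declared locally as a stand-in so the example compiles on its own; in a real build it comes from the Rootbeer runtime jar, and the host code hands a list of kernel instances to the Rootbeer runtime for a GPU launch instead of calling gpuMethod directly as this sketch does:

```java
// Stand-in for Rootbeer's Kernel interface (normally supplied by the
// Rootbeer runtime jar); declared locally so this sketch compiles alone.
interface Kernel {
    void gpuMethod();
}

// Each kernel instance squares one element of a shared array. On the GPU,
// one such kernel runs per GPU thread; here we call gpuMethod() on the
// CPU just to show the logic the plugin would translate to CUDA.
public class SquareKernel implements Kernel {
    private final int[] data;
    private final int index;

    public SquareKernel(int[] data, int index) {
        this.data = data;
        this.index = index;
    }

    @Override
    public void gpuMethod() {
        data[index] = data[index] * data[index];
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        for (int i = 0; i < data.length; i++) {
            new SquareKernel(data, i).gpuMethod(); // CPU stand-in for a GPU launch
        }
        System.out.println(java.util.Arrays.toString(data)); // [1, 4, 9, 16]
    }
}
```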

How to use it

See the gpu-timings folder for example applications with poms that show how to compile them. See gpu-rootbeer/docs for details.

  • gpu-mandelbrot: A Java mandelbrot generator based on many CPU threads.
  • gpu-mandelbrot-gpu: gpu-mandelbrot modified to run each CPU thread as a GPU thread. The goal was to compare performance, but this step has not been completed.
  • gpu-timings: Several common algorithms instrumented to compare CPU-only versus GPU performance. Average computes the average of arrays of varying sizes. SumSq computes the sum of the squares. IntMatrixMultiply and DoubleMatrixMultiply multiply two matrices of varying sizes.

Is it worth it?

It depends on your application, and in particular on the number of GPU tasks and the amount of work they do in parallel, with significant but so far unmeasured costs for transferring data to and from the GPU.

For example, the gpu-timings/Average application computes the average of large arrays by subdividing the array into chunks, assigning a GPU task to sum each chunk, and computing the average when the tasks are done. Tentative conclusions are that conversion of hand-designated Java kernels to GPU/CUDA becomes beneficial at about a thousand threads (10^3) each processing ten thousand values (10^4) in parallel. The improvement is 2x-4x at those levels and grows to 37.2x for 10^5 tasks and 10^6 values.
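The shape of that computation can be sketched in plain Java, with a thread pool standing in for the GPU; the names here are mine, not those used in gpu-timings:

```java
import java.util.concurrent.*;

// Plain-Java sketch of the chunked-average strategy: split the array into
// chunks, sum each chunk in its own task (a GPU kernel in the real code,
// a thread-pool task here), then combine the partial sums.
public class ChunkedAverage {
    public static double average(double[] data, int chunks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        int chunkSize = (data.length + chunks - 1) / chunks;  // ceiling division
        java.util.List<Future<Double>> futures = new java.util.ArrayList<>();
        for (int c = 0; c < chunks; c++) {
            final int start = c * chunkSize;
            final int end = Math.min(start + chunkSize, data.length);
            futures.add(pool.submit(() -> {        // one task per chunk
                double sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                return sum;
            }));
        }
        double total = 0;
        for (Future<Double> f : futures) total += f.get();  // wait for all tasks
        pool.shutdown();
        return total / data.length;
    }

    public static void main(String[] args) throws Exception {
        double[] data = new double[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(average(data, 8)); // 49999.5
    }
}
```

The GPU version pays an extra, so far unmeasured, cost to copy the array to device memory and the partial sums back, which is why small task counts don't break even.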

Jun 26, 2012


I just launched an open source project I've been thinking about for some time. This is the wiki description from the project site (it seems Google doesn't index project sites).


XACML is an XML-based access control language; a non-extensible functional language with just enough semantics to express access control policy and no more. But those semantics come wrapped in visual barbed wire: an XML-based syntax so hideous it makes your eyes bleed. Compared to XACML, even COBOL and MUMPS look good.

NoXacml is a terse dialect of XACML with a Java-like syntax that people can read and write easily. A compiler turns NoXacml programs into standard XML-based XACML text for conversing with machines.

The NoXACML compiler is still under development and the language itself is still evolving. There are no released versions yet.


OASIS provides an extensive suite of conformance tests that were used to test the compilers. The following is one of the tests rewritten in NoXACML. In English, the policy is: "Julius Hibbert can read or write Bart Simpson's medical record." The XACML version is a whole page of inscrutable XML.

policy IIA001 denyOverrides{
rule IIA1
permit if ( "Julius Hibbert".isIn( && "".uri().isIn( && ( "read".isIn( || "write".isIn(


NoXACML emerged from GOSAC-N (Government Open Source Access Control - Next Generation), a Technica project to provide an open source ABAC (attribute-based access control) system for the US government. GOSAC-N is available as open source; it provides PEPs (Policy Enforcement Points), a PDP (Policy Decision Point) secured to stringent government specs via TLS and SAML2, and browser-based capabilities that will evolve into a full-featured PAP (Policy Administration Point) over time.

The first release was based on Sun's XACML interpreter. The second features a pair of compilers (for XACML 2.0 and 3.0) that compile XACML to Java source. Compilation can be done off-line, so that the PDP loads policies as Java byte codes, or on-the-fly, where the PDP dynamically compiles XML files to byte codes for execution. The compilers are proprietary but the site includes working binary copies. An AFCEA paper compares the XACML 2.0 compiler's performance with Sun's interpreter.

The compilers use JAXB to build a DOM model of XACML as their expression tree. The NoXACML compiler builds the same DOM model from NoXACML source. The DOM model can then be compiled to Java or converted to the usual XML by JAXB's serialization for exchange as a standard language.

Apr 27, 2012

Why two Brad Cox's?

At the top right you'll notice two "Brad Cox" entries with slightly different gmail addresses (brdjcx vs bradjcox). It seems that at some point in the distant past, I created two gmail accounts with these names, and created this blog  under the account that I no longer use (brdjcx). The other is the one I monitor daily (bradjcox).

I've tried everything imaginable to move the blog to the active account but nothing I've tried seems to work. So I ultimately just added bradjcox as a "contributor".

Mar 1, 2011

Compiling XACML to Java Source

XACML is an OASIS standard that is starting to gain popularity for controlling access to digital resources. There are several implementations but the most popular is Sun's XACML interpreter.
I've submitted a paper for this spring's AFCEA conference that describes a full XACML compiler that we expect to release as government open source soon. Although I developed it to make XACML more readable and amenable to easy debugging, not for speed, the early non-optimized version is blazingly fast, far faster than the Sun interpreter.

The draft is available here. This was written before I started work on the Oasis Conformance Tests. That has now been completed and I'm updating the draft to describe the new version. I'll post it here when ready. Note Added 18 Mar 2011: The updated paper has been published to the link shown above.

Jun 6, 2009

What does Architecture REALLY mean?

In trying to connect my experience in software engineering with the enterprise architecture world I work in these days, I thought I'd record an epiphany about divergent meanings of architecture  in those two worlds.

I've struggled to understand architecture in relation to software construction, as an early stage in the design-implement-build waterfall. I could never see how a set of accounting categories could be of the slightest use in building a software system. The only point of connection is that the categories can be used to justify the funding based on expected performance improvements, and without funding construction doesn't begin.

Ultimately I realized that it has (at least) two entirely different meanings that don't connect other than in the remote sense that funding connects to construction. The epiphany was that (and I'm inferring this; I know of no historical records to back it up) the meaning in Enterprise Architecture emerged when some influential accountant realized that the categories of an enterprise's budget reflect the "architecture" of that enterprise. "Enterprise Architecture" caught on to mean a set of budgetary categories. This is certainly the main meaning used in OMB/GAO and the Department of Defense at least. In other words, the benefit that EA brings to construction isn't a top-level systems design; the benefit is the funds to begin design, which occurs from a nearly clean slate.

The other meaning, the one I've used elsewhere in this blog, is the meaning of architecture from the construction industries. It is a dialog between architect and customer that occurs before construction crews start work but well after the financing has been arranged. Of course these budgetary categories have no relevance to the architects, designers and builders actually doing the work. They do help to round up the funding, but that goes on in an entirely different world before these roles even enter the field.

The astonishment was how little these two meanings have in common, and that such a vast difference is simply taken for granted and not more explicitly spelled out.

Apr 22, 2009

Paving the Bare Spots and Following the Guidon

This is a set of speculations as to whether, and how, a large number of loosely-coupled small projects (less than the $1M threshold at which mandatory OMB guidance kicks in) might be coordinated so that large-scale agency improvement occurs over time.
$1M buys ten developers for a year at a burdened salary of $100K each. That is two small agile development teams or one large one, and a year is 24 biweekly sprints. That may seem like small potatoes to SDLC advocates, but agile teams can pull off amazing things in a year, and 24 sprints leaves lots of opportunities for redirection and convergence. The question is how such teams could be coordinated so that they converge over time on something useful at larger scales (where large means government- or at least agency-wide).
Although it may seem wasteful to launch projects without the usual heavy planning, notice that there are no billion- or even million dollar projects at risk. There is just the risk of a single sprint, about $1M/24; a little over $40K. Performance is easily monitored during each sprint review so non-performing projects can be terminated or redirected quickly. And perhaps best of all, costs are so affordable that all agencies could participate. There is no approval process that would prevent some agencies from improving their performance through information technology.
Let's assume that central management is commensurate with the small ($1M) project sizes and consider how to accomplish the most at the periphery with minimum management at the center. Two notions seem productive:
Paving the Bare Spots: This is explained in this paper (pdf) from just after my time with the architecture team for DISA's Netcentric Core Enterprise Systems (NCES). The title refers to a novel way of keeping students off the grass once "keep off the grass" signs have been tried and failed (as they always seem to do). The new approach is to just let the students walk where they wish and pave the bare spots behind them. Works every time.
How might that apply to the president's agenda of radically improving agency performance? See the paper for details, but in short, it involves a digital "enterprise space" modeled after some standards bodies and especially the Apache Software Foundation. This space is what you'd expect of such organizations; a download area for ready to use software (and someday "trusted components" as explained elsewhere on this blog), a source code repository for sharing code, a wiki or similar for sharing ideas, and so forth. And there's a governing body that helps coordinate the work (as distinct from managing it; management is largely handled by each team) and accepts long-term ownership of the results.
Follow the Guidon: This is based on the role the flag-bearer once played in  military formations. Troops were taught to follow the guidon. They were free to exploit local tacit knowledge, route around obstacles, duck and cover if need be, yet converge on the goal set by management. The key thing here is that it was apparent to all that the leadership (or at least their flag bearer!) was out there in front, leading the way and assuring that everyone converged on the goal.
A guidon is just a flag on a stick; a lightweight and easy to follow governance model if there ever was one! The digital equivalent can be seen in the paper, in the governing body and the behind-the-scenes work of establishing enterprise standards and reference implementations of the same.

Apr 18, 2009

Mud Brick Architecture and FEA/DoDAF

Like "service" and before that "object", architecture was borrowed from the tangible context of tangible construction and applied to intangible information systems without defining its new meaning in this foreign digital context.  This blog returns the term to its historical context in order to mine that context for lessons that might apply to information systems. The (notional) historical context I'll be using is outlined by the following  terms:

  • Real-brick architecture is the modern approach to construction. It  leverages trusted building materials (bricks, steel beams, etc) that are not available directly from nature. In other words, real-brick components are usually not available for free. They are commercial items provided by other members of society in exchange for a fee. The important word in that definition is trusted, which is ultimately based on past experience with that component, scientific testing and certification as fit for use. This is an elaborately collaborative approach that seems to be a uniquely human discovery, especially in its reliance on economic exchange for collaborative work.
  • Mud-brick architecture is the primitive approach. Building materials (bricks) are made by each construction crew from  raw materials (mud) found on or near the building site.  Although the materials are free and might or might not be good enough, mud brick architecture is almost obsolete today because mud bricks can not be trusted by their stakeholders. Their properties depend entirely on whoever made them and the quality of the raw materials from that specific construction site. Only the brick makers have this knowledge. Without testing and certification, the home owner, their mortgage broker, the safety inspector, and future buyers have no way of knowing if those mud bricks are really safe.
  • Pre-brick architecture is the pre-historic (cave man) approach. This stage predates the use of  components  in construction at all. Construction begins with some monolithic whole (a mountain for example), then living space is created  by removing whatever isn't needed. This context is actually still quite important today. Java software is still built by piling up jar files into a monolithic class path and letting the class loader remove whatever isn't  needed. Only newer modularity technologies like OSGI break from this mold.

The pre-brick example mainly relies on such subtractive approaches but additive approaches were used too. For example, mud and wattle construction involves daubing mud onto a wicker frame. I apologize to advocates of green architecture where pre-modern construction is enjoying a well-deserved resurgence. I chose these terms to evoke an evolution in construction techniques that the software industry could do well to emulate, not to disparage green architectural techniques (post-brick?) in any way.

Federal Enterprise Architecture and DoDAF

So how does this relate to the enterprise architecture movement in government or DoDAF in particular? The  interest in these terms stems from congressional alarm at the ever-growing encroachment of information technology expenses on the national budget and disappointing returns from such investments. These and other triggering factors (like the Enron debacle) led to the Clinger-Cohen act and other measures designed to give congress and other stakeholders a line of sight into how the money was spent and how much performance improved as a result. The Office of Management and Budget (OMB) is now responsible for ensuring that all projects (>$1M) provide this line of sight. The Federal Enterprise Architecture (FEA) is one of their main tools for doing this government-wide. The Department of Defense Architecture Framework (DoDAF) is a closely related tool largely used by DOD.

I won't summarize this further because that is readily available at the above link. My goal here is to focus attention on what is missing from these initiatives. Their focus is on providing a framework for describing the processes a project manager will follow to achieve the performance objectives that congress expects from providing the money to fund the project. To return to the home construction example, congress is the mortgage broker with money that agencies compete for in budget proposals. Government agencies are the aspiring home owners who need that money to build better digital living spaces that might improve their productivity or deliver similar benefits of interest to congress.  Each agency hires an architect to prepare an architecture in accord with the FEA guidelines. Such architectures specify what, not how. They  provide a sufficiently detailed floor plan that congress can determine the benefits expected (number of rooms, etc), the cost, and the performance improvements that might result. They also provide assurances that approved building processes will be followed, typically SDLC (the much-maligned "waterfall" model). What's missing is after the jump.

What's Missing? Trusted Components

What's missing is the millennia of experience embodied in the distinction between real-brick, mud-brick, and pre-brick architectures. All of them could meet the same functional requirements; the same floor plan (benefits) and performance improvements. The differences are in non-functional requirements such as security (will the walls hold up over time?) and interoperability (do they offer standard interfaces for roads, power, sewer, etc.?). Any home-buyer knows that non-functional requirements are not "mere implementation details" that can be left to the builder to decide after the papers are all signed. That kind of "how" is the vast difference between a modern home and a mud-brick or mud-and-wattle hut, which is obviously of great interest to the buyer and their financial backers. This difference is what is omitted from the closing decision by the "what not how" orientation of the FEA process.

So let's turn to some specific examples of what is needed to adopt real-brick architectures for large government projects. It turns out that all the ingredients are available in various places, but have not yet been integrated into a coherent approach:

  • Multigranular Cooperative Practices: This is my obligatory warning against SOA blindness, the belief that SOA is the only level of granularity that we need. But just as cities are made of houses, houses are made of bricks, and bricks are made of clay and sand, enterprise systems require many levels of granularity too. Although SOA standards are fine for inter-city granularity, there is no consensus on how or even whether to support inter-brick granularity; techniques for composing SOA services from anything larger than what Java class libraries support. The only such effort I know of is OSGI but this seems to have had almost no impact in DOD.
  • Consensus Standards: Although much work remains, this is the strongest leg we have to stand on, as broad-based consensus standards are the foundation for all else. However, standards alone are necessary but not sufficient. Notice that pre-brick architecture is architecture without standards. Mud-brick architectures are based on standards (standard brick sizes, for example), but minus "trust" (testing, certification, etc). Real-brick architectures involve wrestling both standards and trust to the ground.
  • Competing Trusted Implementations: The main gaps today are in this area, so I'll expand on them below.
  • Building Codes and Practices: bricks alone are necessary but insufficient. Building codes specify approved practices for assembling bricks to make buildings. This is almost virgin territory today for an industry that is still struggling to define standard bricks.
  • Construction Patterns and Beyond: This alludes to Christopher Alexander's work on patterns in architecture and to the widespread adoption of the phrase in software engineering. It has since surfaced as a key concept in DOD's Technical Reference Model (TRM), which has adopted Gartner Group's term, "bricks and patterns". However, this emphasizes the difficulty of transitioning from pre-brick to real-brick architectures. Gartner uses "brick" to mean a SOA service that can be reused to build other services, or a standard. That is not at all how I use that term. A brick is a concrete sub-SOA component that can be composed with related components to create a secure SOA service, just as modern houses are composed of bricks. True, standards help by specifying the necessary interfaces, but they are never confused with bricks, which only implement or comply with a standard. Standards are abstract; bricks are concrete. They exist in entirely different worlds; one mental, the other physical.

Implementations of consensus standards are rarely a problem; the problem is that they're either not competing or not trusted. For example, SOA security is a requirement of each and every one of the SOA services that will be needed. This requirement is addressed (albeit confusingly and verbosely, but that's inevitable with consensus standards) by the WS-Security and Liberty Alliance standards. And those standards are implemented by almost every middleware vendor's access management products, including Microsoft, Sun, Computer Associates and others.

Trusted implementations are not as robust but the road ahead is at least clear, albeit clumsy and expensive today. The absence of support for strong modularity (ala OSGI) in tools such as Java doesn't help, since changes in low-level dependencies (libraries) can and will invalidate the trust in everything that depends on them. Sun claims to have submitted OpenSSO for Common Criteria accreditation at EAL3 last fall (as I recall), and I heard that Boeing has something similar planned for its proprietary solution. I've not tracked the other vendors as closely but expect they all have similar goals.

Competing trusted implementations is a different matter that may well take years to resolve. Becoming the sole-source vendor of a trusted implementation is every vendor's goal because they can leverage that trust to almost any degree, generally at the buyer's expense. Real bricks are inexpensive because they are available from many vendors that compete for the lowest price.

Open Source Software

In view of the importance of the open source movement in industry and its growing adoption in government, it's important to point out why it doesn't appear in the above list of critical changes. What matters to the enterprise is that there be competing trusted implementations of consensus standards, not what business model was used to produce those components.

  • Trust implies a degree of encapsulation that open source doesn't provide. Trust seems to imply some kind of "Warranty void if opened" restrictions, at least in every context I've considered.
  • The cost and expense of achieving the trusted label (the certification and accreditation process is NOT cheap) seems very hard for the open source business model to support.
  • The difference between Microsoft Word (proprietary) and OpenOffice (open source) may loom large to programmers, but not to enterprise decision-makers more focused on whether it will perform all the functions their workers might need.
  • Open Source may make more sense for smallest granularity components at the bottom of the hierarchy (mud and clay) that others assemble to make (often proprietary) larger granularity components.

Concrete Recommendations

Enough abstractions. It's time for some concrete suggestions as to how DOD might put them into use in the FEA/DoDAF context.

Beware of one-size-fits-all panacea solutions: SOA is great for horizontal integration of houses to build cities so that roads, sewers and power will interoperate. But SOA is extremely poor at vertical integration, at composing houses from smaller components such as bricks. Composing SOA services from Java class libraries is mud-and-wattle construction, which is not even as advanced as mud-brick construction. One way to see this is in SOA security, for which standards exist as well as (somewhat) trusted implementations. SOA security can be factored into security features (access controls, confidentiality, integrity, non-repudiation, mediation, etc) that can be handled either by monolithic solutions like OpenSSO or repackaged as pluggable components as in SoaKit. Yes, the same features can be packaged as SOA services. But nobody would tolerate the cost of parsing SOAP messages as they proceed through multiple SOA-based stages. Lightweight (sub-SOA) integration technologies like OSGI and SoaKit (based on OSGI plus lightweight threads and queues) would be ideal for this role and would add no performance cost at all.

Publish an approved list of competing trusted implementations: This doesn't mean to bless just one and call it done. That is a guaranteed path to proprietary lock-in. Both "trusted" and "competing" must be firm requirements. At the very least, trusted must mean components that have passed stringent security and interoperability testing, and competing means more than one vendor's components must have made it through those tests.

Expose the use of government-approved components in FEA/DoDAF: These currently expose only what is to be constructed and its impact on agency performance to stakeholders, leaving how to be decided later. How is a major stakeholder issue that should be decided well before project funding, such as whether components from the government-approved list will be used to meet non-functional requirements such as security and interoperability. As a rule, functional requirements can be met through ad hoc construction techniques. Security and interoperability should never be met that way.

Leverage trusted components in the planning process: The current FEA/DoDAF process imposes laborious (expensive!) requirements that each of  hundreds of SOA-based projects must meet. Each of those projects has similar if not identical non-functional requirements, particularly in universal areas like security and interoperability. If trusted components were used to meet those requirements, the cost of elaborating those requirements could be borne once and shared across hundreds of similar projects.

So what?

OMB's mandate to provide better oversight is likely to accomplish exactly that if it doesn't engender too much bottom-up resistance along the way. But to belabor an overworked Titanic analogy, that is like concentrating on auditing the captain's books when the real problem is to stop the ship from sinking.

The president's agenda isn't better oversight. That's someone else's derived goal which might or might not be a means to that end. The president's goal is to improve the performance of government agencies. Insofar as more reliable and cost-effective use of networked computers is a way of doing that, and since hardware is rarely an obstacle these days, the mainline priority is not more oversight but reducing software cost and risk. Better oversight is in a possibly necessary but definitely supporting role.

The best ideas I know for doing that are outlined in this blog. They've been proven by mature industries' millennia of experience against which software's 30-40 years is negligible.

Apr 7, 2009

Agile Enterprise Architecture? You bet!

The title of yesterday's post caused me to google "Agile Enterprise Architecture". And sure enough, others got there before me. Lots of them.

  • AgileEA (Agile Enterprise Architecture) is a free open source EA Operational Process. It is a framework designed either to be used as is, or to be tailored and published as your own Enterprise Architecture Operational Process.
  • Eclipse EPF Composer is an Eclipse-based editor for building a new EA or refining an existing one.

The EPF EA Composer is only supported on Windows, RedHat/SUSE Linux and "perhaps others". I tried it on a Parallels Ubuntu VM inside Mac OS X. It loaded fine, but crashed with an NPE on clicking the various models. Apparently there's a rich text component that isn't quite portable.

So I installed it on my Windows VM, and was frankly impressed. The Composer installed flawlessly (not usual with eclipse in my experience) and ran perfectly. It wasn't immediately obvious how to use it, but it comes with tutorials that brought it home quickly.

Better yet, there's a community developing EPF "plugins" (terrible name IMO). These aren't code, but architectural approaches, such as Scrum. For example, Eclipse has a plugin download page that allows their Scrum process model to be loaded into EPF. The same site also provides a version of this model published as HTML. That is produced by installing the Scrum library into EPF, editing it to suit local conventions, and publishing it as HTML.

In my own work with Scrum, we used wikis to convey our ever-evolving conventions to each other and our stakeholders. Those were like communal legal pads that record whatever people write there. But this unstructured approach means web pages become chaotic. And since navigational links between pages are up to each contributor, it becomes hard to find anything as the site grows over time.

EPF is more like a workbook. It starts with most of the text you need to define a Scrum methodology, such as descriptions of the key roles and responsibilities. But it allows these to be changed or extended with whatever is missing. All pages are automatically linked into a hierarchy so browsing is much easier. And the structure automatically distributes different roles to different sections, which itself helps to keep the structure intact.

One concern, possibly an inevitable one, is that publishing to HTML implies a centrally planned approach, albeit one that might be ameliorated by distributed development techniques such as publishing the evolving model for distributed access via Subversion or similar. That is, the only ones empowered to contribute are EPF users with access to the enterprise model. Everyone else gets read-only access to the published (HTML) results, and cannot influence the architecture directly. This is probably inevitable, and arguably good enough for less agile deployments. A middle ground might also be workable, such as importing the published HTML into some writable format such as a wiki, with a select few responsible for manually moving information from the wiki into the underlying model.

Apr 6, 2009

Enterprise Architecture and Agility?

Lately I've been doing a deep-dive into the Enterprise Architecture literature (a long-standing but largely latent interest). I've been struck by the fact that the EA process model is predominantly top-down: a cycle that begins with getting buy-in/budget from top management, proceeds through planning and doing, and ends with measuring how well you did, before iterating the same cycle until done. Measuring is the last step in each cycle. Shouldn't it be (one of) the first?

Obviously the goal of any EA effort is making a lasting improvement to an organization. That means influencing decisions of people outside the EA team, who are by definition elsewhere in the organization. Centrally planned dictates from on high are notoriously ineffective at that, if only because effective planning relies on tacit local knowledge that is hard to acquire centrally. For the academically-minded, Friedrich Hayek wrote at length about this in The Use of Knowledge in Society. For the more concrete-minded, the lack of tacit knowledge at the center is one of the main reasons that Russia's planned economy failed.

By contrast, the software development community has been shedding its centrally planned roots (the "waterfall model") in favor of Agile methodologies. Among other things these push decision-making as far down the hierarchy as possible. Can such approaches work higher up, not just for software development, but for EA? I'm not talking either-or but hybrid...getting top-down and bottom-up working together for improvement.

What about making the "last" step (metrics) one of the first? I.e., instead of spending the first iteration exclusively in the executive suite, what about devoting some of that first iteration to measuring (and *reporting* on) the as-is system? Up front, at the beginning, instead of waiting until the first round of improvements is in place. You'll need as-is numbers anyway to measure the improvement, so why not collect them at the beginning, as one of the first steps of an EA effort?

The lesser and most obvious reason for collecting as-is metrics at the beginning is that you're going to need them at the end of the first iteration to know how well the changes actually worked. But the larger reason is that "decrease widget-making cost" is more understandable to folks on the factory floor than the more abstract goals of the executive suite, "increase market share" and the like.

Mar 27, 2009

Masterminds of Programming Book

O'Reilly's new Masterminds of Programming book includes interviews of me and Tom Love (Objective-C), Falkoff (APL), Kurtz (BASIC), Moore (FORTH), Milner (ML), Chamberlin (SQL), Aho, Weinberger, and Kernighan (AWK), Geschke and Warnock (PostScript), Stroustrup (C++), Meyer (Eiffel), Wall (Perl), Jones, Hudak, Wadler, and Hughes (Haskell), van Rossum (Python), de Figueiredo and Ierusalimschy (Lua), Gosling (Java), Booch, Jacobson, and Rumbaugh (UML), Hejlsberg (Delphi). It's an honor to be part of that crowd!

My interview reiterates and expands on the brick analogy I've developed in this blog. My interests are more in the components that languages produce, not languages themselves. I have often described Objective-C as the soldering gun that helps to build and use Software-ICs. I had exactly that metaphor in mind when I invented it.

However, OOP-style classes are very small granules; grains of sand when bricks (and even larger components) are needed. SOA services are very large-granularity components. The trusted security components mentioned below are small (sub-SOA) components used to make them. In other words, if the enterprise is a city, SOA services are buildings, trusted security components are bricks to make the buildings, and OOP classes are the sand, mud and straw used to make bricks.

The problem is that our industry has not yet started making real bricks: fully encapsulated components (no dangling dependencies) with standards-compliant interfaces, tested for compliance and certified as trusted. Rather we use mud bricks, constructed at each building site from whatever mud and straw is at hand (JBoss, Spring, etc.) by whoever is part of that project. Since components aren't trusted, every SOA service must be individually tested and certified from the ground up, since the quality of mud bricks depends entirely on the skill of whoever made them. It's a big problem.

Certification and testing are just one of the differences. As with mud vs. real bricks, there are technical differences involved in tightening up encapsulation so that changes in underlying dependencies won't invalidate a trusted component. That is why I use OSGI as the basis for SoaKit, a suite of SOA security components I've been working on for several years. I'll describe it in more detail when I get a moment.

I'll close here by relating this analogy to the Free and Open Source (FOSS) movement. FOSS is typically involved in producing low-level classes (mud and straw) for others to make higher level components from. FOSS components are free, like mud bricks, which anyone can build from the mud and straw at any construction site. Real bricks, on the other hand, are not free, although they are made from exactly the same no-cost materials. The trust requirement leads to an obvious business model.

It may seem odd that anyone would choose to buy real bricks when mud bricks are available for free. Yet that is the norm in every industry but ours.

Sep 13, 2008

Malik's Laws of Home Construction

One of our architects posted a link to Malik's Laws of Service Oriented Architecture which argues that building reusable services/software is futile. I responded with this version that substitutes "brick" for "service" throughout:

Malik's Laws of Home Construction

  • No one but you will build the bricks you need in time for you to use them
  • If you build a brick that no one else asked for, you will have built it for yourself
  • If you build a brick for yourself, you will optimize it for your own use
  • It is therefore the optimal brick for you to use
  • It is very unlikely to be the optimal one for anyone else to use
  • No one besides you will use it
  • You will not use anyone else's

And so forth. Notice that Malik is 100% right in the context of primitive (mud brick) construction and 100% wrong in the context of modern (real brick) construction.

Also notice that customers never clamor for a transition from the mud bricks they're used to. The statement of work invariably specifies more of the same, meanwhile complaining bitterly of labor costs, weathering, and roofs collapsing on their heads. As if this were a law of nature instead of a shortcoming of the mud brick approach to architecture.

The transition from primitive to modern is a slow and evolutionary process that isn't even mainly technological. It's mostly about trust building, which only begins when a pioneer takes the risk and their customer starts telling their friends. I'd like us to be that pioneer (and yes, I know about pioneers and arrows).

DNI Open Source Conference 2008

I attended the DNI Open Source Conference yesterday but left right after the keynote, as soon as I realized that "Open Source Intelligence" is not at all what we mean by "Open Source Software". We mean pipes. They mean contents. And I find DNI's meaning deeply disturbing.

Part of it was the keynote speaker's "double humped camel" analogy where the gap between humps was the budget cuts of the 1990s. He followed with a "moment of silence for 9/11 victims" which I realized was a triumphant celebration of his camel's second hump in politically-correct disguise.

DNI's meaning of "open source" is basically anything that's not nailed down (as distinct from "closed source" which is). He alluded to that meaning in "there’s real satisfaction in solving a problem or answering a tough question with information that someone was dumb enough to leave out in the open".

He's not talking about software. He's talking about sifting thru mountains of irrelevant information about people's daily lives to draw half-baked conclusions from anything they find there. Airline records. Credit card records. Speed cameras. Street cameras. Anything that's not nailed down.

And that scares the bejesus out of me. Obviously because of the police state implications, but also because of the dubious quality of this information. Yes, it's free, and worth every penny. Arithmetically better refining of exponentially lower-quality data is just not as effective as putting boots on the ground to develop quality intel resources.

Didn't 9/11 teach us what comes from relying on high-tech SIGINT at the expense of low-tech HUMINT? Especially with even more second-hump resources to get in each other's way?

Grumpf. Oh my country.

Jul 23, 2008

The Mud Brick Business

Mike Taylor wrote: Creating high-quality software is an interesting mix of art and science. At least 20 years ago innovative leaders like Brad Cox and Tom Love (inventors of Objective C) began describing a "software industrial revolution" in which the process of creating software would move from an art done manually by skilled craftsmen to an "industrialized" process that allowed high-quality systems to be built from well-tested reusable parts. This dream remains largely unfulfilled.

In 2007, Accenture CTO Don Rippert described "Industrialized Software Development" as one of eight major trends that Accenture has identified as likely to have major impact on IT over the next five years.

This triggered this email thread on "The Mud Brick Business"

It's amazing how the Software Industrial Revolution and Software-IC metaphors keep turning up (flattering nonetheless). Recently I've been leaning toward a new metaphor that contrasts primitive (mud brick) and modern (real brick) architecture, which I find useful for understanding why that "dream remains largely unfulfilled".

The difference between mud brick craftsmanship (cut-to-fit services) and real bricks (trusted standards-based products) has little to do with brick-making technology and more to do with trust, standards, business models, etc. An innovative mud brick worker can't just decide to quit hauling mud (selling services) and start making standard bricks (products). That requires a business model in which selling standard bricks (SoaKit components) is viable compared to selling services (making mud bricks). Mud bricks are "free". Real ones are not.

Yet somehow that bridge got crossed in antiquity, to the point that we take it for granted today. It's discouraging that it is still out of reach in software to this day.

I later responded to a misunderstanding: Mud bricks are made from existing materials too (soil, straw, water), just as our mud brick workers use existing materials for low-level stuff — mainly using, seldom contributing.

Our work is building custom homes with mud bricks, making whatever non-standard untrusted components we need along the way (mud bricks) from raw materials, not taking the evolutionary leap to standard trusted components (real bricks) that anyone can understand, trust and reuse. The evolutionary leap isn't to pre-fab houses; it's to using pre-fab trusted components (real bricks) to build custom houses, just like real engineers/architects do it.

Concise description is tricky because software engineering has not evolved trusted components, nor even a vocabulary for describing them, unlike housing, where there are dozens if not hundreds of well-understood integration levels; a nearly infinitely fractal tree of integration technologies (I build the house, you build the water pump, they build the motor, somebody else makes axles, somebody else steel, somebody else digs the ore, etc.). Those nouns are understood, and thus trusted, in ways that software nouns, terms like "Access Manager" and "Access Agent", are not.

The closest software engineering gets to a "real brick" is a computer application, with Java classes a distant second. Two levels (more like 1.2), unlike real engineering, which has thousands of well-understood levels. That's where SoaKit comes in as a modest beginning. It adds one new encapsulation layer between classes and applications, with OSGI as the membrane between inside and outside.
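The "membrane" idea can be sketched in plain Java. SoaKit itself relies on OSGI bundles to enforce the boundary at runtime; the names below are hypothetical stand-ins for illustration. Only a published interface crosses the membrane, while the implementation stays hidden behind it, so changes inside can't invalidate the component's users.

```java
// A plain-Java sketch of the encapsulation layer OSGI provides:
// consumers see only the published interface; the implementation class
// is private, so nothing behind the membrane can leak out.
public class Membrane {
    // The only type that crosses the membrane.
    public interface AuditStage {
        String process(String message);
    }

    // Hidden implementation. In OSGI this would live inside a bundle
    // and be published via BundleContext.registerService, with the
    // framework enforcing that no other class is visible.
    private static class DefaultAuditStage implements AuditStage {
        public String process(String message) {
            return "[audited] " + message;
        }
    }

    // Stand-in for an OSGI service lookup.
    public static AuditStage lookup() {
        return new DefaultAuditStage();
    }

    public static void main(String[] args) {
        System.out.println(lookup().process("sensor-reading"));
        // prints: [audited] sensor-reading
    }
}
```

Plain Java can only approximate this with visibility modifiers; the point of using OSGI as the membrane is that the hiding is enforced by the classloader, not by convention.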

A challenging issue for software architects is... Does a mud brick business need an architecture role? I think not. Architecture never existed until construction made the shift to using trusted components (real bricks). Before then, there was only a customer, some laborers, and someone to choose mud and straw and mediate customer needs to the laborers. Everything from there was just cut to fit. Architects only emerged when there were so many trusted components available that a specialist was needed to choose between them.

Aug 25, 2007

DOD needs bricks, not just java clay and sand (ala JBI/SCA/SoaKit)

Note Added: This is an old article that I wrote to address the SOA blindness that is now prevalent within DOD: the belief that SOA-scale integration is all that DOD needs. It tries to show why an intermediate level of integration can address a major unsolved problem: SOA security and interoperability.

JBI and SCA seek to provide this intermediate level, which is why I singled them out by name. But on digging deeper, they turned out to be far more complex than this problem requires. I eventually adopted a much simpler approach based on plain threads and queues, with OSGI for modularity. I call this approach SoaKit: a collection of pluggable components for addressing SOA security and interoperability. Such components are the bricks mentioned throughout this blog. Plain Java class libraries are the clay and sand.

Java Business Integration (JBI) is a Sun initiative to provide an Enterprise Service Bus for SOA services. Service Component Architecture (SCA) is a newer and far more ambitious OASIS-sponsored initiative to meet the same goal in a language-independent manner. SoaKit was originally based on JBI but has since switched to a much simpler bus based on native Java threads and queues. The explanation of the need remains sound, although SoaKit is no longer based on either JBI or SCA.

Consider a project building a SOA service; a sensor application, for example. DOD would like to deploy that service to many different environments, ranging from the lab's firewall-protected LAN to hostile environments on platforms as diverse as submarines, aircraft, and land vehicles. Each platform has unique communication capabilities and each threat environment has unique security and interoperability requirements.

But if projects are responsible not only for a service's core functionality, but also for its communication, security and interoperability requirements, the ability to reuse services across threat environments, platforms, and communication infrastructures is lost. So long as projects are responsible for developing a service's full functionality in programming languages such as Java or C++, services cannot be deployed to diverse environments without changing the code and retesting it from the ground up.

To gain SOA's promise, we need a way to let service development projects focus on their core competencies (the project's functionality objectives; sensor functionality in this example), while enterprise requirements (connectivity, threat environments, security, communication links) are addressed without changing the product's (the sensor service's) core logic.

A relatively new standard is available that does exactly that, although its advantages are largely unknown, obfuscated into oblivion by astonishingly bad terminology. JBI (Java Business Integration) is actually an integration technology similar to SOA. I'll concentrate here on why JBI is so important without explaining the arcane terminology used by its devotees, as that is available elsewhere.

JBI is an integration technology, just as soldering irons are the integration technology for making circuits out of chips, resistors and so forth. The technology is wielded by a new class of developer that I'll call configurators to distinguish them from programmers. Just as programmers assemble JBI components from lower level objects in programming languages like Java, configurators assemble SOA services from pre-existing JBI components by using XML as the configuration language.

Our general-purpose sensor service is one example of a JBI component, but there are many others. Anything that abides by the JBI specification can be used as a JBI component. JBI development environments (Glassfish is one of several examples) provide many more that configurators use to deploy bare Java functionality to meet the requirements of each new operational environment. Attach a pair of LAN transport components and the service qualifies as a participant within a fire-walled SOA lab environment. Replace them with transport components suitable for an aircraft or submarine and the sensor component can participate as a service within a low-threat SOA environment. To deploy to higher-threat environments, just configure in encryption, signing and integrity components, all provided by the JBI environment. If the sensor must communicate with incompatible services, mediation/translation components can be added just as easily.

JBI does this by defining an internal bus like the external bus that SOA services use to communicate. Configurators use XML to connect off-the-shelf JBI components via this bus. The net effect is that new components (like that sensor package) can be developed and tested for functionality without concern for how the service might be deployed later. This allows sensor experts to concentrate on building sensors without concern for matters beyond their specialty, such as how to secure that sensor against ever-changing threats, platforms and communication technologies.

The same advantages apply to testing. Without JBI, obtaining authority to operate for thousands of SOA services involves testing not just each service's functionality, but whether it complies with security, interoperability, and transport constraints. JBI means that a new service need only be tested for whether its core functionality is correct. The service can then be protected by pre-existing encryption, signing, verification, transport and even mediation components, each of which is developed and tested within its own independent development cycle.

Aug 24, 2007

SCA Capabilities

mrowley writes (as I understand him) that security characteristics (authentication, authorization, confidentiality) are among SCA's responsibilities.

That diverges from my view, where security is something you *compose* by developing/composing SCA components, as distinct from something that SCA *provides*. The latter requires BEA to get it right, whereas in my view, there is no "right" in a broad enterprise like DOD that lacks a consensus (a standard) for what "security" really means. You're stuck with building adapters (like Objective Gateway) as distinct from letting BEA do it for you.

This goes back to why I might seem obsessed with escaping whatever requirements there might be for WSDL-based type-checked components in favor of untyped (by WSDL) components, which really means the component does type checking on whatever comes its way.

This is the same old static- vs. dynamic-typing argument from the Ada vs. Smalltalk wars. That ultimately boiled down to: the real world demands both. In our case, you really need WSDL-specified typing of SOA services as a whole (because the standard says so). But not for the JBI components you compose those services from. They must be defined as xsd:any so that they can be stored away and reassembled later to accept whatever SOA service/SOAP message you're doing that day.
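A hypothetical sketch of what such an untyped component looks like in practice (the class and payload formats below are invented for illustration): the compiler sees only Object — the Java analogue of xsd:any — and the component does its own type checking at runtime, rejecting payloads it can't handle.

```java
import java.util.Map;

// An "untyped" component in the xsd:any spirit: no WSDL-fixed type is
// compiled in, so the same component can be stored away and reassembled
// into whatever service composition comes along later.
public class UntypedComponent {
    public static String handle(Object payload) {
        // The component, not the compiler, decides what it accepts.
        if (payload instanceof String) {
            return "text:" + payload;
        } else if (payload instanceof Map) {
            return "record with " + ((Map<?, ?>) payload).size() + " fields";
        }
        // Unknown payloads are rejected at runtime, not at compile time.
        throw new IllegalArgumentException(
                "unsupported payload: " + payload.getClass().getName());
    }

    public static void main(String[] args) {
        System.out.println(handle("hello"));        // text:hello
        System.out.println(handle(Map.of("a", 1))); // record with 1 fields
    }
}
```

The trade-off is exactly the one from the static vs. dynamic wars: type errors surface later, but the component stays reusable across service compositions that a WSDL-fixed signature would have ruled out.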

Just as in my simple auditing example, which really applies to every security component (authentication, authorization, confidentiality, integrity, non-repudiation, ...), and arguably to every JBI component that aspires to broad reuse.

Especially including BCs...I just hope the Sun soapBC gotchas haven't spread to the others.