National Intelligence Folly: How a Tragic Unsolved Murder Led to Billions of Dollars of Program Fraud, Waste and Abuse Part 20: Good Money After Bad

Article 19 touched on the burgeoning costs of the Future Imagery Architecture (FIA) optical component and noted that we have two serious problems, or challenges, with the process, particularly the way we (the US government) run and fund these programs.

Our process might make sense for a lot of engineering projects, but in many of the examples described so far (with the threat to go through some ~40 of them), all the process, documents, meetings and handwringing resulted in good documentation and a “check-the-block” down the line of requirements, but did not and has not saved, or delivered, any of these “bad programs.” And what makes for a “bad program?”

Well, Jeff Foxworthy (a story an Infantry guy from Alabama can appreciate) would say, “If you lose one year of schedule in the first three years, costs at least double, you no longer have a date-certain delivery and your launch date is TBD, you may be a bad program!”

On the contrary, all the paperwork, process, milestones and gates can be self-defeating in the many instances where a faulty concept, or an insufficiently engineered or inadequate solution, trundles through the process, checking blocks and burning down requirements, and is not revealed until the user meets the capability, if it even gets that far (most don’t): particularly for technology-intensive efforts.

The other problem is with the budget and execution, or program, process, where dollars are allocated and the spending often seems to go on autopilot for these problematic programs. Before extending my rant in this regard, I want to acknowledge that the vast majority of programs do well and meet goals and benchmarks without much fuss or fanfare, contrasted with these costly, often poorly managed, failed program examples that taint the good ones and usually result in more regulations, procedures and mandates.

Developing and acquiring capability has followed somewhat the same process since the development of the spear, the wheel and the catapult. There has always been tension between objectives and managing the available resources, applying them in the most efficient way that fits with existing plans, capability, organizational structures and tactics, techniques and procedures. You can imagine the first spear being developed for peak lethality but fielded as a crew-served weapon because of the need for more “heft,” the weight and penetration speed it had to achieve, and the accuracy necessary to take the target down (could be a Mel Brooks movie).

Modern man has striven to develop budget, acquisition and procurement procedures to manage new and emerging capability, as well as things like the recapitalization of existing assets and resources (e.g., ships, tanks, physical plant, etc.), managing pre-planned product improvements (P3I) and achieving all of it within available resources, including budget, time/schedule, and concepts of operations for use that had to mesh with doctrine, organization, training, materiel, leadership and education, personnel and facilities (DOTMLPF).

What is now the Pentagon’s Planning, Programming, Budgeting and Execution (PPBE) effort reflects updates to Secretary of Defense Robert McNamara’s original programming and budget-costing methodology (the Planning, Programming and Budgeting System), developed from his experiences as an automobile manufacturing executive. Henry Ford pioneered the engineering manufacturing process that was ideally suited for a project like producing cars: stamping out, in effect, bottle caps and widgets. Ford was producing an automobile in some 84 steps in a drive to reduce cost to the point where there was a “Model T Ford in every pot.” The secret sauce to this achievement was interchangeable parts.

McNamara’s programming effort works but has proven to be cumbersome and somewhat ill-suited in cases where the cycle of emerging technology and Moore’s Law run circles around the pace of deliberate programming. Just consider Moore’s 1975 postulation that “the number of transistors in a dense integrated circuit doubles about every two years.”
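To put a rough number on that mismatch, here is a minimal sketch (my own illustration, not anything from the programs discussed) of how far transistor counts move over a typical program life cycle if the two-year doubling holds:

```python
# Simple illustration of why a two-year doubling cadence outruns a
# decade-long acquisition cycle: a 10-year program spans about five doublings.

def moore_growth(years: float, doubling_period_years: float = 2.0) -> float:
    """Factor by which transistor counts grow over `years` under Moore's Law."""
    return 2 ** (years / doubling_period_years)

for program_length in (2, 5, 10, 15):
    print(f"{program_length:>2}-year program: ~{moore_growth(program_length):.0f}x "
          "the transistors available at contract award")
```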

McNamara’s process as it has evolved works, but one of its major issues is that it totally disregarded politics. The importance of that aspect in government projects was realized early on by NASA, among other federal agencies and contractors, which liberally and generously spread contracts and parts manufacturing across all 50 states and 435 congressional districts.

When wielded with credibility, these processes are comprehensive and successful and apply a variety of analytics when and as needed. More importantly, the process itself helps foster success by applying a pace to actions to meet deadlines, and it serves as both a forcing function and a checklist to ensure the right documentation and actions are timely and adequate to meet schedule gates and milestones. The involvement of subject matter experts with a vested interest in the success of the process is essential to providing necessary detail for “deliverables.”

The process can “run amok” when wielded by leaders and program managers who forget there is a customer with a need/requirement on the other end of it all. The user often gets lost or forgotten in the shuffling madness when leaders don’t respect the process, or when they approach requirements as burdensome rather than as informative for decision makers and as evidence of due diligence and the health of the program. An issue I’ve frequently seen is treating every aspect of a program as equal, somewhat of a pain, to be satisfied with a “block check.”

This promotes and actualizes situations like in the swamp, where legislation becomes a “catch-basin” of sorts for anybody, and everybody, who has spent time expressing any thoughts or ideas on the subject, which often adds blather and pablum to a pending bill to the point where it gets so big, sometimes several thousand pages (~4,000), that it becomes nearly impossible to assess and review, necessitating the worst practice imaginable of passing it to see what is in it. Kind of like if you never weeded or pruned your garden.

Contractors are most often paid to complete aspects of the program documentation, which sometimes become like cereal boxes: 90% air and a lot of settling of contents. Program documentation can be repetitive, boilerplate regurgitation dominated by statements of the requirements while light on the solutions.

Program dollars often flow through programs with scant regard for the most technically difficult or challenging aspects. Easy to say: but what’s the alternative? Consider the case where the services were developing a Common Imagery Processor (CIP) for the Distributed Common Ground Station (DCGS) to process imagery from mission data (pixels). The CIP was one of a handful of critical functions for the system, with a capabilities and requirements specification matched to the objective data rate capacity for the system; let’s just say it was variable at 10, 70 or 274 Mbps (Common Data Link rates; the exact figure doesn’t matter).

Just about every program has a “critical path” for key aspects of the development that feed or inform the Key Performance Parameters (KPPs) that define success, often expressed in terms of initial or entry-level, threshold, and objective capability values.

Critical sub-systems or functions will have Key System Attributes (KSAs).

An important function for the CIP was error-checking software that audits and performs custodial functions to ensure completeness of the downloaded image, a standard function done routinely on every Internet Protocol (IP) connection today.

The CIP bookkeeping process dictates a buffer specification to ensure an elegant error-check function (solution) that doesn’t either slow down the system or chuck good data on the floor and lose its place in the processing or checksum function: that’s a bad thing.

Much of the manufacturing of the CIP is boilerplate, but a risk reduction activity would be warranted to experiment with data rates, processing speeds, projected error rates and checksums to size the buffer properly and get the right balance between throughput, processing, caching/buffering, reordering and latency.
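As a crude illustration of what that risk reduction experiment is trying to pin down, here is a minimal back-of-the-envelope sketch; it is my simplification, not the actual CIP design, and the stall duration and safety factor are assumptions you would replace with measured values:

```python
# Hypothetical buffer sizing: during a worst-case processing stall (checksum,
# reorder, error handling), the buffer has to absorb the full link rate or
# data ends up on the floor.

def required_buffer_mb(link_rate_mbps: float,
                       worst_stall_seconds: float,
                       safety_factor: float = 2.0) -> float:
    """Rough buffer size in megabytes to ride out a worst-case stall."""
    backlog_megabits = link_rate_mbps * worst_stall_seconds
    return (backlog_megabits / 8.0) * safety_factor  # megabits -> MB, padded

# Common Data Link-class rates mentioned above: 10, 70 and 274 Mbps.
for rate in (10, 70, 274):
    size = required_buffer_mb(rate, worst_stall_seconds=2.0)
    print(f"{rate:>3} Mbps link, 2 s worst-case stall -> ~{size:.0f} MB buffer")
```

The point is not the particular numbers; it is that the sizing falls straight out of quantities you can bound with a cheap experiment before committing the design.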

The value of a program management function is all about identifying the above as a high-risk element of the program, committing to a risk reduction cycle to flesh out the requirements while sequencing it properly to meet the gates identified in the master schedule, and having contingency branches for bad outcomes.

At one point, NIMA acquisition was involved in the manufacture of the CIP for the USAF JSIPS. Some combination of inadequate modeling and poor implementation of requirements, or simply the lack thereof, resulted in a system that did not perform as needed, with a tendency to lose processing pace, dump queued data to the buffer, and, when the buffer reached a certain level, dump data to the floor.

If I remember correctly, it was a $6 million embarrassment to fix, which was a significant and unexpected additional development cost. Particularly when you compare it to other services’ approaches (like the Army’s), which early on procured the existing NRO design and never missed a beat, nor had to do any studies, scoping or risk reduction work: just integration, baby (Telly Savalas…)

The above CIP issue, in the 1999-2001 timeframe, resulted from inadequate or faulty system performance modeling and analysis of this process under data load, or a failure to do it at all. We surely have the measure of this type of problem by now, right?

Well, in ~2006 our business unit engineered what was called, imaginatively, “Delivery 1,” representing, among other upgrades, new database load-leveling software called “Aqua Logic.” The code base was tested in our agency test organization, which was very well funded to do so, being responsible for maintaining the official baseline of 3 of our 4 enterprise architectures (my business unit had the 4th). The software and function came billed as one of those no-brainer, bullet-proof efforts that was certified “plug and play:” a gitchy phrase predominating at the time that always drew a hairy eyeball from me.

Among its many functions was maintaining the master reference or “point stack reference” used to coordinate database actions between our systems and some 36 distributed nodes. We tested the functionality through about a ~3% load, simulating the stress of connecting to the distributed nodes and a transaction through scripting. But to keep this short, we cut over to the Del 1 baseline, and shortly thereafter Aqua Logic pooped the bed once it hit somewhere beyond about sixty thousand system “transactions” and reset the stack pointer reference in a manner that lost every node wherever it was in the process. I later found out our test organization only stress-tested the functionality to about 1% of a “scripted load,” with none of the dynamics experienced on any of the networks. We backed out the code and went with the system we already had, which worked fine because it only had one job and did it without error.
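For what it is worth, here is a toy sketch of the lesson, not the actual Aqua Logic code: a hidden capacity defect in a shared reference sails through a few-percent load script and only shows up when you drive the transaction count past realistic volume. The class name and the 60,000 limit are illustrative assumptions on my part.

```python
# Toy illustration: a silent reset defect that only appears under realistic load.

class StackReference:
    """Stand-in for a master pointer/reference shared by distributed nodes."""
    def __init__(self, limit: int = 60_000):
        self._limit = limit          # hidden capacity defect, unknown to testers
        self._value = 0

    def advance(self) -> int:
        self._value += 1
        if self._value > self._limit:
            self._value = 0          # silent reset: every node loses its place
        return self._value

def stress_test(transactions: int) -> bool:
    """Drive the reference and fail if it ever goes backwards."""
    ref, last = StackReference(), 0
    for _ in range(transactions):
        current = ref.advance()
        if current <= last:          # reference reset: defect exposed
            return False
        last = current
    return True

# A "1% of load" script passes; a test at realistic volume does not.
print("  1,000 transactions:", "PASS" if stress_test(1_000) else "FAIL")
print("100,000 transactions:", "PASS" if stress_test(100_000) else "FAIL")
```

Scripted load at a few percent tells you the happy path works; it says nothing about where the counters, buffers and references fall over.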

Earlier in this series I went into somewhat nauseating detail about storage alternatives, where our program undertook a risk reduction effort that cost $450K with the potential to save over $25M and jump to the next evolution in the objective architecture. The alternative was well worth the investment, worked much better than we expected, and was planned in a way that the system under experimentation would be used in the implementation should it be successful: it was not a throw-away, in any event.

When you have an approved enterprise architecture solution that is being executed, many “experts” consider such an investment an unnecessary risk or diversion, and sometimes it is. But with technology, compute, cloud and analytics changing faster than these programs deliver, we have to find a way to better plan for, adopt and implement program flexibility to accommodate innovations without incurring excessive risk: which is kind of the definition of why sometimes you can’t find the problem with the program, because it’s the manager/human, not the system/documentation.

Moore postulated what became his law nearly 50 years ago, and yet programs still fail to take into account the progression of technology and computing, which can change markedly over the life cycle of a program.

Many pigeonhole the above-described action as a “tech insertion,” but I would describe it more as a risk reduction effort that I often characterized as a “stretch goal,” because we had a solution we were implementing but were undertaking an effort that had high payoff in terms of cost savings and performance while incurring minimal risk to the schedule (and it could be accommodated within the design with minimal changes). Contrast that with tech insertion, which I’ve found has no agreed-upon definition and is often mischaracterized in the acquisition process, which is looking for something “plug and play” to minimize disruptions to the ongoing effort. But certainly a topic for another article, given the complexities of the problem in our current program management environment.

I earlier covered a Saturday morning “prayer breakfast” meeting at the Pentagon where the two principals in this costly FIA drama, Boeing President “Robbie” Roberts and the aforementioned former NRO Director Jeffrey Harris, by then (since 2001) LMC President of the Missiles and Space Operations Division, presented program status and financial data. This was relatively late in the saga, what might be referred to as the bottom of the 9th inning, down by billions of dollars with two outs and two strikes on the Boeing optical component, about the time it was the closest thing to a fait accompli or dead man walking known to man, or at least to the group.

Roberts was briefing and I looked down to make a note, when the Director for Force Structure, Resources and Assessment, Joint Staff J-8, Lt Gen James Edward “Hoss” Cartwright grabbed my forearm, leaned over and asked, “Is that per day?” Which, coincidentally, I had just made a note on; it was. Boeing was burning over ~a million a day on the optical component at a time when it was at least a year late, and the prospects remained murky enough that a “window” or time frame for first article completion had replaced the delivery date on the FIA Master Program Schedule, with the launch listed as “to be determined.” Oh, and there was no slack left in the schedule (I asked).

I would have to rack my brain and shake something loose to find another similarly poor performing program. Not really: one of the biggest that comes to mind, which had similarly devastating schedule issues with no slack left whatsoever (we rarely talk about “negative slack;” it’s more often simply discussed as schedule slip, but if a program falls a year or more behind we usually call it by the technical term: canceled) and was a problem child of similar scale, is the Space Based Infrared System.

SBIRS issues were increasingly the focus of numerous Nunn-McCurdy breaches/violations from 2001 through 2007 or so, typically elevating to waiver briefings to the Under Secretary of Defense for Acquisition, Technology and Logistics (Wynne).

By coincidence (but there aren’t any), SBIRS also started in earnest around 1994, and there were delusions of grandeur about replacing the Defense Support Program strategic early warning satellites with a program that would be tiered, with aspects in low Earth, highly elliptical and geosynchronous (22,500 miles up) orbits, providing improved warning well into the 21st century.

There were many of the same 1990s “gitchy” acquisition nuances that plagued FIA, such as letting the contractors conduct status and critical reviews throughout the acquisition stages and, of course, that tired old saw of “requirements creep” (it’s a conspiracy theory) as an excuse when things went to hell in a handbasket. These lessons relearned were then turned into the acquisition lessons (not) learned best practices of the 21st century: but that would assume that these were actual lessons that we learned from.

The SBIRS LEO component was subsequently broken out into the Space Tracking and Surveillance System program; parts of it were canceled, while other parts were awarded to LMC (rinse and repeat, écoutez et répétez).

So riddle me this: if we truly did learn any of these lessons, why did NRO Director (Gen) Bruce Carlson resign from government in ~2012 over the decision by the ODNI and Congress, “oversight,” hahaha (that’s funny?), to fund projects at the expense of programs? Talk among yourselves; I provided the answer in an earlier article… but he actually resigned because he was lied to by the PDDNI (O’Sullivan) and Congress, who were trimming future investment dollars, in particular from NRO and NSA (because if you need big bucks you have to trim those gardens), to pay for mostly CIA projects that the DNI did not approve for funding as programs in the DNI budget.

It didn’t take a genius to figure out that Boeing optical was a dead man walking, and judging by the prod from Lt Gen Cartwright, it was likely to be “put down” soon. Cartwright would go on to command Strategic Command and become the 8th Vice Chairman of the JCS. He also played a major role in the later 2008 decision to shoot down an NRO satellite that failed shortly after reaching orbit, with the USS Lake Erie employing an SM-3 missile. How major a role? He is the commentator in this video.

Doing the math reveals ~$350+ million a year being spent on, among other things, an optical component that as early as fall 2001/spring 2002 had fallen more than a year behind schedule and seemed destined never to see space. But why not continue to invest in the hope that Boeing would achieve the breakthrough required to make everything right with the program?

Well, this is not like a case where your brilliant child is playing the best “chopsticks” on the piano you’ve ever heard, and you think it is time for a recital for the little genius. A little egg on junior’s face and your embarrassment would not impact the security of the US one smidgerino.

To belabor a bit: by the time the Gap Study tasker was issued by the DCI in July 2002, Boeing had been at it for nearly 3 years and had lost at least a year of schedule in that time, while expending $350M (per above) a year to get to this point.

These numbers can impress, befuddle, amaze and astound those who have not been involved in big government or commercial programs (or projects). A rough guideline or good rule of “budget thumb” is that you get 4 full-time equivalents (FTEs) a year per million dollars in the DC metro area, and, not to belittle or demean, but these are somewhat run-of-the-mill engineers who do 90% of the work on these projects. The specialists, the propulsion, digital signal processing, vibration and satellite electronics PhD-level SMEs, are the “lawyers” of the space industry: they command ~$300-650 an hour and are also somewhat gunslingers, hired guns, who might only put in 80-120 hours across 12-15 projects in the course of a year in between their academic, DARPA, community symposium and government expert input focus.

All to say that with the above logic as the basis of cost, that’s 4 FTEs at ~$250K per year, or 4 per million spent, which means that nearly 1,400 Boeing “swinging Richard” FTEs were charging against FIA as of the late date discussed (although the actual figure in the briefing was 1,172, because I joked with Hoss that it was a Department of Defense Dependent Identification Form’s worth of FTEs (and if you’re worried about me, you should be, as I just remembered that is the note I made to myself to avoid classification issues with the figures (DD 1172, ~18 years ago: medic!))).
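The arithmetic behind those figures is simple enough to check on a napkin; here is a quick sketch, where the ~$250K engineer-year and the rounding are my rule-of-thumb assumptions rather than briefed numbers:

```python
# Back-of-the-envelope check of the "budget thumb": ~4 FTEs per $1M per year.

burn_per_day = 1_000_000                      # ~$1M/day on the optical component
annual_burn = burn_per_day * 365              # ~$365M, the "~$350+ million a year"
cost_per_fte = 250_000                        # assumed ~$250K per engineer-year
ftes_per_million = 1_000_000 // cost_per_fte  # 4 FTEs per $1M

implied_ftes = 350_000_000 / cost_per_fte     # using the article's ~$350M figure
print(f"~{ftes_per_million} FTEs per $1M spent")
print(f"~${annual_burn/1e6:.0f}M/year burn implies roughly {implied_ftes:.0f} FTEs "
      "(the figure actually briefed was 1,172)")
```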

I outlined the estimated cost of the program previously to give a sense of the order of magnitude of this “debacle,” with the caveat that nobody in government, except maybe Congress, has the true cost captured. The schedule referenced above tells you all you need to know about why that is a fact, as when Boeing was terminated, LMC was activated…

 

Maxdribbler77@gmail.com

22 December 2022

LSMBTG: Lamestream media echo chamber (LMEC-L) social media (SM) big tech tyrants (BT) and government (G)

If you enjoyed this article, then please REPOST or SHARE with others; encourage them to follow AFNN

Truth Social: https://truthsocial.com/@AFNN_USA
Facebook: https://m.facebook.com/afnnusa
Telegram: https://t.me/joinchat/2_-GAzcXmIRjODNh
Twitter: https://twitter.com/AfnnUsa
GETTR: https://gettr.com/user/AFNN_USA
Parler: https://parler.com/AFNNUSA
CloutHub: @AFNN_USA

 
