Part 20 worked through the budget process that results in funded programs-except for those limping along as “projects” that don’t quite make it as programs but get special love and attention from the political class, which funds them at the expense of the future.
The picture for Part 20 depicted the Program Objective Memorandum (POM) and Five-Year Defense Program (FYDP) process that results in an executable budget.
I realize the dear reader is well aware that I’ve beaten this dead horse of projects versus programs quite a few times in this series-but only each time it “twitched,” in one or more of the myriad bad examples used to highlight points. That’s not to say there wasn’t coordination of these projects across the community.
But when Art Money-former Assistant Secretary of Defense for Command, Control, Communications, and Intelligence, advising GEN Keith Alexander as leader of the National Security Agency Advisory Group-asked during a National Reconnaissance Office (NRO) POM master schedule briefing (covered in Part X) where some of these “programs” were, only to be informed that they were “projects” and not included in the master schedule, that was telling. Art knows better than anybody that their absence from the master schedule means there is no provision within the enterprise for the things needed to make these programs successful.
Despite my belaboring of this point, I don’t feel I’ve communicated what a terrible, debilitating, wrongheaded, misguided worst practice this is: a colluding theft, a slap in the face, an undermining of all the safeguards and all the checks and balances intended to prevent exactly this kind of activity. Strong message to follow.
That it would be “oversight”-the ODNI and Congress (the Senate Select Committee on Intelligence)-perpetuating this fraud on American taxpayers-who will never know, because of the classification of these matters-is unforgivable. The politicization of these projects, which in effect stole funding from future risk-reduction efforts after they failed to be funded as programs by the DNI, is what led to the resignation of NRO Director Bruce Carlson. It wasn’t so much the stealing of the money as it was the pledge of support from the PDDNI not to do this “again,” followed by the collusion with the Senate behind the scenes to do the “winky, winky” thing, that caused him to resign: did somebody think he wouldn’t notice? I’ve mentioned that in 2012 or 2013 new NRO Director Betty Sapp received the same assurances that Director Carlson received before her: bald-faced lies once again, from the same sources. Betty was a budget person who led the NRO budget office while FIA was playing out, so she knew even more acutely than Director Carlson how bold these lies were.
The NRO FIA program was well integrated within all the checks, balances, and controls built into the system. However, the significance of the delivery ambiguity-and just as important, if not more so, the launch date-is that these are not launches of Estes rockets, assembled on demand out of your local hobby shop with one of those, two of these, a rocket engine, a stage or two, when and as you need them.
Launch costs were probably a minimum of one hundred to several hundred million dollars by the time the sequence was all said and done, consuming some 3-5 years or more from production initiation to recovery and cleanup after satellite release on orbit.
Elon Musk was not making reusable launch vehicles-yet-nor had any firm challenged the dominant player in the business, the United Launch Alliance (the Boeing-Lockheed Martin joint venture): these were one-and-done systems. The most expensive costs associated with launch in a case like this are likely the “carry costs.”
The launch was pretty much a date certain for a long time, because no contractor in their right mind would give such a huge tell about the health of the program by canceling or delaying a launch-way right-except as a last resort. And I forget the exact figure, but keeping an operational rocket that has made it through manufacturing and-heaven forbid-arrived at the launch site in a near-ready state, short of propellant integration and the myriad calibration checks and retests, at one time could cost nearly $10M a month-over and above the bill for the launch itself. Bad things happen to ready-state rockets that sit around waiting to be “lit.”
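To put the carry-cost figure in perspective, here is a back-of-the-envelope sketch using the ~$10M-per-month number cited above; the slip lengths and the $200M launch bill are hypothetical illustrations, not program numbers.

```python
# Back-of-the-envelope carry-cost math for a slipped launch, using the
# ~$10M/month figure from the text. The launch bill and slip lengths
# below are assumed values for illustration only.

CARRY_COST_PER_MONTH = 10_000_000   # ~$10M/month to hold a near-ready rocket
LAUNCH_COST = 200_000_000           # hypothetical mid-range launch bill

def total_launch_bill(slip_months: int) -> int:
    """Launch cost plus the carry cost accrued while waiting."""
    return LAUNCH_COST + CARRY_COST_PER_MONTH * slip_months

for slip in (0, 6, 12, 24):
    print(f"{slip:2d}-month slip: ${total_launch_bill(slip) / 1e6:,.0f}M")
```

A two-year slip on these assumptions more than doubles the effective bill, which is why “to the right” was such a loaded phrase.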
Missing these key dates is what constituted the specter of a Gap that grew closer to realization as the optical component slipped “to the right.”
The fact that the radar and ground components of the contract were on track and doing well was small consolation in this regard-much like my example of the Titanic’s engine performance.
I made the case separately that our largely unacknowledged-at that point in time-national radar capability (one of the worst-kept secrets in history, OBTW, as it is an active sensor that any junior-high science project could have detected) has long lagged its potential in the hands of imagery analysts and decision makers, because of the early insistence-and the practice/convention since-of processing radar collection into images.
You might wonder what the alternative was to processing radar into images. During the Nepal earthquake in April 2015, there was such a tremendous upheaval (reported between magnitude 7.8 and 8.1) that initially no space-based satellite or airborne platform could collect through the atmospheric dust and haze created (at least that was our story): it’s a phenomenon that is common after a number of natural disasters, such as volcanic eruptions, hurricanes, tornadoes, etc. Recall that the recent volcanic eruptions in Iceland resulted in flight restrictions and diversions around them because of the dust and debris that tears up aircraft engines.
One system that did collect was the Italian COSMO-SkyMed first-generation radar satellite. The system provided extensive and valuable coverage of the town, including rubble and debris estimates, analysis of the damage to the town’s infrastructure, and most importantly insights into safe ingress and egress routes for responders.
You might not know this about radar (winky, winky), but it can see in the dark, can penetrate most clouds, dust, and obscurants at ground level or at altitude, and is, relatively speaking, the most accurate sensor other than LIDAR-with far fewer limitations. Radar was a prototype capability under test as early as 1941, and it actually detected the inbound Japanese attack on Pearl Harbor on 7 December 1941-a warning was passed but not acted upon.
One of the more fascinating COSMO-SkyMed data products included radar glyphs, used to reflect flow- and direction-based elevation changes, along with mapping products, Digital Elevation Models, spoilage estimates, and others that measured upheaval and subsidence throughout the area, providing X, Y, Z locations of the rubble. (A plug for a former colleague: see Thomas Ager’s plain, New York-style description of what he calls the “beautiful sensor” in his radar treatise, The Essentials of SAR: A Conceptual View of Synthetic Aperture Radar and Its Remarkable Capabilities, 2021.)
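The upheaval/subsidence products described above boil down, conceptually, to differencing pre- and post-event Digital Elevation Models and flagging cells whose elevation changed meaningfully. A minimal sketch, using tiny invented grids in place of real DEM rasters:

```python
# Minimal sketch of how upheaval/subsidence maps fall out of pre- and
# post-event Digital Elevation Models (DEMs): difference the two grids
# and flag cells past a change threshold. The grids and threshold are
# hypothetical stand-ins for real DEM data.

pre_dem = [
    [100.0, 100.0, 100.0],
    [100.0, 100.0, 100.0],
]
post_dem = [
    [101.5, 100.0, 100.0],   # 1.5 m uplift in the corner cell
    [100.0,  98.0, 100.0],   # 2.0 m subsidence mid-grid
]

THRESHOLD = 1.0  # meters of vertical change worth flagging

def elevation_changes(pre, post, threshold):
    """Return (row, col, delta) for every cell past the change threshold."""
    hits = []
    for r, (pre_row, post_row) in enumerate(zip(pre, post)):
        for c, (z0, z1) in enumerate(zip(pre_row, post_row)):
            delta = z1 - z0          # positive: upheaval, negative: subsidence
            if abs(delta) > threshold:
                hits.append((r, c, delta))
    return hits

print(elevation_changes(pre_dem, post_dem, THRESHOLD))
```

Real products do this at scale against georegistered rasters, which is how X, Y, Z locations of rubble and ground deformation come out the other end.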
Interestingly enough-and to make my point about radar misuse and underutilization-Nepal was not a national priority at the time, so the Operations and Intelligence (OPSINTEL) update that morning was akin to a news-media recap of what was believed to have happened, as “we”-the IC-did not have “eyes on” and had not yet received any of the late-breaking collection changes for emphasis on Nepal-which seems like a pretty sorry end to this discussion.
However, since our research directorate was working with foreign satellite vendors-Canada, Germany, and Italy in particular-we had these products and more to brief, and had also shared them with the US Marine Corps unit leading the government’s first-responder effort on the ground, with vehicles, heading into harm’s way.
All to say that Boeing’s success on the radar component of the FIA contract was not the “drone” the IC-particularly the CIA and NGA, as well as State-was looking for, nor were they that interested in receiving those data in lieu of electro-optical.
Our agency research arm was in the midst of an initiative, “rabble-roused” directly with the Johns Hopkins Applied Physics Laboratory and a number of specialists from Booz Allen Hamilton, to train analysts in the “black art” of data science and to develop a program of instruction for the NGA College-one we believed should be less about familiarization and more focused on leading to certification of data scientists, to grow the IC bench strength. The focus was on the College because, among other missions, it is the certified training schoolhouse for most government imagery analysts-agency and DoD-and the certification entity that validates proficiency.
In the Army, the closest thing to a data scientist was called an Operations Research/Systems Analyst-ORSA-and although during my test-and-experimentation career I personally coined the term “ORSALAC”-with the insinuation of a common-sense deficiency-these folks were worth their weight in gold, and I worked with some great ones from whom I learned a ton.
The plan to up the government’s analytic game-by incorporating instruction on phenomenology and non-literal sensors, e.g., multi- and hyperspectral, LIDAR, radar, millimeter wave, Moving Target Indicator, acoustics, etc.-was part of an ongoing, nearly twenty-year quest to move tradecraft discipline into data space, vice image space.
Major breakthroughs were made over the course of about six years, with the emerging field of cloud analytics taking shape in the government as early as 2006 with NSA’s Real Time Gateway, and later as Jenkins, Chef, and Puppet scripts, Docker data containers, virtualization, and distributed processing capacity matured into a standard, implementable development-operations (DevOps), testing, and validation regime.
It seemed in the no-brainer category to meld these developments with tradecraft advances.
So imagine sitting there in 2015 and hearing an OPSINTEL briefing lead off with “we had no sensors to cover this”: great googly moogly…which is why the results were staged from Italy, and our scientists/analysts from the research radar team briefed the emerging results as a “walk-on.” For a mapping and electro-optical, literal-image agency, this was mostly regarded as “voodoo” or some type of hearsay: a thin apology for your shortsightedness…
The government can be nearly impenetrable when it comes to matters like this; the biggest impediment-besides the analysts who had not been trained-was the self-anointed agency chief scientist at the time, who declared data science pseudo-science, or a “fad.”
A “fad” (hahahaha)-notwithstanding that IBM Watson and NSA’s RTG were six and eight years in the rear-view mirror at that point, and the IC Information Technology Enterprise (ICITE) was in theory in full swing across the community, endorsing Amazon Web Services and their cloud offering (another PDDNI-led effort that I touched on earlier).
Which I believe goes a long way toward explaining how some of these government issues can seemingly “pop out of nowhere,” as if nobody has ever thought of them-Gomer Pyle-like (gollllly). LIDAR was a similar issue: the demonstration and risk-reduction effort undertaken in 2002 was slow-rolled in an agency that would not go forward with something that lacked “standing” and definition within the agency engineering documentation, where the previously mentioned Shuttle Radar Topography Mission, with its 10-30 meter accuracy, was still considered the gold standard for mapping, charting, and geodesy products.
This at a time when the Pentagon had emerging requirements for things like the Small Diameter Bomb-GBU-39, a precision munition akin to a flying plate-that required much better accuracy, ~6-8 meters, and more importantly the height or “z value,” than could be expressed out of existing data holdings, just about requiring a custom product for every use to meet the precision insinuated in the name. It was believed to be an important addition to help reduce or perhaps eliminate collateral damage and to allow targeting in borderline rules-of-engagement (danger close) situations.
DARPA was undertaking several LIDAR initiatives and demonstrations, working the issues of atmospheric effects on quality, power, etc., including a fascinating project called “Tanks Under Trees” that explored the “peek and poke through” aspects of the beam. While LIDAR could be processed into images/pictures, the far more fascinating and intriguing aspect in data space was the ability to remove data layers-such as trees-to evaluate what was underneath the canopy.
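The “remove the trees” idea has a simple data-space core: a LIDAR pulse can return several echoes, and the last return per pulse tends to be the lowest surface the beam reached. A hedged sketch, with invented points standing in for a real point cloud:

```python
# Crude canopy-removal sketch: keep only last returns (the lowest echo
# each pulse produced) and take the lowest such point per grid cell to
# approximate the bare-earth surface. All point values are invented
# for illustration; real workflows use classified point clouds.

# (x, y, z, return_number, number_of_returns)
points = [
    (0.0, 0.0, 18.0, 1, 2),   # canopy hit
    (0.0, 0.0,  0.3, 2, 2),   # same pulse punching through to ground
    (1.0, 0.0, 17.5, 1, 1),   # dense canopy, no ground return
    (2.0, 0.0,  0.1, 1, 1),   # open ground
]

def ground_estimate(pts, cell=1.0):
    """Lowest last-return z per grid cell: a crude bare-earth surface."""
    ground = {}
    for x, y, z, rn, nr in pts:
        if rn != nr:               # not a last return: canopy/branch echo
            continue
        key = (int(x // cell), int(y // cell))
        ground[key] = min(z, ground.get(key, z))
    return ground

print(ground_estimate(points))
```

Note the middle cell: where the canopy is dense enough that no pulse reaches the ground, the “bare earth” stays wrong-which is exactly the kind of gap efforts like Tanks Under Trees were probing.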
One of the classic ironic battles playing out during this time involved all manner of blather, adjectives, and “harrumphs” over artificial intelligence and machine learning, alongside a concept I was pushing: do more in data space, vice image space, via pipeline processing upstream from analysts, to let them start their analysis from a higher level. I likened it to having a data “sous chef” at your disposal to do all the chopping, cutting, and mundane tasks so the analyst could focus on analysis.
We still had a problem in that analysts generally spent almost 60% of their seat time on non-analysis tasks. Just think of how you could up your game in the analysis and reporting business if some ~3,000 analysts could spend most of their seat time on analysis.
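The arithmetic behind that claim is worth making explicit. Using the ~3,000 analysts and ~60% overhead figures above, with an assumed 1,880-hour work year and a hypothetical post-automation overhead target:

```python
# Rough arithmetic on recovered analyst-hours. The 3,000 analysts and
# 60% overhead come from the text; the 1,880-hour work year and the
# 20% improved-overhead target are illustrative assumptions.

ANALYSTS = 3000
HOURS_PER_YEAR = 1880          # assumed seat-hours per analyst per year
OVERHEAD_NOW = 0.60            # share of time on non-analysis tasks
OVERHEAD_GOAL = 0.20           # hypothetical target after pipeline automation

def analysis_hours(overhead):
    """Total hours per year actually spent on analysis at a given overhead."""
    return ANALYSTS * HOURS_PER_YEAR * (1 - overhead)

gained = analysis_hours(OVERHEAD_GOAL) - analysis_hours(OVERHEAD_NOW)
print(f"analysis hours now:  {analysis_hours(OVERHEAD_NOW):,.0f}")
print(f"analysis hours goal: {analysis_hours(OVERHEAD_GOAL):,.0f}")
print(f"hours recovered/yr:  {gained:,.0f}")
```

On these assumptions the workforce effectively doubles its analysis output without hiring a single additional analyst-that was the whole point of the “sous chef.”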
There was a big disconnect in that all of the “gitchy” concepts started out with, and focused on, imagery: pictures. Much like the Nepal earthquake effort, data science was focused on doing more with measurement sensors-for instance hyperspectral, where the computer did all the work to match signatures to data, characterize environments, and present those data to the analyst to augment the story.
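A common way that signature-matching step is done is by comparing the angle between a pixel’s spectrum and a library of reference signatures-smaller angle, closer material match. A minimal sketch; the band values and library entries are invented, and real systems use calibrated reflectance across hundreds of bands:

```python
# Hedged sketch of hyperspectral signature matching via the spectral
# angle (cosine similarity) between a pixel spectrum and a small library
# of reference signatures. All numbers are invented for illustration.
import math

library = {                      # hypothetical 4-band reference signatures
    "vegetation": [0.05, 0.08, 0.45, 0.50],
    "blue_tarp":  [0.60, 0.55, 0.10, 0.08],
    "bare_soil":  [0.20, 0.25, 0.30, 0.32],
}

def spectral_angle(a, b):
    """Angle between two spectra; smaller means a closer material match."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel):
    """Label a pixel with the library signature at the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

print(classify([0.58, 0.54, 0.12, 0.09]))   # a pixel near the tarp signature
```

Run upstream in a pipeline over every pixel, this is the “sous chef” doing the matching so the analyst starts from characterized materials rather than raw bands.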
A tragic outcome (in my mind) is a culture that believes one should then take those exquisite data emerging from the spectrometer and the comparative signatures-a product of upstream cloud analytics-and produce a very pretty picture with the signature traces hovering over the target like some jet-stream stain: spending most of your time getting the “aesthetics” just right so the report has a “picture,” which in truth adds very little value on top of the results.
In another article series I chronicled Penn State’s use of a hyperspectral collection over Haiti that reported on the number of blue tarpaulins handed out by FEMA and the United Nations to provide temporary shelter as clean-up operations began. Some ~2,800 tarpaulins were detected during the mission, and our analysis lead was asked to “verify” the number by analyzing and validating the count. I spoke to the Penn State scientist responsible for the system and wrote a brief paper that-in effect-started out “hell no,” dismissing this idea of double-checking the computer’s counts out of hand as a total waste of time, energy, and resources, and a worst practice. Ever hear of data science?
What “in tarnation” does any of the above-data science, cloud, “voodoo”-have to do with the FIA debacle? Well, where do you think all the money was coming from to attempt to bail out Boeing and FIA, as well as to fund the “projects” I’ve been referring to? To me that is one of the biggest tragedies of the FIA saga: the delay of inevitable capability, the harvesting of program dollars, and the cancellation of promising programs.
23 December 2022
LSMBTG: Lamestream media echo chamber (LMEC-L) social media (SM) big tech tyrants (BT) and government (G)
If you enjoyed this article, then please REPOST or SHARE with others; encourage them to follow AFNN
Truth Social: https://truthsocial.com/@AFNN_USA
Facebook: https://m.facebook.com/afnnusa
Telegram: https://t.me/joinchat/2_-GAzcXmIRjODNh
Twitter: https://twitter.com/AfnnUsa
GETTR: https://gettr.com/user/AFNN_USA
Parler: https://parler.com/AFNNUSA
CloutHub: @AFNN_USA