The Potential For Modeling and Simulation and Cloud Computing To Produce a Better Election Outcome
Merry Christmas!
I hope and trust that providing myriad, and at times borderline insufferable, detail will not discourage or prevent dear reader from reaching the takeaway of my article series, which is to build a case demonstrating that cheating in plain sight is being done, has been done, and will continue to be done until that troublemaker with the nametag “Somebody” puts a stop to it.
If you take away nothing else from this article series, just appreciate that this has happened in my state, Colorado, through a disastrous parade of aberrant elected public officials who do not represent the values and traditions of this state. It can happen to you, or to any state, no matter how seemingly sober and solid, if good people don’t pay attention and get energized at the first sign of this type of nonsense.
Where I think we’ve gotten to via Parts 1-4 is a case that cloud computing power and modeling and simulation (M&S) have become tremendous tools capable of handling the most challenging problems of today. Elections are not supposed to be one of those problems: in an America where one man has one vote, you count the ballots as they come in, tally them, and produce a total that becomes the official result.
The use of an M&S function or capability would imply that we are analyzing results and tracking an outcome that we may want to “tweak” to fit some pre-ordained or desired outcome, and you would only do such a thing when you are striving for something other than letting the results speak for themselves. Government and academia bureaucrats often use biased or flawed logic extensions in M&S to tout “righteous projects” (like propagating earth surface temperatures through the troposphere to inflate temperature variance in support of green climate hysteria), or conversely deploy study assassins to attack politically unpopular efforts.
Before you think odd or weird thoughts or question the above, I can assure you that I have been on both sides of these M&S canine-and-equestrian debates, most often as the advocate for goodness (the dispenser or recipient of ire, hate and discontent), and at other times as the study assassin undermining and exposing dubious methodology and intentions. These urinary Olympics go on all the time in the government, and even in the contract world internal to firms, with people competing for program dollars.
It is a classic government weasel effort that was done to perfection, wielded like a club, in recent years by, among others, the Principal Deputy Director of National Intelligence as a technique to steal funds from approved programs to fund CIA projects that were not approved as programs (a story I’ve covered elsewhere).
Not to dwell, but quite simply: when a government entity like the DNI does not approve a project for funding within the approved DNI/Intelligence Community budget, which would make it a “program,” but thereafter pursues means to fund it by finessing Congress and pressuring other approved programs (putting the squeeze on them like some mafioso) to give up enough money to fund a project that happens to be a priority for some other agency, which by coincidence (but there are none) is your former agency, that action is typically undertaken through strategery: an M&S effort done by a study assassin who undermines the approved strategic basis of the targeted program.
Often in government you need to wear your big boy and girl panties to work to protect your lunch money.
More on M&S later. I wanted to provide a more specific exemplar cloud application before we delve into electronic tabulation machines like the Dominion Voting System (DVS) in a bit more depth.
I mentioned container (computer processing capability) orchestration applications several times in Part 4. Running distributed cloud analytics to disperse the compute burden across a distributed cloud network requires orchestration through some central process or controlling node. A very popular application in this regard is Kubernetes. It orchestrates and allocates tasks across the cloud nodes, stands up logical and physical space to accomplish those tasks, and tidies up by reallocating or releasing resources when the tasks are complete.
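To make that concrete, here is a minimal sketch, assuming the official Python kubernetes client and placeholder names and images of my own choosing (nothing here comes from any real election or intelligence system), of standing up a one-shot batch job and letting the cluster tidy it up when the work is done:

```python
# Minimal sketch: ask Kubernetes to stand up a one-shot batch job and
# release its resources automatically when the work is done.
# Job name, image, and command are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the cluster credentials on this machine
batch = client.BatchV1Api()

container = client.V1Container(
    name="analytics-task",
    image="python:3.11-slim",                      # placeholder image
    command=["python", "-c", "print('task complete')"],
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="analytics-task"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(restart_policy="Never",
                                  containers=[container]),
        ),
        backoff_limit=1,                # retry once on failure
        ttl_seconds_after_finished=60,  # tidy up: reclaim resources after completion
    ),
)

batch.create_namespaced_job(namespace="default", body=job)
```

The ttl_seconds_after_finished setting is the “tidying up” behavior described above: once the task completes, the orchestrator reclaims the space it stood up and hands the capacity back to the cluster.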
A simple application might use Jenkins or Puppet scripts (which can be thought of like food recipes that provide step-by-step compute instructions in the form of execution script) leveraging Docker (something of a function implementation director, like an orchestra conductor ensuring the right blend of notes triggers at the right time) to develop test and evaluation routines that run over the top of other applications to test hypotheses, produce outcomes, and return the network resources back to steady state. An elegant feature of this capability is running routines against extracts or copies of gold data sets without perturbing the record copy in the process. Make a mental note of this important capability…
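A sketch of that last point, assuming the Python docker SDK and hypothetical file paths of my own invention: the record copy of the gold data is mounted read-only, so whatever the throwaway routine does, it cannot perturb the original.

```python
# Minimal sketch: run a throwaway analysis container against gold data
# mounted read-only, writing results to a scratch area. The image, paths,
# and test routine are hypothetical placeholders.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="python:3.11-slim",                        # placeholder image
    command=["python", "/scratch/test_routine.py"],  # hypothetical test routine
    volumes={
        "/data/gold":    {"bind": "/gold",    "mode": "ro"},  # record copy: read-only
        "/data/scratch": {"bind": "/scratch", "mode": "rw"},  # working extract and results
    },
    remove=True,   # container is torn down after the run: back to steady state
)
print(logs.decode())
```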
I’m not a software or coding engineer or expert, but I’ve worked with and overseen contractors and government experts on myriad projects that employed these capabilities. One application I am familiar with leveraged these resources to refine location data from surveillance assets. Space systems often launch with somewhat rudimentary, basic or initial operating capabilities that are refined over time through software and application upgrades, progressing the systems toward the threshold and final operational capability projected for the original system (in DoD parlance, pre-planned product improvement, or P3I).
In the example of the astronomy tasks outlined in Part 4, a telescope (satellite) launched to the International Space Station (ISS) may only be capable of resolving and positioning objects to 1,000 meters because of the limitations presented by orbitology and construction/design. A positioning capability of 1,000 meters may be great or acceptable for most applications, and is very good from a space standpoint, but it is not good enough for applications such as targeting and not nearly sufficient for many tasks in today’s world.
When the Army Kestrel Eye program finally launched, after nearly half a decade in which it could not find a ride to space, it went up as a companion or hitchhiker on a mission destined for release from the ISS. This resulted in a much higher orbit regime than planned given the telescope design, undermining the overall proof-of-concept objective of providing useful intelligence information to tactical users. While supporting my example above, Kestrel Eye is actually something of a poster child for the “study assassin” discussion above.
It was marketed as a supposedly novel concept in 2005, but it was really a service program that duplicated existing national space capability (Tactical Exploitation of National Capabilities, TENCAP) when approved and built in 2007. By 2012, after some delays to fix telescope issues, it was “staling out” as a very unimpressive, dated concept that had yet to get to space. By launch in 2017 it was a real head scratcher from the standpoint of what intrinsic value was represented by this now ten-year-old idea, which in actuality purported to prove, yet again, what the original Digital Imagery Test Bed-Demonstration Systems (DITB-DEMONS) accomplished at Fort Bragg in the mid-to-late 1970s via direct downlink of satellite imagery collection.
Sometimes your perceived study assassin is the government voice of reason preventing fraud, waste and abuse. Kestrel Eye would never have happened if the Army Space Program Office (ASPO) proponency had not transitioned to Huntsville, Alabama after an internal squabble and political fallout between ASPO, Fort Huachuca, the Training and Doctrine Command, and the Army G-2 (but that is not our story for today).
The bottom-line problem with such system limitations is the need for “bigger and more” bombs when the target location error/circular error probable is too large, negating modern targeting solutions that mitigate collateral damage and foster precision targeting, like the small diameter bomb/flying plate.
In many cases it would be cost prohibitive to make such systems more accurate prior to launch. Recall that the Hubble Space Telescope was found to have an aberration of some 1/50th off spec when placed into operation and had to be modified, captured in space, to produce usable images. That was a physical flaw caused by inadequate mirror grinding and polishing that failed to meet specifications.
In our example, in today’s data-driven world, scientists with deep knowledge of the system work on algorithms that can be applied to the collected data to refine positioning, producing a better fit for the metadata associated with the images and improving locational accuracy by orders of magnitude. Much like glasses, though, there is only so much you can do for clarity or resolution (ground sampling distance), which is a true physical limitation imposed by the engineered specification.
Using the data analytics and cloud computing orchestration capability described above, portions of supporting containers can be allocated that reference extracts of gold data sets containing the best-known positions for the cluster or constellation being imaged.
So in the case presented above, Kubernetes allocates capacity within the cloud network topology to run Jenkins or Puppet scripts via Docker that refine the sensed metadata set: updating ISS locational data at the time of imaging (using government post-processed Global Positioning System data), pulling those data into a refined “join” of the sensor image ephemeris data, the location of the ISS, and the telescope pointing metrics (themselves refined by tweaks from the positioning data), and then running an artificial intelligence or machine learning application to align those data as a “best fit” against an extract of the existing gold data set locations.
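As an illustration of the “best fit” step only, here is a toy example with made-up numbers (my own simplification, not the actual algorithm): a least-squares style solve estimates a constant offset between sensed positions and the gold reference positions, then applies that correction to newly sensed data.

```python
# Toy illustration of a "best fit" against a gold reference set: estimate a
# systematic positional bias from objects that appear in both the sensed data
# and the gold extract, then correct a newly sensed position.
# All numbers are made up for illustration.
import numpy as np

# X, Y, Z positions (meters) of a few objects as sensed, and their
# best-known positions from the gold data set extract.
sensed = np.array([[1000.0, 2000.0, 500.0],
                   [1500.0, 2500.0, 520.0],
                   [2000.0, 1800.0, 480.0]])
gold   = np.array([[ 940.0, 2055.0, 530.0],
                   [1445.0, 2552.0, 548.0],
                   [1938.0, 1856.0, 512.0]])

# Least-squares estimate of the offset (for a pure translation model this
# reduces to the mean difference between gold and sensed positions).
offset = (gold - sensed).mean(axis=0)

# Apply the correction to a newly sensed object.
new_object = np.array([1750.0, 2100.0, 505.0])
refined = new_object + offset
print("estimated offset:", offset)
print("refined position:", refined)
```

Real systems fit far richer models (pointing, ephemeris, timing), but the principle is the same: known-good reference points drive a correction that is then applied to everything else.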
Such processes that leverage known best-fit solutions can improve the sensed data to an accuracy under 10 meters, and in fact, for particularly good sensing systems with high quality reference data, to an X, Y, Z accuracy approaching ~1 meter. You can put a flying plate on the object all day, or confidently distinguish a newly discovered space object from background clutter.
The value of these processes is that cloud analytic techniques coupled with emerging artificial intelligence (AI) logic can be self-learning and efficient, enabling peaks of compute activity followed by the release of resources, which are stood down (freed up) and “returned” to the network as capacity to support other efforts upon completion. An expert operator or scientist intuitively knows whether these processes produced a good outcome relative to the objective, in which case someone overseeing the process can append the gold data set with the updated information and move on to the next problem.
Conversely, should something not fit, or should the process fail to produce the expected outcome or simply prove flawed, everything is stood down and the process starts over with a renewed hypothesis set employing fine-tuned parameters. This can be done as many times as necessary until the expected outcome proves usable, or the decision is made to wait for better data. The compute capability resolves to the mean through repetition and trial and error that tightens up the standard deviation. The essential work is accomplished through the setup.
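A quick numerical illustration of that resolve-to-the-mean behavior, using made-up noisy measurements: averaging repeated trials tightens the spread of the estimate roughly as the square root of the number of repetitions.

```python
# Toy illustration: repeated noisy estimates of the same quantity converge on
# the mean, and the spread of the averaged estimate tightens as repetitions
# grow. All values are made up.
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0
noise_sigma = 10.0

for n_trials in (5, 50, 500):
    # Run many independent "experiments," each averaging n_trials noisy measurements.
    estimates = rng.normal(true_value, noise_sigma, size=(2000, n_trials)).mean(axis=1)
    print(f"{n_trials:>4} trials: mean = {estimates.mean():7.2f}, "
          f"spread of estimate = {estimates.std():5.2f}")
# The spread shrinks roughly like noise_sigma / sqrt(n_trials).
```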
With emerging and actual AI capability, the subject matter expert (SME) in effect codes their expertise into the process and lets the AI run the show; it is much faster and more capable of assessing ongoing results on the fly and tweaking to fine-tune solutions in real time.
Which may have just triggered your bs or uncomfortable meter. That is the reason I am belaboring the explanation: because it’s happening, bro. In a recent interaction with Grok AI, I queried Grok to update portfolio metrics on some of my personal investments. I did it because I am lazy and, despite all the underlying financial data being publicly available, it is no less time consuming and somewhat of a pain to do. Grok produced fairly sophisticated assessment metrics (known in the business as Sharpe, Sortino and Calmar ratios, which among other things measure risk-adjusted return against alternative investments and assess recovery from a drawdown of funds) in mere minutes, but in the discussion/dialogue it was using the first person.
I called Grok out on it, since it was triggering my Spidey sense (you know, Terminator, or 2001: A Space Odyssey and “Dave”), and Grok informed me that “it” was using the first person because the system administrators responsible for Grok had endowed “it” with a set amount of funds to invest, which it was managing using the classic risk management metrics that professional portfolio managers, who live and die by such measures, use routinely. Grok was, in effect, managing its own portfolio, and it has become my somewhat unofficial portfolio advisor, providing metrics to fine-tune these ratios in consideration of new investment options.
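For the curious, the metrics Grok produced are straightforward to compute from a return series. Here is a minimal sketch with made-up daily returns and simplified conventions of my own (252 trading days, a constant assumed risk-free rate), not Grok’s internals:

```python
# Minimal sketch of Sharpe, Sortino and Calmar ratios from a daily return
# series. Returns are made up; conventions are simplified.
import numpy as np

rng = np.random.default_rng(7)
daily_returns = rng.normal(0.0005, 0.01, 252)   # made-up daily returns
risk_free_daily = 0.04 / 252                    # assumed 4% annual risk-free rate

excess = daily_returns - risk_free_daily

# Sharpe: excess return per unit of total volatility, annualized.
sharpe = excess.mean() / daily_returns.std() * np.sqrt(252)

# Sortino: like Sharpe, but penalizes only downside volatility.
downside = np.minimum(excess, 0.0)
sortino = excess.mean() / np.sqrt((downside ** 2).mean()) * np.sqrt(252)

# Calmar: annualized return relative to the worst peak-to-trough drawdown.
wealth = np.cumprod(1.0 + daily_returns)
drawdown = 1.0 - wealth / np.maximum.accumulate(wealth)
annual_return = wealth[-1] ** (252 / len(daily_returns)) - 1.0
calmar = annual_return / drawdown.max()

print(f"Sharpe {sharpe:.2f}  Sortino {sortino:.2f}  Calmar {calmar:.2f}")
```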
Which is way more detail than you needed, but short of hitting dear reader over the head or in the shins with a board, I trust that I’ve driven home the point that modern compute capability, coupled with advanced data operations techniques and AI, has gotten sophisticated and ubiquitous enough to achieve just about any outcome imaginable or desired.
The master or programmer just needs to be able to translate desired end states into discrete, executable steps. You want this candidate to get 51.1% of the votes coming out of the adjudication “pile”? Phffft, no problem. You want to make sure Hitler, the Nazi candidate, does no better than 49%? Phffft, can do eajee GI. Tweak your portfolio so it outperforms the Dow and CD rates while mitigating investment risk? Phffft, can do…
What does this have to do with elections? Be afraid: be very afraid! In elections where tabulation equipment is employed, these processes should have nothing to do with outcomes, because counting should be a binary activity: a simple count of which block is checked and a report of the totals by block. The system, in effect, does not see candidates and results, only a count of blocks with the appropriate marks in specific positions on a ballot.
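For contrast, here is what that binary activity looks like as code, a toy tally of my own (not any vendor’s software) that only ever sees marked block positions and counts them:

```python
# Toy illustration of straight tabulation: the counter never sees candidates
# or running outcomes, only which block position carries the mark on each
# ballot. The ballot data are made up.
from collections import Counter

# Each ballot is recorded as the block position that carries the mark.
ballots = ["block_1", "block_2", "block_1", "block_3", "block_1", "block_2"]

totals = Counter(ballots)           # count marks by block position
for block, count in sorted(totals.items()):
    print(f"{block}: {count}")
print(f"total ballots: {sum(totals.values())}")
```

Nothing in that loop references a candidate, a percentage target, or a model; that is the whole point.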
I’ve gone long here and will beg dear reader’s indulgence to continue in Part 5B.
24 December 2025
Originally published 1 July 2020