Election Irregularities Project, Dateline Colorado: The Shameful Railroading And Lawfare Persecution Of Whistleblower Tina Peters (Part 4)

Why Are These Tabulation Machines So Capable? Beyond Vote Counting

In Parts 1-3 I established how modern computing has revolutionized myriad commercial businesses, offered a number of anomalous factoids about Colorado election law, and touched upon some just plain weird outcomes in the last several cycles (2016, 2018, and 2020). I also introduced the idea that those who control the electronic process implemented for elections have a number of tools at their disposal, tools that have progressed to the point where some key elements of the legacy process have become superfluous.

The disturbing part of this last point is my hypothesis that two of those elements that are now superfluous are ballots and voters.

Recall President Biden’s words leading up to the 2020 election, where he basically gave away the plan by telling voters he did not need their votes: they had the most modern fraud mechanism in the history of elections, seemingly improved upon from the BHO’Linsky effort of 2012. He literally told his meager audiences that he would need them after the election. We will return to those statements (from Part 3).

A quick (wonky) digression, at the risk of belaboring the belabored. The modern computing environment arguably reached the peak of raw processing power with Bitcoin mining; I don’t want to get sideways with that specific application. Many are no doubt aware of projects that crowdsource computing power to accomplish difficult tasks that would otherwise take enormous amounts of what we formerly referred to as mainframe computing. There are more such projects than you are probably aware of; here are a number of exemplars of the process.
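To make the pattern concrete, here is a minimal sketch of how crowdsourced computing works in the abstract: a coordinator carves a big job into work units, independent volunteer machines crunch them, and the partial results flow back for assembly. Everything in it (the work-unit size, the crunch function, the thread pool standing in for volunteers) is invented for illustration and is not any particular project’s code.

```python
# A minimal sketch of the crowdsourcing pattern (SETI@home / Folding@home
# style): split a job into work units, let independent workers crunch them,
# then reassemble. Threads stand in for thousands of volunteer machines.
from concurrent.futures import ThreadPoolExecutor

def crunch(work_unit):
    """Stand-in for the expensive per-unit computation a volunteer performs."""
    return sum(x * x for x in work_unit)

job = list(range(1_000_000))
work_units = [job[i:i + 100_000] for i in range(0, len(job), 100_000)]

with ThreadPoolExecutor(max_workers=10) as volunteers:
    partials = list(volunteers.map(crunch, work_units))

print(sum(partials))   # coordinator reassembles the partial results
```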

One of the most elegant has been ongoing in the astronomy community for several decades. With more and more telescopes, lenses/optics, and radio collection focused on outer space, scientists long ago refined the process they employ to deal with such a massive amount of data. While I haven’t seen the figures in a few years, it was surpassing 20 terabytes of data per day long ago. Breakthroughs in artificial intelligence, machine learning, computer graphics generation, and pattern matching have rendered this task manageable: the task in question being to do a day’s work in a day.

Linking laboratories, academia, and scientists around the world via networks (based on concepts that originated with ARPANET) allows astronomers to eliminate upwards of 99% of the daily collection: bit-and-pattern matching filters it out of the “workflow” as redundant or already mapped, leaving a workable subset of the massive collection that others can spend quality time analyzing.
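As a toy illustration of that reduction step (and only that: the data model below is invented, not any observatory’s pipeline), here is how fingerprint-based filtering can collapse a feed that is 99% redundant:

```python
# A minimal sketch of the ~99% reduction step: fingerprint each incoming
# observation and discard anything already seen or already "mapped," so
# analysts only ever handle the novel residue. Data is invented.
import hashlib

already_mapped = set()          # fingerprints of known/previous observations

def is_novel(observation: bytes) -> bool:
    """Keep an observation only if its fingerprint has never been seen."""
    fingerprint = hashlib.sha256(observation).hexdigest()
    if fingerprint in already_mapped:
        return False
    already_mapped.add(fingerprint)
    return True

# 10,000 raw observations, but only 100 distinct patterns: 99% redundancy.
raw_feed = [f"pattern-{i % 100}".encode() for i in range(10_000)]
novel = [obs for obs in raw_feed if is_novel(obs)]
print(f"kept {len(novel)} of {len(raw_feed)}")   # kept 100 of 10000
```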

These “workflows” can be implemented in what I call a “pipeline processing process”: hierarchical in preparation and execution (the simplest analogy is making a meal), distributed as much as needed based on size and complexity, and controlled from some central node that retains the “gold data set,” the permanent record that will eventually be updated with the results.
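Here is a minimal sketch of that shape: prepare, distribute, merge, with a central “gold data set” touched only at the end. The stage functions and data are invented for illustration; real pipelines are vastly larger but structurally similar.

```python
# A minimal sketch of the "pipeline processing process": hierarchical stages
# (prep -> distribute -> merge), with a central node holding the "gold data
# set" that is updated only once results come home. All data is invented.

gold_data_set = {"observations": 0, "catalog": set()}   # central permanent record

def prepare(raw):                      # stage 1: clean/normalize the feed
    return [item.strip().lower() for item in raw]

def distribute(items, n_nodes=3):      # stage 2: carve work across nodes
    return [items[i::n_nodes] for i in range(n_nodes)]

def merge(partials):                   # stage 3: central node folds results in
    for partial in partials:
        gold_data_set["observations"] += len(partial)
        gold_data_set["catalog"].update(partial)

raw = ["  M31 ", "m31", "NGC-5128  ", "m31"]
merge(distribute(prepare(raw)))        # the whole pipeline, end to end
print(gold_data_set)                   # catalog holds just {'m31', 'ngc-5128'}
```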

How powerful is such a network when fine-tuned to optimal performance? Many have watched space shuttle launches and other space activity without considering how incredible the video broadcasts have become. We routinely watch a vehicle some dozens of miles “down range” in pretty much real time without giving a thought to how powerful those video processes must be to perform such a feat.

For anyone who has the opportunity, I heartily recommend a visit to the Udvar-Hazy Center, the National Air and Space Museum annex at Dulles, where you can see one of the things that enables this capability: the Tracking and Data Relay Satellite System. You can also see the U-2; a Soviet SA-2 missile system similar to the one that shot down Gary Powers (as well as its tracking and guidance radar, the Fan Song-A); and the SR-71 that flew from Palmdale, CA, to Washington Dulles in 68 minutes.

This article states it was the last “military mission,” but I was in an Army shelter at El Paso during Roving Sands in 1996 and witnessed a test of SR-71 High Resolution Radar (HRR) data linked to an Army Distributed Common Ground Station’s Enhanced Tactical Radar Correlator: a first demonstration of this long-touted capability, and the last such operation.

One of the enabling aspects of the processes above was latent or underused compute resources. I referred earlier to an effort I led in ~2006 for an enterprise architecture expansion. One of the most startling metrics from the analysis of alternatives was the discovery of how underutilized compute resources were, both in my agency and across the government. Our servers averaged roughly a 13% utilization rate, and that was considered fairly good!

With several hundred racks of equipment, each containing a primary (“A”) side and a “B” side for failover, as well as a tape system for critical operating system backup, there is a lot of processing power that is pretty much idling, underused. That seems like somewhat of a waste of money, but it is how you get “four nines” of reliability and availability (99.99xx percent).
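The arithmetic behind that trade-off is simple redundancy math. Here is a minimal back-of-the-envelope sketch, assuming (purely for illustration) that each side alone manages 99% availability and that the two sides fail independently:

```python
# Back-of-the-envelope: how A/B redundancy buys "four nines."
# The 99% per-side figure and the independence assumption are
# illustrative, not measurements from any real rack.

def combined_availability(per_side: float, sides: int = 2) -> float:
    """Availability when any one of N independent sides suffices."""
    return 1 - (1 - per_side) ** sides

a_or_b = 0.99                                  # each side alone: "two nines"
print(f"{combined_availability(a_or_b):.4%}")  # -> 99.9900%
```

Two so-so sides make one excellent system; the idling B side is, in effect, the price of those extra nines.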

Cloud computing and container orchestration platforms can allocate that unused or excess capacity and leverage it to do the kinds of projects described above.

My point in belaboring these aspects “again,” given the massive amount of processing inherent in the above projects, is to point out how trivial a task it would be to handle voting files that contain, in the case of Colorado, under 6 million records: trivial.
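To put a number on “trivial,” the sketch below builds six million synthetic two-field records and tallies them on a single core. The record layout, the 64 “counties,” and the two “candidates” are invented for illustration; nothing here resembles any real voter file.

```python
# A minimal sketch: tally 6 million synthetic (county, candidate) records
# on one laptop core. All data here is randomly generated for illustration.
import random
import time
from collections import Counter

random.seed(1)
records = [(random.randrange(64), random.choice("AB"))
           for _ in range(6_000_000)]

start = time.perf_counter()
tally = Counter(records)                 # (county, candidate) -> count
elapsed = time.perf_counter() - start

print(f"Tallied {len(records):,} records in {elapsed:.2f} s")
```

On commodity hardware this finishes in a second or two, and that is interpreted Python; compiled code on even a modest server would be faster still.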

Distributed Interactive Simulation (DIS), federated simulation techniques, and the Aggregate Level Simulation Protocol have been around in earnest for some three decades now. Warfighters have benefited greatly from these tools, which increasingly provide realistic immersive environments good enough to stress battle staffs at the highest level and down to the operator level.

This paper is a bit dated, but it describes many aspects of the concept. It also reflects an early approach to obviating processing choke points and to handling compute-intensive functions at distributed nodes: a “backplane,” or massively parallel processing capability within a central node, handled orchestration and performed up-front processing to lessen the downstream burden.
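A minimal sketch of that backplane shape, in the abstract (the four “nodes,” the reduction step, and the work function are invented for illustration and are not drawn from the paper):

```python
# A minimal sketch of the "backplane" idea: a central node performs heavy
# up-front reduction, then farms the lighter residue out to worker nodes
# and re-aggregates their answers. All names and numbers are illustrative.
from concurrent.futures import ProcessPoolExecutor

def up_front_reduction(raw):
    """Central node: collapse redundancy before anything leaves the hub."""
    return sorted(set(raw))

def node_task(chunk):
    """Distributed node: cheap per-chunk work on the reduced data."""
    return sum(chunk)

if __name__ == "__main__":
    raw = [i % 1000 for i in range(100_000)]     # highly redundant feed
    reduced = up_front_reduction(raw)            # 100,000 items -> 1,000
    chunks = [reduced[i::4] for i in range(4)]   # deal out to 4 "nodes"
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(sum(pool.map(node_task, chunks)))  # hub re-aggregates: 499500
```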

These efforts are much more mature and sophisticated than you likely believe. At Roving Sands one year, a Distributed Interactive Simulation effort was conducted that connected some ~29 separate nodes, from Dahlgren, VA, to the Pentagon, to Colorado Springs, New Mexico, and San Diego. Two-thirds of the play was conducted in the simulation, particularly the missile launches (SCUDs).

But there were three categories of “real missile systems” or surrogates on the ground: Gorby and George, Russian SS-21s from Chicken Little (Eglin AFB); 18 huge ten-ton engineer dump trucks with IR signatures; and two actual MAZ-543 SCUD-C TELs. No live aircraft or unmanned aerial vehicles (Predator, Hunter, Pioneer, ScanEagle, and Grey Wolf, a Navy surrogate) ever went after one of the simulated vehicles or launches played in the simulation (this was in the mid-90s).

How real was the simulation part, you ask? I was visiting a PATRIOT (Phased Array Tracking Radar to Intercept On Target) engagement control station connected to the exercise DIS via the Flight Missile Simulator-Digital (FMSD) to observe a simulation test. The operator began passing hard things (pooping bricks) when the screen lit up with what appeared to be a salvo of three missiles flying toward his volume of protection (airspace). He broke into a cold sweat and had to be talked off the ledge: you can’t get more real than raising the pulse and brainwaves of a trained crew!

Note this was before the cloud, although these implementations (I like “instantiations,” but not everyone is a fan…) were the ’90s version of it. The revolution that matured with the cloud, particularly the hybrid cloud, where you can pick and choose the implementation based on the “best athlete” solution for the task rather than settling for the Microsoft or Amazon cloud offering “in a box,” lies in commercial container orchestration software, as well as in the scripting executed to perform analytic tasks of all sizes, from bite-size to “ginormous,” to fit the demands of large systems, large problems, and distributed processing needs. If you are familiar with this, please bear with me, as it delves a bit into “wonkyland” for those who haven’t messed with “it.”

I made the case in Part 1 that ubiquitous commercial cloud applications have handled far more difficult tasks than what we are talking about. Above, I introduced some information about modeling and simulation and how mature it has become, to the point where Federally Funded Research and Development Centers (FFRDCs) like MITRE, MIT Lincoln Laboratory, Johns Hopkins, etc., hold wargames for senior battle staffs, with or without any subordinates involved.

How would such a system be employed for elections? I mentioned in Part 3 how a skilled individual running a central tabulation machine could do this with ease. I also said we don’t “need no stinking ballots,” nor voters: the actual voters could really prove to be a hindrance to such a plan. All that is really needed is access, and names.

So let’s put some numbers and meat on this theory, see how they fly, and see whether you agree that, in comparison to some of the aforementioned crowdsourced projects, this would be trivial.

In fact, we’ll use Colorado, just for poops and giggles. We have 64 counties in Colorado and what we will call a central or controlling node in Denver, where the SOS resides (although it could be anywhere connectivity and bandwidth aren’t an issue). In an ideal world we would have a connected, distributed network across all 64 counties, linked to Denver so that it can be the single point of contact on all things related to the election, including being the official reporting node to the Election Systems & Software (ES&S) national election counts that feed the media “decision desks.”

And in fact, we do have that network, as 62 of 64 counties use the Dominion Voting Systems (DVS) equipment suite. DVS reportedly serves 40% of all US voters across a number of other states (you can probably guess what they have in common). Georgia has “30K machines.” California has DVS in 40 of 58 counties; most of those elections weren’t close, with margins above 30% for the likes of “Mad” Maxine Waters, Swillwell, and Paloozi. You get what you vote for… they are so proud….

Now, if I were the Colorado SOS, after all the hard work, scheming, and planning: spending taxpayer money mailing ballots to all 4.6 million registered voters (actually, 102% of registered voters: every cat, dog, lizard….); paying somebody to monitor receipt of ballot envelopes from drop boxes, then scanning and tracking them and decrementing them from the Consolidated/Central Voter Registration (CVR); sending perhaps tens if not hundreds of thousands of reminders throughout October 20XX and right up to Election Day telling individual voters “there is still time to vote”; I would have my chunky 4th point planted in an easy chair in the elections operations center in Denver, “just sitting there watch’n the wheels go round and round,” letting the head DVS guy dazzle me with his expertise, projections, and wizardry. (I mean, we have DVS systems in 62 of 64 counties. If Georgia has some 30K systems, which can be as small as a tablet, across 150+ counties, Colorado probably has 15K.)

Master and slaves, client and slaves, central count and compilers, master and tabulators: the terms all mean somewhat the same thing. Electronic voting machines are built to be “daisy-chained” via a controlling node that serves as the reporting node to a higher-order system. The ES&S may be getting county-level data, but it is likely fed through the state system in Denver. And that only makes sense: from precinct to county, county to state, state to national, via one tested process.
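To be clear about what “daisy-chained” means structurally (and only structurally: the sketch below is not any vendor’s code, and every name and number in it is invented for illustration), hierarchical roll-up is just repeated summation, one reporting hop at a time:

```python
# A minimal sketch of hierarchical roll-up in the abstract: precinct totals
# feed a county node, county totals feed a state node. This illustrates the
# daisy-chain structure only; all names and numbers are invented.
from collections import Counter

precincts = {
    "Mesa-01":   Counter(A=410, B=388),
    "Mesa-02":   Counter(A=512, B=497),
    "Denver-01": Counter(A=903, B=1244),
}

def roll_up(child_totals):
    """One reporting hop: sum every child node's totals into the parent's."""
    parent = Counter()
    for totals in child_totals:
        parent.update(totals)
    return parent

county = {
    "Mesa":   roll_up([precincts["Mesa-01"], precincts["Mesa-02"]]),
    "Denver": roll_up([precincts["Denver-01"]]),
}
state = roll_up(county.values())        # the "controlling node" view
print(state)                            # Counter({'B': 2129, 'A': 1825})
```

Note how thin each hop is: the next layer up sees only what the node below it forwards.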

I do also want to state that I haven’t run, tested, or placed hands on any of these systems. So I am not an expert and am not pretending to be one, although the structure seems straightforward and has been well documented elsewhere, including in the Texas AG testing described in an earlier article.

I also want to get into more detail on aspects of DVS and whether we actually need ballots, or just names (the CVR) and a running count of who has and hasn’t voted based on the aforementioned voting reminders, or whether we just “fudge” the numbers, evidence be damned. And how would anyone know???

Max Dribbler

Maxdribbler77@gmail.com

23 December 2025

Originally published June 21, 2021

LSMBTGA: Lamestream media echo chamber (LMEC-L), social media (SM), big tech tyrants (BT), government (G), and academia (A)

