The Potential For Modeling and Simulation and Cloud Computing To Produce a Better Election Outcome: Not A Good Thing
I broke this article into two parts to lessen the burden on the dear reader's attention span, given that, as usual, I droned on a bit in my last.
I hope (which is not a plan or strategy) and trust that the bottom line delivered somewhere in the middle of this series came through like a shotgun blast in the middle of a dark winter night.
That being the fact that modern computer automation capability has achieved staggering levels. Why, then, do these election tabulation systems often have M&S functionality (he asked somewhat rhetorically, in a troublemaker tone)? The real answer is that it is a capability provided to support staff training modules, and it should never, ever be running during an election.
You may regard the previous article discussion as esoterica, blather, gilding the lily, irrelevant, distracting nonsense.
The belabored point among the esoterica is that it would be (and is) child's play to achieve desired outcomes for elections using tabulation machines backed with software like Microsoft SQL Server, capable of performing calculations and projections (a key word) from the building, emerging "gold" data set represented by the ongoing image-ballot counting within the system.
As a tease to a finer point: the tabulation system that you think is producing counts like a bank money machine (X bills went in, X bills came out, voilà, we are good to go) is actually doing validation checks on the front end and the back end. It is scanning those paper ballots in, creating image ballots, proof-testing each as a go or no-go for valid ballot forms, and running the validation and litmus-test steps (stray marks, unclear marks, etc.). It then registers the discernible results, or conversely flags the image ballot as anomalous and forwards it into the adjudication file, while sending the approved result of a validated ballot with clear voter intention forward in the process and incrementing the individual counts in the applicable database.
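As one of my substituted generic examples, the go/no-go branching just described can be sketched in a few lines. Everything here (function names, the confidence threshold, the form IDs) is invented purely for illustration and implies nothing about any real vendor's software:

```python
# Purely illustrative sketch of a generic scan/validate/adjudicate flow.
# All names, thresholds, and form IDs are hypothetical.

KNOWN_FORMS = {"2020-CO-GENERAL"}   # hypothetical valid ballot form IDs

def validate_ballot(image):
    """Return ('valid', marks) or ('adjudicate', marks) for one image ballot."""
    marks = image.get("marks", [])
    if image.get("form_id") not in KNOWN_FORMS:
        return "adjudicate", marks                  # not a valid ballot form
    if any(m["confidence"] < 0.95 for m in marks):
        return "adjudicate", marks                  # stray or unclear marks
    return "valid", marks

def tabulate(images):
    """Split a batch into incremented vote totals and an adjudication queue."""
    totals, adjudication_queue = {}, []
    for img in images:
        status, marks = validate_ballot(img)
        if status == "valid":
            for m in marks:                         # clear voter intention
                totals[m["candidate"]] = totals.get(m["candidate"], 0) + 1
        else:
            adjudication_queue.append(img)          # forward to adjudication
    return totals, adjudication_queue
```

Run against a batch of image-ballot records, this returns the incremented counts plus the anomalous ballots held aside, mirroring the two paths described above.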
What I would ask you to take away from all my previous "belaborment" is this: for today's highly capable, technically functional tabulation machines, this is a very simple and straightforward compute process, boringly so.
An M&S application running in tandem with these processes would receive the results and use them to update and populate projections of the end-state results against an expected performance curve (say, the USAA or Reuters latest poll or survey), continuously updated by the emerging vote, of how things are going. Which is also a projection of how things are not going.
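As a generic sketch of what such a tandem projection could look like: the math is ordinary extrapolation, and the candidate names, ballot totals, and baseline shares below are all hypothetical placeholders, not data from any actual system.

```python
# Hypothetical sketch: extrapolate an emerging count to an end-state result
# and compare it against an external baseline such as a pre-election poll.
# All figures and names are invented for illustration.

def project(counted, expected_total_ballots, baseline_share):
    """counted: {candidate: votes so far}; baseline_share: {candidate: poll share}.
    Returns each candidate's current share, projected final votes, and the
    gap between the emerging trend and the baseline expectation."""
    counted_so_far = sum(counted.values())
    report = {}
    for cand, votes in counted.items():
        share = votes / counted_so_far
        report[cand] = {
            "current_share": round(share, 3),
            "projected_votes": round(share * expected_total_ballots),
            "gap_vs_baseline": round(share - baseline_share.get(cand, 0.0), 3),
        }
    return report
```

A negative `gap_vs_baseline` is the "how things are not going" number: the running trend measured against the expected performance curve.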
The trend is your friend: except when it is not, or when you want to manipulate it. Not to steer or guide the witness, dear reader, but running an M&S outcome-focused projection effort can go from a novelty, a fascinating science-project sort of thing (neato torpedo, how is candidate X doing?) to a sophisticated swindling, cheating scheme that can be used to inform adjudicators how candidate X is doing and is projected to do.
And also how candidate Y (the "Nazi") is doing, and what candidate X (the friendly) needs as things play out to ensure X gets the outcome they deserve: and the same for candidate Y.
I don't think I can explain it any more clearly. That is the relevance of the aforementioned study-assassin interplay with the ongoing election process and the potential perturbation (cheating) of the results. The point being: the evidence is in the machine, in the logic and executables, which can only be discerned and detected through a forensic audit of the tabulation record files by SMEs.
You have to have the files to do the audit: store that thought for later.
If you have even a scintilla of suspicion that something rotten came out of the tabulation machine, it would behoove one to ensure those files are retained to be audited. A forensic audit, not simply running the same batch of image ballots ($100 bills) through again to confirm the count.
We know you can drive, but can you pass the sobriety check?
Break-Break: I don't want to go into great detail on DVS, but I recommend reading this Texas SOS assessment of the DVS system. Texas was considering buying tabulation machines. Considering how big a sale to Texas could be, it is puzzling that DVS seemingly did not send their "A-Team" for this demonstration. Which may have been the point all along, as this would be like the Germans demonstrating the Enigma code machine for the Brits prior to WWII.
A number of issues stand out in this assessment, issued pursuant to their decision not to go with DVS. For instance, the fact that it takes so many steps to initialize and quite a bit of work to set up, coupled with the information that the DVS "experts" had difficulties during this assessment (struggling to get the system initialized, necessitating a software re-installation that took 8 hours, and zeroing out the adjudication file by mistake), tells you a lot about it, since they were in "sales and good impression" mode. Then there is this "intriguing" factoid:
"There are two configurations, one that allows multiple client computers connected to a single server computer, and one where everything is on the same computer. We tested only the former, so the single-computer setup is not being certified."
Also, this "little" issue involving the adjudication audit: "Adjudication results can be lost. In the January exam, during adjudication of the ballots in the test election, one of the Dominion representatives made a series of mistakes that caused the entire batch of adjudication results to be lost. We did not see this problem again during this exam, but the adjudication system is unchanged, so this vulnerability is still present. Recommendation: Certification should be denied."
The above points are important mainly because of the implication that a certain level of technical skill is required of those involved in what comes across as a somewhat "finicky" process, as evidenced by the problems experienced by the DVS technicians themselves (picture in your mind ballot-scanning operations running the same ballot over and over). It is also somewhat contradictory to state that "everything" could be accomplished on one computer when DVS has a classic network topology in which one of the systems in each area (precinct, county, state, etc.) is the master.
Also noteworthy from this report: “Transfer Results. Vote totals are transferred from the ICP precinct scanner to central count using a removable memory device.”
Not to conjure a variation that matches any pre-conceived notions, but if each site has all computers serving as clients to a master or central computer, or if everything is accomplished on one computer (per above), you still have a major vulnerability whether totals are forwarded electronically or via a removable memory device. In the test business that is considered a "single point of failure." Also known as a major security vulnerability.
In a situation where ballots are scanned as images on the computer, processed to derive vote counts, output to a file that compiles and prepares those counts, with the data then written to a thumb drive ported to a central tabulation machine, it would seem to be a straightforward operation.
What would make one suspicious of such a process is a lack of positive control for the thumb drives, no audit trail or traceability from the ballot image back to the original ballot itself, or the discovery that the ballot image is subjected to further steps within the system, or is erased or expunged, when federal law requires such records to be maintained for 22 months. There have been myriad such allegations since the election. The latter example would apply if anomalies in the image ballot itself caused it to be flagged and sent to "adjudication."
In many cases an inordinate number of these ballots were sent to adjudication, as reportedly happened in Michigan; in Nevada, where Clark County reported some 70% of mail-in ballots had to be adjudicated; and in Georgia, where Fulton County flagged some 106K out of 113K images for adjudication and Gwinnett County experienced some 80K.
This piece details how remarkable many of these results are, considering that Biden far surpassed BHO'Linsky (as did DJT).
In Part 3 I presented Biden's comments stating, "I don't need people's votes, I need their support after the election." He also stated he had the "best election fraud enterprise" in history.
What does M&S have to do with this process and the adjudications? Much as in the astronomy example above, prior to the election there were likely DVS training and test data, based on the available CVR, being used to track incoming ballots (recall that in Colorado the SOS's office sent a mail reminder to those who had not yet voted). It would be child's play to use those "errata" data on who has not yet voted to project outcomes in advance of opening ballots on election day.
What would you do with such information? Well, one use would be to identify those who have not yet voted and flag them within some central system (one capable of ingesting and manipulating the "haven't voted yet" list) as potential ballots: one might do that to expedite late processing. One problem, an "observable" or tell with this technique, is that if those persons voted late in the process, they would in effect have two ballots in the system, one of which should log as a provisional ballot. That was reported in a number of voting locations.
Would that be a conundrum? No, as these votes would automatically flag into adjudication. What happens to those in the system who did not vote, where you hypothetically have a ballot image that has a name but is blank? Well, that would flag and pass into adjudication as well.
Now, I did say this was hypothetical, but what do you do with such a ballot? The obvious answer is to eliminate it. But what would stop an adjudicator from judging the "intent" of the voter to be a vote for the affiliated candidate (in the case of a Republican or Democrat)? There would be a problem with the "unaffiliated" Colorado voters described in Part 2 (some 400K), but the adjudicator gets to figure all that "voodoo" out, and it is anywhere from unclear to very unclear whether there is an auditable record for any of it (we found out, when the light shined on this process, that in most instances there was no auditable record tracking the original ballot to the adjudicated image ballot).
The above is hypothetical, but what would be the observables of such an action? One would certainly be high adjudication rates in comparison with previous elections. Another tell might be high participation rates. A third might be seemingly impossible or improbable results that defy historical voting patterns, candidate performance, or logic. And of course, the biggest tell would be in the numbers.
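The first observable (an adjudication rate far outside historical norms) is easy to express as a toy check. The 3x tolerance and any historical rate fed in below are made-up placeholders of my own, not figures from any actual audit:

```python
# Toy anomaly check for adjudication rates. The 3x tolerance and the
# historical rate supplied by the caller are illustrative placeholders.

def adjudication_outlier(scanned, adjudicated, historical_rate, tolerance=3.0):
    """Return (is_outlier, current_rate): flag when the current adjudication
    rate exceeds the historical norm by more than `tolerance` times."""
    current_rate = adjudicated / scanned
    return current_rate > historical_rate * tolerance, current_rate
```

Against the Fulton County figures cited above (some 106K of 113K images), any plausible historical rate would trip this flag.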
If some rogue SOS implemented a fairly extensive open-filter, adjudication-packing fraud scheme in which all those mail-in ballot reminders were turned into image ballots, you could end up with a favored beneficiary (candidate) far exceeding all previous election norms, like getting over 80 million votes: that is like a gazillion compared to any election in history, like the 20,000-year storm.
An interesting recent factoid is the vote-intent correlation based on census data. One of the items the census solicits is whether a person voted or plans to vote. There is ordinarily a high correlation between these data and official election vote counts. You guessed it: except in 2020, where the official counts were out of whack by more than 4 million votes over and above what people reported in the census figures. Note in this article that there have been other somewhat anomalous correlations, such as in 2012. One wonders whether anything Biden offered about "fraud" could help explain some of these "issues," for instance, in Florida?
I want to wrap this up with a bit of a summary of key points, so that Part 6 can touch upon the factoids, coincidences, and strange "bidness" that occurred during Colorado's 2020 election, and where things should go from here.
I appreciate any feedback or recommendations on clarifying anything I've presented in these articles. Most of the projects I worked over the years, with the exception of open-source "stuff" and commercial small-sat projects, were, are, and probably always will be classified. I apologize if some of the generic examples I've substituted don't quite hit the mark, for instance, the container and scripting examples. Please let me know if I need to clarify or add more detail before I finish this series in the coming weeks.
26 December 2025
Originally published 1 July 2020
LSMBTGA: Lamestream media echo chamber (LMEC-L), social media (SM), big tech tyrants (BT), government (G) and academia (A)
If you enjoyed this article, then please REPOST or SHARE with others; encourage them to follow AFNN. If you'd like to become a citizen contributor for AFNN, contact us at managingeditor@afnn.us. Help keep us ad-free by donating here.
Substack: AmericanFreeNewsNetworkSubstack
TruthSocial:@AFNN_USA
Facebook: https://m.facebook.com/afnnusa
Telegram: https://t.me/joinchat/2_-GAzcXmIRjODNh
Twitter: https://twitter.com/AfnnUsa
GETTR: https://gettr.com/user/AFNN_USA
CloutHub: @AFNN_USA