Over the past several elections we have seen an astounding array of October surprises: on one side, heavy thumb-on-the-scale propaganda and outright bullspit; on the other, "nothing to see here" efforts that seek to deep-six obvious malfeasance into nothingburgers. These are stories that stink out loud, yet become targets of LSMBTG* burial, deep-sixery, and dis- and mis-information.
What has transpired in elections since at least 2018 (particularly in Colorado, which turned Blue that year against all odds, common sense and logic), and more likely since Obama'Linksy cruised to victory via the Florida panhandle in 2012, has been nothing short of thumb-on-the-scale picking of winners and losers. Yet many of the American people, and politicians in particular, seem oblivious, to the point where the obliviousness itself is suspicious and perhaps evidence of culpability.
I think it is important to baseline what we are talking about before moving forward, and there are two pieces I heartily recommend to get into the correct (that is to say, informed) frame of mind before we "go there." Not to toot my own horn, but my Colorado election series lays much of this out; I dwelled on the pieces and parts to provide the details over the course of a dozen episodes or articles. I refer to it only because it is the long version, written for the relatively uninformed skeptic who simply does not believe, or has not kept up with or been exposed to, computer advances over the years: how computers can not only accomplish these tasks trivially, but how they can be used to manipulate facts and data to produce desired outcomes.
I wrote about a lot of defense and military applications over the series, because that is what I was involved with, and the bottom line is that the integration of artificial intelligence/machine learning with desired outcomes over the last ten years has been astounding. I also wrote about and documented the people behind the voting tabulation advances and what absolute "turd birds" they were, particularly the lead engineer, PhD and myriad patent holder for Dominion Voting Systems, Dr. Eric Coomer.
One of the capabilities I have watched mature over time, and a truly fascinating development, is what I refer to as finding the "best fit" or "bestest" solution among a number of relatively ambiguous solutions. For those who reload ammunition, the modern reloading machines illustrate the idea: not the expensive commercial-grade presses with tight tolerances that can produce thousands of rounds per hour at crazy speeds once set up properly, but the relatively ubiquitous hobbyist machines. These are built on a concept I call "loose engineering": they have good tolerances, but are engineered with what I have always referred to as a loose coupling of standards or tolerances. Such machines operate at human speeds, where you can easily achieve 100 rounds per hour, and you stay close to a reasonable production rate to balance safety, accuracy, tolerances and error rates.
I can easily hold a 99% accuracy rate when I reload about 100 rounds an hour; there is often a shell mishap, like a .380 case mixed in with the 9mm, or a case crushed during de-priming or neck sizing. But if I halve that time, accuracy can drop to as low as 95% (depending on the type of round). I learned to reload with a "round checker": my late father-in-law, who would periodically measure case length and place every completed round into a case gauge. We settled into a natural pace where I would stop and do detailed measurements about three times per hundred; we were in no hurry and easily averaged 99.99% while still making good time.
Reloading is obviously a mechanical process, but machine learning and artificial intelligence advances have enabled the same capability in computing. Cloud analytics and cloud computing have become sophisticated to the point where you can do the same thing I described above for variable outcomes: monitor performance and tune the algorithms based on samples and sampling.
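For the technically minded, here is a minimal sketch of that monitor-and-tune loop; the defect model, rates and thresholds are invented for illustration, not drawn from any real system:

```python
import random

# Monitor-and-tune loop: run a batch, sample the error rate, and adjust
# the production rate to keep quality inside tolerance.

def run_batch(rate):
    """Simulate one batch at `rate` units/hour; faster runs produce more defects."""
    error_prob = 0.01 + 0.0004 * max(0, rate - 100)  # assumed speed/error tradeoff
    errors = sum(random.random() < error_prob for _ in range(rate))
    return errors, rate

rate = 200  # starting pace, units per hour
for hour in range(12):
    errors, produced = run_batch(rate)
    error_rate = errors / produced
    if error_rate > 0.01:        # out of tolerance: back off the pace
        rate = max(100, rate - 25)
    elif error_rate < 0.005:     # comfortably in spec: speed back up
        rate += 10
    print(f"hour {hour}: rate={rate}/hr, error rate={error_rate:.3f}")
```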
I think the most advanced aspect of such work has taken place in astronomy, as part of workflows where computers do in minutes what would take humans hours, perhaps days: finding subtle differences between images where clouds, interference and weather distort the data. In my own field we have seen the same advances with hyperspectral signatures and with radar and LIDAR data and imagery, where computers and algorithms do all the work and it is left to humans to determine what it all means. The computers and analytics figure out the detections because the work happens in signature space, in data or mathematical space, where humans cannot compete with the machines.
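A minimal sketch of that kind of machine-assisted change detection, using synthetic image data and an illustrative threshold: normalize two co-registered frames, difference them, and flag pixels that stand far outside the noise.

```python
import numpy as np

# Change detection between two co-registered frames: normalize each frame,
# difference them, and flag pixels that deviate far beyond the noise floor.
# The data is synthetic and the 5-sigma threshold is illustrative.

def detect_changes(frame_a, frame_b, sigmas=5.0):
    a = (frame_a - frame_a.mean()) / frame_a.std()
    b = (frame_b - frame_b.mean()) / frame_b.std()
    diff = b - a
    return np.argwhere(np.abs(diff) > sigmas * diff.std())

rng = np.random.default_rng(0)
night1 = rng.normal(100, 5, (512, 512))          # baseline image
night2 = night1 + rng.normal(0, 1, (512, 512))   # next night: noise only...
night2[200, 300] += 40                           # ...plus one new point source
print(detect_changes(night1, night2))            # expected: [[200 300]]
```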
The ultimate example of machines substituting for human action has to be the Mars rover landing of 5 August 2012. The rover had to be landed and controlled by onboard computers: the communications delay with Earth was so long, and seeing through the dust and debris kicked up by the retrorockets firing to stabilize the lander was so impossible, that mission/ground control could not have responded to even a slightly bad scenario in time. Autonomous landing was truly the only viable alternative.
We humans watching on Earth deemed the sequence the "seven minutes of terror." It went off without a hitch, but for the fraught nerves of those of us observing in near real time: an incredible experience, watching the power and marvel of science shadowed by risk and potential disaster beyond anyone's control, with billions of dollars and reputations on the line.
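A quick back-of-the-envelope calculation shows why ground control had to be out of the loop: Mars was roughly 248 million kilometers away that day (an approximate figure), so a one-way radio signal took nearly fourteen minutes, twice the length of the seven-minute landing sequence itself.

```python
# Approximate one-way light time from Mars to Earth on landing day.
# The 248 million km distance is a rough figure for 5 August 2012.

C_KM_PER_S = 299_792.458          # speed of light, km/s
DISTANCE_KM = 248_000_000         # approximate Earth-Mars distance

one_way_minutes = DISTANCE_KM / C_KM_PER_S / 60
print(f"one-way signal delay: {one_way_minutes:.1f} minutes")  # ~13.8 minutes
```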
What does this have to do with elections and voting? Data is data, and when an ongoing process can be subjected to cloud analytics, it is amazing what inferences can be made. Many in my field of intelligence and image interpretation are skeptical of allowing machines to do analysis and interpretation that have been the province of humans over the years. But I am reminded of an instructor at the Naval Postgraduate School in Monterey who would open one of his automation and artificial intelligence classes by stating that someday computers would run and dictate every aspect of human activity. When his skeptical master's students took issue, he would debunk each argument in turn by showing how each student already responded to machines in daily life in ways that dictated their actions: traffic signals (did you come through a traffic signal today?), GPS directions, oven timers, cruise control, and so on.
Systems like Saturn Arch, performing hyperspectral imaging, or radar and LIDAR sensors that produce data samples (data cubes) optimized in data space and only secondarily in imagery or picture space, are what I am talking about, and the use of machine learning and artificial intelligence to make sense of data from these sensors is mature science. Not only that, but the identification itself is somewhat outside the capability and expertise of the human.
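To make "working in data space" concrete, here is a minimal sketch, with synthetic data and an illustrative threshold, of matching a known spectral signature against every pixel of a hyperspectral data cube: a task done entirely in mathematical space, where no human eye is involved.

```python
import numpy as np

# Score each pixel's spectrum against a reference signature by cosine
# similarity; pixels above the threshold are "detections." Synthetic data.

def match_signature(cube, signature, threshold=0.95):
    """cube: (rows, cols, bands); signature: (bands,). Returns matching pixel coords."""
    flat = cube.reshape(-1, cube.shape[-1])
    scores = flat @ signature / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(signature) + 1e-12
    )
    return np.argwhere(scores.reshape(cube.shape[:2]) > threshold)

rng = np.random.default_rng(1)
cube = rng.random((64, 64, 128))      # 64x64 scene, 128 spectral bands
target = rng.random(128)              # reference signature of the material sought
cube[10, 20] = target * 2.0           # plant one pixel containing that material
print(match_signature(cube, target))  # expected: [[10 20]]
```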
I sponsored an effort with Johns Hopkins a decade or more ago to teach our analysts aspects of "data science," which led to a certification effort to train them in the methodology. The surprising part was not the lessons learned and new insights from experienced analysts and newbies alike, but the reaction of seniors and experienced analysts who dismissed much of the effort as junk or faux science, as if the pursuit of such expertise and knowledge was almost unnecessary!
A great example of what I am talking about is the way we track Global Positioning System precision within the government. The bottom line: we have expected outcomes from a precision standpoint, and we have methodologies and procedures in place to monitor the system, identify when problems occur, and correct them to obtain the best solution, somewhat on the fly, which means before operators using the system are even aware a problem occurred. When you think about it, the requirement drives the solution, because there is nothing worse than finding out after the fact that a bad data file, or an out-of-specification or out-of-calibration sensor, caused an erroneous reading that resulted in a bad outcome.
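A minimal sketch of that monitor-and-flag pattern; the residuals and the 3-meter tolerance below are invented for illustration. The point is that the out-of-spec reading is caught as it happens, not discovered after a bad outcome.

```python
# Compare each residual against the expected value and tolerance, and raise
# the alarm the moment a reading goes out of spec, before any downstream
# user acts on a bad solution. Numbers are hypothetical.

EXPECTED_M = 0.0     # expected residual, meters
TOLERANCE_M = 3.0    # out-of-spec threshold, meters

residuals = [0.4, -0.7, 1.1, 0.2, 4.8, 0.3]  # hypothetical measurement stream

for i, r in enumerate(residuals):
    if abs(r - EXPECTED_M) > TOLERANCE_M:
        print(f"sample {i}: {r:+.1f} m OUT OF SPEC, correct before release")
    else:
        print(f"sample {i}: {r:+.1f} m within tolerance")
```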
One of the simplest examples of this in operation is data centers running A-side and B-side servers to maintain near-100% availability. Computers monitor server performance, and when it goes out of tolerance, as measured by degraded drive spin rates, increased read, write, or sector errors or faults, and the like, they seamlessly cut over to the backup B server and notify maintenance of the problem. This is not new: our system running in 2006 had aspects of this capability, while the new system we built around 2010 was based on four-nines-or-better (99.99%+) availability over the course of a day, week, month or year. No serious data center or data provider would settle for less in 2022.
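Here is a minimal sketch of that A/B cutover logic; the metric names and limits are illustrative, not any particular vendor's interface.

```python
# Poll health metrics on the active side and cut over to the standby when
# any metric leaves tolerance. Metrics and limits are made up.

LIMITS = {"read_errors": 5, "write_errors": 5, "sector_faults": 1}

def healthy(metrics):
    return all(metrics.get(name, 0) <= limit for name, limit in LIMITS.items())

active, standby = "A", "B"
telemetry = {
    "A": {"read_errors": 2, "write_errors": 0, "sector_faults": 0},
    "B": {"read_errors": 0, "write_errors": 0, "sector_faults": 0},
}

telemetry["A"]["sector_faults"] = 3    # side A develops disk faults
if not healthy(telemetry[active]):
    active, standby = standby, active  # seamless cutover to the good side
    print(f"cut over to side {active}; maintenance ticket opened for {standby}")
```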
These innovations in computing, enabled by artificial intelligence, machine learning, cloud analytics and raw compute power, have enhanced and empowered nearly every aspect of modern society. With this capability has come the potential for cheating in just about all aspects of life. One of the more recent examples that illustrates not only the compute power available but also how destructive such power can be in the wrong hands is the emerging story of cheating in chess.
Chess players at the top levels of the game are so good that you rarely see a lower-ranked player beat a top player, or even do well in a random way against top-level competition. The modern chess computers are called "engines," and they simply don't, or can't, lose to humans anymore because they play near-perfect chess, while humans often play variations they know their opponents are weak against, or that exploit some "flaw" in an opponent's approach. The top-rated human chess player of all time is debatable, but no human has achieved a rating higher than an estimated 2900, while the top-rated chess engine is rated at 3324. Even the esoteric and humanly baffling game of Go has seen a computer beat an acknowledged champion.
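The standard Elo expected-score formula makes the point concrete; the ratings plugged in below are illustrative round numbers.

```python
# Elo expected score: a ~474-point gap (2850 vs. 3324) leaves the human an
# expected score of about 6% per game, much of it from draws rather than wins.

def expected_score(rating_a, rating_b):
    """Expected score (win=1, draw=0.5) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

print(f"top human vs. top engine: {expected_score(2850, 3324):.3f}")         # ~0.061
print(f"2000-rated club player vs. 2750 GM: {expected_score(2000, 2750):.3f}")  # ~0.013
```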
The ongoing cheating scandal in chess, involving rising star Hans Niemann, who was previously caught cheating, and his recent defeat of current world champion Magnus Carlsen, threatens to change the way competitions are conducted forever, and serves as a harbinger and warning for these types of competition going forward.
I don't want to distract from my topic here, but rather to draw an analogy to what has been going on with electronic ballot machines over the past dozen years or so. It would not take a genius to achieve a desired result using electronic tabulation machines, which do their business by creating ballot images rather than using the actual voter ballots. Indeed, there is considerable evidence of anomalous results and strange happenings, yet there seems to be great reticence among officials not only to acknowledge the obvious, but even to look into the results to prove or disprove the accusations.
It would seem relatively easy for officials to simply re-examine ballots to verify results, but in states like Georgia, officials have taken the extraordinary and unusual step of requiring election audits to be conducted with ballot images, computer-generated copies of actual ballots, rather than the ballots themselves. Since many of these ballot images must be adjudicated to establish voter intent, and many of these adjudications are not tallied or tracked carefully, you have real potential for mischief in the adjudication of these electronic images: be they copies of actual voter-marked ballots, machine-populated copies of voter ballots, or some other construct of the process.
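One safeguard that would close much of that gap is well within mature practice: cryptographically fingerprint every ballot image at scan time, then let auditors re-hash the files later to prove none were altered or regenerated. A minimal sketch, with hypothetical filenames and manifest; the hashing itself is standard SHA-256:

```python
import hashlib
from pathlib import Path

# Fingerprint ballot images so an audit can prove none were altered or
# regenerated after scan time. Directory, filenames and manifest are
# hypothetical placeholders.

def fingerprint(image_path):
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def audit(image_dir, manifest):
    """Return names of images whose current hash differs from the scan-time manifest."""
    return [p.name for p in sorted(Path(image_dir).glob("*.png"))
            if fingerprint(p) != manifest.get(p.name)]

# usage (hypothetical): mismatches = audit("ballot_images", manifest_from_scan_time)
```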
In the case of Georgia, even if there were a willingness to do some type of sensible audit, which state officials have consistently fought, many counties lack the very ballot images that Georgia law dictates as the required audit medium.
And break, break. With ongoing advances in forensics, computing and analytics, we can often find and calculate the fraud using numbers and data alone, with little to no help from election officials. This American Thinker article describes how such techniques can expose fraud and bad actors with little to no cooperation from election officials: which is about how much cooperation these efforts have been receiving of late.
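One widely discussed screening technique of this kind (not necessarily the one in the article) is a last-digit uniformity test: in large, organically generated precinct totals, the final digit should be close to uniform, so a strong skew flags precincts for closer human review. It is a flag, not proof. A minimal sketch with made-up totals:

```python
from collections import Counter

# Last-digit screen: compare observed last-digit counts against a uniform
# distribution with a chi-squared statistic. Totals are invented; a real
# screen would use hundreds of precincts, not twenty.

totals = [1834, 2207, 951, 3120, 1448, 2675, 1999, 842, 3051, 1207,
          2764, 1530, 2981, 1116, 2453, 1879, 904, 3342, 1661, 2208]

counts = Counter(t % 10 for t in totals)
expected = len(totals) / 10
chi_sq = sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))
print(f"chi-squared vs. uniform last digits: {chi_sq:.2f}")
# compare against ~16.92 (p = 0.05, 9 degrees of freedom)
```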
There have been plenty of anomalies, but why has there been such reticence to deal with known problems, anomalies and irregularities? More on this in Part 2.
6 October 2022
*LSMBTG: Lamestream media echo chamber (LMEC-L), social media (SM), big tech tyrants (BT) and government (G)