Why Are These Tabulation Machines So Capable: Beyond Vote Counting
In Part 1 we set the stage for a description of modern election tabulation machines by surveying examples of present-day computing capabilities and implementations that deliver increasingly sophisticated solutions to modern problems and challenges.
For those who have worked with, owned, and interacted with computers before, modern machines are unbelievably powerful compared to their "ancestors." I bought my first one in 1982, but I was using rudimentary Army computers like the Field Artillery Digital Automatic Computer (FADAC), adopted for use with our 914 Richards Light Tables with Zoom 70 Macroscopes (capable of resolving 70 lines per millimeter), to do photogrammetry as early as 1976. Particularly for those less familiar, it is astounding how capable these modern machines are when you consider that email was just becoming ubiquitous in the latter part of the 1990s, riding 1.2 kbps modems.
Way back in 2006 or so I was the program manager for a fairly large government enterprise architecture upgrade that involved several million dollars' worth of information technology enterprise service support center processing hardware, software, and firmware. (I don't pretend to be a Defense Acquisition Workforce Improvement Act (DAWIA) acquisition professional, that is, certified for "process," but I have more time in the proverbial program manager chow line than most of the DAWIA-certified technocrats who supported me over several decades. There is a Department of Defense (DoD) definition for programs, well, for everything, that triggers increasingly burdensome paperwork and oversight as dollars, or more importantly congressional interest, increase. In general terms, anything over about $250 million a year is large, and over the Future Years Defense Program that constitutes a billion-dollar program. There are nauseating details on all things defense acquisition, and that is not my topic today. Some may observe that the DoD definitions list different amounts; I would say we would get along great with you as the lead DAWIA guy on my team, working substantive issues like that. More on the pieces and parts of all things government acquisition in a future article.)
That year we undertook a number of risk reduction studies to test a few implementations of leading-edge information technology concepts and to accomplish a cost-benefit analysis that would carry us through the build of a new data center and enterprise architecture service desk at a new site, including a move and stabilization down the road, circa 2011 or so. One of the key pacing items was moving approximately 4 terabytes of data representing our image library holdings.
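To give a sense of why that move was a pacing item, here is a back-of-the-envelope sketch of transfer times over circuits typical of that era; the link speeds are illustrative assumptions on my part, not figures from the actual program.

```python
# Back-of-the-envelope transfer time for ~4 TB of imagery.
# Circuit speeds are era-typical illustrations, not the program's actual links.
data_bits = 4e12 * 8  # 4 terabytes expressed in bits

for name, gbps in [("T3 (~45 Mbps)", 0.045), ("OC-3 (~155 Mbps)", 0.155), ("Gigabit link", 1.0)]:
    hours = data_bits / (gbps * 1e9) / 3600
    print(f"{name}: ~{hours:.0f} hours at theoretical line rate")
```

Real-world protocol overhead, contention, and verification only stretch those numbers further.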
One study involved a small but growing firm called NVIDIA, which was doing leading-edge research with advanced chipsets in distributed cloud processing of photography and imagery (machine learning) for gaming applications.
Another effort involved a big defense vendor that was dense-packing Storage Area Network (SAN) configurations with a mix of Fibre Channel and Ethernet, pushing the bounds of performance and testing our hypothesis that we could eliminate spinning disk storage in favor of solid state in an on-line, near-line, off-line configuration (replacing spinning disk, Digital Cassette Recording System Interface (DCRSI) tape-based robot storage, and true tape storage).
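For readers unfamiliar with the on-line/near-line/off-line terminology, here is a minimal sketch of how such a tiering policy might be expressed; the age thresholds and tier labels are assumptions for illustration, not the rules we actually tested.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative tiering rules; thresholds and labels are assumptions, not the tested policy.
TIER_RULES = [
    (timedelta(days=30), "on-line (solid state)"),            # hot data, immediate access
    (timedelta(days=365), "near-line (disk or tape robot)"),  # retrievable in minutes
    (timedelta.max, "off-line (tape vault)"),                 # archival, manual recall
]

def assign_tier(last_accessed: datetime, now: Optional[datetime] = None) -> str:
    """Pick a storage tier based on how recently an image was last accessed."""
    age = (now or datetime.now()) - last_accessed
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return TIER_RULES[-1][1]

# Example: an image untouched for two months lands on the near-line tier.
print(assign_tier(datetime(2006, 1, 1), now=datetime(2006, 3, 1)))
```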
Another study effort we undertook as part of our distributed enterprise research was a collaboration with the National Security Agency (NSA) Technology Directorate to look at the performance of a hybrid cloud implementation for distributed bulk processing of massive data sets to enable targeting.
If you've worked with NSA, you are aware they have more money than Richie Rich, a cadre of mathematicians, some 35,000 people, and the resources you would expect from a large intelligence agency. The amount of data being processed was staggering considering the state of technology integration within the US Government at the time (2005-2009).
NSA still had a large number of Origin and Graph processing machines (Silicon Graphics, Cray) that were unique: many were one-of-a-kind, custom builds with expensive boards and components built to individual processing specifications, but they were state of the art for a long time in applications such as missile warning (the legacy Defense Support Program) and National Missile Defense.
Security concerns preclude a deeper discussion of the methodology used to accomplish the task, but the NSA project eventually grew into a capability best known as the Real Time Regional Gateway, a somewhat closed-circuit distributed processing hub based on a hybrid cloud implementation that was routinely producing target-grade solutions from intercepts within seconds to minutes of processing time.
The safest analogy to relate concerns the processing of utility billing in New York. During this timeframe (roughly 2005-2008) a consolidation of utility providers left the major power producer as the beneficiary of a centralized regional billing process, adding some 6 million households over and above their normal client base. There was one major problem: their computer infrastructure was already completely maxed out processing monthly billing for their existing customer base.
The initial solutions using existing infrastructure were non-starters. Their engineers projected they would either have to implement a less-than-optimal 45-day (rather than 30-day) billing cycle, or go to a historical-usage estimation system, in which they bill an estimate and adjust on the next cycle, with a year-end adjustment to rectify accounts.
The latter would create a new problem, particularly for upstate clients, as those year-end adjustments could be burdensome in the extreme because of the huge variation in usage, as well as the increased raw material costs that often hit the East Coast in the winter months of the last quarter of the calendar year. Plus, they weren't sure the system could handle the processing load anyway.
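To make the estimate-and-adjust scheme concrete, here is a minimal sketch of the arithmetic; the usage figures and the rate are invented for illustration and are not drawn from the utility's actual books.

```python
# Minimal sketch of estimate-then-true-up billing (all figures invented for illustration).
rate_per_kwh = 0.15
historical_monthly_kwh = [800, 750, 900, 1100]   # prior-year usage used to build the estimates
actual_monthly_kwh     = [820, 700, 1150, 1400]  # what the meter actually recorded

estimated_bills = [usage * rate_per_kwh for usage in historical_monthly_kwh]
actual_charges  = [usage * rate_per_kwh for usage in actual_monthly_kwh]

# Year-end true-up: the customer owes (or is credited) the whole difference at once.
true_up = sum(actual_charges) - sum(estimated_bills)
print(f"Billed on estimates: ${sum(estimated_bills):.2f}")
print(f"Actual usage value:  ${sum(actual_charges):.2f}")
print(f"Year-end adjustment: ${true_up:+.2f}")  # a cold upstate winter makes this painfully large
```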
IBM and a number of other firms studied the problem, and based on the cost-benefit analysis the utility company went with a novel concept: implementing an emerging capability, a cloud-based solution, which required a massive upgrade of their enterprise architecture. The performance metrics of the cloud-based solution were staggering, not only eliminating delays in the billing process but also providing improved estimation capability to monitor power consumption in near real time and to fine-tune projections for raw material purchases.
The net result enabled the company to bill on a 20-day cycle, or shorter if desired, while increasing profitability by resolving the commodity procurement challenges that had been considerably reducing their profit.
Emerging cloud-based solutions were also a great benefit to firms such as Sysco Foods, whose order-and-delivery battle rhythm is massive and staggering to manage. Advancements in their tracking system through use of the cloud yielded enhancements such as a reduction of several percentage points in spoilage for major items such as fresh fish delivered for Friday consumption by practicing Christians for whom Fish 'N Chips was a mandatory item.
When you are doing tens of millions of dollars a week in Friday fish deliveries, saving a mere 2-3% in spoilage by mitigating transportation and delivery issues is a huge adder to the bottom line of a business unit.
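The arithmetic is simple but the dollars are not small. A quick sketch, with the weekly revenue figure invented to match the "tens of millions" scale mentioned above:

```python
# Quick spoilage-savings arithmetic (weekly revenue invented to match the "tens of millions" scale).
weekly_fish_revenue = 30_000_000   # dollars per week, illustrative
spoilage_reduction  = 0.025        # midpoint of the 2-3% cited above

weekly_savings = weekly_fish_revenue * spoilage_reduction
print(f"Weekly savings: ${weekly_savings:,.0f}")        # $750,000
print(f"Annual savings: ${weekly_savings * 52:,.0f}")   # $39,000,000
```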
Military logistics planners do very, very well at planning myriad types of big, complicated operations. A great story in that regard is how the military adjusted its capability when the XM1/M1 Abrams tank entered the inventory and it suddenly became clear that resupply vehicles had to be every bit as capable to keep up with the speed and consumption of this revolutionary tank. Certainly a very different challenge from Sysco Foods', but try making Friday fish deliveries routinely in places like New York City, lower Manhattan, or Boston!
Bloomberg Enterprises similarly built out its formerly exclusively terminal-based business model through an innovative cloud-based data approach that reduced customer costs, increased the timeliness and relevancy of content, and enhanced profitability through significant new capability, providing unlimited data on demand and enabling them to ramp up their worldwide data analysis efforts by orders of magnitude.
Bloomberg made a major move into China through these efforts; China is arguably the world's largest and most insatiable market for consumption of American industry and production-related economic data.
We could talk about auto parts, Amazon, Netflix, delivery of prescription drugs, and IBM's Watson winning Jeopardy (although that was on a fixed-corpus Jeopardy knowledge base with no internet access) and revolutionizing patient diagnosis. There was even talk of Watson participating in Iron Chef at one point; I have a "Bengali Butternut BBQ Sauce" recipe produced by Watson as part of the prep. The list is endless.
The point I'm belaboring here is that the promise of computers as a "chicken in every pot," made back when I bought my Radio Shack TRS-80 Model III in 1982, has come to fruition.
As an aside, my first job after military retirement was as an Army civilian test officer for Joint Theater Missile Defense evaluations. Nearly every exercise system was either monitored or backed up as part of my data reduction process: gateways, tactical broadcast, air defense links, the newly deployed Global Command and Control System Common Operational Picture, and so on. We used linked Excel spreadsheets at the time to process and reduce data on target engagements.
Of note, with the government-available IT circa 1993, the calculations took so long that I would suspend the recalculation function until I had entered all the new data, hit compute, and took a smoke break. It took over three and a half minutes to crunch the sheet. In successive years, as newer chipsets were introduced and finally, in 1995-1997, the first reasonably priced, commercially available Pentium processors arrived (at government purchase prices), we used that "spreadsheet from hell" as the benchmark to test new computers. And oh, by the way, along the way I inadvertently discovered you could reach the end of the proverbial spreadsheet internet at 65,536 rows in Excel (you work with what you know…).
The "time to crunch" halved with each new chipset, until the day we received our first Pentium processor with new advances in expandable RAM. My Operations Research/Statistical Analysis support and I loaded the sheet; I changed some data, hit Enter, and the computer blinked, but we thought nothing had happened. The Pentium was so fast you missed it if you blinked! We had finally reached the point where technology had met our processing requirements (at government price limits). That was a big day for those of us geeks who were in the business and noticed.
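For the curious, the halving progression looks like this, starting from the roughly three-and-a-half-minute figure above; the number of chipset generations shown is purely illustrative.

```python
# Rough sketch of the "time to crunch halves with each new chipset" progression,
# starting from the ~3.5-minute figure above (the generation count is illustrative).
crunch_seconds = 3.5 * 60
for generation in range(5):
    print(f"Chipset generation {generation}: ~{crunch_seconds:.0f} seconds to recalculate")
    crunch_seconds /= 2
```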
Business and academia, as well as the government, have made the breakthroughs and developed and refined the applications to the point where I can 3-D print an obscure part for my 1963 Avanti for the cost of materials, and it is better than the original part. With "hog troughs" (the mounts that mate the fiberglass body to the door frame on each side) costing over $1,500, these are tremendous breakthroughs for the consumer!
I know, a good question at this point is: what does this have to do with elections? Somewhere around this timeframe (roughly 2006) automated election tabulators were being developed and offered as a solution to continuing problems with the election process and the challenges of providing timely and accurate projections and results. Some of the development history remains controversial and a matter of dispute, but suffice it to say the original idea was based on speeding up and improving a distributed computing process.
The problematic part of these machines is: what process do they contribute to improving? Venezuela often comes up in these discussions based on allegations of fraud in its 2017 election. What is unique about that story is that the allegations originated not only with the opposition candidate, which is hardly novel or new, but with the Chief Executive Officer of Smartmatic, the maker of the tabulating machines used to count the votes. This was not the first time Smartmatic had been at the center of a Venezuelan election controversy.
Our country has a somewhat checkered history when it comes to elections. LBJ is widely regarded as having stolen his 1948 Senate election through fraud. However, LBJ historian Robert Dallek believes that LBJ's "Box 13" actions in 1948 were something of a "return the favor" effort stemming from his having been defrauded out of the 1941 Senate special election.
The election of JFK in 1960 has been the source of allegations of fraud forever.
What is different about the 2020 election is the sophistication of the alleged fraud, and the incuriosity of the lamestream media echo chamber (LMEC/LSMBTGA) and the social media titans, along with their suppression of any and all discussion related to the possibility.
Similarly, the state and federal court systems, up to and including the Supreme Court, have played gatekeeper in preventing serious consideration of any and all challenges, dismissing them on standing and technical grounds, none of which involved looking at the underlying election problems.
That has not stopped the process in many states from going forward, as outlined in this great piece by Stu Cvrk at RedState (now behind a paywall), nor the emergence of facts about incongruent results that are the topic of Part 2 (timed off the site). Stu has been all over this story from jump street. What happens as these audits play out if the allegations of fraud become facts of fraud?
Theories abound on the ramifications of this potentiality, but that is a topic for another day. Should that happen (allegations become facts), the word of the day will be "spoliation."
I want to get back to the discussion about the cloud and elections, and how distributed processing controlled through a central node could be used, or abused, with only a few people the wiser, but I will take up those details in Part 2.
First, a little ditty about Eric Coomer, PhD. Have you heard of him? Until recently he was the Vice President for Strategy and Security at Dominion Voting Systems (DVS). He began his IT election career in 2005 working for Sequoia, becoming the chief software architect. He later took over all development operations as Vice President of Engineering.
DVS acquired Sequoia in 2010, with Coomer joining as Vice President of US Engineering overseeing development in the Denver, Colorado office. More about Eric in Part 2.
20 December 2025
Originally published 21 June 2021
LSMBTGA: Lamestream media echo chamber (LMEC-L), social media (SM), big tech tyrants (BT), government (G), and academia (A)