History – Hackaday

The Lambda Papers: When LISP Got Turned Into a Microprocessor
The physical layout of the SCHEME-78 LISP-based microprocessor by Steele and Sussman. (Source: ACM, Vol 23, Issue 11, 1980)

During the AI research boom of the 1970s, the LISP language – from LISt Processor – saw a major surge in use and development, with many new dialects appearing. One of these dialects was Scheme, developed by [Guy L. Steele] and [Gerald Jay Sussman], who wrote a number of articles that were published by the Massachusetts Institute of Technology (MIT) AI Lab as part of the AI Memos. This subset, called the Lambda Papers, covers both men’s ideas on lambda calculus and its application to LISP, and culminates in the 1980 paper on the design of a LISP-based microprocessor.

Scheme is notable here because it influenced the development of what would be standardized in 1994 as Common Lisp, which can fairly be called ‘modern Lisp’. The idea of creating dedicated LISP machines was not a new one; it was driven by the processing requirements of AI systems. The mismatch between LISP’s S-expressions and the instruction sets that the CPUs of the era were designed around led to the development of processors with dedicated hardware support for LISP.

The design described by [Steele] and [Sussman] in their 1980 paper, featured in the Communications of the ACM, uses an instruction set architecture (ISA) that matches the LISP language much more closely. As described, it is effectively a hardware LISP interpreter, implemented in a VLSI chip called the SCHEME-78. Moving as much as possible into hardware naturally improves performance considerably, somewhat like how today’s AI boom is built around dedicated vector processors that excel at inference in a way that generic CPUs do not.
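
To see why ordinary instruction sets were a poor fit, consider what even a trivial LISP evaluation involves. The sketch below is a toy cons-cell evaluator in Python, not anything from the SCHEME-78 itself; it just illustrates the car/cdr pointer chasing and type dispatch that a LISP machine moves into hardware.

```python
# A toy cons-cell evaluator, not the SCHEME-78 design: it shows the pointer
# chasing (car/cdr) and type dispatch that dominate LISP execution and that
# a LISP machine implements directly in hardware.

class Cons:
    __slots__ = ("car", "cdr")
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

def lisp_list(*items):
    """Build a proper list out of cons cells; nil is represented as None."""
    result = None
    for item in reversed(items):
        result = Cons(item, result)
    return result

def eval_sexp(expr, env):
    """Evaluate a tiny subset: numbers, symbols, and simple function calls."""
    if isinstance(expr, (int, float)):        # self-evaluating atom
        return expr
    if isinstance(expr, str):                 # symbol: look it up
        return env[expr]
    op = eval_sexp(expr.car, env)             # operator in the car position
    args, rest = [], expr.cdr
    while rest is not None:                   # walk the argument list via cdr
        args.append(eval_sexp(rest.car, env))
        rest = rest.cdr
    return op(*args)

env = {"+": lambda a, b: a + b, "x": 40}
print(eval_sexp(lisp_list("+", "x", 2), env))   # (+ x 2) => 42
```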

During the 1980s, LISP machines began to integrate more and more hardware features, with the Symbolics and LMI systems featuring heavily. Later, these systems were also marketed towards non-AI uses like 3D modelling and computer graphics. However, as funding for AI research dried up and commodity hardware began to outpace specialized processors, these systems vanished.

Top image: Symbolics 3620 and LMI Lambda Lisp machines (Credit: Jason Riedy)

Word Processing: Heavy Metal Style

If you want to print, say, a book, you will probably type it into a word processor. Someone else will take your file and produce pages on a printer, with your words directly driving a laser beam or something similar to put text on paper. But for a long time, printing meant creating some physical representation of what you wanted to print that could stamp an imprint on a piece of paper.

The process of carving something out of wood or some other material to stamp out printing is very old. But the revolution came when the Chinese and, later, the Europeans realized it would be more flexible to make symbols that you could assemble texts from. Moveable type. The ability to mass-produce books and other written material had a huge influence on society.

But there is one problem. A book might have hundreds of pages, and each page has hundreds of letters. Someone has to find the right letters, put them together in the right order, and bind them together in a printing press’ chase so it can produce the page in question. Then you have to take it apart again to make more pages. Well, if you have enough type, you might not have to take it apart right away, but eventually you will.

Automation

A Linotype matrix for an upright or italic uppercase A.

That’s how it went, though, until around 1884. That’s when Ottmar Mergenthaler, a clockmaker from Germany who lived in the United States, had an idea. He had been asked for a quicker method of publishing legal briefs. He imagined a machine that would assemble molds for type instead of the actual type. Then the machine would cast molten metal to make a line of type ready to get locked into a printing press.

He called the molds matrices and built a promising prototype. He formed a company, and in 1886, the New York Tribune got the first commercial Linotype machine.

These machines would be in heavy use all through the early 20th century, although sometime in the 1970s, other methods started to displace them. Even so, a few printing operations were still using Linotypes as late as 2022, as you can see in the video below. We don’t know for sure if The Crescent is still using the old machine, but we’d bet they are.

Of course, there were imitators and the inevitable patent wars. There was the Typograph, which was an early entry into the field. The Intertype company produced machines in 1914. But just like Xerox became a common word for photocopy, machines like this were nearly always called Linotypes and, truth be told, were statistically likely to have been made by Mergenthaler’s company.

Kind of Steampunk

Diagram from a 1904 book showing the parts of a Linotype.

For a machine that appeared in the 1800s, the Linotype looks both modern and steampunk. It had a 90-key keyboard, for one thing. Some even had paper tape readers so type could be “set” somewhere and sent to the press room via teletype.

The machine had a store of matrices in a magazine. Of course, you needed lots of common characters and perhaps fewer of the uncommon ones. Each matrix had a particular font and size, although for smaller fonts, the matrix could hold two characters that the operator could select from. One magazine would have one font at a particular size.

Unlike type, a Linotype matrix isn’t a mirror image, and it is set into the metal instead of rising out of it. That makes sense. It is a mold for the eventual type that will be raised and mirrored. The machine had 90 keys. Want to guess how many channels a magazine had? Yep. It was 90, although larger fonts might use fewer.

Different later models had extra capabilities. For example, some machines could hold four magazines in a stack so you could set multiple fonts or sizes at one time, with some limitations, depending on the machine. Spaces weren’t in the magazine. They were in a special spaceband box.

Each press of a key would drop a matrix from the magazine into the assembler at the bottom of the machine, in position for either the primary or auxiliary letter. This was all a mechanical process, so the machines had to be kept clean and lubricated; a skilled operator could do about 30 words per minute. There was also a special pi channel where you could put odd matrices you didn’t use very often.

Typecasting

When the line was done, you pressed the casting lever, which would push the matrices out of the assembler and into a delivery channel. Then the line moved into the casting section, where casting took about nine seconds. A motor moved the matrices to the right place, and a gas burner or electric heater kept a pot of metal (usually the lead/antimony/tin mix that is traditional for type) molten.

A properly made slug from a Linotype was good for 300,000 imprints. The metal pot, however, did require periodic removal of the dross from the top of the hot metal. And once you didn’t need a slug anymore, you just dropped it back in the pot.

Justification

A composed line with long space bands. (From a 1940 book by the Linotype Company). Note that each matrix has two letters.

You might wonder how type would be justified. The trick is in the space bands. They were larger than the other matrices and made so that the further they were pushed into the block, the more space they took. A mechanism pushed them up until the line of type exactly fit between the margins.

You can see why the space bands were in a special box. They are much longer than the typical type matrices.

How else could you even out the spaces with circa-1900 technology? Pretty clever.
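
As a rough illustration of the arithmetic the wedges performed mechanically, here is a short Python sketch; the point sizes and limits are made-up numbers, not Linotype specifications.

```python
# A sketch of the justification arithmetic the space bands solved mechanically:
# whatever width is left over on the line is divided equally among the bands as
# their wedges are driven upward. All point values here are made up.

def band_width(char_widths_pts, band_count, measure_pts,
               min_band_pts=3.0, max_band_pts=12.0):
    """Width each space band must expand to so the line exactly fills the measure."""
    leftover = measure_pts - sum(char_widths_pts)
    if band_count == 0:
        raise ValueError("no space bands in the line")
    per_band = leftover / band_count
    if not min_band_pts <= per_band <= max_band_pts:
        raise ValueError("line too tight or too loose to justify; reset it")
    return per_band

# Matrices totalling 120 points on a 156-point measure, with four space bands:
print(band_width([30, 25, 35, 30], 4, 156))   # each band wedges out to 9.0 points
```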

The distributor bar (black) has teeth that engage teeth on each matrix.

If you have been paying attention, there’s one major drawback to this system. How do the matrix elements get back to the right place in the magazine? If you can’t automate that, you still have a lot of manual labor to do. This was the job of the distributor. First, the space bands were sorted out. Each matrix has teeth at the top that allow it to hang on a toothed distributor bar. Each letter has its own pattern of teeth that form a 7-bit code.

As the distributor bar carries them across the magazine channels, it will release those that have a particular set of teeth missing, because it also has some teeth missing. A diagram from a Linotype book makes it easier to understand than reading about it.
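
Here is a toy model in Python of that release logic. The tooth patterns and channel order are invented for illustration, not the real Linotype code, but the principle is the same: a matrix rides the bar as long as at least one of its teeth finds support, and drops at the first channel where none do.

```python
# A toy model of the distributor's release logic. The tooth patterns and the
# channel order are invented, not the real Linotype code, but the idea holds:
# a matrix hangs on whichever bar teeth meet its own, and drops at the first
# channel where none of its teeth find support.

MATRIX_TEETH = {          # bit set = the matrix has a tooth at that position
    "e": 0b0000011,
    "t": 0b0000110,
    "a": 0b0001100,
}

BAR_SECTIONS = [          # bit set = the bar still has a tooth at that position
    ("e channel", 0b1111100),   # teeth 1 and 2 cut away: only 'e' loses support
    ("t channel", 0b1111000),
    ("a channel", 0b1110000),
]

def drop_channel(letter):
    teeth = MATRIX_TEETH[letter]
    for name, bar in BAR_SECTIONS:
        if teeth & bar == 0:      # no matrix tooth meets a bar tooth: it falls
            return name
    return "carried off the end"

for letter in "eta":
    print(letter, "->", drop_channel(letter))
```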

The Goldbergs

You have to wonder if Ottmar was related to Rube Goldberg. We don’t think we’d be audacious enough to propose a mechanical machine to do all this on top of an automated way to handle molten lead. But we admire anyone who does. Thomas Edison called the machine the eighth wonder of the world, and we don’t disagree. It revolutionized printing even though, now, it is just a historical footnote.

Can’t get enough info on the Linotype? There is a documentary that runs well over an hour, which you can watch below. If you’ve only got five minutes, try the short demo video at the very bottom.

Moveable type was to printing what 3D printing is to plastic manufacturing. Which might explain this project. Or this one, for that matter.

Site of Secret 1950s Cold War Iceworm Project Rediscovered

The overall theme of the early part of the Cold War was one of subterfuge, with scientific missions often providing excellent cover for placing missiles right on the USSR’s doorstep. Recently, NASA rediscovered Camp Century while testing an airplane-based synthetic aperture radar instrument (UAVSAR) over Greenland. Although the camp was established on the surface in 1959 as a polar research site, and actually produced good science from ice core samples and the like, beneath this benign surface lay the secretive Project Iceworm.

By 1967 the base had to be abandoned due to the shifting ice, which would eventually bury the site under more than 30 meters of ice. Before that, the scientists tested out the PM-2A small modular reactor, which not only provided 2 MW of electrical power and heat to the base, but was itself subjected to various experiments. Alongside this public face, Project Iceworm sought to set up a network of mobile launch sites for Minuteman nuclear missiles. These would be located below the ice sheet, capable of surviving a first-strike scenario by the USSR. A lack of Danish permission, among other complications, led to the project eventually being abandoned.

It was this base that popped up during the NASA scan of the ice bed. Although it was thought that the crushed remains would be safely entombed, it’s estimated that by the year 2100 global warming will have led to the site being exposed again, including the thousands of liters of diesel and tons of hazardous waste that were left behind back in 1967. The positive news here is probably that with this SAR instrument we can keep much better tabs on the condition of the site as the ice cap continues to grind it into a fine paste.


Top image: Camp Century in happier times. (Source: US Army, Wikimedia)

The Great Northeast Blackout of 1965

At 5:20 PM on November 9, 1965, the Tuesday rush hour was in full bloom outside the studios of WABC in Manhattan’s Upper West Side. The drive-time DJ was Big Dan Ingram, who had just dropped the needle on Jonathan King’s “Everyone’s Gone to the Moon.” To Dan’s trained ear, something was off about the sound, as if the turntable speed was wandering: sometimes running at the usual speed, sometimes running slow. But being a pro, he carried on with his show, injecting practiced patter between ad reads and Top 40 songs, cracking a few jokes about the sound quality along the way.

Within a few minutes, with the studio cart machines now suffering a similar fate and the lights in the studio flickering, it became obvious that something was wrong. Big Dan and the rest of New York City were about to learn that they were on the tail end of a cascading wave of power outages that had started minutes earlier at Niagara Falls before sweeping south and east. The warbling turntable and cartridge machines were just a leading indicator of what was to come, their synchronous motors keeping time with the ever-widening gyrations in power line frequency as grid operators scattered across six states and one Canadian province fought to keep the lights on.

They would fail, of course, with the result that 30 million people across 80,000 square miles (207,000 km²) were plunged into darkness. The Great Northeast Blackout of 1965 was underway, and when it wrapped up a mere thirteen hours later, it left plenty of lessons about how to engineer a safe and reliable grid, lessons that still echo through the power engineering community 60 years later.
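
Those warbling turntables are a nice illustration of how tightly a synchronous motor is locked to the grid: its speed scales directly with line frequency. A quick back-of-the-envelope sketch (the frequency values are made up for illustration, not 1965 measurements):

```python
# A back-of-the-envelope look at why the turntables warbled: a synchronous
# motor is locked to line frequency, so platter speed scales directly with it.
# The frequency values below are made up for illustration, not 1965 data.

NOMINAL_HZ = 60.0
NOMINAL_RPM = 100.0 / 3.0          # 33-1/3 RPM at exactly 60 Hz

def platter_rpm(line_hz):
    return NOMINAL_RPM * (line_hz / NOMINAL_HZ)

for hz in (60.0, 59.0, 57.5, 56.0):
    rpm = platter_rpm(hz)
    shift = (rpm / NOMINAL_RPM - 1) * 100
    print(f"{hz:4.1f} Hz -> {rpm:5.2f} RPM ({shift:+.1f}% pitch)")
```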

Silent Sentinels

Although it wouldn’t be known until later, the root cause of what was then the largest power outage in world history began with equipment that was designed to protect the grid. Despite its continent-spanning scale and the gargantuan size of the generators, transformers, and switchgear that make it up, the grid is actually quite fragile, in part due to its wide geographic distribution, which exposes most of its components to the ravages of the elements. Without protection, a single lightning strike or windstorm could destroy vital pieces of infrastructure, some of it nearly irreplaceable in practical terms.

Protective relays like these at a hydroelectric plant started all the ruckus. Source: Wtshymanski at en.wikipedia, CC BY-SA 3.0

Tasked with this critical protective job are a series of relays. The term “relay” has a certain connotation among electronics hobbyists, one that can be misleading in discussions of power engineering. While we tend to think of relays as electromechanical devices that use electromagnets to make and break contacts to switch heavy loads, in the context of grid protection, relays are instead the instruments that detect a fault and send a control signal to switchgear, such as a circuit breaker.

Relays generally sense faults through instrument transformers located at critical points in the system, usually directly within the substation or switchyard. These can be either current transformers, which sense the current in a conductor via the current it induces in a toroidal coil wrapped around it, much like a clamp meter, or voltage transformers, which often use a high-voltage capacitor network as a divider to measure the voltage at the monitored point.

Relays can be configured to use the data from these sensors to detect an overcurrent fault on a transmission line; contacts within the relay would then send 125 VDC from the station’s battery bank to trip the massive circuit breakers out in the yard, opening the circuit. Other relays, such as induction disc relays, sense problems via the torque created on an aluminum disc by opposing sensing coils. They operate on the same principle as the old electromechanical watt-hour meters did, except that under normal conditions, the force exerted by the coils is in balance, keeping the disc from rotating. When an overcurrent fault or a phase shift between the coils occurs, the disc rotates enough to close contacts, which sends the signal to trip the breakers.
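
For a feel of what an overcurrent relay’s time-current behavior looks like, here is a small sketch using the IEC 60255 “standard inverse” curve, the sort of inverse-time characteristic an induction disc produces mechanically; the pickup current and time multiplier below are illustrative values, not settings from 1965.

```python
# A sketch of an inverse-time overcurrent characteristic: the heavier the
# overload, the faster the trip. The curve is the IEC 60255 "standard inverse";
# the pickup current and time multiplier are illustrative, not 1965 settings.

def trip_time_s(current_a, pickup_a, time_multiplier=0.2):
    """Seconds until trip; None means the relay never operates (below pickup)."""
    ratio = current_a / pickup_a
    if ratio <= 1.0:
        return None
    return time_multiplier * 0.14 / (ratio ** 0.02 - 1.0)

for amps in (900, 1200, 2400, 6000):
    t = trip_time_s(amps, pickup_a=1000)
    print(f"{amps:5d} A ->", "no trip" if t is None else f"trip in {t:.2f} s")
```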

The circuit breakers themselves are interesting, too. Turning off a circuit with perhaps 345,000 volts on it is no mean feat, and the circuit breakers that do the job must be engineered to safely handle the inevitable arc that occurs when the circuit is broken. They do this by isolating the contacts from the atmosphere, either by removing the air completely or by replacing the air with pressurized sulfur hexafluoride, a dense, inert gas that quenches arcs quickly. The breaker also has to draw the contacts apart as quickly as possible, to reduce the time during which they’re within breakdown distance. To do this, most transmission line breakers are pneumatically triggered, with the 125 VDC signal from the protective relays triggering a large-diameter dump valve to release pressurized air from a reservoir into a pneumatic cylinder, which operates the contacts via linkages.

The Cascade Begins

At the time of the incident, each of the five 230 kV lines heading north into Ontario from the Sir Adam Beck Hydroelectric Generating Station, located on the west bank of the Niagara River, was protected by two relays: a primary relay set to open the breakers in the event of a short circuit, and a backup relay to make sure the line would open if the primary relays failed to trip the breaker for some reason. These relays were installed in 1951, but after a near-catastrophe in 1956, where a transmission line fault wasn’t detected and the breaker failed to open, the protective relays were reconfigured to operate at approximately 375 megawatts. When this change was made in 1963, the setting was well above the expected load on the Beck lines. But thanks to the growth of the Toronto-Hamilton area, especially all the newly constructed subdivisions, the margins on those lines had narrowed. Coupled with an emergency outage of a generating station further up the line in Lakeview and increased loads thanks to the deepening cold of the approaching Canadian winter, the relays were edging closer to their limit.

Where it all began. Overhead view of the Beck (left) and Moses (right) hydro plants, on the banks of the Niagara River. Source: USGS, Public domain.

Data collected during the event indicates that one of the backup relays tripped at 5:16:11 PM on November 9; the recorded load on the line was only 356 MW, but it’s likely that a fluctuation that didn’t get recorded pushed the relay over its setpoint. That relay immediately tripped its breaker on one of the five northbound 230 kV lines, with the other four relays doing the same within the next three seconds. With all five lines open, the Beck generating plant suddenly lost 1,500 megawatts of load, and all that power had nowhere else to go but the 345 kV intertie lines heading east to the Robert Moses Generating Plant, a hydroelectric plant on the U.S. side of the Niagara River, directly across from Beck. That almost instantly overloaded the lines heading east to Rochester and Syracuse, tripping their protective relays to isolate the Moses plant and leaving another 1,346 MW of excess generation with nowhere to go. The cascade of failures marched across upstate New York, with protective relays detecting worsening line instabilities and tripping off transmission lines in rapid succession. The detailed event log, which measured events with 1/2-cycle resolution, shows 24 separate circuit trips within the first second of the outage.

Oscillogram of the outage showing data from instrumentation transformers around the Beck transmission lines. Source: Northeast Power Failure, November 9 and 10, 1965: A Report to the President. Public domain.

While many of the trips and events were automatically triggered, snap decisions by grid operators all through the system resulted in some circuits being manually opened. For example, the Connecticut Valley Electrical Exchange, which included all of the major utilities covering the tiny state wedged between New York and Massachusetts, noticed that Consolidated Edison, which operated in and around the five boroughs of New York City, was drawing an excess amount of power from their system, in an attempt to make up for the generation capacity lost from upstate. They tried to keep New York afloat, but the CONVEX operators had to make the difficult decision to manually open their ties to the rest of New England to shed excess load about a minute after the outage started, finally completely isolating their generators and loads by 5:21.

Heroics aside, New York City was in deep trouble. The first effects were felt almost within the first second of the event, as automatic protective relays detected excessive power flow and disconnected a substation in Brooklyn from an intertie into New Jersey. Operators at Long Island Lighting tried to save their system by cutting ties to the Con Ed system, which reduced the generation capacity available to the city and made its problem worse. Con Ed’s operators tried to spin up their steam turbine plants to increase generation capacity, but it was too little, too late. Frequency fluctuations began to mount throughout New York City, resulting in Big Dan’s wobbly turntables at WABC.

Well, there’s your problem. Bearings on the #3 turbine at Con Ed’s Ravenswood plant were starved of oil during the outage, resulting in some of the only mechanical damage of the event. Source: Northeast Power Failure, November 9 and 10, 1965: A Report to the President. Public domain.

As a last-ditch effort to keep the city connected, Con Ed operators started shedding load to better match the dwindling available supply. But with no major industrial users — even in 1965, New York City was almost completely deindustrialized — the only option was to start shutting down sections of the city. Despite these efforts, the frequency dropped lower and lower as the remaining generators became more heavily loaded, tripping automatic relays to disconnect them and prevent permanent damage. Even so, a steam turbine generator at the Con Ed Ravenswood generating plant was damaged when an auxiliary oil feed pump lost power during the outage, starving the bearings of lubrication while the turbine was spinning down.
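
The physics behind that collapse is captured by the textbook rate-of-change-of-frequency approximation: frequency falls at a rate proportional to the power deficit and inversely proportional to the system’s inertia. A rough sketch with illustrative numbers (not 1965 figures):

```python
# The textbook rate-of-change-of-frequency approximation: df/dt = f0 * dP / (2H),
# where dP is the per-unit generation deficit and H the aggregate inertia
# constant. The deficit and inertia values are illustrative, not 1965 figures.

F0_HZ = 60.0
H_SECONDS = 4.0        # a typical aggregate inertia constant for steam plant
DEFICIT_PU = 0.10      # generation falls 10% short of load (made-up number)

rocof = F0_HZ * DEFICIT_PU / (2 * H_SECONDS)        # Hz per second
print(f"initial frequency decline: {rocof:.2f} Hz/s")

# Ignoring governor response, under-frequency limits arrive within seconds
# unless load is shed to close the gap:
for limit_hz in (59.5, 58.0, 57.0):
    print(f"reaches {limit_hz} Hz after roughly {(F0_HZ - limit_hz) / rocof:.1f} s")
```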

By 5:28 or so, the outage reached its fullest extent. Over 30 million people began to deal with life without electricity, briefly for some, but up to thirteen hours for others, particularly those in New York City. Luckily, the weather around most of the downstate outage area was unusually clement for early November, so the risk of cold injuries was relatively low, and fires from improvised heating arrangements were minimal. Transportation systems were perhaps the hardest hit, with some 600,000 unfortunates trapped in the dark in packed subway cars. The rail system reaching out into the suburbs was completely shut down, and Kennedy and LaGuardia airports were closed after the last few inbound flights landed by the light of the full moon. Road traffic was snarled thanks to the loss of traffic signals, and the bridges and tunnels in and out of Manhattan quickly became impassable.

Mopping Up

Liberty stands alone. Lighted from the Jersey side, Lady Liberty watches over a darkened Manhattan skyline on November 9. The full moon and clear skies would help with recovery. Source: Robert Yarnell Ritchie collection via DeGolyer Library, Southern Methodist University.

Almost as soon as the lights went out, recovery efforts began. Aside from the damaged turbine in New York and a few transformers and motors scattered throughout the outage area, no major equipment losses were reported. Still, a massive mobilization of line workers and engineers was needed to manually verify that equipment would be safe to re-energize.

Black start power sources had to be located, too, to power fuel and lubrication pumps, reset circuit breakers, and restart conveyors at coal-fired plants. Some generators, especially the ones that spun to a stop and had been sitting idle for hours, also required external power to “jump start” their field coils. For the idled thermal plants upstate, the nearby hydroelectric plants provided excitation current in most cases, but downstate, diesel electric generators had to be brought in for black starts.

In a strange coincidence, neither of the two nuclear plants in the outage area, the Yankee Rowe plant in Massachusetts and the Indian Point station in Westchester County, New York, was online at the time, and so couldn’t participate in the recovery.

For most people, the Great Northeast Power Outage of 1965 was over fairly quickly, but its effects were lasting. Within hours of the outage, President Lyndon Johnson issued an order to the chairman of the Federal Power Commission to launch a thorough study of its cause. Once the lights were back on, the commission was assembled and started gathering data, and by December 6, they had issued their report. Along with a blow-by-blow account of the cascade of failures and a critique of the response and recovery efforts, they made tentative recommendations on what to change to prevent a recurrence and to speed the recovery process should it happen again, which included better and more frequent checks on relay settings, as well as the formation of a body to oversee electrical reliability throughout the nation.

Unfortunately, the next major outage in the region wasn’t all that far away. In July of 1977, lightning strikes damaged equipment and tripped breakers in substations around New York City, plunging the city into chaos. Luckily, the outage was contained to the city proper, and not all of it at that, but it still resulted in several deaths and widespread rioting and looting, which the outage in ’65 managed to avoid. That was followed by the more widespread 2003 Northeast Blackout, which started with an overloaded transmission line in Ohio and eventually spread into Ontario, across Pennsylvania and New York, and into Southern New England.

Lost Techniques: Bond-out CPUs and In Circuit Emulation

These days, we take it for granted that you can connect a cheap piece of hardware to a microcontroller and have an amazing debugging experience. Stop the program. Examine memory and registers. You can see and usually change anything. There are only a handful of ways this is done on modern CPUs, and they all vary only by detail. But this wasn’t always the case. Getting that kind of view into an actual running system was an expensive proposition.

Today, you typically have some serial interface, often JTAG, and enough hardware in the IC to communicate with a host computer to reveal and change internal state, set breakpoints, and the rest. But that wasn’t always easy. In the bad old days, transistors were large and die were small. You couldn’t afford to add little debugging pins to each processor you produced.

This led to some very interesting workarounds. Of course, you could always run simulators on a larger computer. But that might not work in real time, and almost certainly didn’t have all the external things you wanted to connect to, unless you also simulated them.

The alternative? Create a special chip, often called a bond-out chip. These were usually expensive and had some way to communicate with the outside world. This might be a couple of pins, or there might be a bundle of wires coming out of the top of the chip. You replaced your microprocessor with the expensive bond-out chip and connected it to your very expensive in-circuit emulator.

If you have a better scan of the ICE-51 datasheet, we’d love to see it.

For example, the venerable 8051 had an 8051E chip that brought out the address and data bus lines for debugging. In fact, the history of the 8051 notes that they developed the bond-out chip first. The chip was bigger and sold in lower volumes, so it was more expensive. It needed not just connections but breakpoint hardware to stop the CPU at exactly the right time for debugging.
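
Conceptually, that breakpoint hardware boils down to address comparators watching the bus and yanking a halt line on a match. The Python sketch below is a generic illustration of the idea, not any particular vendor’s design, and the addresses are made up.

```python
# A generic sketch of hardware breakpoint logic, not any vendor's design:
# comparators watch the address bus on every instruction fetch and assert a
# halt line when one matches. All addresses below are made up.

class BreakpointUnit:
    def __init__(self, comparators=4):
        self.slots = [None] * comparators     # each slot holds an address or None

    def set_breakpoint(self, slot, address):
        self.slots[slot] = address

    def check(self, fetch_address):
        """Called once per fetch; True means pull the halt line and stop the CPU."""
        return any(bp == fetch_address for bp in self.slots if bp is not None)

bpu = BreakpointUnit()
bpu.set_breakpoint(0, 0x0102)
for pc in (0x0100, 0x0101, 0x0102, 0x0103):
    if bpu.check(pc):
        print(f"halt at {pc:#06x}; hand control to the emulator host")
        break
```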

In some cases, the emulator probe was a board that sat between a stock CPU and the CPU socket. That meant you had to have room to accommodate the large board, and it assumed that your development board at least had a socket, although in those days it was rare to have an expensive CPU soldered right down to the board.

Another poor scan, this time of the Lauterbach emulator probe for the 68000.

For example, the Lauterbach ICE-68300 here could take a bond-out chip or a regular chip, although it would be missing features if you didn’t have the special chip.

Of course, you can still find in circuit emulators, but the difference is that they almost certainly have supporting hardware on the standard chip and simply use a serial communication protocol to talk to the on-chip hardware.

Of course, if you want an emulator for an old CPU, you have enough horsepower now that you can probably emulate it with a modern processor, as the IZE80 does in the video below. Then you can incorporate all kinds of magical debugging features. But be careful what you take on. Properly mimicking the hardware means tight timing for things like DRAM refresh and a complete understanding of all the bus timings involved.
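
To get a sense of how tight that timing is, consider classic 4116-style DRAM, which needs all 128 rows refreshed every 2 ms, with the Z80 supplying one refresh address per opcode fetch. A rough budget check (typical datasheet-era numbers, not tied to the IZE80 specifically):

```python
# A rough budget check of the DRAM refresh constraint (4116-class parts:
# 128 rows, all refreshed within 2 ms; the Z80 emits one refresh address per
# opcode fetch). Numbers are typical period values, not IZE80 specifics.

ROWS = 128
REFRESH_WINDOW_S = 2e-3
row_pace_us = REFRESH_WINDOW_S / ROWS * 1e6          # ~15.6 us per row

CPU_HZ = 4_000_000                                   # a 4 MHz Z80
T_STATES_PER_M1 = 4                                  # minimum opcode-fetch cycle
fetch_period_us = T_STATES_PER_M1 / CPU_HZ * 1e6     # ~1 us between refreshes

print(f"refresh addresses must advance at least every {row_pace_us:.1f} us "
      f"to cover all {ROWS} rows in 2 ms")
print(f"a real Z80 supplies a new one roughly every {fetch_period_us:.1f} us")
print("an emulator that batches its bus activity and stalls longer than that")
print("pace will quietly let the DRAM contents decay")
```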

But it can be done. In any event, whether it’s on-chip debugging or real in-circuit emulation, it sure makes life easier.

The Hottest Spark Plugs Were Actually Radioactive

In the middle of the 20th century, the atom was all the rage. Radiation was the shiny new solution to everything, while remaining poorly understood by the general public and by a good many of those working with it.

Against this backdrop, Firestone Tire and Rubber Company decided to sprinkle some radioactive magic into spark plugs. There was some science behind the silliness, but it turns out there are a number of good reasons we’re not using nuke plugs under the hood of cars to this day.

Hot Stuff

The Firestone Polonium spark plug represented a fascinating intersection of Cold War-era nuclear optimism and automotive engineering. These weren’t your garden-variety spark plugs – they contained small amounts of polonium-210. The theory behind radioactive spark plugs was quite simple from an engineering perspective. As the radioactive polonium decayed into lead, it would release alpha particles that were supposed to ionize the air-fuel mixture in the combustion chamber, giving the spark an easier path to jump the gap and reducing the likelihood of misfires. Thus, the polonium-210 spark plugs would theoretically create a better, stronger spark and improve combustion efficiency.

Firestone decided polonium, not radium, was the way to go when it filed a patent of its own. Credit: US Patent

These plugs hit the market sometime around 1940, though the idea dates back more than a decade earlier. In 1924, Albert Hubbard applied for a patent (US 1,723,422), which was granted five years later. His patent concerned the use of radium to create an ionized path through the gas inside an engine’s cylinder to improve spark plug performance.

Firestone’s patent (US 2,254,169) came much later, granted in 1941. The company decided that polonium-210 was a more viable radioactive source. Radium was considered “too expensive and dangerous”, while uranium and thorium isotopes were found to be “ineffective.” Polonium, though, was the bee’s knees. From the patent filing:

Frequently, conditions will be so unfavorable that a spark will not occur at all, and it will be necessary to turn the engine over a number of times before a spark occurs. However, if the alpha rays of polonium are passing through the gap, a large number of extra ions are formed by each alpha ray (10,000 ions per-alpha ray) and the gap breaks down promptly after the voltage begins to rise and at a lower voltage value than that required by standard spark plugs. Thus, it might be said that polonium creates favorable conditions for gap breakdown under all circumstances. Many tests have been run which substantiate the above explanations. The most conclusive test of this type consisted in comparing the starting characteristics of many  polonium-containing spark plugs with ordinary spark plugs, all plugs having had more than a year of hard service, in several engines at -15° F. It was found that thirty per cent fewer revolutions of an engine were required for starting when the polonium plugs were used.

Firestone was quite proud of its new Atomic Age product. Credit: Firestone

As per the patent, the radioactive material was incorporated into the electrodes by adding it to the nickel alloy used to produce them. This would put it in prime position to ionize the air charge in the spark gap where it mattered most.

The science seems to check out on paper, but polonium spark plugs were only on the market for a short period of time, with the last known advertisements being published sometime around 1953. If the radioactive spark plugs had serious performance benefits, one suspects they might have stuck around. However, physics tells us they may not have been that special in reality.

In particular, polonium-210 has a relatively short half-life of just 138 days. In a year, 84% of the initial polonium-210 would have already decayed. Thus, between manufacturing, shipping, purchase, and installation, it’s hard to say how much “heat” would have been left in the plugs by the time they even reached the consumer. These plugs would quickly lose their magic simply sitting on the shelf. Beyond that, there are some questions about their performance in a real working engine. Firestone’s patent claimed improved performance over time, but a more sceptical view would be that deposits left on the spark plug electrodes over time would easily block any alpha particles that would otherwise be emitted to help cause ionization.
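
The shelf-life math is easy to check with ordinary exponential decay; in the sketch below only the 138-day half-life and the one-year figure come from the article, and the three-month “age at sale” is a made-up example.

```python
# Checking the shelf-life math with ordinary exponential decay. Only the
# 138-day half-life and the one-year figure come from the article; the
# three-month "age at sale" is a made-up example.

HALF_LIFE_DAYS = 138.0    # polonium-210

def fraction_remaining(days):
    return 0.5 ** (days / HALF_LIFE_DAYS)

for label, days in [("~3 months old at sale", 90),
                    ("after one year", 365),
                    ("after two years", 730)]:
    left = fraction_remaining(days)
    print(f"{label}: {left:.0%} left, {1 - left:.0%} decayed")
```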

Examples of the polonium-impregnated spark plugs can be readily found online, though the radioactive material decayed away long ago. Credit: eBay

Ultimately, while the plugs may have had some small benefit when new, any additional performance was minor enough that they never really found a market. Couple this with ugly problems around dispersal, storage, and disposal of radioactive material, and it’s perhaps quite a good thing that these plugs didn’t really catch on.

Despite the lack of market success, however, it’s still possible to find these spark plugs in the wild today. A simple search on online auction sites will turn up dozens of examples, though don’t expect them to show up glowing. The radioactive material within will have long since decayed to the point where the plugs won’t significantly exceed typical background radiation. Still, they’re an interesting callback to an era when radioactivity was the hottest new thing on the block.

Spy Tech: The NRO and Apollo 11

When you think of “secret” agencies, you probably think of the CIA, the NSA, the KGB, or MI-5. But the real secret agencies are the ones you hardly ever hear of. One of those is the National Reconnaissance Office (NRO). Formed in 1960, the agency was totally secret until the early 1970s.

If you have heard of the NRO, you probably know they manage spy satellites and other resources that get shared among intelligence agencies. But did you know they played a major, but secret, part in the Apollo 11 recovery? Don’t forget, it was 1969, and the general public didn’t know anything about the shadowy agency.

Secret Hawaii

Captain Hank Brandli was an Air Force meteorologist assigned to the NRO in Hawaii. His job was to support the Air Force’s “Star Catchers.” That was the Air Force group tasked with catching film buckets dropped from the super-secret Corona spy satellites. The satellites had to drop film only when there was good weather.

Spoiler alert: They made it back fine.

In the 1960s, civilian weather forecasting was not as good as it is now. But Brandli had access to data from the NRO’s Defense Meteorological Satellite Program (DMSP), then known simply as “417”. The high-tech data let him estimate the weather accurately over the drop zones for five days, much better than any contemporary civilian meteorologist could do.

When Apollo 11 headed home, Captain Brandli ran the numbers and found that on July 24th there would be a major tropical storm over the drop zone, located at 10.6° north by 172.5° west, about halfway between Howland Island and Johnston Atoll. The storm was likely to be a “screaming eagle” storm rising to 50,000 feet over the ocean.

In the movies, of course, spaceships are tough and can land in bad weather. In real life, the high winds could rip the parachutes from the capsule, and the impact would probably have killed the crew.

What to Do?

Brandli knew he had to let someone know, but he had a problem. The whole thing was highly classified. Corona and the DMSP were very dark programs. There were only two people cleared for both programs: Brandli and the Star Catchers’ commander. No one at NASA was cleared for either program.

With the clock ticking, Brandli started looking for an acceptable way to raise the alarm. The Navy was in charge of NASA weather forecasting, so the first stop was DoD chief weather officer Captain Sam Houston, Jr. He was unaware of Corona, but he knew about DMSP.

Brandli was able to show Houston the photos and convince him that there was a real danger. Houston reached out to Rear Admiral Donald Davis, commanding the Apollo 11 recovery mission. He just couldn’t tell the Admiral where he got the data. In fact, he couldn’t even show him the photos, because the Admiral wasn’t cleared for DMSP.

Career Gamble

There was little time, so Davis asked permission to move the USS Hornet task force, but he couldn’t wait for an answer. He ordered the ships to a new position 215 nautical miles away from the original drop zone, now at 13.3° north by 169.2° west. President Richard Nixon was en route to greet the explorers, so if Davis were wrong, he’d be looking for a new job in August. He had to hope NASA could alter the reentry to match.

The forecast was correct. There were severe thunderstorms at the original site, but Apollo 11 splashed down in a calm sea about 1.7 miles from the target, as you can see below. Houston received a Navy Commendation medal, although he wasn’t allowed to say what it was for until 1995.

In hindsight, NASA has said they were also already aware of the weather situation due to the Applications Technology Satellite 1, launched in 1966. Although the weather was described as “suitable for splashdown”, mission planners say they had planned to move the landing anyway.

Modern Times

Weather predictions really are better than they used to be. (CC-BY: [Hannah Ritchie])
These days, the NRO isn’t quite as secretive as it once was, and, in fact, much of the information for this post derives from two stories from their website. The NRO was also involved in the Manned Orbital Laboratory project and considered using Apollo as part of that program.

Weather forecasting, too, has gotten better. Studies show that even in 1980, a seven-day forecast might be, at best, 45 or 50% accurate. Today, such forecasts are nearly 80% accurate. Some of that is better imaging. Some of it is better models and methods, too, of course.

However, thanks to one — or maybe a few — meteorologists, the Apollo 11 crew returned safely to Earth to enjoy their ticker-tape parades. After, of course, their quarantine.
