Dan Maloney – Hackaday
https://hackaday.com
Fresh hacks every day

Ask Hackaday: When Good Lithium Batteries Go Bad
https://hackaday.com/2025/10/20/ask-hackaday-when-good-lithium-batteries-go-bad/
Mon, 20 Oct 2025 14:00:48 +0000

Friends, I’ve gotten myself into a pickle and I need some help.

A few years back, I decided to get into solar power by building a complete PV system inside a mobile trailer. The rationale for this doesn’t matter for the current discussion, but for the curious, I wrote an article outlining the whole design and build process. Briefly, though, the system has two adjustable PV arrays mounted on the roof and side of a small cargo trailer, with an integrated solar inverter-charger and a 10-kWh LiFePO4 battery bank on the inside, along with all the usual switching and circuit protection stuff.

It’s pretty cool, if I do say so myself, and literally every word I’ve written for Hackaday since sometime in 2023 has been on a computer powered by that trailer. I must have built it pretty well, because it’s been largely hands-off since then, requiring very little maintenance. And therein lies the root of my current conundrum.

Spicy Pillows

I generally only go in the trailer once a month or so, just to check things over and make sure no critters — or squatters — have taken up residence. Apparently, my inspections had become somewhat cursory, because somehow I had managed to overlook a major problem brewing:

Chest burster much? I found this swollen mass of steel and lithium inside my trailer, ready to wreak havoc.

This is one of two homebrew server rack battery modules I used in the trailer’s first battery bank. The LG-branded modules were removed from service and sold second-hand by Battery Hookup; I stripped the proprietary management cards out of the packs and installed a 100-amp BMS, plus the comically oversized junction box for wiring. They worked pretty well for a couple of months, but I eventually got enough money together to buy a pair of larger, new-manufacture server-rack modules from Ruixu, and I disconnected the DIY batteries and put them aside in the trailer.

Glass Houses

As for what happened to these batteries (while not as dramatic, the case on the other one is obviously swelling, too), I’m not sure. There was no chance for physical damage inside the trailer, and neither battery was dropped or penetrated. Whatever happened must have been caused by normal aging of the 28 pouch cells within, or possibly the thermal swings inside the trailer.

Either way, some of the pouches have obviously transformed into “spicy pillows” thanks to the chemical decomposition of their electrodes and electrolytes, creating CO2 and CO gas under enough pressure to deform the 14-gauge steel case of the modules. It’s a pretty impressive display of power when you think about it, and downright terrifying.

I know that posting this is likely going to open me up to considerable criticism in the comments, much of it deserved. I was clearly negligent here, at least in how I chose to store these batteries once I removed them from service. You can also ding me for trying to save a few bucks by buying second-hand batteries and modifying them myself, but let those of you who have never shaken hands with danger cast the first stone.

To my credit, I did mention in my original write-up that, “While these batteries work fine for what they are, I have to admit that their homebrew nature gnawed at me. The idea that a simple wiring mistake could result in a fire that would destroy years of hard work was hard to handle.” But really, the risk posed by these batteries, not just to the years of work I put into the trailer, but also the fire danger to my garage and my neighbor’s boat, camper, and truck, all of which are close to the trailer, makes me a little queasy when I think about it.

Your Turn

That’s all well and good, but the question remains: what do I do with these batteries now? To address the immediate safety concerns, I placed them at my local “Pole of Inaccessibility,” the point in my backyard that’s farthest from anything that might burn. This is a temporary move until I can figure out a way to recycle them. While my city does have battery recycling, I’m pretty sure they’d balk at accepting 90-pound server batteries even if they were brand new. With batteries as obviously deformed as these, they’d probably at least tell me to get lost; at worst, they’d call the hazmat unit on me. The Environmental Protection Agency has a program for battery recycling, but that’s geared to consumers disposing of a few alkaline cells or maybe the dead pack from a Ryobi drill. Good luck getting them to accept these monsters.

How would you handle this? Bear in mind that I won’t entertain illegal options such as an unfortunate boating accident or “dig deep and shut up,” at least not publicly. But if you have any other ideas, we’d love to hear them. More generally, what’s your retirement plan for lithium batteries look like? With the increased availability of used batteries from wrecked EVs or even e-bike and scooter batteries, it’s a question that many of us will face eventually. If you’ve already run up against this problem, we’d love to hear how you handled it. Sound off in the comments below.

Hackaday Links: October 19, 2025
https://hackaday.com/2025/10/19/hackaday-links-october-19-2025/
Sun, 19 Oct 2025 23:00:36 +0000

After a quiet week in the news cycle, surveillance concern Flock jumped right back in with both feet, announcing a strategic partnership with Amazon’s Ring to integrate that company’s network of doorbell cameras into one all-seeing digital panopticon. Previously, we’d covered both Flock’s “UAVs as a service” model for combating retail theft from above, as well as the somewhat grassroots effort to fight back at the company’s wide-ranging network of license plate reader cameras. The Ring deal is not quite as “in your face” as drones chasing shoplifters, but it’s perhaps a bit more alarming, as it gives U.S. law enforcement agencies easy access to the Ring Community Request program directly through the Flock software that they (probably) already use.

In the event of a crime, police can use the integration to easily blast out a request for footage to Ring owners in the vicinity. The request is supposed to contain details of the alleged crime, including its time and location. Owners are free to comply with the request or ignore it at their discretion, and there is supposed to be no way for the police to track who declines a request, theoretically eliminating the potential for retaliation. On the one hand, we see the benefit of ready access to footage that might be needed quickly to catch a suspect or solve a crime. But on the other hand, it just seems like there’s nowhere you can go anymore where there isn’t a camera ready to be used against you.

Remember “Solar Freakin’ Roadways”? We sure do, and even though the idea of reconfigurable self-powered paving tiles didn’t seem to be going anywhere the last time we checked, we always did like the idea of self-lighted roads. But pluggable modules with solar panels and LEDs built to withstand being run over by cars and trucks and the rigors of Mother Nature might be a more complicated way to go about it than, say, painting the road with glow-in-the-dark paint. Unfortunately, that doesn’t seem to work much better, as revealed by a recent trial in Malaysia.

Admittedly, the trial was limited; a mere 245 meters of rural roadway received the phosphorescent paint markings. The paint absorbed light during the day and emitted a soft green glow at night, to the delight of drivers who praised its visibility. For a while, at least, because within a year or so, the paint had lost most of its brightness. At 20 times the cost of normal roadway marking paint, it wasn’t cheap either, probably thanks to the europium-doped strontium aluminate compounds that gave it its glow. It’s too bad the trial didn’t work out, because the markings looked fantastic.

You’ve heard about Power-over-Ethernet, but how about Power-over-Skin? The idea comes from a group at Carnegie Mellon University, and is aimed at powering a network of battery-free wearables using the surface of the skin as the only conductor. To make it work, the researchers use a 40-MHz RF transmitter that’s kept in the user’s pocket and couples with the skin even through layers of fabric. Devices on the user’s skin can pick up the signal through a tuned circuit and rectify it to power a microcontroller. The 40-MHz frequency was selected in part because it offers head-to-toe coverage, but also because it’s too high to cause potentially painful “muscle activation” or local heating. Talk about your skin effect!
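
The paper's actual front-end values aren't reproduced here, but the tuning side of that "tuned circuit" is just the standard LC resonance formula. Here's a minimal sketch in Python, assuming a purely hypothetical 1 µH pickup coil:

```python
import math

def resonant_capacitance(f_hz, l_henries):
    """Capacitance that resonates with a given inductance: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * l_henries)

f = 40e6   # the 40 MHz carrier described in the article
L = 1e-6   # hypothetical 1 uH pickup coil, not a value from the CMU paper
C = resonant_capacitance(f, L)
print(f"Tank capacitance for a 1 uH coil at 40 MHz: {C * 1e12:.1f} pF")  # ~15.8 pF
```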

If you currently crave a trip to one of the many national parks or monuments in the United States, you might want to hold off until the government shutdown is resolved. Until then, you’ll have to be content with virtual tours such as this one for the Hanford B Reactor site, which, along with Los Alamos in New Mexico and Oak Ridge in Tennessee, is part of the Manhattan Project National Historic Park. The virtual tour is pretty cool, and everything inside the reactor building, from the sickly green paint to the mid-century furniture, seems to have been restored to what it would have looked like in the 1940s. The Fallout-esque control room is a treat, too. But alas, there’s no virtual gift shop on the way out.

And finally, a bit of electronics history with this fascinating video about how early home computers glitched their way into displaying color. On paper, the video interface on the TRS-80 Color Computer was only capable of generating a monochrome signal. But according to Coco Town, a carefully crafted monochrome signal could convince an analog NTSC television to display not only white pixels but also red and blue, or blue and red, depending on when you hit the reset button. It’s an interesting trip through the details of color TV, and the way that the standard was exploited to make color graphics on the cheap is truly a hack worth understanding. Enjoy!

Hackaday Podcast Episode 342: Poopless Prints, Radio in Your Fillings, and One Hyperspectral Pixel at a Time
https://hackaday.com/2025/10/17/hackaday-podcast-episode-342-poopless-prints-radio-in-your-fillings-and-one-hyperspectral-pixel-at-a-time/
Fri, 17 Oct 2025 16:20:17 +0000

It was Elliot and Dan on the podcast today, taking a look at the best the week had to offer in terms of your hacks. We started with surprising news about the rapidly approaching Supercon keynote; no spoilers, but Star Trek fans like us who don’t have tickets will be greatly disappointed.

Elliot waxed on about taking the poop out of your prints (not pants), Dan got into a camera that adds a dimension to its images, and we both delighted in the inner workings of an air-powered squishy robot.

Questions? We’ve got plenty. Is it possible to take an X-ray without an X-ray tube? Or X-rays, for that matter? Did Lucille Ball crack a spy ring with her fillings? Is Algol set to take over the world? What’s inside a germanium transistor? How does a flipping fish say Happy Birthday? And how far down the Meshtastic rabbit hole did our own Tom Nardi fall? Tune in to find out the answers.

Download this free-range, cruelty-free MP3.

Episode 342 Show Notes:

News:

What’s that Sound?

  • Congrats to [James Barker] for picking the sound of a rake!

Interesting Hacks of the Week:

Quick Hacks:

Can’t-Miss Articles:

The Great Northeast Blackout of 1965
https://hackaday.com/2025/10/14/the-great-northeast-blackout-of-1965/
Tue, 14 Oct 2025 14:00:40 +0000

At 5:20 PM on November 9, 1965, the Tuesday rush hour was in full bloom outside the studios of WABC in Manhattan’s Upper West Side. The drive-time DJ was Big Dan Ingram, who had just dropped the needle on Jonathan King’s “Everyone’s Gone to the Moon.” To Dan’s trained ear, something was off about the sound, like the turntable speed was off — sometimes running at the usual speed, sometimes running slow. But being a pro, he carried on with his show, injecting practiced patter between ad reads and Top 40 songs, cracking a few jokes about the sound quality along the way.

Within a few minutes, with the studio cart machines now suffering a similar fate and the lights in the studio flickering, it became obvious that something was wrong. Big Dan and the rest of New York City were about to learn that they were on the tail end of a cascading wave of power outages that started minutes before at Niagara Falls before sweeping south and east. The warbling turntable and cartridge machines were just a leading indicator of what was to come, their synchronous motors keeping time with the ever-widening gyrations in power line frequency as grid operators scattered across six states and one Canadian province fought to keep the lights on.
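
The warble follows directly from how synchronous motors work: shaft speed is locked to line frequency (120 × f divided by the number of poles), so a platter driven by one speeds up, slows down, and shifts pitch in lockstep with the grid. A quick sketch, using hypothetical frequency sags purely for illustration:

```python
def synchronous_rpm(line_hz, poles):
    """Shaft speed of an ideal synchronous motor: 120 * f / poles."""
    return 120.0 * line_hz / poles

NOMINAL_HZ = 60.0
for hz in (60.0, 59.0, 57.5):  # hypothetical sags, purely for illustration
    platter_rpm = 33.333 * hz / NOMINAL_HZ  # platter speed tracks line frequency
    pitch_shift_pct = (hz / NOMINAL_HZ - 1.0) * 100.0
    print(f"{hz:4.1f} Hz: 4-pole motor at {synchronous_rpm(hz, 4):6.1f} rpm, "
          f"platter at {platter_rpm:5.2f} rpm ({pitch_shift_pct:+.1f}% pitch)")
```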

They would fail, of course, with the result being 30 million people over 80,000 square miles (207,000 km²) plunged into darkness. The Great Northeast Blackout of 1965 was underway, and when it wrapped up a mere thirteen hours later, it left plenty of lessons about how to engineer a safe and reliable grid, lessons that still echo through the power engineering community 60 years later.

Silent Sentinels

Although it wouldn’t be known until later, the root cause of what was then the largest power outage in world history began with equipment that was designed to protect the grid. Despite its continent-spanning scale and the gargantuan size of the generators, transformers, and switchgear that make it up, the grid is actually quite fragile, in part due to its wide geographic distribution, which exposes most of its components to the ravages of the elements. Without protection, a single lightning strike or windstorm could destroy vital pieces of infrastructure, some of it nearly irreplaceable in practical terms.

Protective relays like these at a hydroelectric plant started all the ruckus. Source: Wtshymanski at en.wikipedia, CC BY-SA 3.0

Tasked with this critical protective job are a series of relays. The term “relay” has a certain connotation among electronics hobbyists, one that can be misleading in discussions of power engineering. While we tend to think of relays as electromechanical devices that use electromagnets to make and break contacts to switch heavy loads, in the context of grid protection, relays are instead the instruments that detect a fault and send a control signal to switchgear, such as a circuit breaker.

Relays generally sense faults through a series of instrumentation transformers located at critical points in the system, usually directly within the substation or switchyard. These can either be current transformers, which measure the current in a toroidal coil wrapped around a conductor, much like a clamp meter, or voltage transformers, which use a high-voltage capacitor network as a divider to measure the voltage at the monitored point.

Relays can be configured to use the data from these sensors to detect an overcurrent fault on a transmission line; contacts within the relay would then send 125 VDC from the station’s battery bank to trip the massive circuit breakers out in the yard, opening the circuit. Other relays, such as induction disc relays, sense problems via the torque created on an aluminum disk by opposing sensing coils. They operate on the same principle as the old mechanical electrical meters did, except that under normal conditions, the force exerted by the coils is in balance, keeping the disk from rotating. When an overcurrent fault or a phase shift between the coils occurs, the disc rotates enough to close contacts, which sends the signal to trip the breakers.
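
As a rough illustration of that trip logic, here's a toy definite-time overcurrent check in Python. It's a sketch only; real protective relays, including the Beck backup relays that watched power flow rather than raw current, implement far more elaborate time-current characteristics:

```python
def overcurrent_trip(samples, pickup_amps, delay_samples):
    """Toy definite-time overcurrent check: signal a trip once the measured
    current has stayed above the pickup setting for delay_samples in a row."""
    consecutive = 0
    for i, amps in enumerate(samples):
        consecutive = consecutive + 1 if amps > pickup_amps else 0
        if consecutive >= delay_samples:
            return i  # sample index at which the relay would fire the breaker trip coil
    return None  # no trip

# Hypothetical half-cycle current readings drifting past a 1200 A pickup setting
readings = [1100, 1150, 1180, 1210, 1230, 1250, 1240]
print(overcurrent_trip(readings, pickup_amps=1200, delay_samples=3))  # -> 5
```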

The circuit breakers themselves are interesting, too. Turning off a circuit with perhaps 345,000 volts on it is no mean feat, and the circuit breakers that do the job must be engineered to safely handle the inevitable arc that occurs when the circuit is broken. They do this by isolating the contacts from the atmosphere, either by removing the air completely or by replacing the air with pressurized sulfur hexafluoride, a dense, inert gas that quenches arcs quickly. The breaker also has to draw the contacts apart as quickly as possible, to reduce the time during which they’re within breakdown distance. To do this, most transmission line breakers are pneumatically triggered, with the 125 VDC signal from the protective relays triggering a large-diameter dump valve to release pressurized air from a reservoir into a pneumatic cylinder, which operates the contacts via linkages.

The Cascade Begins

At the time of the incident, each of the five 230 kV lines heading north into Ontario from the Sir Adam Beck Hydroelectric Generating Station, located on the west bank of the Niagara River, was protected by two relays: a primary relay set to open the breakers in the event of a short circuit, and a backup relay to make sure the line would open if the primary relays failed to trip the breaker for some reason. These relays were installed in 1951, but after a near-catastrophe in 1956, where a transmission line fault wasn’t detected and the breaker failed to open, the protective relays were reconfigured to operate at approximately 375 megawatts. When this change was made in 1963, the setting was well above the expected load on the Beck lines. But thanks to the growth of the Toronto-Hamilton area, especially all the newly constructed subdivisions, the margins on those lines had narrowed. Coupled with an emergency outage of a generating station further up the line in Lakeview and increased loads thanks to the deepening cold of the approaching Canadian winter, the relays were edging closer to their limit.

Where it all began. Overhead view of the Beck (left) and Moses (right) hydro plants, on the banks of the Niagara River. Source: USGS, Public domain.

Data collected during the event indicates that one of the backup relays tripped at 5:16:11 PM on November 9; the recorded load on the line was only 356 MW, but it’s likely that a fluctuation that didn’t get recorded pushed the relay over its setpoint. That relay immediately tripped its breaker on one of the five northbound 230 kV lines, with the other four relays doing the same within the next three seconds. With all five lines open, the Beck generating plant suddenly lost 1,500 megawatts of load, and all that power had nowhere else to go but the 345 kV intertie lines heading east to the Robert Moses Generating Plant, a hydroelectric plant on the U.S. side of the Niagara River, directly across from Beck. That almost instantly overloaded the lines heading east to Rochester and Syracuse, tripping their protective relays to isolate the Moses plant and leaving another 1,346 MW of excess generation with nowhere to go. The cascade of failures marched across upstate New York, with protective relays detecting worsening line instabilities and tripping off transmission lines in rapid succession. The detailed event log, which measured events with 1/2-cycle resolution, shows 24 separate circuit trips within the first second of the outage.

Oscillogram of the outage showing data from instrumentation transformers around the Beck transmission lines. Source: Northeast Power Failure, November 9 and 10, 1965: A Report to the President. Public domain.

While many of the trips and events were automatically triggered, snap decisions by grid operators all through the system resulted in some circuits being manually opened. For example, the Connecticut Valley Electrical Exchange, which included all of the major utilities covering the tiny state wedged between New York and Massachusetts, noticed that Consolidated Edison, which operated in and around the five boroughs of New York City, was drawing an excess amount of power from their system, in an attempt to make up for the generation capacity lost from upstate. They tried to keep New York afloat, but the CONVEX operators had to make the difficult decision to manually open their ties to the rest of New England to shed excess load about a minute after the outage started, finally completely isolating their generators and loads by 5:21.

Heroics aside, New York City was in deep trouble. The first effects were felt almost within the first second of the event, as automatic protective relays detected excessive power flow and disconnected a substation in Brooklyn from an intertie into New Jersey. Operators at Long Island Lighting tried to save their system by cutting ties to the Con Ed system, which reduced the generation capacity available to the city and made its problem worse. Operators tried to spin up their steam turbine plants to increase generation capacity, but it was too little, too late. Frequency fluctuations began to mount throughout New York City, resulting in Big Dan’s wobbly turntables at WABC.

Well, there’s your problem. Bearings on the #3 turbine at Con Ed’s Ravenswood plant were starved of oil during the outage, resulting in some of the only mechanical damage of the entire event. Source: Northeast Power Failure, November 9 and 10, 1965: A Report to the President. Public domain.

As a last-ditch effort to keep the city connected, Con Ed operators started shedding load to better match the dwindling available supply. But with no major industrial users — even in 1965, New York City was almost completely deindustrialized — the only option was to start shutting down sections of the city. Despite these efforts, the frequency dropped lower and lower as the remaining generators became more heavily loaded, tripping automatic relays to disconnect them and prevent permanent damage. Even so, a steam turbine generator at the Con Ed Ravenswood generating plant was damaged when an auxiliary oil feed pump lost power during the outage, starving the bearings of lubrication while the turbine was spinning down.

By 5:28 or so, the outage reached its fullest extent. Over 30 million people began to deal with life without electricity, briefly for some, but up to thirteen hours for others, particularly those in New York City. Luckily, the weather around most of the downstate outage area was unusually clement for early November, so the risk of cold injuries was relatively low, and fires from improvised heating arrangements were minimal. Transportation systems were perhaps the hardest hit, with some 600,000 unfortunates trapped in the dark in packed subway cars. The rail system reaching out into the suburbs was completely shut down, and Kennedy and LaGuardia airports were closed after the last few inbound flights landed by the light of the full moon. Road traffic was snarled thanks to the loss of traffic signals, and the bridges and tunnels in and out of Manhattan quickly became impassable.

Mopping Up

Liberty stands alone. Lighted from the Jersey side, Lady Liberty watches over a darkened Manhattan skyline on November 9. The full moon and clear skies would help with recovery. Source: Robert Yarnell Ritchie collection via DeGolyer Library, Southern Methodist University.

Almost as soon as the lights went out, recovery efforts began. Aside from the damaged turbine in New York and a few transformers and motors scattered throughout the outage area, no major equipment losses were reported. Still, a massive mobilization of line workers and engineers was needed to manually verify that equipment would be safe to re-energize.

Black start power sources had to be located, too, to power fuel and lubrication pumps, reset circuit breakers, and restart conveyors at coal-fired plants. Some generators, especially the ones that spun to a stop and had been sitting idle for hours, also required external power to “jump start” their field coils. For the idled thermal plants upstate, the nearby hydroelectric plants provided excitation current in most cases, but downstate, diesel electric generators had to be brought in for black starts.

In a strange coincidence, neither of the two nuclear plants in the outage area, the Yankee Rowe plant in Massachusetts and the Indian Point station in Westchester County, New York, was online at the time, and so couldn’t participate in the recovery.

For most people, the Great Northeast Power Outage of 1965 was over fairly quickly, but its effects were lasting. Within hours of the outage, President Lyndon Johnson issued an order to the chairman of the Federal Power Commission to launch a thorough study of its cause. Once the lights were back on, the commission was assembled and started gathering data, and by December 6, they had issued their report. Along with a blow-by-blow account of the cascade of failures and a critique of the response and recovery efforts, they made tentative recommendations on what to change to prevent a recurrence and to speed the recovery process should it happen again, which included better and more frequent checks on relay settings, as well as the formation of a body to oversee electrical reliability throughout the nation.

Unfortunately, the next major outage in the region wasn’t all that far away. In July of 1977, lightning strikes damaged equipment and tripped breakers in substations around New York City, plunging the city into chaos. Luckily, the outage was contained to the city proper, and not all of it at that, but it still resulted in several deaths and widespread rioting and looting, which the outage in ’65 managed to avoid. That was followed by the more widespread 2003 Northeast Blackout, which started with an overloaded transmission line in Ohio and eventually spread into Ontario, across Pennsylvania and New York, and into Southern New England.

Hackaday Links: October 12, 2025
https://hackaday.com/2025/10/12/hackaday-links-october-12-2025/
Sun, 12 Oct 2025 23:00:20 +0000

We’ve probably all seen some old newsreel or documentary from The Before Times where the narrator, using his best Mid-Atlantic accent, described those newfangled computers as “thinking machines,” or better yet, “electronic brains.” It was an apt description, at least considering that the intended audience had no other frame of reference at a time when the most complex machine they were familiar with was a telephone. But what if the whole “brain” thing could be taken more literally? We’ll have to figure that out soon if these computers powered by miniature human brains end up getting any traction.

The so-called “organoid bioprocessors” come from a Swiss outfit called FinalSpark, and if you’re picturing little pulsating human brains in petri dishes connected to wires, you’ll have to guess again. The organoids, which are grown from human skin cells that have been reprogrammed into stem cells and then cultured into human neurons, only have about 10,000 cells per blob. That makes them a fraction of a millimeter in diameter, an important limit since they have no blood supply and must absorb nutrients from their culture medium, and even though they have none of the neuronal complexity of a brain, they’re still capable of some interesting stuff. FinalSpark has a live feed to one of its organoid computing cells on the website; the output looks a little like an EEG, which makes sense if you think about it. We’re not sure where this technology is going, aside from playing Pong, but if you put aside the creep-factor, this is pretty neat stuff.

We thought that once 3I/Atlas, our latest interstellar visitor, ducked behind the Sun on its quick trip through the solar system, things would quiet down a bit, at least in terms of stories about how it’s an alien space probe or something. Don’t get us wrong, we’d dearly love to have it be a probe sent by another civilization to explore our neck of the galactic woods, and at this point we’d even be fine with it being the vanguard of a Vogon Constructor Fleet. But now the best view of the thing is from Mars, leading to stories about the strange cylindrical thing in the Martian sky. The photo was apparently captured on October 4 by one of the navigation cameras on the Perseverance rover, which alone is a pretty neat trick since those cameras are optimized for looking at the ground. But the image is clearly not of a cylinder floating menacingly over the Martian surface; rather, as Avi Loeb explains, it’s likely a spot of light that’s been smeared into a streak by a long integration time. And it might not even be 3I/Atlas; since the comet would have been near Phobos at the time, it could be a smeared-out picture of the Martian moon.

Part of the reason for all this confusion about a simple photograph is the continuing U.S. government shutdown, which has furloughed a lot of the NASA and JPL employees. And not only has the shutdown made it hard to get the straight poop on 3I/Atlas, it’s also responsible for the confusion over the state of the Juno mission. The probe, which has been studying the Jovian system since 2016, was supposed to continue through September 30, 2025; unfortunately, the shutdown started at one minute past midnight the very next day. With no news out of NASA, it’s unclear whether Juno is still in operation, or whether its planned deorbit into Jupiter, intended to prevent contaminating any of the planet’s potentially life-bearing moons, has already occurred. That makes it a bit of a Schrödinger’s space probe until NASA can tell us what’s going on.

And finally, are we really recommending that you watch a 25-minute video from a channel that specializes in linguistics? Yep, we sure are, because we found Rob Words’ deep dive into the NATO phonetic alphabet really interesting. For those of you not used to listening to the ham bands or public service radio, phonetic alphabets help disambiguate spoken letters from each other. Over a noisy channel, “cee” and “dee” are easily confused, but “Charlie” and “Delta” are easier to distinguish. But as Rob points out, getting to the finished NATO alphabet — spoiler alert, it’s neither NATO nor phonetic — was anything but a smooth road, with plenty of whiskey-tango-foxtrot moments along the way. Enjoy!

Ask Hackaday: Why is TTL 5 Volts?
https://hackaday.com/2025/10/08/ask-hackaday-why-is-ttl-5-volts/
Wed, 08 Oct 2025 17:00:34 +0000

The familiar five volts standard from back in the TTL days always struck me as odd. Back when I was just a poor kid trying to cobble together my first circuits from the Forrest Mims Engineer’s Notebook, TTL was always a problem. That narrow 4.75 V to 5.25 V spec for Vcc was hard to hit, thanks to being too poor to buy or build a dedicated 5 V power supply. Yes, I could have wired up four 1.5 V dry cells and used a series diode to drop it down into range, but that was awkward and went through batteries pretty fast once you got past more than a few chips.

As a hobbyist, the five volt TTL standard always seemed a little capricious, but I strongly suspected there had to be a solid reason behind it. To get some insights into the engineering rationale, I did what anyone living in the future would do: I asked ChatGPT. My question was simple: “How did five volts become the standard voltage for TTL logic chips?” And while overall the answers were plausible, like every other time I use the chatbot, they left me wanting more.

Circular Logic

TTL, 5 volts and going strong since 1976 (at least). Source: Audrius Meskauskas, CC BY-SA 3.0.

The least satisfying of ChatGPT’s answers all had a tinge of circular reasoning to them: “IBM and other big computer makers adopted 5 V logic in their designs,” and thanks to their market power, everyone else fell in line with the five volt standard. ChatGPT also blamed “The Cascade Effect” of Texas Instruments’ standardization of five volts for their TTL chips in 1964, which “set the tone for decades” and forced designers to expect chips and power supplies to provide five volt rails. ChatGPT also cited “Compatibility with Existing Power Supplies” as a driver, noting that regulated five volt supplies were common in computers and military electronics in the 1960s. It also cited the development of the 7805 linear regulator in the late 1960s as a driver.

All of this seems like nonsense, the equivalent of saying, “Five volts became the standard because the standard was five volts.” What I was after was an engineering reason for five volts, and luckily, an intriguing clue was buried in ChatGPT’s responses along with the drivel: the characteristics of BJT transistors, and the tradeoffs between power dissipation and speed.

The TTL family has been around for a surprisingly long time. Invented in 1961, TTL integrated circuits have been used commercially since 1963, with the popular 7400-series of logic chips being introduced in 1964. All this development occurred long before MOS technology, with its wider supply range, came into broad commercial use, so TTL — as well as all the precursor logic families, like diode-transistor logic (DTL) and resistor-transistor logic (RTL) — used BJTs in all their circuits. Logic circuits need to distinguish between a logical 1 and a logical 0, and using BJTs with a typical base-emitter voltage drop of 0.7 V or so meant that the supply voltage couldn’t be too low, with a five volt supply giving enough space between the high and low levels without being too susceptible to noise.
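
To put numbers on that "enough space" argument, the guaranteed worst-case levels from a standard 7400-series datasheet leave 0.4 V of noise margin in each direction:

```python
# Guaranteed worst-case levels for standard 7400-series TTL at Vcc = 5 V
V_OL_MAX = 0.4   # highest voltage a gate may output for a logic 0
V_OH_MIN = 2.4   # lowest voltage a gate may output for a logic 1
V_IL_MAX = 0.8   # highest input voltage still guaranteed to read as a 0
V_IH_MIN = 2.0   # lowest input voltage still guaranteed to read as a 1

noise_margin_low = V_IL_MAX - V_OL_MAX    # 0.4 V of headroom on a low signal
noise_margin_high = V_OH_MIN - V_IH_MIN   # 0.4 V of headroom on a high signal
print(f"Low-side margin: {noise_margin_low:.1f} V, high-side margin: {noise_margin_high:.1f} V")
```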

The 1961 patent for TTL never mentions 5 volts; it only specifies a “B+”, which seems like a term held over from the vacuum tube days. Source: U.S. Patent 3283170A.

But, being able to tell your 1s and 0s apart really only sets a minimum for TTL’s supply rail. Why couldn’t it have been higher? It could have, and a higher Vcc, like the 10 V to 15 V used in emitter-coupled logic (ECL), might have improved the margins between logic levels and improved noise immunity. But higher voltage means more power, and power means heat, and heat is generally frowned upon in designs. So five volts must have seemed like a good compromise — enough wiggle room between logic levels, good noise immunity, but not too much power wasted.
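
The power side of that compromise scales unkindly: for anything resistive, dissipation grows with the square of the supply voltage. A toy comparison, ignoring TTL's actual totem-pole output stage and assuming a hypothetical 1 kΩ pull-up holding a line low:

```python
def pullup_power_mw(vcc_volts, r_ohms):
    """Power burned in a resistive pull-up while the output holds the line low: P = V^2 / R."""
    return vcc_volts ** 2 / r_ohms * 1000.0

for vcc in (5.0, 10.0, 15.0):
    print(f"{vcc:4.1f} V supply, 1 kohm pull-up: {pullup_power_mw(vcc, 1000.0):6.1f} mW")
```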

I thought perhaps the original patent for TTL would shed some light on the rationale for five volts, but like most inventors, James Buie left things as broad and non-specific as possible in the patent. He refers only to “B+” and “B-” in the schematics and narrative, although he does calculate that the minimum for B+ would be 2.2 V. Later on, he states that “the absolute value of the supply voltage need be greater than the turn-on voltage of the coupling transistor and that of the output transistor,” and in the specific claims section, he refers to “a source of EMF” without specifying a magnitude. As far as I can see, nowhere in the patent does the five volt spec crop up.

Your Turn

The Fender “Champ” guitar amp had a rectifier tube with a 5-volt filament. Perhaps TTL’s Vcc comes from that? Source: SchematicHeaven.net.

If I were to hazard a guess, the five volt spec might be a bit of a leftover from the tube era. A very common value for the heater circuit in vacuum tubes was 6.3 V, itself a somewhat odd figure that probably stems from the days when automobiles used 6 V electrical systems, which were really 6.3 V thanks to using three series-connected lead-acid cells with a nominal cell voltage of 2.1 V each.

Perhaps the early TTL pioneers looked at the supply rail as a bit like the heater circuit, but nudged it down to 5 V when 6.3 V proved a little too hot. There were also some popular tubes with heaters rated at five volts, such as the rectifier tubes found in guitar amplifiers like the classic Fender “Champ” and others. The cathodes on these tubes were often directly connected to a dedicated 5 V winding on the power transformer; granted, that was 5 V AC, but perhaps it served as a design cue once TTL came around.

This is, of course, all conjecture. I have no idea what was on the minds of TTL’s designers; I’m just throwing out a couple of ideas to stir discussion. But what about you? Where do you think the five volt TTL standard came from? Was it arrived at through a stringent engineering process designed to optimize performance? Or was it a leftover from an earlier era that just happened to be a good compromise? Was James Buie an electric guitarist with a thing for Fender? Or was it something else entirely? We’d love to hear your opinions, especially if you’ve got any inside information. Sound off in the comments section below.

Hackaday Links: October 5, 2025
https://hackaday.com/2025/10/05/hackaday-links-october-5-2025/
Sun, 05 Oct 2025 23:00:19 +0000

What the Flock? It’s probably just some quirk of The Almighty Algorithm, but ever since we featured a story on Flock’s crime-fighting drones last week, we’ve been flooded with other stories about the company, some of which aren’t very flattering. The first thing pushed our way was this handy interactive map of the company’s network of automatic license plate readers. We had no idea how extensive the network was, and while our location is relatively free from these devices, at least ones operated on behalf of state, county, or local law enforcement, we did learn to our dismay that our local Lowe’s saw fit to install three of these cameras on the entrances to their parking lot. Not wishing to have our comings and goings documented, we’ll be taking our home improvement dollars elsewhere for now.

But it’s a new feature being rolled out by Flock that really got our attention: the addition of “human distress” detection to their Raven acoustic gunshot detection system. From what we understand, gunshot detection systems use the sudden acoustic impulse generated by the supersonic passage of a bullet, the shock wave from the rapidly expanding powder charge of a fired round, or both to detect a gunshot, and then use the time-of-arrival difference between multiple sensors to estimate the shot’s point of origin. Those impulses carry a fair amount of information, but little of it is personally identifiable, at least directly. On the other hand, human voices carry a lot of personal information, and detecting the sounds of distress, such as screaming, would require very different monitoring techniques. We’d imagine it would be akin to what digital assistants use to monitor for wake words, which would mean turning the world — or at least pockets of it — into a gigantic Alexa. We don’t much like the idea of having our every public utterance recorded and analyzed, even with the inevitable assurances from the company that the “non-distress” parts of the audio stream will never be listened to. Yeah, right.
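
Flock hasn't published how Raven localizes a shot, but the time-of-arrival-difference idea itself is easy to sketch: pick the candidate point whose predicted arrival-time differences best match the measured ones. Here's a minimal brute-force version in Python, with a made-up sensor layout:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, roughly

def locate(sensors, arrival_times, span=200.0, step=1.0):
    """Brute-force TDOA solver: scan a grid of candidate points and keep the one
    whose predicted arrival-time differences (relative to sensor 0) best match
    the measured ones."""
    measured = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
            err = sum((p - m) ** 2 for p, m in zip(predicted, measured))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Made-up example: four sensors and a shot at (40, -25) meters
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
source = (40.0, -25.0)
times = [math.hypot(source[0] - sx, source[1] - sy) / SPEED_OF_SOUND for sx, sy in sensors]
print(locate(sensors, times))  # should land on or very near (40.0, -25.0)
```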

Botnets are bad enough when it’s just routers or smart TVs that are exploited to mine crypto or spam comments on social media. But what if a botnet were made of, you know, actual robots? That might be something to watch out for with the announcement of a vulnerability in certain Unitree robots, including several of their humanoid robots. The vulnerability, still unpatched at the time of the Spectrum story, lies in the Bluetooth system used to set up the robots’ WiFi configuration. It sounds like an attacker can easily craft a BLE packet to become an authenticated user, after which the WiFi SSID and password fields can be used to inject arbitrary code. The fun doesn’t end there, though, since a compromised robot could then go on to infect any other nearby Unitree bots via BLE. And since Unitree seems to be staking out a market position as the leader in affordable humanoid robots, who’s to say what could happen? If you want a zombie robot apocalypse, this seems like a great way to get it.

Also from the “Bad Optics for Robots” files comes this story about a Waymo car that went just a little off course. Or rather, on course — a golf course, to be precise. Viral video shows a Waymo self-driving Jaguar creeping slowly across a golf course fairway as bemused golfers look on. But you can relax, because the robotaxi company says that this isn’t a case of their AI driving system going awry, but rather a human-driven robotaxi preparing for an event at the golf course. The company seems to think this absolves them, and perhaps it does officially and legally. But a very distinctive car that’s well-known for getting into self-driving mischief, appearing in a place one doesn’t typically associate with vehicles larger than golf carts, seems like a bad look for the company.

And finally, back in December of 2023, we dropped a link to My Mechanics’ restoration of a 1973 Datsun 240Z. He’s been making slow but steady progress on the car since then, with the most recent video covering his painstaking restoration of the rear axle and suspension. Where most car rebuild projects use as many replacement parts as possible, My Mechanics prefers to restore the original parts wherever possible. So, where a normal person might look at the chipped cooling fins on the original Z-car’s brake drums and order new ones, My Mechanics instead pulls out the TIG welder and lays up some beads to patch the broken fins. He used a similar technique to restore the severely chowdered compression fittings on the brake lines, something we’ve never seen done before. Over the top? You bet it is, but it still makes for great watching. Enjoy!