Featured – Hackaday
Fresh hacks every day

Word Processing: Heavy Metal Style
https://hackaday.com/2025/10/20/word-processing-heavy-metal-style/
Mon, 20 Oct 2025 17:00:23 +0000

If you want to print, say, a book, you will probably type it into a word processor. Someone else will take your file and produce pages on a printer. Your words will directly switch a laser beam or something similar on and off to put words on paper. But for a long time, printing meant creating a physical representation of what you wanted to print that could stamp an imprint on a piece of paper.

The process of carving something out of wood or some other material to stamp out printing is very old. But the revolution came when the Chinese and, later, Europeans realized it would be more flexible to make individual symbols from which you could assemble texts: moveable type. The ability to mass-produce books and other written material had a huge influence on society.

But there is one problem. A book might have hundreds of pages, and each page has hundreds of letters. Someone has to find the right letters, put them together in the right order, and bind them together in a printing press’ chase so it can produce the page in question. Then you have to take it apart again to make more pages. Well, if you have enough type, you might not have to take it apart right away, but eventually you will.

Automation

A Linotype matrix for an upright or italic uppercase A.

That’s how it went, though, until around 1884. That’s when Ottmar Mergenthaler, a clockmaker from Germany who lived in the United States, had an idea. He had been asked for a quicker method of publishing legal briefs. He imagined a machine that would assemble molds for type instead of the actual type. Then the machine would cast molten metal to make a line of type ready to get locked into a printing press.

He called the molds matrices and built a promising prototype. He formed a company, and in 1886, the New York Tribune got the first commercial Linotype machine.

These machines would be in heavy use all through the early 20th century, although sometime in the 1970s other methods started to displace them. Even so, a few printing operations were still using Linotypes as late as 2022, as you can see in the video below. We don’t know for sure if The Crescent is still using the old machine, but we’d bet they are.

Of course, there were imitators and the inevitable patent wars. There was the Typograph, which was an early entry into the field. The Intertype company produced machines in 1914. But just like Xerox became a common word for photocopy, machines like this were nearly always called Linotypes and, truth be told, were statistically likely to have been made by Mergenthaler’s company.

Kind of Steampunk

Diagram from a 1904 book showing the parts of a Linotype.

For a machine that appeared in the 1800s, the Linotype looks both modern and steampunk. It had a 90-key keyboard, for one thing. Some even had paper tape readers so type could be “set” somewhere and sent to the press room via teletype.

The machine had a store of matrices in a magazine. Of course, you needed lots of common characters and perhaps fewer of the uncommon ones. Each matrix had a particular font and size, although for smaller fonts, the matrix could hold two characters that the operator could select from. One magazine would have one font at a particular size.

Unlike type, a Linotype matrix isn’t a mirror image, and it is set into the metal instead of rising out of it. That makes sense. It is a mold for the eventual type that will be raised and mirrored. The machine had 90 keys. Want to guess how many channels a magazine had? Yep. It was 90, although larger fonts might use fewer.

Different later models had extra capabilities. For example, some machines could hold four magazines in a stack so you could set multiple fonts or sizes at one time, with some limitations, depending on the machine. Spaces weren’t in the magazine. They were in a special spaceband box.

Each press of a key would drop a matrix from the magazine into the assembler at the bottom of the machine, in a position for either the primary or auxiliary letter. This was an entirely mechanical process, so the machines had to be kept cleaned and lubricated; a skilled operator could assemble about 30 words per minute. There was also a special pi channel where you could put strange matrices you didn’t use very often.

Typecasting

When the line was done, you pressed the casting lever, which would push the matrices out of the assembler and into a delivery channel. Then the line moved into the casting section, where casting took about nine seconds. A motor moved the matrices to the right place, and a gas burner or electric heater kept a pot of metal (usually the lead/antimony/tin mix that is traditional for type) molten.

A properly made slug from a Linotype was good for 300,000 imprints. The pot did, however, require periodic removal of the dross from the top of the hot metal. Of course, once you didn’t need a slug anymore, you just dropped it back in the pot.

Justification

A composed line with long space bands. (From a 1940 book by the Linotype Company). Note that each matrix has two letters.

You might wonder how type would be justified. The trick is in the space bands. They were larger than the other matrices and built as sliding wedges, so that the further a band was pushed up into the line, the more space it took. A mechanism pushed them up until the line of type exactly fit between the margins.

You can see why the space bands were in a special box. They are much longer than the typical type matrices.

How else could you even out the spaces with circa-1900 technology? Pretty clever.
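The mechanism amounts to simple arithmetic, performed mechanically: the fixed widths of the matrices leave a deficit on the line, and every band absorbs an equal share of it. Here is a minimal sketch in Python; all widths, units, and band limits are invented for illustration and are not real Linotype dimensions.

```python
def justify(matrix_widths, measure, min_space=4.0, max_space=15.0):
    """Return the width each space band must expand to so that the line
    exactly fills the measure, mimicking the equal push on all bands."""
    gaps = len(matrix_widths) - 1  # one space band between adjacent words
    if gaps < 1:
        raise ValueError("need at least two words to justify")
    space = (measure - sum(matrix_widths)) / gaps  # every band gets an equal share
    if not (min_space <= space <= max_space):
        raise ValueError("line too tight or too loose for the bands to absorb")
    return space

# Four words totaling 111 units on a 140-unit measure: each of the
# three bands must expand to about 9.67 units.
print(justify([30, 22, 41, 18], 140))
```

A line whose deficit falls outside what the wedges can absorb raises an error, which mirrors real practice: such a line had to be padded or reset by the operator.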

The distributor bar (black) has teeth that engage teeth on each matrix.

If you have been paying attention, there’s one major drawback to this system. How do the matrix elements get back to the right place in the magazine? If you can’t automate that, you still have a lot of manual labor to do. This was the job of the distributor. First, the space bands were sorted out. Each matrix has teeth at the top that allow it to hang on a toothed distributor bar. Each letter has its own pattern of teeth that form a 7-bit code.

As the distributor bar carries them across the magazine channels, it will release those that have a particular set of teeth missing, because it also has some teeth missing. A diagram from a Linotype book makes it easier to understand than reading about it.
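The release logic is essentially subset testing on bit patterns, and it is easy to simulate. The sketch below is a toy model: the letter-to-code assignments are invented rather than the real Linotype tooth combinations, but the principle of dropping a matrix at the one channel where the bar leaves it nothing to hang from is the same.

```python
from itertools import combinations

def make_codes(letters, teeth_total=7, teeth_per_matrix=3):
    """Assign each letter a distinct tooth pattern of fixed weight.
    Equal weight guarantees no pattern is a subset of another, so each
    matrix can drop at exactly one channel. (Codes are illustrative.)"""
    patterns = combinations(range(teeth_total), teeth_per_matrix)
    return {letter: sum(1 << t for t in next(patterns)) for letter in letters}

def sort_line(line, codes):
    """Carry each matrix along the bar; report the channel it drops into."""
    result = {}
    for letter in line:
        for channel, code in codes.items():
            # Over each channel, the bar keeps only the teeth that do NOT
            # match that channel's code.
            bar_teeth = ~code & 0b1111111
            if codes[letter] & bar_teeth == 0:  # nothing left to hang from
                result[letter] = channel
                break
    return result

codes = make_codes("etaoin")
print(sort_line("tea", codes))  # each matrix returns to its own channel
```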

The Goldbergs

You have to wonder if Ottmar was related to Rube Goldberg. We don’t think we’d be audacious enough to propose a mechanical machine to do all this on top of an automated way to handle molten lead. But we admire anyone who does. Thomas Edison called the machine the eighth wonder of the world, and we don’t disagree. It revolutionized printing even though, now, it is just a historical footnote.

Can’t get enough info on the Linotype? There is a documentary that runs well over an hour, which you can watch below. If you’ve only got five minutes, try the short demo video at the very bottom.

Moveable type was to printing what 3D printing is to plastic manufacturing. Which might explain this project. Or this one, for that matter.

A Tale of Two Car Design Philosophies
https://hackaday.com/2025/10/16/a-tale-of-two-car-design-philosophies/
Thu, 16 Oct 2025 14:00:51 +0000

As a classic car enthusiast, my passion revolves around cars with a Made in West Germany stamp somewhere on them, partially because that phrase generally implied a reputation for mechanical honesty and engineering sanity. Air-cooled Volkswagens are my favorites, and in fact I wrote about these, and my own ’72 Super Beetle, almost a decade ago. The platform is incredibly versatile and hackable, not to mention inexpensive and repairable thanks to its design as a practical, affordable car originally meant for German families in the post-war era that eventually spread worldwide. My other soft spot is a car that might seem almost diametrically opposed to early VWs in its design philosophy: the Mercedes 300D. While it was a luxury vehicle, expensive and overbuilt in comparison to classic Volkswagens, the engineers’ design choices ultimately earned it a reputation as one of the most reliable cars ever made.

As much as I appreciate these classics, though, there’s almost nothing that could compel me to purchase a modern vehicle from either of these brands. The core reason is that both have essentially abandoned the design philosophies that made them famous in the first place. And while it’s no longer possible to buy anything stamped Made in West Germany for obvious reasons, even a modern car with a VIN starting with a W doesn’t carry that same weight anymore. It more likely marks a vehicle destined for a lease term rather than one meant to be repaired and driven for decades, like my Beetle or my 300D.

Punch Buggy Blue

Vintage Beetles also make excellent show cars and beach buggies. Photo courtesy of Bryan Cockfield

Starting with the downfall of Volkswagen, whose Beetle is perhaps the most iconic car ever made, their original stated design intent was to make something affordable and easily repairable with simple tools. The vehicles that came out of this era, including the Beetle, Bus, and Karmann Ghia, omitted many parts we’d think were absolutely essential on a modern car such as a radiator, air conditioner, ABS brakes, a computer, or safety features of any sort. But in exchange the vehicles are easily wrenched on for a very low cost.

For example, removing the valve covers only requires a flat screwdriver and takes about five seconds, and completing a valve adjustment from that point only requires a 13 mm wrench and maybe an additional half hour. The engines can famously be removed in a similar amount of time, and the entire bodies can be lifted off the chassis without much more effort. And some earlier models of Beetle will run just fine even without a battery, assuming you can get a push. As a result of this cost and simplicity, the Beetle and the other vehicles based on it were incredibly popular for over six decades and drove VW to worldwide fame.

This design philosophy didn’t survive the 80s and 90s, however, and this era saw VW abandon nearly everything that made it successful in the first place. Attempting any of the maintenance procedures listed above on a modern Jetta or Golf will have one scratching one’s head, wondering if there’s anything left of the soul of the Volkswagen of the 50s and 60s. Things like having to remove the bumper and grille to change a headlight assembly, or removing the intake manifold to change a thermostat, are commonplace now. VW has abandoned its low-cost roots as well: the new retro-styled Bus costs many multiples of even the inflation-adjusted price of a Bus from the 1960s, well beyond what modern safety standards and technology alone would have added to the cost of the vehicle. And even setting emissions scandals completely aside, modern VWs have remained overpriced and difficult to repair.

Besides design cues, there are virtually no similarities between these two cars. Photo courtesy of Bryan Cockfield

VW Is Not Alone

The story of Mercedes ends up in almost exactly the same place but from a completely opposite starting point. Mercedes of the 60s and 70s was known for building mostly indestructible tanks for those with means who wanted to feel like they were riding in the peak of luxury. And that’s what Mercedes mostly delivered: leather seats, power windows, climate control, a comfortable ride, and in a package that would easily go hundreds of thousands of miles with basic maintenance. In the case of the W123 platform, this number often extended to a million miles, a number absolutely unheard of for modern vehicles.

This is the platform my 1984 300D was based on, and mine was well over 300,000 miles before we eventually parted ways. Mercedes of this era also made some ultra-luxury vehicles that could be argued to be the ancestors of the modern Mercedes-Maybach, like the Mercedes 600, a car in which the power accessories, such as the windows, the power reclining rear seat, and the automatic trunk, were driven by hydraulics rather than electronics.

Nothing lets you blend into the Palm Beach crowd as seamlessly as driving a Mercedes. Photo courtesy of Bryan Cockfield

While the Mercedes 600 isn’t exactly known for being a hobbyist car nowadays, the W123s certainly are. My 300D was simple by modern Mercedes standards with a mechanical fuel injected diesel engine that was excessively overbuilt. The mechanical climate control systems made out of springs, plastic, and hope might not be working anymore but I’d be truly surprised if the engine from this car isn’t still running today.

Even plenty of gas-powered Mercedes of that era are wrenchable (as long as you bought one from before Chrysler poisoned the company) and also deliver the luxury that Mercedes was known for and is still coasting on. And this ability to repair or work on a car at a minimum of cost didn’t mean Mercedes sacrificed luxury, either. These cars were known for comfort as well as reliability, something rarely combined in modern cars.

Indeed, like Volkswagen, it seems as though a modern Mercedes will make it just as far as the end of the first lease before it turns into an expensive maintenance nightmare. Mercedes at least has the excuse that it never recovered from infecting itself with Chrysler in the 90s; Volkswagen has no corporate baggage that severe, instead making a conscious choice to regress toward the mean without the anchor of a lackluster American brand tied around its neck. But a few other, less obvious things have happened that have crushed the souls of my favorite vintage automakers as well.

Toyota

Japanese automakers disrupted everything in the 70s and 80s with cars that had everything Volkswagen used to be: simple, inexpensive, repairable, and arguably even more reliable. And, with the advent of Lexus in the 80s and their first model, the LS400, they showed that they could master the Mercedes traits of bulletproof luxury as well. They didn’t need nostalgia or marketing mythology; they just quietly built what Volkswagen and Mercedes once promised, and Volkswagen, Mercedes, and almost every other legacy automaker at the time were simply unable to compete on any of these terms. Many people blame the changes of the last three decades on increasing safety and emissions requirements, but that fails to account for the fact that Japanese brands faced the same requirements and succeeded despite them.

Marketing

Photo courtesy of Bryan Cockfield

Without being able to build reliable vehicles at a competitive price to Toyota, or Honda, or others, these companies turned to their marketing departments and away from their engineers. Many car makers, not just Mercedes and VW, chase gadgetry and features today rather than any underlying engineering principles. They also hope to sell buyers on a lifestyle rather than on the vehicle itself. With Mercedes it’s the image of luxury rather than luxury itself, and for Volkswagen especially it’s often nostalgia rather than repairability or reliability.

This isn’t limited to car companies, either. The 80s and 90s also ushered in a more general era of prioritizing stockholders and quarterly earnings over customers, long-term thinking, and quality. Companies like Boeing, GE, Craftsman, Sony, and Nokia have all fallen victim to this short-term trend at the expense of what once made them great.

Designing for Assembly Rather than Repair

And if customers are only paying for a lease term, it doesn’t really matter whether the cars last longer than that. It follows that the easiest way to trim costs when not designing for longevity is to minimize assembly cost rather than the cost of ownership. That’s partially how we get the classic “remove the bumper to replace the headlight” predicament of many modern vehicles: these cars are designed to please robots on the assembly line, not humans with wrenches.

Dealerships

The way that we’ve structured car buying as a society bears some of this burden as well. Dealerships, especially in North America, are protected by law and skew the car ownership experience significantly, generally to the detriment of car owners. Without these legal protections the dealership model would effectively disappear overnight, and their lobbying groups have fought tooth-and-nail to stop newer companies from shipping cars directly to owners. Not only do dealerships drive up the cost of purchasing a vehicle compared to if it were legally possible to buy direct from a manufacturer, they often make the bulk of their profits on service. That means their incentives are also aligned so that the more unreliable and complex vehicles become, the more the dealerships will benefit and entrench themselves further. This wasn’t as true when VW and Mercedes were making the vehicles that made them famous, but has slowly eroded what made these classics possible in the modern world.

Hope? Probably Not.

There’s no sign that any of these trends are slowing down, and to me it seems to be part of a broader trend that others like [Maya] have pointed out that goes beyond cars. And it’s a shame too as there’s a brand new frontier of electric vehicles that could (in theory) bring us back to a world where we could have reliable, repairable vehicles again. EVs are simpler machines at heart, and they could be the perfect platform for open-source software, accessible schematics, and owner repair. But manufacturers and dealers aren’t incentivized to build anything like the Volkswagens or Mercedes of old, electric or otherwise, even though they easily could. I also won’t hold my breath hoping for [Jeff Bezos] to save us, either, but I’d be happy to be proven wrong.

Buick Park Avenue: the last repairable luxury car? Photo courtesy of Bryan Cockfield

And I also don’t fault anyone for appreciating these legacy brands. I’ve picked on VW and Merc here because I’ve owned them and appreciate them too, or at least what they used to represent. The problem is that somewhere along the way, loyalty to engineering and design ideals got replaced by loyalty to the logo itself. If we really care about what made cars like the Beetle and 300D special in the first place, we should be demanding that the companies that built them live up to those values again, not making excuses when they don’t.

So for now, I’ll keep gravitating toward the vehicles that came closest to those ideals. Others at Hackaday have as well, notably [Lewin] and his Miata, which certainly fits this bill. Although I don’t have my VW or Mercedes anymore, I currently have a ’19 Toyota pickup, largely designed in the early 2000s. It isn’t glamorous, but it’s refreshingly honest by modern standards, and perhaps a last gasp from this company’s soul, as Toyota now risks following the same path that hollowed out Volkswagen and Mercedes: swapping durability and practicality for complexity, flashy features, and short-term profits. I was also gifted an old Buick with an engine I once heard described as “the time GM accidentally made a Toyota engine.” The rubber bits may be dry-rotting away, but it’s a perfect blend of my Beetle and my 300D: cheap, comfortable, reliable, and fixable (and the climate control actually works). The only thing missing is that little stamp: Made in West Germany.

Rubik’s WOWCube: What Really Makes a Toy?
https://hackaday.com/2025/10/15/rubiks-wowcube-what-really-makes-a-toy/
Wed, 15 Oct 2025 14:00:00 +0000

If there ever was a toy that enjoys universal appeal and recognition, the humble Rubik’s Cube is definitely on the list. Invented in 1974 by sculptor and professor of architecture Ernő Rubik, originally under the name Magic Cube, it features a three-by-three grid of colored surfaces and an internal mechanism that allows each individual section of each cube face to be moved to any other face. This makes the goal of returning each face to its original single color into a challenge, one which has both intrigued and vexed many generations over the decades. Maybe you’ve seen one?

Although there have been some variations of the basic 3×3 grid cube design over the years, none have been as controversial as the recently introduced WOWCube. Not only does this feature a measly 2×2 grid on each face, each part of the grid is also a display that is intended to be used alongside an internal processor and motion sensors for digital games. After spending many years in development, the Rubik’s WOWCube recently went up for sale at $299, raising many questions about what market it’s really targeting.

Is the WOWCube a ‘real’ Rubik’s Cube? And what makes something a memorable toy rather than a mere novelty gadget, forgotten by the next year like a plague of fidget spinners?

The Cube’s Genius

Rubik’s Cube components with the nylon core visible. (Credit: Encik Tekateki)

Originally created as a 3D visualization aid for Rubik’s students, the key to the Cube is a sphere. Specifically, the rotation occurs around said internal sphere, with the outer elements interlocked in such a way that they allow for free movement along certain planes. It is this simple design that was turned into a toy by the 1980s, with its popularity surging and never really fading.

There are a few definitions of a ‘toy’, which basically all come down to ‘an object to play with’, meaning something that provides pleasure through the act of interacting with it, whether that’s in the innocent sense of a child’s playtime or the mind-in-gutter adult sense. These objects are thus effectively without real purpose other than to provide entertainment and potentially instill basic skills in a developing mind.

Although this may seem like a clear-cut distinction, there is a major grey zone, inside of which we find things like ‘educational toys’ and games like chess. These are toys which are explicitly designed to only provide some kind of reward after a puzzle is solved, often requiring various levels of mental exertion.

It’s hard to argue that a Rubik’s Cube isn’t an educational toy, especially considering its original purpose within the education system. After shuffling the faces of the cube, the goal is to somehow move the individual blocks of color back to their fellow colors on each face. This is a process that can be done through a variety of methods, the easiest of which is to recognize the patterns that are formed by the colors.

Generally, solving a Rubik’s Cube is done algorithmically, using visual recognition of patterns and applying the appropriate response. While a casual ‘Cuber’ can solve a standard 3×3 cube in less than half an hour using the basic layer-by-layer algorithm, so-called speedcubers can knock this down to a few seconds by applying far more complicated algorithms. As of May 2025 the world record for fastest single solve stands at 3.05 seconds, achieved by Xuanyi Geng.

In this regard, one can easily put Rubik’s Cube in the same general ‘toy’ category as games like chess, go, and shogi. Although the Cube isn’t by itself a multiplayer game, it also clearly invites competition and a social atmosphere in which to better oneself at the game.

Does It WOW?

With the Cube so firmly established in the global community’s psyche and the multi-colored ‘toy’ a symbol of why paying attention during math classes can absolutely pay off later in life, this brings us to the WOWCube. Looking at the official website for the item, one can’t help but feel less than inspired.

Would you rather play this than solve a Rubik’s Cube? (Credit: WOWCube)

Backing up a bit, the device itself is already a major departure from the Cube. Although the WOWCube’s price tag at $299 is absolutely worthy of a ‘Wow’, the 2×2 configuration is decidedly underwhelming. Yes, it rotates like a Cube, and you could use it like a regular 2×2 Cube if that is your thing and you hate a challenge, but the general vibe is that you’re supposed to be playing the equivalent of Flash or phone games on the screens, in addition to using it like a geometrically-challenged smartphone to display statuses and notifications.

For these applications you have the use of a total of twenty-four 1.4″ IPS LCDs, each with a 240 × 240 resolution. Due to the 2×2 configuration, there are eight blocks that can be moved around, each with its own built-in processor, battery, speaker, and 6-axis IMU for gyroscope and accelerometer functionality. The blocks communicate with each other using a magnetic system, and after up to five hours of play time you have to recharge the cube on its special charger.

Currently you can only pre-order the special Rubik’s WOWCube, with delivery expected ‘by Christmas 2025’. You can however get a good idea of what the experience will be like from videos like the 2022 review video of a pre-production unit by MetalJesusRocks, who also helpfully did a teardown while reconnecting the battery in one block after it disconnected during use.

The 2022 preproduction WOWCube with a block removed. (Credit: MetalJesusRocks, YouTube)

The internals of a 2022-era WOWCube block. (Credit: MetalJesusRocks, YouTube)

Although this happened with a preproduction unit, it provides some indication of the expected lifespan of a WOWCube, as these devices are likely to experience constant mechanical forces. With no touchscreen, you sometimes have to rather violently tap or shake the cube to register user input, which will likely do wonders for long-term reliability.

In the earlier referenced pre-production review, the conclusion was – especially after having a group of random folk try it out – that although definitely an interesting device, it’s too expensive and too confused about who or what it is targeting. This is also the vibe in a brief production unit review by major gadget YouTube channel Mrwhosetheboss, whose ‘Overkill Toys’ video spent a few minutes fiddling with a 2023-era, $599 Black Edition WOWCube before giving it the ‘impressive, but why’ thumbs down.

This also reveals an interesting aspect: the WOWCube was never designed by the Rubik’s Cube company for Rubik’s Cube users. Rather, it was Cubios Inc. that created the WOWCube Entertainment System. Spin Master, the company that owns the Rubik’s brand name, has simply decided to sell this $299 version with official Rubik’s Cube branding. Basically, you could have bought your own WOWCube all along for the past few years.

More Of A MehCube

Considering the overwhelming chorus of crickets that greeted the release of earlier versions of the WOWCube Entertainment System, it seems unlikely that slapping Rubik’s Cube branding on a WOWCube will do much to change the outcome. Cube enthusiasts may not mind shelling out a few hundred bucks for a magnetically levitated, fairy dust-lubricated Cube to gain that 0.1 second advantage in competitive solving, but that market is totally distinct from the one for the WOWCube.

While absolutely impressive from a technological perspective, and likely a fun toy for (adult) children who can use it to keep themselves occupied with a range of potentially educational games, the price tag and potentially fragile nature of the device rather sour the deal. You do not want to give a WOWCube to a young child who may drop it, any more than you would a $1,400 iPad, while handing Junior a dodgy $5 Rubik’s Cube clone to develop their algorithmic skills with is far less of a concern.

So if Rubik’s Cube fans don’t seem interested in this device, and the average person might only be interested if it cost less than $100, it would seem that the WOWCube is condemned to be just another overpriced gadget rather than some kind of ‘digital re-imagining’ of the venerable Cube, no matter how much the marketing wants you to sign up for a WOWClub subscription and obligatory ‘AI’ features.

The Great Northeast Blackout of 1965
https://hackaday.com/2025/10/14/the-great-northeast-blackout-of-1965/
Tue, 14 Oct 2025 14:00:40 +0000

At 5:20 PM on November 9, 1965, the Tuesday rush hour was in full bloom outside the studios of WABC in Manhattan’s Upper West Side. The drive-time DJ was Big Dan Ingram, who had just dropped the needle on Jonathan King’s “Everyone’s Gone to the Moon.” To Dan’s trained ear, something was off about the sound, like the turntable speed was off — sometimes running at the usual speed, sometimes running slow. But being a pro, he carried on with his show, injecting practiced patter between ad reads and Top 40 songs, cracking a few jokes about the sound quality along the way.

Within a few minutes, with the studio cart machines now suffering a similar fate and the lights in the studio flickering, it became obvious that something was wrong. Big Dan and the rest of New York City were about to learn that they were on the tail end of a cascading wave of power outages that started minutes before at Niagara Falls before sweeping south and east. The warbling turntable and cartridge machines were just a leading indicator of what was to come, their synchronous motors keeping time with the ever-widening gyrations in power line frequency as grid operators scattered across six states and one Canadian province fought to keep the lights on.
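A synchronous motor turns in lockstep with the line frequency, so the pitch of Big Dan’s records would have sagged in direct proportion to the grid. The relationship is just a frequency ratio, with a semitone being a pitch ratio of 2^(1/12); the figures below are illustrative, not measurements from the 1965 event.

```python
import math

def pitch_shift_semitones(line_hz, nominal_hz=60.0):
    """Semitones of pitch change for a turntable whose synchronous
    motor tracks the power line frequency instead of the nominal 60 Hz."""
    return 12 * math.log2(line_hz / nominal_hz)

# Even a modest sag is audible: a trained ear can catch a few
# hundredths of a semitone of wobble on sustained notes.
for hz in (60.0, 59.0, 57.0):
    print(f"{hz:.1f} Hz -> {pitch_shift_semitones(hz):+.2f} semitones")
```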

They would fail, of course, with the result being 30 million people over 80,000 square miles (207,000 km2) plunged into darkness. The Great Northeast Blackout of 1965 was underway, and when it wrapped up a mere thirteen hours later, it left plenty of lessons about how to engineer a safe and reliable grid, lessons that still echo through the power engineering community 60 years later.

Silent Sentinels

Although it wouldn’t be known until later, the root cause of what was then the largest power outage in world history began with equipment that was designed to protect the grid. Despite its continent-spanning scale and the gargantuan size of the generators, transformers, and switchgear that make it up, the grid is actually quite fragile, in part due to its wide geographic distribution, which exposes most of its components to the ravages of the elements. Without protection, a single lightning strike or windstorm could destroy vital pieces of infrastructure, some of it nearly irreplaceable in practical terms.

Protective relays like these at a hydroelectric plant started all the ruckus. Source: Wtshymanski at en.wikipedia, CC BY-SA 3.0

Tasked with this critical protective job are a series of relays. The term “relay” has a certain connotation among electronics hobbyists, one that can be misleading in discussions of power engineering. While we tend to think of relays as electromechanical devices that use electromagnets to make and break contacts to switch heavy loads, in the context of grid protection, relays are instead the instruments that detect a fault and send a control signal to switchgear, such as a circuit breaker.

Relays generally sense faults through a series of instrument transformers located at critical points in the system, usually directly within the substation or switchyard. These can either be current transformers, which measure the current in a toroidal coil wrapped around a conductor, much like a clamp meter, or voltage transformers, which use a high-voltage capacitor network as a divider to measure the voltage at the monitored point.

Relays can be configured to use the data from these sensors to detect an overcurrent fault on a transmission line; contacts within the relay then send 125 VDC from the station’s battery bank to trip the massive circuit breakers out in the yard, opening the circuit. Other relays, such as induction disc relays, sense problems via the torque created on an aluminum disc by opposing sensing coils. They operate on the same principle as the old electromechanical watt-hour meters, except that under normal conditions, the force exerted by the coils is in balance, keeping the disc from rotating. When an overcurrent fault or a phase shift between the coils occurs, the disc rotates enough to close contacts, which sends the signal to trip the breakers.

The circuit breakers themselves are interesting, too. Turning off a circuit with perhaps 345,000 volts on it is no mean feat, and the circuit breakers that do the job must be engineered to safely handle the inevitable arc that occurs when the circuit is broken. They do this by isolating the contacts from the atmosphere, either by removing the air completely or by replacing the air with pressurized sulfur hexafluoride, a dense, inert gas that quenches arcs quickly. The breaker also has to draw the contacts apart as quickly as possible, to reduce the time during which they’re within breakdown distance. To do this, most transmission line breakers are pneumatically triggered, with the 125 VDC signal from the protective relays triggering a large-diameter dump valve to release pressurized air from a reservoir into a pneumatic cylinder, which operates the contacts via linkages.
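Stripped to its essence, the protective chain described above is a comparison: measure a quantity, compare it against a setpoint, and energize the trip coil if the setpoint is exceeded. Here is a minimal sketch in Python, purely illustrative; real protective relays implement standardized inverse-time curves, coordination timers, and far more logic than this.

```python
# Illustrative sketch of definite-time overcurrent relay logic.
# Real relays use standardized inverse-time curves and coordinate with
# neighboring relays; this shows only the core compare-and-trip idea.

def relay_should_trip(measured_mw: float, pickup_mw: float) -> bool:
    """True when the monitored line load exceeds the relay's pickup setting."""
    return measured_mw > pickup_mw

# A momentary surge past the setpoint closes the relay contacts, which
# sends 125 VDC from the station battery to the breaker's trip coil.
if relay_should_trip(measured_mw=380.0, pickup_mw=375.0):
    print("TRIP: energize breaker trip coil")
```

The entire drama at Beck amounted to this comparison returning true on five lines in under four seconds.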

The Cascade Begins

At the time of the incident, each of the five 230 kV lines heading north into Ontario from the Sir Adam Beck Hydroelectric Generating Station, located on the west bank of the Niagara River, was protected by two relays: a primary relay set to open the breakers in the event of a short circuit, and a backup relay to make sure the line would open if the primary relays failed to trip the breaker for some reason. These relays were installed in 1951, but after a near-catastrophe in 1956, when a transmission line fault went undetected and the breaker failed to open, the protective relays were reconfigured to operate at approximately 375 megawatts. When this change was made in 1963, the setting was well above the expected load on the Beck lines. But thanks to the growth of the Toronto-Hamilton area, especially all the newly constructed subdivisions, the margins on those lines had narrowed. Coupled with an emergency outage of a generating station further up the line in Lakeview and increased loads thanks to the deepening cold of the approaching Canadian winter, the relays were edging ever closer to their limit.

Where it all began. Overhead view of the Beck (left) and Moses (right) hydro plants, on the banks of the Niagara River. Source: USGS, Public domain.

Data collected during the event indicates that one of the backup relays tripped at 5:16:11 PM on November 9; the recorded load on the line was only 356 MW, but it’s likely that an unrecorded fluctuation pushed the relay over its setpoint. That relay immediately tripped its breaker on one of the five northbound 230 kV lines, with the other four relays doing the same within the next three seconds. With all five lines open, the Beck generating plant suddenly lost 1,500 megawatts of load, and all that power had nowhere to go but the 345 kV intertie lines heading east to the Robert Moses Generating Plant, a hydroelectric plant on the U.S. side of the Niagara River, directly across from Beck. That almost instantly overloaded the lines heading east to Rochester and Syracuse, tripping their protective relays to isolate the Moses plant and leaving another 1,346 MW of excess generation with nowhere to go. The cascade of failures marched across upstate New York, with protective relays detecting worsening line instabilities and tripping off transmission lines in rapid succession. The detailed event log, which recorded events with half-cycle resolution, shows 24 separate circuit trips within the first second of the outage.

Oscillogram of the outage showing data from instrument transformers around the Beck transmission lines. Source: Northeast Power Failure, November 9 and 10, 1965: A Report to the President. Public domain.

While many of the trips and events were automatically triggered, snap decisions by grid operators all through the system resulted in some circuits being manually opened. For example, the Connecticut Valley Electrical Exchange (CONVEX), which included all of the major utilities covering the tiny state wedged between New York and Massachusetts, noticed that Consolidated Edison, which operated in and around the five boroughs of New York City, was drawing an excess amount of power from their system in an attempt to make up for the generation capacity lost upstate. They tried to keep New York afloat, but the CONVEX operators had to make the difficult decision to manually open their ties to the rest of New England to shed excess load about a minute after the outage started, finally isolating their generators and loads completely by 5:21.

Heroics aside, New York City was in deep trouble. The first effects were felt almost within the first second of the event, as automatic protective relays detected excessive power flow and disconnected a substation in Brooklyn from an intertie into New Jersey. Operators at the Long Island Lighting Company tried to save their system by cutting ties to the Con Ed system, which reduced the generation capacity available to the city and made its problem worse. Operators tried to spin up their steam turbine plants to increase generation capacity, but it was too little, too late. Frequency fluctuations began to mount throughout New York City, resulting in Big Dan’s wobbly turntables at WABC.

Well, there’s your problem. Bearings on the #3 turbine at Con Ed’s Ravenswood plant were starved of oil during the shutdown, resulting in some of the only mechanical damage of the entire event. Source: Northeast Power Failure, November 9 and 10, 1965: A Report to the President. Public domain.

As a last-ditch effort to keep the city connected, Con Ed operators started shedding load to better match the dwindling available supply. But with no major industrial users — even in 1965, New York City was almost completely deindustrialized — the only option was to start shutting down sections of the city. Despite these efforts, the frequency dropped lower and lower as the remaining generators became more heavily loaded, tripping automatic relays to disconnect them and prevent permanent damage. Even so, a steam turbine generator at the Con Ed Ravenswood generating plant was damaged when an auxiliary oil feed pump lost power during the outage, starving the bearings of lubrication while the turbine was spinning down.
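The frequency collapse that tripped those generator relays follows from the physics of rotating machines: when load exceeds generation, the deficit is drawn from the kinetic energy of the spinning rotors, and system frequency falls. A rough, linearized sketch of the classic swing-equation relationship, with hypothetical numbers rather than figures from the 1965 event record (the real system also had governor response and load damping in play):

```python
# Linearized swing equation: df/dt = -deficit_pu * f0 / (2 * H), where
# the deficit is in per-unit of system capacity and H is the inertia
# constant in seconds. All numbers here are illustrative, not historical.

F_NOMINAL = 60.0   # Hz
H = 4.0            # inertia constant, seconds (typical for large steam units)

def frequency_after(deficit_pu: float, seconds: float, f0: float = F_NOMINAL) -> float:
    """Frequency after a sustained generation deficit, ignoring governor action."""
    dfdt = -deficit_pu * f0 / (2.0 * H)   # Hz per second
    return f0 + dfdt * seconds

# A 10% deficit drags frequency down by 0.75 Hz every second, fast enough
# that underfrequency relays disconnect generators within seconds.
print(frequency_after(deficit_pu=0.10, seconds=2.0))  # 58.5
```

This is why operators shed load so aggressively: every megawatt of disconnected load directly slows the rate of frequency decline.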

By 5:28 or so, the outage reached its fullest extent. Over 30 million people began to deal with life without electricity, briefly for some, but up to thirteen hours for others, particularly those in New York City. Luckily, the weather around most of the downstate outage area was unusually clement for early November, so the risk of cold injuries was relatively low, and fires from improvised heating arrangements were minimal. Transportation systems were perhaps the hardest hit, with some 600,000 unfortunates trapped in the dark in packed subway cars. The rail system reaching out into the suburbs was completely shut down, and Kennedy and LaGuardia airports were closed after the last few inbound flights landed by the light of the full moon. Road traffic was snarled thanks to the loss of traffic signals, and the bridges and tunnels in and out of Manhattan quickly became impassable.

Mopping Up

Liberty stands alone. Lighted from the Jersey side, Lady Liberty watches over a darkened Manhattan skyline on November 9. The full moon and clear skies would help with recovery. Source: Robert Yarnell Ritchie collection via DeGolyer Library, Southern Methodist University.

Almost as soon as the lights went out, recovery efforts began. Aside from the damaged turbine in New York and a few transformers and motors scattered throughout the outage area, no major equipment losses were reported. Still, a massive mobilization of line workers and engineers was needed to manually verify that equipment would be safe to re-energize.

Black start power sources had to be located, too, to power fuel and lubrication pumps, reset circuit breakers, and restart conveyors at coal-fired plants. Some generators, especially the ones that spun to a stop and had been sitting idle for hours, also required external power to “jump start” their field coils. For the idled thermal plants upstate, the nearby hydroelectric plants provided excitation current in most cases, but downstate, diesel electric generators had to be brought in for black starts.

In a strange coincidence, neither of the two nuclear plants in the outage area, the Yankee Rowe plant in Massachusetts and the Indian Point station in Westchester County, New York, was online at the time, and so couldn’t participate in the recovery.

For most people, the Great Northeast Power Outage of 1965 was over fairly quickly, but its effects were lasting. Within hours of the outage, President Lyndon Johnson issued an order to the chairman of the Federal Power Commission to launch a thorough study of its cause. Once the lights were back on, the commission was assembled and started gathering data, and by December 6, they had issued their report. Along with a blow-by-blow account of the cascade of failures and a critique of the response and recovery efforts, they made tentative recommendations on what to change to prevent a recurrence and to speed the recovery process should it happen again, which included better and more frequent checks on relay settings, as well as the formation of a body to oversee electrical reliability throughout the nation.

Unfortunately, the next major outage in the region wasn’t all that far away. In July of 1977, lightning strikes damaged equipment and tripped breakers in substations around New York City, plunging the city into chaos. Luckily, the outage was contained to the city proper, and not all of it at that, but it still resulted in several deaths and widespread rioting and looting, which the outage in ’65 managed to avoid. That was followed by the more widespread 2003 Northeast Blackout, which started with an overloaded transmission line in Ohio and eventually spread into Ontario, across Pennsylvania and New York, and into Southern New England.

]]>
https://hackaday.com/2025/10/14/the-great-northeast-blackout-of-1965/feed/ 31 864841 Blackout
Meshtastic: A Tale of Two Cities https://hackaday.com/2025/10/09/meshtastic-a-tale-of-two-cities/ https://hackaday.com/2025/10/09/meshtastic-a-tale-of-two-cities/#comments Thu, 09 Oct 2025 14:00:29 +0000 https://hackaday.com/?p=864850 If I’m honest with myself, I don’t really need access to an off-grid, fault-tolerant, mesh network like Meshtastic. The weather here in New Jersey isn’t quite so dynamic that there’s …read more]]>

If I’m honest with myself, I don’t really need access to an off-grid, fault-tolerant, mesh network like Meshtastic. The weather here in New Jersey isn’t quite so dynamic that there’s any great chance the local infrastructure will be knocked offline, and while I do value my privacy as much as any other self-respecting hacker, there’s nothing in my chats that’s sensitive enough that it needs to be done off the Internet.

But damn it, do I want it. The idea that everyday citizens of all walks of life are organizing and building out their own communications network with DIY hardware and open source software is incredibly exciting to me. It’s like the best parts of a cyberpunk novel, without all the cybernetic implants, pollution, and over-reaching megacorps. Well, we’ve got those last two, but you know what I mean.

Meshtastic maps are never exhaustive, but this gives an idea of node density in Philly versus surrounding area.

Even though I found the Meshtastic concept appealing, my seemingly infinite backlog of projects kept me from getting involved until relatively recently. It wasn’t until I got my hands on the Hacker Pager that my passing interest turned into a full blown obsession. But it’s perhaps not for the reason you might think. Traveling around to different East Coast events with the device in my bag, it would happily chirp away when within range of Philadelphia or New York, but then fall silent again once I got home. While I’d get the occasional notification of a nearby node, my area had nothing like the robust and active mesh networks found in those cities.

Well, they say you should be the change you want to see in the world, so I decided to do something about it. Obviously I wouldn’t be able to build up an entire network by myself, but I figured that if I started standing up some nodes, others might notice and follow suit. It was around this time that Seeed Studio introduced the SenseCAP Solar node, which looked like a good way to get started. So I bought two of them, with the idea of putting one on my house and the other on my parents’ place down the shore.

The results weren’t quite what I expected, but it’s certainly been an interesting experience so far, and today I’m even more eager to build up the mesh than I was in the beginning.

Starting on Easy Mode

I didn’t make a conscious decision to start my experiment at my parents’ house. Indeed, located some 60 miles (96 km) from where I live, any progress in building out a mesh network over there wouldn’t benefit me back home. But it was the beginning of summer, they have a pool, and my daughters love to swim. As such, we spent nearly every weekend there, which gave me plenty of time to tinker.

For those unfamiliar with New Jersey’s Southern Shore area, the coastline itself is dotted with vacation spots such as Wildwood, Atlantic City, and Long Beach Island. This is where the tourists go to enjoy the beaches, boardwalks, cotton candy, and expensive rental homes. But move slightly inland, and you’ll find a marshland permeated with a vast network of bays, creeks, and tributaries. For each body of water large enough to get a boat through, you’ll find a small town or even an unincorporated community that in the early 1900s would have been bustling with oyster houses and hunting shacks, but today might only be notable for having their own Wawa.

To infinity, and beyond.

My parents are in one of those towns that doesn’t have a Wawa. It’s very quiet, the skies are dark, and there’s not much more than marsh and water all around. So when I ran the SenseCAP Solar up their 20 foot (6 m) flagpole, which in a former life was actually the mast from a sailing catamaran, the results were extremely impressive.

I hadn’t had the radio up for more than a few hours before my phone pinged with a message. We chatted back and forth a bit, and I found that my new mesh friend was an amateur radio operator living on Long Beach Island, and that he too had just recently started experimenting with Meshtastic. He was also, incidentally, a fan of Hackaday. (Hi, Leon!) He mentioned that his setup was no more advanced than an ESP32 dev board sitting in his window, and yet we were reliably communicating at a range of approximately 6 miles (9 km).

Encouraged, I decided to leave the radio online all night. In the morning, I was shocked to find it had picked up more than a dozen new nodes. Incredibly, it was even able to sniff out a few nodes that I recognized from Philadelphia, 50 miles (80 km) to the west. I started to wonder if it was possible that I might actually be able to reach my own home, potentially establishing a link clear across the state.

Later that day, somebody on an airplane fired off a few messages on the way out of Philadelphia International Airport. Seeing the messages was exciting enough, but through the magic of mesh networking, it allowed my node to temporarily see networks at an even greater distance. I picked up one node that was more than 100 miles (160 km) away in Aberdeen, Maryland.

I was exhilarated by these results, and eager to get back home and install the second SenseCAP Solar node. If these were the kind of results I was getting in the middle of nowhere, surely I’d make even more contacts in a dense urban area.

Reality Comes Crashing Home

You see, at this point I had convinced myself that the reason I wasn’t getting any results back at home was the relatively meager antenna built into the Hacker Pager. Now that I had a proper node with an antenna bigger than my pinkie finger, I was sure I’d get better results. Especially since I’d be placing the radio even higher this time — with a military surplus fiberglass mast clamped into the old TV antenna mount on my three story house, the node would be around 40 feet (12 m) above the ground.

The mast gets my node above the neighbors’ roofs, but just barely.

But when I opened the Meshtastic app the day after getting my home node installed, I was greeted with… nothing. Not a single node was detected in a 24-hour period. This seemed very odd given my experience down the shore, but I brushed it off. After all, Meshtastic nodes only occasionally announce their presence when they aren’t actively transmitting.

Undaunted, I made plans with a nearby friend to install a node at his place. His home is just 1.2 miles (1.9 km) from mine, and given the casual 6 mile (9 km) contact I had made at my parents’ place, it seemed like this would be an easy first leg of our fledgling network.

Yet when we stood up a temporary node in his front yard, messages between it and my house were only occasionally making it through. Worse, the signal strength displayed in the application was abysmal. It was clear that, even at such a short range, an intermediary node would be necessary to get our homes reliably connected.

At this point, I was feeling pretty dejected. The incredible results I got when using Meshtastic in the sticks had clearly given me a false sense of what the technology was capable of in an urban environment. To make matters even worse, some further investigation found that my house was about the worst possible place to try and mount a node.

For one thing, until I bothered to look it up, I never realized my house was located in a small valley. According to online line-of-sight tools, I’m essentially at the bottom of a bowl. As if that wasn’t bad enough, I noted that the Meshtastic application was showing an inordinate number of bad packets. After consulting with those more experienced with the project, I now know this to be an indicator of a noisy RF environment. Which may also explain the exceptionally poor reception I get when trying to fly my FPV drone around the neighborhood, but that’s a story for another day.
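Those line-of-sight tools ultimately come down to simple geometry: the radio horizon grows with the square root of antenna height, which is exactly why sitting at the bottom of a bowl hurts so much. A back-of-the-envelope sketch using the standard 4/3-earth approximation (illustrative only; real Meshtastic range also hinges on Fresnel-zone clearance, terrain, and local RF noise):

```python
import math

# Radio horizon under the common 4/3-earth-radius approximation:
# d ≈ 4.12 * sqrt(h) km for one antenna of height h meters. For a link,
# the horizons of both antennas add. Terrain and obstructions ignored.

def radio_horizon_km(h1_m: float, h2_m: float) -> float:
    """Maximum line-of-sight link distance between two antennas, flat earth assumed."""
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

# A 12 m rooftop mast talking to another 12 m node over flat ground:
print(round(radio_horizon_km(12, 12), 1))  # 28.5 (km)
```

Over open marsh and water, that kind of figure is actually achievable, which squares with the 6 mile contact from the shore house; in a valley, the effective antenna height relative to the surrounding terrain can be negative, and the formula stops being the limiting factor.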

A More Pragmatic Approach

While I was disappointed that I couldn’t replicate my seaside Meshtastic successes at home, I’m not discouraged. I’ve learned a great deal about the technology, especially its limitations. Besides, the solution is simple enough — we need more nodes, and so the campaign to get nearby friends and family interested in the project has begun. We’ve already found another person in a geographically strategic position who’s willing to host a node on their roof, and as I write this a third Seeed SenseCAP Solar sits ready for installation.

At the same time, the performance of Meshtastic in a more rural setting has inspired me to push further in that region. I’m in the process of designing a custom node specifically tailored for the harsh marine environment, and have identified several potential locations where I can deploy them in the spring. With just a handful of well-placed nodes, I believe it should be possible to cover literally hundreds of square miles.

I’m now fighting a battle on two fronts, but thankfully, I’m not alone. In the months since I started this project, I’ve noticed a steady uptick in the number of detected nodes. Even here at home, I’ve finally started to pick up some chatter from nearby nodes. There’s no denying it: the mesh is growing every day.

My advice to anyone looking to get into Meshtastic is simple. Whether you’re in the boonies, or stuck in the middle of a metropolis, pick up some compatible hardware, mount it as high as you can manage, and wait. It might not happen overnight, but eventually your device is going to ping with that first message — and that’s when the real obsession starts.

]]>
https://hackaday.com/2025/10/09/meshtastic-a-tale-of-two-cities/feed/ 81 864850 meshtale_featured
Reshaping Eyeballs With Electricity, No Lasers Or Cutting Required https://hackaday.com/2025/10/08/reshaping-eyeballs-with-electricity-no-lasers-or-cutting-required/ https://hackaday.com/2025/10/08/reshaping-eyeballs-with-electricity-no-lasers-or-cutting-required/#comments Wed, 08 Oct 2025 14:00:17 +0000 https://hackaday.com/?p=834273 Glasses are perhaps the most non-invasive method of vision correction, followed by contact lenses. Each have their drawbacks though, and some seek more permanent solutions in the form of laser …read more]]>

Glasses are perhaps the most non-invasive method of vision correction, followed by contact lenses. Each has its drawbacks, though, and some seek more permanent solutions in the form of laser eye surgeries like LASIK, aiming to reshape their corneas for better visual clarity. However, these methods often involve cutting into the eye itself, and it hardly gets any more invasive than that.

A new surgical method could have benefits in this regard, allowing correction in a single procedure that requires no lasers and no surgical cutting of the eye itself. The idea is to use electricity to help reshape the eye back towards greater optical performance.

The Eyes Have It

Thus far, the research has worked with individual eyeballs. Great amounts of work remain before this is a viable treatment for eyes in living subjects. Credit: research paper

Existing corrective eye surgeries most often aim to fix problems like long-sightedness, short-sightedness, and astigmatism. These issues are generally caused by the shape of the cornea, which works with the lens in the eye to focus light onto the light-sensitive cells in the retina. If the cornea is misshapen, it can be difficult for the eye to focus at close or long ranges, or it can cause visual artifacts in the field of view, depending on the precise nature of the geometry. Technologies like LASIK reshape the cornea for better performance using powerful lasers, but also involve cutting into the cornea. The procedure is thus highly invasive, and comes with a certain recovery time, safety precautions that must be followed afterwards, and some potential side effects. A method for reshaping the eye without cutting into it would thus be ideal to avoid these problems.

Enter the technology of Electromechanical Reshaping (EMR). As per a new paper, researchers at the University of California, Irvine, came across the idea by accident while looking into the moldable nature of living tissues. As it turns out, collagen-based tissues like the cornea hold their structure thanks to the attractions between oppositely-charged subcomponents. These structures can be altered with the right techniques. For example, since these tissues are laden with water, applying electricity can change the pH through electrolysis, altering the attraction between components of the tissue and making them pliable and reformable. Once the electric potential is taken away, the tissues can be restored to their original pH balance, and the structure will hold firm in its new form.

The untreated lens is visible in section A, and the new shape of the modified lens can be seen in section B. Graphs C and D show the change in radius and refractive power of the lens. Credit: research paper

Researchers first tested this technique on other tissues before looking to the eye. The team was able to use EMR to reshape ears from rabbits, while also making physical changes to scar tissue in pigs. These efforts proved the basic mechanism worked, and that it could have applicability to the cornea itself.

To actually reshape the cornea effectively using this technique, a sort of mold was required. To that end, researchers created a “contact lens” type device out of platinum, which was formed in the desired final shape of the cornea. A rabbit eyeball was used in testing, doused in a saline solution to mimic the eye’s natural environment. The platinum device was pushed onto the eye and used as an electrode to apply a small electrical potential across the eyeball. This was controlled carefully to precisely shift the pH into the region where the eye became remoldable. After a minute, the cornea of the rabbit eyeball had conformed to the shape of the platinum lens. With the electrical potential removed, the pH of the eyeball returned to normal and the cornea retained its new shape. The technique was trialled on twelve eyeballs, ten of which were treated for short-sightedness, also known as myopia. In all ten of the myopic eyeballs, the cornea was successfully corrected, creating improved focusing power that would correspond to better vision in a living patient’s eye.
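To put that change in focusing power in context: the refractive power of a single spherical surface is P = (n₂ − n₁)/r, so even a fraction of a millimeter of corneal reshaping shifts the eye’s focus by a clinically meaningful amount. A quick sketch with textbook index-of-refraction values, not figures from the paper:

```python
# Refractive power of a single spherical surface: P = (n2 - n1) / r,
# giving P in diopters when r is in meters. The indices below are
# standard textbook approximations, not values from the study.

N_AIR = 1.000
N_CORNEA = 1.376

def surface_power_diopters(radius_mm: float) -> float:
    """Refractive power of the air/cornea interface for a given radius of curvature."""
    return (N_CORNEA - N_AIR) / (radius_mm / 1000.0)

# Flattening the anterior cornea from a 7.8 mm to an 8.0 mm radius sheds
# roughly 1.2 diopters, the scale of correction needed for mild myopia.
delta = surface_power_diopters(7.8) - surface_power_diopters(8.0)
print(round(delta, 2))  # 1.21
```

This is why a one-minute molding step can plausibly substitute for laser ablation: both are just ways of nudging r.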

While the technique is promising, a great deal of development will be required before it is a viable method for vision correction in human patients. Researchers will need to figure out how to properly apply the technique to eyeballs that are still in living patients, with much work to be done in animal studies prior to any attempts to translate the technique to humans. However, it could be that a decade or two in the future, glasses and LASIK will be increasingly less popular compared to a quick zap from the electrochemical eye remoulder. Time will tell.


]]>
https://hackaday.com/2025/10/08/reshaping-eyeballs-with-electricity-no-lasers-or-cutting-required/feed/ 36 834273 Eyeball
Smart Bulbs Are Turning Into Motion Sensors https://hackaday.com/2025/10/07/smart-bulbs-are-turning-into-motion-sensors/ https://hackaday.com/2025/10/07/smart-bulbs-are-turning-into-motion-sensors/#comments Tue, 07 Oct 2025 14:00:59 +0000 https://hackaday.com/?p=833777 If you’ve got an existing smart home rig, motion sensors can be a useful addition to your setup. You can use them for all kinds of things, from turning on …read more]]>

If you’ve got an existing smart home rig, motion sensors can be a useful addition to your setup. You can use them for all kinds of things, from turning on lights when you enter a room, to shutting off HVAC systems when an area is unoccupied. Typically, you’d add dedicated motion sensors to your smart home to achieve this. But what if your existing smart light bulbs could act as the motion sensors instead?

The Brightest Bulb In The Bulb Box

Traditional motion sensors most commonly use passive infrared (PIR) detection, wherein the sensor picks up on the infrared radiation emitted by a person entering a room. Other types of sensors include break-beam sensors, ultrasonic sensors, and cameras running motion-detection algorithms. All of these technologies can readily be used with a smart home system if so desired. However, they all require the addition of extra hardware. Recently, smart home manufacturers have been exploring methods to enable motion detection without requiring the installation of additional dedicated sensors.

Hue Are You?

The technology uses data on radio propagation between multiple smart bulbs to determine whether or not something (or someone) is moving through an area. Credit: Ivani

Philips has achieved this goal with its new MotionAware technology, which will be deployed on the company’s new Hue Bridge Pro base station and Hue smart bulbs. The company’s smart home products use Zigbee radios for communication. By monitoring small fluctuations in the Zigbee communications between the smart home devices, it’s possible to determine if a large object, such as a human, is moving through the area. This can be achieved by looking at fluctuations in signal strength, latency, and bit error rates. This allows motion detection using Hue smart bulbs without any specific motion-detection hardware required.

Using MotionAware requires end users to buy the latest Philips Hue Bridge Pro base station. Is there some special magic built into this device, or does Philips merely want to charge users to upgrade to the new feature? Well, Philips claims the new bridge is required because it’s powerful enough to run the AI-powered algorithms that sift the radio data and determine whether motion is occurring. The tech is based on IP from a company called Ivani, which developed Sensify—an RF sensing technology that works with WiFi, Bluetooth, and Zigbee signals.

To enable motion detection, multiple Hue bulbs must be connected to the same Hue Bridge Pro, with three to four lights used to create a motion sensing “area” in a given room. When setting up the system, the room must be vacated so the system can calibrate itself. This involves determining how the Zigbee radio signals propagate between devices when nobody—humans or animals—is inside. The system then uses variations from this baseline to determine if something is moving in the room. The system works whether the lights themselves are on or off, because the light isn’t used for sensing—as long as the bulb has power, it can use its radio for sensing motion. Philips notes this only increases standby power consumption by 1%, and by a completely negligible amount while the light is actually on and outputting light.

There are some limitations to the use of this system. It’s primarily for indoor use, as Philips notes that the system benefits from the way radio waves bounce off surrounding interior walls and objects. Lights should be spaced 1 to 7 meters apart for optimal results, and together they effectively create a volume between them in which motion sensing works best. Depending on local conditions, it’s also possible that the system may detect motion on adjacent levels or in nearby rooms, so sensitivity adjustment or light repositioning may be necessary. Notably, though, you won’t need new bulbs to use MotionAware. The system will work with all the Hue mains-powered bulbs that have been manufactured since 2014.
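The calibrate-then-detect approach described above can be sketched in a few lines: record a baseline of link measurements in the empty room, then flag motion when new readings drift too far from that baseline. This toy example uses only RSSI and a standard-deviation threshold; the real MotionAware and SpaceSense systems fuse multiple metrics with trained models, and every name and number here is illustrative.

```python
import statistics

# Toy RF-sensing motion detector: learn the empty-room RSSI distribution,
# then treat large deviations as motion. Real systems combine RSSI with
# latency and bit-error-rate features and use far more robust models.

def calibrate(baseline_rssi: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of RSSI readings taken in a vacated room."""
    return statistics.mean(baseline_rssi), statistics.stdev(baseline_rssi)

def motion_detected(rssi: float, mean: float, stdev: float, threshold: float = 3.0) -> bool:
    """Flag motion when a reading sits more than `threshold` sigmas from baseline."""
    return abs(rssi - mean) > threshold * stdev

mean, stdev = calibrate([-70.1, -70.3, -69.9, -70.2, -70.0])  # empty-room samples (dBm)
print(motion_detected(-66.5, mean, stdev))  # large swing: True
print(motion_detected(-70.1, mean, stdev))  # within noise: False
```

The calibration step is also why Philips asks you to leave the room during setup: a person standing in the sensing volume would bake their own reflections into the baseline.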

The WiZ Kids Were Way Ahead

Philips isn’t the only company offering built-in motion sensing with its smart home bulbs. WiZ also has a product in this space, which is perhaps no coincidence given the company was acquired in 2019 by Philips’ own former lighting division. Unlike Philips Hue, WiZ products rely on WiFi for communication. The company’s SpaceSense technology likewise relies on perturbations in radio signals between devices, but uses WiFi signals instead of Zigbee. What’s more, the company has been at this since 2022.

There are some notable differences in WiZ’s technology. SpaceSense works with as few as two devices, and not just lights—you can use any of the company’s newer lights, smart switches, or other devices, as long as they’re compatible with SpaceSense, which covers the vast majority of the company’s recent products.

Ultimately, WiZ beat Philips to this tech by years. However, perhaps due to its lower market penetration, it didn’t make the same waves when SpaceSense dropped in 2022.

Radio Magic

We’ve seen similar feats before. It’s actually possible to get all kinds of useful information out of modern radio chipsets for physical sensing purposes. We’ve seen systems that measure a person’s heart rate using nothing more than perturbations in WiFi transmission over short distances, for example. When you know what you’re looking for, a properly-built algorithm can let you dig usable motion information out of your radio hardware.

Ultimately, it’s neat to see smart home companies expanding their offerings in this way. By leveraging the radio chipsets in existing smart bulbs, engineers have been able to pull out granular enough data to enable this motion-sensing parlour trick. If you’ve ever wanted your loungeroom lights to turn on when you walk in, or a basic security notification when you’re out of the house… now you can do these kinds of things without having to add more hardware. Expect other smart home platforms to replicate this sort of thing in future if it proves practical and popular with end users.
]]>