Putting A Teensy To Task As A Transputer Link

One downside of working with the old Inmos Transputer devices is the rarity and cost of the original silicon. Obviously, you can’t sidestep the acquisition of the processor—unless you emulate—but what about replacing the IMS C011/C012 link chip? You need this (expensive) part to interface the transputer to the programming host, but as [Erturk Kocalar] discovered, it’s perfectly possible to coax a Teensy to do that job for you just as well.

The unusual two-bit start sequence differentiates a data packet from an ACK. It’s simple to emulate if you use the LSB of a 9-bit word as a dummy start bit!

Transputers work by using an array of bit-serial links to connect a network of devices, allowing cooperative computation on tasks too large to fit on a single device. At the link level, the protocol is a simple asynchronous bit-serial affair, with 11-bit data frames and a raw two-bit frame for the acknowledge. The C011 at the heart of the link interface is just a specialized UART: it takes 8-bit parallel data from the host, handles the handshaking, and pushes it out to the first transputer in the chain at 5, 10 or 20 Mbps, inverted relative to a standard UART and framed with two start bits and a single stop bit. In parallel, it performs the same task in the reverse direction.

[Erturk] realized that the Teensy UART has an inverted mode and, crucially, a 9-bit data mode. This allows the second start bit to be generated as bit 0 of the word, with the remaining eight bits forming the payload. Simple stuff. Additionally, the Teensy UART is capable of the maximum transputer bitrate of 20 Mbps, without breaking a sweat.
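To make the trick concrete, here's a quick Python sketch of the framing described above. It's an illustration of the bit pattern, not [Erturk]'s actual Teensy firmware, and it assumes the usual LSB-first UART bit ordering.

```python
# Illustration only: the link framing described above, modelled in Python.
# Polarity is ignored here; on the wire everything is inverted relative to
# a standard UART.

def data_frame(byte):
    """The 11-bit transputer link frame that carries one data byte."""
    bits = [1, 1]                                 # two start bits mark a data packet
    bits += [(byte >> i) & 1 for i in range(8)]   # payload, LSB first (assumed)
    bits += [0]                                   # single stop bit
    return bits

def ack_frame():
    """The raw two-bit acknowledge frame."""
    return [1, 0]

def nine_bit_word(byte):
    """What the Teensy loads into its 9-bit UART: bit 0 is the dummy second
    start bit, the remaining eight bits are the payload."""
    return (byte << 1) | 1

# The UART's own start and stop bits complete the frame, so the wire sees
# exactly the 11 bits that data_frame() lists:
uart_bits = [1] + [(nine_bit_word(0xA5) >> i) & 1 for i in range(9)] + [0]
assert uart_bits == data_frame(0xA5)
```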

Continue reading “Putting A Teensy To Task As A Transputer Link”

CoreXY 3D Printer Has A Scissor-Lift Z-axis So It Folds Down!

We don’t know about you, but one of the biggest hassles of having a 3D printer at home or in the ‘shop is the space it takes up. Wouldn’t it be useful if you could fold it down? Well, you’re in luck because over on Hackaday.io, that’s precisely what [Malte Schrader] has achieved with their Portable CoreXY 3D printer.

The typical CoreXY design you find in the wild features a moving bed that starts at the top and moves downwards, away from the XY gantry, as the print progresses. The CoreXY kinematics take care of positioning the hotend in the XY plane with a pair of motors and some cunning pulley drives; go check this out if you want to read more about that. Anyway, in this case the bed is fixed to the base on a 3-point kinematic mount (to allow the bed to be trammed) but is otherwise vertically immobile. The bed is AC-heated, which allows a much smaller power supply to be fitted and cuts down on the annoying cooling fan noise that's all too common with high-power bed heaters.
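For the curious, the CoreXY transform itself fits in a couple of lines. Here's a generic Python sketch of it, not [Malte]'s firmware; the sign convention is an assumption, since it varies from machine to machine.

```python
# A minimal sketch of CoreXY kinematics: both motors contribute to every
# X or Y move, and the firmware converts between Cartesian and motor space.

def to_motors(x, y):
    """Cartesian hotend position (mm) -> belt travel for motors A and B (mm)."""
    return x + y, x - y

def to_cartesian(a, b):
    """Inverse transform: motor positions back to X and Y."""
    return (a + b) / 2, (a - b) / 2

# A pure X move drives both motors the same way; a pure Y move drives them
# in opposite directions.
print(to_motors(10, 0))   # (10, 10)
print(to_motors(0, 10))   # (10, -10)
```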

Both ends of the cable bundle are pivoted so it can fold flat inside the frame!

The XY gantry is mounted at each end on a pair of scissor lift mechanisms, which are belt-driven and geared together from a single stepper motor paired with a reduction gearbox. Hopefully, this will resolve the X-axis tilting issues [Malte] reported with a previous version.

The coarse tramming is handled by the bed mounts, with a hotend-mounted BLTouch further dialling it in and compensating for any bed distortion measured immediately before printing. Simple and effective.
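If you've not seen how that compensation works under the hood, the generic sketch below shows the usual trick: bilinear interpolation over a grid of probed Z offsets, added to the commanded Z height. The probe values are made up for illustration, and this isn't the firmware running on [Malte]'s machine.

```python
# Generic mesh compensation: interpolate the probed bed deviation at any
# (x, y) and nudge the commanded Z by that amount. Probe data is invented.

import numpy as np

probe_x = np.array([0.0, 100.0, 200.0])      # probe site X coordinates (mm)
probe_y = np.array([0.0, 100.0, 200.0])      # probe site Y coordinates (mm)
probe_z = np.array([[0.00, 0.05, 0.12],      # measured deviation (mm), rows = Y
                    [0.02, 0.04, 0.10],
                    [0.01, 0.06, 0.09]])

def bed_offset(x, y):
    """Bilinear interpolation of the probed mesh at (x, y)."""
    i = int(np.clip(np.searchsorted(probe_x, x) - 1, 0, len(probe_x) - 2))
    j = int(np.clip(np.searchsorted(probe_y, y) - 1, 0, len(probe_y) - 2))
    tx = (x - probe_x[i]) / (probe_x[i + 1] - probe_x[i])
    ty = (y - probe_y[j]) / (probe_y[j + 1] - probe_y[j])
    z00, z10 = probe_z[j, i], probe_z[j, i + 1]
    z01, z11 = probe_z[j + 1, i], probe_z[j + 1, i + 1]
    return (z00 * (1 - tx) * (1 - ty) + z10 * tx * (1 - ty)
            + z01 * (1 - tx) * ty + z11 * tx * ty)

print(bed_offset(150.0, 50.0))   # Z correction to apply at this point
```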

As will be clear from the video below, the folding for storage is a natural consequence of the Z-axis mechanism, which we reckon is pretty elegant and well executed: check out those custom CNC-machined aluminium parts! The hotend side of the Bowden tube feed is mounted on a pivot, so it folds down as well when the Z-axis is folded flat for storage. They even added a pivot to the other end of the cable bundle / Bowden feed, so the whole bundle folds down neatly inside the frame. Nice job!

If you want a little more detail about CoreXY kinematics, check out our handy guide. But what about the H-Bot, we hear you ask? Fear not, we're on it.


Was The Napier Nomad The Most Complex Aero Engine Ever Made?

From 1945 to 1955, a British aeronautical company called Napier & Son produced not just one but two versions of an intricate hybrid piston engine, which they named the Napier Nomad. The post-World War II era saw the development of several fascinating (and highly complex) piston-powered aeronautical engines alongside the emerging gas turbine engine designs. During this period, gas turbines were inefficient, unreliable, and primarily used for military applications. The (then) British Ministry of Supply commissioned the design and creation of a more fuel-efficient piston engine for aeronautical purposes, both military and civil, aiming to achieve gas turbine-like power while maintaining piston engine efficiency. Quite the challenge!

The specification aimed for 6,000 hp and the best possible fuel efficiency for long-range use. Napier knew that gas turbines were limited by their maximum operating temperature, which was constrained by the materials available at the time and which drove up fuel consumption and cut range. Piston engines, by contrast, could run at much higher peak temperatures. Combining the two principles into a superior design was a concept suggested by the engine designer Sir Harry Ricardo, who had consulted for Napier on other projects. Their complex solution was to build a gas turbine with a two-stroke diesel engine as its combustion chamber, merging the benefits of both.

Continue reading “Was The Napier Nomad The Most Complex Aero Engine Ever Made?”

Live Coding Techno With Strudel

The super talented [Switch Angel] is an electronic music artist with a few cool YouTube videos showing off just how thoroughly they have nailed live coding with Strudel. For us mere mortals, Strudel is a JavaScript port of TidalCycles, an algorithmic music generator that supports live coding, i.e. the music passed down to the synthesizer changes on the fly as you manipulate the code. It's magical to watch (and listen to!) how you can adapt and distort the music to your whims just by tweaking a few lines of code: no compilation steps, hardly any debugging, and instant results.

The traditional approach for music generators like this is to create lists of note/instrument pairs with appropriate modifiers. Each sound is specified in sequence, and adding a sound extends the sequence a little. Strudel/TidalCycles works a little differently: it is based on the idea of repeating patterns over a fixed time. Adding an extra sound, or breaking one sound slot down into multiple sounds, squeezes all the remaining slots so the whole pattern still repeats in the same period, with each sound individually taking up less space. This simple change makes it really easy to add layer upon layer of interest within a sequence with just a few extra characters, without recalculating everything else to fit. On top of this base, multiple effects can be layered (more than we can mention here), and all of them can be adjusted with pop-in sliders directly in the code.
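If that's hard to picture, here's a toy model in Python (not Strudel's own JavaScript) of how events get laid out when everything must fit into one fixed-length cycle.

```python
# Toy model of the fixed-cycle idea: nested lists stand in for subdivided
# slots, and every pattern always fills exactly one cycle.

def schedule(pattern, start=0.0, length=1.0):
    """Return (sound, start_time, duration) events for one cycle."""
    events = []
    slot = length / len(pattern)
    for i, item in enumerate(pattern):
        t = start + i * slot
        if isinstance(item, list):            # a slot broken down further
            events += schedule(item, t, slot)
        else:
            events.append((item, t, slot))
    return events

print(schedule(["bd", "sd"]))
# [('bd', 0.0, 0.5), ('sd', 0.5, 0.5)]

print(schedule(["bd", ["hh", "hh"], "sd"]))
# Extra sounds squeeze in; the cycle length never changes:
# [('bd', 0.0, 0.33), ('hh', 0.33, 0.17), ('hh', 0.5, 0.17), ('sd', 0.67, 0.33)]
```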

Continue reading “Live Coding Techno With Strudel”

BASICODE: A Bit Like Java, But From The 1980s

Those of us ancient enough to remember the time, or even to have grown up during the heyday of the 8-bit home computer, may recall the pain of trying to make your latest creation work on another brand of computer. They all spoke some variant of BASIC, yet were wildly incompatible with each other. BASICODE was a neat solution to this, acting as an early compatibility standard and abstraction layer: essentially a standardized BASIC subset with a few extra routines specialized for each platform.

But that's only part of the story. The BASICODE standard was invented by Dutch radio engineer Hessel de Vries, who worked for the Dutch national radio broadcaster Nederlandse Omroep Stichting (NOS), and the programs were designed to be broadcast over FM radio! The combination of standardization and free national distribution was brilliant, and the scheme lasted until 1992, when corporate changes and advancing technology led to its decline.

Continue reading “BASICODE: A Bit Like Java, But From The 1980s”

How To Train A New Voice For Piper With Only A Single Phrase

[Cal Bryant] hacked together a home automation system years ago, which more recently uses Piper TTS (text-to-speech) voices for various undisclosed purposes. Not satisfied with the robotic-sounding stock voices available, [Cal] set about an experiment to fine-tune the Piper TTS voice model, using a clone of a single phrase created by a commercial TTS voice as the starting point.

Before the release of Piper TTS in 2023, free-to-use TTS systems such as espeak and Festival sounded robotic and flat. Piper delivered much more natural-sounding output without requiring massive resources to run. To change the voice style, the Piper model can either be retrained from scratch or, with less effort, fine-tuned. In the latter case, the first problem to solve was how to generate the volume of training phrases needed for fine-tuning. This was solved using a heavyweight AI model, ChatterBox, which is capable of so-called zero-shot voice cloning. Check out the ChatterBox demo here.

As the loss function gets smaller, the model’s accuracy gets better

Training began with a corpus of test phrases in text form, chosen to ensure decent coverage of everyday English. [Cal] used ChatterBox to clone the voice from a single test phrase generated by a ‘mystery TTS system’, then synthesized 1,300 test phrases in the new voice. This audio set served as the training data for fine-tuning the Piper model on a lashed-up GPU rig.
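In code, that cloning step might look something like the sketch below. It assumes the ChatterboxTTS Python API (from_pretrained, and generate with an audio_prompt_path) behaves as its published examples suggest; the file names are placeholders rather than [Cal]'s actual setup.

```python
# Sketch of zero-shot cloning with ChatterBox: every phrase in the corpus is
# synthesized in the voice of one short reference clip. API usage and file
# names are assumptions, not taken from [Cal]'s project.

import pathlib
import torchaudio
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")
pathlib.Path("dataset").mkdir(exist_ok=True)

with open("phrases.txt") as f:                # the text corpus of phrases
    phrases = [line.strip() for line in f if line.strip()]

for n, phrase in enumerate(phrases):
    # audio_prompt_path points at the single cloned reference phrase
    wav = model.generate(phrase, audio_prompt_path="reference_phrase.wav")
    torchaudio.save(f"dataset/{n:05d}.wav", wav, model.sr)
```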

To verify accuracy, [Cal] used OpenAI’s Whisper software to transcribe the audio back to text, in order to compare with the original text corpus. To overcome issues with punctuation and differences between US and UK English, the text was converted into phonemes using espeak-ng, resulting in a 98% phrase matching accuracy.
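A verification loop along those lines might look like the sketch below, assuming the open-source whisper Python package and the espeak-ng command-line tool; the file layout is hypothetical.

```python
# Round-trip check: synthesize -> transcribe with Whisper -> compare at the
# phoneme level so punctuation and US/UK spelling don't count as mismatches.

import subprocess
import whisper

stt = whisper.load_model("base.en")

def phonemes(text):
    """Convert text to espeak-ng IPA phonemes for a spelling-agnostic compare."""
    out = subprocess.run(["espeak-ng", "-q", "--ipa", text],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

with open("phrases.txt") as f:
    phrases = [line.strip() for line in f if line.strip()]

matches = sum(
    phonemes(stt.transcribe(f"dataset/{n:05d}.wav")["text"]) == phonemes(phrase)
    for n, phrase in enumerate(phrases)
)
print(f"{100 * matches / len(phrases):.1f}% of phrases match at the phoneme level")
```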

After down-sampling the training set with SoX, it was ready for the Piper TTS training system. Despite all the preparation, running the software felt anticlimactic. A few inconsistencies in the dataset necessitated the removal of some data points, and after five days of training, with the rig parked outside in the shade due to concerns about heat, TensorBoard indicated that the model's loss function was converging. That's AI-speak for: the model was tuned and ready for action! We think it sounds pretty slick.
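The down-sampling itself is a one-liner per file with SoX. The 22,050 Hz mono target below is our assumption (it's the sample rate Piper's medium-quality voices typically use) rather than a figure from [Cal]'s write-up.

```python
# Resample the cloned dataset with SoX before handing it to Piper's trainer.
# The target rate and mono mix-down are assumptions.

import pathlib
import subprocess

out_dir = pathlib.Path("dataset_22k")
out_dir.mkdir(exist_ok=True)

for wav in sorted(pathlib.Path("dataset").glob("*.wav")):
    subprocess.run(["sox", str(wav), str(out_dir / wav.name),
                    "rate", "22050", "channels", "1"], check=True)
```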

If all this new-fangled AI speech synthesis is too complex and, well, a bit creepy for you, may we offer a more 1980s solution to making stuff talk? Finally, most people take the ability to speak for granted, until they can no longer do so. Here’s a team using cutting-edge AI to give people back that ability.

Convert Any Book To A DIY Audiobook?

If the idea of reading a physical book sounds like hard work, [Nick Bild’s] latest project, the PageParrot, might be for you. While AI gets a lot of flak these days, one thing modern multimodal models do exceptionally well is image interpretation, and PageParrot demonstrates just how accessible that’s become.

[Nick] demonstrates quite clearly how little code is needed to get from those cryptic black and white glyphs to sounds the average human can understand, specifically a paltry 80 lines of Python. Admittedly, many of those lines are pulling in libraries, and some are just blank, so functionally speaking, it’s even shorter than that. Of course, the whole application is mostly glue code, stitching together other people’s hard work, but it’s still instructive and fun to play with.

The hardware required is a Raspberry Pi Zero 2 W, a camera (in this case, a USB webcam), and something to hold it above the book. However, any Pi that can connect to a camera should also work with just a little configuration.

On the software side, [Nick] pulls in the CV2 library (the Python interface to OpenCV) to handle the camera, setting it to full HD resolution. Google's GenAI library is used to talk to the Gemini 2.5 Flash LLM via an API endpoint; it takes a captured image and a trivial prompt, and returns the whole page of text, quick as a flash.
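A minimal sketch of that capture-and-read step is shown below. It assumes the google-genai Python client (genai.Client, generate_content, types.Part.from_bytes) and a GEMINI_API_KEY in the environment; it isn't [Nick]'s actual script, and the prompt is just a stand-in.

```python
# Grab a full-HD frame from the webcam and ask Gemini 2.5 Flash to read the
# page. Client usage and the prompt wording are assumptions.

import cv2
from google import genai
from google.genai import types

cam = cv2.VideoCapture(0)                      # the USB webcam
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)        # full HD, as in the article
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
ok, frame = cam.read()
cam.release()
assert ok, "no frame captured from the camera"

_, jpeg = cv2.imencode(".jpg", frame)          # compress the frame for the API

client = genai.Client()                        # reads GEMINI_API_KEY from the env
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=jpeg.tobytes(), mime_type="image/jpeg"),
        "Transcribe all of the text on this book page.",
    ],
)
page_text = response.text
print(page_text)
```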

Finally, the script hands that text over to Piper, which turns it into a speech file in WAV format. This can then be played on an audio device with a call out to the console aplay tool. It's all very simple at this level of abstraction.
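That last hop from text to the speaker could be as simple as the sketch below, assuming Piper's usual command-line flags; the voice model name is a placeholder.

```python
# Pipe the recognized text through the piper CLI to get a WAV, then play it
# with aplay. The model file name and flags are assumptions.

import subprocess

page_text = "Text recovered from the photographed page."   # from the Gemini step

subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "page.wav"],
    input=page_text, text=True, check=True,
)
subprocess.run(["aplay", "page.wav"], check=True)
```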

Continue reading “Convert Any Book To A DIY Audiobook?”