Comments on: Nanochat Lets You Build Your Own Hackable LLM
https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/

By: FransAtFrance https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8197182 Tue, 21 Oct 2025 08:03:36 +0000 In reply to Sean.

There is also a company where people can rent out their own GPUs; if I'm not mistaken, an H100 can be rented for US$1/hour there.

By: Sean https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8196913 Mon, 20 Oct 2025 22:05:28 +0000 In reply to None.

To my understanding – which is likely naïve, so I welcome correction – H100s are 989 TOPS each, and he is using eight of them (8 × H100, not a single "8XH100" card). An RTX 3060, by comparison, is 101 TOPS.

So the eight-GPU node is roughly 78 times more powerful in raw compute terms only; ((989 / 101) × 8) × 4 ≈ 313 hours, or about 13 days.

However, it doesn't scale in this linear fashion, as you have way less memory (6 GB vs 94 GB), and with swapping and so on the time likely increases significantly – if it's even possible at all.

There is also the parallelism of the eight H100s, the thermal throttling of your 3060, and the power of the underlying hardware orchestrating the entire shebang.

The array is about US$10 an hour to rent, so not that bad.
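
For the curious, here is a rough sketch of that back-of-napkin scaling in Python, assuming the per-GPU throughput figures quoted above and the roughly four-hour 8×H100 run used in the arithmetic; the memory, throttling, and parallelism caveats mean the real number would only be worse.

```python
# Back-of-napkin estimate: how long a ~4-hour 8xH100 nanochat speedrun
# might take on a single RTX 3060, by raw tensor throughput alone.
# The figures are the ones quoted in the comment above; real-world time
# would be worse due to the 3060's much smaller VRAM, thermal
# throttling, and the loss of 8-way parallelism.

H100_TOPS = 989        # per-GPU throughput quoted above
RTX3060_TOPS = 101     # single RTX 3060 throughput quoted above
NUM_H100 = 8           # the node has eight H100s
H100_RUN_HOURS = 4     # approximate wall-clock time assumed for the 8xH100 run

speedup = (H100_TOPS / RTX3060_TOPS) * NUM_H100   # ~78x in raw compute
est_hours = speedup * H100_RUN_HOURS              # ~313 hours

print(f"Raw-compute speedup of 8x H100 over one RTX 3060: ~{speedup:.0f}x")
print(f"Estimated single-3060 training time: ~{est_hours:.0f} hours "
      f"(~{est_hours / 24:.0f} days)")
```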

By: None https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8196892 Mon, 20 Oct 2025 21:11:56 +0000 Has someone calculated how long it would take to train this model on a single RTX 3060? My back-of-napkin calculations point to around 200 hours, or slightly over a week. Does this sound correct?

By: Helena https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8196856 Mon, 20 Oct 2025 19:08:20 +0000 In reply to Dan.

Check the Terms of Service at the bottom of every page:
“you grant […] SupplyFrame an irrevocable, nonexclusive, royalty-free and fully paid, worldwide license, with the right to grant sublicenses, to reproduce, distribute, publicly display, publicly perform, prepare derivative works of, incorporate into other works, and otherwise use the User Content for purposes of operating and providing the Service to you and to our other users.”

Sounds to me like we’ve already consented to Hackaday/SupplyFrame doing that kind of thing.

By: Hugo Oran https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8196854 Mon, 20 Oct 2025 19:01:54 +0000 In reply to Dan.

Sorry, I inadvertently processed and analysed your comment after reading it. :)

By: Dan https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8196839 Mon, 20 Oct 2025 18:14:47 +0000 In reply to Jan-Willem Markus.

I only consent to my comments being read, not processed and analysed.

By: Jason https://hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/#comment-8196835 Mon, 20 Oct 2025 18:04:38 +0000 In reply to SETH.

You can, of course, train useful AI on low-spec hardware. I was doing that more than 20 years ago, and it's even more viable today. But what you can't do is train arbitrarily complex AI on low-spec hardware.
