

It’s misleading.
IBM is very much into AI, as a modest, legally trained, economical tool. See: https://huggingface.co/ibm-granite
But this is the CEO saying “We aren’t drinking the Kool-Aid.” It’s shockingly reasonable.
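For anyone curious, here’s a minimal sketch of trying one of those Granite checkpoints with plain `transformers`. The exact model name is just an example; check the linked org page for whatever instruct model they currently list:

```python
# Minimal sketch (not IBM's docs): poking at one of the open-weight Granite
# models from https://huggingface.co/ibm-granite with plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.1-2b-instruct"  # example checkpoint, pick any from the org
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "Summarize this clause in plain English: ..."}]
inputs = tok.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```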


That’s interesting.
I dunno if that’s any better. Compiler development is hard, and expensive.
I dunno what issue they have with LLVM, but it would have to be massive to justify building around it and then switching away to re-invent it.


…The same Zig that ditched LLVM, to make their own compiler from scratch?
This is good. But also, this is sort of in character for Zig.


They’re pretty bad outside of English-Chinese actually.
Voice-to-voice is all relatively new, and it sucks if it’s not all integrated (eg feeding a voice model plain text so it loses the original tone, emotion, cadence and such).
And… honestly, the only models I can think of that’d be good at this are Chinese. Or Japanese finetunes of Chinese models. Amazon certainly has some stupid policy where they aren’t allowed to use them (even with zero security risk since they’re open weights).


Honestly, even a dirt-cheap language model (with sound input) would tell you it’s garbage. It could itemize the problematic parts of the sub.
But they didn’t use that, because this isn’t machine learning. It’s Tech Bro AI.
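A rough, purely illustrative sketch of that sanity check: it assumes the dub’s subtitle text has already been dumped somewhere, and that some cheap OpenAI-compatible endpoint is on hand (the model name below is a placeholder, not anything Amazon actually runs):

```python
# Hedged sketch of the "dirt-cheap model itemizes the bad parts" idea.
from openai import OpenAI

client = OpenAI()  # or OpenAI(base_url=...) for a local/cheap endpoint

def review_subs(sub_text: str, chunk_size: int = 4000) -> list[str]:
    """Feed the subtitle text to a small model in chunks and collect its complaints."""
    reports = []
    for i in range(0, len(sub_text), chunk_size):
        chunk = sub_text[i:i + chunk_size]
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any cheap instruct model would do
            messages=[
                {"role": "system",
                 "content": "You are reviewing machine-dubbed subtitles. "
                            "List lines that are mistranslated, unnatural, or lose tone."},
                {"role": "user", "content": chunk},
            ],
        )
        reports.append(resp.choices[0].message.content)
    return reports
```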


All true, yep.
Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That’s even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.


I am well into Avatar obsession. It’s lived rent free in my brain too long.
Watch it till you get bored. And do not, under any circumstances, watch the last season.


Isn’t that in the same universe as Eureka?
I am down for Eurekaverse.


First thing: Lemmy is in need of content and likes recruiting new users. Hence you got 315 replies, heh.
Basically, if you aren’t a bigot, you don’t have to worry about what you say. You can be politically incorrect in any direction and not get a global/shadowban from the Fediverse.
Each instance has its own flavor and etiquette.


They are human. There’s nothing wrong with acknowledging that, while also reiterating that they basically shouldn’t be in that state.
Also, I think it’s important to draw a line between the “rich” (well-off working professionals like researchers, doctors, small entrepreneurs), and people with more wealth than many sovereign nations put together.


This is interesting, because “add ads” usually means margins are slim, and the product is in a race to the bottom.
If ChatGPT were the transcendent, priceless, premium service they’re hyping it as… why would it need ads?


Same with auto overclocking mobos.
My ASRock sets VSoC to a silly-high voltage with EXPO. Set that back down (and fiddle with some other settings/disable the IGP if you can), and it does help a ton.
…But I think AMD’s MCM chips just do idle hotter. My older 4800HS uses dramatically less, even with the IGP on.


Yeah.
In general, ‘big’ CPUs have an advantage because they can run at much, much lower clockspeeds than atoms, yet still be way faster. There are a few exceptions, like Ryzen 3000+ (excluding APUs), which idle notoriously hot thanks to the multi-die setup.


Eh, older RAM doesn’t use much. If it runs close to stock voltage, maybe just set it at stock voltage and bump the speed down a notch; you still keep most of the speed advantage over stock, which is a nice energy-per-task gain.


Depends.
Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.
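For the throttling part, here’s a minimal Linux-only sketch (assumes the cpufreq sysfs interface and root; the hardware-level steps like pulling the GPU or disabling audio/wifi still happen in the BIOS or by hand):

```python
#!/usr/bin/env python3
"""Rough power-saving sketch for an old Linux box: switch every core to the
'powersave' governor and optionally cap its max clock via sysfs (needs root)."""
from pathlib import Path

CPU_ROOT = Path("/sys/devices/system/cpu")

def throttle_cpu(max_khz: int | None = None) -> None:
    """Set the 'powersave' governor on each core, optionally capping its max frequency."""
    for policy in sorted(CPU_ROOT.glob("cpu[0-9]*/cpufreq")):
        (policy / "scaling_governor").write_text("powersave\n")
        if max_khz is not None:
            (policy / "scaling_max_freq").write_text(f"{max_khz}\n")

if __name__ == "__main__":
    # Example: pin everything to the powersave governor and roughly 1.2 GHz.
    throttle_cpu(max_khz=1_200_000)
```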


This is what basically anyone in the ML research/dev industry will tell you, heh.


Not gonna lie. When I need a wank, I need a wank.
But “professional” porn (like pornhub largely hosts) always felt gross to me.


I suppose not. Not yet.
I know people are particular about WMs, but having to minimize a window vs keeping the window decoration in place seems like a… very minor distinction.
Is the use case rearranging a ton of windows? Something like that?


It’ll literally be a criminal hub, with a bunch of anonymized posts joking about dodging corpos. Probably.
And owls. Still owls.
FBI? No, I am not opening up.