• 0 Posts
  • 27 Comments
Joined 2 years ago
Cake day: June 22nd, 2023

  • If you don’t want it new, you can get used sets at places like eBay.

    If you don’t need the instructions (Lego lets you download the PDFs from their website), you can order individual parts from marketplaces like BrickLink or BrickOwl. This is quite involved and can be confusing if you haven’t done it before: you need to find a seller (or several) who has all the parts you need in the right quantities and colors and who will ship to your region.

    You can buy parts from compatible brands. I’ve used Webrick before; it works just like BrickOwl but can be cheaper for parts that are rare in the Lego world.

    You can also look for sets from compatible brands, like this one. Sometimes they are copies of Lego sets (which I find questionable), sometimes they are unique designs. The quality of those can be hit and miss.

    Alternatively, do retired sets come back into circulation again?

    Usually not, but sometimes it happens, like with the 2017 Saturn V (21309) and the 2021 Saturn V (92176).

  • This article is full of errors!

    At its core, an LLM is a big (“large”) list of phrases and sentences

    Definitely not! An LLM is the combination of an architecture and its model parameters. It’s just a bunch of numbers: no list of sentences, no database. (It seems the author confused “LLM” with the LLM’s training dataset?)
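
    To make that concrete, here is a minimal sketch (PyTorch assumed; the tiny two-layer stand-in architecture is made up for illustration): a model is an architecture plus its parameters, and the parameters are nothing but arrays of floating-point numbers.

    ```python
    import torch.nn as nn

    # Made-up toy architecture standing in for a real transformer; the point
    # is only that the learned state is numeric tensors, not stored sentences.
    model = nn.Sequential(
        nn.Embedding(num_embeddings=1000, embedding_dim=64),  # token embeddings
        nn.Linear(64, 1000),                                  # logits over the vocabulary
    )

    for name, param in model.named_parameters():
        print(name, tuple(param.shape), param.dtype)  # e.g. 0.weight (1000, 64) torch.float32

    total = sum(p.numel() for p in model.parameters())
    print(f"{total} floating-point numbers, and not a sentence in sight")
    ```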

    an LLM is a storage space (“database”) containing as many sample documents as possible

    Nope. This applies to the dataset, not the model. I guess you can argue that memorization happens sometimes, so it might have some features of a database. But it isn’t one.

    Additional data (like the topic, mood, tone, source, or any number of other ways to categorize the documents) can be provided

    LLMs are pretrained in a self-supervised fashion: just sequences of tokens, no labels.
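
    A minimal sketch of what that training signal looks like (PyTorch assumed, with a random tensor standing in for the model’s output): the “labels” are just the same token sequence shifted by one position, so no human annotation is involved.

    ```python
    import torch
    import torch.nn.functional as F

    vocab_size = 1000
    tokens = torch.randint(0, vocab_size, (1, 16))  # one raw, unlabeled token sequence

    inputs  = tokens[:, :-1]   # the model reads these...
    targets = tokens[:, 1:]    # ...and must predict each next token

    # Stand-in for a real LLM's output: any module mapping tokens to logits fits here.
    logits = torch.randn(1, inputs.shape[1], vocab_size, requires_grad=True)

    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()  # gradients flow from next-token prediction alone
    print(loss.item())
    ```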

    Typically, an LLM will cover a single context, e.g. only social media

    I’m not aware of any LLM that does this. What’s the “context” of GPT-4?

    software developers have gone to great lengths to collect an unfathomable number of sample texts and meticulously categorize those samples in as many ways as possible

    The closest real thing is the RLHF process used to fine-tune an existing LLM for a specific application (like ChatGPT). The pretraining dataset itself is not annotated or categorized in any way.

    a GPT uses the words and proximity data stored in LLMs

    This is confusing. “GPT” (generative pre-trained transformer) is the architecture of the LLM; a GPT doesn’t “use” an LLM, it is one.

    it is impossible for it to create something never seen before

    This isn’t accurate: depending on the temperature setting, an LLM can output literally any token at any time with non-zero probability. It can absolutely produce things it hasn’t seen.
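
    A minimal sketch of why (PyTorch assumed, with made-up logits for a four-token vocabulary): softmax assigns every token a strictly positive probability at any finite temperature, so sampling can in principle emit any token, and therefore sequences that never appeared in the training data.

    ```python
    import torch

    logits = torch.tensor([4.0, 1.0, -2.0, -7.0])  # made-up scores for 4 tokens

    for temperature in (0.5, 1.0, 2.0):
        probs = torch.softmax(logits / temperature, dim=-1)
        print(temperature, probs.tolist())  # every entry is > 0

    # Sampling from the distribution can therefore pick even the least likely token.
    sample = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
    print(sample.item())
    ```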

    Also, I think it’s too simplistic to just assert that LLMs are not intelligent. It mostly depends on your definition of intelligence, and there are lots of philosophical discussions to be had (see also the AI effect).