• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • I don’t like the idea of restricting the model’s corpus further. Rather, I think it would be good if it used a bigger corpus, but added the date of origin for each element as further context.

    Separately, I think it could be good to train another LLM to recognize biases in various content, and then use its output as further context for the main LLM when it ingests that content. I’m not sure how to avoid bias in that second LLM, though. Maybe complete freedom from bias is an unattainable ideal, one you can only approach but never reach.
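
    One way the date-of-origin idea could look in practice. This is a minimal sketch under stated assumptions: `CorpusDocument`, `bias_score`, and `with_context` are hypothetical names for illustration, not part of any real training pipeline, and the bias score is assumed to come from the separate classifier described above.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CorpusDocument:
        """Hypothetical wrapper pairing a corpus element with its metadata."""
        text: str
        origin_date: str   # date the content was published, e.g. ISO format
        bias_score: float  # assumed output of a separate bias classifier, 0.0-1.0

    def with_context(doc: CorpusDocument) -> str:
        """Prepend provenance metadata so the model sees it alongside the content."""
        header = f"[origin_date: {doc.origin_date}] [bias_score: {doc.bias_score:.2f}]\n"
        return header + doc.text

    doc = CorpusDocument("The moon landing was in 1969.", "1970-01-01", 0.05)
    print(with_context(doc))
    ```

    The point is just that provenance becomes part of the token stream the model is trained on, rather than being thrown away during corpus preparation.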

  • I remember Asimov’s books in the Robots/Foundation universe as being fairly coherent. Newer books revealed things that weren’t alluded to in earlier books, but they didn’t break continuity.

    The only inconsistency I can think of is that the books written before Foundation’s Edge didn’t feature the computers we saw starting with that book, but it’s not like the older books specifically stated they weren’t there.

    And if all else fails, you can always explain anything by bringing The End of Eternity into the canon 😛