So hear me out...


This idea is stupid. But on Star Trek (VOY, TNG, and DS9 at least), they measured their data in "quads" ( memory-alpha.fandom.com/wiki/Q… ). The unit was never defined, because it's just sci-fi and doesn't need a real definition. But... what if a quad is a quad-float, i.e. a 128-bit floating point value? That would mean all the storage could be done as LLM or other neural-network-style models, vector embeddings, and so on. Given what we've got today with transformer-style models for translation, chat, etc., if you had ultra-powerful computers that could do those calculations at such gigantic precision, you'd be able to store very accurate data and transform it back and forth between vector embeddings and other fancy structures. That would enable very powerful searches, and the kind of analysis we're trying to use LLMs for today and see in the shows when people talk to the computer. It would also explain a lot about the universal translators from ENT onward, and could even help make sense of "Darmok and Jalad at Tanagra." And then Voyager even has bio-neural circuitry for doing things faster, some kind of organic analog computing "at the edge"; program each node with a machine learning model and let it react using its own weights and embeddings, and that could easily explain how that works too.
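(Aside, just to make the precision point concrete: a 128-bit "quad" float is IEEE 754 binary128, good for roughly 34 significant decimal digits versus a double's ~16. A toy sketch of that gap, assuming GCC's __float128 extension and libquadmath, nothing Trek-specific:)

    /* Toy comparison of a 64-bit double vs. a 128-bit "quad" float.
       Assumes GCC's __float128 extension and libquadmath;
       build with: gcc quads.c -lquadmath */
    #include <stdio.h>
    #include <quadmath.h>

    int main(void) {
        double     d = 1.0  / 3.0;   /* good to ~16 significant decimal digits */
        __float128 q = 1.0Q / 3.0Q;  /* good to ~34 significant decimal digits */

        char buf[128];
        quadmath_snprintf(buf, sizeof buf, "%.36Qf", q);

        printf("double: %.36f\n", d);  /* digits drift after ~16 places */
        printf("quad:   %s\n", buf);   /* stays correct to ~34 places */
        return 0;
    }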

This idea honestly feels too stupid to be real, but it could explain so much.

Perl.social server upgrades


So if anyone noticed things being a bit unstable recently: it looks like the server was occasionally hitting the OOM killer, which caused some odd behavior. In response I've added more RAM to the VPS running perl.social, so this shouldn't happen anymore. It will probably also mean things run a bit faster, since more data can sit in the caches.