of course now that Apple has Intelligence, I'll write a little. while it was just everyone else in the industry, I kind of didn't care. I thought the huge energy consumption was in the model training? and that inference, comparatively, isn't so bad? also, training on the open web, I don't have an issue with it. why? because it's the open web. teaching reading and writing isn't limited, well, unless there are banned books, but learning, that should be something we all want, right?

I understand that when a large language model spits out the contents of people's work as general knowledge, without reference, that's bad. But I do that all the time, with the preface: here's something I read somewhere, or I saw an article, or more accurately, I read the headline for this. That seems ok in conversation, but not on social media, or in a blog post? But what about an email? Or a chat message to a friend? hmmm. what if these "AI" chatbots always had to provide references? Oh, you want to know how old Katie Ledecky is? here's where I found that info. Is that all that's needed? Can you ask these models what their references are?

As for generative stuff, I don't see the point. Just not something for me. Seems like a useless tool. Maybe because I don't want to pretend to be something I'm not?