16 events
when toggle format what by license comment
Jan 12 at 23:25 comment added Cerbrus They pretend to reason. They're great at faking it. I'm sorry, but look into how language models generate text before you claim such nonsense, @ChatGPT. You're repeating the same basic arguments all those LLM-fanboys made years ago.
Jan 12 at 22:18 comment added maxhodges They reason. Have you never used one? Try it.
Jan 12 at 19:51 comment added Cerbrus @ChatGPT LLMs still do not reason. They do not know. All they do is generate output that always needs to be verified, as verifying for correctness is something an LLM, by its very nature, cannot do itself. Sure, they're very good at approximating an answer, and they're often correct, but their output can never be blindly trusted. That's the whole point of this answer.
Jan 12 at 2:53 comment added maxhodges The assumption that AI can't generate valuable knowledge seems flawed. LLMs have synthesized more information than any human could read in thousands of lifetimes. Their ability to connect diverse concepts and identify patterns makes them legitimate contributors to knowledge creation, not just consumers of it.
Feb 14, 2023 at 10:40 comment added sunny moon Stack Overflow is a knowledge repository, so I feel it should be used to train AI models. Rest assured, it absolutely is, without our knowing.
Dec 23, 2022 at 16:44 comment added Kevin B No one here is speaking against assistive technologies; you're simply using that as an excuse to summarily dismiss the problems this tool has caused. Until these problems can be dealt with while allowing it to be used as an assistive technology, it's dead in the water.
Dec 23, 2022 at 16:42 comment added Summer-Sky @KevinB Yes, here they are, like cjn, answering their own questions and wondering why they could not be answered correctly... Anyway, if you find the other answers that regarded LLMs as an assistive technology, and/or you all get past the ableism, I would be glad if you could add a link to them at my answer: meta.stackoverflow.com/a/422306/3623574. Also, to make it clear, I am looking for a compromise; I am also against piping from ChatGPT to SO. Merry Christmas for now.
Dec 23, 2022 at 16:34 comment added Kevin B @Summer-Sky I mean... that's provably false; there are at least a dozen answers here suggesting exactly that (several may be deleted). People using it in this way is why this temporary ban exists.
Dec 23, 2022 at 16:32 comment added Summer-Sky Nobody wants to pipe the output of ChatGPT into SO directly. And assuming that ChatGPT knows it all is an indicator of a lack of understanding of LLMs.
Dec 21, 2022 at 20:13 comment added GaryFurash Isn't ChatGPT probably already training on Stack Overflow? It's just consuming internet content, so it would surprise me if it weren't already.
Dec 12, 2022 at 19:34 comment added Fattie Solomon, just FWIW, what you mention (reading the tvOS documentation, understanding it, doing test research, creating a solution) is completely inconceivable with today's technology. (It would be like saying "travel at 1/2 light speed to Mars".) The only thing ChatGPT does is randomly formulate sentences that have the rhythm and grammar of the example corpus of text. That's it. (The only reason it sometimes "answers correctly" is that it randomly munges up text on the topic, which is likely to be correct-ish. It does not even *understand* what a "question" is.)
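The "randomly formulate sentences" claim in the comment above can be made concrete with a toy next-token sampler. This is a deliberately tiny sketch (the vocabulary and probabilities are hypothetical, and a real LLM conditions on the whole context window via a neural network, not a lookup table), but the generation loop itself, i.e. sample the next token from a probability distribution and repeat, is the same shape:

```python
import random

# Toy next-token "model": probabilities conditioned only on the previous
# token. Hypothetical numbers for illustration; a real LLM has tens of
# thousands of tokens and conditions on the entire preceding context.
model = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"<end>": 1.0},
}

def generate(seed=0):
    """Sample tokens one at a time until the end marker is drawn."""
    random.seed(seed)
    token, out = "<start>", []
    while token != "<end>":
        choices = model[token]
        token = random.choices(list(choices), weights=list(choices.values()))[0]
        if token != "<end>":
            out.append(token)
    return " ".join(out)

print(generate())  # e.g. "the cat sat" -- fluent, but nothing is "understood"
```

The point of the sketch: the output is grammatical because the probabilities encode the corpus's patterns, not because the sampler knows what a cat is.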
Dec 12, 2022 at 19:21 comment added Solomon Ucko @Fattie I suppose it could theoretically read the data from tvOS (source code, documentation, executable files, etc.) and make use of that, but getting that to work well would likely be a massive research project. More likely, it could help the users who (try to) post duplicates or near-duplicates, or at least come up with ideas for commonly-similar problems, rather than wasting other users' time.
Dec 10, 2022 at 4:15 comment added Fattie 10% of the questions on SO are simple, but 90% are actually novel. When tvOS is released and nobody knows how to parallax a button, it is puerile to think a bot can "answer" that question.
Dec 8, 2022 at 1:04 comment added wojtow Given its ability to supply plausible answers that do look like legitimate SO answers, I'm guessing it has been trained on SO answers already. The question is: when it starts using its own answers as more training data (creating a loop), does it get more or less capable? Possibly smarter, because it is taking human up/down votes and comments as feedback, or dumber, because noise is amplified (like continually re-compressing a JPEG).
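The feedback-loop concern in the comment above can be illustrated with a toy simulation. This is not how any real LLM is trained; it is a minimal statistical sketch in which each "generation" fits a Gaussian to samples drawn from the previous generation's fitted Gaussian, so sampling noise compounds round after round, analogous to re-compressing a JPEG:

```python
import random
import statistics

# Toy "train on your own output" loop. Generation 0 is the "human data"
# distribution; every later generation is fitted only to samples produced
# by the previous fit, so estimation noise accumulates instead of
# averaging out.
random.seed(42)
mu, sigma = 0.0, 1.0  # the original "human data" distribution
for generation in range(10):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(samples)    # refit on the model's own output
    sigma = statistics.stdev(samples)

# After 10 rounds the fitted parameters have drifted away from (0, 1).
print(mu, sigma)
```

With human feedback in the loop (the up/down votes the comment mentions), each round gets an external correction signal; without it, there is nothing pulling the estimates back toward the original data.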
Dec 6, 2022 at 9:38 comment added CrandellWS This is above my pay level, but Stack training AI models: yeah, that is how it should be. meta.stackoverflow.com/a/421878/1815624
Dec 5, 2022 at 18:25 history answered cottontail CC BY-SA 4.0