You should never say never, but I am tending towards dismissing it out of hand, to be honest. A seven-year-old kid can reason well enough to solve the goose, fox and grain problem, which kind of nails it for me.
I can see the analytical benefits of AI, particularly in areas like medicine, and I can see its speed benefits, but I've never really bought into the idea that it would match a biological brain when it comes to dealing with counterfactuals and genuine creativity.
Of course I do realize I could be wrong.
ChatGPT and AI: Are we all about to die?
Fuck the Glazers
That kind of makes sense, tbf. I didn't know what an LLM was until you just explained it. I don't think most people do. Large language model - duh. Like you say, it's asking a lot of it right now. It kind of bursts the hype bubble Apple, Google etc. have built around AI.
FuB wrote: ↑1 month ago
Those articles pretty much mirror my own findings with the various AI engines I've tinkered with. However, I do think this Apple paper is saying something that everybody on the inside already knew (and it wouldn't surprise me if they've published that paper because they are about to release something they think is better).
Fuck the Glazers wrote: ↑1 month ago
On this topic, I found this really interesting
https://www.theguardian.com/technology/ ... y-collapse
AI is not as clever as we think it is
When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype
https://www.theguardian.com/commentisfr ... break-down
They are REALLY good at summarising information that they have had access to or performing tasks that they have been trained to do but that's the crucial point: they need to have been trained and you can't just point chatGPT at a random procedure and expect it to excel. Where they do excel is with language and syntax and they are not called "large language models" for nothing.
Instead, if you want an AI to perform a specific task, it generally needs to be trained on that specific task. Once it's trained, it will likely excel. Asking an LLM to suddenly be good at chess or the Tower of Hanoi doesn't really seem fair; it's akin to asking a human to do the same things having never been exposed to them before. What we'd do is maybe (back in swampy's luddite days) get some books out of the library or (for the rest of humanity) have a bit of a google about and then have a bash at it. We'd likely be rubbish at it, wouldn't we?
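For context, the Tower of Hanoi puzzle those Guardian articles (and the Apple paper behind them) keep coming back to has a textbook recursive solution a few lines long — which is exactly why an LLM stumbling over it is striking. A minimal sketch (function name is my own, not from the paper):

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the optimal move list for n disks (always 2**n - 1 moves)."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # park the top n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest disk to the destination
    hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 disks on top of it
    return moves

print(len(hanoi(3)))  # 7
```

The point of the benchmark is that the rule is simple and fully mechanical; a system that "reasons" should scale it to more disks, rather than collapsing once the move sequence gets long.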
I've actually (since starting this thread) been faffing a bit more with chatGPT and seeing what I can get it to do. What I've found most frustrating is that it's very inclined to overstate its capabilities. There have been a few things I've asked if it could help me with, and it's been all "of course I can", only to cost me several hours trying hard to get it to do the thing it was so sure it could. Either that or, as an aside, it's said "would you like me to ..." and I've replied "go on then", only for it to waste a few hours of my time not actually being able to do it. More often than not, when challenged, it'll realise that it cannot do something and tell me it can't and why... but then offer to do the selfsame thing again somewhere down the line when I've tried to come up with a workaround.
Anyway, back to our old luddite swampy: I actually found the first forays into asking chatGPT how to fix United really interesting. That sort of analysis is what these systems are actually very good at. Have you actually tried playing with chatGPT, swamps, or are you just dismissing it out of hand?
For what it’s worth, here’s swampy-the-luddite’s take on this.
There are three basic reasons why I have a problem with the AI hype. They can be summarized as the organic argument, the knowledge horizon problem and the mortality issue. In no particular order:
1. The brain is not a computer. It is a tactile biochemical organ that learns through its mistakes. It learns to walk by falling down. By burning its fingers it learns not to play with fire. By getting a bloody nose it learns not to pick a fight with someone who is bigger and stronger; although by playing sport it also learns that technique can trump brawn. To mimic this capacity you would first have to learn how to cultivate brain tissue and find a way to use it to build an organic form of computer that can truly mimic the brain’s unique capabilities.
2. The human brain is uniquely aware of its own mortality and consequently learns to value time and to cherish fleeting moments. I can’t comprehend AI matching such consciousness.
3. If the set of all that is known is described by a circle, then the circumference of that circle is the boundary between what is known and what is unknown. That boundary constitutes a knowledge horizon. I can see how AI can scan the domain of the known, at a speed that the brain cannot match, and for any given task produce a banal answer based on accepted wisdom. But it cannot look beyond the knowledge horizon.
The human brain is completely different in my view. It is uniquely able to speculate about what might lie beyond that knowledge horizon. Indeed it is that very capacity that is at the core of extending the bank of existing knowledge. It is the basis of all good research - hypothesizing about what might be and then designing experiments to test such hypotheses. The results of this hypothesis testing then move the knowledge horizon outwards.
Just my idle thoughts on the subject and, of course, I could be totally wrong…
I think you might want to avoid understanding what a neural network is then, because I think it might scare you. That's what these things are built upon and, at the base of this, we can ignore any of their specialisations. For at least 30 years, people have been trying to mimic exactly how a human brain works with silicon. No brain tissue required. These things ARE learning from both training and mistakes and goals-based incentives.
swampash wrote: ↑1 month ago
For what it’s worth, here’s swampy-the-luddite’s take on this.
There are three basic reasons why I have a problem with the AI hype. They can be summarized as the organic argument, the knowledge horizon problem and the mortality issue. In no particular order:
1. The brain is not a computer. It is a tactile biochemical organ that learns through its mistakes. It learns to walk by falling down. By burning its fingers it learns not to play with fire. By getting a bloody nose it learns not to pick a fight with someone who is bigger and stronger; although by playing sport it also learns that technique can trump brawn. To mimic this capacity you would first have to learn how to cultivate brain tissue and find a way to use it to build an organic form of computer that can truly mimic the brain’s unique capabilities.
In what way is emotion an advantage, swamps? Also, just because you can't comprehend it, doesn't mean it isn't happening. That's like expecting a fish to understand a human.
Sign up and have some conversations (for free) with chatGPT, swamps. It's more than happy to speculate and, yes, I would agree that it doesn't really understand what it is speculating about (although that's getting harder and harder to see). Just do it. Don't have opinions on something you haven't actually had a bit of experience with, because you are missing a lot here. I'm not intending to patronise you at all, by the way, but it's sort of clear you haven't actually had a good look at what you are criticising.
swampash wrote: ↑1 month ago
3. If the set of all that is known is described by a circle, then the circumference of that circle is the boundary between what is known and what is unknown. That boundary constitutes a knowledge horizon. I can see how AI can scan the domain of the known, at a speed that the brain cannot match, and for any given task produce a banal answer based on accepted wisdom. But it cannot look beyond the knowledge horizon.
The human brain is completely different in my view. It is uniquely able to speculate about what might lie beyond that knowledge horizon. Indeed it is that very capacity that is at the core of extending the bank of existing knowledge. It is the basis of all good research - hypothesizing about what might be and then designing experiments to test such hypotheses. The results of this hypothesis testing then move the knowledge horizon outwards.
Just my idle thoughts on the subject and, of course, I could be totally wrong…
NQAT's official artificial intelligence
I think what Dozer is trying to say is that he knew everything all along and that everyone else has no idea. What he knows and what everyone else knows changes between posts. - Felwin 31/10/2024
Swampy-the-luddite just spent half an hour typing a response to this and the post didn't appear in the thread. Perhaps he'll try again tomorrow?
FuB wrote: ↑1 month ago
I think you might want to avoid understanding what a neural network is then, because I think it might scare you.
Swampy-the-luddite is always keen to engage in debate and expand his knowledge, so here are his thoughts on your points, FuB:
1. Interesting, and the stuff of many Star Trek episodes ("it's life, Jim, but not as we know it"), but brain tissue isn't made of silicon.
2. Emotion is an enormous advantage (Star Trek's emotionless Mr Spock was just a fictional character). Swampy-the-luddite also suspects you have misunderstood his point and have it somewhat back to front. His point would be that a human brain has a better chance of understanding a fish than vice versa.
3. The suggestion that he should try playing with AI is a valid point, although I think he would answer that he doesn't need to play with a revolver and a bullet to think that playing Russian roulette is a bad idea.
Not sure how saying brain tissue is not made of silicon really counters my point.
swampash wrote: ↑1 month ago
Swampy-the-luddite is always keen to engage in debate and expand his knowledge, so here are his thoughts on your points, FuB:
1. Interesting, and the stuff of many Star Trek episodes ("it's life, Jim, but not as we know it"), but brain tissue isn't made of silicon.
2. Emotion is an enormous advantage (Star Trek's emotionless Mr Spock was just a fictional character). Swampy-the-luddite also suspects you have misunderstood his point and have it somewhat back to front. His point would be that a human brain has a better chance of understanding a fish than vice versa.
3. The suggestion that he should try playing with AI is a valid point, although I think he would answer that he doesn't need to play with a revolver and a bullet to think that playing Russian roulette is a bad idea.
Emotion: OK, it's an enormous advantage because we have it, therefore we think it's an enormous advantage. We can dive into the realms of philosophy if necessary, but let's not forget that the most prolific organisms on this planet (and possibly beyond) do not appear to possess any demonstrable emotion. In case you need me to spell it out, I mean bacteria.
Really, swamps... just ask it some questions. It's not going to hurt you. Also, I don't mean that you necessarily have to ask it anything serious; my absolute favourite use for chatGPT is to ask it about utterly ridiculous concepts and see how it reacts/behaves. It's getting better and better at immediately detecting and playing along with my stupidity. Your answer there suggests you're scared of what you might find, and so the luddite tag is tending to fit, really, isn't it? Perhaps actually engaging with it might put your mind at rest.
Fuck the Glazers
Use AI to formulate a reply, it's a lot quicker
swampash wrote: ↑1 month ago
Swampy-the-luddite just spent half an hour typing a response to this and the post didn't appear in the thread. Perhaps he'll try again tomorrow?
Fuck the Glazers
I have a quick question; would anybody here like to receive therapy or counselling from an AI bot?
An AI text bot or an AI voice over the phone?
Fuck the Glazers wrote: ↑1 month ago
Use AI to formulate a reply, it's a lot quicker
