he/him

  • 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • I’ve followed Robert Miles’ YouTube channel for years, and watched his old Numberphile videos before “GenAI” was really a thing. He’s a great communicator and a genuinely thoughtful guy. I think he’s overly keen on anthropomorphising what AI is doing, partly because it makes it easier to communicate, but also because I think it suits the field of research he’s dedicated himself to. In this particular video, he ascribes a “theory of mind” to the model based on the LLM’s response to a traditional and well-known theory of mind test. The test is included in the training data, and ChatGPT 3.5 successfully recognises it and responds correctly. However, when the details of the test (i.e. specific names, items, etc.) are changed but the form of the problem is kept the same, ChatGPT 3.5 fails. ChatGPT 4, however, still succeeds – which Miles concludes means that ChatGPT 4 has a stronger theory of mind.

    My view is that this is obviously wrong. I mean, just prima facie absurd. ChatGPT 3.5 correctly recognises the problem as a classic psychology question and responds with the standard psychology answer. Miles says that the test is found in the training data – so it’s in ChatGPT 4’s training data, too. And ChatGPT 4’s LLM is good enough that, even if you change the nouns used in the problem, it can still recognise it as the same problem found in its training data. That does not in any way prove it has a theory of mind! It just proves that the problem is in its training set! If 3.5 doesn’t have a theory of mind because a small change can break the link between training set and test set, how can 4.0 have one, when 4.0 is doing exactly what 3.5 is doing, just with the link intact?

    The most obvious problem is that the theory of mind test is designed to determine whether children have developed a theory of mind yet. That is, it tests whether a human brain has reached a stage of development, common among human brains, at which it can correctly understand that other people may have different internal mental states. We know that humans are generally capable of this, that the capability develops during childhood, and that some children develop it sooner than others. So we have devised a test to distinguish between those children who have developed it and those who have not yet.

    It would be absurd to apply the same test to anything other than a human child. It would be like giving the LLM the “mirror test” for animal self-awareness: clearly, since the LLM cannot recognise itself in a mirror, it is not self-aware. Is that a reasonable conclusion too? Or do we just cherry-pick the existing tests to suit the LLM’s capabilities? I won’t go too hard on this, because it’s a small part of a much wider point, and I’m sure if you pushed him on this, he would agree that LLMs don’t actually have a theory of mind – they merely regurgitate the answer correctly (many animals can similarly be trained to pass theory of mind tests by rewarding them for pecking/tapping/barking etc. at the right answer).

    Indeed, Miles’ substantial point is that the “Overton window” for AI Safety has shifted, bringing it into the mainstream of tech and political discourse. To that extent, it doesn’t matter whether ChatGPT has consciousness or a theory of mind, as long as enough people in mainstream tech and political discourse believe it does for AI Safety to warrant greater attention. Miles further believes that AI Safety is important in its own right, so perhaps he doesn’t mind whether the Overton window has shifted on the basis of AI’s true capability or its imagined capability. He hints at, but doesn’t really explore, the ulterior motives large tech companies have for suggesting that the tools they are developing are so powerful they might destroy the world. (He doesn’t even say it as explicitly as I just did, which I think is a failing.) But maybe that’s OK for him, as long as AI Safety research is being taken seriously.

    I disagree. It would be better to base policy on things that are true, and if you have to believe that LLMs have a theory of mind in order to gain mainstream attention for AI Safety, then I think this will lead us to bad policymaking. It will miss the real harms that AI poses – facial recognition with disproportionately high error rates for black people being used to bar people from shops, résumé scanners and other hiring tools that, again, disproportionately discriminate against black people and other minorities, non-consensual AI porn, and so on. We may well need policies to regulate this stuff, but focusing on the hypothetical existential risk of future AGI over the very real and present harms that AI is doing right now is misguided and dangerous.

    If policymakers actually understood the tech and the risks even to the extent that Miles’ YouTube viewers do, maybe they’d come to the same conclusion he does about the risk of AGI, and would be able to balance the imperative to act against all the other things that government should be prioritising. But call me a sceptic: I do not believe that politicians actually get any of this at all – they just like being on stage with Elon Musk…


  • The summary is total rubbish and completely misrepresents what the video is actually about. I’m not sure why anyone who had actually watched the video would bother including that poor AI-generated summary. Useless AI bullshit.

    The video is actually about the movement of AI Safety over the past year from a fringe academic interest or curiosity into the mainstream of tech discourse, and even into active government policy. He discusses the past year’s advancements in AI in the context of AI Safety – namely, that capabilities are moving faster than expected, which increases the urgency of AI Safety research.

    It’s a pity, because if AI Safety had just stayed an academic curiosity (as Rob says it was for him), maybe we’d have the policy resources to tackle the real and present problems that AI is causing for people.

  • scrchngwsl@feddit.uk to Ask UK@feddit.uk · Mould. · edited · 9 months ago

    It’s worth finding out if the mould is caused by condensation (i.e. warm, humid air hitting a cold surface), or water ingress (i.e. water coming in from outside and making your walls damp). If it’s the former, there are things you can do to reduce the humidity in your house, get more air circulating, or deal with the mould as it arises. If it’s the latter, you really need to get the landlord to fix the problem that is causing water to get into the walls – not only is this a potentially serious structural problem with the fabric of the building that could destroy any investment the landlord has if left unchecked, but anything you do as a tenant may be quite ineffective.

    If it is just condensation, though, there are things you can try. Condensation happens when the humidity is high enough, and the surface cold enough, for the moisture held in the air to be released onto that surface. Cold air holds less moisture than warm air, so even though the weather forecast might say it’s 90% relative humidity outside, if the outside air is only 5 degrees C it will still hold far less moisture than your indoor air at 20 degrees C. So one strategy is to swap out the relatively moist air inside your house for the relatively dry air outside. Obviously that incurs a cost – it gets colder in your house, and therefore it costs more to heat your home. The other strategy is to extract moisture from the air inside your house. The most effective way is a dehumidifier, but wiping down windows, shower cubicles, etc. and getting the moisture down the drain or out of the house also helps.
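
    To put rough numbers on that, here’s a quick sketch in Python using the Magnus approximation for saturation vapour pressure (the exact temperatures and humidity figures are illustrative assumptions, not measurements):

        import math

        def saturation_vapour_pressure(temp_c):
            """Saturation vapour pressure in hPa (Magnus approximation)."""
            return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

        def absolute_humidity(temp_c, rel_humidity_pct):
            """Water content of air in grams per cubic metre."""
            vapour_pressure = saturation_vapour_pressure(temp_c) * rel_humidity_pct / 100
            return 216.7 * vapour_pressure / (temp_c + 273.15)

        outside = absolute_humidity(5, 90)   # "90% humidity" on a cold day: ~6.1 g/m^3
        inside = absolute_humidity(20, 60)   # comfortable indoor air: ~10.3 g/m^3
        print(f"outside: {outside:.1f} g/m^3, inside: {inside:.1f} g/m^3")

    Even at 90% relative humidity, 5-degree outside air carries roughly 40% less water per cubic metre than 20-degree indoor air at 60%, which is why airing the house out genuinely dries it.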

    • (£100s) Run a dehumidifier to keep the humidity below 70% at least (the sketch after this list shows why ~70% is the threshold that matters). The dehumidifier will be expensive to buy, and running it will also be expensive at today’s electricity prices. (We have a cheap overnight tariff, so we only run it during the “cheap” hours. Still quite expensive though.) You’ll find much cheaper options that involve absorbent gels or pellets; these will absorb some moisture, but nowhere near enough to make a difference. I’d avoid them if you are already having mould/condensation problems.
    • (£2 - £30) Buy a Window Vac and vacuum all your windows, vacuum your shower cubicle after you use it, that sort of thing. Alternatively, a cheap £2 squeegee is fine, but when you use it make sure the water goes outside or down the drain, rather than sitting in the bottom of the shower/windowsill – i.e. squeegee your shower tray too. May sound obvious, but apparently not to my family!
    • (£5 ish per bottle) HG Mould Spray is amazing at getting rid of mould caused by condensation. If it’s caused by condensation, and you deal with the humidity/condensation afterward, you probably won’t have to reapply it until the following year at worst. That’s been our experience anyway. (If you don’t adequately deal with the condensation then you’ll probably have to do this a few times over the winter, which is unpleasant and awkward with kids as you need to keep them away and ventilate the room well.)
    • (FREE!) Open the window in your shower/bathroom BEFORE you start your shower/bath, and leave it open the whole time. Close it when the room starts to feel slightly colder than the rest of the house.
    • (FREE!) You can also open windows in the kitchen when you’re boiling stuff – all that moisture has to go somewhere, so better that it goes out of the window. And since cooking will naturally involve warming up a room, you don’t have to worry as much about the heat loss.
    • (FREE!) Open the trickle vents above your windows if you have them. It will get slightly colder, but houses and double glazing are usually designed with trickle vents in mind. It won’t magically solve all condensation (probably won’t do much, tbh), but it will contribute slightly to better ventilation.
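
    On that <70% figure: condensation forms on any surface colder than the dew point of the room air, and at a typical room temperature the dew point climbs past normal window and wall temperatures somewhere around 70% relative humidity. A rough sketch, again using the Magnus approximation (the 20-degree room temperature is an assumption):

        import math

        def dew_point(temp_c, rel_humidity_pct):
            """Dew point in degrees C (Magnus approximation): surfaces colder
            than this will collect condensation."""
            gamma = math.log(rel_humidity_pct / 100) + 17.62 * temp_c / (243.12 + temp_c)
            return 243.12 * gamma / (17.62 - gamma)

        for rh in (50, 60, 70, 80):
            print(f"{rh}% RH -> condensation on surfaces below {dew_point(20, rh):.1f} C")
        # 50% ->  9.3 C (only the very coldest surfaces are at risk)
        # 60% -> 12.0 C
        # 70% -> 14.4 C (cold window frames and external walls start to suffer)
        # 80% -> 16.4 C (almost any uninsulated surface will stream with water)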