We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... lets see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

The chatbot ate my rational faculties

From the Daily Telegraph (£) today:

A quarter of 13 to 17-year-olds recently admitted to the Pew Research Centre that they use ChatGPT to write their homework, double the proportion found a year earlier. Last year, the Higher Education Policy Institute found that one in eight undergraduates – 13 per cent – were using AI to write assessments, and 3 per cent were handing in the chatbot’s output without checking it.

Oh dear. As the article says, there are AI programmes now that screen writing to see if a generative form of AI has written it. So we have a sort of arms race, as it were, between those using these systems to write essays or whatever, and those using it to spot the cheats.

Using AI is not quite the same, necessarily, as using a search engine to check up on sources, or a calculator to do sums rather than by hand. I do think that something is lost if a person has no idea of how to go about finding things out: what references to check, how to validate such references and how to understand sources, levels of credibility and corroboration, etc. Being able to think through a topic, to structure an essay, marshal facts and figures, and come to a convincing conclusion, is a skill. It is also an important way that we hone our reasoning. And I don’t think there is anything specifically “Luddite” in pointing out that using AI to “write” your homework assignment will cause our mental faculties to atrophy. And in this age of social media, “coddling” of kids and all the problems associated with a “fragile generation”, it is easy to see this trend as being malign.

I am definitely not saying the government ought to step into this. I think that schools and places of higher learning ought, as part of the conditions of entry and admission (preferably with the consent of parents/students), to restrict AI’s use so that people develop their own mental muscle and the ability to truly grasp a subject, rather than simply “phone it in”. If a place of learning has a mission statement, it surely ought to want to develop the learning ability and skills of its students. If AI detracts from that, then it is out of bounds.

It is best, I think, to leave this up to individual schools. This is also another reason why I am a fanatic about school choice, and fear the dangers of state central control of schools.

Technology has its place, in my view. In my childhood, pocket calculators started to be used, but we were not allowed to use them in class until we’d already mastered maths the old-fashioned way. (I used them in doing my physics O-level, for example, so long as I clearly could show my workings if asked.)

Here is an associated article from Gizmodo. On a more optimistic note, venture capital mover and shaker Marc Andreessen has thoughts on the overall positives from AI.

I also have a more financial concern. If students, such as undergraduates, are using AI to write essays, even whole dissertations, etc, then it makes it even more scandalous that they rack up tens of thousands of dollars, euros or whatever in debt to pay for this. Because if they get a degree thanks to ChatGPT (that rhymes!), then what exactly have they got for their money?

100 comments to The chatbot ate my rational faculties

  • NickM

I use AI a lot, but in contexts I understand (coding, graphics, computer hardware) – stuff I have experience in parsing. So… I’m not using it for anything like the basic learning of the core of anything. I’ll go out on a bit of a limb here, but a lot of it is a glorified search engine and that’s great. In that it’s more like (to use an analogy for the biddies and codgers) the difference between going into an old school library and rifling the card index versus speaking to a knowledgeable librarian.

    But then I’m a total technofetishist.

  • Fraser Orr

    I have mixed feelings on this Johnathan. On the one hand I think you make some reasonable points, but I think there is a lot more to say, and a lot more to say that can’t be said because it is unknown.

    The example of a calculator is illustrative. For sure I think it is useful for kids to learn how to do long division by hand, but I don’t think kids should have to do all their divisions by hand. Maybe learning the technique is useful, but demanding that we do things the hardest way is a hindrance to progress. Tools leverage us to be able to do greater things. LLMs are a tool and they are here to stay, so perhaps instead of banning them we should teach people how to use them? Perhaps the greatest sin is that the kids turned it in without checking it. But did anyone ever teach them how to check it? Imagine how much greater our kids’ research and essays could be, or how much we could further the educational goals we have for them, if they could leverage that tool?

    Should kids know how to create an essay? How to do research at a legacy library? How to formulate an argument? Sure, but should they have to do all that manual work every single time? Rather should we teach kids how to use the tool? How to engineer and manage prompts? How to review and check the results? Do we REALLY need to drill students in the proper formatting of citations, or should a tool do that?

    In the bad old days before computers, when you wrote a paper, you had to use a card catalog to find suitable books and read massive amounts of extraneous material to find relevant snippets… I remember when I was in High School I had to create a project in geography and had to use a manual typewriter to create it. My other essays were created with pen and paper. Sure, kids should learn to write by hand, but surely all their papers should be typed at a computer and printed? Printed? My mistake, what is this, the 1990s? I meant emailed. Emailed? What is this, the 2000s? I mean shared in a cooperatively edited document.

    It is a challenge to know how to approach this because people in the adult world are still struggling with those tools, so understanding how to teach kids and prepare them for a future that is a miasma of uncertainty is a thankless task. So I don’t really know what to do about it, and I sympathize with schools.

    What I do know is that while schools are under the iron fist of the teachers’ union there will be almost no innovation.

  • NickM

    Long Division is a complete waste of time – at least how I was taught it, as an algorithm learnt by rote. Advances the understanding of mathematical concepts not one iota.

    The AI moral panic is a complete diversion from the real issue which is abstraction layers. The kids of today are less computer-literate than Gen-Xers. They have no idea what is going on under the hood. And typing standards are way down due to phone use.

  • Barbarus

    if they get a degree thanks to ChatGPT …, then what exactly have they got for their money?

    In those circumstances the degree should really be awarded to ChatGPT. The human involved is more or less irrelevant, and would appear to be no more qualified than any other random person.

    This may be a problem for universities, which presumably would not want their degrees to be tainted by cheating but may be getting paid according to the number of people they pass.

  • Fraser Orr

    @Barbarus
    In those circumstances the degree should really be awarded to ChatGPT.

    Or how about we MASSIVELY raise the standard of what is expected to graduate, based on the assumption that the raw capabilities of the student will be leveraged by the use of these powerful tools?

    When I got my college degree I had tools that were not available ten years before, and the capabilities I had and the expectations of what I could deliver were massively higher than those of a student with a similar degree from ten years earlier. And today? In college I could not have created a piece of software with 1% of the capability that a college sophomore should be able to produce today.

    How exactly is the leverage of an LLM any different? Except if you think that the tool has exceeded the master, and humans have nothing left to contribute in the face of this new technology. I think this is probably not correct, but I am not 100% sure. But if it is, then college degrees don’t really matter any more. And we have to find a whole new way of thinking about humanity.

  • bobby b

    I sat down with a few people a couple of weeks ago with some scotch and a computer, while one gentleman attempted to show the rest of us Luddites what AI could do.

    We asked it (I don’t even know what “it” was) 8 questions – queries? – about things that, amongst us, we already knew the answers to. Technical things, legal things, historical things. What’s the current legal holding regarding X? How to remove an engine from X vehicle? What happened to Native American chief X? That sort of thing.

    Two responses were laughably wrong. Joe Blow off the street would know they were wrong. Three were plausible, but if you actually knew the correct answer, they were just wrong. The rest were correct answers.

    I was less than impressed. Were our results out of the norm? Can you actually depend on getting correct responses?

  • Lee Moore

    For school, all they need to do is to add an oral follow up. 10 minutes in which the teacher grills you on “your” written answer.

  • Fraser Orr

    @NickM
    Long Division is a complete waste of time – at least how I was taught it, as an algorithm learnt by rote. Advances the understanding of mathematical concepts not one iota.

    FWIW, I think the “new math” that everyone in America complains about is very much an attempt to make math, or at least early arithmetic, about the reality of calculation and what it means practically. However, parents are up in arms about it because it isn’t the way they learned, and so “new” means “bad”. I also think it sometimes comes from a sense of insecurity: they find the new way hard to understand themselves, and so find it hard to follow what is going on with their kids. But maybe that is just pop psych on my part.

    FWIW, something like long division as an idea screams out for a tool that shows how it works in a graphical and easy to understand fashion. Although “new math” is progress, the sclerotic educational establishment is very damaging to the possibilities of what technology could do for education.

    I mean, back to LLMs. How about when a student writes an essay, the LLM looks at it and makes suggestions? Iteratively? Isn’t that awesome? An electronic, on-demand interaction to help them improve? But if education is about grading more than learning, then that is very hard to countenance.

  • Fraser Orr

    @bobby b
    I was less than impressed. Were our results out of the norm? Can you actually depend on getting correct responses?

    But if you asked me about certain legal holdings or how to remove an engine from a car, you’d no doubt all fall on the floor laughing at my incompetence. Which is a shame because I brought a really great bottle of scotch and I’m not going to share with you bunch of bastards. 😊

    However, if you asked bobby-b about the legal holdings no doubt you’d get a great answer because he has been trained on that material. And similarly if you asked an AI specifically trained on legal matters and court holdings, you’d probably get an excellent answer. And bobby-b had to go to law school for three years and work in practice for 20 years to get to that level. The AI learned it all in a couple of days.

    AIs are built in layers. Go to ChatGPT.com and you get a base layer — a layer that understands language and basic general knowledge. You can add a layer on top of this, often called RAG (retrieval-augmented generation), that enhances that general knowledge with specialized knowledge, and then it performs much better in that sphere. And AIs are new. They are only going to get better and better as time goes on.
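The layering described above can be sketched in miniature. Below is a toy Python illustration of the retrieval-augmented idea: score a small made-up corpus against the question and prepend the best match to the prompt. The corpus, the word-overlap scoring and the function names are all invented for illustration; real systems use embedding models for retrieval and an actual language model behind the prompt.

```python
# Toy retrieval-augmented generation (RAG): a generic "base layer"
# prompt is augmented with specialised documents retrieved from a corpus.

def score(query, doc):
    # Crude relevance: count shared words. Real systems use embeddings.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=1):
    # Return the k most relevant documents for the query.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def augmented_prompt(query, corpus):
    # Prepend retrieved context so the base model answers in-domain.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Miranda v. Arizona requires police to advise suspects of their rights.",
    "A Z80 has an 8-bit accumulator and 16-bit index registers.",
]
print(augmented_prompt("what rights must police advise suspects of", corpus))
```

The point is only the shape: the base model stays generic, and the specialised knowledge rides in through the context.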

  • Fraser Orr

    @Lee Moore
    For school, all they need to do is to add an oral follow up. 10 minutes in which the teacher grills you on “your” written answer.

    OK, my last comment on this for now. I thought this was a great idea, and it made me think. So I went into ChatGPT and asked it to write me an essay on the causes of the Spanish Civil War (which is the sort of thing teachers ask about.) It produced a nice essay. Then I asked it to create five multiple choice questions assessing my understanding of the material. And they were great questions.

    Which a teacher could definitely do. I think they call that an arms race.

  • Lee Moore

    I’m going to push back on “long division is a complete waste of time.”

    As tuition on getting the right answer it’s certainly a waste of time, but I think it’s excellent training for things that require a bit of orderly thought. It’s so easy to screw up calculations by cutting corners, and long division is much better than, say, multiplication for teaching you that. Because if you’re reasonably smart, you can do multiplication in your head, or learn tricks to simplify the calculation. It is really hard to do that with division.

    So it teaches you the value of order and structure in calculation. Even if you use spreadsheets to do your calculations, you still need to lay things out in an orderly fashion. Otherwise you will screw it up and then spend a long time trying to figure out where you went wrong. And it’s invariably because you cut a corner. The truth is – humans aren’t that smart.

  • Lee Moore

    Fraser – You’re assuming the teachers would be able to recognise good multiple choice questions 🙂

    Though your idea is certainly better than mine, if one assumes teachers of, er, mediocre quality. Also less expensive.

  • NickM

    Fraser,
    As I climbed the math-tree I got my head around “New Math” and it chimed with me. I had sort of been taught it, partially and very badly, in primary school: sets and stuff. It only chimed with me much later, and I realised the biggest problem with “New Math” was the teaching of it. Very few primary teachers had (have?) much more of an idea of maths than “sums” (or at best “arithmetic”), so they were teaching something they had no idea about. I feel deeply that kids rejecting maths as “dull” and “difficult” at an early age has a lot to do with bad teaching. The rote learning of times tables and all that. It puts kids off. It put me off for almost too many years before I eventually yanked the iron out of the fire at the last minute. I have a lot to say on this, which means I have to really think it through before I write a long-winded comment somewhere like here. But I’ll leave it at this for now. Early teaching of maths has to be about teaching abstraction. Abstraction is the fundamental power of maths in the “Real World”. It is why it matters. It is why it is so elegant and therefore an excellent exercise for the mind. It is why I love it. It has to be taught not in terms of being Bob Cratchit as much as a subject about imagination, creativity and reasoning.

  • bobby b

    “It has to be taught not in terms of being Bob Cratchit as much as a subject about imagination, creativity and reasoning.”

    Funny, because that’s almost exactly how I think good law is taught.

  • Chester Draws

    The rote learning of times tables and all that. It puts kids off.

    This is plain wrong. Kids love learning skills.

    If it is taught quickly and effectively, learning the times table is a skill they master fast. Then they can move on. It has to be practiced a bit, but once you have learned it, showing off your knowledge is fun.

    What is bad is the current insistence that all rote skills must be boring, and so are to be avoided. So I have students who cannot do anything useful in Maths without a calculator, and therefore have no sense of number at all.

    How can you possibly teach the abstraction of maths to students who do not know that 382 is 3 x 100 + 8 x 10 + 2? You simply have to know how numbers work before you can usefully teach abstract concepts.
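That place-value point, 382 as 3 hundreds, 8 tens and 2 units, can be shown mechanically. A small Python sketch (the function name is invented for illustration):

```python
# Decompose a number into its place values: 382 -> 300 + 80 + 2.
def place_values(n):
    digits = [int(d) for d in str(n)]
    # Pair each digit with its power of ten, then restore the original order.
    return [d * 10 ** i for i, d in enumerate(reversed(digits))][::-1]

print(place_values(382))  # prints [300, 80, 2]
```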

    It has to be taught not in terms of being Bob Cratchit as much as a subject about imagination, creativity and reasoning

    It is taught as a subject about reasoning. That’s why Algebra is so important in any sensible curriculum.

    It can only be about imagination if the person has sufficient skills to have something to imagine. Knowledge always precedes innovation. That whole 1% inspiration and 99% perspiration thing. It’s like suggesting people should write books before they have grammar nailed. It’s simply ridiculous. No-one is going to usefully imagine anything about primes, say, until they have a very complete knowledge of factors, multiples etc.

    And no-one — no-one — can teach creativity. Think about it. Why not teach people to simply be clever, witty and perceptive while we are about it? Creativity is innate. All a teacher can do is nurture it by teaching the relevant skills to find its expression. So once again, we are back to teaching the basics first.

    Yours is the wet dream of progressive education — that we can make it all fun without any of the work. It is why modern education is failing.

  • NickM

    Chester,
    Times tables are not a skill. They are a set of facts. I was rote taught them, chanting them in class like in a madrassa. I couldn’t stand it so I worked out short-cuts. E.g. the 9 times table: nine times a number is just ten times the number minus the number itself.
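Spelled out, that shortcut is just the identity 9 × n = 10 × n − n, easy to check for the whole table with a throwaway Python snippet:

```python
# The 9-times shortcut: nine times a number is ten times the
# number minus the number itself.
for n in range(1, 13):
    assert 9 * n == 10 * n - n
print("9 x n == 10 x n - n holds for 1..12")
```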

    No, you can’t teach creativity, but what you can do is set questions (not just in exams, throughout) that take a bit of “thinking outside the box” (I hate that piece of jargon, but…) to solve, or, even better, have multiple paths to a solution. Interestingly enough, I have seen over the last few years both the maths GCSE questions and the UK SATs. The GCSE stuff is bloody awful but the SATs seem more about testing understanding.

    I have to agree with you about algebra. Because that is about abstraction and generalisation, and that is what gives maths its power over such a wide area of stuff.

  • jgh

    If I hadn’t learned how to do long division, I wouldn’t have been able to code division routines. If I hadn’t learned logs I wouldn’t have been able to code multiplication routines.

  • Clovis Sangrail

    Some great comments here.
    My university treats using AI to do any assessed work (unless that’s the job) in the same way as plagiarism: a 0 mark the first time, a 0 mark for the whole course the second time, then expulsion.
    Re long division etc: you need something to get pupils to think about process. It doesn’t have to be that, but something. Whatever works.
    One fundamental problem is that most maths teachers suck. I have taught maths to students at every level, from pre-school to PhD. You have to know enough to adapt the method to the student. Most teachers don’t know enough to do that at any level.

  • NickM

    jgh,
    By the time I did a physics degree I had quite forgotten long division. I re-learned it in my third year for an elective course on discrete maths. It made sense then. It can be useful, but it’s very niche. Logs – no argument there. Vital for understanding lots of things such as exponentials, orders of magnitude, earthquake scales and the scale of things in general.

  • NickM

    Clovis,
    Yes, most maths teachers are poor*. I have taught a little maths (in some odd circs) and am bloody awful at it. Having said that… I don’t think I ever taught it to anyone who was at all interested. The worst was a pre-MBA class. They were in England to learn English before going on to apply for MBAs at (largely) US schools. Then they were thrown the curve ball that the GMAT exam has a maths element. So I was hired by Leeds Uni’s School of Languages to teach maths. If I was bad, they were worse. The Chinese chatted amongst themselves, some others were too proud/deferential to ask for specific help/explanations, and the Russians. Oh, the Russians! Minted post-Sov scions of the oligarchy for whom Daddy had bought all advancement.

    *You got a maths degree (or similar) you’ve got a lot of better options…

  • Lord T

    AI is the next domino.

    First we learnt how to add numbers together, and people could do that in their head. Along came calculators; they were fought against at first, but then you could use them in school, and now people can barely add up when they go shopping.
    Then along came computers, and now it is mandatory for every child to have a computer to do their homework and schoolwork. Some don’t, and their education suffers. They can’t think or do anything that does not have a defined process.
    You needed to learn things and remember them; along came search engines, and if you wanted to know anything you searched for it, cut and pasted the response, and promptly forgot all about it. If instead you had read books, you might well have generated the same responses, but you were more likely to retain it.

    So now we have AI. It’s here, and as with search engines, people use it to perform their tasks, like homework, and AI output is difficult to analyse. Some of it can be spotted as AI because it makes simple errors that people can see; other output can’t be, and if the task is written correctly, who can tell if it is AI or not? It is all down to probabilities, so people will get accused of using AI when they have not. In a few years it will get approved and everyone will use it.

    So eventually those further down the educational ladder will not be able to do any work without AI.

    Is it a problem though? When people start work they will be expected to use AI for productivity and nobody will even know that they can’t speel or put words two together on their own. 🙂

    In my opinion, yes. We are not supposed to be educating drones that can barely function intellectually, although that appears to be what we are doing.

    We should go back to the basics for those in initial and secondary education so they learn how to spell and basic maths without recourse to any technology. Advanced education, maybe even in the last few years at secondary school, can use calculators but not computers. Save them for further education when they know the basics.

    Unfortunately, that won’t happen and AI will be in use in schools within a few years. Just my thoughts.

  • GregWA

    Re the “how best to teach” thread here, as others noted, it matters who you’re teaching and why. Do they have at least average intelligence? Are they curious, natural learners? Depending on the answers, what to teach them and how changes a lot!

    Point being, the elegance of math is not for everyone…although the beauty of it seems like something that should appeal to just about anyone. How could a chemist not be amazed to learn that the vibrational patterns in molecules (spectral lines) are directly related to the symmetry of the molecule (chemical applications of group theory)? But how to similarly amaze a non-chemist about this fact?

    So maybe it’s a matter of figuring out how to convey the beauty, elegance and power of anything to be learned to someone who is not as disposed to learning it as others. It surely does require great teachers, but like most things, we can learn how to teach better, even when those who are doing the teaching are not “greats”.

  • NickM

    Lord T,
    I would reply properly but I’d have to rote learn cuneiform 😒

  • Paul Marks

    Computer systems such as ChatGPT scan the internet and report back what they find (which is often WRONG). They are also “trained” by their human creators (who normally follow the establishment agenda of lies and distortions).

    ChatGPT has no Critical Reason – it is not an “intelligence” in the true sense of that word. It will report back utter nonsense and contradictory absurdities – if that is what can be found on the internet (and supports the establishment agenda its human creators have).

    People who rely on it in relation to political, historical, scientific, or cultural matters where there is serious disagreement are idiots.

  • Fraser Orr

    @jgh
    If I hadn’t learned how to do long division, I wouldn’t have been able to code division routines. If I hadn’t learned logs I wouldn’t have been able to code multiplication routines.

    What percentage of people have to do that? I am one of the tiny fraction of people who have coded a division routine (on a Z80, a very long time ago) and I can assure you that doing it in binary is quite a bit different than doing it in decimal. I have also written multiplication routines for larger numbers and I have no idea what logs have to do with it. For sure you can multiply by adding the logarithms of the two numbers but no computer programmer would ever do that (because calculating a log actually requires you to be able to multiply, so it is a chicken and egg situation, among many other reasons.)
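For anyone curious what a binary division routine actually looks like, here is a rough Python sketch of the shift-and-subtract scheme typically hand-coded on divide-less CPUs like the Z80. It illustrates the method only; it is not Z80 code, and real routines vary in register width and in restoring vs non-restoring style:

```python
# Binary "long division" by shift-and-subtract (restoring division).
# Works for non-negative dividends that fit in `bits` bits.
def bin_divide(dividend, divisor, bits=16):
    quotient, remainder = 0, 0
    for i in range(bits - 1, -1, -1):
        # Shift the next dividend bit into the running remainder...
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        # ...and subtract the divisor if it fits: that is one quotient
        # bit, exactly like one column of decimal long division.
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

print(bin_divide(73, 5))  # prints (14, 3)
```

In binary each quotient "digit" is just a yes/no subtraction, which is why the routine is simpler than its decimal cousin even though the idea is the same.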

    I think it is useful to teach kids long division, since it is helpful for understanding numbers and understanding what division is, though I think the “just do this and don’t ask questions” approach to the algorithm is deeply counterproductive. I can easily imagine amazing computer manipulatives making this a deeply revealing experience.

  • William H. Stoddard

    I have been trying the experiment of inputting material that I created myself, and asking ChatGPT to (a) restate it in a different format or (b) extrapolate something creative from it. I’ve gotten a few things that were actually elegant. I’ve gotten other things that were helpful, but needed critical commentary. And I’ve gotten far too many cases where ChatGPT included things that explicitly contradicted the material I had input, mixed it up with material from other threads (apparently it puts all my input into one big file), or made up things that had nothing to do with my prompts.

    It’s particularly bad at illustration. I’ve given it an itemized list of a number of characters in a specific arrangement with specific features; it gets very basic features wrong (such as male/female) and it routinely starts doing the wrong number of figures—it can’t count. Indeed, to generalize, it seems to have no cognitive functions at all toward the content it operates on. Which of course is what’s to be expected from large language models.

  • NickM

    Paul,

    People who rely on it in relation to political, historical, scientific, or cultural matters where there is serious disagreement are idiots.

    Yes and no. As I have stated here (as have others), AI is very useful if you know enough about the subject to properly parse the results. I only use it for things I have a grounding in. Just over a year ago I used it to help me build a high-end PC from scratch. I used it to check the relative compatibility and performance of components, and it was excellent for that (I used Poe – Quora’s AI), but I also got answers from flesh-and-blood Quora users and didn’t buy stuff until I had a concurrence. Even then it only worked because I know computers (Yes, Fraser, I know about Z80s – my first computer was a Speccy).

    And this raises an essential point about science/maths education. Most people need to be educated as consumers/users so they don’t get fobbed off by everything from dodgy sales-pitches to “expert” witnesses like Roy Meadow. Some want or need to know more, but that is for select groups doing very specialist things.

    Lord T,
    I assume you are aware of the story of Gutenberg? People objected because if a machine printed a Bible it wasn’t really the Word of God. People actually wanted monks in scriptoria reciting aloud as they copied. This created many errors because they weren’t all working at the same pace. Fast forward, and people complain about kids and “screen-time”. It’s the same old moral panic bullshit. I read more than I ever have. It’s all on screen. The idea that it is in some sense “morally” superior to read from a paper book than a Kindle, if taken to its logical end-point, would mean me dumping Corel and CSS and starting to daub the walls in shit. It is exactly the same mentality that motivates the Greens. And dare I suggest it is just the labour theory of value rearing its head once more.

  • Fraser Orr

    @Lord T
    AI is the next domino.

    By “next domino” do you mean “next tool to take us into a brighter tomorrow”?

    now people can barely add up when they go shopping.

    But could they ever? I’m pretty numerate, but if I have a shopping cart full of thirty items I could probably add up the prices, though it would be very tough. There is only so much information I can hold in my head. And I’m pretty sure I’d have a very hard time adding the 7.25% sales tax where I live (for you Brits: here in the US, prices are shown without sales tax, which is added at the register). I could, and I do, make a rough approximation of the total. But adding up thirty numbers is not at all something I’d ever do without a tool. And like I say, I’m pretty numerate.

    To further our example, could I divide 73,255 by 673 to two decimal places? Probably though it would take a pencil and paper and probably twenty minutes, and there is a large chance I’d make an error. Could most adults? Could my grandfather, who worked in a steel mill? My guess is probably not. But is the world any worse off because of that? Also probably not.
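The paper-and-pencil procedure in that example can be mimicked digit by digit. A Python sketch (truncating rather than rounding; the helper name is invented):

```python
# Long division carried a fixed number of places past the decimal
# point, as done on paper: divide, take the remainder, multiply
# by ten, repeat.
def long_divide(dividend, divisor, places=2):
    whole, rem = divmod(dividend, divisor)
    digits = []
    for _ in range(places):
        rem *= 10
        d, rem = divmod(rem, divisor)
        digits.append(str(d))
    return f"{whole}." + "".join(digits)

print(long_divide(73255, 673))  # prints 108.84
```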

    You needed to learn things and remember them, along came search engines and if you want to know anything you searched for it, cut and pasted the response, and promptly forgot anything about it.

    Is your concern that I’d “promptly forget anything about it” or that I looked it up? After all, “remembering” is just another form of looking things up. It just isn’t nearly as effective. Using things solidifies them in your cognitive processes. To use an example, computer programming is DRAMATICALLY different today than it was when I was first writing video games as a kid. In those days there was no internet and you did have to look everything up in a book. In those days when I was coding I’d usually have two or three books sitting open on my desk. Today people use the web and copy/paste code exemplars all the time (or now, generate with LLMs). I used to have shelves full of books, now I never look things up in books. And what is the result? The software you use today is many, many orders of magnitude better than it was when I first started out. The internet is not the only reason for that, but it sure is one of the big reasons.

    Modern software frameworks and tools are so complicated there is simply no way you could learn and support them out of a book (not to mention the fact that new versions come out far too fast for books to keep up).

    Is it a problem though? When people start work they will be expected to use AI for productivity and nobody will even know that they can’t speel or put words two together on their own. 🙂

    Tools, whatever they are, help people be better. It is the person adding what is unique about people to the raw power of the tools that makes the perfect combination. You are in favor of language and mathematics? They are just technologies earlier humans invented. Without, for example, the tool of calculus, we would all live in a vastly inferior world.

    Set aside the “they didn’t do it that way back in my day… kids should use chalkboards and uncomfortable desk chairs and like it… I’m still kind of mad they gave up Latin hexameters” types. Among people who actually know something about AI, there is a worry that AI is a different level of tool: one so powerful that human intelligence, creativity, insight and compassion have nothing left to add. That the tool itself makes humans irrelevant. And I think all tools have provoked that fear. Though I have to say that there is some basis for thinking that about AI. I’m not sure it is true, but I’m also not sure it isn’t true. And in that case the discussion is about the future of humanity rather than a narrow focus on the specifics of schooling.

  • NickM

    William,
    For pics… Try Nightcafe. Many models and for human figures the negative prompt is your friend.

  • neonsnake

    I was less than impressed. Were our results out of the norm? Can you actually depend on getting correct responses?

    Bobby B – just after New Year, I lent my car battery charger to someone up my street. Youngish couple, first time with a full-on flat battery. They asked me how it works, and I had one of those “forgotten my PIN number” moments. I’ve done it dozens of times, but couldn’t for the life of me remember whether you connected negative first or positive first.

Google’s AI (as in the first set of results) was wrong – it said negative first, then positive. Nothing implausible about that, it’s not like it said “get yourself a live lobster, and attach the left claw to the red terminal” – it *looked* like perfectly reasonable advice. Just would have made their lives a touch…livelier.

    I have friends who are into foraging, and they’ve told me about getting “hilariously” incorrect information about mushrooms. Which…well, if your choices are “a healthy and nutritious addition to a meal, painful death, or four hours talking to unicorns”, you kind of need it to be spot-on.

At the moment, I don’t trust AI or LLMs at all. They’re optimised for *plausibility*, not correctness. They look amazing if you’re asking about something you know nothing about – which, let’s be honest, is what most lay-people will use them for – but the moment you start asking them about stuff you do know, you stop trusting them. I’m a definite Luddite about them (in the true sense of what Luddite means, not the colloquial “I hate technology get off my lawn” sense).

There are other issues around the energy usage and the ethics around how they’re “trained”, which are knottier, but at the moment it’s as simple as “they’re not reliable” for me.

  • Fraser Orr

    @NickM
    Yes, Fraser, I know about Z80s – my first computer was a Speccy

    Me too!!! Made my first money writing video games on it. Geek alert… in the Sinclair ROM there was a routine to convert from an x,y coordinate to a screen memory location (something you need to do a lot in video games) that took 45 machine cycles. I disassembled Ultimate’s Jetpac and found a similar routine that did it in 35 machine cycles, but I rewrote it and had one that did it in 28 machine cycles. I understand nobody here cares, but I’ve been wanting to brag about that for thirty years….
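For anyone who does care, here is what that conversion actually computes. The Spectrum interleaved the bits of the y coordinate in a famously awkward way, which is why every game needed a fast routine for it. A Python sketch of the standard mapping (illustrative only – not my Z80 code):

```python
def spectrum_screen_addr(x, y):
    """Byte address of pixel (x, y) in ZX Spectrum display memory.

    The bitmap starts at 0x4000 and the bits of y are interleaved,
    which is why a naive routine was slow and games needed a fast one.
    """
    return (0x4000
            | ((y & 0xC0) << 5)   # which third of the screen (y bits 7-6)
            | ((y & 0x07) << 8)   # pixel row within a character cell (y bits 2-0)
            | ((y & 0x38) << 2)   # character row within that third (y bits 5-3)
            | (x >> 3))           # byte column: 8 pixels per byte

print(spectrum_screen_addr(0, 0))      # top-left corner: 16384 (0x4000)
print(spectrum_screen_addr(255, 191))  # bottom-right: 22527, the last byte of the bitmap
```

The Z80 version does the same shuffling with rotates and masks, which is where all those machine cycles went.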

  • neonsnake

    I disassembled Ultimate’s Jetpac and found a similar routine that did it in 35 machine cycles, but I rewrote it and had one that did it in 28 machine cycles.

    I confess to being old enough and geeky enough that the first line of code I wrote was probably LOAD “”, and to not only understand that sentence, but also to be rather impressed by it.

    (also, hate to break it to you, but I imagine you meant forty years, not thirty)

  • NickM

    Fraser,
    Jetpac – Oh, God! That takes me back…

  • J

    Go Old School. Essay tests with paper and pencil. If you know it you know it. If you don’t you don’t. Use cursive for extra points.

  • Fraser Orr

    @neonsnake
    (also, hate to break it to you, but I imagine you meant forty years, not thirty)

    Ouch…

  • neonsnake

    Essay tests with paper and pencil.

    Fountain pen, surely. And no tippex.

  • Fraser Orr

    @J
    Go Old School. Essay tests with paper and pencil. If you know it you know it. If you don’t you don’t. Use cursive for extra points.

    Yeah, and make the little scallywags walk to school in the snow, uphill, both ways…

    FWIW, IMHO cursive should not be taught in schools at all. It is slightly quicker but most people’s cursive is almost impossible to read. That’s what fancy fonts are for.

BTW, one small matter… I’m left-handed, and something you righties probably don’t know is that writing with a pen and ink (and to some degree with a pencil) is a significant disadvantage for a leftie, because as your hand sits on the newly written letters it tends to smear the ink as you write. I spent a lot of high school with sellotape (scotch tape) on the fingers of my left hand to prevent that from happening. However, I guess if we are going all luddite here we might as well demand that we write with quill pens (after all ballpoints and pencils don’t make that nice variable stroke width that makes writing beautiful) and freaks like me should have the devil exorcised out of us and be made to write with our right hand…. I mean in Latin, the language of the Holy Church, the word for “left” is “sinister”. Need I say any more?

  • neonsnake

    Yeah, and make the little scallywags walk to school in the snow, uphill, both ways…

I think they were being humorously sarcastic, what with the “cursive” comment.

It’s interesting, mind, to pick apart what technology is “useful” and what isn’t, and to your point around quills, I’m sure there were some people back in the day grizzling about the newfangled ballpoint pens. My desks at school still had the inkwells (top right, of course, because everyone was right-handed, unless you wanted a smack across the knuckles from a ruler – can only imagine what that was like reaching across your whole desk if you were a leftie), and we weren’t allowed to use biros – *gasp* – fountain pens or nowt, and damn the consequences when your ink cartridge leaked over your shirt.

    (I’m very glad we’ve moved beyond that sort of childishness)

  • William H. Stoddard

    Plato opposed reading as such, on the ground that it would lead to people no longer memorizing the material, and thus they wouldn’t really know it.

  • neonsnake

    Plato opposed reading as such, on the ground that it would lead to people no longer memorizing the material, and thus they wouldn’t really know it.

    Other than the fact that Plato was a knob, memorising isn’t the same as “understanding”. I’m sure a lot of us have examples of exams or subjects that we aced at school through rote memorisation, but no longer have any usable knowledge about.

    I’m a *huge* fan of things like Joplin (or Evernote, back in the day, before it went to shite), for keeping handy information that you cannot and should not be expected to recall.

    Or, you know, notebooks. Technology includes writing.

  • William H. Stoddard

    NickM, Thanks for the recommendation, but I’m not getting anywhere with a test run; it wants me to log in, but the verification e-mail didn’t reach me in two tries. I’m not sure there’s a way to recover from whatever I did wrong.

  • Yeah, and make the little scallywags walk to school in the snow, uphill, both ways…

    My grandfather went to school with MC Escher. For Grandpa, the walk really *was* uphill both ways.

  • NickM

    William,
    Just give-up and try again from a fresh start. The servers are frequently overwhelmed.

  • NickM

    Fraser,
    My wife is also a practising sinisterist 😈! So I know where you’re coming from. Oddly enough she simply cannot use a mechanical pencil at all even though it’s completely axially symmetric.

    All my university exams were done with a Rotring. That’s my weapon of choice. Has been for 30 years. I do use cursive in general though QWERTY is way more useful. Oddly enough I also draw pen and ink with an old school “dipping” pen. Oh, and Corel.

    neonsnake,

    Other than the fact that Plato was a knob, memorising isn’t the same as “understanding”. I’m sure a lot of us have examples of exams or subjects that we aced at school through rote memorisation, but no longer have any usable knowledge about.

    I’m a *huge* fan of things like Joplin (or Evernote, back in the day, before it went to shite), for keeping handy information that you cannot and should not be expected to recall.

    Or, you know, notebooks. Technology includes writing.

ABSOLUTELY! And I love your opening. The most concise and accurate philosophical critique of Plato I have ever read. As to memorizing… It is enormously important in Islam. In Afghanistan they have competitions for boys (obviously not girls!) to recite the Qu’ran from memory as beautifully as possible without even understanding the language (Classical Arabic) it is written in. This is a Big Deal. I’m not even sure the beards who judge this farrago understand it either. I have no probs using electronic prostheses across several screens. That’s just for details. Having said that, you internalise a lot along the way…

  • neonsnake

    It is enormously important in Islam

    Sure. I grew up in and around Muslims, Hindus and Sikhs, in East London, as a child and older. There’s no massive difference in any real worthwhile sense that’s important *shrug*

  • Snorri Godhi

    memorising isn’t the same as “understanding”

    This is the entirety of my objection to considering LLMs as artificial “intelligence”. AFAIK LLMs are machine learning, not machine intelligence.

    The comparison of times tables to long division is illustrative.
    Times tables are look-up tables, while long division is an algorithm.
    You need intelligence to run an algorithm, but not to use a look-up table.

It is also of interest to me that long division is one of the simplest algorithms based on iterative improvement ever devised. Division by multiplication. Analysis by synthesis.
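To make that concrete, here is the schoolbook procedure as a Python sketch (an illustration only): each step is a trial multiplication followed by a subtraction, with the remainder feeding the next step.

```python
def long_division(dividend, divisor):
    """Schoolbook long division, digit by digit: bring down a digit,
    find the quotient digit by trial multiplication, subtract,
    and carry the remainder into the next step."""
    digits, remainder = [], 0
    for ch in str(dividend):
        remainder = remainder * 10 + int(ch)  # bring down the next digit
        q = remainder // divisor              # trial: how many times does it fit?
        digits.append(str(q))
        remainder -= q * divisor              # subtract; remainder feeds the next step
    return int("".join(digits)), remainder

print(long_division(987654, 321))   # → (3076, 258)
```

Each pass improves the partial answer by one digit: the “division by multiplication” in miniature.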

  • Snorri Godhi

    The problems of left-handed writing come from writing left->right. They disappear when writing top->bottom. Or so i am told: i never tried; and am not left-handed anyway.

  • bobby b

    “You need intelligence to run an algorithm, but not to use a look-up table.”

    Don’t you actually need an algorithm to use a look-up table?

  • Snorri Godhi

    Don’t you actually need an algorithm to use a look-up table?

    🙂
    Perhaps i did not express myself clearly.
    Perhaps i did not even think it out clearly.
    The idea was that a human (as distinct from a machine) needs a minimum of intelligence to mentally run an algorithm such as long division; but not to use a look-up table.
    Also, learning a look-up table is rote learning (learning facts), while learning an algorithm is learning a skill.

    But i insist on the importance of iterative improvement. You’ll know about ChatGPT’s “hallucinations”. If ChatGPT did some fact-checking about the text that it produced, it could iteratively eliminate hallucinations. Maybe newer versions do?

  • Johnathan Pearce

    I just wanted to say thanks to everyone for the excellent comments.

  • NickM

    Snorri,

    This is the entirety of my objection to considering LLMs as artificial “intelligence”. AFAIK LLMs are machine learning, not machine intelligence.

    Agreed. But it’s still very useful. It also raises questions about the nature of intelligence and it can be truly spooky by which I mean exciting. Well for me it is.

    You need intelligence to run an algorithm, but not to use a look-up table.

    No, you don’t. You can produce a representation of the Mandelbrot set by running a simple algorithm. I did this 30 years ago in QBASIC on a 386* and nobody in their right mind would think that machine/software combo was actually intelligent in any sense.

    But… let’s think about raster graphics. My ZX Spectrum ran a screen at 256×192 pixels. My Samsung phone** can take (admittedly JPeg) images of 50 MP. The Speccy looked very grainy. The Samsung looks “real”. Neither are actual reality (what is? Eyes have limited acuity as does any sensor or image or whatever is looking at it) but… the Samsung is close enough as dammit for us humans. Perhaps AI is going that way too. It’s not really thinking but it’s getting closer to looking like it is. Perhaps close enough to get very difficult to tell apart from the real thing. Maybe not quite now but…

    *It was, of course, very slow. I still use QBASIC for some things if I need a quick and dirty answer to something simple.
    **Oddly enough both cost about GBP 130. One in 1982, one in 2025 (I am not adjusting for inflation).
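In fact the whole thing now fits in a few lines of Python – a crude ASCII version, illustrative only, much in the spirit of that QBASIC program:

```python
def mandelbrot(width=64, height=22, max_iter=30):
    """Crude ASCII Mandelbrot: iterate z -> z*z + c for each point c
    in a small window and mark the points that stay bounded."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(-2.0 + 2.8 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:        # escaped: not in the set
                    row += " "
                    break
            else:
                row += "#"            # stayed bounded: count it as in the set
        rows.append(row)
    return rows

print("\n".join(mandelbrot()))
```

No intelligence anywhere in sight – just a loop run a few hundred thousand times.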

  • Fraser Orr

    @NickM
    It also raises questions about the nature of intelligence and it can be truly spooky by which I mean exciting. Well for me it is.

But this is not an extraneous question that is raised, it is the central question, and why I don’t agree with Snorri here. To answer the question “is LLM intelligent” obviously depends entirely on your definition of “intelligence”. His assumption that intelligence demands or at least implies the ability to execute algorithms is not one I would agree with at all. For example, chimpanzees and dolphins are widely considered intelligent but they don’t know how to do long division, nor have they the capacity to acquire that skill, or any ability to execute all but the most basic of algorithms. Intelligence is, at its core, the ability to process information into new information. LLMs are much better at that than humans in some ways and much worse in other ways. To define intelligence as “the cognitive skills that humans bring to the table” just seems disingenuous to me. But more importantly it is to dismiss a huge opportunity.

    If you think about the human brain and how it works, there is no calculator in there, it is something that we have shoehorned in — and it is why we really aren’t very good at long division. We don’t have the supporting hardware. And much in the way you can attach some paddles to a car wheel and float it on water and call it a motorboat, it’ll sort of work, but really badly. That is how I think of humans doing math. What is 77,623 divided by the square root of 81,522? A computer will tell you in a couple of nanoseconds. What is the second integral of cos(x)/log(x)*tan(∛x)? A symbolic calculator module can do that in a microsecond or two.

    The human brain has specialized areas to manage special skills, speech (Broca’s area), vision (the vision center in the occipital lobe) and hearing (Wernicke’s area), the cerebellum helps us to coordinate fine motor skills, the thalamus to wire up all the other parts in a remarkable process of maturation, the somato-sensory area to manage the balance between sensation and movement.

    Now if we add specialized modules to LLMs and AIs in general, it’ll blow past our capacity in these areas. Add a special module to do math problems (and the LLM can be configured to use its weightings to determine what “math problem” is) and it’ll be so good at math it would be embarrassing. Already we are seeing this, for example, ChatGPT added the capacity to draw images, which it does a very nice job at – certainly a hell of a lot better than me.

    The human brain has severe built in limitations because biology has shoehorned various specialized tasks into a medium — neurons and chemical transmissions — that is ill suited to much of the modern complex world.

In a sense the LLM is more of a coordinating interface, analogous to both the thalamus and Wernicke’s area. As we add more and more specialized modules, the comparative shortcomings of existing systems will quickly become strengths.

    But when is that intelligent? It obviously depends on what you mean by intelligent. I think in many respects it is intelligent already. After all, to use my earlier example, it can produce a much better essay on the causes of the Spanish Civil War than I can. Apparently it is “intelligent” enough to dupe a bunch of teachers who are frustrated not because the kids are writing their essays that way, but because they can’t tell the difference.

In fact in many ways it is the formulation of the question that is the problem. Things are not simply intelligent or not; they are intelligent to some degree. Moreover it is not just a spectrum but a multidimensional spectrum. Some people are more intelligent at one thing than another, for example.

  • jgh

    Fraser: yes, you do use logs when multiplying. A floating point number is held as m*2^e and to multiply two floating point numbers (m1*2^e1) * (m2*2^e2) you multiply the mantissas and add the exponents. ‘cos it’s logs. result=(m1*m2)*2^(e1+e2) and normalise.

Yes, binary division is different to decimal division, but only because once the base is binary it’s all shifts and adds. When I first read up on binary division and multiplication in (what I call my Orange Book, and have forgotten what it’s called as I’ve lost it) it demonstrated long division/multiplication in decimal followed by the equivalent in binary, and it was a revelation. Of course! It’s “just” the same as decimal. Prior to that I’d just encountered sample code with no explanation.

Understanding the *why* of long division and long multiplication is vital to understanding the *how* of things like writing an n-bit routine with less-than-n-bit instructions (eg 32bit x 32bit with 16bit MULs).
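You can see the “it’s logs” part directly in Python, where frexp/ldexp expose the mantissa-and-exponent representation (an illustration only, not how the hardware actually does it):

```python
import math

def fp_multiply(a, b):
    """Multiply two floats the long way round: split each into
    mantissa * 2**exponent, multiply the mantissas, add the
    exponents (that's the 'logs' part), and renormalise."""
    m1, e1 = math.frexp(a)                 # a == m1 * 2**e1, with 0.5 <= |m1| < 1
    m2, e2 = math.frexp(b)                 # b == m2 * 2**e2
    return math.ldexp(m1 * m2, e1 + e2)    # (m1*m2) * 2**(e1+e2), normalised

print(fp_multiply(6.25, 0.8))   # → 5.0, same as 6.25 * 0.8
```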

  • Fraser Orr

    @NickM
    It’s not really thinking but it’s getting closer to looking like it is.

I missed this in Nick’s comment above, but it is relevant to my previous comment. What is the difference between “thinking” and “looking like it is thinking”? When I “think”, what is happening is a billion trillion chemical reactions: neurotransmitters flowing through my brain through various connections and synapses, chemical processes sending signals along my axons, and a whole bunch of other things. But it is just, ultimately, chemistry. So am I “thinking” or does it just look like I am “thinking”? Aren’t they really exactly the same thing?

I think for many people, though no doubt not the smart self-aware denizens of these parts, there is a big defensive thing going on here. We humans want to protect our turf, and are very uncomfortable with the idea of a computer getting into our private space, the things that only we are, that only we can do. And I think it is a sense that comes from a general discomfort with the reality that what we call consciousness is just a materialistic thing based on chemistry. It is to challenge the idea that we have a soul. That we are special. That we are not just an animal or a soup of chemicals. Saying “computers are not intelligent” is also saying “I’m special, I’m not just a meat robot”. And that is hard to face.

  • bobby b

    SG: “Also, learning a look-up table is rote learning (learning facts), while learning an algorithm is learning a skill.”

Okay, here’s where I am either getting obnoxiously pedantic, or just exhibiting my lack of knowledge of the terms. Feel free to let me know which is the case.

If someone says “five times six”, and my brain immediately yells at me “thirty”, I have memorized one rote fact.

    If I hear “5×6” and I pull out the table and I know to look at one axis for a 5 and the other axis for a 6 and follow the column and row out to intersection where it says “30”, I’ve run an algorithm.

    And, in fact, if my brain (as in my first example) has yelled out “30” to me, do I know that it didn’t follow some learned interior algorithm to that result? Is rote memorization really just a hidden algorithm? Or is it a specific memory address where “30” resides?

    This may just be angels-dancing-on-pinheads territory, but I’m trying to figure out what AI is really doing, and this seems to me to be a needed step in that knowledge.

  • Lee Moore

    I happened to see a little video by Hikaru Nakamura (who is certainly in the top 10 chess players in the world, maybe top 5) about one of his chess games. He’s definitely a worse chess player than Magnus Carlsen, but better at explaining what’s going on. One of his throwaway remarks was something like “and that shows how bad humans are at chess, compared to machines.”

    I remember not long ago – the Kasparov era – when it was a big thing that chess programs were on the verge of being able to beat humans. Now all the best human chess players are totally unfazed by the fact that they’d lose ignominiously to Stockfish. They learn from the computer things that they’d never have worked out for themselves. And when playing short format games – which is mostly what they do these days – they deliberately choose moves they know are objectively sub-optimal, because they reckon that that line will lead to simpler calculation and so quicker moves later on. A computer doesn’t bother with that, because it calculates even wacky lines lightning fast.

    The fun thing is that the humans just don’t care that the machines are better. They’re not competing against the machines (except as practice.) Just like women athletes are happy to train with the guys. But not have guys in their actual competitions.

    Humans can content themselves with the fact that if the task is {play a game of chess, while drinking a cup of coffee} they’re better than machines. If someone builds a chessbot that can do both better than humans, at the same time, humans are still better at {play a game of chess, while drinking a cup of coffee AND scratching your nose.} And so on. We’re flexible machines.

  • Lee Moore

    As to bobby’s 5 x 6 = 30 question, I think you can tell from the time it takes to get the answer. A memorised 30 will be delivered to output a tad quicker than a calculated 30. Which the boffins can measure.

It’s a bit like regular and irregular verbs. Regular verbs follow a regular algorithm – I pontificate, he pontificates, they pontificated, pontificating and so on. The brain stores the root “pontificat-”; and the algorithm common to regular verbs – “add -es in the third person singular” etc.

But for irregular verbs – I am, she is, they were, being – we store each answer as a word in its own right, immediately accessible without recourse to a time-consuming algorithm. These words are the commonest in the language and come instantly to mind, without the need for algorithmic processing. But obviously they consume much more memory than regular verbs.

    There are fun experiments with kids from which you can show that kids learn the regular algorithm and can produce regular versions of irregular verbs naturally, and then these are specifically overwritten with the irregular version, when the kids learn that they’re dealing with an irregular.
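A toy sketch of the stored-versus-computed distinction (the verb list and the rule here are illustrative only):

```python
# Toy model only: irregular past tenses stored whole (a look-up table),
# regular forms produced by rule (an algorithm).
IRREGULAR_PAST = {"be": "was", "go": "went", "run": "ran"}  # words in their own right

def past_tense(verb):
    if verb in IRREGULAR_PAST:      # the stored form wins...
        return IRREGULAR_PAST[verb]
    if verb.endswith("e"):          # ...otherwise apply the regular rule
        return verb + "d"
    return verb + "ed"

print(past_tense("pontificate"))   # → pontificated
print(past_tense("go"))            # → went, not the child's regularised "goed"
```

The kids’ overgeneralisation is exactly what you get if the look-up step is missing and only the rule fires.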

  • William H. Stoddard

    I’ve tried Nightcafe again, with a different e-mail, and the e-mailed links are still not showing up in my inbox. Nor in my trash or spam. It doesn’t seem to be functional.

  • Fraser Orr

    @jgh
Understanding the *why* of long division and long multiplication is vital to understanding the *how* of things like writing an n-bit routine with less-than-n-bit instructions (eg 32bit x 32bit with 16bit MULs).

    But maybe one in a million people need to understand that and yet this is your argument on why we need to teach it to every kid? It is like insisting every kid understand the difference between succinylcholine and rocuronium. The people who need to know sure as hell need to know, but really we don’t all need to know. And the people who really need to know can look it up in a book.

    I think kids should gain some mastery of long division but not because they need to learn how to code it using binary representations of floating point numbers, but because it helps to understand what division is and how it works. And that understanding helps them understand how to apply the tool in real world situations. Unfortunately, if you teach it as an algorithm as is often done in skool, they don’t gain that understanding. I’m not really sure but I think that a lot of the thinking in “new math” was to try to get away from that algorithmic approach and try to teach a grok-able understanding of it. And like I say, it is a problem that cries out for computerized manipulatives to help them see what is going on.

  • NickM

    William,
    Odd. I can’t think what’s up. I’ll have a look/ask on Discord (the chatroom integral to NC).

    jgh & Fraser,
I’m with Fraser on this one. I learnt LD at primary school and then only needed it for a single elective module as an undergrad. I haven’t needed it since. So, I don’t see it as a keystone of a general maths education. What I do see as much more important for a general maths education is things like probability and stats, grasping orders of magnitude, an understanding of margins of error*, estimation, basic cost/benefit analysis… Basically accepting the fact that for everybody who writes machine code there’ll be thousands trying to work out the best type of mortgage for themselves or figuring when paying twice as much for a phone is worth a 25% boost in capability. Some people will have to know how to create maths or use maths but everyone has to know how to consume it.

bobby b,
No. That isn’t pedantry. It’s a good question. I know Lee Moore provided an answer but I think it’s still worth thinking about. Especially for things a bit trickier than 5×6. For example I know the Euler Identity when I see it. It’s hard-wired into me after all the years. But I also know the why and actually understand it because of all the time I spent doing physics with all those waves in both trig and exponential notation**. It’s not just a fact I know, like “France’s longest land border is with Brazil”.

    Fraser,
Well, one thing is for sure – modern generative AI has kicked (the always dodgy in my opinion) Turing Test into the long grass. As to John Searle’s Chinese Room… I dunno. The thing about AI as it currently stands is, for me, not so much about whether it understands what it is doing but whether it has any volition. I can give DALL-E3 prompts to produce pictures of fluffy kittens from now until Ragnarok and it will. What it won’t do is say, “FFS Nick, I’m bored with cats, can’t I for once do a puppy?” What I can’t do is ask for anything pornographic***. I don’t think it is possible to argue that is because it is prudish in a moral sense of good or evil. The people who produce these models have deliberately (and quite crudely) held a shotgun to the model’s head because they are terrified of the public backlash. And when I say “crudely” I really mean it. The word “frontal” in almost any context gives the model a fit. That is clearly programmed behaviour and not learned.

    *Something social science and economic papers are very bad at.
    **Physics is all waves – except when it’s particles of course!
***The definition used by every AI art model I have tried makes Mary Whitehouse look debauched. I know this from trying to get them to produce variant images of classical nudes. AI won’t go near anything you can find in public architecture or major art galleries.

  • Snorri Godhi

    NickM: Yes, “generative AI” is useful, and i do believe that something like LLMs are modules within the human brain/mind. Consider the chess AIs mentioned above: they are based on tree search, as they were since the beginning. But, since it is still impossible even for a supercomputer to search the entire tree, some heuristics must be used to prune the tree. I think that something like LLMs would be good at providing the heuristics.
    But you cannot rationally choose chess moves with heuristics alone: you still need tree search, which is to say, planning ahead.
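A toy sketch of that division of labour (the “game” here is made up, and real chess engines are vastly more sophisticated): the heuristic judges positions, the tree search does the planning ahead.

```python
def minimax(node, depth, maximizing, children, heuristic):
    """Depth-limited minimax: the search supplies the planning-ahead,
    the heuristic supplies the judgement at the leaves."""
    kids = children(node)
    if depth == 0 or not kids:
        return heuristic(node)
    scores = [minimax(k, depth - 1, not maximizing, children, heuristic)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Hand-made toy "game": positions are numbers, moves add 1 or subtract 2.
children = lambda n: [n + 1, n - 2] if abs(n) < 4 else []
heuristic = lambda n: n   # stand-in for a "material count"
print(minimax(0, 3, True, children, heuristic))
```

With heuristics alone you would only ever look one move deep; with search alone you could never prune, nor score, the tree.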

    As for needing intelligence to run an algorithm, i was considering only humans. Machines do not need any intelligence to run a simple algorithm, because humans (or AIs) design the software, or dedicated hardware; and humans also decide when a specific algorithm is appropriate to a specific problem, another instance of rational choice.

    — Which is why i find it pointless to argue with Fraser (except for the benefit of others listening in 🙂 ). I told him that intelligence involves the ability to make rational choices (rather than random choices, as ChatGPT does); the ability to plan ahead; and the ability to detect contradictions. He keeps ignoring all of this. He actually asked me what i mean by ‘choice’, and did not address my answer.
    Instead, he keeps repeating basically empty statements like this:

Intelligence is, at its core, the ability to process information into new information.

    It is empty because not all abilities to process information can be considered intelligent. One has to specify **what kind** of information processing is intelligent, if one wants to say something meaningful.
    (Paul Marks makes a similar mistake in condemning compatibilism: Paul does not realize that thesis (a): a deterministic nervous system can make rational choices, does not imply thesis (b): any deterministic nervous system can make rational choices.)

BTW: fish cannot do long division, but are intelligent enough to make rational choices which involve planning ahead.

  • Lord T

    NickM, Fraser Orr,

I’m not against technology. I actually believe it will help us resolve all our problems in one way or another.

    The article was about AI at school. I believe we need to learn the basics without technology so we can make better use of the technology. Learn basic thinking, analysis and use it.

I too was at the leading edge of technology, built a UK101 and taught myself to code, got a job in coding and moved on from there. The introduction of Wolfenstein 3D, quickly followed by Doom and then Quake, made great leaps forward for games, and within a few short years of my coding dots around the screen people were generating NPCs and amazing graphics. It was a golden age.

Even now though I still read books and question everything. That is the difference. Some lying scumbag politician makes a statement and we think ‘Hold on. That doesn’t make sense’, but today’s school leavers just take what is said, even if it contradicts reality, and believe it. Even to the point where they are willing to kill you because you are a heathen.

I believe it is because they just don’t think about anything, and retain no knowledge, because it can be googled if they need to find it out again.

  • Snorri Godhi

    As for bobby’s question, i subscribe to everything Lee Moore said, but i agree that it is still worth thinking about, and shall try to take a small step in this direction, since it overlaps with what i wrote above.

If someone says “five times six”, and my brain immediately yells at me “thirty”, I have memorized one rote fact.

    If I hear “5×6” and I pull out the table and I know to look at one axis for a 5 and the other axis for a 6 and follow the column and row out to intersection where it says “30”, I’ve run an algorithm.

    Quite right. What i should have written yesterday is that a look-up table contains facts, but in addition you need an algorithm to search it.
    But please note that the latter is not an algorithm of iterative correction.

    And, in fact, if my brain (as in my first example) has yelled out “30” to me, do I know that it didn’t follow some learned interior algorithm to that result? Is rote memorization really just a hidden algorithm? Or is it a specific memory address where “30” resides?

This is an important question. My view is that learning times tables is likely to be equivalent to acquiring a set of conditioned reflexes. We learn to associate e.g. ‘5 x 6’ with ‘30’; and when we hear ‘5 x 6’, out pops the answer.
    This is connected to the issue of iterative correction, because a reflex arc typically provides little opportunity for iterative correction, other than some simple negative feedback.
    NB: I see planning, rational choice, and detecting contradictions as forms of iterative improvement.

    Basically, i see 2 main forms of brain function: reflexes and iterative improvement, for lack of better words.

    This may just be angels-dancing-on-pinheads territory, but I’m trying to figure out what AI is really doing, and this seems to me to be a needed step in that knowledge.

No, these are perfectly valid questions, to which i am unable to fully articulate an answer. I actually published a refereed paper about this sort of issue, but it is framed in the concepts of Bayesian belief networks and computational complexity. Only the basic concepts, but even so, not for the uninitiated — and i am not sure that it gives a complete answer, anyway.

  • neonsnake

    But maybe one in a million people need to understand that and yet this is your argument on why we need to teach it to every kid?

    That’s fine, and that rings fair for a lot of stuff. *Most* people don’t need to understand the finer points of statistics (my degree speciality), or biology, or whatever. It’s clear that they don’t.

    But where stuff starts to fail is when you’re unable even to *ask* the question, because the subject has been obscured so much that the questions are no longer being written by humans.

    A trivial example: I have two cars. I have a 2015 Ford Mondeo, and a 1978 MGB. If the MGB has an issue, I can *at least* have a stab at diagnosing it. I have a Haynes manual, I can watch and go “I wonder if the fan should be spinning?? Oh crap, it’s not!” and follow the various wires until I get to a point where I go “Um. I think that bit probably shouldn’t be hanging off.”

    I can’t necessarily *fix it* with the tools/skills I have, but I can at least explain what’s wrong.

    The Mondeo? I can’t even change the tyres, let alone diagnose a problem with it. I can change the oil, fill the windscreen washers and the radiator, and that’s about it. (Believe me, as a car person, the moment you have to call the AA to change a tyre is a humbling experience)

    AI and LLM is like that – it removes your ability to fix something yourself, because it stops you from learning, and you place your trust in something that is inherently untrustworthy. When that happens, you’re at the mercy of other people (for better or worse).

    Until that changes, I’m avoiding AI and LLM like I would the plague. It’s too iffy; I’ve seen too many examples of “facts” that were so wrong it would be dangerous to trust them.

    (I’ve no interest in AI images, FWIW, they’re just weird, unattractive and tricksy, and I say this as a photographer)

  • Fraser Orr

    @neonsnake
    Until that changes, I’m avoiding AI and LLM like I would the plague. It’s too iffy; I’ve seen too many examples of “facts” that were so wrong it would be dangerous to trust them.

    Isn’t that a pretty good description of arguing with people on the internet?

  • neonsnake

    Isn’t that a pretty good description of arguing with people on the internet?

    Pfft. Yeah, it is.

    Upthread I said something along the lines of “the argument looks great when you don’t know anything about it, and falls apart completely when you do”, which is a decent approximation of arguing with people on the internet. I refrained at the time from pointing it out.

  • Fraser Orr

    I came across this article about some kid who broke a bunch of world records for mental math:

    30.9 seconds to mentally add 100 four-digit numbers
    One minute and 9.68 seconds to mentally add 200 four-digit numbers
    18.71 seconds to mentally add 50 five-digit numbers
    Five minutes and 42 seconds to mentally divide a 20-digit number by a ten-digit number ten times
    51.69 seconds to mentally multiply two five-digit numbers ten times
    Two minutes and 35.41 seconds to mentally multiply two eight-digit numbers ten times

    Which is pretty impressive, in fact amazing that a human can do that. But of course it is worth pointing out that the computer that controls the microwave in your kitchen could do all of these together in less time than this kid could blink his eye. Humans are not good at math because our brains are not a good mechanism for doing math, especially linguistically prompted math. Plug a math unit into an LLM, using the LLM to convert the linguistic prompt into actual math, and it will transform the whole field.
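    As a toy illustration of that division of labour (entirely my own sketch – real LLM tool-calling is far more involved): the linguistic layer only has to translate words into an expression, and the arithmetic is then handed off to a deterministic “math unit” that computes exactly.

```python
import ast
import operator

# Hypothetical word-to-symbol tables for the "linguistic" layer.
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
OPS = {"plus": "+", "minus": "-", "times": "*"}

def to_expression(prompt):
    """Translate e.g. 'five times six' into '5 * 6'."""
    tokens = [str(WORDS.get(w, OPS.get(w, w))) for w in prompt.lower().split()]
    return " ".join(tokens)

def math_unit(expression):
    """Exact arithmetic -- the part human brains (and bare LLMs) are bad at."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}
    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

print(math_unit(to_expression("five times six")))  # -> 30
```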

    It is worth pointing out that humans are PARTICULARLY bad at understanding probability and statistics. I don’t necessarily mean the symbolic manipulation and calculation so much as translating it into action. If I can share a story of this from my life that made me laugh. During the height of the Covid pandemic I was at a gas station waiting in line. The cashier was wearing a mask but had pulled it down below her nose because apparently she liked to breathe. So a customer walks up and goes off on her. “Don’t you know that you are spreading death? Do you want to kill people? Do you want to murder me!” the enthusiastic customer yelled. Suitably chastened, the cashier pulled the mask up over her nose and asked the customer what she wanted. “Two cartons of Camel cigarettes please,” the customer demanded, proudly feeling she had scored a point.

    But I think this is built into us because the cost benefit analysis of being an ape on the African plains is very different than the cost benefit analysis of living in suburban Dallas and our brains are bursting with such obsolete biological wiring. Humans make terrible choices all the time. We are deeply irrational, and are far more driven by our emotions, instincts and hormones than we would like to admit. Truthfully the decisions that we make rationally are a tiny island of sanity in the ocean of irrationality that drives our human lives.

  • “Do you want to kill people? Do you want to murder me!” the enthusiastic customer yelled.

    Frankly, part of me does want to, if not kill, at least beat that customer to a pulp.

  • neonsnake

    Frankly, part of me does want to, if not kill, at least beat that customer to a pulp.

    Brilliant! Now do the same for every time I’ve said that.

  • Paul Marks

    NickM.

    “AIs” such as ChatGPT, will not only present “facts” that are not facts – they will also make up sources, including articles that do not exist.

    This is not because the “AIs” are evil – they are neither good nor evil as they have no Free Will (without Free Will there is no moral good or moral evil – because there is no moral choice), they are just machines which do what they do.

    Relying on them is a mistake (but you know that) – because they are not really intelligences at all – see above.

    Of course, relying on a human source is also a mistake – as humans can lie, or just be mistaken.

    Always go back to the original data, and check it.

    Still, yes, it can be useful to know what the establishment are saying – and an AI will reflect that.

  • Lee Moore

    Paul has managed to explain AI to lil ‘ol luddite me.

    So “AI” is just a synonym for “journalist.”

  • bobby b

    “Always go back to the original data, and check it.”

    At some point, won’t it be impossible to tell which is original data? If the systems are incorrect, but are being trained on each other . . . .

  • GregWA

    bobby b, “…won’t it be impossible to tell which is original data?”

    I think we are at least 10 years past that point of no return.

    If I’m a full time journalist (a real journalist) with great access, then I can find what I think is the original data and go about confirming it. But I’m not that and neither is anyone else, so how do we decide a piece of data is the original? I think it boils down to how we used to figure out which things we read in the papers were true…you just keep sifting and comparing stories and use common sense. Except that job has gotten harder because we now have a billion “papers” to choose from.

    Maybe the AI/LLM/bots can help us? Could they be usefully prompted to answer the question, “I’m interested in un-biased news or at least news from sources whose bias is clear. What sources of news in the last 3 years have been shown over time to be the most accurate and un-biased? To answer this question you will have to iteratively remove from your training data all unreliable sources.”

  • Fraser Orr

    Can I suggest what seems to me to be a very obvious answer? If you are using ChatGPT, ask it to include its sources and it will tell you. You can then check them if you like.

    “Explain the causes of the Spanish Civil War and include the sources for your data.”

    BTW, in this debate I have learned more about the Spanish civil war than I ever cared to know. 😊

  • Lee Moore

    I’m still stuck on bobby b’s 5 x 6 problem. OK, I’m a bit slow.

    My problem is that I can’t think of how to calculate 5 x 6 – ie via an algorithm. In my head I mean. I suppose I could visualise an abacus and count the beads, or mentally construct a 5 by 6 rectangle of dots, and count them. But I’m pretty sure I’ve never actually done that, and it seems a very laborious approach.

    So I can’t actually see any other way of doing it than by memorising my times tables (at least up to 9). Then maybe I get to use an algorithm for 25 x 46. But 5 x 6 seems irreducible, without mental rectangles, which seem silly. Even little mental tricks like recognising that 5 x 6 = 10 x 3 still require knowing that 10 is twice 5 and 3 is half 6. Which I’m sure I’m not taking time off to calculate. I just know it and have it stored.
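    For what it’s worth, there is a genuine algorithm hiding under 5 x 6 that needs no mental rectangles: multiplication as repeated addition, which is roughly how early computers without a multiply instruction did it. A sketch (my own, purely illustrative):

```python
def multiply(a, b):
    """Compute a * b as repeated addition -- running an algorithm,
    as opposed to recalling a memorised fact."""
    total = 0
    for _ in range(b):  # add a to itself b times
        total += a
    return total

print(multiply(5, 6))  # -> 30
```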

  • bobby b

    “It is like insisting every kid understand the difference between succinylcholine and rocuronium.”

    I wouldn’t want to live in a world where this wasn’t common knowledge.

  • Paul Marks

    Lee Moore – MODERN journalist.

    Not the old shoe leather journalist who walked around their local neighbourhood, talking to ordinary people and keeping their eyes open.

    AI is like a modern journalist – someone who went to university and reads (and reworks) government and corporate press releases, and goes to swanky events.

    No disrespect meant to GregWA and other journalists around here.

    bobby b – it is the original data on (say) historic temperature figures, if it is in an old leather-bound book that you are holding in your hands.

  • Paul Marks

    “Surely they would not lie on basic things like historic temperature figures” – if anyone is thinking that, yes-they-would, and they do lie about such things.

    Go back to the original data – although that is likely to mean finding old physical records (the next move will be to destroy those books – or just throw them out “we do not need them any more – it is all on computers now”).

    Look out for words such as “estimated” or “adjusted” – such words mean FAKE.

  • bobby b

    SG: “But please note that the latter is not an algorithm of iterative correction.”

    Apologies, but I do not know what this means. Closest I can figure is, the digits of pi constantly overshooting but by less and less and homing in closer to true, but I can’t relate that to the discussion.

  • Snorri Godhi

    Bobby: don’t worry too much, ‘algorithm of iterative correction’ is a term that I made up in this thread. The fact is, I am not sure how to strictly define the concept even in technical terms. But the idea is: if you (or a machine) take a guess, see how it works (in an internal model, not the real world, which would be too expensive in many cases), then try to improve on the guess (or try another guess altogether), see how it works, and iterate between attempted improvement and checking how it works, then you will hopefully converge to a solution.
    I have given a few examples above: negative-feedback loops, long division, and computer chess. I might add Darwinian (but not Lamarckian) evolution. Quite an assortment, eh? Other such algorithms are more technical (and I worked on three of them).

    One might object (and more than one objected to me): why not do it all in one pass, instead of iterating? To which I reply: why are such algorithms in use, if they can be replaced by direct, one-pass algorithms?
    This is where I got into issues of computational complexity. In brief: yes, you can do it all in one pass, but that pass would take a huge amount of computation to get the same accuracy as the iterative algorithm. The huge power consumption of LLMs appears to be vindicating my claim.

  • GregWA

    Snorri reminded me of this (to date myself a bit): my first calculator, which I bought in high school, did not have a square root function. All I could afford was a four-function calculator. So, to do square roots, I would take a guess, divide the number by that guess, and then generate a new guess by splitting the difference between the result and my previous guess. I would then divide by the new guess and iterate. I got very fast at this and could get a square root to 3-4 digits in a few seconds.

    There is no reason to ever do that again! 🙂

    And obviously, I was not a whiz with a slide rule. I had one but did not know how to do square roots with it (I assume that’s a thing… but it’s been a while, so I’m not sure).
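    If I have understood GregWA’s trick correctly, it is essentially the ancient Babylonian (Heron’s) method, and a neat concrete case of the guess-check-improve loop discussed above: divide by the guess, average the two, repeat. A sketch under that assumption:

```python
def sqrt_iter(n, guess=1.0, tolerance=1e-6):
    """Babylonian method: if guess is too low, n / guess is too high,
    so their average is a better guess.  Iterate until the guess
    squared is close enough to n."""
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2.0  # check, then correct
    return guess

print(round(sqrt_iter(2.0), 4))  # -> 1.4142
```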

  • neonsnake

    So, I’ve been doing a bit more research.

    Heavy caveat – I am not an expert in this, so take very large grains of salt etc etc

    LLMs appear to return the most statistically likely grammatically correct response, with emphasis on the grammatically correct. The “correctness” is accounted for in the “statistically likely”, I believe, but they will err on the side of “plausibility” over “correctness” every time. I can only assume that the data they are trained on has some bearing on this, but I don’t know enough to be sure.
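    To make “most statistically likely” concrete, here is a toy sketch of my own (a massive oversimplification of what real LLMs do): a model that always picks the continuation with the highest learned probability will produce the *plausible* answer whether or not it is the *correct* one.

```python
# Hypothetical next-word probabilities such a model might have picked
# up from its training text.  Nothing here checks truth -- only
# frequency in the data.
learned = {
    "the capital of australia is": {"sydney": 0.6, "canberra": 0.4},
}

def complete(prompt):
    """Return the most statistically likely continuation."""
    options = learned[prompt]
    return max(options, key=options.get)

print(complete("the capital of australia is"))  # -> sydney (plausible, but wrong)
```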

    I still come back to: when I’ve asked them stuff I know about, they’ve got it wrong and dangerously so. If I asked them stuff where I’m not personally educated enough to say “um, that doesn’t sound right or pass the smell test”, I cannot trust them not to lead me in totally wrong, but plausible, directions.

    My stance at the moment is that they are useless and dangerous, except in some very specialised situations where the person using them is an expert (in which case, I’m not wholly sure why they’re needed? Crap example, but Adobe’s AI for replacing parts of a photo is never as good as doing it manually, and saves, like, less than a minute vs doing it yourself)

  • neonsnake

    There’s also a thing about having a set of skills – I’m not a “rugged individualist” in the sense that everyone should know everything, because that’s patently absurd, but there is a point where having enough skills to look after oneself is useful (one of those skills is, and should be, being able to network so you can find the person who can sew, for instance), which AI *might* take away from the more naïve folks.

    This ain’t a particularly political post, obvs, it’s just plain common sense.

  • Fraser Orr

    @neonsnake
    My stance at the moment is that they are useless and dangerous, except in some very specialised situations where the person using them is an expert

    I’m pretty sure your stance is wrong, though your caveat is sufficiently broad as to dig you out of any contradictory examples. But the simple fact is that LLMs are used a huge amount now for practical reasons. For example, they are generally pretty good at writing code and are used a GREAT deal in the computer programming business. They are great for generating ideas, or putting together a draft plan. For example, one of my kids is starting a business and he used it to generate a marketing plan and some ideas for blog articles. I just finished writing a book and I needed some simple art to illustrate some points — ChatGPT did a great job generating that for me. It isn’t going in the Louvre, but it is perfectly adequate for what I need. (And BTW, you should check out some of the artwork NickM has done with NightCafe — it is beautiful, and sophisticated.)

    Of course that doesn’t mean the output is used as is, but these tools are EXTREMELY good for solving the tyranny of the blank page. But like I say, we can all sit about navel-gazing, wondering whether they can theoretically be useful, while completely ignoring the fact that they absolutely are EXTREMELY useful and powerful tools being used by people all over the world today.

    They are neither useless nor dangerous – or no more so than any tool can be dangerous if used wrongly. Presumably you have a hammer and saw in your garage? And if you are an American (which from your spelling I am guessing you are not), a gun in your home. Those things sure can be dangerous. They are also useless without the input and guidance of a human.

  • neonsnake

    I’m pretty sure your stance is wrong, though your caveat is sufficiently broad as to dig you out of any contradictory examples

    Sure.

    I mean, look, my experience is rather limited – I’m a car person, a gearhead, and I asked it a bunch of questions after it got wrong the order in which to connect the leads, and it was consistently incorrect.

    I’m also a foodie, and did the same, and it was, again, consistently incorrect. Some of this was slightly amusing, and some of it was…not. I wasn’t dicking about with the mushroom example.

    (and, no, I’m English. And yes, I have a pretty well-stocked garage of tools. I don’t have a gun, FWIW, but I do have other… tools. That’s not the point tho)

  • neonsnake

    I’m okay with looking at the NightCafe stuff but a search doesn’t bring anything up. Any AI stuff I’ve seen so far has been… questionable… and still doesn’t change my point that it’s fucking vital to understand the basics, whether that’s for art or other skills.

  • NickM

    Paul,
    OK, a good school pal of mine decided, after his first degree in physics, to do a PhD in historical astrophysics. This is using historical sources. I thought he was mad until half-way through the first pint in the pub… OK, example. We haven’t had a supernova in this galaxy for over 400 years. The last was in 1604 AD and was observed by Johannes Kepler (and basically everyone – these things are bright enough to be visible in daylight). So, the thing is, because of stuff like that and the simple fact that astronomical phenomena vary dramatically in the time scales they occur over (from less than a second to billions of years), the historical record is very useful. But it’s tricky because it is about comparing naked-eye accounts with modern instruments. It is also about comparing modern understandings with people recording phenomena that they might have thought of as evidence of the Gods being peeved, and therefore recorded as such. Note here that both Johannes Kepler and Tycho Brahe were astrologers by profession. So, it is a fascinating subject. And by the end of the second pint with John I’d gone from thinking he was throwing away his skills on a wild-goose chase to thinking, “That’s so cool!”

    In many sciences historical records are important. They certainly are in astrophysics because we just can’t set off a stellar collapse in a lab. Astrophysics is in many ways a historical science in a way that, say, particle physics isn’t.

    But… The historical record requires a level of interpretation to an extent that actual replicable experiments don’t.

    Climatology is similar. Historical results need to be interpreted. And, yes, there are intrinsic issues here which you can’t get around. And there is also the elephant. Scientists seek truth. But they have to seek truth through funding. Green Ideology has utterly skewed that funding. A “Climate Crisis” has elevated climatology from a bit of a backwater into the hottest(!) thing around. That inevitably skews interpretation.

    Many of the first advances in nuclear physics were made by blokes in tweed jackets* in sheds. Then we get WWII (a crisis) and we go from Wallace and Gromit (or Marie and Pierre) to Oak Ridge and Los Alamos. The Manhattan Project cost $2 billion (and that was back when $2 billion got you a lot of bang for your buck). As an aside, it can’t be said the US Government didn’t get their money’s worth. I mean, I don’t think there have ever been more brutally explicit examples of “if we do x then y” than Trinity, Hiroshima or Nagasaki.

    I don’t think climatology has become exactly a conspiracy in the formal sense. It is more a confluence of interests (Green ideologues, dodgy businesses, subsidies, tax-breaks, stroppy Scandi adolescents, Al Gore…) which unfortunately leads to a perversification of science for “reasons”, and there are a lot of them.

    *And Marie Curie, obviously!

    PS. The John I referred to is (last I heard) a prof at an Ivy League Uni and one of the foremost experts in the World on the Antikythera Mechanism – which kinda neatly brings us back to computing!

    PPS. Thanks, Fraser, for your comments on my pics 😊

  • Fraser Orr

    @NickM
    A curious thing about scientists in the press is that they are incorruptible truth seekers when they are finding things the press want them to find, and sold out charlatans when they don’t. For example, when scientists publish a paper saying that fracking doesn’t cause earthquakes and it turns out the research was funded by Shell Oil, then we toss it in the pile of garbage. Science funded by “Big Oil” doesn’t count. When scientists publish data saying a drug is safe, and we find the science is funded by Pfizer, we toss it in the pile of garbage because it is funded by “Big Pharma”. Or when scientists publish data saying GMO foods are safe, again, garbage because it was funded by Monsanto, by “Big Ag”. But when scientists publish data saying that the climate emergency demands that we give huge power and money to the government, nobody seems too fazed that that was funded by “Big Government”, the very people who would benefit from it.

    And to be clear, I think we absolutely should judge the credibility of science by who is funding it. It is just that we should do that consistently, even when that science happens to say what we want it to say. There used to be a principle in science that when you got the results you wanted, you should be an especially harsh critic of your methodology and data. This old-fashioned idea seems to have gone out with the horse and buggy. In fact we are at the point that even if someone does such a critically vital analysis, the science press itself will censor the story if it doesn’t fit the narrative. Which, for anyone who really believes in and advocates for science, is just plain horrifying.

    PPS. Thanks, Fraser, for your comments on my pics

    YW. A few people have been asking, so if you feel you want to I am sure a link to your work would be appreciated by many, including me.

  • Paul Marks

    Nick M – a scientist once said that one should use instruments when possible (not just the naked eye) and one should have other people (reliable people) check one’s own research results – because sometimes we see what we want to see, and (without knowing it) interpret data in a distorted way – to fit our pre-existing theories.

    That scientist was Roger Bacon – and he was writing in the 1200s.

  • Paul Marks

    Neonsnake.

    The division of labour is very capitalist – it was Karl Marx (in The German Ideology, 1845) who argued that a person could be skilled at everything all at once – so he could be “a hunter in the morning, a craftsman [or something else – my memory fails me] in the afternoon, and a critic after dinner”, wonderful at everything, able to do anything, or nothing, on whims.

    How? Well you see “society will organise production” – that is all we get, Dr Marx does not explain any more, and when asked – he said it would be “unscientific” to give any explanation at all as to how socialism would work.

    It is a great mystery how this joke of a thinker, Dr Karl Marx, got any followers.

    As for socialists who are “ex” Marxists – the leading “moderate” socialist of my youth was Denis Healey (he was the arch foe of “extreme” socialists).

    Under Denis Healey the top rate of tax on investment income (i.e. on “capitalists”) was 98% – 98%.

    This is indeed less than 100% – so he was not a Marxist, he was not “extreme”.

    Unsurprisingly investment in industry in post World War II Britain was very low – between the powers that government gave the unions (see W.H. Hutt “The Strike Threat System” – their power came from Acts of Parliament) and crushing taxes on investment income – there really was no point in investing in British industry, as there would be no real return on such investment.

  • Paul Marks

    My father, Harry Marks, did invest in British industry – back in the 1960s, in spite of union power and crushing taxation of investment income.

    The accounts of a certain investment company looked very good – in spite of union power and crushing taxation it was, according to the accounts, making good investments in British industry.

    It was Slater Walker (Great British industry – or whatever it called itself) – and its accounts were fake, of course they were fake – how could they be anything else under the conditions of Britain at the time?

    But Harry Marks believed in Britain – and he flung in money that had taken a lifetime of back-breaking work to make.

    I watched him being destroyed – I took note that the most impressive looking evidence can be a pack of lies.

    I also took note that the British establishment have no honour – which has been confirmed many times since then.

    Working out that claimed progress was a lie was also easy – as I watched good buildings being replaced by bad ones, in Gold Street and other streets of the town (indeed all over Britain). And good teaching methods replaced by bad ones at school – and the good books were also replaced by bad ones.

    There can be progress, it is possible, but progress means that what comes in is better than what was before – not worse. If it is worse – it is not progress, however “Progressive” it may be.

  • Paul Marks

    How does one make money in modern Britain?

    Well, not the way that Josiah Wedgwood or other industrialists got rich.

    No – the way to get rich in modern Britain is government contracts – repairing the roads (or rather taking the money and not really repairing them), or, for example, in “looking after children” by demanding perhaps a million-Pounds-for-a-child – for a year.

    How? Easy – buy a building to house said-child, and bill your local council for said mortgage – which will be only a few years long. Lots of money per year – so you pay off the mortgage, and you (NOT the council) get to own the asset. You get to own the asset – that the taxpayers paid for.

    Of course a fat bald councillor may get angry with you – but you have done nothing illegal, and no one understands when he tries to explain to people that this is not good. So snap your fingers under his nose – what can he do?

    The media certainly do not care – they are interested in “Islamophobic” (or whatever) comments NOT in where the money goes.

    This “Woke” stuff is very useful to corrupt corporations – “look at this bright shiny bauble – Diversity – Inclusion – what-ever-it-is, and you will not notice as I pick your pocket, indeed strip you down to your underwear”.

    This is the way you get rich in modern Britain – from the government, national and local. There are a thousand ways to do it.

    NOT by producing goods for private sale.

    How will it all end?

    It will end in collapse and mass suffering – but if the public can be kept distracted (with Wokeness or foreign wars, or whatever) some clever people will have made their fortune – and be far away from Britain when everything falls apart.

  • NickM

    Fraser,
    Yes, it is a very odd set of circs about who funded what and how people react to it. Not least because, for example, there is a huge intersection between the government and companies – whether “Green”, fossil fuels or pharma*. Indeed this is a huge issue. Far too big for me or this thread.

    As to the pics! Well, they’re all over the place really. But I think I do have some good ones.

    To look at my stuff: it’s: https://creator.nightcafe.studio/u/NickM

    If you use that link and join the fun I can earn credits. And, also, given the sheer quality of the discourse on just this Samizdata thread, I’d love to see what you do with the infinite paintbox and give you upvotes etc. But, seriously, I’d just like to see what you do.

    *And in the UK (especially) who exactly is the biggest customer for drugs?

  • Fraser Orr

    For those who think AI art is ho hum, please look at Nick’s work. It is really beautiful and soulful. And he is just a hobbyist.

    What I like about your work Nick is not just the quality of the rendering, but, as I say, it has soul and emotion. Of course I also like all the partially clad flawless ladies, but, I look in their eyes and my guess is they also have great personalities!!

    Not sure I’m the guy to try it out, however, my daughter is a really amazing artist. It would be interesting to see what she can do with it.

  • Paul Marks

    How much of all this “Woke” is real ideology – stuff the people coming out with really believe? And how much is a cover for corruption?

    I do not know. But when, for example, I listen (for hours) to representatives of a road company talk about “diversity”, “inclusion”, “Carbon” (they mean CO2) causing “Climate Change”, and so on – rather than what they are doing to repair the roads, then I have the feeling (perhaps an unfair feeling – but it is there) that I am in the presence of con artists.

    For example, what is the race or sexuality of a pot hole?

    As for people who take vast amounts of taxpayer money to buy properties “for the children” – and make sure that these assets (the buildings) are now owned by them (NOT by the taxpayers who paid for them) then I am certain I am dealing with con artists.

    But I cannot seem to convince people in authority (whatever “in authority” really means) of this.

  • GregWA

    Regarding the bit of thread here about LLMs and the like degrading the development of skills, I have a friend who is very good at doing most mechanical jobs, on his car, house, RV, etc. Before taking a months long RV trip around the US, he re-built the vehicle he used to tow his RV. He’s not a car guy, he just figured out what to do and did it.

    His comment to me about being able to do this was along the lines of ‘I’ve fixed, re-built, or built enough different things with enough varied tools, that when a new thing comes along, I can tackle it because it only requires a combination of, or variation on, things I know and have done.’

    I think that kind of skill set is what is likely diminished if things like LLMs are not used properly. Meaning if they are substitutes for being productive, creative, etc.

    Have we figured out how to teach the use of these new tools so that they enable a true advancement in an individual’s capabilities? I suspect ‘no’; these tools are too new, still developing. Good teaching likely only happens after a certain level of maturity. But are we on the right path to teaching and using these tools well?

  • Fraser Orr

    @GregWA
    But are we on the right path to teaching and using these tools well?

    This is a really good question Greg, and the answer is absolutely no. Instead people are running around like headless chickens saying “the robots are going to take over the world”, or “these tools are total crap, they just spit out what they have been told”, or “kids are cheating and we can’t tell”, or “we need an international agreement to ban or regulate these tools”, and various reactions that seem rather more emotional than rational.

    They are tools and so we need to learn to use them. That means finding their strengths, their limitations, where they are useful and where they aren’t. As I said way back at the beginning, when someone said “these kids turned in the essays without even checking them”, my question was “did anyone explain HOW to check them”?

    This is a whole area of discourse around AI where the conversation hasn’t even started. Perhaps not surprisingly, because, as you say, it is a moving target.

  • neonsnake

    The division of labour is very capitalist

    Paul – division of labour is as old as when humans first banded together (probably other species as well, I guess); it’s (hopefully obviously) simply more efficient to split things up so that those with more skill in an aspect of life do the bulk of it.

    Typically, people think of an assembly line in a factory when they think of this, and that’s okay, but it’s not quite the same thing. It is still – and I want to be very clear on this – more efficient, but there’s also a danger of deskilling if it’s taken too far. The analogy holds across many different industries, but it’s important to have a somewhat broad set of skills so that if, for example, someone is off sick, someone can replace them in their daily tasks.

    On the other hand, having the aforementioned “broad” set of skills is important to the individual. Again, hopefully obviously, it allows them to progress and adapt.

    Our current society degrades this. I’m not one for decrying the inability to write in cursive, but I will decry the inability of people to cook, or repair their own cars, or mend their own clothes, and so on (add in your favoured “thing” here). And this is very much driven by how society is structured towards an extreme of specialisation.

    An erstwhile poster on here used to say that a coder should spend all of their time coding, because that was the most “productive” use of their time, and should not learn to cook, because that’s what pizza was for. I found this laughable in the extreme.

    A little while ago, there was an interesting discussion on here about whether people should grow their own vegetables. I was profoundly disappointed at people (including those I normally agree with) saying that one shouldn’t do so.

    It’s not just that having various interests and hobbies makes one a more rounded person; it also insulates you against the shocks and crises that can (and will) come your way. A broader skill-set gives you more options and more freedom than a very narrow one, and today’s society discourages that breadth, which makes you more reliant on an employer (and therefore less able to set terms). Simply being able to bake bread from scratch and grow some of my own food gives me far more options than someone who cannot.

    “Actually Existing Capitalism” makes these things difficult to pull off, which reduces options. Deliberately or not, the result is the same: a population more and more reliant on other people to do tasks that used to be carried out in the home, and therefore more and more reliant on employment, which reduces their bargaining power and opens them up to exploitation. AI, in this context, may well accelerate this.

    As for Marx – the full quote is

    “…to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”

    I happen to know this quote. I’m not (as I’ve said many times before) a fan of Marx, but here his point is obvious and agreeable, and in context it is this: once one has met one’s daily needs – which does not (and should not) take eight or more hours a day, and took far more in his time, before the advances in technology we’ve had since – one should be free to pursue one’s leisure interests, be that fishing, hunting or whatever. Further, I would note that Marx is hardly the only person in the history of the world to ponder the idea that we would all be happier if we spent less time in our “job” and more time on things that actually interest us.

    I’m not massively interested in Marx, or in arguing for or against him. He hasn’t said anything (that I’ve seen) that hasn’t been said by other people with far better solutions than his, and to be honest, I’m not going to spend my limited time reading him; I’m not a state-communist, after all, I’m an anarchist.

  • neonsnake

    @Nick M

    They’re really good. I particularly like the more “Arts & Crafts” stuff, it’s a style I like, as well as some of the Steampunk stuff. Consider some of my earlier “snobbishness” slightly humbled.

  • NickM

    neonsnake,
    Thanks!

    Fraser,
    Thanks also. I hope you and/or your daughter enjoy it. It is interesting that you look at some of the pics and wonder what the creations are thinking. I have also done that and it seems remarkably common amongst the NC folks.

    One of the odd things I have learned using NightCafe is that what really helps is not my modest painting and drawing skills but my old-school photographic skills and a knowledge of art history.

    If I can give you or your daughter a tip: mashing up very different styles can produce some very interesting results. I shall say no more, because three-quarters of the fun is working it out for oneself and producing some shockers along the way.
