These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which does, uh, lots of stuff with AI (see their wild services page) and which appears to have been announced on LinkedIn two months ago. The founders section lists details beyond J.M.’s “over 7 years in the tech sector” which are interesting to read in light of J.M.’s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets”, whose employment prospects are dimming this year (and who knows a bunch of people in the same situation), and who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • Lauchs@lemmy.world

    For the most part, no.

    Smartphones could not do many jobs. Some people made a lot of money working in smartphone tech (apps, etc.), but this is a fundamentally different paradigm.

    That being said,

    “having a website”

    How many successful businesses don’t have a website nowadays?

    To use my work as an example: I work in a standard IT unit for a large organization. Right now, people send our team all sorts of requests; the easier ones get handled by new coders. However, AI will likely be able to do many of those same tasks faster and much cheaper than those junior devs. Someone (I’m hoping me) will get a raise and will presumably implement, train, and run that AI.

    Junior coders who don’t know how to implement it are about to get screwed. And on the other end of the spectrum, senior coders who made a living by being good at very niche knowledge are about to have their exclusive knowledge exploded by AI.

    I’m not actually sure learning AI will help much but what else can we do?

    • David Gerard@awful.systemsM

      “senior coders who made a living by being good at very niche knowledge are about to have their exclusive knowledge exploded by AI.”

      That sounds like precisely the opposite of what will happen, because LLMs are not competent at important detail.

      • Zed Lopez@wandering.shop

        @dgerard I do have some anxiety here, though: I know plenty of managers who’d look at the possibility and decide that they’re geniuses who have figured out a bold, brilliant plan to cut costs and have a great next quarter. Never mind every person with a technical clue saying it’s an irresponsibly bad idea – those naysayers are just focused on problems, not solutions.

        It’ll take enormous losses, outages, and data leaks to have a chance of getting through to them…

        • gerikson@awful.systems

          That’s just creative destruction. Plenty of companies in the past have taken big bets on fads and failed, and yet, capitalism has not collapsed and keeps on exploiting workers and the planet.

      • Aceticon@lemmy.world

        Well, a senior coder is somebody with maybe 5 years’ experience, tops.

        The only way I can see what is currently called AI even touching things like systems design, requirements analysis, technical analysis, technical architecture design, and software development process creation/adaptation is by transforming the clear lists of points that such processes produce into the kind of fluff-heavy, thick documents that managerial types find familiar and measure (by thickness) as work.

      • Lauchs@lemmy.world

        I mean that it is incredibly easy to ask an LLM how to do something in a language with which you are unfamiliar. So if you’ve made a living by being the guy who knows whatever semi-obscure language, things are about to change.

        • zogwarg@awful.systems

          That’s the dangerous part:

          • The LLM being just about convincing enough
          • The language being unfamiliar

          You have no way of judging how correct or how wrong the output is, and no one to hold responsible or be a guarantor.

          With the recent release of the HeyGen drag-and-drop video-translation and lip-syncing tool, I saw enough people say: “Look, isn’t it amazing, I can speak Italian now.”

          No: something makes it look like you can, and you have no way of judging how convincing the illusion is. Even if the output is convincing/bluffing to a native speaker, you still can’t immediately check that the translation is correct. And again, no one to hold accountable.

          • Lauchs@lemmy.world

            I am talking about coding languages. There are many ways to verify that your solutions are correct.

            • froztbyte@awful.systems

              We are over half a century into programming computers, and the industry still fights itself over basic implementations of testing and using that in-process with development.

              The very nature of software correctness is a fuzzy problem (because defining the problem from requirements to code also often goes awry with imprecise specification).

              Just because some tooling or options exist doesn’t mean the problem is solved.

              And then people like you hold/argue the magical-thinking belief that slapping LLMs on top of all this shit will tooooooootally work.

              I look forward to charging you money to help you fix your mess later.

              • Steve@awful.systems

                Genuine Q: Do you think we’ll start to see LLM-friendly languages emerge? Languages that consider the “LLM experience”, which fools like this will welcome? Or even a reversion to low-level languages?

                • self@awful.systems

                  anything LLM-friendly is likely to be even more high-level and less flexible than any ordinary language. as of now, LLMs seem to do the best on something like Python (there’s a shit ton of it, and plenty of Python programs can handle a little bit of lexical reorganization without catastrophically failing), but they tend to get utterly disastrous results for languages like C, where mistakes that are seemingly trivial and hard to spot if you don’t know the language can create code that might appear to run fine but has severe security vulnerabilities, or might just silently corrupt data: insisting the string “hello world” is 11 characters long (as a buffer it takes 12 bytes, including the null character at the end, which ChatGPT used to consistently forget), fucking up the ordering of statements that allocate or free memory, or misindexing an array in memory, along with hundreds of other trivial instances of undefined behavior and a combinatorial explosion of non-trivial cases. LLMs aren’t even useful to regurgitate toy code for systems languages like C.
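
                  a minimal C sketch of that null-terminator off-by-one, for concreteness (illustrative only, not from the thread): allocating strlen(s) bytes instead of strlen(s) + 1 compiles cleanly and can look like it works while writing one byte past the buffer.

                  ```c
                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <string.h>

                  int main(void) {
                      const char *s = "hello world";

                      printf("strlen: %zu\n", strlen(s));            /* 11: characters only */
                      printf("sizeof: %zu\n", sizeof "hello world"); /* 12: includes the null */

                      /* the bug described above: one byte short, so strcpy writes the
                         terminating null past the end of the allocation (undefined
                         behavior that often appears to run fine) */
                      char *bad = malloc(strlen(s));
                      strcpy(bad, s);

                      /* correct version: leave room for the terminating null */
                      char *good = malloc(strlen(s) + 1);
                      strcpy(good, s);

                      free(bad);
                      free(good);
                      return 0;
                  }
                  ```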

                • 200fifty@awful.systems

                  The problem is I guess you’d need a significant corpus of human-written stuff in that language to make the LLM work in the first place, right?

                  Actually this is something I’ve been thinking about more generally: the “ai makes programmers obsolete” take sort of implies everyone continues to use javascript and python for everything forever and ever (and also that those languages never add any new idioms or features in the future I guess.)

                  Like, I guess now that we have AI, all computer language progress is just supposed to be frozen at September 2021? Where are you gonna get the training data to keep the AI up to date with the latest language developments or libraries?

                  • gerikson@awful.systems

                    Correct, it presumes that everyone will be eagerly learning new languages, and new features to existing languages, and writing about them, and answering questions about them, at the same rate as before, despite knowing that their work will be instantly ingested into LLM engines and resold as LLM output. At the same time, the audience for this sort of writing will disappear, because they’re all using LLMs instead of reading articles, blog posts, and Stackoverflow answers.

                    It’s almost as if no-one has thought this through[1].

                    Relatedly: https://gerikson.com/m/2023/09/index.html#2023-09-27_wednesday_04


                    [1] unless the designers of LLMs actually fell for their own hype and believe they actually think.

                • froztbyte@awful.systems

                  short answer: unlikely on any nearby time horizon, because there’s a large impedance mismatch between the two applicable things at play. maybe some toy sub-examples can be created, but even that rapidly runs into scaling/scoping issues

                  longer answer: I started typing it and in a thought pause I clicked the upvote arrow on your post and now that in-progress reply is gone (thanks lemmy). I’ll write that up in emacs later and then post it here

            • self@awful.systems

              not if you don’t know the language, and not in any generalized way thanks to the halting problem

        • gerikson@awful.systems

          How does an LLM “know” a language? By ingesting a huge amount of text and source code around the language. A semi-obscure language, by definition, does not have a huge amount of text and source code associated with it.

          Similarly, people who speculate that their processes can be replaced by an LLM presuppose that those processes are clearly and unambiguously documented. The fact that there are humans still in the loop means they are not. So you can either make the huge effort of documenting them and then try to train an LLM, or you can just use a boring old language to automate them directly.

        • self@awful.systems

          LLMs are godawful at obscure languages. not sure how many devs working on non-legacy projects are “the guy who knows whatever semi obscure language” though given how focused the industry is on choosing tech stacks based on dev availability. so I guess your threat is directed towards the legacy projects I’m not doing, or the open source shit I’m doing on my own time in the obscure languages I prefer? cause if there’s one thing I need in my off time it’s a torrent of garbage, unreviewable PRs

            • Lauchs@lemmy.world

              That’s well put!

              I keep thinking/worrying in terms of how I use ChatGPT vs what people think ChatGPT can accomplish on its own.

              To me, I feel like I’ve been given a supercharger and can handle way more than before by easily double-checking syntax or finding better functions. But if people are relying on ChatGPT to code chunks for them, god help them.

                • Lauchs@lemmy.world

                  I dunno, I do think these LLMs are objectively different from and more comprehensive than any IDE or resource. I don’t search for an answer, I just ask a question and get pretty much exactly what I need right away, rather than hunting through resources trying to find the right thing.

                  To each their own but I’m pretty sure if I can do more work much faster than before, others can too. Unless this creates additional work, I imagine this means fewer devs needed in total. Admittedly, I am a pessimist.

    • zogwarg@awful.systems

      I wouldn’t be so confident in replacing junior devs with “AI”:

      1. Even if it did work without wasting time, it’s unsustainable: senior devs aren’t born from the void, junior devs need somewhere to acquire these skills, and today’s devs will eventually graduate/retire.
      2. A junior dev willing to engage their brain would still iterate through to the correct implementation for cheaper (and potentially faster) than senior devs spending time reviewing bullshit implementations and making arcane attempts at unreliable “AI”-prompting.

      It’s copy-pasting from stack-overflow all over again. The main consequence I see for LLM-based coding assistants is a new source of potential flaws to watch out for when doing code reviews.

      • Aceticon@lemmy.world

        It’s worse than “copy-pasting from stack-overflow”, because the LLM actually loses all the answer-trustworthiness context (i.e. counts and ratios of upvotes and downvotes, other people’s comments).

        That thing is trying to find the text tokens of an answer nearest to the text tokens of your prompt question in its n-dimensional text-token distribution space (I know it sounds weird, but that’s roughly how NNs work). Maybe you’re lucky and the highest-probability combination of text tokens was right there in the n-dimensional space “near” your prompt question’s text tokens (in which case straight googling it would probably have worked), or maybe you’re not lucky and it’s picking up probabilistically close chains of text tokens which are not logically related, or maybe you’re really unlucky and your prompt question’s text tokens sit in a sparsely populated zone of the n-dimensional text space and you’re getting back something from a barely related nearby cluster.

        But that’s not even the biggest problem.

        The biggest problem is that there is no real error margin in the output: the thing will give you the most genuine, professional-looking piece of output just as readily for what might be a very highly correlated chain of text tokens as for an association of text tokens that has little relation to your prompt question’s text tokens.
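
        A toy C sketch of the “nearest in n-dimensional space” picture above, for illustration only: the vectors, dimension, and numbers are all made up (real models use thousands of learned dimensions), but cosine similarity is the standard way “nearness” between embeddings is measured.

        ```c
        #include <math.h>
        #include <stdio.h>

        #define DIM 4 /* toy dimension; real embeddings use thousands */

        /* cosine similarity: how "near" two vectors point in the space */
        double cosine_similarity(const double a[DIM], const double b[DIM]) {
            double dot = 0.0, na = 0.0, nb = 0.0;
            for (int i = 0; i < DIM; i++) {
                dot += a[i] * b[i];
                na  += a[i] * a[i];
                nb  += b[i] * b[i];
            }
            return dot / (sqrt(na) * sqrt(nb));
        }

        int main(void) {
            /* made-up embeddings: a prompt and two candidate answer chains */
            double prompt[DIM]  = {0.9, 0.1, 0.3, 0.5};
            double nearby[DIM]  = {0.8, 0.2, 0.3, 0.4}; /* densely populated region */
            double faraway[DIM] = {0.1, 0.9, 0.7, 0.0}; /* sparsely populated region */

            printf("prompt vs nearby:  %.3f\n", cosine_similarity(prompt, nearby));
            printf("prompt vs faraway: %.3f\n", cosine_similarity(prompt, faraway));
            return 0;
        }
        ```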

      • Soyweiser@awful.systems

        Isn’t the lack of junior positions already a problem in a few parts of the tech industry? Due to the pressures of capitalism (drink!) I’m not sure it will be as easy as this.

        • zogwarg@awful.systems

          I said I wouldn’t be confident about it, not that enshittification would not occur ^^.

          I oscillate between optimism and pessimism frequently, and for sure many companies will make bad doo-doo decisions. Ultimately, trying to learn the grift is not the answer for me though; I’d rather work for some company with at least some practical sense and a pretense at an attempt at some form of sustainability.

          The mood comes, please forgive the following indulgent poem:
          Worse before better
          Yet comes the AI winter
          Ousting the fever

        • Aceticon@lemmy.world

          The outsourcing trend wasn’t good for junior devs in the West, mainly in English-speaking countries (it was great for devs in India, though).

      • wagesj45@kbin.social

        “who don’t know how to implement it”

        He didn’t say anything about replacing them. The AI will certainly be able to do certain tedious tasks that get farmed out to junior devs, especially under the supervision of a developer. Junior devs who refuse to learn how to use and implement the AI will probably get left behind.

        AI won’t replace anyone for a long time (probably). What it will do is bring about a new paradigm in how we work, and people who don’t get on board will be left behind, like all the boomers who refuse to learn how to open PDF files, except it’ll happen much quicker than the analogue-to-digital transition did, and the people affected will be younger.

    • gerikson@awful.systems

      “However, AI will likely be able to do many of those same tasks faster and much cheaper than those junior devs.”

      I work in support too, and predict a long and profitable career cleaning up the messes the AI will create.

      • sinedpick@awful.systems

        Nah bro, when GPT-5 comes out all code it’ll write will exactly match the specification, and it’ll also sim the entire universe to guess your mental state and correct any mistakes you made in your specs.

        • froztbyte@awful.systems

          The singularity happens. We invent the basilisk. But, oops, the alignment we ended up with is the frustrations of hundreds of thousands of derailed projects, and poor ‘ole basi just gets to write corpware forever

          Conway’s law strikes again!