• hendrik@palaver.p3x.de · 1 month ago

    We’ll see about that. AI is currently approaching the trough of disillusionment on the Gartner hype cycle. That’s certainly not something one of the largest AI companies will admit to, but it’s probably still true.

    And btw, the article doesn’t load for me. Not sure if it’s my browser or if I’m getting geo-blocked… but the page is just white. No text.

    • BluesF@lemmy.world · 1 month ago

      This headline certainly seems sensational, but I’ve also started seeing some really nice uses of LLMs cropping up. Some of the newer API features make them a lot more practical for developing things other than simple chat bots. It remains to be seen whether the value delivered is worth the energy/data costs long term, but LLMs in general seem to be finding their feet in some ways.

      • hendrik@palaver.p3x.de · 1 month ago

        Sure. I’m mainly basing my opinion on some more recent research (which I can’t find right now) that had some disheartening numbers on AI use in programming. As far as I remember, it said that at the end of the day it saves some time, but not a lot, and on the flip side, the code produced by programmers with the help of AI has significantly more bugs in it. Which makes me doubt it’s a good fit to replace professionals (at this time).

        And secondly, the stock prices of companies like Nvidia tell us that some of the hot air in the AI bubble is escaping. I’d say things are calming down a bit, not accelerating.

        And regarding law, there is this funny story from a bit ago: https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/
        Well, maybe funny for everyone except that lawyer and his client. And science hasn’t made fundamental progress on hallucinations since then. I’d say AI is going to start replacing professionals once we get that solved, and that’s when it’ll become massively useful.

        And of course it’s already very useful within some more narrow use cases.

        • BluesF@lemmy.world · 1 month ago

          Oh yeah, I’m talking about calling the LLM with code, not using the LLM to help write the code. They still suck at providing anything reliant on factual accuracy. What they are very good at is extracting meaning from text, e.g. taking a user’s natural language request and deciding what to do with it from a set of options.

          • hendrik@palaver.p3x.de · 1 month ago

            Sure. I believe that’s called “intent classification” and has been around in natural language processing for quite some time.
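
            For what it’s worth, that doesn’t take much code. A minimal sketch of the idea, where complete() is a hypothetical stand-in for whatever LLM completion API you call:

                # Constrained intent classification: the model may only answer with
                # one of the allowed labels; anything else falls back to a default.
                # `complete()` is a hypothetical stand-in for an LLM API call.
                INTENTS = ["check_order_status", "cancel_order", "talk_to_human"]

                def classify_intent(user_message: str) -> str:
                    prompt = (
                        "Classify the user's request as exactly one of: "
                        + ", ".join(INTENTS)
                        + f"\nRequest: {user_message}\nAnswer with the label only."
                    )
                    label = complete(prompt).strip()
                    return label if label in INTENTS else "talk_to_human"  # safe fallback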

  • HexesofVexes@lemmy.world · 1 month ago

    A simple lawyer AI bot almost indistinguishable from the real thing:

    from time import sleep

    fees = 0.0
    while True:
        fees += 250.00  # bill another $250
        sleep(60)       # every billable minute
    
    • sunzu2@thebrainbin.org · 1 month ago

      Exactly the same, prolly… LLMs are useful for any “learned” profession, but so far I have not seen one perform beyond college-level work.

      I guess they can be developed further, but there isn’t enough training data to make a model as good as a proper professional.

      Once that dataset is available, I can see LLMs starting to take some real jobs: legal, or anyone else whose job is jockeying paper or spreadsheets or code on a computer.

      • just_another_person@lemmy.world · 1 month ago

        An LLM is a sorting tool. It’s not capable of novel ideation, only derivative output. The only thing this might help with is research. Not to mention, federal and state regulations require human representation to file anyway.

  • Taleya@aussie.zone · 1 month ago

    Word-salad batshit nonsensical lawyer - dude, if I wanted that I’d just rep myself.

  • Wanderer@lemm.ee · 1 month ago

    Good.

    Sounds like we need to start talking about the four day work week and we can move from there.

  • Lugh@futurology.today (OP) · 1 month ago

    People have often tended to think about AI and robots replacing jobs in terms of working-class jobs like driving, factories, warehouses, etc.

    When it starts coming for the professional classes, as it is now starting to, I think things will be different. It’s a long-observed phenomenon that many well-off sections of the population hate socialism, except when they need it - then suddenly they are all for it.

    I wonder what a small army of lawyers in support of UBI could achieve?

  • HootinNHollerin@lemmy.world · 1 month ago

    Sam Altman watched Terminator and rooted for the machines.

    Then Sam Altman watched The Matrix and rooted for the machines.

  • LordJer@beehaw.org · 1 month ago

    Do you think the average consumer is going to want an AI to represent them in court? People are still going to need lawyers to explain the law in layman’s terms. For example, I work in tax law, and clients already struggle to understand what inventory capitalization under code section 263A is, and why they need to adhere to it. I see how large language models can be useful. But I wonder if the hype is akin to cryptocurrency, or NFTs?

    • Rogue@feddit.uk · 1 month ago

      There is a hell of a lot of hype, but some of it is justified.

      ChatGPT is really good at explaining stuff. Try asking it to explain inventory capitalization, then just repeatedly ask it to explain it simpler and simpler and simpler, then ask why, repeatedly. It has a hell of a lot more patience than a human, and the client is going to be far less embarrassed repeatedly asking an AI than a human.

      I’d also expect it to be pretty good at picking out relevant case law if you feed it a specific issue. However, where issues will arise is that it will just make shit up at some point, and it’ll seem absolutely legit, so you’ll accept it without question.

  • UraniumBlazer@lemm.ee · 1 month ago

    And I believe them 100%.

    The legal profession revolves around logic. Legal arguments are formed based on the rules of logic. A fine-tuned model would absolutely demolish any human opponent here.

    Take chess for example. There’s a predefined set of rules. Chess was one of the first games where ML models beat humans.

    Sure, the rules of law are far more complex than those of chess, but the ground principle is still the same.

    • Prunebutt@slrpnk.net · 1 month ago

      The legal system does not revolve around logic, and even if it did: LLMs can’t reason, so they’d be useless anyway.

      • UraniumBlazer@lemm.ee · 1 month ago

        Law = rules. Action A is legal. Action B isn’t. Doing X + Y + Z constitutes action A and so on. Legal arguments have to abide by all rules of logic. Law is one of the most logic heavy fields out there.

        As for LLMs not being able to reason, it’s very debatable. Whether they reason or not depends upon your definition of “reasoning”. Debating definitions here is useless however, as the end result speaks for itself.

        If LLMs can pass certification exams for lawyers, then it means either one of two things:

        1. the exams are flawed
        2. LLMs are capable of practicing law.
        • Tobberone@lemm.ee · 1 month ago

          I disagree with your first statement. Law is about the application of rules, not the rules themselves. In a perfect world, it would be about determining which law has precedence in the matter at hand, a task in itself outside of AI capabilities, as it involves weighing moral and ethical principles against each other; but in reality it often comes down to arguing why my interpretation of reality is the correct one.

          • UraniumBlazer@lemm.ee · 1 month ago

            1. The morals of LLMs match ours closely, as they’ve been trained on human data. Therefore, weighing two laws against each other isn’t difficult for them.
            2. For the interpretation-of-reality part, it’s all logic again. Logic, which a fine-tuned model can potentially be quite good at.
            • Tobberone@lemm.ee · 1 month ago

              1. AI, which lacks morality by definition, is as capable with morals as it is at describing smells. As for that human data, the question quickly becomes: which data? As expressed in literature, social media, or politics? And from which century? It’s enough to compare today with pre-millennium conditions to see how widely it differs.

              As for 2: you assume that there is an objective reality free from emotion? There might be, but I am unsure if it can be perceived by anything living. Or by AI, for that matter. It is, after all, like you said, trained on human data.

              Anyway, time will tell if OpenAI is correct in their assessment, or if humans will want the human touch. As a tool for trained professionals to use, sure. As a substitute for one? I’m not convinced yet.

        • Prunebutt@slrpnk.net · 1 month ago

          Law = rules. Action A is legal. Action B isn’t. Doing X + Y + Z constitutes action A and so on. Legal arguments have to abide by all rules of logic. Law is one of the most logic heavy fields out there.

          You’re ignoring the whole job of a judge, where they put the actions and laws into a procedural, historical, and social context (something which LLMs can’t emulate) to reach a verdict.

          You know what’s way closer to “pure logic”? Programming. You know what’s the quality of the code LLMs shit out? It’s quite bad.

          Debating definitions here is useless however, as the end result speaks for itself.

          Yes, it does speak for itself: They can’t.

          Yes, the exams are flawed. This podcast episode takes a look at these supposed AI lawyers.

          I also don’t agree with your assessment. If an LLM passes a perfect law exam (a thing that doesn’t really exist) and afterwards only invents laws and precedent cases, it’s still useless.

          • UraniumBlazer@lemm.ee · 1 month ago

            You’re ignoring the whole job of a judge, where they put the actions and laws into a procedural, historical, and social context (something which LLMs can’t emulate) to reach a verdict.

            LLMs would have no problem doing any of this. There’s a discernible pattern in any judge’s verdict. LLMs can easily pick this pattern up.

            You know what’s the quality of the code LLMs shit out?

            LLMs in their current form are “spitting out” code in a very literal way. Actual programmers never do that. No one is smart enough to code by intuition. We write code, take a look at it, run it, see warnings/errors if any, fix them, and repeat. No programmer writes code and gets it correct on the first try.

            LLMs till now have had their hands tied behind their backs. They haven’t been able to run the code by themselves at all. They haven’t been able to do recursive reasoning. TILL NOW.

            The new o1 model (I think) is able to do that. It’ll just get better from here. Look at the sudden increase in the quality of code output. There’s a very strong reason why I believe this as well.

            I heavily use LLMs for my code. They seem to write shit code in the first pass. I give it the output, the issues with the code, semantic errors if any, and so on. By the third or fourth time I get back to it, the code it writes is perfect. I have stopped needing to manually type out comments and so on. LLMs do that for me now (of course, I supervise what they write and don’t blindly trust them). Using LLMs has sped up my coding at least 4x (and I’m not even using a fine-tuned model).
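
            The loop I’m describing is roughly this (a rough sketch; llm_fix() is a hypothetical stand-in for the chat-completion call that returns revised code):

                import subprocess
                import sys
                import tempfile

                def llm_fix(code: str, feedback: str) -> str:
                    """Hypothetical stand-in: ask the model for a revised
                    version of `code`, given the runtime errors in `feedback`."""
                    raise NotImplementedError

                def refine(code: str, max_rounds: int = 4) -> str:
                    for _ in range(max_rounds):
                        # Write the candidate code to a file and try to run it.
                        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                            f.write(code)
                            path = f.name
                        result = subprocess.run([sys.executable, path],
                                                capture_output=True, text=True)
                        if result.returncode == 0:
                            return code  # ran cleanly; stop iterating
                        code = llm_fix(code, result.stderr)  # feed the errors back
                    return code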

            I also don’t agree with your assessment. If an LLM passes a perfect law exam (a thing that doesn’t really exist) and afterwards only invents laws and precedent cases, it’s still useless.

            There’s no reason why it would do that. The underlying function behind verdicts/legal arguments has been the same, and will remain the same, because it’s based on logic and human morals. Tackling morals is easy because LLMs have been trained on human data: their morals are a reflection of ours. If we want to specify our morals explicitly, then we can make them law (and we already have for the ones that matter most), which makes things even easier.

            • SinAdjetivos@beehaw.org · 1 month ago

              LLMs would have no problem doing any of this. There’s a discernible pattern in any judge’s verdict. LLMs can easily pick this pattern up.

              That’s worse! You do see how that’s worse right?!?

              You are factually correct, but those are called biases. That doesn’t mean that LLMs would be good at that job. It means they can do the job with comparable results for all the reasons that people are terrible at it. You’re arguing to build a racism machine because judges are racist.

              • UraniumBlazer@lemm.ee · 1 month ago

                Ok, so you just ignore the reports and continue to coast on feels over reals. Cool.

                I didn’t. I went through your links. Your links, however, pointed at a problem with the environment our LLMs are in, not with the LLMs themselves. The code one, where the LLM invents package names, is not the LLM’s fault. Can you accurately come up with package names from memory alone? No. Neither can the LLM. Give the LLM the ability to look up npm’s database. Give it the ability to read the docs, and then look at what it can do. I have done this myself (manually giving it this information), and it has been a beast.
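
                For the package-name case specifically, the lookup is trivial. A minimal sketch against the public npm registry, which returns HTTP 404 for unknown package names:

                    import urllib.error
                    import urllib.request

                    def npm_package_exists(name: str) -> bool:
                        """Verify a model-suggested package name against the npm registry."""
                        try:
                            urllib.request.urlopen(f"https://registry.npmjs.org/{name}")
                            return True
                        except urllib.error.HTTPError as err:
                            if err.code == 404:
                                return False  # nonexistent (possibly hallucinated) package
                            raise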

                As for the link in the reply, it’s a blog post about anecdotal evidence. Come on now… I literally have personal anecdotal evidence to parry this.

                But whatever, you’re always going to go “AI bad AI bad AI bad” till it takes your job. I really don’t understand why AI denialism is so prevalent on Lemmy, a leftist platform, where we should be discussing seizing the new means of production instead of denying its existence.

                Regardless, I won’t contribute to this thread any further, because I believe I’ve made my point.

                • Prunebutt@slrpnk.net · 1 month ago

                  where we should be discussing seizing the new means of production instead of denying its existence

                  Look at what OpenAI, Google, Microsoft, etc. do, and tell me once again that this is supposedly good for the workers. Jeez. 🙄

        • RustyEarthfire@lemmy.world · 1 month ago

          1. The bar exam is just one part of a larger set of qualifications
          2. The bar exam is just a (closed-book) proxy for the actual skills and knowledge being tested. While a reasonable proxy for humans, it is a poor proxy for computers
          • UraniumBlazer@lemm.ee · 1 month ago

            Ok. What test would an LLM need to pass to convince you that it is capable of being a lawyer?

            • RustyEarthfire@lemmy.world · 1 month ago

              The short answer is being admitted by the bar; we already trust them to certify humans.

              If for some reason I were arbiter, I would say a convincing record of doing actual legal work, vetted by existing lawyers. The legal profession already has a well-defined model of how non-lawyers can contribute to the work, so there is no need for a quantum leap up to being a lawyer.

        • SinAdjetivos@beehaw.org · 1 month ago

          I think you’re conflating formal and informal logic. Programmers are excellent at defining a formal logic system which the computer follows, but the computer itself isn’t particularly “logical”.

          What you describe as:

          Action A is legal. Action B isn’t. Doing X + Y + Z constitutes action A and so on.

          Is a particularly nasty form of logic called abstract reasoning. Biological brains are very good at that! Computers a lot less so…

          Using a test designed to measure that (https://arxiv.org/abs/1911.01547), humans average ~80% accuracy. The current best algorithm (last I checked…) has 31% accuracy. LLMs can get up to ~17% accuracy (https://arxiv.org/pdf/2403.11793), with the addition of some prompt engineering and other fancy tricks. So they are technically capable… just really bad at it…

          Now, law is marketed as a very logical profession, but modern Western law, at least, is more akin to combative theater. The law as written serves as the base worldbuilding, with case law serving as additional canon. The goal of law is to put on a performance with the goal of tricking the audience (typically the judge, jury, and opposing counsel) into believing it is far more logical and internally consistent than it actually is.

          That is essentially what LLMs are designed to do. Take some giant corpus of knowledge and return some permutation of it that maximizes the “believability” based on the input prompt. And it can do so with a shocking amount of internal logic and creativity. So it shouldn’t be shocking that they’re capable of passing bar exams, but that should not be conflated with them being rational, logical, fair, just, or accurate.

          And neither should the law. Friendly reminder to fuck the police and the corrupt legal system they enforce.

    • mindaika@lemmy.dbzer0.com · 1 month ago

      The legal system absolutely does not revolve around logic. Legal arguments, especially in court, are formed based on emotional appeal.