• tetris11@lemmy.ml · 9 days ago

        “But master, the toast is already burned, surely you-”
        *Me, eyes glowering with a grimace*
        “DOWN YOU GO.”
        “Master! Nooooo–!”

  • DragonsInARoom@lemmy.world · 10 days ago

    But the companies must posture that they’re on the cutting edge! Even if they only put the letters “AI” on the box of a rice cooker without changing the rice cooker.

    • Zink@programming.dev · 10 days ago

      When it comes to the marketing teams in such companies, I wonder what the ratio is between true believers and “this is stupid, but if it spikes the numbers next quarter that will benefit me.”

    • TheKingBee@lemmy.world · 10 days ago

      No lie, I actually really love the concept of Microsoft Recall. I’ve got ADHD and am always trying to retrace my steps to figure out problems I solved months ago. The problem is that, for as useful as it might be, it’s just an attack surface.

  • ryedaft@sh.itjust.works · 10 days ago

    Am I the only one who hates the way the text follows a circle, but there are two of them and they don’t follow the same circle?

  • Bamboodpanda@lemmy.world · 10 days ago

    AI is one of the most powerful tools available today, and as a heavy user, I’ve seen firsthand how transformative it can be. However, there’s a trend right now where companies are trying to force AI into everything, assuming they know the best way for you to use it. They’re focused on marketing to those who either aren’t using AI at all or are using it ineffectively, promising solutions that often fall short in practice.

    Here’s the truth: the real magic of AI doesn’t come from adopting prepackaged solutions. It comes when you take the time to develop your own use cases, tailored to the unique problems you want to solve. AI isn’t a one-size-fits-all tool; its strength lies in its adaptability. When you shift your mindset from waiting for a product to deliver results to creatively using AI to tackle your specific challenges, it stops being just another tool and becomes genuinely life-changing.

    So, don’t get caught up in the hype or promises of marketing tags. Start experimenting, learning, and building solutions that work for you. That’s when AI truly reaches its full potential.

    • stringere@sh.itjust.works · 10 days ago

      I think of AI like I do apps: every company thinks they need an app now instead of just a website. They don’t, but they’ll sure as hell pay someone to develop an app that serves as a walled garden front end for their website. Most companies don’t need AI for anything, and as you said: they are shoehorning it in anywhere they can without regard to whether it is effective or not.

    • fine_sandy_bottom@discuss.tchncs.de · 10 days ago

      I think there are specific industrial problems for which AI is indeed transformative.

      Just one example I’m aware of is the AI-accelerated Nazca Lines survey, which revealed many more geoglyphs than we were previously aware of.

      However, this type of use case just isn’t relevant to most people, whose reliance on LLMs is “write an email to a client saying xyz” or “summarise this email that someone sent to me”.

      • Hexarei@programming.dev · 10 days ago

        One of my favorite examples is “smart paste”. Got separate address information fields? (City, state, zip, etc.) Have the user copy the full address, click “Smart paste”, and feed the clipboard to an LLM with a prompt to transform it into the data your form needs. Absolutely game-changing imho.
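
        Roughly, the flow is: take the pasted text, ask the model for strictly structured output, and map that onto the form. Here’s a minimal sketch of that idea in Python, assuming a hypothetical call_llm() helper and example field names (not our actual code):

        ```python
        # Hypothetical "smart paste" sketch: clipboard text in, address fields out.
        # call_llm() is a stand-in for whatever LLM endpoint you actually run.
        import json

        def call_llm(prompt: str) -> str:
            """Placeholder: send the prompt to your model and return its text reply."""
            raise NotImplementedError

        def smart_paste_address(clipboard_text: str) -> dict:
            prompt = (
                "Extract the postal address from the text below and reply with only JSON "
                'using exactly these keys: "street", "city", "state", "zip". '
                "Use null for any value you cannot find.\n\n" + clipboard_text
            )
            reply = call_llm(prompt)
            fields = json.loads(reply)  # fails loudly if the model strays from pure JSON
            return {key: fields.get(key) for key in ("street", "city", "state", "zip")}
        ```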

        Or data ingestion from email - many of my customers get emails from their customers with instructions in them that someone at the company has to convert into form fields in the app. Instead, we provide an email address (some-company-inbound@ myapp.domain), feed the incoming emails into an LLM, ask it to extract any details it can (number of copies, post-process, page numbers, etc.), and have that auto-fill into fields for the customer to review before approving the incoming details.
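
        The email side is the same shape, just upstream: parse the inbound message, have the model pull out whatever details it can, and stage them for a human to review rather than auto-approving anything. Again, a rough sketch with a placeholder call_llm() and made-up field names, not our real pipeline:

        ```python
        # Hypothetical email-ingestion sketch: extract job details for human review.
        # call_llm() and the field names are illustrative, not a real API.
        import json
        from email import message_from_string

        def call_llm(prompt: str) -> str:
            """Placeholder: send the prompt to your model and return its text reply."""
            raise NotImplementedError

        def extract_job_details(raw_email: str) -> dict:
            msg = message_from_string(raw_email)
            body = msg.get_payload()  # simplification: assumes a plain-text, non-multipart email
            prompt = (
                "From the customer email below, extract any of these details you can find "
                'and reply with only JSON: "copies", "post_process", "page_numbers". '
                "Use null for anything not mentioned.\n\n" + str(body)
            )
            draft = json.loads(call_llm(prompt))
            return {"status": "needs_review", "fields": draft}  # pre-filled, never auto-approved
        ```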

        So many incredibly powerful use-cases and folks are doing wasteful and pointless things with them.

        • fine_sandy_bottom@discuss.tchncs.de · 10 days ago

          If I’m brutally honest, I don’t find these use cases very compelling.

          Separate fields for addresses could be easily solved without an LLM. The only reason there isn’t already a common solution is that it just isn’t that much of a problem.

          Data ingestion from email will never be as efficient and accurate as simply having a customer fill out a form directly.

          These things might make someone mildly more efficient at their job, but given the resources required for LLMs is it really worth it?

          • Hexarei@programming.dev · 10 days ago

            Well, the address one was an example. Smart paste is useful for more than just addresses - Think non-standard data formats where a customer provided janky data and it needs wrangling. Happens often enough and with unique enough data that an LLM is going to be better than a bespoke algo.

            The email one though? We absolutely have dedicated forms, but that doesn’t stop end users from sending emails to our customer anyway - The email ingestion via LLM is so our customer can just have their front desk folks forward the email in and have it make a best guess to save some time. When the customer is a huge shop that handles thousands of incoming jobs per day, the small value adds here and there add up to quite the savings for them (and thus, value we offer).

            Given we run the LLMs on low power machines in-house … Yeah they’re worth it.

            • fine_sandy_bottom@discuss.tchncs.de · 9 days ago

              Yeah, still not convinced.

              I work in a field which is not dissimilar. Teaching customers to email you their requirements so your LLM can have a go at filling out the form just seems ludicrous to me.

              Additionally, the models you’re using require stupid amounts of power to produce so that you can run them on low power machines.

              Anyhow, neither of us is going to change our minds without actual data which neither of us have. Who knows, a decade from now I might be forwarding client emails to an LLM so it can fill out a form for me, at which time I’ll know I was wrong.

      • AnarchistArtificer@slrpnk.net · 10 days ago

        That’s really neat, thanks for sharing that example.

        In my field (biochemistry), there are also quite a few truly awesome use cases for LLMs and other machine learning stuff, but I have been dismayed by how the hype train on AI stuff has been working. Mainly, I just worry that the overhyped nonsense will drown out the legitimately useful stuff, and that the useful stuff may struggle to get coverage/funding once the hype has burnt everyone out.

        • fine_sandy_bottom@discuss.tchncs.de · 10 days ago

          I suspect that this is “grumpy old man” type thinking, but my concern is the loss of fundamental skills.

          As an example, like many other people I’ve spent the last few decades developing written communication skills, emailing clients regarding complex topics. Communication requires not only an understanding of the subject, but an understanding of the recipient’s circumstances, and the likelihood of the thoughts and actions that may arise as a result.

          Over the last year or so I’ve noticed my assistants using LLMs to draft emails, with deleterious results. In many cases this use reduces my thinking, feeling, experienced, and trained assistant to an automaton regurgitating words from publicly available references. The usual response to this concern is that my assistants are using the tool incorrectly, which is certainly the case, but my argument is that the use of the tool precludes the expenditure of the requisite time and effort to really learn.

          Perhaps this is a kind of circular argument, like why do kids need to learn handwriting when nothing needs to be handwritten.

          It does seem as though we’re on a trajectory towards stupider professional services though, where my bot emails your bot who replies and after n iterations maybe they’ve figured it out.

          • AnarchistArtificer@slrpnk.net · 10 days ago

            Oh yeah, I’m pretty worried about that from what I’ve seen in biochemistry undergraduate students. I was already concerned about how little structured writing support science students receive, and I’m seeing a lot of over-reliance on chatGPT.

            With emails and the like, I find that I struggle with the pressure of a blank page/screen, so rewriting a mediocre draft is immensely helpful, but that strategy is only viable if you’re prepared to go in and do some heavy editing. If it were a case of people honing their editing skills, then that might not be so bad, but I have been seeing lots of output that has the unmistakable chatGPT tone.

            In short, I think it is definitely “grumpy old man” thinking, but that doesn’t mean it’s not valid (I say this as someone who is probably too young to be a grumpy old crone yet)

  • Agent641@lemmy.world · 11 days ago

    Is there a way to fight back? Like, I don’t need Adobe in my Microsoft Word at work. Can I just make a script that constantly demands AI content from it that is absolute drivel, and set it running over the weekend while I’m not there? To burn up all their electricity and/or processing power?

    • explodicle@sh.itjust.works · 10 days ago

      They would probably detect that and limit your usage.

      Even not using their service still leaves its pollution. IMO the best way to fight back is to support higher pollution taxes. Crypto, AI, whatever’s next - it should be technology agnostic.

  • MrsDoyle@sh.itjust.works · 9 days ago

    I was trying to take a photo of a piece of jewellery in my hand tonight and accidentally activated my phone’s AI. It threw up a big Paperclip-type message, “How can I help you?” I muttered “fuck off” as I stabbed at the back button. “I’m sorry you feel that way!” it said.

    Yeah, I hate it. At least Paperclip didn’t give snark.

    • uis@lemm.ee · 10 days ago

      You are overexaggerating under the assumption that there will exist a social and economic system based on greed and death threats, which sounds very unreali-- Right, capitalism.

    • Hackworth@lemmy.world · 10 days ago

      In film school (25 years ago), there was a lot of discussion around whether or not commerce was antithetical to art. I think it’s pretty clear now that it is. As commercial media leans more on AI, I hope the silver lining will be a modern Renaissance of art as (meaningful but unprofitable) creative expression.

        • randon31415@lemmy.world · 10 days ago

          Strangely, that is a lot of who is complaining. It was a Faustian bargain: draw furry porn and earn money, but never be allowed to use your art in a professional sense ever again.

          Then AI art came and replaced them, so it became lose-lose.

          • Sabata@ani.social · 10 days ago

            I don’t know where else you could find enough work to sustain yourself other than furry porn and hentai before AI. Post-AI, even that is gone.

        • Hackworth@lemmy.world · 10 days ago

          Eh, I’ve made a decent living making commercials and corpo shit. But not for lack of trying to get paid for art. For all the money I made working on ~50 short films and a handful of features, I could maybe buy dinner. Just like in the music industry, distributors pocket most of the profit.

          • Sabata@ani.social · 10 days ago

            Art seems like a side hustle or a hobby, not a main job. I can’t think of a faster way to hate your own passion.

            I wanted to work as a programmer, but getting a degree taught me I’m too poor to do it as a job: I’d need 6 more papers and to have known the language for longer than it has existed just to get an interview, never mind earn through the grind. Having fun building a stupid side project to bother my friends, though.

            • AutistoMephisto@lemmy.world · 10 days ago

              Exactly. I can code and make a simple game app. If it gets some downloads, maybe pulls in a little money, I’m happy. But I’m not gonna produce endless mtx and ad-infested shovelware to make shareholders and investors happy. I also own a 3D printer. I’ve done a few projects with it and I was happy to do them, I’ve even taken commissions to model and print some things, but it’s not my main job as there’s no way I could afford to sit at home and just print things out all month.

              • Sabata@ani.social · 10 days ago

                My only side-hustle-worthy skill is fixing computers, and I’d rather swallow a hot soldering iron than meet a stranger and get money involved.

      • ZILtoid1991@lemmy.world · 10 days ago

        Issue is, the 8 hours people spend at “real” jobs are a big hindrance and could be spent doing art instead, and most of those ghouls now want us to do overtime for the very basics. Worst-case scenario, it’ll be a creativity drought, with idea guys taking the place of real artists by using generative AI. Best-case scenario is the AI boom totally collapsing and all commercial models becoming expensive to use. Seeing where the next Trump administration will take us, it’s a second Gilded Age + heavy censorship + potential deregulation around AI.

  • Lila_Uraraka@lemmy.blahaj.zone · 10 days ago

    I hate what AI has become and is being used for. I strongly believe that it could have been used way more ethically; a solid example is Perplexity, which shows you the sources being used at the top, the first thing you see when it gives a response. The opposite of this is everything else, even Gemini, despite it being rather useful in day-to-day life when I need a quick answer to something and I’m not in a position to hold my phone, like when driving, doing dishes, or doing yard work with my earbuds in.

    • mm_maybe@sh.itjust.works · 10 days ago

      Yes, you’re absolutely right. The first StarCoder model demonstrated that it is in fact possible to train a useful LLM exclusively on permissively licensed material, contrary to OpenAI’s claims.

      Unfortunately, the main concerns of the leading voices in AI ethics at the time this stuff began to really heat up were a) “alignment” with human values / takeover of super-intelligent AI and b) bias against certain groups of humans (which I characterize as differential alignment, i.e. with some humans but not others).

      The latter group has since published some work criticizing genAI from a copyright and data dignity standpoint, but their absolute position against the technology in general leaves no room for revisiting the premise that use of non-permissively licensed work is inevitable. (Incidentally, they also hate classification AI as a whole, thus smearing AI detection technology that could help on all fronts of this battle. Here again it’s obviously a matter of responsible deployment; the kind of classification AI that UHC deployed to reject valid health insurance claims, or the target-selection AI the IDF has used, are examples of obviously unethical applications in which copyright infringement would be irrelevant.)

      • uis@lemm.ee · 10 days ago

        criticizing genAI from a copyright

        There is a Russian phrase, “fight of the beaver and the donkey”, which loosely means a fight between two shits. Copyright is cancer, and the capitalist abuse of genAI is cancer.

        • Lila_Uraraka@lemmy.blahaj.zone · 10 days ago

          Copyright is actually very important, especially to independent authors, photographers, digital artists, traditional artists, videographers (YouTubers, for example), and especially movie producers. Copyright protects their work from being taken by someone else and claimed as their own. However, special cases do exist where other individuals are allowed to use copyrighted material that is not theirs; this is where fair use comes into play. If we did not have fair use but still had copyright, the large majority of YouTube videos would be illegal, from commentary videos to silly meme videos. So calling copyright a cancer is like wanting their work to be out in a field of monkeys and hoping they don’t notice it. Spoiler: they always do.