• 2 Posts
  • 13 Comments
Joined 1 year ago
Cake day: July 22nd, 2023

  • My French is shit but here’s what I got from the comments:

    person: I am illiterate; what does it say?

    person2: I hope it says that the English like glitter. Please, let it be that.

    person3: <link to post with English commenters, presumably to run browser translate on it>

    person2: As I thought -_-

    Jeggs: Sorry for the English language! It says “British meeting point.”

    person4: Reminder of rule 3 of this community
    🤬 Mean things are shit.


  • I expect downvotes but:

    Taking this from Jeggs’ perspective, they probably believe you’re trying to pull the

    *sees offensive joke* Hmm, to make the joker realize how offensive they are, I’ll pretend not to understand it and “gotcha” them by making them explain it.

    tactic. It’s used often enough to be recognizable, but not super widely known.

    And the joke is

    haha, fat people at British Place! How predictable

    which was obviously received poorly by this thread.

    Taking it from Jeggs’ perspective – again – other communities may receive this more favorably, regardless of perceived offensiveness. (Yes, offensive jokes aren’t completely bad. I’ve laughed at racist jokes against my race. IMO this meme isn’t particularly interesting tho.)







  • Speaking of fearmongering, you note that:

    an artist getting their style copied

    So if I go to an art gallery for inspiration I must declare this in a contract too? This is absurd. But to be fair I’m not surprised. Intellectual property is altogether an absurd notion in the digital age, and insanity like “copyrighting styles” is just the sharpest most obvious edge of it.

    I think also the fearmongering about artists is overplayed by people who are not artists.

    Ignoring the false equivalence between getting inspiration at an art gallery and feeding millions of artworks into a non-human AI for automated, high-speed, dubious-legality replication and derivation: copyright is how creative workers retain their careers and stay incentivized. Your Twitter experiences are anecdotal; in more generalized reality:

    1. Chinese illustrator jobs purportedly dropped by 70% in part due to image generators
    2. Lesser-known artists are being hindered from making themselves known, as visual art venues restrict submissions to already-known artists in order to filter out AI-generated work – the opposite of democratizing art
    3. Artists have reported using image generators to avoid losing their jobs
    4. Artists’ works, such as those by Hollie Mengert and Karen Hallion among others, have been used in training data without compensation, attribution, or consent – the resulting style mimicries have been described as “invasive” (someone can steal your mode of self-expression) and reputationally damaging, even if they are solely “surface-level”

    The above four points were taken from the Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Jiang et al., 2023, sections 4.1 and 4.2).

    Help me understand your viewpoint. Is copyright nonsensical? Are we hypocrites for worrying about the ways our hosts are using our produced goods? There is a lot of liability and a lot of worry here, but I’m having trouble reconciling: you seem to be implying that this liability and worry are unfounded, but evidence seems to point elsewhere.

    Thanks for talking with me! ^ᴗ^

    (Comment 2/2)


  • fool@programming.dev to People Twitter@sh.itjust.works: *Permanently Deleted* (edited 1 month ago)

    Thanks for the detailed reply! :P

    I’d like to converse with every part of what you pointed out – real discussions are always exciting!

    …they pay the journals, not the other way around…

    Yes of course. It’s not at all relevant?

    It’s arguably relevant. Researchers pay journals to display their years of work; the journals then resell those years of work to AI companies, which puts indirect pressure on researchers to produce more. It’s a form of labor where the pay direction is reversed. Yes, researchers are aware that their papers can be used for profit (like medical tech), but they didn’t anticipate that their work would be sold en masse to ethically dubious, historically copyright-violating, pollution-heavy server farms. Now, I see that you don’t agree with this, since you say:

    …not only is it very literally transparent and most models open-weight, and most libraries open-source, but it’s making knowledge massively more accessible.

    but I can’t help but feel obliged to share the following evidence.

    1. Though a Stanford report notes that most new models are open source (Lynch, 2024), the models with the most market share (see this Forbes list) are not. Of those fifty, only Cleanlab, Cohere, Hugging Face (duh), LangChain (among other Python stuff like scikit-learn or tensorflow), Weaviate, TogetherAI, and notably Mistral are open source. Among the giants, OpenAI’s GPT-4 et al., Claude, and Gemini are closed source, though Meta’s LLaMa is open source.
    2. Transparency is… I’ll cede that it is improving! But it’s also lacking. According to the Stanford 2024 Foundation Model Transparency Index, which uses 100 indicators such as data filtration transparency, copyright transparency, and pollution transparency (Bommasani et al., 2024, p. 27 fig. 8), developers were opaque, including open-source developers. The pertinent summary notes that the mean FMTI company score improved from 37 to 58 over the last year, but information about copyright data, licenses, and guardrails has remained opaque.

    I see you also argue that:

    With [the decline of effort in average people’s fact-finding] in mind I see no reason not to feed [AI] products of the scientific method, [which is] the most rigorous and highest solution to the problems of epistemology we’ve come up with thus far.

    And… I partly agree with you on this. As another commenter said, “[AI] is not going back in the bottle”, so might as well make it not totally hallucinatory. Of course, this should be done in an ethical way, one that respects the rights to the data of all involved.

    But about your next point regarding data usage:

    …if you actually read the terms and conditions when you signed up to Facebook… and if you listened to the experts then you and these artists would not feel like you were being treated unfairly, because not only did you allow it to happen, you all encouraged it. Now that it might actually be used for good, you are upset. It’s disheartening. I’m sorry, most of you signed it all away by 2006. Data is forever.

    That’s a mischaracterization of a lot of views. Yes, a lot of people willfully ignored surveillance capitalism, but we never encouraged it, nor did we ever flip from acceptance to objection merely because the data we intentionally or inadvertently produced began to be “used for good”. One of the earliest surveillance capitalism investigators, Harvard Business School professor Shoshana Zuboff, confirms that we were just scared and uneducated about these things outside of our control.

    “Every single piece of research, going all the way back to the early 2000s, shows that whenever you expose people to what’s really going on behind the scenes with surveillance capitalism, they don’t want anything to do [with] it. The only reason we keep engaging with it is because we feel like we have no choice. …[it] is a colossal market failure. Because it is not giving people what people want. …everything that’s inside that choice [i.e. the choice of picking between convenience and privacy] has been designed to keep us in ignorance.” (Kulwin, 2019)

    This kind of thing – corporate giants giving up thousands of papers to AI – is another instance of people being scared. But it’s not fearmongering. Fearmongering implies that we’re inventing fright where none really exists; however, there is indeed an awful, fear-inducing precedent set by this action. Researchers now have to live with the idea that corporations, these vast economic superpowers, can suddenly and easily pivot into using all of their content to fuel AI and make millions. This is the same content they spent years on, that they intended for open, humanity-supporting use by their peers, and that they had few options for storing, publishing, or hosting other than said publishers. Yes, they signed the ToS, and now they’re eating it. We’re evolving toward the future at breakneck pace – what’s next? they worry, what’s next?

    (Comment 1/2)



  • Despite the downvotes I’m interested why you think this way…

    The common Lemmy view is that, morally, papers are meant to contribute to the sum of human knowledge as a whole. Therefore, (1) they shouldn’t be paywalled in a way unfair to authors and reviewers – they pay the journals, not the other way around – and (2) closed-source artificially intelligent word guessers make money off of content that isn’t their own, in ways over which said content-makers have little agency or say, without contributing back to the sum of human knowledge by being open-source or transparent. (Lemmy has a distaste for the cloisters of venture capital and multibillion-parameter server farms.)

    So it’s not about using AI or not, but about the lack of self-determination and transparency: e.g., an artist getting their style copied because they paid an art gallery to display their work, and the gallery traded image-generation rights to AI companies without the artists’ say (and although it can be argued that the artists signed the ToS, there weren’t any viable alternatives to signing).

    I’m happy to listen if you differ!



  • I… don’t have ADHD (relatively confident) but I’ve used both of your hacks before and they’ve measurably helped me.

    The templating thing slung me over its shoulder and carried me through battlefields. Procrastinate 'til the last hour? Assignment must be in LaTeX? Don’t worry, everything is already formatted, just add the double-dollar-signs and equate!
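
    As a rough sketch of the templating hack described above (every file and section name here is hypothetical, not from the original comment), the idea is a pre-formatted skeleton where only the math still needs filling in:

    ```latex
    % assignment-template.tex – hypothetical reusable homework skeleton
    \documentclass[11pt]{article}
    \usepackage{amsmath, amssymb}       % equation environments and symbols
    \usepackage[margin=1in]{geometry}   % sane page layout, pre-decided

    \title{Assignment \#N}  % placeholder to update each week
    \author{Your Name}

    \begin{document}
    \maketitle

    \section*{Problem 1}
    % all the formatting is done in advance; at the last hour,
    % just drop the math between the dollar signs and equate:
    $$ \int_0^1 x^2 \, dx = \frac{1}{3} $$

    \end{document}
    ```

    The point is that the preamble decisions are made once, when unhurried, so the deadline crunch is reduced to typing content.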

    Bored? Need to get this article done but it’ll be even more boring? Watch random dubbed animations or something while hitting the keys – the low-pressure colors and music cushion the harder-thinking part. Somehow the perceived expenditure of I Need To Focus mutes itself!

    (Footgun if the side-video is too interesting.)



  • The “we have more than 5 senses” insistence, while interesting, misconstrues what is typically understood as a “sense” by the average person.

    When children are taught what the 5 senses are, i.e. seeing, hearing, touch, taste and smell, these are more literary senses than scientific ones. (In another vein, it’s like disagreeing whether a tomato is a vegetable, fruit, or both – scientists and cooks have different definitions!)

    Proprioception, the unconscious spatial perception of your body parts, falls under “feel.” Hunger and thirst do, too. I feel hungry, I feel that my leg is below me, I feel off-balance. These scientifically-defined senses fall under one literary sense or another.

    Since this is just a mangling of definitions, it’s almost irresponsible to call the five-senses thing a misconception. That being said, it did interest me; did you know that endolymph fluid in our ears uses its inertia to tell us what’s going on when we turn our heads? ツ