• 3 Posts
  • 82 Comments
Joined 1 year ago
Cake day: June 21st, 2023


  • Isn’t the whole point of something like end-to-end encryption that not even the company itself can read your messages?

    In that case it wouldn’t matter even if they did turn the info over.

    Edit: I read more of the page you linked. Looks like those NSLs can’t even be used to request the contents either way:

    Can the FBI obtain content—like e-mails or the content of phone calls—with an NSL?

    Not legally. While each type of NSL allows the FBI to obtain a different type of information, that information is limited to records—such as “subscriber information and toll billing records information” from telephone companies.
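
    To spell out why the contents wouldn’t be readable even if they were handed over: with end-to-end encryption the provider only ever stores ciphertext, and the private keys stay on the users’ devices. Here’s a minimal sketch of that idea (using PyNaCl purely as an illustration, not any particular messenger’s actual protocol):

    ```python
    # Illustrative end-to-end encryption sketch (PyNaCl). Real messengers use
    # more elaborate protocols, but the key point is the same: the provider
    # never holds anything it can decrypt.
    from nacl.public import PrivateKey, Box

    # Each user generates a key pair; private keys never leave their devices.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # This ciphertext is all the provider ever stores or could turn over.
    server_copy = bytes(ciphertext)

    # Only Bob, holding his private key, can decrypt it.
    assert Box(bob_key, alice_key.public_key).decrypt(server_copy) == b"meet at noon"
    ```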

  • Hmmm it was even able to pull in private DMs.

    Maybe private DMs on Mastodon aren’t as private as everyone thinks… that, or the open nature of ActivityPub is leaking them somehow?

    Edit - From the article:

    Even more shocking is the revelation that somehow, even private DMs from Mastodon were mirrored on their public site and searchable. How this is even possible is beyond me, as DM’s are ostensibly only between two parties, and the message itself was sent from two hackers.town users.

    From what @delirious_owl@discuss.online mentioned below, it sounds like this shouldn’t be very shocking at all.
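
    For context on why that isn’t so shocking: ActivityPub has no separate “DM” object. A Mastodon DM is just an ordinary Note whose addressing names specific recipients and omits the public collection, and the content travels between servers as plaintext JSON. A rough sketch of what that addressing looks like (the actor URLs and helper below are made up for illustration):

    ```python
    # Rough sketch of ActivityPub addressing. "Private" here is only an
    # addressing convention, not encryption: any server that receives or
    # mirrors the JSON can read (and, as in the article, index) the content.
    PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

    dm_note = {  # illustrative values, not a real post
        "type": "Note",
        "attributedTo": "https://hackers.town/users/alice",
        "to": ["https://hackers.town/users/bob"],  # named recipient only
        "cc": [],
        "content": "this is supposed to stay between us",
    }

    def is_public(note: dict) -> bool:
        """A Note is public only if it's addressed to the public collection."""
        return PUBLIC in note.get("to", []) + note.get("cc", [])

    print(is_public(dm_note))  # False, yet the content is plaintext to every server that handles it
    ```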

  • It’s a pain to switch between accounts. It eats up a ton of CPU if I use it through my browser (unless I use it in Firefox), but in Firefox I can’t get video/voice calls or join meetings.

    On a mobile device (iOS): it randomly logs me out (more accurately, it times out if I haven’t opened the app recently). Notifications aren’t reliable. If I join another group’s meeting as a “guest”, I can switch back to my active chat, but then I only get the meeting’s audio and can’t see what’s happening in it again unless I leave the room and rejoin.

    There’s more, but this is just off the top of my head.

  • It’s on the person using any AI tools to verify that they aren’t infringing on anything if they try to market/sell something generated by these tools.

    That goes for using ChatGPT just as much as it goes for Midjourney/DALL-E 3, tools that create music, etc.

    And you’re absolutely right: this is going to become more and more of a problem for anyone using AI tools, and I’m curious to see how it will factor into future lawsuits.

    I could see some new fair-use factor being raised in court, or courts taking this into account under one of the pre-existing factors.


  • is it different from observing a video tape?

    I would think that it’s different, only because you have the potential to alter what could happen.

    Does traveling back in time even guarantee that someone would react the same way in the same situation?

    Maybe, maybe not; we’re entering the realm of Schrödinger’s cat, as well as the question of how time travel would actually work. Do we create some new branched timeline by traveling back? Do we enter an alternate universe entirely? Do we have a time machine where paradoxes are a problem? The list goes on.

  • I don’t agree that it’s a fake vs fake issue here.

    Even if the “real” photos were touched up in Lightroom or Photoshop, those are tools that actual photographers use.

    It goes to show that there are cases where photos of real people look more AI generated than not.

    The problem here is that we start second-guessing whether a photo was AI generated, and we run into cases where real artists are being told that they need to find a “different style” to avoid looking too much like AI-generated photos.

    If that wasn’t a perfect example for you then maybe this one is better: https://www.pcgamer.com/artist-banned-from-art-subreddit-because-their-work-looked-ai-generated/

    Now think of what can happen to an artist if they publish something in California that has a style that makes it look somewhat AI generated.

    The problem with this law is that it will be weaponized against certain individuals or smaller companies.

    It doesn’t matter whether they can eventually prove that the photo wasn’t AI generated. The damage will already be done by the time they’ve been put through the court system. Having a law where you can put someone through that system just because something “looks” AI generated is a bad idea.

    Edit: And that law is also intended to cover AI text generation. Just think of all the students being accused of using AI for their homework, and how unreliable the detection tools have been at determining whether their work is AI generated.

    We’re going to unleash that on authors as well?