The AI is trained on recordings of his voice which they have not secured the rights to, though.
Not really the case with the latest models: a couple of seconds of audio are enough to clone a voice, as you can essentially remix it from all the other training data; you don’t need that person’s specific voice for training anymore. This is more a personality rights and trademark issue than a copyright one.
In fact, machine translation started much earlier than this AI craze…
…and has since then switched over to deep-learning-based stuff like everything else. This “current” AI craze is not new; it has been going on for over a decade if you have been paying attention.
The thing is no one is outraged about machine translation because it is not a primary creative process.
But reading text from a script is? Seriously? It takes substantial creative effort to translate jokes into something that works in other cultures and to make dialog in another language fit the existing lip movements. All of that is now getting replaced by AI. And of course all the voice actors that did the dubs are out of a job as well.
The reaction of ls to this is unexpected:
$ mkdir foo
$ echo Foo > foo/file
$ chmod a-x foo
$ ls -l foo
ls: cannot access 'foo/file': Permission denied
total 0
-????????? ? ? ? ? ? file
I expected to just get a “Permission denied”, but it can still list the content. So x is for following a name to its inode and r is for listing the directory content (i.e. just the names)?
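A quick check with the bits flipped the other way seems to confirm that; with x but without r a known name can still be followed, the directory just can’t be listed anymore (bar/file is just a throwaway example like foo above):
$ mkdir bar
$ echo Bar > bar/file
$ chmod a-r bar
$ ls bar
ls: cannot open directory 'bar': Permission denied
$ cat bar/file
Bar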
I am not terribly impressed. The ability to build and run apps in a well defined and portable sandbox environment is nice. But everything else is kind of terrible. Seemingly simple things like having a package that contains multiple binaries aren’t properly supported. There are no LTS runtimes, so you’ll have to update your packages every couple of months anyway or users will get scary errors due to obsolete runtimes. No way to run a flatpak without installing it. Terrible DNS-based naming scheme. Dependency resolution requires too much manual intervention. Too much magic behind the scenes that makes it hard to tell what is going on (e.g. ostree). No support for dependencies other than the three available runtimes and thus terrible granularity (e.g. you can’t have a Qt app without pulling in all the KDE stuff).
Basically it feels like one step forward (portable packages) and three steps back (losing everything else you learned to love about package managers). It feels like it was built to solve the problems of packaging proprietary apps while contributing little to the Free Software world.
I am sticking with Nix, which feels way closer to what I expect from a Free Software package manager (e.g. it can do nix run github:user/project?ref=v0.1.0).
I’ll just leave this link here as a counterpoint (somewhat NSFW):
https://www.reddit.com/r/StableDiffusion/comments/11un888/flamboyant_origami_fgures/
A whole lot of weird stuff can be created by bashing things together with AI. The beauty of AI is after all that you can “edit” with high level concepts, not just raw pixels.
And as for humans and dogs: https://imgur.com/a/TdXO7tz
How does a completely decentralized platform handle data that should be removed?
You make blacklists of forbidden content and relays can use or ignore them. It’s up to the relay; there is no central authority that can make content go away globally. Nostr is built to be censorship-resistant.
In the long run I think a platform like that
It’s not a platform, it’s just a protocol and apps using that protocol.
I can move to another Mastodon instance, and keep following the same accounts.
You can’t. What you can and can’t follow is determined by whatever the server federates with, which is not under your control. Also, you lose all your followers, and in the case of a server shutdown all the accounts on that server stop existing, so you can’t follow them either.
Federation is a brittle framework that starts collapsing the moment anybody tries to use it seriously.
Let’s not forget threads planned to monetize every interaction it was aware of,
So does Gmail. Making money running a bit of the network should not be a problem; quite the opposite, it just means the network won’t run out of money. This kind of arbitrary enforcement of political ideology should have no place this low in the network structure.
Let’s not forget we’re really breaking new ground here
We really aren’t. It’s just repeating what email and Usenet have done for 40 years.
There is no central authority in mastodon.
There is no centralized authority on Twitter either, because you can always go and use Facebook. The Web is a federated system where everybody just decided they don’t want to talk to anybody else.
If you make a Mastodon account, your digital identity is bound to that one server. You can’t move to another server. You can’t communicate with other servers that got defederated. Exactly the same as Facebook and Twitter. It’s only decentralized up until server admins decide that it isn’t, which has already happened numerous times in the past. The whole thing is basically just based on wishful thinking. If everybody were nice to each other and servers ran forever, it would be totally fine, but that’s not how the world works.
There are many entities that are part of a federated system, just like email
Email is a terrible protocol by modern standards and the problems of federation show pretty clearly in email, as the majority of people stick to Gmail and a handful of other major providers. There is no reason to repeat past mistakes. The saving grace with email is that you don’t have the moral police looking through your emails and kicking you off their server when they find something they don’t like (outside of sending spam); with Mastodon, on the other hand, they do exactly that.
It’s federated, not decentralized. Which even Mastodon itself doesn’t seem to realize or care about, since they falsely advertise themselves as decentralized.
Decentralized means there is no central authority.
Federation just means there are many centralized authorities that may or may not communicate with each other.
I really don’t see what Mastodon is supposed to solve in the long run. The server has full control and can do whatever it wants. Just look at what happened with Threads.net: a big company joins the Fediverse and instead of celebrating, everybody starts thinking about defederating it. This approach is doomed to fail if it ever gets popular.
Nostr looks like a much more promising approach, with proper cryptographic identities and signatures. Nobody owns you there. Servers are just dumb relays. If one steps out of line, you can just use another one.
NixOS uses a naming convention for packages that keeps them all separate from each other; that’s how you get /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-118.0/. /usr isn’t used for packages and only contains /usr/bin/env for compatibility, nothing else.
The whole system is held together by nothing more than shell scripts, symlinks and environment variables, standard Unix stuff, which makes it very easy to understand if you are already familiar with Linux.
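You can see that indirection directly; on NixOS the binaries on your PATH are just symlinks that eventually resolve into the store, so something like the following prints the full store path (the hash will of course differ on your machine):
$ readlink -f $(which firefox)
/nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-118.0/bin/firefox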
“Declarative” means that your whole configuration happens in one Nix config file. You don’t edit files in /etc/ directly; you write your settings in /etc/nixos/configuration.nix and all the other files are generated from there. The same is true for package installation: you add your packages to a text file and rebuild.
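A minimal sketch of that workflow, with Emacs as an arbitrary example package:
$ sudoedit /etc/nixos/configuration.nix
# add the package to the package list, e.g.:
#   environment.systemPackages = with pkgs; [ emacs ];
$ sudo nixos-rebuild switch
# regenerates /etc and activates the new system generation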
If that sounds a little cumbersome, that’s correct, but Nix has some very nice ways around it. Due to everything being nicely isolated from everything else, you do not have to install software to use it; you can just run programs directly, e.g.:
nix run nixpkgs#emacs
You can even run them directly from a Git repository if that repository contains a flake.nix file:
nix run github:ggerganov/llama.cpp
All the dependencies will be downloaded and built in the background and garbage collected again later once you no longer need them. This makes it very easy to switch between versions, run older versions for testing and all that, and you don’t have to worry about leaving garbage behind or accidentally breaking your distribution.
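The cleanup itself is an explicit command (it can also be scheduled to run automatically), e.g.:
$ nix-collect-garbage        # delete store paths that nothing references anymore
$ nix-collect-garbage -d     # also delete old generations first, which frees up a lot more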
The downside of all this is that some proprietary third party software can be a problem, as it might expect files to be in /usr that aren’t there. NixOS has ways around that (buildFHSEnv), but it is quite a bit more involved than just running a setup.sh and hoping for the best.
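For the common “random vendor binary that expects a normal Linux layout” case, a quick way out is the steam-run wrapper from nixpkgs, which despite the name is just a generic prebuilt FHS environment (setup.sh here being whatever installer the vendor ships):
$ nix-shell -p steam-run
$ steam-run ./setup.sh    # runs the installer inside an FHS-style environment with /usr, /lib etc. in place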
The upside is that you can install the Nix package manager on your current distribution and play around with it. You don’t need to use the full NixOS to get started.
The crazy part is that it is not even clear what they signed up for. Everybody started talking about the “Metaverse” as if it was an actual thing. But it never was. There never was an app, a standard or much of anything.
Second Life ain’t exactly perfect either, but at least that’s an actual thing that exists and in which you can open up your virtual advertisement booth.
It’s crazy how Zuckerberg hyped it up to the extreme, even renamed his company for it, and then never actually built anything remotely worthy of that name. What is going on in Horizon Worlds still looks less interesting than what they demoed with Facebook Social all the way back in 2016 on the Oculus Rift.
Just give me a virtual space where I can watch movies, play games and go shopping with friends. It shouldn’t be that hard to build something that at least feels a bit deeper than just yet another chat app. Or take the silly stuff CodeMiko is doing; that is what I expect to be happening in the Metaverse, yet it happens in 2D on Twitch. Even Meta’s own conferences are still real world events with video screens, not events in the Metaverse.
I don’t mind the idea of the Metaverse, but the implementation is light years behind where it should be.
Yes, it’s public and official: “The external battery supports up to 2 hours of use, and all day use when plugged in.”
VisionPro can barely be considered a portable/mobile device and it won’t even last through a modern movie.
An HMD has already replaced my TV, and that’s a crappy one from 5 years ago. The VisionPro is on a whole different level in terms of features and resolution. The ability to have a virtual screen wherever you want it and however big you want it shouldn’t be underestimated. And that’s not even counting everything else the headset can do.
The Quest2 is $300. That is a pretty reasonable entry price for a Metaverse. The problem there was more that Meta never actually implemented a Metaverse. Putting that thing on your head doesn’t launch you into the Metaverse, but just into the home screen where you select apps to launch from a 2D menu. Their whole software stack does a terrible job of making use of the fact that you have a 3D display on your head. They didn’t even have basic things like VR180-3D trailers for their games. There were no virtual shops to buy stuff. No cinemas to watch stuff. Just apps you can launch. Horizon Worlds, which was supposed to be their Metaverse, was still just another app to launch and not meaningfully integrated with anything else. Playstation Home was more of a Metaverse than anything Meta ever built, though even that fell rather short.
The thing is, computer interaction can benefit quite a bit from a 3D space. I really liked what Microsoft did with WMR Portal and how it let you organize your apps simply by placing them in a 3D space, meaning you could have a cinema space with all your video related apps, a stack with games that you were playing, a stack with games you finished, etc. You could have frequently used webpages pasted to the walls. You could just grab the things, resize them and put them somewhere else. It was far more intuitive than any 2D interface I ever used and extremely customizable to your needs.
The problem was that it was also incomplete and unfinished in a lot of other ways and Microsoft just gave up on it. Outside of WMR Portal there has been surprisingly little effort put into building good VR user interfaces, and even less when it comes to actively taking advantage of the 3D space (e.g. plenty of apps still use drop shadows to simulate 3D instead of making the buttons actually 3D).
Will be interesting to see how well the VisionPro does in this space. They seem to be a lot better with the basic UI elements than everybody else (e.g. dynamically lighting them to fit the AR environment and using real 3D), but at the same time their focus on a static sitting experience without locomotion drastically limits how much advantage you can take of the 3D. Their main menu so far looks more like a tablet UI stuck to your face than a 3D UI.
AR has a huge battery life and size problem. The amount of video processing that thing would need to do to be useful would result in an enormous device with an hour or two of battery life, rendering it useless for any real world consumer application.
On top of that it has a gigantic privacy and surveillance problem.
And if that weren’t enough, what the heck are you going to do with it? Everything an AR headset could do, you can already do today with your phone. There is very little need to wear that functionality on your head all the time.
For some rare business use cases it can make sense, that’s why the Microsoft Hololens is still around, but even they struggle to find any areas where it makes it past the “nice idea” stage and actually into a working product.
Meta’s very own Horizon Worlds still hasn’t even launched globally, it is still restricted to a small handful of countries. On top of that it isn’t even a Metaverse in any meaningful sense, it’s just yet another VR chat application.
What separates a “real” Metaverse from a normal chat app is that it connects all the other applications into one unified virtual space, but Horizon Worlds ain’t doing that and nobody else is either.
Sony’s Playstation Home back from the PS3 days or Second Life are still closer to a Metaverse than any of the modern attempts.
Even further back there was Lucasfilm’s Habitat in 1986. It’s kind of shocking how little the idea of the “Metaverse” has evolved since then. It’s still just some virtual space with avatars, different hats and chatting.
Quite hard. We have had Open Source-ish LLMs for only around six months; whether they are even up to the task of verifying a translation is another issue, and whether they are up to Debian’s Open Source guidelines yet another. This is obviously going to be the long term solution, but the tech for it simply has not been around for very long.
And of course once you have translation tools good enough for the task, you might skip the human translator altogether and just use machine translations.
The algorithms killed the platforms
And somewhat ironically, the lack of algorithms is what killed the Web. The Web has a huge problem with discovering new content; it’s essentially impossible unless you already know exactly what to search for, but then of course it isn’t new content anymore. Meanwhile the Facebooks, TikToks and Youtubes are actually quite good at discovering content; the latter two especially can dig up extremely niche-interest content with only hundreds of views.
The crux is that they give you no direct way to interact with the algorithm, it’s all guesswork based on your view history and clicks. There is no “show me less clickbait garbage”-button.
The solution should be algorithms that are transparent, switchable and under the user’s control, but so far I have never seen anybody develop anything like that. It’s either one algorithm maximizing engagement and ads or some federated thing without any algorithms at all and complete garbage discovery and search.
The appeal to “X11 is too complicated, Wayland is much simpler” ain’t holding much water when we are 15 years into the project and it’s still not done. As it turns out, a lot of that “junk” in X11 is rather useful, and cutting out the junk in Wayland just made it unusable. The work to reimplement the missing functionality has been eating up a lot of years.
Dumb stuff in Rust has to be explicitly marked with unsafe, meaning if you review the code you only have to focus on a couple of lines instead of the whole project.
You can of course still write lots of other bugs in Rust, but C-style buffer overflows are impossible, which eliminates the majority of security issues.
C has no memory protection. If you access the 10th element of a 5-element array, you get whatever is in memory there, even if it has nothing to do with that array. Furthermore, this doesn’t just allow access to data you shouldn’t be able to access, but also the execution of arbitrary code, as memory doesn’t make a (big) distinction between data and code.
C++ provides a few classes to make it easier to avoid those issues, but still allows all of them.
Ruby/Python/Java/… provide memory safety and will throw an exception, but they check every access at runtime, which makes them slow.
Rust on the other hand tries to prove as much as it can at compile time. This makes it fast, but also requires some relearning, as it doesn’t allow pointers without clearly defined ownership (e.g. the classic case of keeping a pointer to the parent element in a tree structure isn’t allowed in Rust).
Adding the safeties of Rust to C would be impossible, as C allows far too much freedom to reliably figure out whether a given piece of code is safe (halting problem and all that). Rust purposefully throws that freedom away to make safe code possible.
That’s the idea, and while we’re at it, we could also make .zip files a proper Web technology with browser support. At the moment ePub exists in this weird twilight where it is built out of mostly Web technology, yet isn’t actually part of the Web. Everything being packed into .zip files also means that you can’t link directly to the individual pages within an ePub, as HTTP doesn’t know how to unpack them. It’s all weird and messy and surprising that nobody has cleaned it all up and integrated it into the Web properly.
So far the original Microsoft Edge is the only browser I am aware of with native ePub support, but even that didn’t survive the switch to Chrome’s Blink.
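You can see how thin that twilight layer is by just treating an ePub as the zip file it is (book.epub and the chapter path are placeholders; the exact layout differs per book):
$ unzip -l book.epub                          # lists mimetype, META-INF/container.xml and a pile of XHTML/CSS
$ unzip -p book.epub OEBPS/chapter01.xhtml    # any individual page can be pulled out as plain XHTML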
It would solve the long-form document problem. It wouldn’t help with the editing however. The problem with HTML as it is today is that it has long left its document-markup roots and turned into an app development platform, making it not really suitable for plain old documents. You’d need to cut it down to a subset of features that are necessary for documents (e.g. no Javascript), similar to how PDF/A removes features from PDF to create a more reliable and future-proof format.
I’d set up a working group to invent something new. Many of our current formats are stuck in the past; PDF or ODF, for example, are still emulating paper, even though everybody keeps reading them on a screen. What I want to see is a standard document format that is built for the modern-day Internet, with editing and publishing in mind. HTML ain’t it, as it can’t handle editing or long-form documents well, EPUB isn’t supported by browsers, Markdown lacks a lot of features, etc. And then you have things like Google Docs, which are Internet-aware, editable and shareable, but also completely proprietary and lock you into the Google ecosystem.
.tar is pretty bad as it lacks an index, making it impossible to quickly seek around in the file. The compression on top adds another layer of complication. It might still work great as a tape archiver, but for sending files around the Internet it is quite horrible. It’s really just getting dragged around for cargo cult reasons, not because it’s good at the job it is doing.
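The difference shows as soon as you want a single file out of a big archive (big.tar.gz and big.zip standing in for two archives with the same content):
$ tar -xzf big.tar.gz docs/report.html    # has to decompress and scan the whole stream to find one entry
$ unzip -q big.zip docs/report.html       # jumps straight to the entry via the central directory at the end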
In general I find the archive situation a little annoying, as archives are largely unnecessary; that’s what we have directories for. But directories don’t exist as far as HTML is concerned and only single files can be downloaded easily. So everything has to get packed and unpacked again, for absolutely no reason. It’s a job computers should handle transparently in the background, not an explicit user action.
Many file managers try to add support for .zip and let you go into archives as if they were folders, but that abstraction is always quite leaky and never as smooth as it should be.
Do the Maintainers of most distros manually read the code to discover whether an app is malware?
No. At best you get a casual glance over the source code and at worst they won’t even test that the app works. It’s all held together with spit and baling wire; if a malicious entity wanted to do some damage, they could do so quite easily, it would just require some preparation.
The main benefit of classic package maintenance is really just time: it can take months or even years before a package arrives in a distribution, and even once it has arrived, it still has to make it from unstable to stable. That leaves plenty of room for somebody to find the issue before it even gets packaged, and it makes the whole thing substantially less attractive for any attacker, as they won’t see any results for months.
Humans are wrong all the time, and confidently so. And it’s an apples-and-oranges comparison anyway, as ChatGPT has to cover essentially all human knowledge, while a single human only knows a tiny subset of it. Nobody expects a human to know everything ChatGPT knows in the first place. A human put in ChatGPT’s place would not perform well at all.
Humans make the mistake of overestimating their own capabilities because they can find mistakes the AI makes, when they themselves wouldn’t be able to perform any better; at best they’d make different mistakes.
The current LLMs can’t loop and can’t see individual digits, so their failure at seemingly simple math problems is not terribly surprising. For some problems it can help to rephrase the question in such a way that the LLM goes through the individual steps of the calculation instead of telling you the result directly.
And more generally, LLMs aren’t exactly the best way to do math anyway. Humans aren’t any good at it either; that’s why we invented calculators, which can do the same task with a lot less computing power and a lot more reliably. LLMs that can interact with external systems are already available behind a paywall.
I absolutely despise the following directories: Documents, Music, Pictures, Public, Templates, Videos.
Change them: https://wiki.archlinux.org/title/XDG_user_directories
Yes, but OP wanted .dotfiles. The nice thing with XDG is that you can change all that.
The user directories Desktop, Downloads, etc. can be changed as well: https://wiki.archlinux.org/title/XDG_user_directories
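They are all defined in a plain text file, so pointing one of them somewhere else is a one-liner (assuming the xdg-user-dirs tools are installed, which they are on most desktop distros):
$ xdg-user-dirs-update --set DOWNLOAD "$HOME/dl"    # or edit the file below directly
$ cat ~/.config/user-dirs.dirs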
Not really. Recommendation algorithms are great for discovering related information and new stuff. They even beat search at its own game, as search is often limited to plain text, while the algorithms take the broader context into account. The problem is that you have no control over the recommendations, no transparency in how they work, no way to switch or disable them and no way to explore the deeper knowledge hidden in them. It’s all just a magical black box for more engagement and more ads.
A recommendation algorithm that somehow manages to be open and transparent would be a very big step towards fixing the Web. Lemmy and Co. are too busy replicating failed technology from 30 years ago instead of actually fixing the underlying problems.