Does it apply it to all feeds? Or can it detect what feeds are actually Youtube ones?
I use rclone and duplicati depending on the needs of the backup.
For long-term backups I use duplicati: it has a GUI and it can upload to several places (mine are spread between e2 and drive).
You configure the backend, password for encryption, schedule, and version retention.
rclone, with the crypt remote, lets you mount your backups as an external drive, so you need to manually handle the actual copying of the data into it, plus versioning and retention.
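For reference, the rclone side looks roughly like this (the remote names, paths, and passphrase are just examples, adapt them to your setup):

# wrap an existing remote (here called "e2") in an encrypted crypt remote
$ rclone config create e2-crypt crypt remote=e2:backups password=$(rclone obscure 'my-passphrase')
# option A: mount it like an external drive and copy into it manually
$ rclone mount e2-crypt: ~/mnt/backups --vfs-cache-mode writes
# option B: skip the mount and sync straight into it
$ rclone sync ~/important e2-crypt:important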
I can’t give you the technical explanation, but it works.
My Caddyfile only has something like this:
@forgejo host forgejo.pe1uca
handle @forgejo {
    reverse_proxy :8000
}
and everything else, like cloning via ssh with git@forgejo.pe1uca:pe1uca/my_repo.git, has worked properly.
My guess is git only needs the host to resolve the IP and then connects to the ssh port directly, so it never goes through Caddy.
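In other words, something like this never touches the reverse proxy, it's plain ssh to the default port 22 (same example repo as above):

# the usual short form
$ git clone git@forgejo.pe1uca:pe1uca/my_repo.git
# is just shorthand for an ssh connection on port 22, no HTTP/Caddy involved
$ git clone ssh://git@forgejo.pe1uca:22/pe1uca/my_repo.git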
Ohhh! Now I understand!
Yeah, then that’s an issue on mastodon.
I mentioned it some time ago: the fact that mastodon and Lemmy use the same protocol is annoying, because the experiences are different, so it causes a lot of issues :/
Unless the Lemmy devs have changed something since last year, this shouldn’t be the case; there’s a bug in there somewhere.
All interactions are received by the instance hosting the community, and that instance is responsible for broadcasting the interaction to each instance where a subscribed user is hosted.
So mastodon is only responsible for sending the upvote to feddit.dk, and then feddit.dk sends it to all the other instances.
I’m not saying to delete anything, I’m saying the file system could save space with something similar to deduping.
If I understand correctly, deduping works by using the same data blocks for similar files, so there’s no actual data loss.
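For example (assuming the data lives on btrfs or ZFS, the tools vary by filesystem):

# btrfs: find duplicate extents and reflink them so identical blocks are stored only once
$ duperemove -rd /mnt/storage
# ZFS: enable inline block dedup on a dataset (RAM hungry, use with care)
$ zfs set dedup=on tank/storage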
but often lead developers to just display them in the frontend
Oh boy I feel this one.
My API is meant for scripting (i.e. it’s for developers and the errors are for developers), but the UI team uses it and they just straight up display the error from their HTTP request to non-technical people, who also might not get to know all the parameters actually needed for the request.
And even when the error is in fact in my code, and I send all the data I need to debug and replicate the error, the users can’t tell me, because the UI truncates the response, so the user only sees something like Error in pe1uca's API: {"error":"bad request","message":"Your request has an error, please check th... (truncated). So the message gets truncated and the link to the documentation is also never shown .-.
I don’t know what changed since 7807 (which this one obsoletes), but this article helped me quickly understand the first one, hopefully it’s still somewhat relevant.
https://lakitna.medium.com/understanding-problem-json-adf68e5cf1f8
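For reference, a problem+json body is just the handful of standard members from the RFC; the values and URLs below are made-up examples:

{
  "type": "https://example.com/docs/errors/missing-parameter",
  "title": "Missing required parameter",
  "status": 400,
  "detail": "The 'user_id' parameter is required for this endpoint.",
  "instance": "/api/v1/reports/1234"
}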
What?
Well, I can only speak for myself, I’m not here to follow users but communities.
And if someone wanted to follow me, I’d see it as kind of annoying for them to see all the different topics I post and comment on instead of something focused.
IMO the ability to see Mastodon interactions in Lemmy and vice-versa is quite annoying since they use the same protocol for different experiences.
Text to speech is what piper is doing.
What I’m looking for is called voice changer since I want to change a voice which already read something.
That’s exactly what I want: “the thing in the Darth Vader halloween masks” but for linux, preferably via CLI to ingest audio files and be able to configure it to change the voice as I want, not only Darth Vader.
I don’t want to manage piper voices, I can handle that directly in my file system as I only have a few.
The issue is none of the ones I’ve found are good for me, so what I need is something to change the voice once it has been generated by piper.
I haven’t completely looked into creating a model for piper, but just having to deal with a dataset is not something I look forward to, like gathering the data and everything that implies.
So, I’m thinking it’s easier to take an existing model and make adjustments to fit a bit better on what I would like to hear constantly.
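To give an idea of what I mean: piper already handles the speech part, and I want a second step that re-voices its output. A very crude stand-in with sox is just a pitch shift, which is not a real voice changer (the model file and pitch amount are only examples):

# 1) piper does the text to speech
$ echo 'Some article text' | piper --model en_US-lessac-medium.onnx --output_file out.wav
# 2) the step I want: change the voice of the generated audio (sox pitch is only a rough stand-in)
$ sox out.wav deeper.wav pitch -300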
I’m looking at this in eternity and it seems only spoilers don’t work from the post you linked.
User and community links work properly.
Check the most upvoted answer and then look into tubearchivist, which can take your yt-dlp parameters and URLs to download the videos, plus process them to have a better index of them.
Well, it’s a bit of a pipeline: I use a custom project that exposes an API where I can send files or URLs to summarize videos.
With yt-dlp I get the video and transcribe it with faster-whisper (https://github.com/SYSTRAN/faster-whisper), then the transcription is sent to the LLM to actually make the summary.
I’ve been meaning to publish the code, but it’s embedded in a personal project, so I need to take the time to isolate it '^_^
I’ve used it to summarize long articles, news posts, or videos when the title/thumbnail looks interesting but I’m not sure if it’s worth the 10+ minutes to read/watch.
There are other solutions, like dedicated summarizers, but I’ve looked into them and they only extract exact quotes from the original text; an LLM can also paraphrase, making the summary a bit more informative IMO.
(For example, one article mentioned a quote from an expert talking about a company, the summarizer only extracted the quote and the flow of the summary made me believe the company said it, but the LLM properly stated the quote came from the expert)
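A rough sketch of that flow (the video URL, model sizes, and the LLM endpoint are placeholders; an Ollama-style local endpoint stands in for whatever LLM you run, and my actual code wraps all of this behind its own API):

# yt-dlp -> faster-whisper -> LLM, stripped down to the essentials
import json
import subprocess
import urllib.request

from faster_whisper import WhisperModel

url = "https://example.com/watch?v=..."  # whatever looked interesting

# 1) grab the audio only
subprocess.run(["yt-dlp", "-x", "--audio-format", "mp3", "-o", "audio.%(ext)s", url], check=True)

# 2) transcribe locally
model = WhisperModel("small", device="cpu", compute_type="int8")
segments, _info = model.transcribe("audio.mp3")
transcript = " ".join(segment.text for segment in segments)

# 3) ask the LLM for the summary
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",
        "prompt": "Summarize the following transcript:\n" + transcript,
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])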
This project https://github.com/goniszewski/grimoire has on its roadmap a way to connect to an AI to summarize the bookmarks you make and generate 3 tags.
I’ve seen the code, but I don’t remember the exact status of the integration.
Also, I have a few models dedicated to coding, so I’ve also asked for a few pieces of code and configurations just to get started on a project, nothing too complicated.
In that case I’d recommend you use immich-go to upload them and still back up only immich instead of your original folder, since if something happens to your immich library you’d have to manually recreate it, because immich doesn’t update its DB from the file system.
There was a discussion on GitHub about worries of data being compressed by immich, but it was clarified that uploaded files are saved as they are and only copies are modified, so you can safely back up its library.
I’m not familiar with RAID, but yeah, I’ve also read it’s mostly about uptime.
I’d also recommend you look at restic and duplicati.
Both are backup tools; restic is a CLI and duplicati is a service with a UI.
So if you want to create the cron jobs yourself, go for restic.
Though if you want to be able to read your backups manually, maybe check how the data is stored: I’m using duplicati and it saves everything in files that need to be read back by duplicati, so I’m not sure I could just go and open them, unlike the data copied with rsync.
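For the restic route the cron setup is roughly this (repo path, password file, source folder, and retention are placeholders):

# one time: create the repository
$ RESTIC_PASSWORD_FILE=/root/.restic-pass restic -r /srv/backups/restic init
# then in the crontab (crontab -e): nightly backup at 03:00, weekly prune on Sundays
0 3 * * * RESTIC_PASSWORD_FILE=/root/.restic-pass restic -r /srv/backups/restic backup /home/me/important
0 4 * * 0 RESTIC_PASSWORD_FILE=/root/.restic-pass restic -r /srv/backups/restic forget --keep-daily 7 --keep-weekly 4 --prune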
Unless they’ve changed how it works, I can confirm this.
Some months ago I was testing Lemmy locally and I used the same URL to create a new post; it never showed up in the UI because Lemmy treated it as a crosspost and hid it under the older one.
At that time it only counted as a crosspost if the URL was the same (I’m not so sure about the title), but the body could be different.
The thing to verify would be whether this grouping is done by the UI or by the server, which might explain why some UIs show duplicated posts.
For local backups I use this command:
$ rsync --update -ahr --no-i-r --info=progress2 /source /dest
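# --update skips files that are already newer at the destination, -a/-r copy recursively preserving attributes, -h prints human-readable sizes, and --no-i-r with --info=progress2 shows a single overall progress bar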
You could first compress them, but since I have the space for the important stuff, this is the only command I need.
Recently I also made a migration similar to yours.
I’ve read jellyfin is hard to migrate, so I just reinstalled it and manually recreated the libraries; I didn’t mind losing the watch history and other stuff.
IIRC there’s a post or github repo with a script to try to migrate jellyfin.
For immich you just have to copy the database files with the same command above and that’s it (of course with the stack down, you don’t want to copy DB files while the database is running).
For the library I already had it on an external drive with a symlink, so I just had to mount it in the new machine and create a similar symlink.
I don’t run any *arr so I don’t know how they’d be handled.
But I did do the migration of syncthing and duplicati.
For syncthing I just had to find the config path and I copied it with the same command above.
(You might need to run chown in the new machine.)
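For example, something like this (hosts, users, and the config path are placeholders; newer syncthing versions may keep the config under ~/.local/state/syncthing instead):

# copy the old config over with the same rsync command as above
$ rsync --update -ahr --no-i-r --info=progress2 olduser@oldhost:.config/syncthing/ ~/.config/syncthing/
# make sure the files belong to the user that runs syncthing on the new machine
$ sudo chown -R newuser:newuser ~/.config/syncthing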
For duplicati it was easier since it provides a way to export and import the configurations.
So depending on how the *arr programs handle their files, it can be as easy as finding their root directory and rsyncing it.
Maybe this could also be done for jellyfin.
Of course be sure to look for all the config folders they need; some programs might split them between their working directory, ~/.config, ~/.local, /etc, or any other custom path.
EDIT: for jellyfin data, evaluate how hard it would be to find again; it might be difficult, but if it can be found it doesn’t require the same level of backups as your immich data, because immich normally holds data you created that can’t be found anywhere else.
Most series I just have on the main jellyfin drive.
But immich is backed up with 3-2-1: 3 copies of the data (I actually have 4), on at least 2 types of media (HDD and SSD), with 1 being offsite (rclone encrypted into e2 drive).
Start by learning docker. You don’t have to selfhost anything yet, just learn to run a container, especially for automated stuff. Then learn to build the images and run docker compose.
Also, you could start checking some form of infrastructure as code; I usually hear about ansible and nixos.
This gives you a way to easily redeploy your services on any hardware.
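If it helps, the very first steps can be as small as this (the images are just examples):

# run a throwaway container to confirm docker works
$ docker run --rm hello-world
# run an actual service in the background, reachable on http://localhost:8080
$ docker run -d --name web -p 8080:80 nginx:alpine
# later, describe your services in a compose.yaml and bring them all up with one command
$ docker compose up -d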