You haven’t addressed the case of migrating to a non-geographic TLD
I trust none of them. People are running anything on Kubernetes 😆
Oh wow! And that reservation makes so much sense under these circumstances. Obviously, we could never consider the possibility of a three-letter TLD for a country, or migrating a two-letter TLD to a non-country-specific name, because reasons.
iPlayer isn’t an ‘open’ service: you have to use a supported client, even if that client is a web browser. Your options are limited to platforms that can support those clients. Personally I’ve found Roku preferable to Chromecast, Fire Stick, or a full PC. I may at some point have tried to get iPlayer running with Kodi back in the day, when it was still XBMC, but XBMC was pretty clunky anyway, let alone on a Raspberry Pi.
Looks like it’s too easy to delete. I click on the link and I get a not found exception
Welcome
Depends what you want to play it on. In my house we have:
- 3 laptops
- 2 tablets
- 2 mobile phones (1 Android, 1 iPhone)
- a TV
Not all these devices support local storage for music and it’s a pain to sync files between them. With Jellyfin the complete library is in one location with a consistent interface. It can also be made available remotely if I choose.
People like this
I feel this and some of the other comments in this thread are missing the point. It’s not about me and my followers; it’s about the news sources and topics that I search for or follow. They simply haven’t moved to Mastodon, and where notable individuals that I follow have tried, it simply hasn’t worked out due to lack of interest. I’m not interested in the fediverse as a topic in itself, I’m interested in the topics and events I want to follow. Something happens and I can find and read and watch clips about it on Twitter. Not so on Mastodon.
people I follow
I’ve been on Mastodon for over a year and the content simply isn’t there. Several of the people that I follow on Twitter have tried moving or duplicating to Mastodon. They’ve had a fraction of the visibility and engagement from commenters that they would get on Twitter. Invariably after a few months they have essentially given up on it as a primary medium. For me the discoverability is essentially non-existent, which I don’t think is helped by the idea of it being based around instance-local communities, which have no meaning when you’re looking at something like Twitter.
You might say that the definition is ‘Elastic’
Because they don’t know or trust them
GitLab just doesn’t compare in my view:
To begin with, you have three different major versions to work with, each of which has different features and limitations, but all sharing the same documentation. A recipe for confusion if ever I saw one. Some of what’s documented only applies to the enterprise SaaS as used by GitLab themselves and isn’t available to customers.
Whilst theoretically it should be possible to have a GitLab pipeline equivalent to a GitHub Actions workflow, in production these invariably seem to metastasize through the use of includes, making them tens or hundreds of thousands of lines long. Yes, I’m speaking from production experience across multiple organisations. Things that you would think were obvious and straightforward, especially coming from GitHub Actions, seem difficult or impossible. Example:
I wanted to set up a GitHub Actions workflow for a little Golang app: on push to any branch, run tests and make a release build available, retaining artefacts for a week; on merging to main, make a release build available with artefacts retained indefinitely. It took me a couple of hours when I’d never done this before, but it was all more or less as one would expect. I tried to do the equivalent in GitLab’s free SaaS and gave up after a day and a half: testing and building was okay, but it seems you’re expected to use a third-party artefact store. Yes, you could make the case that this is outside of remit, although given that the major competitor/alternative supports it, that seems a strange position. In any case, you would expect it to be clearly documented; it isn’t, or at least wasn’t six months ago.
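For what it’s worth, here’s a minimal sketch of the kind of workflow described above. The file path, the binary name (`myapp`), the Go version, and the release naming are my own placeholders, not the original setup:

```yaml
# Hypothetical .github/workflows/build.yml -- a sketch, not the poster's actual workflow.
name: build

on:
  push:

permissions:
  contents: write  # only needed by the release step on main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"

      - name: Run tests
        run: go test ./...

      - name: Build binary
        run: go build -o myapp .   # "myapp" is a placeholder name

      - name: Upload build artefact, kept for a week
        uses: actions/upload-artifact@v4
        with:
          name: myapp-${{ github.sha }}
          path: myapp
          retention-days: 7

      - name: Publish a release on main (kept indefinitely)
        if: github.ref == 'refs/heads/main'
        env:
          GH_TOKEN: ${{ github.token }}
        run: gh release create "build-${{ github.run_number }}" myapp --notes "Automated build"
```

Note that ordinary workflow artefacts are only retained for a limited period (90 days by default, configurable per repository), so the “retained indefinitely” half is most naturally handled by publishing a GitHub Release rather than an artefact.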
It’s a very mass-market, not particularly well-informed general news source, and this is a specialist community where this topic is relevant to its specialist field.
The OpenBSD project maintains portable versions of many subsystems as packages for other operating systems. Because of the project’s preferred BSD license, which allows binary redistributions without the source code, many components are reused in proprietary and corporate-sponsored software projects. The firewall code in Apple’s macOS is based on OpenBSD’s PF firewall code,[6] Android’s Bionic C standard library is based on OpenBSD code,[7] LLVM uses OpenBSD’s regular expression library,[8] and Windows 10 uses OpenSSH (OpenBSD Secure Shell) with LibreSSL.[9]
Zim desktop wiki? I’ve used it for years. Cross platform, open source, lots of features. Bear in mind that there are a lot of plugins, including one specifically for journaling
Coming from what looks to me like a different perspective to many of the commenters here (disclosure: I am a professional platform engineer):
If you are already scripting your setups then yes, you should absolutely learn/use Ansible. The key reasons are that it is robust, explicit, and repeatable; it doesn’t matter whether that’s the same host multiple times or multiple hosts. I have lost count of the number of pet Bash scripts I have encountered in various shops, many of them created by quite talented people. They all had problems. Some typical ones:
| Issue | Example |
| --- | --- |
| Most people write Bash scripts without dependency checks | ‘Of course everyone will have GNU coreutils installed, it’s part of every Linux distro’ - someone runs the script on a Mac |
| We need to pass this action out to a command-line tool, that’s obvious | Fails if the command-line tool isn’t available; no handling of errors from the tool if they aren’t exactly what’s expected |
| Of course people will realise that they need to run this from an environment prepared in this exact (undocumented) way | Someone runs the script in a different environment |
| Of course people will be running this on x86_64/AMD64, all these third-party binaries are available for that | Someone runs it on ARM |
| Of course people will know what to do if the script fails midway through | People try to re-run the script when it fails midway through and it’s a mess |
The thing about Ansible is that it can be modular (if you want) and you can use other people’s code, but fundamentally it runs one step at a time. For each step you will know whether it ran, whether it actually changed anything, and whether it failed, and you can safely re-run the whole thing from the start.
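As a concrete illustration, here’s a minimal sketch of a playbook in that style; the package list, the user name, and the paths are made up for the example, not anyone’s actual setup:

```yaml
# Hypothetical setup.yml -- each task is explicit, declarative, and safe to re-run.
- name: Prepare a host
  hosts: all
  become: true
  tasks:
    - name: Ensure required tools are present (no silent assumptions about the base image)
      ansible.builtin.package:
        name:
          - git
          - curl
        state: present

    - name: Ensure the application user exists
      ansible.builtin.user:
        name: appuser
        state: present

    - name: Ensure the config directory exists
      ansible.builtin.file:
        path: /etc/app
        state: directory
        mode: "0755"

    - name: Deploy configuration from a template
      ansible.builtin.template:
        src: app.conf.j2
        dest: /etc/app/app.conf
        mode: "0644"
```

Run it with `ansible-playbook -i inventory setup.yml`; running it again is safe, because each task reports ok, changed, or failed rather than blindly repeating work, which is exactly the failure-midway problem in the table above.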
Look into ssh
I know, right? I’m constantly confused by this when I’m dealing with Kubernetes networking.