• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 2nd, 2023








  • Depends what you want to play it on. In my house we have:

    • 3 laptops
    • 2 tablets
    • 2 mobile phones (1 Android, 1 iPhone)
    • TV

    Not all these devices support local storage for music and it’s a pain to sync files between them. With Jellyfin the complete library is in one location with a consistent interface. It can also be made available remotely if I choose.



  • I feel this and some of the other comments in this thread are missing the point. It’s not about me and my followers; it’s about the news sources and topics that I search for or follow. They simply haven’t moved to Mastodon, and where notable individuals I follow have tried, it simply hasn’t worked out due to lack of interest. I’m not interested in the fediverse as a topic in itself, I’m interested in the topics and events I want to follow. When something happens, I can find and read and watch clips about it on Twitter. Not so on Mastodon.



  • SquiffSquiff@lemmy.world to Fediverse@lemmy.world · “Bluesky continues to soar” · English · 32 up / 4 down · 2 months ago

    I’ve been on Mastodon for over a year and the content simply isn’t there. Several of the people that I follow on Twitter have tried moving or duplicating to Mastodon. They’ve had a fraction of the visibility and engagement from commenters that they would get on Twitter. Invariably after a few months they have essentially given up on it as a primary medium. For me the discoverability is essentially non-existent, which I don’t think is helped by the idea of it being based around instance-local communities, which have no meaning when you’re looking at something like Twitter.




  • GitLab just doesn’t compare in my view:

    To begin with, you have three different major versions to work with:

    • Self-hosted open source
    • SaaS open source
    • Enterprise SaaS

    Each of which has different features and limitations, but all sharing the same documentation - a recipe for confusion if ever I saw one. Some of what’s documented only applies to the enterprise SaaS as used by GitLab themselves and is not available to customers.

    Whilst theoretically it should be possible to have a GitLab pipeline equivalent to GitHub Actions, in practice these seem to metastasize in production through the use of includes, making them tens or hundreds of thousands of lines long. Yes, I’m speaking from production experience across multiple organisations. Things that you would think were obvious and straightforward, especially coming from GitHub Actions, seem difficult or impossible. Example:

    I wanted to set up a GitHub Action for a little Golang app: on push to any branch, run tests and make a release build available, retaining artefacts for a week; on merging to main, make a release build available with artefacts retained indefinitely. It took me a couple of hours when I’d never done this before, but all more or less as one would expect. I tried to do the equivalent in GitLab free SaaS and gave up after a day and a half - testing and building were okay, but it seems you’re expected to use a third-party artefact store. Yes, you could make the case that this is outside of remit, although given that the major competitor supports it, that seems a strange position. In any case, you would expect it to be clearly documented; it isn’t, or at least wasn’t 6 months ago.
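    As a rough sketch of the GitHub side of that setup (this is not the author’s actual workflow - the file, job, and artifact names are illustrative, and true indefinite retention would normally be done via a GitHub Release, since artifact `retention-days` caps out at 90):

    ```yaml
    # .github/workflows/build.yml - illustrative sketch only
    name: test-and-build

    on:
      push:
        branches: ['**']   # run on every branch push

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-go@v5
            with:
              go-version: 'stable'
          - run: go test ./...
          - run: go build -o myapp .
          - uses: actions/upload-artifact@v4
            with:
              name: myapp
              path: myapp
              # 7 days on feature branches, the 90-day maximum on main;
              # "indefinite" retention would use a Release instead
              retention-days: ${{ github.ref == 'refs/heads/main' && 90 || 7 }}
    ```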





  • Coming from what looks to me like a different perspective to many of the commenters here (disclosure: I am a professional platform engineer):

    If you are already scripting your setups then yes, you should absolutely learn and use Ansible. The key reasons are that it is robust, explicit, and repeatable - it doesn’t matter whether that’s against the same host multiple times or against multiple hosts. I have lost count of the number of pet Bash scripts I have encountered in various shops, many of them created by quite talented people. They all had problems. Some typical ones:

    | Issue | Example |
    | --- | --- |
    | Most people write bash scripts without dependency checks | “Of course everyone will have GNU coreutils installed, it’s part of every Linux distro” - then someone runs the script on a Mac |
    | “We need to pass this action out to a command-line tool, that’s obvious” | Fails if the command-line tool isn’t available; no handling of errors from the tool if they aren’t exactly what’s expected |
    | “Of course people will realise they need to run this from an environment prepared in this exact (undocumented) way” | Someone runs the script in a different environment |
    | “Of course people will be running this on x86_64/AMD64, all these third-party binaries are available for that” | Someone runs it on ARM |
    | “Of course people will know what to do if the script fails midway through” | People re-run the script after a mid-way failure and it’s a mess |

    The thing about Ansible is that it can be modular (if you want) and you can use other people’s code, but fundamentally it runs one step at a time. For each step you will know:

    • Are dependencies met?
    • Did that step succeed or fail (in realtime!)?
    • (If it failed) what was the error?
    • (Assuming you have written sane Ansible) you can re-run your playbook at any time to get the ‘same’ result. No worries about being left in an indeterminate state
    • (To an extent) It is self-documenting
    • Host architecture doesn’t really matter
    • Target architecture/OS is specified and clear
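
    The properties above can be seen in even a minimal playbook sketch (the host group, package names, and file paths here are made up for the example; the modules are standard Ansible builtins):

    ```yaml
    # illustrative playbook - hosts and paths are hypothetical
    - name: Provision app host
      hosts: app_servers
      become: true
      tasks:
        - name: Ensure required packages are present   # dependencies are explicit
          ansible.builtin.package:
            name:
              - curl
              - git
            state: present

        - name: Deploy config file   # idempotent - only reports "changed" if content differs
          ansible.builtin.template:
            src: app.conf.j2
            dest: /etc/myapp/app.conf
          notify: restart myapp

      handlers:
        - name: restart myapp   # only runs when the config actually changed
          ansible.builtin.service:
            name: myapp
            state: restarted
    ```

    Each task reports ok/changed/failed in real time, and re-running the playbook converges to the same state rather than compounding a half-finished run - exactly the failure modes the bash-script table above describes.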