I’m still using my Galaxy S8 with only one problem: Verizon’s voicemail app won’t run on something this old. Every other app is fine. It figures that the only app that encourages me to upgrade is from the phone company.
I’ve been doing this for 30+ years and it seems like the push lately has been towards oversimplification on the user side, but at the cost of resources and hidden complexity on the backend.
As an Assembly Language programmer, I’m used to programming with consideration towards resource consumption. Did using that extra register just cause a couple of extra PUSH and POP instructions in the loop? What’s the overhead on that?
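To make that concrete, here’s a contrived NASM-style x86-64 sketch (the function and numbers are made up for illustration). Pretend every register is live, so borrowing rbx as scratch inside the loop costs a PUSH/POP pair on every iteration:

```asm
; Hypothetical sketch: sum 1,000,000 qwords starting at [rdi].
; rbx is callee-saved and (we pretend) live across this loop, so
; borrowing it costs two extra memory operations per iteration –
; 2,000,000 in total for one register-allocation decision.
sum_qwords:
        xor     rax, rax            ; sum = 0
        mov     rcx, 1000000        ; iteration count
.top:
        push    rbx                 ; spill the live value
        mov     rbx, [rdi]          ; borrow rbx as scratch
        add     rax, rbx            ; sum += *p
        pop     rbx                 ; restore it
        add     rdi, 8              ; p++
        dec     rcx
        jnz     .top
        ret
```

That’s the kind of cost that never shows up if you only ever look at the source level.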
But now some people just throw in a JavaScript framework for a single feature and don’t even worry about how it works or the overhead as long as the frontend looks right.
The same is true with computing. We’re abstracting containers inside of VMs on top of base operating systems, which adds that much more resource utilization to the mix (what’s the carbon footprint on that?) behind an extremely complex but hidden backend. Everything’s great until you have to figure out why you’re suddenly losing packets that pass through a virtualized router to linuxbridge or OVS to a Kubernetes pod inside a virtual machine. And if one of those processes fails along the way, BOOM! It’s all gone. But that’s OK; we’ll just tear it down and rebuild it.
I get it. I understand the draw, and I see the benefits. IaC is awesome, and the speed with which things can be done is amazing. My concern is that I’ve seen a lot of people using these things who don’t know what’s going on under the hood, so they often make assumptions or mistakes that lead to surprises later.
I’m not sure what the answer is other than to understand what you’re doing at every step of the way, and always try to choose the simplest route (but future-proofed).
Technically, each time it’s viewed it’s a republication from a copyright perspective. It’s a digital copy that is redistributed; the original copy that was made doesn’t go away when someone views it. There’s not just one copy that people pass around like a library book.
I’m thinking about it from the perspective of an artist or creator under existing copyright law. You can’t just take someone’s work and republish it.
It’s not allowed with books, it’s not allowed with music, and it’s not even allowed with public sculpture. If a sculpture shows up in a movie scene, they need the artist’s permission and may have to pay a licensing fee.
Why should the creation of text on the internet have lesser protections?
But copyright law is deeply rooted in damages, and if advertising revenue is lost that’s a very real example.
And I have recourse; I used it. I used current law (the DMCA) to remove over 1,000,000 pages because it was my legal right to remove infringing content. If the copying had been legal, they wouldn’t have had to remove it.
Yes, some Wikipedia editors are submitting the pages to archive.org and then linking to that instead of to the actual source.
So when you go to the Wikipedia page it takes you straight to archive.org – that is their first stop.
It’s user-driven. Nothing would get archived in this case. And what if the content changes but the page remains up? What then? Fairly sure this is why Wikipedia uses archives.
That’s a good point.
Pretty sure mainstream ad blockers won’t block a custom in-house banner. And if it has no tracking, then it doesn’t matter whether it’s on Archive or not, you’re getting paid the same, no?
Some of them do block those kinds of ads – I’ve tried it out with a few. If it’s at archive.org I lose the ability to report back to the sponsor that their ad was viewed ‘n’ times (unless, ironically, I put a tracker in). It also means that if sponsorship changes, the main drivers of traffic like Wikipedia may not see that. It makes getting new sponsors more difficult because they want something timely for seasonal ads. Imagine sponsoring a page, but Wikipedia only links to the archived one. Your ad for gardening tools isn’t reflected by one of the larger drivers of traffic until December, and nobody wants to buy gardening tools in December.
Yes, I could submit pages to archive.org as sponsorship changes if this model continues.
It was a much bigger deal when we used Google ads a decade ago, but we stopped in early 2018 because tracking was getting out of hand.
If I was submitting pages myself I’d be all for it because I could control when it happened. But there have been times when I’ve edited a page and totally screwed it up, and archive.org just happened to grab it at that moment when the formatting was all weird or the wrong picture was loaded. I usually fix the page and forget about it until I see it on archive.org later.
I asked for pages like that to be removed, but archive.org was unresponsive until I used a DMCA takedown notice.
I don’t think you know what SEO is. I think you know what bad SEO is.
Anyhow, Wikipedia is always free to link somewhere else if they can find better content.
What do you mean by “engagement”, exactly? Clicking on ads?
In SEO terms user engagement refers to how people interact with the website. Do they click on another link? Does a new blog posting interest them?
Lmao you think Google needs to go through Archive to scrape your site? Delusional.
Any activity from Google is easier to track, and I have a record of who downloaded content if it’s coming from my servers.
The mechanisms used to serve ads over the internet nowadays are nasty in a privacy sense, and a psychological manipulation sense. And you want people to be affected by them just to line your pockets? Are you also opposed to ad blockers by any chance?
I agree that many sites use advertising in a different way. I use it in the older internet sense – someone contacts me to sponsor a page or portion of the site, and that page gets a single banner, created in-house, with no tracking. I’ve been using the internet for 36 years. I’m well aware of many uses that I view as unethical, and I take great pains not to replicate them on my own site.
I disapprove of ad blockers. I approve of things that block tracking.
As far as “lining my own pockets” goes, I want to recoup my hosting costs. I spend hours researching for each article/showcase, make the content free to view, and then I’m expected to pay to share it with anyone who’s interested? I have a day job. This is my hobby, but it’s also my blood, sweat, and tears.
And how do you suggest a site which has been wiped off the face of the internet gets archived? Maybe we need to invest in a time machine for the Internet Archive?
archive.org could archive the content and only publish it if the page has been dark for a certain amount of time.
You misunderstood. If they view the site at Internet Archive, our site loses on the opportunity for ad revenue.
I just sent a DMCA takedown last week to remove my site. They’ve claimed to follow meta tags and robots.txt since 1998, but no, they had over 1,000,000 of my pages going back that far. They even had my robots.txt from 1998 – the one configured to exclude them – archived.
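For anyone unfamiliar, the exclusion itself is tiny. This is a simplified sketch, not my actual file, but ia_archiver is the user-agent the Internet Archive has documented honoring:

```
# Simplified illustration of the stanza that was supposed to keep
# the Wayback crawler out ("ia_archiver" is the user-agent the
# Internet Archive documented honoring in robots.txt).
User-agent: ia_archiver
Disallow: /
```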
I’m tired of people linking to archived versions of things that I worked hard to create. Sites like Wikipedia were archiving urls and then linking to the archive, effectively removing branding and blocking user engagement.
Not to mention that I’m losing advertising revenue if someone views the site in an archive. I have fewer problems with archiving if the original site is gone, but to mirror and republish active content with no supported way to prevent it short of legal action is ridiculous. Not to mention that I lose control over what’s done with that content – are they going to let Google train AI on it with their new partnership?
I’m not a fan. They could easily allow people to block archiving, but they choose not to. They offer a way to circumvent artist or owner control, and I’m surprised that they still exist.
So… That’s what I think is wrong with them.
From a security perspective it’s terrible that they were breached. But it is kind of ironic – maybe they can think of it as an archive of their passwords or something.
I set up LinkWarden about a month ago for the first time and have been enjoying it. Thank you!
I do have some feature requests – is GitHub the best place to submit those?
My worst feeling is “I tried that two years ago but couldn’t get enough people interested, so I dropped it…”
I’m a big fan of netdata; it’s part of my standard deployment. I put in some custom configs depending on what services are running on what servers. If there’s an issue it sends me an email and posts into a Slack channel.
Next step is an InfluxDB backend to keep more history.
I also use monit to restart certain services in certain situations.
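For anyone curious, the rules are only a few lines each. A sketch of the idea – the service, paths, and thresholds here are illustrative, not my actual config:

```
# Illustrative monit stanza: restart nginx if it stops answering on port 80.
check process nginx with pidfile /var/run/nginx.pid
    start program = "/usr/sbin/service nginx start"
    stop program  = "/usr/sbin/service nginx stop"
    if failed port 80 protocol http for 3 cycles then restart
    if 5 restarts within 10 cycles then alert
```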
Same here. The most I get out of it might be a pointer to a module that could be a better approach, but the code I get from ChatGPT is usually worthless.
I treat it as my water cooler talk, and maybe I’ll come away with a few new ideas.
I wish it was database agnostic. And I’m slightly concerned about the version three rewrite.
It does look awesome, and I’ll revisit it to see where things are in six months.
I have a problem with Amazon Drive going away for non-photos on December 31st.
For a while, they had unlimited storage and you could use a Linux API to access it – I stored 8TB of data.
Then they set a quota, but for those over quota it was read-only. Oh, and Linux access no longer works.
Now they’ve set a deadline to have everything off by December 31st, but the Windows app still doesn’t work (constantly crashing) and I see no way to get my files.
I started back with kernel 0.12 and called it Line-ucks. I still do and people look at me funny, but it’s an old habit and I’m an old dog.
When Linus released his audio file it was already etched into my brain the other way. I do remember joking that I’m glad his name wasn’t Pinus (like the genus for pine trees) after hearing him say it.
My favorite exchange on ChatGPT (back in March):
**You**
Tell me a dad joke.
**ChatGPT**
Why did the tomato turn red?
Because it saw the salad dressing!
**You**
Can you explain to me why that joke is funny?
**ChatGPT**
Sure, I'd be happy to explain why that joke is considered funny!
The humor in this joke comes from a play on words and a bit of misdirection. The first part of the joke sets up an expectation that there will be a serious or logical explanation for why the tomato turned red. However, the punchline delivers a silly and unexpected answer that has nothing to do with the actual reason for the tomato's color change.
The play on words is in the phrase "saw the salad dressing." The word "saw" can be interpreted in two ways: as the past tense of "see" (as in, the tomato observed the dressing) or as a cutting tool (as in, the tomato was physically sliced by the dressing). This creates a bit of confusion and a humorous mental image, which is why the joke is often considered amusing. Additionally, the idea of a tomato being aware of salad dressing is absurd and adds to the silliness of the joke.
We went on a bit further with me pointing out that their murder take was distasteful, and it responded that humor is subjective.
It wasn’t always followed on Reddit, but downvoting there was supposed to be for comments that don’t contribute to the conversation.
Here the guidance is looser – the docs don’t address comments, but do say to “upvote posts that you like.”
I’ve tried contributing to some conversations and sometimes present a different viewpoint in the interest of thought exchange, but this often results in massive downvotes because people disagree. I’m not going to waste my energy contributing to a community that ends up burying my posts because we have different opinions.
That’s true on Reddit too, so I’m kind of being tangential to the original question. I guess what I’m saying is that some people might feel like I do and won’t engage in any community, be it Reddit or Lemmy, if it’s just going to be an echo chamber.