Ahh, the age of switching PSUs and LED lights. 20 years ago you would have noticed it in an instant.
The A4-6210 with built-in GPU has a TDP of 15 W. There is no point optimizing anything; it is sipping power already. Maybe try tlp to limit the max charge level of the battery (I'm not sure if your laptop is supported). You can play with governors too, but I personally would not bother. You obviously need multi-user.target but not a GUI.
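A minimal sketch of both suggestions, assuming a tlp version with charge-threshold support (only takes effect if TLP supports your laptop's embedded controller; the 75/80 values are just an example):

```shell
# Hypothetical /etc/tlp.conf fragment: cap battery charge to extend its life
START_CHARGE_THRESH_BAT0=75   # resume charging when below 75%
STOP_CHARGE_THRESH_BAT0=80    # stop charging at 80%

# Boot to a console instead of a GUI:
sudo systemctl set-default multi-user.target
```

After editing the config, `sudo tlp start` applies it without a reboot.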
With RAID 10 I would not risk it. With RAID6 (obviously not on BTRFS) it is fair game if you have a solid return policy for drives that are DOA. Go for SAS drives, they are cheaper (but generally hotter and noisier). And look for new-old-stock on specialized sites; no one in enterprise needs, say, 8 TB drives, so they sell them cheap at times.
Get the drive, connect it, and run a long SMART self-test (for 18 TB it will probably take a day). If it passes, you can be reasonably sure that it will not die soon. And keep running these tests regularly; as soon as they start failing, replace the drive.
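The workflow above with smartctl (from smartmontools); `/dev/sdX` is a placeholder for your actual drive:

```shell
# Kick off the long (full-surface) self-test; it runs inside the drive,
# so you can keep using the machine meanwhile:
sudo smartctl -t long /dev/sdX

# Shows drive capabilities, including the estimated test duration:
sudo smartctl -c /dev/sdX

# Later: check "Self-test execution status" and the self-test log
# for "Completed without error" vs. read failures:
sudo smartctl -a /dev/sdX
```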
Bump the root cert to 10 years and use an intermediate with a shorter lifetime. The root cert should be stored and used off-net.
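A minimal openssl sketch of that setup; all names and lifetimes are illustrative, and in practice the root key lives on an offline machine:

```shell
# 10-year self-signed root (key kept offline; -nodes skips the passphrase
# for the sake of the example -- protect the real key properly):
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -subj "/CN=Home Lab Root CA" -keyout root.key -out root.crt

# Intermediate key + CSR:
openssl req -newkey rsa:4096 -nodes \
  -subj "/CN=Home Lab Intermediate CA" -keyout int.key -out int.csr

# CA extensions for the intermediate:
printf 'basicConstraints=critical,CA:TRUE,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign\n' > int.ext

# Sign the intermediate with the root for 2 years:
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -days 730 -extfile int.ext -out int.crt
```

Day-to-day leaf certs are then issued with `int.key`, and the root key goes back offline.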
It would be useful to know the laptop's specs. In general, do not bother; power consumption should be low enough as it is.
to stop guessing which HDD to replace when one fails. The VM can't see the actual HDDs since SMART is not forwarded.
The biggest problem will be bandwidth and latency from the Internet to your lab. I would use dedicated hardware and a subnet for it. Security-wise, if you can make your site 100% static, it will help a lot. I'm personally settled on an AWS S3 + Cloudflare combo with a static site generator running in my lab. Yes, it is not really "self-hosted", but it is a worry-free solution for me.
Both work. Just do not forget to assign fake serial numbers if you are passing disks. IMHO passing disks will be more performant, or maybe just pass the HBA controller itself if the other disks are on a different controller.
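If you go the disk-passing route with libvirt, the serial can be set per disk; a hypothetical snippet (device path and serial are made up):

```xml
<!-- Pass a whole disk through with a stable serial so the guest
     can tell the bays apart even though SMART is not forwarded. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE_SERIAL_0001'/>
  <target dev='vdb' bus='virtio'/>
  <serial>BAY3-0001</serial>
</disk>
```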
A ZFS or BTRFS mirror will know which side is at fault thanks to checksums. I'm more concerned about simultaneous failures of two disks: rebuilding a RAID puts lots of pressure on the remaining disks, so the probability that a remaining one dies too is much higher. With RAID6, three disks need to die to lose data, which is less likely but not impossible.
I would not trust these kinds of drives in a mirror. IMHO RAID6 is the only way.
ZFS ZIL will not help in this case.
Usually it can be solved by talking to the hotel staff. You are paying for that service and can expect it to be suitable for any legal use.
Nope. You do not need physical access for it, just root access. And your HW is compromised, with the only means of recovering it being SPI-flashing the CPU.
The 3600 was released in 2019. And they were making it for at least 2 years.
You need to be root to exploit it, but if it does get exploited, the only way to get rid of it is to throw the MB in the trash.
Why use an SSD for the OS (unless he is using Windows)? The system can go on a USB stick and the rest of the data on the disk, where an SSD may be a good option.
It does not really matter what wording they put in. It is clear that the project is going the pay-or-get-nothing way, so just start working on decommissioning it. Free software really needs better ways to pay developers, ways that would allow avoiding crap like that.
I would add LVM to the list of software RAIDs, and remove btrfs as poorly engineered.
A couple of old 2.5" HDDs + a USB-to-SATA converter. But a Pi 5 is hardly suitable to host anything; maybe just get an old PC (which gives you an HDD too). There are plenty for under $100 or even free, but you are going to pay more for power.
Icinga/Nagios? You can even feed data already collected by Prometheus into it if you want.