• 0 Posts
  • 42 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • I will admit that my familiarity with private law outside the USA is almost non-existent, except for what I skimmed from the Wikipedia article for the Inquisitorial system. So I had assumed that private law in European jurisdictions would follow the same judge-intensive approach. Rereading the article more closely, I do see that it really only talks about criminal proceedings.

    But I did some more web searching, and found this – honestly, extremely convenient – article comparing civil litigation procedure in Germany and California (the jurisdiction I’m most familiar with; IANAL). The three most substantial differences I could identify were the judge’s involvement in: serving papers, discovery, and depositions.

    Serving legal notice is the least consequential difference between California and Germany, but it seems that the former allows any qualified adult to chase down the respondent (i.e., the person being sued) and deliver notice of a lawsuit on behalf of the complainant (the person who filed the lawsuit) – hence the trope of yelling “you have been served” and then throwing a stack of papers onto someone’s porch. Whereas German courts take on that role themselves and notify the respondent directly. Small difference, but notable.

    In Germany, the court, and not the plaintiff, is required to serve the complaint on the defendant without undue delay, which is usually immediately after it has been filed with the court.

    Next, discovery and pleadings in Germany appear to be different from the California custom. It seems that German courts require parties to thoroughly plead their positions first, and only afterwards will discovery begin, with the court deciding what topics can be investigated. Whereas California allows parties to make broad assertions that can later be proven or disproven during discovery. This is akin to throwing spaghetti at the wall and seeing what sticks, and a big reason for doing it is that any argument not raised during trial cannot be raised later on appeal.

    I believe that discovery in California and other US States can get rather invasive, as each party’s lawyers are on a fact-finding mission where the truth will out. The general limitation on the pleadings in California is that they still must be germane to the complaint and at least be colorable. This obviously leads to a lot of pre-trial motions, as the targeted party will naturally want to resist a fishing expedition during discovery.

    Lastly, depositions in Germany involve the judge(s) a lot more than they would in California. Here, depositions are conducted off-site from the court by the deposing party, usually videotaped, with all attorneys and a privately hired stenographer present, and with the deposing attorney asking the questions. Basically, after the judge grants a deposition order, the judge isn’t involved again unless the deposition is interrupted in a way that would violate that order. Even then, the solution is simply to phone the judge and ask for clarification or for a new order forcing the deposition to continue.

    Whereas that article describes the German deposition process as always occurring in court, during trial, and with questions asked by the judge(s). The parties may suggest certain questions by way of constructing arguments which require the judge(s) to probe in a particular direction. But it’s not clear that the lawyers get to dictate the exact questions asked.

    In contrast, depositions in Germany are conducted by the judge or the panel of judges and only during trial.

    I grant you that this is just an examination of the German court proceedings for private law. And perhaps Germany is an outlier, with other European counterparts adopting civil law but with a more adversarial flavor for private law. But I would say that for Germany, these differences indicate that their private law is more inquisitorial overall, in stark contrast to the California or USA adversarial procedure for private litigation.



  • I am usually not wont to defend the dysfunction presently found in the USA federal (and state-level) judiciary, but I think this comparison to the German courts requires a bit more context. Generally speaking, the USA federal courts and US States adopt the adversarial system, originally following the English practice in both common law and equity. This means the judge takes on a referee role, and a plaintiff and a defendant will make their best, most convincing arguments.

    I should clarify that “common law” in this context refers to the criminal matters (akin to public law), and “equity” refers to person-versus-person disputes (akin to private law), such as contracts.

    For the adversarial system to work, the plaintiff and defendant need to be sufficiently motivated (and nowadays, well-monied) to put on good arguments, or else they’re just wasting the court’s time. Hence, there is a requirement (known as “standing”) where – grossly oversimplifying – the plaintiff must be the person with the most to gain, and the defendant must be the person with the most to lose. They are interested parties who will argue vigorously.

    Of course, that’s a legal fiction, because oftentimes a defendant might be unable to afford excellent legal counsel. Or plaintiffs will half-ass or drag out a lawsuit, so that it’s more of an annoyance to the opposing party.

    In an adversarial system, it is each party’s responsibility to obtain subject-matter experts and their opinions to present to the court. The judge is just there to listen and evaluate the evidence – exception: criminal trials leave the evaluation of evidence to the jury.

    Why is the USA like this? For the USA federal courts, it’s because it’s part of our constitution, in the Case or Controversy Clause. One of the key driving forces for drafters of the USA Constitution was to restrict the powers of government officials and bureaucrats, after seeing the abuses committed during the Colonial Era. The Clause above is meant to constrain the unelected judiciary – which otherwise has awe-inducing powers such as jailing people, undoing legislation, and assigning wardship or custody of children – from doing anything unless some controversy actually needed addressing.

    With all that history in mind, if the judiciary kept their own in-house subject-matter experts, then that could be viewed as more unelected officials trying to tip the scale in matters of science, medicine, computer science, or any other field. Suddenly, landing a position as the judiciary’s go-to expert could have broad-reaching impacts, despite no one in the federal judiciary being elected.

    In a sense, because of the fear of officials potentially running amok, the USA essentially “privatizes” subject-matter experts, to be paid by the plaintiff or defendant rather than employed by the judiciary. The adversarial system is thus an intentional value judgement, rather than a “whoopsie” type of thing that we walked into.

    Small note: the federal executive (the US President and all the agencies) does keep subject-matter experts, for the limited purpose of implementing regulations (aka secondary legislation). But at least they all report indirectly to the US President, who is term-limited and only stays four years at a time.

    This system isn’t perfect, but it’s also not totally insane.



  • This is an interesting application of so-called AI, where the result is actually desirable and isn’t some sort of frivolity or grift. The memory-safety guarantees offered by native Rust code would be a very welcome improvement over C code that guarantees very little. So a translation of legacy code into Rust would either attain memory safety, or wouldn’t compile. If AI somehow (very unlikely) manages to produce valid Rust that ends up being memory-unsafe, then it’s still an advancement as the compiler folks would have a new scenario to solve for.

    Lots of current uses of AI have focused on what the output could enable, but here, I think it’s worth appreciating that in this application, we don’t need the AI to always complete every translation. After all, some C code will be so hardware-specific that it becomes unwieldy to rewrite in Rust, without also doing a larger refactor. DARPA readily admits that their goal is simply to improve the translation accuracy, rather than achieve perfection. Ideally, this means the result of their research is an AI which knows its own limits and just declines to proceed.

    Assuming that the resulting Rust is: 1) native code, and 2) idiomatic, so humans can still understand and maintain it, this is a project worth pursuing. Meanwhile, I have no doubt grifters will also try to hitch their trailer to DARPA’s wagon, with insane suggestions that proprietary AI can somehow replace whole teams of Rust engineers, or some such nonsense.

    Edit: is my disdain for current commercial applications of AI too obvious? Is my desire for less commercialization and more research-based LLM development too subtle? :)


    A commenter already provided a fairly comprehensive description of low-level computer security positions. But I also want to note that a firm foundation in low-level implementation is useful for designing embedded software and firmware.

    As in, writing or deploying against custom BIOS/UEFI images, or for real-time devices where timing is of the essence. Most anyone dealing with an RTOS, kernel drivers, or protocol buses will necessarily require an understanding of both the hardware architecture and the programming language available to them. And if that appeals to you, you might consider looking into embedded software development.

    The field spans anything from writing the control loop for washing machines, to managing data exchange between multiple video co-processors onboard a flying drone to identify and avoid collisions, to negotiating the protocol to set up a 400 Gbps optical transceiver to shoot a laser down 40 km of fibre.

    If something “thinks” but doesn’t have a monitor and keyboard, it’s likely to have one or more processors running embedded software. Look around the room you’re in and see what this field has enabled.



    1. The return value of time.time() is actually a floating-point number … It’s also not guaranteed to be monotonically increasing, which is a whole other thing that can trip people up, but that will have to be a separate blog post.

    Oh god, I didn’t realize that about Python and the POSIX spec. Cautiously, I’m going to guess that GPS seconds are one of the few reliable ways to uniformly convey a monotonically-increasing time reference.

    Python has long since deprecated the datetime.datetime.utcnow() function, because it produces a naive object that is ostensibly in UTC.

    Ok, this is just a plainly bad decision, then and now, by the datetime library people. What possible reason could have existed to produce a TZ-naive object from a library call that explicitly returns the current time in UTC?
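
    For anyone else bitten by this, here’s a minimal Python sketch (assuming CPython 3.3+ for time.monotonic() and 3.12+ for the utcnow() deprecation) contrasting the monotonic clock with an aware UTC timestamp:

        import time
        from datetime import datetime, timezone

        # time.time() is a float wall clock and may jump backwards (NTP
        # corrections, manual changes), so avoid it for measuring durations.
        start = time.monotonic()   # monotonic clock: never goes backwards
        time.sleep(0.1)            # stand-in for real work
        elapsed = time.monotonic() - start
        print(f"elapsed: {elapsed:.3f} s")

        # datetime.utcnow() yields a naive object (tzinfo is None) and is
        # deprecated; an aware datetime carries its UTC offset explicitly.
        aware_now = datetime.now(timezone.utc)
        print(aware_now.isoformat())   # e.g. 2023-07-02T12:34:56.789012+00:00

    None of this fixes the POSIX leap-second weirdness, of course; it just sidesteps the two pitfalls quoted above.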


    If not code or documentation contributions, then well-written bug reports. Seriously, the quality of bug reports sometimes leaves a lot to be desired. And I don’t necessarily mean attaching a full back-trace – though please, if you ever send a back-trace, copy-and-paste the text, never a screenshot – but just details like: the hardware, OS, software version, step-by-step instructions to reproduce that a non-coder could also follow, plus what you expected to happen versus what actually happened.

    This stuff (usually) comes naturally to programmers and engineers, but users don’t necessarily see things this way. I sometimes think bug reports need to adopt a “so tell me what happened?” approach, where reporters are encouraged to describe, free-form, what happened with the software, and are then prompted for the specific details that developers need. That would at least collect all the relevant details, plus extra details that no developer thought to ask for.

    Even just having folks who help gather and distill details from user reports on a forum takes a burden off of developers, and that effort should be welcomed by any competently-organized project. Many projects already have a template for reports, although it often gets mistaken for boilerplate. Helping reporters recognize that they need to fill in all the details is a useful activity that isn’t code or docs.
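
    As a small illustration of the details worth collecting, here’s a hypothetical helper script (Python, not tied to any real project) that a maintainer could ask reporters to run and paste into their report:

        import platform
        import sys

        # Hypothetical helper: gathers the environment details that
        # developers almost always have to ask for after the fact.
        def environment_report() -> str:
            lines = [
                f"OS:             {platform.platform()}",
                f"Architecture:   {platform.machine()}",
                f"Python version: {sys.version.split()[0]}",
            ]
            return "\n".join(lines)

        if __name__ == "__main__":
            print(environment_report())
            print("Steps to reproduce:")
            print("Expected behaviour:")
            print("Actual behaviour:")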


  • The original reporting by 404media is excellent in that it covers the background context, links to the actual PDF of the lawsuit, and reaches out to an outside expert to verify information presented in the lawsuit and learned from their research. It’s a worthwhile read, although it’s behind a paywall; archive.ph may be effective though.

    For folks that just want to see the lawsuit and its probably-dodgy claims, the most recent First Amended Complaint is available through RECAP here, along with most of the other legal documents in the case. As for how RECAP can store copies of these documents, see this FAQ and consider donating to their cause.

    Basically, AXS complains about nine things, generally around: copyright infringement, DMCA violations (ie hacking/reverse engineering), trademark counterfeiting and infringement, various unfair competition statutes, civil conspiracy, and breach of contract (re: terms of service).

    I find the civil conspiracy claim to be a bit weird, since it would require proof that the various other ticket websites actually made contact with each other and agreed to do the other eight things that AXS is complaining about. Why would those other websites – who are mutual competitors – do that? Of course, this is just the complaint, so it’s whatever AXS wants to claim under “information and belief”, aka it’s what they think happened, not necessarily with proof yet.


  • There was a ton of hairbrained theories floating around, but nobody had any definitive explanation.

    Well I was new to the company and fresh out of college, so I was tasked with figuring this one out.

    This checks out lol

    Knowing very little about USB audio processing, but having cut my teeth in college on 8-bit 8051 processors, I knew what kind of functions tended to be slow.

    I often wonder whether this deep an understanding of embedded software/firmware design is still the norm in university instruction. My suspicion is that the focus has moved to making use of ever-increasing SoC performance and capabilities, in the pursuit of making things Just Work™, while also proving Wirth’s Law in the process via badly optimized code.

    This was an excellent read, btw.




  • 1 - I get that light is flashed in binary to code chips but how does it actually fookin work ? What is the machine emmiting [sic] this light made up of ?

    This video by Branch Education (on YouTube or Nebula) is a high-level explanation of every step in a semiconductor fab. It doesn’t go over the details of how semiconductor junctions work, though. That sort of device physics is discussed in this YouTube video by Ben Eater, “how semiconductors work”.

    2 - How was program’s, OSs, Kernal [sic] etc loaded on CPU in early days when there were no additional computers to feed it those like today ?

    When the CPU powers up, typically the very first thing it executes is the bootloader. Bootloaders vary depending on the system, and today’s Intel or AMD desktop machines boot very differently to their 1980s predecessors. However, since the IBM PC laid the foundation for how most computers booted up for nearly four decades, it may be instructive to see how it worked in the 80s. This WikiBook on x86 bootloading should be valid for all 32-bit x86 targets, from the original 8086 to the i686. It may even hold beyond that, but then UEFI started to take off, which changed everything into a more modern form.

    But even before the 80s, computers could have a program/kernel/whatever loaded using magnetic tape, punch cards, or even by hand with physical switches, each representing one bit.

    But how does the computer decode this binary “machine code” into instructions to perform? See this video by Ben Eater, explaining machine instructions for the MOS 6502 CPU (circa 1975). The age of the CPU is not important; rather, by the 70s the basics of CPU operation had already been laid down, and that CPU is easy to explain yet non-trivial.
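
    If it helps, here’s a toy Python sketch of the fetch-decode-execute loop every CPU performs on machine code. The three-instruction machine is entirely made up for illustration (it is not a real 6502 or x86):

        # Toy instruction set: made-up opcodes, purely for illustration.
        LOAD, ADD, HALT = 0x01, 0x02, 0xFF

        program = bytes([LOAD, 7, ADD, 5, HALT])  # the "machine code" in memory

        accumulator = 0
        pc = 0                        # program counter
        while True:
            opcode = program[pc]      # fetch
            if opcode == LOAD:        # decode + execute
                accumulator = program[pc + 1]
                pc += 2
            elif opcode == ADD:
                accumulator += program[pc + 1]
                pc += 2
            elif opcode == HALT:
                break

        print(accumulator)            # prints 12

    A real CPU does the same thing in hardware, with far more opcodes, registers, and addressing modes, but the loop is conceptually identical.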

    3 - I get internet is light storing information but how ? Fookin HOW ?

    The mechanics of light bouncing inside a fibre optic cable are well explained in this YouTube video by engineerguy. But an explanation of how ones-and-zeros get converted into light to be transmitted is a bit more involved. I might just point you to the Wikipedia page for fibre optic communications.

    How the data is encoded is important, as this has significant impact on bandwidth and data integrity, not just for light but for wireless RF transmission and wireline transmission. For wireless, this Branch Education video on Starlink (YouTube or Nebula) is instructive. And for wired, this Computerphile YouTube video on ADSL covers the challenges faced.

    Quite frankly, I might just recommend the entirety of the Computerphile channel, particularly their back catalogue when they laid down computer fundamentals.

    4 - How did it all come to be like it is today and ist it possible for one human to even learn how it all works or are we just limited one or two things ? Like cab we only know how to program or how to make hardware but not both or all ?

    As of 2024, the field is enormous, to the point that a CompSci degree necessarily has to be focused on a specific concentration. But that doesn’t necessarily mean the hard stuff like device physics is off-limits, leaving just stuff like software and AI. Sam Zeloof has been making homemade microchips, devising his own semiconductor process and posting it on YouTube.

    Specifically to your question about either software or hardware, the specialty of embedded software engineering requires skills with low-level software or firmware, as well as dealing with substantial hardware-specific details. People that write drivers or libraries for new hardware require skills from both regimes, being the bridge between Electrical Engineers that design the hardware, and software developers that utilize the hardware.

    Likewise, developers for high performance computers need to know the hardware inside-out, to have any chance of extracting every last bit (pun intended) of speed. However, these developers tend to rely upon documentation such as data sheets, rather than having to be keenly aware of how the hardware was manufactured. Some level of logical abstraction is necessary to tractably understand today’s necessarily large and complex systems.

    5 - Do we have to join Intel first or something to learn how most of the things work lol ?

    Nope! Often, you can look to existing references, such as the Linux source code, to get a peek at what complexities exist in today’s machines. I say that, but the Linux kernel is truly a monster, not because it’s badly written, but because it willingly takes code to support every single bleeding platform that people are willing to author code for. And that means lots and lots of edge cases; there’s no such thing as a “standard” computer. x86 might be the closest to a “standard”, but Intel has never quite been consistent across that architecture’s existence. And ARM and RISC-V are on the rise, in any case.

    Perhaps what’s most important is to develop strong foundations to build on. Have a cursory understanding of computing, networking, storage, wireless, software licenses, encryption, video encoding/decoding, UI/UX, graphics, services, containers, data and statistical analysis, and data exchange formats. But then pick one and focus on it, seeing how it interacts with other parts of the computing world.

    Growing up, I had an interest in IT and computer maintenance. Then it evolved into writing websites. Then into writing C++ software. Right before university, I started playing around with the Arduino’s Atmel ATmega328P microcontroller directly, and so I entered uni as a Computer Engineer, hoping to do both software and hardware.

    The space is huge, so start somewhere that interests you. From the examples above, I think online videos are a fantastic resource, but so are blog posts written by engineers at major companies, as are talks at conferences, as is sitting in on university courses.

    Good luck and good studies!


  • I think this can be more generalized as: why do some people eschew anonymity online? And a few plausible reasons come to mind:

    • a convention carried over from the pre-Internet days to be honest and frank as one would be in-person
    • having no prior experience with anonymity or a basis to expect anonymity to last
    • they’re already a real-life edgelord and so the in-person/online distinction is artificial, or have an IDGAF attitude to such distinctions

    IMO, older people tend to have the first reason, having first come to the Internet as simply another communication tool. Younger, post-2000 people might have the second reason, because during their lifetime, privacy has eroded to the point that it’s almost mythical. Or that anonymity is like the landed gentry: you have to be highly privileged to afford to maintain it.

    I have no thoughts as to the prevalence of the third reason, but I’m reminded of a post I saw on Mastodon months ago, which went something like this: every village used to have the village idiot, but he was mostly benign because everyone in town knew he was an idiot. One moron in every 5 or 10 thousand people is fine. But with the Internet, all the village idiots can network with each other, expanding their personal communities and hyping each other up to do things they otherwise wouldn’t have found support for.

    Coming back to the question, in the context above, maybe online anonymity is a learned practice, meaning it has to be taught and isn’t plainly natural. Nothing quite like the Internet has ever existed in human history, so what’s “natural” may just not have caught up yet. That internet literacy and safety is a topic requiring instruction bolsters this thought.



    I’m not quite sure I follow. The AGPL mirrors the GPL, with an extra proviso that accessing the software via the network constitutes “use” of the binary, not “distribution” of the binary. Under GPL, the mere use of a binary does not require the availability of source.

    Example: a student uses a GNU/Linux computer at her university computer lab. She runs the unmodified, GPL-licensed GNU “tar” command. She is not entitled to a copy of the source from the university, because execution is a “use” of the binary on an already-provisioned machine, not a “distribution” of the binary.

    Example: a student is given a software assignment from her professor, along with a .7z file containing old versions of “tar” that contain bugs, all GPL licensed. This is a distribution – as in, a copy – of the binary, so she is entitled to a copy or link to the source from her professor.

    The first example helps explain what the AGPL adds, in the context of network use. Consider what happens if the university actually modified the “tar” command installed on their machines. They still would not have to distribute the modified source to the students, because the students only execute (“use”) the binaries. But with the AGPL, letting users interact with modified software – including over a network – obliges the operator to offer them the corresponding source.

    Phrased another way, AGPL has every guarantee that GPL does, but adds another obligation for modified use via a network. Unmodified use does not require source distribution, under both GPL and AGPL.


    One of the drawbacks of software licensing for community projects – although there are some (controversial) ways to sidestep this – is that the license needs to be selected at the outset of the project, and everyone has to agree to that license, or to any later change of license.

    If all the initial parties agree to use a FOSS license, they and all subsequent contributors under that license cannot complain that someone is actually employing that software per the terms of the license. A project might choose FOSS because they want to make sure the codebase only dies when it disappears from the last developer’s disk.

    If instead the initial parties decided on some sort of profit-sharing license – I don’t know of one off the top of my head – then they and future contributors cannot complain if no business wants to use the software, either because FOSS competitors exist or because they don’t like the profit-split ratio in the license. If that ratio is fixed in the license, the project could die from lack of interest, since changing the license terms means everyone who contributed has to agree, so a single hardliner will doom the already-written code to obscurity.

    The sidestep method – which appears to be what Redis used to do this relicensing to the SSPL – is that every contributor must sign a separate agreement giving Redis Inc a stake in their contribution’s copyright. This contributor agreement means any change to the Redis codebase – since its inception? Idk – has been dual-licensed: the project’s public license (BSD, until now) to everyone, plus a special grant to Redis Inc, who can then relicense that work to everyone under a new license.

    Does the latter mean Redis Inc could one day switch to a fully closed-source license? Absolutely! That’s why this mechanism is controversial: it gives the project’s legal entity all the copyright powers, to level up to FOSS or level down to proprietary. Sure, you can still use the old code under the old license, but that’s cold comfort and is exactly why hard forks of Redis are becoming popular right now.

    In short, software projects have to lay out their priorities at the outset. If they want enduring code, that’s their choice. If they want people to pitch in a fair share, that’s fine too. But that choice entails tradeoffs, which they should have known from the start. Some mechanisms allow the flexibility to change priorities in the future, but that flexibility is a centralized, double-edged sword.


  • litchralee@sh.itjust.works to Programming@programming.dev · “Redis is no longer OSS” · 6 months ago

    There are two concepts at play here: open-source and free software. An early example of open-source is AT&T Research UNIX, which was made source-available (for a fee) to universities for research purposes; they could recompile the code and use the binaries for that purpose. Here, the use of the software is restricted by the license terms.

    On the free software side, the GNU reimplementations of the Unix software utilities (coreutils and friends) – i.e., programs like tar, ps, sh – are GPL licensed, meaning any use of the compiled binaries is allowed, but there are restrictions on the distribution of both source and binaries. As it turns out, the GPL is both free and open-source (FOSS); there are fewer major examples of free but non-open-source software, but WinRAR and the nVidia drivers on Linux would count.

    Specifically, GPL and other copyleft licenses require that if you distribute the binary, you must make the source available under the same terms. If you’ve made no changes, then this is as simple as linking to the public source code repo. If you did add or remove code, you must release those alongside the binaries. If you simply use the binaries internally, you don’t need to release anything at all, and can still use them for any internal purpose.

    wouldn’t GPL and other copyleft licenses be considered non-free as well since you are not free to do whatever you want with the source

    From the background above, free software has always been understood to mean the freedom to use the software, not necessarily to distribute it. The GPL complies with that definition for using the software, but also enforces a self-perpetuating distribution requirement. Unlike plain ol’ free software, under the GPL you must redistribute the source if you distribute the software for use (aka the binaries), and you must make that source available under the GPL as well.