Super Naughty Maid 2? 🤨📸
Null pointers, runtime exceptions and try catch blocks in 2023
Having seen the original source code hasn’t been an issue in previous cases, I believe, as long as the reimplementation was done in another language and shows the kinds of changes you’d expect from coding something up a second time
“Good 4k tvs without bloat, site:lemm… oh.”
What
You can see it in action on regex101, with the regex indeed matching the query string in the maliciouswebsite example and not matching even something as simple as a URL with the port and no user/password
It is valid (just weird & not recommended) to give a user:pw combo to a website that doesn’t ask for one in the headers. Browsers stripping it off is a different thing
The sheer number of things you have to take into account to properly parse a URL should convince you to not use regexes for it
The fact that it’s less code, more correct, faster and more readable to use new URL() should also be enough to convince you to not use regexes
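Just to show what I mean, here’s a rough sketch, assuming a browser or Node with the standard URL API (the example URL is the first one from further down this thread):

```ts
// One constructor call validates the string and exposes every component,
// already normalized, as a property.
const u = new URL("HtTpS://user:pw@lemdro.id:443");

console.log(u.protocol); // "https:"   scheme gets lowercased
console.log(u.username); // "user"
console.log(u.password); // "pw"
console.log(u.hostname); // "lemdro.id"
console.log(u.port);     // ""         443 is the default for https, so it's dropped
console.log(u.href);     // "https://user:pw@lemdro.id/"
```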
Lmao ah yes, one of those
If you’re not convinced by this, you never will be
You can just wrap your var with “new URL()” and have something faster, more correct and easier to read, but I’m guessing you’ll change your ways silently in a few years, once you’ve forgotten about this interaction and managed to convince yourself it was your own idea!
Until then I guess you can add /c/whatev to the end of my two examples and find something else to criticize and decide not to support
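For the record, here’s what that looks like with the path tacked on (same assumptions as above; /c/whatev is just a placeholder path):

```ts
// Hostnames come out right whether or not a path is appended.
console.log(new URL("HtTpS://user:pw@lemdro.id:443/c/whatev").hostname);
// -> "lemdro.id"
console.log(new URL("Http://maliciouswebsite.to/?q=http://lemdro.id/c/whatev").hostname);
// -> "maliciouswebsite.to"
```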
Oh man I was hoping you’d ask because URLs are way worse than people imagine and that’s still not even a tenth of what emails can do
HtTpS://user:pw@lemdro.id:443 is a valid URL pointing to lemdro.id and should match, but it will not
Http://maliciouswebsite.to/?q=http://lemdro.id will match, but it should not
To give you an idea of how bad this is, I’d suggest anyone tell me whether their Lemmy app parses those properly, because Thunder treats the first one as an email and Jerboa thinks both are URLs
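Here’s a rough sketch of how an app could classify those two strings, assuming the standard URL API; the function name and the exact logic are just illustrative:

```ts
// Returns the hostname of a web URL, or null if the string isn't one.
function hostOf(s: string): string | null {
  try {
    const u = new URL(s); // throws on anything that isn't a parseable URL
    if (u.protocol !== "http:" && u.protocol !== "https:") return null;
    return u.hostname;
  } catch {
    return null; // not a URL at all
  }
}

// The "@" in the first string separates userinfo from the host;
// it does not make it an email address.
console.log(hostOf("HtTpS://user:pw@lemdro.id:443"));
// -> "lemdro.id"              (a real lemdro.id link, should match)
console.log(hostOf("Http://maliciouswebsite.to/?q=http://lemdro.id"));
// -> "maliciouswebsite.to"    (lemdro.id only appears in the query string)
```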
You also have instances whose valid address has www in front of it, out of old-school internet habit; there are URLs that can have quotes in them, and URLs with Chinese or Russian characters that are valid in their encoding but have a canonical form in ASCII
It’s a mess, and the correct way to do this still ends up faster than your regex, which is crazy
Never use regexes on URLs; they’re not enough to do the job properly. You already have the perfect, fast and correct URL parser in your browser or your Node binary, and you need to use it: make a list of hostnames, use the browser’s URL API to extract the hostname, then match it against the list
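Something like this sketch is all it takes; the instance list, the www stripping and the function name here are just my own illustrative choices, not any app’s actual code:

```ts
// Hypothetical allowlist of instance hostnames, kept lowercase and in ASCII form.
const KNOWN_INSTANCES = new Set(["lemdro.id", "lemmy.ml"]);

function isInstanceLink(text: string): boolean {
  try {
    // The URL API lowercases hostnames and converts Unicode hosts
    // to their canonical punycode ASCII form for us.
    let hostname = new URL(text).hostname;
    // Old-school "www." prefixes point at the same instance.
    if (hostname.startsWith("www.")) hostname = hostname.slice(4);
    return KNOWN_INSTANCES.has(hostname);
  } catch {
    return false; // not a URL at all
  }
}

console.log(isInstanceLink("HtTpS://user:pw@lemdro.id:443"));                  // true
console.log(isInstanceLink("Http://maliciouswebsite.to/?q=http://lemdro.id")); // false
```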
No, and they started computing it differently at some point, so there was a sudden jump from a max of 6 to 8k upvotes on the most popular posts to 13k+ showing up on the front page frequently
But I took that into account; my account there is 13 years old, and back when stuff barely ever got into the thousands it was already enough to make me switch from 9gag to reddit
2000 upvotes on front page posts is what reddit had around 2012 to 2014, and by that point reddit was already one of my main time wasters
To them 200k users leaving is nothing and they aren’t even all gone, most probably just use both now
But wait 4 years and lemmy might have become a painless viable alternative
Eh the activity is good enough
2000 upvotes on popular front page posts is like reddit in 2015
I guess printf “” > file