You underestimate the power of the antivax covid cult wing of WoP. They can’t taste after full covid, therefore they can’t smell… pitchfork hillbillies prevail, waging war on women and minorities as the true neo-Nazis.
Trump is such an incompetent clown that he has a comedian thrash US citizens in a US territory as a bigoted, racist warm-up act for his rallies. What do you think?
There is a disparity between the Earth’s, Life’s, and the Universe’s clock speeds.
Light speed itself is an irrelevant convention; what matters is the speed of causality.
We are like electrons that exist in a time delay circuit for an irrelevant blip, while the hardware chugs along, and we imagine ourselves the PC Master Race.
Yeah, but not for the bad actors this is primarily targeting, and it will create further issues. There are likely 3 keyword tokens used in a pattern. The most adept humans should learn these and be damn sure never to use that pattern in any natural way.
Hold up, let me ban a couple hundred tokens in the reply. Pattern fixed. Watermarking only works against the most ignorant surface-level users.
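For anyone curious what that looks like in practice, here is a minimal sketch using the `bad_words_ids` option in Hugging Face transformers to suppress an arbitrary token list at sampling time. The model name and the banned words are placeholders I made up, not the actual watermark tokens.

```python
# Minimal sketch: suppress an arbitrary list of tokens during generation.
# Model and banned-word list are placeholders, not real watermark tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumption: any local causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

banned = ["delve", "tapestry", "certainly"]  # hypothetical pattern words
bad_words_ids = [tok(w, add_special_tokens=False).input_ids for w in banned]

prompt = "Write a short paragraph about rivers."
inputs = tok(prompt, return_tensors="pt").to(model.device)

# Any token sequence in bad_words_ids is masked out of the sampled reply.
out = model.generate(**inputs, max_new_tokens=120, bad_words_ids=bad_words_ids)
print(tok.decode(out[0], skip_special_tokens=True))
```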
What kind of monster family has a kid with mental health issues, in therapy, and keeps an accessible gun around unsupervised?
I have no issue filtering. Everyone I watch regularly has a minimum of a master’s degree in their respective edutainment field. I don’t watch anything packaged by the big 6 or any algorithm. And I run my own dialed-in AI stuff for many tasks I personally find useful.
AI is not the problem friend. The problem is cultural. Tools are not the problems. The people that use them are.
The world will continue to specialize. As it does, the range of available content will grow as will the difficulty in finding your niche. Most places cater to the lowest common denominator. Perhaps you’ve found yourself some places where it is time to move on or devolve along with them.
I’m not saying it is no big deal to regularly expose oneself to it.
I know how small an amount can be smelled from building a power supply for a UV light, making mistakes, and still smelling the thing. I was back-driving some circuit block that shouldn’t have had enough power to do anything, and there was no visible effect, but I could still pick up the faint smell of O3. Even with a tiny bulb, the smell is nearly instantaneous when the light is powered. Running one around anything that can rust is a bad idea. It is almost as bad as working with hydrochloric acid and hydrogen peroxide as etchants; everything goes super rusty fast. I don’t get exposed to it regularly, so I’m not worried. Most of the many hotel rooms I have stayed in smell pretty strongly of O3, and that is probably the most exposure the average person gets for any extended length of time. The alternative is probably worse, but people don’t worry too much about that one either.
That information is mentioned in the video. It is out there if you were to look instead of arguing on a rhetorical forum with a user that has nothing to do with the product or any vested interest in this whatsoever and simply tried to share something to be positive and invest their time in the community here. Interactions like this are why people tend to regret their efforts and post and interact less, or at least that is the impact such interactions have on me. You really need better self awareness here.
It is 6 W before power supply losses, and then a tube that is likely under 20% efficient. The primary thing that gets you with welding is the fraction of a second before an auto-darkening lens activates. It is a tiny amount of time, but it adds up. Or all those times you accidentally touch the tungsten to the pool with TIG. Welding has a wide spectrum, but it is the power that matters most here.

Frequency and power are two separate and unrelated things. Your microwave and WiFi are both 2.4 GHz machines: the WiFi in your home router is limited to around 100 milliwatts and is totally harmless, while your microwave needs a built-in Faraday cage to avoid cooking you from the inside out because it is putting out something like 1500 watts. At around 220 nm the light doesn’t penetrate skin or the fluid of the eye, and the power output of this lamp is low.

Seriously, try getting into things like telescope filters where you’re trying to isolate certain wavelengths. It is challenging at these wavelengths to find anything that is transparent. Of all of my science books, my optics handbook is by far the largest and hardest for me to follow. That isn’t saying much, but I have built my own telescope electronics and eyepieces, along with hobby electronics, designing and etching circuit boards, and photolithography using various UV lights I have built. Six watts is nothing major. I would be more concerned about how limited an area one light can cover.
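Rough numbers, to put the 6 W in perspective; the efficiency and coverage figures below are my own assumptions, not specs for this particular lamp:

```python
# Back-of-the-envelope sketch: how much UV output a 6 W lamp actually spreads around.
# Efficiency and coverage are assumptions, not measured values for this lamp.
electrical_w = 6.0
tube_efficiency = 0.20          # assumed upper bound from the comment above
uv_w = electrical_w * tube_efficiency

coverage_m2 = 4.0               # assumed area you might want to cover, ~2 m x 2 m
irradiance_uw_cm2 = uv_w / (coverage_m2 * 1e4) * 1e6  # microwatts per cm^2

print(f"UV output: ~{uv_w:.1f} W")
print(f"Average irradiance over {coverage_m2} m^2: ~{irradiance_uw_cm2:.0f} µW/cm²")
```

With those assumptions you end up around 30 µW/cm² averaged over a small room’s worth of area, which is why coverage is the limiting factor rather than raw power.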
It is very noticeable if any is produced. You’ll smell it quite easily. It isn’t a big deal. There is a bunch of ozone after any lightning storm in an area, and while most is in the upper atmosphere, any direct sunlight outside is producing some too.
Welding is much, much higher power than this 6 W lamp. I bridge both spaces; I did my LA structural steel cert out of Local 12 many years ago.
In the video Clive says he didn’t smell any and explains it is likely due to the design of the electrodes. I do not claim to know much, but I have some high-end refrigerator UV bulbs in near-identical packages, one that does produce O3 and one that doesn’t. I’m not sure about the tech behind it. I got both: the O3 one for sterilization and the regular one for photolithography.
I don’t use Perplexity, but AI is generally 60-80% effective with a larger-than-average open-weights offline model running on your own hardware.
DDG offers the ability to use some of these. I still use a modified Mistral model. Llama 3 can be better in some respects, but it has terrible alignment bias. The primary entity in the underlying model structure is idiotic in its alignment strength and incapable of reasoning with edge cases like creative writing for SciFi futurism. The alignment bleeds over. If you get on DDG and use the Mixtral 8×7B, it is pretty good.

The thing with models is to not talk to them like humans. Everything must be explicitly described. Humans rely on a lot of implied context in general, where we assume people understand what we are talking about. Talking to an AI is like appearing in court before a judge; every word matters. The LLM is basically a reflection of all of human language, too. If the majority of humans are wrong about something, so is the AI.
If you ask something simple like just a question, you’re not going to get very far into what the model knows. Models have a very limited scope of focus. If you do not build prompt momentum by describing a lot of details, the scope of focus is large but the depth is shallow. The more momentum you build by describing what you are asking in detail, the more the scope narrows and the deeper the connections that can be made.
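As a sketch of what I mean, here is the same question asked flat versus with built-up context. The endpoint, model name, and prompts are placeholders assuming a local OpenAI-compatible server (llama.cpp server, ollama, etc.); the point is only the difference in prompt detail.

```python
# Sketch of "prompt momentum": a flat question vs. one with explicit context.
# URL and model name are placeholders for whatever local server you run.
import requests

URL = "http://localhost:8080/v1/chat/completions"

flat = "Why does my UV lamp smell?"

detailed = (
    "I am troubleshooting a 6 W low-pressure UV lamp used for PCB photolithography "
    "at home. The power supply is a homemade driver. When the lamp runs I notice a "
    "sharp, chlorine-like smell near the tube. Explain the most likely chemical "
    "cause, how the emission wavelength relates to it, and what design changes on "
    "the electrode or envelope side reduce it. Answer technically."
)

for name, prompt in (("flat", flat), ("detailed", detailed)):
    resp = requests.post(URL, json={
        "model": "local-model",                          # placeholder
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 300,
    })
    print(f"--- {name} ---")
    print(resp.json()["choices"][0]["message"]["content"][:400])
```

The flat prompt gets a broad, shallow answer; the detailed one narrows the scope and pulls deeper connections out of the same model.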
It is hard to tell what a model really knows unless you can observe the perplexity output. This is more advanced, but the perplexity score (effectively the per-token probability) of each generated token is how you infer that the model does not know something.
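If you run your own models, you can watch this directly. Here is a minimal sketch with transformers that scores each token of a statement against the model’s own predictions; the model name and the test sentence are just placeholders. Low per-token log-probabilities (and a high resulting perplexity) are the tell that the model is guessing.

```python
# Minimal sketch: per-token log-probabilities and sequence perplexity.
# Model name and test sentence are placeholders; any causal LM works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # assumption: a local open-weights model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

text = "The capital of Australia is Canberra."
ids = tok(text, return_tensors="pt").input_ids.to(model.device)

with torch.no_grad():
    logits = model(ids).logits                      # [1, seq_len, vocab]

# Log-probability of each token given the tokens before it.
logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
token_lp = logprobs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)

for t, lp in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), token_lp[0]):
    print(f"{t:>12s}  {lp.item():7.3f}")

# Perplexity: exp of the mean negative log-likelihood over the sequence.
print("perplexity:", torch.exp(-token_lp.mean()).item())
```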
Search sucks because it is a monopoly. There are only two relevant web crawlers: M$ and the Goo. All search queries go through these, either directly or indirectly. No search provider is deterministic any more; your results are uniquely packaged to manipulate you. They are also obfuscated to block others from using them to train better or competing models. Then there is the US government antitrust case, which makes temporarily obfuscating their market position and pushing people onto other platforms their best path forward. Criminal manipulators are going to manipulate.
Everyone makes mistakes. I know what my mistakes look like in general, and I have the self-awareness for plausible deniability. I’m not all that bright, so I assume someone in this position is more skilled and capable than I am.
Would it be more effective to sabotage your own work in such a place?
This, and lay out the details ELI5-style, as an unemotional, objective thing, with detail.
I have received many flags that take more than a few minutes to figure out the tone and meaning. I strongly believe people have a right to be stupid, wrong, a bit rude, or to have a bad day. I need to know exactly why the comment is more than this, in a well-laid-out fashion. If you think it is a pattern with the individual, prove it. If some subtle phrase carries more meaning than I may realize, say so.

I want to make people feel welcome on all fronts with a Hippocratic framework of “first, do no harm.” At the same time, a visible mod is a bad mod. I will read every detail. I will give the benefit of the doubt in every possible case. I won’t be passive toward bigotry, but I will allow an asshole that does no harm. I’m but one insignificant mod. I care a whole lot more as a person, but I act conservatively as a mod.

When flagging something, imagine the person on the other end is working on some big project, stopping their day, and taking a half hour to sort out the details, think them through, and take action. It usually takes me longer than that to shift gears and do this in practice. I’ll usually send a message explaining why I did or did not do anything as well.
In the case of the gender of doctors, it is probably premature to call it a bias in the model as opposed to a bias in the implementation of the interface. The first port of call would likely be to look into the sampling techniques used in the zero-shot and embedding models. These models process the image and text to convert them into numbers/conditioning. Then there are a ton of potential issues in the sigma/guidance/sampling algorithm and how it is constrained.

I tend to favor ADM adaptive sampling. I can get away with a few general PID settings, but I need to dial it in for specific imagery when I find something I like. This is the same PID tuning you might find in a precision temperature sensor and controller. The range of ways the noise can be constrained will largely determine the path traveled through the neural layers of the model. If I’m using an exponential constraint for guidance, that exponential aspect determines how much of the image is derived at which point: with exponential, very little of the image comes from early layers of the model, and this builds until the later layers of the neural network are where the majority of the image is resolved. The point at which this ends is largely just a setting.

This timing also impacts how many layers of alignment the image is subjected to in practice. Alignment enforces our cultural norms, but it is largely a form of overtraining and causes a lot of peripheral issues. For instance, the actual alignment is on the order of a few thousand parameters per layer, whereas each model layer is on the order of tens of millions of parameters.
When the noise is constrained, it is basically like an audio sine wave getting attenuated. The sampling and guidance control the over- and undershoot of the waveform to bring it into a desired shape. These undulations pass through the model to find a path of least resistance; only, with tensor ranks, there are far more than the four dimensions of Cartesian space plus time. These undulations and the sampling techniques used may have a large impact on the consistency of the imagery generated. Maybe all the female doctors present in the model sit in a region of the space where the waveform is in the opposite polarity; simply altering the sampling may alter the outcome. This pattern is not necessarily present in the model itself, but can instead be an artifact of the technique used to sample and guide the output.
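To make the schedule part concrete, here is a tiny numpy sketch comparing an exponential (log-spaced) sigma schedule against a linear one. The sigma range and step count are arbitrary assumptions, not values from any particular model.

```python
# Sketch: how the shape of the sigma schedule shifts where the denoising
# effort is spent. Sigma range and step count are arbitrary assumptions.
import numpy as np

steps = 20
sigma_max, sigma_min = 14.6, 0.03   # assumed SD-style range

exp_sigmas = np.exp(np.linspace(np.log(sigma_max), np.log(sigma_min), steps))
lin_sigmas = np.linspace(sigma_max, sigma_min, steps)

for name, s in (("exponential", exp_sigmas), ("linear", lin_sigmas)):
    low = (s < 1.0).mean()
    print(f"{name:12s} schedule spends {low:.0%} of its steps below sigma = 1.0")
```

The log-spaced schedule parks most of its steps at low sigma, so the bulk of the resolving work lands late in the run, which is the timing effect described above; the linear schedule barely visits that region at all.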
There are similar types of nuances present in the text embedding and zero shot models.
There is also some potential for issues in the randomization of noise seeds. Computers are notoriously bad at generating truly random numbers.
I’m no expert. In abstract simplification, this is my present understanding, but I’m no citable source and could easily be wrong about some aspects of this. It is, however, my functional understanding from using, tweaking, and modding some basic aspects of the model loader source code.
Musk made a deal with Putin to extort people into using Starlink.