  • Preheating and homogenization were not tested in these processes. Both are steps used on most US milk that would likely inactivate the virus. Moral of the story is still: you are an idiot if you are drinking raw milk.

    Fragments of the virus are being found in about 20% of all milk sampled. These fragments have not been shown to be enough to make anyone sick. The fact that we’re finding fragments and not intact viruses in store-bought milk is a good indication that the various processes used for milk in most locations are doing the job they were intended to do.

    And most important of all: this is the current state of evidence gathered on this topic, and that state could change with various factors at play and/or the addition of new evidence. Because apparently some people have forgotten that “things change as time progresses”.


  • For instance, this includes minerals for battery and other components to produce EVs and wind turbines – such as iron, lithium, and zinc

    I found nothing within the IEA’s announcement that indicates a shortage of those three elements. Iron is like the fourth most abundant element in the Earth’s crust.

    In fact, this story literally reports this whole thing all wrong. It’s not that there’s a shortage, it’s that the demand for renewables is vastly larger than what we’re mining for. Which “duh” we knew this already. The thing this report does is quantify it.

    That said, the “human rights abuses” part isn’t from the IEA report. That comes from the Business and Human Rights Resource Centre (BHRRC).

    Specifically, the BHRRC has tracked these for seven key minerals: bauxite, cobalt, copper, lithium, manganese, nickel and zinc. Companies and countries need these for renewable energy technology, and electrification of transport.

    These aren’t just limited to the renewables industry. Copper specifically: you’ve got a lot of it in your walls and in the device that you are reading this comment on. We have always had issues with copper and it’s whack-a-mole finding solutions to this. I’m not dismissing BHRRC’s claim here, it’s completely valid, but it’s valid whether or not we do renewables. Either way, we still have to tackle this problem. EVs or not.

    Of course, some companies were particularly complicit. Notably, BHRRC found that ten companies were associated with more than 50% of all allegations tracked since 2010

    And these are the usual suspects who routinely look the other way on human rights abuses. China, Mexico, Canada, and Switzerland: this is the list of folks who drive a lot of the human rights abuses, and it’s how it has been for quite some time now. That’s not to be dismissive of the other folks out there (because I know everyone is just itching to blame the United States somehow), but these four are usually the ones getting their hands smacked. Now to be fair, it’s really only China and Switzerland that usually do not care one way or the other. Canada and Mexico are just the folks the US convinced to take the fall for their particular appetite.

    For example, Tanzania is extracting manganese and graphite. However, he pointed out that it is producing none of the higher-value green tech items like electric cars or batteries that need these minerals

    Third Congo War incoming. But yeah, seriously, imperialism might have officially ended after World War II, but Western nations routinely do this kind of economic fuckening, because “hey, at least they get to self-govern”. It’s what first world nations tell themselves so they can sleep better about what they do.

    Avan also highlighted the IEA’s advice that companies and countries should shift emphasis to mineral recycling to meet the growing demand.

    This really should have happened yesterday. But doing something today would at least be proactive about the situation. Of course, many first world nations, when they see a problem, respond with “come back when it’s a catastrophe.”

    OVERALL: This article is attempting to highlight that recycling is a very doable thing if governments actually invested in the infrastructure to do so, and that if we actually recycled things, we could save about a third of the overall cost of renewables. It’s just long-term economic sense to recycle. But of course, that’s not short-term economic sense. And so with shortages to meet demand on the horizon, new production is going to be demanded, and that will in turn cause human rights violations.

    They really worded the whole thing oddly and used the word shortage, like we’re running out, when they meant shortage as in “we can’t keep up without new production”. They got the right idea here, I just maybe would have worded all of it a bit differently.




  • It does not. Linux is not a multikernel OS and HarmonyOS is. Now, Harmony does indeed implement the ability to bring in a modified AOSP to provide Android app compatibility, but the actual OS that supervises that isn’t Linux-based, though it does provide a UNIX environment.

    The reason HarmonyOS works well with the devices is because the OS and the devices are being built by the same company. It’s likely that HarmonyOS would run like ass or not at all on anything not made by Huawei; it’s also why the OS is mostly closed source with some open parts.

    But just because they both present a UNIX environment does not mean HarmonyOS is Linux or derived from it. They are indeed two different OSes with fundamentally different approaches to managing the underlying system.


  • Interesting; you have to dig past the usual misandry sites to find an impartial source but Pew research found 53% of stem graduates female in 2018 and rising

    I mean, at this point you’re just cherry picking and not doing all that well with it. As indicated by, again, YOUR source.

    The gender dynamics in STEM degree attainment mirror many of those seen across STEM job clusters. For instance, women earned 85% of the bachelor’s degrees in health-related fields, but just 22% in engineering and 19% in computer science

    That lines up with the whole thing I had mentioned here. You keep wishing otherwise, but you also keep providing evidence to the contrary.

    So I mean at some point I guess you’ll read your own sources OR you won’t. But the sources you keep providing agree with the original statement that women are underrepresented in traditional STEM studies. So I mean you square that with yourself however you want.



  • Well I mean, do you read the links you provide?

    While women now account for 57% of bachelor’s degrees across fields and 50% of bachelor’s degrees in science and engineering broadly (including social and behavioral sciences), they account for only 38% of bachelor’s degrees in traditional STEM fields (i.e., engineering, mathematics, computer science, and physical sciences; Table 1).

    That’s where your 50% comes from. And as you can see, your link also aligns with the 38.6% previously mentioned.

    See? Now was that hard? See how once you explained yourself we could clear up the confusion you were having? Nothing wrong with that, easy to be confused by the various terms that are being tossed around.


  • What are you even going on about? It literally says:

    Women represent 57.3% of undergraduates but only 38.6% of STEM undergraduates

    That means women are obtaining most of their degrees via non-STEM studies.

    Women represent 52% of the college-educated workforce, but only 29% of the science and engineering workforce.

    And that is reflected in the study’s figures for employment as well.

    I’d search for another but people shooting themselves in the foot amuses me to know end

    Well, let’s look over the score here. Someone has provided two different links to back up their argument and you’ve provided… Oh look, none. You’re making claims and pointing out things that clearly do not exist or are anecdotal. Nothing you have done in the last three comments indicates to anyone that any of us should give anything you have to say any kind of weight.

    So I guess you are amused to know [sic] end, but a point or logical argument you have not made. But hey, if thinking you took the W here keeps you quiet, then good job, you totally owned everyone here. Amazing wordsmithing.


  • I have a Brother HL-L3230CDW. It has been a workhorse and has quickly become my most prized possession of all the things that I own. It takes anyone’s toner and produces quality without question. It works with my various Linux, Mac, Windows, and Android devices without hesitation and with minimal fuss to get set up.

    So that’s what I would recommend. It’s a good bit of coin up front, but in my opinion it has paid for itself in cheaper long-run TCO and sanity, in that it just fucking works.




  • Both are vendor-specific implementations of general-purpose computing on GPUs. This is in opposition to open standards like OpenCL, which a lot of the exascale big boys out there mostly use.

    nVidia spent a lot of cash on “outreach” to get CUDA into various packages in R, Python, and whatnot. That displaced a lot of the OpenCL stuff. These libraries are what a lot of folks spin up on, as most of the legwork is done for them in the library. With the exascale rigs, you literally have a team that does nothing but code very specific things on the machine in front of them, so yeah, they go with the thing that is the most portable, but that doesn’t exactly yield libraries for us mere mortals to use.

    AMD has only recently had the cash to start paying folks to write libs for their stuff. So we’re starting to see it come to Python libs and whatnot. Likely, once it becomes a fight of CUDA v ROCm, people will start heading back over to OpenCL. The “worth it” of vendor lock-in for CUDA and ROCm will diminish more and more over time. But as it stands, with CUDA you do get a good bit of “squeezing that extra bit of steam out of your GPU” by selling your soul to nVidia.

    That last part also plays into the “why” of CUDA and ROCm. If you happen to NOT have a rig with 10,000 GPUs, then the difference between getting 98% out of your GPU and 99.999% out of your GPU means a lot to you. If you do have 10,000 GPUs, a 1% inefficiency is okay; you’ve got 10,000 GPUs, so the 1% loss is barely noticeable and not worth losing the portability of OpenCL over.
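    For the curious, here’s roughly what the “portable” path looks like. This is a minimal OpenCL vector-add sketch in C, under the assumption that an OpenCL runtime is installed and the first platform/device is usable; error checking is omitted for brevity, so treat it as an illustration rather than production code. The CUDA and ROCm libraries people spin up on hide most of this ceremony from you.

    /* Minimal OpenCL vector-add sketch. Error checking omitted for brevity;
     * a real program should inspect every cl_int return code. */
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *kernel_src =
        "__kernel void vadd(__global const float *a,"
        "                   __global const float *b,"
        "                   __global float *c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main(void)
    {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* Assumption: the first platform and its first device will do. */
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        /* The kernel is compiled at runtime -- this is the portability part. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[42] = %f\n", c[42]);  /* expect 126.0 */

        clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
        clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }

    Build it with something like cc vadd.c -lOpenCL. The same source should run on nVidia, AMD, or Intel devices, which is exactly the portability being traded away for that last bit of vendor-tuned performance.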




  • It’s less executive action and more pipeline issues.

    The department responsible for approving these things just hired 140 new employees, who only recently got through onboarding for this specific task. That 140 is literally the max Congress approved for handling applications in all tiers of the program, with a lot of Senators having hoped that somewhere around half that number would have been the final tally hired.

    Those new employees will then need to navigate the 50+ page applications for each bid, with something around 480 bids in the first CHIPS acceptance round. So around 180 people will need to digest, as quickly as possible, somewhere around 24,000+ pages of applications. That’s not counting any kind of objections that might have been filed by states, private citizens, and also competitors (Samsung and Intel already have a few of these on some of the first tier 1s).

    And all of this is just the first step of the program that was written out by Congress. Congressional oversight (because remember, all executive agencies established by law have Congressional oversight) has been clear as mud on some of the finer details of this program. The biggest back and forth is the offshore-but-American funding, and holy crap, I can’t write enough about how complicated that debate has become.

    All of this is happening within the several-step process that Congress prescribed for this whole thing, because tier 3 and tier 2 funding are good candidates for abuse. Echoes of Solyndra can almost be heard when mentioning this. Or, for a more recent example of what can go wrong when you just open the floodgates: PPP loans.

    Thing is, companies complain money isn’t coming fast enough all the time. The money could be on its way next week and they’d still complain that it wasn’t there on Monday. The more important thing is that the money isn’t being wasted too badly, and that only happens with careful consideration. This is literally the exact same thing I said about Trump’s wall. One, I don’t think it would work anyway. Two, it ignores a lot of commitments we have internationally. But three, if we’re going to do it, at least have a plan for properly spending the money, which Trump did not: it basically took military funding and diverted it toward the wall, so it fell into one of those “you either spend it or lose it” situations, which meant the construction had to be rushed, and that’s why a lot of “Trump’s wall” is mostly falling into the Rio Grande or made of metal slats that can easily be sawed through with a hand saw.

    So sure, whatever, but the more important thing… Well, let me clarify: the more important thing from my perspective (so totally cool if someone disagrees) is that the money is spent carefully and wisely. It doesn’t have to be a home run, I just would rather the money not be tossed all over the place like PPP was.

    And I get some of the arguments for why PPP was done the way it was, but to put it on a scale: I would prefer handing it out like a 3, I understand why people would want it handed out like a 7, but holy fuck we handed it out like a 9.3. Where 1 is being so tight with the money you could squeeze a penny into copper wire, and 10 is a prolapsed-colon flow of money out the ass. It was really indefensible how loosey-goosey we were with that money. And yes, there have been other times we’ve done that (cough, military, cough); I didn’t like those either.


  • This is one of the specific issues raised by those who’ve worked with Wayland, and it’s echoed in Nate’s other post that you mentioned.

    Wayland has not been without its problems, it’s true. Because it was invented by shell-shocked X developers, in my opinion it went too far in the other direction.

    I tend to disagree. Had, say, the XDG stuff been specified in the core protocol, handlers for some of that XDG stuff would have been required in things that honestly wouldn’t have needed them. I don’t think infotainment systems need a concept of copy/paste, but having to write:

    /* Hypothetical stubs a stricter protocol would force on every client. */
    static int handle_copy(struct wl_surface *srf, struct wl_buffer *buf) {
        (void)srf; (void)buf;  /* completely ignore this */
        return 0;
    }

    static int handle_paste(struct wl_surface *srf, struct wl_buffer *buf) {
        (void)srf; (void)buf;  /* completely ignore this */
        return 0;
    }
    

    is really missing the point of starting fresh, is bytes in the binary that didn’t need to be there, and, while my example is pretty minimal for shits and giggles, IRL it would have been a great way to introduce “randomness” and “breakage” for those just wanting to ignore this entire aspect.

    But it’s one of those agree-to-disagree things. I think the level of hands-off Wayland went with was the correct amount. And now that we have things like wlroots, even better, because if you want to start there, you can, and then add what you need. XDG is XDG, and if that’s what you want, you can have it. But if you want your own way (because eff working nicely with GNOME and KDE, if that’s your cup of tea) you’ve got all the rope in the world you will ever need.

    I get what Nate is saying, but things like XDG are just what happened with ICCCM. And because Wayland came in super lightweight, the inevitability that was XDG had lots of room to specify things; ICCCM, by contrast, had to contort itself to fit around X. I don’t know, but the way I like to think about it is like unsalted butter. Yes, my potato is likely going to need salt and butter. But I like unsalted butter, because if I want a pretty lightly salted potato, I’m not stuck starting from salted butter’s level of salt.

    I don’t know, maybe I’m just weird like that.


  • Over on Nate’s other blog entry he indicates this:

    The fundamental X11 development model was to have a heavyweight window server–called Xorg–which would handle everything, and everyone would use it. Well, in theory there could be others, and at various points in time there were, but in practice writing a new one that isn’t a fork of an old one is nearly impossible

    And I think this is something people tend to forget. X11 as a protocol is complex, and writing an implementation of it is difficult to say the least. Because of this, we’ve all kind of relied on Xorg’s implementation of it, and things like KDE and GNOME piggyback on top of that. However, nothing (outside of the pure complexity) prevented KWin (just as an example) from implementing its own X server. KWin having its own X server would give it specific things that would better handle what KWin specifically needed.

    A good parallel is how crazy insane the HTML5 spec has become, and how now pretty much only Google can write a browser for that spec (with, thankfully, Firefox also keeping up), and everyone is just cloning that browser and putting their specific spin on it. But if a deep enough core change happens, that’s likely to find its way into all of the spins. And that was some of the issue with X. Good example here: because of the specific way X works, an “OK” button is actually implemented by your toolkit as a child window. Menus? Those are windows too. In fact pretty much no toolkit uses primitives anymore. It’s all windows with lots and lots of attributes. And your toolkit (Qt, Gtk, WINGs, EFL, etc.) handles all those attributes so that events like “clicking a mouse button” work as if you had clicked a button and not a window that’s drawn to look like a button.
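    To make that concrete, here’s a minimal, illustration-only sketch in raw Xlib (the geometry and names are made up): the “button” is literally just a child window that we select ButtonPress events on. A toolkit does the same thing, just with drawing, theming, and a pile of attributes layered on top.

    /* Build with: cc button.c -lX11
     * Illustration: an X11 "button" is just a child window we listen
     * for clicks on; the toolkit normally hides this from you. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);

        /* Top-level application window. */
        Window top = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 300, 200, 1,
                                         BlackPixel(dpy, scr), WhitePixel(dpy, scr));

        /* The "OK button" is nothing more than a child window of `top`. */
        Window button = XCreateSimpleWindow(dpy, top,
                                            100, 80, 100, 40, 1,
                                            BlackPixel(dpy, scr), WhitePixel(dpy, scr));

        XSelectInput(dpy, button, ButtonPressMask);
        XMapWindow(dpy, top);
        XMapWindow(dpy, button);
        XFlush(dpy);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == ButtonPress && ev.xbutton.window == button) {
                puts("\"OK\" clicked -- but X only knows a child window was clicked");
                break;
            }
        }

        XCloseDisplay(dpy);
        return 0;
    }

    Click inside the inner rectangle and the program reacts; X itself never had any idea there was a “button” involved.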

    That’s all because these toolkits want to do things that X won’t explicitly allow them to do. Now, the various DEs could just write an X server that has their concept of what a button should do, how it should look, etc. And that would work, except that, say, you fire up GIMP, which uses Gtk, and Gtk has its own idea of how that widget should look and work, and boom, things break on the KDE X server. That’s because of the way X11 is defined. There’s this middle man that always sits there dictating how things work. Clients draw to you, not to the screen, in X. And that’s fundamentally how X and Wayland are different.

    I think people think of Wayland in the same way as X11: that there’s this one Xorg that exists and we’ll all be using it and configuring it. And that’s not wholly true. In X we have the X server, and in that department we had Xorg/XFree86 (and some other minor bit players). The analog for that in Wayland (roughly, because Wayland ≠ X) is the compositor, of which we have Mutter, Clayland, KWin, Weston, Enlightenment, and so on. That’s more than the just-one we’re used to, and that’s because the Wayland protocol is simple enough to allow these multiple implementations.

    The skinny is that a Compositor needs to at the very least provide these:

    • wl_display - This is the protocol itself.
    • wl_registry - A place to register objects that come into the compositor.
    • wl_surface - A place for things to draw.
    • wl_buffer - When those things draw there should be one of these for them to pack the data into.
    • wl_output - Where rubber hits the road pretty much, wl_surface should display wl_buffer onto this thing.
    • wl_keyboard/wl_touch/etc - The things that will interact with the other things.
    • wl_seat - The bringing together of the above into something a human being is interacting with.

    And that’s about it. The specifics of how to interface with hardware and whatnot are mostly left to the kernel. In fact, compositors are pretty much just doing everything in EGL; that is, KWin’s wl_buffer (just a random example here) is an eglCreatePbufferSurface with other stuff specific to what KWin needs, and that’s it. I would assume Mutter is pretty much the same case here. This gets a ton of the formality stuff that X11 required out of the way and allows compositors more direct access to the underlying hardware. Which has pretty much been the case for all of the window managers since 2010-ish: all of them basically window-manage in OpenGL, because OpenGL allowed them to skip a lot of X. Of course there is GLX (that one bit where X and OpenGL cross), but even that is so much better than dealing with Xlib and everything it requires, which would routinely call for “creative” workarounds.
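    As a rough illustration of how thin the client side of this is, here’s a minimal sketch (assuming libwayland-client is installed and a compositor is running) that just connects to the compositor and prints the globals it advertises through wl_registry. Real clients go on from here to bind wl_compositor, create a wl_surface, and attach wl_buffers.

    /* Build with: cc globals.c -lwayland-client
     * Minimal sketch: connect to the running compositor and list the
     * globals it advertises via wl_registry. */
    #include <wayland-client.h>
    #include <stdint.h>
    #include <stdio.h>

    static void on_global(void *data, struct wl_registry *reg,
                          uint32_t name, const char *interface, uint32_t version)
    {
        (void)data; (void)reg;
        printf("global %u: %s (version %u)\n", name, interface, version);
    }

    static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name)
    {
        (void)data; (void)reg; (void)name;  /* not interesting for this sketch */
    }

    static const struct wl_registry_listener listener = {
        .global = on_global,
        .global_remove = on_global_remove,
    };

    int main(void)
    {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) {
            fprintf(stderr, "no Wayland compositor found\n");
            return 1;
        }

        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &listener, NULL);

        /* One round trip is enough for the compositor to announce its globals:
         * expect to see things like wl_compositor, wl_seat, wl_output, etc. */
        wl_display_roundtrip(display);

        wl_display_disconnect(display);
        return 0;
    }

    Run it under KWin, Mutter, Weston, whatever: the same little program works against any of them, because the core protocol they all have to speak is that small.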

    This is what’s great about Wayland: it allows KWin to focus on what KWin needs and Mutter to focus on what Mutter needs, but provides enough of a generic interface that Qt applications will show up on Mutter just fine. Wayland goes out of its way to get out of the way. BUT that means things we’ve enjoyed previously aren’t there, like clipboards, screen recording, etc., because X dictated those things and for Wayland, that’s outside of scope.


  • Most of the ads I’ve seen appear to be targeted at conservative Americans, as they’re all latching onto a mistrust in U.S. President Biden and the federal government

    LUL. Well at least they know their mark.

    PRC government employee A: Who do you think we should target?

    PRC government employee B: How about the people who buy horse paste to cure what ails them?

    PRC government employee A: Mwahahaha!!