
  • They have a secondary motherboard that hosts the slot CPUs: four single-core Pentium III Xeons. I also have the equivalent Dell model, but it has a bum mainboard.

    With those ’90s systems, to get Windows NT to use more than one CPU, you had to get the specific Windows version that actually supported multiprocessor hardware.

    Nowadays you can simply upgrade from a 1-core to a 32-core CPU and Windows and Linux will pick up the difference and run with it.
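    To illustrate the contrast: modern software doesn’t even have to know the core count in advance, it just asks the OS at runtime. A minimal Python sketch (a generic illustration, not tied to any of the systems above):

    ```python
    import os
    import multiprocessing as mp

    def square(n: int) -> int:
        return n * n

    if __name__ == "__main__":
        # The OS reports however many logical cores the machine has right now;
        # nothing about this is baked in at install time.
        cores = os.cpu_count()
        print(f"OS reports {cores} logical cores")

        # Spread work across all of them without hardcoding the count.
        with mp.Pool(processes=cores) as pool:
            print(pool.map(square, range(8)))
    ```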

    In the NT 3.5 and 4 days, you actually had to either do a full reinstall or swap out several parts of the kernel to get it to work.

    Downgrading took the same effort, because a multiprocessor Windows kernel ran really badly on a single-CPU system.

    As for the Sun Fires, the two models I mentioned tend to be readily available on eBay in the $100-200 range and are very different inside from an x86 system. You can go for the 400 series or higher to get even more difference, but getting a complete one of those can be a challenge.

    And yes, the software used on some of these older systems was a challenge in itself, but it isn’t really special. It’s pretty much like having different vendors’ RGB controller software on your system: a nuisance that you should try to get past.

    For instance, the IBM 5000 series RAID cards were simply LSI cards with IBM-branded firmware.

    The first thing most people do is put the actual LSI firmware on them so they run decently.


  • Oh, I get it. But a baseline HP ProLiant from that era is just an x86 system barely different from a desktop today, only worse, slower, and more power hungry in every respect.

    For history and “how things changed”, go for something like a Sun Fire system from the mid-2000s (the 280R or V240 are relatively easy and cheap to get and are actually different) or a ProLiant from the mid-to-late ’90s (I have a functioning Compaq ProLiant 7000, which is HUGE and a puzzle box inside).

    x86 computers haven’t changed much at all in the past 20 years; you need to go into the rarer models (like blade systems) to see an actual deviation from the basic PC-like form factor we’ve been using all that time, with unique approaches to storage and performance.

    For self-hosting, just use something more recent that falls within your price range (5-6 year old hardware is usually highly affordable). Even a Pi is going to trounce a system that old, and it actually has a different form factor.








  • A mutation for higher resistance to radiation or cancer is something that already happens in nature, but in most of the animal world those are relatively useless traits: normally, cancer doesn’t develop fast enough to stop procreation.

    In Chernobyl, the highly elevated radiation would normally kill animals before they can even breed. The ones that don’t have the resistance die before they get the chance; the ones that do, breed.

    With humans in the modern age, a cancer- or radiation-resistance trait never gets the chance to become a dominant evolutionary trait: almost everyone develops cancer only later in life, and those who get it young can, more and more often, get treatment, giving them a chance to procreate anyway.

    Outside Chernobyl, there is no evolutionary pressure for a trait like that to become dominant.

    Living long enough to procreate is the primary drive in nature.
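    A toy simulation makes the selection-pressure argument concrete. All the numbers here (survival rates, starting frequency, population cap) are made up purely for illustration:

    ```python
    import random

    POP_CAP = 10_000

    def generation(pop, survival):
        """One breeding cycle: animals survive to breed with a
        trait-dependent probability; each survivor leaves two offspring."""
        survivors = [t for t in pop if random.random() < survival[t]]
        offspring = [t for t in survivors for _ in range(2)]
        random.shuffle(offspring)        # cap the population without bias
        return offspring[:POP_CAP]

    # 1% of animals start out carrying the resistance trait.
    pop = ["resistant"] * 100 + ["normal"] * 9_900

    # Outside the zone the trait is useless: survival odds are identical.
    low_pressure = {"resistant": 0.6, "normal": 0.6}
    # Inside the zone, radiation kills most non-resistant animals
    # before they ever get to breed.
    high_pressure = {"resistant": 0.6, "normal": 0.1}

    for label, survival in [("outside", low_pressure), ("inside", high_pressure)]:
        p = list(pop)
        for _ in range(20):              # simulate 20 generations
            p = generation(p, survival)
        freq = p.count("resistant") / len(p) if p else 0.0
        print(f"{label} the zone: resistance after 20 generations = {freq:.0%}")
    ```

    Outside the zone the trait just drifts around its starting 1%; inside, it rockets toward 100%, because only the resistant animals live long enough to breed.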



  • Even as far back as XP/Vista, Microsoft has wanted to run the file system as more of an adaptive database than a classical hierarchical file system (the WinFS project).

    The leaked beta for Vista had this included and it ran like absolute shit, mostly because hard drives were slow and RAM was at a premium, especially in Vista, as it was such a bloated piece of shit.

    NTFS has since evolved to include more and more of these “smart” file system components.

    Now they want to go full on with this “smart” approach to the filesystem.

    It’ll still be slow and shit, just like it was 2 decades ago.
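    The core idea behind that approach, sketched in miniature (a generic illustration; this is not how NTFS or WinFS is actually implemented): keep file metadata in an indexed store and query it like a database, instead of only walking a directory tree.

    ```python
    import os
    import sqlite3

    # Build a queryable metadata index over an ordinary directory tree --
    # the database-over-filesystem idea in miniature. The schema is
    # illustrative, not anything Windows actually uses.
    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE files (
            path  TEXT PRIMARY KEY,
            name  TEXT,
            ext   TEXT,
            size  INTEGER,
            mtime REAL
        )
    """)

    for root, _dirs, names in os.walk("."):
        for name in names:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            ext = os.path.splitext(name)[1].lower()
            db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?, ?)",
                       (path, name, ext, st.st_size, st.st_mtime))

    # A query a plain hierarchy can't answer without a full rescan:
    # "the 5 biggest .log files anywhere under this tree".
    for path, size in db.execute(
            "SELECT path, size FROM files WHERE ext = '.log' "
            "ORDER BY size DESC LIMIT 5"):
        print(size, path)
    ```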







  • Besides that, mRNA tech started to be developed in the 1970s, with the first lab rat trials in the late ’80s or early ’90s.

    Clinical trials on humans, testing safety and effectiveness in combating various diseases and viruses, have been ongoing for the past decade.

    And as you said, the first several widely used vaccines based on mRNA tech have been deployed to literally billions of people.

    That is an incredibly large sample size for data, and there have been very few issues over the past 3 years.

    And what bernieecclestoned brings up about herd immunity simply means the people they are talking to are, like most antivaxxers, blithering idiots who know some catchphrases and not a single meaning behind them.

    You only obtain herd immunity with minimal casualties by hardening the herd with vaccines and then hoping the immune systems of the herd adjust to further combat the disease. If the data doesn’t show that new variants are easily countered by the immune systems of the herd, you know you need to develop more vaccines.

    If you try to obtain herd immunity by letting a brand-new disease like COVID run its course, you will probably obtain it eventually, but instead of 7 million dead worldwide (and lord knows how many with long COVID or other long-term disabilities from the disease), you’ll have 70 million or more.

    Herd immunity doesn’t mean you should just let shit hit the fan and see who’s left standing. If you miscalculate the severity of the disease, you can end up in another situation like the plague, which killed over 25 million out of the roughly 180 million people on Earth.

    In today’s numbers, that would mean something like 1.1 billion people dying. Probably far more, since we’re far more connected than people were in 400 AD.
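    Running the rough numbers behind that comparison (the standard herd-immunity threshold formula, plus the plague scaling; the R0 value is just an assumption for illustration):

    ```python
    # Classic herd immunity threshold: the share of the population that must
    # be immune before each case infects fewer than one new person.
    def herd_immunity_threshold(r0: float) -> float:
        return 1 - 1 / r0

    # Assumed R0 of 3 for illustration (early COVID estimates were roughly
    # in the 2-3 range; this is not a precise figure).
    print(f"threshold at R0=3: {herd_immunity_threshold(3):.0%}")    # ~67%

    # Scaling the plague's toll to today's ~8 billion people:
    plague_deaths, world_then = 25e6, 180e6
    world_now = 8e9
    print(f"plague death rate: {plague_deaths / world_then:.1%}")    # ~13.9%
    print(f"scaled to today: "
          f"{world_now * plague_deaths / world_then / 1e9:.1f} billion")
    ```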

    And you’d think that the better general healthcare and hygiene of today would lessen it, but the sheer increase in how connected we are would easily wipe that advantage off the board.