Honestly I’ll just send it back at this point. I have kernel panics that point to at least two of the cores being bad, which would explain the sporadic nature of the errors. It would also explain why memtest ran fine: it only uses the first core by default. Too bad I hadn’t thought of that when running it, because memtest lets you select cores explicitly.
Welp no change. I’m guessing the motherboard firmware already contained the latest microcode. Oh well, was worth a try, thank you.
It’s a pain in the butt to swap CPUs one more time but that may pale in comparison to trying to convince the shop that a core is bad and having intermittent faults. 🤪
This sounds like my best shot, thank you.
I’ve installed the `amd-ucode` package. It already adds `microcode` to the `HOOKS` array in `/etc/mkinitcpio.conf` and runs `mkinitcpio -P`, but I’ve moved `microcode` before `autodetect` so it bundles microcode for all CPUs, not just the current one (to have it ready when I swap), and re-ran `mkinitcpio -P`. Also had to re-run `grub-mkconfig -o /boot/grub/grub.cfg`.

I’ve seen the message “Early uncompressed CPIO image generation successful” pass by, and `lsinitcpio --early /boot/initramfs-6.12-x86_64.img | grep micro` shows `kernel/x86/microcode/AuthenticAMD.bin`, there’s a `/boot/amd-ucode.img`, and an `initrd` parameter for it in `grub.cfg`. I’ve also confirmed that `/usr/lib/firmware/amd-ucode/README` lists an update for that new CPU (and for the current one, speaking of which).

Now, from what I understand, all I have to do is reboot and the early stage will apply the update?
Any idea what it looks like when it applies the microcode? Will it appear in `dmesg` after boot, or is it something that happens too early in the boot process?
BIOS is up to date, CPU model explicitly listed as supported, memtest ran fine, not using XMP profiles.
All hardware is the same, I’m trying to upgrade from a Ryzen 3100 so everything should be compatible. Both old and new CPU have a 65W TDP.
I’m on Manjaro, everything is up to date, kernel is 6.12.17.
Memory runs at 2133 MHz, same as for the other CPU. I usually don’t tweak BIOS much if at all from the default settings, just change the boot drive and stuff like “don’t show full logo at startup”.
I’ve added some voltage readings to the post and answered some other comments here.
Everything is up to date as far as I can tell, I did Windows too.
memtest ran fine for a couple of hours, but the CPU stress test hung partway through, while CPU temp was around 75C.
RAM is indeed at 2133 MHz and the cooling is great, got a tower cooler (Scythe Kotetsu mark II), idle temps are in the low 30’s C, stress temp was 76C.
Motherboard is a Gigabyte B450 Aorus M. It’s fully updated and support for this particular CPU is explicitly listed in a past revision of the mobo firmware.
Manual doesn’t list any specific CPU settings, but their website says stepping `A0`, and that’s what the defaults were set to. Also I got “core speed: 400 MHz”, “multiplier: x 4.0 (14-36)”.

> even some normal batch cpus might sometimes require a bit more (or less) juice or a system tweak
What does that involve? I wouldn’t know where to begin changing voltages or other parameters. I suspect I shouldn’t just faff about in the BIOS and hope for the best. :/
lemmyvore@feddit.nl to Linux@lemmy.ml • Valve Engineer Mike Blumenkrantz Hoping To Accelerate Wayland Protocol Development • English · 7 months ago

People often think that things like recording your screen or keylogging are the worst, but they’re not. These attacks would require you to be targeted by someone looking for something specific.
Meanwhile automated attacks can copy all your files, or encrypt them (ransomware), search for sensitive information, or use your hardware for bad things (crypto mining, spam, DDoS, spreading the malware further), or most likely all of the above.
Automated attacks are much more dangerous and pervasive because they are conducted at massive scale. Bots scan massive amounts of IPs and try all the known exploits and vulnerabilities without getting tired, without caring how daunting it may be, without even caring if they’re trying the right vulnerability against the right kind of OS or app. They just spray everything and see what sticks.
You’re thousands of times more likely to be caught by such malware than to be targeted by someone with the skill and motive to record your screen or your keyboard.
Secondly, if someone like that targets you and has access to your user account, Wayland won’t stop them. They can gain access to your root account, they can install elevated spyware, they can patch Wayland and so on.
What Wayland is doing is the equivalent of asking you to wear a motorcycle helmet 24/7, just in case you slip on some spilled juice, or a flower pot falls on your head, or the bus you’re in crashes. All those things are possible and the helmet would come in handy but are they likely? We don’t do it because it’s not, and it would be a major inconvenience.
> You were merely lucky that they didn’t break.
Lucky… over 5 years and with a hundred AUR packages installed at any given time? I should play the lottery.
I’ve noticed you haven’t given me any example of AUR packages that can’t be installed on Manjaro right now, btw.
> it wasn’t just a rise in popularity of Arch it was Manjaro’s PAMAC sending too many requests DDoSing the AUR.
You do realize that was never conclusively established, right? (1) Manjaro was already using search caching when that occurred, so they had no way to spam the AUR; (2) there’s more than one distro using pamac; and (3) anybody can use “pamac” as a user agent, and there’s no way to tell if it’s coming from an actual Manjaro install.
My money is on someone actually DDoS’ing AUR and using pamac as a convenient scapegoat.
Last but not least you’re trying to use this to divert from the fact AUR packages work fine on Manjaro.
That’s exactly the problem. Wayland is a set of standards, more akin to FreeDesktop.org than to X. It lives and dies by its implementations, and it’s so utterly dependent on them that “KDE Wayland” has started to become its own thing. KDE is pretty much forging ahead alone nowadays, and when they make changes that becomes the way to do it. Also, what they do can’t be shared with other desktops, because those would have to use KDE’s own subsystems and become dependent on its whims.
It wasn’t supposed to be “Kdeland” and “Gnomeland”, but that’s what it’s slowly becoming. We’re looking at major fragmentation of the Linux desktop, because desktop teams stop seeing eye to eye on major issues all the time, and because there’s no central implementation to keep them working together, they’re free to do their own thing.
We need to keep a balance between security and convenience, to avoid systems becoming too awkward to use, and Wayland tipped this balance too far toward security. Malicious local exploitation of the graphics stack has never been a big issue: someone or something would first need to compromise your account locally, at which point they could do much worse things than moving your windows around. It’s not that the security threat doesn’t exist, it’s that Wayland has approached it from the wrong end and killed a lot of useful functionality in the process.
Also consider that this issue has existed for the entire history of desktop graphics on *nix, and nobody has ever deemed it worth destroying automation over. If it were such a grave security hole, surely someone would have raised the alarm and fixed it during all this time.
My opinion is that Wayland has been using this as a red herring, to bolster its value proposition.
> Manjaro has no purpose, it’s half-assed at being arch and it’s half-assed at being stable.
My experience with Manjaro and Fedora, OpenSUSE etc. contradicts yours. Manjaro has the best balance between stability and rolling out of the box I’ve seen.
“Out of the box” is key here. You can tweak any distro into doing anything you want, given enough time and effort. Manjaro achieves a good balance without the user having to do anything. I remind you that I’ve tested this with non-experienced users and they have no problem using it without any admin skills (or any admin access).
> Debian testing is a rolling.
It is not.
> AUR isn’t a problem in Manjaro because of lack of support, it’s a problem because packages there are made with Arch and 99.999% of its derivatives in mind, aka latest packages not one week old still-broken packages.
And yet I’ve managed to install dozens of AUR packages just fine. How do you explain that?
Matter of fact, I’ve never run into an AUR package I couldn’t install on Manjaro. What package is giving you trouble?
> Manjaro literally accidentally DDoSes the AUR every now and then because again they’re incompetent.
You’re being confused.
The AUR had very little bandwidth to begin with and could not cope with the rise in popularity of Arch-based distros. That’s a problem that needed to be solved on the AUR side first and foremost. Manjaro did what they could when the problem became apparent and added caching wherever possible. Both Manjaro and Arch devs have worked together to improve this.
If Wayland is so fragile as to only work with KDE, and is not responsible for anything, how long until it’s relegated to a KDE internal subsystem?
It actually is because of Wayland’s design. In their quest for “security” they’ve made it impossible for automation and accessibility tools to do their job.
It’s a glaring omission in Wayland going forward, for zero gain. Most of the touted Wayland security advantages are hogwash.
lemmyvore@feddit.nl to Linux@lemmy.ml • Hacking wizard gets Linux to run on a 1971 processor, though it takes almost 5 days to boot the kernel • English · 7 months ago

We don’t know yet, the first frame has been rendering for the last two weeks.
Ok but it’s not called Kdeland.
Or try using any form of desktop automation… which is a show-stopper, and it doesn’t look like Wayland plans to do anything about it any time soon.
Things like desktop automation, screen sharing, screen recording, remote desktop etc. are incredibly broken, with no hope in sight, because the core design of Wayland simply didn’t account for them, apparently.
Add to that the decision to push everything downstream into compositors, which led to widespread feature fragmentation and duplicated effort.
Add to that antagonizing the largest graphics chipset manufacturer (by usage among Linux desktop users) for no good reason. Nvidia has never had an incentive to cater to the Linux desktop, so Linux desktop users sending them bad vibes is… neither here nor there. It certainly won’t make them move faster.
Add to that the million little bugs that crop up when you try to use Wayland with any of the desktop apps whose developers aren’t snorting the Kool-Aid and dedicating outstanding effort to catching up with Wayland – which is most of them.
I cannot use Wayland.
I’m an average Linux desktop user, who has an Nvidia card, has no need for Wayland “security”, doesn’t have multiple monitors with different refresh rates, uses desktop automation, screen sharing, screen recording, remote desktop on a daily basis, and uses lots of apps which don’t work perfectly with Wayland.
…how and why would I subject myself to it? I’d have to be a masochist.