





This is probably the most succinct and effective campaign that could be made for them. Hats off to you.


probably, though I’ve not had that in a good while


just start a conspiracy theory
something along the lines of ‘the voting machines will turn their flat earth round’ 🤷


in-place upgrades are fine for just about any contemporary, mainstream Linux distro. You may find the experience to be more robust than on Windows.
I believe you can also upgrade via separate installation media, but you shouldn’t find yourself needing to.
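On Fedora, for instance, the whole in-place upgrade is a handful of dnf commands (the release number below is just illustrative):

```
# refresh the current release first, then pull and apply the new one offline
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=43
sudo dnf system-upgrade reboot
```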


totally fine in my experience, and I ‘dumb guy’ my way through the whole thing.
my primary workstation started on Fedora 28 and has been upgraded in place all the way to 43 - persisting through many hardware swaps and all sorts - though that’s with the GNOME desktop.
I’d imagine you could conduct full system upgrades via Discover on KDE too.
excellent find! hope it serves you well
Nvidia have been promoting ‘big format gaming displays’ since, I want to say, about 2019. Some of them reach the dimensions you specify; I just hope these are VESA Adaptive-Sync/FreeSync capable and not all G-Sync Ultimate module displays (those can be made to work in VESA mode, but not without issues in my experience).
I think I’ve seen one or two obscure TV models offering DisplayPort over USB Type-C; it may have been from Hisense.
No prob, really sorry about the situation though, I know it sucks. I’ve been looking into replacing my TVs with large PC displays with DisplayPort.
I’m not sure if you can somehow work around the HDMI Forum limitation with an active converter, but I think those are intended to be used on the adapter side (converting HDMI output to DP).
You don’t need proprietary drivers, nor should you have to disable MST.
If you’re using HDMI 2.1, you won’t be able to use VRR on a Linux system, as the HDMI Forum has blocked the AMDGPU implementation of the feature - they don’t allow FOSS implementations of HDMI 2.1 VRR.
More info here: https://wiki.archlinux.org/title/Variable_refresh_rate
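If it’s useful, you can at least confirm what the driver exposes for a given connector - a rough sketch, where drm_info is a separate package and connector/output names will differ per system:

```
# X11: dump output properties; look for "vrr_capable: 1" under the output in use
xrandr --props | grep -iE 'connected|vrr'

# Wayland or console: drm_info dumps DRM connector properties, including vrr_capable
drm_info | grep -i vrr
```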


Thanks for these, I’ll discuss them with the DAL team when I get the chance


Oh sorry, I misunderstood - so you actually get locked into a low mclk under specific display configurations? I’ve genuinely never heard of, or personally experienced, that across a breadth of hw and sw configs.
I’m wondering if it could be worth probing the power play sysfs interface or hwmon the next time this happens, to try and understand what’s going on there - rough sketch below.
Do you use client apps like LACT to interact with tuning settings? Can you link me to an existing bug report so I can follow up with engineering?
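Something along these lines while it’s stuck, assuming an amdgpu card at card0 (the card index and exact hwmon sensor name can differ per system):

```
# dump the memory clock DPM table; the '*' marks the currently active state
cat /sys/class/drm/card0/device/pp_dpm_mclk

# check whether something has forced a performance level ("auto" is the default)
cat /sys/class/drm/card0/device/power_dpm_force_performance_level

# hwmon view of the same device: average board power, reported in microwatts
cat /sys/class/drm/card0/device/hwmon/hwmon*/power1_average
```

If power_dpm_force_performance_level reads anything other than auto, a client app may have pinned the clocks.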


Can you elaborate on your display config?
You kind of alluded to part of it there; it’s not so much a bug in sw/fw as it is a hardware limitation at both the adapter and display side. The variables for displays are vertical blanking intervals (and differences between panels), as well as total display bandwidth.
with RDNA2, a feature was implemented in DAL to leverage VRR in order to allow a system with a single connected display to achieve a lower mclk, and thus lower idle power draw. With RDNA3, hardware changes (MALL specifically) broadened this capability to two concurrent displays. Even then, it’s not bulletproof.
The display eng team has more or less exhaustively worked towards this over the course of RDNA3’s lifespan; their work is applicable to both Windows and Linux.
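If anyone wants to check whether the low-mclk path actually engages on their setup, watching the DPM table at idle makes it fairly obvious (again assuming an amdgpu card at card0):

```
# poll the memory clock table with the desktop idle and static content on screen;
# when the feature engages, the '*' should sit on one of the lowest mclk states
watch -n1 cat /sys/class/drm/card0/device/pp_dpm_mclk
```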


Do you have an OSD for active refresh rate built into your displays? FreeSync / VRR can be managed directly from your DE’s display settings.
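For example - hedged, since output names vary and kscreen-doctor syntax has shifted between Plasma versions:

```
# KDE Plasma (Wayland): list outputs, then set a per-output VRR policy
kscreen-doctor --outputs
kscreen-doctor output.DP-1.vrrpolicy.automatic

# GNOME 46+: unhide the VRR toggle in Settings > Displays
gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']"
```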


I think you went from a 25.10 branch at a point where the KMD split had already occurred. This means that support for kmd3 devices (RDNA3 and 4) was not present, which led to the abject chaos you saw on Windows.
I’m curious about the network remark though. Was this on Windows 10 or 11? Can you tell me which platform (motherboard chipset) this is on?
I’m not sure I understand this post. Did something not ship according to schedule for Pixelfed? And if that’s the case, why is it a problem?


how is it a subpar GPU given that it targets a specific segment (looking at its price point, die area, memory & power envelope) with its configuration?
You’re upset that they didn’t aim for a halo GPU, and I can understand that, but how does this completely rule out a mid-to-high-end offering from them?
the 9000 series is reminiscent of Navi 10 versus Vega 10 GPUs like the 56, 64, even the Radeon VII: achieving equivalent performance with less power and hardware.


*not without substantial hurdles (mostly due to hw/SoC support). I’m wondering if they meant to ask why this isn’t more common today.
No, but I do get about three or four challenges. I can paste the article for you if that helps?