• 0 Posts
  • 7 Comments
Joined 10 months ago
Cake day: October 23rd, 2024

  • It’s not stupid of AMD to avoid starting a turf war in which both they and their patron saint get hurt.

    AMD is far ahead of Intel in iGPUs, which are key to the laptop and mini-PC segments. Desktop motherboards are weak on USB 4, and thus on 4-monitor support, and on performance per watt. Desktop PC vendors, with external GPUs, don’t promise an exact number (3+) of supported monitors.

    Intel is the one that needed AMD to survive, many years ago, to avoid a monopoly designation. That hasn’t stopped AMD from kicking Intel’s ass in iGPUs: AMD is 2+ generations ahead, where the latest high-end Intel barely outperforms the 680M, while AMD has the cheaper 780M, plus the 8600S and 890M. AMD is not “being thankful” to Intel by refusing to compete with it.

    Nvidia, by contrast, was never under monopoly scrutiny, so AMD’s stupid decisions on drivers and memory configurations have no such explanation. Industry-underbelly Nvidia bribes to the CEO would be an explanation.




  • It’s a much easier black market than drugs, because importing it is legal and no one checks when you export it.

    Nvidia bills 18%–28% of its sales through Singapore. It defends this by saying only 2% of its product actually ships there, but it doesn’t say which countries receive the bulk of the shipments that are billed through Singapore.

    Some 5090 cards in Hong Kong (from OP) sell at the same price as in the US, which makes it hard to understand why people in the US have been charged with smuggling, and suggests the entire supply chain is complicit in funneling cards to China. If this were all above board, US premiums over MSRP would be zero or negative. You also can’t find a 5090 on Amazon, but you can in a Hong Kong boutique?

    Prohibition for the loss, as always.




  • I’ve done a test of 8 LLMs on coding, using the J language, asking each of them to generate a chess “mate in x” solver.

    Even the bad models were good at organizing code, had some understanding of chess, and were good at understanding the ideas in their prompts. The bad models were bad mostly at logic: not understanding indexing/amend on a table, proper function calling, or proper decomposition of arguments in J. Bad models included Copilot and OpenAI’s 120B open-weights model. Kimi K2 was OK; Sonnet 4 was the best. I’ve mostly used Qwen 3 235B for its better free accessibility than Sonnet 4, and for its giant context, which makes it think harder (slower) and better the more it’s used on a problem. Qwen 3 did a good job writing a fairly lengthy chess position scoring function, then separating it into two functions, a quick one and a medium one, incorporating self-written library code, and recommending enhancements.
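    For readers unfamiliar with the task: a mate-in-x solver is an AND/OR search, where the attacker needs SOME move after which EVERY defender reply still leaves a forced win. The sketch below shows that recursion shape only; it is in Python rather than the J used in the test, and uses a toy subtraction game (a hypothetical stand-in) instead of real chess rules so it stays self-contained.

```python
# Generic "forced win within n of my moves" search -- the same AND/OR
# recursion a chess mate-in-x solver performs, minus the chess rules.
def wins_in(state, n, moves, is_win):
    """True if the player to move can force a win within n of their moves."""
    if n == 0:
        return False
    for s1 in moves(state):                 # attacker tries SOME move...
        if is_win(s1):
            return True                     # immediate "mate"
        replies = list(moves(s1))
        # ...after which EVERY defender reply must still lose in n-1
        if replies and all(wins_in(s2, n - 1, moves, is_win)
                           for s2 in replies):
            return True
    return False

# Toy game standing in for chess: a pile of tokens, each player removes
# 1-3 per turn, and taking the last token wins.
def moves(pile):
    return [pile - k for k in (1, 2, 3) if pile - k >= 0]

def is_win(pile):
    return pile == 0    # the player who just moved took the last token

print(wins_in(5, 2, moves, is_win))   # pile of 5: forced win in 2 moves
```

    In real chess the branching factor makes the naive version explode, which is where the position-scoring and move-ordering work described above comes in.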

    There is a lot to get used to in working with LLMs, but the right ones can generally help with the code-writing process. I.e., there exist code outputs which, even when wrong, provide a faster path to the objective than if that code output did not exist. No matter how bad the code output, you are almost never dumber for having received it, unless perhaps you don’t understand the language well enough to know it’s bad.