
You can't update to Windows 12 yet, but here's when you might be able to, and what features to expect. Windows 12 could be Microsoft's replacement for Windows 11... in 2024. Yes, it's still very early to be giving this any serious thought, and nothing is official yet. But Windows' long history has us wondering what's in the queue for the next big update. Some changes we think Windows 12 could bring include UI enhancements, better Android app support, and increased reliance on the Settings app.

We should start by saying we can't verify yet that Windows 12 is even real. It's not that we think Microsoft will pull a Windows 9 move and skip over this version to land on W13—we just haven't heard anything official from the company. That said, we do think it's coming; it's just not clear when. There is one rumor we've seen that points to an upgraded OS. Tom's Hardware spotted a report from a German website claiming that Microsoft would begin working on Windows 12. Remarkably, that was in February 2022, less than six months after Windows 11 was first available to the public! We're not sure if that source is reliable, but whether this version is being actively developed or not, Windows 12 won't arrive for a while, considering how close we still are to the Windows 11 launch.

Looking back at the last several major Windows versions, there isn't a consistent timeline we can use to gauge when Windows 12 will come. But we can still guess. Before its public release, Windows 12 will probably follow a release structure similar to other versions of Windows. For example, the first Windows 11 Insider Preview build was available a few days after Microsoft announced the OS and a few months before its public release. A similar timeline is expected for this version, so you should be able to access a pre-release build of Windows 12 through the Windows Insider Program whenever that time comes.
There's a good chance Windows 12 will be offered as an optional, free update for Windows 11 users, and possibly Windows 10 users, who have a valid copy of Windows. If you need a new license, we think you'll be able to get the digital version from Microsoft's website, or through other retailers on a USB device. As with any big OS update, there will surely be countless minor updates and changes under the hood. This should translate to things like better overall performance, new icons and animations, and additional settings you can tweak.

Nothing is confirmed, and won't be for a while, but here are some bigger ideas that could make their way into Windows 12. The 2022 Microsoft Ignite keynote might have given us a glimpse at the Windows 12 user interface. The taskbar is only a little different from the existing one we've grown familiar with over the years; it just hovers slightly above the bottom of the screen. The search bar, however, has never existed at the top like that, and it's entirely detached from the taskbar. Windows Central claims that there are plans for other UI changes, too, like a new lock screen and notification center, all in an effort to create a consistent interface across Microsoft's product line that will work for both touch and keyboard users. And that's to be expected with any major release.

Below is a neat look at what Windows 12 could look like from Concept Central. It shows a new Start menu, an idea for a built-in messaging client called Windows Messenger, a redesigned volume hub, and desktop widgets. We also like this W12 concept from designer Kevin Kall. Follow this thread and more on OUR FORUM.

Homegrown chips remain behind for now, but for how much longer? China is set to get its hands on homegrown processors next year that purportedly rival the performance of AMD and Intel chips released over the past two years.

Chinese semiconductor company Loongson recently announced that its next-generation Godson CPU, the 3A6000, will sample with customers in the first half of 2023, according to a Chinese-language news report. That means a launch could follow later in the year.

Previous reports have indicated that Loongson's 3A6000 processor will allegedly provide performance that is on par with AMD's Ryzen 5000 CPUs and Intel's 11th-Gen Core CPUs, which both debuted in 2020.
This expectation is based on simulation test results provided by Loongson showing that the 3A6000 will improve single-core fixed-point performance by 37 percent and single-core floating-point performance by 68 percent over the previous-generation 3A5000, based on the SPEC CPU 2006 benchmark. As always, claims made by vendors should be taken with a grain of salt, and one benchmark is not indicative of how a processor will perform across a wide range of applications.
If the 3A6000's performance comes anywhere close to what Loongson claims, it means China is still quite behind when compared to the latest x86 processors from Intel and AMD, which released their latest Ryzen 7000 processors and 13th-Gen Core processors, respectively, this fall.
However, the performance claims also show how China has progressed with processor technology that is based on the homegrown, MIPS-derived LoongArch instruction set architecture. The company has previously claimed that its chips feature circuitry that helps with the emulation and binary translation of non-Loongson instruction sets such as x86 and Arm, as we have previously reported.

Sometime soon, Twitter will crash badly. Here's why. Elon Musk has taken over Twitter, and it appears he's already failing on his promise not to turn Twitter into a 'free-for-all hellscape.' But I'm not here to talk about his policy blunders. That's a story for another day. No, I'm here to predict that Twitter, the site, will soon crash. And once it fails, it won't be coming up for a while.

Why? Simple. You can't lay off half of the staff of a cloud-based social network and expect things to keep running smoothly for Twitter's 450 million monthly active users. Indeed, Twitter accounts are already failing in odd ways. For example, Benjamin Dreyer, author of "Dreyer's English" and copy chief of Random House, found that the vast majority of replies to one of his tweets were vanishing into the aether. He wasn't the only one.

Even Musk appears to have realized that maybe firing every other person was a mistake. On Monday, November 7th, he tried to get workers, especially software engineers, to return. Good luck with that. According to my Twitter sources and tweets on the site, they're not coming back. As Gergely Orosz, editor and author of the popular software engineering and management blog The Pragmatic Engineer, said, "Several people who were let go on Friday, then asked to come back were given less than an hour as a deadline. Software engineers who got this call ... all said 'no' and the only ones who could eventually say 'yes' are on visas."

Managers, according to my sources and Orosz, are "getting desperate, trying to call back more people. People are saying 'no' + more sr engineers are quitting." Orosz added, "None of this is surprising. As a rule of thumb, you get an additional half attrition after you lay off X% of people. Lay off 10%: expect another 5% to quit. Lay off 50%... not unreasonable to expect another 25% to quit." And you can't expect to replace social network and cloud experts with Tesla embedded system engineers and get anything done.
I'm a good technology and business writer, but no one in their right mind would hire me to write opera arias.

Let's look at Twitter's technology, shall we? Twitter runs on CentOS 7. This free Red Hat Enterprise Linux (RHEL) clone reaches the end of its life at the end of June 2024. The leading choices to replace it are RHEL 9, Rocky Linux, or AlmaLinux. But instead of working on that transition, the few system administrators Twitter has left are busy both getting the platform ready for Musk's laundry list of new features and keeping it patched and up to date. That's a problem. You see, unlike RHEL, where a big part of the attraction is that you can depend on Red Hat for first-rate support, CentOS, Rocky, and AlmaLinux are all primarily meant for companies with in-house staff who already know Linux servers backward and forward. That's no longer the case at Twitter. For more visit OUR FORUM.

Containers are meant to be immutable. Once the image is made, it is what it is, and all container instances spawned from it will be identical. The container is defined as code, so its contents, intents, and dependencies are explicit. Because of this, if used carefully, containers can help reduce supply chain risks. However, these benefits have not gone unnoticed by attackers. A number of threat actors have started to leverage containers to deploy malicious payloads and even scale up their own operations. For the Sysdig 2022 Cloud-Native Threat Report, the Sysdig Threat Research Team (Sysdig TRT) investigated what is really lurking in publicly available containers.

Docker Hub is the most popular free public-facing container registry. It houses millions of pre-made container images in convenient, self-contained packages with all required software installed and configured. Public registries also host official content and images signed by Verified Publishers, which adds some level of trust that they are not malicious and can be used safely. While public registries save developers time, if a user is not careful, there could be malicious aspects to the container they pull. With so many containers to choose from, it is easy to choose the wrong one. Threat actors also appreciate how much friction this technology removes from developer workflows. They count on the fact that many developers may not examine what exactly is being installed.

According to the Sysdig threat report, Docker Hub is being used by malicious actors to deliver malware, backdoors, and other unwelcome surprises to users and companies. One specific practice to watch out for is typosquatting, in which an image is disguised as legitimate while hiding something nefarious within its layers. Its name can be just a letter off the real thing, or the attacker might rely on a developer carelessly copying some instructions containing the bad path.
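Typosquatted image names can often be caught mechanically before a pull. As a rough sketch (the allow-list, threshold, and function name here are illustrative assumptions, not anything from the Sysdig report), a pre-pull check could flag names that are suspiciously close to, but not exactly, a name a team trusts:

```python
import difflib

# Hypothetical allow-list of official image names a team actually uses.
TRUSTED_IMAGES = ["nginx", "postgres", "redis", "python", "alpine", "ubuntu"]

def typosquat_warning(image_ref, cutoff=0.75):
    """Warn if an image name is close to, but not exactly, a trusted name.

    Strips any registry/user prefix and tag, then compares the bare name
    against the allow-list using difflib's similarity ratio.
    """
    base = image_ref.split("/")[-1].split(":")[0]
    if base in TRUSTED_IMAGES:
        return None  # exact match: looks legitimate
    close = difflib.get_close_matches(base, TRUSTED_IMAGES, n=1, cutoff=cutoff)
    if close:
        return f"'{base}' resembles trusted image '{close[0]}': possible typosquat"
    return None

print(typosquat_warning("ngnix:latest"))  # flags the one-letter-off name
print(typosquat_warning("nginx:1.25"))    # exact match, prints None
```

A real pipeline would combine a check like this with digest pinning and signature verification rather than rely on name similarity alone.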
Sysdig TRT found images shared by suspicious users with names crafted to look like popular open-source software in order to trick users. For example, popular packages like Drupal and Joomla have had their names used to disguise malicious payloads. Deploying these images means opening the doors of our environment to attackers, letting them pursue their goals or move internally to business-critical assets.

The Sysdig TRT analyzed more than 250,000 Linux images over several months. During the research, 1,777 images were found to contain various kinds of malicious IPs or domains and embedded credentials. Upon taking a closer look, we see that cryptomining images are the most common malicious image type. This is to be expected, because mining cryptocurrency on someone else's compute resources is the most prevalent type of attack targeting cloud and container environments today.

Embedded secrets in Docker images are the second most prevalent attack technique. In this case, attackers insert secrets in an image and use this information to get a foothold in your environment and then try to move laterally. For example, an SSH key can be added, which could allow for simple remote access, or AWS keys could be added to give attackers cloud capabilities. This highlights that secrets management is still a persistent challenge, and a battle we need to win. To learn more visit OUR FORUM.
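A minimal version of that kind of secret detection can be sketched as a regex pass over files extracted from image layers. The two patterns below (an AWS access key ID prefix and a PEM private-key header) are only illustrative examples; real open-source scanners such as TruffleHog and Gitleaks cover far more formats:

```python
import re

# Two illustrative secret patterns; real scanners cover many more formats.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
}

def scan_text_for_secrets(text):
    """Return the names of the secret patterns found in a text blob,
    e.g. a file pulled out of a container image layer."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

# Example: an environment file baked into an image layer. The key is the
# fake example key AWS uses in its own documentation.
leaked = "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\n"
print(scan_text_for_secrets(leaked))  # ['aws_access_key_id']
```

Running a pass like this over every layer of an image before pushing or deploying it is cheap, and catches exactly the SSH-key and AWS-key scenarios the report describes.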

The birth of the Internet in the 1990s and its subsequent expansion into every aspect of our lives began a digital revolution that has since refused to slow down. With it has come unimagined functionality, equipping us with instant access to information and communication. Those born before the Digital Enlightenment could never have imagined the power to cast aside unanswered questions with a mere "Google". Gazing across the digital expanse with our infantile stare, we failed to notice another set of eyes looking back at us. Those eyes belong to the world’s largest companies -- Big Tech giants like Facebook and Google -- who are continuously monitoring our movements across the Internet. Every time we open a website or App, our journeys are tracked and hunted down by a pack of algorithms designed to determine our interests -- products, ideas, and brands that we may feel positively towards. This data is coveted by advertisers; it is the elixir that enhances their powers of persuasion and consumer targeting and, inevitably, sales. This insatiable demand has propelled Big Tech’s rampant profiteering and extraction of consumer data. Stunned by the pace of digital expansion, consumers have failed to recognize how our data -- of which we are the sole producers -- is sold off to help influence our future decisions and expenditure. Although there have been some advancements made, such as the withdrawal of third-party cookies in some applications and regions, these have only come about due to societal pressure. Further change will not come until that pressure intensifies. We may have been the children of the Digital Age, but we must recognize that the Internet is no longer in its infancy, and neither are we. We must re-evaluate our perceptions with the experience of more than two decades behind us. 
We must consider how we fooled ourselves into believing that our data holds no personal value and that the sharing of our digital diaries is an inescapable part of the Internet. But what precisely is that value? To give an estimate, advertisers spend approximately £27 billion a year on digital marketing in the UK alone, which for the most part goes straight to Big Tech. This equates to around £80 per household per month. This staggering valuation leaves little doubt as to why our data has been so exploited -- it is a precious commodity, yet one in which its creators hold no share of the reward.

Advertisers are partially responsible for encouraging such pervasive and unjust looting of consumer data. Ultimately, it is the enormous paycheck that they have provided Twitter, Facebook, and co. that has encouraged this activity. Advertisers must play their part in changing this. But first, consumers must embolden themselves by resisting this digital hegemony. We must demand remuneration for our data by moving en masse to direct-to-consumer marketing platforms that return cash rewards in exchange for data. Advertisers must also facilitate this transition; with direct access to target consumers through such platforms, they have a unique opportunity to change their mission statement from selling to selling and rewarding, realizing this by offering consumers exclusive benefits and cash rewards for their data.

Such platforms allow consumers to determine the level of data access they wish to share, with rewards varying accordingly. For instance, a consumer may choose to provide copies of their shopping receipts while remaining anonymous for an entry-level cash reward. Meanwhile, the most active consumers help develop the platform's feedback loop and in exchange receive access to higher-value cash rewards. Within this setup exists an intrinsic market valuation for consumer data that compensates its creators on a quid pro quo basis. Follow this thread on OUR FORUM.


Some scholars of AI warn that the present technologies may never add up to "true" intelligence or "human" intelligence. But much of the world may not care about that. The British mathematician Alan Turing wrote in 1950, "I propose to consider the question, 'Can machines think?'" His inquiry framed the discussion for decades of artificial intelligence research. For a couple of generations of scientists contemplating AI, the question of whether "true" or "human" intelligence could be achieved was always an important part of the work.

AI may now be at a turning point where such questions matter less and less to most people. The emergence of something called industrial AI in recent years may signal an end to such lofty preoccupations. AI has more capability today than at any time in the 66 years since the term was first coined by computer scientist John McCarthy. As a result, the industrialization of AI is shifting the focus from intelligence to achievement.

Those achievements are remarkable. They include AlphaFold, a system from Google's DeepMind unit that can predict protein folding, and GPT-3, the text generation program from startup OpenAI. Both of those programs hold tremendous industrial promise irrespective of whether anyone calls them intelligent. Among other things, AlphaFold holds the promise of designing novel forms of proteins, a prospect that has electrified the biology community. GPT-3 is rapidly finding its place as a system that can automate business tasks, such as responding to employee or customer queries in writing without human intervention.

That practical success, driven by a prolific semiconductor field led by chipmaker Nvidia, seems like it might outstrip the old preoccupation with intelligence. In no corner of industrial AI does anyone seem to care whether such programs are going to achieve intelligence. It is as if, in the face of practical achievements that demonstrate obvious worth, the old question, "But is it intelligent?"
ceases to matter. As computer scientist Hector Levesque has written, when it comes to the science of AI versus the technology, "Unfortunately, it is the technology of AI that gets all the attention." To be sure, the question of genuine intelligence does still matter to a handful of thinkers. In the past month, ZDNET has interviewed two prominent scholars who are very much concerned with that question. Yann LeCun, chief AI scientist at Facebook owner Meta Platforms, spoke at length with ZDNET about a paper he put out this summer as a kind of think piece on where AI needs to go. LeCun expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as "true" intelligence, which includes things such as the ability of a computer system to plan a course of action using common sense. To learn more please visit OUR Forum.