When I first started working with Microsoft over 30 years ago, we were still using MS-DOS with a Windows overlay, and they made me the top launch analyst for Windows 95. Copilot, like that old GUI overlay, is a precursor to Windows 12. But unlike that old GUI, Copilot will actively help develop Windows 12, drawing on the massive amount of usage data Copilot collects on Windows 11 from customers and from Microsoft employee feedback. In a way, this is a bit closer to the .NET wave, when Microsoft responded to Netscape’s browser, made a hard pivot, and, for a time, took leadership in browsers. But again, because the tool will increasingly be used to create what is coming, the development cycle won’t only be faster; it is likely to produce a technology jump far bigger than either of those earlier events.

This is Microsoft Bob, Clippy, and Cortana done right. Each of those products was powerful in concept but failed because the technology of the time fell far short of both the requirements and the expectations of developers and users. AI is also being hyped ahead of its capabilities, but it is advancing orders of magnitude faster than anything we have seen before, suggesting that in a few short months the hype will be exceeded by the reality arriving in 2024 and 2025. To say this is big is a huge understatement. This effort’s eventual goal is to turn your PC into a personal and work companion, co-worker, mentor, mentee, and, I expect for some of us, a friend. Let me explain.

I’m going to start with security because, this morning, I was reading an article about a ransomware gang that, after not being paid, reported its victim to the SEC because the victim had not disclosed the attack as required by law. This adds another layer of pain for the victim: it effectively makes them a criminal for failing to report that they were attacked, turning a law enforcement agency into a tool for the criminal organization. That is so twisted.
Part of the problem with security breaches, and particularly ransomware, is asymmetry: the attacker can generally work for an unlimited amount of time, free to make as many mistakes as needed before executing the attack successfully, while the defender’s tools only allow the security organization to respond once the attack has succeeded.

One of the most compelling demonstrations was from Melissa Grant, whom I’ve known for years. She and her partner showed how a user of a Copilot-enabled PC could simply ask it, in natural language (no command phrases to learn), to write copy, create pictures, format slides and documents, and copy edit, producing higher-quality documents and slides more quickly. Her demonstration supported a Wharton study that found a 30% initial productivity improvement when using tools like this, rising to as much as an 80% improvement once users became familiar with the AI. What the Wharton study did not address was how much more improvement would result from AI advancing as fast as it currently is, suggesting both of those figures are understated relative to the more advanced AIs rolling out over the next several years.

While I expect much of the PC of the future will reside in the cloud, I expect Windows 12 hardware to evolve toward something like the Cortana demonstration of a few years back, where you increasingly interface with your PC as if it were a person. Much of the information that would be on a screen today will instead be projected into a type of floating VR display that resizes itself based on need. We’ll develop a far deeper relationship with our hardware, which will evolve quickly from something like an ever-smarter digital pet into a digital friend. Learn more by visiting OUR FORUM.
On the internet, people need to worry about more than just opening suspicious email attachments or entering their sensitive information into harmful websites: they also need to worry about their Google searches. That’s because last year, as revealed in our 2024 ThreatDown State of Malware report, cybercriminals flocked to a malware delivery method that doesn’t require them to know a victim’s email address, login credentials, or personal information. Instead, cybercriminals need only fool someone into clicking on a search result that looks remarkably legitimate.

This is the work of “malicious advertising,” or “malvertising” for short. Malvertising is not malware itself. Instead, it’s a sneaky process for placing malware, viruses, or other cyber infections on a person’s computer, tablet, or smartphone. The malware that eventually slips onto a person’s device comes in many varieties, but cybercriminals tend to favor malware that can steal a person’s login credentials and information. With this newly stolen information, cybercriminals can then pry into sensitive online accounts that belong to the victim.

But before any of that digital theft can occur, cybercriminals must first ensnare a victim, and they do this by abusing the digital ad infrastructure underpinning Google search results. Think about searching on Google for “running shoes”—you’ll likely see ads for Nike and Adidas. A Google search for “best carry-on luggage” will invariably produce ads for the consumer brands Monos and Away. And a Google search for a brand like Amazon will show, as expected, ads for Amazon. Cybercriminals know this, and in response, they’ve created ads that look legitimate but instead direct victims to malicious websites that carry malware. The websites themselves also bear a striking resemblance to whatever product or brand they’re imitating, to maintain the charade of legitimacy.
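At its core, the copycat trick described above comes down to a mismatch between the brand a landing page imitates and the domain it is actually served from. The sketch below is a naive, hypothetical heuristic for spotting that mismatch; the brand list and function name are illustrative assumptions, not part of any real ad-security product, and real defenses rely on curated domain intelligence rather than string checks.

```python
from urllib.parse import urlparse

# Hypothetical allow-list mapping brands to their official domains.
OFFICIAL_DOMAINS = {
    "amazon": "amazon.com",
    "nike": "nike.com",
}

def looks_like_copycat(brand: str, landing_url: str) -> bool:
    """Flag an ad landing page whose host is not the brand's official domain.

    Accepts the official domain itself or any of its subdomains;
    anything else imitating the brand is treated as suspicious.
    """
    host = urlparse(landing_url).hostname or ""
    official = OFFICIAL_DOMAINS.get(brand.lower())
    if official is None:
        return False  # unknown brand: this heuristic can't judge it
    return not (host == official or host.endswith("." + official))

# A lookalike such as "amazon-deals-online.com" is flagged;
# "www.amazon.com" is not.
```

A check like this would catch the crude lookalikes, but note that it says nothing about ads whose display URL is spoofed or whose redirect chain only turns malicious after the click, which is part of why malvertising remains effective.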
From these websites, users download what they think is a valid piece of software but is, in fact, malware that leaves them open to further attacks. Malvertising is often understood as a risk to businesses, but the copycat websites created by cybercriminals can, and often do, impersonate popular brands for everyday users, too.

If Google ads have been around for over a decade, why are they only being abused by cybercriminals now? The truth is that malvertising has been around for years, but a particular resurgence was recorded more recently. In 2022, cybercriminals lost access to one of their favorite methods of delivering malware. That summer, Microsoft announced that it would finally block “macros” embedded in files downloaded from the internet. Macros are essentially instructions that users can program so that multiple tasks can be bundled together. The danger, though, was that cybercriminals would pre-program macros within certain Microsoft Word, Excel, or PowerPoint files and then send those files as malicious email attachments. Once users downloaded and opened those attachments, the embedded macros would trigger a set of instructions directing the person’s computer to install malware from a dangerous website. Macros were a scourge for cybersecurity for years because they were effective and easy to deliver. But when Microsoft restricted macro capabilities in 2022, cybercriminals needed another malware delivery channel, and they focused on malvertising.

Today’s malvertising is increasingly sophisticated, as cybercriminals can create and purchase online ads that target specific types of users based on location and demographics. Concerningly, modern malvertising can even evade basic fraud detection: cybercriminals can build websites that determine whether a visitor is a real person or simply a bot trawling the web to find and flag malicious activity. Learn more by visiting OUR FORUM.
The consumer champion looked at scams appearing on online platforms and found blatant fraudulent advertising, from copycats of major retail brands to investment scams and ‘recovery’ scams, which target previous victims of scams. Scam adverts using the identities of celebrities such as Richard Branson, despite them having nothing to do with the ads, also continue to target consumers.

In November and December 2023, the consumer champion combed the biggest social media sites: Facebook, Instagram, TikTok, X (formerly Twitter) and YouTube. Researchers also looked at the two biggest search engines, Google and Bing. Which? researchers could easily find a range of obvious scam adverts, even though the landmark Online Safety Act had received Royal Assent weeks earlier. The Act will not officially come into force on scam adverts until after Ofcom finalizes the codes of practice, which the regulator will use to set the standard platforms must meet.

Which? is concerned the findings suggest online platforms may not be taking scam adverts seriously enough and will continue to inadvertently profit from the misery inflicted by fraudsters until the threat of multi-million-pound fines becomes a reality. This is why Ofcom must make sure that its online safety codes of practice prioritize fraud prevention and takedown. While it is positive the government has passed key legislation such as the Online Safety Act, it is now time to appoint a dedicated fraud minister to make fighting fraud a national priority.

Which? used a variety of methods, including setting up fresh social media accounts for the investigation. Researchers tailored these accounts to interests frequently targeted by scam advertisers, such as shopping with big-name retailers, competitions and money-saving deals, investments, weight-loss gummies, and getting help to recover money after a scam.
Researchers also scoured ad libraries – the searchable databases of adverts that are available for Facebook, Instagram, and TikTok – and investigated scams reported by some of the 26,000 members of the Which? Scam Action and Alerts community on Facebook. Which? also captured scams researchers came across in the course of everyday browsing and scrolling for personal use.

Researchers collected more than 90 examples of potentially fraudulent adverts. Whenever they were confident something was a scam and in-site scam reporting tools were available, they reported the adverts. Most platforms did not provide updates on the outcome of these reports. The exception was Microsoft, the parent company of Bing, which confirmed an advert had violated its standards and said it would act, but did not specify how. Which? found what it considered to be clear examples of scam adverts on Bing, Facebook, Google, Instagram, and X.

On Meta’s ad library, Which? found Facebook and Instagram hosting multiple copycat adverts impersonating major retailers around the time of the Black Friday sales, including electricals giant Currys plus clothing brands River Island and Marks & Spencer. Each advert attempted to lure victims to bogus sites in a bid to extract their payment details. On YouTube and TikTok, Which? found sponsored videos in which individuals without Financial Conduct Authority authorization gave often highly inappropriate investment advice. While these are not necessarily scam videos and would not come under the remit of the new laws, they are nonetheless extremely concerning. Which? has shared these examples with the platforms.

One advert impersonating Currys, appearing on both Facebook and Instagram, claimed to offer ‘90% off on a wide range of products’. However, it clicked through to a completely different URL hosting a scam site designed to snare shoppers.
On X, a dodgy advert led to a fake BBC website featuring an article that falsely used Martin Lewis to endorse a company called Quantum AI, which promotes itself as a crypto get-rich-quick platform. Beneath the advert, the platform displayed a note with context added by other site users, a feature known as Community Notes. It warned: ‘This is yet another crypto scam using celebrities’. Despite the warning, the advert remained live. For more visit OUR FORUM.