
Once upon a time, signing into sites and apps was simple. You remember those days, right? (They really weren’t that long ago, though by tech standards, it’s been roughly seven centuries.) All you’d do is remember a single username and password — or maybe put it on a Post-it and stick it to the bottom of your 11″ oatmeal-gray 7,000-lb. monitor monster, if you were really feeling fancy — and that’s it: You’d be ready to rush into whatever site or service you wanted, whenever the need arose.

Now, it’s a whole other story. If you’re following best practices, you’ve got unique, complex alphanumeric passwords for every single site and service you visit — managed by a password manager and supplemented by two-factor authentication. And if that isn’t enough, you’re increasingly being prompted to drop all of those elements and instead rely on a newer and even more mystifying method of authentication called a passkey. Whether you’re a gadget-loving technophile or a perpetually befuddled technophobe — and whether you’re an individual tech user or part of a broader corporate organization — the one consistent reality about passkeys is that they’re confusing as all get-out. Their aim may be to simplify security around sign-ins, but in actuality, they create all sorts of uncertainty and unanswered questions.

Let’s start at the beginning: Passkeys are a relatively recent security feature that lets you log in to an account simply by authenticating on a device with your fingerprint or face scan — or, in some cases, another screen lock mechanism (e.g., the PIN or passcode you put into your device when first firing it up). In a sense, it’s kind of like two-factor authentication — only instead of typing in a traditional password and then verifying it’s you as a second step, you’re basically just jumping right to that second step, with the knowledge that such action shows you’ve already unlocked an approved device and demonstrated your identity.

The idea is that passwords are inherently vulnerable, since they’re text-based codes that you type in or store somewhere and that someone else could potentially access or figure out (or find in one of the endless series of breaches we hear about these days). With a passkey, that risky variable is eliminated. Instead, you’re signing in solely based on the fact that you’ve already unlocked your phone or computer — ideally using some manner of biometric authentication, but at the very least using a PIN or passcode there — and thus have already proven who you are. And you set up a different passkey for each site or service, eliminating the possibility of reused credentials. Plus, you personally have that device in front of you, which means a hacker couldn’t crack the code and pretend to be you without physically taking your device and being able to get past its lock screen.

On a technical level, passkeys are based on public key cryptography — a fancy way of saying they rely on a pair of keys, one that’s public and one that’s stored privately on your local device — which makes them exceptionally difficult to crack or plunder. That’s in large part because of the way the private key piece of the puzzle works: In short, the site you’re signing into never sees your private key and only receives confirmation that it’s present and valid. The key itself remains on your device, with encryption keeping it unreadable until the moment you authenticate. The actual passkey data is never transferred during the login, and there’s no real mechanism to even copy and paste it anywhere, like you would with a password, so the potential for a hacker to exploit it is pretty darn slim.

The one extra wrinkle is that for most people and purposes, the underlying (and encrypted) passkey data is synced to a service that’s connected to a secure account you own and thus can use to sign back in and restore the passkey on a different device. That’s the case with the Google Password Manager system associated with Android, with the iCloud Keychain system associated with iOS, and with most third-party password managers such as 1Password and Bitwarden, too. For more visit OUR FORUM.
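To make that challenge-and-response flow a bit more concrete, here is a heavily simplified sketch in Python using the third-party cryptography package. It is an illustration only: real passkeys follow the WebAuthn/FIDO2 standards, which add attestation, relying-party checks, and other structure, and the fingerprint, face, or PIN gate that unlocks the private key is simply assumed here.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Registration: the device creates a key pair for this one site. ---
# The private key never leaves the device; only the public key is sent
# to the site and stored alongside the user's account.
device_private_key = Ed25519PrivateKey.generate()
site_stored_public_key = device_private_key.public_key()

# --- Login: the site sends a fresh random challenge. ---
challenge = os.urandom(32)

# The device unlocks the private key (after the biometric/PIN check,
# omitted here) and signs the challenge with it.
signature = device_private_key.sign(challenge)

# The site verifies the signature using the public key it stored at
# registration. It never sees the private key, only proof the device holds it.
try:
    site_stored_public_key.verify(signature, challenge)
    print("Signature valid: device holds the private key, sign-in allowed")
except InvalidSignature:
    print("Signature invalid: sign-in rejected")
```

The takeaway mirrors the description above: the site keeps only the public key, and at sign-in it checks a one-time signature over a random challenge, so nothing reusable ever crosses the wire.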

Microsoft has confirmed a new known issue causing delivery delays for June 2025 Windows security updates due to an incorrect metadata timestamp. As Redmond explains in recent advisory updates, this bug affects Windows 10 and Windows 11 systems in environments with quality update deferral policies that enable admins to delay update installation on managed devices. While update deployment delays are an expected result when using such policies, the wrong timestamp for the June security updates will postpone them beyond the period specified by administrators, potentially exposing unpatched systems to attacks.

"Some devices in environments where IT admins use quality update (QU) deferral policies might experience delays in receiving the June 2025 Windows security update," Microsoft explains. "Although the update was released on June 10, 2025, its update metadata timestamp reflects a date of June 20, 2025. This discrepancy might cause devices with configured deferral periods to receive the update later than expected."

While still investigating this known issue, Microsoft has provided Windows admins with several temporary workarounds to accelerate deployment of the June 2025 updates until a fix is available. To achieve this, Redmond recommends creating an expedited deployment policy to bypass deferral settings and ensure that the updates are delivered immediately in organizations using Windows Autopatch. As an alternative, admins can modify deferral configurations or deployment rings to minimize the delay for impacted devices.

"This delay issue affects only the timing of update availability for organizations using QU deferral policies and doesn't impact the quality or applicability of the update," Microsoft added. "We will not change the metadata value from the current June 20, 2025, value. This workaround is the final resolution we will provide for this issue."

Earlier this month, Microsoft rolled out a configuration update (KB5062324) to address a known issue that caused update failures after the scan for Windows updates stopped responding on some Windows 11 systems. In May, Redmond fixed another bug blocking Windows 11 24H2 feature updates from being delivered via Windows Server Update Services (WSUS) after installing the April 2025 security updates. One month earlier, it addressed what it described as a "latent code issue" that was causing some PCs to be upgraded to Windows 11 overnight despite Intune policies designed to block Windows 11 upgrades.

In May, the company also revealed that it aims to update all software on your PC via a new update orchestration platform built on top of the existing Windows Update infrastructure, a platform that aims to unify the updating system for all apps, drivers, and system components across all Windows systems. Learn more by visiting OUR FORUM.
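To see why a ten-day timestamp error matters, here is a rough sketch of how a deferral window interacts with the metadata date. The June 10 and June 20 dates come from Microsoft's advisory; the 14-day deferral period and the offer-date formula are simplified assumptions for illustration, not how Windows Update computes things internally.

```python
from datetime import date, timedelta

def offer_date(metadata_timestamp: date, deferral_days: int) -> date:
    """Simplified model: a deferral policy holds an update until this many
    days after the date stamped in the update's metadata."""
    return metadata_timestamp + timedelta(days=deferral_days)

actual_release = date(2025, 6, 10)   # when the June update really shipped
wrong_metadata = date(2025, 6, 20)   # the timestamp Microsoft says it will keep
deferral_days = 14                   # hypothetical admin-configured deferral

expected = offer_date(actual_release, deferral_days)   # 2025-06-24
observed = offer_date(wrong_metadata, deferral_days)   # 2025-07-04
print(f"Expected offer date: {expected}")
print(f"Observed offer date: {observed} ({(observed - expected).days} days late)")
```

Under that simplified model, a device set to wait 14 days would pick up the update around July 4 instead of June 24, which is exactly the kind of gap the expedite policy or adjusted deployment rings are meant to close.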

Whether it’s the FBI warning about smartphone attacks leveraging fears of deportation in the U.S. foreign student population, recommendations to use a secret code as AI-powered phishing campaigns evolve, instant takeover attacks targeting Meta and PayPal users, or confirmed threats aimed at compromising your Gmail account, there is no escaping the cyber-scammers. Indeed, the Global Anti-Scam Alliance, whose advisory board includes the head of scam prevention at Amazon, Microsoft’s director of fraud and abuse risk, and the vice president of security solutions with Mastercard, found that more than $1 trillion was lost globally to such fraud in 2024. But do not despair: despite the Federal Trade Commission warning of a 25% year-on-year increase in losses, Google is fighting back. Here’s what you need to know.

There can be no doubt that online scams, of all flavors, are not only increasing in volume but also evolving. We’ve seen evidence of this in the increasing availability and cost-effectiveness of employing AI to empower such threat campaigns. No longer the sole stomping ground of solo actors and chancers looking to make a few bucks here and there, the scams threat landscape is now dominated by organized international groups operating at scale. The boundary between online and physical, offline fraud is blurring. Hybrid campaigns are a reality, combining phone calls with online calls to action.

The Global Anti-Scam Alliance State of Scams Report, published in November 2024, revealed the true cost of such crimes: $1.03 trillion globally in just 12 months. A March 2025 report from the Federal Trade Commission showed that U.S. consumers alone had lost $12.5 billion last year, up 25% from 2023. And that GASA report also found that only 4% of victims worldwide reported being able to recover their losses. Something has to be done, and Google’s Trust and Safety teams, responsible for tracking and fighting scams of all kinds, are determined that they are the people to help do it.

“Scammers are more effective and act without fear of punishment when people are uninformed about fraud and scam tactics,” said Karen Courington, Google’s vice president of consumer trusted experiences, trust & safety. In addition to tracking and defending against scams, Google’s dedicated teams also aim to inform consumers by analyzing threats and sharing their observations, along with mitigation advice. The May 27 Google fraud and scams advisory does just that, describing the most pressing of recent attack trends that have been identified. These are broken down into five separate scams, each complete with mitigating best-practice recommendations, as follows:

Customer support scams, often displaying fake phone numbers while pretending to be legitimate help services, are evolving and exploiting victims through a combination of social engineering and web vulnerabilities, Google warned. Along with the protection offered by Gemini Nano on-device to identify dangerous sites and scams, Google advised that users should “seek out official support channels directly, avoid unsolicited contacts or pop-ups and always verify phone numbers for authenticity.”

Malicious advertising scams, often using lures such as free or cracked productivity software and games, have also evolved. “Scammers are setting their sights on more sophisticated users,” Courington said, “those with valuable assets like crypto wallets or individuals with significant online influence.” Google uses AI and human reviews to combat the threat and block ad accounts involved in such activity. Only download software from official sources, beware of too-good-to-be-true offers, and pay particular attention to browser warnings when they appear, Google said.

Google’s teams have seen an increase in fake travel websites as the summer vacations get closer, usually luring victims with cheap prices and unbelievable experiences. Again, these will likely impersonate well-known brands, hotels, and agencies. Google advised users to use its tools, such as “About this result,” to verify website authenticity. “Avoid payment methods such as wire transfers or direct bank deposits,” Courington said, “especially if requested via email or phone.”

There are some people who just demand to be listened to, not through the loudness of their voice or the position of power they find themselves in, but rather because of the sheer experience they bring to the table. When it comes to the phishing threat, one of these people has to be Paul Walsh. I have been around the online business more than long enough to remember when, in 2004, Walsh was tasked with refining World Wide Web creator Tim Berners-Lee’s vision of one web. This was when the W3C Mobile Web Initiative was co-founded by Walsh, who also happened to be head of the New Technologies Team at AOL in the 90s. See, I told you I had been around a long time, and AOL wasn’t even my first rodeo on the internet.

The point being that Walsh has huge experience when it comes to the phishing threat, having helped launch AOL’s Instant Messenger AIM client and becoming one of the first people online to fall victim to impersonation attacks as a result. But it doesn’t end there: “When I co-founded the W3C standard for URL Classification and Content Labeling in 2004,” Walsh told me, “I co-invented the very concept of classifying/labeling folders, user accounts, etc., on the web.” Now he’s the CEO at MetaCert, a business that seeks to cut off the phishing threat directly at its source with a network-based solution for carriers to shield subscribers from SMS phishing attacks.

Walsh told me that when it comes to phishing protection, threat intelligence is a fundamentally flawed method. “Relying on historical data is useless—new URLs evade existing intelligence by design,” Walsh advised, adding that it is, in his opinion, the biggest threat in cybersecurity currently. While the advice from Google is certainly not to be ignored by users, Walsh does not entirely agree with it. Treating suspicious links and unexpected attachments as the red flags to watch for, Walsh claimed, is not only a poor warning strategy but positively harmful in 2025. With SMS taking over from email as the primary attack vector for phishing campaigns in 2024, Walsh said that “authenticating URLs before delivery” is the only way to ensure they are safe, “without relying on outdated historical data or AI.” For complete details visit OUR FORUM.
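For a sense of what Walsh is arguing, here is a toy sketch in Python contrasting a blocklist built from historical threat intelligence with verification against a registry of known-legitimate domains before a message is delivered. Everything in it (the domain list, the sample URLs, the function names) is hypothetical and is not MetaCert's actual implementation; it only illustrates why a never-before-seen URL slips past the first approach but not the second.

```python
from urllib.parse import urlparse

# Hypothetical registry of sender domains a carrier has verified as legitimate.
VERIFIED_DOMAINS = {"paypal.com", "meta.com", "google.com"}

# Historical threat intelligence: only URLs that have already been reported.
KNOWN_BAD_URLS = {"http://paypa1-secure.example/login"}

def blocklist_check(url: str) -> bool:
    """Flags a URL only if it has been seen and reported before."""
    return url in KNOWN_BAD_URLS

def is_verified(url: str) -> bool:
    """Treats a URL as safe only if its host belongs to a verified domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in VERIFIED_DOMAINS)

# A brand-new phishing URL that no blocklist has ever seen.
brand_new_phish = "http://paypal-account-review.example/verify"

print(blocklist_check(brand_new_phish))  # False: historical data misses it entirely
print(is_verified(brand_new_phish))      # False: fails verification, so hold or flag it
```

In Walsh's framing, that pre-delivery verification step is what the traditional red-flag advice cannot provide.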