Here we go again. Google has confirmed another attack on Gmail users that combines inherent vulnerabilities in the platform with devious social engineering. The net result is a flurry of headlines and viral social media posts followed by an urgent platform update. Google's security warning is clear: users should stop using their passwords.

This latest attack has been bubbling on X and in a number of crypto outlets, given that the victim was an Ethereum developer. Nick Johnson says he was "targeted by an extremely sophisticated phishing attack," one which "exploits a vulnerability in Google's infrastructure, and given their refusal to fix it, we're likely to see it a lot more." The attack started with an email from a legitimate Google address warning Johnson that Google had been served with a subpoena for the contents of his Google account. "This is a valid, signed email," Johnson says, "sent from [a no-reply Google address]. It passes the DKIM signature check, and Gmail displays it without any warnings - it even puts it in the same conversation as other, legitimate security alerts."

This is clever: technically, the attackers have found a way to have Google send a correctly titled email to themselves, which they can then forward on to others with the same legitimate DKIM signature intact, even though it's a copy of the original. But the objective is simpler: a credential phishing page that mimics the real thing.

"We're aware of this class of targeted attack," Google has now confirmed in a statement, "and have been rolling out protections for the past week. These protections will soon be fully deployed, which will shut down this avenue for abuse. In the meantime, we encourage users to adopt two-factor authentication and passkeys, which provide strong protection against these kinds of phishing campaigns." That's all that matters.
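The replay trick works because DKIM only proves that the signed headers and body are unaltered since signing; it says nothing about who forwarded the message to you. One mitigation mail software can apply is a DMARC-style alignment check: did DKIM pass, and does the signing domain match the visible From: domain? Below is a minimal illustrative sketch in Python using the standard-library email module. The sample headers and the dkim_aligned helper are invented for illustration; this is not Gmail's actual logic, and a replayed Google-signed message would in fact still align, which is why context still matters.

```python
import re
from email import message_from_string
from email.utils import parseaddr

def dkim_aligned(raw_message: str) -> bool:
    """Illustrative check: DKIM passed AND the signing domain (header.d=)
    matches the From: domain -- roughly DMARC-style alignment."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()

    auth = msg.get("Authentication-Results", "")
    passed = "dkim=pass" in auth
    m = re.search(r"header\.d=([\w.-]+)", auth)
    signing_domain = m.group(1).lower() if m else ""

    return passed and signing_domain == from_domain

# Sample message (invented headers): a message genuinely signed by the
# From: domain passes the alignment check.
legit = (
    "From: Google <no-reply@google.com>\r\n"
    "Authentication-Results: mx.example.com; dkim=pass header.d=google.com\r\n"
    "Subject: Security alert\r\n"
    "\r\n"
    "body\r\n"
)
print(dkim_aligned(legit))  # True
```

The catch highlighted by this attack: a replayed message keeps its original, valid signature, so signature and alignment checks alone are not enough. That is precisely why Google's advice shifts the defense to the login step itself.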
Stop using your password to access your account, even if you have two-factor authentication (2FA) enabled, and especially if that 2FA is SMS-based. It's now too easy to trick you into giving up your login and password and then bypassing or stealing the SMS codes as they arrive on your device. There's nothing to stop an attacker from using your password and 2FA code on their own device.

What does stop them is a passkey. A passkey is linked to your own physical device and requires your device security to unlock your Google account. That means an attacker who does not have your device cannot log in. While Google has not yet gone as far as deleting passwords completely (which is Microsoft's stated intention), you will know not to use your password to sign in, which will stop a malicious phishing page from stealing it.

The cleverness in this latest attack, added to others we have seen in recent months, is easily thwarted by updating your account security. These attacks are getting ever more sophisticated, and AI will enable this level of "targeting" to be done at massive scale. As Microsoft warns, "AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate."

This latest Google scam, exploiting weaknesses in its core infrastructure to mask an attack, is now getting more media pickup. Unfortunately, most of this coverage misses the point. Google has been very clear each time such stories make headlines, emphasizing two key points. First, the company will never reach out proactively to users to warn them about a support or security issue or to recommend they take action to stay safe. And second, enhancing account security per its advice will keep those accounts safe. Learn more by visiting OUR FORUM.

Spam and phishing emails are an everyday occurrence that everyone is probably familiar with and finds annoying.
These intrusive messages often clog up your inbox and require tedious deletion or filtering. Worse still, those who act carelessly run the risk of falling victim to scammers. But, as strange as it may sound, spam emails can actually be useful to the potential victims scammers are targeting, which is why you shouldn't simply delete them.

All major mail providers now rely on complex, adaptive spam filters that are getting better and better at distinguishing between wanted and unwanted emails. An important prerequisite for this learning effect is that the software must be able to practice, and this is exactly what spam mails are useful for. Instead of deleting spam mails, we recommend you proceed as follows:

If you use an email client such as Outlook or Thunderbird: Manually mark relevant messages as spam (or as "junk") if your email program hasn't already done so itself. This trains the software's spam filter, and you will (hopefully) have to deal with annoying spam mails less and less in future because the automatic filter will improve.

If you retrieve emails in a browser: Depending on which provider you use, you can mark the annoying messages as spam in different ways. Of course, you only need to make this effort if the junk emails are displayed as normal emails in your inbox and haven't already ended up in the spam folder. You can mark such messages in the inbox (tick the box) and send them directly to the spam folder using the "Spam" or "Junk" command in the menu bar. This also works with individual (open) emails, where the path to the spam bin is sometimes via a "Move" button above the message text.

Both privately and professionally, these procedures promise less rubbish mail in the long term. The senders of such messages are also blacklisted more quickly.
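The "learning effect" described above is, in most filters, a statistical model that updates every time you label a message. A toy naive Bayes sketch in Python shows the idea; the tiny corpus and the NaiveBayesSpamFilter class are invented for illustration and bear no resemblance to any provider's production filter:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy word-frequency spam filter that improves as messages are labeled."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def mark(self, text: str, label: str) -> None:
        # Each "mark as spam/junk" click adds evidence to the model.
        self.counts[label].update(text.lower().split())
        self.totals[label] += 1

    def spam_score(self, text: str) -> float:
        # Log-probability ratio with add-one smoothing:
        # positive leans spam, negative leans legitimate mail.
        score = math.log((self.totals["spam"] + 1) / (self.totals["ham"] + 1))
        for word in text.lower().split():
            p_spam = (self.counts["spam"][word] + 1) / (
                sum(self.counts["spam"].values()) + 2)
            p_ham = (self.counts["ham"][word] + 1) / (
                sum(self.counts["ham"].values()) + 2)
            score += math.log(p_spam / p_ham)
        return score

f = NaiveBayesSpamFilter()
f.mark("win a free prize now", "spam")   # user clicks "Spam"
f.mark("meeting notes attached", "ham")  # user leaves in inbox
print(f.spam_score("free prize") > 0)    # True
```

Real filters add many more signals (sender reputation, URLs, headers), but the mechanism is the same: your "Spam" clicks are the training data, which is exactly why marking is more useful than deleting.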
If you use a shared mail server in the office, you may be doing your colleagues a great service by sparing them the same scam messages that you've already marked as spam and sorted out yourself.

Many providers and email clients now offer an easy way to unsubscribe from unwanted advertising emails, newsletters, and the like with a quick click directly in your inbox. This function is useful if you don't want to remove yourself from mailing lists by hand or aren't interested in the advertising they contain. However, this well-intentioned function also harbors a danger, at least in the case of fraudulent messages, because you inadvertently inform the sender that your email address actually exists and is actively monitored. Spam crooks send millions of emails every day, sometimes indiscriminately to randomly generated recipient addresses. They are often unaware of whether the accounts they write to really exist or whether messages are read there, until users click the unsubscribe button. The scammers then receive a request to stop writing to the email address in question, whereupon, of course, they do exactly the opposite.

Spammers and scammers are becoming more and more sophisticated, and even experienced users can be taken in by these brazen crooks. If you want to protect yourself better, you can turn to professional security software, which makes life difficult for the scoundrels on the net. Learn useful tips to protect yourself by visiting OUR FORUM.

Not long ago, AI seemed like a futuristic idea. Now, it's in everything. What happened? This AI thing has taken off really fast, hasn't it? It's almost like we mined some crashed alien spacecraft for advanced technology, and this is what we got. I know, I've been watching too much *Stargate*. But the hyper-speed crossing-the-chasm effects of generative AI are real. Generative AI, with tools like ChatGPT, hit the world hard in early 2023.
All of a sudden, many vendors are incorporating AI features into their products, and our workflow patterns have changed considerably. How did this happen so quickly, essentially transforming the entire information technology industry overnight? What made this possible, and why is it moving so quickly? In this article, I look at ten key factors that contributed to the overwhelmingly rapid advancement of generative AI and its adoption into our technology stacks and workday practices. As I see it, the rapid rise of AI tools like ChatGPT and their widespread integration came in two main phases. Let's start with Phase I.

Researchers have been working with AI for decades. I did one of my thesis projects on AI more than 20 years ago, launched AI products in the 1990s, and have worked with AI languages for as long as I've been coding. But while all of that was AI, it was incredibly limited compared to what ChatGPT can do. As much as I've worked with AI throughout my educational and professional career, I was rocked back on my heels by ChatGPT and its brethren.

While AI has been researched and used for decades, for most of that time it had some profound limitations. Most AIs had to be pre-trained with specific materials to create expertise. In the early 1990s, for example, I shipped an expert system-based product called *House Plant Clinic* that had been specifically trained on house plant maladies and remedies. It was very helpful as long as the plant and its related malady were in the training data. Any situation that fell outside that data was a blank to the system.

The transformer approach gave researchers a way to train AIs on broad collections of information and determine context from the information itself. That meant that AIs could scale to train on almost anything, which enabled models like OpenAI's GPT-3.5 and GPT-4 to operate with knowledge bases that encompassed virtually the entire Internet and vast collections of printed books and materials.
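The transformer's trick of "determining context from the information itself" is scaled dot-product self-attention: each token's representation becomes a weighted mix of every other token's, with the weights computed from the data rather than hand-built rules. A bare-bones pure-Python sketch, with tiny invented toy vectors rather than any real model's weights:

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention over a toy sequence.
    Each output vector is a context-weighted mix of all value vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this token's query to every token's key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much this token attends to each other
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three 2-dimensional token embeddings (illustrative numbers only).
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = self_attention(x, x, x)  # queries = keys = values = the embeddings
print([round(v, 3) for v in ctx[0]])
```

Production transformers add learned query/key/value projections, many attention heads, and many stacked layers, but the context mechanism that let models scale to Internet-sized training data is this same weighted mixing.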
By the early 2020s, a number of companies and research teams had developed software systems based on the transformer model and world-scale training datasets. But all of those sentence-wide transformation calculations required enormous computing capability. It wasn't just the need to perform massively parallel matrix operations at high speed; it was also the need to do so while keeping power and cooling costs at a vaguely practical level.

Early on, it turned out that NVIDIA's gaming GPUs were capable of the matrix operations needed by AI (gaming rendering is also heavily matrix-based). But then NVIDIA developed its Ampere and Hopper series chips, which substantially improved both performance and power utilization. Likewise, Google developed its TPUs (Tensor Processing Units), which were specifically designed to handle AI workflows. Microsoft and Amazon also developed custom chips (Maia and Graviton) to help them build out their AI data centers.

And then came ChatGPT. It's a funny name, and it took a while for most of us to learn it. ChatGPT literally means a chat program that's generative, pre-trained, and uses transformer technology. But despite a name that only a geek could love, in early 2023 ChatGPT became the fastest-growing app of all time.

OpenAI made ChatGPT free for everyone to use. Sure, there were usage limitations in the free version. It was also as easy (or easier) to use than a Google search. All you had to do was open the site and type in your prompt. That's it. And because of the three innovations we discussed earlier, ChatGPT's quality of response was breathtaking. Everyone who tried it suddenly realized they were touching the future. Further details are posted on OUR FORUM.