
After spending more than a decade building massive profits off targeted advertising, Google announced on Wednesday that it's planning to do away with any sort of individual tracking and targeting once the cookie is out of the picture. In a lot of ways, this announcement is just Google's way of doubling down on its long-running pro-privacy proclamations, starting with the company's initial 2020 pledge to eliminate third-party cookies in Chrome by 2022. The privacy-protective among us can agree that killing off these omnipresent trackers and targeters is a net good, but it's not time to start cheering the privacy bona fides of a company built on our data, as some were inclined to do after Wednesday's announcement.

As the cookie-kill date creeps closer, we've seen a few major names in the data-brokering and adtech business, the shady third parties that profit off of cookies, try to come up with a sort of "universal identifier" that could serve as a substitute once Google pulls the plug. In some cases, these new IDs rely on people's email logins, which get hashed and collectively scooped up from tons of sites across the web. In other cases, companies plan to flesh out the scraps of a person's identifiable data with data pulled from non-browser sources, like their connected television or mobile phone. There are tons of other schemes these companies are coming up with amid the cookie countdown, and apparently, Google's having none of it.

"We continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers," David Temkin, who heads Google's product management team for "Ads Privacy and Trust," wrote in a blog post published on Wednesday. In response, Temkin noted that Google doesn't believe "these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions." Based on that, these sorts of products "aren't a sustainable long term investment," he added, noting that Google isn't planning to build "alternate identifiers to track individuals" once the cookie does get quashed.

What Google does plan on building, though, is its own slew of "privacy-preserving" tools for ad targeting, like its Federated Learning of Cohorts, or FLoC for short. To get people up to speed: while cookies (and some of these planned universal IDs) track people by their individual browsing behavior as they bounce from site to site, under FLoC, a person's browser would take the data generated by that browsing and basically plop it into a large pot of data from people with similar browsing behavior, a "flock," if you will. Instead of letting advertisers target people based on the individual morsels of data each person generates, Google would let them target these giant pots of aggregated data.

We've written out our full thoughts on FLoC before; the short version is that, like the majority of Google's privacy pushes we've seen until now, the FLoC proposal isn't as user-friendly as you might think. For one thing, others have already pointed out that the proposal doesn't necessarily stop people from being tracked across the web; it just ensures that Google's the only one doing it. This is one of the reasons the upcoming cookiepocalypse has already drawn scrutiny from competition authorities in the UK.
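To make the contrast concrete, here is a minimal Python sketch of the two approaches described above: a "universal identifier" built by hashing an email login, versus a FLoC-style cohort that reduces browsing history to a coarse bucket shared with many other users. The hashing scheme, cohort count, and function names are illustrative assumptions, not Google's or any adtech vendor's actual algorithm (FLoC itself used a SimHash of visited sites rather than the plain hash used here).

```python
import hashlib

# Approach 1: a "universal identifier" built from a hashed email login.
# Real proposals normalize and salt the address, but the effect is the same:
# the same person yields the same ID on every participating site.
def universal_id(email: str) -> str:
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

# Approach 2: a FLoC-style cohort. The browser reduces an individual's
# browsing history to one of a few thousand coarse buckets, and only the
# bucket number is ever shared with advertisers.
NUM_COHORTS = 4096  # illustrative cohort count, not Google's actual figure

def cohort_id(visited_domains: list[str]) -> int:
    digest = hashlib.sha256("|".join(sorted(set(visited_domains))).encode())
    return int.from_bytes(digest.digest()[:4], "big") % NUM_COHORTS

print(universal_id("alice@example.com"))   # stable, individual-level ID
print(cohort_id(["news.example", "shoes.example", "travel.example"]))  # coarse bucket
```

The point of the cohort approach is that thousands of people with similar histories land in the same bucket, so an advertiser only ever sees the flock, never the individual, at least in theory.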
Meanwhile, some American trade groups have already loudly voiced their suspicions that what Google's doing here is less about privacy and more about tightening its already obscenely tight grip on the digital ad economy. To learn more, turn your attention to OUR FORUM.

It may have taken some time, but 5G is slowly starting to build momentum in the US. All major carriers now have nationwide 5G deployments covering at least 200 million people, with T-Mobile in the lead, covering over 270 million people with its low-band network at the end of 2020. Verizon ended the year with a low-band network that covered 230 million people, while AT&T's version reached 225 million. Next-generation networks from all the major carriers are set to continue expanding in the coming months, laying the foundation for advancements such as replacing home broadband, remote surgery, and self-driving cars that are expected to dominate the next decade.

But with all that activity by competing carriers, there are myriad different names for 5G -- some of which aren't actually 5G. The carriers have a history of twisting their stories when it comes to wireless technology. When 4G was just coming around, AT&T and T-Mobile opted to rebrand their 3G networks to take advantage of the hype. Ultimately the industry settled on 4G LTE: one technology, one name. Differing technologies and approaches for presenting 5G, however, have made this upcoming revolution more confusing than it should be. Here's a guide to help make sense of it all.

When it comes to 5G networks, there are three different versions that you should know about. While all are accepted as 5G -- and Verizon, AT&T, and T-Mobile have pledged to use multiple flavors going forward for more robust networks -- each will give you a different experience. The first flavor is known as millimeter-wave (or mmWave). This technology has been deployed over the course of the last two years by Verizon, AT&T, and T-Mobile, though it's most notable for being the 5G network Verizon has touted across the country. Using a much higher frequency than prior cellular networks, millimeter-wave allows for a blazing-fast connection that in some cases reaches well over 1Gbps. The downside? That higher frequency struggles to cover distance and to penetrate buildings, glass, or even leaves (the path-loss sketch at the end of this item shows why). It has also had some issues with heat.

Low-band 5G is the foundation for all three providers' nationwide 5G offerings. While at times a bit faster than 4G LTE, these networks don't offer the same crazy speeds that higher-frequency technologies like millimeter-wave can provide. The good news, however, is that this network functions similarly to 4G networks in terms of coverage, allowing it to blanket large areas with service. It should also work fine indoors.

In between the two, mid-band is the middle ground of 5G: faster than low-band, but with more coverage than millimeter-wave. This was the technology behind Sprint's early 5G rollout and one of the key reasons T-Mobile worked so hard to purchase the struggling carrier. T-Mobile has worked diligently since closing the deal, quickly deploying its mid-band network across the United States. The company now covers over 100 million people with the faster service, with a goal of reaching 200 million before the end of 2021. T-Mobile has said that it expects average download speeds over the mid-band network to be between 300 and 400Mbps, with peak speeds of 1Gbps. While T-Mobile, AT&T, and Verizon have plenty of low-band spectrum, mid-band has previously been used by the military, making it a scarce resource despite its cellular benefits. Thankfully, even with the name changes in marketing and ads, the icons on phones and devices will remain the same.
"Our customers will see a simple 5G icon when connecting to the next-generation wireless network, regardless of which spectrum they're using," said a T-Mobile spokesman. Complete details can be found on OUR FORUM.
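For readers wondering why millimeter-wave struggles with distance while low-band blankets whole regions, the physics is straightforward: free-space path loss grows with frequency. The short Python sketch below compares representative frequencies for the three flavors; the exact bands are assumptions for illustration, not any carrier's specific channel assignments.

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Representative frequencies for each 5G flavor (illustrative values only).
for label, freq in [("low-band 600 MHz", 600e6),
                    ("mid-band 2.5 GHz", 2.5e9),
                    ("mmWave 28 GHz", 28e9)]:
    print(f"{label}: {fspl_db(500, freq):.1f} dB of free-space loss at 500 m")
# Prints roughly 82 dB, 94 dB, and 115 dB respectively.
```

Roughly every 6 dB of extra loss halves the usable range in free space, which is why mmWave cells are measured in city blocks while low-band cells are measured in miles.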

A previously undetected piece of malware found on almost 30,000 Macs worldwide is generating intrigue in security circles, and security researchers are still trying to understand precisely what it does and what purpose its self-destruct capability serves. Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute (a generic sketch of this kind of hourly check-in appears at the end of this item). So far, however, researchers have yet to observe the delivery of any payload on any of the 30,000 infected machines, leaving the malware's ultimate goal unknown. The lack of a final payload suggests that the malware may spring into action once an unknown condition is met. Also curious, the malware comes with a mechanism to completely remove itself, a capability that's typically reserved for high-stealth operations. So far, though, there are no signs the self-destruct feature has been used, raising the question of why the mechanism exists.

Besides those questions, the malware is notable for a version that runs natively on the M1 chip that Apple introduced in November, making it only the second known piece of macOS malware to do so. The malicious binary is more mysterious still because it uses the macOS Installer JavaScript API to execute commands. That makes it hard to analyze the installation package's contents or the way the package uses the JavaScript commands. The malware has been found in 153 countries, with detections concentrated in the US, UK, Canada, France, and Germany. Its use of Amazon Web Services and the Akamai content delivery network ensures the command infrastructure works reliably and also makes blocking the servers harder.

Researchers from Red Canary, the security firm that discovered the malware, are calling it Silver Sparrow. "Though we haven't observed Silver Sparrow delivering additional malicious payloads yet, its forward-looking M1 chip compatibility, global reach, relatively high infection rate, and operational maturity suggest Silver Sparrow is a reasonably serious threat, uniquely positioned to deliver a potentially impactful payload at a moment's notice," Red Canary researchers wrote in a blog post published on Friday. "Given these causes for concern, in the spirit of transparency, we wanted to share everything we know with the broader infosec industry sooner rather than later."

Silver Sparrow comes in two versions: one with a Mach-O binary compiled for Intel x86_64 processors, and the other with a Mach-O binary for the M1. So far, researchers haven't seen either binary do much of anything, prompting them to refer to the files as "bystander binaries." Curiously, when executed, the x86_64 binary displays the words "Hello World!" while the M1 binary reads "You did it!" The researchers suspect the files are placeholders that give the installer something to distribute outside the JavaScript execution. Apple has revoked the developer certificate for both bystander binaries.

Silver Sparrow is only the second piece of malware to contain code that runs natively on Apple's new M1 chip. An adware sample reported earlier this week was the first. Native M1 code runs with greater speed and reliability on the new platform than x86_64 code does because it doesn't have to be translated before being executed. Many developers of legitimate macOS apps still haven't finished recompiling their code for the M1. Silver Sparrow's M1 version suggests its developers are ahead of the curve.
Once installed, Silver Sparrow searches for the URL the installer package was downloaded from, most likely so the malware operators will know which distribution channels are most successful. In that regard, Silver Sparrow resembles previously seen macOS adware. It remains unclear precisely how or where the malware is being distributed or how it gets installed. The URL check, though, suggests that malicious search results may be at least one distribution channel, in which case the installers would likely pose as legitimate apps. For more, turn to OUR FORUM.
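Red Canary has not published Silver Sparrow's internals beyond the behavior described above, so the snippet below is not the malware's code. It is only a generic Python sketch of the hourly check-in pattern the researchers observed, with an invented placeholder URL and response format, and it deliberately only logs what it receives rather than executing anything.

```python
import json
import time
import urllib.request

# Generic illustration of an hourly command-and-control check-in, the pattern
# described for Silver Sparrow. The URL and JSON shape are invented placeholders.
CONTROL_URL = "https://example.invalid/api/commands"
CHECK_INTERVAL_SECONDS = 3600  # once an hour

def check_for_commands() -> list:
    try:
        with urllib.request.urlopen(CONTROL_URL, timeout=10) as resp:
            return json.load(resp).get("commands", [])
    except (OSError, ValueError):
        return []  # server unreachable or malformed response: do nothing this hour

while True:
    commands = check_for_commands()
    if commands:
        print("received commands:", commands)  # a real agent would act here
    time.sleep(CHECK_INTERVAL_SECONDS)
```

On the infected machines observed so far, the equivalent of that hourly request has never come back with anything to run, which is precisely what makes the campaign's intent so hard to read.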

Android is the world's most popular smartphone operating system, running on billions of smartphones around the world. As a result, even the tiniest change in the OS has the potential to affect millions of users. But because of the way that Android updates are delivered, it's debatable whether these changes actually make a difference. Despite that, we're always looking forward to the next big Android update in the hope that it brings significant change. Speaking of which, the first developer preview of the next major update, Android 12, is right around the corner, and it could bring many improvements. In case you missed our previous coverage, here's everything we know about Android 12 so far.

Android 12 will first make an appearance as Developer Preview releases. We expect to get a couple of these, with the first one hopefully landing on Wednesday, 17th February 2021. The Developer Preview for Android 11 began in February 2020, a few weeks ahead of the usual release in March, which gave developers more time to adapt their apps to the new platform behaviors and APIs introduced in the update. Since the COVID-19 pandemic hasn't completely blown over in several parts of the world, we expect Google to follow a longer timeline this year as well.

As their name implies, the Android 12 Developer Previews will allow developers to begin platform migration and start the adaptation process for their apps. Google is expected to detail most of the major platform changes in the previews to inform the entire Android ecosystem of what's coming. Developer Previews are largely unstable, and they are not intended for average users. Google also reserves the right to add or remove features at this stage, so don't be surprised if a feature you see in the first Developer Preview goes missing in the following releases. Developer Previews are also restricted to supported Google Pixel devices, though you can try them out on other phones by sideloading a GSI.

After a couple of Developer Preview releases, we will make our way to the Android 12 Beta releases, with the first one expected in either May or June this year. These releases will be a bit more polished, and they will give us a fair idea of what the final OS release will look like. There may also be minor releases in between Betas, mainly to fix critical bugs. Around this time we will also start seeing releases for devices outside of the supported Google Pixel lineup. OEMs will start migrating their UX skins to the Beta version of Android 12, and they will begin recruitment for their own "Preview" programs. However, these releases may lag a version behind the ones available on the Google Pixel. Again, bugs are to be expected in these preview programs, and as such, they are recommended only for developers and advanced users.

After a beta release or two, the releases will reach Platform Stability status, which co-exists with the Beta status. This is expected to happen around July or August this year. Platform Stability means that the Android 12 SDK, NDK APIs, app-facing surfaces, platform behaviors, and even restrictions on non-SDK interfaces have been finalized. There will be no further changes in how Android 12 behaves or how APIs function in the betas that follow. At this point, developers can start updating their apps to target Android 12 (API Level 31) without being concerned about unexpected changes breaking their app behavior.
After one or two beta releases with the Platform Stability tag, we can expect Google to roll out the first Android 12 stable release. This is expected to happen in late August or September. As is usually the case, Google's Pixel devices are expected to be the first to get stable Android 12 releases. For non-Pixel phones, we expect to see wider public betas at this stage. The exact timeline will depend on your phone and its OEM's plans. A good rule of thumb is that flagships will be prioritized for the update, so if you have a phone lower down the price range, you can expect to receive the update a few weeks or months down the line. The complete two-part report is posted on OUR FORUM.

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions about ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person's emotional state by analyzing wireless signals used like radar. In this research, participants watched a video while radio signals were sent towards them and measured as they bounced back. Analysis of body movements revealed "hidden" information about an individual's heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure (a toy sketch of that final classification step appears at the end of this item). The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.

Ahsan Noor Khan, a Ph.D. student and first author of the study, said: "We're now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment." Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence. The research team plans to examine the public acceptance and ethical concerns around the use of this technology.

Such concerns would not be surprising, and they conjure up a very Orwellian idea of the 'thought police' from 1984. In that novel, the thought-police watchers are experts at reading people's faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking. This is not the only thought technology on the horizon with dystopian potential. In "Crocodile," an episode of Netflix's series Black Mirror, the show portrayed a memory-reading technique used to investigate accidents for insurance purposes. The "corroborator" device used a square node placed on a victim's temple, then displayed their memories of an event on the screen. The investigator says the memories "may not be totally accurate, and they're often emotional. But by collecting a range of recollections from yourself and any witnesses, we can help build a corroborative picture."

If this seems far-fetched, consider that researchers at Kyoto University in Japan developed a method to "see" inside people's minds using an fMRI scanner, which detects changes in blood flow in the brain. Using a neural network, they correlated these changes with images shown to the individuals and projected the results onto a screen. Though far from polished, this was essentially a reconstruction of what they were thinking about. One prediction estimates this technology could be in use by the 2040s.

Brain-computer interfaces (BCIs) are making steady progress on several fronts. In 2016, research at Arizona State University showed a student wearing what looks like a swim cap containing nearly 130 sensors connected to a computer to detect the student's brain waves. The student controls the flight of three drones with his mind: the device lets him move the drones simply by thinking directional commands: up, down, left, right.
[Image: Flying drones with your brain in 2019. Source: University of Southern Florida]

Advance a few years to 2019 and the headgear is far more streamlined. Now there are brain-drone races. Besides the flight examples, BCIs are being developed for medical applications. MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. Visit OUR FORUM for more.
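To ground the Queen Mary result a little: the pipeline described is conventional at its core, extracting physiological features (heart rate, its variability, breathing rate) from the reflected radio signal and handing them to a classifier trained on four emotion labels. The sketch below is not the researchers' deep network; it is a toy stand-in for that final classification step, with fabricated feature values and an off-the-shelf scikit-learn model chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors: [mean heart rate (bpm), heart-rate variability,
# breathing rate (breaths/min)]. Values and labels are fabricated to show
# the shape of the problem, not real study data.
X_train = np.array([
    [95, 0.04, 22],   # anger
    [62, 0.09, 11],   # sadness
    [78, 0.07, 16],   # joy
    [70, 0.08, 13],   # pleasure
    [98, 0.03, 24],   # anger
    [60, 0.10, 10],   # sadness
])
y_train = ["anger", "sadness", "joy", "pleasure", "anger", "sadness"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Classify features extracted from a new radar measurement.
print(clf.predict([[64, 0.09, 12]]))  # most likely "sadness" on this toy data
```

The hard part of the actual research is upstream of this step, recovering clean heart and breathing signals from raw radio reflections, which is where the deep neural network comes in.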

When Nvidia launched its RTX A6000 48GB professional graphics card last October, the company said that it would offer at least twice the performance of the company's previous-gen Quadro cards. These types of claims are not unusual, but how fast is the $4,650 RTX A6000 really in real-world benchmarks? (Interestingly, that's only $650 more than Galax's flagship RTX 3090 card.) Workstation maker Puget Systems decided to find out and ran multiple professional-grade benchmarks on the card.

Nvidia's RTX A6000 48GB graphics card is powered by its GA102 GPU with 10,752 CUDA cores, 336 tensor cores, and 84 RT cores, and a 384-bit memory bus that pairs the chip with a beefy 48GB slab of GDDR6 memory. In contrast, Nvidia's top-of-the-range GeForce RTX 3090 consumer board, based on the same graphics processor, features a different GPU configuration with 10,496 CUDA cores, 328 tensor cores, 82 RT cores, and a 384-bit memory interface for its 'mere' 24GB of GDDR6X memory. While the Nvidia RTX A6000 has a slightly better GPU configuration than the GeForce RTX 3090, it uses slower memory and therefore offers 768GB/s of memory bandwidth, which is 18% lower than the consumer card's 936GB/s, so it will not beat the 3090 in gaming. Meanwhile, because the RTX A6000 has 48GB of DRAM on board, it will perform better in memory-hungry professional workloads. While all GeForce RTX graphics cards come with Nvidia Studio drivers that support acceleration in some professional applications, they are not designed to run every professional software suite. In contrast, the professional ISV-certified drivers of the Quadro series and the Nvidia RTX A6000 make them a better fit for workstations.

Not all professional workloads require enormous onboard memory capacity, but GPU-accelerated rendering applications benefit greatly, especially when it comes to large scenes. Since we are talking about graphics rendering, the same programs also benefit from raw GPU horsepower. That said, it is not surprising that the Nvidia RTX A6000 48GB outperformed its predecessor by 46.6% to 92.2% across the four rendering benchmarks Puget ran. Evidently, V-Ray 5 scales better with the increase in GPU horsepower and onboard memory capacity, whereas Redshift 3 scales less well. Still, the new RTX A6000 48GB is tangibly faster than any other professional graphics card in GPU-accelerated rendering workloads.

Modern video editing and color correction applications, such as DaVinci Resolve 16.2.8 and Adobe Premiere Pro 14.8, can also accelerate some tasks using GPUs. In both cases, the Nvidia RTX A6000 48GB offers tangible performance advantages over its predecessor, and its advantages look even more serious when the board is compared to graphics cards released several years ago. Like other modern professional graphics applications, Adobe After Effects and Adobe Photoshop can take advantage of GPUs. Yet both programs are CPU-bottlenecked in many cases, which means that any decent graphics processor (and not necessarily a professional one) is usually enough for both suites. Nonetheless, the new Nvidia RTX A6000 48GB managed to show some speed gains over its predecessor in these two apps as well. More facts and figures, along with possible pricing, can be found on OUR FORUM.
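The 768GB/s versus 936GB/s gap quoted above falls straight out of the memory configuration: peak bandwidth is the bus width in bytes multiplied by the effective per-pin data rate. Assuming 16Gbps GDDR6 on the RTX A6000 and 19.5Gbps GDDR6X on the RTX 3090, rates consistent with the bandwidth figures cited, the arithmetic works out as follows.

```python
def memory_bandwidth_gbps(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (effective data rate per pin)."""
    return (bus_width_bits / 8) * data_rate_gbps_per_pin

a6000 = memory_bandwidth_gbps(384, 16.0)    # GDDR6, assumed 16 Gbps effective
rtx3090 = memory_bandwidth_gbps(384, 19.5)  # GDDR6X, assumed 19.5 Gbps effective

print(f"RTX A6000: {a6000:.0f} GB/s")                    # 768 GB/s
print(f"RTX 3090:  {rtx3090:.0f} GB/s")                  # 936 GB/s
print(f"deficit:   {(1 - a6000 / rtx3090) * 100:.0f}%")  # about 18% lower
```

That roughly 18% bandwidth deficit is what the article is pointing to when it says the A6000 will not beat the 3090 in gaming, where raw bandwidth tends to matter more than the extra 24GB of capacity.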