Cybercriminals can gain access to browser fingerprints

Browser fingerprinting is one of many tactics phishing criminals use to bypass security checks and thus extend the lifespan of malicious attack campaigns.

While legitimate organizations have been using browser fingerprinting to uniquely identify web browsers for the past 15 years, it is now also routinely exploited by cybercriminals: a recent study shows that one in four phishing actors use some form of this technique.

Kevin Cryan, Director of Operational Intelligence at Fortra’s PhishLabs, explains in the study that browser fingerprinting uses a variety of client-side checks to establish browser identities, which can then be used to detect bots or other unwanted site visits. Numerous pieces of data can be collected as part of a fingerprint, such as time zone, language settings, IP address, cookie settings, screen resolution, browser privacy settings and the user-agent string.
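As a rough illustration of how such client-side data points can be combined into a single browser identity, the Python sketch below hashes a set of collected properties into one stable identifier. The property names are illustrative assumptions, not the fields any particular product collects:

```python
import hashlib
import json

def browser_fingerprint(properties: dict) -> str:
    """Combine collected client-side attributes into one stable identifier.

    The keys mirror the data points mentioned in the article (time zone,
    language, screen resolution, user agent, ...); real fingerprinting
    scripts collect many more.
    """
    # Serialise with sorted keys so the same browser always hashes identically.
    canonical = json.dumps(properties, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

visitor = {
    "timezone": "Europe/Madrid",
    "language": "en-US",
    "screen": "1920x1080",
    "cookies_enabled": True,
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
}
print(browser_fingerprint(visitor))
```

Because the identifier is a hash of the combined attributes, any change to a single property (say, the language setting) produces a completely different fingerprint, which is what makes the technique useful for distinguishing visitors.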

Many legitimate providers use browser fingerprinting to detect bots misusing their services and other suspicious activity, but phishing site authors have also realised its advantages and are using the technique to evade automated systems that might flag their websites as phishing. By running their own browser fingerprinting checks before loading their site’s content, cybercriminals can cloak phishing content in real time.

For example, Fortra has observed threat actors using browser fingerprinting to bypass Google’s ad review process. Since Google’s review process is semi-automated, browser fingerprint checks allowed the criminals to identify when Google’s servers, rather than normal users, were viewing their ad destinations. If the threat actor suspected Google activity, benign content was displayed. As a result, phishing reports were rejected by Google because the malicious content could not be detected.

Cloudflare’s bot fight mode is an example of a legitimate provider using browser fingerprinting techniques to identify and block bots. Whenever a website with bot fight mode enabled loads, a JavaScript challenge is executed and its results are sent to Cloudflare. Depending on those results, the visitor is either presented with a CAPTCHA or blocked.

If the JavaScript is decoded, security teams can see what it is probing for: the strings it contains reveal that it requests numerous browser properties and inspects the results.

Once the JavaScript finishes, it generates a fingerprint and sends all the information to the phishing site, where the server analyses the results. Depending on what it determines, benign or malicious content is displayed.

This fingerprint contains all browser properties, including information about screen dimensions, operating system, GPU hardware, time zone and many other data points. All this information combined can make it very easy to determine whether the browser is real or an emulator.
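A minimal sketch of how such combined properties might separate a real browser from an emulator or headless scanner. The signals and weights below are illustrative assumptions, not taken from any actual phishing kit or vendor product:

```python
def looks_automated(fp: dict) -> bool:
    """Flag fingerprints whose combined properties suggest an emulator or
    headless browser rather than a real user.

    Illustrative heuristic only: real systems score many more signals.
    """
    signals = 0
    if fp.get("webdriver"):                    # navigator.webdriver is set by automation tools
        signals += 2
    if "HeadlessChrome" in fp.get("user_agent", ""):
        signals += 2
    if not fp.get("gpu_renderer"):             # emulators often expose no real GPU
        signals += 1
    if fp.get("screen") in {"0x0", "1x1"}:     # implausible screen dimensions
        signals += 1
    if not fp.get("timezone"):                 # missing time zone is another weak signal
        signals += 1
    return signals >= 2

# A plausible real desktop browser and a headless scanner, side by side:
real = {"webdriver": False, "user_agent": "Mozilla/5.0 (Windows NT 10.0)",
        "gpu_renderer": "ANGLE (NVIDIA GeForce)", "screen": "1920x1080",
        "timezone": "Europe/Madrid"}
bot = {"webdriver": True, "user_agent": "HeadlessChrome/118.0", "screen": "0x0"}
print(looks_automated(real), looks_automated(bot))
```

No single property is decisive on its own; it is the combination of weak signals that makes the real-versus-emulator decision easy, which is exactly why phishing kits collect so many data points.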

In the past, cybercriminals could easily avoid detection by routing traffic through an intermediate server and changing its user-agent string. Browser fingerprinting, however, is very effective at identifying these automated systems, allowing phishing site authors to modify their site’s content based on the results. Understanding which browser properties criminals collect through fingerprinting is therefore critical if security teams’ own automated analysis is to avoid arousing suspicion.

_____

Aquest apunt en català / Esta entrada en español / Post en français

Sharp increase in the death of pedestrians at night in the United States

It was around 2009 that U.S. roads began to become deadlier for pedestrians, especially in the dark of night. Fatalities have increased since then and have reversed the effects of decades of safety improvements. And experts still do not agree on the reasons for this situation.

But what is most mystifying of all is that this upward trend in the number of accidents has not occurred in other comparatively wealthy countries. In places like Canada or Australia, a much smaller proportion of pedestrian deaths occur at night, and these deaths, less common in number, generally follow a pattern of decline, not growth.

In an article published in The New York Times, authors Emily Badger, Ben Blatt and Josh Katz explain that nighttime pedestrian fatality trends in the U.S. represent a puzzle that experts in vehicle design, driver behaviour and road safety, and in how the three interact, have been unable to solve.

During 2021, more than 7,300 pedestrians were killed in the United States, three out of four of them during the hours between sunset and sunrise.

This trend adds to what is already a growing gap in highway fatalities between the U.S. and the rest of the world. Speed limits on local roads tend to be higher in North America, laws and cultural prohibitions against dangerous driving may be weaker, and infrastructure in the U.S., in many ways, has been designed to allow speeding cars.

Part of the problem could be that U.S. highways, and the pedestrians walking along them, have been especially susceptible to new risks such as smartphones and ever-larger vehicles.

Darkness seems to pose a particular threat to people on foot on U.S. highways. In comparable countries, pedestrians are more often fatally struck during the daytime.

The most obvious new risk inside vehicles in the United States since 2009 is drivers operating smartphones or increasingly complex dashboard screens. The curve that marks the timeline of mobile phone adoption overlaps closely with the increase in pedestrian deaths.

When it comes to other sources of driver impairment, there would be no particular reason to believe that alcohol, speeding or fatigue would necessarily have made a major difference. What has changed is the amount of technology we surround ourselves with behind the wheel.

Smartphones, and the way they can distract both drivers and pedestrians, are not uniquely American. But one aspect remains distinctly so: the ubiquity in the United States of automatic transmissions, which free up a driver’s hands for other uses. Only 1% of all new passenger vehicles sold in the U.S. in recent years had manual transmissions, while in Europe, for example, the figure stands at around 75%.

Driver behaviour, in vehicles of any type, may also have changed during this period for additional reasons, the researchers suggest. The timeline also overlaps with the rise of opioids and the legalisation of recreational marijuana, although there is still little research on how marijuana affects driving.

_____


Concern in the U.S. over advances in facial recognition technology

Some uses of facial recognition technology raise significant concerns that merit a swift government response, according to a new report from the National Academies of Sciences, Engineering, and Medicine. The report calls for federal legislation and an executive order, as well as attention from the courts, the private sector, civil society associations and other organisations working with facial recognition technology, to provide guidance for the responsible development and deployment of the technology.

A powerful and increasingly used tool, facial recognition technology is useful for a wide range of verification and identification applications and offers capabilities to check if someone is who they say they are and identify a person in an image. The systems use trained artificial intelligence models to extract facial features and create a biometric template from an image, and then compare the template features with features from another image or set of images to produce a similarity score. According to the report, the accuracy and speed of these systems have advanced very rapidly in the last decade with the adoption of machine learning.
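The template-comparison step described above can be illustrated with a toy similarity function. The 4-dimensional vectors and the 0.9 threshold below are made-up values for illustration; real systems compare high-dimensional learned embeddings against carefully calibrated thresholds:

```python
import math

def similarity(template_a, template_b):
    """Cosine similarity between two biometric templates (feature vectors).

    Toy example: real systems use learned embeddings with hundreds of
    dimensions and a calibrated decision threshold.
    """
    dot = sum(a * b for a, b in zip(template_a, template_b))
    norm_a = math.sqrt(sum(a * a for a in template_a))
    norm_b = math.sqrt(sum(b * b for b in template_b))
    return dot / (norm_a * norm_b)

enrolled = [0.12, -0.58, 0.33, 0.71]   # template from the reference image
probe    = [0.10, -0.55, 0.36, 0.69]   # template from the new image
score = similarity(enrolled, probe)
MATCH_THRESHOLD = 0.9                  # illustrative; real thresholds are calibrated
print(score > MATCH_THRESHOLD)
```

The system never compares images directly: it compares the extracted templates, and the choice of threshold trades false positives against false negatives, which is where the accuracy debates in the report arise.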

With few exceptions, the United States currently has no authoritative guidelines, regulations or laws to adequately address issues related to the use of facial recognition technology, the report details. It also states that, even if it does not necessarily violate the rights and obligations included in statutes or constitutional provisions, facial recognition technology may interfere with and substantially affect the values embodied in U.S. privacy, civil liberties and human rights commitments.

According to University of Wisconsin-Madison chancellor Jennifer Mnookin, facial recognition technology creates new and complex legal challenges and raises a variety of different and unresolved legal issues. It also raises complicated social questions about privacy and public and private surveillance, given the very personal implications of the technology.

It is crucial for governments to address these issues and to make them a priority: failure to adopt policies and regulations on the development and use of facial recognition technology would effectively cede decision-making and rulemaking on these important issues of major public concern entirely to the private sector and the marketplace.

Facial recognition technology has been increasingly incorporated into everyday life, with a wide range of uses, the report states. Some of these uses are innocuous, such as allowing people to unlock their smartphones. But when applied broadly and without safeguards, the technology can enable repressive regimes to create detailed records of people’s movements and activities and block citizens’ participation in public life. Many potential uses fall somewhere in between, creating a large grey area where individual assessments of risks, benefits, trade-offs and values may vary, thus affecting how they should be regulated or permitted. The report recognises the value of facial recognition technology and does not advocate a blanket ban, but states that a number of uses may cause sufficient concern to warrant banning them.

The study says that there are two main categories of facial recognition concerns, although they may overlap. One is potential harms from problematic use or misuse of technology, which become more prominent as technology becomes more accurate and capable. The second is potential harms from errors or limitations of the technology itself, such as when systems have different false positive or false negative rates for different demographic groups.
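The second category of concern can be made concrete with a small audit sketch: given labelled match decisions, compute the false positive and false negative rate for each demographic group and compare them. The group labels and records below are toy data invented for illustration:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute false positive and false negative rates per demographic group.

    Each record is (group, ground_truth_match, system_said_match).
    Illustrative audit only, mirroring the disparity the report describes.
    """
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, truth, predicted in records:
        s = stats[group]
        if truth:
            s["pos"] += 1
            if not predicted:
                s["fn"] += 1          # missed a true match
        else:
            s["neg"] += 1
            if predicted:
                s["fp"] += 1          # declared a match that wasn't one
    return {g: {"fpr": s["fp"] / s["neg"] if s["neg"] else 0.0,
                "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0}
            for g, s in stats.items()}

# Toy audit data: (group, was_really_a_match, system_declared_a_match)
records = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", False, False),
]
rates = error_rates_by_group(records)
print(rates)
```

If the rates differ markedly between groups, as in this toy data, the system imposes its errors unevenly, which is precisely the kind of harm the report places in its second category.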

_____


Fraud is such a big problem that children should be taught to detect it from an early age

The UK Office for National Statistics (ONS) reported a 25% increase in the number of fraud offences in 2021 compared with 2020. Fraud, which accounts for more than 40% of all crimes against individuals, is the most common crime in the United Kingdom.

As if these statistics were not alarming enough, victims of fraud are often blamed for being foolish or too trusting. But it is time to accept that fraud can happen to anyone. It has become such a problem that the notion of fraud as something suffered only by vulnerable people needs revising. The human brain simply cannot keep up with all the new types of fraud enabled by technology.

As recently published in The Conversation, a new approach is needed that holds financial institutions and companies accountable for identifying or facilitating fraud and that leverages artificial intelligence to detect suspicious transactions. It is unreasonable to expect consumers to know when they are being scammed if banks and social media platforms cannot.
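A minimal sketch of the kind of automated check alluded to above: flagging a transaction whose amount deviates sharply from an account’s history. The threshold and the payment amounts are illustrative assumptions; production fraud models combine many more features such as merchant, location, device and timing:

```python
import statistics

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates strongly from the account's
    past behaviour, using a simple z-score on the amount.

    Deliberately simple; illustrative of the idea, not a production model.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: anything different is anomalous.
        return new_amount != mean
    z = abs(new_amount - mean) / stdev
    return z > threshold

past_payments = [24.0, 31.5, 18.0, 27.0, 22.5, 29.0]
print(flag_suspicious(past_payments, 26.0))    # typical amount for this account
print(flag_suspicious(past_payments, 950.0))   # far outside the account's pattern
```

The point of such checks is that the institution, not the consumer, spots the anomaly: a customer being rushed by a scammer has no baseline to compare against, but their bank does.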

A 2023 report from UK Finance indicates that there is a growing trend among fraudsters to target people between 18 and 24 years of age, who are much more likely to fall for a phishing scam than people aged 65 and over. In addition, the rate of 13- to 17-year-olds who are victims of video game scams has also risen sharply.

However, the programs currently on offer that teach people how to protect themselves from fraud are scarce. For example, the children’s charity NSPCC runs programs to protect minors from online abuse, to keep them safe on social networks and to shield them from legal but harmful content, but not to protect them from online scams. Fraud prevention should be taught in schools and universities as part of the curriculum.

One of the most important theories in criminology is deterrence theory, which states that crime reduction is related to the severity of punishment and, more importantly, to the likelihood of being caught. Research suggests that increasing the likelihood of catching the offender is much more effective than increasing the punishment. However, fraudsters have little reason to worry. According to UK government sources, fraud accounts for more than 40% of all crime, but less than 1% of police resources are devoted to combating it.

In fact, the Federal Trade Commission (FTC) noted that a quarter of those who had lost money to fraud said the process had started on social media platforms.

The nature of social media gives fraudsters the ability to hide behind fake profiles and pose as legitimate businesses. It also allows them to reach millions of people who are only one click away; among these, younger adults, who tend to use social networks most heavily, are particularly vulnerable.

The FTC has issued orders to a number of social media platforms, including Meta, TikTok and YouTube, requesting information on how they detect malicious ads and scams.

California lawmakers, meanwhile, are considering a bill that would offer seniors greater protection against fraud and hold banks accountable when ATMs facilitate fraudulent transactions.

In the United Kingdom, an anti-fraud strategy was presented to Parliament in May 2023 proposing a number of measures, including a ban on all telephone calls related to financial products.

These two initiatives are a move in the right direction, but more work urgently needs to be done. Policymakers must allocate funds for research and introduce laws that provide greater protection for individuals, and police agencies must collaborate with international law enforcement bodies such as Interpol.

Fraud affects society at all levels: individuals, organisations and governments.

_____


How to protect yourself from crime in the metaverse

The metaverse has the potential to change the way we interact and relate to each other and technology. That being said, there are also potential pitfalls and risks, as with any new technology. Potential problems with privacy, security and legislation are part of the downside of the metaverse. This is explained in a recently published report by the website Cointelegraph.

When it comes to metaverse platforms, one of the main problems is privacy. Individuals may share sensitive data and personal information in the metaverse, increasing the risk of hacking and data breaches. In addition, there may be less oversight and regulation of how companies collect and use this data, which could lead to the misuse of their personal data.

Since it is a virtual environment, the metaverse is open to several security risks, such as hacking, intellectual property theft and misuse of user data, which can lead to loss of personal data, financial damage and harm to the reputation and stability of virtual communities. For example, criminals can use the metaverse to commit additional crimes, spread malware or steal personal data.

Regulation is another problem, because the metaverse is a young and rapidly evolving environment. Governments and other institutions may struggle to keep up with technology and lack the resources or tools needed to govern it successfully. This lack of supervision may lead to problems such as illegal activities or dangerous content.

In addition, it is also unclear how the metaverse will affect society, because it is a totally new area that is developing and transforming very quickly. While some experts claim that technology will create more options for community and connection, others respond that it will simply increase social alienation and isolation.

By exploiting flaws in virtual systems and user behaviour, such as malware infections, phishing, and illegal access to personal and financial information, cybercriminals take advantage of the metaverse in a number of ways:

  • Phishing: thieves may use phishing techniques to trick victims into revealing personal information or login credentials, which can then be used for identity or data theft or other illegal acts.
  • Hacking: to steal money or personal information, criminals may attempt to hack into user accounts or metaverse platforms.
  • Malware: to access sensitive data or perform illicit operations, criminals can use malware to infect virtual environments or metaverse-compatible devices.
  • Frauds: criminals may take advantage of the anonymity and lax regulation of the metaverse to carry out scams or pyramid schemes.
  • Ransomware: thieves may use ransomware to encrypt a user’s digital possessions or personal data before requesting payment in exchange for the decryption key.
  • Exploitation of virtual goods and assets: cybercriminals may use bots or other tools to buy up virtual goods and assets, which they then sell on the black market for real money.
  • Creation of fake digital assets: criminals may create fake virtual assets and sell them to unsuspecting buyers, causing victims to lose money.
  • Social engineering: thieves may take advantage of the social elements of the metaverse to gain people’s trust before scamming them.

_____


Hackers are already using ChatGPT to introduce new malware

According to a recent report by Check Point Research, hackers are already using the new artificial intelligence chatbot ChatGPT to create new low-level cyber tools, such as malware and encryption scripts. As Sam Sabin reports for Axios, security experts have warned that OpenAI’s ChatGPT tool could help cybercriminals accelerate their attacks, all in a short period of time.

The report lists three cases in which hackers figured out ways to use ChatGPT to write malicious software, create data encryption tools and write code for new dark web marketplaces.

Hackers are always looking for ways to save time and speed up their attacks, and ChatGPT’s artificial intelligence-based responses often provide a good starting point for most hackers writing malware and phishing emails.

Check Point noted that the data encryption tool created could easily be turned into ransomware once some minor issues are fixed.

OpenAI has warned on several occasions that ChatGPT is a research preview and that the organisation is constantly looking for ways to improve the product to prevent potential abuse.

The AI-enabled chatbot that has stunned the tech community can also be manipulated to help cybercriminals hone their attack strategies.

The arrival of OpenAI’s ChatGPT tool could allow the fraudsters behind email- and text-based phishing attacks, as well as malware groups, to speed up the development of their schemes.

Several cybersecurity researchers have been able to get the AI-enabled text generator to write phishing emails or even malware for them over the past few weeks.

But it should be clear that hackers were already becoming very adept at incorporating more human-like and harder-to-detect tactics into their attacks before ChatGPT came on the scene.

And hackers can often gain access through simple lapses, such as logging into a former employee’s corporate account that is still active.

ChatGPT arguably speeds up the hackers’ process by giving them a launching pad, although the responses are not always perfect.

Although OpenAI has implemented some content moderation warnings in the chatbot, it is easy for researchers to circumvent the current system and avoid penalties.

Users still need to have some basic knowledge of coding and attack launching to understand what works correctly in ChatGPT and what needs to be adjusted.

Organisations were already struggling to defend against the most basic attacks, including those in which hackers use a stolen password leaked online to log into accounts. AI-enabled tools such as ChatGPT could only exacerbate the problem.

Therefore, network defenders and IT teams must intensify efforts to detect phishing emails and text messages to stop these types of attacks.

_____


Irregular border crossings in the European Union reach their highest level since 2016

Nearly half of irregular border crossings during 2022 were overland through the Western Balkans region, according to a report by the EU border agency Frontex. Preliminary figures do not include Ukrainian refugees.

The number of irregular border crossings into the European Union increased by 64% last year compared with 2021. According to agency estimates, some 330,000 entries were detected, 45% of them via the Western Balkans region.

The Central Mediterranean route had the second highest number of crossings, increasing by more than half to over 100,000.

Most of the people who attempted the dangerous sea route last year were nationals of Egypt, Tunisia and Bangladesh. Frontex also reports that 2022 saw the highest number in five years of people departing from Libya, the main departure point in North Africa.

Regardless of the entry route, Syrians, Afghans and Tunisians accounted for approximately 47% of border crossing attempts. The number of Syrians approximately doubled to 94,000.

Males accounted for more than 80% of attempts to enter the Union. The proportion of reported minors decreased, to about 9% of all irregular entries.

The latest Frontex figures did not include millions of Ukrainian refugees who entered the EU between February, when Russia invaded Ukraine, and December.

In this vein, in January Europol supported the Bulgarian authorities during a large-scale day of action against organised crime groups involved in migrant smuggling. The activities, coordinated by the Bulgarian prosecutor’s office and carried out with the General Directorate for Combating Organised Crime, the National Police and the Border Police, targeted criminal networks active along the Balkan route. The Bulgarian investigations were also coordinated with the Turkish and Serbian authorities and other cooperating agencies.

The joint actions took place in Bulgaria and focused on a number of migrant smuggling networks from Turkey through Bulgaria to Serbia and then Western Europe. The main organisers of the networks active along this route are based in Bulgaria, Serbia and Turkey. They have created their own national networks of members responsible for transportation and accommodation in their respective countries.

The main means of transportation used by smugglers were vans, caravans and buses.

The Bulgarian authorities have reported an increase in migrant smuggling activity at the country’s southern border. In August 2022, an incident involving a bus carrying irregular migrants resulted in the deaths of two police officers on duty. Later that year, a Bulgarian border police officer was shot dead during a routine patrol on the green border with Turkey. These facts suggest an increase in both smuggling activity and the violence of the criminal networks involved.

_____


New Texas law allows firearms to be carried without a licence

The new law in force in the state of Texas that allows most adults aged 21 and over to carry a firearm without a licence has caused sharp divisions between supporters and opponents of the measure. Some sheriffs, police leaders and district attorneys in urban areas of Texas are alarmed by the increase in people carrying guns and the unanticipated risks this has posed.

By contrast, other sheriffs, especially in rural areas, believe there have been no profound changes since the implementation of the new law. Gun-rights advocates believe that more people being armed could explain why shootings have declined in some parts of the state.

Far from being an outlier, the new Texas law is another step in the broader elimination of nearly all restrictions on carrying handguns. When Alabama’s permitless carry law takes effect in January 2023, half of U.S. states, from Maine to Arizona, will not require a licence to carry a handgun.

Legislative momentum in several states has coincided with a federal judiciary increasingly leaning in favour of carrying guns and against state efforts to regulate them. Texas, however, is the most populous state so far to remove restrictions on carrying firearms. Five of the 15 largest U.S. cities are in Texas, so this permissive approach to guns is reaching urban areas to an extent not seen in other states.

To date, no statistics have been released on shootings in the state of Texas since the law went into effect in September 2021. The law’s detractors are pessimistic after homicides and suicides involving firearms soared in 2020, the first year of the pandemic, and continued rising in 2021, reaching the highest rates in three decades.

Big-city police departments and major law enforcement groups opposed the new firearms law when it came before the state legislature in the spring of 2021, concerned about the loss of training requirements needed for a licence and greater danger to officers.

Police officers report that, nowadays, arguments between drunk people in the border town of Eagle Pass, night-time binge-drinking disputes, fights over a parking spot or bad driving, and marital infidelities end in shootings. This is borne out by the increasing number of complaints about armed incidents of all kinds received by Houston prosecutors.

The law still prohibits carrying a handgun by anyone convicted of a felony, anyone under the influence of alcohol, or anyone committing other crimes. Along these lines, advocates of the law stress that in Harris County, criminal cases related to illegal gun possession have risen considerably since the new law came into effect: 3,500 in 2022, compared with 2,300 for all of 2021.

In Dallas, the number of homicides considered “justifiable”, such as those committed in self-defence, has increased since the law was passed. In relation to this, John Lott, author of the book More Guns, Less Crime, stresses that his research predicted this scenario: a greater reduction in crime if the people most likely to be victims of violent crime are armed.

_____


Challenges the Future Holds for the Metaverse and Cybersecurity

The metaverse is increasingly likely to be the target of cyberattacks that pose a real risk, both to the companies that choose to be active in it and to the users who access it. The growth of the metaverse emphasises the need to address the cybersecurity challenges posed by this new multimedia environment.

The metaverse is estimated to account for a 1% share of the global economy, reaching $8-13 trillion by 2030, according to investment bank Citi. Precisely because of this growth, the metaverse is increasingly likely to be targeted by cybercriminals.

As explained by the websites Ooda and Lexology, the metaverse refers to a digital universe resulting from multiple technological elements that include virtual reality and augmented reality. The idea is that users can access the metaverse through 3D viewers and have virtual experiences. In fact, it is possible to create realistic avatars, meet other users or perform all those actions that we carry out on the Internet on a single platform, even including things like building real estate or a marketplace.

Therefore, the metaverse requires the concurrent use of many technologies, where augmented reality, cloud technologies and artificial intelligence are combined to become functional. In this universe, there is also the possibility of creating a new economy through cryptocurrencies.

Given the technologies involved, the risk of falling victim to cyberattacks in the metaverse is very high. Moreover, the simultaneous use of such different technologies, the collection and storage of vast amounts of both personal and non-personal data, and the use of blockchain make the traditional monitoring and prevention of cyberattacks a complex and demanding task. There are already dozens of cases of counterfeit works or products being sold in this decentralised world.

Although it is assumed that phishing activities may increase significantly with the metaverse, the following are also possible:

  • Identity theft: cybercriminals, using information found online and in the metaverse, could carry out identity theft, for example by stealing avatars.
  • Cryptocurrency theft: cybercriminals could take possession of users’ wallets and passwords in the metaverse and carry out criminal actions.

However, the main cybersecurity concern in the metaverse should focus on personal data (as in the real world), which will be cybercriminals’ main target of attack.

Biometric data disclosed by users can be used to take control of the devices that bridge virtual and augmented reality, since these devices rely on the user’s biometric data to grant access within the metaverse.

Companies will need to take precautions to prevent this type of attack, and ensure that their security systems are safe and do not include any vulnerable aspects that can cause serious damage not only to the economy and their reputation, but also to users. However, in this regard, there is still a lack of regulatory regimes that should be put in place as soon as possible to ensure the protection of the metaverse and its users.

_____


Remote control of touch screens – the new cyberattack

As explained in an article published on the website thehackernews.com, researchers have demonstrated what they call the first active contactless attack against all types of touch screens.

In a new research paper, a group of academics from Zhejiang University and the Technical University of Darmstadt describe GhostTouch, which uses electromagnetic interference (EMI) to inject fake touch points into a touchscreen without the need to physically touch it.

The basic idea is to harness electromagnetic signals to execute basic touch events, such as taps and swipes, at specific locations on the touch screen, with the goal of taking remote control of the underlying device and manipulating it.

The attack, which works from a distance of up to 40 mm, relies on the fact that touchscreens are sensitive to EMI: electromagnetic signals are injected into the transparent electrodes built into the touchscreen so that they register as touch events.

The experimental setup involves an electrostatic gun to generate a pulse signal that is then sent to an antenna to transmit an electromagnetic field on the phone’s touch screen, which causes electrodes, acting as antennas, to pick up the EMI.

This can be further adjusted by selecting the signal and antenna to induce a variety of touch behaviours, such as press and hold and swipe to select, depending on the device model.

In a real-world scenario, this could occur in a variety of ways, such as swiping up to unlock a phone, connecting to a Wi-Fi network, stealthily clicking on a malicious link containing malware, and even answering a phone call on the victim’s mobile phone.

In places such as cafes, libraries, meeting rooms or conference lobbies, people often place their smartphones face down on the table, the researchers explain. An attacker can embed the attack equipment under the table and launch attacks remotely.

Up to nine different smartphone models have been found vulnerable to GhostTouch: Galaxy A10s, Huawei P30 Lite, Honor View 10, Galaxy S20 FE 5G, Nexus 5X, Redmi Note 9S, Nokia 7.2, Redmi 8 and an iPhone SE (2020), the last of which was used to establish a malicious Bluetooth connection.

To counter the threat, the researchers recommend adding electromagnetic shielding to block EMI, improving the touchscreen detection algorithm, and asking users to enter the phone’s PIN or verify their faces or fingerprints before carrying out high-risk actions.

GhostTouch controls and shapes the near-field electromagnetic signal and injects touch events into the targeted area of the touchscreen without the need to physically touch or access the victim’s device, researchers explain.

_____
