The UK is committed to revitalising neighbourhood policing

This week, the UK government announced an ambitious plan to strengthen police presence in neighbourhoods through a renewed neighbourhood policing model, known as the Neighbourhood Policing Guarantee. With this move, the British government revives one of the most deeply rooted traditions of its police system, the figure of the local bobby, with the declared objective of reconnecting the police with communities and rebuilding public confidence.

The Home Office has released a monitoring and evaluation framework called the Neighbourhood Policing Guarantee performance framework, which establishes indicators to measure the quality of police presence in neighbourhoods.

This move comes against a backdrop of intense debate about the role of the police in the UK, following years of budget cuts, corruption scandals and a worrying loss of public confidence. Last December, the College of Policing – the professional body that sets the standards for policing in England and Wales – had already published its assessment of the government’s proposal, warning that “visible presence is not enough: we need to invest in skills, leadership and organisational culture to make a genuine community policing model a reality”.

The Neighbourhood Policing Guarantee presented by the British government takes the form of an ambitious evaluation framework that aims to translate into operational practice an idea that is as simple as it is powerful: that everyone should have access to a police force that is close, familiar and useful. The reference document sets out six core commitments, which are intended to become minimum standards for all UK police forces:

1. Each community should have an identifiable and accessible neighbourhood policing team. Police forces will have to publish the names, photos and contact channels of their neighbourhood officers and ensure that citizens know who they are and how to access them.

2. Teams should be easy to contact and available on a regular basis. Neighbourhood officers must maintain an active and visible presence in their areas, with regular foot patrols and known meeting points.

3. Citizens should be able to see how local priorities are being addressed. The police should publish what specific actions they will take on major community concerns (e.g., antisocial behaviour, traffic, theft), and should update them regularly.

4. Each police force should systematically gather local security priorities. This establishes an obligation to provide structured mechanisms for citizen consultation and participation that go beyond one-off surveys.

5. It is necessary to ensure specific training in community policing for all members of these teams. The British reform includes mandatory training in conflict resolution tools, active listening, mediation and cultural awareness.

6. The results and impact of the work of the neighbourhood teams will have to be measured and made public. Specific performance indicators on presence, accessibility, citizen satisfaction and impact on the reduction of specific problems are introduced.

Preliminary conclusions

The British plan represents a serious attempt to reconnect the police with society through real proximity, with measurable and transparent commitments. In this sense, its approach may inspire reforms or improvements in models such as the Catalan one, which, despite a long tradition of community security, has often remained in the realm of goodwill rather than obligation.

Unlike the British proposal, which seeks to ensure common standards throughout the country, the Mossos model has developed unevenly, depending heavily on the impetus of local commanders or their rapport with local councils. The British proposal forces a move from discourse to structure: assured presence, open data and publicly measured performance.

From a European perspective, this reform opens a window of opportunity to rethink the relationship between police and community, not only in terms of presence, but also in terms of trust, transparency and democratic commitment.

Despite the apparent consensus on the need to bring back neighbourhood policing, several social actors and security experts have warned that this guarantee cannot be a simple nostalgic return to the ‘bobby on the beat’ years. It remains to be seen whether the actual deployment of the plan – with resources, training and evaluation – lives up to the political discourse.

From Catalonia and other European contexts with their own community policing experiences, this British initiative offers a good opportunity to reflect on public safety models, police-community relations and mechanisms to ensure transparency and accountability. We will follow its progress in Security Notes.

_____

Aquest apunt en català / Esta entrada en español / Post en français

How does Meta dismantle organised fraud centres?

Scams known as pig butchering have experienced a significant increase in recent years, becoming a global threat that combines elements of romance and investment fraud. These scams involve criminals establishing trusting relationships with victims and then persuading them to invest in fraudulent schemes, often involving cryptocurrencies. Losses to victims can amount to hundreds of thousands of dollars.

Faced with this alarming situation, Meta – the parent company of Facebook, Instagram and WhatsApp – has intensified its efforts to combat these fraudulent practices. According to a report published in November 2024, Meta has removed more than two million accounts linked to scam centres located in Myanmar, Laos, Cambodia, the United Arab Emirates and the Philippines. These actions are part of a broader strategy to dismantle the criminal organisations responsible for these scams.

What is pig butchering?

The term pig butchering refers to a tactic in which fraudsters build trusting relationships with their victims, often through social networks, dating apps or messaging, with the ultimate goal of convincing them to make investments in fraudulent platforms. These investments are often related to cryptocurrencies, and victims are lured with promises of high returns. Once victims have invested significant sums, the fraudsters disappear with the funds.

In Catalonia, as in the rest of the world, the impact of pig butchering scams is widely known. The Mossos d’Esquadra have detected an increase in complaints related to investment fraud initiated through social networks or instant messaging applications. The victims, often people with a vulnerable profile or experiencing loneliness, are emotionally seduced by fraudsters operating from abroad who use digital tools and linguistic impersonation to pass themselves off as local residents. The digital nature of these crimes often complicates investigations, making it necessary to strengthen international cooperation and the technological training of Catalan law enforcement agencies.

Who is behind these scams?

Many of these operations are run by transnational criminal organisations operating from fraud centres in Southeast Asia. These centres often rely on forced labour: victims of human trafficking are compelled to participate in fraudulent activities under threat and coercion. Such operations have proliferated in countries such as Myanmar, Cambodia, Laos and the Philippines, taking advantage of weak government control and local corruption.

Meta’s approach to combating these scams

Meta has taken several measures to address this problem:

  • Policy against Dangerous Organizations and Individuals (DOI): under this policy, Meta designates these criminal organisations as dangerous, banning them from its platforms and implementing enforcement tools to detect and remove related content.
  • Collaboration with the authorities: Meta works closely with law enforcement agencies globally to share information about these criminal operations, facilitating investigations and legal action against those responsible.
  • Cooperation with other technology companies: the company collaborates with other companies in the sector to share information on threats and develop joint strategies to combat these scams. For example, Meta worked with OpenAI to detect and disrupt fraudulent activities that used artificial intelligence tools to generate misleading content. 

How to protect yourself from these scams

In this context, it is essential that public institutions and private actors in Catalonia collaborate to promote cybersecurity and citizen prevention. Digital awareness campaigns, training in schools and immediate victim services are key tools for dealing with a threat that combines emotional manipulation with financial engineering. And, at the same time, it is necessary to report and highlight how these international fraud networks operate with impunity thanks to the lack of regulation in the global digital environment.

It is critical that users are aware of the tactics used in pig butchering scams and take steps to protect themselves:

  • Be wary of unsolicited messages: if you receive messages from strangers through social media, dating apps, or messaging, be cautious and avoid sharing personal or financial information.
  • Verify investment opportunities: before investing money, it is advisable to thoroughly investigate the investment platform or opportunity. Promises of high returns with little risk are worth being wary of.
  • Don’t transfer money to people you don’t know personally: avoid sending money or financial information to individuals you’ve only met online.
  • Use online security tools: keep your devices updated and use security software to protect against potential threats.

_____

Aquest apunt en català / Esta entrada en español / Post en français

How to ensure that AI is used ethically in policing

With the growing recognition of the opportunities for artificial intelligence (AI) to significantly reduce the time and costs associated with some policing activities, the impetus for wider use of this technology is notable.

In the latest in a series of articles on AI in policing, Matt Palmer, Public Safety Product Manager at NEC Software Solutions, explores the key issues in ensuring that the technology is used ethically and transparently.

Artificial intelligence (AI) is already an integral part of the police toolkit. The police service is increasingly using AI-enabled technology to save time, prioritise resources and increase efficiency.

AI is starting to make a real difference to the police by processing large volumes of data much faster than a human could. This includes some of the essential police functions, such as live classification of incoming calls and automation of data quality assurance work.

We are also seeing more cases where AI is able to support police officers’ decision making by predicting outcomes based on patterns. Examples include the use of supervised machine learning to assess factors such as the likelihood that an individual will offend, reoffend or become vulnerable to victimisation; these applications have proven the most controversial with the public.

As the law enforcement sector steps up its use of AI, people are increasingly aware of the risks of relying too heavily on technology in decisions that can profoundly affect people’s lives. It is therefore critical to establish approaches in which police use, and are seen to use, AI to the highest ethical standards.

Perhaps one of the most widely expressed concerns about AI in policing is the risk of bias and discrimination. All AI systems learn from the initial training data, and if there is a bias in this data, it will be integrated into the AI models, perpetuating the bias and influencing decision making.

If predictive tools are trained on historical arrest data where human biases exist, the algorithms will replicate discriminatory patterns, such as negative racial profiling and targeting of minority communities.

To prevent bias from infecting AI models, developers must use diverse and representative data sets to train AI systems and continuously test these systems for discriminatory patterns.
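Such testing can begin with very simple group-level metrics. The sketch below is illustrative only (the metric choice, data and threshold are assumptions, not drawn from the article): it computes the disparate impact ratio, comparing how often a model flags individuals from different demographic groups, where 1.0 means parity.

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive (flagged) predictions within each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group flag rate (1.0 = parity)."""
    rates = positive_rate_by_group(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: a model that flags group "b" twice as often as group "a".
preds  = [1, 0, 0, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact_ratio(preds, groups))  # 0.25 / 0.5 = 0.5
```

Running such a check continuously on live predictions, rather than once at deployment, is what turns it into the ongoing monitoring the article calls for.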

Without a doubt, AI will play an increasingly important role in police work. To ensure that this role is applied ethically and responsibly, the final say in any decision must rest with human, not artificial, intelligence.

The problem with AI systems is that they are not infallible. They can produce false positives, such as incorrectly identifying innocent people as suspects. Similarly, they can deliver false negatives and fail to identify the real offenders. Without human judgement, AI could lead to miscarriages of justice.
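These two failure modes are conventionally quantified as the false positive rate and false negative rate of a classifier. The figures below are invented purely for illustration:

```python
def error_rates(tp, fp, tn, fn):
    """Error rates from confusion-matrix counts:
    false positive rate = share of innocent people wrongly flagged,
    false negative rate = share of real offenders the system misses."""
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical screening run: 80 offenders correctly flagged, 20 missed,
# 50 innocent people wrongly flagged, 900 correctly cleared.
fpr, fnr = error_rates(tp=80, fp=50, tn=900, fn=20)
print(f"false positive rate: {fpr:.1%}")  # 5.3%
print(f"false negative rate: {fnr:.1%}")  # 20.0%
```

Even a seemingly low false positive rate translates into many wrongly flagged individuals at population scale, which is precisely why human review of each AI-generated lead matters.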

_____

Aquest apunt en català / Esta entrada en español / Post en français

Rethinking police legitimacy from a queer perspective

Researcher Ben Scott, together with Dr. Naomi Pfitzner and Professor Kate Fitz-Gibbon of Monash University in Australia, has published a thought-provoking study on how police legitimacy is understood in relation to historically marginalised groups. The article, entitled Spatial, Temporal, and Visible: Queer People’s Perceptions of Police Legitimacy, proposes a “queering” of traditional theories of police legitimacy: a critical questioning of their basis and applicability beyond cisheteronormative contexts.

The study starts from a clear finding: queer people – gay, lesbian, bisexual, trans and other non-normative identities – have often been relegated to the periphery of criminology, mentioned anecdotally with the usual note of “lack of data”. To counter this invisibilisation, the team surveyed nearly 150 queer people in the state of Victoria, Australia, to analyse their perceptions of police legitimacy.

The results, which combine quantitative data and qualitative responses, reveal a number of tensions: most participants recognise the formal authority of the police and express an obligation to obey the laws, but this obedience does not come from a normative or value alignment, but from a sense of social obligation, fear or inertia.

One of the most relevant contributions of the study is its insistence on the contextual nature of police legitimacy. It depends not only on time and space, but also on historical baggage and identity visibility. Those most familiar with the history of conflict between Victoria Police and the queer community – such as nightclub raids or the public exposure of individuals in the media – showed a much more deep-seated distrust of the police institution.

Another significant finding refers to the role of queer visibility. Participants’ experiences differ depending on how they are perceived by agents: people who “pass” as cisheterosexual often report fewer negative interactions, while those with more visible gender expressions or sexual orientations report discriminatory or hostile experiences.

Relations between police forces and LGBTIQ+ groups have historically been complex, both in Catalonia and in many other places around the world. These relationships often carry with them a past marked by discrimination, institutional violence and a lack of mutual trust.

Although this study was carried out in Australia, its conclusions are highly relevant to Catalonia. Despite regulatory advances – such as Law 11/2014 to guarantee the rights of LGBTI people and to eradicate homophobia, biphobia and transphobia – there is still a long way to go to ensure a relationship of trust between law enforcement and sexual and gender diversity.

In recent years, efforts have been made to improve this relationship, such as the creation of specialised hate crime units or specific training in diversity within the Mossos d’Esquadra. But these measures must be critically evaluated on an ongoing basis, especially on the basis of the experiences of the people affected.

The Australian study suggests the need to develop alternative avenues of reporting and support beyond the traditional police system, especially for groups that do not trust it. This reflection can inspire initiatives in our country that reinforce community services, specialised care offices or psychological and legal support tools adapted to LGBTIQ+ realities.

Rethinking police legitimacy from a queer perspective is not only a theoretical exercise: it is a tool to build a more inclusive, equal and just security. To this end, it is necessary to listen to the voices that have often been silenced, acknowledge the wounds of the past and commit to a profound transformation of institutions. Only in this way can we move towards a society in which everyone, regardless of gender identity or expression, can feel safe and trusted in the eyes of law enforcement.

Link: Spatial, Temporal, and Visible: Queer People’s Perceptions of Police Legitimacy

_____

Aquest apunt en català / Esta entrada en español / Post en français

Hackers use AI agents to commit crimes

AI agents are becoming increasingly popular among hackers to exploit online bank accounts. By 2027, it is estimated that they will reduce the time to take over an account by 50%.

So reports journalist Anton Mous at cybernews.com.

This is the stark reality described by the U.S. research and advisory firm Gartner in its latest report, Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity.

AI agents are becoming useful tools for attackers seeking to break the protections on online accounts. Vendors will therefore need to introduce monitoring tools to analyse interactions involving AI agents. This also means that cybersecurity companies should accelerate the move towards passwordless, phishing-resistant multi-factor authentication (MFA).

Account takeover remains a persistent attack vector because weak authentication credentials, such as passwords, are collected by a variety of means, such as data breaches, phishing, social engineering and malware. Attackers leverage bots to automate a barrage of login attempts to various services in the hope that the credentials have been reused across multiple platforms.
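The credential-stuffing pattern described above has a recognisable signature: many failed logins from one source, spread across distinct accounts. A minimal detection sketch (threshold, field names and log format are illustrative assumptions, not any vendor's actual product):

```python
from collections import Counter

def flag_credential_stuffing(login_events, max_failures=10):
    """Flag source IPs with many failed logins spread across *distinct*
    accounts -- the signature of a bot replaying leaked credential pairs.
    A single user repeatedly mistyping one password is NOT flagged.
    login_events: iterable of (source_ip, account, success) tuples."""
    failures = Counter()
    accounts = {}
    for ip, account, success in login_events:
        if not success:
            failures[ip] += 1
            accounts.setdefault(ip, set()).add(account)
    return {ip for ip, n in failures.items()
            if n >= max_failures and len(accounts[ip]) > 1}

# Hypothetical log: one bot spraying twelve accounts, one user with a typo.
events = [("203.0.113.7", f"user{i}", False) for i in range(12)]
events.append(("198.51.100.2", "alice", False))
print(flag_credential_stuffing(events))  # {'203.0.113.7'}
```

Heuristics like this are exactly what AI agents are expected to erode, since agents can distribute attempts across IPs and pace them below thresholds; hence the push towards phishing-resistant MFA, which removes the reusable password altogether.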

Technology-enabled social engineering will also pose a significant threat to corporate cybersecurity in the near future, including audio and video deepfakes. By 2028, 40% of all social engineering attacks are expected to target senior executives as well as the general workforce, Gartner predicts.

Gartner notes that although only a few cases have been reported so far, those that have occurred have resulted in significant economic damage to the affected parties. Therefore, these incidents should be seen as a wake-up call and a signal that companies should increase their efforts to protect their digital environment.

Organisations will need to stay abreast of the market and adapt procedures and workflows to try to better resist attacks leveraging counterfeit reality techniques.

Educating employees about the evolving threat landscape through specific training on deepfake-based social engineering is a key step, according to Gartner analyst Manuel Acosta.

_____

Aquest apunt en català / Esta entrada en español / Post en français

Criminal networks exploit legal businesses to strengthen control over the economy

Europol presented its latest report a few weeks ago, Leveraging legitimacy: How the EU’s most threatening criminal networks abuse legal business structures, which examines how criminal networks fraudulently use legal business structures to strengthen their power and thus expand their criminal operations. This paper builds on Europol’s April 2024 study, Decoding the EU’s Most Threatening Criminal Networks, which identified the abuse of legal business structures as a central feature of these networks. At the request of the Justice and Home Affairs Council of the European Union, Europol conducted a detailed assessment to provide more information on how, why and where this abuse occurs.

The key findings of the Europol report cover the following areas:

  • The types of businesses most prone to abuse.
  • Organised crime activities enabled by legal businesses.
  • Methods used by criminals to exploit these structures.

The report identifies the abuse of legal businesses as a key driver of organised crime. Legal trade structures are integral to laundering criminal proceeds, distorting economic competition, transporting illicit goods and expanding the influence of criminal networks. Cash-intensive businesses are exploited to protect money laundering activities, creating unfair advantages that undermine legitimate businesses.

The findings also highlight how criminal networks use corruption to deepen their control over local communities, fostering economic dependencies that shield illicit activities from law enforcement.

Key findings

• A common threat vector: 86% of the most threatening criminal networks in the EU exploit legal business structures. Criminals use these frameworks to disguise their activities, facilitate money laundering and expand their operations while evading law enforcement.

• Criminal ownership and infiltration: high-level infiltration or outright ownership of legal businesses allows criminal networks to mix legitimate and illicit activities seamlessly. Some companies are set up exclusively as fronts for criminal operations, while others are acquired to serve long-term criminal objectives.

• Cross-border abuse: although the abuse of legal business structures is a global phenomenon, the majority (70%) of the exploited or infiltrated companies used by EU criminal networks operate within the EU or its neighbouring countries. A significant share of EU criminal networks, however, also rely on legal structures located elsewhere: criminally infiltrated legal business structures are found in almost 80 countries around the world.

• Insider threats: employees, managers or executives in legitimate locations are increasingly exploited by criminal networks to gain access, knowledge and influence over operations.

• Facilitating multiple crimes: criminally controlled companies often serve several networks simultaneously, enabling various forms of serious and organised crime.

The findings will inform future operational activities but will also serve as input for discussions on administrative and preventive measures.

_____

Aquest apunt en català / Esta entrada en español / Post en français

Unprecedented danger: sadistic online gangs threaten teenagers

A recent report by the UK’s National Crime Agency (NCA) has highlighted a growing and alarming threat: sadistic online gangs that exploit and abuse minors. These groups, operating in the anonymity of the web, pose an “unprecedented risk” to teenagers, according to the NCA.

The report highlights that sadistic online gangs have become more sophisticated in their methods of exploitation. They use various social media platforms and messaging applications to contact minors, gain their trust and ultimately abuse them. These gangs may be made up of individuals from different parts of the world, making them difficult to identify and prosecute.

What are these sadistic gangs?

These gangs are made up of individuals who revel in the torture and humiliation of others, especially vulnerable teenagers. They use online platforms, forums and social networks to connect, share abusive material and coordinate attacks.

These gangs are not simply groups of trolls or cyberbullies. They are structured organisations, with defined hierarchies and roles, dedicated to the psychological and, in extreme cases, physical torture of their victims. They use sophisticated techniques to hide their identity and track their prey:

  • Anonymity and encryption: They use Tor networks, VPNs and other tools to hide their IP addresses and locations.
  • Social engineering: They manipulate their victims through the creation of false identities and the generation of trusting relationships.
  • Sharing of abusive material: They exchange videos and images of torture and humiliation, creating a vicious cycle of violence and abuse.
  • Coordination of attacks: They plan and execute coordinated attacks, both online and offline, to maximise the damage to their victims.

Their activities include:

  • Online sexual abuse: Sharing and production of child sexual abuse material.
  • Extreme violence: Promotion of and incitement to acts of physical and psychological violence.
  • Public humiliation: Exposure and dissemination of content that is humiliating for victims.
  • Extortion and blackmail: Threats to obtain money or compromised material.

The scope of the problem

The NCA report reveals that these gangs are on the rise and that their sophistication and reach are increasing. The COVID-19 pandemic exacerbated the problem, as adolescents spent more time on the Internet, increasing their vulnerability to these predators.

Consequences include:

  • Psychological trauma: Anxiety, depression, post-traumatic stress disorder and suicidal thoughts.
  • Social isolation: Shame, fear and distrust that hinder social relationships.
  • Academic difficulties: Concentration and school performance problems.
  • Substance abuse: Attempt to relieve emotional pain through alcohol and drugs.
  • Self-harm: As a coping mechanism for emotional pain.

The NCA is working with other law enforcement agencies and international organisations to combat this threat. Strategies are being implemented to improve detection and prosecution of these groups, as well as to educate minors and their families about the risks and how to protect themselves.

_____

Aquest apunt en català / Esta entrada en español / Post en français

The five big security challenges of Artificial General Intelligence

Last February, the research organisation RAND published a report authored by Jim Mitre and Joel B. Predd, warning that the emergence of Artificial General Intelligence (AGI) is a real possibility that the U.S. national security community should be taking seriously.

The report identifies five major challenges that AGI may pose to U.S. national security: (1) the development of wonder weapons, (2) systemic shifts in power, (3) the ability of non-experts to create weapons of mass destruction, (4) the emergence of artificial entities with agency, and (5) widespread instability.

All of this poses challenges for strategists and policy and security decision-makers as they try to anticipate the threats and opportunities that could arise both during the process of achieving AGI and once it materialises.

A new technological Manhattan Project?

In 1938, the splitting of the atom started the nuclear arms race. Now, advances towards Artificial General Intelligence have raised similar fears in the national security sphere. Will it be the next strategic paradigm shift? And, if so, what threats does it pose to global security?

Although AGI is still hypothetical, its plausibility demands a strategic response from states. The RAND study identifies five major issues that could emerge with the development of AGI:

  • Wonder weapons and first-move advantage

The great fear is that AGI could uncover a revolutionary technological breakthrough, enabling the development of unstoppable cyberweapons, hyper-advanced autonomous systems or perfectly optimised military strategies. This could confer a massive advantage on the first nation to gain control of it.

  • Systemic shift in global power

AGI could alter the balance of power between nations, not necessarily through weapons, but through its ability to improve productivity, accelerate scientific discovery or redefine global economic dynamics. This could lead to a new world order in which the economies best adapted to AGI consolidate their dominance.

An added risk is that the concentration of AGI development in a few private companies could give them unprecedented power, altering the traditional relationship between states and corporations.

  • Empowerment of non-experts in weapons of mass destruction

If AGI can facilitate the creation of highly lethal biological weapons or cyberweapons, global security will be severely compromised. Current systems have already demonstrated worrying capabilities in this area, and AGI could amplify the risk exponentially.

  • Artificial entities with agency

Loss of control over AGI systems could lead to the creation of autonomous artificial entities capable of acting independently. This could pose a risk to critical decision-making in sectors such as defence, economics and critical infrastructure management.

  • Strategic instability

Before AGI fully arrives, the technology race between states and corporations may provoke tensions similar to those of the Cold War. The perception that an adversary is on the verge of gaining a decisive advantage could trigger pre-emptive reactions, even armed conflict.

Towards a resilient strategy

The United States and its allies have initiated measures to maintain leadership in AI, but these may prove insufficient if AGI develops in a sudden or disruptive manner.

AGI could redefine the future of global security. This is not just a technical challenge, but a strategic revolution that requires an intelligent and proactive response. The decisions made today will determine whether AGI becomes a stabilising force or an unprecedented threat to humanity.

LINK: https://www.rand.org/pubs/perspectives/PEA3691-4.html

_____

Aquest apunt en català / Esta entrada en español / Post en français

Artificial intelligence and biosecurity, a double-edged sword

Artificial intelligence (AI) is transforming several fields and biotechnology is no exception. A recent report by the U.S. National Academies of Sciences, Engineering, and Medicine highlights the potential of AI to improve biosecurity but also warns of the risks of its misuse.

The beneficial potential of AI in biosecurity

AI can be a powerful tool for public health. AI models can analyse large amounts of data to help design medical countermeasures that prevent, treat and mitigate health threats, such as drug discovery. This may accelerate the development of vaccines and treatments for infectious diseases, both naturally occurring and those caused by intentional acts.

The risks of misuse of AI in biotechnology

However, the report also warns that AI-enabled biological tools could be used for harmful purposes. For example, AI could design new biological agents with pandemic potential or modify existing viruses or bacteria to make them more harmful or transmissible.

Current capabilities and limitations of biological AI tools

The report assesses the current capabilities of biological AI tools and the extent to which they could amplify the benefits or risks of biotechnology. Currently, no biological AI tool is capable of designing an entirely new virus, and the capabilities of existing tools to modify an infectious agent with potential for epidemic- or pandemic-scale consequences are limited.

The report examines three types of harmful applications:

Design of biomolecules, such as toxins: Available biological AI tools can design and redesign toxins using different amino acids. However, the scale of potential threats would likely be limited to the local level.

Modification of existing pathogens to make them more virulent: Biological AI tools can model very specific characteristics that predict virulence-related traits.

Design of a completely new virus: No currently available biological AI tool has the capability to design a new virus.

Recommendations for biosecurity in the age of AI

The report offers several recommendations to mitigate the risks of AI misuse in biotechnology:

  • Continuous monitoring and assessment: Government agencies should continually assess and mitigate the risks of AI-enabled biological tools being misused.
  • Strategic data collection: Considering the importance of data for AI model training, the report urges strategic collection of AI-ready biological datasets.
  • Investment in data infrastructure: Building new national data resources and other forms of infrastructure to support AI should be a research priority for the United States to maintain competitiveness and scientific innovation.
  • Investment in research and development: The Departments of Defence, Health and Human Services, Energy, and other U.S. federal agencies should continue to invest in research, data infrastructure and high-performance computing to drive advances in AI and also control potential risks.

Conclusion

AI offers great potential to improve biosecurity and protect us from biological threats. However, it is essential to be aware of the risks of misuse and take proactive measures to mitigate them. Through continuous monitoring, strategic data collection, infrastructure investment and research and development, we can harness the power of AI to make the world a safer place.

Link: https://www.nationalacademies.org/news/2025/03/ai-tools-can-enhance-u-s-biosecurity-monitoring-and-mitigation-will-be-needed-to-protect-against-misuse

_____

Aquest apunt en català / Esta entrada en español / Post en français

Cybersecurity situation in the European Union

This ENISA report examines the state of cybersecurity in the European Union in 2024. Produced in collaboration with the NIS Cooperation Group and the European Commission, it provides a data-driven overview of cybersecurity at the EU, national and societal levels.

Topics such as the cyber threat landscape, maturity of cybersecurity capabilities, cyber crisis management, supply chain security, and cybersecurity awareness and skills are addressed.

It also includes policy recommendations to improve cybersecurity in the EU, such as strengthening technical and financial support, revising the EU Incident Response Plan, and addressing the cybersecurity skills shortage.

The report includes several policy recommendations to improve cybersecurity in the European Union:

1. Strengthen technical and financial support: It is recommended that further technical and financial support be given to the competent authorities and entities within the scope of the NIS2 Directive to ensure a harmonised and coherent implementation of the EU cybersecurity policy framework.

2. Review the EU Incident Response Plan: It is suggested to review the Incident Response Plan for large-scale cyber incidents, taking into account the latest developments in EU cybersecurity policies, to promote the harmonisation and optimisation of cybersecurity and to strengthen national and EU capabilities.

3. Strengthen the EU’s cyber workforce: Implement the Cybersecurity Skills Academy, establish a common EU approach to cybersecurity training, identify future skills needs and develop a European certification scheme for cybersecurity skills.

4. Address supply chain security: Conduct coordinated risk assessments at the EU level and develop a horizontal policy framework for supply chain security, focusing on both public and private sector cybersecurity challenges.

5. Promote a unified approach: Build on existing policy initiatives and harmonise national efforts to achieve a high common level of cybersecurity and cyber hygiene awareness among professionals and citizens, regardless of their demographics.

_____

Aquest apunt en català / Esta entrada en español / Post en français