
Need a compliance guide for the Cyber Resilience Act (CRA)? Here’s a 15-step roadmap to prepare you for the EU’s most important cybersecurity regulation for products with digital elements.

This guide has been developed to support entities whose products are affected by the CRA’s obligations in their compliance journey, providing them with a clear and practical resource for navigating the complex requirements of the regulation. You’ll find explanations and practical advice, each time backed up by the official text of the Cyber Resilience Act, which will be quoted several times.

This compliance guide is primarily aimed at those directly involved in digital product security, such as security managers, Chief Technology Officers (CTOs) and Chief Product Officers (CPOs), architects and product teams, as well as legal teams and GRC (Governance, Risk and Compliance) managers.

In short, all those who, in the course of their work, are responsible, at one time or another, for the digital security of regulated products placed on the European market.

You can continue reading this guide as an article, or download an enhanced PDF version below for easier reading.

Cyber Resilience Act (CRA): Step-By-Step Guide to Compliance

An 80-page compliance guide to walk security managers and legal departments through the 21 essential requirements of the CRA. No mumbo jumbo, just useful, actionable information.

GET THE E-BOOK!

A Compliance Guide Intended to Be Up-To-Date and Sustainable

This guide also aims to provide an up-to-date reading of the CRA, as close as possible to the final text. The Cyber Resilience Act has undergone numerous revisions over the course of its legislative journey, and the multitude of resources available on the subject, the oldest of which are often obsolete, can make it difficult to understand and apply the law.

This publication is therefore based on the most recent version of the CRA to date, i.e. the text adopted at first reading by the European Parliament on March 12, 2024. It is available on the European Parliament website, and here in English in PDF format.

We have deemed this document to be reliable as it’s highly unlikely that major changes — if any — will be introduced to the regulation before it is officially adopted. We therefore hope to offer a guide that is as useful and durable as possible.

It should also be stressed that this document is the result of our own work, and cannot replace the necessary due diligence required of the entities concerned, nor the advice of a qualified legal expert. In other words, it’s up to you to ensure your own compliance, and this document shouldn’t be considered as anything more than what it is: an exhaustive and rigorous guide, but one with no legal value whatsoever.

In the first few chapters, we’ll cover a few general aspects, such as the nature and objectives of the Cyber Resilience Act, its effective date and the fines incurred by offenders. If you’re already familiar with these points, you can skip ahead to Chapter 1.

What Is the Cyber Resilience Act (CRA)?

Let’s start with a reminder, which can do no harm. The Cyber Resilience Act (CRA), introduced under interinstitutional procedure 2022/0272(COD), is a piece of European Union legislation that governs the cybersecurity of products with digital elements distributed on its territory.

This is a major text in terms of European digital resilience, which directly complements other legislative spearheads such as the AI Act (Artificial Intelligence Act) and the NIS2 Directive — for which we have also written a comprehensive compliance guide.


NIS2 Directive: Step-by-Step Guide to Compliance

A 40-page guide to walk CISOs, DPOs and legal departments through the directive. No mumbo jumbo, only useful and actionable insights.

GET THE E-BOOK!

The CRA represents a major turning point, insofar as it formalizes the responsibility of manufacturers, and in some cases importers and distributors, for the digital security of the products they put on the market. They now have no choice but to think about and guarantee the digital security of their products throughout their entire lifecycle, from the earliest stages of design to the end of the support period.

The Objectives of the CRA

The aim of the CRA is obviously to protect consumers, but also to strengthen the Union’s overall level of resilience. Making digital products more secure also means reducing the risks for all users of these products, whether they are private individuals or key entities such as those regulated by NIS2 — hospitals, banks, drinking water production plants, postal services and so on.

To this end, the Cyber Resilience Act establishes:

  1. Rules for making products with digital elements available on the market, in order to guarantee their cybersecurity;
  2. Essential requirements for the design, development and production of these products;
  3. Essential requirements for the vulnerability management processes put in place by manufacturers;
  4. Rules and provisions for market surveillance and enforcement.

The CRA is, of course, mandatory, and compliance with it is a prerequisite for CE marking of regulated products, as well as for their distribution on the European market. It should also be stressed that the CRA is not a toothless regulation: it comes with coercive measures such as heavy fines.

A Defensive Vision of Cybersecurity, in Line With the Cybersecurity Act

Before going any further, we feel it is important to clarify the notion of “cybersecurity” as defined by the CRA.

Article 3 of the Cyber Resilience Act specifies that the term “cybersecurity” is understood in the same sense as in Article 2 of Regulation 2019/881, which is none other than the Cybersecurity Act (CSA), i.e.:

“‘Cybersecurity’ means the activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats.” – Cybersecurity Act, Article 2

The CRA therefore adopts a defensive definition of cybersecurity: it’s about defending products, protecting them from potential threats. So keep in mind that “cybersecurity” will hereafter be synonymous with defensive posture — even if offensive approaches such as penetration testing will play their part, notably to assess product security and the effectiveness of defensive measures.

And What About Digital Resilience?

Some may be surprised by this semantic choice, at a time when the term “digital resilience” is flourishing in various European legislations, from the Cyber Resilience Act (CRA) to the Digital Operational Resilience Act (DORA) governing the financial sector.

Indeed, resilience is a concept that goes far beyond the simple protection of assets. It’s not just about defense, but also about resistance and business continuity, even under heavy fire. But it’s important to understand that, unlike NIS2 or DORA, the CRA regulates products, not entities. This distinction is fundamental.

We can expect a bank or a nuclear power plant to be resilient, i.e. capable of ensuring the continuity of its activities and services even in the event of an attack — which is why it is useful to have a Disaster Recovery Plan (DRP) and a Business Continuity Plan (BCP) — in order to spare the European Union from disastrous systemic effects. But the same cannot reasonably be expected of a smartwatch or a baby monitor.

On the other hand, we can expect an IoT product to be operational so as to fulfill its function, and not to be compromised so as not to endanger its users. From the attacker’s point of view, compromising a smart device is never an end in itself, but always a way of reaching the target: the user, whether a private individual or a critical organization.

By improving the security of products with digital elements, the CRA contributes to strengthening the digital resilience of all users, and thus of the entire European ecosystem. With the Cyber Resilience Act, defensive cybersecurity is at the service of digital resilience.

When Will the CRA Come into Force?

Here, we need to refer to Article 71 of the regulation to find the answer. Strictly speaking, the CRA enters into force shortly after its publication in the Official Journal of the European Union, but its obligations will apply 36 months (3 years) after that publication.

Adoption and publication of the CRA is expected in the course of 2024, resulting in a compliance deadline of 2027 for regulated products. It should be remembered that the CRA is a European regulation, and as such will be applicable as it stands in all member states (unlike a directive, such as NIS2, which must be transposed into the national law of each country).

Mandatory Notification of Incidents and Exploited Vulnerabilities as of 2026

There are two exceptions to this three-year deadline, clearly set out in Article 71 and recital 127.

Firstly, the application date is set at 21 months for the obligations to report severe incidents and actively exploited vulnerabilities — two concepts we’ll come back to later — to ENISA (the European Union Agency for Cybersecurity) and national CSIRTs (Computer Security Incident Response Teams). With the CRA scheduled for 2024, notifications to the authorities will be mandatory from 2026.

The second exception applies to the provisions concerning the notification of conformity assessment bodies, which will come into effect 18 months after publication of the CRA. We won’t dwell on this point, as it concerns auditors rather than manufacturers.
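These staggered deadlines can be sketched with a little date arithmetic. The publication date below is purely hypothetical, and real legal deadlines run from the exact day of publication rather than the first of a month; this is only an illustration of the 36/21/18-month offsets in Article 71:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a number of months (day snapped to the 1st
    as a simplification -- real deadlines run from the exact day)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, 1)

# Hypothetical publication date in the Official Journal.
publication = date(2024, 11, 1)

full_application = add_months(publication, 36)   # all CRA obligations apply
reporting_duties = add_months(publication, 21)   # incident/vulnerability reporting
notified_bodies = add_months(publication, 18)    # notification of assessment bodies

print(full_application)   # 2027-11-01
print(reporting_duties)   # 2026-08-01
print(notified_bodies)    # 2026-05-01
```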

Penalties and Fines Applicable Under the Cyber Resilience Act

The Cyber Resilience Act intends to enforce compliance, so it comes with penalties for offenders. And as always, the best deterrent is to hit where it hurts: the wallet. Article 64 of the CRA provides for a number of administrative fines, in addition to any other corrective or restrictive measures decided by the market surveillance authorities. They can also order the withdrawal of a product from the market if this is necessary for security reasons.

Up to €15M or 2.5% of Worldwide Sales

The amount of the fine depends mainly on two factors: the organization at fault, and the nature of the non-compliance. A start-up will not be penalized in the same way as a multinational, and a product security-related failure will be more heavily sanctioned than an administrative one. It’s all quite logical.

All the compliance elements below may seem rather vague if you’re not (yet) familiar with the regulation, but rest assured, we’ll be explaining them throughout this guide.

For failure to comply with the essential requirements of the regulation, with incident and vulnerability notification obligations, or with any other obligations incumbent upon them, manufacturers are liable to a fine of up to €15 million or 2.5% of total worldwide annual turnover for the previous financial year, whichever is higher.

Authorized representatives, importers, distributors, assessment bodies and their subcontractors who breach their obligations are liable to fines of up to €10 million or 2% of total worldwide annual turnover, whichever is higher.

This second scale also applies to all players when non-compliance concerns obligations relating to the EU declaration of conformity, technical documentation, CE marking rules, conformity assessment or access to data and documentation.

Finally, the provision of inaccurate, incomplete or misleading information to notified bodies or market surveillance authorities may be subject to a fine of up to €5 million or 1% of total worldwide annual turnover, whichever is higher.
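The “whichever is higher” rule behind these three scales is easy to misread, so here is a short sketch of how the applicable cap is determined. The turnover figures are made up for illustration:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine under Article 64: the fixed cap or the
    percentage of total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Manufacturer breaching essential requirements: €15M or 2.5% of turnover.
print(fine_cap(1_000_000_000, 15_000_000, 0.025))  # large firm: 25,000,000.0
print(fine_cap(100_000_000, 15_000_000, 0.025))    # smaller firm: 15,000,000.0

# Importer/distributor scale: €10M or 2%; misleading information: €5M or 1%.
print(fine_cap(100_000_000, 10_000_000, 0.02))     # 10,000,000.0
print(fine_cap(100_000_000, 5_000_000, 0.01))      # 5,000,000.0
```

For a large multinational the percentage dominates, while for a smaller company the fixed cap applies; the deterrent scales with the size of the offender.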

The Exceptions: Open-Source Software, Microenterprises and SMEs

Article 64.10 of the Cyber Resilience Act sets out two exceptions to these sanction regimes.

Firstly, manufacturers considered to be micro or small enterprises cannot be financially sanctioned for failure to meet notification deadlines for severe incidents and actively exploited vulnerabilities (see Chapter 10).

Secondly, open-source software stewards cannot be subject to financial penalties, regardless of the violation of the Cyber Resilience Act. 

Having said all this, we can now move on to the heart of this guide, namely the different steps to be taken to achieve compliance.

1. Check if Your Products Are Regulated by the Cyber Resilience Act

As a manufacturer, the first thing to do is to find out which products are regulated by the Cyber Resilience Act. In this respect, Article 2 couldn’t be clearer:

“This Regulation applies to products with digital elements made available on the market, the intended purpose or reasonably foreseeable use of which includes a direct or indirect logical or physical data connection to a device or network.” – CRA, Article 2.1

Yes, the CRA concerns a lot of products; that’s exactly why it’s a major piece of cybersecurity legislation. And if you still have any hope of escaping it, Article 3 defines “products with digital elements” — which will be shortened to “products” in the rest of this guide.

“‘Product with digital elements’ means a software or hardware product and its remote data processing solutions, including software or hardware components being placed on the market separately.” – CRA, Article 3

This definition is important, as it confirms that all software solutions that go hand in hand with a regulated product also fall within the scope of the CRA. SaaS platforms and mobile apps accompanying IoT products, such as those paired with smart watches and scales, are therefore subject to the same obligations. Conversely, a SaaS platform that isn’t linked to any product is not covered by the CRA (although it may be regulated by the NIS2 directive).

The List of Exceptions

Article 2 lists some rare exceptions, which should be borne in mind. The following are excluded from the scope of the CRA:

  • Professional medical devices covered by regulations (EU) 2017/745 and (EU) 2017/746;
  • Motor vehicles and their trailers, and their systems, components and separate technical units, covered by regulation (EU) 2019/2144;
  • Civil aviation systems and marine equipment, respectively governed by regulations (EU) 2018/1139 and 2014/90/EU;
  • Digital elements developed or modified exclusively for security or national defense purposes.

The Four Product Categories Regulated by the CRA

The Cyber Resilience Act differentiates four categories among the products it regulates:

  • The default category, which is not mentioned as such, but exists de facto. It includes all products that meet the general definitions given above;
  • Important products of Class I;
  • Important products of Class II;
  • Critical products.

It’s very important to understand this classification, as it determines, among other things, the conformity assessment procedure required for each product. The more critical the product, the more rigorous the assessment will be — meaning the involvement of a third-party auditor.

The four product categories regulated by the Cyber Resilience Act (CRA)

Important Products

Important products are divided into two classes: Class I and Class II, both of which are detailed in Annex III of the regulation, which we’ll examine below.

According to Article 7 of the CRA, a product is considered important if it meets at least one of the following two criteria:

  • The product primarily performs functions critical to the cybersecurity of other products, networks or services, including securing authentication and access, intrusion prevention and detection, endpoint security or network protection;
  • The product performs a function which carries a significant risk of adverse effects in terms of its intensity and ability to disrupt, control or cause damage to a large number of other products or to the health, security or safety of its users through direct manipulation, such as a central system function, including network management, configuration control, virtualisation or processing of personal data.

Obviously, it’s not up to manufacturers to decide whether their products are important or not. The CRA drew up an exhaustive, albeit general, list. Also, Article 7 requires the European Commission to specify the technical descriptions of important and critical product categories within 12 months of publication of the regulation.

It should also be noted that the European Commission can modify Annex III as it sees fit, and thus add, delete or move a product category from one class to another. In such cases, the general rule is to observe a 12-month transition period before the new rules apply.

Finally, the CRA points out that a product which integrates an important product is not automatically important in itself, and is therefore not necessarily subject to the same conformity assessment obligations.

List of Important Products of Class I

Important Class I products are detailed in Annex III of the CRA. The exact list is reproduced below.

Important products falling under Class I are:

  1. Identity management systems and privileged access management software and hardware, including authentication and access control readers, including biometric readers;
  2. Standalone and embedded browsers;
  3. Password managers;
  4. Software that searches for, removes, or quarantines malicious software;
  5. Products with digital elements with the function of virtual private network (VPN);
  6. Network management systems;
  7. Security information and event management (SIEM) systems;
  8. Boot managers;
  9. Public key infrastructure and digital certificate issuance software;
  10. Physical and virtual network interfaces;
  11. Operating systems;
  12. Routers, modems intended for the connection to the internet, and switches;
  13. Microprocessors with security-related functionalities;
  14. Microcontrollers with security-related functionalities;
  15. Application specific integrated circuits (ASIC) and field-programmable gate arrays (FPGA) with security-related functionalities;
  16. Smart home general purpose virtual assistants;
  17. Smart home products with security functionalities, including smart door locks, security cameras, baby monitoring systems and alarm systems;
  18. Internet connected toys covered by Directive 2009/48/EC that have social interactive features (e.g. speaking or filming) or that have location tracking features;
  19. Personal wearable products to be worn or placed on a human body that have a health monitoring (such as tracking) purpose or personal wearable products that are intended for the use by and for children. (Editor’s note: excluding professional medical devices covered by regulations (EU) 2017/745 and (EU) 2017/746).

List of Important Products of Class II

Important products falling under Class II are:

  1. Hypervisors and container runtime systems that support virtualized execution of operating systems and similar environments;
  2. Firewalls, intrusion detection and/or prevention systems;
  3. Tamper-resistant microprocessors;
  4. Tamper-resistant microcontrollers.

Critical Products 

The critical product classification is the highest of all those established by the CRA. Unsurprisingly, products belonging to the designated categories are those subject to the strictest conformity assessment obligations.

To find out what makes a product critical, we must refer to Article 8 of the CRA. It states that a product category may be considered critical if, in addition to the criteria relating to important products, it appears as such with regard to the following two assessments:

  • The extent to which there is a critical dependency of essential entities referred to in Article 3 of Directive (EU) 2022/2555 — aka NIS2 — on the category of products with digital elements;
  • The extent to which incidents and exploited vulnerabilities concerning the category of products with digital elements can lead to serious disruptions to critical supply chains across the internal market.

Behind these somewhat cryptic formulations lie two obsessions of the NIS2 Directive:

  • To protect essential entities, those whose failure could have systemic effects within the EU, such as players in the transport, energy or health sectors. If you’d like to find out more about Essential Entities (EE), please refer to the relevant section of our NIS2 compliance guide;
  • To secure the supply chain, those service providers who, when digitally fragile, represent a major attack vector against the entities that work with them. If you’re not familiar with the subject, we suggest you take a look at the cyber attack on SolarWinds, which has become a textbook case of supply chain attacks.

Understandably, the products deemed critical by the CRA are those that pose a risk to entities vital to the proper functioning of the Union, or that are likely to jeopardize the most important supply chains.

List of Critical Products

It should be noted here that the list below is not set in stone, since, as with important products, the European Commission is entitled to amend it as it sees fit by means of delegated acts. In this case, the general rule is to observe a 6-month transition period before applying the new rules.

Critical products are listed in Annex IV of the CRA:

  • Smartcards or similar devices, including secure elements;
  • Hardware Devices with Security Boxes (Editor’s note: products such as secure smart card readers, tachographs, Hardware Security Modules (HSM), etc.);
  • Smart meter gateways within smart metering systems as defined in Article 2.23 of Directive (EU) 2019/944 and other devices for advanced security purposes, including for secure cryptoprocessing.

The last point being rather obscure, a few clarifications are in order. Directive (EU) 2019/944 is a piece of European legislation of June 5, 2019 laying down common rules for the internal electricity market. It defines a “smart metering system” as follows:

“‘Smart metering system’ means an electronic system that is capable of measuring electricity fed into the grid or electricity consumed from the grid, providing more information than a conventional meter, and that is capable of transmitting and receiving data for information, monitoring and control purposes, using a form of electronic communication.” – Directive (EU) 2019/944 , Article 2.23

2. Learn About the 21 Essential Requirements for Cybersecurity

We’re tackling a big chunk of the regulation here; if you want to pour yourself a cup of coffee, now’s the time.

The Cyber Resilience Act lays down essential requirements — a euphemism for obligations — for all regulated products and their manufacturers. They are detailed in Annex I of the regulation, which will be quoted regularly hereafter.

The essential requirements of the CRA fall into two groups:

  1. Product cybersecurity requirements. These cover the level of security and the intrinsic characteristics of the products;
  2. Vulnerability handling requirements. These cover the measures and processes implemented by manufacturers.

Please bear in mind that these requirements are absolutely crucial. They are at the heart of the Cyber Resilience Act, and their implementation will determine whether a product is considered compliant or not. In fact, Article 6 of the CRA states that only products that meet ALL these requirements can be placed on the European market.

Therefore, implementing these requirements should be your roadmap to Cyber Resilience Act compliance, your light in the night when you feel lost in the face of all there is to accomplish.

These requirements also underpin the document you’re currently reading. In this chapter, we’ll list them as they are set out in the regulation, then come back to each of them in greater detail as the guide progresses. Remember that the CRA is a legal text, not a technical specification manual, so some requirements may appear somewhat vague or generic on first reading.

The 13 CRA Requirements for Product Cybersecurity

The first part of the CRA’s essential requirements is devoted to product properties, which must guarantee a solid degree of intrinsic security. The text is crystal clear:

“Products with digital elements shall be designed, developed and produced in such a way that they ensure an appropriate level of cybersecurity based on the risks.” – CRA, Annex I

This means that regulated products need to be secure with regard to the risks associated with them. These risks are to be determined by a cyber risk assessment (see Chapter 5), to be carried out by the manufacturer at the design stage.

Products must meet the following 13 essential cybersecurity requirements (Source: Annex I, Part 1):

  1. Be made available on the market without known exploitable vulnerabilities;
  2. Be made available on the market with a secure by default configuration, unless otherwise agreed between manufacturer and business user in relation to a tailor-made product, including the possibility to reset the product to its original state;
  3. Ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic security updates that are installed within an appropriate timeframe enabled as a default setting, with a clear and easy-to-use opt-out mechanism, through the notification of available updates to users, and the option to temporarily postpone them;
  4. Ensure protection from unauthorized access by appropriate control mechanisms, including but not limited to authentication, identity or access management systems, as well as report on possible unauthorized access;
  5. Protect the confidentiality of stored, transmitted or otherwise processed data, personal or other, such as by encrypting relevant data at rest or in transit by state of the art mechanisms, and by using other technical means;
  6. Protect the integrity of stored, transmitted or otherwise processed data, personal or other, commands, programs and configuration against any manipulation or modification not authorized by the user, as well as report on corruptions;
  7. Process only data, personal or other, that are adequate, relevant and limited to what is necessary in relation to the intended purpose of the product (‘minimisation of data’);
  8. Protect the availability of essential and basic functions, also after an incident, including with resilience and mitigation measures against denial-of-service attacks;
  9. Minimize the negative impact by themselves or connected devices on the availability of services provided by other devices or networks;
  10. Be designed, developed and produced to limit attack surfaces, including external interfaces;
  11. Be designed, developed and produced to reduce the impact of an incident using appropriate exploitation mitigation mechanisms and techniques;
  12. Provide security related information by recording and/or monitoring relevant internal activity, including the access to or modification of data, services or functions, with an opt-out mechanism for the user;
  13. Provide the possibility for users to securely and easily remove on a permanent basis all data and settings and, where such data can be transferred to other products or systems, ensure this is done in a secure manner.
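To make one of these requirements more concrete, here is a minimal sketch of requirement 6 (protecting the integrity of stored data against unauthorized modification) using an HMAC tag. This is one possible technique among many, not a mechanism prescribed by the regulation, and a real device would keep the key in secure storage:

```python
import hashlib
import hmac
import secrets

# Hypothetical device-unique key; in practice, held in secure storage.
key = secrets.token_bytes(32)

def integrity_tag(data: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag to detect unauthorized modification."""
    return hmac.new(key, data, hashlib.sha256).digest()

# The device stores a tag alongside its configuration data.
config = b'{"telemetry": "off"}'
stored_tag = integrity_tag(config)

# Later, before trusting the data, the tag is recomputed and compared.
print(hmac.compare_digest(stored_tag, integrity_tag(config)))  # intact
print(hmac.compare_digest(stored_tag, integrity_tag(b'{"telemetry": "on"}')))  # tampered
```

A tampered configuration fails verification, which is exactly the “report on corruptions” behavior the requirement asks for; constant-time comparison (`compare_digest`) avoids timing side channels.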

The 8 CRA Requirements for Vulnerability Handling

The second part of the CRA’s essential requirements covers manufacturers’ obligations regarding vulnerability management. These are just as important as the requirements applicable to products, and their implementation is also a prerequisite for placing a product on the market.

Manufacturers of a product regulated by the CRA must meet the following 8 vulnerability management requirements (Source: Annex I, Part 2):

  1. Identify and document vulnerabilities and components contained in the product, including by drawing up a software bill of materials in a commonly used and machine-readable format covering at the very least the top-level dependencies of the product;
  2. In relation to the risks posed to the products with digital elements, address and remediate vulnerabilities without delay, including by providing security updates. Where technically feasible, new security updates shall be provided separately from functionality updates;
  3. Apply effective and regular tests and reviews of the security of the product with digital elements;
  4. Once a security update has been made available, share and publicly disclose information about fixed vulnerabilities, including a description of the vulnerabilities, information allowing users to identify the product affected, the impacts of the vulnerabilities, their severity and clear and accessible information helping users to remediate the vulnerabilities. In duly justified cases, where manufacturers consider the security risks of publication to outweigh the security benefits, they may delay making public information regarding a fixed vulnerability until after users have been given the possibility to apply the relevant patch;
  5. Put in place and enforce a policy on coordinated vulnerability disclosure (CVD);
  6. Take measures to facilitate the sharing of information about potential vulnerabilities in their product as well as in third party components contained in that product, including by providing a contact address for the reporting of the vulnerabilities discovered in the product;
  7. Provide for mechanisms to securely distribute updates for products to ensure that vulnerabilities are fixed or mitigated in a timely manner, and, where applicable for security updates, in an automatic manner;
  8. Ensure that available security updates are disseminated without delay and, unless otherwise agreed between manufacturer and business user in relation to a tailor-made product, free of charge, accompanied by advisory messages providing users with the relevant information, including on potential action to be taken.
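Requirement 1 asks for a software bill of materials “in a commonly used and machine-readable format” covering at least the top-level dependencies. CycloneDX and SPDX are two such formats; below is a minimal CycloneDX-style sketch, with hypothetical component names, of the kind of document that build tooling would normally generate automatically:

```python
import json

# Minimal CycloneDX-style SBOM restricted to top-level dependencies.
# Component names and versions are hypothetical examples.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}

print(json.dumps(sbom, indent=2))
```

In practice the SBOM is emitted by the build pipeline rather than written by hand, and is kept alongside the technical documentation so that affected components can be identified quickly when a vulnerability is disclosed.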

3. Be Aware of Special Cases: Importers, Distributors and Open Source Software

The above 21 requirements apply to manufacturers of products regulated by the Cyber Resilience Act. But the regulation also addresses blind spots, specifying the obligations applicable to importers, distributors and open-source software stewards. If these particular cases do not concern you, you can skip ahead to the next chapter.

Importers’ Obligations

The regulation devotes its entire Article 19 to the obligations of importers. Obviously, they are absolutely forbidden to place on the market products that do not comply with the 13 essential cybersecurity requirements, or whose manufacturers do not apply the 8 essential vulnerability management requirements. The application of all these requirements remains the responsibility of manufacturers.

Before placing products on the market, the importer must ensure that:

  1. The conformity assessment procedures have been carried out by the manufacturer;
  2. The manufacturer has drawn up the technical documentation;
  3. The product bears the CE marking, and is accompanied by the EU declaration of conformity and the information and instructions to the user;
  4. The product bears a type, batch or serial number or any other element enabling it to be identified. Where this isn’t possible, this information must be provided on the packaging or in a document accompanying the product;
  5. Both the manufacturer AND the importer have indicated their name, company name or registered trademark, as well as the postal, email or website address at which they can be contacted respectively. This information must appear on the product, its packaging or an accompanying document;
  6. The manufacturer has ensured that the end date of the support period, in particular the month and year, is specified at the time of purchase, in a clear, comprehensible and easily accessible manner and, as the case may be, on the product, its packaging or by digital means.

Importers must be able to provide all the necessary documents proving compliance with the above requirements. In addition, if an importer becomes aware of a vulnerability in a product, they must of course inform the relevant authorities.

Distributors’ Obligations

For the obligations incumbent on distributors, we must turn to Article 20. As with importers, the main task is to ensure that paperwork is in order. Distributors must therefore “act with due diligence”, checking that the product bears the CE mark and that the manufacturer and importer have complied with all the points in the above list.

White-Label Products

Noteworthy exception: an importer or distributor is considered a manufacturer when it places a product on the market “under its name or trademark”, or carries out “a substantial modification” of a product already available. (Article 21) All the obligations incumbent on manufacturers then apply.

The same goes for any natural or legal person other than the manufacturer, importer or distributor who makes a substantial modification to a product and makes it available on the European market. (Article 22)

Open Source Software

Open source software was a major issue throughout the whole process of drafting the Cyber Resilience Act, prompting numerous outcries from open source players.

The first versions of the regulation directly threatened the model by making creators liable, so much so that the community feared a “chilling effect” on software development — free and proprietary alike, the latter often depending on the former. There is no such thing as free software on one side and proprietary software on the other: the two coexist to form the software ecosystem we know today, in a kind of symbiosis that benefits everyone.

It was such a cause for concern that in April 2023, the Eclipse Foundation — whose members include big players such as Google, IBM, Oracle and Microsoft — published an open letter to the European Commission, co-signed by numerous organizations such as the Linux Foundation Europe and the French CNLL.

Fortunately, the European Commission has consulted with open source representatives, and the final version seems to satisfy everyone.

Which Open Source Software Is Regulated by the CRA?

First of all, let’s explain what open source software is according to the CRA:

“‘Free and open-source software’ means software the source code of which is openly shared and which is made available under a free and open-source license which provides for all rights to make it freely accessible, usable, modifiable and redistributable.” – CRA, Article 3.48

Let’s then look at Recital 18, which states that “only free and open-source software made available on the market, and therefore supplied for distribution or use in the course of a commercial activity, should fall within the scope of this Regulation.” This clearly indicates that free software that isn’t part of a commercial activity is not affected by the CRA.

But what constitutes a commercial activity according to the regulation? Here, Recital 15 provides a most welcome clarification.

Supply in the course of a commercial activity might be characterized by:

  • Charging a price for a product;
  • Charging a price for technical support services where this does not serve only the recuperation of actual costs;
  • An intention to monetise, for instance by providing a software platform through which the manufacturer monetises other services;
  • Requiring as a condition for use the processing of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software;
  • Accepting donations exceeding the costs associated with the design, development and provision of the product.

If the software meets one of these criteria, it falls within the scope of the Cyber Resilience Act. On the other hand, the following elements alone are not sufficient to characterize a commercial activity:

  • “Accepting donations without the intention of making a profit.” (Recital 15);
  • Supply “as part of the delivery of a service for which a fee is charged solely to recover the actual costs directly related to the operation of that service, such as may be the case with certain products by public administration entities.” (Recital 16);
  • “The provision of products with digital elements qualifying as free and open-source software that are not monetised by their manufacturers.” (Recital 18);
  • “The mere fact that an open-source software product receives financial support from manufacturers or that manufacturers contribute to the development of such a product.” (Recital 18);
  • “The mere presence of regular releases” (Recital 18);
  • The development by NGOs, “provided that the organization is set up in such a way that ensures that all earnings after costs are used to achieve not-for-profit objectives.” (Recital 18).

Finally, the CRA states that the following are not considered as making a product available on the market:

  • The supply of an open source product intended for integration by other manufacturers into their own products UNLESS “the component is monetized by its original manufacturer”. (Recital 18) So it’s perfectly possible to distribute open source software without having to worry about it being monetized by someone else.
  • “The sole act of hosting products on open repositories, including through package managers or on collaboration platforms.” (Recital 20) So it’s possible to aggregate and make available other open-source projects without being considered a distributor — unless there’s a commercial intent, of course.

The Status of Open-Source Software Steward

Now, what about organizations that develop and maintain free software for commercial purposes? The CRA decided to create a separate status for them, that of open-source software steward, defined as follows:

“‘Open-source software steward’ means a legal person, other than a manufacturer, that has the purpose or objective of systematically providing support on a sustained basis for the development of specific products with digital elements, qualifying as free and open-source software and intended for commercial activities, and that ensures the viability of those products.” – CRA, Article 3.14

Without a doubt, it’s the major foundations and structures of the open source community, such as the Linux, Mozilla and Eclipse foundations, and their administrators who are targeted. Contributors are spared:

“This Regulation does not apply to natural or legal persons who contribute with source code to products with digital elements qualifying as free and open-source software that are not under their responsibility.” – CRA, Recital 18

As for stewards, let’s not forget that they cannot be fined for breaching the Cyber Resilience Act.

Obligations of Open Source Software Stewards

After this long but necessary digression on the particular case of open source software, it’s time to look at the specific obligations incumbent on stewards, detailed in Article 24.

Open source software stewards must implement and verifiably document a cybersecurity policy. It must encourage secure developments, as well as effective vulnerability management by developers. This policy shall “in particular, include aspects related to documenting, addressing and remediating vulnerabilities and promote the sharing of information concerning discovered vulnerabilities within the open-source community.”

This cybersecurity policy must be made available to any market surveillance authority requesting it, in order to address potential security risks. It goes without saying that the steward’s cooperation is required throughout the process.

Furthermore, Article 24.3 states that stewards are subject to the same reporting requirements that apply to manufacturers, i.e.:

  • For actively exploited vulnerabilities, as long as they are involved in the development of the products concerned;
  • For severe incidents affecting networks and information systems used in the development of the affected products.

We won’t go into further detail here, as all these requirements for notifying exploited vulnerabilities and severe incidents are covered in Chapter 10 of this guide.

Finally, Article 25 of the Cyber Resilience Act paves the way for “voluntary security attestation programs” to assess the security of open source software. There is every reason to believe that these programs will be carried out jointly by the regulator and the main open source foundations.

Part of the community has already indicated that they are working to develop common cybersecurity processes for compliance with the CRA. Among the signatories are the Apache, Blender, OpenSSL, PHP, Python, Rust and Eclipse foundations – quite an impressive list.

4. Prepare a Technical Documentation for Each Product Placed on the Market

Let’s get back to business: the obligations incumbent on manufacturers of CRA-regulated products. Producing technical documentation to accompany each product is a sine qua non condition for its commercialization in Europe. We’re just paraphrasing the official text:

“Before placing a product with digital elements on the market, manufacturers shall draw up the technical documentation referred to in Article 31.” – CRA, Article 13.12

Article 31 states that documentation must contain all information demonstrating that the product and the processes implemented by the manufacturer comply with the essential requirements of the CRA. Yes, all of it. More precisely, it must include at least all the elements listed in Annex VII (see below).

An Excellent Roadmap for Compliance

The preparation of the technical documentation is one of the last steps in the compliance process, precisely because it must contain the information needed to prove that the previous steps have been carried out properly. Nevertheless, we thought it would be a good idea to introduce it early on in this guide.

Indeed, since it lists virtually everything that needs to be done, the technical documentation provides an excellent roadmap for compliance with the CRA. We therefore recommend that you pin it above your desk, alongside the 21 essential requirements for product security and vulnerability management.

What to Include in the Technical Documentation

If you’re fond of lengthy paperwork, you’re in for a treat. The technical documentation must include at least all the elements listed in Annex VII of the CRA, i.e.:

  1. A general description of the product with digital elements, including:
    • its intended purpose;
    • versions of software affecting compliance with essential requirements;
    • where the product is a hardware product, photographs or illustrations showing external features, marking and internal layout;
    • user information and instructions as set out in Annex II;
  2. A description of the design, development and production of the product and vulnerability handling processes, including:
    • necessary information on the design and development of the product, where applicable, drawings and schemes and a description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing;
    • necessary information and specifications of the vulnerability handling processes put in place by the manufacturer, including the software bill of materials, the coordinated vulnerability disclosure policy, evidence of the provision of a contact address for the reporting of the vulnerabilities and a description of the technical solutions chosen for the secure distribution of updates;
    • necessary information and specifications of the production and monitoring processes of the product and the validation of those processes;
  3. An assessment of the cybersecurity risks against which the product is designed, developed and produced, including how the essential requirements are applicable;
  4. Relevant information that was taken into account to determine the support period (see Chapter 13);
  5. A list of the harmonized standards, common specifications or European certification schemes applied;
    • If they haven’t been applied: descriptions of the solutions adopted to meet the essential requirements, including a list of other relevant technical specifications applied;
    • In the event of partial application, the technical documentation shall specify the parts which have been applied;
  6. Reports of the tests carried out to verify the conformity of the product and of the vulnerability handling processes with the CRA’s essential requirements;
  7. A copy of the EU declaration of conformity;
  8. Where applicable, the software bill of materials, further to a request from a market surveillance authority.

This technical documentation must be kept available to market surveillance authorities for at least 10 years, or for the remainder of the support period, whichever is longer.
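The software bill of materials mentioned in points 2 and 8 above is typically produced in a machine-readable format such as CycloneDX or SPDX. As a purely illustrative sketch (component names and versions are invented, and real SBOMs are normally generated by build tooling rather than written by hand), a minimal CycloneDX-style document could look like this:

```python
import json

# Hypothetical, minimal SBOM in the CycloneDX JSON style.
# All component names, versions and licenses below are invented
# for illustration only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-tls-lib",      # hypothetical dependency
            "version": "3.2.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
        {
            "type": "library",
            "name": "example-json-parser",  # hypothetical dependency
            "version": "1.0.4",
            "licenses": [{"license": {"id": "MIT"}}],
        },
    ],
}

# Serialize for inclusion in the technical documentation.
print(json.dumps(sbom, indent=2))
```

Whatever the format chosen, the point is the same: the SBOM must let a market surveillance authority see, on request, exactly which third-party components the product ships with.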

A Simplified Technical Documentation for Micro-Enterprises and SMEs

It’s worth noting that micro-enterprises and SMEs, including startups, can provide a simplified version of the technical documentation, as per Article 33.5. It states that the European Commission shall specify the details of the simplified form by means of delegated acts. To be continued.

5. Conduct a Cybersecurity Risk Assessment Upstream of Product Design

As mentioned above, the 13 essential requirements of the regulation relating to products must be applied taking into account the cybersecurity risks specific to the product in question. And there’s only one way to determine these risks: carry out a cybersecurity risk assessment.

Security Throughout the Entire Product Lifecycle

The risk assessment must be carried out by the manufacturer, upstream of the product design phase. It will then be used to guide security considerations and choices throughout the product’s lifecycle.

“Manufacturers shall undertake an assessment of the cybersecurity risks associated with a product with digital elements and take the outcome of that assessment into account during the planning, design, development, production, delivery and maintenance phases of the product with a view to minimizing cybersecurity risks, preventing incidents and minimizing the impacts of such incidents, including in relation to the health and safety of users.” – CRA, Article 13.2

This idea of guaranteeing security at every phase of the product lifecycle, from the earliest stages of design to the end of the support period, is at the heart of the Cyber Resilience Act. Manufacturers now have a duty to ensure that their products are always secure, and not just when they are put on the market. Because it is the raison d’être of all the essential requirements of the regulation, ongoing product security must be a central concern for all security managers involved.

Elements to Be Included in the Cybersecurity Risk Assessment

Article 13 stipulates that the assessment must include, as a minimum, a cybersecurity risk analysis based on:

  • The intended purpose and reasonably foreseeable use;
  • The conditions of use of the product, such as the operational environment or the assets to be protected;
  • The length of time the product is expected to be in use.

It must also clearly indicate:

  • How the essential product requirements apply to the product concerned, and how they are implemented;
  • How the manufacturer will ensure that the product is designed, developed and produced in such a way as to guarantee an appropriate level of cybersecurity;
  • How the manufacturer will apply the essential requirements for vulnerability handling.
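One way to keep these elements traceable over time is to maintain a structured risk register. The sketch below is our own illustration, with field names of our own choosing; the CRA mandates the content of the assessment, not its format:

```python
from dataclasses import dataclass

# Illustrative structure for one entry in a product risk assessment.
# Field names and the scoring convention are our own; the CRA only
# requires that these elements be documented and kept up to date.
@dataclass
class RiskEntry:
    threat: str           # e.g. tampering with firmware updates
    asset: str            # the asset to be protected
    lifecycle_phase: str  # where the risk arises (design, delivery, ...)
    likelihood: int       # 1 (rare) to 5 (almost certain)
    impact: int           # 1 (negligible) to 5 (severe)
    mitigation: str = ""  # how the relevant essential requirement is met

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common convention
        return self.likelihood * self.impact

entry = RiskEntry(
    threat="Unsigned firmware accepted during update",
    asset="Device firmware integrity",
    lifecycle_phase="delivery",
    likelihood=3,
    impact=5,
    mitigation="Cryptographically signed updates, verified on-device",
)
print(entry.score)  # 15
```

A register of such entries, sorted by score, also gives you the raw material for the risk assessment section of the technical documentation.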

A Due Diligence Obligation if the Product Integrates Third-Party Components, Including Open-Source Components

The CRA requires manufacturers to “exercise due diligence” if their products incorporate third-party components, including open-source ones. This means assessing the risks intrinsic to each component, as well as those posed by the way they are integrated into the product.

“Manufacturers shall exercise due diligence when integrating components sourced from third parties so that those components do not compromise the cybersecurity of the product with digital elements, including when integrating components of free and open-source software that have not been made available on the market in the course of a commercial activity.” – CRA, Article 13.5

The regulation sets out that manufacturers must notify the entity or person owning the third-party component if they discover a vulnerability in it – which is the bare minimum of politeness. The vulnerability must then be dealt with and remediated in accordance with the essential requirements for vulnerability handling (Article 13.6). We believe that this is a tricky situation, since the owner of the third-party component will most often be the one responsible for correcting the vulnerability.

In our opinion, the wisest thing to do is to avoid, wherever possible, integrating third-party components that include an identified vulnerability; all the more so as the CRA makes no distinction between vulnerabilities that represent a real security risk and those that would have no impact. Nevertheless, it should be pointed out that if the manufacturer decides to release a software or hardware patch, they are obliged to share its content with the owner of the third-party component.

Finally, be aware that the risk assessment must be included in the technical documentation, as well as being documented and updated throughout the product support period (see Chapter 13).

6. Bring Security Into Every Stage of the Product Creation Process

Once the risk assessment has been carried out, it’s time to move on to the real thing: product creation. Anyone who works in cybersecurity or product development knows where we’re going with this: the most important thing is security by design. Basically, this means implementing the 13 essential cybersecurity requirements for products by integrating security milestones at every stage of the lifecycle.

Moving From the General Requirements of the CRA to Harmonized Standards

First and foremost, you have to specify the technical solutions that will enable you to meet the essential requirements. The CRA is a piece of European legislation, not a technical document written by and for engineers. Its essential requirements are just that, requirements: the CRA lists objectives to be achieved, but never says how they are to be achieved.

Let’s take the example of the second requirement applicable to products, namely that they should “be made available on the market with a secure default configuration.” Everyone can understand this clear, general objective, but it’s a different kettle of fish to make sure it’s actually achieved.

Transposing these general regulatory requirements into technical requirements with which manufacturers can comply is the role of European harmonized standards. These are a set of guidelines drawn up by standardization bodies, providing technical specifications whose use proves that products and services achieve a certain level of quality, security and reliability.

Harmonized standards exist for all sorts of things, and it’s only a matter of time before those recommended for the CRA emerge. However, there are already numerous cybersecurity standards and best practices whose application can only be beneficial. European harmonized standards will standardize what already exists and fill in the gaps, but useful groundwork has already been laid, provided you know where to look.

Mirror Essential Requirements With Technical Specifications

Until specific harmonized standards are available, one of the best documents available today is the Cyber Resilience Act Requirements Standards Mapping (2024), published by the Joint Research Centre (JRC) and the European Union Agency for Cyber Security (ENISA).

This valuable document maps each of the CRA’s essential requirements to one or more existing standards. The primary aim of this publication is to contribute to the development of harmonized standards by identifying what already exists, but it can also be useful to manufacturers wishing to adopt cybersecurity best practices now. 

Let’s take the example of the seventh essential requirement relating to products: “process only data, personal or other, that are adequate, relevant and limited to what is necessary in relation to the intended purpose of the product.”

Here, the JRC and ENISA document highlights the value of ISO/IEC 27701, an extension of ISO/IEC 27001 and ISO/IEC 27002 for privacy information management. Although not product-specific, ISO 27701 offers a useful mapping between various standards and legislation such as the GDPR, in addition to properly covering the concept of data minimization. It will therefore be useful to manufacturers with regard to the aforementioned essential requirement.

This is just one example, but the Standards Mapping offers interesting food for thought for each of the essential requirements of the regulation, both in terms of product cybersecurity and vulnerability management — even if, quite often, the authors’ observation is that what exists is too theoretical, and therefore insufficient, and that future harmonized standards will have to flesh out the expected technical specifications.

In any case, we highly recommend this read to anyone involved in product security issues, be they engineers, developers, product managers, CTOs, CPOs or others.

Plan Technical Specifications for Each Phase of the Product Lifecycle

Each essential requirement can be met by appropriate technical specifications, which can be planned for a specific phase of the product lifecycle. Clearly defining which requirements and specifications need to be addressed at which stage will help you organize your tasks and those of your teams more efficiently.

The first thing to do is to define precisely the stages in your product’s lifecycle. They may vary from one product to another, but some general breakdowns offer a solid basis for further work. For example, Article 13 of the CRA breaks down the lifecycle into six phases: planning, design, development, production, delivery and maintenance.

Personally, we prefer the approach suggested in the Cyber Resilience Act Requirements Standards Mapping, where the authors decompose the lifecycle of a product as follows:

  1. Design;
  2. Implementation;
  3. Validation;
  4. Commissioning;
  5. Surveillance/Maintenance;
  6. End of Life.

This breakdown is particularly appropriate, since it considers the end of the product’s life as part of the cycle; a wise choice, since the CRA’s security requirements must be applied from beginning to end.

Once the lifecycle phases and technical specifications to be implemented have been defined, all that’s left to do is match them up to get an overview of what needs to be done and when — yes, that’s easier said than done. Bear in mind, however, that you’ll have to wait for the harmonized standards to get the job done properly.
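As a purely illustrative starting point, such a mapping can be as simple as a table of phases and planned activities. The phases below follow the JRC/ENISA breakdown quoted above; the activities are common security practice, not an official CRA list:

```python
# Illustrative mapping of lifecycle phases (JRC/ENISA breakdown) to
# example security activities. The activities are common practice,
# not requirements prescribed by the CRA itself.
lifecycle_plan = {
    "Design": ["threat modelling", "secure-by-default configuration review"],
    "Implementation": ["code review", "static analysis (SAST)"],
    "Validation": ["penetration testing", "fuzzing"],
    "Commissioning": ["hardening checklist", "secure update channel check"],
    "Surveillance/Maintenance": ["vulnerability scanning", "patch management"],
    "End of Life": ["data erasure guidance", "end-of-support notification"],
}

for phase, activities in lifecycle_plan.items():
    print(f"{phase}: {', '.join(activities)}")
```

Once harmonized standards are published, each cell of this table can be replaced with the precise technical specifications they prescribe.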

7. Run “Effective and Regular” Security Testing

While it is essential (and mandatory) to consider security throughout the entire lifecycle, it’s just as important to test the actual security of products. 

Cybersecurity can’t just be theoretical: you have to make sure that everything you’ve put in place is actually effective. In other words, you need to introduce security milestones throughout the product lifecycle to ensure that there are no vulnerabilities that could be exploited by a malicious actor.

Regular and effective testing is an obligation for manufacturers, as it is the third essential requirement for vulnerability handling.

“Manufacturers shall apply effective and regular tests and reviews of the security of the product with digital elements.” – CRA, Annex I, Part 2.

Besides, some conformity assessment modules, such as Module H, to which we’ll return later, call for a detailed description of the tests “carried out before, during and after production, and the frequency with which they will be carried out”. So it’s not enough just to conduct security tests: you need to draw up and document a coherent, comprehensive and continuous testing strategy.

Why Carry out Security Testing?

Beyond their mandatory nature, tests provide the evidence that a minimum level of cybersecurity is actually achieved. Any product with digital elements inevitably contains vulnerabilities. Testing is there to detect and classify them.

Some vulnerabilities can be extremely difficult to identify, while others will be obvious to any tester. Similarly, the exploitation of some vulnerabilities can have dramatic consequences, while others will have no impact on the actual security of the product. After all, a vulnerability may concern a feature that cannot be exploited, or that cannot affect the security or usability of the product. Suppose a vulnerability in a smart clock only allows the color of the digital display to be changed from blue to red: it’s annoying, but it’s not the hack of the century…

It’s rather odd to note that this notion of the impact of exploitable vulnerabilities is absent from the Cyber Resilience Act; this is all the more regrettable given that some cybersecurity players had already pointed out this inconsistency in the first versions of the regulation. The French Alliance pour la Confiance Numérique (ACN, “Alliance for Digital Confidence”), for example, was already lamenting in early 2023 that “an exploitable vulnerability is by no means an absolute or binary concept”. Let’s hope that the bridges between the CRA and certification schemes introduced by the Cybersecurity Act, which themselves make this distinction, and which we’ll discuss in the next chapter, will erase this lack of nuance.

In any case, security testing is imperative because, beyond detection, it enables manufacturers to assess the criticality and potential impact of vulnerabilities on the product and its users. A vulnerability successfully exploited by a malicious actor can not only compromise everyone’s security, but also have consequences for brand image and business. As a manufacturer, it is therefore essential to do everything in your power to reduce the risks.
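To assess criticality in a consistent way, many teams rely on the CVSS v3.1 qualitative severity ratings published by FIRST. The banding below comes from the CVSS specification; the helper function itself is our own sketch of how test findings might be triaged:

```python
# CVSS v3.1 qualitative severity bands, per the FIRST specification:
# None 0.0 | Low 0.1-3.9 | Medium 4.0-6.9 | High 7.0-8.9 | Critical 9.0-10.0
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical
print(cvss_severity(3.1))  # Low
```

Note that a CVSS score is only one input: as the smart clock example above shows, the real-world impact on the product and its users still has to be assessed case by case.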

Which Security Tests to Choose?

There is no single form of testing that is sufficient to verify everything. The tests to be implemented depend on the objective being pursued and, above all, on the phase in the product’s lifecycle. Cybersecurity tests are numerous and eclectic, with different objectives and at different stages. It’s the combination of the various tests that makes it possible to build a solid testing strategy.

For example, during the software development phase, it is advisable to set up source code reviews, unit testing, integration testing or end-to-end testing. These take place upstream of production, during the Software Development Life Cycle (SDLC), as part of a DevSecOps approach. They do not specifically seek to identify vulnerabilities, but rather problems in general, not only in terms of security, but also compatibility or logic. Left unaddressed, such problems can turn into exploitable and dangerous vulnerabilities (or simply affect the quality and proper functioning of the product).

Other forms of testing are specifically designed to detect vulnerabilities proactively. They can be automatic, such as vulnerability scanning, or manual, such as penetration testing. They are part of what is more generally known as Offensive Security, or OffSec.

The Role of Offensive Security in Test Strategy

Offensive Security is a proactive approach to cybersecurity: it doesn’t seek to protect products, but to identify their vulnerabilities before malicious actors do. OffSec testing simulates attacks by adopting the posture and methods of cyber-attackers, in order to test the defenses of organizations and their products, identify potential weaknesses, and assess their actual level of security. To put it more simply: Offensive Security enables manufacturers to challenge their own products in order to identify and correct vulnerabilities.

OffSec enables the testing of almost any type of asset, from cloud infrastructures to web and mobile apps, and all forms of hardware products, whether consumer or industrial.

It’s no coincidence that we’re telling you all this: we, at Yogosha, specialize in Offensive Security testing.

Pentest as a Service (PtaaS)

Penetration testing is a technique used to simulate a cyber-attack on an asset, in this case a product, in order to identify its vulnerabilities. It’s a proven technique that security teams are already familiar with, and it’s imperative to use it to test regulated products before bringing them to market.

Yogosha offers an innovative approach to penetration testing: Pentest as a Service (PtaaS). An ideal solution for all organizations that need to carry out multiple tests throughout the year, on numerous products and scopes.

Pentest as a Service (PtaaS) allows for:

  1. Continuous security testing, which can be staggered throughout the product lifecycle, both before and after market launch, to ensure robust, ongoing security in line with the essential requirements of the CRA;
  2. Direct access to the Yogosha Strike Force (YSF), a community of over 1,000 certified security experts specializing in different types of assets and technologies;
  3. Digitization and industrialization of testing activities through a European OffSec platform available as SaaS, or self-hosted for players with the most stringent security requirements.

Adopt a Continuous Approach to Security Testing

The CRA calls for continuous product security throughout the whole lifecycle. At Yogosha, we live by this philosophy on a daily basis.

Historically, security testing has always been a one-off affair: a pentest before production, then nothing for months or even years. But the punctual nature of traditional security tests means that they’re no longer suited to meeting the security challenges faced by today’s manufacturers. While they are useful for identifying vulnerabilities at a given point in time, they are by their very nature inadequate for dealing with the fluctuating nature of cyberthreats.

The digital ecosystem is dynamic, product updates are numerous and regular, new vulnerabilities appear every day, and malicious actors are constantly renewing their approaches. One-off testing offers only limited visibility, exposing companies to the potential risks that can arise between tests.

For CRA-regulated manufacturers with growing security needs, the answer lies elsewhere than in traditional approaches. They need to be able to handle Agile projects, numerous and regular deliveries, business imperatives… They need an approach to penetration testing that is flexible, scalable, streamlined and continuous.

With Pentest as a Service, Yogosha offers on-demand penetration testing, which can be easily scheduled and repeated throughout the product lifecycle. This not only complements in-house capabilities, but also ensures a constant, vigilant eye on the security of regulated products. This is a serious advantage, given that regular security testing is one of the essential requirements of the CRA.

Find Skilled Security Experts

Setting up security tests raises another issue: finding qualified experts to carry them out. Here, manufacturers have two options.

Firstly, they can hire in-house human resources. This option has many advantages, but unfortunately there’s a worldwide shortage of cybersecurity talent. Forbes estimated the number of vacancies in the sector at 3.5 million at the beginning of 2023, an increase of 350% in less than 10 years. Talent is scarce and expensive.

Secondly, turn to a specialized service provider. This is the solution favored by many companies, who choose to outsource security testing to a service provider, usually an auditing firm. But a firm’s expertise is always limited to that of its own employees, and their numbers. Even the most talented pentester won’t be familiar with all types of assets and technologies, it’s impossible. When choosing the service provider who will carry out the tests, it is therefore essential to ensure that they have the in-house skills to match the specificities of the scope to be tested.

Faced with these two challenges — finding qualified professionals or ensuring that the service provider has the necessary in-house skills — we offer a unique solution: the Yogosha Strike Force.

The Yogosha Strike Force, 1000+ Screened Security Researchers

The Yogosha Strike Force (YSF) is a private, selective community of over 1,000 international security researchers:

  • Specialized in finding the most critical vulnerabilities by simulating sophisticated hacker attacks.
  • Experts in multiple asset types — Web and Mobile Apps, IoT, Cloud, Networks, APIs, Infrastructure…
  • Holders of recognized cybersecurity certifications — OSCP, OSEP, OSWE, OSEE, GXPN, GCPN, eWPTXv2, PNPT, CISSP…

We select only the most talented researchers: only 10% of applicants are accepted, after passing technical and report-writing exams. Identity and background checks are also carried out.

Through the Yogosha Strike Force, manufacturers falling under the Cyber Resilience Act have access to:

  • A large number of carefully selected international security experts;
  • An unrivaled range of skills to address all types of products and technologies.

“Through Yogosha, we’ve managed to find very talented people.” — Éric Vautier, CISO, Groupe ADP (Paris’ Airports)

Yogosha, an Offensive Security Platform Available as SaaS or Self-Hosted

Security testing involves a certain amount of sensitive data, such as information on potentially exploitable vulnerabilities. It is therefore essential to choose a reliable and solid test provider. Here, it’s up to regulated organizations to investigate the quality of each provider.

For our part, we offer several types of deployment of our Offensive Security platform to best meet the security requirements of every organization, including the most sensitive.

The Yogosha platform is available:

  • SaaS: a turnkey solution, hosted via 3DS Outscale and its SecNumCloud-certified sovereign cloud (the highest French security qualification, established by ANSSI, the French national cybersecurity agency). Data is hosted on French soil.
  • Self-Hosted: a solution designed for organizations with the most stringent security requirements. You’re free to host the Yogosha platform wherever you like — private cloud, on premises — to retain total control over your data and the execution context.

In both cases, the intrinsic robustness of our product is at the heart of our concerns. We continually secure our assets through a DevSecOps pipeline, OWASP guidelines, recurrent penetration testing and an ongoing bug bounty program. In addition, Yogosha is in the process of obtaining ISO 27001 certification — something that will probably be completed by the time you read this.

Bug Bounty, an Approach Recommended by the Cyber Resilience Act

Bug Bounty is another approach to Offensive Security, referred to in Recital 77 of the CRA. It enables manufacturers to confront real-world attack conditions by mobilizing numerous security researchers to test all or part of their products or systems.

Bug Bounty is a bug hunt based on a pay-per-result logic. Organizations pay monetary rewards — so-called bounties — to researchers for each valid vulnerability they manage to identify. The more critical the vulnerability, the higher the bounty. If the researchers find nothing, the organization spends nothing.

We won’t dwell on Bug Bounty here, as it’s an important element of the Coordinated Vulnerability Disclosure (CVD) policy, which is one of the essential requirements of the CRA and the subject of Chapter 11 of this guide.

However, let’s state right now that Yogosha, as a specialist in Offensive Security, does of course offer bug bounty programs. In fact, this is one of our core activities, as we’re the only entirely private European platform to offer bug bounty programs, and have been doing so since 2015.

8. Obtain Cybersecurity Certification for Your Products

While testing helps to reduce risk by identifying product vulnerabilities, certification attests to an overall high level of security. Obtaining cybersecurity certification provides proof that the product is as secure as possible.

Indeed, certification involves an in-depth assessment of the product’s security and the processes put in place by its manufacturer. If all goes well, certification is the reward for a job well done.

Why Get Cybersecurity Certification?

Achieving certification takes time, money and commitment from the teams involved. Some may wonder whether it’s worth the effort — spoiler: yes.

The benefits of certification are many, and not just in terms of security. In a competitive environment, certification can be a strong differentiating factor between products. Customers may be more inclined to choose a certified product over one that isn’t, because they’ll have the assurance that it meets high security standards. If cybersecurity considerations are not (yet) part of the main choice criteria for the average person, they can be absolutely paramount if the customers happen to be organizations, all the more so if they are regulated by the NIS2 Directive.

With regard to the CRA in particular, obtaining cybersecurity certification has one major advantage: it dispenses with third-party conformity assessment procedures — under certain conditions, of course.

Products Certified Under a European Scheme Are Presumed Compliant With CRA Requirements

First and foremost, we need to talk briefly about the Cybersecurity Act (CSA). This is another major EU regulation adopted in 2019, which among other things introduced an EU-wide cybersecurity certification framework for ICT products, services and processes. The objective was to guarantee an adequate level of cybersecurity and harmonize assessments, so that companies operating in the EU can certify their products once and for all with a certificate recognized throughout the territory.

As you might guess, we’re not telling you this for your general knowledge. As a matter of fact, the Cyber Resilience Act provides that products certified according to a European certification scheme established under the Cybersecurity Act are presumed to comply with the essential requirements of the CRA, provided they have been assessed during the certification process.

“Products with digital elements and processes put in place by the manufacturer for which an EU statement of conformity or certificate has been issued under a European cybersecurity certification scheme adopted pursuant to Regulation (EU) 2019/881 [Editor’s note: the Cybersecurity Act], shall be presumed to be in conformity with the essential requirements set out in Annex I in so far as the EU statement of conformity or European cybersecurity certificate, or parts thereof, cover those requirements.” – CRA, Article 27.8

The regulation clearly states that only certifications issued as part of a European scheme under the Cybersecurity Act can constitute proof of conformity.

Furthermore, certification to an assurance level of at least “substantial” dispenses with the obligation to carry out a third-party conformity assessment, otherwise mandatory for certain product categories — more on this later.

“Furthermore, the issuance of a European cybersecurity certificate issued under such schemes, at least at assurance level ‘substantial’, eliminates the obligation of a manufacturer to carry out a third-party conformity assessment for the corresponding requirements.” – CRA, Article 27.9

European Certification Schemes Introduced by the Cybersecurity Act (CSA)

The Cybersecurity Act introduced three European certification schemes, the development of which was entrusted to ENISA:

  • The EU5G scheme for 5G network security;
  • The EUCS scheme for the security of cloud services;
  • The EUCC scheme for ICT products (hardware, software and components).

It’s the latter scheme that is of particular interest to us in the context of the CRA.

The EUCC Scheme: Harmonized Certification Rules for ICT Products

The EUCC (EU Common Criteria) scheme is the first — and so far only — of the three frameworks to have been officially adopted by the European Commission, on January 31, 2024. It provides a set of harmonized rules and procedures for certifying ICT products, both hardware and software, the very ones regulated by the Cyber Resilience Act.

The EUCC framework is based on the Common Criteria (a set of standards for evaluating IT systems), and takes its inspiration from the various existing national frameworks that were brought together under SOG-IS, a European agreement for the mutual recognition of certificates issued by a national authority, such as ANSSI in France. ANSSI has also confirmed that “existing SOG-IS certificates can be re-evaluated as EUCC certificates as soon as the new requirements are met”, so we can assume that the same will apply in other countries.

We won’t go into further detail on the content of the EUCC scheme, as the topic is so rich that it could easily be the subject of another guide. However, if you are interested in learning more, here are a few key documents:

It should be noted that the first EUCC certificates can be issued from February 2025, one year after publication of the scheme.

Which Products Are Subject to Mandatory Certification?

Products deemed critical to the CRA may be subject to mandatory certification, but not by default. In fact, Article 8 gives the European Commission the right to decide, through delegated acts, which categories of critical products are required to obtain European cybersecurity certification at a level of assurance that is at least “substantial”. Here again, the extent to which the entities regulated by the NIS2 Directive are dependent on the products in question will be a determining factor.

This obligation only takes effect through such a delegated act; in its absence, critical products are not required to undergo certification. They are then subject to the same conformity assessment procedure as important Class II products, to which we’ll return in the next chapter.

In the case of a delegated act which introduces a certification obligation for a category of critical products, the general rule is to observe a 6-month transition period before the new rules apply, unless a security imperative justifies faster application.

9. Determine the Conformity Assessment Procedure to Be Applied for Each Product

Implementation of the essential requirements of the Cyber Resilience Act is mandatory, and manufacturers will have to demonstrate their commitment to security before placing regulated products on the market. This means they’ll have to provide proof of compliance, in the form of an assessment.

The CRA establishes different conformity assessment procedures, more or less stringent depending on the product’s level of risk. The first level consists of self-assessment by manufacturers, while advanced procedures require the intervention of a third-party assessment body, or cybersecurity certification.

Four Assessment Procedures Based on Different Modules

Article 32 of the CRA introduces four conformity assessment procedures, each based on one or more modules:

  • Assessment procedure 1: Module A
  • Assessment procedure 2: Module B + Module C
  • Assessment procedure 3: Module H
  • Assessment procedure 4: Certification under a European scheme issued pursuant to the Cybersecurity Act, a topic we discussed in the previous chapter.

Manufacturers can use any of these procedures, except for important or critical products, which cannot use the first.

The modules in question are:

  1. Module A: Conformity Assessment procedure based on internal control;
  2. Module B: EU-type examination;
  3. Module C: Conformity to type based on internal production control;
  4. Module H: Conformity based on full quality assurance.

Module A consists of self-assessment by the manufacturer, while modules B + C involve the intervention of an independent certification body. Module H calls for the application of an appropriate quality system approved by a third-party organization.

We won’t detail the exact content of each module in this guide, as all the relevant information is clearly set out in Annex VIII of the Cyber Resilience Act.

Nevertheless, you should be aware that, whatever the module, the documentation required is substantial. None of the four is an easy task, and all will require organization and discipline in the management of the compliance project.

The General Case: Self-Assessment

Where products are neither important nor critical, manufacturers can opt for self-assessment, which is the first assessment procedure (Module A). This is the simplest of all the methods set out in the regulation.

In this case, the manufacturer assures and declares under its sole responsibility that the products meet all the essential cybersecurity requirements, and that it has complied with the essential requirements for vulnerability handling. It must also draw up the technical documentation and the EU declaration of conformity, and affix the CE marking to the product.

Manufacturers who can benefit from self-assessment but wish to apply a stricter procedure are naturally free to do so.

Evaluation of Important and Critical Products

Important products (classes I and II) can only be evaluated using procedures 2, 3 or 4. It is therefore compulsory to involve an independent assessment body, or to obtain appropriate cybersecurity certification (see next section). Self-assessment is not an option.

Please note that some critical products may be subject to mandatory cybersecurity certification, particularly when NIS2-regulated entities depend on them, to ensure that they cannot be used as attack vectors against them. The European Commission will have to specify the categories in question by means of delegated acts. In the absence of such an act, critical products can choose between certification, the second procedure (module B+C) or the third (module H).
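The selection logic described above can be expressed as a small decision helper. This is an illustrative sketch only: the labels and the function are our own simplification of Article 32, not legal advice, and special cases (such as the open-source exception of Article 32.5) are not modeled.

```python
# Illustrative sketch of the CRA conformity assessment options (Article 32).
# Labels and logic are our own simplification; the open-source exception
# (Article 32.5) and other special cases are deliberately not modeled.

def allowed_procedures(product_class: str, certification_mandated: bool = False) -> list[str]:
    """Return the assessment procedures a manufacturer may choose from.

    product_class: "default", "important" (Class I or II) or "critical".
    certification_mandated: True if a delegated act requires certification
    for this category of critical products.
    """
    module_a = "Module A (self-assessment)"
    module_bc = "Module B + C (EU-type examination)"
    module_h = "Module H (full quality assurance)"
    eucc = "European certification (CSA scheme, e.g. EUCC)"

    if product_class == "critical" and certification_mandated:
        # Certification at assurance level 'substantial' or higher is mandatory.
        return [eucc]
    if product_class in ("important", "critical"):
        # Self-assessment is not an option for important or critical products.
        return [module_bc, module_h, eucc]
    # Default products may choose any procedure, including self-assessment.
    return [module_a, module_bc, module_h, eucc]

print(allowed_procedures("default")[0])   # Module A is available
print(allowed_procedures("important"))    # no Module A in the list
```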

Certification Under a European Scheme as Proof of Compliance

As mentioned previously, obtaining certification under a European scheme published as part of the Cybersecurity Act may suffice to prove compliance with the essential requirements of the CRA. More specifically, this is the EUCC scheme discussed in the previous chapter.

EUCC certification at a level of assurance that is at least “substantial” dispenses with the need for third-party conformity assessment, which is otherwise mandatory for important and critical products. It’s thus up to the manufacturers of these products to choose between obtaining EUCC certification or conformity assessment by an independent body.

“Furthermore, the issuance of a European cybersecurity certificate issued under such schemes, at least at assurance level ‘substantial’, eliminates the obligation of a manufacturer to carry out a third-party conformity assessment for the corresponding requirements.” – CRA, Article 27.9

Special Cases: Open-Source, EHR and High-Risk AI Systems

There are a few special cases that we believe are important to highlight: open-source products, electronic health record (EHR) systems and high-risk AI systems.

Evaluation of Products With Open-Source Elements

Products with open-source elements can resort to self-assessment (module A). This also applies to open-source products considered important, as stated in Article 32.5 of the CRA, and its Recital 92:

“Manufacturers of important products with digital elements qualifying as free and open-source software should be able to follow the internal control procedure based on Module A, provided that they make the technical documentation available to the public.” – CRA, Recital 92

Evaluation of Electronic Health Record (EHR) Systems

Electronic health record (EHR) systems are regulated by both the CRA and the EHDS (European Health Data Space Regulation). However, to demonstrate compliance, manufacturers do not have to apply the assessment procedures of the CRA, but those detailed in Chapter III of the EHDS.

This measure is outlined in Article 32.6 of the CRA, although it could disappear from the final version of the regulation. Indeed, this same article could be introduced directly into the EHDS if it were to be published much later than the CRA. But this is merely a clerical tweak that wouldn’t make much difference to the manufacturers concerned.

Evaluation of High-Risk AI Systems

In some cases, products with digital elements may also be considered high-risk AI systems under Article 6 of the Artificial Intelligence Act (AI Act), an EU regulation adopted in March 2024.

Article 15 of the AI Act sets out requirements for the accuracy, robustness and cybersecurity of AI systems. This could leave manufacturers wondering which law prevails over the other when it comes to digital security.

Article 12 of the CRA removes any doubt by stating that high-risk AI systems are considered compliant with the cybersecurity requirements of the AI Act if:

  • These products meet the CRA’s essential requirements for product cybersecurity and vulnerability handling;
  • The achievement of the level of cybersecurity protection is demonstrated in the EU declaration of conformity issued under the CRA.

Generally speaking, the conformity assessment procedure laid down in Article 43 of the AI Act always applies, even if the evaluator must be competent in both regulations. However, if the high-risk AI system is also considered an important or critical product by the CRA, then the CRA’s assessment procedures also apply for its essential requirements.

10. Know (And Respect) the Timeframes for Reporting Severe Incidents and Exploited Vulnerabilities

In line with the NIS2 Directive, the Cyber Resilience Act also introduces obligations to notify severe incidents and actively exploited vulnerabilities to the authorities.

Please note that compliance with these requirements is of the utmost importance. Not only for obvious security reasons, but also because any failure to comply can result in substantial fines (excluding micro-enterprises, SMEs and open-source software stewards).

All of the relevant provisions will be applicable 21 months after official publication of the CRA, unlike the others, which will come into force after 36 months. Given that the regulation is expected to be published in 2024, it is reasonable to say that the obligations to report severe incidents and exploited vulnerabilities will apply as early as 2026.

Mandatory Notifications to Be Made Simultaneously to ENISA and the Relevant CSIRT

Obligations to notify severe incidents and exploited vulnerabilities are introduced by Article 14. We’ll spare you the paraphrase of the official text, as it’s crystal clear:

“A manufacturer shall notify any actively exploited vulnerability contained in the product with digital elements that it becomes aware of simultaneously to the CSIRT designated as coordinator […] and to ENISA. The manufacturer shall notify that actively exploited vulnerability via the single reporting platform established pursuant to Article 16.” – CRA, Article 14.1

Article 14.3 sets out the same obligation with regard to severe incidents.

To sum up, manufacturers must:

  • Notify severe incidents and actively exploited vulnerabilities;
  • within the set deadlines;
  • simultaneously to ENISA and the appropriate CSIRT;
  • via a single reporting platform.

Several questions arise here, which we will answer later on:

  • What are ENISA and the CSIRTs?
  • How to know which CSIRT to contact?
  • What is the single reporting platform?
  • What is a “severe incident”?
  • What is an “actively exploited vulnerability”?
  • What are the notification timeframes introduced by the CRA?

What Are ENISA and the CSIRTs?

ENISA is the European Union Agency for Cybersecurity, a key player in the EU’s cyber landscape and, de facto, in the Cyber Resilience Act. Among its many missions, ENISA is notably responsible for creating the different cybersecurity certification schemes stemming from the Cybersecurity Act mentioned earlier (EU5G, EUCS, EUCC).

In the context of the CRA, CSIRTs (Computer Security Incident Response Teams) are specialized teams designated by the different Member States of the EU, responsible for coordinating cybersecurity at national level and responding to cyber incidents on their soil. For information, you may come across the term CERT (Computer Emergency Response Team), which is often used interchangeably with CSIRT.

For example, there is the CERT-BE in Belgium, the CERT-Bund in Germany, the GR-CSIRT in Greece, or the CERT-FR in France. You’ll find a list of CSIRTs in every EU country on the website CSIRTs Network.

How to Know Which CSIRT to Contact?

It’s simple: in the event of a severe incident or an exploited vulnerability, manufacturers must notify the CSIRT designated as coordinator in the Member State where they have their main establishment in the Union.

Article 14.7 of the CRA states that a manufacturer is deemed to have its main establishment in the Union in the Member State where decisions relating to the cybersecurity of products with digital elements are predominantly taken.

If such a Member State cannot be determined, the main establishment is deemed to be in the Member State where the manufacturer concerned has the establishment with the highest number of employees.

If the main establishment still cannot be determined, the manufacturer must notify the most relevant CSIRT according to the following order:

  1. The Member State in which the authorized representative acting on behalf of the manufacturer for the highest number of products with digital elements of that manufacturer is established;
  2. The Member State in which the importer placing on the market the highest number of products with digital elements of that manufacturer is established;
  3. The Member State in which the distributor making available on the market the highest number of products with digital elements of that manufacturer is established;
  4. The Member State in which the highest number of users of products with digital elements of that manufacturer are located.

What Is the Single Reporting Platform?

Reports must be made simultaneously to ENISA and the appropriate CSIRT, and each time “via the single reporting platform” of the European Union. This is to be administered and maintained by ENISA, and to allow each national CSIRT to put “in place their own electronic notification endpoints.”

Unfortunately, it’s difficult to tell you more at the moment, as this platform is still a work in progress at the time of writing. While the CRA endorses ENISA’s responsibility for the development of the project, the details of its implementation remain uncertain. Indeed, some wonder about its capacity to carry out such a task. Naturally, it’s not the competence and quality of ENISA’s teams that are being challenged, but its manpower and bandwidth.

For the record, the European Agency employs around 100 people, divided between its head office in Athens and its offices in Heraklion and Brussels. A modest headcount, especially considering the scale of the work to be done, from its day-to-day missions to the development of the certification schemes required by the Cybersecurity Act, not to mention the creation of a European database of vulnerabilities following NIS2, similar to the CVE database maintained by MITRE. All these tasks come in addition to those imposed by the CRA, such as the reporting platform and biennial technical reports on emerging trends in cybersecurity risks in products with digital elements (Article 17).

What Is a “Severe Incident”?

It’s all very well having to report severe incidents, but we still need to know what they are! Fortunately, Article 14.5 of the Cyber Resilience Act is here to enlighten us.

An incident having an impact on the security of a product shall be considered to be severe where:

  • It negatively affects or is capable of negatively affecting the ability of a product to protect the availability, authenticity, integrity or confidentiality of sensitive or important data or functions; or
  • It has led or is capable of leading to the introduction or execution of malicious code in a product with digital elements or in the network and information systems of a user of the product.

What Is an “Actively Exploited Vulnerability”?

To answer this question, we refer to article 3 of the CRA, devoted to definitions.

“‘Actively exploited vulnerability’ means a vulnerability for which there is reliable evidence that a malicious actor has exploited it in a system without permission of the system owner.” – CRA, Article 3.42

Mandatory Deadlines for Notifying a Severe Incident

Reporting deadlines for severe incidents are detailed in Article 14.4. One can only salute the regulator’s decision to align with the notification timeframes introduced by the NIS2 Directive, thereby standardizing processes across the Union and simplifying the tasks of regulated entities.

CRA - Timeline of response obligations in the event of a severe incident involving a product with digital elements

In the event of a severe incident, the manufacturer must simultaneously submit to ENISA and the relevant CSIRT:

  1. An early warning notification within 24 hours of becoming aware of the incident. It must indicate, as a minimum, whether the incident is suspected to have been caused by unlawful or malicious acts and, if relevant, the EU countries in which the product has been made available;
  2. A second notification no later than 72 hours after becoming aware of the incident, unless the relevant information has already been communicated. It must provide:
    • available general information on the nature of the incident;
    • an initial assessment of the incident, and any corrective or mitigating measures taken;
    • corrective or mitigating measures that users can take;
    • if applicable, how sensitive the manufacturer deems the notified information to be;
  3. A final report within one month of the previous notification, unless the relevant information has already been provided. This report must include at least:
    • a detailed description of the incident, including its severity and impact;
    • the type of threat or root cause that is likely to have triggered the incident;
    • applied and ongoing mitigation measures.

Mandatory Deadlines for Notifying an Actively Exploited Vulnerability

Reporting deadlines for exploited vulnerabilities are detailed in Article 14.2. Here again, the Cyber Resilience Act aligns with NIS2.

CRA - Timeline of response obligations in the event of an actively exploited vulnerability in a product with digital elements

In the event of an actively exploited vulnerability, the manufacturer must simultaneously submit to ENISA and the relevant CSIRT:

  1. An early warning notification within 24 hours of becoming aware of it. It must indicate the EU Member States in which the product has, to its knowledge, been made available;
  2. A second notification no later than 72 hours after becoming aware of it, unless the relevant information has already been communicated. It must provide:
    • available general information about the product with digital elements concerned;
    • the general nature of the exploit and of the vulnerability concerned;
    • any corrective or mitigating measures taken;
    • corrective or mitigating measures that users can take;
    • how sensitive the manufacturer deems the notified information to be;
  3. A final report no later than 14 days after a corrective or mitigating measure has been made available, unless the relevant information has already been provided. This report must include at least:
    • a description of the vulnerability, including its severity and impact;
    • where available, information concerning any malicious actor that has exploited or that is exploiting the vulnerability;
    • details about the security update or other corrective measures that have been made available to remedy the vulnerability.
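The two timelines above can be summarized in a small helper that computes each deadline from the moment of awareness. This is an illustrative sketch: the durations come from Articles 14.2 and 14.4, but the function itself is our own, and "one month" is approximated here as 30 days.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the CRA notification deadlines (Articles 14.2 and 14.4).
# Durations come from the regulation; the functions themselves are our own,
# and "one month" is approximated as 30 days.

def notification_deadlines(aware_at: datetime) -> dict[str, datetime]:
    """Deadlines for a severe incident or actively exploited vulnerability."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "notification": aware_at + timedelta(hours=72),
        # Final report for a severe incident: one month after the second notification.
        "incident_final_report": aware_at + timedelta(hours=72) + timedelta(days=30),
    }

def vulnerability_final_report(fix_available_at: datetime) -> datetime:
    # For an exploited vulnerability, the final report is due no later than
    # 14 days after a corrective or mitigating measure is made available.
    return fix_available_at + timedelta(days=14)

aware = datetime(2026, 3, 1, 9, 0)
print(notification_deadlines(aware)["early_warning"])  # 2026-03-02 09:00:00
```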

Obligation to Notify the Impacted Users

In Article 14.8, the CRA also introduces a requirement to notify users of a product subject to a severe incident or exploited vulnerability.

“After becoming aware of an actively exploited vulnerability or a severe incident, the manufacturer shall inform the impacted users of the product and where appropriate all users, about the actively exploited vulnerability or a severe incident having an impact on the security of the product and, where necessary, about risk mitigation and any corrective measures that the users can deploy to mitigate the impact of that vulnerability or incident, where appropriate in a structured and easily automatically processible, machine readable format.

Where the manufacturer fails to inform the users of the product in a timely manner, the notified CSIRTs designated as coordinators may provide such information to the users when considered proportionate and necessary for preventing or mitigating the impact of that vulnerability or incident.” – CRA, Article 14.8

The text takes care to distinguish between impacted users, who must be notified in any event, and all users of the product, who must be notified “where appropriate”. In both cases, notification of users must be accompanied by corrective or mitigating measures that they can put in place — without which the notification would be of little use anyway, or even cause more harm than good…

The Delicate Issue of User Notification

User notification is a delicate issue, where timing is everything — timing that is not specified by the Cyber Resilience Act, as it will be specific to each situation.

To put it simply, there are three possibilities:

  • Too early: Notifying users before the deployment of a patch would put them at risk, as it would also alert potential malicious actors who might try to take advantage of the information.
  • When deploying an update: Informing users of the details of a vulnerability as soon as an update is available could expose those who don’t apply it immediately; this is not uncommon in the case of IoT products, which may never be updated by users, or even never connected to the Internet, hindering manufacturers’ automatic updates.
  • Too late: Never warning users, or warning them too late, runs the risk of leaving them vulnerable to a threat of which they are unaware, and to which they won’t seek to apply a patch. 

There is no magic bullet. It’s up to manufacturers to assess the situation according to the nature of the problem and that of the product, as well as the nature of the users.

Indeed, the stakes involved in notifying users will differ according to the profiles involved. A vulnerability exploited in Mr. Everyman’s smart toaster won’t have the same consequences as a vulnerability in a smartcard reader installed in every cash dispenser of a Member State.

This is one of the reasons why the CRA gives CSIRTs the authority to inform users if the manufacturer fails to do so, and if they deem it “proportionate and necessary” to prevent or mitigate the consequences. It’s a safe bet that this measure will rarely, if ever, be applied in the case of individual users, but it may provide an effective means of alerting users who would be sensitive or critical entities, such as those regulated by NIS2.

Finally, let’s remember that, while the CRA requires manufacturers to “share and publicly disclose information about fixed vulnerabilities” as soon as an update has been distributed, it also leaves some leeway by accepting that the release of certain information may be delayed if security reasons demand it. We refer you here to Chapter 14 of this guide, dedicated to security updates.

11. Implement a Coordinated Vulnerability Disclosure (CVD) Policy

The implementation and enforcement of a Coordinated Vulnerability Disclosure policy is an obligation for manufacturers, as it is the fifth essential requirement of the CRA for vulnerability handling. Coordinated vulnerability disclosure is also known as Responsible Disclosure, the two terms being used interchangeably.

The concept grew out of the relationship between companies and ethical hackers. While some benevolent hackers wanted to expose vulnerabilities publicly to warn users, some organizations preferred to keep them under wraps indefinitely. As a general rule, neither attitude serves the public interest or the security of all. Coordinated Vulnerability Disclosure emerged as a compromise that serves the public interest as well as that of all stakeholders.

Under responsible disclosure, manufacturers and security researchers commit to disclosing potential product vulnerabilities to users, but not before a patch is available. Users can then adopt corrective measures, without being at the mercy of an exploitable vulnerability.

What is Coordinated Vulnerability Disclosure?

Coordinated Vulnerability Disclosure (CVD) is the process of collaboration between the owners of digital assets and the people who might report a vulnerability in those assets. In the context of CRA, the assets are the regulated products, the owners are the manufacturers, and the people who can report vulnerabilities are anyone and everyone. However, as identifying a vulnerability in a product requires a certain level of expertise, reports are generally submitted by security researchers, or ethical hackers.

The CVD process begins with the collection of information about a vulnerability from the researcher community, and continues with the coordination of this information between researchers and manufacturers. It can lead to a coordinated disclosure of the vulnerability (hence the name): a public disclosure after appropriate remediation that serves the interests and security of all parties concerned — the researcher, the users and the manufacturer, but also its partners, suppliers and potentially the general public.

What Is a Coordinated Vulnerability Disclosure Policy?

The CVD policy is, as the name suggests, a policy. It’s a document that outlines everything the manufacturer puts in place to facilitate the responsible disclosure process: the communication channel made available for reporting vulnerabilities, the internal stakeholders, the procedure for handling the reports received, and all the stages of collaboration, from initial contact to patch deployment and user notification.

Put another way, it’s about clearly defining the responsible disclosure process in writing, to ensure that there’s a plan of action to follow the day a vulnerability is reported. As a reminder, the implementation of a CVD policy is a mandatory essential requirement of the Cyber Resilience Act.
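Since the CRA leaves the form of this document to manufacturers, the outline below is only an illustrative sketch of the sections a CVD policy commonly covers, not a template prescribed by the regulation:

```text
1. Scope: products and versions covered by the policy
2. Reporting channel: where and how to submit a vulnerability report
3. Safe harbor: commitment not to pursue good-faith researchers
4. Handling process: triage, acknowledgment and remediation timelines
5. Disclosure rules: when and how a fixed vulnerability is made public
6. Recognition: acknowledgments or rewards, e.g. a bug bounty program
```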

If you’d like to delve deeper into the subject, we recommend reading Carnegie Mellon University’s CERT Guide to Coordinated Vulnerability Disclosure. Although published in 2017, it remains one of the reference documents on the topic. The ISO/IEC 29147:2018 and ISO/IEC 30111:2019 standards may also prove useful. The former offers guidelines for vulnerability disclosure, but without going into the details of CVD, and the latter focuses on vulnerability management processes, including responsible disclosure, albeit at a theoretical level.

Setting up a Communication Channel for Reporting Vulnerabilities

The responsible disclosure process begins when someone informs the manufacturer of a potential vulnerability in a product. But for this to be possible, the manufacturer must have set up a public communication channel dedicated to reporting vulnerabilities, and have clearly indicated it to users. Establishing such a contact address is an obligation for manufacturers, since it is the sixth essential requirement of the CRA regarding vulnerability handling.

“Manufacturers’ coordinated vulnerability disclosure policy should specify a structured process through which vulnerabilities are reported to a manufacturer in a manner allowing the manufacturer to diagnose and remedy such vulnerabilities before detailed vulnerability information is disclosed to third parties or to the public.” – CRA, Recital 77

This communication channel has a name: a Vulnerability Disclosure Program (VDP). It’s important to understand that the VDP is an element of the CVD policy, but that they are two different things. The former is a communication channel, while the latter is a document that defines the steps in the responsible disclosure process.

Vulnerability Disclosure Program (VDP)

A Vulnerability Disclosure Program (VDP) is a structured channel provided by an organization for anyone to report a digital security issue to it. In other words, it is a secure way for third parties to know where and how to report vulnerabilities to an entity.

It is entirely possible to write and implement a vulnerability disclosure program on your own. There are many resources to guide you, such as:

  • Disclose.io, which offers a vulnerability disclosure policy generator;
  • securitytxt.org, which helps to create security.txt files.

Indeed, it is good practice to indicate the means of contact provided by your VDP in a security.txt file, available at www.website.tld/.well-known/security.txt. As an example, here is the security.txt of the Yogosha website.
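As a purely illustrative sketch following RFC 9116 (the domain, email address and dates below are placeholders, not real endpoints), a minimal security.txt could look like this:

```text
# Served at https://www.example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://www.example.com/.well-known/disclosure-policy
Preferred-Languages: en, fr
```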

Set up a VDP for Free With Yogosha

Another solution is to call upon a professional platform to create and host your VDP. This is an exercise we are used to at Yogosha, as we have been offering VDPs to our customers since 2015. In fact, setting up a VDP with Yogosha is completely free of charge for customers of our Offensive Security tests.

We assist organizations every step of the way, from drafting the disclosure policy — scope, disclosure rules, guidelines, legal and Safe Harbor clauses, etc. — to providing a secure platform to collect and centralize all submitted vulnerability reports.

Setting up a Bug Bounty Program

Bug bounty is an important element of the most effective Coordinated Vulnerability Disclosure policies, to the point of being recommended by Recital 77 of the Cyber Resilience Act.

“[…] Given the fact that information about exploitable vulnerabilities in widely used products with digital elements can be sold at high prices on the black market, manufacturers of such products should be able to use programmes, as part of their coordinated vulnerability disclosure policies, to incentivise the reporting of vulnerabilities by ensuring that individuals or entities receive recognition and compensation for their efforts. This refers to so-called ‘bug bounty programmes’.” – CRA, Recital 77

Setting up a bug bounty program is therefore not an obligation for manufacturers, but a best practice recommended as part of the mandatory CVD policy.

The rationale for bug bounty in a responsible disclosure policy is summarized very clearly in Recital 77: given that vulnerability information can be sold at a high price on the black market, it’s important to reward benevolent researchers for their efforts. From the manufacturers’ point of view, it is better to encourage and reward the collection of information on potentially critical vulnerabilities, rather than see them exploited later by malicious actors. All systems have flaws, and the challenge is the same for the “good guys” as it is for the “bad guys”: to identify vulnerabilities first.


Bug Bounty as a Security Test Under the CRA

Bug bounty is not just a part of the responsible disclosure policy, but a security test in its own right. As such, it enables compliance with not one, but two essential requirements of the regulation: the implementation of a CVD policy along with effective, regular testing.

Bug bounty is a way of testing the cybersecurity of a system by confronting it with real-world conditions. Manufacturers can mobilize from a few dozen to several hundred security researchers to test the security of a product under conditions as close as possible to an actual attack.

It’s an exercise that builds on earlier tests, complementing penetration testing, and helps identify blind spots and complex, high-risk vulnerabilities. Researchers examine assets using a range of tactics, techniques and procedures (TTPs) to detect deep, critical vulnerabilities that escape more calibrated forms of testing, such as automated scans.

Read also: Bug Bounty, Benefits and Drawbacks

Bug bounty has another advantage for manufacturers: it is based on a pay-per-result logic. Companies pay researchers a monetary reward only if they succeed in identifying vulnerabilities. The more critical the vulnerability, the higher the bounty. And if the researchers find nothing, the company spends nothing.

We could talk about this topic for hours, since Yogosha has been a recognized bug bounty specialist since 2015. But this guide is long enough as it is, so we’ll just say this:

  • The Yogosha Strike Force (YSF) is a community of over 1,000 international certified security researchers. They specialize in finding the most critical vulnerabilities, are experts in multiple asset types and technologies, and hold recognized cybersecurity certifications — OSCP, OSEP, OSWE, OSEE, GXPN, GCPN, eWPTX, PNPT, CISSP… We select only the most talented researchers: just 10% of applicants are accepted, after passing technical and report-writing exams. Identity and background checks are also carried out.
  • Discovered vulnerabilities are reported live on our Offensive Security platform, and documented with their CVSS score, proof of concept (POC) and remediation advice.
  • Yogosha is the only entirely private bug bounty platform in Europe. Our community of experts is restricted and selective, and all our bug bounty programs are private and confidential.

12. Provide a Single Point of Contact for Users, Along With Information and Instructions

Beyond responsible disclosure and bug bounty hunting, it’s essential that all users of a connected product are informed of the means of communication to be used for cybersecurity-related issues. To this end, Article 13.17 of the CRA requires manufacturers to define “a single point of contact to enable users to communicate directly and rapidly with them”.

If this is not already the case, and if your organization’s structure allows it, we can only recommend creating a Product Security Incident Response Team (PSIRT), which will make a perfect point of contact. PSIRTs are the product-focused counterparts of CSIRTs. They are responsible for product security, and for responding to incidents related to product vulnerabilities.

Users must be able to choose “their preferred means of communication”, and the single point of contact cannot be limited to automated tools.

“Where manufacturers choose to offer automated tools, e.g. chat boxes, they should also offer a phone number or other digital means of contact, such as an email address or a contact form. The single point of contact should not rely exclusively on automated tools.” – CRA, Recital 64

It must be kept up to date, easily identifiable by users, and included in information and instructions to them — yes, more paperwork.

Items to Be Included in Information and Instructions to Users

Information and instructions to users is a mandatory document, which must accompany any product placed on the market “in paper or electronic form”. As this document is intended for the general public, the CRA insists that the information must be “clear, understandable, intelligible and legible”. (Article 13.18)

Information and instructions to users must include at least all the elements listed in Annex II, i.e.:

  1. The name, registered trade name or registered trademark of the manufacturer, and the postal address, the email address or other digital contact as well as, where available, the website at which the manufacturer can be contacted;
  2. The single point of contact where information about vulnerabilities of the product can be reported and received, and where the manufacturer’s policy on coordinated vulnerability disclosure can be found;
  3. Product name and type and any additional information enabling its unique identification;
  4. The intended purpose of the product, including the security environment provided by the manufacturer, as well as the product’s essential functionalities and information about the security properties;
  5. Any known or foreseeable circumstances likely to result in significant cybersecurity risks, whether related to the intended use of the product or to reasonably foreseeable conditions of misuse;
  6. The internet address at which the EU declaration of conformity can be accessed, where applicable;
  7. The type of technical security support offered by the manufacturer and the end-date of the support period during which users can expect vulnerabilities to be handled and to receive security updates;
  8. Detailed instructions or an internet address referring to such detailed instructions and information on:
    • the necessary measures during initial commissioning and throughout the lifetime of the product to ensure its secure use;
    • how changes to the product can affect the security of data;
    • how security-relevant updates can be installed;
    • the secure decommissioning of the product, including information on how user data can be securely removed;
    • how the default setting enabling the automatic installation of security updates can be turned off;
    • where the product is intended for integration into other products with digital elements, the information necessary for the integrator to comply with the essential requirements and the documentation requirements.
  9. Information on where to find the Software Bill of Materials (SBOM), when the manufacturer decides to make it available to the user.

This document must be made available to users and market surveillance authorities for at least ten years after the product is placed on the market, or for the duration of the support period, whichever is longer. If it was initially provided online, it must remain available online for that same period.
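Regarding item 9, as a purely illustrative sketch (the component names and versions are invented), a minimal SBOM in CycloneDX, one of the commonly used machine-readable formats, could look like this:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "metadata": {
    "component": { "type": "application", "name": "smart-toaster-firmware", "version": "2.1.0" }
  },
  "components": [
    { "type": "library", "name": "openssl", "version": "3.0.13", "purl": "pkg:generic/openssl@3.0.13" }
  ]
}
```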

13. Determine the Product Support Period

It’s essential to determine the support period for the regulated product, as this will determine, among other things, how long the manufacturer must commit to updating the cybersecurity risk assessment mentioned earlier, or to applying all the essential requirements for vulnerability handling. As the CRA aims to ensure the security of products throughout their whole lifecycle, it’s important to know when that lifecycle ends.

It’s up to manufacturers to determine the support period for their products, in compliance with the guidelines set out in Article 13.8 of the CRA, which requires them to take into account:

  • The length of time during which the product is expected to be in use, taking into account, in particular:
    • reasonable user expectations;
    • the nature of the product, including its intended purpose;
    • as well as relevant Union law determining the lifetime of products with digital elements.
  • The support periods of products offering a similar functionality placed on the market by other manufacturers;
  • The availability of the operating environment;
  • The support periods of integrated components that provide core functions and are sourced from third parties;
  • Relevant guidance provided by the dedicated administrative cooperation group (ADCO) to be created within the CRA context. This group may also issue recommendations on the minimum support period expected for certain product categories.

In all cases, the support period for a product may not be less than 5 years, or must be equal to the product’s expected usage period if this is less than 5 years. The information used to determine the support period must be included in the technical documentation accompanying the product.
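The floor described above can be expressed in one line. The sketch below is purely illustrative (the function name is our own, not CRA terminology), assuming the product’s expected usage time is known in years:

```python
def minimum_support_period(expected_usage_years: float) -> float:
    """Minimum support period under the CRA: at least 5 years, unless the
    product is expected to be used for less than 5 years, in which case
    the support period may match that shorter expected usage time."""
    return min(expected_usage_years, 5.0)
```

A product expected to be used for 10 years still has a 5-year floor (manufacturers may of course support it longer), while one expected to be used for 3 years may be supported for 3.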

14. Ensure the Availability of Security Updates for at Least 10 Years

Security updates must be kept available to users for at least 10 years, even if the product’s support period is shorter.

“Manufacturers shall ensure that each security update which has been made available to users during the support period, remains available after it has been issued for a minimum of 10 years after the product has been placed on the market or for the remainder of the support period, whichever is longer.” – CRA, Article 13.9

Security updates should be free of charge and, where technically possible, distributed separately from functionality updates. Furthermore, if archives are maintained for the public, they should clearly mention the cybersecurity risks associated with using an older version.

An Obligation to Remediate Vulnerabilities in Older Versions if the Latest Version Is a Paid-for Version

The obligation for manufacturers to remedy vulnerabilities via security updates (see the 13 essential requirements applicable to products) only applies to the latest version placed on the market, if and only if “the users of the versions previously placed on the market have access to the version last placed on the market free of charge and do not incur additional costs to adjust the hardware and software environment in which they use the original version of that product”. (Article 13.10)

This clarification is important in the case of software solutions (application, platform…) linked to an IoT product, which are also targeted by the CRA. Manufacturers will have to correct vulnerabilities and offer security updates for all previous versions of said software if users have to pay for the latest version, or to upgrade the environment in which they use it.

In a similar vein, free security updates could threaten extended security support business models, in which updates are free for the latest version but chargeable for earlier versions, a virtuous scheme most often used in the open-source community to finance new developments. In early 2023, the French ACN (Alliance for Digital Confidence) expressed its concern about this point, pointing to the OpenSSL model as an example.

Updates to Be Distributed Automatically and “Without Delay”

In addition to this free-of-charge obligation, the product-related essential requirements call for vulnerability remediation through “automatic security updates” activated by default, while those relating to vulnerability handling require these updates to be disseminated “without delay”. These two injunctions raise a number of issues for manufacturers, once again highlighted by the French ACN.

For a product to be updated automatically, it must be connected to the Internet. However, this is a parameter that depends on the user, not the manufacturer or the product. Some digital products may be used for a long time without being reconnected by the user, if at all. In this case, the user is likely to be using a product whose vulnerabilities are known and have been corrected by the manufacturer, but without the latter being able to act directly.
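To make the constraint concrete, here is a minimal, purely illustrative sketch in Python (the class and function names are our own invention, not CRA terminology) of the decision an update client has to make: automatic installation of security updates is on by default, the user may turn it off, and an offline device simply cannot be reached.

```python
from dataclasses import dataclass

@dataclass
class UpdatePolicy:
    # Default reflects the CRA's requirement: automatic security updates
    # are enabled unless the user actively opts out.
    auto_security_updates: bool = True

def should_auto_install(policy: UpdatePolicy,
                        is_security_update: bool,
                        device_online: bool) -> bool:
    """Decide whether an available update may be installed automatically."""
    if not device_online:
        # The limitation discussed above: connectivity depends on the user,
        # so the manufacturer cannot push anything to an offline device.
        return False
    if not is_security_update:
        # Functionality updates are distributed separately where possible.
        return False
    return policy.auto_security_updates  # user may have disabled the default
```

In this sketch, the only case the manufacturer fully controls is making the update available; whether it actually reaches the device depends on connectivity and user settings.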

This same situation also poses a problem with regard to another CRA requirement, namely to “share and publicly disclose information about patched vulnerabilities” as soon as an update has been distributed. But this ignores the fact that, in many cases, a significant proportion of users will not immediately benefit from the update, and will therefore be vulnerable to attack by malicious actors who may have learned of the vulnerability to be exploited through the manufacturer’s communications.

Indeed, it’s not always wise to communicate about vulnerabilities as soon as a patch is released, and it may even be that never communicating is the best option. We can therefore only recommend that manufacturers judiciously apply this provision of the essential requirements for vulnerability handling:

“In duly justified cases, where manufacturers consider the security risks of publication to outweigh the security benefits, they may delay making public information regarding a fixed vulnerability until after users have been given the possibility to apply the relevant patch.” – CRA, Annex I, Part II

Products Exempted From Automatic Updates

Recital 57 of the CRA states that the provisions on automatic updates do not apply:

  1. To “products with digital elements primarily intended to be integrated as components into other products”;
  2. To “products with digital elements for which users would not reasonably expect automatic updates, including products intended to be used in professional ICT networks, and especially in critical and industrial environments where an automatic update could cause interference with operations.”

In the first case, we’re talking about products that are themselves integrated into other products, for which it’s not possible to push updates directly. Indeed, the manufacturer of a microcontroller can make updates available, but cannot distribute them on its own if it’s integrated into another product over which it has no control. In this case, updating depends on the manufacturer of the final product.

In the second case, we’re talking about products whose automatic updates could have a direct impact on the activities of certain companies, especially in the industrial sector. There’s no need for a big speech to explain why: just imagine a water treatment plant or nuclear power station being paralyzed because a digital component has been updated… The example is far-fetched, but you get the idea.

15. Draw up an EU Declaration of Conformity, and Affix the CE Marking to Products

We’re nearing the end, but there are still a few administrative formalities to complete: the EU declaration of conformity and CE marking. These come once the conformity assessment procedure, discussed in Chapter 9, has been completed.

“Where compliance of the product with digital elements with the essential requirements […] has been demonstrated by that conformity assessment procedure, manufacturers shall draw up the EU declaration of conformity in accordance with Article 28 and affix the CE marking in accordance with Article 30.” – CRA, Article 13.12

The EU Declaration of Conformity

The EU declaration of conformity must be drawn up by the manufacturer and attest that conformity with the essential requirements has been demonstrated. It must be drafted in accordance with the model available in Annex V of the regulation, the content of which is as follows:

  1. Name and type and any additional information enabling the unique identification of the product with digital elements;
  2. Name and address of the manufacturer or its authorized representative;
  3. A statement that the EU declaration of conformity is issued under the sole responsibility of the manufacturer;
  4. Object of the declaration (identification of the product with digital elements allowing traceability, which may include a photograph, where appropriate);
  5. A statement that the object of the declaration described above is in conformity with the relevant Union harmonization legislation;
  6. References to any relevant harmonized standards used or any other common specification or cybersecurity certification in relation to which conformity is declared;
  7. Where applicable, the name and number of the notified body, a description of the conformity assessment procedure performed and identification of the certificate issued;
  8. Additional information: (to be completed if necessary);
  9. Signed for and on behalf of: (date and place of issue).

A simplified declaration template for SMEs and startups is available in Annex VI. It should also be noted that the EU declaration of conformity must be kept at the disposal of market surveillance authorities for at least ten years after the product is placed on the market, or for the support period, whichever is longer.

CE Marking

The rules and conditions for affixing the CE mark are laid down in Article 30 of the CRA. It is mandatory and must be affixed visibly, legibly and indelibly to products. We’re not going to go into detail about all the rules governing CE marking, as this falls outside our area of expertise, i.e. cybersecurity, and we believe that manufacturers are already well acquainted with this subject, which is nothing new. Therefore, we refer you to Article 30 of the regulation for further information.

To Sum Up

The Cyber Resilience Act is expected to be published in 2024, and will come into force three years after its publication in the Official Journal of the EU, leading to a compliance deadline for manufacturers in 2027. Meanwhile, the obligations to report severe incidents and actively exploited vulnerabilities (see Chapter 10) will come into force after 21 months, i.e. as early as 2026.

There’s no rush, but there’s no time to waste either. The obligations arising from the CRA are numerous, and three years won’t be too long to get things right. We therefore advise all security managers of CRA-affected entities to start the compliance process now. Better safe than sorry.

The bulk of the work should focus on the 21 essential requirements of the Cyber Resilience Act. Thirteen of these relate to product cybersecurity, and eight to vulnerability management. They are all mandatory and represent most of the effort needed to make products secure and compliant. We advise you to start by putting in place the processes for notifying severe incidents and exploited vulnerabilities, as this obligation will come into force before the others.

For more information on the essential requirements, we refer you to Chapter 2 of this guide and to Annex I of the regulation. The technical documentation (see Chapter 4) is also an excellent roadmap, as it lists just about everything that needs to be done under the CRA.

After the essential requirements comes conformity assessment (see Chapter 9), whose procedure and workload depend on the sensitivity of the product concerned. Then there are all the administrative formalities to be completed, from technical documentation to the EU declaration of conformity.

Finally, regular cybersecurity testing is an important step towards compliance, and an essential requirement in its own right. It ensures that products placed on the European market are properly secured, free from serious vulnerabilities, and pose no danger to their users, be they private individuals or businesses.

When it comes to testing, don’t hesitate to contact us. Yogosha has been a recognized European specialist in Offensive Security since 2015, and we offer several types of testing relevant to CRA compliance:

  • Pentest as a Service (PtaaS): Fast, agile and continuous penetration testing to address one of the major concerns of the regulation: product security throughout the lifecycle. Access an international community of 1,000+ vetted and certified security researchers, and receive real-time vulnerability reports via our platform.
  • Bug Bounty: A security test in its own right, but also an element of the Coordinated Vulnerability Disclosure policy called for by the CRA (see Chapter 11). Evaluate the security of your products on an ongoing basis, and identify critical vulnerabilities that have stayed under the radar. Reward, fix and retest with agility before they are exploited.

All that’s left for us to do is to wish you good luck in your journey towards compliance with the Cyber Resilience Act. And if, by any chance, you need advice at any time, or even expert guidance, please feel free to contact us.