If you run engineering or product at a Danish or Nordic SaaS company, you've probably heard the word NIS2 about forty times in the last year. From your board, from an enterprise customer's procurement team, from a lawyer, from someone at a conference. Most of the time the message is the same: “you should look into this.”
The problem is that the articles you've read so far tend to land in one of two buckets. Either they are legal summaries written for a general audience that tell you very little about what to actually do, or they are vendor pieces that conclude, surprise, that you should buy their platform. This article is neither. It's a working CTO's guide: what NIS2 is, whether you're in scope, what real readiness looks like for a 50 to 250 person SaaS, and what I'd do if I were sitting in your seat starting today.
What NIS2 actually is, for a growing SaaS CTO
NIS2 is the EU Directive on measures for a high common level of cybersecurity across the Union. It replaces the original 2016 NIS Directive and was adopted in December 2022. The transposition deadline for member states was 17 October 2024. It is not a regulation like GDPR, which means each member state implements it in local law, and the details vary.
Denmark transposed NIS2 through the NIS 2-loven, formally LOV nr 434 af 06/05/2025, passed by the Folketing on 29 April 2025 and in force from 1 July 2025. Covered entities had to register by 1 October 2025. The Danish supervisory model is split across sector regulators rather than consolidated into one agency. Styrelsen for Samfundssikkerhed (SAMSIK), the civilian agency under the Ministeriet for Samfundssikkerhed og Beredskab, acts as the coordinating NIS2 authority and national single point of contact, having taken over that function from Center for Cybersikkerhed (CFCS) in early 2025. Sector-specific oversight sits with Energistyrelsen, Finanstilsynet, Sundhedsdatastyrelsen, and others. If you’re in scope, knowing which regulator supervises you is step one, because the registration process, reporting channel, and likely the inspection style are specific to that authority. Significant incidents are reported via virk.dk; Forsvarets Efterretningstjeneste acts as the national CSIRT.
What makes NIS2 materially different from the old NIS is reach. The original directive covered a narrow set of operators of essential services and a handful of digital service providers. NIS2 dramatically widens the sectoral scope, tightens reporting timelines, introduces personal accountability for management, and backs it all up with fines that are now in the same order of magnitude as GDPR. For a 100-person SaaS, that means NIS2 is the first cybersecurity regulation that can actually show up at your door with real teeth. GDPR set the precedent. NIS2 operationalises it on the security side.
Am I in scope? The question that matters most
This is where most of the confusion lives, and for good reason. The scope rules are a combination of sector, size, and role in supply chains, and they do not map cleanly to how SaaS companies think about their business.
The basic test
NIS2 applies if you meet two conditions at the same time. First, you operate in one of the listed sectors. Second, you pass the size threshold: at least 50 employees, or at least €10 million in annual turnover, or at least €10 million in balance sheet total. Medium enterprises and above are in. Small and micro enterprises are generally out, with sector-specific exceptions for providers where size doesn't matter (for example qualified trust service providers, top-level domain registries, or DNS service providers).
Essential versus important entities
Sectors are split into two tiers. Essential entities include energy, transport, banking, financial market infrastructure, health, drinking water, waste water, digital infrastructure, ICT service management, public administration, and space. Important entities include postal and courier services, waste management, the manufacture, production and distribution of chemicals, production, processing and distribution of food, manufacturing (medical devices, computers and electronics, electrical equipment, machinery, motor vehicles, and other transport equipment), digital providers, and research.
Lex specialis note for financial entities. Credit institutions and financial market infrastructure listed above are principally regulated for ICT risk management under DORA (Regulation (EU) 2022/2554). DORA carves out NIS2's ICT risk management obligations for covered financial entities; NIS2 applies as lex generalis and in specific sectoral overlays. If your firm is a credit institution or authorised payment or investment firm, read the DORA article first and treat NIS2 as the secondary lens.
The two tiers have the same core obligations. The differences are in supervision and fines. Essential entities are subject to proactive supervision, meaning the competent authority can inspect you without cause. Important entities are subject to reactive supervision, meaning the regulator shows up after something happens or a complaint is raised. Maximum administrative fines sit at up to €10 million or 2% of global annual turnover, whichever is higher, for essential entities, and up to €7 million or 1.4% for important entities.
Where SaaS companies usually land
Most Nordic B2B SaaS companies that fall in scope do so under one of three labels. Digital infrastructure, if you run a DNS service, a content delivery network, a data centre service, or act as a trust service provider. ICT service management business-to-business, if you are a managed service provider or a managed security service provider. Or digital provider, the important-entity bucket that covers online marketplaces, online search engines, and social networking platforms.
Pure B2B SaaS that is not an MSP, not a marketplace, not a search engine, and not a social platform often falls outside direct scope, even at several hundred employees. A vertical SaaS for Danish law firms, a fintech-adjacent reporting tool, or a workforce management product may all sit outside NIS2 direct scope by sector. This surprises people who expected a blanket “all tech companies are in” rule. There isn't one.
The supply chain argument
Here is the part that changes the calculus. Even if you are not directly in scope, your enterprise customers almost certainly are. Danish banks, hospitals, utilities, transport operators, telcos, and public sector bodies are all in scope as essential entities. NIS2 Article 21 requires them to manage risks in their supply chain and their supplier relationships. In practice, that means they have to assess the security of their vendors, including SaaS providers, as part of their own NIS2 programme.
What this looks like operationally: within the next 12 months, if you sell into any regulated or semi-regulated customer in the Nordics, expect a vendor security questionnaire that references NIS2 directly. Expect contractual clauses requiring you to notify the customer of significant incidents within 24 hours. Expect their procurement team to ask for evidence of a security programme that maps to the Article 21 measures. You can refuse, and they can go with a competitor who said yes.
Concrete examples
- Danish vertical SaaS for clinics, 80 employees. Not an MSP, not a marketplace. Not directly in scope by sector. But sells into health sector customers who are essential entities. Supply-chain exposure is high. Treat as “NIS2-equivalent by contract.”
- Nordic workforce management platform, 200 employees. Sells into manufacturing and logistics. Likely not directly in scope. Customers are a mix of in-scope and out-of-scope. Expect questionnaires but not regulator knocks.
- Danish data centre colocation provider, 60 employees. Digital infrastructure sector. In scope as essential entity. Full NIS2 obligations apply directly.
- Nordic MSP managing Microsoft 365 tenants for mid-market customers, 120 employees. ICT service management. In scope as important entity at minimum. Full obligations apply.
- Fintech SaaS selling into Nordic banks, 40 employees. Under the size threshold, so not in direct scope by size, and probably not by sector either. But DORA likely applies through the customer relationship, which is a parallel and arguably stricter regime.
In my experience, the single most common mistake at this stage is a yes or no answer that ignores the supply chain. You can be out of direct scope and still have to build an NIS2-aligned programme because your top ten customers will require it. Work out both answers before you commit to a plan.
What NIS2 actually requires: the Article 21 measures in plain language
Article 21 lists ten categories of cybersecurity risk-management measures. They are framed at a high level on purpose. Implementing guidance varies by sector and member state, and much of the detail in Denmark will come from SAMSIK and the individual sector regulators over time. Here is what each of the ten means in practice for a 100-person SaaS.
1. Policies on risk analysis and information security
You need a written information security policy approved by management, and a documented method for analysing risks to your systems and data. Practically, this means an ISMS scope statement, a top-level infosec policy, and a risk register that you actually update. Not a three-ring binder. A living document. Most SaaS companies already have something that half-resembles this in a Confluence page. The NIS2 requirement is that it is intentional, reviewed, and traceable to risks you've identified.
2. Incident handling
A documented process for detecting, triaging, containing, and resolving security incidents, with clear roles and escalation paths. This is where the 24-hour and 72-hour reporting clocks live. Practically: an incident response runbook, a severity matrix, a defined incident commander role, and a post-incident review process that produces lessons learned. If you already run engineering incidents with clear severity levels, most of this exists. You need to fold security incidents into the same muscle, or build a parallel track if your ops team can't absorb it.
3. Business continuity and crisis management
Backup management, disaster recovery, and crisis communications. You need a tested backup and restore process, a documented DR plan with defined RTO and RPO for your critical services, and a crisis comms plan that covers customers, regulators, and employees. The tested part is important. Untested backups are not backups. Most SaaS companies I've worked with have not run a full DR exercise in the last 12 months. That's a finding waiting to happen.
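To make “tested” concrete, here is a minimal sketch of an automated restore check: restore the latest backup into a scratch database, verify integrity and a basic “is the data actually there” condition, and append a timestamped evidence record an auditor can read. It assumes SQLite dumps for brevity; the `customers` table, paths, and thresholds are placeholders to adapt to your real datastore and restore procedure.

```python
import hashlib
import json
import sqlite3
import time
from pathlib import Path

def verify_restore(backup_path: Path, evidence_log: Path, min_rows: int) -> dict:
    """Restore a backup into a scratch database and record evidence of the test."""
    # 1. Restore: open the backup read-only, copy it into a scratch database.
    src = sqlite3.connect(f"file:{backup_path}?mode=ro", uri=True)
    scratch = sqlite3.connect(":memory:")
    src.backup(scratch)
    src.close()

    # 2. Verify: structural integrity plus a sanity check that data survived.
    #    "customers" is a placeholder for whatever table proves your data is there.
    integrity = scratch.execute("PRAGMA integrity_check").fetchone()[0]
    rows = scratch.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    scratch.close()
    if integrity != "ok" or rows < min_rows:
        raise RuntimeError(f"restore failed: integrity={integrity}, rows={rows}")

    # 3. Evidence: the part most teams skip. Write down what was tested and when.
    record = {
        "tested_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "backup": backup_path.name,
        "sha256": hashlib.sha256(backup_path.read_bytes()).hexdigest(),
        "integrity": integrity,
        "row_count": rows,
    }
    with evidence_log.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record
```

Run it nightly or weekly from a scheduler and the evidence log becomes the artefact you hand a regulator or a customer: proof of real restores, not a policy that promises them.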
4. Supply chain security
Security assessment of your direct suppliers and service providers, proportionate to the criticality of what they do for you. For a SaaS company, your critical suppliers are usually your cloud provider, your identity provider, your payment processor, your observability stack, and any sub-processor that handles customer data. You need a register of these, a risk assessment, and contractual terms that push appropriate security obligations down the chain. The bar is not “assess every vendor.” The bar is “assess the ones that matter, document the method, and apply it consistently.”
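You do not need a GRC platform to start the register. A minimal sketch of the data structure and the “who is overdue for review” question it has to answer; the review intervals here are illustrative policy choices, not numbers from the directive, and belong in your supplier security policy:

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date, timedelta

# Review cadence by criticality -- an illustrative policy choice, not a NIS2 figure.
REVIEW_INTERVAL = {
    "critical": timedelta(days=365),
    "high": timedelta(days=365),
    "low": timedelta(days=730),
}

@dataclass
class Supplier:
    name: str
    service: str                 # what they do for you
    criticality: str             # "critical" | "high" | "low"
    handles_customer_data: bool  # sub-processor flag
    last_assessed: date | None   # None = never assessed

def assessments_due(register: list[Supplier], today: date) -> list[Supplier]:
    """Suppliers never assessed, or whose periodic review is overdue."""
    due = []
    for s in register:
        if s.last_assessed is None:
            due.append(s)
        elif today - s.last_assessed > REVIEW_INTERVAL[s.criticality]:
            due.append(s)
    return due
```

A spreadsheet holds the same fields; the point is that the register exists, has an owner, and produces a defensible answer to “when did you last assess your cloud provider?”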
5. Security in network and information systems acquisition, development, and maintenance
Secure SDLC. Vulnerability management. Patching. Change control. For a SaaS, this is the part your engineering team is most likely to already be doing, at least partially. What regulators will want to see is evidence: documented secure development standards, a vulnerability management process with SLAs by severity, a change management process, and some form of application security testing in the pipeline. If you run SAST, DAST, dependency scanning, and container scanning, you are in a reasonable place. If you don't, this is usually the fastest operational lift because the tools are mature.
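The SLA piece is straightforward to wire into whatever produces your findings, and doing so turns the policy into a measurable control. A minimal sketch, with illustrative SLA values you would set in your own vulnerability management policy:

```python
from datetime import datetime, timedelta

# Illustrative remediation SLAs by severity -- set your own in the policy.
SLA = {
    "critical": timedelta(days=7),
    "high": timedelta(days=30),
    "medium": timedelta(days=90),
    "low": timedelta(days=180),
}

def overdue(findings, now: datetime):
    """Flag open findings that have exceeded their severity SLA.

    findings: iterable of (finding_id, severity, opened_at, closed_at_or_None).
    Returns (finding_id, severity, time_past_sla) tuples for the overdue ones.
    """
    late = []
    for fid, severity, opened_at, closed_at in findings:
        if closed_at is None and now - opened_at > SLA[severity]:
            late.append((fid, severity, (now - opened_at) - SLA[severity]))
    return late
```

Run it weekly against your scanner export and report the count of overdue criticals to management; that single number is better evidence than most vulnerability management policies.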
6. Policies and procedures to assess the effectiveness of cybersecurity risk-management measures
You have to measure whether your controls are working. Internal audits, control testing, penetration testing, metrics reporting to management. This is the measure that separates a paper programme from a real one. Concretely: an annual penetration test against your production environment, quarterly internal control reviews, and a set of KPIs that management sees at least twice a year. If you can't say how you know your controls work, you haven't satisfied this measure, regardless of what the policies say.
7. Basic cyber hygiene practices and cybersecurity training
Training for all employees, not just engineers. Phishing simulation, awareness on data handling, secure remote work basics. At the same time, the management body has its own training obligation, which I'll get to in the next section. Practically: a yearly training cycle for all staff with completion tracking, and a separate, deeper curriculum for management. Avoid generic off-the-shelf training that nobody finishes. Tailored content gets completed, and the evidence trail is cleaner.
8. Policies and procedures regarding the use of cryptography and, where appropriate, encryption
Encryption in transit, encryption at rest for sensitive data, key management, and a documented cryptographic standard. For a modern SaaS running on AWS, Azure, or GCP, most of this is a configuration and documentation exercise rather than an engineering project. You need to state what you encrypt, how, with what keys, and who manages those keys. The weak spot I see most often is key rotation and key management responsibilities that are informal or entirely owned by one engineer.
9. Human resources security, access control policies, and asset management
Joiner-mover-leaver processes, least privilege, access reviews, asset inventory. The unglamorous middle of any security programme. For SaaS companies with good HRIS and SSO hygiene, this is mostly about documenting what you already do and running a quarterly access review that produces a signed-off report. Asset management in the cloud era is also workload and data inventory, not just laptops. A CMDB you never update is worse than none at all.
10. Use of multi-factor authentication, secured communications, and secured emergency communication
MFA for all privileged access and, realistically, for all employees. Secured voice, video, and text communications for sensitive business conversations. An out-of-band emergency communications channel you can use if your main systems are compromised. If your incident response plan assumes Slack works during an incident, and Slack is what just got breached, you have a problem. A tested alternative, even a simple Signal group for the incident response team, is enough to satisfy the intent.
Management body accountability: the part that scares boards
Article 20 is the piece most SaaS leaders underestimate. The management body -- which in a Danish A/S or ApS means the board of directors and executive management -- has to approve the cybersecurity risk-management measures, oversee their implementation, and can be held liable for breaches of these obligations. Member states, including Denmark, can impose personal sanctions on management. Temporary bans from management roles in essential entities are on the menu.
The second part of Article 20 is a training obligation. Members of the management body must follow training, and they have to offer similar training to their employees on a regular basis, to gain sufficient knowledge to identify risks and assess cybersecurity risk-management practices. This is not a nice-to-have. It is a specific legal duty at the board level.
In practice, the first time a board hears “you are personally on the hook for this” is usually when the company's general counsel or an external advisor spells it out. I'd rather they hear it from you in a controlled conversation than from a regulator in a post-incident one. Bring it to your next board meeting. It changes the conversation about security budget in ways that nothing else does.
Incident reporting obligations
NIS2 standardises a three-stage reporting timeline for significant incidents. This is one of the few parts of the directive that is crisp and easy to memorise.
- Early warning, within 24 hours of becoming aware. A notification to the competent authority or CSIRT indicating whether the incident is suspected to be caused by unlawful or malicious acts, and whether it could have a cross-border impact.
- Incident notification, within 72 hours. An update to the early warning including an initial assessment of the incident, its severity, its impact, and where available indicators of compromise.
- Final report, within one month of the incident notification. A detailed description, the type of threat or root cause that likely triggered the incident, the mitigation measures applied and ongoing, and where applicable the cross-border impact.
What counts as “significant” is defined as any incident that has caused or is capable of causing severe operational disruption of the service or financial loss for the entity, or that has affected or is capable of affecting other natural or legal persons by causing considerable material or non-material damage. In plain terms: a serious outage that degrades your service to customers, a confirmed data breach of customer data, a confirmed intrusion into your production environment. A phished credential that was caught before it was used is not significant. A phished credential used to access a production database is.
The 24-hour clock is the one that catches people. If you detect a serious incident at 02:00 on a Saturday, the clock is running. Your incident response plan needs to define, in advance, who makes the call, who submits the early warning, and through what channel. Figure this out in calm air. Do not figure it out during the incident.
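The deadlines themselves are simple enough to compute, and worth encoding in your runbook tooling so nobody does clock arithmetic at 02:00. A sketch, which assumes the 72-hour notification is submitted at its deadline and approximates “one month” as 30 days; confirm the exact interpretation with your competent authority:

```python
from datetime import datetime, timedelta, timezone

def nis2_reporting_deadlines(aware_at: datetime) -> dict:
    """Compute the three NIS2 reporting deadlines from the moment of awareness.

    The final report is due one month after the incident notification; this
    sketch assumes the notification goes out at its 72-hour deadline and
    treats "one month" as 30 days -- an approximation, not legal advice.
    """
    early_warning = aware_at + timedelta(hours=24)
    notification = aware_at + timedelta(hours=72)
    final_report = notification + timedelta(days=30)
    return {
        "early_warning": early_warning,
        "incident_notification": notification,
        "final_report": final_report,
    }
```

Paste the three timestamps into the incident channel the moment an incident is declared significant; the clocks stop being abstract once they are on the screen.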
What real readiness looks like for a 50 to 250 person SaaS
If you're starting from zero and you want to be in a defensible position within 90 days, here's the shape of the work. I've led and observed this sprint a few times. It does not produce a finished programme. It produces a credible one, with a 12-month plan to finish.
The 90-day readiness sprint
Weeks 1 to 3: scoping and diagnostic. Confirm scope status: direct in-scope, supply-chain-in-scope, or out of scope. Identify your competent authority if in direct scope. Run a gap assessment against the ten Article 21 measures plus Article 20. Interview the CEO, CTO, head of engineering, head of people, head of legal. Output: a scoped gap assessment and a prioritised backlog.
Weeks 4 to 8: policy and process foundation. Draft or refresh the top-level documents. Information security policy, acceptable use policy, access control policy, incident response plan, business continuity and disaster recovery plan, supplier security policy, cryptography policy. Hold working sessions with the teams who actually own the processes. Do not write policies in a vacuum and throw them over the wall.
Weeks 6 to 10: operational controls. In parallel with the documentation work, close the highest-risk control gaps. Typical quick wins: MFA coverage audit and remediation, privileged access review, endpoint protection coverage, vulnerability management SLAs, logging and monitoring coverage of production.
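The MFA coverage audit mentioned above reduces to a set comparison between your HR roster and your identity provider's list of MFA-enrolled accounts, both of which most HRIS and IdP products can export. A minimal sketch; the `exemptions` set (break-glass accounts, service accounts) is a policy decision you document, not a default:

```python
def mfa_gaps(roster: set, mfa_enrolled: set, exemptions: set = frozenset()) -> dict:
    """Compare the HR roster against IdP MFA enrolment.

    Returns accounts missing MFA, stale IdP accounts not on the roster
    (likely missed offboarding), and a coverage percentage for reporting.
    """
    missing = roster - mfa_enrolled - exemptions
    stale = mfa_enrolled - roster          # offboarding gap, worth its own ticket
    covered = len(roster & mfa_enrolled) + len(exemptions & roster)
    pct = round(100 * covered / len(roster), 1) if roster else 100.0
    return {
        "missing_mfa": sorted(missing),
        "stale_accounts": sorted(stale),
        "coverage_pct": pct,
    }
```

Run it monthly, file the output, and the coverage percentage becomes one of the KPIs your management report needs anyway.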
Weeks 9 to 12: testing and evidence. Run at least one incident response tabletop exercise with management in the room. Run a DR restore test. Review backups. Kick off an external penetration test if you don't have a recent one. Produce the first management report.
End of week 13: board-level readiness review. Management body reviews the state of the programme, approves the policy stack, signs off the risk treatment plan, and completes their first round of management training. This is the moment you can say, with a straight face, that you have an NIS2-aligned programme in flight.
The 12 things you need on paper
- Information security policy, approved by management.
- Risk management methodology and current risk register.
- Access control policy with joiner-mover-leaver procedures.
- Incident response plan with the 24/72/30 reporting workflow.
- Business continuity and disaster recovery plan with tested RTO/RPO.
- Supplier security policy and supplier register with risk ratings.
- Secure development standard and change management procedure.
- Vulnerability management policy with SLAs by severity.
- Cryptography and key management policy.
- Data classification and handling policy.
- Security awareness training programme with completion records.
- Management reporting pack and board security charter.
The 5 things you need in practice
- MFA on everything that matters, with a recent coverage audit.
- A functioning detection and response capability -- either in-house with decent logging and an on-call rotation, or a managed SOC you actually talk to.
- A tested backup and restore process with evidence of a real restore inside the last 12 months.
- A vulnerability management loop that closes criticals within days, not quarters.
- Incident response muscle memory: at least one tabletop exercise in the last 12 months, with the CEO and the CTO in the room.
How to avoid building a paper tiger
The failure mode for NIS2 programmes is a binder full of policies that nobody in engineering has read, signed off by a management team that doesn't really understand them. It passes a superficial audit and falls apart the moment anything real happens. Three rules that keep this from happening.
First, write every policy with the person who operates the process. If your head of platform engineering didn't co-author the incident response plan, it's not going to work during an incident. Second, instrument your controls. If a control isn't measured, it doesn't exist. Pick three to five KPIs per measure and report them quarterly. Third, test under stress. A tabletop exercise with realistic injects will tell you more about your readiness than a compliance checklist ever will.
Common traps I've watched companies fall into
Over-scoping
The most expensive mistake. A CTO hears “NIS2” and the next thing I see is a proposal to pursue full ISO 27001 certification, sometimes with ISO 27017 and ISO 27018 thrown in, on a six-month timeline. ISO 27001 is a useful framework, and if you have a strong commercial reason for certification, fine. But NIS2 does not require ISO 27001. You can map your NIS2 programme to the ISO control set for structure without committing to the certification audit and surveillance cycle. That decision is worth six to seven figures over three years. Make it deliberately.
Under-documenting supplier assessments
Companies do supplier reviews but don't write them down. Then a regulator or a customer asks for the supplier risk register and there is nothing to hand over. The work was done. The evidence wasn't produced. Equally common: companies treat supplier assessment as a procurement-only activity and don't involve security. That produces checkboxes without substance.
Confusing deployer and provider obligations
This is a copy-paste from the EU AI Act conversation, but the same pattern shows up in NIS2. If you're a SaaS vendor selling into a hospital, the hospital is the regulated entity. You are their supplier. Your obligations are a mix of what the directive imposes on you directly (if you're in scope) and what flows down through the contract. Knowing which hat you are wearing for which customer matters, because the evidence you need to produce differs.
Assuming the DPO handles it
The Data Protection Officer is a GDPR role, focused on personal data protection. NIS2 is broader, covering operational security of network and information systems, regardless of whether personal data is involved. A DPO is not automatically qualified to run the NIS2 programme, and in many Danish companies the DPO is part-time external counsel. NIS2 ownership should sit with the CTO, CISO, or head of security. The DPO is a stakeholder, not the owner.
Buying a GRC tool before knowing what you need it for
Every compliance vendor on LinkedIn is selling an NIS2 module. Some of them are genuinely useful. But buying a GRC platform before you've defined your control set, mapped your risks, and worked out your evidence model is like buying a CRM before you know who your customers are. You end up configuring the tool for six months, then realise it doesn't quite fit. The order is: understand obligations, define controls, pilot evidence collection manually, then buy tooling to scale what works. Not the other way round.
Treating it as a one-off project
NIS2 is a continuous obligation, not a certificate you earn and frame. If your internal narrative is “we're doing the NIS2 project,” you will build something that finishes and then decays. If the narrative is “we're setting up our security operating model and NIS2 is part of what it satisfies,” you will build something that lasts. The words matter because the resourcing follows them.
Costs and timelines: what a 100-person SaaS should realistically expect
Every situation is different, but after running and advising several of these programmes, here is a realistic range for a SaaS of roughly 100 people starting from a low-to-moderate baseline. I'm giving you the shape, not a quote. Your mileage depends on your starting posture, your sector, your customer pressure, and your tolerance for risk.
| Delivery model | First 12 months | Ongoing (annual) | Time to defensible |
|---|---|---|---|
| In-house only, 0.3 to 0.5 FTE + eng time | DKK 400k to 700k (internal cost) | DKK 150k to 300k | 6 to 9 months |
| In-house + fractional advisor | DKK 500k to 900k total | DKK 200k to 400k | 3 to 6 months |
| Mid-tier consultancy | DKK 800k to 1.5m | DKK 400k to 700k | 4 to 6 months |
| Big 4 engagement | DKK 1.5m to 3m+ | DKK 700k to 1.5m+ | 6 to 12 months |
The in-house-only path is cheapest on cash but most expensive on elapsed time and CTO attention. Expect to lose 0.3 to 0.5 FTE of senior engineering time for six months if you go this route, then around 0.1 FTE ongoing once it's embedded. The risk is that without someone who has done this before, you'll over-engineer some parts and miss others entirely.
The fractional advisor path is how I usually recommend 50 to 250 person SaaS companies approach this. A senior operator embedded one to two days a week for three to six months, doing the scoping, writing the core policies with your team, running the readiness sprint, then stepping back to a governance role. Total engagement cost typically falls in the 50k to 150k DKK range depending on duration and depth, sitting alongside the internal FTE cost.
The Big 4 path buys you brand cover and deep benches, which matters in some board contexts. It also buys you fourteen junior consultants, a methodology deck, and a lot of meetings. For a 100-person SaaS, the cost-to-value ratio usually does not work.
What about the cost of doing nothing? Direct fines are the obvious number. Less obvious but usually more damaging: losing enterprise deals in your pipeline because you cannot answer the security questionnaire, having a significant incident with no prepared reporting workflow and compounding regulatory exposure on top of the incident itself, and the board time cost of handling an inspection cold. I've seen a single large customer walk because of a failed security review. That one deal usually dwarfs the annual cost of the programme.
What to do this week, this quarter, this year
If you read one section of this article, read this one. Concrete actions, ordered by urgency.
This week
- Make the scope call. Direct in scope, supply-chain in scope, or out of scope. Write down the reasoning. Share it with your CEO and general counsel.
- If you're in direct scope, identify your competent authority. If in doubt, start with SAMSIK and work back from there.
- Put NIS2 on the next board agenda with a 15-minute slot. Brief the chair in advance on the personal accountability angle.
This quarter
- Run a gap assessment against the ten Article 21 measures. Two weeks of work, internal or external.
- Stand up an incident response runbook that explicitly includes the 24/72/30 reporting workflow. Run one tabletop exercise against it.
- Complete a supplier inventory, classify by criticality, and start the assessment cycle on your top ten.
- Audit MFA coverage, privileged access, and backups. Close whatever you find.
This year
- Finish the 12-document policy stack, signed off by management.
- Deliver management body training and a broader staff awareness programme with tracked completion.
- Run an external penetration test and remediate the findings with documented SLAs.
- Establish quarterly KPI reporting to the management body, with a written charter describing what security owns and how it reports.
- Review and renegotiate top customer contracts to align notification obligations with what NIS2 actually requires, not whatever clause their procurement team copy-pasted in.
Closing
NIS2 is not the hardest regulatory regime a growing SaaS will face. DORA is more prescriptive. The AI Act is more ambiguous. GDPR is more entrenched. But NIS2 is the one most Nordic tech leaders are currently under-prepared for, because it arrived in a year where everyone was already chasing three other things and the Danish implementation details are still settling. The companies that handle it well will treat it as a forcing function for the security programme they probably should have built anyway, not as a box-ticking exercise.
This article is as practical as I can make it in 4,500 words. It cannot replace a proper look at your specific sector, your customer base, and your starting posture. If you want to talk through where you actually stand and what the sensible next ninety days look like, a scoping call is the fastest way to get there. Otherwise, take the three actions in the “this week” list above. They cost you nothing and they put you ahead of most of the companies I talk to.