A senior engineer's Monday morning
A Monday at PowerGrid Solution looked like this. Five Excel workbooks open. Approximately 35,000 rows per sheet. CSV exports from three different distribution operators. PDF scans of engineering parameters. DXF drawings from CAD.
The same generation site appeared in every file under a slightly different name. The same capacity value was 48.5 MW in the statement of works and 48.3 MW in the connection contract. The substation name was abbreviated one way on the operator's export and a different way on the G99-equivalent application. One wrong value copied across three sources and the rework cycle started three weeks later when the client caught it.
A senior engineer, the kind of person PowerGrid needed on actual grid design work, spent three weeks per project reconciling this manually before any real engineering work could begin. VLOOKUP formulas that broke when an operator renamed a column. Manual cross-references between the connection offer and the simulation input data. Line-by-line duplicate checks. Every project, again.
This is what happens when your data pipeline depends on file formats you do not control. The operator has zero obligation to keep things consistent. When they change the format tomorrow, your team absorbs the cost. Every time.
How PowerGrid found us
The first email came through a referral. Alex Zelinca, PowerGrid's strategy partner, described a problem we had heard in different words from different engineering teams across Europe. Their team in Iași was losing senior-engineer weeks to file cleanup before any modelling could start. They had tried Excel macros. The macros broke within months of any operator format change. They had tried hiring more people. The bottleneck stayed.
Florin Băiceanu, their lead engineer and partner, shared sample files under NDA the same week. We recognized the pattern within an hour of opening them. The distribution operators were Romanian (Delgaz, E-Distribuție, DEER). The failure mode was not. Every European grid engineering team working across multiple DSOs and TSOs is fighting the same battle. The operator vocabulary changes country to country. The problem does not.
What was actually broken
PowerGrid runs grid connection studies across Romania. Their clients are developers, municipalities, and industrial operators connecting to the medium and high-voltage distribution grid. Every project starts with data from the distribution operator. Every operator sends it differently.
The team had data in four disconnected formats:
- Excel workbooks, typically 5 sheets per workbook, approximately 35,000 rows per sheet, with the header row in a different position depending on which operator exported it
- CSV exports from operators, each with different column conventions, different separators (comma, semicolon, tab), and different character encodings that broke parsing if guessed wrong
- PDF documents with scanned engineering parameters, often with critical capacity values placed in free-text comments rather than structured columns
- DXF files from CAD, with station names abbreviated differently from how they appeared in the operator's own exports
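The separator and encoding variability in the CSV bullet is what forces defensive parsing. Here is a minimal sketch of that kind of loader; the encoding order, function name, and sample data are illustrative assumptions, not Noda's actual implementation:

```python
import csv
import io

def load_operator_csv(raw: bytes):
    """Defensively decode and parse a CSV export whose encoding and
    separator vary by operator. Illustrative sketch only."""
    # Try common encodings in order; a wrong guess either raises or
    # garbles Romanian diacritics (ă, î, ș, ț). latin-1 never fails,
    # so the loop always terminates with some decoding.
    for encoding in ("utf-8-sig", "utf-8", "cp1250", "latin-1"):
        try:
            text = raw.decode(encoding)
            break
        except UnicodeDecodeError:
            continue
    # Sniff the separator from a sample instead of assuming a comma.
    dialect = csv.Sniffer().sniff(text[:4096], delimiters=",;\t")
    return list(csv.reader(io.StringIO(text), dialect))

sample = "name;capacity_mw\nStatia Nord;48.5\n".encode("cp1250")
rows = load_operator_csv(sample)
# rows[0] == ["name", "capacity_mw"]
```

Guessing the encoding wrong is exactly the failure mode described above: the file still "parses", but station names with diacritics no longer match anything downstream.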
Producing one documentation set took three weeks. A senior engineer pulled data from every source, cross-checked values by hand, assembled the deliverable in Word, and formatted it for the operator's template. One wrong comma, one renamed column, one line break in the wrong place, and the formulas holding the process together collapsed.
The errors propagated silently into the simulation model, into the centralizator (the consolidated summary table), and into the technical memorandum, and surfaced weeks later as a client query or an operator rework cycle. The team had been living this for years. Every format change from an operator meant rebuilding the internal process. Institutional knowledge about how a specific operator formatted their files lived in one senior engineer's head. Nothing was written down.
The three-hour screen share
We asked Florin if we could watch a senior engineer work for three hours. Not a walkthrough. Not a demo. Actual project work on actual operator files, with us taking notes.
What we watched was not engineering.
The engineer opened the ATR sheet first. Filtered by zone. Scanned for missing fields. Cross-referenced against the connection offer in a second workbook. Manually corrected a station name abbreviation she recognized from memory. Opened the PDF scan, read a capacity value, copy-pasted it into a cell. Caught a duplicate by eye because the capacity values were off by 0.2 MW between two sources and the name was slightly different. Ran a VLOOKUP. The VLOOKUP returned #N/A because the operator had renamed a column since the last project. Rebuilt the formula. Moved on.
This was what a senior engineer was spending three weeks on, per project. Power systems domain expertise applied to data janitorial work.
That screen share became the architecture. We did not propose a platform based on how we thought grid engineering should work. We proposed one that mirrored exactly what their engineer was already doing in her head, with each manual step replaced by an automated equivalent.
What the engineers stopped doing
This is the part of the story that matters.
After we deployed Module 1, the engineers stopped filtering files manually. Deduplication that used to take two days of manual cross-referencing ran in four minutes.
After Module 2 went live, they stopped opening simulation exports and comparing N against N-1 scenarios by eye. The binary parser read the files directly. Threshold violations were flagged automatically.
After Module 3, they stopped assembling technical memorandums in Word.
Florin said it to us directly in the fourth week of deployment. His senior engineers had just realized that most of what they had been doing for the past five years was not engineering. It was work that anyone with Excel training could have done, except it needed their domain knowledge to catch the specific errors that would propagate into a regulatory deliverable. That is why it had always fallen to them. That is why it had never scaled.
Noda took that layer. The engineers got their time back for the design decisions, the technical reviews, and the client conversations that actually required their expertise.
That was the moment of truth. Not the performance numbers. The realization that what felt like engineering work had been a category of task that should never have belonged to senior engineers in the first place.
What we shipped
Three modules. All live in production at PowerGrid today.
Module 1 — Ingestion and deduplication
A guided workflow that ingests Excel, CSV, PDF, and DXF in whatever format a distribution operator sends. Source headers are mapped to a canonical grid-project schema with confidence scores. The engineer confirms low-confidence mappings once. The decision is remembered for every future file from the same operator.
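A toy version of the header-to-schema mapping can be sketched as follows. The canonical field names, the `difflib` scoring, and the per-operator cache of confirmed decisions are assumptions for illustration; Noda's real schema and matcher are richer than this:

```python
from difflib import SequenceMatcher

CANONICAL = ["site_name", "capacity_mw", "substation", "voltage_kv"]

# Mappings the engineer has already confirmed for this operator are
# replayed instead of re-scored (illustrative cache, keyed by operator).
confirmed = {"delgaz": {"Denumire statie": "substation"}}

def map_headers(operator: str, source_headers: list[str], threshold: float = 0.6):
    """Map operator headers to a canonical schema with a confidence score.
    Sketch only, not Noda's actual scoring."""
    result = {}
    for header in source_headers:
        cached = confirmed.get(operator, {}).get(header)
        if cached:
            result[header] = (cached, 1.0)  # remembered decision
            continue
        # Score against every canonical field; keep the best match.
        best = max(
            CANONICAL,
            key=lambda field: SequenceMatcher(None, header.lower(), field).ratio(),
        )
        score = SequenceMatcher(None, header.lower(), best).ratio()
        # Low-confidence mappings go to the engineer for one-time review.
        result[header] = (best, score) if score >= threshold else (None, score)
    return result

mapping = map_headers("delgaz", ["Capacity (MW)", "Denumire statie"])
```

The point of the cache is the "confirm once" behaviour described above: the engineer resolves an ambiguous header a single time, and every future file from that operator reuses the decision.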
Duplicates across files are detected using field-level similarity scoring, with source evidence shown side by side. Auto-merge above 95 percent confidence. Engineer review between 85 and 95. Skip below 85.
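The three-way triage can be illustrated with a simplified similarity function. The two-field scoring below (fuzzy name match averaged with relative capacity closeness) is an assumption for the sketch; the thresholds are the ones stated above:

```python
from difflib import SequenceMatcher

def similarity(a: dict, b: dict) -> float:
    """Average field-level similarity between two candidate records.
    Illustrative: names compared fuzzily, capacity by relative closeness."""
    name_score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    cap_a, cap_b = a["capacity_mw"], b["capacity_mw"]
    cap_score = 1.0 - min(abs(cap_a - cap_b) / max(cap_a, cap_b), 1.0)
    return (name_score + cap_score) / 2

def triage(a: dict, b: dict) -> str:
    """Route a candidate pair using the thresholds described above:
    auto-merge above 0.95, engineer review between 0.85 and 0.95, skip below."""
    score = similarity(a, b)
    if score > 0.95:
        return "auto-merge"
    if score >= 0.85:
        return "engineer-review"
    return "skip"

# A pair like the 48.5 vs 48.3 MW discrepancy from the opening story:
pair = triage(
    {"name": "Statia Nord 110kV", "capacity_mw": 48.5},
    {"name": "St. Nord 110 kV", "capacity_mw": 48.3},
)
# pair == "engineer-review"
```

This is the case the engineer used to catch by eye: names slightly different, capacities 0.2 MW apart. The scoring lands it in the review band instead of silently merging or silently skipping.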
On PowerGrid's production data: 35,000 rows reconciled in 4 minutes. Previously 2 full days of senior-engineer time.
Module 2 — Scenario analysis
Noda reads simulation exports directly through a binary parser we built from scratch for the dominant European grid simulation tool. To our knowledge, we are the only company outside the original vendor that can read these files programmatically. The parser runs deterministic N and N-1 scenario comparisons across multiple time horizons and seasonal regimes, and flags threshold violations automatically.
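The flagging step can be sketched in a few lines. The per-unit voltage limits and the record layout here are illustrative assumptions, not the tool's actual thresholds or Noda's data model:

```python
def flag_violations(busbars, v_min=0.95, v_max=1.05):
    """Flag busbars whose per-unit voltage falls outside limits in either
    the intact (N) or contingency (N-1) case. Limits are illustrative."""
    flagged = []
    for bus in busbars:
        for scenario in ("n", "n_minus_1"):
            v = bus[scenario]
            if not (v_min <= v <= v_max):
                flagged.append((bus["id"], scenario, v))
    return flagged

flags = flag_violations([
    {"id": "BB-0417", "n": 1.01, "n_minus_1": 1.08},  # N-1 overvoltage
    {"id": "BB-0512", "n": 0.99, "n_minus_1": 1.02},  # within limits
])
# flags == [("BB-0417", "n_minus_1", 1.08)]
```

Comparing every busbar in both scenarios is exactly the check the engineers used to run by eye across two open exports; as a deterministic pass it cannot miss a row.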
On PowerGrid's files: 1,500+ busbars and 1,400+ power lines parsed and visualized in under 30 minutes. Previously 2 to 3 days.
Module 3 — Document assembly
Safety-critical engineering values are computed deterministically. Load flow. Fault current. Voltage thresholds. Coefficient calculations. Economic estimates. AI drafts narrative sections with regulatory citations. The engineer approves every output before it reaches a client. Every value carries a source file, a source row, a transformation rule, a reviewer identity, and an approval timestamp.
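The audit trail described above amounts to a provenance record attached to every value. A minimal sketch of such a record, with invented field values and a structure that mirrors the text rather than Noda's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditedValue:
    """One traceable value in a deliverable: where it came from, how it
    was transformed, who approved it, and when. Field names illustrative."""
    value: float
    source_file: str
    source_row: int
    transformation_rule: str
    reviewer: str
    approved_at: datetime

# Hypothetical example record; file name, row, and reviewer are invented.
record = AuditedValue(
    value=48.5,
    source_file="ATR_2024_export.xlsx",
    source_row=1042,
    transformation_rule="unit-normalise kW -> MW",
    reviewer="senior.engineer",
    approved_at=datetime(2025, 3, 14, 9, 30, tzinfo=timezone.utc),
)
```

Making the record immutable (`frozen=True`) reflects the audit requirement: once a reviewer has signed off, the provenance cannot be silently edited.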
On PowerGrid's workflow: under 1 hour per regulated technical document. Previously 4 to 8 hours.
AI assists. Engineers approve.
Nothing leaves the system without a human sign-off. AI scope is deliberately limited to pattern recognition, narrative drafting, and anomaly detection. Every safety-critical calculation is deterministic. Every output is auditable end-to-end.
This is how Noda is designed against the EU AI Act Annex III high-risk requirements, enforceable from August 2026. Generic AI tools (ChatGPT, Copilot) will not be compliant for use on critical infrastructure data after that date. Noda was built to be.
The results
- ✕ 4 disconnected file formats
- ✕ 3 weeks per documentation set
- ✕ Senior engineers on data janitorial work
- ✕ Errors caught only at client delivery
- ✕ Excel formulas broke on every operator format change
- ✕ Institutional knowledge in one senior engineer's head
- ✓ One unified pipeline for all formats
- ✓ 4 minutes for the data reconciliation layer
- ✓ Senior engineers on design and review
- ✓ AI verification before any export leaves the system
- ✓ Format-agnostic processing that adapts when operators change
- ✓ Workflow logic as a repeatable platform
312 substations processed. 8 operator formats covered. 35,000 rows per file reconciled in minutes instead of days. Zero duplicates reaching the final deliverable. EU-hosted infrastructure, GDPR compliant, full audit trail on every output.
The engineering team onboarded without a single training session. That was the test we cared about most. If the system had required training, it would have meant we had not understood their workflow well enough.
What Florin said
"Noda works the way our engineers think. We did not have to change how we work. The system adapted to us. What used to take weeks now takes minutes, and the senior engineers are finally spending their time on engineering."
What this means for your team
If your grid engineering team is losing senior-engineer days to operator-file reconciliation, the problem is not your engineers. The problem is that every distribution operator sends data differently, every transmission operator has its own template conventions, and the tools between them were never designed to talk to each other.
Noda sits in that gap. We do not replace NEPLAN, PowerFactory, or CAD. We do not replace engineering judgment. We remove the manual work around those tools so your senior engineers can spend their time on the decisions that actually require their expertise.
The pattern we saw at PowerGrid is the pattern we are now seeing at grid engineering teams across Poland, Germany, and the UK. The operator vocabulary changes country to country. The problem does not.
