Reality: Designing the Foundation of NGX's Climate Infrastructure

How design guided a product through multiple reinventions

Executive Summary

Reality is NGX's first act of definition, the product where climate infrastructure became real. Where compliance, carbon, and code met in one living system. My job was to design that system from zero, shaping not just how it looked, but how it worked, scaled, and communicated truth.

Reality was NGX's first attempt at bringing real-world emissions data into a living, visual system. It became the base layer that everything else would depend on: Trace, Vault, Energy Bank. The brief was ambitious: "Build an IoT-powered platform that helps enterprises track, analyze, and report their emissions in real time."

In practice, it meant designing for uncertainty. The data was messy, the architecture was still forming, and the expectations were sky-high. This is where NGX's design language and its discipline were born.

The result: Reality became the MENA region's first real-time, IoT-powered emissions accounting platform, reducing supplier onboarding by 70%, earning regulatory trust, and becoming the architectural bedrock for NGX's entire climate stack.

Context & Problem

Before Reality, emissions reporting was broken, especially in the MENA region.

Governments were tightening ESG disclosure requirements. Global partners started asking tough questions. Fines were coming. Incentives were tied to verified reductions. And yet, most organizations still reported carbon like it was 1995: Excel files, email attachments, third-party audits six months after the fact.

This wasn't just inefficient. It was dangerous.

Non-compliance could lead to millions in regulatory fines. Unverifiable data could disqualify enterprises from green finance and global tenders. Mistrust in the data meant mistrust in the entire climate effort.

NGX saw the opportunity to fix this at the source: build a product that could gather, standardize, and structure emissions data in real time across facilities, fleets, and suppliers, with IoT as its heartbeat.

But emissions data is messy. IoT devices speak a hundred dialects. Operational staff don't think in carbon. Admins need auditability. And reporting frameworks? They change like the wind.

We weren't just designing a dashboard. We were designing a language for climate truth, one both machines and humans could trust.

The idea was to connect IoT meters, vehicles, and facility sensors into one unified platform that could automatically calculate Scope 1–3 emissions and generate compliant reports. But we weren't just visualizing data. We were building trust in it.

The Dual Challenge

The challenge was balancing two user perspectives:

Suppliers and operators, who uploaded data from the field. They wanted speed, clarity, and error forgiveness.

Admins and enterprises, who validated and reported that data to regulators. They needed traceability, control, and documentation.

And we had to do this without ever speaking directly to those users.

My Role & Constraints

As the founding designer, I was responsible for:

Defining the product's architecture, flows, and visual system

Creating NGX's core design language

Translating compliance logic and IoT complexity into usable interfaces

But I wasn't just drawing screens. I was embedded with the backend and data teams, often writing interface contracts, defining edge cases, and shaping feature scopes.

Constraints defined everything:

  • A small team meant design had to maximize reuse and minimize complexity.

  • Constant pivots risked breaking coherence; I had to build systems that could bend, not snap.

  • Competing against massive incumbents meant differentiation wasn’t optional — it was survival.

Design became the bridge between uncertainty and direction.

The Constraints Were Intense

No user interviews. Our clients were large enterprises. Legal, security, and procurement made access impossible. We would never sit in a room with the operators uploading emissions data or the sustainability managers reviewing compliance reports.

Unstable backend. The data infrastructure evolved daily. What was true today might be obsolete tomorrow. APIs changed. Data schemas shifted. Integration patterns got rewritten mid-sprint.

Multiple personas. Suppliers uploading field data; admins analyzing and reporting it. One product had to serve both without compromising either experience.

Regulatory ambiguity. Reporting standards (like GHG Protocol or ISO 14064) were shifting. We had to design for compliance that didn't exist yet. The rules were moving targets, and our interface had to flex with them.

Technical complexity. IoT streams brought their own chaos: sensor calibration errors, unit mismatches, connectivity drops, delayed syncs. Every design decision had to account for failure states we couldn't predict.

These weren't just constraints. They were the reality we had to design within.

Process & Methods

1. System as User: Research Through Data, Not Interviews

In the absence of direct access to users, I treated the data itself as a proxy. Instead of user interviews, I worked closely with the two people who knew the product's logic inside out: Sebastian, who led data systems, and Sławek, who handled the backend architecture.

Our "research" was technical. It happened in Slack threads, Figma comments, and long working sessions where I'd ask:

"What's the most common failure point in data submission?"

"Where do suppliers usually get stuck?"

"What kind of errors make admins lose confidence in the report?"

"Which fields are most frequently misconfigured?"

"What error types correlate with supplier churn?"

"How do admins audit a data trail when trust is in doubt?"

They weren't describing user feelings. They were describing behaviors through data patterns. And that was enough.

Every failed upload, every malformed dataset, every support ticket became a signal. These questions gave me not user personas, but system personas: insights based on behavior patterns, failure logs, and friction maps.

This collaboration gave me something better than user quotes. It gave me system empathy. A deep understanding of how data really moves, breaks, and rebuilds inside emission reporting.

2. Building and Breaking Prototypes

We worked in public: Slack, Figma, Notion. I'd drop a prototype. The backend team would try to break it.

This became our rhythm: design, stress-test, redesign.

Iteration 1: The Data Wall

Our first build was purely functional. Every metric visible, every field exposed, every data point rendered in one endless grid.

We called it The Data Wall. It showed everything. Every metric. Every sensor. Every field. A truth machine.

Sebastian loved it. Everyone else was terrified.

It was accurate but incomprehensible: the kind of screen that makes sense only to the engineers who wrote the code behind it. For everyone else, it was unreadable.

That failure gave me my first real insight:

"Clarity isn't hiding data. It's sequencing attention."

Or as I told the team: "Clarity isn't about simplifying data. It's about designing what people should notice first."

So we scrapped it and started again.


Iteration 2: Trying 3D, Then Taming It

We shifted direction. If numbers confused users, maybe space could help them understand the data. That's where the 3D idea came in.

We explored spatial thinking early. Could emissions data feel grounded in space? We built 3D scenes: warehouses, fleets, sites. A visual representation of hubs, warehouses, and fleets that could help admins contextualize their emissions footprint.

But 3D became its own problem.

The first renders were too heavy: high poly counts, reflective surfaces, long export times. Real-time data overlays clashed with the lighting and color hierarchy, and the scenes felt architectural rather than operational.

After 30+ visual tests and dozens of renders, we found a visual logic that worked, then stripped it down to the essentials:

  • Flat isometric geometry (not full 3D)

  • Soft mint glows for active sites and nodes

  • Layered, tappable data markers built right into the scene

This wasn't visual flair. It was cognitive orientation. It helped admins understand where emissions happened, not just what happened.

That was the beginning of NGX's visual identity: clarity through calm, precision through restraint.

Iteration 3: Serving Two Personas with One Interface

Reality had two audiences using one interface. Every feature was a balancing act:

  • Suppliers wanted speed, clarity, and error forgiveness

  • Admins needed traceability, control, and documentation

We ran three complete dashboard redesigns, testing three architectures:

Shared View: Same layout for both users. Instant chaos. The interface tried to serve everyone and served no one.

Split Dashboards: Separate environments. Inconsistent and hard to maintain. Every update required double the design work, and the experiences drifted apart visually.

Role-based Layering: One unified layout, dynamic modules for each role. Success.

That final approach worked. Suppliers got a simplified, status-driven upload flow with clear progress states. Admins got full data lineage, audit flags, and export-ready reports. Same product. Two experiences.

This became the architecture template for every NGX product that followed.

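In code, role-based layering might be expressed as a simple role-to-module map rendered over one shared layout. This is a hypothetical sketch; the role names and module identifiers below are illustrative, not NGX's actual schema:

```python
# Hypothetical sketch: one unified layout, role-scoped modules.
# Role and module names are illustrative, not NGX's real schema.

ROLE_MODULES = {
    "supplier": ["upload_flow", "upload_status", "error_recovery"],
    "admin": ["upload_status", "data_lineage", "audit_flags", "report_export"],
}

def visible_modules(role: str) -> list[str]:
    """Return the modules rendered for a given role in the shared layout."""
    return ROLE_MODULES.get(role, [])
```

One codebase, two experiences: both roles share `upload_status`, but only admins see lineage and audit modules.
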

Iteration 4: Designing for Failure, Not Just Flow

Even when the UX stabilized, data visualization kept breaking.

Our first charting engine collapsed under large fleet datasets. Stacked bars distorted totals. Pie charts failed accessibility color checks. And the data itself kept failing: uploads dropped, sensors misreported.

Sławek and I spent days tweaking logic and render limits.

We built Reality to surface, not suppress those failures.

  • Every dataset had a lineage log

  • Every anomaly was flagged, not hidden

  • Every upload had a status state with recovery options

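The fail-first status logic can be sketched as a small state machine in which anomalies are surfaced as an explicit "flagged" state with recovery transitions rather than silently discarded. State names here are assumptions for illustration:

```python
# Hypothetical sketch of "fail-first" upload states with recovery paths.
# State names are illustrative, not Reality's actual status model.

UPLOAD_TRANSITIONS = {
    "received":     ["validating"],
    "validating":   ["accepted", "flagged"],
    "flagged":      ["revalidating", "rejected"],  # anomaly surfaced, not hidden
    "revalidating": ["accepted", "flagged"],
    "accepted":     [],  # terminal
    "rejected":     [],  # terminal
}

def can_transition(current: str, nxt: str) -> bool:
    """True if the status model allows moving from `current` to `nxt`."""
    return nxt in UPLOAD_TRANSITIONS.get(current, [])
```

The point of the shape is that "flagged" is a first-class, recoverable state, not a dead end.
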
Eventually, we developed adaptive chart density:

  • Macro charts summarizing emissions by category

  • Micro charts nested in list views for detailed analysis

  • Consistent visual logic for energy sources: solar, diesel, electric

This turned frustration into understanding. Blame into transparency.

This was the version that finally felt alive: real-time, legible, and compliant.

Solution & Rationale

Reality became a layered system, both operationally and visually. It wasn't just a product. It was a climate operating system.

The platform consisted of four core operational modules, each designed to serve a specific stage in the emissions lifecycle: data capture, analysis, compliance, and audit.

1. Transport Operations: Real-Time Fleet Emissions

What it does:
Visualizes all transport-related emissions (Scope 1 & 3) using data from IoT sensors, trip logs, and telematics systems.


The challenge:
Fleet emissions are notoriously messy. Each trip involves multiple variables: distance, cargo weight, fuel type, route conditions, third-party logistics. And for enterprises with hundreds of vehicles across multiple customers, it's nearly impossible to see patterns without proper structure.

We needed to turn raw telematics data into actionable operational intelligence, something that worked for both logistics managers and sustainability leads.


Key capabilities I designed:

Trip Analytics:
Each journey tracked end-to-end: distance, weight, cargo type, and associated CO₂e. The interface shows trip-level granularity with expandable rows revealing route segments, delivery status, and emission allocation per leg.

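A trip-level CO₂e calculation along these lines might look like the following sketch. The emission factors and the per-tonne cargo uplift are made-up placeholders, not the factors Reality used:

```python
# Illustrative only: factor values and the weight-uplift model below are
# placeholders, not Reality's actual emission methodology.

EMISSION_FACTORS_KG_PER_KM = {  # hypothetical kg CO2e per vehicle-km
    "diesel": 0.85,
    "electric": 0.05,
}

def trip_co2e_kg(distance_km: float, fuel_type: str,
                 cargo_weight_t: float,
                 weight_uplift_per_t: float = 0.02) -> float:
    """Base factor per km, plus a simple per-tonne uplift for cargo weight."""
    base = EMISSION_FACTORS_KG_PER_KM[fuel_type] * distance_km
    return base * (1 + weight_uplift_per_t * cargo_weight_t)
```

Per-leg allocation then follows by running the same calculation on each route segment.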

Emission Breakdown:
Diesel vs Electric vs Third-Party trips shown per month, helping operators see transition progress toward cleaner fleets. The chart design uses consistent color logic (mint for electric, teal for diesel) that carries across the entire platform.


Operational Insights:
Charts for total kilometers, avoided emissions, and customer retention, blending environmental and business metrics in one view. This was critical: sustainability teams cared about carbon, but operations cared about efficiency and cost.


Route Timeline:
Designed a hybrid table + map-style route tracker showing each leg, delivery status, and emission allocation. The timeline gave spatial context to what would otherwise be abstract numbers.


Comparative Filters:
Toggle between customers, cargo types, or time frames to benchmark efficiency. Admins could see which clients generated the most emissions, which routes were least efficient, and where electrification would have the highest impact.

Design thinking:
We wanted the dashboard to feel like an operational cockpit rather than a sustainability report: visual, fast, and insight-driven.

The challenge was balancing density (thousands of trips per month) with clarity. Our solution: a two-tier hierarchy. Summary KPIs at the top (total emissions, distance traveled, cost per ton). Granular trip logs below, with expandable rows for route-level details.

The most contentious design decision: should we show customer names in the emissions breakdown? We did, because accountability matters. Enterprises needed to see which clients drove their footprint, especially for Scope 3 calculations.

2. GHG Dashboard: The Core Analytics Engine

What it does:
Aggregates all emissions (Scope 1, 2, 3) and turns them into business-readable performance metrics and compliance-ready reports.

The challenge:
The GHG Protocol is technical, dense, and unforgiving. Scope definitions change based on industry and operational structure. Most platforms either oversimplify (losing accuracy) or over-complicate (losing usability).

We needed to design an interface that respected the Protocol's rigor while making it navigable for non-technical sustainability managers.

Key capabilities I designed:

Scope Segregation:
Dedicated metric cards for Scope 1 (Fuel, Direct Emissions), Scope 2 (Electricity), and Scope 3 (Supply Chain, Refrigerants), each showing year-over-year changes with percentage deltas and trend indicators.


Emission Distribution:
Pie and bar charts showing top emission sources across the enterprise: fleet diesel, warehouse electricity, refrigerant leaks. The visual hierarchy prioritized the largest sources while keeping smaller contributors visible.


Fuel & Energy Trends:
Continuous monthly line charts connecting usage patterns to emission intensity. We overlaid multiple energy types (diesel, electricity, gas) with a 30-day moving average to smooth out operational noise and reveal actual trends.

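The smoothing described above is a trailing moving average. A minimal sketch, assuming a daily series and a 30-point window:

```python
# Minimal sketch of the smoothing described: a trailing moving average
# over a daily usage series. Window size is the 30 days from the text.

def moving_average(series: list[float], window: int = 30) -> list[float]:
    """Trailing mean; early points average over however many are available."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```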

Operational Efficiency Module:
Monthly emissions per scope with stacked bar charts showing the proportion of Scope 1 vs Scope 2 emissions. This helped admins understand whether operational changes (like fleet electrification) were actually moving the needle.


Compliance Progress Tracking:
Integrated tracking for SBTi (Science Based Targets initiative), CDP (Carbon Disclosure Project), TCFD (Task Force on Climate-related Financial Disclosures), and Net-Zero goals, each with percentage completion bars and status indicators.


Document Traceability:
Every data point linked to its original invoice, utility bill, or IoT feed in the Source Document Log. This was non-negotiable for audits. Users could click any number and see the proof.


Design thinking:
This dashboard had to communicate credibility. Regulators, auditors, and sustainability leads all use it. We designed every graph to have dual context: environmental (tCO₂e) and operational (cost, energy, utilization).

A key design challenge was avoiding chart fatigue. Early prototypes had 15+ visualizations on one screen. Users felt overwhelmed.

Our solution: smart insight banners. Instead of forcing users to interpret every chart, we added AI-generated insights like "Scope 2 reduced by 26% YoY" or "Diesel usage spiked 18% in Q3, investigate fleet efficiency." These acted as narrative guides through the data.

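A banner like "Scope 2 reduced by 26% YoY" can be derived mechanically from a year-over-year delta. This sketch assumes a simple string template; the actual banner generation described above was richer than this:

```python
# Hedged sketch: derive a banner string from a year-over-year delta.
# Wording and rounding are assumptions for illustration.

def yoy_insight(metric: str, previous: float, current: float) -> str:
    """Render a one-line insight banner from two annual values."""
    change = (current - previous) / previous * 100
    direction = "reduced" if change < 0 else "increased"
    return f"{metric} {direction} by {abs(change):.0f}% YoY"
```

For example, `yoy_insight("Scope 2", 100.0, 74.0)` yields the banner quoted above.
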
The hardest UX decision: how much complexity to expose. We tested three levels of depth:

  • Lite View: Summary cards only (admins hated it, too shallow)

  • Full View: Every chart, every breakdown (users got lost)

  • Progressive Disclosure: Summary first, drill-downs on demand (this worked)

We also introduced chart density adaptation: if a dataset had fewer than 50 records, we showed full granularity. Above 1,000 records, we aggregated by week or month to prevent rendering collapse.


3. Hub Operations: Facility-Level Emission Intelligence

What it does:
Tracks and analyzes emissions inside physical hubs: energy consumption, refrigerant leakage, and storage efficiency across cold and ambient chambers.

The challenge:
Facilities generate the noisiest data. Multiple meters, irregular readings, simultaneous processes across chambers with wildly different emissions profiles (cold storage vs ambient warehousing). And clients share space, so attribution gets messy fast.

We needed to make chamber-level emissions legible without losing operational nuance.

Key capabilities I designed:

Business Performance Summary:
Four primary KPIs at the top: Total Emission, Total Electricity Consumption, Total Electricity Cost, Total Refrigerant Leak. Each broken down by Cold vs Ambient with delta indicators.

Key Insight Banners:
Context-aware messages surfaced from the data. Example: "Customer growth (+40%) is outpacing cargo growth (+12%), suggesting opportunity to increase cargo per customer." These helped facility managers connect emissions data to business decisions.

Environmental Impact Section:

  • Emissions Timeline: Monthly trend chart with 30-day moving average to detect anomalies and smooth operational noise.

  • Total Emissions Distribution: Donut chart showing proportion between Cold Room (85.4%) and Ambient Room (14.6%) emissions. The visual made it immediately clear where to focus reduction efforts.

  • Ambient Emissions Breakdown: Pie chart splitting storing (80%) vs handling (20%) activities, critical for understanding operational vs static energy use.

  • Emission by Chamber: Horizontal bar chart showing each chamber's contribution. We used proportional width + exact values to maintain precision.


Refrigerant Leak Monitoring:
Dedicated donut chart for refrigerant emissions split by storage type (99% from cold rooms). This became a critical operational alert. Leaks are both costly and high-emission.


Electricity Consumption Analysis:
Stacked bar chart showing monthly usage by Total Storage, Ambient Storage, and Cold Storage with an annual summary sidebar. The mint/teal color system kept it consistent with the rest of the platform.


Top Clients by Emissions:
Horizontal bar chart ranking customers by their emission footprint, labeled by sector (Government, Industrial, Energy, Education). This helped facility managers allocate emissions accurately for Scope 3 reporting.


Hub Operations Table:
Granular client view showing Date, Client Name, Area/Type (Chamber 1-4, Cold/Ambient), Energy Consumption, Occupancy Volume, and Total Emission per entry. Expandable rows revealed detailed metrics like CBM In/Out for storage density analysis.

Design thinking:
We designed layered summaries: Total → Cold/Ambient → Chamber → Client. Each layer added context without overwhelming the previous one.

The hardest challenge was visual proportionality. 85% of emissions came from cold rooms. If we used pure percentage-based sizing, ambient data would disappear. Our solution: proportional + absolute. Charts showed proportional splits, but labels always included exact values. This preserved both the story and the nuance.

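The "proportional + absolute" rule amounts to a labeling convention: every segment label carries both its share and its exact value, so small segments stay legible. A minimal sketch, with the unit as an assumed default:

```python
# Sketch of the "proportional + absolute" labeling rule: charts show
# proportional splits, but labels always include the exact value.
# The default unit is an assumption for illustration.

def segment_label(name: str, value: float, total: float,
                  unit: str = "tCO2e") -> str:
    """Label a chart segment with both its share and its absolute value."""
    share = value / total * 100
    return f"{name}: {share:.1f}% ({value:,.0f} {unit})"
```
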
Another critical decision: making leaks visible. Early designs buried refrigerant data in a sub-menu. But leaks are often the highest-impact, lowest-cost fix. We promoted them to the primary dashboard and added threshold alerts.

4. Report History: Auditability Made Simple

What it does:
Keeps a complete archive of every report generated across modules (Transport, Hub, GHG) for audit, download, and traceability.


The challenge:
Compliance isn't a one-time event. It's continuous. Auditors show up months after data is submitted. Clients request historical reports. Regulators ask for proof. If you can't reproduce a report from six months ago, you've lost credibility.

Most platforms treat reporting as an afterthought. We made it a first-class feature.

Key capabilities I designed:

Unified Report Table:
Centralized view showing Report ID, Report Type (Transport/Hub/GHG), Customer, Date Generated, Date Range, and Download Action. Filterable by type and time range with a date picker for precision.


Expandable Customer Lists:
Reports covering multiple entities (e.g., "Show less (4)") could be expanded inline to reveal all covered customers without leaving the page.


One-Click Download:
Standardized PDF/CSV outputs formatted for regulators and clients. Each download includes metadata: generation timestamp, data sources used, and calculation methodology.


Version Control:
If new data gets uploaded that affects a past report period, the system flags the discrepancy and allows regeneration with updated values while preserving the original for audit trail integrity.

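The regeneration rule can be sketched as append-only versioning: the original report record is never overwritten, and a regenerated version points back at the one it supersedes. Field names below are hypothetical:

```python
# Illustrative sketch of the regeneration rule: originals are preserved,
# regenerated versions are appended. Field names are hypothetical.

def regenerate_report(history: list[dict], report_id: str,
                      new_data_hash: str) -> list[dict]:
    """Append a new version of a report; never mutate existing entries."""
    latest = max((r for r in history if r["id"] == report_id),
                 key=lambda r: r["version"])
    new_version = {**latest,
                   "version": latest["version"] + 1,
                   "data_hash": new_data_hash,
                   "supersedes": latest["version"]}
    return history + [new_version]  # originals stay intact for the audit trail
```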

Search & Filter:
Quick filters for "All Type" (Transport/Hub/GHG) and date ranges. Pagination for large report archives (hundreds of entries over years).


Design thinking:
The goal was zero ambiguity. Every report had to be traceable back to its source data, its generation logic, and its point-in-time accuracy.


Early designs hid older reports in an archive section. Bad idea. Auditors need easy access to history, so we made everything equally accessible with smart filters instead of artificial hierarchy.

The most important design decision: showing report metadata inline. Users could see at a glance which customers were included, which date range was covered, and which module generated it, without opening the file first.

Why It All Worked Together

Isometric maps grounded emissions in physical reality. Admins could see where their carbon was coming from, not just read about it in a table.

Role-based UX meant clarity for both data entry and data audit. Suppliers weren't overwhelmed. Admins weren't limited.

Consistent visual logic across charts, markers, and alerts. Once you learned the language in one module, you could read any part of the system. Mint always meant clean energy. Teal always meant fossil. Red always meant alert.

Fail-first design meant issues were visible, not buried. Transparency built trust. When something broke, users knew immediately and knew how to fix it.

Modular architecture allowed the system to grow. As NGX added new data sources and compliance frameworks, Reality absorbed them without breaking. Transport, Hub, and GHG modules operated independently but shared the same design DNA.

Reality became NGX's operating layer, the proof that data could be both auditable and human.

Design Highlights: Cross-Cutting Decisions

Beyond individual features, several design principles unified the entire platform:

Visual Language

Isometric 3D with restraint

After 10+ failed renders, we landed on flat isometric geometry with soft glows for active nodes. This gave spatial context without performance cost or visual noise.

Consistent color semantics:

  • Mint (#00D9A3): Clean energy, electric vehicles, positive trends

  • Teal (#008B8B): Fossil fuels, diesel, baseline operations

  • Red/Orange: Alerts, anomalies, threshold breaches

  • Gray: Inactive, historical, or archived data

This color system carried across every module, every chart, every status indicator.

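Expressed as design tokens, the palette might look like the following. Only the mint and teal hex values come from the text; the alert and inactive hexes are placeholders:

```python
# The semantic palette as design tokens. Mint and teal hexes are from the
# text; the alert and inactive hexes are placeholder assumptions.

COLOR_TOKENS = {
    "clean_energy": "#00D9A3",  # mint: electric vehicles, positive trends
    "fossil": "#008B8B",        # teal: diesel, baseline operations
    "alert": "#E0532F",         # placeholder hex for alerts/anomalies
    "inactive": "#9AA0A6",      # placeholder hex for archived data
}
```

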
Adaptive chart density

 Charts automatically adjusted granularity based on dataset size:

  • <50 records: Full daily granularity

  • 50-500 records: Weekly aggregation

  • 500+ records: Monthly aggregation with moving averages

This prevented render collapse while maintaining analytical value.

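The thresholds above translate directly into a granularity selector. Boundary handling at exactly 50 and 500 records is an assumption, since the text gives ranges rather than strict bounds:

```python
# Sketch of the density thresholds described above; exact boundary
# behavior at 50 and 500 records is an assumption.

def chart_granularity(n_records: int) -> str:
    """Pick a chart aggregation level from the dataset size."""
    if n_records < 50:
        return "daily"
    if n_records <= 500:
        return "weekly"
    return "monthly"  # paired with moving averages at this density
```

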
Interaction Patterns

Progressive disclosure:
Summary metrics → category breakdowns → item-level details. Users could drill down without getting lost, and surface without losing context.

Expandable data rows:
Tables showed essentials by default, with expandable rows for deeper inspection. This kept the interface scannable while preserving access to granular data.

Inline editing with validation:
Where users could input data (calibration, tags, thresholds), validation happened in real-time with visual feedback. Green checkmarks for valid entries, inline error messages for failures.

Smart defaults:
Every filter, every time range, every view started with the most statistically relevant default. "Last 30 Days" for timelines. "All Scopes" for emissions. "Highest Impact" for sorting.

Information Architecture

4-step enterprise workflow:
Connect → Calibrate → Baseline → Report. This became the mental model for every user journey. Each step had a dedicated interface, but the flow was always visible in the navigation.

Dual-role architecture:
One codebase, two experiences. Suppliers saw simplified upload flows. Admins saw full audit trails. Same data, different lenses.

System logging visible to users:
Unlike most platforms that hide logs, we exposed them. Every calculation, every data transformation, every automated decision was traceable. This wasn't just transparency. It was trust infrastructure.

Metrics & Impact

First IoT-driven emissions platform in MENA. No other product in the region was doing real-time, sensor-connected carbon accounting at this scale.

70% reduction in supplier onboarding time. What used to take weeks of back-and-forth now happened in days, with higher data quality.

Zero critical data mismatches in third-party audits. The compliance layer worked. Auditors trusted the output because they could trace every number back to its source.

Visual and architectural foundation for NGX's ecosystem. Reality's design language, role architecture, and data visualization patterns became the template for Trace, Vault, and Energy Bank.

Recognized in regional sustainability reports as a benchmark for digital ESG tooling. Industry analysts cited Reality as an example of how climate tech should handle transparency.

Reflections

Reality didn't start with users. It started with data. And maybe that's why it worked.

Designing Reality was an act of translation: from regulation to interface, from sensor to insight, from chaos to clarity.

But what made it work wasn't the UI. It was the mindset: design isn't just how something looks. It's how well it explains itself.

Instead of designing from assumption, we designed from observation, understanding the system's truth through the people who built it. We didn't guess what users wanted. We studied how the system failed. We debugged trust. We didn't just serve users. We served the systems they depend on.

The 3D failures, the broken dashboards, the conflicting roles, all of it was part of finding balance between accuracy and empathy.

What I Learned

The best way to design for compliance is to design for transparency. When users can see the full data trail (errors included), they trust the output. Hiding failures destroys confidence. Surfacing them builds it.

The best way to scale climate software is to build visual logic that mirrors operational truth. Reality's interface didn't abstract away complexity. It made complexity legible. That's why it scaled across dozens of enterprise deployments without breaking.

Sometimes, the fastest way to design for people is to design with the ones who understand their pain indirectly, the data guys, the engineers, the systems thinkers. Sebastian and Sławek taught me more about user needs through system behavior than a hundred interviews could have.

And sometimes, the most user-centered thing you can do is sit with the backend team and ask: "Where does this usually go wrong?" Because in complex environments like this, clarity isn't given. It's earned, line by line, render by render, iteration by iteration.

Because trust (especially in climate) isn't a feature. It's a consequence.

Reality earned that trust, line by line, flag by flag, decision by decision.

In climate tech, the real product isn't the platform. It's the proof.