FIELD REPORT 2026 · §3

Methodology — paired-measurement protocol

Edge vs Cloud IoT for one greenhouse, under identical sensor conditions

STATUS · IN DEVELOPMENT   FILED · 2026-05-06
TAGS · METHODOLOGY · EDGE-VS-CLOUD · PAIRED-MEASUREMENT · MASTER-THESIS
ABSTRACT
ON FILE DRAFT · 2026-05-06

Working draft of the protocol used to compare edge-local (ESP32 + Raspberry Pi) and cloud-centric (Aurora + Lambda) processing of the same sensor stream. Final prose lands by 2026-08; this revision is a structural skeleton with binding section headings.

§3.1 · Research question

Does edge-local processing of a single-greenhouse telemetry stream produce materially different operational outcomes than cloud-centric processing when both consume the same sensor stream under identical operational conditions? “Materially different” is operationalised under §3.4.

DRAFT — final framing pending defence-committee review (2026-Q3).

§3.2 · Hypothesis

H₀ — Under steady-state operation, edge and cloud paths produce statistically indistinguishable results across the four measured variables (latency, energy, monetary cost, data integrity).

H₁ — Under failure conditions (network partition, MQTT broker outage, hub power loss), the two paths diverge in operationally significant ways that can be characterised quantitatively.

DRAFT — numeric thresholds for “significant” pending §3.4 calibration.

§3.3 · Experimental setup

Identical sensor stream, two parallel processing paths, no operator intervention during a measurement window.

Both paths see the same publish events at the broker, so any divergence is attributable to processing, not sampling.

DRAFT — wiring diagrams + part numbers in §1; cross-reference once §1 lands.
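The single-stream guarantee above can be sketched without a real broker: a fan-out that hands every publish event to both path handlers, so each path sees identical inputs. This is a toy stand-in for the MQTT broker's topic fan-out; the class and handler names are illustrative.

```python
# Toy stand-in for MQTT topic fan-out: every publish event is delivered
# to BOTH processing paths, so divergence is attributable to processing,
# not sampling. Names are illustrative, not the real pipeline.
from typing import Callable


class FanOut:
    def __init__(self) -> None:
        self.subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, event: dict) -> None:
        for handler in self.subscribers:
            handler(dict(event))  # each path gets its own copy


edge_log: list[dict] = []
cloud_log: list[dict] = []
bus = FanOut()
bus.subscribe(edge_log.append)   # stand-in for the Pi-local pipeline
bus.subscribe(cloud_log.append)  # stand-in for the Lambda ingest path
bus.publish({"node": "gh-01", "temp_c": 21.4})
```

Each subscriber receives its own copy of the event, mirroring the fact that the edge and cloud paths cannot mutate each other's view of the stream.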

§3.4 · Variables

| Variable | Edge measurement | Cloud measurement | Significance threshold |
|---|---|---|---|
| Latency (publish → derived metric available) | ms via local timestamp | ms via Aurora recorded_at − publish ts | TBD |
| Energy (per 1k samples) | Pi wall-meter Wh | Lambda invocation count × billed-MB-ms × upstream MQTT bytes | TBD |
| Monetary cost (per month, normalised to 1 node) | amortised hardware + electricity | AWS bill (Aurora ACU-hours + Lambda + Data API) | TBD |
| Data integrity | gap-count, gap-duration, samples-lost | same metrics computed over Aurora | TBD |

DRAFT — threshold values pending §3.5 confounder analysis.
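Both latency rows in the table reduce to the same arithmetic: the timestamp at which the derived metric became available minus the publish timestamp, in milliseconds. A minimal sketch (function name is illustrative; the real pipeline's timestamp sources are the local clock on the edge path and Aurora's recorded_at on the cloud path):

```python
# Sketch of the latency metric from the table: for either path,
# latency = (derived-metric-available timestamp) - (publish timestamp),
# expressed in milliseconds. Timestamps must share a timezone-aware base.
from datetime import datetime, timezone


def latency_ms(publish_ts: datetime, available_ts: datetime) -> float:
    """Milliseconds between publish and derived-metric availability."""
    return (available_ts - publish_ts).total_seconds() * 1000.0
```

Using timezone-aware datetimes on both paths avoids the classic paired-measurement trap of comparing a local-clock timestamp against a UTC database timestamp.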

§3.5 · Confounding factors

Captured as a checklist for the audit committee. Each row will be addressed by a measurement protocol decision and any residual risk recorded in the published dataset.

DRAFT — protocol decisions per row pending.

§3.6 · Data integrity protocol (ADR-010)

When a node has been silent past its threshold, the dashboard surfaces STALE: N hours next to the reading, and the daily archive for that day records the gap explicitly rather than interpolating over it. This is a methodology decision, not a UX one — it is why the public dataset and the methodology dataset are the same dataset, with the same gaps, treated the same way.

The 2026-04-23 four-day gap (docs/thesis-incidents-log.md INC-001) is the cautionary tale: detected by manual row-count audit, not by any alarm. The remediation (LIVE-001..LIVE-004 watchers in Phase 5.9) is itself part of the methodology — if the data integrity protocol must include alarms, then the alarms are part of the apparatus and are documented as such.
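The manual row-count audit that caught INC-001 amounts to computing the §3.4 integrity metrics from stored timestamps. A minimal sketch, assuming a fixed sampling cadence (the 5-minute default is illustrative, not the documented node configuration):

```python
# Sketch of the §3.4 integrity metrics (gap-count, gap-duration,
# samples-lost) computed from a sorted timestamp list against an
# expected cadence. The 5-minute cadence is an ASSUMPTION for
# illustration, not the documented node setting.
from datetime import datetime, timedelta, timezone


def gap_metrics(timestamps: list[datetime],
                cadence: timedelta = timedelta(minutes=5)
                ) -> tuple[int, timedelta, int]:
    """Return (gap_count, total_gap_duration, samples_lost)."""
    gaps, lost = 0, 0
    total_gap = timedelta(0)
    for prev, cur in zip(timestamps, timestamps[1:]):
        delta = cur - prev
        if delta > cadence:
            gaps += 1
            total_gap += delta - cadence      # time beyond the expected spacing
            lost += int(delta / cadence) - 1  # samples that should have arrived
    return gaps, total_gap, lost
```

Running the same function over the edge archive and over Aurora gives the "same metrics computed over Aurora" row of the §3.4 table, which is what makes the two integrity measurements directly comparable.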

§3.7 · Limitations


Status: structural draft v0.1, 2026-05-06. Citation: https://plantir.garden/thesis/2026/methodology is locked per ADR-011 — citable in this state, although the prose will be replaced before defence (target 2026-08). Related ADRs: docs/adr/010-public-sensor-data-policy.md, docs/adr/011-thesis-url-schema.md.

RELATED DECISIONS
  • ADR-010 — see docs/adr/ in source repo
  • ADR-011 — see docs/adr/ in source repo