Oracle Submission Data Analysis Report (December 2024 - March 2025)

1 Executive Summary

This report documents trends in Autonity Oracle submissions from December 2024 through March 2025. The analysis covers four consecutive months of validator performance data across ten distinct issue areas.

1.1 Overview of Issues Analyzed

The ten issue areas are:

  1. Missing or Null Submissions: Examining validators that failed to submit price data
  2. Irregular Submission Frequency: Analyzing abnormal timing patterns in submissions
  3. Out-of-Range Values: Detecting suspicious price values compared to benchmarks
  4. Stale/Lagging Data: Identifying validators that fail to update prices when markets move
  5. Confidence Value Anomalies: Examining issues with confidence metrics
  6. Cross-Rate Inconsistency: Assessing mathematical consistency across token prices
  7. Timing/Synchronization Issues: Analyzing timestamp disparities between validators
  8. Weekend Effects: Investigating behavior during market closures
  9. Vendor Downtime: Detecting submission stoppages
  10. Security/Malicious Behavior: Looking for potential manipulation patterns

The analysis presents quantitative metrics for each issue area in a month-over-month format.

2 ACU Index Comparison – Oracle vs Yahoo Finance

The Autonity Oracle’s ACU quote should track a benchmark computed from public FX prices. To test this, the ACU series is reconstructed from two sources:

  • Oracle on-chain submissions
  • Yahoo Finance minute bars
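
The original computation chunk is not reproduced here, so the following is a minimal sketch of the reconstruction under stated assumptions: `oracle_fx` and `yahoo_fx` are minute-level DataFrames indexed by UTC timestamp with one column per basket pair, and `ACU_WEIGHTS` is a placeholder weight map (the real basket constituents and weights come from the Autonity protocol configuration, not from this report).

```python
# Minimal sketch: rebuild an ACU-style basket index from FX quotes and
# compare the Oracle-derived series against a Yahoo Finance benchmark.
import pandas as pd

ACU_WEIGHTS = {  # hypothetical pairs and weights; substitute the on-chain basket
    "AUD-USD": 0.1, "CAD-USD": 0.1, "EUR-USD": 0.3, "GBP-USD": 0.1,
    "JPY-USD": 0.1, "SEK-USD": 0.1, "USD": 0.2,
}

def acu_index(fx: pd.DataFrame) -> pd.Series:
    """Weighted sum of FX quotes, one column per basket pair (USD pinned to 1)."""
    fx = fx.assign(USD=1.0)  # USD leg is the numeraire
    return sum(w * fx[pair] for pair, w in ACU_WEIGHTS.items())

def align_weekdays(oracle_fx: pd.DataFrame, yahoo_fx: pd.DataFrame) -> pd.DataFrame:
    """Minute-aligned weekday frame with columns 'oracle' and 'yahoo'."""
    both = pd.concat(
        {"oracle": acu_index(oracle_fx), "yahoo": acu_index(yahoo_fx)}, axis=1
    ).dropna()
    return both[both.index.dayofweek < 5]  # FX markets are closed at weekends

def level_correlation(weekdays: pd.DataFrame) -> float:
    return weekdays["oracle"].corr(weekdays["yahoo"])  # Pearson by default
```

With the two series aligned minute by minute, `level_correlation` yields the level figure reported below.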

Figure 1: ACU derived from Oracle submissions (blue) versus ACU computed from Yahoo Finance FX quotes (grey).

| Metric | Value |
| --- | --- |
| Pearson correlation between ACU_Oracle and ACU_Yahoo (weekdays) | 0.992107 |
| Pearson correlation of simple returns (weekdays) | 0.133703 |
| Pearson correlation of log returns (weekdays) | 0.134633 |
| Std dev of (Oracle - Yahoo) ACU differences | 0.015484 |
| Std dev of Yahoo ACU | 0.120714 |
| Difference as % of Yahoo ACU’s volatility | 12.83 % |
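
As a rough illustration of how the remaining statistics could be derived, assuming `weekdays` is the aligned weekday frame returned by `align_weekdays()` in the sketch above:

```python
# Sketch of the summary statistics, on the aligned 'oracle'/'yahoo' frame.
import numpy as np
import pandas as pd

def summary_stats(weekdays: pd.DataFrame) -> dict[str, float]:
    simple = weekdays.pct_change().dropna()     # simple (arithmetic) returns
    logret = np.log(weekdays).diff().dropna()   # log returns
    diff_std = (weekdays["oracle"] - weekdays["yahoo"]).std()
    yahoo_std = weekdays["yahoo"].std()
    return {
        "corr_simple_returns": simple["oracle"].corr(simple["yahoo"]),
        "corr_log_returns": logret["oracle"].corr(logret["yahoo"]),
        "std_diff": diff_std,
        "std_yahoo": yahoo_std,
        "diff_pct_of_yahoo_vol": 100 * diff_std / yahoo_std,
    }
```

The final entry reproduces the ratio above: 0.015484 / 0.120714 ≈ 12.83 %, i.e. the Oracle–Yahoo gap fluctuates at roughly an eighth of the benchmark’s own volatility.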

4 Notable Validators

4.1 Consistently Anomalous Validators

The validators listed below appear in the “Most Problematic” (or equivalent) list of two or more monthly notebooks.

  1. 0x100E38f7BCEc53937BDd79ADE46F34362470577B – 100 % missing-submission rate in every month (December–March).
  2. 0x3fe573552E14a0FC11Da25E43Fef11e16a785068 – flagged for 100 % missing submissions in December, January and February.
  3. 0x01F788E4371a70D579C178Ea7F48f9DF4d20eAF3 – very long stale-data runs (6 000 in January, 92 160 in February) and fixed confidence values; member of a small coordination cluster in every month after December.
  4. 0x6747c02DE7eb2099265e55715Ba2ddE7D0A131dE (January–February) / 0x6747c02DE7eb2099265e55715Ba2E03e8563D051 (March) – bursty submission cadence and high share of suspicious values; appears in the coordinated-group lists from January onward.
  5. 0xf34CD6c09a59d7D3d1a6C3dC231a46CED0b51D4C – most frequent outage events (14 in January, 17 in February, 23 in March) and the largest cross-rate deviation recorded in February (≈ 42 %); a minimal cross-rate check is sketched after this list.
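
For context, a cross-rate deviation of this kind can be measured by comparing a quoted pair against the rate implied by its two USD legs. The sketch below is illustrative only; the function name and numbers are hypothetical and chosen to show a deviation of the ~42 % magnitude noted above.

```python
# Illustrative cross-rate consistency check: the quoted A/B rate should
# match the rate implied by the A/USD and B/USD legs of the same round.
def cross_rate_deviation(a_usd: float, b_usd: float, a_b: float) -> float:
    """Percentage gap between the quoted A/B rate and the implied rate;
    internally consistent submissions should sit near 0 %."""
    implied = a_usd / b_usd
    return 100 * abs(a_b - implied) / implied

# Hypothetical quotes: implied rate is 0.50, quoted rate is 0.71.
print(cross_rate_deviation(a_usd=0.25, b_usd=0.50, a_b=0.71))  # -> 42.0
```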

4.2 Validators with Largest Month-to-Month Changes

  1. 0x26E2724dBD14Fbd52be430B97043AA4c83F05852 – active until 12 January, then 100 % missing for the remainder of the study period.
  2. 0xc5B9d978715F081E226cb28bADB7Ba4cde5f9775 – active in December / January, then 100 % missing from February onward.
  3. 0x8dA2d75276AcB21Dc45C067AFb7A844ee7a6c2A2 – participated in the main coordination cluster; moved from normal operation in December to partial activity in January and February, then absent in March.

4.3 Validators with Consistently Strong Metrics

  1. 0x197B2c44b887c4aC01243BDE7E4b7E7b98A8d35A – listed as a top performer in every month (≥ 99 % completeness, ≤ 0.3 % suspicious values).
  2. 0xdF239e0D5b4E6e820B0cFEF6972A7c1aB7c6a4be – top-tier completeness (≈ 99 %) and negligible suspicious values from December through March.

5 Monthly Rating Comparison

The standardized rating system shows the following changes:

| Issue Area | December 2024 | January 2025 | February 2025 | March 2025 | Trend |
| --- | --- | --- | --- | --- | --- |
| Missing/Null Submissions | 🟡 | 🟠 | 🟠 | 🔴 | ⬇️ |
| Irregular Submission Frequency | 🟢 | 🟡 | 🟡 | 🟡 | ↔️ |
| Out-of-Range Values | 🟢 | 🟢 | 🟢 | 🟢 | ↔️ |
| Stale/Lagging Data | 🟠 | 🔴 | 🔴 | 🟢 | ⬆️ |
| Confidence Value Anomalies | 🔴 | 🔴 | 🔴 | 🟢 | ⬆️ |
| Cross-Rate Inconsistency | 🟢 | 🟢 | 🟢 | 🟢 | ↔️ |
| Timing/Synchronization | 🟢 | 🟢 | 🟠 | 🟡 | ⬇️ |
| Weekend Effect Severity | 🟢 | 🟢 | 🟠 | 🔴 | ⬇️ |
| Vendor Downtime Impact | 🟢 | 🟡 | 🟠 | 🟢 | ⬆️ |
| Security Concern Level | 🟡 | 🟠 | 🔴 | 🟡 | ↔️ |
| Overall Rating | 🟢 | 🟡 | 🔴 | 🟡 | ⬆️ |

Rating Scale:

  • ⚫ Critical – Severe issues requiring immediate intervention
  • 🔴 Poor – Significant issues affecting reliability
  • 🟠 Fair – Notable issues requiring attention
  • 🟡 Good – Minor issues with limited impact
  • 🟢 Excellent – Minimal or no issues

Each issue area is rated against specific quantitative thresholds (a minimal rating-lookup sketch follows the list):

  • Missing/Null Submissions: ⚫ Critical (> 60 %) 🔴 Poor (30–60 %) 🟠 Fair (15–30 %) 🟡 Good (5–15 %) 🟢 Excellent (< 5 %)
  • Irregular Submission Frequency: ⚫ Critical (> 25 % irregular) 🔴 Poor (15–25 %) 🟠 Fair (8–15 %) 🟡 Good (2–8 %) 🟢 Excellent (< 2 %)
  • Out-of-Range Values: ⚫ Critical (> 8 %) 🔴 Poor (3–8 %) 🟠 Fair (1–3 %) 🟡 Good (0.3–1 %) 🟢 Excellent (< 0.3 %)
  • Stale/Lagging Data: ⚫ Critical (> 15 % runs) 🔴 Poor (7–15 %) 🟠 Fair (3–7 %) 🟡 Good (0.5–3 %) 🟢 Excellent (< 0.5 %)
  • Confidence-Value Anomalies: ⚫ Critical (> 85 % fixed) 🔴 Poor (60–85 %) 🟠 Fair (35–60 %) 🟡 Good (15–35 %) 🟢 Excellent (< 15 %)
  • Cross-Rate Inconsistency: ⚫ Critical (> 12 %) 🔴 Poor (6–12 %) 🟠 Fair (3–6 %) 🟡 Good (1–3 %) 🟢 Excellent (< 1 %)
  • Timing/Synchronization: ⚫ Critical (> 60 s) 🔴 Poor (30–60 s) 🟠 Fair (10–30 s) 🟡 Good (3–10 s) 🟢 Excellent (< 3 s)
  • Weekend Effect Severity: ⚫ Critical (> 30 %) 🔴 Poor (15–30 %) 🟠 Fair (7–15 %) 🟡 Good (2–7 %) 🟢 Excellent (< 2 %)
  • Vendor Downtime Impact: ⚫ Critical (> 10 % time) 🔴 Poor (4–10 %) 🟠 Fair (2–4 %) 🟡 Good (0.5–2 %) 🟢 Excellent (< 0.5 %)
  • Security Concern Level: ⚫ Critical (confirmed) 🔴 Poor (strong evidence) 🟠 Fair (some evidence) 🟡 Good (minimal) 🟢 Excellent (none)
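
As an illustration of how these bands could be applied in code, here is a minimal lookup for the Missing/Null Submissions scale. The published bands leave boundary values (exactly 5 %, 15 %, …) ambiguous; this sketch assigns them to the worse band, which is a convention choice rather than part of the scale.

```python
# Illustrative threshold lookup for the Missing/Null Submissions scale.
MISSING_THRESHOLDS = [  # (exclusive upper bound in %, rating)
    (5, "🟢 Excellent"),
    (15, "🟡 Good"),
    (30, "🟠 Fair"),
    (60, "🔴 Poor"),
    (float("inf"), "⚫ Critical"),
]

def rate(value_pct: float, thresholds=MISSING_THRESHOLDS) -> str:
    """Return the rating of the first band whose upper bound exceeds the value."""
    for upper, rating in thresholds:
        if value_pct < upper:
            return rating
    return thresholds[-1][1]  # unreachable given the inf sentinel

print(rate(3.2))   # 🟢 Excellent
print(rate(47.0))  # 🔴 Poor
```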

6 Detailed Progression Analysis

6.1 Percentage of Issue-Area Ratings by Severity

The table below shows, for each month, what percentage of the ten issue areas falls into each rating colour (one area = 10 %). All figures are taken directly from the individual monthly rating tables.

| Severity Level | December 2024 | January 2025 | February 2025 | March 2025 |
| --- | --- | --- | --- | --- |
| Critical (⚫) | 0 % | 0 % | 0 % | 0 % |
| Poor (🔴) | 10 % | 20 % | 40 % | 20 % |
| Fair (🟠) | 10 % | 20 % | 30 % | 0 % |
| Good (🟡) | 20 % | 20 % | 10 % | 30 % |
| Excellent (🟢) | 60 % | 40 % | 20 % | 50 % |

Method: for each month, count the 🟢, 🟡, 🟠, 🔴 and ⚫ symbols across the ten issue-area rows, then divide by ten to obtain the percentages shown; the sketch below illustrates the tally.
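
A minimal version of this tally, using the December 2024 column from the rating table above:

```python
# Tally the ten issue-area ratings for one month and convert to percentages.
from collections import Counter

RATINGS = ["⚫", "🔴", "🟠", "🟡", "🟢"]

# December 2024 column of the ten issue-area rows (Overall Rating excluded).
december = ["🟡", "🟢", "🟢", "🟠", "🔴", "🟢", "🟢", "🟢", "🟢", "🟡"]

def severity_percentages(column: list[str]) -> dict[str, float]:
    counts = Counter(column)
    return {r: 100 * counts[r] / len(column) for r in RATINGS}

print(severity_percentages(december))
# {'⚫': 0.0, '🔴': 10.0, '🟠': 10.0, '🟡': 20.0, '🟢': 60.0}
```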