Issue 4

4. Stale / Lagging Data

This notebook documents the analysis for Issue #4: Stale / Lagging Data in the Autonity Oracle data. It covers:

  • What is this issue about?
  • Why conduct this issue analysis?
  • How to conduct this issue analysis?
  • What are the results?

4.1 What Is This Issue About?

In the Oracle system, validators submit price data that must reflect real-world market movements. However, two failure modes may occur:

  • Stale data: Validator submits identical prices repeatedly for prolonged periods.
  • Lagging data: Validator’s reported price remains nearly unchanged despite significant market changes.

These patterns indicate problems such as disconnected price feeds or outdated caches.
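
For intuition, the following is a small, purely hypothetical numeric sketch of the two detection rules applied later in this notebook (identical submissions for ≥30 consecutive intervals for stale data; a <5% validator move against a ≥5% benchmark move within 60 minutes for lagging data):

# Hypothetical example values, for illustration only.
submitted = [0.61885] * 35                       # same price for 35 consecutive intervals
is_stale = len(set(submitted)) == 1 and len(submitted) >= 30     # True

bench_pct_change = (1.095 - 1.035) / 1.035       # benchmark moved ~5.8% in 60 minutes
validator_pct_change = (1.036 - 1.035) / 1.035   # validator moved ~0.1%
is_lagging = abs(bench_pct_change) > 0.05 and abs(validator_pct_change) < 0.05   # True

print(is_stale, is_lagging)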


4.2 Why Conduct This Issue Analysis?

  • Accuracy: Ensure that submitted data is fresh and reliable.
  • Troubleshooting: Detect potential API disconnections, stuck feeds, or caching issues.
  • Confidence: Establish confidence in the Oracle data, which is critical for Mainnet readiness.

4.3 How to Conduct This Issue Analysis?

Use Python with the Polars library (v1.24.0) to:

  • Load Oracle submission data and Yahoo Finance benchmarks.
  • Detect stale data (identical submissions repeated for ≥30 consecutive intervals).
  • Detect lagging data (the market moves significantly while the validator’s submission barely changes within a 60-minute window).

Below is the Python script to perform the analysis:

import polars as pl
import glob
from typing import List, Dict
import warnings

warnings.filterwarnings("ignore")


def load_and_preprocess_submissions(submission_glob: str) -> pl.DataFrame:
    """
    Loads Oracle Submission CSVs and returns a Polars DataFrame.
    """
    files = sorted(glob.glob(submission_glob))
    if not files:
        raise ValueError(f"No CSV files found matching pattern {submission_glob}")
    lf_list = []
    for f in files:
        lf_temp = pl.scan_csv(
            f,
            dtypes={"Timestamp": pl.Utf8},
            null_values=[""],
            ignore_errors=True,
        )
        lf_list.append(lf_temp)

    lf = pl.concat(lf_list)
    lf = lf.with_columns(
        pl.col("Timestamp").str.strptime(pl.Datetime, strict=False).alias("Timestamp_dt")
    )
    lf = lf.with_columns(
        [
            pl.col("Timestamp_dt").cast(pl.Date).alias("date_only"),
            pl.col("Timestamp_dt").dt.weekday().alias("weekday_num"),
        ]
    )
    return lf.collect()


def load_yahoo_finance_data(directory_pattern: str, pair_label: str) -> pl.DataFrame:
    """
    Loads Yahoo Finance CSVs and returns a Polars DataFrame.
    """
    files = sorted(glob.glob(directory_pattern))
    if not files:
        raise ValueError(f"No Yahoo Finance CSV files found: {directory_pattern}")

    lf_list = []
    for f in files:
        lf_temp = pl.scan_csv(
            f,
            has_header=False,
            skip_rows=3,
            new_columns=["Datetime", "Close", "High", "Low", "Open", "Volume"],
            try_parse_dates=True,
        )
        lf_list.append(lf_temp)

    lf = pl.concat(lf_list).sort("Datetime").select(
        [
            pl.col("Datetime").alias("timestamp_benchmark"),
            pl.col("Close").alias("benchmark_close"),
        ]
    )
    df = lf.collect().with_columns(pl.lit(pair_label).alias("symbol"))
    return df


def load_all_fx_benchmarks() -> Dict[str, pl.DataFrame]:
    """
    Loads FX data from Yahoo Finance.
    """
    mapping = {
        "AUD-USD": "../yahoo-finance/data/AUDUSD/AUDUSD=X_1m_*.csv",
        "CAD-USD": "../yahoo-finance/data/CADUSD/CADUSD=X_1m_*.csv",
        "EUR-USD": "../yahoo-finance/data/EURUSD/EURUSD=X_1m_*.csv",
        "GBP-USD": "../yahoo-finance/data/GBPUSD/GBPUSD=X_1m_*.csv",
        "JPY-USD": "../yahoo-finance/data/JPYUSD/JPYUSD=X_1m_*.csv",
        "SEK-USD": "../yahoo-finance/data/SEKUSD/SEKUSD=X_1m_*.csv",
    }
    result = {}
    for pair_label, pattern in mapping.items():
        df_pair = load_yahoo_finance_data(pattern, pair_label)
        result[pair_label] = df_pair
    return result


def detect_stale_data(
    df: pl.DataFrame,
    price_cols: List[str],
    max_consecutive_threshold: int = 30,
    stale_tolerance: float = 1e-9  # Tolerance for float comparison
) -> pl.DataFrame:
    """
    Identifies potential stale data when the same price is repeated for
    at least max_consecutive_threshold intervals, allowing small float tolerance.
    Skips any rows with None in the relevant price columns to avoid TypeError.
    """
    suspicious_frames = []
    df_local = df.clone()
    
    new_cols = []
    for pc in price_cols:
        dec_col = pc.replace(" Price", " Price Decimal")
        new_cols.append((pl.col(pc).cast(pl.Float64) / 1e18).alias(dec_col))
    df_local = df_local.with_columns(new_cols)
    
    for pc in price_cols:
        dec_col = pc.replace(" Price", " Price Decimal")
        if dec_col in df_local.columns:
            df_local = df_local.filter(pl.col(dec_col).is_not_null())

    for pc in price_cols:
        dec_col = pc.replace(" Price", " Price Decimal")
        if dec_col not in df_local.columns:
            continue

        df_sub = (
            df_local.select(["Validator Address", "Timestamp_dt", dec_col])
            .filter(pl.col("Validator Address").is_not_null())
            .sort(["Validator Address", "Timestamp_dt"])
        )

        df_list = df_sub.to_dicts()
        suspicious_records = []
        
        if not df_list:
            continue

        current_run_price = None
        current_run_start_idx = 0
        current_run_len = 0
        current_validator = None

        def finalize_run(run_val, start_i, end_i, run_len):
            start_ts = df_list[start_i]["Timestamp_dt"]
            end_ts = df_list[end_i]["Timestamp_dt"]
            vaddr = df_list[start_i]["Validator Address"]
            return {
                "Validator Address": vaddr,
                "price_col": pc,
                "repeated_value": run_val,
                "start_timestamp": start_ts,
                "end_timestamp": end_ts,
                "run_length": run_len,
            }

        for i, row in enumerate(df_list):
            vaddr = row["Validator Address"]
            price_val = row[dec_col]
            
            if (current_validator is not None) and (vaddr != current_validator):
                if current_run_len >= max_consecutive_threshold:
                    rec = finalize_run(current_run_price, current_run_start_idx, i - 1, current_run_len)
                    suspicious_records.append(rec)
                current_run_price = None
                current_run_start_idx = i
                current_run_len = 0
                current_validator = vaddr

            if (
                current_run_price is not None
                and vaddr == current_validator
                and abs(price_val - current_run_price) < stale_tolerance
            ):
                current_run_len += 1
            else:
                if current_run_len >= max_consecutive_threshold:
                    rec = finalize_run(current_run_price, current_run_start_idx, i - 1, current_run_len)
                    suspicious_records.append(rec)
                
                current_run_price = price_val
                current_run_start_idx = i
                current_run_len = 1
                current_validator = vaddr

        if current_run_len >= max_consecutive_threshold:
            rec = finalize_run(current_run_price, current_run_start_idx, len(df_list) - 1, current_run_len)
            suspicious_records.append(rec)

        if suspicious_records:
            df_sus = pl.DataFrame(suspicious_records)
            suspicious_frames.append(df_sus)

    if suspicious_frames:
        return pl.concat(suspicious_frames, how="vertical")
    else:
        return pl.DataFrame(
            {
                "Validator Address": [],
                "price_col": [],
                "repeated_value": [],
                "start_timestamp": [],
                "end_timestamp": [],
                "run_length": [],
            }
        )


def detect_lagging_data(
    df_oracle: pl.DataFrame,
    fx_benchmarks: Dict[str, pl.DataFrame],
    fx_pairs: List[str],
    lag_threshold: float = 0.05,
    time_window_minutes: int = 60
) -> pl.DataFrame:
    """
    Compare each validator's reported FX price vs. Yahoo's benchmark.
    Now uses a forward as-of join to find the price 'at or after' (T + time_window_minutes).
    """
    df_local = df_oracle.clone()
    for pc in fx_pairs:
        dec_col = pc.replace(" Price", " Price Decimal")
        df_local = df_local.with_columns(
            (pl.col(pc).cast(pl.Float64) / 1e18).alias(dec_col)
        )

    suspicious_frames = []

    for pc in fx_pairs:
        base_label = pc.replace(" Price", "")
        dec_col = base_label + " Price Decimal"
        if dec_col not in df_local.columns:
            continue
        if base_label not in fx_benchmarks:
            continue

        df_sub = df_local.select(["Timestamp_dt", "Validator Address", dec_col]).filter(
            pl.col("Validator Address").is_not_null()
        )
        df_sub = df_sub.with_columns(
            pl.col("Timestamp_dt").dt.truncate("1m").alias("ts_minute")
        )

        lf_sub = (
            df_sub.lazy()
            .group_by(["ts_minute", "Validator Address"])
            .agg(pl.col(dec_col).last().alias("price_decimal"))
        )
        df_val_prices = lf_sub.collect().sort(["Validator Address", "ts_minute"])

        df_val_prices_future = df_val_prices.with_columns(
            (pl.col("ts_minute") + pl.duration(minutes=time_window_minutes)).alias("ts_future")
        )

        left_lf = df_val_prices_future.lazy().sort(["Validator Address", "ts_minute"])
        right_lf = (
            df_val_prices_future.lazy()
            .select([
                pl.col("Validator Address"),
                pl.col("ts_minute").alias("ts_minute_future"),
                pl.col("price_decimal").alias("price_decimal_future"),
            ])
            .sort(["Validator Address", "ts_minute_future"])
        )

        # Forward as-of join within each validator: for each minute T, find
        # that validator's first submission at or after T + time_window_minutes.
        joined_lf = left_lf.join_asof(
            right_lf,
            left_on="ts_future",
            right_on="ts_minute_future",
            by="Validator Address",
            strategy="forward",
            suffix="_r"
        )

        df_joined = joined_lf.collect().with_columns(
            pl.col("price_decimal").alias("price_now")
        )

        df_joined = df_joined.with_columns(
            pl.when(
                (pl.col("price_decimal_future").is_not_null())
                & (pl.col("price_decimal_future") > 0)
                & (pl.col("price_now") > 0)
            )
            .then((pl.col("price_decimal_future") - pl.col("price_now")) / pl.col("price_now"))
            .otherwise(None)
            .alias("validator_pct_change")
        )

        df_bench = fx_benchmarks[base_label]
        df_bench = df_bench.with_columns(
            pl.col("timestamp_benchmark").dt.truncate("1m").alias("ts_minute_bench")
        ).sort("ts_minute_bench")

        lf_bench_now = (
            df_bench.lazy()
            .group_by("ts_minute_bench")
            .agg(pl.col("benchmark_close").last().alias("bench_price"))
            .sort("ts_minute_bench")
        )
        df_bench_now = lf_bench_now.collect().with_columns(
            (pl.col("ts_minute_bench") + pl.duration(minutes=time_window_minutes)).alias("ts_future_bench")
        )

        df_bench_future = df_bench_now.select([
            pl.col("ts_minute_bench").alias("ts_minute_bench_future"),
            pl.col("bench_price").alias("bench_price_future"),
        ]).sort("ts_minute_bench_future")

        ldf_bench_now = df_bench_now.lazy().sort("ts_minute_bench")
        ldf_bench_future = df_bench_future.lazy()

        ldf_bench_joined = ldf_bench_now.join_asof(
            ldf_bench_future,
            left_on="ts_future_bench",
            right_on="ts_minute_bench_future",
            strategy="forward",
            suffix="_r"
        )

        df_bench_joined = ldf_bench_joined.collect().with_columns([
            pl.when(
                (pl.col("bench_price_future").is_not_null())
                & (pl.col("bench_price_future") > 0)
                & (pl.col("bench_price") > 0)
            )
            .then(
                (pl.col("bench_price_future") - pl.col("bench_price")) / pl.col("bench_price")
            )
            .otherwise(None)
            .alias("bench_pct_change")
        ])

        df_final_join = (
            df_joined.lazy()
            .join(
                df_bench_joined.select(["ts_minute_bench", "bench_pct_change"]).lazy(),
                left_on="ts_minute",
                right_on="ts_minute_bench",
                how="left"
            )
            .collect()
        )

        df_lagging_ = df_final_join.with_columns([
            pl.when(
                (pl.col("bench_pct_change").abs() > lag_threshold)
                & (pl.col("validator_pct_change").abs() < lag_threshold)
            )
            .then(pl.lit("Lagging data vs. real market"))
            .otherwise(pl.lit(""))
            .alias("lag_reason")
        ]).filter(pl.col("lag_reason") != "")

        if not df_lagging_.is_empty():
            df_lagging_ = df_lagging_.select([
                pl.col("Validator Address"),
                pl.lit(base_label).alias("pair_label"),
                pl.col("ts_minute").alias("window_start"),
                pl.col("price_now"),
                pl.col("price_decimal_future").alias("price_future"),
                pl.col("validator_pct_change"),
                pl.col("bench_pct_change"),
                pl.col("lag_reason"),
            ])
            suspicious_frames.append(df_lagging_)

    if suspicious_frames:
        return pl.concat(suspicious_frames, how="vertical")
    else:
        return pl.DataFrame(
            {
                "Validator Address": [],
                "pair_label": [],
                "window_start": [],
                "price_now": [],
                "price_future": [],
                "validator_pct_change": [],
                "bench_pct_change": [],
                "lag_reason": [],
            }
        )


def analyze_stale_lagging_data(
    submission_glob: str,
    fx_pairs: List[str],
    autonity_pairs: List[str],
    yahoo_data_dict: Dict[str, pl.DataFrame],
    max_consecutive_threshold: int = 30,
    lag_threshold: float = 0.05,
    lag_window_minutes: int = 60,
):
    """
    Main analysis function.
    """
    df_all = load_and_preprocess_submissions(submission_glob)

    price_cols_all = fx_pairs + autonity_pairs
    df_stale = detect_stale_data(df_all, price_cols_all, max_consecutive_threshold)

    df_lagging = detect_lagging_data(
        df_oracle=df_all,
        fx_benchmarks=yahoo_data_dict,
        fx_pairs=fx_pairs,
        lag_threshold=lag_threshold,
        time_window_minutes=lag_window_minutes,
    )

    return {
        "df_all_data": df_all,
        "df_stale": df_stale,
        "df_lagging": df_lagging,
    }


fx_price_cols = [
    "AUD-USD Price",
    "CAD-USD Price",
    "EUR-USD Price",
    "GBP-USD Price",
    "JPY-USD Price",
    "SEK-USD Price",
]
autonity_price_cols = [
    "ATN-USD Price",
    "NTN-USD Price",
    "NTN-ATN Price",
]

yahoo_data = load_all_fx_benchmarks()

results = analyze_stale_lagging_data(
    submission_glob="../submission-data/Oracle_Submission_*.csv",
    fx_pairs=fx_price_cols,
    autonity_pairs=autonity_price_cols,
    yahoo_data_dict=yahoo_data,
    max_consecutive_threshold=30,
    lag_threshold=0.05,
    lag_window_minutes=60,
)

4.4 What Are the Results?

The following cells summarize the results obtained dynamically from the analysis above.

4.4.1 Stale Data Analysis

Identify validators that repeatedly submit the same price data beyond the threshold.

df_stale = results["df_stale"]

num_stale = df_stale.height
print(f"Total stale data runs detected: {num_stale}")

if num_stale > 0:
    display(df_stale.sort("run_length", descending=True))
else:
    print("No stale data runs exceeding threshold were detected.")
Total stale data runs detected: 1966
shape: (1_966, 6)
Validator Address price_col repeated_value start_timestamp end_timestamp run_length
str str f64 datetime[μs, UTC] datetime[μs, UTC] i64
"0x94d28f08Ff81A80f4716C0a8EfC6… "JPY-USD Price" 0.006361 2025-01-01 00:03:42 UTC 2025-01-01 23:59:44 UTC 2873
"0x1Be7f70BCf8393a7e4A5BcC66F6f… "AUD-USD Price" 0.61885 2025-01-01 00:08:42 UTC 2025-01-01 23:59:44 UTC 2863
"0x1Be7f70BCf8393a7e4A5BcC66F6f… "CAD-USD Price" 0.695386 2025-01-01 00:08:42 UTC 2025-01-01 23:59:44 UTC 2863
"0x1Be7f70BCf8393a7e4A5BcC66F6f… "EUR-USD Price" 1.035626 2025-01-01 00:08:42 UTC 2025-01-01 23:59:44 UTC 2863
"0x1Be7f70BCf8393a7e4A5BcC66F6f… "GBP-USD Price" 1.250801 2025-01-01 00:08:42 UTC 2025-01-01 23:59:44 UTC 2863
"0xfD97FB8835d25740A2Da27c69762… "CAD-USD Price" 0.695314 2025-01-01 00:06:12 UTC 2025-01-01 00:20:42 UTC 30
"0xfD97FB8835d25740A2Da27c69762… "EUR-USD Price" 1.035626 2025-01-01 00:06:12 UTC 2025-01-01 00:20:42 UTC 30
"0xfD97FB8835d25740A2Da27c69762… "GBP-USD Price" 1.25159 2025-01-01 00:06:12 UTC 2025-01-01 00:20:42 UTC 30
"0xfD97FB8835d25740A2Da27c69762… "JPY-USD Price" 0.006361 2025-01-01 00:06:12 UTC 2025-01-01 00:20:42 UTC 30
"0xfD97FB8835d25740A2Da27c69762… "SEK-USD Price" 0.090369 2025-01-01 00:06:12 UTC 2025-01-01 00:20:42 UTC 30

Interpretation:

  • High counts or long durations suggest systematic feed issues or stalled updates.
  • Validators frequently appearing here may need urgent investigation.

4.4.2 Lagging Data Analysis

Detect intervals where the validator’s price fails to reflect significant market movements (≥5% within 60 minutes):

df_lagging = results["df_lagging"]

num_lagging = df_lagging.height
print(f"Total lagging data intervals detected: {num_lagging}")

if num_lagging > 0:
    df_top_lagging = (
        df_lagging
        .with_columns([
            pl.col("bench_pct_change").cast(pl.Float64),
            pl.col("validator_pct_change").cast(pl.Float64),
        ])
        .with_columns([
            (pl.col("bench_pct_change") - pl.col("validator_pct_change")).abs().alias("abs_diff")
        ])
        .sort("abs_diff", descending=True)
    )
    display(df_top_lagging)
else:
    print("No lagging data intervals exceeding threshold were detected.")
Total lagging data intervals detected: 0
No lagging data intervals exceeding threshold were detected.

Interpretation:

  • Large values of abs_diff (the gap between the benchmark’s change and the validator’s change) indicate significant mismatches, suggesting disconnections or feed issues.
  • Frequent occurrences for specific validators or currency pairs indicate persistent issues.

4.4.3 Combined Summary and Interpretation

The tables and statistics above directly highlight:

  • Validators with stale or lagging data: Indicating possible systemic issues or node misconfigurations.
  • Affected currency pairs: Useful for pinpointing feed-related problems.

if num_stale > 0:
    top_stale_validators = df_stale.group_by("Validator Address").agg(
        pl.sum("run_length").alias("total_stale_intervals"),
        pl.count().alias("num_stale_runs")
    ).sort("total_stale_intervals", descending=True)
    print("Top validators by total stale intervals:")
    display(top_stale_validators)
else:
    print("No stale data to summarize.")

if num_lagging > 0:
    top_lagging_validators = df_lagging.group_by("Validator Address").count().sort("count", descending=True)
    print("Top validators by number of lagging intervals:")
    display(top_lagging_validators)
else:
    print("No lagging data to summarize.")
Top validators by total stale intervals:
shape: (55, 3)
Validator Address total_stale_intervals num_stale_runs
str i64 u32
"0x36142A4f36974e2935192A1111C3… 17280 21
"0xB5d8be2AB4b6d7E6be7Ea28E91b3… 17280 12
"0xDCA5DFF3D42f2db3C18dBE823380… 17280 12
"0x3597d2D42f8Fbbc82E8b10460487… 17280 12
"0xBBf36374eb23968F25aecAEbb97B… 17280 12
"0xDF2D0052ea56A860443039619f6D… 15935 34
"0xE9FFF86CAdC3136b3D94948B8Fd2… 15873 59
"0x551f3300FCFE0e392178b3542c00… 15400 8
"0x22A76e194A49c9e5508Cd4A3E1cD… 15356 8
"0x1476A65D7B5739dE1805d5130441… 13581 64
No lagging data to summarize.

List of All Validators and Their Stale Scores

df_all = results["df_all_data"]
df_stale = results["df_stale"]

df_totals = (
    df_all
    .group_by("Validator Address")
    .agg(pl.count().alias("total_submissions"))
    .filter(pl.col("Validator Address").is_not_null())
)

df_stale_sum = (
    df_stale
    .group_by("Validator Address")
    .agg(pl.col("run_length").sum().alias("sum_stale_intervals"))
)

df_scores = (
    df_totals
    .join(df_stale_sum, on="Validator Address", how="left")
    .fill_null(0)
    .with_columns(
        (pl.col("sum_stale_intervals") / pl.col("total_submissions")).alias("stale_score")
    )
    .sort("stale_score", descending=True)
)

for row in df_scores.to_dicts():
    print(
        f"Validator {row['Validator Address']}: "
        f"total={row['total_submissions']}, "
        f"sum_stale_intervals={row['sum_stale_intervals']}, "
        f"stale_score={row['stale_score']:.1f}"
    )
Validator 0xBBf36374eb23968F25aecAEbb97BF3118f3c2fEC: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x94470A842Ea4f44e668EB9C2AB81367b6Ce01772: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x527192F3D2408C84087607b7feE1d0f907821E17: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0xF9B38D02959379d43C764064dE201324d5e12931: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x791A7F840ac11841cCB0FaA968B2e3a0Db930fCe: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0xDCA5DFF3D42f2db3C18dBE823380A0A81db49A7E: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x36142A4f36974e2935192A1111C39330aA296D3C: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x383A3c437d3F12f60E5fC990119468D3561EfBfc: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x9d28e40E9Ec4789f9A0D17e421F76D8D0868EA44: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0xB5d8be2AB4b6d7E6be7Ea28E91b370223a06289f: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x358488a4EdCA493FCD87610dcd50c62c8A3Dd658: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0x3597d2D42f8Fbbc82E8b1046048773aD6DDB717E: total=2880, sum_stale_intervals=17280, stale_score=6.0
Validator 0xEf0Ba5e345C2C3937df5667A870Aae5105CAa3a5: total=2879, sum_stale_intervals=17268, stale_score=6.0
Validator 0x8f91e0ADF8065C3fFF92297267E02DF32C2978FF: total=2879, sum_stale_intervals=17268, stale_score=6.0
Validator 0x718361fc3637199F24a2437331677D6B89a40519: total=2880, sum_stale_intervals=17263, stale_score=6.0
Validator 0x5603caFE3313D0cf56Fd4bE4A2f606dD6E43F8Eb: total=2880, sum_stale_intervals=17250, stale_score=6.0
Validator 0x9C7dAABb5101623340C925CFD6fF74088ff5672e: total=2880, sum_stale_intervals=17250, stale_score=6.0
Validator 0x984A46Ec685Bb41A7BBb2bc39f80C78410ff4057: total=2877, sum_stale_intervals=17232, stale_score=6.0
Validator 0x197B2c44b887c4aC01243BDE7E4bBa8bd95BC3a8: total=2880, sum_stale_intervals=17245, stale_score=6.0
Validator 0x6a395dE946c0493157404E2b1947493c633f569E: total=2874, sum_stale_intervals=17208, stale_score=6.0
Validator 0x94d28f08Ff81A80f4716C0a8EfC6CAC2Ec74d09E: total=2880, sum_stale_intervals=17238, stale_score=6.0
Validator 0xd61a48b0e11B0Dc6b7Bd713B1012563c52591BAA: total=2873, sum_stale_intervals=17196, stale_score=6.0
Validator 0x00a96aaED75015Bb44cED878D927dcb15ec1FF54: total=2866, sum_stale_intervals=17136, stale_score=6.0
Validator 0xBE287C82A786218E008FF97320b08244BE4A282c: total=2879, sum_stale_intervals=17202, stale_score=6.0
Validator 0x5E17e837DcBa2728C94f95c38fA8a47CB9C8818F: total=2876, sum_stale_intervals=17178, stale_score=6.0
Validator 0x99E2B4B27BDe92b42D04B6CF302cF564D2C13b74: total=2880, sum_stale_intervals=17196, stale_score=6.0
Validator 0x8584A78A9b94f332A34BBf24D2AF83367Da31894: total=2879, sum_stale_intervals=17190, stale_score=6.0
Validator 0x7232e75a8bFd8c9ab002BB3A00eAa885BC72A6dd: total=2877, sum_stale_intervals=17178, stale_score=6.0
Validator 0x3AaF7817618728ffEF81898E11A3171C33faAE41: total=2874, sum_stale_intervals=17148, stale_score=6.0
Validator 0xcf716b3930d7cf6f2ADAD90A27c39fDc9D643BBd: total=2880, sum_stale_intervals=17181, stale_score=6.0
Validator 0x1Be7f70BCf8393a7e4A5BcC66F6f15d6e35cfBBC: total=2880, sum_stale_intervals=17178, stale_score=6.0
Validator 0x831B837C3DA1B6c2AB68a690206bDfF368877E19: total=2863, sum_stale_intervals=17076, stale_score=6.0
Validator 0x23b4Be9536F93b8D550214912fD0e38417Ff7209: total=2880, sum_stale_intervals=17154, stale_score=6.0
Validator 0x24915749B793375a8C93090AF19928aFF1CAEcb6: total=2880, sum_stale_intervals=17154, stale_score=6.0
Validator 0xE4686A4C6E63A8ab51B458c52EB779AEcf0B74f7: total=2880, sum_stale_intervals=17124, stale_score=5.9
Validator 0x59031767f20EA8F4a3d90d33aB0DAA2ca469Fd9a: total=2880, sum_stale_intervals=17124, stale_score=5.9
Validator 0xcdEed21b471b0Dc54faF74480A0E700fCc42a7b6: total=2880, sum_stale_intervals=17090, stale_score=5.9
Validator 0x2928FE5b911BCAf837cAd93eB9626E86a189f1dd: total=2829, sum_stale_intervals=16710, stale_score=5.9
Validator 0x6747c02DE7eb2099265e55715Ba2E03e8563D051: total=2840, sum_stale_intervals=16770, stale_score=5.9
Validator 0xf10f56Bf0A28E0737c7e6bB0aF92f3DDad34aE6a: total=2880, sum_stale_intervals=16997, stale_score=5.9
Validator 0xC1F9acAF1824F6C906b35A0D2584D6E25077C7f5: total=2880, sum_stale_intervals=16957, stale_score=5.9
Validator 0xfD97FB8835d25740A2Da27c69762D74F6A931858: total=2880, sum_stale_intervals=16951, stale_score=5.9
Validator 0xbfDcAF35f52F9ef423ac8F2621F9eef8be6dEd17: total=2833, sum_stale_intervals=16632, stale_score=5.9
Validator 0xf34CD6c09a59d7D3d1a6C3dC231a7834E5615D6A: total=2834, sum_stale_intervals=16623, stale_score=5.9
Validator 0x01F788E4371a70D579C178Ea7F48E04e8B2CD743: total=2837, sum_stale_intervals=16614, stale_score=5.9
Validator 0x4cD134001EEF0843B9c69Ba9569d11fDcF4bd495: total=2823, sum_stale_intervals=16518, stale_score=5.9
Validator 0x64F83c2538A646A550Ad9bEEb63427a377359DEE: total=2880, sum_stale_intervals=16826, stale_score=5.8
Validator 0xD9fDab408dF7Ae751691BeC2efE3b713ba3f9C36: total=2880, sum_stale_intervals=16781, stale_score=5.8
Validator 0x19E356ebC20283fc74AF0BA4C179502A1F62fA7B: total=2833, sum_stale_intervals=16448, stale_score=5.8
Validator 0xc5B9d978715F081E226cb28bADB7Ba4cde5f9775: total=2879, sum_stale_intervals=15975, stale_score=5.5
Validator 0xDF2D0052ea56A860443039619f6DAe4434bc0Ac4: total=2879, sum_stale_intervals=15935, stale_score=5.5
Validator 0x1476A65D7B5739dE1805d5130441A94022Ee49fe: total=2462, sum_stale_intervals=13581, stale_score=5.5
Validator 0xE9FFF86CAdC3136b3D94948B8Fd23631EDaa2dE3: total=2880, sum_stale_intervals=15873, stale_score=5.5
Validator 0x551f3300FCFE0e392178b3542c009948008B2a9F: total=2880, sum_stale_intervals=15400, stale_score=5.3
Validator 0x22A76e194A49c9e5508Cd4A3E1cD555D088ECB08: total=2880, sum_stale_intervals=15356, stale_score=5.3
Validator 0x26E2724dBD14Fbd52be430B97043AA4c83F05852: total=2880, sum_stale_intervals=0, stale_score=0.0
Validator 0x3fe573552E14a0FC11Da25E43Fef11e16a785068: total=2880, sum_stale_intervals=0, stale_score=0.0
Validator 0xd625d50B0d087861c286d726eC51Cf4Bd9c54357: total=2856, sum_stale_intervals=0, stale_score=0.0
Validator 0xdF239e0D5b4E6e820B0cFEF6972A90893c2073AB: total=2880, sum_stale_intervals=0, stale_score=0.0
Validator 0x100E38f7BCEc53937BDd79ADE46F34362470577B: total=2876, sum_stale_intervals=0, stale_score=0.0

Please note: total is the total number of submissions for the validator, and sum_stale_intervals sums all stale runs across every price column. For instance, if several of a validator’s price columns remain identical for 30+ consecutive intervals, each column’s run is added separately. stale_score = sum_stale_intervals / total, which can exceed 1 because a single row (submission) may contribute to multiple stale runs (one per column).
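
As a quick sanity check of the formula, the first row of the listing above can be reproduced by hand:

# Values taken from the first validator printed above.
sum_stale_intervals = 17280
total_submissions = 2880
stale_score = sum_stale_intervals / total_submissions
print(f"stale_score={stale_score:.1f}")   # stale_score=6.0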