How Polyvision Actually Works

No black boxes. No vague promises. Here's exactly what happens when you paste a wallet address.

The Problem

Copy trading on Polymarket is broken. Leaderboards show profit without context. A trader sitting on $500K in unrealized losses looks identical to one with clean exits. A bot front-running the orderbook looks like a genius. A lucky gambler who hit one big bet looks like a whale.

We got tired of losing money copying traders who looked profitable. So we built the tool we wished existed: something that pulls apart a wallet's full history — every closed trade, every open position, every redeem — and answers a single question: is this trader actually worth copying?

That's all Polyvision does. We don't manage your money. We don't execute trades. We don't sell signals. We run the math and give you the result.

What Happens When You Scan a Wallet

Every analysis hits Polymarket's public data API and runs through the same pipeline. No shortcuts for premium users, no different algorithm for different tiers.

1. Data Collection

We fetch the wallet's full trading history from Polymarket's public API: closed positions (paginated), current open positions, trade activity with timestamps, and the official leaderboard P&L.
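Pulling a wallet's full history is mostly careful pagination: keep advancing an offset until the API returns a short page. A minimal sketch of that loop, using a fake in-memory fetcher in place of the real endpoint (the function and parameter names here are illustrative, not Polymarket's actual API surface):

```python
from typing import Callable, Iterator

def fetch_all_pages(fetch_page: Callable[[int, int], list[dict]],
                    page_size: int = 100) -> Iterator[dict]:
    """Pull every record from an offset-paginated endpoint by advancing
    the offset until a short (or empty) page signals the end of the data."""
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        yield from page
        if len(page) < page_size:  # last page reached
            break
        offset += page_size

# Illustrative use with an in-memory stand-in for the real API:
records = [{"id": i} for i in range(250)]
fake_api = lambda offset, limit: records[offset:offset + limit]
history = list(fetch_all_pages(fake_api))
```

Accepting the fetcher as a parameter keeps the pagination logic testable without any network access.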

2. P&L Calculation

We use Polymarket's official leaderboard as the single source of truth for total P&L. This is critical — most tools miss redeems (positions closed by market resolution), which can understate profit by 20-50%. We don't.
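The redeem gap is easy to see in miniature. A sketch of realized P&L that counts both explicit closes and resolution payouts (the event shapes here are illustrative, not Polyvision's internal schema):

```python
def total_pnl(events: list[dict]) -> float:
    """Sum realized P&L over both explicit closes and redeems.
    A 'redeem' is a position settled by market resolution: the payout
    ($1 per winning share, $0 per losing share) minus the entry cost."""
    pnl = 0.0
    for e in events:
        if e["type"] == "close":
            pnl += e["proceeds"] - e["cost"]
        elif e["type"] == "redeem":
            payout = e["shares"] * (1.0 if e["won"] else 0.0)
            pnl += payout - e["cost"]
    return pnl

events = [
    {"type": "close",  "proceeds": 180.0, "cost": 150.0},           # +30 via exit
    {"type": "redeem", "shares": 100, "won": True,  "cost": 60.0},  # +40 via resolution
    {"type": "redeem", "shares": 50,  "won": False, "cost": 20.0},  # -20 via resolution
]
```

A tool that only counts `close` events would report +$30 here; the true realized figure is +$50.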

3. Performance Metrics

  • Win rate (excluding break-even)
  • ROI as % of capital invested
  • Average trade size & P&L
  • Best and worst single trade
  • 7-day, 30-day, 90-day windows
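The first four metrics above reduce to simple arithmetic over per-trade P&L. A minimal sketch (the function shape is illustrative; note break-even trades are excluded from the win rate, per the convention above):

```python
def performance_metrics(trades: list[float], invested: float) -> dict:
    """Basic performance stats over a list of per-trade P&L values.
    Break-even trades (pnl == 0) are excluded from the win rate."""
    decided = [p for p in trades if p != 0]
    wins = [p for p in decided if p > 0]
    return {
        "win_rate": len(wins) / len(decided) if decided else 0.0,
        "roi_pct": 100.0 * sum(trades) / invested if invested else 0.0,
        "avg_pnl": sum(trades) / len(trades) if trades else 0.0,
        "best": max(trades, default=0.0),
        "worst": min(trades, default=0.0),
    }

m = performance_metrics([120.0, -40.0, 0.0, 60.0, -20.0], invested=1000.0)
```

The rolling 7/30/90-day windows are the same computation restricted to trades whose timestamps fall inside each window.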

4. Risk Analysis

  • Sharpe ratio (risk-adjusted returns)
  • Sortino ratio (downside volatility only)
  • Maximum drawdown ($ and %)
  • Position sizing consistency
  • Winning & losing streak analysis
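The first three risk metrics can be sketched in a few lines. This assumes a risk-free rate of zero and population variance; Polyvision's exact conventions (periodization, annualization) may differ:

```python
import math

def risk_metrics(returns: list[float]) -> dict:
    """Sharpe and Sortino over per-period returns (risk-free rate assumed 0),
    plus maximum drawdown on the cumulative P&L curve."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    # Sortino penalizes only downside moves: clamp gains to zero.
    dvar = sum(min(r, 0.0) ** 2 for r in returns) / n
    # Max drawdown: largest peak-to-trough drop of the running equity curve.
    equity, peak, max_dd = 0.0, 0.0, 0.0
    for r in returns:
        equity += r
        peak = max(peak, equity)
        max_dd = max(max_dd, peak - equity)
    return {
        "sharpe": mean / math.sqrt(var) if var else 0.0,
        "sortino": mean / math.sqrt(dvar) if dvar else 0.0,
        "max_drawdown": max_dd,
    }
```

The difference between Sharpe and Sortino matters here: a trader with lumpy wins but controlled losses gets punished by Sharpe and rewarded by Sortino.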

5. Red Flag Detection

  • Loss hiding (closing winners, holding losers)
  • Bot activity (median hold time analysis)
  • Wash trading patterns
  • Inactivity (no trades in 30+ days)
  • Luck vs. skill (100% win rate, small sample)
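Several of these checks are cheap heuristics over data already collected. A sketch of four of them; the thresholds below are illustrative, not Polyvision's actual cutoffs:

```python
from statistics import median

def red_flags(closed_pnls: list[float], open_pnls: list[float],
              hold_times_sec: list[float], days_since_last_trade: int) -> list[str]:
    """Heuristic red-flag checks. All thresholds are illustrative."""
    flags = []
    # Loss hiding: a clean realized record, but most open positions underwater.
    if closed_pnls and open_pnls:
        realized_wr = sum(p > 0 for p in closed_pnls) / len(closed_pnls)
        underwater = sum(p < 0 for p in open_pnls) / len(open_pnls)
        if realized_wr > 0.7 and underwater > 0.6:
            flags.append("loss_hiding")
    # Bot activity: median hold time measured in seconds, not hours.
    if hold_times_sec and median(hold_times_sec) < 60:
        flags.append("bot_speed")
    # Inactivity: stale wallets get a stale score.
    if days_since_last_trade > 30:
        flags.append("inactive")
    # Luck vs. skill: a perfect record over a tiny sample proves nothing.
    if closed_pnls and len(closed_pnls) < 10 and all(p > 0 for p in closed_pnls):
        flags.append("small_sample_luck")
    return flags
```

Using the median hold time rather than the mean keeps one long-held position from masking thousands of sub-minute flips.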

6. Scoring

Multiple independent criteria are weighted and combined into a 1-10 score. Hard caps prevent inflated scores — small sample sizes, high-risk profiles, and bot-like behavior are all ceiling-limited regardless of other metrics. The score answers one thing: copy or don't.
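The key structural idea is that caps are applied after the weighted sum, so no metric can buy back a capped ceiling. A sketch with made-up weights and cap values (Polyvision's actual criteria and numbers differ):

```python
def score_trader(subscores: dict[str, float],
                 n_trades: int, risk_level: str, bot_like: bool) -> float:
    """Combine weighted sub-scores (each 0-10) into a 1-10 rating,
    then apply hard caps. Weights and cap values are illustrative."""
    weights = {"track_record": 0.3, "risk_adjusted": 0.3,
               "consistency": 0.2, "activity": 0.2}
    raw = sum(w * subscores.get(k, 0.0) for k, w in weights.items())
    # Hard caps: structural ceilings, applied regardless of other metrics.
    cap = 10.0
    if n_trades < 20:
        cap = min(cap, 5.0)  # small sample: luck, not skill
    if risk_level == "high":
        cap = min(cap, 6.0)
    if bot_like:
        cap = min(cap, 3.0)
    return max(1.0, min(raw, cap))
```

A wallet with perfect sub-scores but only five trades still tops out at the small-sample ceiling, which is exactly the "penalize first" behavior described below.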

Why We Penalize First

Most scoring systems are designed to make traders look good. Ours is designed to protect you from bad ones.

The scoring algorithm evaluates multiple independent criteria: track record length, risk-adjusted returns, risk management, win rate consistency, recent activity, trading speed, and loss hiding behavior.

What makes it different is the hard caps. A trader with a small number of trades is ceiling-limited — no matter how good the numbers look. That's because with a small sample, you're measuring luck, not skill. High-risk profiles are capped. Wallets showing bot-like trading speeds are flagged and capped. These limits aren't optional; they're structural.

We'd rather give a good trader a conservative score than give a lucky gambler a misleading one. If you're using this tool to decide where to put real money, we think that's the right tradeoff.

What Raises the Score

  • Long track record with consistent results
  • Strong ROI with controlled drawdown
  • High win rate with meaningful sample size
  • Recent trading activity
  • Average winner larger than average loser
  • Low maximum drawdown

What Lowers It

  • Negative ROI
  • Large maximum drawdown
  • Low win rate with meaningful sample
  • Extended inactivity
  • Bot-speed trading patterns
  • Majority of open positions underwater

Does the Scoring Work?

We back-tested every trader we've ever flagged, then validated the results with statistical significance testing. Here's what holds up.

The methodology matters more than the numbers: we only count trades placed on or after the date we first flagged each trader. This eliminates look-ahead bias entirely. We're not cherry-picking past winners — we're measuring what would have happened if you followed our picks in real time, starting from the moment we surfaced them.

We apply a conservative cost adjustment — 0.5% slippage per entry plus a 2% fee on profit — to simulate real-world copy-trading execution. We then run a grid search over 4,800 filter combinations to find the optimal recommended filter, and validate it with bootstrap resampling and FDR-adjusted p-values. These are the cost-adjusted, statistically validated numbers.

  • +10.1% cost-adjusted ROI (average 53-day hold; roughly 69% annualized)
  • 74.0% win rate across 24,061 resolved trades
  • 88% of recommended traders profitable (22 of 25)
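The per-trade cost model above (0.5% slippage on entry, 2% fee on profit) can be sketched directly. One assumption worth flagging: this version charges the profit fee on the net-of-slippage figure, which may not match the back-test's exact ordering:

```python
def cost_adjusted_pnl(entry_cost: float, gross_pnl: float,
                      slippage: float = 0.005, profit_fee: float = 0.02) -> float:
    """Apply the back-test's cost model to one trade: 0.5% slippage on
    the entry size, plus a 2% fee on any resulting net profit."""
    pnl = gross_pnl - entry_cost * slippage  # slippage charged on entry size
    if pnl > 0:
        pnl -= pnl * profit_fee              # fee only on profitable trades
    return pnl

# A $1,000 entry that grosses +$101 nets about +$94 after costs;
# a losing trade pays slippage but no profit fee.
net = cost_adjusted_pnl(1000.0, 101.0)
```

Note the asymmetry: losing trades still pay slippage, so the cost model always cuts into returns rather than flattering them.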

Methodology

  • 81 traders flagged across daily leaderboard scans
  • Only trades after the flag date are counted
  • 33,880 trades analyzed (24,061 resolved)
  • Cost-adjusted for 0.5% slippage + 2% profit fee
  • Recommended filter: Score ≥9.5, ≥4 appearances, WR ≥50%, ≥10 trades

Statistical Validation

  • Sharpe ratio: 0.738
  • FDR-adjusted p-value: 0.0015
  • Bootstrap 95% CI: [+5.2%, +15.6%]
  • 100% of bootstrap iterations returned positive ROI
  • Grid search over 4,800 filter combinations
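The bootstrap confidence interval above works by resampling the trade-level results with replacement and re-computing the mean many times. A minimal percentile-bootstrap sketch (iteration count and seed are illustrative):

```python
import random

def bootstrap_ci(rois: list[float], n_iter: int = 10_000,
                 alpha: float = 0.05, seed: int = 0) -> tuple[float, float]:
    """Percentile bootstrap CI for the mean ROI: resample with replacement,
    recompute the mean each time, take the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(rois)
    means = sorted(sum(rng.choices(rois, k=n)) / n for _ in range(n_iter))
    lo = means[int(n_iter * alpha / 2)]
    hi = means[int(n_iter * (1 - alpha / 2)) - 1]
    return lo, hi
```

"100% of bootstrap iterations returned positive ROI" then has a concrete meaning: every one of the resampled means landed above zero, not just the interval endpoints.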

The Caveat

This is historical data. Past performance does not guarantee future results. Markets change, traders change, and any back-test has survivorship bias baked in. We publish this to show the scoring system has a quantifiable track record — not to promise returns.

What We Believe

Transparency Over Trust

We show every metric we calculate — Sharpe ratio, Sortino ratio, drawdown percentages, hold duration percentiles, streak data. You shouldn't have to trust a number you can't verify. The score is a summary, not a replacement for the data behind it.

Protection Over Hype

The hardest part of copy trading isn't finding winners — it's avoiding losers disguised as winners. Our red flag system exists to catch what leaderboards don't show: hidden losses, bot activity, wash trading, and inflated track records.

Public Data Only

Everything we analyze is already public on the blockchain and Polymarket's API. We don't access private keys, we don't connect wallets, we don't touch your funds. We read the same data anyone can — we just read more of it, faster, and with math.

Independently Verified

Polyvision is listed as benign with high confidence on ClawHub, an independent registry that audits MCP servers for security and trust. We didn't ask for the rating — they assessed us.

370+ users — a mix of human traders and AI agents — use Polyvision across our Telegram bot, REST API, and MCP server.

View on ClawHub

What We Don't Do

We don't provide financial advice. We don't execute trades on your behalf. We don't guarantee that high-scoring traders will remain profitable. Markets change, traders change, and past performance is never a guarantee. Polyvision gives you better data to make your own decisions — the decisions are still yours.

Built for Accuracy, Not Speed-to-Market

Analysis results are cached so that repeated scans of the same wallet return instantly without redundant upstream API calls. Rate limiting is persistent and distributed — we maintain a safe margin below Polymarket's API limits to protect both our infrastructure and theirs.

The system handles pagination edge cases that most tools ignore: Polymarket's API has hard limits on how much history you can pull. When we hit those limits, we tell you: a data-truncated flag in the analysis explains exactly what was cut off and why, rather than silently returning incomplete results.

Analysis Pipeline

  • 200+ data points per wallet
  • Multiple analysis categories in parallel
  • Persistent caching for instant repeat scans
  • Distributed rate limiting

Available Via

  • Telegram bot (interactive analysis)
  • REST API (programmatic access)
  • MCP server (AI agent integration)
  • All use the same engine and scoring