Your timeline, without the noise.
xrai runs a local LLM against every tweet and hides the garbage. Nothing leaves your machine. No cloud, no API keys, no accounts.
X is 90% noise now.
Engagement bait, clickbait videos, recycled takes, crypto pumps, rage posts. Scrolling it all costs you hours a week and trashes your attention.
Feed without xrai
Feed with xrai
Four dimensions. One local model.
Every tweet runs through a fast pipeline. Regex kills obvious junk; the LLM scores the rest on four axes. 3 or 4 out of 4 means signal. 0 to 2 means hide.
Novelty
New information or the 412th recycled take?
Specificity
Concrete numbers and details, or vague claims?
Density
High insight-to-word ratio, or padded word count?
Authenticity
Genuine sharing, or engagement farming?
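The four-axis verdict described above can be sketched in a few lines of vanilla JS. This is illustrative, not the actual xrai implementation; names like `verdict` and the 0/1 scoring are assumptions for the sketch.

```javascript
// Each axis scores 0 or 1; a tweet needs 3 of 4 to count as signal.
// (Illustrative sketch -- function and axis names are assumptions,
// not the real xrai code.)
const AXES = ["novelty", "specificity", "density", "authenticity"];

function verdict(scores) {
  const total = AXES.reduce((sum, axis) => sum + (scores[axis] ? 1 : 0), 0);
  return total >= 3 ? "signal" : "hide";
}
```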
What it actually does.
No accounts, no telemetry, no cloud. The whole thing is ~2,000 lines of vanilla JS running in a Chrome extension and a local Ollama instance.
100% local
All classification runs through Ollama on your machine. Zero API calls, zero accounts, zero billing.
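A local-only classification call might look like the sketch below. The endpoint and `model`/`prompt`/`stream` fields follow Ollama's documented `/api/generate` API; the prompt text and helper names are illustrative assumptions, not xrai's actual code.

```javascript
// All traffic goes to localhost -- nothing leaves the machine.
const OLLAMA_URL = "http://localhost:11434/api/generate";

// Build the request options (prompt wording is a hypothetical example).
function buildRequest(tweetText, model = "phi4-mini") {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: `Score this tweet on novelty, specificity, density, authenticity:\n${tweetText}`,
      stream: false, // one JSON response instead of a token stream
    }),
  };
}

async function classify(tweetText) {
  const res = await fetch(OLLAMA_URL, buildRequest(tweetText));
  const data = await res.json();
  return data.response; // model output, still on-machine
}
```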
Tech-tuned
Safelist for AI, startup, and engineering content. The model has been benchmarked on 89 real tweets from founders and devs.
Result cache
Scroll back up? Cached verdict applied instantly. No re-classification on the same tweet ever.
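A verdict cache keyed by tweet ID is enough to make re-scrolls instant. A minimal sketch (the `cachedClassify` name and `Map` choice are assumptions, not the real implementation):

```javascript
// Verdicts keyed by tweet ID -- a tweet is classified at most once.
const verdictCache = new Map();

function cachedClassify(tweetId, classifyFn) {
  if (verdictCache.has(tweetId)) {
    return verdictCache.get(tweetId); // instant on re-scroll
  }
  const result = classifyFn(tweetId);
  verdictCache.set(tweetId, result);
  return result;
}
```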
Regex prefilter
Eleven categories catch obvious junk instantly. No model call wasted on "RT if you agree".
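The prefilter idea is simple pattern matching before any model call. The two patterns below are illustrative examples only; the real extension's eleven categories are not reproduced here.

```javascript
// Two example junk patterns (the actual xrai list has eleven categories).
const JUNK_PATTERNS = [
  /\bRT if you agree\b/i,          // engagement bait
  /\b(pump|moon|100x)\b.*\$\w+/i,  // crypto shilling with a ticker
];

function isObviousJunk(text) {
  return JUNK_PATTERNS.some((re) => re.test(text));
}
```

Anything the prefilter catches never reaches the LLM, which is where the speed win comes from.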
Self-improving
Classification logs stream to a local collector. Run the improve script to generate prompt fixes from misclassifications.
Copy-paste replies
Generate replies in your voice. Copy them. Paste them yourself. xrai never touches X's compose box.
Real numbers. 89 tweets. Apple Silicon.
Models scored against hand-labeled tweets pulled from actual timelines and bookmarks. Pick speed or pick accuracy.
| Model | Accuracy | Avg speed | Size | Use when |
|---|---|---|---|---|
| phi4-mini | 92% | 518ms | 2.5 GB | you want the best signal-noise call |
| gemma2:2b | 88% | 231ms | 1.6 GB | you want the fastest feed response |
```shell
node benchmarks/benchmark.js
```
Three steps. About four minutes.
You need a machine that can run Ollama; any recent Mac works. No other setup, and no CLI knowledge beyond copying one command.
Install Ollama
Download from ollama.ai. On Mac it auto-starts on login and lives in your menubar.
→ ollama.ai
Pull a model
phi4-mini for best accuracy, gemma2:2b for best speed. Either works.
Load the extension
Open chrome://extensions, turn on Developer mode, click "Load unpacked", pick the extension folder. Visit x.com.
→ github.com/phuaky/xrai
Not a bot. Not a scraper.
xrai reads the same DOM your browser is already rendering. It hides things with CSS, same as an ad blocker. Here's what it does not do.
- No API access. Never calls X's API. Only reads your own rendered feed.
- No automation. Never clicks, likes, follows, or submits forms.
- No auto-posting. Reply text is generated. You copy and paste it yourself.
- No scraping. Processes only what you are currently viewing.
- CSS hide only. Same technique ad blockers use. Widely accepted.
- User-initiated. Activates when you open x.com. Inactive otherwise.
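The hide step above amounts to a one-line style change on the tweet's DOM node. A sketch, with the caveat that the function name and the `data-` marker attribute are hypothetical, not xrai's actual code:

```javascript
// Hide a classified-noise tweet with CSS only -- the same mechanism
// ad blockers use. The node is never removed or otherwise modified.
function hideTweet(articleEl) {
  articleEl.style.display = "none";
  articleEl.dataset.xraiHidden = "true"; // hypothetical marker attribute
}
```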