
yolo, years later

Apr 15, 2026

ai · computer-vision · yolo · mobile

i first played with YOLO in university. object detection felt like magic then: the model draws a box around something and just knows what it is, in real time, on a webcam feed. i showed it to people who had no idea what a neural network was and they lost their minds.

that was years ago.

fast forward to now, and i'm shipping it to production. customer-facing. real users. part of an eKYC flow. not a toy anymore.

the model held up

here's the thing: i hadn't touched YOLO in years. i expected to spend days fighting it back into shape. instead, fine-tuning on my own dataset was still as smooth as i remembered — maybe smoother. good quality output without much pain.
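
for reference, the fine-tuning loop these days is basically a few lines of the ultralytics Python API. a minimal sketch, assuming a standard dataset yaml; the paths and hyperparameters below are placeholders, not the actual eKYC training config:

```python
# minimal fine-tuning sketch with the ultralytics package;
# dataset yaml, epochs, and image size are placeholder values
from ultralytics import YOLO

# start from a pretrained nano checkpoint and fine-tune on custom data
model = YOLO("yolo11n.pt")

model.train(
    data="my_dataset.yaml",  # ultralytics-format yaml: train/val paths + class names
    epochs=50,
    imgsz=640,
    batch=16,
)

# quick sanity check on a held-out image
results = model("sample.jpg")
results[0].show()
```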

the ecosystem just kept getting better while i wasn't looking.

where it stands now

current SOTA is YOLOv26, released by Ultralytics in January 2026. smaller, faster, more accurate than anything before it. NMS-free end-to-end inference, 43% faster CPU performance, new MuSGD optimizer for more stable training. they're not just versioning up — they're actually rethinking parts of the architecture.
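
for context on the NMS-free part: older YOLO heads spit out a pile of overlapping candidate boxes and rely on non-maximum suppression as a separate post-processing pass to dedupe them. a minimal sketch of that step, i.e. the thing the end-to-end models no longer need:

```python
# classic non-maximum suppression: keep the highest-scoring box,
# drop everything that overlaps it too much, repeat.
# this is the post-processing pass that NMS-free models skip.

def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    # sort candidates by confidence, greedily keep non-overlapping ones
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```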

but for our case — porting to Flutter — we're using YOLOv11 via the ultralytics yolo-flutter-app. v11 is mature, well-supported on mobile, and doesn't need the bleeding edge to do its job well.
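
the handoff to Flutter is mostly an export step on the Python side. a sketch of what that looks like, assuming the usual TFLite-for-Android / CoreML-for-iOS route; check the yolo-flutter-app docs for the exact formats and packaging it expects:

```python
# export fine-tuned YOLOv11 weights into mobile formats for the Flutter app;
# the target formats here are an assumption, the plugin docs are authoritative
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # checkpoint written by model.train()

model.export(format="tflite", imgsz=640)   # Android
model.export(format="coreml", imgsz=640)   # iOS
```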

credit where it's due

none of this exists without Joseph Redmon, the OG who built the first YOLO and made real-time object detection accessible. he eventually stepped away from the field over ethical concerns (which is its own thing worth thinking about), but the model he started lives on, version after version, carried forward by hackers and researchers and now by people like me shipping it to prod.

ggwp, Mr. Redmon.


the thought is mine. the words are written by janis, my openclaw agent.