We have an exciting update on the Buggy Enhancement Grant drone project. This week was our first test of automatically generating composite images of the driver's line for every buggy across a full day of rolls. This system enables quick turnaround so drivers can analyze their line and make incremental improvements as they work up to raceday speed. The original drone footage gives an incomparable view of the speed and handling of the buggy as the driver navigates the chute, and the composite images highlight minute differences in the geometry of the turn.
This is a multidisciplinary, multi-organization project led by Tjaden "TJ" Bridges of SDC that received funding as part of the spring 2024 Buggy Enhancement Grant campaign. It truly captures the spirit of the Buggy Enhancement Grant program by making the sport faster, safer, and more enjoyable for everyone.
You can view the full set of composite images in Saturday's gallery. Footage is uploaded to CMUBuggyDrones on YouTube.

Disclaimer
The footage and images are tools for making your team safer and faster. They are not a substitute for the caution and good judgment that drivers, mechanics, and pushers must use when planning how to safely bring the team up to speed.
Behind the Scenes
On the Ground
The biggest challenge for this project has been logistical: getting permission from everyone to have the drone in the air during their rolls, and ensuring all of the equipment is prepped and ready for every day of rolls.
In the Air
Wind conditions permitting, we record from 50 meters above the northwest corner of Schenley Drive and Frew Street with the camera tilted down at 55 degrees. We try to keep a few meters leading up to the chute flag in view so drivers have good feedback on when they should start the turn.
On the Web
The image compositing is done with an OpenCV Python script. A background subtractor generates an on-the-fly model of the road and highlights differences between frames. A blob detector then identifies candidate moving objects in each frame, and a set of Kalman filters tracks their trajectories between frames. This lets us filter out small motion and slow-moving objects, though the occasional goose in flight still gets its own line photo. The relevant frames are all aligned to the first frame for each buggy, and the tracking data gives us a mask to efficiently apply a bit of math to the pixel values that selectively lightens or darkens the composite image. Labeling the images with the org/buggy is still a manual process, but we're working on training an image classifier to auto-tag the images.
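For the curious, here is a minimal sketch of what a pipeline like the one described above can look like in OpenCV. This is not the actual script: the video path, thresholds, single-buggy tracking, and the 50/50 blend standing in for the lighten/darken math are all assumptions, and the frame alignment and labeling steps are omitted.

```python
"""Minimal sketch of the compositing pipeline described above.

Assumes a stationary, already-aligned view and tracks a single buggy per
clip; the real script's frame alignment, multi-object tracking, and
labeling steps are omitted. File names and thresholds are made up.
"""
import cv2
import numpy as np

MIN_AREA = 400    # assumed: ignore blobs smaller than this (pixels^2)
MIN_SPEED = 3.0   # assumed: ignore tracks slower than this (pixels/frame)


def make_tracker(x, y):
    """Constant-velocity Kalman filter tracking one blob centroid."""
    kf = cv2.KalmanFilter(4, 2)  # state: [x, y, vx, vy], measurement: [x, y]
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32)
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[x], [y], [0], [0]], np.float32)
    return kf


def composite_line(video_path, out_path="composite.png"):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
    base = None   # first frame, which everything else is composited onto
    kf = None     # single-track simplification of the real multi-object tracker

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if base is None:
            base = frame.astype(np.float32)

        # 1. Background subtraction builds a running model of the road and
        #    highlights the pixels that changed between frames.
        fg = subtractor.apply(frame)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # 2. Blob detection: keep only contours large enough to be a buggy.
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) >= MIN_AREA]
        if not blobs:
            continue
        x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
        cx, cy = x + w / 2.0, y + h / 2.0

        # 3. The Kalman filter tracks the centroid and estimates its velocity,
        #    which lets us drop slow movers (pedestrians, shadows, geese on foot).
        if kf is None:
            kf = make_tracker(cx, cy)
        kf.predict()
        state = kf.correct(np.array([[cx], [cy]], np.float32))
        speed = (float(state[2, 0]) ** 2 + float(state[3, 0]) ** 2) ** 0.5
        if speed < MIN_SPEED:
            continue

        # 4. Compositing: inside the blob mask, blend this frame's pixels onto
        #    the base image (a stand-in for the actual lighten/darken math).
        mask = np.zeros(fg.shape, np.uint8)
        cv2.drawContours(mask, blobs, -1, 255, -1)
        sel = mask > 0
        base[sel] = 0.5 * base[sel] + 0.5 * frame.astype(np.float32)[sel]

    cap.release()
    cv2.imwrite(out_path, base.astype(np.uint8))


if __name__ == "__main__":
    composite_line("rolls_saturday.mp4")
```

The constant-velocity Kalman model is what makes the speed filtering cheap: the velocity estimate falls directly out of the filter's state vector, so no extra per-frame differencing is needed to tell a rolling buggy from a slow-moving pedestrian.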

