Been reading about "Virtual Visual Servoing" http://rainbow-doc.irisa.fr/pdf/2002_eurographics_marchand.pdf -- maybe the most convoluted way to explain a smoothing/filtering algorithm (?) that I've seen lol
(it's mentioned in https://github.com/Jcparkyn/dpoint as part of their pipeline, for postprocessing the camera tag detection -- although if you look at the source code, they pretty much just call an OpenCV function: https://github.com/Jcparkyn/dpoint/blob/a108c19b9b240c1531b2d30ed47ba18bc604862e/python/app/marker_tracker.py#L166)
The survey paper they wrote 14 years later https://inria.hal.science/hal-01246370v1/document completely drops the "visual servoing" metaphor/terminology and explains it purely as a computation -- you have to dig into the citations to figure out that they're even talking about VVS there
@omar ooo is this a motion smoothing method for tracking stuff?
@arcade Yes! I think we're gonna need it (or something like it) for 3D. It's just not stable enough out of the box (I think there are more degrees of freedom in 3D than in our current planar tracking), so we need a way to integrate the previous frame's pose
@arcade There's also a fundamental ambiguity where the same 2D image of a tag may correspond to two or more discrete poses, so you really want a way to disambiguate
@omar the method looks super sleek! That’d be great
@arcade I think it's not *quite* just motion smoothing, because it's not like you start with a 3D pose detector and then use this as a filter to smooth its output. You start with [a 2D detection] and [the previous frame's estimated 3D pose], and the method gives you [a new estimated 3D pose]. But yeah, it includes smoothing in some sense
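Stripped of the servoing language, that computation is basically Gauss-Newton on reprojection error: start from the previous frame's pose and iteratively nudge it until the tag's 3D corners project onto the detected 2D corners. A self-contained numpy sketch, assuming a simple pinhole camera with no distortion and a finite-difference Jacobian (all the names here are mine, not the paper's notation):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec) / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(pose, obj_pts, K):
    """Project 3D points through pose = (rvec | tvec) and intrinsics K."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = obj_pts @ R.T + t               # points in the camera frame
    uv = cam[:, :2] / cam[:, 2:3]         # perspective divide
    return (uv @ K[:2, :2].T + K[:2, 2]).ravel()

def refine_pose(pose0, obj_pts, img_pts, K, iters=20, h=1e-6):
    """Gauss-Newton on reprojection error, warm-started at pose0."""
    pose = np.asarray(pose0, dtype=float).copy()
    target = np.asarray(img_pts, dtype=float).ravel()
    for _ in range(iters):
        r = project(pose, obj_pts, K) - target    # residual in pixels
        J = np.empty((r.size, 6))
        for i in range(6):                        # numeric Jacobian
            d = np.zeros(6); d[i] = h
            J[:, i] = (project(pose + d, obj_pts, K)
                       - project(pose - d, obj_pts, K)) / (2 * h)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose += step
        if np.linalg.norm(step) < 1e-10:
            break
    return pose
```

The previous frame's pose goes in as `pose0`; a close warm start is exactly what pulls the iteration toward the right one of the ambiguous solutions instead of the flipped one.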
@omar it would help keep the tracking more stable, if I understand it right.