I've been looking at these:
http://www.ok.sc.e.titech.ac.jp/res/PCS/research/procamcalib/
https://github.com/bytedeco/procamcalib
http://www.ok.sc.e.titech.ac.jp/res/PCS/publications/procams2009.pdf
and am pretty excited/optimistic
I've spent a few days looking at / implementing the classic Zhang (1999) camera calibration https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr98-71.pdf and only today got far enough to be like, 'OK, now how do I do projector co-calibration? what does the end-to-end process look like?'
I was worried I'd have to junk my earlier work when I found this paper, but it turns out this is still built on the same Zhang technique (for calibrating based on a known 2D planar pattern)
The 2009 Audet paper is basically an incremental/interactive/hack way to generate sets of valid example points that you then feed to two instances of the 1999 Zhang calibration system (one instance for camera calibration, one instance for projector calibration)
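As I understand it (this is my read of the paper, and the notation below is mine, not theirs), the projector gets treated as an inverse camera: its "example points" come from lifting the camera's detections of the projected markers back onto the board plane, using the per-pose homography you already get from the printed markers:

```latex
% Sketch of where the two point sets come from (my notation):
%   camera:    printed marker j has known board coords M_j, detected at m_{ij} in pose i
%   projector: marker k is rendered at projector pixel p_k and detected by the camera at c_{ik};
%              H_i is the board -> camera-image homography for pose i (from the printed markers)
\begin{align*}
  \text{camera data}    &= \{\, (M_j,\ m_{ij}) \,\}_{i,j} \\
  \hat{M}_{ik}          &= H_i^{-1}\, c_{ik} \qquad \text{(projected marker lifted onto the board plane)} \\
  \text{projector data} &= \{\, (\hat{M}_{ik},\ p_k) \,\}_{i,k}
\end{align*}
```

Both data sets end up the same shape (plane point ↔ pixel), which is why the same Zhang machinery applies to each device unchanged.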
(for calibration, also see https://people.cs.rutgers.edu/~elgammal/classes/cs534/lectures/CameraCalibration-book-chapter.pdf: given example points from N different poses of the calibration pattern, you estimate N homographies, one per pose, then do some linear algebra on those to get an initial guess at the linear pinhole camera parameters, then do a sorta-black-box nonlinear optimization to refine that guess along with the distortion parameters)
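And for my own notes, here's the skeleton of that pipeline as I understand it from the Zhang tech report (standard notation from the report; I'm glossing over the distortion initialization and the exact optimizer details):

```latex
% Each pose i gives a homography H_i = [h_1 h_2 h_3] with H_i ~ K [r_1 r_2 t].
% Because r_1, r_2 are orthonormal, each pose yields two constraints on B = K^{-T} K^{-1}:
\begin{align*}
  h_1^\top B\, h_2 &= 0, &
  h_1^\top B\, h_1 &= h_2^\top B\, h_2.
\end{align*}
% B is symmetric (6 unknowns, up to scale), so N poses stack into a 2N x 6 system V b = 0;
% b is the right singular vector of V for the smallest singular value (needs N >= 3,
% or N = 2 if you fix skew to 0). K comes out of B in closed form, then per pose:
\begin{align*}
  \lambda &= \frac{1}{\lVert K^{-1} h_1 \rVert}, \qquad
  r_1 = \lambda K^{-1} h_1, \quad
  r_2 = \lambda K^{-1} h_2, \quad
  r_3 = r_1 \times r_2, \quad
  t = \lambda K^{-1} h_3.
\end{align*}
% Finally the "black box" part: refine everything (plus radial distortion k_1, k_2,
% typically initialized to 0) by minimizing total reprojection error with Levenberg-Marquardt:
\begin{align*}
  \min_{K,\, k_1,\, k_2,\, \{R_i, t_i\}} \ \sum_{i,j}
    \bigl\lVert\, m_{ij} - \hat{m}(K, k_1, k_2, R_i, t_i, M_j) \,\bigr\rVert^2
\end{align*}
```

at least, that's the shape of it -- the closed-form step is mostly there to give the LM refinement a sane starting point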
honestly, a lot of it is a preference on _my_ end: I don't want to link a giant library like OpenCV, I don't want to write a bunch of Java or Python or C++, and I want to know what's going on -- the AprilTag fiducial marker library is a few files of pure C and we already link it, so it would be really nice to run all traditional CV problems through it, rather than also needing our own checkerboard/line/node/grid/whatever detection pipeline just for calibration
@omar It is perpetually baffling to me that “I want to know what’s going on” isn’t more of a determinant in this field.