One question I have about VR/AR headsets like Apple's expensive new Vision device is why you still need glasses with them. (Apple's Vision device doesn't work over your existing glasses; you have to make an additional purchase to get custom prescription lens inserts from Zeiss.) They're already tracking your eyes with extreme precision, so why can't they digitally adjust the display so it appears in focus for each person's individual eyesight? Isn't this a problem AI could solve?
Looks like someone at MIT did something along these lines about 10 years ago, but there doesn't seem to have been any further news since: https://web.media.mit.edu/~gordonw/VisionCorrectingDisplay/
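For what it's worth, the naive software-only version of the idea ("pre-blur" the image inversely so the eye's blur cancels it out) can be sketched in a few lines. This is just a toy model I put together, not the MIT method or anything Apple does: defocus is approximated as convolution with a disk PSF, and the pre-correction is a Wiener pre-filter. It also hints at why a flat screen alone isn't enough: spatial frequencies the eye's optics destroy can't be restored by pre-filtering, and the pre-corrected image can exceed the display's brightness range, which is why the MIT project used a light-field display instead.

```python
import numpy as np

def defocus_otf(shape, radius):
    # Disk-shaped PSF (a crude stand-in for defocus blur), built with its
    # center at the origin so its FFT needs no shift bookkeeping.
    y = np.fft.fftfreq(shape[0]) * shape[0]
    x = np.fft.fftfreq(shape[1]) * shape[1]
    yy, xx = np.meshgrid(y, x, indexing="ij")
    psf = (xx**2 + yy**2 <= radius**2).astype(float)
    psf /= psf.sum()
    return np.fft.fft2(psf)

def eye_blur(image, otf):
    # Model the defocused eye as convolution with the PSF.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

def wiener_precorrect(target, otf, k=1e-2):
    # Pre-filter the image shown on screen so that, after the eye's
    # blur, it lands closer to the intended target. Note the result can
    # fall outside the display's [0, 1] range -- one reason this naive
    # approach breaks down in practice.
    G = np.fft.fft2(target)
    return np.real(np.fft.ifft2(G * np.conj(otf) / (np.abs(otf) ** 2 + k)))

# Test pattern: 128x128 vertical bars.
target = np.tile(np.kron(np.array([0.0, 1.0] * 8), np.ones(8)), (128, 1))
otf = defocus_otf(target.shape, radius=4)

naive = eye_blur(target, otf)  # what the uncorrected blurry eye sees
corrected = eye_blur(wiener_precorrect(target, otf), otf)

err_naive = np.mean((naive - target) ** 2)
err_corr = np.mean((corrected - target) ** 2)
print(f"MSE without pre-correction: {err_naive:.4f}, with: {err_corr:.4f}")
```

Running it, the pre-corrected image comes out much closer to the target than the uncorrected blur, at least within the frequencies the blur doesn't wipe out entirely.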