I'm new to Isaac Sim and Isaac Lab, and I'm trying to figure out how to integrate a Vision-Language-Action (VLA) model with a robotic arm in simulation. I'd really appreciate any guidance ...
I'm deploying a Go2 policy trained with your official Isaac Lab code on the real robot. The dog's rotation and translation work fine, but the motors trigger safety protection after just a few steps ...