Abstract: Capsule Networks (CapsNet) have emerged as an alternative to conventional convolutional networks, avoiding pooling and improving pose representation to address challenges such as the Picasso problem. However, without pooling, the network does not reduce its number of parameters as it grows deeper, and the routing algorithm lacks efficient implementations, leading to poor training performance. To address this, we propose the Multi-lane Capsule Network (MLCN), a reorganization of the original CapsNet that uses data-independent lanes to compute the final capsule dimensions. MLCN achieves results similar to CapsNet, still maintaining its pose-representation property while training up to 130% faster on a single GPU. Moreover, we show that the lane organization naturally exposes model parallelism, which we explore by providing solutions for scheduling heterogeneous lanes on heterogeneous hardware, for parallelization across GPUs, and for automatic network construction via a resource-aware Neural Architecture Search (NAS). We achieve speedups as high as 7.18× on 8 GPUs, a 2× training speedup with our scheduling solution for heterogeneous lanes, and networks that are 18.6% better with our NAS approach.