Deep neural networks (DNNs) have achieved remarkable accuracy in image classification, a problem central to computer vision and its applications. However, DNNs can be fooled by adversarially perturbed inputs, even though the original and perturbed images are indistinguishable to the human eye. This work investigates iterative training to make DNNs robust against such perturbed inputs. The proposed architecture generates adversarial inputs and uses them to strengthen a target classifier; over successive training iterations, the target classifier is expected to become robust against image perturbations. As future work in a vision application, classifiers in autonomous vehicles could be iteratively trained against adversarial inputs as part of their safety procedures.
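
The iterative scheme described above can be illustrated with a minimal sketch. The abstract does not name a specific attack, so this example assumes FGSM-style perturbations and, for self-containment, uses a toy logistic-regression classifier on synthetic 2-D data rather than a DNN; the loop alternates between generating adversarial inputs against the current classifier and training on them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)), rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)  # classifier weights
b = 0.0          # classifier bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """FGSM-style perturbation (an assumption; the abstract names no attack).
    For logistic loss, the gradient w.r.t. the input is (p - y) * w per sample."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

lr, eps = 0.1, 0.1
for step in range(500):
    # 1) Generate adversarial inputs against the current classifier ...
    X_adv = fgsm(X, y, w, b, eps)
    # 2) ... then train on them, hardening the classifier each iteration.
    p = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

Because the classifier is trained on inputs already perturbed against it, accuracy on adversarial examples stays close to clean accuracy, which is the intended effect of the iterative robustification.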