Visual simultaneous localization and mapping (SLAM) is an emerging technology that enables low-power devices with a single camera to perform robotic navigation. Most visual SLAM algorithms are tuned for images produced by an image signal processing (ISP) pipeline optimized for aesthetically pleasing photography. We investigate the feasibility of varying sensor quantization on RAW images taken directly from the sensor to save energy in visual SLAM. Reducing the quantization level to five bits achieves an 88% energy saving. We also introduce a gradient-based quantization scheme that further increases energy savings. This work opens a new direction in energy-efficient image sensing for SLAM.
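The core idea of reduced sensor quantization can be sketched in software. The snippet below is an illustrative simulation only, not the paper's hardware method: it requantizes a simulated 12-bit RAW frame (the sensor bit depth is an assumption) down to five bits by discarding the least-significant bits.

```python
import numpy as np

def requantize_raw(raw, bits=5, sensor_bits=12):
    """Simulate lower sensor quantization on a RAW frame.

    Illustrative sketch: the paper varies quantization at the sensor;
    here we emulate it by zeroing the low-order bits of 12-bit data.
    """
    shift = sensor_bits - bits
    # Keep only the top `bits` bits of each pixel value.
    return (raw >> shift) << shift

# Hypothetical 12-bit RAW frame (random data, not from the paper).
rng = np.random.default_rng(0)
raw = rng.integers(0, 2**12, size=(4, 4), dtype=np.uint16)
quantized = requantize_raw(raw, bits=5)
```

Each quantized pixel is a multiple of 2^(12-5) = 128, so only 32 distinct levels remain, which is the source of the sensor-side energy reduction studied in the paper.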