Visual Simultaneous Localization and Mapping (V-SLAM) is widely used in construction robots because it is an efficient and inexpensive means of acquiring environmental information. However, low-light construction scenes, such as underground garages or dim indoor spaces, pose significant challenges for V-SLAM detection and positioning: the system struggles to detect enough valid feature points, which can cause navigation to fail. To address this issue, we propose an Unsupervised V-SLAM Light Enhancement Network (UVLE-Net) to enhance low-light images. After enhancement, we add a robust Shi-Tomasi method to ORB-SLAM2 to detect feature points and use a sparse optical flow algorithm to track them. UVLE-Net significantly increases the brightness and contrast of the images, making feature points easier to detect, while the Shi-Tomasi detector and optical flow tracking improve feature point extraction and tracking in low light. To validate the robustness and superiority of our method in low-light conditions, we conduct comparison experiments against other enhancement techniques on published and real-world construction datasets.