Accurately and rapidly obtaining the position of the plasma boundary is a necessary condition for the stable operation of tokamaks. To address the shortcomings of the magnetic measurement methods commonly used in present-day tokamak devices, and to meet the increasingly stringent operational requirements of future fusion devices, research on optical diagnostic methods is needed. Traditional optical methods, however, often rely on complex physical models and manual feature extraction. This paper proposes a deep-learning-based method that reconstructs the plasma boundary position solely from images. For the plasma image recognition task on the Experimental Advanced Superconducting Tokamak (EAST), we used a multiband, high-speed visible endoscope diagnostic system to construct the dataset required for training the neural network models. We propose an improved lightweight U-Net model to identify optical boundaries, introduce the convolutional block attention module (CBAM) to further extract image information, and address overfitting with DropKey regularization. Finally, we use a skeleton refinement algorithm to extract boundary coordinates and map them to the Equilibrium Fitting code (EFIT) reconstruction results using the CatBoost algorithm. The proposed method converts the plasma optical boundary coordinates obtained from visible-light cameras into coordinates in the tokamak poloidal plane, thereby reconstructing the plasma shape on EAST and circumventing the issues encountered by magnetic measurement and traditional optical methods. Experimental results show that, compared with the original U-Net model, the proposed optical boundary recognition model improves the mean accuracy, recall, F1 score, and mean intersection over union on the test set by 3.99%, 8.06%, 2.74%, and 3.73%, respectively, and the average error of the boundary reconstruction relative to the EFIT data is only 6.45 mm.
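To make the attention component concrete, the following is a minimal PyTorch sketch of a CBAM block as commonly defined in the literature (channel attention followed by spatial attention). The module names, reduction ratio, and kernel size are illustrative assumptions, not the exact configuration used in this work.

```python
# Minimal sketch of a CBAM block: channel attention, then spatial attention.
# Layer sizes and hyperparameters are illustrative, not the paper's settings.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Applies channel attention, then spatial attention, to a feature map."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))
```

Such a block is typically inserted after selected convolutional stages of the U-Net encoder or decoder to reweight informative channels and spatial regions.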
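As an illustration of the final mapping step, the sketch below trains CatBoost regressors to map skeleton-extracted optical boundary coordinates (in pixel space) to EFIT boundary coordinates in the poloidal plane. The file names, feature layout, hyperparameters, and the use of two single-target regressors instead of one multi-target model are assumptions for illustration only.

```python
# Hypothetical sketch: mapping optical boundary points (pixel coordinates)
# to EFIT poloidal-plane coordinates (R, Z) with CatBoost.
# File names, array shapes, and hyperparameters are illustrative only.
import numpy as np
from catboost import CatBoostRegressor

# X: (n_points, n_features) pixel-space features, e.g. (row, col) of each
#    skeleton point plus any additional context features.
# y_R, y_Z: EFIT boundary coordinates (metres) used as regression targets.
X = np.load("boundary_pixels.npy")   # hypothetical training features
y_R = np.load("efit_R.npy")          # hypothetical R targets
y_Z = np.load("efit_Z.npy")          # hypothetical Z targets

# One regressor per target keeps the example simple; a single multi-target
# model (loss_function="MultiRMSE") would also work.
model_R = CatBoostRegressor(iterations=500, depth=6, learning_rate=0.1,
                            loss_function="RMSE", verbose=False)
model_Z = CatBoostRegressor(iterations=500, depth=6, learning_rate=0.1,
                            loss_function="RMSE", verbose=False)
model_R.fit(X, y_R)
model_Z.fit(X, y_Z)

# Compare predicted (R, Z) points against the EFIT targets via the mean
# Euclidean error in millimetres (computed on the training set here purely
# for illustration; a held-out test set would be used in practice).
R_pred, Z_pred = model_R.predict(X), model_Z.predict(X)
err_mm = 1000.0 * np.mean(np.hypot(R_pred - y_R, Z_pred - y_Z))
print(f"mean boundary error: {err_mm:.2f} mm")
```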