The data acquisition (DAQ) and management system of the Large Helical Device (LHD), named the LABCOM system, has been under development since 1995. The acquired data have recently grown to 7 gigabytes per shot, ten times larger than estimated before the experiment began. In the 2006 one-hour pulse experiments, 90 gigabytes of data were acquired, a new world record. This data explosion has been enabled by the massively distributed processing architecture and the newly developed capability for real-time streaming acquisition. The former provides linear expandability, because increasing the number of parallel DAQs avoids I/O bottlenecks. The latter improves the per-unit performance from 0.7 megabytes/s with conventional CAMAC digitizers to a nonstop 110 megabytes/s with CompactPCI. The technical goal of the system is to handle one hundred concurrent 100 megabytes/s DAQs, even for steady-state plasma diagnostics. This is comparable to the data production rate of next-generation experiments such as ITER. The LABCOM storage system holds several hundred terabytes in a two-tier structure: the first tier consists of tens of hard-drive arrays, and the second of several Blu-ray Disc libraries. Multiplexed and redundant storage servers are mandatory for higher availability and throughput; together they serve sharable volumes on Red Hat GFS2 cluster file systems. The LABCOM system is used not only for LHD but also for the QUEST and GAMMA10 experiments, forming a new Fusion Virtual Laboratory remote-participation environment that researchers can access regardless of their location.
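To make the scaling argument concrete, the following is a minimal sketch, not the LABCOM implementation, of the parallel-DAQ idea described above: each acquisition unit streams to its own target, so the aggregate rate grows with the number of units rather than contending for a single I/O path. All names and figures in the sketch (N_UNITS, CHUNK_MB, CHUNKS_PER_UNIT) are illustrative assumptions.

```python
import threading

N_UNITS = 4            # parallel DAQ units in this toy run (the stated goal is ~100)
CHUNK_MB = 1           # size of one streamed chunk, in megabytes
CHUNKS_PER_UNIT = 16   # chunks each unit streams

def stream_unit(unit_id: int, results: list) -> None:
    """Simulate one DAQ unit streaming fixed-size chunks to its own store."""
    payload = b"\x00" * (CHUNK_MB * 1024 * 1024)
    written = 0
    for _ in range(CHUNKS_PER_UNIT):
        # A real unit would write to a socket or its dedicated storage path;
        # here we only count the bytes it would have streamed.
        written += len(payload)
    results[unit_id] = written

results = [0] * N_UNITS
threads = [threading.Thread(target=stream_unit, args=(i, results)) for i in range(N_UNITS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total_mb = sum(results) / (1024 * 1024)
print(f"{N_UNITS} independent units streamed {total_mb:.0f} MB with no shared bottleneck")
# Scaling the same arithmetic: 100 units x 100 MB/s each is roughly 10 GB/s
# aggregate, the order of magnitude compared above to experiments such as ITER.
```

The design point the sketch illustrates is that per-unit streams are independent, so the stated goal of one hundred 100 megabytes/s DAQs implies an aggregate on the order of 10 gigabytes/s without any single writer becoming the bottleneck.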