The measurement performance of the baseline system design for the ITER high-frequency magnetic diagnostic has been analyzed using an algorithm based on the sparse representation of signals. This algorithm, derived from the SparSpec code [S. Bourguignon et al., Astron. Astrophys. 462, 379 (2007)], has previously been extensively benchmarked on real and simulated JET data. To optimize the design of the ITER high-frequency magnetic diagnostic, we attempt to reduce false detections of modes and to minimize the sensitivity of the measurements to noise in the data, the loss of faulty sensors, and the displacement of the sensors. Using this approach, the original layout for the ITER high-frequency magnetic diagnostic system, which uses 168 sensors, is found to be inadequate to meet the ITER measurement requirements.
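The sparse-representation approach described above can be illustrated with a minimal sketch. Under the assumption (standard for this class of methods, though the exact SparSpec formulation differs in detail) that each sensor at toroidal angle φ_k sees a superposition of modes exp(i n φ_k), mode detection amounts to an l1-penalized least-squares fit over a dictionary of candidate mode numbers; the penalty suppresses the false detections that a plain periodogram would produce on unevenly spaced data. The angles, mode number, noise level, and regularization weight below are illustrative choices, not ITER design values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 sensors at uneven toroidal angles, one dominant mode n = -7.
n_sensors = 20
phi = np.sort(rng.uniform(0.0, 2 * np.pi, n_sensors))   # uneven angles (rad)
true_n = -7
signal = np.exp(1j * true_n * phi)                       # unit-amplitude mode
signal += 0.05 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))

# Dictionary of candidate mode numbers; columns are unit-norm complex exponentials.
candidates = np.arange(-30, 31)
D = np.exp(1j * np.outer(phi, candidates)) / np.sqrt(n_sensors)

# Sparse fit via iterative soft-thresholding (ISTA) on
#   min_x  ||D x - s||^2 + lam * ||x||_1
lam = 0.5
x = np.zeros(len(candidates), dtype=complex)
step = 1.0 / np.linalg.norm(D, 2) ** 2                   # 1 / Lipschitz constant
for _ in range(500):
    grad = D.conj().T @ (D @ x - signal)
    z = x - step * grad
    mag = np.abs(z)
    # complex soft-threshold: shrink the magnitude, keep the phase
    x = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * np.maximum(mag - step * lam, 0.0)

detected = candidates[np.abs(x) > 0.1 * np.abs(x).max()]
print(detected)   # dominant detected mode numbers
```

Because the penalty drives most coefficients exactly to zero, the fit returns a short list of mode numbers rather than a dense spectrum, which is what makes the false-detection rate a natural optimization target for the sensor layout.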

Based on this analysis, and taking into account the guidelines for the risk mitigation strategies given in the ITER management plan, various attempts at optimization of this diagnostic system have been performed. A revised proposal for its implementation has been developed, which now meets the ITER requirements for measurement performance and risk management. For toroidal mode number detection, this implementation includes two arrays of 50 to 55 sensors and two arrays of 25 to 35 unevenly spaced sensors each on the low-field side, and two arrays of 25 to 35 unevenly spaced sensors each on the high-field side. For poloidal mode number detection, we propose six arrays of 25 to 40 sensors each, located in nonequidistant machine sectors, not covering the divertor region and, possibly, poloidal angles in the range 75 < |θ| (deg) < 105, as this region is the most sensitive to the details of the magnetic equilibrium. In this paper we present a general summary of the results of this work; further details and an overview of our test calculations are reported in the companion paper.
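The emphasis on unevenly spaced sensors in the proposed arrays can be motivated by a short numerical check. With N equally spaced sensors, toroidal mode numbers n and n + N produce identical signals (perfect aliasing), whereas uneven spacing breaks this degeneracy; the array size and mode numbers below are illustrative, not the proposed ITER values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 25  # hypothetical array size, within the 25-to-35 range discussed above

def alias_overlap(phi, n1, n2):
    """Normalized overlap |<e_{n1}, e_{n2}>| of two mode patterns sampled at
    angles phi; a value of 1 means the mode numbers are indistinguishable."""
    c1 = np.exp(1j * n1 * phi)
    c2 = np.exp(1j * n2 * phi)
    return abs(np.vdot(c1, c2)) / len(phi)

phi_even = 2 * np.pi * np.arange(N) / N          # equally spaced sensors
phi_uneven = np.sort(rng.uniform(0, 2 * np.pi, N))  # unevenly spaced sensors

print(alias_overlap(phi_even, 3, 3 + N))    # essentially 1: perfect aliasing
print(alias_overlap(phi_uneven, 3, 3 + N))  # well below 1: degeneracy broken
```

Reducing these overlaps between aliased mode pairs is one concrete way the uneven sensor placement lowers the false-detection rate of the sparse fit.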