Tony Cai and Dongwoo Kim
We first establish the minimax rate of convergence under the spectral norm and propose a rate-optimal estimation procedure. Our findings reveal intriguing phase transition phenomena that highlight the effectiveness of transfer learning and the use of source samples. We then address the problem of adaptation, establishing the adaptive rate of convergence up to a logarithmic factor. Our results show that, in sharp contrast to conventional settings, the cost of adaptation in transfer learning can be substantial in certain cases. We propose a novel data-driven algorithm that dynamically adapts to unknown model parameters. These theoretical insights are further validated by a simulation study, which demonstrates the practicality and efficiency of the proposed adaptive algorithm.