This paper develops a hybrid deep reinforcement learning approach to insurance portfolio management under diffusion models. To address model uncertainty, we adopt the recently developed reinforcement learning framework for exploration and exploitation in continuous-time decision-making. The insurance portfolio management problem is formulated with an entropy-regularized reward function and the corresponding relaxed stochastic controls. To obtain the optimal relaxed stochastic controls, we develop an iterative deep reinforcement learning algorithm based on Markov chain approximation and stochastic approximation, in which the probability distribution of the optimal control is approximated by neural networks. In this hybrid algorithm, the Markov chain approximation method is used to construct initial guesses, and stochastic approximation is used to estimate the parameters of the neural networks. We present a convergence analysis of the algorithm and provide numerical examples to illustrate its performance.
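To make the structure of such a hybrid scheme concrete, the following is a minimal, illustrative sketch in Python (PyTorch/NumPy assumed), not the paper's algorithm or model. The toy surplus dynamics, the parameters mu, sigma, and temp, and the helpers mca_initial_policy and PolicyNet are all hypothetical; the coarse-grid value iteration stands in for the Markov chain approximation step, and a REINFORCE-style gradient ascent on an entropy-regularized objective stands in for the stochastic-approximation update of the network parameters.

```python
import numpy as np
import torch
import torch.nn as nn

# Toy controlled diffusion (hypothetical, not the paper's model):
#   dX_t = u_t * mu * X_t dt + u_t * sigma * X_t dW_t,  control u in [0, 1].
mu, sigma, dt, T = 0.08, 0.2, 0.1, 1.0
n_steps = int(round(T / dt))
temp = 0.1  # entropy-regularization temperature

# --- Step 1: Markov chain approximation on a coarse grid -------------------
# Value iteration on the approximating chain; the greedy control serves as an
# initial guess for the neural-network policy.
x_grid = np.linspace(0.5, 2.0, 31)
u_grid = np.linspace(0.0, 1.0, 11)

def mca_initial_policy():
    V = x_grid.copy()                        # terminal value ~ X_T
    policy = np.zeros_like(x_grid)
    for _ in range(n_steps):
        V_new = np.empty_like(V)
        for i, x in enumerate(x_grid):
            best_val, best_u = -np.inf, 0.0
            for u in u_grid:
                drift, vol = u * mu * x, u * sigma * x
                x_up = np.clip(x + drift * dt + vol * np.sqrt(dt), x_grid[0], x_grid[-1])
                x_dn = np.clip(x + drift * dt - vol * np.sqrt(dt), x_grid[0], x_grid[-1])
                val = 0.5 * (np.interp(x_up, x_grid, V) + np.interp(x_dn, x_grid, V))
                if val > best_val:
                    best_val, best_u = val, u
            V_new[i], policy[i] = best_val, best_u
        V = V_new
    return policy

init_policy = mca_initial_policy()

# --- Step 2: Gaussian relaxed control parameterized by a neural network ----
class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, t, x):
        out = self.body(torch.stack([t, x], dim=-1))
        mean = torch.sigmoid(out[..., 0])                      # control in (0, 1)
        std = torch.nn.functional.softplus(out[..., 1]) + 1e-3
        return mean, std

policy_net = PolicyNet()
opt = torch.optim.Adam(policy_net.parameters(), lr=1e-2)

# Warm start: fit the network mean to the MCA-based initial guess.
xg = torch.tensor(x_grid, dtype=torch.float32)
target = torch.tensor(init_policy, dtype=torch.float32)
for _ in range(200):
    mean, _ = policy_net(torch.zeros_like(xg), xg)
    warm_loss = ((mean - target) ** 2).mean()
    opt.zero_grad(); warm_loss.backward(); opt.step()

# --- Step 3: stochastic-approximation updates of the relaxed control -------
# Sample controls from the Gaussian policy, simulate the diffusion, and ascend
# a REINFORCE-style estimate of the entropy-regularized objective
#   E[X_T] + temp * entropy(policy).
for _ in range(300):
    x = torch.ones(256)
    log_probs, entropies = [], []
    for k in range(n_steps):
        t = torch.full_like(x, k * dt)
        mean, std = policy_net(t, x)
        dist = torch.distributions.Normal(mean, std)
        u = dist.sample().clamp(0.0, 1.0)
        log_probs.append(dist.log_prob(u))
        entropies.append(dist.entropy())
        dW = torch.randn(x.shape[0]) * np.sqrt(dt)
        x = x + u * mu * x * dt + u * sigma * x * dW
    reward = x.detach()
    objective = (reward * torch.stack(log_probs).sum(0)).mean() \
                + temp * torch.stack(entropies).mean()
    opt.zero_grad(); (-objective).backward(); opt.step()
```

In this sketch, the warm start plays the role the abstract assigns to the Markov chain approximation (supplying initial guesses), while the final loop plays the role of the stochastic-approximation updates of the neural-network parameters; the actual reward functional, state dynamics, and convergence conditions are those developed in the paper.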