Abstract:
To address the limitation that a single-network model develops a learning preference for particular data, a crop disease identification method based on multi-model fusion is proposed. The single-model performance of four mainstream convolutional neural networks (ResNet50, DenseNet121, Xception, and MobileNetV2) was first evaluated, and the four models were then combined by multi-model fusion at both the feature level and the decision level to produce the identification output. Feature-level fusion averages, maximizes, or concatenates the final output feature layers of the sub-networks to achieve efficient complementarity among heterogeneous features; decision-level fusion maximizes or averages the output probabilities of the sub-networks to achieve an efficient union of their probability-distribution decisions. Experimental results on the PDR2018 crop disease dataset show that feature-level fusion significantly outperforms both decision-level fusion and single-model methods; among the fusion variants, splicing (concatenation) with feature compression achieves the highest recognition accuracy, 98.44%. Cross-database experiments on a PlantDoc subset and on images of actual field conditions confirm that the feature-fusion method offers better accuracy and generalization than the single-model approach.
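The two fusion strategies summarized above can be illustrated with a minimal PyTorch sketch. The feature dimensionality, number of classes, and random tensors standing in for each sub-network's final feature layer and output logits are illustrative assumptions, not the paper's actual backbones or settings:

```python
import torch
import torch.nn.functional as F

# Illustrative sizes (assumptions, not the paper's settings).
batch, num_classes, feat_dim = 4, 10, 256

# Placeholder final-layer features from four sub-networks, pooled to
# a common dimensionality so element-wise fusion is well defined.
features = [torch.randn(batch, feat_dim) for _ in range(4)]

# Feature-level fusion: average, maximum, or concatenation (splicing).
avg_feat = torch.stack(features).mean(dim=0)        # (batch, feat_dim)
max_feat = torch.stack(features).max(dim=0).values  # (batch, feat_dim)
cat_feat = torch.cat(features, dim=1)               # (batch, 4 * feat_dim)

# The concatenated features are compressed by a linear layer before
# classification (a sketch of the "splicing compression" variant).
compress = torch.nn.Linear(4 * feat_dim, feat_dim)
fused = compress(cat_feat)

# Decision-level fusion: average or maximize per-model probabilities.
logits = [torch.randn(batch, num_classes) for _ in range(4)]
probs = [F.softmax(z, dim=1) for z in logits]
avg_prob = torch.stack(probs).mean(dim=0)
max_prob = torch.stack(probs).max(dim=0).values
pred = avg_prob.argmax(dim=1)  # fused class decision per sample
```

In this sketch, feature-level fusion operates before the classifier and so lets a single classification head learn from the complementary heterogeneous features, whereas decision-level fusion only combines the finished probability outputs of the independently trained sub-networks.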