In Neural Network models, the knowledge synthesized during training is represented subsymbolically (weights, kernels, combinations of numerical descriptions), which makes it difficult to interpret. Interpreting the internal representation of a successful Neural Network can help to understand the nature of the problem and its solution, so that the neural "model" becomes a tool that yields insight into the problem being solved, rather than a solving mechanism treated as a black box. The internal representation used by the family of kernel-based Neural Networks (including Radial Basis Functions, Support Vector Machines, Coulomb potential methods, and some probabilistic Neural Networks) can be viewed as a set of positive instances of classification and therefore used to derive fuzzy rules suitable for explanation or inference. The probabilistic nature of kernel-based Neural Networks is captured by the membership functions associated with the components of the extracted rules. In this work we propose a method to extract fuzzy rules from trained Neural Networks of this family, and we compare the quality of the knowledge extracted by different methods on well-known machine learning benchmarks.
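As a minimal sketch of the idea described above — each kernel unit of a trained RBF-style network read as a positive prototype, with its Gaussian kernel reinterpreted as per-feature membership functions of a fuzzy rule — the following illustration makes the mapping concrete. The function names (`rbf_to_fuzzy_rules`, `membership`) and the toy centers/widths are hypothetical, not taken from the paper's method; they only show the general kernel-to-rule correspondence under the assumption of axis-aligned Gaussian kernels.

```python
import numpy as np

def rbf_to_fuzzy_rules(centers, widths, class_labels, feature_names):
    """Turn each RBF hidden unit into one fuzzy rule.

    Each kernel center acts as a prototype (a positive instance of its
    class); the Gaussian kernel along each input dimension becomes the
    membership function of the corresponding rule antecedent.
    """
    rules = []
    for c, s, y in zip(centers, widths, class_labels):
        antecedents = [
            # antecedent "feature is near c_j", with membership
            # mu(x) = exp(-((x - c_j)^2) / (2 * s_j^2))
            (name, float(cj), float(sj))
            for name, cj, sj in zip(feature_names, c, s)
        ]
        rules.append({"if": antecedents, "then": y})
    return rules

def membership(x, c, s):
    # Gaussian membership function derived from the kernel shape
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

# Toy example: two kernels in a two-feature problem (illustrative values)
centers = np.array([[0.2, 0.8], [0.7, 0.3]])
widths = np.array([[0.1, 0.1], [0.2, 0.15]])
rules = rbf_to_fuzzy_rules(centers, widths,
                           ["class_A", "class_B"], ["x1", "x2"])
print(len(rules))                  # 2 rules, one per kernel unit
print(membership(0.2, 0.2, 0.1))   # 1.0 at the prototype itself
```

Reading each kernel this way, rule degrees of fulfilment mirror the kernel activations, which is how the probabilistic character of the network carries over to the membership functions of the extracted rules.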