2021-12-01 01:45 AM
Your problem may be related to an "overfitting" issue, which is a quite common mistake in decision tree generation. When you select the features used to detect your scenario, be careful to choose a good number of them (good = not too many, not too few). Choosing too few features can lead to many false positives, while choosing way too many features can lead to overfitting: this means the decision tree is tailored to recognize only the training dataset, and any other dataset (even a similar one) will most probably not be recognized correctly. To solve it, try selecting fewer features and building the decision tree again, to see if the results match your expectations. Other methods that help avoid overfitting are collecting more data logs and pruning the decision tree. You can find more information on feature selection and other methods to avoid overfitting in our design tip document DT0139.
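To make the overfitting/pruning point concrete, here is a minimal sketch using scikit-learn (not the ST tooling referenced above) on synthetic data standing in for your logged features. It is an illustration under assumed data, not your actual pipeline: an unconstrained tree memorizes the training set, while a pruned (depth-limited) tree is far smaller and typically generalizes better.

```python
# Sketch: how pruning a decision tree counters overfitting.
# Assumes scikit-learn; the synthetic dataset is a stand-in for real logs.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data: 20 candidate features, only 5 actually informative.
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unpruned tree: grows until it fits the training data perfectly.
full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Pruned tree: limiting depth (ccp_alpha is another option) trades a
# little training accuracy for a much simpler, more general model.
pruned = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("unpruned: leaves=%d train=%.2f test=%.2f"
      % (full.get_n_leaves(), full.score(X_tr, y_tr), full.score(X_te, y_te)))
print("pruned:   leaves=%d train=%.2f test=%.2f"
      % (pruned.get_n_leaves(), pruned.score(X_tr, y_tr), pruned.score(X_te, y_te)))
```

Running this, the unpruned tree scores 100% on the training split with many more leaves than the pruned one, which is exactly the memorization symptom described above; comparing the test scores shows whether pruning helped on your data.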