This paper investigates the classification of emotional states using speech features from different feature groups. We use both suprasegmental feature groups, such as pitch, energy, and duration, and segmental feature groups, such as voice quality, zero-crossing rate, and articulation. By selecting the most relevant features from these groups, we aim to gain a better understanding of speaker-independent emotion recognition and to study how the feature groups overlap or complement each other. Using the sequential floating forward selection (SFFS) algorithm, we generate feature subsets that maximize the classification rate, employing a Bayesian classifier and speaker-independent cross-validation. We also present a detailed study of the relevance of the feature groups for classifying the emotion dimensions known from psychological emotion research.
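The wrapper-based selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Gaussian naive Bayes model as the Bayesian classifier and leave-one-speaker-out folds as the speaker-independent cross-validation, and it runs on synthetic data with hypothetical speaker labels.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def subset_score(X, y, groups, feats):
    # Mean accuracy over leave-one-speaker-out folds for one feature subset.
    cv = LeaveOneGroupOut()
    return cross_val_score(GaussianNB(), X[:, feats], y,
                           groups=groups, cv=cv).mean()

def sffs(X, y, groups, k):
    """Sequential floating forward selection up to k features (sketch)."""
    selected = []
    best = {}  # best score observed for each subset size
    while len(selected) < k:
        # Forward step: add the feature that raises the score most.
        candidates = [f for f in range(X.shape[1]) if f not in selected]
        scores = [subset_score(X, y, groups, selected + [f])
                  for f in candidates]
        selected.append(candidates[int(np.argmax(scores))])
        best[len(selected)] = max(scores)
        # Conditional backward steps: drop a feature only if the reduced
        # subset beats the best score seen so far at that smaller size.
        while len(selected) > 2:
            drop_scores = [subset_score(X, y, groups,
                                        [g for g in selected if g != f])
                           for f in selected]
            i = int(np.argmax(drop_scores))
            if drop_scores[i] > best.get(len(selected) - 1, -np.inf):
                del selected[i]
                best[len(selected)] = drop_scores[i]
            else:
                break
    return selected, best[len(selected)]

# Synthetic demo: 200 utterances, 6 features, 5 hypothetical speakers.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)        # two toy emotion classes
groups = rng.integers(0, 5, n)   # speaker label per utterance
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y               # make feature 0 clearly informative
feats, acc = sffs(X, y, groups, k=2)
print(feats, round(acc, 2))
```

The floating (backward) step is what distinguishes SFFS from plain forward selection: once a newly added feature makes an earlier choice redundant, the earlier feature can be discarded, which matters when feature groups overlap in the information they carry.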