Abstract:
Two field studies demonstrated that majority and minority size moderate perceived group variability. In Study 1 we found an outgroup homogeneity (OH) effect for female nurses in the majority, but an ingroup homogeneity (IH) effect for a token minority of male nurses. In Study 2 we found similar effects in a different setting: an OH effect for policemen in the majority and an IH effect for policewomen in the minority. Although measures of visibility, status, and, especially, familiarity tended to show the same pattern as perceived variability, there was no evidence that they mediated perceived dispersion. Results are discussed in terms of group size, rather than gender, being the moderator of perceived variability, and with reference to Kanter's (1977a, 1977b) theory of group proportions.
Abstract:
Automatic gender classification has many security and commercial applications. Various modalities have been investigated for gender classification, with face-based classification being the most popular. In some real-world scenarios, however, the face may be partially occluded. In these circumstances, classification must be based on individual parts of the face, known as local features. We investigate gender classification using lip movements and show, for the first time, that important gender-specific information can be obtained from the way a person moves their lips during speech. Furthermore, our study indicates that lip dynamics during speech provide greater gender-discriminative information than lip appearance alone. We also show that lip dynamics and appearance contain complementary gender information, such that a model capturing both traits gives the highest overall classification result. We use Discrete Cosine Transform based features and Gaussian Mixture Modelling to model lip appearance and dynamics, and employ the XM2VTS database for our experiments. Our experiments show that a model capturing lip dynamics along with appearance improves gender classification rates by 16-21% compared with models of lip appearance alone.
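The pipeline described in this abstract (DCT-based features scored against per-class Gaussian Mixture Models) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic "lip patches", feature dimensionality, and GMM settings are all assumptions made here for the sake of a runnable example.

```python
# Sketch: 2-D DCT features + one GMM per gender class, classify by likelihood.
# Synthetic data stands in for real lip-region images (assumption, not the
# paper's XM2VTS setup).
import numpy as np
from scipy.fft import dctn
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def dct_features(lip_patch, n_coeffs=20):
    """2-D DCT of a lip-region patch; keep low-frequency coefficients."""
    coeffs = dctn(lip_patch, norm="ortho")
    return coeffs.flatten()[:n_coeffs]

def make_patches(offset, n=100):
    """Synthetic 16x16 'lip patches' for one class (illustrative only)."""
    return [rng.normal(offset, 1.0, size=(16, 16)) for _ in range(n)]

male_feats = np.array([dct_features(p) for p in make_patches(0.0)])
female_feats = np.array([dct_features(p) for p in make_patches(0.5)])

# One GMM per class, as in the abstract's modelling approach.
gmm_m = GaussianMixture(n_components=2, random_state=0).fit(male_feats)
gmm_f = GaussianMixture(n_components=2, random_state=0).fit(female_feats)

def classify(patch):
    """Assign the class whose GMM gives the higher log-likelihood."""
    x = dct_features(patch).reshape(1, -1)
    return "male" if gmm_m.score(x) > gmm_f.score(x) else "female"

print(classify(make_patches(0.0, 1)[0]))
```

Extending this sketch to the paper's dynamic features would mean computing DCT coefficients per video frame and modelling their temporal trajectories, rather than a single static patch.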