536 results for Collectionwise Normality
Abstract:
QUESTION UNDER STUDY: Emergency room (ER) interpretation of the ECG is critical to assessment of patients with acute coronary syndromes (ACS). Our aim was to assess its reliability in our institution, a tertiary teaching hospital. METHODS: Over a 6-month period all consecutive patients admitted for ACS were included in the study. ECG interpretation by emergency physicians (EPs) was recorded on a preformatted sheet and compared with the interpretation of two specialist physicians (SPs). Discrepancies between the 2 specialists were resolved by an ECG specialist. RESULTS: Over the 6-month period, 692 consecutive patients were admitted with suspected ACS. ECG interpretation was available in 641 cases (93%). Concordance between SPs was 87%. Interpretation of normality or abnormality of the ECG was concordant between EPs and SPs in 475 cases (74%, kappa = 0.51). Interpretation of ischaemic modifications was concordant in 69% of cases, and as many ST segment elevations were unrecognised as overdiagnosed (5% each). The same findings occurred for ST segment depressions and negative T waves (12% each). CONCLUSIONS: Interpretation of the ECG recorded during ACS by 2 SPs was discrepant in 13% of cases. Similarly, EP interpretation was discrepant from SP interpretation in 25% of cases, equally distributed between over- and underdiagnosing of ischaemic changes. The clinical implications and impact of medical education on ECG interpretation require further study.
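The concordance figures above rest on Cohen's kappa, which discounts the agreement expected by chance. The short Python sketch below shows the calculation on a hypothetical 2x2 normal/abnormal table; the counts are illustrative, not the study's data, and merely show how a kappa near 0.5 can coexist with roughly 74% raw agreement, the same order as the reported result.

```python
# Illustrative computation of Cohen's kappa for two raters classifying ECGs as
# normal or abnormal. The counts below are hypothetical (not the study's data);
# with these numbers raw agreement is about 74% and kappa comes out near 0.5.
import numpy as np

def cohens_kappa(table):
    """table[i, j] = cases rated category i by rater 1 and category j by rater 2."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    observed = np.trace(table) / n                               # raw agreement
    expected = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical 2x2 table: rows = emergency physician, columns = specialist.
table = [[250, 80],    # EP says "normal":   250 specialist agrees, 80 disagrees
         [85, 226]]    # EP says "abnormal":  85 specialist disagrees, 226 agrees
print(round(cohens_kappa(table), 2))   # ~0.48 with these made-up counts
```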
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
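The new line-fitting technique itself is not described in this abstract. As a minimal stand-in, the sketch below fits an allometric scaling line with the Theil-Sen estimator, a standard non-parametric method that likewise avoids normality assumptions and resists outliers; the simulated gestation-period data and the scipy-based implementation are illustrative and are not the authors' technique.

```python
# Minimal stand-in for a non-parametric, outlier-resistant allometric line fit:
# the Theil-Sen estimator on log-transformed data. Data below are simulated.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)
log_body_mass = rng.uniform(1, 6, size=200)                      # log10 body mass
log_gestation = 0.25 * log_body_mass + 1.0 + rng.normal(0, 0.1, 200)
log_gestation[:5] += 1.5                                         # a few gross outliers

# Allometric model: log(gestation) = slope * log(body mass) + intercept.
slope, intercept, lo, hi = theilslopes(log_gestation, log_body_mass)
print(f"scaling exponent ~ {slope:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```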
Abstract:
Conflicts over human rights in relations between East Asia and the West have increased since the end of the Cold War. Western governments express concern about human rights standards in East Asian countries. In the East, these expressions have been perceived as interference in internal affairs. Due to dramatic economic development, East Asian nations have recently gained in pride and self-confidence as global actors. Such development is observed with suspicion in the West. Concerned about the decline of global U.S. influence, some American scholars have re-invented the notion of "culture" to point at an alleged East Asian threat. East Asian statesmen also use the cultural argument by claiming the existence of so-called 'Asian values', which they allege are the key to Eastern economic success. This thesis argues that issues of human rights in East-West relations are not only a consequence of well-intended concern by Western governments regarding the human rights and welfare of the citizens of East Asian nations, but are in fact dominated by and used as a pawn in the interplay of more complicated questions of global power and economic relations between East and West. The thesis reviews the relevance of culture in East-West relations. In the West, Samuel P. Huntington, with his prediction of a Clash of Civilizations, stands out in particular. Singapore's Lee Kuan Yew has been very vocal on the Eastern side. Whereas the West tries to cope with its declining global influence, the East, after hundreds of years under Western hegemony, believes in an Asian way of development without interference from the West. Most of this dispute revolves around the issue of human rights. The West claims the universality of rights while in fact emphasizing political and civil rights. Western countries criticize poor human rights standards in East Asia. The East, in turn, accuses the West of hypocritical policies that seek global dominance. East Asian governments assert that, being at a different stage of development, they must first stress their right to development in order to assure stability. China in particular argues this way. The country's leadership, however, shows concern about human rights and has already improved its human rights record over the past years. This thesis analyses the dispute over human rights in a case study of Germany and China. Both countries have a mutual interest in trade relations, which has conflicted with Germany's criticism of China's problematic human rights record. In 1996, the two countries clashed after the German parliament passed a resolution condemning China's treatment of Tibet. This caused considerable damage to the Chinese-German relationship, which returned to normality over the course of the year. In the light of these frictions, a German human rights policy that focuses on unspectacular grass-roots support of China, for example in strengthening China's legal system, would be preferable. Such co-operation must be based on mutual respect.
Abstract:
Previous studies have shown that adults and 8-year-olds process faces using norm-based coding and that prolonged exposure to one kind of facial distortion (e.g., compressed features) temporarily shifts the prototype, a process called adaptation, making similarly distorted faces appear more attractive (Anzures et al., 2009; Valentine, 1999; Webster & MacLin, 1999). Aftereffects provide evidence that our prototype is continually updated by experience. When adults are adapted to two face categories (e.g., Caucasian and Chinese; male and female) distorted in opposing directions (e.g., expanded vs. compressed), their attractiveness ratings shift in opposite directions (Bestelmeyer et al., 2008; Jaquet et al., 2007), indicating that adults have dissociable prototypes for some face categories. I created a novel method to investigate whether children show opposing aftereffects. Children and adults were adapted to Caucasian and Chinese faces distorted in opposite directions in the context of a computerized storybook. When testing adults to validate my method, I discovered that opposing aftereffects are contingent on how participants categorize faces and that this categorization is dependent on the context in which adapting stimuli are presented. Opposing aftereffects for Caucasian and Chinese faces were evident when the salience of race was exaggerated by presenting faces in the context of racially segregated birthday parties; expanded faces were selected as most normal more often for the race of face that was expanded during adaptation than for the race of face that was compressed. However, opposing aftereffects were not evident when members of the two groups were presented engaging in cooperative social interactions at a racially integrated birthday party. Using the storybook that emphasized face race, I provide the first evidence that 8-year-olds demonstrate opposing aftereffects for two face categories defined by race, both when judging face normality and when rating attractiveness.
Abstract:
The present set of experiments was designed to investigate the organization and refinement of young children's face space. Past research has demonstrated that adults encode individual faces in reference to a distinct face prototype that represents the average of all faces ever encountered. The prototype is not a static abstracted norm but rather a malleable face average that is continuously updated by experience (Valentine, 1991); for example, following prolonged viewing of faces with compressed features (a technique referred to as adaptation), adults rate similarly distorted faces as more normal and more attractive (simple attractiveness aftereffects). Recent studies have shown that adults possess category-specific face prototypes (e.g., based on race, sex). After viewing faces from two categories (e.g., Caucasian/Chinese) that are distorted in opposite directions, adults' attractiveness ratings simultaneously shift in opposite directions (opposing aftereffects). The current series of studies used a child-friendly method to examine whether, like adults, 5- and 8-year-old children show evidence for category-contingent opposing aftereffects. Participants were shown a computerized storybook in which Caucasian and Chinese children's faces were distorted in opposite directions (expanded and compressed). Both before and after adaptation (i.e., reading the storybook), participants judged the normality/attractiveness of a small number of expanded, compressed, and undistorted Caucasian and Chinese faces. The method was first validated by testing adults (Experiment 1) and was then refined in order to test 8-year-old (Experiment 2) and 5-year-old (Experiment 4a) children. Five-year-olds (our youngest age group) were also tested in a simple aftereffects paradigm (Experiment 3) and with male and female faces distorted in opposite directions (Experiment 4b). The current research is the first to demonstrate evidence for simple attractiveness aftereffects in children as young as 5, thereby indicating that, similar to adults, 5-year-olds utilize norm-based coding. Furthermore, this research provides evidence for race-contingent opposing aftereffects in both 5- and 8-year-olds; however, the opposing aftereffects demonstrated by 5-year-olds were driven largely by simple aftereffects for Caucasian faces. The lack of simple aftereffects for Chinese faces in 5-year-olds may be reflective of young children's limited experience with other-race faces and suggests that children's face space undergoes a period of increasing differentiation over time with respect to race. Lastly, we found no evidence for sex-contingent opposing aftereffects in 5-year-olds, which suggests that young children do not rely on a fully adult-like face space even for highly salient face categories (i.e., male/female) with which they have comparable levels of experience.
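As a rough illustration of the logic of simple versus opposing aftereffects described above, the sketch below scores hypothetical pre/post normality ratings; the numbers and the scoring rule are illustrative only and are not the dissertation's actual analysis.

```python
# One plausible way to score category-contingent opposing aftereffects from
# pre/post normality ratings of EXPANDED test faces (hypothetical 1-7 ratings).
ratings = {
    "adapted_expanded":   {"pre": 3.1, "post": 4.4},   # category adapted to expansion
    "adapted_compressed": {"pre": 3.0, "post": 2.6},   # category adapted to compression
}

# Simple aftereffect per category: post - pre shift in rated normality.
simple = {k: v["post"] - v["pre"] for k, v in ratings.items()}

# Opposing aftereffect: the two categories shift in opposite directions,
# so their difference indexes category-contingent adaptation.
opposing = simple["adapted_expanded"] - simple["adapted_compressed"]
print(simple, round(opposing, 2))
```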
Abstract:
Estee Klar is the founder and executive director of The Autism Acceptance Project, an organization that strives to support people with autism by promoting acceptance and inclusion of these individuals. She is the mother of a son, Adam, who has autism, and writes about her experiences with him on her blog, found at http://www.esteeklar.com. She also writes about issues concerning autism in the areas of human rights, law, and social justice, and has contributed to several books, including The Thinking Person's Guide to Autism, Between Interruptions: Thirty Women Tell the Truth about Motherhood, and Concepts of Normality: The Autistic and Typical Spectrum. Currently, she is a Ph.D. candidate in Critical Disability Studies at York University, as well as a writer and freelance curator of art.
Abstract:
Adults code faces in reference to category-specific norms that represent the different face categories encountered in the environment (e.g., race, age). Reliance on such norm-based coding appears to aid recognition, but few studies have examined the development of separable prototypes and the way in which experience influences the refinement of the coding dimensions associated with different face categories. The present dissertation was thus designed to investigate the organization and refinement of face space and the role of experience in shaping sensitivity to its underlying dimensions. In Study 1, I demonstrated that face space is organized with regard to norms that reflect face categories that are both visually and socially distinct. These results provide an indication of the types of category-specific prototypes that can conceivably exist in face space. Study 2 was designed to investigate whether children rely on category-specific prototypes and the extent to which experience facilitates the development of separable norms. I demonstrated that unlike adults and older children, 5-year-olds rely on a relatively undifferentiated face space, even for categories with which they receive ample experience. These results suggest that the dimensions of face space undergo significant refinement throughout childhood; 5 years of experience with a face category is not sufficient to facilitate the development of separable norms. In Studies 3 through 5, I examined how early and continuous exposure to young adult faces may optimize the face processing system for the dimensions of young relative to older adult faces. In Study 3, I found evidence for a young adult bias in attentional allocation among young and older adults. However, whereas young adults showed an own-age recognition advantage, older adults exhibited comparable recognition for young and older faces. These results suggest that despite the significant experience that older adults have with older faces, the early and continuous exposure they received with young faces continues to influence their recognition, perhaps because face space is optimized for young faces. In Studies 4 and 5, I examined whether sensitivity to deviations from the norm is superior for young relative to older adult faces. I used normality/attractiveness judgments as a measure of this sensitivity; to examine whether biases were specific to norm-based coding, I asked participants to discriminate between the same faces. Both young and older adults were more accurate when tested with young relative to older faces—but only when judging normality. Like adults, 3- and 7-year-olds were more accurate in judging the attractiveness of young faces; however, unlike adults, this bias extended to the discrimination task. Thus by 3 years of age children are more sensitive to differences among young relative to older faces, suggesting that young children's perceptual system is more finely tuned for young than older adult faces. Collectively, the results of this dissertation help elucidate the development of category-specific norms and clarify the role of experience in shaping sensitivity to the dimensions of face space.
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
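A minimal sketch of the Monte Carlo test idea invoked above, under the assumption that the statistic is nuisance-parameter-free under the null: simulate its exact null distribution and compute the MC p-value as (1 + number of simulated values at least as large as the observed one)/(N + 1). The toy statistic, sample size and data below are illustrative, not the paper's MLR setting.

```python
# Sketch of an exact Monte Carlo (MC) test for a pivotal statistic.
import numpy as np

def mc_pvalue(observed_stat, simulate_stat, n_rep=99, rng=None):
    """Exact MC p-value: (1 + #{simulated >= observed}) / (n_rep + 1).
    The test has exact level alpha whenever alpha * (n_rep + 1) is an integer."""
    rng = np.random.default_rng(rng)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= observed_stat)) / (n_rep + 1)

# Toy example: test H0: mean = 0 for an i.i.d. normal sample of size 20 using
# |t|, whose null distribution does not depend on the unknown variance
# (the nuisance parameter), so it can be simulated with unit variance.
rng = np.random.default_rng(1)
x = rng.normal(0.4, 1.0, size=20)                 # data actually drawn off the null
t_obs = abs(x.mean() / (x.std(ddof=1) / np.sqrt(len(x))))

def simulate_t(r):
    z = r.normal(0.0, 1.0, size=20)               # any variance gives the same law
    return abs(z.mean() / (z.std(ddof=1) / np.sqrt(len(z))))

print(mc_pvalue(t_obs, simulate_t, n_rep=99))
```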
Abstract:
In this paper, we analyze recent developments in econometrics in the light of the theory of statistical tests. We first review some basic principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most commonly used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which test procedures are routinely proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze several specific cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for non-parametric hypotheses, including procedures robust to heteroskedasticity, non-normality or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we stress the importance of formulating testable hypotheses and models and of proposing econometric techniques whose properties can be demonstrated in finite samples.
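For concreteness, the "robust to heteroskedasticity" procedures mentioned in point (2) typically take the form of White-type covariance corrections for OLS; the sketch below computes HC0 standard errors on simulated data. It is offered only as an illustration of the class of procedures under discussion, since the paper's point is precisely that the finite-sample validity of such corrections cannot be taken for granted.

```python
# Minimal sketch of White's heteroskedasticity-consistent (HC0) covariance
# estimator for OLS, on simulated heteroskedastic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 3, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + x, size=n)    # heteroskedastic errors

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * e[:, None] ** 2)                       # sum_i e_i^2 x_i x_i'
cov_hc0 = XtX_inv @ meat @ XtX_inv                       # White (HC0) covariance
print(beta, np.sqrt(np.diag(cov_hc0)))                   # coefficients and robust SEs
```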
Abstract:
Several authors have recently discussed the limited dependent variable regression model with serial correlation between residuals. The pseudo-maximum likelihood estimators obtained by ignoring serial correlation altogether have been shown to be consistent. We present alternative pseudo-maximum likelihood estimators which are obtained by ignoring serial correlation only selectively. Monte Carlo experiments on a model with first-order serial correlation suggest that our alternative estimators have substantially lower mean-squared errors in medium-size and small samples, especially when the serial correlation coefficient is high. The same experiments also suggest that the true level of the confidence intervals established with our estimators by assuming asymptotic normality is somewhat lower than the intended level. Although the paper focuses on models with only first-order serial correlation, the generalization of the proposed approach to serial correlation of higher order is also discussed briefly.
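The coverage distortion mentioned above can be illustrated with a small Monte Carlo check: nominal 95% confidence intervals built from asymptotic normality while ignoring serial correlation cover the true value less often than intended. The AR(1) design below is illustrative, not the paper's limited dependent variable model.

```python
# Monte Carlo coverage check for a nominal 95% confidence interval that
# ignores serial correlation (here, an AR(1) series with rho = 0.8).
import numpy as np

rng = np.random.default_rng(0)
rho, n, n_rep, true_mean = 0.8, 200, 2000, 0.0
covered = 0
for _ in range(n_rep):
    e = rng.normal(size=n)
    y = np.empty(n)
    y[0] = e[0] / np.sqrt(1 - rho**2)            # stationary start
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    se_iid = y.std(ddof=1) / np.sqrt(n)          # standard error ignoring correlation
    ci = (y.mean() - 1.96 * se_iid, y.mean() + 1.96 * se_iid)
    covered += ci[0] <= true_mean <= ci[1]
print("empirical coverage:", covered / n_rep)    # well below 0.95 when rho is high
```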
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish the direct links between the usual parametric estimation methods, namely, the QMLE, the GMM and the M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM one. A Monte Carlo analysis confirms the relevance of our approach, in particular, the importance of skewness.
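A minimal sketch of the univariate Gaussian QMLE discussed above: a model specified only through its conditional mean and conditional variance, estimated by maximizing a Gaussian quasi-likelihood even though the errors are not normal. The parametric forms, data and optimizer below are illustrative assumptions, not the paper's model.

```python
# Gaussian QMLE for a model with conditional mean a + b*x and conditional
# variance exp(c + d*x), fitted to data with skewed (non-normal) errors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 2, n)
mean = 1.0 + 0.5 * x
var = np.exp(-1.0 + 0.8 * x)
y = mean + np.sqrt(var) * (rng.chisquare(4, n) - 4) / np.sqrt(8)  # skewed errors

def neg_quasi_loglik(theta):
    a, b, c, d = theta
    m = a + b * x                    # conditional mean
    v = np.exp(c + d * x)            # conditional variance (kept positive)
    return 0.5 * np.sum(np.log(v) + (y - m) ** 2 / v)

fit = minimize(neg_quasi_loglik, np.zeros(4), method="BFGS")
print(fit.x)   # roughly recovers (1.0, 0.5, -1.0, 0.8) despite the non-normal errors
```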
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
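In the Gaussian case the paper's tests correspond to the Gibbons, Ross and Shanken (GRS) mean-variance efficiency test of zero intercepts in the MLR of excess returns on the market. The sketch below computes one common form of the GRS F-statistic on simulated data; the dimensions and return-generating process are illustrative assumptions.

```python
# GRS-type test of zero intercepts (alphas) in a one-factor MLR, on simulated data.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
T, N = 120, 5
mkt = rng.normal(0.006, 0.045, T)                        # market excess returns
betas = rng.uniform(0.5, 1.5, N)
resid = rng.normal(0.0, 0.02, (T, N))
R = mkt[:, None] * betas + resid                         # CAPM holds: alphas = 0

X = np.column_stack([np.ones(T), mkt])                   # MLR regressors
B = np.linalg.lstsq(X, R, rcond=None)[0]                 # OLS, equation by equation
alpha = B[0]                                             # intercept estimates
E = R - X @ B
Sigma = E.T @ E / T                                      # ML residual covariance
mu_m, var_m = mkt.mean(), mkt.var()                      # ML factor moments

grs = ((T - N - 1) / N) * (alpha @ np.linalg.solve(Sigma, alpha)) / (1 + mu_m**2 / var_m)
pval = f.sf(grs, N, T - N - 1)                           # F(N, T-N-1) under normality
print(round(grs, 3), round(pval, 3))
```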
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the i.i.d. error assumption, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
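A minimal sketch of the maximized Monte Carlo (MMC) idea used for the non-pivotal tests: compute the MC p-value for each candidate value of the nuisance parameter and take the maximum, rejecting only when the maximized p-value falls below the chosen level. The toy statistic and nuisance grid below are illustrative, not the paper's multivariate procedures.

```python
# Sketch of an MMC p-value: MC p-values are maximized over a nuisance grid.
import numpy as np

def mc_pvalue(stat_obs, simulate, n_rep, rng):
    sims = np.array([simulate(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Toy test of zero mean for a Student-t sample whose degrees of freedom (nu)
# are an unknown nuisance parameter affecting the null distribution of |t|.
rng = np.random.default_rng(0)
x = rng.standard_t(df=6, size=30) * 0.9
stat_obs = abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))

def make_sim(nu):
    def sim(r):
        z = r.standard_t(df=nu, size=30)
        return abs(z.mean()) / (z.std(ddof=1) / np.sqrt(len(z)))
    return sim

nu_grid = [3, 4, 6, 10, 30]                      # candidate nuisance values
pvals = {nu: mc_pvalue(stat_obs, make_sim(nu), 99, np.random.default_rng(nu))
         for nu in nu_grid}
mmc_pvalue = max(pvals.values())                 # maximize over the nuisance grid
print(pvals, mmc_pvalue)
```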
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, from which exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
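A minimal sketch, in the spirit of the Monte Carlo goodness-of-fit tests described above, of checking a candidate stable error law: compare sample quantiles with the candidate law's quantiles and calibrate the distance statistic by simulation. The stable parameters are fixed here for simplicity (in the paper they are treated as nuisance parameters), and the quantile-distance statistic itself is an illustrative choice.

```python
# Monte Carlo goodness-of-fit check against a candidate stable law.
import numpy as np
from scipy.stats import levy_stable

alpha, beta = 1.7, -0.2               # candidate tail-index and asymmetry parameters
probs = np.linspace(0.05, 0.95, 19)

# Reference quantiles of the candidate law, approximated from a large simulated
# sample (avoids the slow numerical quantile function of levy_stable).
ref = levy_stable.rvs(alpha, beta, size=100_000, random_state=np.random.default_rng(1))
ref_q = np.quantile(ref, probs)

def distance(sample):
    """Max absolute gap between sample quantiles and the candidate law's quantiles."""
    return np.max(np.abs(np.quantile(sample, probs) - ref_q))

rng = np.random.default_rng(0)
data = levy_stable.rvs(alpha, beta, size=250, random_state=rng)   # "observed" series
d_obs = distance(data)

# Monte Carlo calibration of the distance statistic under the candidate law.
n_rep = 99
sims = [distance(levy_stable.rvs(alpha, beta, size=250, random_state=rng))
        for _ in range(n_rep)]
pval = (1 + sum(s >= d_obs for s in sims)) / (n_rep + 1)
print(round(d_obs, 3), pval)
```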
Abstract:
In this paper, we study the asymptotic distribution of a simple two-stage (Hannan-Rissanen-type) linear estimator for stationary invertible vector autoregressive moving average (VARMA) models in the echelon form representation. General conditions for consistency and asymptotic normality are given. A consistent estimator of the asymptotic covariance matrix of the estimator is also provided, so that tests and confidence intervals can easily be constructed.
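A minimal sketch of a two-stage Hannan-Rissanen-type linear estimator for a simple VARMA(1,1) system, ignoring the echelon-form identification details handled in the paper: a long VAR fitted by OLS supplies residuals that proxy the innovations, and a second OLS regression on lagged observations and lagged residuals recovers the autoregressive and moving-average coefficient matrices. The simulated system and lag choices are illustrative.

```python
# Two-stage linear estimation of y_t = A y_{t-1} + u_t + M u_{t-1}.
import numpy as np

rng = np.random.default_rng(0)
k, T, p_long = 2, 1000, 10
A = np.array([[0.5, 0.1], [0.0, 0.4]])
M = np.array([[0.3, 0.0], [0.2, 0.2]])
u = rng.normal(size=(T + 1, k))
y = np.zeros((T + 1, k))
for t in range(1, T + 1):
    y[t] = y[t - 1] @ A.T + u[t] + u[t - 1] @ M.T
y = y[1:]                                        # drop the initial value

def lagged(z, lags):
    """Stack [z_{t-1}, ..., z_{t-lags}] row-wise for t = lags, ..., len(z)-1."""
    return np.column_stack([z[lags - j - 1:len(z) - j - 1] for j in range(lags)])

# Stage 1: long VAR(p_long) by OLS; its residuals proxy the innovations u_t.
X1 = lagged(y, p_long)
Y1 = y[p_long:]
B1 = np.linalg.lstsq(X1, Y1, rcond=None)[0]
u_hat = Y1 - X1 @ B1

# Stage 2: OLS of y_t on y_{t-1} and u_hat_{t-1} gives the VARMA coefficients.
Y2 = Y1[1:]
X2 = np.column_stack([Y1[:-1], u_hat[:-1]])
B2 = np.linalg.lstsq(X2, Y2, rcond=None)[0]
A_hat, M_hat = B2[:k].T, B2[k:].T
print(np.round(A_hat, 2), np.round(M_hat, 2), sep="\n")   # close to A and M
```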