953 results for Two-sample tests
Abstract:
Objective: Autism spectrum disorders are now recognized to occur in up to 1% of the population and to be a major public health concern because of their early onset, lifelong persistence, and high levels of associated impairment. Little is known about the associated psychiatric disorders that may contribute to impairment. We identify the rates and types of psychiatric comorbidity associated with ASDs and explore the associations with variables identified as risk factors for child psychiatric disorders. Method: A subgroup of 112 ten- to 14-year-old children from a population-derived cohort was assessed for other child psychiatric disorders (3 months' prevalence) through parent interview using the Child and Adolescent Psychiatric Assessment. DSM-IV diagnoses for childhood anxiety disorders, depressive disorders, oppositional defiant and conduct disorders, attention-deficit/hyperactivity disorder, tic disorders, trichotillomania, enuresis, and encopresis were identified. Results: Seventy percent of participants had at least one comorbid disorder and 41% had two or more. The most common diagnoses were social anxiety disorder (29.2%, 95% confidence interval [CI] 13.2-45.1), attention-deficit/hyperactivity disorder (28.2%, 95% CI 13.3-43.0), and oppositional defiant disorder (28.1%, 95% CI 13.9-42.2). Of those with attention-deficit/hyperactivity disorder, 84% received a second comorbid diagnosis. There were few associations between putative risk factors and psychiatric disorder. Conclusions: Psychiatric disorders are common and frequently multiple in children with autism spectrum disorders. They may provide targets for intervention and should be routinely evaluated in the clinical assessment of this group.
Abstract:
Background: Consistency of performance across tasks that assess syntactic comprehension in aphasia has clinical and theoretical relevance. In this paper we add to the relatively sparse previous work on how sentence comprehension abilities are influenced by the nature of the assessment task. Aims: Our aims are: (1) to compare linguistic performance across sentence-picture matching, enactment, and truth-value judgement tasks; (2) to investigate the impact of pictorial stimuli on syntactic comprehension. Methods & Procedures: We tested a group of 10 aphasic speakers (3 with fluent and 7 with non-fluent aphasia) in three tasks (Experiment 1): (i) sentence-picture matching with four pictures, (ii) sentence-picture matching with two pictures, and (iii) enactment. A further task of truth-value judgement was given to a subgroup of these speakers (n=5, Experiment 2). Similar sentence types were used across all tasks and included canonical (actives, subject clefts) and non-canonical (passives, object clefts) sentences. We undertook two types of analyses: (a) we compared canonical and non-canonical sentences in each task; (b) we compared performance between (i) actives and passives, and (ii) subject and object clefts in each task. We examined the results of all participants as a group and as a case series. Outcomes & Results: Several task effects emerged. Overall, the two-picture sentence-picture matching and enactment tasks were more discriminating than the four-picture condition. Group performance in the truth-value judgement task was similar to that in two-picture sentence-picture matching and enactment. At the individual level, performance across tasks contrasted with some group results. Conclusions: Our findings revealed task effects across participants. We discuss reasons that could explain the diverse profiles of performance and the implications for clinical practice.
Abstract:
Background: High rates of co-morbidity between Generalized Social Phobia (GSP) and Generalized Anxiety Disorder (GAD) have been documented. The reason for this is unclear. Family studies are one means of clarifying the nature of co-morbidity between two disorders. Methods: Six models of co-morbidity between GSP and GAD were investigated in a family aggregation study of 403 first-degree relatives of non-clinical probands: 37 with GSP, 22 with GAD, 15 with co-morbid GSP/GAD, and 41 controls with no history of GSP or GAD. Psychiatric data were collected for probands and relatives. Mixed methods (direct and family history interviews) were utilised. Results: Primary contrasts (against controls) found an increased rate of pure GSP in the relatives of both GSP probands and co-morbid GSP/GAD probands, and found relatives of co-morbid GSP/GAD probands to have an increased rate of both pure GAD and co-morbid GSP/GAD. Secondary contrasts found (i) increased GSP in the relatives of GSP-only probands compared to the relatives of GAD-only probands; and (ii) increased GAD in the relatives of co-morbid GSP/GAD probands compared to the relatives of GSP-only probands. Limitations: The study did not directly interview all relatives, although the reliability of family history data was assessed. The study was based on an all-female proband sample. The implications of both these limitations are discussed. Conclusions: The results were most consistent with a co-morbidity model indicating independent familial transmission of GSP and GAD. This has clinical implications for the treatment of patients with both disorders. (C) 2006 Elsevier B.V. All rights reserved.
Abstract:
The THz water content index of a sample is defined, and the advantages of using such a metric to estimate a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the errors in the estimated sample water content index.
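The abstract does not reproduce the definition of the index, so the sketch below assumes, purely for illustration, a normalized-difference index built from the reflectances measured at the two THz frequencies and applies standard first-order error propagation; the index form and the function names (water_index, index_uncertainty) are hypothetical, not the paper's.

```python
import numpy as np

def water_index(r1, r2):
    """Hypothetical normalized-difference index from reflectances at two THz
    frequencies (the paper defines its own THz water content index)."""
    return (r1 - r2) / (r1 + r2)

def index_uncertainty(r1, r2, s1, s2):
    """First-order propagation of the reflectance errors s1, s2 into the
    uncertainty of I = (r1 - r2) / (r1 + r2)."""
    denom = (r1 + r2) ** 2
    dI_dr1 = 2.0 * r2 / denom    # partial derivative of I with respect to r1
    dI_dr2 = -2.0 * r1 / denom   # partial derivative of I with respect to r2
    return np.sqrt((dI_dr1 * s1) ** 2 + (dI_dr2 * s2) ** 2)

# Example: reflectances at the two frequencies with absolute errors of 0.01
print(water_index(0.42, 0.35), index_uncertainty(0.42, 0.35, 0.01, 0.01))
```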
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize the model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
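As a rough illustration of the selection loop the abstract describes, the sketch below greedily adds kernel regressors one at a time and scores each candidate model by leave-one-out AUC. It is a brute-force stand-in, not the paper's method: the ROWLS estimator and the analytic LOO-AUC recursion are precisely what make the real algorithm cheap, whereas this sketch refits a model for every leave-one-out split. The names (rbf_column, forward_select_by_auc) and the use of logistic regression as the scorer are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def rbf_column(X, centre, width=1.0):
    """One candidate kernel regressor: an RBF centred on a training sample."""
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2.0 * width ** 2))

def forward_select_by_auc(X, y, n_terms=5, width=1.0):
    """Greedily add kernel centres, scoring each candidate model by
    leave-one-out AUC (a brute-force stand-in for the analytic LOO-AUC
    available under the paper's ROWLS estimator)."""
    selected, columns = [], []
    for _ in range(n_terms):
        best = (-np.inf, None, None)  # (auc, centre index, column)
        for i in range(len(X)):
            if i in selected:
                continue
            design = np.column_stack(columns + [rbf_column(X, X[i], width)])
            probs = cross_val_predict(LogisticRegression(max_iter=1000),
                                      design, y, cv=LeaveOneOut(),
                                      method="predict_proba")[:, 1]
            auc = roc_auc_score(y, probs)
            if auc > best[0]:
                best = (auc, i, design[:, -1])
        selected.append(best[1])
        columns.append(best[2])
    return selected
```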
Abstract:
Resistance baselines were obtained for the first-generation anticoagulant rodenticides chlorophacinone and diphacinone using laboratory, caesarian-derived Norway rats (Rattus norvegicus) as the susceptible strain and the blood clotting response test method. The ED99 estimates for a quantal response were: chlorophacinone, males 0.86 mg kg⁻¹, females 1.03 mg kg⁻¹; diphacinone, males 1.26 mg kg⁻¹, females 1.60 mg kg⁻¹. The dose-response data also showed that chlorophacinone was significantly (p < 0.0001) more potent than diphacinone for both male and female rats, and that male rats were more susceptible than females to both compounds (p < 0.002). The ED99 doses were then given to groups of five male and five female rats of the Welsh and Hampshire warfarin-resistant strains. Twenty-four hours later, prothrombin times were slightly elevated in both strains, but all the animals were classified as resistant to the two compounds, indicating cross-resistance from warfarin to diphacinone and chlorophacinone. When rats of the two resistant strains were fed for six consecutive days on baits containing either diphacinone or chlorophacinone, many animals survived, indicating that their resistance might enable them to survive treatments with these compounds in the field.
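For readers unfamiliar with how an ED99 is obtained from quantal dose-response data, the sketch below fits a probit regression on log dose and reads off the dose at which the fitted response probability reaches 0.99. The doses and response counts are invented, and the blood clotting response methodology has its own specifics; this is only the generic calculation.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical quantal dose-response data: dose (mg/kg), responders, group size
dose = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
responders = np.array([1, 4, 12, 19, 20])
group_size = np.full(5, 20)

# Probit regression on log10(dose), the usual model behind ED50/ED99 estimates
X = sm.add_constant(np.log10(dose))
endog = np.column_stack([responders, group_size - responders])
fit = sm.GLM(endog, X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
b0, b1 = fit.params

# ED99: the dose at which the fitted probability of response reaches 0.99
ed99 = 10 ** ((norm.ppf(0.99) - b0) / b1)
print(f"ED99 ~ {ed99:.2f} mg/kg")
```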
Abstract:
This paper examines the short- and long-term persistence of tax-exempt real estate funds in the UK through the use of winner-loser contingency table methodology. The persistence tests are applied to a database of varying numbers of funds, from a low of 16 to a high of 27, using quarterly returns over the 12 years from 1990 Q1 to 2001 Q4. The overall conclusion is that real estate funds in the UK show little evidence of persistence in the short term (quarterly and semi-annual data) or over considerable lengths of time (bi-annual to six-yearly intervals). In contrast, the results are better for annual data, with evidence of significant performance persistence. Thus, at this stage, it seems that an annual evaluation period provides the best discrimination of the winner and loser phenomenon in the real estate market. This result differs from equity and bond studies, where the repeat-winner phenomenon seems to be stronger over shorter evaluation periods. These results require careful interpretation, however: first, when only small samples are used, significant adjustments must be made to correct for small-sample bias; and second, the conclusions are sensitive to the length of the evaluation period and the specific test used. Nonetheless, it seems that persistence in the performance of real estate funds in the UK does exist, at least for annual data, and it appears to be a guide to beating the pack in the long run. Furthermore, although the evidence of persistence in performance for the overall sample of funds is limited, we have found evidence that two funds were consistent winners over this period, whereas no one fund could be said to be a consistent loser.
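The sketch below shows the core of a winner-loser contingency table test in the form commonly used in the persistence literature: funds are labelled winners or losers relative to the period median, repeat-performance counts are accumulated over consecutive periods, and a log odds-ratio Z-statistic is computed. It is a generic illustration on simulated returns, not the paper's exact tests or its small-sample corrections; all names and numbers are assumptions.

```python
import numpy as np

def persistence_counts(returns):
    """Winner-loser contingency counts over consecutive evaluation periods.
    `returns` is (n_funds, n_periods); a fund is a winner in a period if its
    return is above that period's cross-sectional median."""
    winners = returns > np.median(returns, axis=0)
    ww = wl = lw = ll = 0
    for t in range(winners.shape[1] - 1):
        w0, w1 = winners[:, t], winners[:, t + 1]
        ww += np.sum(w0 & w1)
        wl += np.sum(w0 & ~w1)
        lw += np.sum(~w0 & w1)
        ll += np.sum(~w0 & ~w1)
    return ww, wl, lw, ll

def cross_product_ratio_z(ww, wl, lw, ll):
    """Log odds-ratio (cross-product ratio) Z-statistic; values well above
    ~1.96 point to repeat-winner/repeat-loser persistence."""
    log_or = np.log((ww * ll) / (wl * lw))
    se = np.sqrt(1.0 / ww + 1.0 / wl + 1.0 / lw + 1.0 / ll)
    return log_or / se

# Hypothetical quarterly returns for 20 funds over 48 quarters
rets = np.random.default_rng(0).normal(0.02, 0.05, size=(20, 48))
print(cross_product_ratio_z(*persistence_counts(rets)))
```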
Abstract:
Objective: To examine the impact of increasing numbers of metabolic syndrome (MetS) components on postprandial lipaemia. Methods: Healthy men (n = 112) underwent a sequential meal postprandial investigation, in which blood samples were taken at regular intervals after a test breakfast (0 min) and lunch (330 min). Lipids and glucose were measured in the fasting sample, with triacylglycerol (TAG), non-esterified fatty acids and glucose analysed in the postprandial samples. Results: Subjects were grouped according to the number of MetS components, regardless of the combinations of components (0/1, 2, 3 and 4/5). As expected, there was a trend for an increase in body mass index, blood pressure, fasting TAG, glucose and insulin, and a decrease in fasting high-density lipoprotein cholesterol, with increasing numbers of MetS components (P ≤ 0.0004). A similar trend was observed for the summary measures of the postprandial TAG and glucose responses. For TAG, the area under the curve (AUC) and maximum concentration (maxC) were significantly greater in men with ≥ 3 than < 3 components (P < 0.001), whereas the incremental AUC was greater in those with 3 than with 0/1 and 2 components, and with 4/5 compared with 2 components (P < 0.04). For glucose, maxC after the test breakfast (0-330 min) and total AUC (0-480 min) were higher in men with ≥ 3 than < 3 components (P ≤ 0.001). Conclusions: Our data analysis revealed a linear trend between increasing numbers of MetS components and the magnitude (AUC) of the postprandial TAG and glucose responses. Furthermore, the two-meal challenge discriminated a worsening of postprandial lipaemic control in subjects with ≥ 3 MetS components.
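The summary measures named in the abstract (total AUC, incremental AUC and maxC) are simple to compute from a sampled concentration profile; the sketch below uses the trapezoidal rule and one common iAUC convention in which excursions below the fasting value are clipped to zero. The time points and TAG values are invented, and the study's exact iAUC convention may differ.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal-rule integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def postprandial_summaries(time_min, conc):
    """Total AUC, incremental AUC above the fasting (time 0) value with
    negative increments clipped to zero, and maximum concentration."""
    auc = trapz(conc, time_min)
    iauc = trapz(np.clip(conc - conc[0], 0.0, None), time_min)
    return auc, iauc, float(conc.max())

# Hypothetical TAG profile (mmol/L) after a breakfast (0 min) and lunch (330 min)
t = np.array([0, 60, 120, 180, 240, 330, 390, 450, 480], dtype=float)
tag = np.array([1.1, 1.5, 1.9, 2.2, 2.1, 1.8, 2.3, 2.0, 1.7])
print(postprandial_summaries(t, tag))
```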
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises from the trapped mine of philosophical claims, counter-claims and rebuttals Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: (a) a two-participant, interrogator-witness viva voce; (b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from Experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
This paper investigates the psychometric properties of Vigneron and Johnson's Brand Luxury Index scale. The authors developed the scale using data collected from a student sample in Australia. To validate the scale, the study reported in this paper uses data collected from Taiwanese luxury consumers. The scale was initially subjected to reliability analysis, yielding low α values for two of its five proposed dimensions. Exploratory and confirmatory factor analyses were subsequently performed to examine the dimensionality of brand luxury. Discriminant and convergent validity tests highlight the need for further research into the dimensionality of the construct. Although the scale represents a good initial contribution to understanding brand luxury, in view of consumers' emerging shopping patterns, further investigation is warranted to establish the psychometric properties of the scale and its equivalence across cultures.
Abstract:
A two-dimensional X-ray scattering system developed around a CCD-based area detector is presented, both in terms of the hardware employed and the software designed and developed. An essential feature is the integration of hardware and software, and of detection and sample-environment control, which enables time-resolving in-situ wide-angle X-ray scattering measurements of global structural and orientational parameters of polymeric systems subjected to a variety of controlled external fields. The development and operation of a number of rheometers purpose-built for the application of such fields are described. Examples of the use of this system in monitoring degrees of shear-induced orientation in liquid-crystalline systems and crystallization of linear polymers subsequent to shear flow are presented.
Abstract:
Recent research in social neuroscience proposes a link between the mirror neuron system (MNS) and social cognition. The MNS has been proposed to be the neural mechanism underlying action recognition and intention understanding and, more broadly, social cognition. The pre-motor MNS has been suggested to modulate the motor cortex during action observation. This modulation results in enhanced cortico-motor excitability, reflected in increased motor evoked potentials (MEPs) at the muscle of interest during action observation. Anomalous MNS activity has been reported in the autistic population, whose social skills are notably impaired. It is still an open question whether traits of autism in the normal population are linked to MNS functioning. We measured TMS-induced MEPs in normal individuals with high and low traits of autism, as measured by the autistic quotient (AQ), while they observed videos of hand or mouth actions, static images of a hand or mouth, or a blank screen. No differences were observed between the two groups while they observed a blank screen. However, participants with low traits of autism showed significantly greater MEP amplitudes during observation of hand/mouth actions relative to static hand/mouth stimuli. In contrast, participants with high traits of autism did not show such a MEP amplitude difference between observation of actions and static stimuli. These results are discussed with reference to MNS functioning.
Abstract:
This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review, on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experience is available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. Copyright © 2012 John Wiley & Sons, Ltd.
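As background for why the number of observed events drives these designs, the sketch below uses Schoenfeld's standard approximation for the required number of events under proportional hazards and then rescales the patient total by a blinded re-estimate of the pooled event probability. This is generic textbook machinery, not the extrapolation methods compared in the paper; the numbers are invented.

```python
from math import ceil, log
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.9, allocation=0.5):
    """Schoenfeld's approximation for the number of events needed to detect
    a given hazard ratio in a two-arm proportional-hazards trial."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 / (allocation * (1 - allocation) * log(hazard_ratio) ** 2))

# A blinded review might re-estimate the pooled event probability (without
# unblinding treatment arms) and rescale the number of patients so that the
# target event count is still reached within the planned follow-up.
events = required_events(hazard_ratio=0.75)      # about 508 events
pooled_event_probability = 0.55                  # hypothetical blinded estimate
patients = ceil(events / pooled_event_probability)
print(events, patients)
```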
Abstract:
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
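Of the two approaches named, Newtonian nudging is the simpler to show in code: the model is integrated forward and its observed component is relaxed towards each incoming observation by a gain term. The sketch below uses the Lorenz-63 system as a stand-in model and random numbers as placeholder observations; the gain plays roughly the role of the sensitivity discussed in the abstract, and choosing it by minimizing the out-of-sample error (the paper's proposal) is not implemented here.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Stand-in dynamical model (Lorenz-63)."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudge(observations, dt=0.01, gain=5.0, steps_per_obs=10):
    """Newtonian nudging (synchronization): integrate the model with a simple
    Euler scheme and relax its observed component (here x) towards each
    observation; `gain` loosely plays the role of the sensitivity parameter."""
    state = np.array([1.0, 1.0, 1.0])
    trajectory = []
    for obs in observations:
        for _ in range(steps_per_obs):
            state = state + dt * lorenz63(state)
            state[0] += dt * gain * (obs - state[0])  # nudging term
        trajectory.append(state.copy())
    return np.array(trajectory)

# Placeholder observations of x; a real test would use noisy output from a 'truth' run
obs = np.random.default_rng(1).normal(0.0, 8.0, size=200)
trajectory = nudge(obs)
```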
Abstract:
Airborne lidar provides accurate height information for objects on the earth and has been recognized as a reliable and accurate surveying tool in many applications. In particular, lidar data offer vital and significant features for urban land-cover classification, which is an important task in urban land-use studies. In this article, we present an effective approach in which lidar data, fused with co-registered images (i.e. aerial colour images containing red, green and blue (RGB) bands and near-infrared (NIR) images) and other derived features, are used for accurate urban land-cover classification. The proposed approach begins with an initial classification performed by the Dempster–Shafer theory of evidence with a specifically designed basic probability assignment function. It outputs two results, i.e. the initial classification and pseudo-training samples, which are selected automatically according to the combined probability masses. Second, a support vector machine (SVM)-based probability estimator is adopted to compute the class conditional probability (CCP) for each pixel from the pseudo-training samples. Finally, a Markov random field (MRF) model is established to combine spatial contextual information into the classification. In this stage, the initial classification result and the CCP are exploited. An efficient belief propagation (EBP) algorithm is developed to search for the global minimum-energy solution for the maximum a posteriori (MAP)-MRF framework, in which three techniques are developed to speed up the standard belief propagation (BP) algorithm. Lidar data and co-registered imagery acquired by the Toposys Falcon II are used in performance tests. The experimental results prove that fusing the height data and optical images is particularly suited to urban land-cover classification. No training samples are needed in the proposed approach, and the computational cost is relatively low. An average classification accuracy of 93.63% is achieved.
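To make the middle stages of the pipeline concrete, the sketch below trains an SVM probability estimator on pseudo-training samples and then imposes spatial smoothness on the per-pixel labels with a simple iterated conditional modes (ICM) pass over a Potts-style prior. ICM here is only a stand-in for the paper's efficient belief propagation, the Dempster–Shafer stage is omitted, and every name and parameter (classify_with_spatial_smoothing, beta, iters) is illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def classify_with_spatial_smoothing(features, pseudo_X, pseudo_y, beta=1.5, iters=5):
    """Middle stages only: an SVM probability estimator trained on
    pseudo-training samples yields per-pixel class-conditional probabilities
    (CCPs), and an iterated-conditional-modes pass over a Potts-style prior
    adds spatial smoothness (a stand-in for the paper's efficient belief
    propagation). `features` has shape (height, width, n_features)."""
    h, w, d = features.shape
    svm = SVC(probability=True).fit(pseudo_X, pseudo_y)
    ccp = svm.predict_proba(features.reshape(-1, d)).reshape(h, w, -1)
    unary = -np.log(ccp + 1e-9)          # data term: negative log CCP
    labels = ccp.argmax(axis=2)          # start from the maximum-CCP labelling
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                neigh = [labels[a, b]
                         for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < h and 0 <= b < w]
                costs = unary[i, j] + beta * np.array(
                    [sum(k != n for n in neigh) for k in range(ccp.shape[2])])
                labels[i, j] = costs.argmin()
    return labels
```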