243 results for Library statistics.
Abstract:
Examples from the Murray-Darling basin in Australia are used to illustrate different methods of disaggregation of reconnaissance-scale maps. One approach to disaggregation revolves around the de-convolution of the soil-landscape paradigm elaborated during a soil survey. The descriptions of soil map units and block diagrams in a soil survey report detail soil-landscape relationships, or soil toposequences, that can be used to disaggregate map units into component landscape elements. Toposequences can be visualised on a computer by combining soil maps with digital elevation data. Expert knowledge or statistics can be used to implement the disaggregation; the use of a restructuring element and of k-means clustering is illustrated. Another approach to disaggregation uses training areas to develop rules for extrapolating detailed mapping into other, larger areas where detailed mapping is unavailable. A two-level decision tree example is presented: at one level, the decision tree method is used to capture mapping rules from the training area; at the other, it is used to define the domain over which those rules can be extrapolated. (C) 2001 Elsevier Science B.V. All rights reserved.
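Since the abstract names k-means clustering as one way to implement the disaggregation, here is a minimal sketch of the idea; the terrain attributes, the three-cluster choice, and all variable names are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch: cluster DEM-derived terrain attributes within one
# soil map unit into candidate landscape elements (assumed 3 clusters).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for per-cell terrain attributes inside a single map unit:
# columns = elevation, slope, topographic wetness index (all synthetic).
attributes = rng.normal(size=(1000, 3))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(attributes)
# Each cluster is a candidate landscape element (e.g. crest, slope, plain)
# that the surveyed toposequence descriptions would then name and check.
print(np.bincount(labels))
```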
Abstract:
Small area health statistics has assumed increasing importance as the focus of population and public health moves to a more individualised approach based on smaller area populations. Small populations and low event occurrence produce difficulties in interpretation and require appropriate statistical methods, including for age adjustment. There are also statistical questions related to multiple comparisons. Privacy and confidentiality issues include the possibility of revealing information on individuals or health care providers through fine cross-tabulations. Interpretation of small area population differences in health status requires consideration of migrant and Indigenous composition, socio-economic status and rural-urban geography before assessment of the effects of physical environmental exposure and of services and interventions. Burden of disease studies produce a single measure for morbidity and mortality, the disability-adjusted life year (DALY), which is the sum of the years of life lost (YLL) from premature mortality and the years lived with disability (YLD) for particular diseases (or all conditions). Calculation of YLD requires estimates of disease incidence (and complications) and duration, and weighting by severity. These procedures often rest on problematic assumptions, as do the future discounting and age weighting applied to both YLL and YLD. Evaluation of the Victorian small area population disease burden study presents important cross-disciplinary challenges, as it relies heavily on the synthetic approaches of demography and economics rather than on the empirical methods of epidemiology. Both empirical and synthetic methods are used to compute small area mortality and morbidity, then disease burden, and then attribution to risk factors. Readers need to examine the methodology and assumptions carefully before accepting the results.
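A worked toy calculation of the DALY identity described above; all population numbers, the disability weight, and the durations are invented for illustration only.

```python
# Toy DALY arithmetic for one hypothetical condition in a small area.
deaths = 10
life_expectancy_at_death = 12.0   # assumed remaining years per death
yll = deaths * life_expectancy_at_death

incident_cases = 40
disability_weight = 0.2           # assumed severity weighting
duration_years = 5.0              # assumed average duration of disability
yld = incident_cases * disability_weight * duration_years

daly = yll + yld                  # DALY = YLL + YLD
print(yll, yld, daly)             # 120.0 40.0 160.0
```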
Abstract:
Summary: Plant biologists in fields of ecology, evolution, genetics and breeding frequently use multivariate methods. This paper illustrates Principal Component Analysis (PCA) and Gabriel's biplot as applied to microarray expression data from plant pathology experiments. Availability: An example program in the publicly distributed statistical language R is available from the web site (www.tpp.uq.edu.au) and by e-mail from the contact. Contact: scott.chapman@csiro.au.
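The paper's distributed example is in R; as a rough, language-swapped sketch of the same idea (PCA scores and loadings overlaid as a Gabriel-style biplot), here is a Python version on synthetic expression data. It is not the paper's program.

```python
# Sketch: PCA via SVD plus a Gabriel-style biplot on synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))            # 20 samples x 6 genes (synthetic)
Xc = X - X.mean(axis=0)                 # column-centre before PCA

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]               # sample coordinates (PC1, PC2)
loadings = Vt[:2].T                     # gene directions

fig, ax = plt.subplots()
ax.scatter(scores[:, 0], scores[:, 1], s=12)
for j, (lx, ly) in enumerate(loadings * s[:2]):
    ax.arrow(0, 0, lx, ly, head_width=0.05)   # gene vectors on same axes
    ax.annotate(f"gene{j}", (lx, ly))
ax.set_xlabel("PC1"); ax.set_ylabel("PC2")
plt.show()
```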
Abstract:
Since dilute Bose gas condensates were first produced experimentally, the Gross-Pitaevskii equation has been used successfully as a descriptive tool. As a mean-field equation, it cannot by definition predict anything about the many-body quantum statistics of the condensate. We show here that there is a class of dynamical systems for which it cannot even make successful predictions about the mean-field behavior, starting with the process of evaporative cooling by which condensates are formed. Other examples include parametric processes, such as the photoassociation and dissociation of atomic and molecular condensates.
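For reference, the textbook form of the mean-field equation the abstract discusses, for a condensate wavefunction ψ with contact-interaction strength g (standard notation, not taken from the paper):

```latex
i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[ -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})
           + g\,\lvert\psi(\mathbf{r},t)\rvert^2 \right] \psi(\mathbf{r},t)
```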
Abstract:
Background and aim of the study: Results of valve re-replacement (reoperation) in 898 patients undergoing aortic valve replacement with cryopreserved homograft valves between 1975 and 1998 are reported. The study aim was to provide estimates of the unconditional probability of valve reoperation and the cumulative incidence function (actual risk) of reoperation. Methods: Valves were implanted by subcoronary insertion (n = 500), inclusion cylinder (n = 46), and aortic root replacement (n = 352). Probability of reoperation was estimated by adopting a mixture model framework within which estimates were adjusted for two risk factors: patient age at initial replacement, and implantation technique. Results: For a patient aged 50 years, the probability of reoperation in his/her lifetime was estimated as 44% and 56% for non-root and root replacement techniques, respectively. For a patient aged 70 years, the estimated probability of reoperation was 16% and 25%, respectively. Given that a reoperation is required, patients with non-root replacement have a higher hazard rate than those with root replacement (hazard ratio = 1.4), indicating that non-root replacement patients tend to undergo reoperation sooner than root replacement patients. Conclusion: Younger patient age and root versus non-root replacement are risk factors for reoperation. Valve durability is much lower in younger patients, while root replacement patients appear more likely to live longer and hence are more likely to require reoperation.
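A minimal sketch of the kind of quantity such a mixture framework yields: cumulative incidence of reoperation as (lifetime probability of reoperation) times (conditional time-to-reoperation distribution). The 44% figure is taken from the abstract; the Weibull shape and scale are invented placeholders, not the paper's fitted values.

```python
# Sketch of a mixture-model cumulative incidence of reoperation:
# CIF(t) = pi * F(t), with pi the lifetime reoperation probability and
# F the conditional time-to-reoperation distribution for those who reoperate.
import numpy as np
from scipy.stats import weibull_min

pi = 0.44                   # from the abstract: age 50, non-root technique
shape, scale = 2.0, 15.0    # placeholder Weibull parameters (years)

t = np.linspace(0, 30, 7)
cif = pi * weibull_min.cdf(t, shape, scale=scale)
for ti, ci in zip(t, cif):
    print(f"{ti:5.1f} yr  CIF = {ci:.3f}")
```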
Abstract:
The Fornax Cluster Spectroscopic Survey (FCSS) project utilizes the Two-degree Field (2dF) multi-object spectrograph on the Anglo-Australian Telescope (AAT). Its aim is to obtain spectra for a complete sample of all 14 000 objects with 16.5 ≤ b_j ≤ 19.7, irrespective of their morphology, in a 12 deg² area centred on the Fornax cluster. A sample of 24 Fornax cluster members has been identified from the first 2dF field (3.1 deg² in area) to be completed. This is the first complete sample of cluster objects of known distance with well-defined selection limits. Nineteen of the galaxies (with -15.8 < M_B < -12.7) appear to be conventional dwarf elliptical (dE) or dwarf S0 (dS0) galaxies. The other five objects (with -13.6 < M_B < -11.3) are those galaxies which were described recently by Drinkwater et al. and labelled 'ultracompact dwarfs' (UCDs). A major result is that the conventional dwarfs all have scale sizes α ≳ 3 arcsec (≃ 300 pc). This apparent minimum scale size implies an equivalent minimum luminosity for a dwarf of a given surface brightness. This produces a limit on their distribution in the magnitude-surface brightness plane, such that we do not observe dEs with high surface brightnesses but faint absolute magnitudes. Above this observed minimum scale size of 3 arcsec, the dEs and dS0s fill the whole area of the magnitude-surface brightness plane sampled by our selection limits. The observed correlation between magnitude and surface brightness noted by several recent studies of brighter galaxies is not seen in our fainter cluster sample. A comparison of our results with the Fornax Cluster Catalog (FCC) of Ferguson illustrates that attempts to determine cluster membership solely on the basis of observed morphology can produce significant errors. The FCC identified only 17 of the 24 FCSS sample (i.e. 71 per cent) as cluster members, in particular missing all five of the UCDs. The FCC also suffers from significant contamination: within the FCSS's field and selection limits, 23 per cent of the objects described as cluster members by the FCC are shown by the FCSS to be background objects.
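A quick check of the quoted angular-to-physical scale conversion: at the distance of the Fornax cluster (taken here as a round 20 Mpc, an assumed value, not from the abstract), 3 arcsec indeed corresponds to roughly 300 pc.

```python
# Check the 3 arcsec ~ 300 pc conversion at an assumed Fornax distance.
import math

distance_pc = 20e6                       # assumed ~20 Mpc to Fornax
arcsec_to_rad = math.pi / (180 * 3600)   # 1 arcsec in radians

size_pc = 3 * arcsec_to_rad * distance_pc
print(f"{size_pc:.0f} pc")               # ~291 pc, consistent with ~300 pc
```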
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
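Ignoring censoring and the random hospital effects that the paper models, a bare-bones EM for a two-component Weibull mixture might look like the sketch below; the initialisation and convergence handling are simplistic placeholders, not the paper's algorithm.

```python
# Bare-bones EM for a two-component Weibull mixture of survival times.
# Simplifications vs. the paper: no censoring, no random hospital effects.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def em_weibull_mixture(t, n_iter=100):
    pi = 0.5                                          # mixing proportion
    params = [(1.0, np.median(t) / 2), (1.0, np.median(t) * 2)]
    for _ in range(n_iter):
        # E-step: posterior probability each time came from component 1.
        lp1 = np.log(pi) + weibull_min.logpdf(t, params[0][0], scale=params[0][1])
        lp2 = np.log(1 - pi) + weibull_min.logpdf(t, params[1][0], scale=params[1][1])
        r1 = 1.0 / (1.0 + np.exp(lp2 - lp1))
        # M-step: update mixing weight and each component's Weibull fit,
        # weighting observations by their responsibilities.
        pi = r1.mean()
        for j, w in enumerate((r1, 1 - r1)):
            nll = lambda p, w=w: -np.sum(
                w * weibull_min.logpdf(t, np.exp(p[0]), scale=np.exp(p[1])))
            res = minimize(nll, np.log(params[j]), method="Nelder-Mead")
            params[j] = tuple(np.exp(res.x))
    return pi, params

# Synthetic "acute" and "chronic" survival times to exercise the sketch.
t = np.concatenate([weibull_min.rvs(1.5, scale=2, size=300, random_state=3),
                    weibull_min.rvs(2.5, scale=10, size=700, random_state=4)])
print(em_weibull_mixture(t))
```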
Abstract:
The present exploratory-descriptive cross-national study focused on the career development of 11- to 14-year-old children, in particular whether they can match their personal characteristics with their occupational aspirations. Further, the study explored whether their matching may be explained in terms of a fit between person and environment, using Holland's theory as an example. Participants included 511 South African and 372 Australian children. Findings concern two items of the Revised Career Awareness Survey that require children to relate personal-social knowledge to their favorite occupation. Data were analyzed in three stages using descriptive statistics, i.e., mean scores, frequencies, and percentage agreement. The study indicated that children perceived their personal characteristics to be related to their occupational aspirations. However, how this matching takes place is not adequately accounted for in terms of a career theory such as that of Holland.
Abstract:
This paper describes algorithms that can identify patterns of brain structure and function associated with Alzheimer's disease, schizophrenia, normal aging, and abnormal brain development based on imaging data collected in large human populations. Extraordinary information can be discovered with these techniques: dynamic brain maps reveal how the brain grows in childhood, how it changes in disease, and how it responds to medication. Genetic brain maps can reveal genetic influences on brain structure, shedding light on the nature-nurture debate, and the mechanisms underlying inherited neurobehavioral disorders. Recently, we created time-lapse movies of brain structure for a variety of diseases. These identify complex, shifting patterns of brain structural deficits, revealing where, and at what rate, the path of brain deterioration in illness deviates from normal. Statistical criteria can then identify situations in which these changes are abnormally accelerated, or when medication or other interventions slow them. In this paper, we focus on describing our approaches to map structural changes in the cortex. These methods have already been used to reveal the profile of brain anomalies in studies of dementia, epilepsy, depression, childhood and adult-onset schizophrenia, bipolar disorder, attention-deficit/hyperactivity disorder, fetal alcohol syndrome, Tourette syndrome, Williams syndrome, and in methamphetamine abusers. Specifically, we describe an image analysis pipeline known as cortical pattern matching that helps compare and pool cortical data over time and across subjects. Statistics are then defined to identify brain structural differences between groups, including localized alterations in cortical thickness, gray matter density (GMD), and asymmetries in cortical organization. Subtle features, not seen in individual brain scans, often emerge when population-based brain data are averaged in this way. Illustrative examples are presented to show the profound effects of development and various diseases on the human cortex. Dynamically spreading waves of gray matter loss are tracked in dementia and schizophrenia, and these sequences are related to normally occurring changes in healthy subjects of various ages. (C) 2004 Published by Elsevier Inc.
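As a toy illustration of the group-comparison step only (not of the cortical pattern matching pipeline itself), a vertexwise two-sample t-test on synthetic cortical-thickness maps might look like this; the group sizes, thickness values, and Bonferroni threshold are all assumptions.

```python
# Toy vertexwise group comparison on synthetic cortical-thickness maps;
# a stand-in for the statistics step, not cortical pattern matching itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_vertices = 10_000
patients = rng.normal(2.4, 0.3, size=(30, n_vertices))   # synthetic mm values
controls = rng.normal(2.5, 0.3, size=(30, n_vertices))

t, p = stats.ttest_ind(patients, controls, axis=0)       # one test per vertex
# Crude Bonferroni threshold; real pipelines use permutation or FDR methods.
print("significant vertices:", np.sum(p < 0.05 / n_vertices))
```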
Abstract:
1. Cluster analysis of reference sites with similar biota is the initial step in creating River Invertebrate Prediction and Classification System (RIVPACS) and similar river bioassessment models such as the Australian River Assessment System (AUSRIVAS). This paper describes and tests an alternative prediction method, Assessment by Nearest Neighbour Analysis (ANNA), based on the same philosophy as RIVPACS and AUSRIVAS but without the grouping step that some people view as artificial. 2. The steps in creating ANNA models are: (i) weighting the predictor variables using a multivariate approach analogous to principal axis correlations, (ii) calculating the weighted Euclidean distance from a test site to the reference sites based on the environmental predictors, (iii) predicting the faunal composition based on the nearest reference sites and (iv) calculating an observed/expected (O/E) ratio analogous to RIVPACS/AUSRIVAS. 3. The paper compares AUSRIVAS and ANNA models on 17 datasets representing a variety of habitats and seasons. First, it examines each model's regressions for Observed versus Expected number of taxa, including the r², intercept and slope. Second, the two models' assessments of 79 test sites in New Zealand are compared. Third, the models are compared on test and presumed reference sites along a known trace metal gradient. Fourth, ANNA models are evaluated for Western Australia, a geographically distinct region of Australia. The comparisons demonstrate that ANNA and AUSRIVAS are generally equivalent in performance, although ANNA turns out to be potentially more robust for the O versus E regressions and potentially more accurate on the trace metal gradient sites. 4. The ANNA method is recommended for use in the bioassessment of rivers, at least for corroborating the results of the well-established AUSRIVAS- and RIVPACS-type models, if not to replace them.
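A compressed sketch of steps (ii)-(iv) as described above: weighted Euclidean distances to reference sites, expected taxa from the nearest neighbours, then an O/E score. The predictor weights, the choice of k, and the 0.5 probability cut-off are illustrative assumptions, not values from the paper.

```python
# Sketch of ANNA-style prediction: weighted Euclidean distance to reference
# sites, expected taxa from the k nearest, then an O/E score.
import numpy as np

rng = np.random.default_rng(6)
n_ref, n_env, n_taxa, k = 50, 4, 30, 10
ref_env = rng.normal(size=(n_ref, n_env))          # reference-site predictors
ref_taxa = rng.random((n_ref, n_taxa)) < 0.4       # presence/absence matrix
weights = np.array([1.0, 0.8, 0.5, 0.3])           # assumed predictor weights

def anna_oe(test_env, test_taxa):
    d = np.sqrt(((weights * (ref_env - test_env)) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]                    # k nearest reference sites
    p = ref_taxa[nearest].mean(axis=0)             # per-taxon probabilities
    common = p >= 0.5                              # "expected" taxa (assumed cut-off)
    expected = p[common].sum()
    observed = test_taxa[common].sum()
    return observed / expected

print(anna_oe(rng.normal(size=n_env), rng.random(n_taxa) < 0.4))
```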
Abstract:
PURPOSE: Many guidelines advocate measurement of total or low density lipoprotein cholesterol (LDL), high density lipoprotein cholesterol (HDL), and triglycerides (TG) to determine treatment recommendations for preventing coronary heart disease (CHD) and cardiovascular disease (CVD). This analysis is a comparison of lipid variables as predictors of cardiovascular disease. METHODS: Hazard ratios for coronary and cardiovascular deaths by fourths of total cholesterol (TC), LDL, HDL, TG, non-HDL, TC/HDL, and TG/HDL values, and for a one standard deviation change in these variables, were derived in an individual participant data meta-analysis of 32 cohort studies conducted in the Asia-Pacific region. The predictive value of each lipid variable was assessed using the likelihood ratio statistic. RESULTS: Adjusting for confounders and regression dilution, each lipid variable had a positive (negative for HDL) log-linear association with fatal CHD and CVD. Individuals in the highest fourth of each lipid variable had approximately twice the risk of CHD compared with those with lowest levels. TG and HDL were each better predictors of CHD and CVD risk compared with TC alone, with test statistics similar to TC/HDL and TG/HDL ratios. Calculated LDL was a relatively poor predictor. CONCLUSIONS: While LDL reduction remains the main target of intervention for lipid-lowering, these data support the potential use of TG or lipid ratios for CHD risk prediction. (c) 2005 Elsevier Inc. All rights reserved.
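The paper assesses each lipid variable with a likelihood ratio statistic. As a simplified stand-in (logistic regression on a synthetic binary CHD outcome rather than the paper's cohort hazard models, a deliberately swapped-in technique), the per-variable comparison might look like:

```python
# Simplified stand-in for comparing lipid variables by likelihood ratio:
# logistic models on a binary outcome instead of the paper's hazard models.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(7)
n = 2000
tc = rng.normal(5.5, 1.0, n)              # synthetic total cholesterol
hdl = rng.normal(1.3, 0.3, n)             # synthetic HDL
risk = 1 / (1 + np.exp(-(-6 + 0.5 * tc - 1.0 * hdl)))
chd = (rng.random(n) < risk).astype(float)   # synthetic CHD events

for name, x in [("TC", tc), ("TC/HDL", tc / hdl)]:
    res = sm.Logit(chd, sm.add_constant(x)).fit(disp=0)
    lr = 2 * (res.llf - res.llnull)       # LR statistic vs. the null model
    print(f"{name}: LR = {lr:.1f}, p = {chi2.sf(lr, df=1):.2g}")
```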
Abstract:
In this paper we study the nondegenerate optical parametric oscillator with injected signal, both analytically and numerically. We develop a perturbation approach which allows us to find approximate analytical solutions, starting from the full equations of motion in the positive-P representation. We demonstrate the regimes of validity of our approximations via comparison with the full stochastic results. We find that, with reasonably low levels of injected signal, the system allows for demonstrations of quantum entanglement and the Einstein-Podolsky-Rosen paradox. In contrast to the normal optical parametric oscillator operating below threshold, these features are demonstrated with relatively intense fields.
Abstract:
The main problem with current approaches to quantum computing is the difficulty of establishing and maintaining entanglement. A Topological Quantum Computer (TQC) aims to overcome this by using different physical processes that are topological in nature and less susceptible to disturbance by the environment. In a (2+1)-dimensional system, pseudoparticles called anyons have statistics that fall somewhere between bosons and fermions. The exchange of two anyons, an effect called braiding from knot theory, can occur in two different ways. The quantum states corresponding to the two elementary braids constitute a two-state system, allowing the definition of a computational basis. Quantum gates can be built up from patterns of braids, and for quantum computing it is essential that the operator describing the braiding (the R-matrix) be unitary. The physics of anyonic systems is governed by quantum groups, in particular the quasi-triangular Hopf algebras obtained from finite groups by the application of the Drinfeld quantum double construction. Their representation theory has been described in detail by Gould and Tsohantjis, and in this review article we relate the work of Gould to TQC schemes, particularly that of Kauffman.
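As a concrete instance of the unitarity requirement on braiding operators, here is a quick numerical check on the braid matrices of the Fibonacci anyon model in one common convention; the Fibonacci model is a standard TQC example and not necessarily the specific scheme this article reviews.

```python
# Check unitarity of Fibonacci-anyon braid generators (one common convention;
# a standard TQC example, not necessarily the article's specific scheme).
import numpy as np

phi = (1 + np.sqrt(5)) / 2                       # golden ratio
R = np.diag([np.exp(-4j * np.pi / 5),            # braid phase, vacuum channel
             np.exp(3j * np.pi / 5)])            # braid phase, tau channel
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])     # Fibonacci F-matrix (F = F^-1)

B1 = R                                           # braid of the first pair
B2 = F @ R @ F                                   # braid of the second pair

for name, B in [("B1", B1), ("B2", B2)]:
    print(name, "unitary:", np.allclose(B @ B.conj().T, np.eye(2)))
```

Because F is real, symmetric and squares to the identity, B2 = F R F is guaranteed unitary whenever R is, which is exactly the property the review emphasises for valid quantum gates.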