964 results for statistic
Abstract:
Background The Environments for Healthy Living (EFHL) study is a repeated-sample, longitudinal birth cohort in South East Queensland, Australia. We describe the sample characteristics and profile of maternal, household, and antenatal exposures. Variation and data stability over recruitment years were examined. Methods For four months each year from 2006, pregnant women were recruited to EFHL at routine antenatal visits at or after 24 weeks gestation, from three public maternity hospitals. Participating mothers completed a baseline questionnaire on individual, familial, social and community exposure factors. Perinatal data were extracted from hospital birth records. Descriptive statistics and measures of association were calculated comparing the EFHL birth sample with regional and national reference populations. Data stability of antenatal exposure factors was assessed across five recruitment years (2006–2010 inclusive) using the Gamma statistic for ordinal data and chi-squared tests for nominal data. Results Across five recruitment years, 2,879 pregnant women were recruited, resulting in 2,904 live births, including 29 sets of twins. Compared with regional data, EFHL has a lower representation of early-gestation babies, fewer stillbirths and a lower percentage of low-birth-weight babies. The majority of women (65%) took a multivitamin supplement during pregnancy, 47% consumed alcohol, and 26% reported having smoked cigarettes. There were no differences in rates of a range of antenatal exposures across the five years of recruitment, with the exception of increasing maternal pre-pregnancy weight (p=0.0349), decreasing rates of high maternal distress (p=0.0191) and decreasing alcohol consumption (p<0.0001). Conclusions The study sample is broadly representative of births in the region, and almost all factors showed data stability over time. This study, with repeated sampling of birth cohorts over multiple years, has the potential to make important contributions to population health through evaluating longitudinal follow-up and within-cohort temporal effects.
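As a concrete illustration of the stability analysis this abstract describes, the Python sketch below tests whether a nominal antenatal exposure varies across recruitment years with a chi-squared test. The counts are hypothetical placeholders, not EFHL data.

```python
# Sketch: testing stability of a nominal antenatal exposure across
# recruitment years with a chi-squared test. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: recruitment years 2006-2010; columns: exposure (no, yes).
counts = np.array([
    [420, 150],
    [435, 148],
    [440, 152],
    [450, 149],
    [455, 151],
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A large p-value indicates no detectable change in the exposure rate
# across the five recruitment years (data stability).
```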
Abstract:
Two hundred million people are displaced annually due to natural disasters, with a further one billion living in inadequate conditions in urban areas. Architects have a responsibility to respond to this statistic as the effects of natural and social disasters become more visibly catastrophic when paired with population rise. The research discussed in this paper initially questions and considers how digital tools can be employed to enhance rebuilding processes while still achieving sensitive, culturally appropriate and accepted built solutions. Secondly, the paper reflects on the impact ‘real-world’ projects have on architectural education. Research aspirations encouraged an atypical ‘research by design’ methodology involving a focused case study in the recently devastated village of Keigold, Ranongga, Solomon Islands. Through this qualitative approach, specific place data and the accounts of those affected were documented through naturalistic and archival methods of observation and participation. Findings reveal a number of unanticipated results that would otherwise have gone undetected had field research not been undertaken within the design and rebuilding process, reflecting the importance of place-specific research in the design process. Ultimately, the study demonstrates that it is critical for issues of disaster to be addressed on a local rather than global scale; decisions cannot be speculative, or solved at a distance, but require intensive collaborative work with communities to achieve optimum solutions. Architectural education and design studios would continue to benefit from focused community engagement and field research within the design process.
Abstract:
Interpreting acoustic recordings of the natural environment is an increasingly important technique for ecologists wishing to monitor terrestrial ecosystems. Technological advances make it possible to accumulate many more recordings than can be listened to or interpreted, necessitating automated assistance to identify elements in the soundscape. In this paper we examine the problem of estimating avian species richness by sampling from very long acoustic recordings. We work with data recorded under natural conditions, with all the attendant problems of undefined and unconstrained acoustic content (such as wind, rain and traffic) which can mask content of interest (in our case, bird calls). We describe 14 acoustic indices calculated at one-minute resolution for the duration of a 24-hour recording. An acoustic index is a statistic that summarizes some aspect of the structure and distribution of acoustic energy and information in a recording. Some of the indices we calculate are standard (e.g. signal-to-noise ratio), some have been reported useful for the detection of bioacoustic activity (e.g. temporal and spectral entropies) and some are directed to avian sources (spectral persistence of whistles). We rank the one-minute segments of a 24-hour recording in descending order according to an "acoustic richness" score derived from a single index or a weighted combination of two or more. We describe combinations of indices which lead to more efficient estimates of species richness than random sampling from the same recording, where efficiency is defined as total species identified for a given listening effort. Using random sampling, we achieve a 53% increase in species recognized over traditional field surveys, and an increase of 87% using combinations of indices to direct the sampling. We also demonstrate how combinations of the same indices can be used to detect long-duration acoustic events (such as heavy rain and cicada chorus) and to construct long-duration (24 h) spectrograms.
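The ranking step lends itself to a compact sketch. The Python below combines per-minute indices with illustrative weights into an "acoustic richness" score and orders the minutes for listening; the index values, index names and weights are placeholders, not those reported in the paper.

```python
# Sketch: ranking one-minute segments by an "acoustic richness" score
# built from a weighted combination of per-minute acoustic indices.
import numpy as np

rng = np.random.default_rng(0)
n_minutes = 1440                      # one 24 h recording at 1 min resolution
indices = {                           # placeholder index values
    "signal_to_noise": rng.random(n_minutes),
    "temporal_entropy": rng.random(n_minutes),
    "spectral_persistence": rng.random(n_minutes),
}
weights = {"signal_to_noise": 0.3, "temporal_entropy": 0.3,
           "spectral_persistence": 0.4}

# Normalise each index to [0, 1] and combine with the chosen weights.
score = np.zeros(n_minutes)
for name, values in indices.items():
    v = (values - values.min()) / (values.max() - values.min())
    score += weights[name] * v

# Listen to minutes in descending order of acoustic richness.
ranked_minutes = np.argsort(score)[::-1]
print("First 10 minutes to sample:", ranked_minutes[:10])
```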
Abstract:
Acoustic recordings of the environment are an important aid to ecologists monitoring biodiversity and environmental health. However, rapid advances in recording technology, storage and computing make it possible to accumulate thousands of hours of recordings, of which ecologists can listen to only a small fraction. The big-data challenge is to visualize the content of long-duration audio recordings on multiple scales, from hours and days to months and years. The visualization should facilitate navigation and yield ecologically meaningful information. Our approach is to extract (at one-minute resolution) acoustic indices which reflect content of ecological interest. An acoustic index is a statistic that summarizes some aspect of the distribution of acoustic energy in a recording. We combine indices to produce false-colour images that reveal acoustic content and facilitate navigation through recordings that are months or even years in duration.
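A minimal sketch of the index-to-colour mapping, assuming one value per index per minute: three indices are assigned to the red, green and blue channels, producing one image column per minute. The data here are random placeholders, not real acoustic indices.

```python
# Sketch: mapping three per-minute acoustic indices onto the red, green
# and blue channels of a false-colour image, one column per minute.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
minutes = 1440                        # 24 h at one-minute resolution
rgb = rng.random((minutes, 3))        # one [0,1] value per index per minute

# Tile each minute's colour into a column so the day reads left to right.
image = np.repeat(rgb[np.newaxis, :, :], 120, axis=0)
plt.imshow(image, aspect="auto")
plt.xlabel("minute of day")
plt.yticks([])
plt.title("False-colour acoustic-index image (placeholder data)")
plt.savefig("false_colour_day.png", dpi=150)
```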
Abstract:
Which statistic would you use if you were writing the newspaper headline for the following media release: “Tassie’s rate of deaths arising from transport-related injuries was 13 per 100,000 people, or 50% higher than the national average”? (Martain, 2007). The rate “13 per 100,000” sounds very small, whereas “50% higher” sounds quite large. Most people are aware of the tendency to choose between reporting data as actual numbers or as percents in order to gain attention. Looking at examples like this one can help students develop a critical quantitative literacy viewpoint when dealing with “authentic contexts” (Australian Curriculum, Assessment and Reporting Authority [ACARA], 2013a, pp. 37, 67). The importance of the distinction between reporting information in raw numbers or percents is not explicitly mentioned in the Australian Curriculum: Mathematics (ACARA, 2013b, p. 42). Although the document specifically mentions making “connections between equivalent fractions, decimals and percentages” [ACMNA131] in Year 6, there is no mention of the fundamental relationship between percent and the raw numbers represented in a part-whole fashion. Such understanding, however, is fundamental to the problem solving that is the focus of the curriculum in Years 6 to 9. The purpose of this article is to raise awareness of the opportunities to distinguish between the use of raw numbers and percents when comparisons are being made in contexts other than the media. It begins with the authors’ experiences in the classroom, which motivated a search in the literature, followed by a suggestion for a follow-up activity.
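The two figures in the headline can be reconciled with one line of arithmetic, worked here in LaTeX; the national rate is not given in the release, so the value below is inferred from the quoted 50% excess.

```latex
% If Tasmania's rate is 13 per 100,000 and this is 50% above the
% national average r, then
\[
13 = 1.5\,r \quad\Longrightarrow\quad
r = \frac{13}{1.5} \approx 8.7 \text{ per } 100{,}000,
\]
% so the "50% higher" headline corresponds to only about 4.3 extra
% deaths per 100,000 people in raw-number terms.
```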
Abstract:
INTRODUCTION Dengue fever (DF) in Vietnam remains a serious emerging arboviral disease, which generates significant concern among international health authorities. Incidence rates of DF have increased significantly during the last few years in many provinces and cities, especially Hanoi. The purpose of this study was to detect DF hot spots and identify the dynamics and dispersion of DF over the period between 2004 and 2009 in Hanoi, Vietnam. METHODS Daily data on DF cases and population data for each postcode area of Hanoi between January 1998 and December 2009 were obtained from the Hanoi Center for Preventive Health and the General Statistics Office of Vietnam. Moran's I statistic was used to assess the spatial autocorrelation of reported DF. Spatial scan statistics and logistic regression were used to identify space-time clusters and the dispersion of DF. RESULTS The study revealed a clear trend of geographic expansion of DF transmission in Hanoi through the study periods (OR 1.17, 95% CI 1.02-1.34). The spatial scan statistics showed that 6/14 (42.9%) districts in Hanoi had significant cluster patterns, which lasted 29 days and were limited to a radius of 1,000 m. The study also demonstrated that most DF cases occurred between June and November, when rainfall and temperatures are highest. CONCLUSIONS There is evidence for the existence of statistically significant clusters of DF in Hanoi, and the geographical distribution of DF has expanded over recent years. This finding provides a foundation for further investigation into the social and environmental factors responsible for changing disease patterns, and provides data to inform program planning for DF control.
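Moran's I, the autocorrelation statistic named in the methods, is simple to compute directly. The Python sketch below does so for a handful of hypothetical postcode areas with a binary contiguity weight matrix; none of the numbers are from the study.

```python
# Sketch: Moran's I for spatial autocorrelation of dengue incidence
# across postcode areas. Incidence values and weights are hypothetical.
import numpy as np

def morans_i(x, w):
    """Moran's I for values x under spatial weight matrix w."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    n, s0 = len(z), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Five hypothetical postcode areas: incidence per 100,000.
incidence = np.array([120.0, 110.0, 95.0, 30.0, 25.0])
# Binary contiguity: 1 where two areas share a border.
w = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
])
print(f"Moran's I = {morans_i(incidence, w):.3f}")
# Values near +1 indicate clustering of similar incidence in adjacent
# areas; values near the expectation -1/(n-1) indicate no autocorrelation.
```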
Abstract:
Background & Aims Nutrition screening and assessment enable early identification of malnourished people and those at risk of malnutrition. Appropriate assessment tools assist with informing and monitoring nutrition interventions. Tool choice needs to be appropriate to the population and setting. Methods Community-dwelling people with Parkinson’s disease (>18 years) were recruited. Body mass index (BMI) was calculated from weight and height. Participants were classified as underweight according to World Health Organisation (WHO) (≤18.5 kg/m²) and age-specific (<65 years, ≤18.5 kg/m²; ≥65 years, ≤23.5 kg/m²) cut-offs. The Mini-Nutritional Assessment (MNA) screening (MNA-SF) and total assessment scores were calculated. The Patient-Generated Subjective Global Assessment (PG-SGA), including the Subjective Global Assessment (SGA), was performed. The sensitivity, specificity, positive predictive value, negative predictive value and weighted kappa statistic of each of the above compared to the SGA were determined. Results Median age of the 125 participants was 70.0 (range, 35-92) years. Age-specific BMI (Sn 68.4%, Sp 84.0%) performed better than WHO (Sn 15.8%, Sp 99.1%) categories. The MNA-SF performed better (Sn 94.7%, Sp 78.3%) than both BMI categorisations for screening purposes. The MNA had higher specificity but lower sensitivity than the PG-SGA (MNA Sn 84.2%, Sp 87.7%; PG-SGA Sn 100.0%, Sp 69.8%). Conclusions BMI lacks sensitivity to identify malnourished people with Parkinson’s disease and should be used with caution. The MNA-SF may be a better screening tool in people with Parkinson’s disease. The PG-SGA performed well and may assist with informing and monitoring nutrition interventions. Further research should be conducted to validate screening and assessment tools in Parkinson’s disease.
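The diagnostic accuracy measures reported above follow directly from a 2x2 table against the SGA reference standard. In the Python sketch below, the cell counts are hypothetical values back-calculated only to reproduce the reported MNA-SF sensitivity and specificity; the predictive values that result are illustrative, not the study's.

```python
# Sketch: diagnostic accuracy of a screening tool against SGA as the
# reference standard. Counts are hypothetical, chosen to match the
# reported MNA-SF Sn (94.7%) and Sp (78.3%) in a cohort of 125.
tp, fp, fn, tn = 18, 23, 1, 83

sensitivity = tp / (tp + fn)   # true positives among the malnourished
specificity = tn / (tn + fp)   # true negatives among the well-nourished
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
print(f"Sn = {sensitivity:.1%}, Sp = {specificity:.1%}, "
      f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```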
Abstract:
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performances of the models. It was concluded that the errors decrease after size reduction and that coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required lower computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
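A hybrid of this kind can be sketched with scikit-learn: PLS reduces the predictor set and an SVM regressor is then fitted on the PLS scores. This is a minimal sketch on synthetic data, assuming scikit-learn's PLSRegression and SVR; it is not the authors' implementation or their data.

```python
# Sketch: a hybrid PLS-SVM pipeline -- Partial Least Squares to reduce
# the predictors, then a Support Vector Machine regressor on the scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 12))       # e.g. meteorological predictors
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)  # CO proxy

X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

pls = PLSRegression(n_components=4).fit(X_train, y_train)
scores_train = pls.transform(X_train)  # reduced feature set
scores_test = pls.transform(X_test)

svr = SVR(kernel="rbf", C=10.0).fit(scores_train, y_train)
pred = svr.predict(scores_test)
print(f"RMSE = {mean_squared_error(y_test, pred) ** 0.5:.3f}, "
      f"R^2 = {r2_score(y_test, pred):.3f}")
```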
Abstract:
In this paper, we propose a new steganalytic method to detect a message hidden in a black-and-white image using the steganographic technique developed by Liang, Wang and Zhang. Our detection method estimates the length of the hidden message embedded in a binary image. Although the embedded message is visually imperceptible, it changes some image statistics (such as inter-pixel correlation). Based on this observation, we first derive the 512-pattern histogram from the boundary pixels as the distinguishing statistic, then we compute the histogram difference to determine the changes in the 512-pattern histogram induced by the embedding operation. Finally, we propose the histogram quotient to estimate the length of the embedded message. Experimental results confirm that the proposed method can effectively and reliably detect the length of the embedded message.
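The 512 bins correspond to the 2^9 possible 3x3 binary patterns. A Python sketch of the histogram construction follows; the toy image and the boundary-pixel criterion are simplifications assumed for illustration, not the paper's exact definitions.

```python
# Sketch: building the 512-bin histogram of 3x3 binary patterns around
# boundary pixels, the distinguishing statistic the abstract describes.
import numpy as np

rng = np.random.default_rng(1)
img = (rng.random((64, 64)) > 0.5).astype(int)   # toy binary image

hist = np.zeros(512, dtype=int)
for r in range(1, img.shape[0] - 1):
    for c in range(1, img.shape[1] - 1):
        block = img[r - 1:r + 2, c - 1:c + 2]
        # Treat a pixel as a boundary pixel if its 3x3 neighbourhood
        # contains both colours (a simplifying assumption).
        if block.min() != block.max():
            # Encode the 3x3 pattern as a 9-bit integer (one of 512).
            code = int("".join(map(str, block.flatten())), 2)
            hist[code] += 1

# Comparing this histogram before and after suspected embedding (the
# "histogram difference") underpins the length estimate in the paper.
print("non-empty patterns:", np.count_nonzero(hist))
```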
Abstract:
Mortality following hip arthroplasty is affected by a large number of confounding variables, each of which must be considered to enable valid interpretation. Relevant variables available from the 2011 NJR data set were included in the Cox model. Mortality rates in hip arthroplasty patients were lower than in the age-matched population across all hip types. Age at surgery, ASA grade, diagnosis, gender, provider type, hip type and lead surgeon grade all had a significant effect on mortality. Schemper's statistic showed that only 18.98% of the variation in mortality was explained by the variables available in the NJR data set. It is inappropriate to use NJR data to study an outcome affected by a multitude of confounding variables when these cannot be adequately accounted for in the available data set.
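A Cox model of this shape can be illustrated with the lifelines library; the sketch below uses synthetic data and a subset of stand-in covariates, not the NJR data set, and Schemper's statistic itself is not computed here.

```python
# Sketch: a Cox proportional hazards model with a few of the confounders
# the abstract lists. lifelines is assumed; the data frame is synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "age_at_surgery": rng.normal(70, 8, n),
    "asa_grade": rng.integers(1, 4, n),
    "female": rng.integers(0, 2, n),
    "time_years": rng.exponential(8, n),   # follow-up time
    "died": rng.integers(0, 2, n),         # event indicator
})

cph = CoxPHFitter().fit(df, duration_col="time_years", event_col="died")
cph.print_summary()
# Schemper's statistic (explained variation) is not part of lifelines;
# the 18.98% figure in the abstract would need a separate computation.
```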
Abstract:
This paper uses a nonstructural, ordered discrete choice model to measure the effects of various parent and child characteristics upon the independent caregiving decisions of the adult children of elderly parents sampled in the 1982 and 1984 National Long Term Care Survey (NLTCS). While significant effects are noted, emphasis is placed on test statistics constructed to measure the independence of caregiving decisions. The test statistic results are conclusive: The caregiving decisions of adult children are dependent across time and family members. Structural models taking dependencies among family members into account note effects similar to those in the nonstructural model.
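An ordered discrete choice model of the kind the paper estimates can be sketched with statsmodels' OrderedModel. Everything below is synthetic: the covariates are stand-ins for NLTCS parent and child characteristics, and no dependence across time or family members is modelled.

```python
# Sketch: an ordered logit model of caregiving intensity. Variables are
# synthetic stand-ins for NLTCS parent and child characteristics.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 800
parent_disability = rng.normal(size=n)
child_distance = rng.normal(size=n)
latent = 1.2 * parent_disability - 0.8 * child_distance + rng.logistic(size=n)
# Caregiving level: 0 = none, 1 = occasional, 2 = primary caregiver.
y = np.digitize(latent, [-1.0, 1.5])

X = np.column_stack([parent_disability, child_distance])
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```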
Abstract:
Acoustic recordings of the environment are an important aid to ecologists monitoring biodiversity and environmental health. However, rapid advances in recording technology, storage and computing make it possible to accumulate thousands of hours of recordings, of which ecologists can listen to only a small fraction. The big-data challenge addressed in this paper is to visualize the content of long-duration audio recordings on multiple scales, from hours and days to months and years. The visualization should facilitate navigation and yield ecologically meaningful information. Our approach is to extract (at one-minute resolution) acoustic indices which reflect content of ecological interest. An acoustic index is a statistic that summarizes some aspect of the distribution of acoustic energy in a recording. We combine indices to produce false-color images that reveal acoustic content and facilitate navigation through recordings that are months or even years in duration.
Abstract:
This thesis presents an empirical study of the effects of topology on cellular automata rule spaces. The classical definition of a cellular automaton is restricted to that of a regular lattice, often with periodic boundary conditions. This definition is extended to allow for arbitrary topologies. The dynamics of cellular automata within the triangular tessellation were analysed when transformed to 2-manifolds of topological genus 0, genus 1 and genus 2. Cellular automata dynamics were analysed from a statistical mechanics perspective. The sample sizes required to obtain accurate entropy calculations were determined by an entropy error analysis, which observed the error in the computed entropy against increasing sample sizes. Each cellular automata rule space was sampled repeatedly, and the selected cellular automata were simulated over many thousands of trials for each topology. This resulted in an entropy distribution for each rule space. The computed entropy distributions are indicative of the cellular automata dynamical class distribution. Through the comparison of these dynamical class distributions using the E-statistic, it was identified that such topological changes cause these distributions to alter. This is a significant result which implies that both global structure and local dynamics play an important role in defining the long-term behaviour of cellular automata.
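Comparing two entropy distributions with an E-statistic can be sketched using SciPy's one-dimensional energy distance. The entropy samples below are synthetic beta-distributed placeholders standing in for two topologies; the thesis' actual distributions are not reproduced.

```python
# Sketch: comparing the entropy distributions of a CA rule space on two
# topologies with the (1-D) energy distance, a form of E-statistic.
import numpy as np
from scipy.stats import energy_distance

rng = np.random.default_rng(11)
entropies_genus0 = rng.beta(2.0, 5.0, size=2000)  # e.g. sphere-like topology
entropies_genus2 = rng.beta(2.4, 5.0, size=2000)  # e.g. double-torus topology

d = energy_distance(entropies_genus0, entropies_genus2)
print(f"energy distance between entropy distributions: {d:.4f}")
# A clearly non-zero distance (judged against permutation resamples)
# would indicate that topology shifts the dynamical class distribution.
```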
Abstract:
Study Design Delphi panel and cohort study. Objective To develop and refine a condition-specific, patient-reported outcome measure, the Ankle Fracture Outcome of Rehabilitation Measure (A-FORM), and to examine its psychometric properties, including factor structure, reliability, and validity, by assessing item fit with the Rasch model. Background To our knowledge, there is no patient-reported outcome measure specific to ankle fracture with a robust content foundation. Methods A 2-stage research design was implemented. First, a Delphi panel that included patients and health professionals developed the items and refined the item wording. Second, a cohort study (n = 45) with 2 assessment points was conducted to permit preliminary maximum-likelihood exploratory factor analysis and Rasch analysis. Results The Delphi panel reached consensus on 53 potential items that were carried forward to the cohort phase. From the 2 time points, 81 questionnaires were completed and analyzed; 38 potential items were eliminated on account of greater than 10% missing data, factor loadings, and uniqueness. The 15 unidimensional items retained in the scale demonstrated appropriate person and item reliability after (and before) removal of 1 item (anxious about footwear) that had a higher-than-ideal outfit statistic (1.75). The “anxious about footwear” item was retained in the instrument, but only the 14 items with acceptable infit and outfit statistics (range, 0.5–1.5) were included in the summary score. Conclusion This investigation developed and refined the A-FORM (Version 1.0). The A-FORM items demonstrated favorable psychometric properties and are suitable for conversion to a single summary score. Further studies utilizing the A-FORM instrument are warranted. J Orthop Sports Phys Ther 2014;44(7):488–499. Epub 22 May 2014. doi:10.2519/jospt.2014.4980
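The outfit statistic that flagged the "anxious about footwear" item is a mean-square of standardized residuals. A minimal sketch for a dichotomous Rasch item follows; the abilities, difficulty and responses are simulated, and the A-FORM items themselves may be polytomous.

```python
# Sketch: the outfit mean-square statistic for one dichotomous Rasch
# item -- the mean of squared standardized residuals across persons.
import numpy as np

rng = np.random.default_rng(5)
theta = rng.normal(size=45)             # person abilities (n = 45 cohort)
b = 0.3                                 # item difficulty
p = 1.0 / (1.0 + np.exp(-(theta - b)))  # Rasch model probabilities
x = (rng.random(45) < p).astype(float)  # observed 0/1 responses

z = (x - p) / np.sqrt(p * (1 - p))      # standardized residuals
outfit = np.mean(z ** 2)
print(f"outfit mean-square = {outfit:.2f}")
# Values near 1.0 indicate good fit; the paper used a 0.5-1.5 window and
# flagged one item with outfit 1.75.
```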
Abstract:
‘Approximate Bayesian Computation’ (ABC) represents a powerful methodology for the analysis of complex stochastic systems for which the likelihood of the observed data under an arbitrary set of input parameters may be entirely intractable – the latter condition rendering useless the standard machinery of tractable likelihood-based, Bayesian statistical inference [e.g. conventional Markov chain Monte Carlo (MCMC) simulation]. In this paper, we demonstrate the potential of ABC for astronomical model analysis by application to a case study in the morphological transformation of high-redshift galaxies. To this end, we develop, first, a stochastic model for the competing processes of merging and secular evolution in the early Universe, and secondly, through an ABC-based comparison against the observed demographics of massive (Mgal > 10^11 M⊙) galaxies (at 1.5 < z < 3) in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS)/Extended Groth Strip (EGS) data set, we derive posterior probability densities for the key parameters of this model. The ‘Sequential Monte Carlo’ implementation of ABC exhibited herein, featuring both a self-generating target sequence and a self-refining MCMC kernel, is amongst the most efficient of contemporary approaches to this important statistical algorithm. We highlight as well, through our chosen case study, the value of careful summary statistic selection, and demonstrate two modern strategies for assessment and optimization in this regard. Ultimately, our ABC analysis of the high-redshift morphological mix returns tight constraints on the evolving merger rate in the early Universe and favours major merging (with disc survival or rapid reformation) over secular evolution as the mechanism most responsible for building up the first generation of bulges in early-type discs.
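The core ABC idea the abstract relies on can be shown in its simplest, rejection form: draw parameters from the prior, simulate, and keep draws whose summary statistic lands close to the observed one. The toy model, the tolerance and the summary statistic below are all assumptions for illustration; the paper itself uses a more sophisticated Sequential Monte Carlo variant.

```python
# Sketch: rejection ABC on a toy model of galaxy merging. The "model"
# and its single parameter are placeholders, not the paper's model.
import numpy as np

rng = np.random.default_rng(13)

def simulate(merger_rate, n=500):
    """Toy stochastic model: which of n galaxies appear 'merged'."""
    return rng.random(n) < merger_rate

observed = simulate(0.35)               # pretend these are the data
s_obs = observed.mean()                 # summary statistic of the data

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 1.0)       # draw from the prior
    s_sim = simulate(theta).mean()      # summary statistic of simulation
    if abs(s_sim - s_obs) < 0.02:       # keep draws that reproduce the data
        accepted.append(theta)

post = np.array(accepted)
print(f"posterior mean = {post.mean():.3f} (n accepted = {len(post)})")
```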