311 results for STATISTICAL METHODOLOGY
Abstract:
Matched case–control research designs can be useful because matching can increase power due to reduced variability between subjects. However, inappropriate statistical analysis of matched data could result in a change in the strength of association between the dependent and independent variables or a change in the significance of the findings. We sought to ascertain whether matched case–control studies published in the nursing literature utilized appropriate statistical analyses. Of 41 articles identified that met the inclusion criteria, 31 (76%) used an inappropriate statistical test for comparing data derived from case subjects and their matched controls. In response to this finding, we developed an algorithm to support decision-making regarding statistical tests for matched case–control studies.
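The abstract does not name the tests involved. As a minimal sketch, assuming 1:1 matched pairs with a binary exposure (hypothetical counts), the contrast between an appropriate paired analysis (McNemar's test on the discordant pairs) and an inappropriate unpaired chi-square test on the collapsed case/control table might look like this:

```python
# Hypothetical illustration (not from the article): for 1:1 matched binary data,
# a paired test such as McNemar's is appropriate, whereas an unpaired chi-square
# test ignores the matching.
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of pairs (rows: case exposed yes/no, columns: matched control exposed yes/no)
pair_table = [[15, 25],
              [10, 50]]

# Appropriate: McNemar's test uses only the discordant pairs (25 vs. 10).
print(mcnemar(pair_table, exact=True).pvalue)

# Inappropriate for matched data: unpaired chi-square on the collapsed table,
# which treats cases and controls as independent samples.
cases    = [40, 60]   # exposed / unexposed among the 100 cases
controls = [25, 75]   # exposed / unexposed among the 100 matched controls
print(chi2_contingency([cases, controls])[1])
```

The two analyses generally return different p-values, which is the kind of change in significance the abstract refers to.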
Abstract:
Understanding the effects of design interventions on the meanings people associate with landscapes is important to critical and ethical practice in landscape architecture. Case study research has become a common way researchers evaluate design interventions and related issues, with a standardised method promoted by the Landscape Architecture Foundation (LAF). However, the method is somewhat undeveloped for interpreting landscape meanings – something most commonly undertaken as historic landscape studies, but not as studies of design effect. This research proposes a new method for such interpretation, using a case study of Richard Haag’s radical 1971 proposal for a new kind of park on the site of the former Seattle gas works.
Abstract:
When a community already torn by an event such as a prolonged war is then hit by a natural disaster, the negative impact of this subsequent disaster can be extremely devastating in the longer term. Natural disasters further damage already destabilised and demoralised communities, making it much harder for them to be resilient and to recover. Communities often face enormous challenges during the immediate recovery and the subsequent long-term reconstruction periods, mainly due to the lack of a viable community involvement process. In post-war settings, affected communities, including those internally displaced, are often assumed to be completely incapacitated and are hardly ever consulted when reconstruction projects are being instigated. This lack of community involvement often leads to poor project planning, decreased community support, and an unsustainable completed project. The impact of war, coupled with the tensions created by uninhabitable and poor housing provision, often hinders the affected residents from integrating permanently into their home communities. This paper outlines a number of fundamental factors that act as barriers to community participation following natural disasters in post-war settings. The paper is based on a statistical analysis of, and findings from, a questionnaire survey administered in early 2012 in Afghanistan.
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
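The abstract does not reproduce the model equations; under the standard zero-term forms of the Poisson and negative binomial distributions (an assumption here, not necessarily the exact parameterisations of models [1]-[3]), the number of sample units needed to reach a target probability of detecting at least one insect can be sketched as follows:

```python
# Hedged sketch (standard textbook forms, not necessarily the models compared in the paper):
# probability of detecting at least one insect in n sample units, given a mean density m
# per unit, under Poisson and negative binomial (clumping parameter k) distributions.
import math

def samples_needed(p_target, p_zero_one_sample):
    # Smallest n with 1 - p0**n >= p_target, where p0 = P(no insect in one sample unit).
    return math.ceil(math.log(1.0 - p_target) / math.log(p_zero_one_sample))

m, k = 0.05, 0.3                                     # illustrative density and clumping values
p0_poisson = math.exp(-m)                            # Poisson zero term
p0_negbin  = (1.0 + m / k) ** (-k)                   # negative binomial zero term
print(samples_needed(0.95, p0_poisson))              # samples needed if insects are random
print(samples_needed(0.95, p0_negbin))               # samples needed if insects are clumped
```

Clumping (small k) inflates the zero-count probability, so more samples are needed than under the Poisson assumption at the same mean density.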
Abstract:
Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It relies on response measurements of the underlying structure under ambient loads, and a valid OMA method requires the excitation to be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research applied a modified kurtosis index to evaluate the statistical distribution of raw measurement data, and a windowing strategy employing this index is proposed to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and of a real cable-stayed bridge were considered, with frequency domain decomposition (FDD) as the target OMA approach for modal identification. The modal identification results obtained using data segments with different degrees of randomness were compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumptions of an OMA method, special care should be taken in processing long vibration measurement records. The proposed data selection strategy is easy to apply and was verified to be effective in modal analysis.
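The exact definition of the modified kurtosis index is not given in the abstract; a minimal sketch of the general idea, assuming the index measures how far each data window departs from Gaussian kurtosis (value 3), could look like this:

```python
# Hedged sketch (the paper's exact "modified kurtosis" index is not stated in the abstract):
# score each window of an ambient-vibration record by how far its sample kurtosis is from
# the Gaussian value of 3, and keep the best-scoring windows for modal identification.
import numpy as np
from scipy.stats import kurtosis

def select_windows(signal, win_len, n_keep):
    n_win = len(signal) // win_len
    windows = signal[:n_win * win_len].reshape(n_win, win_len)
    # fisher=False returns "raw" kurtosis, which equals 3 for Gaussian data
    scores = np.abs(kurtosis(windows, axis=1, fisher=False) - 3.0)
    best = np.argsort(scores)[:n_keep]
    return windows[best], scores[best]

rng = np.random.default_rng(0)
record = rng.standard_normal(60_000)            # stand-in for a measured acceleration record
segments, scores = select_windows(record, 6_000, 3)
print(scores)
```

The selected segments would then be passed to the FDD (or another OMA) routine in place of the full record.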
Abstract:
This thesis explored the development of statistical methods to support the monitoring and improvement of the quality of treatment delivered to patients undergoing coronary angioplasty procedures. To achieve this goal, a suite of outcome measures was identified to characterise the performance of the service, statistical tools were developed to monitor the various indicators, and measures to strengthen governance processes were implemented and validated. Although this work focused on the pursuit of these aims in the context of an angioplasty service located at a single clinical site, development of the tools and techniques was undertaken mindful of their potential application to other clinical specialties and a wider, potentially national, scope.
Abstract:
Aim: To describe the recruitment, ophthalmic examination methods and distribution of ocular biometry of participants in the Norfolk Island Eye Study, who were individuals descended from the English Bounty mutineers and their Polynesian wives. Methods: All 1,275 permanent residents of Norfolk Island aged over 15 years were invited to participate, including 602 individuals involved in a 2001 cardiovascular disease study. Participants completed a detailed questionnaire and underwent a comprehensive eye assessment including stereo disc and retinal photography, optical coherence tomography and conjunctival autofluorescence assessment. Additionally, blood or saliva was taken for DNA testing. Results: 781 participants aged over 15 years were seen (54% female), comprising 61% of the permanent Island population. 343 people (43.9%) could trace their family history to the Pitcairn Islanders (Norfolk Island Pitcairn Pedigree). Mean anterior chamber depth was 3.32 mm, mean axial length (AL) was 23.5 mm, and mean central corneal thickness was 546 microns. There were no statistically significant differences in these characteristics between persons with and without Pitcairn Island ancestry. Mean intra-ocular pressure was lower in people with Pitcairn Island ancestry than in those without (15.89 mmHg vs. 16.49 mmHg, P = .007). The mean keratometry value was also lower in people with Pitcairn Island ancestry (43.22 vs. 43.52, P = .007); the corneas were flatter in people of Pitcairn ancestry, but there was no corresponding difference in AL or refraction. Conclusion: Our study population is highly representative of the permanent population of Norfolk Island. Ocular biometry was similar to that of other white populations. Heritability estimates, linkage analysis and genome-wide studies will further elucidate the genetic determinants of chronic ocular diseases in this genetic isolate.
Abstract:
Purpose: A knowledge-based urban development needs to be sustainable and, therefore, requires ecological planning strategies to ensure a better quality of its services. The purpose of this paper is to present an innovative approach for monitoring the sustainability of urban services and to help policy-making authorities revise current planning and development practices towards more effective solutions. The paper introduces a new assessment tool, the Micro-level Urban-ecosystem Sustainability IndeX (MUSIX), which provides a quantitative measure of urban sustainability in a local context. Design/methodology/approach: A multi-method research approach was employed in constructing the MUSIX. Qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select indicators, and quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. Findings/results: MUSIX was tested in a pilot study site and provided information on the main environmental impacts arising from rapid urban development and population growth. On that basis, key ecological planning strategies were recommended to guide the preparation and assessment of development and local area plans. Research limitations/implications: This study provides fundamental information that assists developers, planners and policy-makers to investigate the multidimensional nature of sustainability at the local level by capturing the environmental pressures and their driving forces in highly developed urban areas. Originality/value: This study measures the sustainability of urban development plans by providing data analysis and interpretation of results in a new spatial data unit.
Abstract:
Nitrous oxide emissions from soil are known to be spatially and temporally volatile. Reliable estimation of emissions over a given time and space depends on measuring with sufficient intensity, but deciding on the number of measuring stations and the frequency of observation can be vexing. The question also arises of whether low-frequency manual observations provide results comparable to high-frequency automated sampling. Data collected from a replicated field experiment were studied intensively with the intention of giving statistically robust guidance on these issues. In the experiment, nitrous oxide soil-to-air flux was monitored within 10 m by 2.5 m plots by automated closed chambers at an average sampling interval of 3 h and by manual static chambers at an average sampling interval of three days, over sixty days. Trends in flux over time observed by the static chambers were mostly within the auto-chamber bounds of experimental error, and cumulated nitrous oxide emissions as measured by each system were also within error bounds. Under the temporal response pattern in this experiment, no significant loss of information was observed after culling the data to simulate results under various low-frequency scenarios. Within the confines of this experiment, observations from the manual chambers were not spatially correlated above distances of 1 m; statistical power was therefore found to improve with increased replicates per treatment or chambers per replicate. Careful after-action review of experimental data can deliver savings for future work.
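As a rough sketch of the kind of comparison described (illustrative values only, not the experimental data), cumulative emission can be computed by trapezoidal integration of the flux time series and compared with the total obtained after culling the record to a lower sampling frequency:

```python
# Hedged sketch: cumulate a nitrous oxide flux time series by trapezoidal integration,
# then compare the total from the full 3-hourly record with the total after culling it
# to roughly one observation every three days.
import numpy as np

hours = np.arange(0, 60 * 24, 3)                          # 60 days at a 3 h interval
rng = np.random.default_rng(1)
flux = rng.gamma(2.0, 5.0, size=hours.size)               # illustrative flux values

def cumulative_emission(t_hours, flux_values):
    return np.trapz(flux_values, t_hours)                 # flux units x hours

full = cumulative_emission(hours, flux)
culled = cumulative_emission(hours[::24], flux[::24])     # every 24th reading = every 3 days
print(full, culled, 100 * (culled - full) / full)
```

Repeating the culling at different offsets and frequencies gives a simple picture of how much information is lost at lower sampling intensities.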
Abstract:
Research into the human dynamics of expeditions is a potentially rewarding and fruitful area of study. However, the complex nature of expedition work presents the researcher with numerous challenges. This paper presents a personal reflection on the challenges linked to determining appropriate methodological processes for a study into expedition teamwork. Previous expedition research is outlined and reviewed for appropriateness, some alternative methodological theories are described and their limitations highlighted, and lastly the actual data gathering and analysis processes are detailed. The aim is to show that what happened in the field inevitably dictated how methodological processes were adapted. Essentially, the paper describes a personal journey into research, one that sparked numerous personal insights into the science of human dynamics and expeditions and one that I hope will add to the development of expedition research in general.
Abstract:
Electrostatic discharges have been identified as the most likely cause in a number of incidents of fire and explosion with unexplained ignitions. The lack of data and suitable models for this ignition mechanism creates a void in any analysis seeking to quantify the importance of static electricity as a credible ignition mechanism. Quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be carried out in full with our current understanding of this phenomenon. The study of electrostatics has been ongoing for a long time; however, it was not until the widespread use of electronics that research was developed for the protection of electronics from electrostatic discharges. Current experimental models for electrostatic discharge, developed for the intrinsic safety of electronics, are inadequate for ignition analysis and typically are not supported by theoretical analysis. A preliminary simulation and a low-voltage experiment were designed to investigate the characteristics of energy dissipation and provided a basis for a high-voltage investigation. At low voltage, the discharge energy represented about 10% of the initial capacitive energy available, and the energy dissipation occurred within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of the energy is dissipated; once the discharge pathway is established, minimal energy is dissipated, as energy dissipation becomes greatly influenced by other components and stray resistance in the discharge circuit. From the initial low-voltage simulation work, the importance of the energy dissipation and the characteristics of the discharge were determined. After the preliminary low-voltage work was completed, a high-voltage discharge experiment was designed and fabricated. Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be recorded and the energy dissipation in the discharge circuit to be calculated. Discharge energy calculations show consistency with the low-voltage work, with about 30-40% of the total initial capacitive energy being discharged in the resulting high-voltage arc. After the system was characterised and its operation validated, high-voltage ignition energy measurements were conducted on n-pentane evaporating in a 250 cm3 chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-pentane. The data from the ignition work were analysed with standard statistical regression methods for tests that return binary (yes/no) data and were found to be in agreement with recent publications. The research demonstrates that energy dissipation is heavily dependent on the circuit configuration, most especially on the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology to further research into different materials and atmospheres, by systematically examining the discharge profiles of test materials under various parameters (e.g. capacitance, inductance and resistance). Systematic experiments examining the discharge characteristics of the spark will also help in understanding how energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of energy and the dissipation of that energy in an electrostatic discharge.
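As a hedged illustration of the quantities involved (not the thesis code or data), the stored capacitive energy is E = ½CV², and binary ignition outcomes of the kind described can be analysed with a logistic regression to estimate the discharge energy at which the ignition probability reaches 50%:

```python
# Hedged sketch with made-up data: capacitive stored energy and a logistic regression on
# binary ignition outcomes, of the general kind the abstract describes for n-pentane.
import numpy as np
import statsmodels.api as sm

def stored_energy_joules(capacitance_F, voltage_V):
    return 0.5 * capacitance_F * voltage_V ** 2           # E = 1/2 * C * V^2

# Illustrative (hypothetical) test results: discharge energy in mJ and ignition outcome
energy_mJ = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60, 0.80])
ignited   = np.array([0,    0,    0,    1,    0,    1,    1,    1,    1,    1   ])

X = sm.add_constant(np.log(energy_mJ))
fit = sm.Logit(ignited, X).fit(disp=False)
b0, b1 = fit.params
print(np.exp(-b0 / b1))    # energy (mJ) at 50% ignition probability
```

Since only a fraction of the stored energy (about 10% at low voltage and 30-40% in the high-voltage arc, per the abstract) is actually dissipated, the regression should be run against the measured discharge energy rather than the stored energy.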
Abstract:
This thesis explored the knowledge and reasoning of young children in solving novel statistical problems, and the influence of problem context and design on their solutions. It found that young children's statistical competencies are underestimated, and that problem design and context facilitated the children's application of a wide range of knowledge and reasoning skills, none of which had been taught. A qualitative design-based research method, informed by the Models and Modeling perspective (Lesh & Doerr, 2003), underpinned the study. Data modelling activities incorporating picture story books were used to contextualise the problems. Children applied real-world understanding to problem solving, including attribute identification, categorisation and classification skills. Intuitive and metarepresentational knowledge, together with inductive and probabilistic reasoning, were used to make sense of data, and a beginning awareness of statistical variation and informal inference was visible.
Abstract:
Indirect inference (II) is a methodology for estimating the parameters of an intractable (generative) model on the basis of an alternative parametric (auxiliary) model that is both analytically and computationally easier to deal with. Such an approach has been well explored in the classical literature but has received substantially less attention in the Bayesian paradigm. The purpose of this paper is to compare and contrast a collection of what we call parametric Bayesian indirect inference (pBII) methods. One class of pBII methods uses approximate Bayesian computation (referred to here as ABC II) where the summary statistic is formed on the basis of the auxiliary model, using ideas from II. Another approach proposed in the literature, referred to here as parametric Bayesian indirect likelihood (pBIL), we show to be a fundamentally different approach to ABC II. We devise new theoretical results for pBIL to give extra insights into its behaviour and also its differences with ABC II. Furthermore, we examine in more detail the assumptions required to use each pBII method. The results, insights and comparisons developed in this paper are illustrated on simple examples and two other substantive applications. The first of the substantive examples involves performing inference for complex quantile distributions based on simulated data while the second is for estimating the parameters of a trivariate stochastic process describing the evolution of macroparasites within a host based on real data. We create a novel framework called Bayesian indirect likelihood (BIL) which encompasses pBII as well as general ABC methods so that the connections between the methods can be established.
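As a minimal sketch of the ABC II idea (illustrative toy models, not those used in the paper), the summary statistic fed to a rejection-ABC scheme is the fitted parameter vector of a tractable auxiliary model rather than a hand-picked statistic:

```python
# Hedged sketch: rejection ABC in which the summary statistic is the fitted parameter of a
# simple auxiliary model (here a Gaussian), in the spirit of the ABC II methods discussed.
import numpy as np

rng = np.random.default_rng(2)

def simulate_intractable(theta, n):
    # stand-in generative model; in practice this is the intractable simulator
    return rng.gamma(shape=theta, scale=1.0, size=n)

def auxiliary_fit(data):
    # auxiliary (tractable) model: Gaussian; its MLEs serve as the summary statistic
    return np.array([data.mean(), data.std()])

observed = simulate_intractable(3.0, 200)
s_obs = auxiliary_fit(observed)

accepted = []
for _ in range(20_000):
    theta = rng.uniform(0.5, 10.0)                  # draw from a uniform prior
    s_sim = auxiliary_fit(simulate_intractable(theta, 200))
    if np.linalg.norm(s_sim - s_obs) < 0.1:         # ABC tolerance on the auxiliary summaries
        accepted.append(theta)

print(len(accepted), np.mean(accepted))
```

The pBIL approach referred to in the abstract instead works with the auxiliary likelihood itself, which is part of why the authors treat it as fundamentally different from ABC II.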
Abstract:
Construction works are project-based and interdisciplinary, and many construction management (CM) problems are ill defined. The knowledge required to address such problems is not readily available and is mostly tacit in nature. Moreover, researchers, and especially students in higher education, often face difficulty in defining the research problem and adopting an appropriate research process and methodology for designing and validating their research. This paper describes a 'Horseshoe' research process approach and its application to a research problem of extracting construction-relevant information from a building information model (BIM). It describes the different steps of the process for understanding a problem, formulating appropriate research questions and defining the different research tasks, including a methodology for developing, implementing and validating the research. It is argued that a structured research approach and the use of mixed research methods provide a sound basis for research design and validation, in order to make a contribution to existing knowledge.
Abstract:
Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution. Many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature to rapidly obtain samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling will tend to break down if there is a reasonable number of experimental observations and/or the model parameter is high dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution to obtain a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near optimal plasma sampling times which produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.
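A minimal sketch of the idea (a toy exponential-decay model and hypothetical sampling times, not the pharmacokinetic model from the study): fit a Laplace approximation to the posterior for one prior-predictive data set and use it, rather than the prior, as the importance distribution:

```python
# Hedged sketch: Laplace approximation to a posterior, used as the importance distribution
# for estimating posterior quantities for one simulated data set in a design evaluation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, multivariate_normal

def log_post(theta, y, times):
    mu = theta[0] * np.exp(-theta[1] * times)            # toy exponential-decay mean (assumed)
    loglik = norm.logpdf(y, loc=mu, scale=0.1).sum()
    logprior = norm.logpdf(theta, loc=[2.0, 0.5], scale=[1.0, 0.25]).sum()
    return loglik + logprior

times = np.array([0.25, 0.5, 1.0, 2.0, 4.0])             # candidate sampling times (hypothetical)
rng = np.random.default_rng(3)
y = 2.0 * np.exp(-0.5 * times) + rng.normal(0, 0.1, times.size)   # one prior-predictive draw

# Laplace approximation: posterior mode plus a curvature-based covariance (BFGS inverse Hessian)
res = minimize(lambda th: -log_post(th, y, times), x0=np.array([2.0, 0.5]))
mode, cov = res.x, res.hess_inv

# Importance sampling from the Laplace approximation instead of the prior
draws = rng.multivariate_normal(mode, cov, size=5_000)
log_w = np.array([log_post(d, y, times) for d in draws])
log_w -= multivariate_normal.logpdf(draws, mean=mode, cov=cov)
w = np.exp(log_w - log_w.max())
w /= w.sum()
print((w[:, None] * draws).sum(axis=0))                  # weighted posterior mean estimate
```

In a design search this step is repeated over many prior-predictive data sets and candidate designs, with the weighted draws plugged into whichever utility function is of interest.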