437 results for Statistical hypothesis testing.
Abstract:
Many researchers in the field of civil structural health monitoring have developed and tested their methods on simple to moderately complex laboratory structures such as beams, plates, frames, and trusses. Field work has also been conducted by many researchers and practitioners on more complex operating bridges. However, most laboratory structures do not adequately replicate the complexity of truss bridges. This paper presents some preliminary results of experimental modal testing and analysis of the bridge model presented in the companion paper, using the peak picking method, and compares these results with those of a simple numerical model of the structure. Three dominant modes of vibration were experimentally identified below 15 Hz. The mode shapes and order of the modes matched those of the numerical model; however, the frequencies did not match.
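To make the peak-picking idea concrete, here is a minimal sketch (not the authors' code, and on synthetic data): natural frequencies are read off as local maxima of a frequency response function (FRF) magnitude in the 0-15 Hz band; the mode frequencies and damping value below are invented.

```python
# Minimal sketch of peak picking for modal identification (hypothetical data).
import numpy as np
from scipy.signal import find_peaks

# Synthetic FRF magnitude on a 0-15 Hz band: three lightly damped modes,
# standing in for a measured accelerometer/shaker FRF.
freqs = np.linspace(0.0, 15.0, 1500)
frf = (1.0 / np.abs(4.2**2 - freqs**2 + 0.3j * freqs)
       + 1.0 / np.abs(7.8**2 - freqs**2 + 0.3j * freqs)
       + 1.0 / np.abs(12.5**2 - freqs**2 + 0.3j * freqs))

# Pick resonance peaks; a prominence threshold suppresses noise-induced maxima.
peaks, _ = find_peaks(frf, prominence=0.001)
print("Identified natural frequencies (Hz):", freqs[peaks])
```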
Abstract:
Urbanisation significantly changes the characteristics of a catchment as natural areas are transformed to impervious surfaces such as roads, roofs and parking lots. The increased fraction of impervious surfaces leads to changes to the stormwater runoff characteristics, whilst a variety of anthropogenic activities common to urban areas generate a range of pollutants such as nutrients, solids and organic matter. These pollutants accumulate on catchment surfaces and are removed and transported by stormwater runoff, thereby contributing pollutant loads to receiving waters. In summary, urbanisation influences the stormwater characteristics of a catchment, including hydrology and water quality. Due to the growing recognition that stormwater pollution is a significant environmental problem, the implementation of mitigation strategies to improve the quality of stormwater runoff is becoming increasingly common in urban areas. A scientifically robust stormwater quality treatment strategy is an essential requirement for effective urban stormwater management. The efficient design of treatment systems is closely dependent on the state of knowledge in relation to the primary factors influencing stormwater quality. In this regard, stormwater modelling outcomes provide designers with important guidance and datasets which significantly underpin the design of effective stormwater treatment systems. Therefore, the accuracy of modelling approaches and the reliability of modelling outcomes are of particular concern. This book discusses the inherent complexity and key characteristics in the areas of urban hydrology and stormwater quality, based on the influence exerted by a range of rainfall and catchment characteristics. A comprehensive field sampling and testing programme in relation to pollutant build-up, an urban catchment monitoring programme in relation to stormwater quality and the outcomes from advanced statistical analyses provided the platform for the knowledge creation. Two case studies and two real-world applications are discussed to illustrate the translation of the knowledge created to practical use in relation to the role of rainfall and catchment characteristics on urban stormwater quality. An innovative rainfall classification based on stormwater quality was developed to support the effective and scientifically robust design of stormwater treatment systems (illustrated in the sketch following this abstract). Underpinned by the rainfall classification methodology, a reliable approach for design rainfall selection is proposed in order to optimise stormwater treatment based on both stormwater quality and quantity. This is a paradigm shift from the common approach, in which stormwater treatment systems are designed based solely on stormwater quantity data. Additionally, how pollutant build-up and stormwater runoff quality vary with a range of catchment characteristics was also investigated. Based on the study outcomes, it can be concluded that the use of only a limited number of catchment parameters such as land use and impervious surface percentage, as is the case in current modelling approaches, could result in appreciable error in water quality estimation. Influential factors which should be incorporated into modelling in relation to catchment characteristics should also include urban form and impervious surface area distribution.
The knowledge created through the research investigations discussed in this monograph is expected to make a significant contribution to engineering practice such as hydrologic and stormwater quality modelling, stormwater treatment design and urban planning, as the study outcomes provide practical approaches and recommendations for urban stormwater quality enhancement. Furthermore, this monograph also demonstrates how fundamental knowledge of stormwater quality processes can be translated to provide guidance on engineering practice, the comprehensive application of multivariate data analysis techniques, and a paradigm for the integrative use of computer models and mathematical models to derive practical outcomes.
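As a rough illustration of the rainfall-classification idea mentioned above, the following sketch clusters rainfall events by depth, intensity and duration; the features, data and cluster count are invented, and the monograph's actual methodology, which ties classes to stormwater quality, is considerably more sophisticated.

```python
# Toy illustration of rainfall classification for treatment design
# (hypothetical event features; not the monograph's actual method).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row: [total depth (mm), mean intensity (mm/h), duration (h)] per event.
events = np.column_stack([
    rng.gamma(2.0, 10.0, 200),   # depth
    rng.gamma(2.0, 3.0, 200),    # intensity
    rng.gamma(2.0, 2.0, 200),    # duration
])

# Cluster events into classes a designer could map to expected pollutant
# wash-off behaviour (e.g. short intense events vs. long mild events).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(events)
for k in range(3):
    c = kmeans.cluster_centers_[k]
    print(f"class {k}: depth={c[0]:.1f} mm, "
          f"intensity={c[1]:.1f} mm/h, duration={c[2]:.1f} h")
```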
Abstract:
While the implementation of the IEC 61850 standard has significantly enhanced the performance of communications in electrical substations, it has also increased the complexity of the system. Subsequently, these added elaborations have introduced new challenges in relation to the skills and tools required for the design, testing and maintenance of 61850-compatible substations. This paper describes a practical experience of testing a protection relay using non-conventional test equipment; in addition, it proposes a third-party software technique to reveal the contents of the packets transferred on the substation network. Using this approach, the standard objects can be linked to, and interpreted in terms of, what end-users normally see in the IED and test equipment proprietary software programs.
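As an illustration of packet-level inspection on a substation network, the sketch below uses Scapy, a generic third-party packet tool (not necessarily the software used in the paper), to capture IEC 61850 GOOSE frames, which are published on raw Ethernet with EtherType 0x88B8; the interface name is a placeholder.

```python
# Hedged sketch: capturing GOOSE frames with Scapy. Requires capture
# privileges; "eth0" is an assumed interface name.
from scapy.all import sniff, Ether

def show_goose(pkt):
    # Print source MAC and raw payload length; fully decoding the goosePdu
    # would require an ASN.1/BER parser, which is beyond this sketch.
    print(f"GOOSE from {pkt[Ether].src}, {len(bytes(pkt.payload))} payload bytes")

# BPF filter selects the GOOSE EtherType (0x88B8).
sniff(iface="eth0", filter="ether proto 0x88b8", prn=show_goose, count=10)
```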
Abstract:
Introduction Different types of hallucinations are symptomatic of different conditions. Schizotypal hallucinations are unique in that they follow existing delusional narrative patterns: they are often bizarre, they are generally multimodal, and they are particularly vivid (the experience of a newsreader abusing you personally over the TV is both visual and aural; patients who feel and hear silicone chips under their skin suffer from haptic as well as aural hallucinations, etc.). Although there are a number of hypotheses for hallucinations, few cogently grapple with the sheer bizarreness of the ones experienced in schizotypal psychosis. Methods A review-based hypothesis, traversing theory from the molecular level to phenomenological expression as a distinct and recognizable symptomatology. Conclusion Hallucinations appear to be caused by a two-fold dysfunction in the mesofrontal dopamine pathway, which is considered here to mediate attention of different types: in the anterior medial frontal lobe, the receptors (largely D1 type) mediate declarative awareness, whereas the receptors in the striatum (largely D2 type) mediate latent awareness of known schemata. In healthy perception, most of the perceptual load is carried by the latter, the top-down predictive and mimetic engine, with the bottom-up mechanism being used as a secondary tool to bring conscious deliberation to stimuli that fail to match up against expectations. In schizophrenia, the predictive mode is over-stimulated, while the bottom-up feedback mechanism atrophies. The dysfunctional distribution pattern effectively confines dopamine activity to the striatum, thereby stimulating the structural components of thought and behaviour: well-learned routines, narrative structures, lexica, grammar, schemata, archetypes, and other procedural resources. Meanwhile, the loss of activity in the frontal complex reduces the capacity for declarative awareness and for processing anything that fails to meet expectations.
Abstract:
BACKGROUND Law is increasingly involved in clinical practice, particularly at the end of life, but undergraduate and postgraduate education in this area remains unsystematic. We hypothesised that attitudes to and knowledge of the law governing withholding/withdrawing treatment from adults without capacity (the WWLST law) would vary and demonstrate deficiencies among medical specialists. AIMS We investigated perspectives, knowledge and training of medical specialists in the three largest Australian states (by population and medical workforce) concerning the WWLST law. METHODS Following expert legal review, specialist focus groups, pre-testing and piloting in each state, seven specialties involved with end-of-life care were surveyed, with a variety of statistical analyses applied to the responses. RESULTS Respondents supported the need to know and follow the law. There were mixed views about its helpfulness in medical decision-making. Over half the respondents conceded poor knowledge of the law; this was mirrored by critical gaps in knowledge that varied by specialty. There were relatively low but increasing rates of education from the undergraduate to continuing professional development (CPD) stages. Mean knowledge score did not vary significantly according to undergraduate or immediate postgraduate training, but CPD training, particularly if recent, resulted in greater knowledge. Case-based workshops were the preferred CPD instruction method. CONCLUSIONS Teaching of current and evolving law should be strengthened across all stages of medical education. This should improve understanding of the role of law, ameliorate ambivalence towards the law, and contribute to more informed deliberation about end-of-life issues with patients and families.
Abstract:
Provides an accessible foundation to Bayesian analysis using real-world models. This book aims to present an introduction to Bayesian modelling and computation by considering real case studies drawn from diverse fields spanning ecology, health, genetics and finance. Each chapter comprises a description of the problem, the corresponding model, the computational method, results and inferences, as well as the issues that arise in the implementation of these approaches. Case Studies in Bayesian Statistical Modelling and Analysis:
• Illustrates how to do Bayesian analysis in a clear and concise manner using real-world problems.
• Devotes each chapter to a real-world problem and describes the way in which the problem may be analysed using Bayesian methods.
• Features approaches that can be used in a wide range of applications, such as health, the environment, genetics, information science, medicine, biology, industry and remote sensing.
Case Studies in Bayesian Statistical Modelling and Analysis is aimed at statisticians, researchers and practitioners who have some expertise in statistical modelling and analysis, and some understanding of the basics of Bayesian statistics, but little experience in its application. Graduate students of statistics and biostatistics will also find this book beneficial.
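In the spirit of the book's case studies, here is a minimal Bayesian example on made-up data: a conjugate Beta-Binomial model for a proportion, where the posterior is available in closed form and no sampling is needed.

```python
# Minimal Bayesian updating sketch (hypothetical data and prior).
from scipy import stats

prior_a, prior_b = 1.0, 1.0        # uniform Beta(1, 1) prior on a proportion
successes, trials = 37, 120        # invented observed data

# Conjugacy: Beta prior + Binomial likelihood -> Beta posterior.
post = stats.beta(prior_a + successes, prior_b + trials - successes)
lo, hi = post.ppf(0.025), post.ppf(0.975)
print(f"posterior mean = {post.mean():.3f}, "
      f"95% credible interval = ({lo:.3f}, {hi:.3f})")
```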
Abstract:
Background In the emergency department, portable point-of-care testing (POCT) coagulation devices may facilitate stroke patient care by providing rapid International Normalized Ratio (INR) measurement. The objective of this study was to evaluate the reliability, validity, and impact on clinical decision-making of a POCT device for INR testing in the setting of acute ischemic stroke (AIS). Methods A total of 150 patients (50 healthy volunteers, 51 anticoagulated patients, 49 AIS patients) were assessed in a tertiary care facility. INRs were measured using the Roche CoaguChek S and the standard laboratory technique. Results The intraclass correlation coefficient between overall POCT device and standard laboratory INRs was high, at 0.932 (95% CI 0.69-0.78). In the AIS group alone, the correlation coefficient was also high, at 0.937 (95% CI 0.59-0.74), and the diagnostic accuracy of the POCT device was 94%. Conclusions When used by a trained health professional in the emergency department to assess INR in acute ischemic stroke patients, the CoaguChek S is reliable and provides rapid results. However, as concordance with laboratory INR values decreases with higher INR values, it is recommended that for CoaguChek S INRs in the > 1.5 range, a standard laboratory measurement be used to confirm the results.
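The abstract reports agreement between POCT and laboratory INRs as an intraclass correlation. One standard formulation is the two-way random-effects ICC(2,1) of Shrout and Fleiss, sketched below on invented paired measurements (not the study's data).

```python
# Sketch: ICC(2,1) for device-vs-laboratory agreement (hypothetical data).
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, single-measure ICC (Shrout & Fleiss ICC(2,1))."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape                      # subjects x measurement methods
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                                   # error
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Invented paired INRs: column 0 = POCT device, column 1 = laboratory.
inr = np.array([[1.1, 1.0], [2.3, 2.4], [3.0, 2.8], [1.5, 1.6], [2.0, 2.1]])
print(f"ICC(2,1) = {icc_2_1(inr):.3f}")
```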
Abstract:
Background The aim of this study was to compare surface electromyographic (sEMG) recordings of maximum voluntary contraction (MVC) on dry land and in water, obtained by manual muscle testing (MMT). Method Sixteen healthy right-handed subjects (8 males and 8 females) participated in measurement of muscle activation of the right shoulder. The selected muscles were the cervical erector spinae, trapezius, pectoralis, anterior deltoid, middle deltoid, infraspinatus and latissimus dorsi. The order of the MVC test conditions (on land/in water) was randomised. Results For each muscle, the MVC test was performed and measured through sEMG to determine differences in muscle activation in both conditions. For all muscles except the latissimus dorsi, no significant differences were observed between land and water MVC scores (p = 0.063-0.679), and good precision (%Diff = 7-10%) was observed between MVC conditions for the trapezius, anterior deltoid and middle deltoid. Conclusions Provided the procedure for data collection is optimal, it appears that comparable MVC sEMG values can be achieved on land and in water under MMT conditions, and that the integrity of the EMG recordings is maintained during water immersion.
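The abstract does not name the statistical test used; for paired land/water MVC values from the same subjects, a Wilcoxon signed-rank test is one common nonparametric choice. The sketch below uses invented amplitudes for a single muscle.

```python
# Sketch of a paired land-vs-water MVC comparison (invented sEMG amplitudes).
import numpy as np
from scipy.stats import wilcoxon

# Normalised MVC amplitude for one muscle, same 16 subjects in both conditions.
land  = np.array([0.92, 1.01, 0.88, 0.95, 1.10, 0.99, 0.91, 1.05,
                  0.97, 0.89, 1.02, 0.94, 1.08, 0.93, 0.98, 1.00])
water = np.array([0.90, 0.99, 0.91, 0.93, 1.07, 1.01, 0.89, 1.02,
                  0.95, 0.92, 1.00, 0.96, 1.05, 0.95, 0.97, 0.98])

stat, p = wilcoxon(land, water)            # paired, nonparametric test
pct_diff = 100 * np.abs(land - water).mean() / land.mean()
print(f"p = {p:.3f}, mean |difference| = {pct_diff:.1f}%")
```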
Abstract:
We defined a new statistical fluid registration method with Lagrangian mechanics. Although several authors have suggested that empirical statistics on brain variation should be incorporated into the registration problem, few algorithms have included this information, instead using regularizers that guarantee diffeomorphic mappings. Here we combine the advantages of a large-deformation fluid matching approach with empirical statistics on population variability in anatomy. We reformulated the Riemannian fluid algorithm developed in [4], using a Lagrangian framework to incorporate 0th- and 1st-order statistics in the regularization process. 92 2D midline corpus callosum traces from a twin MRI database were fluidly registered using the non-statistical version of the algorithm (algorithm 0), giving initial vector fields and deformation tensors. Covariance matrices were computed for both distributions and incorporated either separately (algorithm 1 and algorithm 2) or together (algorithm 3) in the registration. We computed heritability maps and two vector- and tensor-based distances to compare the power and the robustness of the algorithms.
Abstract:
In this paper, we used a nonconservative Lagrangian mechanics approach to formulate a new statistical algorithm for fluid registration of 3-D brain images. This algorithm is named SAFIRA, an acronym for statistically assisted fluid image registration algorithm. A nonstatistical version of this algorithm was implemented, where the deformation was regularized by penalizing deviations from a zero rate of strain. In SAFIRA, the terms regularizing the deformation include the covariance of the deformation matrices (Σ) and of the vector fields (q). Here, we used a Lagrangian framework to reformulate this algorithm, showing that the regularizing terms essentially allow nonconservative work to occur during the flow. Given 3-D brain images from a group of subjects, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the nonstatistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) in the nonconservative terms, creating four versions of SAFIRA. We evaluated and compared our algorithms' performance on 92 3-D brain scans from healthy monozygotic and dizygotic twins; 2-D validations are also shown for corpus callosum shapes delineated at midline in the same subjects. After preliminary tests to demonstrate each method, we compared their detection power using tensor-based morphometry (TBM), a technique to analyze local volumetric differences in brain structure. We compared the accuracy of each algorithm variant using various statistical metrics derived from the images and deformation fields. All these tests were also run with a traditional fluid method, which has been quite widely used in TBM studies. The versions incorporating vector-based empirical statistics on brain variation were consistently more accurate than their counterparts when used for automated volumetric quantification in new brain images. This suggests the advantages of this approach for large-scale neuroimaging studies.
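To give a flavour of the statistical ingredient in SAFIRA-style registration (not the paper's actual regularizer, which enters as nonconservative work terms in a fluid flow), the toy sketch below computes an empirical covariance of displacement vectors across subjects and uses it as a Mahalanobis-type penalty on new deformations; all data are synthetic.

```python
# Toy sketch: population covariance of displacements as a statistical prior.
import numpy as np

rng = np.random.default_rng(1)
# Displacement vectors at one voxel from a first, non-statistical registration
# round across 92 subjects (synthetic stand-in data, 3 components each).
disp = rng.normal(scale=[1.0, 0.5, 0.2], size=(92, 3))

mean = disp.mean(axis=0)
cov = np.cov(disp, rowvar=False)           # 3x3 empirical covariance
cov_inv = np.linalg.inv(cov)

def statistical_penalty(v):
    """Penalise displacements that are unlikely under the population model."""
    d = v - mean
    return float(d @ cov_inv @ d)

print(f"typical displacement penalty: {statistical_penalty(mean + 0.1):.3f}")
print(f"atypical displacement penalty: "
      f"{statistical_penalty(mean + np.array([0.0, 0.0, 2.0])):.3f}")
```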
Abstract:
The discovery of several genes that affect the risk for Alzheimer's disease ignited a worldwide search for single-nucleotide polymorphisms (SNPs), common genetic variants that affect the brain. Genome-wide search of all possible SNP-SNP interactions is challenging and rarely attempted because of the complexity of conducting approximately 10^11 pairwise statistical tests. However, recent advances in machine learning, for example, iterative sure independence screening, make it possible to analyze data sets with vastly more predictors than observations. Using an implementation of the sure independence screening algorithm (called EPISIS), we performed a genome-wide interaction analysis testing all possible SNP-SNP interactions affecting regional brain volumes measured on magnetic resonance imaging and mapped using tensor-based morphometry. We identified a significant SNP-SNP interaction between rs1345203 and rs1213205 that explains 1.9% of the variance in temporal lobe volume. We mapped the whole-brain, voxelwise effects of the interaction in the Alzheimer's Disease Neuroimaging Initiative data set and separately in an independent replication data set of healthy twins (Queensland Twin Imaging). Each additional loading in the interaction effect was associated with approximately 5% greater regional brain volume (a protective effect) in both the Alzheimer's Disease Neuroimaging Initiative and Queensland Twin Imaging samples.
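A single test of the kind run genome-wide here can be sketched as a regression of a phenotype on two SNP dosages plus their product, reading off the variance explained by the interaction term. The data below are synthetic, and the study's EPISIS screening pipeline is far more elaborate than this one-pair illustration.

```python
# Sketch of one SNP-SNP interaction test (synthetic genotypes and phenotype).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 800
snp1 = rng.integers(0, 3, n).astype(float)   # minor-allele counts 0/1/2
snp2 = rng.integers(0, 3, n).astype(float)
volume = 100 + 0.5 * snp1 + 0.3 * snp2 + 1.2 * snp1 * snp2 + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
full = sm.OLS(volume, X).fit()
reduced = sm.OLS(volume, X[:, :3]).fit()     # main effects only

print(f"interaction p-value: {full.pvalues[3]:.2e}")
print(f"variance explained by interaction: {full.rsquared - reduced.rsquared:.3f}")
```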
Abstract:
This paper provides an important and timely overview of a conceptual framework designed to assist with the development of message content, as well as the evaluation, of persuasive health messages. While an earlier version of this framework was presented in a prior publication by the authors in 2009, important refinements have seen it evolve in recent years, warranting an updated review. This paper outlines the Step approach to Message Design and Testing (SatMDT), setting out the theoretical evidence which underpins each of the framework's steps, as well as the empirical evidence which demonstrates their relevance and feasibility. The development and testing of the framework have thus far been based exclusively within the road safety advertising context; however, the view expressed herein is that the framework may have broader appeal and application to the health persuasion context.
Abstract:
To this point, the collection has provided research-based, empirical accounts of the various and multiple effects of the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australian schooling as a specific example of the global phenomenon of national testing. In this chapter, we want to develop a more theoretical analysis of national testing systems, globalising education policy and the promise of national testing as adaptive, online tests. These future moves claim to provide faster feedback and more useful diagnostic help for teachers. There is a utopian testing dream that one day adaptive, online tests will be responsive in real time, providing integrated, personalised testing, pedagogy and intervention for each student. The moves towards these next-generation assessments are well advanced, including the work of Pearson's NextGen Learning and Assessment research group, the Organisation for Economic Co-operation and Development's (OECD) move into assessing affective skills and the Australian Curriculum, Assessment and Reporting Authority's (ACARA) decision to phase in NAPLAN as an online, adaptive test from 2017...
Abstract:
Introduction This book examines a pressing educational issue: the global phenomenon of national testing in schooling and its vernacular development in Australia. The Australian National Assessment Program – Literacy and Numeracy (NAPLAN), introduced in 2008, involves annual census testing of students in Years 3, 5, 7 and 9 in nearly all Australian schools. In a variety of ways, NAPLAN affects the lives of Australia’s 3.5 million school students and their families, as well as more than 350,000 school staff and many other stakeholders in education. This book is organised in relation to a simple question: What are the effects of national testing for systems, schools and individuals? Of course, this simple question requires complex answers. The chapters in this edited collection consider issues relating to national testing policy, the construction of the test, usages of the testing data and various effects of testing in systems, schools and classrooms. Each chapter examines an aspect of national testing in Australia using evidence drawn from research. The final chapter by the editors of this collection provides a broader reflection on this phenomenon and situates developments in testing globally...
Abstract:
Since 2008, Australian schoolchildren in Years 3, 5, 7 and 9 have sat a series of tests each May designed to assess their attainment of basic skills in literacy and numeracy. These tests are known as the National Assessment Program – Literacy and Numeracy (NAPLAN). In 2010, individual school NAPLAN data were first published on the MySchool website, which enables comparisons to be made between individual schools and statistically like schools across Australia. NAPLAN represents the increased centrality of the federal government in education, particularly in regard to education policy. One effect of this has been a recasting of education as an economic, rather than a democratic, good. As Reid (2009) suggests, this recasting of education within national productivity agendas mobilises commonsense discourses of accountability and transparency. These are common articles of faith for many involved in education administration and bureaucracy: more and better data, and holding people to account for that data, must improve education...