984 results for Tests accuracy


Relevance:

20.00%

Abstract:

A number of tests and test batteries are available for the prediction of older driver safety, but many of these have not been validated against standardized driving outcome measures. The aim of this study was to evaluate a series of previously described screening tests in terms of their ability to predict the potential for safe and unsafe driving. Participants included 79 community-dwelling older drivers (M=72.16 years, SD=5.46; range 65-88 years; 57 males and 22 females) who completed a previously validated multi-disciplinary driving assessment, a hazard perception test, a hazard change detection test and a battery of vision and cognitive tests. Participants also completed a standardized on-road driving assessment. The multi-disciplinary test battery had the highest predictive ability with a sensitivity of 80% and a specificity of 73%, followed by the hazard perception test which demonstrated a sensitivity of 75% and a specificity of 61%. These findings suggest that a relatively simple and practical battery of tests from a range of domains has the capacity to predict safe and unsafe driving in older adults.
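
As a point of reference for the reported screening performance, the following is a minimal sketch of how sensitivity and specificity are computed from a 2x2 confusion matrix. The counts are hypothetical: the abstract reports only the rates, so the split below was chosen merely to reproduce the 80%/73% figures for a cohort of 79 drivers.

```python
# Hypothetical counts; the study reports only the resulting rates.
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) for a binary screening test."""
    sensitivity = tp / (tp + fn)  # proportion of unsafe drivers correctly flagged
    specificity = tn / (tn + fp)  # proportion of safe drivers correctly passed
    return sensitivity, specificity

# e.g. 20 unsafe drivers (16 flagged) and 59 safe drivers (43 passed)
sens, spec = sensitivity_specificity(tp=16, fn=4, tn=43, fp=16)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 80%, 73%
```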

Relevance:

20.00%

Abstract:

Electromagnetic computer-aided engineering (EM-CAE) software based on methods such as the Finite Element Method (FEM) and the Boundary Element Method (BEM) has been widely adopted for the development and design of high-power electric devices. This paper presents the analysis of a Fault Current Limiter (FCL), which acts as a high-voltage surge protector for power grids. A prototype FCL was built. The magnetic flux in the core and the resulting electromagnetic forces in the winding of the FCL were analyzed using both FEM and BEM. An experiment on the prototype was conducted in a laboratory, and the data obtained were compared to the numerical solutions to determine the suitability and accuracy of the two methods.
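
The validation step described above amounts to comparing each numerical solution against the laboratory measurements. A minimal sketch of such a comparison follows; the quantities and values are purely illustrative assumptions, since the paper's measurements are not given in the abstract.

```python
import numpy as np

# Illustrative simulated vs. measured values (e.g. winding force at three
# excitation levels); not the paper's data.
measured = np.array([102.0, 215.0, 340.0])   # laboratory measurements
fem      = np.array([ 99.5, 210.8, 333.9])   # FEM predictions
bem      = np.array([105.1, 222.3, 351.0])   # BEM predictions

for name, pred in (("FEM", fem), ("BEM", bem)):
    rel_err = np.abs(pred - measured) / np.abs(measured)
    print(f"{name}: mean relative error = {rel_err.mean():.2%}")
```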

Relevance:

20.00%

Abstract:

Exponential growth of genomic data over the last two decades has made manual analysis impractical for all but trial studies. As genomic analyses have become more sophisticated and moved toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through transcriptional regulatory network (TRN) structures, which model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics and, when used in conjunction with comparative genomics, have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can help reduce the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In a preliminary exploration of the relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Of chief interest was the relationship between promoter strength and TFs grouped by their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would be preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters would have more repressor binding sites to repress or inhibit gene transcription. The t-test assessed for E.coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigation when more experimentally confirmed data become available.
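
The promoter-strength comparison described above is, in essence, a two-sample t-test evaluated at the 0.1 significance level. Below is a minimal sketch on synthetic scores, assuming promoter strength is scored by homology to the σ70 consensus; it illustrates the form of the test, not the thesis' actual data or procedure.

```python
import numpy as np
from scipy import stats

# Synthetic promoter-strength scores grouped by the regulatory role of the
# associated binding sites; illustrative values, not the thesis' data.
rng = np.random.default_rng(0)
activator_assoc = rng.normal(loc=0.55, scale=0.10, size=15)  # putatively weak promoters
repressor_assoc = rng.normal(loc=0.62, scale=0.10, size=15)  # putatively strong promoters

# One-sided Welch t-test: are activator-associated promoters weaker?
t, p = stats.ttest_ind(activator_assoc, repressor_assoc,
                       equal_var=False, alternative="less")
print(f"t = {t:.2f}, p = {p:.3f}  (compare with alpha = 0.1)")
```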
Much of the remainder of the thesis concerns a machine learning study of binding site prediction using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E.coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains, which revealed interesting strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees are constructed from the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on inferring the regulatory network for each target genome from multiple sources rather than from a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the predicted regulatory interactions. We distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied; this core set potentially identifies basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y.pestis and P.aeruginosa respectively, but not in E.coli or B.subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
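
The SVM refinement of binding-site predictions described in this abstract rests on the spectrum kernel, whose value for two sequences is the inner product of their k-mer count vectors. Below is a generic sketch of a k-spectrum kernel used with a precomputed-kernel SVM; the k value, toy sequences and labels are illustrative assumptions, not the thesis' configuration.

```python
from collections import Counter

import numpy as np
from sklearn.svm import SVC

# k-spectrum kernel: K(s, t) = dot product of the k-mer count vectors of s and t.

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(seqs_a, seqs_b, k=3):
    counts_a = [kmer_counts(s, k) for s in seqs_a]
    counts_b = [kmer_counts(s, k) for s in seqs_b]
    gram = np.zeros((len(seqs_a), len(seqs_b)))
    for i, ca in enumerate(counts_a):
        for j, cb in enumerate(counts_b):
            gram[i, j] = sum(ca[w] * cb[w] for w in ca if w in cb)
    return gram

# Toy training set: putative binding sites (1) vs. background (0).
train = ["ATTGTGATCT", "TTTGTGACCT", "ATGTGTGACT",
         "CGCGCGCGCG", "ATATATATAT", "GGCCGGCCGG"]
labels = [1, 1, 1, 0, 0, 0]

clf = SVC(kernel="precomputed")
clf.fit(spectrum_kernel(train, train), labels)
test = ["TTTGTGATCT", "CGCGATATCG"]
print(clf.predict(spectrum_kernel(test, train)))
```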

Relevance:

20.00%

Abstract:

It is acknowledged around the world that many university students struggle with learning to program (McCracken et al., 2001; McGettrick et al., 2005). In this paper, we describe how we have developed a research programme to systematically study and incrementally improve our teaching. We have adopted a research programme with three elements: (1) a theory that provides an organising framework for defining the type of phenomena and data of interest, (2) data on how the class as a whole performs on formative assessment tasks framed from within the organising framework, and (3) data from one-on-one think-aloud sessions to establish why students struggle with some of those in-class formative assessment tasks. We teach introductory computer programming, but this three-element structure of our research is applicable to many areas of engineering education research.

Relevance:

20.00%

Abstract:

The UN Convention on the Rights of Persons with Disabilities (CRPD) promotes equal and full participation by children in education. Equity of educational access for all students, including students with disability, free from discrimination, is the first stated national goal of Australian education (MCEETYA 2008). Australian federal disability discrimination law, the Disability Discrimination Act 1992 (DDA), follows the Convention, with the federal Disability Standards for Education 2005 (DSE) enacting specific requirements for education. This article discusses equity of processes for the inclusion of students with disability in Australian educational accountability testing, including the international tests in which many countries participate. The conclusion drawn is that equitable inclusion of students with disability in current Australian educational accountability testing is not occurring from a social perspective and is not in principle compliant with the law. However, given the reluctance of courts to intervene in education matters and the uncertainty of the outcome of any court consideration, the discussion shows that equitable inclusion in accountability systems is achievable through policy change rather than through expensive, and possibly unsuccessful, legal challenges.

Relevance:

20.00%

Abstract:

Vertical displacements are among the most relevant parameters for structural health monitoring of bridges in both the short and long terms. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges, yet such measurements remain difficult to carry out. In recent years, with the advancement of fiber-optic technologies, fiber Bragg grating (FBG) sensors have become more common in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, FBG sensors are proposed here to develop a simple, inexpensive and practical method for measuring the vertical displacements of bridges. A curvature approach is proposed, in which vertical displacements are derived from curvature measurements. In addition, with the successful development of FBG tilt sensors, an inclination approach using inclination measurements is also proposed. A series of simulation tests of a full-scale bridge was conducted, showing that both approaches can determine vertical displacements for bridges with various support conditions and varying stiffness (EI) along the spans, without prior knowledge of the loading. These approaches can thus measure vertical displacements for most slab-on-girder and box-girder bridges, are feasible for bridges under various loading conditions and, given the advantages of FBG sensors, can be implemented to monitor bridge behavior remotely and in real time. A beam loading test was conducted to determine vertical displacements using FBG strain sensors and tilt sensors; compared with dial gauge readings, the discrepancies of the curvature and inclination approaches are 0.14 mm (1.1%) and 0.41 mm (3.2%), respectively. Further recommendations for developing these approaches are discussed at the end of the paper.
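
The curvature approach rests on beam theory: curvature can be obtained from paired FBG strain readings (κ = (ε_bottom − ε_top)/h for a section of depth h) and integrated twice along the span, with the support conditions fixing the integration constants. The following is a minimal numeric sketch under assumed values (a 20 m simply supported span with a sinusoidal curvature profile), not the paper's test data.

```python
import numpy as np

# Curvature approach, sketched: integrate curvature twice along the span;
# the two supports (w = 0 at both ends) fix the integration constants.
# Span, curvature profile and sensor spacing are assumed values.

x = np.linspace(0.0, 20.0, 11)                # sensor positions on a 20 m span
kappa = 1.2e-4 * np.sin(np.pi * x / 20.0)     # sagging curvature profile (1/m)

# Trapezoidal integration: curvature -> slope -> deflection (w'' = kappa).
slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
w = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))

# Enforce w(0) = w(L) = 0 with a linear correction (fixes both constants).
w -= w[0] + (w[-1] - w[0]) * (x - x[0]) / (x[-1] - x[0])

# With w positive upward, sagging yields a negative midspan value.
print(f"midspan deflection = {w[len(w) // 2] * 1000:.2f} mm")
```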

Relevance:

20.00%

Abstract:

PURPOSE: To test the reliability of Timed Up and Go Tests (TUGTs) in cardiac rehabilitation (CR) and compare TUGTs to the 6-Minute Walk Test (6MWT) for outcome measurement. METHODS: Sixty-one of 154 consecutive community-based CR patients were prospectively recruited. Subjects undertook repeated TUGTs and 6MWTs at the start of CR (start-CR), postdischarge from CR (post-CR), and 6 months postdischarge from CR (6 months post-CR). The main outcome measurements were TUGT time (TUGTT) and 6MWT distance (6MWD). RESULTS: Mean (SD) TUGTT1 and TUGTT2 at the 3 assessments were 6.29 (1.30) and 5.94 (1.20); 5.81 (1.22) and 5.53 (1.09); and 5.39 (1.60) and 5.01 (1.28) seconds, respectively. A reduction in TUGTT occurred between each outcome point (P ≤ .002). Repeated TUGTTs were strongly correlated at each assessment, intraclass correlation (95% CI) = 0.85 (0.76–0.91), 0.84 (0.73–0.91), and 0.90 (0.83–0.94), despite a reduction between TUGTT1 and TUGTT2 of 5%, 5%, and 7%, respectively (P ≤ .006). Relative decreases in TUGTT1 (TUGTT2) occurred from start-CR to post-CR and from start-CR to 6 months post-CR of −7.5% (−6.9%) and −14.2% (−15.5%), respectively, while relative increases in 6MWD1 (6MWD2) occurred, 5.1% (7.2%) and 8.4% (10.2%), respectively (P < .001 in all cases). Pearson correlation coefficients for 6MWD1 to TUGTT1 and TUGTT2 across all times were −0.60 and −0.68 (P < .001) and the intraclass correlations (95% CI) for the speeds derived from averaged 6MWDs and TUGTTs were 0.65 (0.54, 0.73) (P < .001). CONCLUSIONS: Similar relative changes occurred for the TUGT and the 6MWT in CR. A significant correlation between the TUGTT and 6MWD was demonstrated, and we suggest that the TUGT may provide a related or a supplementary measurement of functional capacity in CR.
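
The reported association is an inverse one: shorter TUGT times go with longer 6MWT distances. A minimal sketch of the correlation computation on synthetic data follows; the study's individual measurements are not given in the abstract, so the values below are assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic TUGT times (s) and 6MWT distances (m) for 61 subjects,
# generated with an inverse relationship; not the study's data.
rng = np.random.default_rng(1)
tugt_s = rng.normal(6.0, 1.2, size=61)
six_mwd_m = 550.0 - 45.0 * tugt_s + rng.normal(0.0, 40.0, size=61)

r, p = stats.pearsonr(tugt_s, six_mwd_m)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # expect a clearly negative r
```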

Relevance:

20.00%

Abstract:

Russell, Benton and Kingsley (2010) recently proposed a new association football test comprising three different tasks for evaluating players' passing, dribbling and shooting skills. Their stated intention was to enhance the ‘ecological validity’ of current association football skills tests, allowing results from the new protocols to be generalised to performance constraints ‘representative’ of competitive game situations. In this comment, however, we raise some concerns with their use of the term ‘ecological validity’ to allude to aspects of ‘representative task design’. We propose that the authors conflated understanding of environmental properties, performance achievement, and the generalisability of the test and its outcomes. We argue that the tests designed by Russell and colleagues omit critical sources of environmental information, such as the active role of opponents, which players typically use to organise their actions during performance. Static tasks that are not representative of the competitive performance environment may lead to different emergent patterns of movement organisation and performance outcomes, and so fail to effectively evaluate skill performance in sport.

Relevance:

20.00%

Abstract:

Objectives: The relationship between performance variability and accuracy in cricket fast bowlers of different skill levels under three different task conditions was investigated. Bowlers of different skill levels were examined to observe whether they could adapt movement patterns to maintain performance accuracy on a bowling skills test. Design: Eight national, 12 emerging and 12 junior pace bowlers completed an adapted version of the Cricket Australia bowling skills test, in which they performed 30 trials involving short (n = 10), good (n = 10), and full (n = 10) length deliveries. Methods: Bowling accuracy was recorded by digitising ball position relative to the centre of a target. Performance measures were mean radial error (accuracy), variable error (consistency), centroid error (bias), bowling score and ball speed. Radial error changes across the duration of the skills test were used to record accuracy adjustment in subsequent deliveries. Results: Elite fast bowlers performed better in speed, accuracy, and test scores than developing athletes. Bowlers who were less variable were also more accurate across all delivery lengths. National and emerging bowlers were able to adapt subsequent performance trials within the same bowling session for short length deliveries. Conclusions: Accuracy and adaptive variability were key components of elite performance in fast bowling and improved with skill level. In this study, only national elite bowlers showed the requisite levels of adaptive variability to bowl a range of lengths to different pitch locations.
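
The three accuracy measures named in the Methods admit a compact formulation from the digitised (x, y) landing coordinates. The sketch below uses one common definition of each (radial distance to the target, distance of the centroid from the target, and mean dispersion about the centroid) on synthetic data; it may differ in detail from the skills-test scoring.

```python
import numpy as np

# Synthetic ball-landing coordinates (metres) relative to a target at the origin.
rng = np.random.default_rng(2)
hits = rng.normal(loc=[0.10, -0.05], scale=0.20, size=(30, 2))

radial = np.linalg.norm(hits, axis=1)
mean_radial_error = radial.mean()                                 # overall accuracy
centroid = hits.mean(axis=0)
centroid_error = np.linalg.norm(centroid)                         # systematic bias
variable_error = np.linalg.norm(hits - centroid, axis=1).mean()   # consistency

print(f"MRE = {mean_radial_error:.3f} m, "
      f"CE = {centroid_error:.3f} m, VE = {variable_error:.3f} m")
```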

Relevance:

20.00%

Abstract:

The power of testing for a population-wide association between a biallelic quantitative trait locus and a linked biallelic marker locus is predicted both empirically and deterministically for several tests. The tests were based on the analysis of variance (ANOVA) and on a number of transmission disequilibrium tests (TDT). Deterministic power predictions made use of family information, and were functions of population parameters including linkage disequilibrium, allele frequencies, and recombination rate. Deterministic power predictions were very close to the empirical power from simulations in all scenarios considered in this study. The different TDTs had very similar power, intermediate between one-way and nested ANOVAs. One-way ANOVA was the only test that was not robust against spurious disequilibrium. Our general framework for predicting power deterministically can be used to predict power in other association tests. Deterministic power calculations are a powerful tool for researchers to plan and evaluate experiments and obviate the need for elaborate simulation studies.
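
As a simple illustration of deterministic power prediction for the one-way ANOVA case, the sketch below uses a generic Cohen's-f power calculation. The study's own predictions additionally fold linkage disequilibrium, allele frequencies and recombination rate into the effect size, so the effect size, sample size and alpha here are assumptions for illustration only.

```python
from statsmodels.stats.power import FTestAnovaPower

# Generic deterministic power for a one-way ANOVA marker-trait association
# test; all parameter values are illustrative assumptions.
power = FTestAnovaPower().solve_power(
    effect_size=0.25,   # Cohen's f implied by the QTL variance captured
    nobs=200,           # total individuals genotyped at the marker
    alpha=0.05,
    k_groups=3,         # three marker genotype classes at a biallelic locus
)
print(f"predicted power = {power:.2f}")
```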

Relevance:

20.00%

Abstract:

A satellite-based observation system can continuously or repeatedly generate a user state vector time series that may contain useful information. One typical example is the collection of International GNSS Services (IGS) station daily and weekly combined solutions. Another is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real-time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After a review of frequently used hypothesis test statistics in univariate analysis of GNSS state time series, the paper presents a number of T-squared multivariate statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take into account the correlation between coordinate components, which is neglected in univariate analysis. A numerical analysis was conducted with the multi-year time series of an IGS station to demonstrate the results of the multivariate hypothesis testing in comparison with the univariate results. The results demonstrate that, in general, testing for multivariate mean shifts and outliers tends to reject fewer data samples than testing for univariate mean shifts and outliers at the same confidence level. Neither univariate nor multivariate data analysis methods are intended to replace physical analysis; rather, they should be treated as complementary statistical methods for a priori or a posteriori investigation, with physical analysis needed subsequently to refine and interpret the results.
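
A one-sample Hotelling T-squared test is the basic building block of such multivariate mean-shift testing. Below is a minimal sketch on synthetic 3-D coordinate residuals; the scaling of T² to an F statistic is the standard one, but the data are assumptions, not the IGS station series analysed in the paper.

```python
import numpy as np
from scipy import stats

# One-sample Hotelling T-squared test for a multivariate mean shift,
# accounting for correlation between components (unlike per-axis t-tests).

def hotelling_t2(X, mu0):
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)               # sample covariance (p x p)
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)
    f = (n - p) / (p * (n - 1)) * t2          # T2 scaled to an F statistic
    pval = stats.f.sf(f, p, n - p)
    return t2, pval

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0.0, 0.0, 0.0],                      # mm residuals
                            [[4, 2, 1], [2, 3, 1], [1, 1, 2]], size=50)
t2, p = hotelling_t2(X, mu0=np.zeros(3))
print(f"T2 = {t2:.2f}, p = {p:.3f}")
```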

Relevance:

20.00%

Abstract:

The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications like modal analysis and damage identification. This paper first presents a brief review of the most inherent uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and the preferred combination of OMA techniques to cope with SHM-WSN uncertainties are recommended.
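
As a concrete reference for the FDD family, the sketch below implements its core steps on synthetic two-channel data: estimate the cross-spectral density matrix of the acceleration channels, take an SVD at each frequency line, and peak-pick the first singular value. The sampling rate, window length and the 3 Hz "mode" are illustrative assumptions, not the laboratory bridge data.

```python
import numpy as np
from scipy import signal

# Minimal Frequency Domain Decomposition (FDD) sketch on synthetic data.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
mode = np.sin(2 * np.pi * 3.0 * t)                      # a 3 Hz "mode"
acc = np.vstack([1.0 * mode, 0.6 * mode]) + 0.5 * rng.standard_normal((2, len(t)))

n_ch = acc.shape[0]
f, _ = signal.csd(acc[0], acc[0], fs=fs, nperseg=1024)
G = np.zeros((len(f), n_ch, n_ch), dtype=complex)       # CSD matrix per frequency
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = signal.csd(acc[i], acc[j], fs=fs, nperseg=1024)

# First singular value at each frequency; its peaks flag candidate modes.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
print(f"peak of first singular value at {f[np.argmax(s1)]:.2f} Hz")  # ~3 Hz
```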

Relevance:

20.00%

Abstract:

One of the primary capabilities desired of any future air traffic separation management system is the ability to provide early conflict detection and resolution effectively and efficiently. In this paper, we consider the risk of conflict as a primary measurement to be used for early conflict detection. The paper focuses on developing a novel approach to assessing the impact of different measurement uncertainty models on the estimated risk of conflict; the measurement uncertainty model can be used to represent different sensor accuracies and sensor choices. Our study demonstrates the value of modelling measurement uncertainty in the conflict risk estimation problem and presents techniques for assessing the sensor requirements needed to achieve a desired conflict detection performance.
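
One common way to turn a measurement-uncertainty model into a risk of conflict is Monte Carlo sampling: draw the aircraft states from the measurement distribution, propagate them over a look-ahead horizon, and count the fraction of samples that violate the separation minimum. The sketch below does this under assumed Gaussian position noise and an illustrative encounter geometry; it is a generic illustration, not the paper's estimator.

```python
import numpy as np

# Assumed Gaussian position noise and an illustrative head-on encounter.
rng = np.random.default_rng(5)
n = 20_000
sigma = 0.5        # position measurement std per axis (NM)
sep_min = 5.0      # separation minimum (NM)

# Sample true initial positions around the measured tracks
# (velocities are treated as known for simplicity).
p1 = np.array([0.0, 0.0]) + sigma * rng.standard_normal((n, 2))
v1 = np.array([8.0, 0.0])                      # NM/min
p2 = np.array([60.0, 6.0]) + sigma * rng.standard_normal((n, 2))
v2 = np.array([-8.0, 0.0])

# Fraction of samples that ever violate separation over a 5 min horizon.
d_min = np.full(n, np.inf)
for t in np.arange(0.0, 5.0, 0.1):
    d = np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t), axis=1)
    d_min = np.minimum(d_min, d)
print(f"estimated risk of conflict = {np.mean(d_min < sep_min):.3f}")
```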

Relevance:

20.00%

Abstract:

High-stakes literacy testing is now a ubiquitous educational phenomenon, though it remains relatively recent in Australia. It is therefore possible to study the ways in which such tests are reorganising educators' work during this period of change. This paper draws upon Dorothy Smith's Institutional Ethnography and critical policy analysis to consider this problem, reporting on interview data from teachers and the principal of a small rural school in a poor area of South Australia. In this context, high-stakes testing and the associated diagnostic school review unleash a chain of actions within the school which ultimately results in educators doubting their professional judgments, increasing their investment in testing, narrowing their teaching of literacy, and purchasing levelled reading schemes. The effects of high-stakes testing in disadvantaged schools are identified and discussed.