938 results for Accuracy.


Relevance: 10.00%

Abstract:

The accuracy of data derived from linked-segment models depends on how well the system has been represented. Previous investigations describing the gait of persons with partial foot amputation did not account for the unique anthropometry of the residuum or the inclusion of a prosthesis and footwear in the model and, as such, are likely to have underestimated the magnitude of the peak joint moments and powers. This investigation determined the effect of inaccuracies in the anthropometric input data on the kinetics of gait. Toward this end, a geometric model was developed and validated to estimate body segment parameters of various intact and partial feet. These data were then incorporated into customized linked-segment models, and the kinetic data were compared with those obtained from conventional models. Results indicate that accurate modeling increased the magnitude of the peak hip and knee joint moments and powers during terminal swing. Conventional inverse dynamic models are sufficiently accurate for research questions relating to stance phase. More accurate models that account for the anthropometry of the residuum, prosthesis, and footwear better reflect the work of the hip extensors and knee flexors to decelerate the limb during terminal swing phase.
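The sensitivity described above can be illustrated with a deliberately simplified one-segment moment expression (not the full Newton-Euler inverse dynamics of the study): because segment mass and inertia enter the net joint moment linearly, underestimating them for a residuum-prosthesis-footwear system directly underestimates the swing-phase moment. All numeric values below are illustrative placeholders, not data from the study.

```python
# Simplified single-segment moment sketch: rotational term plus the cost of
# accelerating and supporting the segment's centre of mass. Illustrative only.

def net_joint_moment(mass, inertia, com_dist, ang_acc, lin_acc, g=9.81):
    """Net proximal joint moment (N·m) for one swinging segment.

    mass     : segment mass (kg)
    inertia  : moment of inertia about the segment COM (kg·m^2)
    com_dist : distance from joint centre to COM (m)
    ang_acc  : segment angular acceleration (rad/s^2)
    lin_acc  : tangential linear acceleration of the COM (m/s^2)
    """
    return inertia * ang_acc + mass * com_dist * (lin_acc + g)

# Conventional model: residuum treated like an intact foot (lighter)
conventional = net_joint_moment(mass=1.0, inertia=0.005, com_dist=0.05,
                                ang_acc=30.0, lin_acc=2.0)
# Customized model: residuum + prosthesis + footwear (heavier, larger inertia)
customized = net_joint_moment(mass=1.6, inertia=0.012, com_dist=0.07,
                              ang_acc=30.0, lin_acc=2.0)
print(customized > conventional)  # heavier distal segment -> larger peak moment
```

Because the parameters enter linearly, any underestimate in the anthropometric inputs propagates directly into the estimated peak moment.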

Relevance: 10.00%

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
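The n-gram approach can be sketched minimally as follows: count how often each (n−1)-state history is followed by each next state, then predict the most frequent continuation. The state labels and the toy session log below are illustrative, not the study's data.

```python
# Hedged sketch of n-gram next-state prediction for query-reformulation states.
from collections import Counter, defaultdict

def train_ngram(sessions, n=2):
    """Count (n-1)-state histories -> next-state frequencies."""
    model = defaultdict(Counter)
    for states in sessions:
        for i in range(len(states) - (n - 1)):
            history = tuple(states[i:i + n - 1])
            model[history][states[i + n - 1]] += 1
    return model

def predict_next(model, history):
    """Most probable next state for a history, or None if unseen."""
    counts = model.get(tuple(history))
    if not counts:
        return None
    return counts.most_common(1)[0][0]

sessions = [
    ["New", "Reformulation", "Assistance", "Reformulation"],
    ["New", "Reformulation", "Assistance", "Content Change"],
    ["New", "Specialization", "Reformulation", "Assistance"],
]
bigram = train_ngram(sessions, n=2)
print(predict_next(bigram, ["Reformulation"]))  # "Assistance"
```

Higher-order models use longer histories (n = 3, 4, ...), trading coverage of the dataset for a larger possible pattern set, which mirrors the accuracy/coverage/complexity evaluation described above.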

Relevance: 10.00%

Abstract:

This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek searching assistance from the system early in the session or after a content change. The results of our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance in real time.

Relevance: 10.00%

Abstract:

The driving task requires sustained attention over prolonged periods and can be performed in highly predictable or repetitive environments. Such conditions can create drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants’ performance. This task models the driver’s ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants’ hypovigilance. The driver’s vigilance evolution is modelled as a hidden state and is correlated to an observable variable: the participant’s reaction times. This experiment shows that the monotony of the task can lead to a substantial vigilance decline in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, offering the potential to enhance road safety and prevent road crashes.
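The core HMM idea, a hidden vigilance state inferred from observed reaction times, can be sketched with a two-state forward filter. All probabilities below are illustrative placeholders, not the fitted values from the study, and reaction times are discretised into "fast"/"slow" for simplicity.

```python
# Two-state HMM sketch: hidden vigilance state filtered from reaction times.
STATES = ("alert", "hypovigilant")
START = {"alert": 0.9, "hypovigilant": 0.1}
TRANS = {"alert": {"alert": 0.8, "hypovigilant": 0.2},
         "hypovigilant": {"alert": 0.1, "hypovigilant": 0.9}}
# Emission model: probability of a fast vs slow reaction time in each state
EMIT = {"alert": {"fast": 0.85, "slow": 0.15},
        "hypovigilant": {"fast": 0.3, "slow": 0.7}}

def filter_posterior(observations):
    """Forward algorithm: P(current state | reaction times so far)."""
    belief = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        belief = {s: EMIT[s][obs] * sum(belief[p] * TRANS[p][s] for p in STATES)
                  for s in STATES}
    total = sum(belief.values())
    return {s: v / total for s, v in belief.items()}

# A run of slow responses shifts the belief toward hypovigilance
posterior = filter_posterior(["fast", "slow", "slow", "slow"])
print(posterior["hypovigilant"] > 0.5)
```

Prediction ahead of time, as in the study, would additionally propagate the filtered belief forward through the transition matrix for the desired horizon.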

Relevance: 10.00%

Abstract:

The paper analyses the expected value of OD volumes from probes under three error models: fixed error, error proportional to zone size, and error inversely proportional to zone size. To add realism to the analysis, real trip ODs in the Tokyo Metropolitan Region are synthesised. The results show that for small zone coding with an average radius of 1.1 km and a fixed measurement error of 100 m, an accuracy of 70% can be expected. The equivalent accuracy for medium zone coding with an average radius of 5 km would translate into a fixed error of approximately 300 m. As expected, small zone coding is more sensitive than medium zone coding, as the chances of the probe error envelope falling into adjacent zones are higher. For the same error radii, error proportional to zone size would deliver a higher level of accuracy. As over half (54.8%) of trip ends start or end at zones with an equivalent radius of ≤1.2 km and only 13% of trip ends occurred at zones with an equivalent radius of ≥2.5 km, measurement error that is proportional to zone size, such as that of a mobile phone, would deliver a higher level of accuracy. The synthesis of real ODs with different probe error characteristics has shown that an expected value of >85% is difficult to achieve for small zone coding with an average radius of 1.1 km. For most transport applications, an OD matrix at medium zone coding is sufficient for transport management. From this study it can be concluded that GPS, with an error range between 2 and 5 m, at medium zone coding (average radius of 5 km) would provide OD estimates greater than 90% of the expected value. However, for a typical mobile phone operating error range at medium zone coding, the expected value would be lower than 85%. This paper assumes transmission of one origin and one destination position from the probe. However, if multiple positions within the origin and destination zones were transmitted, map matching to the transport network could be performed, greatly improving the accuracy of the probe data.
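The zone-matching sensitivity described above can be checked with a back-of-the-envelope Monte Carlo sketch: a trip end inside a circular zone is perturbed by a fixed-radius probe error, and we count how often the reported position still falls in the correct zone. The radii follow the abstract's zone sizes, but the simulation itself is an illustration, not the paper's method.

```python
# Monte Carlo sketch: fraction of probe reports that stay in the correct zone.
import math
import random

def expected_match_rate(zone_radius_m, error_m, trials=20000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # True trip end uniformly distributed over the zone's area
        r = zone_radius_m * math.sqrt(rng.random())
        theta = rng.uniform(0, 2 * math.pi)
        x, y = r * math.cos(theta), r * math.sin(theta)
        # Probe error: fixed magnitude, random direction
        phi = rng.uniform(0, 2 * math.pi)
        mx, my = x + error_m * math.cos(phi), y + error_m * math.sin(phi)
        if math.hypot(mx, my) <= zone_radius_m:
            hits += 1
    return hits / trials

small = expected_match_rate(zone_radius_m=1100, error_m=100)   # small zone coding
medium = expected_match_rate(zone_radius_m=5000, error_m=100)  # medium zone coding
print(small < medium)  # larger zones tolerate the same fixed error better
```

Only trip ends within one error radius of the zone boundary can be misassigned, which is why the small-zone accuracy degrades faster for the same fixed error.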

Relevance: 10.00%

Abstract:

This paper presents a model to estimate travel time using cumulative plots. Three cases are considered: i) case-Det, using only detector data; ii) case-DetSig, using detector data and signal controller data; and iii) case-DetSigSFR, using detector data, signal controller data and the saturation flow rate. The performance of the model for different detection intervals is evaluated. It is observed that the detection interval is not critical if signal timings are available: comparable accuracy can be obtained from a larger detection interval with signal timings or from a shorter detection interval without them. The performance for case-DetSig and case-DetSigSFR is consistent, with accuracy generally above 95%, whereas case-Det is highly sensitive to the signal phases within the detection interval and its performance is uncertain if the detection interval is an integral multiple of the signal cycle.
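The underlying cumulative-plot idea can be sketched simply: with cumulative vehicle counts U(t) at an upstream detector and D(t) downstream, and assuming first-in-first-out traffic, the travel time of the n-th vehicle is the horizontal gap between the two curves at count n. The counts below are toy data, not from the paper.

```python
# Travel time from cumulative plots (FIFO assumed): horizontal distance
# between the upstream and downstream cumulative count curves.
import bisect

def crossing_time(times, counts, n):
    """First time at which the cumulative count reaches n."""
    i = bisect.bisect_left(counts, n)
    return times[i]

times = [0, 10, 20, 30, 40, 50, 60]        # ends of detection intervals (s)
upstream = [0, 5, 12, 20, 27, 33, 40]      # cumulative count at upstream detector
downstream = [0, 0, 4, 11, 19, 26, 33]     # same vehicles, seen later downstream

def travel_time(n):
    return crossing_time(times, downstream, n) - crossing_time(times, upstream, n)

print(travel_time(12))  # vehicle 12: upstream at t=20, downstream at t=40 -> 20 s
```

Signal timings matter because they let the model reconstruct the shape of the cumulative curves within a detection interval instead of interpolating across it.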

Relevance: 10.00%

Abstract:

The application of object-based approaches to the problem of extracting vegetation information from images requires accurate delineation of individual tree crowns. This paper presents an automated method for individual tree crown detection and delineation by applying a simplified PCNN model in spectral feature space followed by post-processing using morphological reconstruction. The algorithm was tested on high-resolution multi-spectral aerial images and the results are compared with two existing image segmentation algorithms. The results demonstrate that our algorithm outperforms the other two, with an average accuracy of 81.8%.
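A generic pulse-coupled neural network (PCNN) can be sketched as follows: each pixel's neuron fires when its internal activity (stimulus plus linking input from neighbouring pulses) exceeds a decaying dynamic threshold, so pixels of similar intensity tend to pulse together. This is a textbook-style simplification with illustrative constants, not the paper's specific model in spectral feature space.

```python
# Highly simplified PCNN on a tiny grayscale image; synchronously pulsing
# groups suggest one region (e.g. one tree crown). Constants are illustrative.

def pcnn_first_fire(image, beta=0.2, decay=0.7, V=20.0, steps=10):
    """Return the iteration at which each pixel's neuron first pulses."""
    h, w = len(image), len(image[0])
    theta = [[1.0] * w for _ in range(h)]     # dynamic thresholds
    fired = [[0] * w for _ in range(h)]       # pulses from the previous step
    first = [[None] * w for _ in range(h)]
    for t in range(1, steps + 1):
        # Linking input: number of 8-neighbours that pulsed last step
        link = [[sum(fired[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)
                     and 0 <= i + di < h and 0 <= j + dj < w)
                 for j in range(w)] for i in range(h)]
        new = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                U = image[i][j] * (1 + beta * link[i][j])  # internal activity
                if U > theta[i][j]:
                    new[i][j] = 1
                    if first[i][j] is None:
                        first[i][j] = t
                # Threshold decays; it jumps after a pulse to prevent refiring
                theta[i][j] = decay * theta[i][j] + V * new[i][j]
        fired = new
    return first

image = [[0.9, 0.9, 0.2],
         [0.9, 0.9, 0.2]]   # bright "crown" pixels beside darker background
first = pcnn_first_fire(image)
print(first)  # bright pixels pulse together, earlier than the dark ones
```

Grouping pixels by first-pulse iteration yields candidate regions, which a post-processing step such as morphological reconstruction can then clean up.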

Relevance: 10.00%

Abstract:

Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm⁻¹). Finite element analysis of X-ray images (3D-FEXI) is a FEA technique whose analysis is derived from a single 2D radiographic image. This ex-vivo study demonstrates that 3D-FEXI derived from a conventional 2D radiographic image has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD.
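The stiffness measure FEA provides can be illustrated with the simplest possible case: a 1D bar meshed into axial elements whose stiffnesses combine in series, giving a load/deflection ratio in N mm⁻¹. The material and geometry values below are order-of-magnitude illustrations, not femur properties from the study.

```python
# Toy 1D "FEA" sketch: axial elements in series, stiffness k = F / deflection.

def series_stiffness(element_stiffnesses):
    """Axial stiffness (N/mm) of elements in series: 1/k = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in element_stiffnesses)

E = 17000.0   # Young's modulus, MPa (cortical-bone order of magnitude)
A = 300.0     # cross-sectional area, mm^2
lengths = [20.0, 20.0, 20.0]          # three elements along a 60 mm bar
k = series_stiffness([E * A / L for L in lengths])
force = 1000.0                        # applied load, N
print(k, force / k)  # stiffness (N/mm) and resulting deflection (mm)
```

A real femur model assembles thousands of such element stiffnesses into a global matrix, but the output quantity, load divided by deformation, is the same.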

Relevance: 10.00%

Abstract:

Cold-formed steel members have been widely used in residential, industrial and commercial buildings as primary load bearing structural elements and non-load bearing structural elements (partitions) due to their advantages, such as a higher strength-to-weight ratio, over other structural materials such as hot-rolled steel, timber and concrete. Cold-formed steel members are often made from thin steel sheets and hence are more susceptible to various buckling modes. Generally, short columns are susceptible to local or distortional buckling, while long columns are susceptible to flexural or flexural-torsional buckling. Fire safety design of building structures is an essential requirement, as fire events can cause loss of property and lives. Therefore it is essential to understand the fire performance of light gauge cold-formed steel structures under fire conditions. The buckling behaviour of cold-formed steel compression members under fire conditions has not yet been well investigated, and hence there is a lack of knowledge on the fire performance of cold-formed steel compression members. Current cold-formed steel design standards do not provide adequate design guidelines for the fire design of cold-formed steel compression members. Therefore a research project based on extensive experimental and numerical studies was undertaken at the Queensland University of Technology to investigate the buckling behaviour of light gauge cold-formed steel compression members under simulated fire conditions. As the first phase of this research, a detailed review was undertaken of the mechanical properties of light gauge cold-formed steels at elevated temperatures, and the most reliable predictive models for mechanical properties and stress-strain models based on detailed experimental investigations were identified. Their accuracy was verified experimentally by carrying out a series of tensile coupon tests at ambient and elevated temperatures.
As the second phase of this research, local buckling behaviour was investigated based on experimental and numerical investigations at ambient and elevated temperatures. First, a series of 91 local buckling tests was carried out at ambient and elevated temperatures on lipped and unlipped channels made of G250-0.95, G550-0.95, G250-1.95 and G450-1.90 cold-formed steels. Suitable finite element models were then developed to simulate the experimental conditions. These models were converted to ideal finite element models to undertake a detailed parametric study. Finally, all the ultimate load capacity results for local buckling were compared with the available design methods based on AS/NZS 4600, BS 5950 Part 5, Eurocode 3 Part 1.2 and the direct strength method (DSM), and suitable recommendations were made for the fire design of cold-formed steel compression members subject to local buckling. As the third phase of this research, flexural-torsional buckling behaviour was investigated experimentally and numerically. Two series of 39 flexural-torsional buckling tests were undertaken at ambient and elevated temperatures. The first series consisted of 2800 mm long G550-0.95, G250-1.95 and G450-1.90 cold-formed steel lipped channel columns, while the second series contained 1800 mm long lipped channel columns of the same steel thicknesses and strength grades. All the experimental tests were simulated using a suitable finite element model, and the same model was used in a detailed parametric study following validation. Based on the comparison of results from the experimental and parametric studies with the available design methods, suitable design recommendations were made. This thesis presents a detailed description of the experimental and numerical studies undertaken on the mechanical properties and the local and flexural-torsional buckling behaviour of cold-formed steel compression members at ambient and elevated temperatures.
It also describes the currently available ambient-temperature design methods and their accuracy when used for fire design with appropriately reduced mechanical properties at elevated temperatures. Available fire design methods are also included, and their accuracy in predicting the ultimate load capacity at elevated temperatures is investigated. This research has shown that the current ambient-temperature design methods are capable of predicting the local and flexural-torsional buckling capacities of cold-formed steel compression members at elevated temperatures with the use of reduced mechanical properties. However, the elevated-temperature design method in Eurocode 3 Part 1.2 is overly conservative and hence unsuitable, particularly in the case of flexural-torsional buckling at elevated temperatures.
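The "ambient design method with reduced properties" idea can be illustrated with the classical elastic flexural buckling load: because the Euler load is proportional to the elastic modulus, applying an elevated-temperature reduction factor to E scales the elastic buckling capacity by the same factor. The section properties and the reduction factor below are illustrative assumptions, not values from the thesis.

```python
# Sketch: elastic flexural buckling load with a reduced elastic modulus,
# the way ambient design formulas are reused for fire design.
import math

def euler_buckling_load(E, I, L_eff):
    """Elastic flexural buckling load N_cr = pi^2 * E * I / L_eff^2."""
    return math.pi ** 2 * E * I / L_eff ** 2

E_ambient = 200000.0   # MPa
I = 80000.0            # mm^4, minor-axis second moment of area (illustrative)
L_eff = 1800.0         # mm, effective length (matches the shorter test series)

kE_elevated = 0.6      # illustrative elasticity reduction factor at temperature
N_cr_ambient = euler_buckling_load(E_ambient, I, L_eff)
N_cr_fire = euler_buckling_load(kE_elevated * E_ambient, I, L_eff)
print(N_cr_fire / N_cr_ambient)  # capacity scales linearly with reduced E
```

Member design strength additionally depends on the reduced yield stress through the relevant column curve, which is where the compared standards (AS/NZS 4600, BS 5950, Eurocode 3, DSM) differ.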

Relevance: 10.00%

Abstract:

Major global changes are placing new demands on the Australian education system. Recent statements by the Prime Minister, together with current education policy and national curriculum documents available in the public domain, look to education’s role in promoting economic prosperity and social cohesion. Collectively, they emphasise the need to equip young Australians with the knowledge, understandings and skills required to compete in the global economy and participate as engaged citizens in a culturally diverse world. However, the decision to prioritise discipline-based learning in the forthcoming Australian history curriculum without specifically encompassing culture as a referent, raises the following question. How will students acquire the cultural knowledge, understandings and skills necessary for this process? This paper addresses this question by situating the current push for a national history curriculum, with specific reference to the study of Indigenous history and the study of Asia in Australia.

Relevance: 10.00%

Abstract:

Purpose – The introduction of Building Information Modelling (BIM) tools over the last 20 years is resulting in radical changes in the Architectural, Engineering and Construction industry. One of these changes concerns the use of Virtual Prototyping, an advanced technology integrating BIM with realistic graphical simulations. Construction Virtual Prototyping (CVP) has been developed and implemented on ten real construction projects in Hong Kong over the past three years. This paper reports on a survey aimed at establishing the effects of adopting this new technology and obtaining recommendations for future development. Design/methodology/approach – A questionnaire survey was conducted in 2007 of 28 key participants involved in four major Hong Kong construction projects, these projects being chosen because the CVP approach was used in more than one stage of each. In addition, several interviews were conducted with the project manager, planning manager and project engineer of an individual project. Findings – All the respondents and interviewees gave a positive response to the CVP approach, with the most useful software functions considered to be those relating to visualisation and communication. The CVP approach was thought to improve the collaboration efficiency of the main contractor and sub-contractors by approximately 30 percent, with a concomitant 30 to 50 percent reduction in meeting time. The most important benefits of CVP in the construction planning stage are the improved accuracy of process planning and shorter planning times, while improved fieldwork instruction and reduced rework occur in the construction implementation stage. Although project teams are hesitant to attribute any specific time savings directly to the use of CVP, it was acknowledged that the workload of project planners is decreased. Suggestions for further development of the approach include the incorporation of automatic scheduling and advanced assembly study.
Originality/value – Whilst the research, development and implementation of CVP is relatively new in the construction industry, it is clear from the applications and feedback to date that the approach provides considerable added value to the organisation and management of construction projects.

Relevance: 10.00%

Abstract:

Objective: To summarise the extent to which narrative text fields in administrative health data are used to gather information about the event resulting in presentation to a health care provider for treatment of an injury, and to highlight best practice approaches to conducting narrative text interrogation for injury surveillance purposes.----- Design: Systematic review----- Data sources: Electronic databases searched included CINAHL, Google Scholar, Medline, Proquest, PubMed and PubMed Central. Snowballing strategies were employed by searching the bibliographies of retrieved references to identify relevant associated articles.----- Selection criteria: Papers were selected if the study used a health-related database and if the study objectives were to a) use text fields to identify injury cases or to extract additional information on injury circumstances not available from coded data, b) use text fields to assess the accuracy of coded data fields for injury-related cases, or c) describe methods/approaches for extracting injury information from text fields.----- Methods: The papers identified through the search were independently screened by two authors for inclusion, resulting in 41 papers selected for review. Due to heterogeneity between studies, meta-analysis was not performed.----- Results: The majority of papers reviewed focused on describing injury epidemiology trends using coded data and text fields to supplement coded data (28 papers), with these studies demonstrating the value of text data for providing more specific information beyond what had been coded to enable case selection or provide circumstantial information. Caveats were expressed in terms of the consistency and completeness of recording of text information, resulting in underestimates when using these data. Four coding validation papers were reviewed, with these studies showing the utility of text data for validating and checking the accuracy of coded data.
Seven studies (9 papers) described methods for interrogating injury text fields for systematic extraction of information, with a combination of manual and semi-automated methods used to refine and develop algorithms for extraction and classification of coded data from text. Quality assurance approaches to assessing the robustness of the methods for extracting text data were discussed in only 8 of the epidemiology papers and 1 of the coding validation papers. All of the text interrogation methodology papers described systematic approaches to ensuring the quality of the approach.----- Conclusions: Manual review and coding approaches, text search methods, and statistical tools have been utilised to extract data from narrative text and translate it into useable, detailed injury event information. These techniques can be, and have been, applied to administrative datasets to identify specific injury types and add value to previously coded injury datasets. Only a few studies thoroughly described the methods used for text mining, and fewer than half of the reviewed studies used or described quality assurance methods for ensuring the robustness of the approach. New techniques utilising semi-automated computerised approaches and Bayesian/clustering statistical methods offer the potential to further develop and standardise the analysis of narrative text for injury surveillance.
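The kind of first-pass, semi-automated narrative interrogation the reviewed studies describe can be sketched with simple keyword rules that assign an injury mechanism to free-text fields; real pipelines refine such rules iteratively with manual review. The rule set and records below are illustrative, not drawn from any reviewed dataset.

```python
# Keyword-rule sketch for classifying injury mechanism from narrative text.
import re

RULES = [
    ("fall", re.compile(r"\b(fell|fall|slipped|tripped)\b", re.I)),
    ("burn", re.compile(r"\b(burn|scald|hot)\b", re.I)),
    ("transport", re.compile(r"\b(car|bike|vehicle|crash)\b", re.I)),
]

def classify(narrative):
    """Return matched mechanisms (a record may mention more than one)."""
    return [label for label, pattern in RULES if pattern.search(narrative)]

records = [
    "Pt slipped on wet floor and fell onto left wrist",
    "Scald from hot water while cooking",
    "Fell off bike, helmet worn",
]
for text in records:
    print(classify(text))
# First record matches only 'fall'; third matches 'fall' and 'transport'
```

Comparing such rule-based labels against coded data fields is exactly the coding-validation use of text described above, and the disagreements are where manual review effort is best spent.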

Relevance: 10.00%

Abstract:

Purpose: To investigate whether wearing different presbyopic vision corrections alters the pattern of eye and head movements when viewing and responding to driving-related traffic scenes. Methods: Participants included 20 presbyopes (mean age: 56.1 ± 5.7 years) who had no experience of wearing presbyopic vision corrections, apart from single vision (SV) reading spectacles. Each participant wore five different vision corrections: distance SV lenses, progressive addition spectacle lenses (PAL), bifocal spectacle lenses (BIF), monovision (MV) and multifocal contact lenses (MTF CL). For each visual condition, participants were required to view videotape recordings of traffic scenes, track a reference vehicle, and identify a series of peripherally presented targets. Digital numerical display panels were also included as near visual stimuli (simulating the visual displays of a vehicle speedometer and radio). Eye and head movements were measured, and the accuracy of target recognition was also recorded. Results: The path length of eye movements while viewing and responding to driving-related traffic scenes was significantly longer when wearing BIF and PAL than MV and MTF CL (both p ≤ 0.013). The path length of head movements was greater with SV, BIF, and PAL than MV and MTF CL (all p < 0.001). Target recognition and brake response times were not significantly affected by vision correction, whereas target recognition was less accurate when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze (p = 0.008), regardless of vision correction. Conclusions: Different presbyopic vision corrections alter eye and head movement patterns. The longer path length of eye and head movements and greater number of saccades associated with the spectacle presbyopic corrections may affect some aspects of driving performance.
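The path-length measure used above is simply the summed point-to-point distance of successive gaze (or head-position) samples; longer paths indicate larger scanning movements. The coordinates below are illustrative gaze samples, not study data.

```python
# Path length of a gaze trace: sum of distances between successive samples.
import math

def path_length(samples):
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

gaze = [(0, 0), (3, 4), (3, 10), (11, 10)]   # illustrative samples (degrees)
print(path_length(gaze))  # 5 + 6 + 8 = 19.0
```

The same computation applies to head-movement traces, so the two measures reported in the study are directly comparable across vision corrections.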

Relevance: 10.00%

Abstract:

Spoken term detection (STD) popularly involves performing word or sub-word level speech recognition and indexing the result. This work challenges the assumption that improved speech recognition accuracy implies better indexing for STD. Using an index derived from phone lattices, this paper examines the effect of language model selection on the relationship between phone recognition accuracy and STD accuracy. Results suggest that language models usually improve phone recognition accuracy but their inclusion does not always translate to improved STD accuracy. The findings suggest that using phone recognition accuracy to measure the quality of an STD index can be problematic, and highlight the need for an alternative that is more closely aligned with the goals of the specific detection task.
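The indexing side of STD can be sketched generically: recognised phone sequences are indexed by phone n-grams, and a query term is detected wherever every n-gram of its pronunciation occurs. This is a toy stand-in for a phone-lattice index (a lattice would store alternative hypotheses, not a single sequence), with invented pronunciations and utterances.

```python
# Phone n-gram index sketch for spoken term detection.
from collections import defaultdict

def build_index(utterances, n=3):
    """Map each phone n-gram to the set of utterances containing it."""
    index = defaultdict(set)
    for utt_id, phones in utterances.items():
        for i in range(len(phones) - n + 1):
            index[tuple(phones[i:i + n])].add(utt_id)
    return index

def detect(index, pronunciation, n=3):
    """Utterances containing every phone n-gram of the query term."""
    grams = [tuple(pronunciation[i:i + n])
             for i in range(len(pronunciation) - n + 1)]
    hits = [index.get(g, set()) for g in grams]
    return set.intersection(*hits) if hits else set()

utterances = {
    "utt1": ["h", "ax", "l", "ow", "w", "er", "l", "d"],  # "hello world"
    "utt2": ["g", "uh", "d", "b", "ay"],                  # "goodbye"
}
index = build_index(utterances)
print(detect(index, ["w", "er", "l", "d"]))  # {'utt1'}
```

With a single recognised sequence per utterance, every phone error can break a query n-gram, which is one intuition for why recognition accuracy and detection accuracy need not move together once lattices and language models enter the picture.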

Relevance: 10.00%

Abstract:

XML document clustering is essential for many document handling applications such as information storage, retrieval, integration and transformation. An XML clustering algorithm should process both the structural and the content information of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. This paper introduces a novel approach that first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. The proposed method reduces the high dimensionality of input data by using only the structure-constrained content. The empirical analysis reveals that the proposed method can effectively cluster even very large XML datasets and outperform other existing methods.
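The structure-then-content idea can be sketched with element paths as a simple stand-in for frequent subtrees: find the paths shared by enough documents, then represent each document only by the text under those frequent paths. The support threshold and toy documents below are illustrative, not the paper's algorithm or data.

```python
# Structure-constrained content sketch: frequent element paths, then the
# text content restricted to those paths.
import xml.etree.ElementTree as ET
from collections import Counter

def paths_with_text(xml_str):
    """List (element path, text) pairs for a document."""
    root = ET.fromstring(xml_str)
    out = []
    def walk(node, prefix):
        path = f"{prefix}/{node.tag}"
        if node.text and node.text.strip():
            out.append((path, node.text.strip()))
        for child in node:
            walk(child, path)
    walk(root, "")
    return out

docs = [
    "<book><title>XML mining</title><price>10</price></book>",
    "<book><title>Clustering</title><isbn>123</isbn></book>",
    "<cd><title>Jazz</title></cd>",
]
parsed = [paths_with_text(d) for d in docs]
# Support: in how many documents does each path occur?
support = Counter(p for doc in parsed for p, _ in set(doc))
frequent = {p for p, c in support.items() if c >= 2}   # min support = 2 docs
constrained = [[(p, t) for p, t in doc if p in frequent] for doc in parsed]
print(frequent)  # only the structure shared by at least two documents survives
```

The dimensionality reduction claimed above shows up here directly: content under infrequent structure (price, isbn, the cd subtree) is dropped before any content similarity is computed.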