893 results for metrics
Abstract:
Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions, in the areas of developing low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that resulted in this thesis. This should serve to inform their activities applicable to this research. A number of HVs were tested for various characteristics, and these tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A “novel roughness” measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes. Further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to some conclusions regarding suspension and wheel force frequencies, their transmission to the pavement and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented. It used the correlation coefficient between two elements of a HV to determine dynamic load sharing.
This was validated against a mature dynamic load sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on a HV had been used for a test case vs. a control case for two different-sized air lines. That dynamic load sharing was improved at the air springs was shown for the test case of the large longitudinal air lines. The statistically significant improvement in dynamic load sharing at the air springs from larger longitudinal air lines varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events for the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and “novel roughness” values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces, ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing, through alterations to currently available damping characteristics combined with implementation of large longitudinal air lines. In-service testing of HV suspensions was found to be possible within a documented range from below the error margin to an error of approximately 16 percent. These results were in comparison with either the manufacturer’s certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11.
OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
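The correlation-based load-sharing approach described above can be sketched as follows. This is a minimal illustration with invented function names and synthetic force traces; the thesis's actual dynamic load sharing coefficient follows de Pont (1997) and operates on measured wheel forces, so the load-sharing formulation below should be read as one common, simplified variant rather than the method used in the work:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length force
    traces, e.g. air-spring forces at two elements of the same HV."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def load_sharing_coefficient(wheel_forces, group_mean_force):
    """One common load-sharing formulation: mean force at this wheel
    relative to the mean force over the axle group (ideal value 1.0)."""
    return statistics.fmean(wheel_forces) / group_mean_force

# Perfectly correlated traces indicate good dynamic load sharing.
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])  # r == 1.0
```

Higher correlation between the force signals at two suspension elements indicates that dynamic load is being shared rather than concentrated at one wheel.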
Abstract:
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
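The simplest of the lateral control metrics above, mean deviation, can be illustrated as the average absolute lateral gap between the normative (ideal) lane-change path and the path actually driven. The sketch below assumes both paths are sampled at the same longitudinal positions; the function name is illustrative:

```python
import statistics

def mean_deviation(normative_path, driven_path):
    """Mean absolute lateral deviation (in metres) between the normative
    LCT model path and the path actually driven, sampled at matching
    longitudinal positions."""
    return statistics.fmean(abs(n - d)
                            for n, d in zip(normative_path, driven_path))

# A driver tracking 0.5 m off the normative line throughout:
mean_deviation([0.0, 0.0, 0.0], [0.5, 0.5, 0.5])  # -> 0.5
```

A larger mean deviation while a secondary task is performed indicates greater degradation of lateral control by that task.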
Abstract:
Earlier studies have shown that the influence of fixation stability on bone healing diminishes with advanced age. The goal of this study was to unravel the relationship between mechanical stimulus and age on callus competence at a tissue level. Using 3D in vitro micro-computed tomography derived metrics, 2D in vivo radiography, and histology, we investigated the influences of age and varying fixation stability on callus size, geometry, microstructure, composition, remodeling, and vascularity. Four groups with a 1.5-mm osteotomy gap in the femora of Sprague–Dawley rats were compared: young rigid (YR), young semirigid (YSR), old rigid (OR), and old semirigid (OSR). The hypothesis was that calcified callus microstructure and composition are impaired by advanced age, and that older individuals would show a reduced response to fixation stability. Semirigid fixation resulted in a larger ΔCSA (callus cross-sectional area) compared to the rigid groups. In vitro μCT analysis at 6 weeks postmortem showed callus bridging scores in younger animals to be superior to those of their older counterparts (p<0.01). Younger animals showed (i) larger callus strut thickness (p<0.001), (ii) lower perforation in struts (p<0.01), and (iii) higher mineralization of callus struts (p<0.001). Callus mineralization was reduced in young animals with semirigid fracture fixation but remained unaffected in the aged group. While stability influenced callus size and geometry, age did not. With no differences observed in relative osteoid areas in the callus ROI, both old and semirigid-fixated animals showed a higher osteoclast count (p<0.05). Blood vessel density was reduced in animals with semirigid fixation (p<0.05). In conclusion, in vivo monitoring indicated delayed callus maturation in aged individuals. Callus bridging and callus competence (microstructure and mineralization) were impaired in individuals of advanced age.
This matched with increased bone resorption due to higher osteoclast numbers. Varying fixator configurations in older individuals did not alter the dominant effect of advanced age on callus tissue mineralization, unlike in their younger counterparts. Age-associated influences appeared independent of stability. This study illustrates the dominating role of osteoclastic activity in age-related impaired healing, while demonstrating that optimization of fixation parameters such as stiffness appears to be less effective in influencing healing in aged individuals.
Abstract:
In November 2009 the researcher embarked on a project aimed at reducing the amount of paper used by Queensland University of Technology (QUT) staff in their daily workplace activities. The key goal was to communicate to staff that excessive printing has a tangible and negative effect on their workplace and local environment. The research objective was to better understand what motivates staff towards more ecologically sustainable printing practices whilst meeting their jobs' demands. The current study builds on previous research which found that one interface does not address the needs of all users when creating persuasive Human Computer Interaction (HCI) interventions targeting resource consumption. In response, the current study created and trialled software that communicates individual paper consumption in precise metrics. Based on preliminary research data, different metric sets have been defined to address the different motivations and beliefs of user archetypes, using descriptive and injunctive normative information.
Abstract:
It is possible to estimate the depth of focus (DOF) of the eye directly from wavefront measurements using various retinal image quality metrics (IQMs). In such methods, DOF is defined as the range of defocus error that degrades the retinal image quality calculated from IQMs to a certain level of the maximum value. Although different retinal image quality metrics are used, two arbitrary threshold levels have so far been adopted: 50% and 80%. There has been limited study of the relationship between these threshold levels and the actual measured DOF. We measured the subjective DOF in a group of 17 normal subjects, and used the through-focus augmented visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, as the IQM. For each subject, a VSOTF threshold level was derived that would match the subjectively measured DOF. A significant correlation was found between the subject's estimated threshold level and the HOA RMS (Pearson's r=0.88, p<0.001). This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations.
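The threshold-based DOF definition above can be sketched numerically: given a through-focus IQM curve, the DOF is the contiguous defocus range around the metric's peak over which the metric stays above a fraction of its maximum. The curve below is synthetic and the function name is illustrative, not from the study:

```python
def depth_of_focus(defocus, iqm, threshold_fraction):
    """Estimate DOF as the contiguous defocus range around the IQM peak
    over which the metric stays above threshold_fraction * max(iqm)."""
    peak = max(iqm)
    cutoff = threshold_fraction * peak
    peak_i = iqm.index(peak)
    lo = peak_i
    while lo > 0 and iqm[lo - 1] >= cutoff:
        lo -= 1
    hi = peak_i
    while hi < len(iqm) - 1 and iqm[hi + 1] >= cutoff:
        hi += 1
    return defocus[hi] - defocus[lo]

# Synthetic through-focus curve sampled at 0.5 D steps, 50% threshold:
dof = depth_of_focus([-1.0, -0.5, 0.0, 0.5, 1.0],
                     [0.2, 0.6, 1.0, 0.6, 0.2], 0.5)  # -> 1.0 D
```

Fitting the threshold fraction per subject, as the study proposes via the HOA RMS correlation, would replace the fixed 50% or 80% value passed in here.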
Abstract:
The role of cardiopulmonary signals in the dynamics of wavefront aberrations in the eye has been examined. Synchronous measurements of the eye’s wavefront aberrations, cardiac function, blood pulse, and respiration signals were taken for a group of young, healthy subjects. Two focusing stimuli, three breathing patterns, and both natural and cycloplegic eye conditions were examined. A set of tools, including time–frequency coherence and its metrics, has been proposed to acquire a detailed picture of the interactions of the cardiopulmonary system with the eye’s wavefront aberrations. The results showed that the coherence of the blood pulse and its harmonics with the eye’s aberrations was, on average, weak (0.4 ± 0.15), while the coherence of the respiration signal with the eye’s aberrations was, on average, moderate (0.53 ± 0.14). It was also revealed that there were significant intervals during which high coherence occurred. On average, the coherence was high (>0.75) during 16% of the recorded time for the blood pulse, and 34% of the time for the respiration signal. A statistically significant decrease in average coherence was noted for the eye’s aberrations with respiration in the case of fast controlled breathing (0.5 Hz). The coherence between the blood pulse and defocus was significantly larger for the far target than for the near target condition. After cycloplegia, the coherence of defocus with the blood pulse significantly decreased, while this was not the case for the other aberrations. There was also a noticeable, but not statistically significant, increase in the coherence of the comatic term and respiration in that case. By using nonstationary measures of signal coherence, a more detailed picture of interactions between the cardiopulmonary signals and the eye’s wavefront aberrations has emerged.
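The "fraction of time at high coherence" statistic reported above can be illustrated with a much cruder, nonstationary stand-in for the paper's time–frequency coherence: a sliding-window squared correlation between two signals. This is a simplified proxy under stated assumptions, not the study's method, and the function names are invented:

```python
import statistics

def windowed_coherence(x, y, win):
    """Sliding-window squared correlation between signals x and y, as a
    crude, nonstationary stand-in for time-frequency coherence."""
    out = []
    for i in range(0, len(x) - win + 1, win):
        xs, ys = x[i:i + win], y[i:i + win]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = (sum((a - mx) ** 2 for a in xs) *
               sum((b - my) ** 2 for b in ys)) ** 0.5
        out.append((num / den) ** 2 if den else 0.0)
    return out

def fraction_high(coh, level=0.75):
    """Fraction of windows whose coherence exceeds `level` -- analogous
    to the 16% / 34% figures quoted in the abstract."""
    return sum(c > level for c in coh) / len(coh)
```

Identical signals yield coherence 1.0 in every window, so `fraction_high` returns 1.0; real aberration and cardiopulmonary traces would alternate between high- and low-coherence intervals.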
Abstract:
Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10 respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected.
This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data from which to derive emission factors for their region. Recommendations and perspectives In urban areas, motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
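The selection criteria described above (lowest standard error, largest sample size) lend themselves to a simple illustration. The data structure and field names below are invented for the sketch; the study's actual selection also weighed confidence intervals and the explanatory model variables:

```python
def select_emission_factor(candidates):
    """Choose the candidate emission factor with the lowest standard
    error, breaking ties by the larger sample size -- two of the
    robustness criteria described in the study (field names here are
    illustrative)."""
    return min(candidates, key=lambda c: (c["se"], -c["n"]))

# Hypothetical published particle-number emission factors (particles/km):
factors = [
    {"name": "study A", "ef": 3.1e14, "se": 0.40, "n": 12},
    {"name": "study B", "ef": 2.8e14, "se": 0.15, "n": 30},
]
best = select_emission_factor(factors)  # -> the "study B" entry
```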
Abstract:
The Transport Certification Australia on-board mass feasibility project is testing various on-board mass (OBM) devices in a range of heavy vehicles (HVs). Extensive field tests of OBM measurement systems for HVs were conducted during 2008, assessing the accuracy, robustness and tamper evidence of heavy vehicle OBM telematics. All the systems tested showed accuracies within approximately +/- 500 kg of gross combination mass, or approximately +/- 2% of the attendant weighbridge reading. Analysis of the dynamic data also showed encouraging results and has raised the possibility of using such dynamic information for tamper evidence in two areas: determining whether averaged dynamic data could identify potential tampering or incorrect operating procedures, and flagging a tamper event from dynamic measurements by the use of metrics including a tampering index (TIX). Technical and business options to detect tamper events will now be developed during implementation of regulatory OBM system application to Australian HVs.
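A tampering index of the kind mentioned above might, in its simplest form, compare averaged OBM readings against a weighbridge reference and normalise the deviation by the accuracy tolerance. The formulation below is hypothetical, intended only to make the idea concrete; the project's actual TIX is not specified in the abstract:

```python
def tamper_index(obm_readings_kg, weighbridge_kg, tolerance_kg=500):
    """Hypothetical tampering index (TIX): mean absolute deviation of
    averaged OBM readings from the weighbridge reference, normalised by
    the +/- 500 kg accuracy tolerance.  Values well above 1.0 would flag
    a possible tamper event or incorrect operating procedure."""
    mean_obm = sum(obm_readings_kg) / len(obm_readings_kg)
    return abs(mean_obm - weighbridge_kg) / tolerance_kg

tamper_index([10500.0, 10500.0], 10000.0)  # -> 1.0 (at tolerance limit)
```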
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.
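A before/after security assessment of the kind described can be made concrete with a toy information-flow metric. The metric and data structure below are illustrative inventions, not the paper's actual metrics: they simply score a design by the fraction of sensitive attributes that are publicly accessible, so a refactoring step that lowers the score would be judged security-improving:

```python
def classified_attribute_exposure(classes):
    """Illustrative (not the paper's) information-flow metric: the
    fraction of attributes marked sensitive that are publicly
    accessible across the design.  Lower is better."""
    sensitive = [attr for cls in classes for attr in cls.values()
                 if attr["sensitive"]]
    exposed = sum(attr["public"] for attr in sensitive)
    return exposed / len(sensitive)

# A design with one exposed sensitive attribute out of two:
design = [{"pin":  {"sensitive": True,  "public": True},
           "log":  {"sensitive": True,  "public": False},
           "name": {"sensitive": False, "public": True}}]
classified_attribute_exposure(design)  # -> 0.5
```

Evaluating such a metric before and after applying a refactoring rule, as the work does, shows whether the rule widened or narrowed potential information flow.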
Abstract:
This paper reports on an empirical comparison of seven machine learning algorithms in texture classification, with application to vegetation management in power line corridors. Aiming at classifying tree species in power line corridors, an object-based method is employed. Individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that classification performance depends on the performance metric used, the characteristics of the datasets and the features used.
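Typical examples of such widely used performance metrics are accuracy, precision and recall. A minimal sketch, with invented labels, of how they are computed for one class:

```python
def performance_metrics(y_true, y_pred, positive):
    """Accuracy over all samples, plus precision and recall for the
    class given by `positive`."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Hypothetical tree-species predictions for four crowns:
performance_metrics(["gum", "gum", "pine", "pine"],
                    ["gum", "pine", "pine", "pine"],
                    positive="gum")  # -> (0.75, 1.0, 0.5)
```

Because these metrics can rank the same classifiers differently, the choice of metric itself affects the comparison, which is one of the paper's observations.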
Abstract:
Process modeling is a central element in any approach to Business Process Management (BPM). However, what hinders both practitioners and academics is the lack of support for assessing the quality of process models – let alone realizing high-quality process models. Existing frameworks are either highly conceptual or too general. At the same time, various techniques, tools, and research results are available that cover fragments of the issue at hand. This chapter presents the SIQ framework, which on the one hand integrates concepts and guidelines from existing frameworks and on the other links these concepts to current research in the BPM domain. Three different types of quality are distinguished, and for each of these levels concrete metrics, available tools, and guidelines are provided. While the basis of the SIQ framework is thought to be rather robust, its external pointers can be updated with newer insights as they emerge.
Abstract:
Shrinking product lifecycles, tough international competition, swiftly changing technologies, ever-increasing customer quality expectations and demand for high-variety options are some of the forces that drive the next generation of development processes. To meet these challenges, the design cost and development time of products have to be reduced and quality improved. Design reuse is considered one of the lean strategies for winning in this competitive environment: it can reduce product development time and cost as well as the number of defects, which ultimately influence product performance in cost, time and quality. However, little or no work has been carried out on quantifying the effectiveness of design reuse in product development performance measures such as design cost, development time and quality. Therefore, in this study we propose a systematic design-reuse-based product design framework and develop a design leanness index (DLI) as a measure of the effectiveness of design reuse. The DLI is a representative measure of reuse effectiveness in cost, development time and quality. Through this index, a clear relationship between the reuse measure and product development performance metrics is established. Finally, a cost-based model is developed to maximise the design leanness index for a product within a given set of constraints, achieving leanness in the design process.
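An index of this kind would plausibly combine fractional improvements in the three performance dimensions the abstract names. The form below is a hypothetical sketch; the thesis's actual DLI formulation and weights may differ:

```python
def design_leanness_index(cost_saving, time_saving, defect_reduction,
                          weights=(1 / 3, 1 / 3, 1 / 3)):
    """Hypothetical form of a design leanness index (DLI): a weighted
    combination of fractional improvements (0..1) in development cost,
    development time and quality attributable to design reuse."""
    wc, wt, wq = weights
    return wc * cost_saving + wt * time_saving + wq * defect_reduction

# Equal 30% improvements in all three dimensions give a DLI of 0.3:
design_leanness_index(0.3, 0.3, 0.3)
```

Maximising such an index subject to cost constraints, as the cost-based model in the study does, becomes a standard constrained optimisation once the weights and improvement functions are fixed.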
Abstract:
Conceptual modeling grammars are a fundamental means for specifying information systems requirements. However, the actual usage of these grammars is only poorly understood. In particular, little is known about how properties of these grammars inform usage beliefs such as usefulness and ease of use. In this paper we use an ontological theory to describe conceptual modeling grammars in terms of their ontological deficiencies, and formulate two propositions in regard to how these ontological deficiencies influence primary usage beliefs. Using BPMN as an example modeling grammar, we surveyed 528 modeling practitioners to test the theorized relationships. Our results show that users of conceptual modeling grammars perceive ontological deficiencies to exist, and that these deficiency perceptions are negatively associated with usefulness and ease of use of these grammars. With our research we provide empirical evidence in support of the predictions of the ontological theory of modeling grammar expressiveness, and we identify previously unexplored links between conceptual modeling grammars and grammar usage beliefs. This work implies for practice a much closer coupling of the act of (re)designing modeling grammars with usage-related success metrics.
Abstract:
Post-license advanced driver training programs in the US, and early programs in Europe, have often failed to accomplish their stated objectives because, it is suspected, drivers gain self-perceived driving skills that exceed their true skills, leading to increased post-training crashes. The consensus from the evaluation of countless advanced driver training programs is that these programs are a detriment to safety, especially for novice, young, male drivers. Some European countries, including Sweden, Finland, Austria, Luxembourg, and Norway, have continued to refine these programs, with an entirely new training philosophy emerging around 1990. These ‘post-renewal’ programs have shown considerable promise, despite various data quality and availability concerns. The programs share a common focus on teaching drivers self-assessment and anticipation of risk, as opposed to teaching drivers how to master driving at the limits of tire adhesion, and emphasise factors such as self-actualization and driving discipline rather than low-level mastery of skills. Drivers are meant to depart these renewed programs with a more realistic assessment of their driving abilities. The renewed programs require considerable specialized and costly infrastructure, including dedicated driver training facilities with driving modules engineered specifically for advanced driver training, and highly structured curricula. They are conspicuously missing from both the US road safety toolbox and the academic literature. Given the considerable road safety concerns associated with US novice male drivers in particular, these programs warrant further attention. This paper reviews the predominant features and empirical evidence surrounding post-licensing advanced driver training programs focused on novice drivers. A clear articulation of the differences between the renewed programs and current US advanced driver training programs is provided.
While the individual quantitative evaluations range from marginally to significantly effective in reducing novice driver crash risk, they have been criticized for evaluation deficiencies ranging from small sample sizes to confounding variables to a lack of exposure metrics. Collectively, however, the programs cited in the paper suggest at least a marginally positive effect that needs to be validated with further studies. If additional well-controlled studies can validate these programs, a pilot program in the US should be considered.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
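The matching metrics compared in the report can be sketched compactly. The sketch below shows the Sum of Absolute Differences, its zero-mean variant, and the rank transform, operating on flat lists of pixel intensities (real implementations slide these over 2D image windows):

```python
def sad(a, b):
    """Sum of Absolute Differences: cheapest metric, but sensitive to
    radiometric (gain/offset) distortion between the two windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def zsad(a, b):
    """Zero-mean SAD: subtracting each window's mean removes offset
    distortion at a small extra computational cost."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum(abs((x - ma) - (y - mb)) for x, y in zip(a, b))

def rank_transform(window, centre_index):
    """Rank transform: the number of pixels in the window darker than
    the centre pixel.  Invariant to any monotonic radiometric
    distortion, which is why it suits a real-time mining sensor."""
    centre = window[centre_index]
    return sum(p < centre for p in window)

# An offset of +10 in one image defeats SAD but not ZSAD:
sad([1, 2, 3], [11, 12, 13])   # -> 30
zsad([1, 2, 3], [11, 12, 13])  # -> 0
```

After transforming both images, rank (or census) values are themselves compared with a cheap metric such as SAD, combining robustness with low computational complexity.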