Abstract:
This article presents the design and implementation of a trusted sensor node that provides Internet-grade security at low system cost. We describe trustedFleck, which uses a commodity Trusted Platform Module (TPM) chip to extend the capabilities of a standard wireless sensor node with security services such as message integrity, confidentiality, authenticity, and system integrity, based on RSA public-key and XTEA-based symmetric-key cryptography. In addition, trustedFleck provides secure storage of private keys and platform configuration registers (PCRs) to store system configurations and detect code tampering. We analyze system performance using metrics that are important for WSN applications: computation time, memory size, energy consumption, and cost. Our results show that trustedFleck significantly outperforms previous approaches (e.g., TinyECC) on these metrics while providing stronger security levels. Finally, we describe a number of examples built on trustedFleck: symmetric key management, secure RPC, secure software update, and remote attestation.
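For reference, a minimal sketch of the XTEA block cipher named above, the symmetric primitive trustedFleck pairs with RSA. The round structure and constants follow the published XTEA algorithm; the function name and pure-Python form are illustrative and are not trustedFleck's API.

```python
MASK = 0xFFFFFFFF   # constrain arithmetic to 32-bit words, as on the node's MCU
DELTA = 0x9E3779B9  # XTEA key-schedule constant

def xtea_encrypt_block(v, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) under a 128-bit key
    (four 32-bit words) with the standard 32-round XTEA schedule."""
    v0, v1 = v
    total = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (total + key[total & 3]))) & MASK
        total = (total + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (total + key[(total >> 11) & 3]))) & MASK
    return v0, v1

# Example: xtea_encrypt_block((0x01234567, 0x89ABCDEF), (0, 1, 2, 3))
```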
Abstract:
Visual servoing has been a viable method of robot manipulator control for more than a decade. Initial developments involved position-based visual servoing (PBVS), in which the control signal exists in Cartesian space. The younger method, image-based visual servoing (IBVS), has seen considerable development in recent years. PBVS and IBVS offer tradeoffs in performance, and neither can solve all tasks that may confront a robot. In response to these issues, several methods have been devised that partition the control scheme, allowing some motions to be performed in the manner of a PBVS system while the remaining motions are performed using an IBVS approach. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation, in addition to the traditional PBVS and IBVS approaches. We posit a set of performance metrics that quantitatively measure the performance of a visual servo controller for a specific task. We then evaluate each of the candidate visual servo methods for four canonical tasks, with simulations and with experiments in a robotic work cell.
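For orientation, a minimal sketch of the classical IBVS control law that such evaluations start from: camera velocity is computed from the image-feature error through the pseudo-inverse of the stacked interaction matrix. The point-feature interaction matrix below is the standard one from the visual servoing literature; the function names and gain value are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point
    (x, y) at depth Z, mapping the 6-DOF camera velocity screw to the
    point's image velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, goal, depths, gain=0.5):
    """Camera velocity [vx, vy, vz, wx, wy, wz] driving the current image
    features toward their goal positions: v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(goal)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```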
Abstract:
Presents a unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve. We aim at identifying those methods that achieve better tracking, have low sensitivity to system uncertainties, and offer a good balance between development effort and end results. A formal approach to this problem relies on several practical metrics, which are introduced herein. Their choice is important, as the comparison results between controllers can vary significantly depending on the selected criterion. Apart from the quantitative assessment, we also raise aspects which are difficult to quantify, but which must be kept in mind when considering the position control problem for this class of hydraulic servo systems.
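The abstract does not name the metrics; as a hedged illustration, tracking-error indices of the kind commonly used to rank position controllers can be computed as below. The function name and the choice of IAE/ISE/ITAE are assumptions, not the paper's list.

```python
import numpy as np

def tracking_indices(t, reference, measured):
    """Classical tracking-error indices over a sampled trajectory:
    IAE, ISE, and ITAE (time-weighted absolute error)."""
    t = np.asarray(t, dtype=float)
    e = np.asarray(reference, dtype=float) - np.asarray(measured, dtype=float)
    return {
        "IAE": np.trapz(np.abs(e), t),       # integral of absolute error
        "ISE": np.trapz(e ** 2, t),          # integral of squared error
        "ITAE": np.trapz(t * np.abs(e), t),  # penalizes late errors more heavily
    }
```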
Abstract:
Six Sigma provides a framework for quality improvement and business excellence. Introduced in the 1980s in manufacturing, the concept of Six Sigma has gained popularity in service organizations. After initial success in healthcare and banking, Six Sigma has gradually gained traction in other types of service industries, including hotels and lodging. Starwood Hotels and Resorts was the first hospitality giant to embrace Six Sigma. In 2001, Starwood adopted the method to develop innovative, customer-focused solutions and to transfer these solutions throughout the global organization. To analyze Starwood's use of Six Sigma, the authors collected data from articles, interviews, presentations, and speeches published in magazines, newspapers, and websites; these sources provided corroborating detail and a basis for inference. Financial metrics can explain the success of Six Sigma in any organization, and the research uncovered no shortage of examples of Starwood successes resulting from Six Sigma project metrics.
Abstract:
Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions, in the areas of developing low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset, and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that resulted in this thesis; this should serve to inform their activities applicable to this research.

A number of HVs were tested for various characteristics, and the tests were used to form conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A "novel roughness" measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes, and it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented, leading to conclusions regarding suspension and wheel force frequencies, their transmission to the pavement, and repetitive wheel loads in the spatial domain.

An improved method of determining dynamic load sharing was developed and presented. It used the correlation coefficient between two elements of a HV to determine dynamic load sharing, and was validated against a mature dynamic load-sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time the technique of measuring correlation between elements on a HV had been used for a test case vs. a control case for two different sized air lines. Dynamic load sharing at the air springs was shown to improve for the test case of the large longitudinal air lines: the statistically significant improvement varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events. Statistically significant improvements to some suspension metrics across the range of test speeds and "novel roughness" values were evident from the use of larger longitudinal air lines, but these were not uniform; of note were improvements to suspension metrics involving peak dynamic forces, ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests and used to propose further development of, and future research into, further gains in HV dynamic load sharing, via alterations to currently available damping characteristics combined with implementation of large longitudinal air lines.

In-service testing of HV suspensions was found to be possible within a documented error range from below the error margin to approximately 16 percent, in comparison with either the manufacturer's certified data or test results replicating Vehicle Standards Bulletin 11, the Australian standard for "road-friendly" HV suspensions. OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
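A hedged sketch of the correlation-based dynamic load-sharing indicator the thesis describes: the Pearson correlation between the force signals of two suspension elements, with values near 1 indicating good dynamic load sharing. The function is illustrative; the thesis's exact formulation and the de Pont (1997) DLSC definition are not reproduced here.

```python
import numpy as np

def dynamic_load_sharing(force_a, force_b):
    """Correlation-based dynamic load-sharing indicator between two HV
    suspension elements (e.g. the air springs of adjacent axles), computed
    as the Pearson correlation coefficient of their sampled force signals."""
    return np.corrcoef(np.asarray(force_a, dtype=float),
                       np.asarray(force_b, dtype=float))[0, 1]
```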
Abstract:
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
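As a hedged illustration of the paper's best-performing measure, mean deviation can be read as the average lateral gap between the driven course and a normative lane-change path; the sampling scheme and function below are assumptions, not the study's exact implementation.

```python
import numpy as np

def mean_deviation(lateral_position, normative_path):
    """Mean absolute lateral deviation between the driven course and a
    normative lane-change path, both sampled at matching longitudinal
    positions along the LCT track."""
    gap = np.abs(np.asarray(lateral_position, dtype=float) -
                 np.asarray(normative_path, dtype=float))
    return float(gap.mean())
```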
Abstract:
Earlier studies have shown that the influence of fixation stability on bone healing diminishes with advanced age. The goal of this study was to unravel the relationship between mechanical stimulus and age on callus competence at a tissue level. Using 3D in vitro micro-computed tomography derived metrics, 2D in vivo radiography, and histology, we investigated the influences of age and varying fixation stability on callus size, geometry, microstructure, composition, remodeling, and vascularity. Four groups with a 1.5-mm osteotomy gap in the femora of Sprague–Dawley rats were compared: young rigid (YR), young semirigid (YSR), old rigid (OR), and old semirigid (OSR). The hypothesis was that calcified callus microstructure and composition are impaired with advanced age, and that older individuals show a reduced response to fixation stability. Semirigid fixation resulted in a larger ΔCSA (callus cross-sectional area) compared to the rigid groups. In vitro μCT analysis at 6 weeks postmortem showed callus bridging scores in younger animals to be superior to those of their older counterparts (p<0.01). Younger animals showed (i) larger callus strut thickness (p<0.001), (ii) lower perforation in struts (p<0.01), and (iii) higher mineralization of callus struts (p<0.001). Callus mineralization was reduced in young animals with semirigid fracture fixation but remained unaffected in the aged group. While stability influenced callus size and geometry, age did not. With no differences observed in relative osteoid areas in the callus ROI, both old and semirigidly fixated animals showed a higher osteoclast count (p<0.05). Blood vessel density was reduced in animals with semirigid fixation (p<0.05). In conclusion, in vivo monitoring indicated delayed callus maturation in aged individuals. Callus bridging and callus competence (microstructure and mineralization) were impaired in individuals of advanced age, matching the increased bone resorption due to higher osteoclast numbers. Varying fixator configurations in older individuals did not alter the dominant effect of advanced age on callus tissue mineralization, unlike in their younger counterparts; age-associated influences appeared independent of stability. This study illustrates the dominating role of osteoclastic activity in age-related impaired healing, and demonstrates that optimizing fixation parameters such as stiffness appears less effective at influencing healing in aged individuals.
Abstract:
In November 2009 the researcher embarked on a project aimed at reducing the amount of paper used by Queensland University of Technology (QUT) staff in their daily workplace activities. The key goal was to communicate to staff that excessive printing has a tangible and negative effect on their workplace and local environment. The research objective was to better understand what motivates staff towards more ecologically sustainable printing practices while still meeting the demands of their jobs. The current study builds on previous research which found that a single interface does not address the needs of all users when creating persuasive Human Computer Interaction (HCI) interventions targeting resource consumption. In response, the current study created and trialled software that communicates individual paper consumption in precise metrics. Based on preliminary research data, different metric sets were defined to address the differing motivations and beliefs of user archetypes, using descriptive and injunctive normative information.
Abstract:
It is possible to estimate the depth of focus (DOF) of the eye directly from wavefront measurements using various retinal image quality metrics (IQMs). In such methods, DOF is defined as the range of defocus error that degrades the retinal image quality, calculated from IQMs, to a certain level of its maximum value. Although different retinal image quality metrics are used, two arbitrary threshold levels have been adopted to date, 50% and 80%, and there has been limited study of the relationship between these threshold levels and the actual measured DOF. We measured the subjective DOF in a group of 17 normal subjects, and used the through-focus augmented visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, as the IQM. For each subject, a VSOTF threshold level was derived that would match the subjectively measured DOF. Significant correlation was found between the subject's estimated threshold level and the higher-order aberration (HOA) RMS (Pearson's r=0.88, p<0.001). This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations.
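A minimal sketch of the thresholding step this abstract describes: given a through-focus IQM curve, the DOF is the span of defocus over which the metric stays above a fraction of its peak. The names and the sampled-curve approach are illustrative, and the paper derives the threshold per subject rather than fixing it.

```python
import numpy as np

def depth_of_focus(defocus, iqm, threshold=0.8):
    """DOF estimated from a through-focus image-quality curve: the span of
    defocus values (in dioptres) where the metric stays at or above
    `threshold` times its peak. threshold=0.8 echoes one of the two
    commonly adopted levels (50% and 80%)."""
    defocus = np.asarray(defocus, dtype=float)
    iqm = np.asarray(iqm, dtype=float)
    above = defocus[iqm >= threshold * iqm.max()]
    return float(above.max() - above.min()) if above.size else 0.0
```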
Abstract:
The role of cardiopulmonary signals in the dynamics of wavefront aberrations in the eye has been examined. Synchronous measurements of the eye's wavefront aberrations, cardiac function, blood pulse, and respiration signals were taken for a group of young, healthy subjects. Two focusing stimuli, three breathing patterns, and natural and cycloplegic eye conditions were examined. A set of tools, including time–frequency coherence and its metrics, has been proposed to acquire a detailed picture of the interactions of the cardiopulmonary system with the eye's wavefront aberrations. The results showed that the coherence of the blood pulse and its harmonics with the eye's aberrations was, on average, weak (0.4 ± 0.15), while the coherence of the respiration signal with the eye's aberrations was, on average, moderate (0.53 ± 0.14). It was also revealed that there were significant intervals during which high coherence occurred: on average, the coherence was high (>0.75) during 16% of the recorded time for the blood pulse, and 34% of the time for the respiration signal. A statistically significant decrease in average coherence was noted for the eye's aberrations with respiration in the case of fast controlled breathing (0.5 Hz). The coherence between the blood pulse and defocus was significantly larger for the far-target than for the near-target condition. After cycloplegia, the coherence of defocus with the blood pulse significantly decreased, while this was not the case for the other aberrations; there was also a noticeable, but not statistically significant, increase in the coherence of the comatic term with respiration in that case. By using nonstationary measures of signal coherence, a more detailed picture of the interactions between cardiopulmonary signals and the eye's wavefront aberrations has emerged.
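As a hedged stand-in for the paper's time–frequency coherence tools, a stationary magnitude-squared coherence between an aberration coefficient and a cardiopulmonary trace can be estimated with standard spectral methods; the names, band, and segment length below are illustrative.

```python
import numpy as np
from scipy.signal import coherence

def mean_band_coherence(aberration, cardio, fs, band=(0.1, 0.5)):
    """Magnitude-squared coherence between a sampled aberration coefficient
    (e.g. defocus) and a cardiopulmonary signal, averaged over a frequency
    band of interest. Welch-based and stationary, unlike the paper's
    time-frequency estimate, but it conveys the same quantity."""
    f, cxy = coherence(np.asarray(aberration, dtype=float),
                       np.asarray(cardio, dtype=float), fs=fs, nperseg=256)
    sel = (f >= band[0]) & (f <= band[1])
    return float(cxy[sel].mean())
```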
Abstract:
Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods, for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, suitable for modelling urban fleet emissions.

Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis, to determine the most suitable emission factors to use in modelling urban fleet emissions.

Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65%, and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5, and PM10, respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals, and largest sample sizes, and on the explanatory model variables, which were vehicle type (all particle metrics), instrumentation (particle number and PM2.5), road type (PM10), and size range measured and speed limit on the road (particle volume).

Discussion: A multiplicity of factors needs to be considered in determining emission factors suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries.

Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data from which to derive local emission factors.

Recommendations and perspectives: In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for traffic modelling and health impact assessments.
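A hedged sketch of the kind of regression underlying those five models: published emission factors are regressed on categorical study descriptors and the explained variation is inspected. The column names and the few placeholder rows are illustrative inventions, not the study's compiled dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder literature table: each row stands in for one published
# emission factor with the study descriptors used as candidate predictors.
data = pd.DataFrame({
    "log_ef":       [14.2, 14.6, 13.1, 13.4, 14.9, 12.8],  # log10 EF (illustrative)
    "vehicle_type": ["car", "car", "bus", "bus", "HGV", "car"],
    "road_type":    ["urban", "freeway", "urban", "urban", "freeway", "urban"],
})

# Linear model with categorical predictors, analogous in spirit to the
# study's models for particle number, volume, PM1, PM2.5, and PM10.
fit = smf.ols("log_ef ~ C(vehicle_type) + C(road_type)", data=data).fit()
print(fit.rsquared)  # cf. the 86-93% of variation explained in the paper
```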
Abstract:
The Transport Certification Australia on-board mass feasibility project is testing various on-board mass (OBM) devices in a range of heavy vehicles (HVs). Extensive field tests of OBM measurement systems for HVs were conducted during 2008, assessing the accuracy, robustness, and tamper evidence of HV on-board mass telematics. All the systems tested showed accuracies within approximately +/- 500 kg of gross combination mass, or approximately +/- 2% of the attendant weighbridge reading. Analysis of the dynamic data also showed encouraging results and raised the possibility of using such dynamic information for tamper evidence in two areas: determining whether averaged dynamic data could identify potential tampering or incorrect operating procedures, and flagging a tamper event from dynamic measurements by the use of metrics including a tampering index (TIX). Technical and business options to detect tamper events will now be developed during implementation of regulatory OBM system application to Australian HVs.
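A minimal sketch of the accuracy criterion reported above, checking a single OBM reading against the attendant weighbridge; the function name and the either/or reading of the tolerance are assumptions.

```python
def obm_within_tolerance(obm_kg, weighbridge_kg, abs_tol_kg=500.0, rel_tol=0.02):
    """True if an on-board mass reading falls within the accuracy envelope
    reported in the trials: +/- 500 kg of gross combination mass, or
    +/- 2% of the weighbridge reading."""
    error = abs(obm_kg - weighbridge_kg)
    return error <= abs_tol_kg or error <= rel_tol * weighbridge_kg

# Example: obm_within_tolerance(42300, 42050) -> True
```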
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.
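As an illustrative stand-in for the paper's design-level security metrics, a simple measure of potential information flow is the share of class-level attributes not marked private; comparing it before and after a refactoring mirrors the paper's evaluation pattern. Both the metric and its AST-based implementation are assumptions, not the paper's metric suite.

```python
import ast

def attribute_exposure(source_code):
    """Fraction of class-level attribute assignments whose names are not
    '_'-prefixed, i.e. attributes left open to external access. A lower
    value after refactoring suggests reduced potential information flow."""
    tree = ast.parse(source_code)
    names = [target.id
             for node in ast.walk(tree) if isinstance(node, ast.ClassDef)
             for stmt in node.body if isinstance(stmt, ast.Assign)
             for target in stmt.targets if isinstance(target, ast.Name)]
    if not names:
        return 0.0
    return sum(not n.startswith("_") for n in names) / len(names)

# Example: attribute_exposure("class A:\n    key = 1\n    _salt = 2\n") -> 0.5
```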
Abstract:
This paper reports on the empirical comparison of seven machine learning algorithms in texture classification, with application to vegetation management in power line corridors. Aiming at classifying tree species in power line corridors, an object-based method is employed: individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that classification performance depends on the performance metric chosen, the characteristics of the datasets, and the features used.
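A brief sketch of the kind of evaluation such a comparison rests on, computing several widely used classification metrics from predicted and true labels; scikit-learn is assumed here as the tooling, and the specific metric set is illustrative.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_classifier(y_true, y_pred):
    """Common multi-class performance metrics: overall accuracy plus
    macro-averaged precision, recall, and F1, so every tree-species class
    counts equally regardless of its frequency."""
    accuracy = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```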
Abstract:
Process modeling is a central element in any approach to Business Process Management (BPM). However, what hinders both practitioners and academics is the lack of support for assessing the quality of process models, let alone for realizing high-quality process models. Existing frameworks are either highly conceptual or too general, while various techniques, tools, and research results are available that cover only fragments of the issue at hand. This chapter presents the SIQ framework, which on the one hand integrates concepts and guidelines from existing frameworks, and on the other links these concepts to current research in the BPM domain. Three different types of quality are distinguished, and for each of these levels concrete metrics, available tools, and guidelines are provided. While the basis of the SIQ framework is thought to be rather robust, its external pointers can be updated with newer insights as they emerge.