248 results for Kähler-Einstein Metrics
Abstract:
Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10 respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and of the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion Many factors need to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data from which to derive emission factors for their region. Recommendations and perspectives In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
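As an illustration of the kind of statistical modelling described above, the sketch below regresses hypothetical log-transformed particle number emission factors on categorical study attributes such as vehicle type and instrumentation; the column names and data are placeholders, not the study's actual dataset.

```python
# Hypothetical sketch: regressing log emission factors on categorical study
# attributes, in the spirit of the statistical models described above.
# Column names and values are illustrative only, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "log_pn_ef":       [14.2, 13.8, 14.9, 13.5, 15.1, 14.0],   # log10 particle number EF
    "vehicle_type":    ["car", "car", "hdv", "car", "hdv", "bus"],
    "instrumentation": ["cpc", "smps", "cpc", "smps", "cpc", "cpc"],
})

# Ordinary least squares with categorical explanatory variables
model = smf.ols("log_pn_ef ~ C(vehicle_type) + C(instrumentation)", data=df).fit()
print(model.rsquared)              # proportion of variation explained (cf. 86% for particle number)
print(model.conf_int(alpha=0.05))  # 95% confidence intervals used to judge robustness
```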
Abstract:
The Transport Certification Australia on-board mass (OBM) feasibility project is testing various on-board mass devices in a range of heavy vehicles (HVs). Extensive field tests of on-board mass measurement systems for HVs were conducted during 2008. These tests assessed the accuracy, robustness and tamper evidence of heavy vehicle on-board mass telematics. All the systems tested showed accuracies within approximately +/- 500 kg of gross combination mass, or approximately +/- 2% of the attendant weighbridge reading. Analysis of the dynamic data also showed encouraging results and raised the possibility of using such dynamic information for tamper evidence in two areas. This analysis was to determine whether averaged dynamic data could identify potential tampering or incorrect operating procedures, and whether dynamic measurements could flag a tamper event through metrics including a tampering index (TIX). Technical and business options to detect tamper events will now be developed during implementation of regulatory OBM system applications to Australian HVs.
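The abstract does not define the tampering index (TIX), so the following is purely a hypothetical illustration of one way averaged dynamic readings might be checked against a static weighbridge reference using the reported +/- 500 kg / +/- 2% accuracy band.

```python
# Hypothetical illustration only: the TIX metric itself is not defined in the
# abstract. This sketch flags averaged dynamic OBM readings that drift outside
# the reported accuracy band relative to a weighbridge reference.
from statistics import mean

def tamper_flag(dynamic_readings_kg, weighbridge_kg,
                abs_tol_kg=500.0, rel_tol=0.02):
    """Return True if the averaged dynamic reading falls outside the
    +/- 500 kg or +/- 2% envelope around the weighbridge reference."""
    avg = mean(dynamic_readings_kg)
    error = abs(avg - weighbridge_kg)
    return error > max(abs_tol_kg, rel_tol * weighbridge_kg)

# Example: a 42.5 t combination whose dynamic readings drift ~1.5 t high
print(tamper_flag([44000, 44100, 43900], weighbridge_kg=42500))  # True
```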
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.
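As a hypothetical illustration of the before-and-after comparison described above (not the paper's actual metrics), the sketch below recomputes a simple information-flow style measure, the fraction of classified attributes that are publicly accessible, after an "encapsulate field" refactoring.

```python
# Hypothetical sketch of a before/after security metric comparison. The metric
# shown (fraction of classified attributes that are publicly accessible) is an
# illustration, not one of the specific metrics used in the paper.
from dataclasses import dataclass

@dataclass
class ClassDesign:
    classified_attributes: int   # attributes holding sensitive data
    publicly_accessible: int     # of those, how many are exposed publicly

def accessibility_metric(design: ClassDesign) -> float:
    """Lower is better: proportion of classified attributes exposed publicly."""
    if design.classified_attributes == 0:
        return 0.0
    return design.publicly_accessible / design.classified_attributes

before = ClassDesign(classified_attributes=4, publicly_accessible=3)
after  = ClassDesign(classified_attributes=4, publicly_accessible=1)  # after 'encapsulate field'

print(accessibility_metric(before), "->", accessibility_metric(after))  # 0.75 -> 0.25
```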
Abstract:
This paper reports on an empirical comparison of seven machine learning algorithms for texture classification, with application to vegetation management in power line corridors. Aiming to classify tree species in power line corridors, an object-based method is employed. Individual tree crowns are segmented as the basic classification units and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that the classification performance depends on the performance metric, the characteristics of the datasets and the features used.
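A minimal sketch of the kind of comparison described above, assuming texture feature vectors have already been extracted for each segmented tree crown; the data, the three classifiers and the accuracy metric shown here are placeholders rather than the paper's actual experimental setup.

```python
# Illustrative sketch: comparing classifiers on pre-extracted texture features
# using a common performance metric. The data is synthetic, not the study's.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))          # three texture features per tree crown
y = rng.integers(0, 3, size=120)       # three tree-species labels

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in classifiers.items():
    # Cross-validated accuracy; other metrics (e.g. f1_macro) can be swapped in
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```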
Abstract:
Process modeling is a central element in any approach to Business Process Management (BPM). However, what hinders both practitioners and academics is the lack of support for assessing the quality of process models, let alone realizing high-quality process models. Existing frameworks are either highly conceptual or too general. At the same time, various techniques, tools, and research results are available that cover fragments of the issue at hand. This chapter presents the SIQ framework, which on the one hand integrates concepts and guidelines from existing frameworks and on the other links these concepts to current research in the BPM domain. Three different types of quality are distinguished, and for each of these, concrete metrics, available tools, and guidelines are provided. While the basis of the SIQ framework is thought to be rather robust, its external pointers can be updated with newer insights as they emerge.
Abstract:
Shrinking product lifecycles, tough international competition, swiftly changing technologies, ever-increasing customer quality expectations and demand for high-variety options are some of the forces that drive the next generation of development processes. To overcome these challenges, the design cost and development time of a product have to be reduced and its quality improved. Design reuse is considered one of the lean strategies for winning the race in this competitive environment. Design reuse can reduce product development time, product development cost and the number of defects, which will ultimately influence product performance in cost, time and quality. However, little or no work has been carried out to quantify the effectiveness of design reuse on product development performance measures such as design cost, development time and quality. Therefore, in this study we propose a systematic design-reuse-based product design framework and develop a design leanness index (DLI) as a measure of the effectiveness of design reuse. The DLI is a representative measure of reuse effectiveness in cost, development time and quality. Through this index, a clear relationship between the reuse measure and product development performance metrics is established. Finally, a cost-based model is developed to maximise the design leanness index for a product within a given set of constraints, achieving leanness in the design process.
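The abstract does not give the formulation of the DLI, so the following is a purely hypothetical sketch of an index that aggregates reuse effectiveness in cost, development time and quality into a single value; the weights and inputs are illustrative only.

```python
# Hypothetical illustration only: the paper's actual DLI formulation is not
# given in the abstract. This sketch aggregates reuse effectiveness in cost,
# development time and quality into a single 0..1 index.
def design_leanness_index(cost_saving, time_saving, defect_reduction,
                          weights=(0.4, 0.3, 0.3)):
    """Each argument is a fractional improvement attributable to design reuse
    (e.g. 0.25 = 25% reduction); the weights must sum to 1."""
    w_cost, w_time, w_quality = weights
    return w_cost * cost_saving + w_time * time_saving + w_quality * defect_reduction

print(design_leanness_index(cost_saving=0.30, time_saving=0.20, defect_reduction=0.15))
# 0.225 -- higher values indicate a leaner, more reuse-effective design
```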
Abstract:
Conceptual modeling grammars are a fundamental means for specifying information systems requirements. However, the actual usage of these grammars is only poorly understood. In particular, little is known about how properties of these grammars inform usage beliefs such as usefulness and ease of use. In this paper we use an ontological theory to describe conceptual modeling grammars in terms of their ontological deficiencies, and formulate two propositions about how these ontological deficiencies influence primary usage beliefs. Using BPMN as an example modeling grammar, we surveyed 528 modeling practitioners to test the theorized relationships. Our results show that users of conceptual modeling grammars perceive ontological deficiencies to exist, and that these deficiency perceptions are negatively associated with the usefulness and ease of use of these grammars. With our research we provide empirical evidence in support of the predictions of the ontological theory of modeling grammar expressiveness, and we identify previously unexplored links between conceptual modeling grammars and grammar usage beliefs. For practice, this work implies a much closer coupling of the act of (re)designing modeling grammars with usage-related success metrics.
Abstract:
This essay argues, somewhat along Simmelian lines, that democratic theory can produce practical and universal theories, like those developed in theoretical physics. The reasoning behind this essay is to show that the theory of 'basic democracy' may be shown to hold if it is compared to Einstein's Special Relativity with respect to the parameters of symmetry, unification, simplicity and utility. These parameters are what allow a theory in physics not only to fit current knowledge but also to produce paths towards testing (application). If the theory of 'basic democracy' can satisfy these same parameters, it could settle the debate over the definition of democracy. This will be argued, first, by discussing what the theory of 'basic democracy' is and why it differs from previous work; second, by explaining the chosen parameters (i.e. why these, and not others, confirm or fail theories); and third, by comparing how relativity and the theory of 'basic democracy' fit the parameters.
Abstract:
Post-license advanced driver training programs in the US and early programs in Europe have often failed to accomplish their stated objectives because, it is suspected, drivers gain self-perceived driving skills that exceed their true skills, leading to increased post-training crashes. The consensus from the evaluation of countless advanced driver training programs is that these programs are a detriment to safety, especially for novice, young, male drivers. Some European countries, including Sweden, Finland, Austria, Luxembourg, and Norway, have continued to refine these programs, with an entirely new training philosophy emerging around 1990. These 'post-renewal' programs have shown considerable promise, despite various data quality and availability concerns. These programs share in common a focus on teaching drivers self-assessment and anticipation of risk, as opposed to teaching drivers how to master driving at the limits of tire adhesion. The programs focus on factors such as self-actualization and driving discipline, rather than low-level mastery of skills. Drivers are meant to depart these renewed programs with a more realistic assessment of their driving abilities. These renewed programs require considerable specialized and costly infrastructure, including dedicated driver training facilities with driving modules engineered specifically for advanced driver training and highly structured curricula. They are conspicuously missing from both the US road safety toolbox and the academic literature. Given the considerable road safety concerns associated with US novice male drivers in particular, these programs warrant further attention. This paper reviews the predominant features and empirical evidence surrounding post-licensing advanced driver training programs focused on novice drivers. A clear articulation of the differences between the renewed and current US advanced driver training programs is provided. While the individual quantitative evaluations range from marginally to significantly effective in reducing novice driver crash risk, they have been criticized for evaluation deficiencies ranging from small sample sizes to confounding variables to a lack of exposure metrics. Collectively, however, the programs cited in the paper suggest at least a marginally positive effect that needs to be validated with further studies. If additional well-controlled studies can validate these programs, a pilot program in the US should be considered.
Abstract:
Diffusion is the process that leads to the mixing of substances as a result of spontaneous and random thermal motion of individual atoms and molecules. It was first detected by the Scottish botanist Robert Brown in 1827, and the phenomenon became known as ‘Brownian motion’. More specifically, the motion observed by Brown was translational diffusion – thermal motion resulting in random variations of the position of a molecule. This type of motion was given a correct theoretical interpretation in 1905 by Albert Einstein, who derived the relationship between temperature, the viscosity of the medium, the size of the diffusing molecule, and its diffusion coefficient. It is translational diffusion that is indirectly observed in MR diffusion-tensor imaging (DTI). The relationship obtained by Einstein provides the physical basis for using translational diffusion to probe the microscopic environment surrounding the molecule.
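For a spherical molecule of hydrodynamic radius r diffusing through a medium of dynamic viscosity η at absolute temperature T, the relationship Einstein derived takes the Stokes-Einstein form:

```latex
% Stokes-Einstein relation for translational diffusion:
% D   - diffusion coefficient
% k_B - Boltzmann constant, T - absolute temperature
% eta - dynamic viscosity of the medium, r - hydrodynamic radius of the molecule
D = \frac{k_B T}{6 \pi \eta r}
```

The measured diffusion coefficient thus reflects the temperature and the effective viscosity of the molecule's surroundings, which is the basis for probing the microscopic environment with DTI.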
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
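As a sketch of two of the approaches compared above, the snippet below implements the Sum of Absolute Differences (SAD) matching metric and the rank transform; the window sizes and image data are placeholders, and this is illustrative code rather than the report's implementation.

```python
# Illustrative sketch of two matching approaches discussed above:
# the Sum of Absolute Differences (SAD) metric and the rank transform.
import numpy as np

def sad(window_left: np.ndarray, window_right: np.ndarray) -> float:
    """Sum of Absolute Differences between two candidate windows;
    cheap to compute but sensitive to radiometric (brightness) differences."""
    return float(np.abs(window_left.astype(int) - window_right.astype(int)).sum())

def rank_transform(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Replace each pixel by the number of neighbours in its window that are
    smaller than it -- robust to monotonic radiometric distortion."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = int((window < image[y, x]).sum())
    return out

left = np.random.default_rng(1).integers(0, 200, size=(8, 8))
right = left + 20                                # simulated uniform brightness offset
print(sad(left[2:5, 2:5], right[2:5, 2:5]))      # 180: SAD penalises the offset
print(np.array_equal(rank_transform(left), rank_transform(right)))  # True: rank is invariant to it
```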
Abstract:
The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system which can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance. As long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for patients with partial or complete loss of accommodation (presbyopia), since it helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z₄⁰ and Z₆⁰). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system have been widely adopted, through which the retinal image size can be kept constant. In such measurements, the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, where the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality calculated from the IQMs to a certain level of its possible maximum value. In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z₄⁰) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQMs and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQMs on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and we used the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. Significant correlation was found between the subject's estimated threshold level for the estimated DOF and the HOA RMS (Pearson's r=0.88, p<0.001).
This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects. Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes. This population of subjects provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, and the results were compared to those from a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes. However, there was not a strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that helped to significantly increase the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs of the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z₄⁰ and Z₆⁰. Systematic introduction of targeted amounts of both Z₄⁰ and Z₆⁰ was found to significantly improve the DOF of healthy subjects. The use of wavefront combinations of Z₄⁰ and Z₆⁰ with opposite signs can further expand the DOF compared with using Z₄⁰ or Z₆⁰ alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF. In the experiment, the optimal combinations of Z₄⁰ and Z₆⁰ were found to provide a better balance of DOF expansion and relatively smaller decreases in visual acuity (VA). Therefore, optimal combinations of Z₄⁰ and Z₆⁰ provide a more efficient method to expand the DOF than Z₄⁰ or Z₆⁰ alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations. More aberrated eyes generally have a larger DOF. The association between DOF and the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberrations. Among the Zernike HOA terms, spherical aberrations (Z₄⁰ and Z₆⁰) were found to improve the DOF. Certain combinations of Z₄⁰ and Z₆⁰ provide a more effective method to expand DOF than using Z₄⁰ or Z₆⁰ alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
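As a schematic illustration of the IQM-threshold approach to estimating DOF described above (a real computation would derive a through-focus VSOTF curve from the measured wavefront aberrations, which is considerably more involved), the DOF can be read off a through-focus image quality curve as the defocus range over which the metric stays above a chosen fraction of its peak.

```python
# Schematic sketch only: estimating DOF as the defocus range over which a
# through-focus image quality metric stays above a threshold fraction of its
# peak. The metric curve here is synthetic, not a real VSOTF computation.
import numpy as np

defocus = np.linspace(-2.0, 2.0, 401)              # defocus in dioptres
iqm = np.exp(-(defocus / 0.6) ** 2)                # placeholder through-focus IQM curve

def depth_of_focus(defocus, iqm, threshold=0.5):
    """Width (in dioptres) of the defocus range where iqm >= threshold * max."""
    above = defocus[iqm >= threshold * iqm.max()]
    return above.max() - above.min()

print(depth_of_focus(defocus, iqm, threshold=0.5))  # ~1.0 D for this synthetic curve
```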
Abstract:
Earlier research developed theoretically-based aggregate metrics for technology strategy and used them to analyze California bridge construction firms (Hampson, 1993). Determinants of firm performance, including trend in contract awards, market share and contract awards per employee, were used as indicators of competitive performance. The results of this research were a series of refined theoretically-based measures for technology strategy and a demonstrated positive relationship between technology strategy and competitive performance within the bridge construction sector. This research showed that three technology strategy dimensions (competitive positioning, depth of technology strategy, and organizational fit) show very strong correlation with the competitive performance indicators of absolute growth in contract awards and contract awards per employee. Both researchers and industry professionals need improved understanding of how technology affects results, and how to better target investments to improve competitive performance in particular industry sectors. This paper builds on the previous research findings by evaluating the strategic fit of firms' approach to technology with industry segment characteristics. It begins with a brief overview of the background regarding technology strategy. The major sections of the paper describe niches and firms in an example infrastructure construction market, analyze appropriate technology strategies, and describe managerial actions to implement these strategies and support the business objectives of the firm.
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
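As a hypothetical sketch of the kind of hierarchical aggregation described above (the paper's actual metric definitions and weightings are not reproduced here), low-level per-class measurements of classified-data readability and writability might be rolled up into a single program-level security index as follows.

```python
# Hypothetical sketch of rolling low-level class metrics up into a single
# program security index; the actual metrics and weighting scheme of the
# hierarchical model are not reproduced here.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    classified_attrs: int      # attributes holding classified data
    public_classified: int     # classified attributes readable outside the class
    mutator_classified: int    # classified attributes writable outside the class

def readability(m: ClassMetrics) -> float:
    return m.public_classified / m.classified_attrs if m.classified_attrs else 0.0

def writability(m: ClassMetrics) -> float:
    return m.mutator_classified / m.classified_attrs if m.classified_attrs else 0.0

def security_index(classes: list[ClassMetrics]) -> float:
    """1.0 = no classified data exposed; lower values mean wider exposure."""
    exposure = sum(readability(c) + writability(c) for c in classes) / (2 * len(classes))
    return 1.0 - exposure

program = [ClassMetrics(4, 1, 0), ClassMetrics(2, 2, 1), ClassMetrics(3, 0, 0)]
print(round(security_index(program), 3))   # 0.708 for this toy program
```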
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and non-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
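A minimal sketch of the module-level fault prediction workflow described above, assuming a feature matrix of software metrics and binary fault-proneness labels; the data here is synthetic, and the feature selection method and classifier settings are illustrative choices rather than those tuned in the thesis.

```python
# Illustrative sketch of the fault-prediction workflow: feature selection, then
# Naive Bayes and SVM classifiers. The data is synthetic; in the thesis the
# features are software metrics from the NASA MDP and Eclipse datasets.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 20))               # 20 software metrics per module
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=300) > 1).astype(int)  # fault-prone?

models = {
    "NaiveBayes": make_pipeline(SelectKBest(f_classif, k=8), GaussianNB()),
    "SVM": make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), SVC(kernel="rbf")),
}

for name, model in models.items():
    # Cross-validated AUC; precision/recall trade-offs can be examined similarly
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```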