656 results for quality metrics
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally-equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
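As a hedged illustration only (the thesis defines its own metric suite, which is not reproduced here), the sketch below computes one encapsulation-style measure in the same spirit: the proportion of classified (high-security) attributes that a design exposes publicly. The attribute model and metric name are assumptions for illustration, not the thesis' definitions.

```python
# Hedged illustration only: one simple encapsulation-style security measure in the
# spirit of the abstract above, i.e. the fraction of classified (high-security)
# attributes that a design exposes publicly (lower is better). Names are assumptions.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    is_classified: bool   # holds high-security data (e.g. a key or a password)
    is_public: bool       # readable from outside its declaring class

def classified_attribute_exposure(attributes):
    """Fraction of classified attributes that are publicly accessible (0.0 is best)."""
    classified = [a for a in attributes if a.is_classified]
    if not classified:
        return 0.0
    return sum(a.is_public for a in classified) / len(classified)

design = [
    Attribute("session_key", is_classified=True, is_public=False),
    Attribute("password_hash", is_classified=True, is_public=True),   # an exposure defect
    Attribute("display_name", is_classified=False, is_public=True),
]
print(classified_attribute_exposure(design))  # 0.5
```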
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations when laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial when conservative decisions are the most appropriate.
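A minimal sketch of this kind of pipeline, assuming scikit-learn and a few hand-picked image-quality features (sharpness, contrast, histogram entropy); the feature set, k, and labelling scheme are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch (not the authors' exact pipeline): flag camera frames whose
# synchronised laser scan is likely to be smoke-affected, using simple image-quality
# features and a k-nearest-neighbour classifier. Feature choices and k are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def image_quality_features(img):
    """Small feature vector for a greyscale image given as a float array in [0, 1]."""
    grad_y, grad_x = np.gradient(img)
    sharpness = np.mean(np.hypot(grad_x, grad_y))        # mean gradient magnitude
    contrast = float(img.std())                          # global RMS contrast
    hist, _ = np.histogram(img, bins=64, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))             # histogram entropy
    return np.array([sharpness, contrast, entropy])

def train_smoke_classifier(train_images, scan_affected_labels, k=5):
    """scan_affected_labels: 1 if the paired laser scan was smoke-affected, else 0."""
    X = np.array([image_quality_features(im) for im in train_images])
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    return clf.fit(X, scan_affected_labels)

# At run time, a conservative policy might down-weight or discard the laser scan
# whenever the classifier predicts "smoke-affected" for the current camera frame.
```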
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This supports decisions such as advantageously switching between data sources (e.g. using infrared images instead of visual images).
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke.
Abstract:
Understanding the differences between the temporal and physical aspects of the building life cycle is an essential ingredient in the development of Building Environmental Assessment (BEA) tools. This paper illustrates a theoretical Life Cycle Assessment (LCA) framework aligning temporal decision-making with that of material flows over building development phases. It was derived during development of a prototype commercial building design tool that was based on a 3-D CAD information and communications technology (ICT) platform and LCA software. The framework aligns stakeholder BEA needs and the decision-making process against characteristics of leading green building tools. The paper explores related integration of BEA tool development applications on such ICT platforms. Key framework modules are depicted and practical examples for BEA are provided for:
• Definition of investment and service goals at project initiation;
• Design integrated to avoid overlaps/confusion over the project life cycle;
• Detailing the supply chain considering building life cycle impacts;
• Delivery of quality metrics for occupancy post-construction/handover;
• Deconstruction profiling at end of life to facilitate recovery.
Abstract:
It is possible to estimate the depth of focus (DOF) of the eye directly from wavefront measurements using various retinal image quality metrics (IQMs). In such methods, DOF is defined as the range of defocus error that degrades the retinal image quality calculated from IQMs to a certain level of the maximum value. Although different retinal image quality metrics are used, two arbitrary threshold levels have been adopted to date, 50% and 80%. There has been limited study of the relationship between these threshold levels and the actual measured DOF. We measured the subjective DOF in a group of 17 normal subjects, and used the through-focus augmented visual Strehl ratio based on the optical transfer function (VSOTF) derived from their wavefront aberrations as the IQM. For each subject, a VSOTF threshold level was derived that would match the subjectively measured DOF. Significant correlation was found between the subject’s estimated threshold level and the HOA RMS (Pearson’s r=0.88, p<0.001). The linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual’s DOF from a single measurement of their wavefront aberrations.
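A minimal sketch of the estimation step described above: given a sampled through-focus image-quality curve (e.g. VSOTF versus defocus), the DOF is the width of the defocus range where the metric stays above a fraction of its maximum, with that fraction set per subject from HOA RMS. The linear-mapping coefficients and the synthetic curve below are placeholders, not the fitted values from the study.

```python
# Minimal sketch: DOF from a sampled through-focus image-quality curve, with the
# relative threshold set per subject from HOA RMS. The slope/intercept below are
# placeholders, not the coefficients fitted in the study, and the VSOTF curve is
# replaced here by a synthetic stand-in.
import numpy as np

def estimate_dof(defocus_d, iqm, threshold_fraction):
    """Width (dioptres) of the defocus range where iqm >= threshold_fraction * max(iqm)."""
    level = threshold_fraction * np.max(iqm)
    in_range = defocus_d[iqm >= level]
    return float(in_range.max() - in_range.min()) if in_range.size else 0.0

def individual_threshold(hoa_rms_um, slope=1.0, intercept=0.0):
    """Hypothetical linear mapping from higher-order-aberration RMS to a threshold fraction."""
    return float(np.clip(slope * hoa_rms_um + intercept, 0.05, 0.95))

defocus = np.linspace(-2.0, 2.0, 401)        # dioptres
vsotf = np.exp(-(defocus / 0.6) ** 2)        # stand-in for a computed through-focus VSOTF
print(estimate_dof(defocus, vsotf, individual_threshold(hoa_rms_um=0.4)))
```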
Abstract:
The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system which can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance. As long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for presbyopic patients with partial or complete loss of accommodation (presbyopia), since this helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z_4^0 and Z_6^0). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system have been widely adopted, through which the retinal image size can be kept constant. In such measurements, the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, where the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality calculated from the IQMs to a certain level of the possible maximum value. In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups, including young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z_4^0) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQMs and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQMs on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and we used the through-focus visual Strehl ratio based on the optical transfer function (VSOTF) derived from their wavefront aberrations as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. Significant correlation was found between the subject's estimated threshold level for the estimated DOF and HOA RMS (Pearson's r=0.88, p<0.001). The linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects. Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes. This population of subjects provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, and the results were compared with those from a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes. However, there was not a strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that helped to significantly increase the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs in the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z_4^0 and Z_6^0. Systematic introduction of targeted amounts of both Z_4^0 and Z_6^0 was found to significantly improve the DOF of healthy subjects. The use of wavefront combinations of Z_4^0 and Z_6^0 with opposite signs can further expand the DOF compared with using Z_4^0 or Z_6^0 alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by VSOTF. In the experiment, the optimal combinations of Z_4^0 and Z_6^0 were found to provide a better balance of DOF expansion with relatively smaller decreases in visual acuity (VA). Therefore, the optimal combinations of Z_4^0 and Z_6^0 provide a more efficient way to expand the DOF than using Z_4^0 or Z_6^0 alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations. More aberrated eyes generally have a larger DOF. The association of DOF and the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberrations. Among the Zernike HOA terms, spherical aberrations (Z_4^0 and Z_6^0) were found to improve the DOF. Certain combinations of Z_4^0 and Z_6^0 provide a more effective method to expand DOF than using Z_4^0 or Z_6^0 alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
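The sketch below illustrates the kind of through-focus simulation described above, under stated assumptions: a circular pupil, orthonormal Zernike defocus (Z_2^0), primary spherical aberration (Z_4^0) and secondary spherical aberration (Z_6^0), with a peak-based Strehl ratio used as a simple stand-in for VSOTF (which additionally applies a neural contrast-sensitivity weighting). Grid size, wavelength and coefficient ranges are illustrative choices, not values from the thesis.

```python
# Sketch of a pupil-plane through-focus simulation: add chosen amounts of Z_4^0 and
# Z_6^0, sweep Z_2^0 defocus, and score DOF gain against loss of peak image quality.
# The peak-based Strehl ratio stands in for VSOTF; all parameters are assumptions.
import numpy as np

N = 256                      # pupil grid samples per side
WAVELENGTH_UM = 0.555        # assumed photopic design wavelength

y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho = np.hypot(x, y)
pupil = (rho <= 1.0).astype(float)

# Orthonormal rotationally symmetric Zernike terms over the unit pupil.
def z20(r): return np.sqrt(3.0) * (2 * r**2 - 1)                           # defocus
def z40(r): return np.sqrt(5.0) * (6 * r**4 - 6 * r**2 + 1)                # primary SA
def z60(r): return np.sqrt(7.0) * (20 * r**6 - 30 * r**4 + 12 * r**2 - 1)  # secondary SA

def strehl(c20, c40, c60):
    """Peak-based Strehl ratio for Zernike coefficients given in microns."""
    wavefront_um = (c20 * z20(rho) + c40 * z40(rho) + c60 * z60(rho)) * pupil
    field = pupil * np.exp(2j * np.pi * wavefront_um / WAVELENGTH_UM)
    psf_peak = np.abs(np.fft.fft2(field, s=(2 * N, 2 * N))).max() ** 2
    ideal_peak = np.abs(np.fft.fft2(pupil, s=(2 * N, 2 * N))).max() ** 2
    return psf_peak / ideal_peak

def through_focus(c40, c60, threshold=0.5, defocus_um=np.linspace(-0.75, 0.75, 31)):
    """Return (DOF in microns of the Z_2^0 coefficient, peak image quality)."""
    curve = np.array([strehl(c20, c40, c60) for c20 in defocus_um])
    in_focus = defocus_um[curve >= threshold * curve.max()]
    return float(in_focus.max() - in_focus.min()), float(curve.max())

# Score a small grid of Z_4^0 / Z_6^0 combinations: DOF gained vs. peak quality lost.
base_dof, base_peak = through_focus(0.0, 0.0)
for c40 in (0.1, 0.2):
    for c60 in (-0.1, 0.0, 0.1):
        dof, peak = through_focus(c40, c60)
        print(f"Z4={c40:+.1f} Z6={c60:+.1f}  dDOF={dof - base_dof:+.3f}  dPeak={base_peak - peak:.3f}")
```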
Abstract:
Background: Illumina's Infinium SNP BeadChips are extensively used in both small and large-scale genetic studies. A fundamental step in any analysis is the processing of raw allele A and allele B intensities from each SNP into genotype calls (AA, AB, BB). Various algorithms which make use of different statistical models are available for this task. We compare four methods (GenCall, Illuminus, GenoSNP and CRLMM) on data where the true genotypes are known in advance and on data from a recently published genome-wide association study.
Results: In general, differences in accuracy are relatively small between the methods evaluated, although CRLMM and GenoSNP were found to consistently outperform GenCall. The performance of Illuminus is heavily dependent on sample size, with lower no-call rates and improved accuracy as the number of samples available increases. For X chromosome SNPs, methods with sex-dependent models (Illuminus, CRLMM) perform better than methods which ignore gender information (GenCall, GenoSNP). We observe that CRLMM and GenoSNP are more accurate at calling SNPs with low minor allele frequency than GenCall or Illuminus. The sample quality metrics from each of the four methods were found to have a high level of agreement at flagging samples with unusual signal characteristics.
Conclusions: CRLMM, GenoSNP and GenCall can be applied with confidence in studies of any size, as their performance was shown to be invariant to the number of samples available. Illuminus, on the other hand, requires a larger number of samples to achieve comparable levels of accuracy and its use in smaller studies (50 or fewer individuals) is not recommended.
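For orientation, the sketch below shows a generic genotype-calling step of the kind these algorithms perform, not a reimplementation of GenCall, Illuminus, GenoSNP or CRLMM: per-SNP allele A/B intensities are transformed to contrast/strength coordinates, clustered with a three-component Gaussian mixture, and ambiguous samples are left as no-calls. The transform, mixture settings and posterior threshold are assumptions.

```python
# Generic illustration of a genotype-calling step (not any of the evaluated tools):
# cluster transformed allele A/B intensities for one SNP across many samples and
# emit AA/AB/BB calls with a no-call rule. All parameters are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def call_genotypes(a_intensity, b_intensity, min_posterior=0.95):
    """Call one SNP across samples from raw allele A / allele B intensities."""
    a = np.asarray(a_intensity, dtype=float)
    b = np.asarray(b_intensity, dtype=float)
    contrast = (a - b) / (a + b + 1e-9)          # ~ -1 for BB, 0 for AB, +1 for AA
    strength = np.log2(a + b + 1.0)              # overall signal strength
    X = np.column_stack([contrast, strength])
    gm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
    # Map mixture components to genotypes by their mean contrast (BB < AB < AA).
    geno_of_comp = np.empty(3, dtype=object)
    geno_of_comp[np.argsort(gm.means_[:, 0])] = np.array(["BB", "AB", "AA"], dtype=object)
    calls = geno_of_comp[gm.predict(X)]
    calls[gm.predict_proba(X).max(axis=1) < min_posterior] = "NC"   # no-call if ambiguous
    return calls
```

Fitting a mixture per SNP like this needs enough samples per cluster, which echoes the sample-size dependence noted for Illuminus above.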
Abstract:
Background: The sequencing, de novo assembly and annotation of transcriptome datasets generated with next generation sequencing (NGS) has enabled biologists to answer genomic questions in non-model species with unprecedented ease. Reliable and accurate de novo assembly and annotation, however, are critically important steps for transcriptome assemblies generated from short read sequences. Typical benchmarks for assembly and annotation reliability have been performed with model species. To address the reliability and accuracy of de novo transcriptome assembly in non-model species, we generated an RNAseq dataset for an intertidal gastropod mollusc species, Nerita melanotragus, and compared the assemblies produced by four different de novo transcriptome assemblers (Velvet, Oases, Geneious and Trinity) on a number of quality metrics and redundancy.
Results: Transcriptome sequencing on the Ion Torrent PGM™ produced 1,883,624 raw reads with a mean length of 133 base pairs (bp). The Trinity and Oases de novo assemblers produced the best assemblies based on all quality metrics, including fewer contigs, higher N50 and average contig length, and contigs of greater length. Overall, the BLAST and annotation success of our assemblies was not high, with only 15-19% of contigs assigned a putative function.
Conclusions: We believe that any improvement in annotation success of gastropod species will require more gastropod genome sequences, but in particular an increase in mollusc protein sequences in public databases. Overall, this paper demonstrates that reliable and accurate de novo transcriptome assemblies can be generated from short read sequencers with the right assembly algorithms.
Keywords: Nerita melanotragus; De novo assembly; Transcriptome; Heat shock protein; Ion torrent
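As a small illustration of the contig-level quality metrics referred to above (contig count, average length, N50), a minimal sketch:

```python
# Minimal sketch of common contig-level assembly metrics: contig count, mean length,
# and N50 (the length L such that contigs of length >= L cover at least half of the
# total assembled bases).
def assembly_metrics(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    total = sum(lengths)
    running, n50 = 0, 0
    for length in lengths:
        running += length
        if running >= total / 2:
            n50 = length
            break
    return {"contigs": len(lengths), "mean_length": total / len(lengths), "n50": n50}

print(assembly_metrics([900, 800, 500, 400, 200, 100]))
# total = 2900; cumulative sums 900, 1700 >= 1450, so N50 = 800
```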
Abstract:
This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.
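A simplified sketch of the core histogram-correlation idea (rotation only; the full method also uses projection histograms, weighting and thresholding of quality metrics): each map is summarised as an orientation histogram and the relative rotation is found by circular cross-correlation over all shifts, with an entropy check to reject featureless maps. Bin width and thresholds are illustrative assumptions.

```python
# Simplified histogram-correlation matcher: recover the relative rotation between two
# laser maps without an initial alignment guess. Bin width, the entropy gate and the
# score threshold are illustrative choices, not the published parameters.
import numpy as np

N_BINS = 360  # 1-degree orientation bins

def orientation_histogram(points):
    """Normalised histogram of local segment orientations for an (N, 2) array of map points."""
    d = np.diff(points, axis=0)
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    weights = np.hypot(d[:, 0], d[:, 1])                  # weight by segment length
    hist, _ = np.histogram(angles, bins=N_BINS, range=(0.0, 360.0), weights=weights)
    return hist / (hist.sum() + 1e-12)

def entropy(hist):
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def match_rotation(hist_a, hist_b, min_entropy=3.0, min_score=0.5):
    """Best relative rotation of map B onto map A (degrees, score), or None if unreliable."""
    if min(entropy(hist_a), entropy(hist_b)) < min_entropy:
        return None                                       # too little structure to trust the match
    # Circular cross-correlation over every possible bin shift (no initial guess needed).
    scores = np.array([np.dot(hist_a, np.roll(hist_b, s)) for s in range(N_BINS)])
    scores /= np.linalg.norm(hist_a) * np.linalg.norm(hist_b) + 1e-12
    best = int(np.argmax(scores))
    if scores[best] < min_score:
        return None                                       # reject low-quality correlations
    return best * 360.0 / N_BINS, float(scores[best])
```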
Abstract:
Despite longstanding concern with the dimensionality of the service quality construct as measured by ServQual and IS-ServQual instruments, variations on the IS-ServQual instrument have been enduringly prominent in both academic research and practice in the field of IS. We explain the continuing popularity of the instrument based on the salience of the item set for predicting overall customer satisfaction, suggesting that the preoccupation with the dimensions has been a distraction. The implicit mutual exclusivity of the items suggests a more appropriate conceptualization of IS-ServQual as a formative index. This conceptualization resolves the paradox in IS-ServQual research, that of how an instrument with such well-known and well-documented weaknesses continues to be very influential and widely used by academics and practitioners. A formative conceptualization acknowledges and addresses the criticisms of IS-ServQual, while simultaneously explaining its enduring salience by focusing on the items rather than the “dimensions.” By employing an opportunistic sample and adopting the most recent IS-ServQual instrument published in a leading IS journal (virtually, any valid IS-ServQual sample in combination with a previously tested instrument variant would suffice for study purposes), we demonstrate that when re-specified as both first-order and second-order formatives, IS-ServQual has good model quality metrics and high predictive power on customer satisfaction. We conclude that this formative specification has higher practical use and is more defensible theoretically.
Abstract:
This thesis has investigated how to cluster a large number of faces within a multi-media corpus in the presence of large session variation. Quality metrics are used to select the best faces to represent a sequence of faces; and session variation modelling improves clustering performance in the presence of wide variations across videos. Findings from this thesis contribute to improving the performance of both face verification systems and the fully automated clustering of faces from a large video corpus.
Abstract:
The planning of IMRT treatments requires a compromise between dose conformity (complexity) and deliverability. This study investigates established and novel treatment complexity metrics for 122 IMRT beams from prostate treatment plans. The Treatment and Dose Assessor software was used to extract the necessary data from exported treatment plan files and calculate the metrics. For most of the metrics, there was strong overlap between the calculated values for plans that passed and failed their quality assurance (QA) tests. However, statistically significant variation between plans that passed and failed QA measurements was found for the established modulation index and for a novel metric describing the proportion of small apertures in each beam. The ‘small aperture score’ provided threshold values which successfully distinguished deliverable treatment plans from plans that did not pass QA, with a low false negative rate.
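The abstract does not define the ‘small aperture score’, so the sketch below only illustrates one plausible reading of it: the fraction of open MLC leaf-pair gaps across a beam's control points that fall below a chosen width. The 5 mm threshold and the leaf-gap representation are assumptions, not the published definition.

```python
# Hedged illustration only: one plausible reading of a "small aperture" beam metric,
# i.e. the fraction of open MLC leaf-pair gaps below a width threshold. The threshold
# and the leaf-gap representation are assumptions, not the published definition.
import numpy as np

def small_aperture_score(leaf_gaps_mm, threshold_mm=5.0):
    """leaf_gaps_mm: open leaf-pair gaps (mm) over all control points of one beam."""
    gaps = np.asarray([g for g in leaf_gaps_mm if g > 0.0])  # ignore closed leaf pairs
    if gaps.size == 0:
        return 0.0
    return float(np.mean(gaps < threshold_mm))

# A beam whose score exceeds a QA-derived threshold could be flagged as hard to deliver.
```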
Abstract:
The quality of office indoor environments is considered to consist of those factors that impact occupants' health and well-being and, by consequence, their productivity. Indoor Environment Quality (IEQ) can be characterized by four indicators:
• Indoor air quality indicators
• Thermal comfort indicators
• Lighting indicators
• Noise indicators.
Within each indicator, there are specific metrics that can be utilized in determining an acceptable quality of an indoor environment based on existing knowledge and best practice. Examples of these metrics are: indoor air levels of pollutants or odorants; operative temperature and its control; radiant asymmetry; task lighting; glare; and ambient noise. The way in which these metrics impact occupants is not fully understood, especially when multiple metrics may interact in their impacts. The potential cost of lost productivity from poor IEQ is estimated to be much in excess of other operating costs of a building. However, the relative productivity impacts of each of the four indicators are largely unknown. The CRC Project ‘Regenerating Construction to Enhance Sustainability’ has a focus on IEQ impacts before and after building refurbishment. This paper provides an overview of IEQ impacts and criteria and the implementation of a CRC project that is currently researching these factors during the refurbishment of a Melbourne office building. IEQ measurements and their impacts will be reported in a future paper.
Abstract:
The quality of office indoor environments is considered to consist of those factors that impact occupants' health and well-being and, by consequence, their productivity. Indoor Environment Quality (IEQ) can be characterized by four indicators:
• Indoor air quality indicators
• Thermal comfort indicators
• Lighting indicators
• Noise indicators.
Within each indicator, there are specific metrics that can be utilized in determining an acceptable quality of an indoor environment based on existing knowledge and best practice. Examples of these metrics are: indoor air levels of pollutants or odorants; operative temperature and its control; radiant asymmetry; task lighting; glare; and ambient noise. The way in which these metrics impact occupants is not fully understood, especially when multiple metrics may interact in their impacts. While the potential cost of lost productivity from poor IEQ has been estimated to exceed building operation costs, the level of impact and the relative significance of the above four indicators are largely unknown. However, they are key factors in the sustainable operation or refurbishment of office buildings. This paper presents a methodology for assessing indoor environment quality (IEQ) in office buildings, together with indicators and related metrics for high performance and occupant comfort. These are intended for integration into the specification of sustainable office buildings as key factors to ensure a high degree of occupant habitability, without this being impaired by other sustainability factors. The assessment methodology was applied in a case study of IEQ in Australia’s first ‘six star’ sustainable office building, Council House 2 (CH2), located in the centre of Melbourne. The CH2 building was designed and built with a specific focus on sustainability and the provision of a high-quality indoor environment for occupants. Actual IEQ performance was assessed in this study by field assessment after construction and occupancy. For comparison, the methodology was applied to a 30-year-old conventional building adjacent to CH2, which housed the same or similar occupants and activities. The impact of IEQ on occupant productivity will be reported in a separate future paper.