893 results for metrics
Abstract:
The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system that can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance: as long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for patients with partial or complete loss of accommodation (presbyopia), since it helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations ($Z_4^0$ and $Z_6^0$). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system, through which the retinal image size can be kept constant, have been widely adopted. In such measurements, the subject's tested eye is normally cyclopleged. Objective methods that do not require cycloplegia are also used, in which the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality, as calculated from the IQMs, to a certain fraction of the possible maximum value. In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration ($Z_4^0$) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQM and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQMs on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, was used as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. A significant correlation was found between each subject's estimated threshold level for the estimated DOF and their HOA RMS (Pearson's r = 0.88, p < 0.001).
This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects. Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes, so this population provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, and the results were compared to those from a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes. However, there was no strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that helped to significantly increase the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs in the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of $Z_4^0$ and $Z_6^0$. Systematic introduction of a targeted amount of both $Z_4^0$ and $Z_6^0$ was found to significantly improve the DOF of healthy subjects, and wavefront combinations of $Z_4^0$ and $Z_6^0$ with opposite signs expanded the DOF further than $Z_4^0$ or $Z_6^0$ alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF. In the experiment, the optimal combinations of $Z_4^0$ and $Z_6^0$ were found to provide a better balance of DOF expansion against relatively smaller decreases in visual acuity. Therefore, the optimal combinations of $Z_4^0$ and $Z_6^0$ provide a more efficient method to expand the DOF than $Z_4^0$ or $Z_6^0$ alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations: more aberrated eyes generally have a larger DOF. The association of DOF with the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberrations. Among the Zernike HOA terms, spherical aberrations ($Z_4^0$ and $Z_6^0$) were found to improve the DOF. Certain combinations of $Z_4^0$ and $Z_6^0$ provide a more effective method to expand DOF than using $Z_4^0$ or $Z_6^0$ alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
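The threshold-based DOF definition described above lends itself to a simple numerical procedure. Below is a minimal sketch, assuming a precomputed through-focus VSOTF curve; the array names, the synthetic Gaussian curve and the 50% criterion are illustrative placeholders, not the thesis's actual data or threshold.

```python
# Sketch: DOF as the range of defocus over which an image quality metric
# stays above a fraction of its peak value (threshold-based definition).
import numpy as np

def estimate_dof(defocus, vsotf, threshold=0.5):
    """Return the defocus range (dioptres) where the IQM stays above
    `threshold` times its maximum."""
    criterion = threshold * vsotf.max()
    above = defocus[vsotf >= criterion]
    if above.size == 0:
        return 0.0
    return above.max() - above.min()

# Example: a synthetic through-focus curve peaking at zero defocus.
defocus = np.linspace(-2.0, 2.0, 401)      # dioptres
vsotf = np.exp(-(defocus / 0.6) ** 2)      # stand-in for a real VSOTF curve
print(f"Estimated DOF: {estimate_dof(defocus, vsotf):.2f} D")
```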
Abstract:
Earlier research developed theoretically-based aggregate metrics for technology strategy and used them to analyze California bridge construction firms (Hampson, 1993). Determinants of firm performance, including the trend in contract awards, market share and contract awards per employee, were used as indicators of competitive performance. The results of this research were a series of refined theoretically-based measures for technology strategy and a demonstrated positive relationship between technology strategy and competitive performance within the bridge construction sector. This research showed that three technology strategy dimensions (competitive positioning, depth of technology strategy, and organizational fit) show very strong correlations with the competitive performance indicators of absolute growth in contract awards and contract awards per employee. Both researchers and industry professionals need an improved understanding of how technology affects results, and of how to better target investments to improve competitive performance in particular industry sectors. This paper builds on the previous research findings by evaluating the strategic fit of firms' approaches to technology with industry segment characteristics. It begins with a brief overview of the background regarding technology strategy. The major sections of the paper describe niches and firms in an example infrastructure construction market, analyze appropriate technology strategies, and describe managerial actions to implement these strategies and support the business objectives of the firm.
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
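To make the roll-up idea concrete, here is a minimal sketch of hierarchical aggregation: per-class structural measurements combined into program-level readability/writability exposure and then a single index. The specific metrics, weighting and index formula are illustrative assumptions, not the paper's actual definitions.

```python
# Sketch: low-level per-class metrics -> program-level security index.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    classified_attributes: int   # attributes that hold 'classified' data
    publicly_readable: int       # of those, how many can be read externally
    publicly_writable: int       # of those, how many can be written externally

def exposure(m: ClassMetrics) -> float:
    """Low-level metric: mean fraction of classified attributes that are
    externally readable or writable."""
    if m.classified_attributes == 0:
        return 0.0
    read = m.publicly_readable / m.classified_attributes
    write = m.publicly_writable / m.classified_attributes
    return (read + write) / 2

def security_index(classes: list[ClassMetrics]) -> float:
    """Program-level index in [0, 1]; higher means less exposed data."""
    if not classes:
        return 1.0
    return 1.0 - sum(exposure(c) for c in classes) / len(classes)

# Example: two versions of the same class; the second encapsulates more.
print(security_index([ClassMetrics(4, 3, 2)]))  # 0.375
print(security_index([ClassMetrics(4, 1, 0)]))  # 0.875
```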
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms, which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms, Naive Bayes and the Support Vector Machine, are applied to the data, and the predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
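The core pipeline (module metrics as features, feature selection, then a classifier) can be sketched in a few lines. The sketch below uses random placeholder data and a common univariate selector; the NASA MDP / Eclipse datasets and the thesis's specific feature selection methods would take their place, so treat the choices here as assumptions.

```python
# Sketch: fault-proneness prediction from software metrics with
# feature selection followed by Naive Bayes.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))        # 200 modules, 20 static code metrics
y = rng.integers(0, 2, size=200)      # 1 = fault prone, 0 = not fault prone

model = make_pipeline(SelectKBest(f_classif, k=8), GaussianNB())
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```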
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern also becomes irregular due to scattering and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film; however, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filtering and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques: lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric ring pattern into an image of quasi-straight lines from which a block statistics value was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase gave some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
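The polar-transform / block-processing metric described above can be sketched as follows: the ring-pattern region is resampled from Cartesian to polar coordinates so that concentric rings become near-straight lines, and per-block variability serves as a TFSQ estimate. The block size, the use of the standard deviation as the block statistic, and the synthetic ring image are illustrative assumptions, not the thesis's exact routine.

```python
# Sketch: Cartesian-to-polar resampling of a ring pattern, then a
# block-statistics value as a tear film surface quality (TFSQ) estimate.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_radii=128, n_angles=256):
    """Resample `image` onto an (angle, radius) grid around `center`."""
    radii = np.linspace(0, min(image.shape) / 2 - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles)
    rows = center[0] + r * np.sin(a)
    cols = center[1] + r * np.cos(a)
    return map_coordinates(image, [rows, cols], order=1)

def block_statistic_tfsq(polar_image, block=16):
    """Mean per-block standard deviation; higher values suggest a more
    disturbed reflected pattern."""
    h, w = polar_image.shape
    blocks = [polar_image[i:i + block, j:j + block]
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    return float(np.mean([b.std() for b in blocks]))

# Example: a synthetic concentric-ring pattern centred in a 256x256 image.
yy, xx = np.mgrid[:256, :256]
rings = np.sin(0.5 * np.hypot(yy - 128, xx - 128))
polar = to_polar(rings, center=(128, 128))
print(f"TFSQ estimate: {block_statistic_tfsq(polar):.3f}")
```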
Abstract:
The importance of actively managing and analysing business processes is acknowledged more than ever in organisations today. Business processes form an essential part of an organisation, and their application areas are manifold. Most organisations keep records of the various activities that have been carried out for auditing purposes, but these records are rarely used for analysis. This paper describes the design and implementation of a process analysis tool that replays, analyses and visualises a variety of performance metrics using a process definition and its corresponding execution logs. The replayer uses a YAWL process model example to demonstrate its capacity to support advanced language constructs.
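As one hedged illustration of the kind of metric such a replay can yield, the sketch below computes average task durations from a toy event log. The log format, field names and records are assumptions for illustration; the actual tool works against YAWL process definitions and their execution logs.

```python
# Sketch: deriving a simple performance metric (average task duration)
# from start/complete events in an execution log.
from collections import defaultdict
from datetime import datetime

log = [  # (case id, task, event, timestamp) -- illustrative records
    ("c1", "Approve", "start",    "2024-01-01T09:00:00"),
    ("c1", "Approve", "complete", "2024-01-01T09:20:00"),
    ("c2", "Approve", "start",    "2024-01-01T10:00:00"),
    ("c2", "Approve", "complete", "2024-01-01T10:30:00"),
]

starts, durations = {}, defaultdict(list)
for case, task, event, ts in log:
    t = datetime.fromisoformat(ts)
    if event == "start":
        starts[(case, task)] = t
    elif event == "complete":
        durations[task].append((t - starts.pop((case, task))).total_seconds())

for task, secs in durations.items():
    print(f"{task}: average duration {sum(secs) / len(secs) / 60:.1f} min")
```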
Abstract:
Numerous tools and techniques have been developed to eliminate or reduce waste and carry out lean concepts in the manufacturing environment. However, appropriate lean tools need to be selected and implemented in order to fulfil the manufacturer's needs within budgetary constraints. As a result, it is important to identify the manufacturer's needs and implement only those tools that contribute the maximum benefit to those needs. In this research, a mathematical model is proposed for maximising the perceived value of manufacturer needs, and a step-by-step methodology is developed to select the best performance metrics along with appropriate lean strategies within budgetary constraints. The proposed model and method are demonstrated with the help of a case study.
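Cast in its simplest form, the budget-constrained maximisation described above reduces to a 0/1 knapsack over candidate lean tools, as in the sketch below. The tool names, value scores and costs are invented for illustration; the paper's model scores tools against ranked manufacturer needs rather than these flat numbers.

```python
# Sketch: choose the subset of lean tools that maximises perceived value
# without exceeding the budget (brute-force 0/1 knapsack; fine for few tools).
from itertools import combinations

tools = {  # name: (perceived value score, implementation cost)
    "5S": (8, 20), "Kanban": (9, 35), "SMED": (6, 25), "TPM": (7, 40),
}
budget = 70

best_value, best_set = 0, ()
for r in range(len(tools) + 1):
    for subset in combinations(tools, r):
        cost = sum(tools[t][1] for t in subset)
        value = sum(tools[t][0] for t in subset)
        if cost <= budget and value > best_value:
            best_value, best_set = value, subset

print(f"Select {best_set} for total value {best_value}")
```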
Abstract:
Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the goal of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.
Abstract:
Building on the recommendations of the Bradley Review (2008), the Australian Federal government intends to promote a higher level of penetration of tertiary qualifications across the broader Australian community, which is anticipated to result in increased standardisation across university degrees. In the field of property, tertiary academic programs are very closely aligned to the needs of a range of built environment professions, and there are well-developed synergies between the relevant professional bodies and the educational institutions. The strong nexus between academic and professional content is characterised by ongoing industry accreditation, which nominates a range of outcomes that the academic programs must maintain across a range of specified metrics. Commonly, the accrediting bodies focus on a standard of minimum requirements, especially in specialised subject areas where they require property graduates to demonstrate appropriate learning and attitudes. In addition to the nominated content fields, every undergraduate degree program also includes many other subjects which provide a richer experience for the students beyond the merely professional. This study focuses on the non-specialised knowledge field, which varies across the universities offering property degree courses, as every university has the freedom to pursue its own policy for these non-specialised units. With universities being sensitive to their role in the appropriate socialisation of new entrants, first year units have been used as a vehicle to support students' transition into university education, while final year units seek to support students' integration into the professional world. Consequently, many property programs have had to squeeze their property-specific units to accommodate more generic units in both first and final year, and the resulting diversity is a feature of the current range of property degrees across Australia which this research will investigate. The matrix of knowledge fields nominated by the Australian Property Institute for accreditation of degrees accepted for the Certified Practising Valuer (CPV) educational requirement, and the complementary requirements of the other major accrediting body (RICS), are used to classify and compare similarities and differences across property degrees in the light of the streamlining anticipated from the Bradley Review.
Abstract:
The World Health Organization recommends that data on mortality in its member countries be collected using the Medical Certificate of Cause of Death published in the instruction volume of the ICD-10. However, investment in the health information processes necessary to promote the use of this certificate and improve mortality information is lacking in many countries. An appeal for support to make improvements has been launched through the Health Metrics Network's MOVE-IT strategy (Monitoring of Vital Events – Information Technology) [World Health Organization, 2011]. Despite this international spotlight on the need to capture mortality data and to use the ICD-10 to code the data reported on such certificates, there is little cohesion in the way that certifiers of deaths receive instruction in how to complete the death certificate, which is the main source document for mortality statistics. Complete and accurate documentation of the immediate, underlying and contributory causes of death of the decedent on the death certificate is a requirement for producing standardised statistical information and cause-specific mortality statistics that can be compared between populations and across time. This paper reports on a research project conducted to determine the efficacy and accessibility of the certification module of the WHO's newly developed web-based training tool for coders and certifiers of deaths. Involving a population of medical students from the Fiji School of Medicine and a pre- and post-test research design, the study entailed completion of death certificates based on vignettes before and after access to the training tool. The participants' ability to complete the death certificates, together with analysis of the completeness and specificity of the ICD-10 coding of the reported causes of death, was used to measure the effect of the students' learning from the training tool. The quality of death certificate completion was assessed using a Quality Index before and after the participants accessed the training tool. In addition, the participants' views about the accessibility and use of the training tool were elicited using a supplementary questionnaire. The results of the study demonstrated improvement in the participants' ability to complete death certificates completely and accurately according to best practice. The training tool was viewed very positively, and its implementation in the curriculum for medical students was encouraged. Participants also recommended that interactive discussions to examine the certification exercises would be an advantage.
Abstract:
Research into complaints handling in the health care system has predominantly focused on examining the processes that underpin organisational systems. An understanding of the cognitive decisions made by patients that influence whether they are satisfied or dissatisfied with the care they are receiving has had limited attention thus far. This study explored the lived experiences of Queensland acute care patients who complained about some aspect of their inpatient stay. A purposive sample of sixteen participants was recruited and interviewed about their experience of making a complaint. The qualitative data gathered through the interview process was subjected to an Interpretative Phenomenological Analysis (IPA) approach, guided by the philosophical influences of Heidegger (1889-1976). As part of the interpretive endeavour of this study, Lazarus' cognitive emotive model with situational challenge was drawn on to provide a contextual understanding of the emotions experienced by the study participants. Analysis of the research data, aided by Leximancer™ software, revealed a series of relational themes that supported the interpretative data analysis process. The superordinate thematic statements that emerged from the narratives via the hermeneutic process were: ineffective communication, inconsistent standards of care, being treated with disrespect, unclear information on how to complain, and perceptions of negligence. This study's goal was to provide health services with information about complaints handling that can help them develop service improvements. The study patients articulated the need for health care system reform; they want to be listened to, to be acknowledged, to be believed, for people to take ownership when they have made a mistake, for mistakes not to recur, and to receive an apology. For these initiatives to be fully realised, the paradigm shift must go beyond reporting complaints data as percentages per patient contact, towards a concerted effort to evaluate what the qualitative complaints data is really saying. A more positive and proactive approach to encouraging patients to complain when they are dissatisfied has the potential to drive improvements.
Abstract:
This paper provides an overview of a joint research initiative being developed by the Queensland University of Technology in conjunction with the Australian Smart Services Cooperative Research Centre concerning the development and analysis of online communities. The intention of this project is initially to create an exciting and innovative web space around the concept of adventure travel, and then to analyse the level of user engagement to uncover patterns and processes that could be used in the future development of other virtual online communities. Travel websites are not a new concept, and there are many successful examples currently operating and generating profit. The intention of the QUT/Smart Services CRC project is to analyse the site metrics to determine the following: what specific conditions/parameters are required to foster a growing and engaged virtual community; when the shift occurs from external moderation to a more sustainable system of self-moderation within the online community; when users begin to take ownership of a site and an invested interest in the content and growth of an online community; and how to retain active contributors and high-impact power users on a long-term basis. With the travel website rapidly approaching release, this paper begins the process of reflection, outlining the process undertaken and the findings aggregated so far, while also positioning the project within the greater context of current research on online user participation and user-generated content.
Abstract:
Marketers spend considerable resources to motivate people to consume their products and services as a means of goal attainment (Bagozzi and Dholakia, 1999). Why people increase, decrease, or stop consuming some products is based largely on how well they perceive they are doing in pursuit of their goals (Carver and Scheier, 1992). Yet despite the importance for marketers of understanding how current performance influences a consumer's future efforts, this topic has received little attention in marketing research. Goal researchers generally agree that feedback about how well or how poorly people are doing in achieving their goals affects their motivation (Bandura and Cervone, 1986; Locke and Latham, 1990). Yet there is less agreement about whether positive and negative performance feedback increases or decreases future effort (Locke and Latham, 1990). For instance, while one customer of a gym might cancel his membership after receiving negative feedback about his fitness, the same negative feedback might cause another customer to visit the gym more often to achieve better results. A similar logic can apply to many products and services, from the use of cosmetics to investing in mutual funds. The present research offers managers key insights into how to engage customers and keep them motivated. Given that connecting customers with the company is a top research priority for managers (Marketing Science Institute, 2006), this article provides suggestions for performance metrics, including four questions that managers can use to apply the findings.