Abstract:
The variability of input parameters is the most important source of overall model uncertainty. Therefore, an in-depth understanding of this variability is essential for uncertainty analysis of stormwater quality model outputs. This paper presents the outcomes of a research study which investigated the variability of pollutant build-up characteristics on road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary significantly even within the same land use. Additionally, industrial land use showed relatively higher variability in maximum build-up, build-up rate and particle size distribution, whilst commercial land use displayed relatively higher variability in pollutant-solid ratio. Among the various build-up parameters analysed, D50 (volume-median diameter) displayed the highest relative variability for all three land uses.
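For clarity, D50 is the particle diameter below which half of the total particle volume lies. The following minimal sketch (the function name, size bins and volume fractions are hypothetical, not data from the study) shows how a volume-median diameter can be interpolated from a binned particle size distribution:

```python
import numpy as np

def d50_volume_median(diameters_um, volume_fractions):
    """Return the diameter at which the cumulative volume
    distribution crosses 50% (linear interpolation)."""
    order = np.argsort(diameters_um)
    d = np.asarray(diameters_um, dtype=float)[order]
    cum = np.cumsum(np.asarray(volume_fractions, dtype=float)[order])
    cum /= cum[-1]  # normalise to a cumulative volume fraction
    return float(np.interp(0.5, cum, d))

# Hypothetical size bins (micrometres) and their volume fractions
print(d50_volume_median([1, 10, 50, 150, 400],
                        [0.05, 0.20, 0.35, 0.30, 0.10]))  # ~38.6 um
```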
Abstract:
Franchised convenience stores successfully operate throughout Taiwan, but the convenience store market is approaching saturation point. Creating a cooperative long-term franchising relationship between franchisors and franchisees is essential to maintain the proportion of convenience stores...
Abstract:
The issue of ensuring that construction projects achieve high quality outcomes continues to be an important consideration for key project stakeholders. Although many quality practices have been introduced within the industry, the establishment and achievement of reasonable levels of quality in construction projects continues to be a problem. While some studies into the introduction and development of quality practices and stakeholder management in the construction industry have been undertaken separately, no major studies have so far been completed that examine in depth how quality management practices that specifically address stakeholders’ perspectives of quality can be utilised to contribute to the ultimate constructed quality of projects. This paper summarizes a review of the literature on previous quality research within the industry, focusing on its benefits and shortcomings, and examines the concept of integrating stakeholder perspectives of project quality to improve outcomes throughout the project lifecycle. Findings discussed in this paper reveal a pressing need for the investigation, development and testing of a framework to facilitate better implementation of quality management practices and thus the achievement of better quality outcomes within the construction industry. The framework will incorporate and integrate the views of stakeholders on what constitutes final project quality, to be utilised in developing better quality management planning and systems aimed ultimately at achieving better project quality delivery.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms, Naive Bayes and the Support Vector Machine, are applied to the data, and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into 2D rank sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
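As an illustration of the pipeline described above (feature selection followed by Naive Bayes and SVM classification), here is a minimal sketch; the synthetic metric data, the choice of ANOVA-based feature selection, and all parameter values are assumptions for demonstration, not the thesis's actual setup:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # hypothetical per-module software metrics
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # hypothetical fault-proneness labels

for name, clf in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=5),  # feature selection before learning
                          clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.2f}")
```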
Abstract:
In today’s electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible, but rather has to be mined and extracted, which requires automated tools that are effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets / databases, which includes frequent patterns and association rules between the items / attributes of a dataset with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. In order to effectively use the association rules (and the knowledge within them), the number of rules needs to be kept manageable, so a method to reduce the number of association rules is necessary. However, we do not want to lose knowledge through this process; thus the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these have their limitations. Approaches which use information about the dataset’s structure to measure association rules are few, but could yield useful association rules if tapped. Finally, while it is important to be able to obtain interesting association rules from a dataset in a manageable number, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items / attributes that appear together frequently. Recommendation systems also look at patterns and items / attributes that occur together frequently in order to make a recommendation to a person, so it should be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them. For discovering non-redundant rules we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset’s structure we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be practically and effectively used in a recommender system, while at the same time improving the recommender system’s performance. This becomes especially evident for the user cold-start problem, a serious problem facing recommender systems that our proposal helps to address.
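To make the support and confidence measures mentioned above concrete, here is a minimal sketch over a toy transaction set (the items and transactions are invented purely for illustration):

```python
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# The rule {bread} -> {milk}
print(support({"bread", "milk"}))       # 0.5
print(confidence({"bread"}, {"milk"}))  # 0.666...
```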
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced at the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each non-invasive method to discriminate dry eye from normal subjects was also investigated, with receiver operating characteristic (ROC) curves calculated to assess each method’s ability to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into an image of quasi-straight lines, from which a block statistic is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully understand the HSV measurement and the instrument’s potential limitations, with special interest in the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase gave some insight into this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods; a set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to the selection of an appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects), and could become a useful clinical tool for assessing tear film surface quality in the future.
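As a rough illustration of the Cartesian-to-polar transformation and block-processing idea described above (the grid sizes, the choice of per-block standard deviation as the statistic, and all names are assumptions rather than the thesis implementation), one might write:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_unwrap(image, center, n_radii=128, n_angles=256):
    """Resample an image about `center` so that concentric rings
    become quasi-straight horizontal lines."""
    radii = np.linspace(0, min(image.shape) / 2 - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = center[0] + r * np.sin(a)
    cols = center[1] + r * np.cos(a)
    return map_coordinates(image, [rows, cols], order=1)

def block_metric(polar_img, block=16):
    """Mean per-block standard deviation: higher values indicate a
    more disturbed (less regular) reflected pattern."""
    h, w = polar_img.shape
    blocks = polar_img[: h - h % block, : w - w % block].reshape(
        h // block, block, w // block, block)
    return float(blocks.std(axis=(1, 3)).mean())

# Example on a synthetic ring pattern (purely illustrative)
yy, xx = np.mgrid[0:200, 0:200]
rings = np.sin(0.5 * np.hypot(yy - 100, xx - 100))
print(block_metric(polar_unwrap(rings, center=(100, 100))))
```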
Abstract:
In 2008, a three-year pilot ‘pay for performance’ (P4P) program, known as the ‘Clinical Practice Improvement Payment’ (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the public health sector in Australia with a focus on rewarding quality, and it is unique in its large state-wide focus, covering 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The scheme was designed around key concepts, including the identification of clinical indicators that met the set criteria of: high disease burden; a well-defined single diagnostic group or intervention; significant variations in clinical outcomes and/or practices; a good evidence base; and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of the implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. Three key studies were undertaken to address these questions. Firstly, a survey of clinicians examined their levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study applied Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme. Thirdly, a simple economic cost analysis was undertaken. The CPIP survey elicited 192 clinician respondents, over 70% of whom were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey that identified positive attitudes in 6 of the 7 domains, including impact, awareness and understanding, and clinical relevance, all scored positively across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness for the CPIP scheme. This evaluative research supports a previously identified need in the literature for a phased introduction of P4P-type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with the measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms.
It became evident that a clear process is required to standardise how clinical indicators evolve over time and to direct movement towards more rigorous ‘pay for performance’ targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over five million dollars, from a potential ten million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and that, despite issues being identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for Mental Health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
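For readers unfamiliar with SPC, the sketch below shows the kind of trending that could flag indicator weakness early: a simple p-chart with 3-sigma limits over a monthly compliance indicator (all figures are invented for illustration and do not come from the CPIP data):

```python
import numpy as np

compliant = np.array([78, 82, 80, 75, 85, 60, 83, 81])   # cases meeting the indicator
total = np.array([100, 105, 98, 97, 110, 102, 104, 99])  # eligible cases per month

p = compliant / total
p_bar = compliant.sum() / total.sum()          # centre line
sigma = np.sqrt(p_bar * (1 - p_bar) / total)   # per-month standard error
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, 1)

for month, (pi, lo, hi) in enumerate(zip(p, lcl, ucl), start=1):
    flag = "OUT OF CONTROL" if not lo <= pi <= hi else ""
    print(f"month {month}: p={pi:.2f} limits=({lo:.2f}, {hi:.2f}) {flag}")
```

On this toy data, month 6 falls below the lower control limit, the kind of early signal of indicator weakness described above.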
Abstract:
In this paper we study both the level of Value-at-Risk (VaR) disclosure and the accuracy of the disclosed VaR figures for a sample of US and international commercial banks. To measure the level of VaR disclosures, we develop a VaR Disclosure Index that captures many different facets of market risk disclosure. Using panel data over the period 1996–2005, we find an overall upward trend in the quantity of information released to the public. We also find that Historical Simulation is by far the most popular VaR method. We assess the accuracy of VaR figures by studying the number of VaR exceedances and whether actual daily VaRs contain information about the volatility of subsequent trading revenues. Unlike the level of VaR disclosure, the quality of VaR disclosure shows no sign of improvement over time. We find that VaR computed using Historical Simulation contains very little information about future volatility.
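As background to the Historical Simulation method discussed above, here is a minimal sketch of computing daily VaR as an empirical quantile of past P&L and counting exceedances; the simulated P&L, the 250-day window and the 99% confidence level are illustrative assumptions, not the paper's methodology:

```python
import numpy as np

rng = np.random.default_rng(1)
pnl = rng.normal(0, 1e6, size=750)  # hypothetical daily trading revenues

window, alpha = 250, 0.99
exceedances = 0
for t in range(window, len(pnl)):
    # Historical Simulation: VaR is the empirical (1 - alpha) quantile
    # of the trailing window of P&L, reported as a positive loss figure.
    var_t = -np.quantile(pnl[t - window:t], 1 - alpha)
    if pnl[t] < -var_t:  # loss beyond the reported VaR
        exceedances += 1

print(f"exceedances: {exceedances} over {len(pnl) - window} days "
      f"(expected ~{0.01 * (len(pnl) - window):.0f} at the 99% level)")
```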
Abstract:
Inadequate air quality and the inhalation of airborne pollutants pose many risks to human health and wellbeing, and are listed among the top environmental risks worldwide. The importance of outdoor air quality was recognised in the 1950s; indoor air quality emerged as an issue some time later and was soon recognised as having an equal, if not greater, importance than outdoor air quality. Identification of ambient air pollution as a health hazard was followed by steps, undertaken by a broad range of national and international professional and government organisations, aimed at reducing or eliminating the hazard. However, the process of achieving better air quality is still in progress. The last 10 years or so have seen an unprecedented increase in the interest in, and attention to, airborne particles, with a special focus on their finer size fractions, including ultrafine particles (< 0.1 µm) and their subset, nanoparticles (< 0.05 µm). This paper discusses the current status of scientific knowledge on the links between air quality and health, with a particular focus on airborne particulate matter, and the directions taken by national and international bodies to improve air quality.
Abstract:
Background/aim: In response to the high burden of disease associated with chronic heart failure (CHF), in particular the high rates of hospital admissions, dedicated CHF management programs (CHF-MPs) have been developed, and over the past five years there has been a rapid growth of CHF-MPs in Australia. Given the apparent mismatch between the demand for, and availability of, CHF-MPs, this paper discusses the accessibility and quality of current CHF-MPs in Australia.
Methods: The data presented in this report were combined from the research of the co-authors, in particular a review of inequities in access to chronic heart failure care which utilised geographical information systems (GIS), and a survey of heterogeneity in quality and service provision in Australia.
Results: Of the 62 CHF-MPs surveyed in this study, 93% (58 centres) were located in areas rated as Highly Accessible, indicating that most CHF-MPs are located in capital cities or large regional cities. Six percent (4 CHF-MPs) were located in Accessible areas, which were country towns or cities. No CHF-MPs had been established outside of cities to service the estimated 72,000 individuals with CHF living in rural and remote areas. 16% of programs recruited NYHA Class I patients, and of these, 20% lacked echocardiographic confirmation of their diagnosis.
Conclusion: Overall, these data highlight the urgent need to provide equitable access to CHF-MPs. When establishing CHF-MPs, consideration should be given to current evidence-based models to ensure quality in practice.
Abstract:
Quality, as well as project success, in construction projects should be regarded as the fulfillment of the expectations of those contributors and stakeholders involved in such projects. Although a significant number of quality practices have been introduced within the industry, the establishment and attainment of reasonable levels of quality in construction projects internationally continues to be an ongoing problem. To date, some investigation into the introduction and improvement of quality practices and stakeholder management in the construction industry has been accomplished independently, but no major studies have been completed that examine comprehensively how quality management practices that particularly concentrate on the stakeholders’ perspective of quality can be used to contribute to final project quality outcomes. This paper examines the process for developing a framework for better involvement of stakeholders in quality planning and practices, and subsequently for contributing to higher quality outcomes within construction projects. Through an extensive literature review, it highlights various perceptions of quality, categorizes quality issues with particular focus on benefits and shortcomings, and examines stakeholders’ viewpoints of project quality in order to promote the improvement of outcomes throughout a project’s lifecycle. It proposes a set of organised information as a basis for the development of a prospective framework which ultimately aims to improve project quality outcomes. The framework that will be developed from this research will provide project managers and owners with the information and strategic direction required to achieve their own and their stakeholders’ targets for the implementation of quality practices and the achievement of high quality outcomes on their future projects.
Abstract:
This study explores organizational capability and culture change through a project developing an assurance of learning program in a business school. In order to compete internationally for high quality faculty, students, strategic partnerships and research collaborations, it is essential for universities to develop and maintain an international focus and a quality product that predicts excellence in the student experience and graduate outcomes that meet industry needs. Developing, marketing and delivering that quality product requires an organizational strategy to which all members of the organization contribute and adhere. The ability to acquire, share and utilize knowledge has become a critical organizational capability in academia as well as in other industries. Traditionally, the functional approach to business school structures and the disparate nature of social networks and work contact have limited the sharing of knowledge between academics working in different disciplines. In this project, a community of practice program was established to include academics in the development of an embedded assurance of learning program affecting more than 5000 undergraduate students and 250 academics from nine different disciplines across four schools. The primary outcome from the fully developed and implemented assurance of learning program was the five-year accreditation of the business school’s programs by two international accrediting bodies, EQUIS and AACSB. However, this study explores a different outcome, namely the change in organizational culture and individual capabilities as academics worked together in teaching and learning teams. The study uses a survey of, and interviews with, the academics involved, through a retrospective panel design containing an experimental group and a control group. Results offer insights into communities of practice as a means of addressing organizational capability and changes in organizational culture. Knowledge management and shared learning can achieve strategic and operational benefits within academia just as within other industrial enterprises, but this comes at a cost. Traditional structures, academics who act like individual contractors, and deep divides across research, teaching and service interests served a different master and required fewer resources. Collaborative structures; fewer master categories of discrete knowledge areas; specific strategic goals; greater links between academics and industry; and the means to share learned insights will require a different approach to resourcing both the individual and the team.