719 results for Quality improvements
Abstract:
The variability of input parameters is the most important source of overall model uncertainty; an in-depth understanding of this variability is therefore essential for uncertainty analysis of stormwater quality model outputs. This paper presents the outcomes of a research study that investigated the variability of pollutant build-up characteristics on road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary widely even within the same land use. Additionally, industrial land use showed relatively higher variability in maximum build-up, build-up rate and particle size distribution, whilst commercial land use displayed relatively higher variability in pollutant-solid ratio. Among the various build-up parameters analysed, D50 (volume-median diameter) displayed the highest variability for all three land uses.
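As a point of reference for the D50 parameter mentioned above, the sketch below shows one common way a volume-median diameter can be read off a particle size distribution: the diameter at which the cumulative volume fraction reaches 50%, found by linear interpolation. The size bins and volume fractions are illustrative values, not the study's data.

```python
# Minimal sketch: D50 (volume-median diameter) from a particle size
# distribution. Bin diameters and volume fractions are hypothetical.
import numpy as np

diameters_um = np.array([1, 10, 50, 100, 300, 1000])    # bin upper bounds (µm)
volume_fraction = np.array([0.05, 0.20, 0.30, 0.25, 0.15, 0.05])

cumulative = np.cumsum(volume_fraction)                 # rises to 1.0
d50 = np.interp(0.5, cumulative, diameters_um)          # diameter at 50% volume
print(f"D50 ≈ {d50:.1f} µm")
```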
Abstract:
Franchised convenience stores operate successfully throughout Taiwan, but the convenience store market is approaching saturation. Creating a cooperative long-term franchising relationship between franchisors and franchisees is essential to maintaining the proportion of convenience stores...
Abstract:
A review of the literature on issues involved in irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD have not been fully analysed; (2) the impacts of excessive pesticide use on farmers’ health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and the overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues using primary data, along with secondary data, from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. Between 1970 and 2006, the change in consumer surplus is Rs. 48,236 million, while the change in producer surplus is Rs. 14,274 million. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector’s ability to compete with the modern sector depends on productivity improvements, reduced production costs and future structural changes (spillover effects). Second, the thesis findings on agricultural pesticide use show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. The average loss in earnings per farmer during a typical cultivation season is Rs. 475 per month for the ‘hospitalised’ sample and approximately Rs. 345 per month for the ‘general’ farmers group. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the ‘hospitalised’ and ‘general’ farmers’ samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure and disutility are 29, 50, 5 and 16 per cent respectively for ‘hospitalised’ farmers, and 32, 55, 8 and 5 per cent respectively for ‘general’ farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers are overusing pesticides in the expectation of higher future returns. This has led to an increase in farming inefficiency of which the farmers themselves are unaware. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction levels, owing to a failure to incorporate all production costs in the relevant models. Only by incorporating water quality changes alongside quantity deterioration is it possible to derive socially optimal extraction levels. Empirical results clearly show that the benefits per hectare per month, considering both the avoided costs of deepening agro-wells by five feet from the existing average and the avoided costs of maintaining the water salinity level at 1.8 mmhos/cm, are approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.
Abstract:
The issue of ensuring that construction projects achieve high quality outcomes continues to be an important consideration for key project stakeholders. Although many quality practices have been adopted within the industry, establishing and achieving reasonable levels of quality in construction projects continues to be a problem. While some studies of the introduction and development of quality practices and of stakeholder management in the construction industry have been undertaken separately, no major studies have so far examined in depth how quality management practices that specifically address stakeholders’ perspectives of quality can contribute to the ultimate constructed quality of projects. This paper summarises a review of the literature on previous research into quality within the industry, focusing on its benefits and shortcomings, and examines the concept of integrating stakeholder perspectives of project quality to improve outcomes throughout the project lifecycle. The findings discussed in this paper reveal a pressing need for the investigation, development and testing of a framework to facilitate better implementation of quality management practices and thus the achievement of better quality outcomes within the construction industry. The framework will incorporate and integrate stakeholders’ views on what constitutes final project quality, to be utilised in developing better quality management planning and systems aimed ultimately at achieving better project quality delivery.
Abstract:
In today’s electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible, but rather has to be mined and extracted, which requires automated tools that are effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets and databases; it yields frequent patterns and association rules between the items/attributes of a dataset with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. To use association rules (and the knowledge within them) effectively, the number of rules needs to be kept manageable, so a method is needed to reduce their number without losing knowledge in the process. Thus the idea of non-redundant association rule mining was born. A second issue in association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these measures have their limitations. Approaches that use information about the dataset’s structure to assess association rules are limited, but could yield useful rules if tapped. Finally, while it is important to obtain interesting association rules from a dataset in a manageable quantity, it is equally important to apply them in a practical way, so that the knowledge they contain can be taken advantage of. Association rules show items/attributes that frequently appear together. Recommendation systems also look at patterns and items/attributes that occur together frequently in order to make a recommendation to a person, so it should be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them. For discovering non-redundant rules, we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset’s structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be used practically and effectively in a recommender system while improving the recommender system’s performance. This becomes especially evident for the user cold-start problem, a serious problem facing recommender systems that our proposal helps to solve.
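Since the thesis builds on the standard support/confidence framework, a minimal sketch of those measures may help. The toy transactions, thresholds, and brute-force enumeration below are illustrative assumptions, not the thesis's algorithms; they also hint at the rule-explosion problem the abstract describes, since even this tiny dataset yields many rules.

```python
# Minimal sketch of support/confidence-based association rule mining.
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.4, 0.7
items = sorted(set().union(*transactions))

# Enumerate frequent itemsets of size 2-3 (brute force; Apriori would prune).
frequent = [frozenset(c)
            for k in (2, 3)
            for c in combinations(items, k)
            if support(set(c), transactions) >= min_support]

# Derive rules X -> Y with confidence = support(X ∪ Y) / support(X).
for itemset in frequent:
    for k in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, k)):
            consequent = itemset - antecedent
            conf = support(itemset, transactions) / support(antecedent, transactions)
            if conf >= min_confidence:
                print(f"{set(antecedent)} -> {set(consequent)} "
                      f"(support={support(itemset, transactions):.2f}, "
                      f"confidence={conf:.2f})")
```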
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for assessing tear film quality, which are used by most clinicians, have been criticised for their lack of reliability and/or repeatability, and the range of non-invasive methods of tear assessment that has been investigated also presents limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Improving techniques for assessing tear film quality is therefore of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film: when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purpose-developed to quantify changes in the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routines extract from each frame a maximised area of analysis, within which a TFSQ metric is calculated. Initially, two metrics based on Gabor filtering and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to tear film assessment and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analysing tear break-up time (TBUT). The capability of each non-invasive method to discriminate dry eye from normal subjects was also investigated, with receiver operating characteristic (ROC) curves calculated to assess each method’s ability to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistics value is extracted. This metric showed better sensitivity under low pattern disturbance and improved ROC curve performance. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations, with special interest in the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model derived for the build-up phase offered insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modelling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods; a set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to selecting the appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modelling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
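As a rough illustration of the Cartesian-to-polar transformation and block-processing idea described above, the sketch below resamples a synthetic ring image onto a polar grid, so concentric rings become quasi-straight lines, and then summarises per-block variability. The image size, radii, block shape, and the choice of standard deviation as the block statistic are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch: polar resampling of a ring pattern plus a block-statistics
# metric. All parameters here are illustrative assumptions.
import numpy as np

def to_polar(image, center, r_min, r_max, n_r=64, n_theta=360):
    """Sample the image on a polar grid; rows index radius, columns angle."""
    radii = np.linspace(r_min, r_max, n_r)
    angles = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r, theta = np.meshgrid(radii, angles, indexing="ij")
    x = np.clip((center[0] + r * np.cos(theta)).round().astype(int),
                0, image.shape[1] - 1)
    y = np.clip((center[1] + r * np.sin(theta)).round().astype(int),
                0, image.shape[0] - 1)
    return image[y, x]

def block_metric(polar, block=(8, 30)):
    """Mean per-block standard deviation: higher values suggest a more
    disturbed ring pattern (lower tear film surface quality)."""
    br, bt = block
    n_r, n_theta = polar.shape
    trimmed = polar[:n_r - n_r % br, :n_theta - n_theta % bt]
    blocks = trimmed.reshape(-1, br, trimmed.shape[1] // bt, bt)
    return blocks.std(axis=(1, 3)).mean()

# Usage on a synthetic concentric-ring image (stand-in for a video frame):
yy, xx = np.mgrid[0:480, 0:640]
rings = np.sin(0.5 * np.hypot(xx - 320, yy - 240))
polar = to_polar(rings, center=(320, 240), r_min=40, r_max=200)
print(f"TFSQ block metric: {block_metric(polar):.4f}")
```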
Abstract:
Information and communication technologies (ICTs) are essential components of the knowledge economy and play an immense complementary role in innovation, education, knowledge creation, and relations with government, civil society, and business within city regions. The ability to create, distribute, and exploit knowledge has become a major source of competitive advantage, wealth creation, and improvements in the new regional policies. The growing impact of ICTs on the economy and society, the rapid application of recent scientific advances in new products and processes, the shift to more knowledge-intensive industry and services, and rising skill requirements have become crucial concepts for urban and regional competitiveness. Therefore, harnessing ICTs for knowledge-based urban development (KBUD) has a significant impact on urban and regional growth (Yigitcanlar, 2005). In this sense, the e-region is a novel concept utilizing ICTs for regional development. Since the Helsinki European Council announced Turkey as a candidate for European Union (EU) membership in 1999, the candidacy has accelerated regional policy enhancements and the adoption of European regional policy standards. These include the generation of a new regional spatial division, the NUTS-II statistical regions; new legislation on the establishment of regional development agencies (RDAs); and new orientations in higher education, science, and technology within the framework of the EU’s Lisbon Strategy and the Bologna Process. The European standards posed an ambitious new agenda in the development and application of contemporary regional policy in Turkey (Bilen, 2005). In this sense, novel regional policies in Turkey necessarily endeavor to include information society objectives through the efficient use of new technologies such as ICTs. Such development seeks to be based on the tangible assets of the region (Friedmann, 2006) as well as on best practices derived from grounding initiatives at urban and local levels. These assets provide the foundation of an e-region that harnesses regional development in an information society context. With successful implementations, the Marmara region’s local governments in Turkey are setting the benchmark for the country in the implementation of spatial information systems and e-governance, and are moving toward an e-region. Therefore, this article aims to shed light on the organizational and regional realities of recent ICT applications and their supply instruments, based on evidence from selected local government organizations in the Marmara region. It also exemplifies the challenges and opportunities of the region in moving toward an e-region and provides a concise review of different ICT applications and strategies in a broader urban and regional context. The article is organized in three parts. The following section scrutinizes the e-region framework and the role of ICTs in regional development. Then, Marmara’s opportunities and challenges in moving toward an e-region are discussed in the context of ICT applications and their supply instruments, based on public-sector projects, policies, and initiatives. The last section discusses conclusions and prospective research.
Abstract:
Relevance feedback (RF) has proven very effective for improving retrieval accuracy, and adaptive information filtering (AIF) technology has benefited from the improvements achieved in all the tasks involved over recent decades. A difficult problem in AIF has been how to update the system with new feedback efficiently and effectively; in current feedback methods, the updating processes focus on updating system parameters. In this paper, we develop a new approach, Adaptive Relevance Features Discovery (ARFD). It automatically updates the system’s knowledge based on a sliding window over positive and negative feedback in order to solve a nonmonotonic problem efficiently. Some of the new training documents are selected using the knowledge the system has already obtained; specific features are then extracted from the selected training documents, and different methods are used to merge and revise the weights of features in a vector space. The new model is designed for Relevance Features Discovery (RFD), a pattern-mining-based approach that uses negative relevance feedback to improve the quality of the features extracted from positive feedback. Learning algorithms are also proposed to implement this approach on the Reuters Corpus Volume 1 and TREC topics. Experiments show that the proposed approach works efficiently and achieves encouraging performance.
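A minimal sketch of the sliding-window feedback idea described above follows. The window size, term-count features, and simple linear merge rule are illustrative stand-ins for the paper's actual ARFD feature extraction and weight-revision methods, which the abstract does not fully specify.

```python
# Minimal sketch: sliding window over positive/negative feedback documents,
# with a simple merge of fresh evidence into existing feature weights.
from collections import Counter, deque

WINDOW = 4          # number of most recent feedback documents retained
MERGE_ALPHA = 0.7   # weight given to existing knowledge when merging

window = deque(maxlen=WINDOW)   # (tokens, is_positive) pairs
weights: dict[str, float] = {}  # current feature weights

def update(tokens: list[str], is_positive: bool) -> None:
    """Add one feedback document, then recompute weights over the window."""
    window.append((tokens, is_positive))
    fresh = Counter()
    for doc, positive in window:
        for term, count in Counter(doc).items():
            # Negative feedback lowers a term's weight instead of raising it.
            fresh[term] += count if positive else -count
    # Merge: old knowledge decays, recent evidence is blended in.
    for term in set(weights) | set(fresh):
        weights[term] = (MERGE_ALPHA * weights.get(term, 0.0)
                         + (1 - MERGE_ALPHA) * fresh.get(term, 0))

update("relevance feedback improves retrieval".split(), True)
update("spam spam irrelevant noise".split(), False)
print(sorted(weights.items(), key=lambda kv: -kv[1])[:5])
```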
Abstract:
Introduction. Surgical treatment of scoliosis is assessed in the spine clinic by the surgeon, who makes numerous measurements on X-rays as well as of the rib hump, but it is important to understand which of these measures correlate with self-reported improvements in patients’ quality of life following surgery. The objective of this study was to examine the relationship between patient satisfaction after thoracoscopic (keyhole) anterior scoliosis surgery and standard deformity correction measures, using the Scoliosis Research Society (SRS) adolescent questionnaire. Methods. A series of 100 consecutive adolescent idiopathic scoliosis patients received a single anterior rod via a keyhole approach at the Mater Children’s Hospital, Brisbane. Patients completed SRS outcomes questionnaires before surgery and again 24 months after surgery. Multiple regression and t-tests were used to investigate the relationship between SRS scores and the deformity correction achieved. Results. There were 94 females and 6 males with a mean age of 16.1 years. The mean Cobb angle improved from 52° pre-operatively to 21° for the instrumented levels post-operatively (59% correction), and the mean rib hump improved from 16° to 8° (51% correction). The mean total SRS score for the cohort was 99.4/120, indicating a high level of satisfaction with the results of scoliosis surgery. None of the deformity-related parameters in the multiple regressions was significant. However, the twenty patients with the smallest Cobb angles after surgery reported significantly higher SRS scores than the twenty patients with the largest Cobb angles after surgery, whereas there was no difference on the basis of rib hump correction. Discussion. Patients undergoing thoracoscopic (keyhole) anterior scoliosis correction report good SRS scores, comparable to those in previous studies. We suggest that the absence of any statistically significant difference in SRS scores between patients with and without rod or screw complications is because these complications are not associated with any clinically significant loss of correction in our patient group. The Cobb angle after surgery was the only significant predictor of patient satisfaction when comparing subgroups of patients with the largest and smallest Cobb angles after surgery.
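For illustration, the subgroup comparison reported in the Results could be run as an independent two-sample t-test of SRS scores between the smallest- and largest-Cobb-angle subgroups. The arrays below are made-up values, not the study's data.

```python
# Minimal sketch: two-sample t-test on SRS scores for two subgroups of 20.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
srs_smallest_cobb = rng.normal(103, 6, 20)  # hypothetical SRS scores, n=20
srs_largest_cobb = rng.normal(96, 6, 20)    # hypothetical SRS scores, n=20

t_stat, p_value = stats.ttest_ind(srs_smallest_cobb, srs_largest_cobb)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```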
Abstract:
A vast proportion of companies nowadays look to design, focusing on end users as a means of driving new projects. However, many companies are still drawn to the technological improvements that drive innovation within their industry context, and the Australian livestock industry is no different: to date, the adoption of new products and services within the industry has been documented as quite slow. This paper investigates why disruptive innovation should be a priority for these technologically focused companies and demonstrates how design-led innovation can bring about a higher-quality engagement between end user and company alike. A case study linking participatory design and design thinking is presented. Within this, a conceptual model of presenting future scenarios to internal and external stakeholders is applied to the livestock industry, assisting companies to apply strategy, culture and advancement in meaningful product offerings to consumers.
Abstract:
In this paper we study both the level of Value-at-Risk (VaR) disclosure and the accuracy of the disclosed VaR figures for a sample of US and international commercial banks. To measure the level of VaR disclosures, we develop a VaR Disclosure Index that captures many different facets of market risk disclosure. Using panel data over the period 1996–2005, we find an overall upward trend in the quantity of information released to the public. We also find that Historical Simulation is by far the most popular VaR method. We assess the accuracy of VaR figures by studying the number of VaR exceedances and whether actual daily VaRs contain information about the volatility of subsequent trading revenues. Unlike the level of VaR disclosure, the quality of VaR disclosure shows no sign of improvement over time. We find that VaR computed using Historical Simulation contains very little information about future volatility.
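For readers unfamiliar with the method, a minimal sketch of Historical Simulation VaR, together with a simple exceedance count of the kind used above to assess accuracy, follows. The simulated P&L series, window length, and 99% confidence level are illustrative assumptions, not the paper's data or backtesting procedure.

```python
# Minimal sketch: Historical Simulation VaR and an exceedance count.
import numpy as np

rng = np.random.default_rng(42)
pnl = rng.standard_t(df=5, size=750) * 1e6  # hypothetical daily trading P&L

def historical_var(pnl_window: np.ndarray, level: float = 0.99) -> float:
    """VaR as the loss quantile of the empirical P&L distribution."""
    return -np.quantile(pnl_window, 1 - level)

# Rolling one-year (250-day) window: compare each day's P&L to the VaR
# estimated from the preceding window, and count exceedances.
window = 250
exceedances = sum(
    pnl[t] < -historical_var(pnl[t - window:t])
    for t in range(window, len(pnl))
)
days = len(pnl) - window
print(f"99% VaR exceedances: {exceedances}/{days} "
      f"(expected ≈ {0.01 * days:.0f} if the model is accurate)")
```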
Abstract:
Background/aim. In response to the high burden of disease associated with chronic heart failure (CHF), in particular the high rates of hospital admissions, dedicated CHF management programs (CHF-MPs) have been developed, and over the past five years there has been rapid growth of CHF-MPs in Australia. Given the apparent mismatch between the demand for, and availability of, CHF-MPs, this paper discusses the accessibility and quality of current CHF-MPs in Australia. Methods. The data presented in this report combine the research of the co-authors, in particular a review of inequities in access to chronic heart failure care, which utilised geographical information systems (GIS), and a survey of heterogeneity in quality and service provision in Australia. Results. Of the 62 CHF-MPs surveyed in this study, 93% (58 centres) were located in areas rated as Highly Accessible, indicating that most CHF-MPs are located in capital cities or large regional cities. Six percent (4 CHF-MPs) were located in Accessible areas, that is, country towns or cities. No CHF-MPs had been established outside of cities to service the estimated 72,000 individuals with CHF living in rural and remote areas. Sixteen percent of programs recruited NYHA Class I patients, and of these 20% lacked echocardiographic confirmation of their diagnosis. Conclusion. Overall, these data highlight the urgent need to provide equitable access to CHF-MPs. When establishing CHF-MPs, current evidence-based models should be considered to ensure quality in practice.