955 results for "development of quality"


Relevance: 100.00%

Publisher:

Abstract:

Impressions about product quality and reliability can depend as much on perceptions about brands and country of origin as on data regarding performance and failure. This has implications for companies in developing countries that need to compete with importers; for manufacturers in industrialised countries it has implications for the value of transferred technologies. This article considers the issue of quality and reliability when technology is transferred between countries with different levels of development. It is based on UK and Chinese company case studies and questionnaire surveys undertaken among three company groups: UK manufacturers, Chinese manufacturers and Chinese users. Results show that all three groups recognise quality and reliability as important, and support the premise that foreign-technology-based machines made in China carry a price premium over Chinese machines based on local technology. Closer examination reveals a number of important differences between the groups concerning the perceptions and reality of quality and reliability.

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this research is to propose a procurement system that spans disciplines and shares retrieved information with the relevant parties, so as to achieve better co-ordination between the supply and demand sides. This paper demonstrates how to analyze data with an agent-based procurement system (APS) to re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers, and evaluating supplier performance against the selection criteria using a mathematical model. Manufacturing firms and trading companies spend more than half of their sales dollars on the purchase of raw materials and components. Efficient, accurate data collection is one of the key success factors in quality procurement, that is, purchasing the right material at the right quality from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study of a manufacturing small and medium-sized enterprise (SME) was conducted. The APS supports data and information analysis techniques to facilitate decision making, so that the agent can enhance negotiation and supplier evaluation efficiency, saving both time and cost.
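
As a rough sketch of the sort of mathematical model an agent could apply when evaluating short-listed suppliers against selection criteria (the criteria, weights and ratings below are invented for illustration, not taken from the paper):

```python
# Weighted-criteria supplier scoring, as an APS agent might rank
# short-listed suppliers. Criteria names, weights and ratings are
# assumptions for this sketch, not the paper's actual model.

CRITERIA_WEIGHTS = {"price": 0.4, "quality": 0.3, "delivery": 0.2, "service": 0.1}

def score_supplier(ratings: dict) -> float:
    """Weighted sum of normalised (0-1) criterion ratings."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

suppliers = {
    "SupplierA": {"price": 0.8, "quality": 0.9, "delivery": 0.7, "service": 0.6},
    "SupplierB": {"price": 0.9, "quality": 0.6, "delivery": 0.8, "service": 0.7},
}

# Rank suppliers by score, best first.
for name in sorted(suppliers, key=lambda s: score_supplier(suppliers[s]), reverse=True):
    print(f"{name}: {score_supplier(suppliers[name]):.2f}")
```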

Relevance: 100.00%

Publisher:

Abstract:

Purpose: To develop a questionnaire that subjectively assesses near visual function in patients with 'accommodating' intraocular lenses (IOLs). Methods: A literature search of existing vision-related quality-of-life instruments identified all questions relating to near visual tasks. Questions repeated across multiple instruments were combined. Further relevant questions were added, and item interpretation was confirmed through multidisciplinary consultation and focus groups. A preliminary 19-item questionnaire was presented to 22 subjects at their 4-week visit after first-eye phacoemulsification with 'accommodative' IOL implantation, and again 6 and 12 weeks post-operatively. Rasch analysis, frequency of endorsement, and tests of normality (skew and kurtosis) were used to reduce the instrument. Cronbach's alpha and test-retest reliability (intraclass correlation coefficient, ICC) were determined for the final questionnaire. Construct validity was assessed by Pearson's product moment correlation (PPMC) of questionnaire scores with reading acuity (RA) and with critical print size (CPS) reading speed. Criterion validity was assessed by receiver operating characteristic (ROC) curve analysis, and the dimensionality of the questionnaire was assessed by factor analysis. Results: Rasch analysis eliminated nine items due to poor fit statistics. The final ten items have good separation (2.55), internal consistency (Cronbach's α = 0.97) and test-retest reliability (ICC = 0.66). PPMC of questionnaire scores with RA was 0.33, and with CPS reading speed 0.08. The area under the ROC curve was 0.88, and factor analysis revealed one principal factor. Conclusion: The pilot data indicate that the questionnaire is an internally consistent, reliable and valid instrument that could be useful for assessing near visual function in patients with 'accommodating' IOLs. The questionnaire will now be expanded to include other types of presbyopic correction. © 2007 British Contact Lens Association.
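
For reference, Cronbach's alpha as reported above can be computed directly from an item-response matrix; a minimal sketch (the response data below are invented):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of subjects' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented data: 5 subjects x 4 questionnaire items on a 1-5 scale.
responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 5, 5, 4],
                      [3, 3, 2, 3],
                      [1, 2, 1, 2]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```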

Relevance: 100.00%

Publisher:

Abstract:

The aim of the research is to develop an e-business selection framework for small and medium enterprises (SMEs) by integrating established planning techniques. The research is case-based, comprising four case studies carried out in the printing industry for the purpose of evaluating the framework. Two of the companies are from Singapore, while the other two are from Guangzhou, China and Jinan, China respectively. To establish the need for an e-business selection framework for SMEs, extensive literature reviews were carried out in the areas of e-business, business planning frameworks, SMEs and the printing industry. An e-business selection framework is then proposed by integrating three established techniques: the Balanced Scorecard (BSC), Value Chain Analysis (VCA) and Quality Function Deployment (QFD). The newly developed selection framework was pilot-tested using a published case study before actual evaluation was carried out in the four case study companies. The case study methodology was chosen because of its ability to integrate the diverse data collection techniques required to generate the BSC, VCA and QFD for the selection framework. The findings of the case studies revealed that the three techniques can be integrated seamlessly to complement each other's strengths in e-business planning. The eight-step methodology of the selection framework provides SMEs with a step-by-step approach to e-business through structured planning. The project has also provided better understanding and deeper insights into SMEs in the printing industry.
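
As a rough illustration of how a QFD-style matrix can prioritise e-business options against BSC-derived criteria (the criteria, options and scores below are invented; this is not the thesis's actual eight-step framework):

```python
# QFD-style prioritisation: weight each planning criterion (here loosely
# inspired by BSC perspectives), score each e-business option against
# it, and rank options by weighted total. All names/numbers are invented.

criteria_weights = {"financial": 0.35, "customer": 0.30,
                    "internal_process": 0.20, "learning_growth": 0.15}

# Relationship scores (0/1/3/9 as in classic QFD) of option vs criterion.
options = {
    "web_storefront":  {"financial": 9, "customer": 9, "internal_process": 3, "learning_growth": 1},
    "online_proofing": {"financial": 3, "customer": 9, "internal_process": 9, "learning_growth": 3},
    "e_procurement":   {"financial": 9, "customer": 1, "internal_process": 9, "learning_growth": 3},
}

def priority(scores: dict) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in sorted(options.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: {priority(scores):.2f}")
```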

Relevance: 100.00%

Publisher:

Abstract:

In recent years, freshwater fish farmers have come under increasing pressure from the Water Authorities to control the quality of their farm effluents. This project aimed to investigate methods of treating aquacultural effluent in an efficient and cost-effective manner, and to incorporate the knowledge gained into an Expert System which could then be used in an advice service to farmers. From the results of this research it was established that sedimentation and the use of low pollution diets are the only cost effective methods of controlling the quality of fish farm effluents. Settlement has been extensively investigated and it was found that the removal of suspended solids in a settlement pond is only likely to be effective if the inlet solids concentration is in excess of 8 mg/litre. The probability of good settlement can be enhanced by keeping the ratio of length/retention time (a form of mean fluid velocity) below 4.0 metres/minute. The removal of BOD requires inlet solids concentrations in excess of 20 mg/litre to be effective, and this is seldom attained on commercial fish farms. Settlement, generally, does not remove appreciable quantities of ammonia from effluents, but algae can absorb ammonia by nutrient uptake under certain conditions. The use of low pollution, high performance diets gives pollutant yields which are low when compared with published figures obtained by many previous workers. Two Expert Systems were constructed, both of which diagnose possible causes of poor effluent quality on fish farms and suggest solutions. The first system uses knowledge gained from a literature review and the second employs the knowledge obtained from this project's experimental work. Consent details for over 100 fish farms were obtained from the public registers kept by the Water Authorities. Large variations in policy from one Authority to the next were found. These data have been compiled in a computer file for ease of comparison.
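
A minimal sketch of how the settlement findings above might be encoded as diagnostic rules, in the spirit of (but not reproducing) the project's Expert Systems:

```python
# Rule-of-thumb settlement diagnostics encoding the thresholds reported
# above; a toy stand-in for the project's Expert Systems, not their rules.

def settlement_advice(inlet_solids_mg_l: float, length_m: float,
                      retention_min: float) -> list:
    advice = []
    if inlet_solids_mg_l <= 8:
        advice.append("Inlet solids <= 8 mg/l: solids removal unlikely to be effective.")
    if inlet_solids_mg_l <= 20:
        advice.append("Inlet solids <= 20 mg/l: BOD removal unlikely to be effective.")
    if length_m / retention_min > 4.0:  # mean fluid velocity in m/min
        advice.append("Length/retention time > 4.0 m/min: reduce mean fluid velocity.")
    return advice or ["Settlement pond parameters look favourable."]

for line in settlement_advice(inlet_solids_mg_l=6, length_m=30, retention_min=5):
    print(line)
```

Here 30 m / 5 min = 6.0 m/min, so the velocity rule fires alongside both concentration rules.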

Relevance: 100.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while the plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned; current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
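
To make the idea of redefining size metrics for Prolog concrete, here is a toy counter of two simple measures (non-comment lines and clause count); the thesis's actual redefined counts for Halstead, McCabe and fan-in/out are more involved:

```python
# Toy size metrics for Prolog source: non-blank, non-comment lines and
# clause terminators. Illustrative only; not the thesis's metric counts.

def prolog_size_metrics(source: str) -> dict:
    lines = [ln for ln in source.splitlines()
             if ln.strip() and not ln.strip().startswith('%')]
    clauses = sum(ln.rstrip().endswith('.') for ln in lines)
    return {"loc": len(lines), "clauses": clauses}

example = """\
% factorial/2
factorial(0, 1).
factorial(N, F) :-
    N > 0,
    N1 is N - 1,
    factorial(N1, F1),
    F is N * F1.
"""
print(prolog_size_metrics(example))  # {'loc': 6, 'clauses': 2}
```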

Relevance: 100.00%

Publisher:

Abstract:

Engineers' logbooks are an important part of the CDIO process, as a prequel to the logbooks they will be expected to keep in industry. Previously, however, students' logbooks were insufficient, and students did not appear to appreciate the importance of the logbooks or how they would be assessed. In an attempt to improve the students' understanding and the quality of their logbooks, a group of ~100 first-year CDIO students were asked to develop a marking matrix collaboratively with the tutors. The anticipated outcome was that students would have more ownership of, and a deeper understanding of, the logbook and what is expected from the student during assessment. A revised marking matrix was developed in class, and a short questionnaire was administered on delivery of the adapted matrix to gauge the students' response to the process. Marks from the logbooks were collected twice during teaching periods one and two and compared with marks from previous years. This poster will present the methodology and outcomes of this venture.

Relevance: 100.00%

Publisher:

Abstract:

Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise, and allow interrogation of, the key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated into the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby, it is hoped, improve user recognition of dataset quality. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference in a third user study to arrive at a final graphical GEO label representation. When integrated into GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are used to dynamically assess information availability and generate the GEO labels. The producer metadata document can be either a standard ISO-compliant metadata record supplied with the dataset or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality allowing a user to navigate to a GEO label page offering detailed quality information for the associated dataset. At this stage, we are developing the GEO label service that will provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into GEOSS.
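
A simplified sketch of how availability of the eight informational facets might be assessed from a metadata record (the field names are assumptions for illustration, not the actual ISO/GeoViQua schema):

```python
# Check which of the eight GEO label facets have backing information in
# a (simplified) metadata record. Field names are assumed for the sketch.

FACETS = ["producer_information", "producer_comments", "standards_compliance",
          "community_advice", "ratings", "citations", "expert_judgements",
          "quantitative_quality"]

def facet_availability(metadata: dict) -> dict:
    """True where a facet has any backing information in the record."""
    return {facet: bool(metadata.get(facet)) for facet in FACETS}

record = {"producer_information": "Example Producer",
          "standards_compliance": "ISO 19115",
          "ratings": [4, 5],
          "quantitative_quality": {"rmse": 0.3}}

available = facet_availability(record)
print(f"{sum(available.values())} of {len(FACETS)} facets available")
```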

Relevance: 100.00%

Publisher:

Abstract:

A system for predicting the development of unstable processes is presented, based on a decision-tree method. A technique for processing the expert information needed to construct and work with the decision tree is offered; in particular, the data are specified in fuzzy form. Original algorithms for searching out optimal development paths of the forecast process are described; they are oriented towards processing trees of large dimension with vector estimates on the arcs.
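
A toy version of searching for the best development path in such a tree, with vector arc estimates collapsed by a weighted sum before comparison (the weights, topology and values are invented):

```python
# Find the best root-to-leaf path in a tree whose arcs carry vector
# estimates, collapsing each vector with a weighted sum. A sketch of the
# idea only, not the paper's algorithms (which handle fuzzy data and
# trees of large dimension).

WEIGHTS = (0.5, 0.3, 0.2)  # assumed importance of the estimate components

def collapse(vec):
    return sum(w * v for w, v in zip(WEIGHTS, vec))

def best_path(tree, node):
    """Return (score, path) of the best path from `node` to a leaf."""
    children = tree.get(node, [])
    if not children:
        return 0.0, [node]
    options = []
    for child, arc_vec in children:
        score, path = best_path(tree, child)
        options.append((collapse(arc_vec) + score, [node] + path))
    return max(options)

# tree[node] -> list of (child, arc estimate vector); values invented.
tree = {"s0": [("s1", (0.9, 0.2, 0.5)), ("s2", (0.4, 0.8, 0.6))],
        "s1": [("s3", (0.7, 0.7, 0.1))]}
print(best_path(tree, "s0"))  # ~ (1.19, ['s0', 's1', 's3'])
```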

Relevance: 100.00%

Publisher:

Abstract:

The evaluation of geospatial data quality and trustworthiness presents a major challenge to geospatial data users when making a dataset selection decision. The research presented here therefore focused on defining and developing a GEO label – a decision support mechanism to assist data users in efficient and effective geospatial dataset selection on the basis of quality, trustworthiness and fitness for use. This thesis thus presents six phases of research and development conducted to: (1) identify the informational aspects upon which users rely when assessing geospatial dataset quality and trustworthiness; (2) elicit initial user views on the GEO label role in supporting dataset comparison and selection; (3) evaluate prototype label visualisations; (4) develop a Web service to support GEO label generation; (5) develop a prototype GEO label-based dataset discovery and intercomparison decision support tool; and (6) evaluate the prototype tool in a controlled human-subject study. The results of the studies revealed, and subsequently confirmed, eight geospatial data informational aspects that were considered important by users when evaluating geospatial dataset quality and trustworthiness, namely: producer information, producer comments, lineage information, compliance with standards, quantitative quality information, user feedback, expert reviews, and citations information. Following an iterative user-centred design (UCD) approach, it was established that the GEO label should visually summarise availability and allow interrogation of these key informational aspects. A Web service was developed to support generation of dynamic GEO label representations and integrated into a number of real-world GIS applications. The service was also utilised in the development of the GEO LINC tool – a GEO label-based dataset discovery and intercomparison decision support tool. The results of the final evaluation study indicated that (a) the GEO label effectively communicates the availability of dataset quality and trustworthiness information and (b) GEO LINC successfully facilitates ‘at a glance’ dataset intercomparison and fitness-for-purpose-based dataset selection.

Relevance: 100.00%

Publisher:

Abstract:

Background: Food allergy is often a life-long condition that requires constant vigilance in order to prevent accidental exposure and avoid potentially life-threatening symptoms. Parents' confidence in managing their child's food allergy may relate to the poor quality of life, anxiety and worry reported by parents of food-allergic children. Objective: The aim of the current study was to develop and validate the first scale to measure parental confidence (self-efficacy) in managing food allergy in their child. Methods: The Food Allergy Self-Efficacy Scale for Parents (FASE-P) was developed through interviews with 53 parents and consultation of the literature and of experts in the area. The FASE-P was then completed by 434 parents of food-allergic children from a general population sample, in addition to the General Self-Efficacy Scale (GSES), the Food Allergy Quality of Life Parental Burden Scale (FAQL-PB), the General Health Questionnaire (GHQ12) and the Food Allergy Impact Measure (FAIM). A total of 250 parents completed the re-test of the FASE-P. Results: Factor and reliability analysis resulted in a 21-item scale with 5 sub-scales. The overall scale and sub-scales have good to excellent internal consistency (α's of 0.63-0.89) and the scale is stable over time. There were low to moderate significant correlations with the GSES, FAIM and GHQ12 and strong correlations with the FAQL-PB, with better parental confidence relating to better general self-efficacy, better quality of life and better mental health in the parent. Poorer self-efficacy was related to egg and milk allergy; self-efficacy was not related to severity of allergy. Conclusions and clinical relevance: The FASE-P is a reliable and valid scale for use with parents from a general population. Its application within clinical settings could aid the provision of advice and improve targeted interventions by identifying areas where parents have less confidence in managing their child's food allergy.
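
For illustration, the validity correlations reported above are plain Pearson correlations between scale totals; a minimal sketch with invented scores:

```python
# Pearson correlation between FASE-P totals and another scale's totals,
# as used for the validity checks above. The scores are invented.
from scipy.stats import pearsonr

fase_p_totals = [62, 71, 55, 80, 68, 74, 59, 66]
gses_totals   = [30, 34, 26, 38, 31, 35, 27, 32]

r, p = pearsonr(fase_p_totals, gses_totals)
print(f"r = {r:.2f}, p = {p:.3f}")
```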

Relevance: 100.00%

Publisher:

Abstract:

This dissertation tested the effectiveness of a psychosocial intervention, the Personal Development in the Context of Relationships (PDCR) program. The PDCR seeks to foster the development (or enhancement) of a sense of identity and intimacy among adolescents who participate in the program. It is a psychosocial group intervention which uses interpersonal relationship issues as a context to foster personal development in identity formation and to facilitate the development of an individual's capacity for intimacy. The PDCR uses intervention strategies which include skills and knowledge development, experiential group exercises, and exploration for insight. Participants consisted of 110 late adolescents. A mixed-subjects design (pre/post/follow-up) was used to assess the effectiveness, efficacy and utility of the PDCR in the experimental condition relative to a content/social-contact control group and a time control condition. Identity exploration and identity commitment were measured by the Ego Identity Process Questionnaire (EIPQ). Total intimacy and identity role satisfaction were measured by the Erikson Psychosocial Stage Inventory (EPSI). Relationship quality and closeness were measured by the Relationship Quality Scale (RQS) and the Relationship Closeness Inventory (RCI), in an effort to assess whether any impact on interpersonal relationships occurred. Mixed MANOVAs were used to analyze the data; results yielded significant increases in total identity exploration from pre- to post-test and decreases in total identity commitment from pre- to post- to follow-up test in the experimental group relative to the control conditions on the EIPQ. Further results indicated increases in total intimacy from pre- to post- to follow-up test in the experimental group relative to the control conditions on the EPSI. No clear trends emerged from pre- to post- to follow-up test for the relationship measures. Results are discussed in terms of both practical and theoretical implications.
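
A minimal sketch of a between-groups multivariate test in the spirit of the analyses above (a simplification of the full mixed pre/post/follow-up design; the data are invented):

```python
# Between-groups MANOVA on two outcome measures using statsmodels.
# A simplification of the study's mixed design; all data are invented.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

data = pd.DataFrame({
    "exploration": [3.1, 2.8, 3.6, 3.3, 2.4, 2.1, 2.0, 2.3],
    "intimacy":    [4.0, 3.7, 4.2, 3.9, 3.0, 2.9, 3.2, 3.0],
    "group": ["pdcr"] * 4 + ["control"] * 4,
})

# Multivariate test of group differences on both outcomes jointly.
fit = MANOVA.from_formula("exploration + intimacy ~ group", data=data)
print(fit.mv_test())
```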

Relevance: 100.00%

Publisher:

Abstract:

Objectionable odors remain at the top of air pollution complaints in urban areas such as Broward County, which is subject to increasing residential and industrial development. Odor complaints in Broward County escalated by 150 percent over the 2001 to 2004 period, although the population increased by only 6 percent; it is estimated that by 2010 the population will increase to 2.5 million. Relying solely on enforcing the local odor ordinance is evidently not sufficient to manage the escalating complaint trends. An alternative approach, similar to the odor management plans (OMPs) that have been successful in managing major malodor sources such as animal farms, is required. This study aims to develop, and determine the feasibility of implementing, a comprehensive odor management plan (COMP) for the entire Broward County. Unlike existing OMPs for single sources, where the receptors (i.e. the complainants) are located beyond the boundary of the source, the COMP addresses a complex model of multiple sources and receptors coexisting within the boundary of the entire county. Each receptor is potentially subjected to malodor emissions from multiple sources within the county, and the quantity and quality of the source/receptor variables are continuously changing. The results of this study show that it is feasible to develop a COMP that adopts a systematic procedure to: (1) generate maps of existing odor complaint areas and malodor sources; (2) identify potential odor sources (target sources) responsible for existing odor complaints; (3) identify possible odor control strategies for target sources; (4) determine the criteria for implementing odor control strategies; (5) develop an odor complaint response protocol; and (6) conduct odor impact analyses for new sources to prevent future odor-related issues. A Geographic Information System (GIS) is used to identify existing complaint areas. COMP software that incorporates existing United States Environmental Protection Agency (EPA) air dispersion software was developed to determine the target sources, predict the likelihood of new complaints, and conduct odor impact analysis. The odor response protocol requires pre-planned field investigations and surveys to optimize the local agency's available resources while protecting citizens' welfare, as required by the Clean Air Act.
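
A toy illustration of the kind of GIS-style aggregation behind complaint-area mapping: binning complaint coordinates into a grid and reporting the densest cells (the points and cell size are invented, not the study's data or method):

```python
# Bin odor-complaint coordinates into a coarse grid and report the
# densest cells; a toy stand-in for the GIS mapping step.
from collections import Counter

complaints = [(26.12, -80.14), (26.13, -80.15), (26.12, -80.15),
              (26.20, -80.10), (26.12, -80.14), (26.21, -80.11)]

CELL = 0.05  # grid cell size in degrees (assumed)

def cell_of(lat, lon):
    return (round(lat / CELL) * CELL, round(lon / CELL) * CELL)

counts = Counter(cell_of(lat, lon) for lat, lon in complaints)
for cell, n in counts.most_common(2):
    print(f"hotspot cell {cell}: {n} complaints")
```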

Relevance: 100.00%

Publisher:

Abstract:

Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented from investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), which is a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments containing homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationship between crashes and traffic exposure is nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, and the ZINB was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit for most of the roadway categories, whose count data exhibit excess zeros and over-dispersion. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
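
As a sketch of the model-fitting step, a Negative Binomial SPF linking segment crashes to traffic exposure can be fitted with statsmodels (the data frame, column names and values below are invented, not the dissertation's data):

```python
# Fit a Negative Binomial safety performance function (SPF): crashes as
# a function of log traffic exposure. Data and names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

segments = pd.DataFrame({
    "crashes": [0, 2, 1, 5, 3, 0, 7, 4],
    "aadt":    [4000, 12000, 8000, 30000, 18000, 5000, 45000, 22000],
})
# Log exposure captures the nonlinear crash-exposure relationship.
segments["log_aadt"] = np.log(segments["aadt"])

spf = smf.glm("crashes ~ log_aadt", data=segments,
              family=sm.families.NegativeBinomial()).fit()
print(spf.params)  # intercept and exposure coefficient
```

The EB estimate for a site would then blend this SPF prediction with the site's observed crash count, with weights governed by the Negative Binomial dispersion.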