968 results for Improvement programs
Improvement of direct methanol fuel cell performance by modifying catalyst coated membrane structure
Abstract:
A five-layer catalyst coated membrane (CCM) based on Nafion 115 membrane for direct methanol fuel cells (DMFC) was designed and fabricated by introducing a modified Nafion layer between the membrane and the catalyst layer. The properties of the CCM were determined by SEM, cyclic voltammetry, impedance spectroscopy, durability tests and I-V curves. The characterizations show that the modified Nafion layers provide increased interface contact area and enhanced interaction between the membrane and the catalyst layer. As a result, higher Pt utilization, lower contact resistance and superior durability of the membrane electrode assembly were achieved. A 75% Pt utilization efficiency was obtained by using the novel CCM structure, whereas the conventional structure gave 60% efficiency. All these features contribute greatly to the increase in DMFC performance. The DMFC with the new CCM structure presented a maximum power density of 260 mW cm(-2), whereas the DMFC with the conventional structure gave only 200 mW cm(-2) under the same operating conditions. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
The Girolando breed progeny test was established in 1997 as a result of the partnership between Girolando and Embrapa Dairy Cattle. In 2007, the Programa de Melhoramento Genético da Raça Girolando – PMGG (Genetic Improvement Program of the Girolando Breed) was implemented. Besides interacting with previously existing initiatives of the Girolando Breeders Association, such as the genealogical register service, the progeny test and the dairy control service, the PMGG launched the Linear Evaluation System (SLAG). The main objectives of the PMGG comprise the identification of genetically superior individuals, the technically oriented multiplication of genetics, the evaluation of economic traits and the promotion of sustainable dairy activities. The program has yielded impressive results: semen sales for the Girolando breed are increasing faster than those of any other breed in Brazil.
Abstract:
There are a variety of guidelines and methods available to measure and assess survey quality. Most of these are based on qualitative descriptions; in practice they are not easy to implement, and it is very difficult to make comparisons between surveys. Hence there is a theoretical and pragmatic demand for a mainly quantitative survey assessment tool. This research aimed to meet this need and to contribute to the evaluation and improvement of survey quality. Acknowledging the critical importance of measurement issues in survey research, this thesis starts with a comprehensive introduction to measurement theory and identifies the types of measurement errors associated with measurement procedures through three experiments. It then describes the concepts, guidelines and methods available for measuring and assessing survey quality. Combining these with measurement principles leads to the development of a quantitatively based, holistic statistical tool to measure and assess survey quality. The criteria, weights and subweights for the assessment tool are determined using Multi-Criteria Decision-Making (MCDM) and a survey questionnaire based on the Delphi method. Finally the model is applied to a database of surveys constructed to develop methods of classification, assessment and improvement of survey quality. The model developed in this thesis enables survey researchers and/or commissioners to make a holistic assessment of the value of particular surveys. It is an Excel-based audit which follows all stages of the survey from inception through design, construction, execution, analysis and dissemination. At each stage a set of criteria is applied to assess quality; the scores attained are weighted by the importance of the criteria and summed to give an overall assessment of the stage. The total score for a survey is obtained by combining the stage scores, weighted again by the importance of each stage. The result is a means of survey assessment that can be used diagnostically to assess and improve survey quality, as in the sketch below.
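A minimal sketch of the two-level weighted aggregation described above, in Rust. The stage names, weights and scores are illustrative assumptions only; the thesis derives its actual criteria and weights via MCDM and the Delphi method.

```rust
// Each stage is scored on a set of criteria; criterion scores are weighted
// and summed into a stage score, and stage scores are weighted again into
// the overall survey quality score.
struct Criterion {
    score: f64,  // assessed score for this criterion (e.g. 0.0..=10.0)
    weight: f64, // importance weight within the stage (sums to 1 per stage)
}

struct Stage {
    name: &'static str,
    weight: f64, // importance weight of the stage (stage weights sum to 1)
    criteria: Vec<Criterion>,
}

fn stage_score(stage: &Stage) -> f64 {
    stage.criteria.iter().map(|c| c.score * c.weight).sum()
}

fn survey_score(stages: &[Stage]) -> f64 {
    stages.iter().map(|s| stage_score(s) * s.weight).sum()
}

fn main() {
    // Illustrative numbers only; stages and weights are hypothetical.
    let stages = vec![
        Stage {
            name: "design",
            weight: 0.4,
            criteria: vec![
                Criterion { score: 8.0, weight: 0.5 },
                Criterion { score: 6.0, weight: 0.5 },
            ],
        },
        Stage {
            name: "execution",
            weight: 0.6,
            criteria: vec![Criterion { score: 7.0, weight: 1.0 }],
        },
    ];
    for s in &stages {
        println!("{} stage score: {:.2}", s.name, stage_score(s));
    }
    println!("overall survey score: {:.2}", survey_score(&stages));
}
```

With these illustrative numbers the design stage scores 7.00, the execution stage 7.00, and the overall score is 7.00; a low overall score can be traced back to the weakest stage and criterion, which is what makes the audit diagnostic.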
Abstract:
Sanders, K. and Thomas, L. 2007. Checklists for grading object-oriented CS1 programs: concepts and misconceptions. In Proceedings of the 12th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (Dundee, Scotland, June 25-27, 2007). ITiCSE '07. ACM, New York, NY, 166-170.
Abstract:
To evaluate critical exposure levels and the reversibility of lead neurotoxicity, a group of lead-exposed foundry workers and an unexposed reference population were followed up for three years. During this period, tests designed to monitor neurobehavioural function and lead dose were administered. Evaluations of 160 workers during the first year showed dose-dependent decrements in mood, visual/motor performance, memory, and verbal concept formation. Subsequently, an improvement in the hygienic conditions at the plant resulted in striking reductions in blood lead concentrations over the following two years. Attendant improvements in indices of tension (20% reduction), anger (18%), depression (26%), fatigue (27%), and confusion (13%) were observed. Performance on neurobehavioural testing generally correlated best with integrated dose estimates derived from blood lead concentrations measured periodically over the study period; zinc protoporphyrin levels were less well correlated with function. This investigation confirms the importance of compliance with workplace standards designed to lower exposures so that individual blood lead concentrations remain below 50 micrograms/dl.
Abstract:
Paper published in PLoS Medicine in 2007.
Abstract:
Predictability -- the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements -- is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems -- possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing -- cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems -- not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the Cleopatra programming language. Cleopatra features a C-like imperative syntax for the description of computation, which makes it easier to incorporate into applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. Cleopatra is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of Cleopatra has been in use as a specification and simulation language for embedded time-critical robotic processes.
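Cleopatra's own syntax is not shown in this abstract. As a loose, hypothetical illustration of the style of computation the TRA model prescribes -- state changes only as causal reactions to input events, each carrying a time bound -- here is a generic reactive-automaton sketch in Rust (not Cleopatra; all names are assumptions):

```rust
// A hypothetical reactive automaton: transitions fire only in response
// to input events (causal, no "clairvoyant" behaviour), and each
// reaction carries a deadline within which it must complete.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State { Idle, Active }

#[derive(Debug, Clone, Copy)]
enum Event { Start, Stop }

struct Automaton {
    state: State,
    deadline_ms: u64, // time bound on each reaction in a responsive system
}

impl Automaton {
    fn react(&mut self, ev: Event) {
        self.state = match (self.state, ev) {
            (State::Idle, Event::Start) => State::Active,
            (State::Active, Event::Stop) => State::Idle,
            (s, _) => s, // no enabled transition: state is unchanged
        };
    }
}

fn main() {
    let mut a = Automaton { state: State::Idle, deadline_ms: 10 };
    println!("reaction deadline: {} ms", a.deadline_ms);
    for ev in [Event::Start, Event::Stop] {
        a.react(ev);
        println!("after {:?}: {:?}", ev, a.state);
    }
}
```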
Abstract:
Generic object-oriented programming languages combine parametric polymorphism and nominal subtype polymorphism, thereby providing better data abstraction, greater code reuse, and fewer run-time errors. However, most generic object-oriented languages provide a straightforward combination of the two kinds of polymorphism, which prevents the expression of advanced type relationships. Furthermore, most generic object-oriented languages have a type-erasure semantics: instantiations of type parameters are not available at run time, and thus may not be used by type-dependent operations. This dissertation shows that two features, which allow the expression of many advanced type relationships, can be added to a generic object-oriented programming language without type erasure: 1. type variables that are not parameters of the class that declares them, and 2. extension that is dependent on the satisfiability of one or more constraints. We refer to the first feature as hidden type variables and the second feature as conditional extension. Hidden type variables allow: covariance and contravariance without variance annotations or special type arguments such as wildcards; a single type to extend, and inherit methods from, infinitely many instantiations of another type; a limited capacity to augment the set of superclasses of a class after that class is defined; and the omission of redundant type arguments. Conditional extension allows the properties of a collection type to be dependent on the properties of its element type. This dissertation describes the semantics and implementation of hidden type variables and conditional extension. A sound type system is presented. In addition, a sound and terminating type checking algorithm is presented. Although designed for the Fortress programming language, hidden type variables and conditional extension can be incorporated into other generic object-oriented languages. Many of the same problems would arise, and solutions analogous to those we present would apply.
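The dissertation's mechanisms are specific to Fortress, but Rust's conditional trait implementations give a rough analogue of conditional extension (this is an analogy, not the dissertation's feature): a collection type gains a capability only when its element type satisfies a constraint.

```rust
use std::fmt;

// A simple wrapper collection.
struct Bag<T> {
    items: Vec<T>,
}

// Conditional extension, by analogy: Bag<T> is printable only when its
// element type T is printable. The property of the collection depends on
// a constraint on the element type.
impl<T: fmt::Display> fmt::Display for Bag<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let parts: Vec<String> = self.items.iter().map(|x| x.to_string()).collect();
        write!(f, "[{}]", parts.join(", "))
    }
}

fn main() {
    let b = Bag { items: vec![1, 2, 3] };
    println!("{}", b); // prints [1, 2, 3]
    // A Bag<fn()> can still be constructed, but printing it would not
    // compile: fn() lacks Display, so the conditional impl does not apply.
}
```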
Abstract:
A method is presented for converting unstructured program schemas to strictly equivalent structured form. The predicates of the original schema are left intact, with structuring achieved by duplicating the original decision vertices without introducing compound predicate expressions, or, where possible, by function duplication alone. It is shown that structured schemas must have at least as many decision vertices as the original unstructured schema, and must have more when the original schema contains branches out of decision constructs. The structuring method allows function duplication to be avoided completely, but only at the expense of decision-vertex duplication. It is shown that structured schemas in general have greater space-time requirements than their equivalent optimal unstructured counterparts, and at best have the same requirements.
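As a hedged illustration (not the paper's construction): a schema that branches out of a decision construct inside a loop can be structured by duplicating a decision vertex, with no flag variables. The compound loop predicate in the sketch below is a simplification for brevity; the paper's method specifically avoids compound predicates through further vertex duplication, and the schema setting assumes side-effect-free predicates, which makes re-testing them safe.

```rust
use std::cell::Cell;

// Unstructured schema (flowchart form), with a branch out of a decision:
//   L: if !C goto E;  if P { S1; goto E } else { S2; goto L }   E: S3
//
// Structured equivalent: the decision vertex C appears twice and no
// auxiliary flag is introduced. (The compound condition `c() && !p()`
// is a simplification; see the note above.)
fn structured(
    c: impl Fn() -> bool,  // predicate C, assumed side-effect-free
    p: impl Fn() -> bool,  // predicate P, test on the exit branch
    s1: impl FnOnce(),     // action on the branch out of the decision
    mut s2: impl FnMut(),  // loop body action
    s3: impl FnOnce(),     // action at the common exit
) {
    while c() && !p() {
        s2();
    }
    if c() {
        // The loop can only end with C still true if P held, so this
        // duplicated test of C recovers the exit branch.
        s1();
    }
    s3();
}

fn main() {
    let n = Cell::new(0);
    structured(
        || n.get() < 3,                 // C: keep looping while n < 3
        || false,                       // P: early-exit branch never fires here
        || println!("S1: early exit"),
        || n.set(n.get() + 1),          // S2: loop body
        || println!("S3: done, n = {}", n.get()),
    );
}
```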
Abstract:
Coeliac disease is one of the most common food intolerances worldwide, and at present the gluten-free diet remains the only suitable treatment. A market overview conducted as part of this thesis on the nutritional and sensory quality of commercially available gluten-free breads and pasta showed that improvements are necessary: many products show strong off-flavours, poor mouthfeel and reduced shelf-life. Since the life-long avoidance of the cereal protein gluten means a major change to the diet, it is important to also consider the nutritional value of products intended to replace staple foods such as bread or pasta. This thesis addresses this issue by characterising available gluten-free cereal and pseudocereal flours to facilitate a better choice of raw materials. It was observed that quinoa, buckwheat and teff in particular are high in essential nutrients such as protein, minerals and folate. In addition, the potential of functional ingredients such as inulin, β-glucan, HPMC and xanthan to improve loaf quality was evaluated. Results show that these ingredients can increase loaf volume and reduce crumb hardness as well as the rate of staling, but that the effect varies strongly depending on the bread formulation used. Furthermore, fresh egg pasta formulations based on teff and oat flour were developed, and the resulting products were characterised with regard to sensory and textural properties as well as in vitro digestibility. Scanning electron and confocal laser scanning microscopy were used throughout the thesis to visualise structural changes occurring during baking and pasta making.
Abstract:
Organizations that leverage lessons learned from their experience in the practice of complex real-world activities face five difficult problems. First, how to represent the learning situation in a recognizable way. Second, how to represent what was actually done in terms of repeatable actions. Third, how to assess performance taking account of the particular circumstances. Fourth, how to abstract lessons learned that are re-usable on future occasions. Fifth, how to determine whether to pursue practice maturity or the strategic relevance of activities. Here, organizational learning and performance improvement are investigated in a field study using the Context-based Intelligent Assistant Support (CIAS) approach. A new conceptual framework for practice-based organizational learning and performance improvement is presented that helps researchers and practitioners address these problems and contributes to a practice-based approach to activity management. The novelty of the research lies in the simultaneous study of the different levels involved in the activity. Route selection in light rail infrastructure projects involves practices at both the strategic and operational levels; it is part managerial/political and part engineering. Aspectual comparison of practices represented in Contextual Graphs constitutes a new approach to the selection of Key Performance Indicators (KPIs). This approach is free from causality assumptions and forms the basis of a new approach to practice-based organizational learning and performance improvement. The evolution of practices in contextual graphs is shown to be an objective and measurable expression of organizational learning. This diachronic representation is interpreted using a practice-based organizational learning novelty typology. This dissertation shows how lessons learned, when effectively leveraged by an organization, lead to practice maturity. The practice maturity level of an activity, in combination with an assessment of the activity's strategic relevance, can be used by management to prioritize improvement effort.
Abstract:
Aim: To investigate the value of using PROMs as quality improvement tools. Methods: Two systematic reviews were undertaken. The first reviewed the quantitative literature on the impact of PROMs feedback, and the second reviewed the qualitative literature on the use of PROMs in practice. These reviews informed the focus of the primary research. A cluster randomised controlled trial (PROFILE) examined the impact of providing peer-benchmarked PROMs feedback to consultant orthopaedic surgeons on improving outcomes for hip replacement surgery. Qualitative interviews with surgeons in the intervention arm of the trial examined their views of, and reactions to, the feedback. Results: The quantitative review of 17 studies found weak evidence that providing PROMs feedback to professionals improves patient outcomes. The qualitative review of 16 studies identified the barriers and facilitators to the use of PROMs under four themes: practical considerations, attitudes towards the data, methodological concerns and the impact of feedback on care. The PROFILE trial included 11 surgeons and 215 patients in the intervention arm, and 10 surgeons and 217 patients in the control arm. The trial found no significant difference in the Oxford Hip Score between the arms (-0.7, 95% CI -1.9 to 0.5, p=0.2). Interviews with surgeons revealed mixed opinions about the value of the PROMs feedback, and the information did not prompt explicit changes to their practice. Conclusion: It is important to use PROMs which have been validated for the specific purpose of performance measurement, to consult with professionals when developing a PROMs feedback intervention, to communicate with professionals about the objectives of the data collection, to educate professionals on the properties and interpretation of the data, and to support professionals in using the information to improve care. It is also imperative that the burden of data collection and dissemination of the information is minimised.