936 results for Support unit costs
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The modalities of treatment with implant-supported prostheses for fully edentulous patients can be divided into fixed prostheses and removable prostheses (overdentures). The fixed modality has proven to offer better functional results and greater longevity. Overdentures are considered a rehabilitation modality used to compensate for the need for better retention, besides increasing chewing efficiency. Their use is justified by easier hygiene, reduced surgical and prosthetic costs, re-establishment of labial support, and elimination of the possibility of air escape. However, this option presents a psychosocial downside, since the fact that it is a removable rehabilitation modality does not please the majority of patients. Although many patients prefer a fixed implant-supported prosthesis to a removable overdenture, it is frequently necessary to use an implant-retained overdenture as an alternative treatment because of the patient's anatomical, physiological, aesthetic, hygienic, and financial limitations. The objective of this study was to discuss the clinical case of a partially edentulous patient treated in the Implantodontic Surgery Post-Graduation Course of the Kenedy Dentistry Institute (Mozarteum/Famosp Unit, Goiânia, Brazil) and submitted to surgical fixation of osseointegrated implants. A differential diagnosis was established after the osseointegration period, aimed at facilitating the choice of a rehabilitation model that could meet both the patient's and the professional's expectations. The aspects considered were therefore function, comfort, aesthetics and, especially, patient satisfaction.
Abstract:
Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has and can be expected to have.
Abstract:
OBJECTIVES: A number of complications exist with invasive mechanical ventilation and with the use of and withdrawal from prolonged ventilator support. The use of protocols that enable the systematic identification of patients eligible for an interruption in mechanical ventilation can significantly reduce the number of complications. This study describes the application of a weaning protocol and its results. METHODS: Patients who required invasive mechanical ventilation for more than 24 hours were included and assessed daily to identify individuals who were ready to begin the weaning process. RESULTS: We studied 252 patients with a median mechanical ventilation time of 3.7 days (interquartile range of 1 to 23 days), a median rapid shallow breathing index of 48, a median maximum inspiratory pressure of 40 cmH2O, and a median maximum expiratory pressure of 40 cmH2O. Of these 252 patients, 32 (12.7%) had to be reintubated, which represented weaning failure. Noninvasive ventilation was used postextubation in 170 (73%) patients, and 15% of these patients were reintubated, which also represented weaning failure. The mortality rate of the 252 patients studied was 8.73% (22 patients), and there was no significant difference in age, gender, mechanical ventilation time, or maximum inspiratory pressure between survivors and nonsurvivors. CONCLUSIONS: The use of a specific weaning protocol resulted in a lower mechanical ventilation time and an acceptable reintubation rate. This protocol can be used as a comparative index in hospitals to improve the weaning system, its monitoring and the informative reporting of patient outcomes, and may represent a future tool and source of quality markers for patient care.
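For reference, the rapid shallow breathing index reported above is conventionally defined as the ratio of respiratory frequency to tidal volume (this is the standard definition, not a detail taken from the study itself):

\[ \mathrm{RSBI} = \frac{f\ [\text{breaths/min}]}{V_T\ [\text{L}]} \]

Values well below the commonly cited threshold of about 105 breaths·min⁻¹·L⁻¹, such as the median of 48 reported here, are generally taken to favour weaning success.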
Abstract:
Insects are able to combat infection by initiating an efficient immune response that involves synthesizing antimicrobial peptides and a range of other defense molecules. These responses may be costly to the organism, causing it to exploit endogenous resources to maintain homeostasis or support defense to the detriment of other physiological needs. We used queenless worker bees on distinct dietary regimes that may alter hemolymph protein storage and ovary activation to investigate the physiological costs of infection with Serratia marcescens. The expression of the genes encoding the storage proteins vitellogenin and hexamerin 70a, the vitellogenin receptor, and vasa (which has a putative role in reproduction) was impaired in the infected bees. This impairment was mainly evident in the bees fed beebread, which caused significantly higher expression of these genes than did royal jelly or syrup, and this was confirmed at the vitellogenin and hexamerin 70a protein levels. Beebread was also the only diet that promoted ovary activation in the queenless bees, but this activation was significantly impaired by the infection. The expression of the genes encoding the storage proteins apolipophorin-I and -III and the lipophorin receptor was not altered by infection, regardless of the diet provided to the bees. Similarly, the storage of apolipophorin-I in the hemolymph was only slightly impaired by the infection, independently of the supplied diet. Taken together, these results indicate that infection exacts a physiological cost in the transcription of specific storage-protein-related genes and in reproductive capacity. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
OBJECTIVE: To evaluate the perception of social support and the relationship of sociodemographic, clinical and metabolic control variables in individuals with diabetes mellitus and foot ulcers in an outpatient unit. METHODS: A quantitative cross-sectional approach was carried out using a social support network inventory. RESULTS: Participants had a high perception of social support; family and health professionals were identified as the main support sources. Fasting plasma glucose values were directly related with social support. CONCLUSION: Family members were identified as the main support source, which emphasizes their importance in the health care process.
Abstract:
Modern food production is a complex, globalized system in which what we eat and how it is produced are increasingly disconnected. This thesis examines some of the ways in which global trade has changed the mix of inputs to food and feed, and how this affects food security and our perceptions of sustainability. One useful indicator of the ecological impact of trade in food and feed products is the Appropriated Ecosystem Areas (ArEAs), which estimates the terrestrial and aquatic areas needed to produce all the inputs to particular products. The method is introduced in Paper I and used to calculate and track changes in imported subsidies to Swedish agriculture over the period 1962-1994. In 1994, Swedish consumers needed agricultural areas outside their national borders to satisfy more than a third of their food consumption needs. The method is then applied to Swedish meat production in Paper II to show that the term “Made in Sweden” is often a misnomer. In 1999, almost 80% of manufactured feed for Swedish pigs, cattle and chickens was dependent on imported inputs, mainly from Europe, Southeast Asia and South America. Paper III examines ecosystem subsidies to intensive aquaculture in two nations: shrimp production in Thailand and salmon production in Norway. In both countries, aquaculture was shown to rely increasingly on imported subsidies. The rapid expansion of aquaculture turned these countries from fishmeal net exporters to fishmeal net importers, increasingly using inputs from the Southeastern Pacific Ocean. As the examined agricultural and aquacultural production systems became globalized, levels of dependence on other nations’ ecosystems, the number of external supply sources, and the distance to these sources steadily increased. Dependence on other nations is not problematic, as long as we are able to acknowledge these links and sustainably manage resources both at home and abroad. However, ecosystem subsidies are seldom recognized or made explicit in national policy or economic accounts. Economic systems are generally not designed to receive feedbacks when the status of remote ecosystems changes, much less to respond in an ecologically sensitive manner. Papers IV and V discuss the problem of “masking” of the true environmental costs of production for trade. One of our conclusions is that, while the ArEAs approach is a useful tool for illuminating environmentally-based subsidies in the policy arena, it does not reflect all of the costs. Current agricultural and aquacultural production methods have generated substantial increases in production levels, but if policy continues to support the focus on yield and production increases alone, taking the work of ecosystems for granted, vulnerability can result. Thus, a challenge is to develop a set of complementary tools that can be used in economic accounting at national and international scales that address ecosystem support and performance. We conclude that future resilience in food production systems will require more explicit links between consumers and the work of supporting ecosystems, locally and in other regions of the world, and that food security planning will require active management of the capacity of all involved ecosystems to sustain food production.
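As a toy illustration of the area-appropriation logic behind an indicator like the ArEAs described above, one can sum, over all inputs to a product, the quantity of each input divided by the per-hectare yield of the ecosystem that produced it. The function name and all figures below are illustrative assumptions, not values or methods taken from the thesis:

```python
# Illustrative sketch: appropriated ecosystem area as the sum of input
# quantities divided by the yields of the producing ecosystems.
# All names and numbers are hypothetical.
def appropriated_area(inputs, yields):
    """inputs: tonnes of each feed/food input; yields: tonnes per hectare."""
    return sum(qty / yields[name] for name, qty in inputs.items())

feed_inputs = {"soymeal": 120.0, "fishmeal": 15.0}    # tonnes (hypothetical)
ecosystem_yields = {"soymeal": 2.8, "fishmeal": 0.7}  # t/ha (hypothetical)
print(f"{appropriated_area(feed_inputs, ecosystem_yields):.0f} ha appropriated")
```

Tracking such a sum separately for domestic and imported inputs is what lets an analysis of this kind show how far "Made in Sweden" production depends on ecosystems abroad.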
Abstract:
The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to be subordinate more to the planting system than to light intensity. At the whole-plant level, an increase in irradiance does not always determine an improvement in productivity. One reason can be the plant's intrinsic inefficiency in using energy: generally, in full light, only 5-10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency therefore becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Over-excitation of chlorophyll promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex, multilevel machinery able to dissipate the excess energy by quenching it as heat (non-photochemical quenching), by moving electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and by scavenging the reactive species generated. The price plants must pay for this equipment is the consumption of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but raises the share of energy dissipated through the alternative pathways, together with ROS production and photoinhibition risks. Even this wide photo-protective apparatus is not always able to cope with the excessive incoming energy, and photodamage therefore occurs. Any event that increases the photon pressure and/or decreases the efficiency of the photo-protective mechanisms described (e.g. thermal stress, water or nutritional deficiency) can amplify photoinhibition. In nature only a small proportion of damaged photosystems is likely to be found, because of the effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve knowledge of the several strategies adopted for managing the incoming energy and of the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section, a new rapid, non-intrusive, whole-tissue and universal technique for determining functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, monocots and dicots. In the second unit, using a "singular" experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of variation in photon pressure on energy management was considered at the single-leaf level. In the third section, the quenching analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach and then applied in the field, where the influence of moderate light and water reduction on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
For plants, using solar energy as the fuel of life is intrinsically hazardous, given the constantly high risk of photodamage. This dissertation attempts to highlight the complex relation existing between plants, peach in particular, and light, analysing the principal strategies plants have developed to manage incoming light so as to derive the maximum possible benefit while minimizing the risks. First, the new method proposed for functional PSII determination, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, not least because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool for analysing and studying, even in the field, the relation between the plant and environmental factors such as water and temperature but, above all, light. The "asymmetric" training system is a good way to study the relations among light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light promotes not an improvement in net carboxylation but PSII damage: in the most light-exposed plants, about 50-60% of total PSII is inactivated. At the single-leaf level, net carboxylation increases up to a saturation point (1000-1200 μmol m-2 s-1), and excess light is dissipated by non-photochemical quenching and by non-net-carboxylative electron transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance, NPQ seems to be lumen-pH limited, because the incoming photon pressure is not sufficient to generate the lumen pH required for full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with the excess light by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is increasingly activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen to support VDE activation when light may be limiting. Moreover, the alternative transports seem to act as an important dissipative route when high temperature and sub-optimal conductance heighten the risk of photoinhibition. In peach, a moderate reduction in water and light does not decrease net carboxylation; by diminishing the incoming light and the evapo-transpirative demand, it lowers stomatal conductance, improving water-use efficiency. Therefore, by lowering light intensity to levels that are still non-limiting, water can be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light.
In this experiment too, under over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH-limited, and other routes, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoid ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The observation that only a small amount of damaged PSII is seen in nature indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance, to be widened further, is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on peach light, water and photosynthate management.
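For orientation, a widely used form of the quenching analysis referred to above (Hendrickson et al. 2004, on which the Kornyeyev and Hendrickson approach builds) partitions absorbed light energy into photochemistry, regulated thermal dissipation (NPQ) and constitutive losses, using steady-state and maximal fluorescence. It is shown here as a general reference rather than as the exact formulation adopted in the thesis:

\[ \Phi_{PSII} = 1 - \frac{F_s}{F_m'}, \qquad \Phi_{NPQ} = \frac{F_s}{F_m'} - \frac{F_s}{F_m}, \qquad \Phi_{f,D} = \frac{F_s}{F_m} \]

so that \( \Phi_{PSII} + \Phi_{NPQ} + \Phi_{f,D} = 1 \), i.e. the three fractions account for all absorbed energy.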
Abstract:
Bioinformatics is a recent, emerging discipline that aims at studying biological problems through computational approaches. Most branches of bioinformatics, such as genomics, proteomics and molecular dynamics, are particularly computationally intensive, requiring huge amounts of computational resources for running algorithms of ever-increasing complexity over data of ever-increasing size. In the search for computational power, the EGEE Grid platform, the world's largest community of interconnected clusters load-balanced as a whole, seems particularly promising and is considered the new hope for satisfying the ever-increasing computational requirements of bioinformatics, as well as of physics and other computational sciences. The EGEE platform, however, is rather new and not yet free of problems. In addition, specific requirements of bioinformatics need to be addressed in order to use this new platform effectively for bioinformatics tasks. In my three years of Ph.D. work I addressed numerous aspects of this Grid platform, with particular attention to those needed by the bioinformatics domain. I created three major frameworks, Vnas, GridDBManager and SETest, plus an additional smaller standalone solution, to enhance the support for bioinformatics applications in the Grid environment and to reduce the effort needed to create new applications, additionally addressing numerous existing Grid issues and performing a series of optimizations. The Vnas framework is an advanced system for the submission and monitoring of Grid jobs that provides an abstraction with reliability over the Grid platform. In addition, Vnas greatly simplifies the development of new Grid applications by providing a callback system that eases the creation of arbitrarily complex multistage computational pipelines, and it provides an abstracted virtual sandbox that bypasses Grid limitations. Vnas also reduces the usage of Grid bandwidth and storage resources by transparently detecting the equality of virtual sandbox files based on their content, across different submissions, even when performed by different users. BGBlast, an evolution of the earlier GridBlast project, now provides a Grid Database Manager (GridDBManager) component for managing and automatically updating biological flat-file databases in the Grid environment. GridDBManager offers novel features such as an adaptive replication algorithm that constantly optimizes the number of replicas of the managed databases in the Grid environment, balancing response times (performance) against storage costs according to a programmed cost formula. GridDBManager also provides highly optimized automated management of older versions of the databases based on reverse delta files, which reduces the storage costs required to keep such older versions available in the Grid environment by two orders of magnitude. The SETest framework provides a way for the user to test and regression-test Python applications riddled with side effects (a common case with Grid computational pipelines), which could not easily be tested using the more standard methods of unit testing or test cases. The technique is based on a new concept of datasets containing invocations and results of filtered calls. The framework hence significantly accelerates the development of new applications and computational pipelines for the Grid environment and reduces the effort required for their maintenance. An analysis of the impact of these solutions is provided in this thesis. This Ph.D. work originated various publications in journals and conference proceedings, as reported in the Appendix. I also presented my work orally at numerous international conferences related to Grid computing and bioinformatics.
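A minimal sketch of the kind of trade-off an adaptive replication policy like GridDBManager's might optimize follows. The 1/n latency model, the weights and all constants are illustrative assumptions; the thesis's actual programmed cost formula is not specified here:

```python
# Illustrative: choose the replica count that minimizes a weighted sum of
# expected response time (improves with more replicas) and storage cost
# (grows linearly with replicas). All constants are hypothetical.
def total_cost(replicas, db_size_gb, base_latency_s=30.0,
               storage_cost_per_gb=0.1, latency_weight=1.0):
    response = base_latency_s / replicas          # crude 1/n latency model
    storage = storage_cost_per_gb * db_size_gb * replicas
    return latency_weight * response + storage

best = min(range(1, 21), key=lambda n: total_cost(n, db_size_gb=5.0))
print(f"optimal replica count under these assumptions: {best}")
```

The point of such a formula is that re-evaluating it as load and storage prices change lets the replica count adapt automatically rather than being fixed by hand.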
Abstract:
In the last years of research, I focused my studies on different physiological problems. Together with my supervisors, I developed or improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as the head-up position and intracranial haemorrhage. The model performed in a clinically realistic way given inputs from published data on traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal outflow). Based on the results, we believe the model would be useful for teaching the complex relationships of brain haemodynamics and for studying clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres). II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, owing to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory, with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow.
Conversely, the distal collateral circulation plays the major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved but may also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions. III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (the Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and intracranial pressure dynamics (the same model as in the first paragraph). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to the vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Patients with severe head injury were then simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. Under these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings. IV. A Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, the simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate.
The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments. In the latter case they suggest experiments to be performed to gather the missing data.
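As a point of reference for the intracranial pressure-volume relationship mentioned in parts I and II, intracranial compliance is commonly modelled as inversely proportional to pressure, which yields a monoexponential pressure-volume curve. This is a textbook formulation assumed here for illustration, not necessarily the exact one used in these models:

\[ C_{ic} = \frac{1}{k_E\,P_{ic}} \quad\Longrightarrow\quad P_{ic}(V) = P_{ic,0}\,e^{\,k_E\,\Delta V} \]

where \(k_E\) is the elastance coefficient and \(\Delta V\) the added intracranial volume; the exponential form is what makes small volume additions dangerous at already elevated pressures.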
Abstract:
In recent years an ever-increasing degree of automation has been observed in industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated that around 350 types of manufacturing machine exist. A large number of manufacturing-machine industries are present in Italy, notably the packaging-machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activity inherent in automating the machine cycles, the supervisory system is called upon to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and crucial functional flexibility; dynamically adapting the control strategies to different productive needs and operational scenarios; obtaining high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, in support of machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to specific needs. Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power-electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control: in logic control, discrete-event dynamics replace time-driven dynamics, hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to illuminate the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should deal not only with the nominal behaviour but also with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, alongside reliable mechanical elements, an increasing number of electronic devices, which are by their nature more vulnerable. The diagnosis and fault-isolation problem in a generic dynamical system consists in designing a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that in its final implementation it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and, where necessary, reconfigure the control system so that faults are tolerated. On this topic, important advances in the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigm applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, to obtain better reusability and modularity of the control logic. Chapter 5 presents a new Discrete Event Systems-based approach to the formal verification of software, together with an active fault-tolerant control architecture using online diagnosis. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5; and Appendix C reports some component models used in Chapter 5 for formal verification.
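As a toy illustration of the Discrete Event Systems viewpoint invoked above for formal verification and fault diagnosis, a plant can be modelled as a finite automaton whose transitions are labelled by events, some of them unobservable faults that a diagnoser must infer from subsequent observations. The plant, states and events below are invented for this sketch and are not taken from the thesis:

```python
# Toy finite automaton for a single actuator: observable events drive the
# nominal cycle, while an unobservable 'fault' event moves it to a faulty
# branch in which the expected 'done' event never occurs again.
TRANSITIONS = {
    ("idle", "start"): "moving",
    ("moving", "done"): "idle",
    ("moving", "fault"): "blocked",   # 'fault' is unobservable
    ("blocked", "start"): "blocked",  # after a fault, 'done' never follows
}

def run(events, state="idle"):
    for e in events:
        state = TRANSITIONS.get((state, e), state)  # ignore undefined events
    return state

assert run(["start", "done", "start"]) == "moving"
assert run(["start", "fault", "start"]) == "blocked"
```

Formal verification then amounts to checking properties over all reachable states of such automata (and their compositions), rather than testing individual runs.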
Abstract:
Different tools have been used to set up and adopt the model for the fulfilment of the objective of this research.
1. The Model. The base model used is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multi-criteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.
2. Tools and methods. 2.1. The Expert Choice software. Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem. 2.2. Personal interviews at the farms. For this research, the EMAS-certified farms of the Emilia-Romagna region were identified; the information was provided by the EMAS centre in Vienna. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgment on each criterion of the hierarchy. 2.3. Questionnaire. A supporting questionnaire was also delivered and used for the interviews.
3. Elaboration of the data. After data collection, the data were processed using the Expert Choice software.
4. Results of the analysis. The figures above (see other document) give a series of numbers which are fractions of unity. These are to be interpreted as the relative contribution of each element to the fulfilment of the relative objective. Calculating the benefit/cost ratio for each alternative gives the following. Alternative One: Implement EMAS. Benefits ratio: 0.877; costs ratio: 0.815; benefit/cost ratio: 0.877/0.815 = 1.08. Alternative Two: Not implement EMAS. Benefits ratio: 0.123; costs ratio: 0.185; benefit/cost ratio: 0.123/0.185 = 0.66. As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption is the best alternative for the agricultural sector. It must be noted, however, that the ratio is 1.08, a relatively low positive value. This shows the fragility of the conclusion and suggests a careful examination of the benefits and costs for each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their support for adoption of the scheme in the agricultural sector. According to the AHP elaboration of judgments, the main considerations on benefits are the following: legal compliance seems to be the most important benefit for the agricultural sector, with a rank of 0.471; the next two most important benefits are improved internal organization (rank 0.230), followed by competitive advantage (rank 0.221), mostly due to the sub-element improved image (rank 0.743). Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process. According to the AHP elaboration of judgments, the main considerations on costs are the following: external costs seem to be far more important than internal ones (rank 0.857 versus 0.143), suggesting that EMAS consultancy and verification costs remain the biggest obstacle; the implementation of the EMS is the most challenging element among the internal costs (rank 0.750).
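The benefit/cost arithmetic above is easy to reproduce; a minimal sketch using only the priority values reported in the text:

```python
# Benefit/cost ratios for the two alternatives, using the AHP priorities
# reported in the abstract; the alternative with the highest ratio wins.
alternatives = {
    "Implement EMAS":     {"benefit": 0.877, "cost": 0.815},
    "Not implement EMAS": {"benefit": 0.123, "cost": 0.185},
}
for name, p in alternatives.items():
    print(f"{name}: B/C = {p['benefit'] / p['cost']:.2f}")
# -> Implement EMAS: B/C = 1.08 ; Not implement EMAS: B/C = 0.66
```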
Abstract:
The aim of the study was to examine the economic performance as well as the perceived social and environmental impacts of organic cotton in southern Kyrgyzstan on the basis of a comparative field study (44 certified organic farmers and 33 conventional farmers) carried out in 2009. It also investigated farmers' motivation for and assessment of conversion to organic farming. Cotton yields on organic farms were found to be 10% lower, while input costs per unit were 42% lower, which resulted in organic farmers having 20% higher revenue from cotton. Due to lower input costs and organic and fair trade price premiums, the average gross margin from organic cotton was 27% higher. In addition to direct economic benefits, organic farmers enjoy a number of further benefits, such as easy access to credit on favourable terms, provision of uncontaminated cottonseed cooking oil and seed cake as animal feed, and marketing support as well as extension and training services provided by the newly established organic service provider. A large majority of organic farmers perceive improved soil quality and improved health conditions, and positively assess their earlier decision to convert to organic farming. The major disadvantage of organic farming is the high manual labour input required. In the study area, where manual farm work is mainly women's work and male labour migration is widespread, women are most affected by this negative aspect of organic farming. Altogether, the results suggest that, despite the inconvenience of a higher workload, the advantages of organic farming outweigh the disadvantages, and that conversion to organic farming can improve the livelihoods of small-scale farmers.
Abstract:
Cotton is a leading agricultural non-food commodity associated with soil degradation, water pollution and pesticide poisoning due to high levels of agrochemical inputs. Organic farming is often promoted as a means of addressing the economic, environmental and health risks of conventional cotton production, and it is slowly gaining ground in the global cotton market. Organic and fair trade cotton are widely seen as opportunities for smallholder farmers to improve their livelihoods thanks to higher returns, lower input costs and fewer risks. Despite an increasing number of studies comparing the profitability of organic and non-organic farming systems in developing and industrialized countries, little has been published on organic farming in Central Asia. The aim of this article is to describe the economic performance and perceived social and environmental impacts of organic cotton in southern Kyrgyzstan, drawing on a comparative field study conducted by the author in 2009. In addition to economic and environmental aspects, the study investigated farmers’ motivations toward and assessment of conversion to organic farming. Cotton yields on organic farms were found to be 10% lower, while input costs per unit were 42% lower; as a result, organic farmers’ cotton revenues were 20% higher. Due to lower input costs as well as organic and fair trade price premiums, the average gross margin from organic cotton was 27% higher. In addition to direct economic benefits, organic farmers enjoy other benefits, such as easy access to credit on favorable terms, provision of uncontaminated cottonseed cooking oil and cottonseed cake as animal feed, and marketing support as well as extension and training services provided by newly established organic service providers. The majority of organic farmers perceive improved soil quality, improved health conditions, and positively assess their initial decision to convert to organic farming. The major disadvantage of organic farming is the high manual labor input required. In the study area, where manual farm work is mainly women's work and male labor migration is widespread, women are most affected by this negative aspect of organic farming. Altogether, the results suggest that, despite the inconvenience of a higher workload, the advantages of organic farming outweigh its disadvantages and that conversion to organic farming improves the livelihoods of small-scale farmers.
Abstract:
Soil erosion models and soil erosion risk maps are often used as indicators to assess potential soil erosion in order to assist policy decisions. This paper presents the scientific basis of the soil erosion risk map of Switzerland and its application in policy and practice. Linking a USLE/RUSLE-based model approach (AVErosion), founded on multiple flow algorithms and the unit contributing area concept, with an extremely precise, high-resolution digital terrain model (2 m × 2 m grid) using GIS allows a realistic assessment of the potential soil erosion risk on single plots, uniformly and comprehensively, for the agricultural area of Switzerland (862,579 ha in the valley area and the lower mountain regions). National, i.e. small-scale, soil erosion prognosis has thus reached a level of detail heretofore possible only in smaller catchment areas or on single plots. Validation was carried out using soil loss data from mappings of soil erosion damage in the field from long-term monitoring in different test areas. Of the evaluated agricultural area of Switzerland, 45% was classified as low potential erosion risk, 12% as moderate potential erosion risk, and 43% as high potential erosion risk. However, many of the areas classified as high potential erosion risk are located at the transition from the valley to the mountain zone, where many areas are used as permanent grassland, which drastically lowers their current erosion risk. The present soil erosion risk map serves on the one hand to identify and prioritize high-erosion-risk areas, and on the other hand to promote awareness among farmers and authorities. It was published on the internet and will be made available to the authorities in digital form. It is intended as a tool for simplifying and standardizing enforcement of the legal framework for soil erosion prevention in Switzerland. The work thus provides a successful example of cooperation between science, policy and practice.
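For reference, the USLE/RUSLE family of models on which AVErosion builds estimates mean annual soil loss as a product of empirical factors. The classical equation is shown below; the paper's contribution lies in how the topographic factors are derived from multiple flow algorithms and the unit contributing area concept on the high-resolution terrain model:

\[ A = R \cdot K \cdot L \cdot S \cdot C \cdot P \]

where \(A\) is the mean annual soil loss, \(R\) the rainfall erosivity factor, \(K\) the soil erodibility factor, \(L\) and \(S\) the slope length and steepness factors, \(C\) the cover-management factor, and \(P\) the support practice factor.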