936 results for Alcohol Safety Interlock Systems.
Abstract:
This work is a contribution to photovoltaic (PV) systems with distributed maximum power point tracking (DMPPT), a topology characterized by performing MPPT at module level, unlike more traditional topologies that perform MPPT for a larger number of modules, up to hundreds of them. The two DMPPT technologies available on the market are known as microinverters and power optimizers, and they offer certain advantages over central MPPT systems, such as higher production in mismatch situations, individual monitoring of each module, design flexibility and greater system safety. Although DMPPT systems are not limited to urban environments, these are emphasized in the title because they are their natural market, since the extra cost is difficult to justify in large ground-mounted PV plants. Since 2010 the market for these systems has grown notably and continues to grow steadily. However, a deep understanding is still lacking of how these systems work, especially in the case of power optimizers, of the energy gains to be expected under mismatch conditions, and of the advanced possibilities for fault diagnosis. The main objective of this thesis is to present a complete study of how DMPPT systems work, their limits and their advantages, together with several experiments that verify the theory and the development of tools to assess the benefits of using DMPPT in each installation. The equations that model the operation of PV systems with power optimizers have been derived and used to highlight their limits when resolving certain mismatch situations. An in-depth study of the effect of shading on PV systems is presented, covering both the I-V curve and MPPT algorithms. Experiments on the behaviour of MPPT algorithms under shading have been carried out, showing their inefficiency in these situations. An analysis of the advantage of using DMPPT against hot spots is presented and verified. An analysis of the possible power and energy gains obtained with DMPPT under shading conditions is also presented and verified experimentally, together with a brief study of its economic viability. To support all of the analyses and experiments described above, a set of software tools has been developed. One is a LabView program to control a solar simulator and store the measurements. A program that simulates the I-V curves of PV modules and arrays affected by shading has also been developed and verified experimentally. This same program has been used to develop a still more complete program that estimates the annual losses and the gains obtained with DMPPT in PV installations affected by shading. Finally, a set of algorithms for diagnosing faults in PV systems with DMPPT has been developed and verified. This tool can diagnose the following faults: shading from fixed objects (with an estimate of the distance to the object), localized dirt, generalized dirt, possible hot spots, module degradation and losses in the DC wiring. In addition, it alerts the user to the losses caused by each fault and does not require irradiance or temperature sensors.
ABSTRACT This work is a contribution to photovoltaic (PV) systems with distributed maximum power point tracking (DMPPT), a system topology characterized by performing MPPT at module level, instead of the more traditional topologies which perform MPPT for a larger number of modules. The two DMPPT technologies available at the moment are known as microinverters and power optimizers, also known as module level power electronics (MLPE), and they provide certain advantages over central MPPT systems, such as higher energy production in mismatch situations, monitoring of each individual module, system design flexibility and higher system safety. Although DMPPT is not limited to urban environments, the title emphasizes them because they are their natural market, since in large ground-mounted PV plants the extra cost is difficult to justify. Since 2010 MLPE have steadily increased their market share and continue to grow. However, a deep understanding is still lacking of how they work, especially in the case of power optimizers, of the energy gains achievable with their use and of the possibilities in failure diagnosis. The main objective of this thesis is to provide a complete understanding of DMPPT technologies: how they function, their limitations and their advantages. A series of equations used to model PV arrays with power optimizers have been derived and used to point out their limitations in solving certain mismatch situations. Because one of the most emphasized benefits of DMPPT is the ability to mitigate shading losses, an extensive study of the effects of shadows on PV systems is presented, covering both the I-V curve and MPPT algorithms. Experimental tests have been performed on the MPPT algorithms of central inverters and MLPE, highlighting their inefficiency on I-V curves with local maxima. An analysis of the possible mitigation of hot spots with DMPPT is discussed and experimentally verified, and a theoretical analysis of the possible power and energy gains is presented together with experiments in real PV systems. A short economic analysis of the benefits of DMPPT has also been performed. To aid in these tasks, a program which simulates I-V curves under shaded conditions has been developed and experimentally verified. This same program has been used to develop a software tool especially designed for PV systems affected by shading, which estimates the losses due to shading and the energy gains obtained with DMPPT. Finally, a set of algorithms for diagnosing system faults in PV systems with DMPPT has been developed and experimentally verified. The tool can diagnose the following failures: fixed-object shading (with distance estimation), localized dirt, generalized dirt, possible hot spots, module degradation and excessive losses in DC cables. In addition, it alerts the user to the power losses produced by each failure, classifies the failures by their severity, and does not require the use of irradiance or temperature sensors.
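As a rough illustration of why shading creates the local maxima that trouble conventional MPPT algorithms, the sketch below builds the I-V curve of a single module whose three bypass-protected substrings receive different irradiances; summing the substring voltages over a current sweep yields a P-V curve with two local maxima, which is the situation in which a per-module optimizer outperforms a central MPPT. This is a toy single-diode model with assumed parameters (photocurrents, ideality factor, saturation current), not the simulation tool developed in the thesis.

```python
import numpy as np

VT, N, I0 = 0.02569, 1.3, 1e-10   # thermal voltage (V), ideality factor, saturation current (A); assumed values

def substring_voltage(i, iph, n_cells=20, v_bypass=-0.5):
    """Voltage of one bypass-protected substring carrying string current i (A).

    Ideal single-diode cells in series (no Rs/Rsh); the bypass diode is modelled
    as a hard clamp at v_bypass once the substring cannot carry the current.
    """
    headroom = (iph - i) / I0 + 1.0
    return np.where(headroom > 1.0,
                    n_cells * N * VT * np.log(np.clip(headroom, 1.0, None)),
                    v_bypass)

# A 60-cell module split into three substrings: two in full sun, one 70 % shaded.
photocurrents = [9.0, 9.0, 2.7]                 # A, proportional to the assumed irradiances
i = np.linspace(0.0, 9.0, 2000)
v = sum(substring_voltage(i, iph) for iph in photocurrents)
p = np.where(v > 0, v * i, 0.0)                 # P-V curve shows two local maxima
print(f"global maximum power point: {p.max():.1f} W")
```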
Abstract:
Electrical power systems are moving away from their traditional structure, based on a small number of large generating plants located far from the loads, towards new models that tend to split the big production nodes into many smaller ones. The set of small generating units located close to consumers that provide safe, high-quality energy is called distributed generation (DG). The proximity of the sources to the loads reduces the losses associated with transmission and increases overall system efficiency. DG also favors the inclusion of renewable energy sources in isolated electrical systems or remote microgrids, because they can be installed where the natural resource is located. In both cases, since these are weak grids unable to draw support from other nearby networks, it is essential to ensure appropriate behavior of DG sources to guarantee power system safety and stability. Grid codes set out the technical requirements to be fulfilled by the sources connected to these electrical networks. In the technical literature it is fairly easy to find and compare grid codes for interconnected electrical systems. However, the existing literature on isolated electrical systems is incomplete and sparse, owing to the difficulty of tracking down the codes themselves. Some countries have developed their own legislation only for their island territories (such as Spain or France), others apply the same set of rules as on the mainland, another group of island countries have elaborated a complete grid code for all generating sources, and some others lack specific regulation. This paper aims to provide a complete review of the state of the art in grid codes applicable to isolated systems, comparing them and outlining the guidelines that upcoming regulations in these particular systems are likely to follow.
Abstract:
Abstract We consider a wide class of models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multi-component systems in reliability settings. Repair times and component lifetimes are random variables that follow a general distribution, and the repair service adopts a priority repair rule based on system failure risk. Since crude simulation has proved to be inefficient for highly-dependable systems, the RESTART method is used for the estimation of steady-state unavailability and other reliability measures. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty involved in applying this method is finding a suitable function, called the importance function, to define the regions. In this paper we introduce an importance function which, for unbalanced systems, represents a great improvement over the importance function used in previous papers. We also demonstrate the asymptotic optimality of RESTART estimators in these models. Several examples are presented to show the effectiveness of the new approach, and probabilities up to the order of 10^-42 are accurately estimated with little computational effort.
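To make the splitting idea concrete, the sketch below estimates a rare hitting probability for a toy birth-death chain with a fixed-effort multilevel-splitting scheme, using the current level as the importance function. It only illustrates the threshold/retrial mechanism behind RESTART; it is not the authors' algorithm, their HRMS models, or the improved importance function proposed in the paper.

```python
import random

def stage_success(level, q):
    """One retrial: from 'level', does the birth-death chain reach level+1 before 0?
    Up-step probability q, down-step probability 1 - q."""
    x = level
    while 0 < x <= level:
        x += 1 if random.random() < q else -1
    return x == level + 1

def splitting_estimate(K, q, trials_per_stage=10_000):
    """Fixed-effort multilevel-splitting estimate of P(hit K before 0 | start at 1).

    The importance function is simply the current level: each upward threshold
    crossing starts a fresh stage of independent retrials, and the per-stage
    estimates are multiplied (valid here by the Markov property of the chain).
    """
    estimate = 1.0
    for level in range(1, K):
        hits = sum(stage_success(level, q) for _ in range(trials_per_stage))
        estimate *= hits / trials_per_stage
    return estimate

random.seed(1)
K, q = 20, 0.3                     # failure level and up-step probability (assumed)
r = (1 - q) / q                    # gambler's-ruin ratio
exact = (r - 1) / (r**K - 1)       # exact P(hit K before 0 | start at 1)
print(f"splitting estimate: {splitting_estimate(K, q):.2e}   exact: {exact:.2e}")
```

For these parameters the exact probability is about 6 x 10^-8, already far smaller than what crude Monte Carlo with the same effort could resolve, which is the kind of regime where splitting-type methods pay off.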
Abstract:
Emotion is generally argued to be an influence on the behavior of living systems, largely concerning flexibility and adaptivity. The way in which living systems act in response to particular situations in their environment has revealed the decisive and crucial importance of this feature for the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have evolved to the point that more of them are integrated into our daily life every day. They have grown in complexity, and the consequence is an increased demand for systems that ensure resilience, robustness, availability, security or safety, among others, all of which raise quite fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous System project, a.k.a. the ASys-Project. Its short-term objectives of immediate application focus on designing improved systems and on bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on high-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning constitutes the general objective. Both the starting question and the general objective have underpinned the study of emotion: its influence on system behavior, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. Assuming essential differences concerning structure, purpose and operation between living and artificial systems, the essential motivation has been the exploration of what emotion solves in nature in order to afterwards analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts all interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation, but related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to attend to: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis of the state of the art around the most relevant theories of emotion and technical systems, in order to obtain the required support for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a Model-based Cognitive agent that embodies an inner agent, ESys, which is responsible for performing the emotional operation inside AGSys.
The solution consists of multiple computational modules working in federation, aimed at forming a mutual feedback loop between AGSys and ESys. In this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment. This inner environment is built from the relevances that might occur inside AGSys in its interaction with the external environment. This allows separate, high-quality reasoning concerning the mission goals defined in AGSys and the emotional goals defined in ESys, and thus provides a possible path for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of the emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described through a recursive-like process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect which this perception induces on the situational state of the emotional goals. This knowledge enables a high-order system of information that provides the support for high-level reasoning. The extent to which this reasoning might be taken is only delineated here and left as future work. This thesis has drawn on a wide range of fields of knowledge, which can be structured according to two main objectives: (a) the fields of psychology, cognitive science, neurology and the biological sciences, in order to obtain an understanding of the problem of emotional phenomena, and (b) a number of computer science branches such as Autonomic Computing (AC), Self-adaptive software, Self-X systems, Model Integrated Computing (MIC) or the paradigm of models@runtime, among others, in order to obtain knowledge about tools for designing each part of the solution. The final approach has been performed mainly on the basis of the entire acquired knowledge, and is described in terms of Artificial Intelligence, Model-Based Systems (MBS), and additional mathematical formalizations used to provide precise understanding where required. This approach describes a reference model to feed systems back with valuable meaning, allowing reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations concerning the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be regarded as artificial emotion.
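Purely as an illustration of the mutual feedback loop described above, the toy sketch below pairs an outer mission agent with an inner appraisal module whose output modulates the outer agent's priorities. The class names reuse AGSys and ESys from the abstract; the goals, appraisal rule and thresholds are invented placeholders, not the reference model itself.

```python
from dataclasses import dataclass, field

@dataclass
class ESys:
    """Inner subsystem: appraises events against (hypothetical) emotional goals."""
    goal_stability: dict = field(default_factory=lambda: {"integrity": 1.0, "energy": 1.0})

    def appraise(self, event: dict) -> dict:
        # Each event reports how much it disturbs each emotional goal (0..1).
        for goal, impact in event.items():
            if goal in self.goal_stability:
                self.goal_stability[goal] = max(0.0, self.goal_stability[goal] - impact)
        # The returned relevance signal plays the role of an emotion-like appraisal.
        return {g: 1.0 - s for g, s in self.goal_stability.items()}

@dataclass
class AGSys:
    """Outer mission agent: pursues mission goals, modulated by ESys relevances."""
    esys: ESys = field(default_factory=ESys)
    mission_priority: float = 1.0

    def step(self, event: dict) -> str:
        relevance = self.esys.appraise(event)
        threat = max(relevance.values(), default=0.0)
        # Mutual feedback: high emotional relevance lowers mission priority
        # and switches behaviour towards self-protection.
        self.mission_priority = max(0.0, 1.0 - threat)
        return "protect" if threat > 0.5 else "pursue_mission"

agent = AGSys()
print(agent.step({"integrity": 0.1}))   # -> pursue_mission
print(agent.step({"integrity": 0.6}))   # -> protect
```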
Abstract:
Embedded systems are becoming more common and more complex every day, so finding safe, effective and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike in the recent past, current technological advances in microprocessors allow the development of equipment with more than enough performance to run several software systems on a single machine. In addition, there are embedded systems with safety requirements on whose correct operation many human lives and/or large economic investments depend. These software systems are designed and implemented according to very strict and demanding software development standards. In some cases certification of the software may also be necessary. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels run on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has been put forward as a very interesting technology to contain these costs. This technology allows a set of virtual machines, or partitions, to run the applications with very high levels of both temporal and spatial isolation. This, in turn, allows each partition to be certified independently. Developing partitioned systems with mixed criticalities requires updating the traditional software development models, since they cover neither the new activities nor the new roles required in the development of these systems. For example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition on which the application will run. Traditionally, the V-model has been especially relevant in embedded systems development. This model has therefore been adapted to cover scenarios such as the parallel development of applications or the addition of a new partition to an existing system. The objective of this doctoral thesis is to improve the current technology for developing partitioned systems with mixed criticalities. To this end, a framework aimed specifically at facilitating and improving the development processes of this class of systems has been designed and implemented. In particular, an algorithm that generates the system partitioning automatically has been created. The proposed development framework integrates all the activities needed to develop a partitioned system, including the new roles and activities mentioned above. Furthermore, the design of the framework is based on Model-Driven Engineering, which promotes the use of models as fundamental elements of the development process. Thus, the necessary tools are provided to model and partition the system, as well as to validate the results and generate the artifacts needed to compile, build and deploy it. In addition, the extensibility of the framework and its integration with validation tools have been key factors in its design.
In particular, new non-functional requirements and the generation of new artifacts, such as documentation or different programming languages, can be incorporated into the development framework. A key part of the framework is the partitioning algorithm. This algorithm has been designed to be independent of the requirements of the applications and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined. The algorithm guarantees that these constraints are satisfied in the partitioned system that results from its execution. The partitioning constraints have been designed with enough expressive power so that, with a small set of them, most of the common non-functional requirements can be expressed. The constraints can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as its inputs. As a result of its execution, a deployment model is generated in which the partitions required for the system are defined. In turn, each partition defines which applications must run on it, as well as the resources the partition needs to run correctly. The partitioning problem and the partitioning constraints are modeled mathematically through colored graphs, in which a proper vertex coloring represents a correct system partitioning. The algorithm has also been designed so that, if necessary, alternative partitionings to the one initially proposed can be obtained. The development framework, including the partitioning algorithm, has been successfully tested on two industrial use cases: the UPMSat-2 satellite and a demonstrator of a wind turbine control system. In addition, the algorithm has been validated by running numerous synthetic scenarios, including some very complex ones with more than 500 applications. ABSTRACT The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing and, as a result, it is now possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements; their failure may result in the loss of human life or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards. In some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define the system partitions, and application development has to take into account the characteristics of the partition to which each application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development. It can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. The framework provides basic means for modeling the system, generating system partitionings, validating the system and generating the final artifacts. The framework has been designed to facilitate its extension and its integration with external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and for new final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints, and they can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm uses the system models and the partitioning constraints as its inputs. It generates a deployment model composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph, and a valid partitioning is a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been successfully validated by using a large number of synthetic loads, including complex scenarios with more than 500 applications.
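As a minimal illustration of the graph-coloring view of partitioning described above, the sketch below colors a small application graph with a greedy largest-degree-first heuristic: vertices are applications, edges are "must not share a partition" constraints, and each color becomes one partition. The application names and constraints are invented examples; the thesis algorithm, its constraint language and its generation of alternative partitionings are not reproduced here.

```python
from collections import defaultdict

def partition(applications, separation_constraints):
    """Greedy proper vertex coloring used as a toy stand-in for a partitioning step."""
    neighbours = defaultdict(set)
    for a, b in separation_constraints:
        neighbours[a].add(b)
        neighbours[b].add(a)

    colour = {}
    # Colour the most constrained applications first (largest-degree heuristic).
    for app in sorted(applications, key=lambda a: -len(neighbours[a])):
        used = {colour[n] for n in neighbours[app] if n in colour}
        colour[app] = next(c for c in range(len(applications)) if c not in used)

    partitions = defaultdict(list)
    for app, c in colour.items():
        partitions[f"partition_{c}"].append(app)
    return dict(partitions)

# Hypothetical applications and separation constraints (e.g. different criticality levels).
apps = ["attitude_control", "telemetry", "payload", "housekeeping"]
constraints = [("attitude_control", "payload"),
               ("attitude_control", "telemetry")]
print(partition(apps, constraints))
```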
Abstract:
To “control” a system is to make it behave (hopefully) according to our “wishes,” in a way compatible with safety and ethics, at the least possible cost. The systems considered here are distributed, i.e., governed (modeled) by partial differential equations (PDEs) of evolution. Our “wish” is to drive the system in a given time, by an adequate choice of the controls, from a given initial state to a final given state, which is the target. If this can be achieved (respectively, if we can reach any “neighborhood” of the target) the system, with the controls at our disposal, is exactly (respectively, approximately) controllable. A very general (and fuzzy) idea is that the more a system is “unstable” (chaotic, turbulent), the “simpler,” or the “cheaper,” it is to achieve exact or approximate controllability. When the PDEs are the Navier–Stokes equations, this leads to conjectures, which are presented and explained. Recent results, reported in this expository paper, essentially prove the conjectures in two space dimensions. In three space dimensions, a large number of new questions arise; some new results support (without proving) the conjectures, such as generic controllability and cases of decrease of the cost of control when the instability increases. Short comments are made on models arising in climatology, thermoelasticity, non-Newtonian fluids, and molecular chemistry. The Introduction of the paper and the first part of all sections are not technical. Many open questions are mentioned in the text.
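For readers unfamiliar with the terminology, the standard definitions can be written as follows, in generic notation (not the paper's): the state y of the evolution system is driven by a control v over a time interval [0, T].

```latex
% Controlled evolution system (abstract form):
\[
  y' = A\,y + B\,v, \qquad y(0) = y_0 .
\]
% Exact controllability in time T: for every initial state y_0 and every target y_T
% there exists a control v such that
\[
  y(T; y_0, v) = y_T .
\]
% Approximate controllability: for every \varepsilon > 0 there exists a control v with
\[
  \bigl\| y(T; y_0, v) - y_T \bigr\| \le \varepsilon ,
\]
% i.e. every neighborhood of the target can be reached.
```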
Abstract:
Toxoplasma gondii is a coccidian parasite with a global distribution. The definitive host is the cat (and other felids). All warm-blooded animals can act as intermediate hosts, including humans. Sexual reproduction (gametogony) takes place in the definitive host and oocysts are released into the environment, where they then sporulate to become infective. In intermediate hosts the cycle is extra-intestinal and results in the formation of tachyzoites and bradyzoites. Tachyzoites represent the invasive and proliferative stage; on entering a cell they multiply asexually by endodyogeny. Bradyzoites within tissue cysts are the latent form. T. gondii is a food-borne parasite causing toxoplasmosis, which can occur in both animals and humans. Infection in humans is asymptomatic in more than 80% of cases in Europe and North America. In the remaining cases patients present fever, cervical lymphadenopathy and other non-specific clinical signs. Nevertheless, toxoplasmosis is life threatening if it occurs in immunocompromised subjects. The main organs involved are the brain (toxoplasmic encephalitis), heart (myocarditis), lungs (pulmonary toxoplasmosis), eyes and pancreas, and the parasite can be isolated from these tissues. Another aspect is congenital toxoplasmosis, which may occur in pregnant women; the severity of the consequences depends on the stage of pregnancy at which maternal infection occurs. Acute toxoplasmosis in the developing foetus may result in blindness, deformation, mental retardation or even death. The European Food Safety Authority (EFSA), in recent reports on zoonoses, highlighted that an increasing number of animals were reported as infected with T. gondii in the EU (reported by the European Member States for pigs, sheep, goats, hunted wild boar and hunted deer in 2011 and 2012). In addition, high prevalence values have been detected in cats, cattle and dogs, as well as several other animal species, indicating the wide distribution of the parasite among different animal and wildlife species. The main route of transmission is the consumption of food and water contaminated with sporulated oocysts. However, infection through the ingestion of meat contaminated with tissue cysts is frequent. Finally, although less frequent, other food products contaminated with tachyzoites, such as milk, may also pose a risk. The importance of this parasite as a risk for human health was recently highlighted by EFSA’s opinion on the modernization of meat inspection, where Toxoplasma gondii was identified as a relevant hazard to be addressed in revised meat inspection systems for pigs, sheep, goats, farmed wild boar and farmed deer (Call for proposals - GP/EFSA/BIOHAZ/2013/01). The risk of infection is most closely associated with animals reared outdoors, including on free-range or organic farms, where biosecurity measures are less strict than on large-scale industrial farms. On such farms, animals are kept under strict biosecurity measures, including barriers that prevent access by cats, thus making soil contamination by oocysts nearly impossible. Growing consumer demand for organic products from free-range livestock, concern for animal welfare, and the desire for the best quality in derived products have all led to an increase in the farming of free-range animals.
The risk of Toxoplasma gondii infection increases when animals have access to the outdoor environment, and the absence of data in Italy, together with the need for an in-depth study of both the prevalence and the genotypes of Toxoplasma gondii present in our country, were the main reasons for developing this thesis project. A total of 152 animals were analyzed, including 21 free-range pigs (Suino Nero breed), 24 transhumant Cornigliese sheep, 77 free-range chickens and 21 wild animals. Serology (on meat juice) and identification of T. gondii DNA through PCR were performed on all samples, except for wild animals (no serology). An in-vitro test was also applied with the aim of finding a valid alternative to the bioassay, currently the gold standard. Meat samples were digested and seeded onto Vero cells, checked every day, and an RT-PCR protocol was used to detect any increase in the amount of DNA, demonstrating the viability of the parasite. Several samples were also genetically characterized using a PCR-RFLP protocol to define the major genotypes circulating in the geographical area studied. Within the context of a project promoted by the Istituto Zooprofilattico of Pavia and Brescia (Italy), experimentally infected pigs were also analyzed. One of the aims was to verify whether the production process of cured “Prosciutto di Parma” is able to kill the parasite. Our contribution included the digestion and seeding of homogenates onto Vero cells and the application of the ELISA test to meat juice. This thesis project has highlighted the widespread diffusion of T. gondii in the geographical area taken into account. Pigs, sheep, chickens and wild animals showed a high prevalence of infection: seroprevalence was 95.2% in pigs, 70.8% in sheep and 36.4% in chickens, indicating the spread of the parasite among numerous animal species. For wild animals, the average rate of parasite infection determined through PCR was 44.8%. Meat juice serology appears to be a very useful, rapid and sensitive method for screening carcasses at the slaughterhouse and for marketing “Toxo-free” meat. The results obtained on fresh pork meat (derived from experimentally infected pigs) before slaughter (on serum) and after slaughter (on meat juice) showed good concordance. Free-range farming poses a marked risk for meat-producing animals and, as a consequence, also for the consumer. Genotyping revealed the diffusion of Type-II and, in a lower percentage, of Type-III. The Type-II profile is predominant in pigs, while Type-III and mixed profiles (mainly Type-II/III) are more common in wildlife. The mixed genotypes (Type-II/III) could be explained by the presence of mixed infections. Free-range farming and contact with wildlife could facilitate the spread of the parasite and the generation of new and atypical strains, with unknown consequences for human health. The curing process employed in this study appears to produce hams that do not pose a serious concern to human health and that could therefore be marketed and consumed without significant health risk. Little is known about the diffusion and genotypes of T. gondii in wild animals; further studies on the way in which new and mixed genotypes may be introduced into the domestic cycle would be very interesting, also with the use of NGS techniques, which are more rapid and sensitive than PCR-RFLP. Furthermore, wildlife can become a valuable indicator of environmental contamination with T. gondii oocysts.
Other future perspectives regarding pigs include expanding the number of free-range animals and farms studied and, for Cornigliese sheep, evaluating other food products such as raw milk and cheeses. It would be interesting to proceed with the validation of an ELISA test for infection in chickens, using both serum and meat juice on a larger number of animals, and the same should be done for wildlife (at the moment no ELISA tests are available and MAT is the reference method for these species). The results related to Parma ham do not suggest a concerning risk for consumers. However, further studies are needed to complete the risk assessment and to analyze other products cured using technological processes other than those investigated in the present study. For example, it could be interesting to analyze products such as salami, produced with pork all over Italy with very different recipes, including in domestic and rural contexts, and characterized by a very short curing period (1 to 6 months). Toxoplasma gondii is one of the most widespread food-borne parasites globally. Public health safety, improved animal production and the protection of endangered livestock species are all important goals of research into reliable diagnostic tools for this infection. Future studies on the epidemiology, survival and genotypes of T. gondii in meat-producing animals should continue to be a research priority.
Abstract:
Recent federal incentives and increased demand for home photovoltaic and small wind electrical systems highlight the need for consistent zoning ordinances and guidance materials for Northglenn residents. This Capstone Project assesses perceived impacts related to renewable energy systems, such as noise, safety, aesthetics, and environmental considerations, and provides a model ordinance intended to mitigate these issues. It was concluded that a model ordinance would ease and stimulate the addition of alternative energy systems in Northglenn. Additionally, this research concluded that the development of public information materials could help homeowners reach positive decisions. The project also identifies potential financial and environmental benefits of installing such systems in an effort to promote sustainable and clean energy production within the city.
Abstract:
Behaviour analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in Civil Engineering have been made with accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and the Hough transform on high-speed video sequences as tools for dynamic measurements in that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and caught by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.
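A possible shape for such a measurement pipeline is sketched below with OpenCV: each frame is cleaned with a morphological opening, the ballast is located with the Hough circle transform, and the vertical acceleration is recovered by fitting y(t) with a parabola. The file name, frame rate and pixel-to-metre scale are assumptions, the detection parameters would need tuning for real footage, and this is not the processing chain used by the authors.

```python
import cv2
import numpy as np

VIDEO, FPS, METRES_PER_PIXEL = "drop_test.avi", 1000.0, 0.002   # assumed values

cap = cv2.VideoCapture(VIDEO)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
times, ys = [], []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)        # remove small clutter
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=10, maxRadius=60)
    if circles is not None:
        _, y, _ = circles[0][0]                                  # first detected circle (x, y, r)
        times.append(frame_idx / FPS)
        ys.append(y * METRES_PER_PIXEL)
    frame_idx += 1
cap.release()

# y(t) ~ y0 + v0*t + 0.5*a*t^2, so the fitted quadratic coefficient is a/2.
coeffs = np.polyfit(times, ys, 2)
print(f"estimated vertical acceleration: {2 * coeffs[0]:.2f} m/s^2")
```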
Abstract:
Herein, we explore the immobilization of nickel on various carbon supports and their application as electrocatalysts for the oxidation of propargyl alcohol in alkaline medium. In comparison with bulk and nanoparticulate nickel electrode systems, Ni-doped nanoporous carbons provided similar propargyl alcohol conversions at very low metal contents. Nanoparticulate Ni on various carbon supports gave rise to the highest electrocatalytic activity in terms of product selectivity, with a clear dependence on Ni content. The results point to the importance of controlling the dispersion of the Ni phase within the carbon matrix for full exploitation of the electroactive area of the metal. Additionally, a change in the mechanism of the propargyl alcohol electrooxidation was noted, which seems to be related to the physicochemical properties of the carbon support as well. Thus, the stereoselectivity of the electrooxidative reaction can be controlled by the active nickel content immobilized on the anode, with preferential oxidation to (Z)-3-(2-propynoxy)-2-propenoic acid at high Ni loading, and to propiolic acid at low loading of active Ni sites. Moreover, the formation of (E)-3-(2-propynoxy)-2-propenoic acid was discriminated against irrespective of the experimental conditions and Ni loadings on the carbon matrices.
Abstract:
The requirements for edge protection systems on the most steeply sloped work surfaces (class C, according to the EN 13374-2013 code) in construction works are studied in this paper. The maximum deceleration suffered by a falling body and the maximum deflection of the protection system were analyzed through finite-element models and confirmed through full-scale experiments. The aim of this work is to determine which value of the system deflection entails a safe deceleration for the human body. This value is compared with the requirements given by the current version of EN 13374-2013. An additional series of experiments was done to determine the acceleration linked to the minimum deflection required by the code (200 mm) during the retention process. According to the obtained results, a modification of this value is recommended. Additionally, a simple design formula for this fall protection system is proposed as a quick tool for the initial steps of design.
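The link between allowed deflection and the deceleration felt by the falling body can be illustrated with a simple constant-deceleration energy balance (an illustrative estimate, not the design formula proposed in the paper): if the body falls freely through a height h before being retained over a deflection delta, then

```latex
\[
  \bar{a} \;=\; \frac{v^2}{2\,\delta} \;=\; \frac{g\,h}{\delta},
\]
```

so that, for example, a 2 m fall retained over the 200 mm minimum deflection of the code corresponds to an average deceleration of roughly 10 g under these simplifying assumptions.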
Abstract:
Summary. On 11 March 2011, a devastating earthquake struck Japan and caused a major nuclear accident at the Fukushima Daiichi nuclear plant. The disaster confirmed that nuclear reactors must be protected even against accidents that have been assessed as highly unlikely. It also revealed a well-known catalogue of problems: faulty design, insufficient back-up systems, human error, inadequate contingency plans, and poor communications. The catastrophe triggered the rapid launch of a major re-examination of nuclear reactor security in Europe. It also stopped in its tracks what had appeared to be a ‘nuclear renaissance’, both in Europe and globally, especially in the emerging countries. Under the accumulated pressure of rising demand and climate warming, many new nuclear projects had been proposed. Since 2011 there has been more ambivalence, especially in Europe. Some Member States have even decided to abandon the nuclear sector altogether. This Egmont Paper aims to examine the reactions of the EU regarding nuclear safety since 2011. Firstly, a general description of the nuclear sector in Europe is provided. The nuclear production of electricity currently employs around 500,000 people, including those working in the supply chain. It generates approximately €70 billion per year. It provides roughly 30% of the electricity consumed in the EU. At the end of 2013, there were 131 nuclear power reactors active in the EU, located in 14 countries. Four new reactors are under construction in France, Slovakia and Finland. Secondly, this paper will present the Euratom legal framework regarding nuclear safety. The European Atomic Energy Community (EAEC or Euratom) Treaty was signed in 1957, and was somewhat overshadowed by the European Economic Community (EEC) Treaty. It was a more classical treaty, establishing institutions with limited powers. Its development remained relatively modest until the Chernobyl catastrophe, which provoked many initiatives. The most important was the final adoption of the Nuclear Safety Directive 2009/71. Thirdly, the general symbiosis between Euratom and the International Atomic Energy Agency (IAEA) will be explained. Fourthly, the paper analyses the initiatives taken by the EU in the wake of the Fukushima catastrophe. These initiatives are centred around the famous ‘stress tests’. Fifthly, the most important legal change brought about by this event was the revision of Directive 2009/71. Directive 2014/87 was adopted quite rapidly, and has deepened the role of the EU in nuclear safety in various ways. It has reinforced the role and effective independence of the national regulatory authorities. It has enhanced transparency on nuclear safety matters. It has strengthened principles, and introduced new general nuclear safety objectives and requirements, addressing specific technical issues across the entire life cycle of nuclear installations, and in particular, nuclear power plants. It has extended monitoring and the exchange of experiences by establishing a European system of peer reviews. Finally, it has established a mechanism for developing EU-wide harmonized nuclear safety guidelines. In spite of these various improvements, Directive 2014/87 Euratom still reflects the ambiguity of the Euratom system in general, and especially in the field of nuclear safety. The use of nuclear energy remains controversial among Member States. Some of them remain adamantly in favour, others against or ambivalent. The intervention of the EAEC institutions remains sensitive.
The use of the traditional Community method remains limited. The peer review method remains a very peculiar mechanism that deserves more attention.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations, Washington, D.C.
Abstract:
Federal Transit Administration, Washington, D.C.