891 results for Take-up rate
Abstract:
The paper explains how bioenergy education and training is growing in Europe. Employment estimates are included for renewable energy in general, and bioenergy in particular, to highlight the need for a broadly based education and training programme to build the knowledgeable workforce that can drive Europe's growing bioenergy sector. The paper reviews current provisions in bioenergy at Masters and PhD levels across the 27 members of the EU (EU27) plus Norway and Switzerland. This identifies a very active and expanding bioenergy education provision. 65 English-language Masters Courses in bioenergy (either focussing completely on bioenergy or with significant bioenergy content or specialisation) were identified, and 231 providers of PhD studies in bioenergy were found. Masters Course offerings have grown rapidly across Europe during the last five years, but where data are available, enrolment has been quite low, suggesting that there is an oversupply of courses and that course organisers are being optimistic in their projections. Existing provisions in Europe at Masters and PhD levels are clearly more than sufficient for short-term needs, but further work is needed to evaluate the take-up rate and the content and focus of the provisions. To ensure talented graduates are attracted to these programmes, better promotion, stronger links with the research community and industry, and increased collaboration among course providers are needed. Short Courses of two to five days are an excellent way of meeting post-experience training needs but require further growth and development to serve the needs of the bioenergy community. © 2011 Elsevier Ltd.
Abstract:
Glycogen-accumulating organisms (GAO) have the potential to compete directly with polyphosphate-accumulating organisms (PAO) in EBPR systems, as both are able to take up VFA anaerobically and grow on the intracellular storage products aerobically. Under anaerobic conditions GAO hydrolyse glycogen to gain energy and reducing equivalents to take up VFA and to synthesise polyhydroxyalkanoate (PHA). In the subsequent aerobic stage, PHA is oxidised to gain energy for glycogen replenishment (from PHA) and for cell growth. This article describes a complete anaerobic and aerobic model for GAO based on the understanding of their metabolic pathways. The anaerobic model has been developed and reported previously, while the aerobic metabolic model was developed in this study. It is based on the assumption that acetyl-CoA and propionyl-CoA go through the catabolic and anabolic processes independently. Experimental validation shows that the integrated model can predict the anaerobic and aerobic results very well. It was found in this study that at pH 7 the maximum acetate uptake rate of GAO in the anaerobic stage was slower than that reported for PAO. On the other hand, the net biomass production per C-mol of acetate added is about 9% higher for GAO than for PAO. This would indicate that PAO and GAO each have certain competitive advantages during different parts of the anaerobic/aerobic process cycle. (C) 2002 Wiley Periodicals, Inc.
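A minimal numerical sketch can make the two-phase bookkeeping concrete. The snippet below is an illustrative toy, not the calibrated metabolic model of the paper: the rate constants and yields are invented placeholders, chosen only so that glycogen is consumed and PHA formed anaerobically, after which PHA fuels glycogen replenishment and growth aerobically (carbon closes under the assumed yields).

```python
# Toy sketch of the anaerobic/aerobic storage-pool bookkeeping (illustrative
# placeholders only; q_ac, q_pha and the yields are NOT the model's values).

def anaerobic_step(glycogen, pha, acetate, dt, q_ac=0.2, y_gly=0.5, y_pha=1.5):
    """Acetate uptake driven by glycogen hydrolysis; PHA is synthesized.
    Carbon balance: 1 C-mol acetate + 0.5 C-mol glycogen -> 1.5 C-mol PHA."""
    uptake = min(q_ac * dt, acetate, glycogen / y_gly)
    return glycogen - y_gly * uptake, pha + y_pha * uptake, acetate - uptake

def aerobic_step(glycogen, pha, biomass, dt, q_pha=0.1, y_gly=0.6, y_x=0.25):
    """PHA oxidation fuels glycogen replenishment and growth.
    Carbon balance: 1 C-mol PHA -> 0.6 glycogen + 0.25 biomass + 0.15 CO2."""
    consumed = min(q_pha * dt, pha)
    return glycogen + y_gly * consumed, pha - consumed, biomass + y_x * consumed

glycogen, pha, biomass, acetate = 1.0, 0.1, 1.0, 0.5   # C-mol basis, arbitrary start
for _ in range(20):                                     # anaerobic phase
    glycogen, pha, acetate = anaerobic_step(glycogen, pha, acetate, dt=0.1)
for _ in range(40):                                     # aerobic phase
    glycogen, pha, biomass = aerobic_step(glycogen, pha, biomass, dt=0.1)
print(round(glycogen, 3), round(pha, 3), round(biomass, 3))
```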
Abstract:
PhD in Chemical and Biological Engineering
Abstract:
The prevalence of overweight and obesity has increased with alarming speed over the past twenty years. It has recently been described by the World Health Organisation as a ‘global epidemic’. In the year 2000 more than 300 million people worldwide were obese and it is now projected that by 2025 up to half the population of the United States will be obese if current trends are maintained. The disease is now a major public health problem throughout Europe. In Ireland at the present time 39% of adults are overweight and 18% are obese. Of these, slightly more men than women are obese and there is a higher incidence of the disease in lower socio-economic groups. Most worrying of all is the fact that childhood obesity has reached epidemic proportions in Europe, with excess body weight now the most prevalent childhood disease. While currently there are no agreed criteria or standards for assessing Irish children for obesity, some studies are indicating that the numbers of children who are significantly overweight have trebled over the past decade. Extrapolation from authoritative UK data suggests that these numbers could now amount to more than 300,000 overweight and obese children on the island of Ireland, and they are probably rising at a rate of over 10,000 per year. A balance of food intake and physical activity is necessary for a healthy weight. The foods we individually consume and our participation in physical activity are the result of a complex supply and production system. The growing research evidence that energy dense foods promote obesity is impressive and convincing. These are the foods that are high in fat, sugar and starch. Of these, potentially the most significant promoter of weight gain is fat, and foods from the top shelf of the food pyramid, including spreads (butter and margarine), cakes and biscuits, and confectionery, when combined are the greatest contributors to fat intake in the Irish diet. In company with their adult counterparts, Irish children are also consuming large amounts of energy dense foods outside the home. A recent survey revealed that slightly over half of these children ate sweets at least once a day and roughly a third of them had fizzy drinks and crisps with the same regularity. Sugar sweetened carbonated drinks are thought to contribute to obesity, and for this reason the World Health Organisation has expressed serious concerns at the high and increasing consumption of these drinks by children. Physical activity is an important determinant of body weight. Over recent decades there has been a marked decline in demanding physical work, and this has been accompanied by more sedentary lifestyles generally and reduced leisure-time activity. These observable changes, which are supported by data from most European countries and the United States, suggest that physical inactivity has made a significant impact on the increase in overweight and obesity being seen today. It is now widely accepted that adults should engage in 45-60 minutes, and children in at least 60 minutes, of moderate physical activity per day in order to prevent excess weight gain. Being overweight today not only signals increased risk of medical problems but also exposes people to serious psychosocial problems due mainly to widespread prejudice against fat people. Prejudice against obese people seems to border on the socially acceptable in Ireland. It crops up consistently in surveys covering groups such as employers, teachers, medical and healthcare personnel, and the media.
It occurs among adolescents and children, even very young children. Because obesity is associated with premature death, excessive morbidity and serious psychosocial problems, the damage it causes to the welfare of citizens is extremely serious, and for this reason government intervention is necessary and warranted. In economic terms, a figure of approximately €30 million has been estimated for in-patient costs alone in 2003 for a number of Irish hospitals. This year about 2,000 premature deaths in Ireland will be attributed to obesity and the numbers are growing relentlessly. Diseases which proportionally more obese people suffer from than the general population include hypertension, type 2 diabetes, angina, heart attack and osteoarthritis. There are indirect costs also, such as days lost to the workplace due to illness arising from obesity and output foregone as a result of premature death. Using the accepted EU environmental cost benefit method, these deaths alone may be costing the state as much as €4 billion per year. The social determinants of physical activity include factors such as socio-economic status, education level, gender, family and peer group influences as well as individual perceptions of the benefits of physical activity. The environmental determinants include geographic location, time of year, and proximity of facilities such as open spaces, parks and safe recreational areas generally. The environmental factors have not yet been as well studied as the social ones and this research gap needs to be addressed. Clearly there is a public health imperative to ensure that relevant environmental policies maximise opportunities for active transport, recreational physical activity and total physical activity. It is clear that concerted policy initiatives must be put in place if the predominantly negative findings of research regarding the determinants of food consumption and physical activity are to be accepted, and they must surely be accepted by government if the rapid increase in the incidence of obesity, with all its negative consequences for citizens, is to be reversed. So far actions surrounding nutrition policies have concentrated mostly on actions that are within the remit of the Department of Health and Children, such as implementing the dietary guidelines. These are important, but government must now look at the totality of policies that influence the type and supply of food that its citizens eat and the range and quality of opportunities that are available to citizens to engage in physical activity. This implies a fundamental examination of existing agricultural, industrial, economic and other policies and a determination to change them if they do not enable people to eat healthily and partake in physical activity. The current crisis in obesity prevalence requires a population health approach for adults and children in addition to effective weight-reduction management for individuals who are severely overweight. This entails addressing the obesogenic environment where people live, creating conditions over time which lead to healthier eating and more active living, and protecting people from the widespread availability of unhealthy food and beverage options in addition to sedentary activities that take up all of their leisure time. People of course have a fundamental right to choose to eat what they want and to be as active as they wish. That is not the issue.
What the National Taskforce on Obesity has had to take account of is that many forces are actively impeding change for those well aware of the potential health and well-being consequences to themselves of overweight and obesity. The Taskforce’s social change strategy is to give people meaningful choice. Choice, or the capacity to change (because the strategy is all about change), is facilitated through the development of personal skills and preferences, through supportive and participative environments at work, at school and in the local community, and through a dedicated and clearly communicated public health strategy. High-level cabinet support will be necessary to implement the Taskforce’s recommendations. The approach to implementation must be characterised by joined-up thinking, real practical engagement by the public and private sectors, the avoidance of duplication of effort or cross-purpose approaches, and the harnessing of existing strategies and agencies. The range of government departments with roles to play is considerable. The Taskforce outlines the different contributions that each relevant department can make in driving its strategy forward. It also emphasises its requirement that all phases of the national strategy for healthy eating and physical activity are closely monitored, analysed and evaluated. The vision of the Taskforce is expressed as: An Irish society that enables people through health promotion, prevention and care to achieve and maintain healthy eating and active living throughout their lifespan. Its recommendations, over eighty in all, relate to actions across six broad sectors: high-level government; education; social and community; health; food, commodities, production and supply; and the physical environment. In developing its recommendations the Taskforce has taken account of the complex, multi-sectoral and multi-faceted determinants of diet and physical activity. This strategy poses challenges for government, within individual departments, inter-departmentally and in developing partnerships with the commercial sector. Equally it challenges the commercial sector to work in partnership with government. The framework required for such an initiative has at its core the rights and benefits of the individual. Health promotion is fundamentally about empowerment, whether at the individual, the community or the policy level.
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities of the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has been discussed already. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in mixture distributions is usually a difficult problem because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations, very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are as efficient and convenient as many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation and could be the subject of future work in this area.
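The two-component mixture results revolve around one identity: the failure rate of the mixture is the mixture of the component mass functions divided by the mixture of the component survival functions, h(x) = [w f1(x) + (1-w) f2(x)] / [w S1(x) + (1-w) S2(x)]. The short sketch below (my illustration, with hypothetical geometric components and mixing weight) shows the well-known consequence that mixing two constant-failure-rate geometrics yields a decreasing failure rate that approaches the smaller component rate:

```python
import numpy as np

# Discrete failure rate h(x) = P(X = x | X >= x) of a two-component mixture.
# The component parameters p1, p2 and weight w are illustrative assumptions.

def geometric_pmf(x, p):
    """P(X = x) = p * (1 - p)**x for x = 0, 1, 2, ..."""
    return p * (1.0 - p) ** x

def geometric_sf(x, p):
    """P(X >= x) = (1 - p)**x."""
    return (1.0 - p) ** x

def mixture_failure_rate(x, w, p1, p2):
    """h(x) = [w f1(x) + (1-w) f2(x)] / [w S1(x) + (1-w) S2(x)]."""
    f = w * geometric_pmf(x, p1) + (1 - w) * geometric_pmf(x, p2)
    S = w * geometric_sf(x, p1) + (1 - w) * geometric_sf(x, p2)
    return f / S

x = np.arange(20)
h = mixture_failure_rate(x, w=0.4, p1=0.1, p2=0.3)
# Each geometric component has a constant failure rate (p1 or p2); the
# mixture's failure rate decreases and approaches min(p1, p2) = 0.1.
print(h[:5], h[-1])
```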
Abstract:
The efficiency with which the oceans take up heat has a significant influence on the rate of global warming. Warming of the ocean above 700 m over the past few decades has been well documented. However, most of the ocean lies below 700 m. Here we analyse observations of heat uptake into the deep North Atlantic. We find that the extratropical North Atlantic as a whole warmed by 1.45 ± 0.5 × 10²² J between 1955 and 2005, but Lower North Atlantic Deep Water cooled, most likely as an adjustment from an early twentieth-century warm period. In contrast, the heat content of Upper North Atlantic Deep Water exhibited strong decadal variability. We demonstrate and quantify the importance of density-compensated temperature anomalies for long-term heat uptake into the deep North Atlantic. These anomalies form in the subpolar gyre and propagate equatorwards. High salinity in the subpolar gyre is a key requirement for this mechanism. In the past 50 years, suitable conditions have occurred only twice: first during the 1960s and again during the past decade. We conclude that heat uptake through density-compensated temperature anomalies will contribute to deep ocean heat uptake in the near term. In the longer term, the importance of this mechanism will be determined by competition between the multiple processes that influence subpolar gyre salinity in a changing climate.
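For orientation, heat-content figures of this kind follow from the standard relation ΔH = ρ c_p ∫ ΔT dV. The sketch below is my own back-of-envelope illustration with placeholder layer temperatures, thicknesses and basin area (not the paper's data); it merely shows that plausible deep-layer warming of a few hundredths of a degree integrates to the reported order of 10²² J.

```python
import numpy as np

# Back-of-envelope ocean heat content change: dH = rho * c_p * sum(dT * dV).
# All numbers below are assumed placeholders, not observational values.

RHO = 1025.0      # seawater density, kg m^-3
C_P = 3990.0      # specific heat of seawater, J kg^-1 K^-1
AREA = 3.0e13     # assumed extratropical North Atlantic area, m^2

layer_thickness_m = np.array([700.0, 1300.0, 2000.0])  # 0-700, 700-2000, 2000-4000 m layers
warming_K = np.array([0.15, 0.03, -0.01])              # assumed 50-year temperature changes

heat_content_change_J = RHO * C_P * AREA * np.sum(layer_thickness_m * warming_K)
print(f"dH ~ {heat_content_change_J:.2e} J")  # order 10^22 J, the magnitude reported above
```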
Abstract:
To study the role of muscle mass and muscle activity on lactate and energy kinetics during exercise, whole-body and limb lactate, glucose, and fatty acid fluxes were determined in six elite cross-country skiers during roller-skiing for 40 min with the diagonal stride (Continuous Arm + Leg) followed by 10 min of double poling and diagonal stride at 72-76% of maximal O₂ uptake. A high lactate rate of appearance (Ra, 184 ± 17 µmol·kg⁻¹·min⁻¹) but a low arterial lactate concentration (approximately 2.5 mmol/l) were observed during Continuous Arm + Leg despite a substantial net lactate release by the arm of approximately 2.1 mmol/min, which was balanced by a similar net lactate uptake by the leg. Whole-body and limb lactate oxidation during Continuous Arm + Leg was approximately 45% at rest and approximately 95% of the rate of disappearance and of limb lactate uptake, respectively. Limb lactate kinetics changed multiple times when the exercise mode was changed. Whole-body glucose and glycerol turnover was unchanged during the different skiing modes; however, limb net glucose uptake changed severalfold. In conclusion, the arterial lactate concentration can be maintained at a relatively low level despite a high lactate Ra during exercise with a large muscle mass because of the large capacity of active skeletal muscle to take up lactate, which is tightly correlated with lactate delivery. The limb lactate taken up during exercise is oxidized at rates far above resting oxygen consumption, implying that lactate uptake and subsequent oxidation are also dependent on an elevated metabolic rate. The relative contribution of whole-body and limb lactate oxidation is between 20 and 30% of total carbohydrate oxidation at rest and during exercise under the various conditions. Skeletal muscle can change its limb net glucose uptake severalfold within minutes, causing a redistribution of the available glucose, because whole-body glucose turnover was unchanged.
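Two standard steady-state relations underlie measurements of this kind: isotope dilution for whole-body rate of appearance (Ra = tracer infusion rate / enrichment) and the Fick principle for net limb exchange (flow × venous-arterial concentration difference). The sketch below is illustrative only; the infusion rate, enrichment, flows and concentrations are hypothetical values chosen to reproduce the magnitudes quoted above, not the study's data.

```python
# Steady-state tracer-dilution and Fick-principle relations (illustrative).

def whole_body_ra(infusion_rate, enrichment):
    """Steady-state isotope dilution: Ra = tracer infusion rate / enrichment."""
    return infusion_rate / enrichment

def limb_net_exchange(blood_flow, c_arterial, c_venous):
    """Fick principle: net exchange = flow * (venous - arterial concentration).
    Positive -> net release by the limb; negative -> net uptake."""
    return blood_flow * (c_venous - c_arterial)

# Hypothetical inputs (umol/kg/min tracer, fractional enrichment; l/min flows,
# mmol/l concentrations) chosen to land near the reported magnitudes:
ra = whole_body_ra(infusion_rate=11.0, enrichment=0.06)
arm = limb_net_exchange(blood_flow=6.0, c_arterial=2.5, c_venous=2.85)
leg = limb_net_exchange(blood_flow=14.0, c_arterial=2.5, c_venous=2.35)
print(f"Ra ~ {ra:.0f} umol/kg/min; arm net release {arm:.1f} mmol/min; "
      f"leg net uptake {-leg:.1f} mmol/min")
```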
Abstract:
Only about half of all the CO₂ that has been produced by the burning of fossil fuels now remains in the atmosphere. The CO₂ "missing" from the atmosphere is the subject of an important debate. It was thought that the great majority of the missing CO₂ has invaded the ocean, for this system naturally acts as a giant chemical regulator of the atmosphere. Although it is clear that ocean processes play a major role in regulating the carbon dioxide content of the atmosphere through air-sea exchange, recent studies of the oceanic carbon cycle and air-sea interaction indicate that oceanic carbon is in a quasi-steady state maintained by the system of biological and physical processes in the ocean interior. It is difficult to determine whether the ocean has the capacity to take up the increasing airborne CO₂ released by human activities over the past five or six decades. To resolve this enigma, we need a better understanding of the natural variability of the oceanic carbon cycle.
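The bookkeeping behind the "missing" CO₂ is a simple airborne-fraction calculation: convert the observed rise in atmospheric concentration to mass of carbon and compare it with cumulative emissions. The figures in the sketch below are rounded values assumed for illustration, not numbers taken from this paper.

```python
# Airborne-fraction sketch (illustrative, rounded inputs).

PPM_TO_GT_C = 2.12                        # ~2.12 GtC of atmospheric carbon per ppm CO2

cumulative_emissions_gtc = 400.0          # assumed cumulative anthropogenic carbon, GtC
co2_ppm_start, co2_ppm_end = 280.0, 370.0 # pre-industrial -> recent concentration, ppm

atmospheric_increase_gtc = (co2_ppm_end - co2_ppm_start) * PPM_TO_GT_C
airborne_fraction = atmospheric_increase_gtc / cumulative_emissions_gtc
missing_gtc = cumulative_emissions_gtc - atmospheric_increase_gtc

# With these rounded inputs, roughly half the emitted carbon stays airborne
# and the other half is the "missing" carbon the paper discusses.
print(f"airborne fraction ~ {airborne_fraction:.2f}; missing carbon ~ {missing_gtc:.0f} GtC")
```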
Abstract:
In university studies, it is not unusual for students to drop some of the subjects they have enrolled in for the academic year. They start by not attending lectures, sometimes through neglect or carelessness, or because they find the subject too difficult; as a result they lose the continuity of the topics the professor follows. If they try to attend again, they discover that they hardly understand anything, become discouraged, and decide to give up attending lectures and study on their own. However, some fail to turn up for their final exams, and the failure rate of those who do sit the exams is high. The problem is not confined to one specific subject; it is often the same with many subjects. The result is that students are not productive enough, wasting time and prolonging their years of study, which entails a great cost for families. Degree courses structured to be completed in three academic years may in fact take an average of six or more. In this paper we have studied this problem, which, apart from the waste of money and time, produces frustration in students, who find they have not been able to achieve what they set out to do at the beginning of the course. It is quite common to find students who do not pass even 50% of the subjects they enrolled in for the academic year. If this happens repeatedly, it can be the point at which a student considers dropping out altogether. This is also a concern for the universities, especially in the early years. In our experience as professors, we have found that students who attend lectures regularly and follow the explanations approach the final exams with confidence and rarely fail the subject. In this proposal we present some techniques and methods for alleviating, as far as possible, the problem of lack of attendance at lectures. This involves "rewarding students for their attendance and participation in lectures": rewarding attendance with a "prize" that counts towards the final mark for the subject, and encouraging greater participation in the development of lectures. We believe that we have to teach students to use lectures as part of their learning in a non-passive way. We consider the professor's work fundamental in conveying the usefulness of the topics explained and the applications they will have for students' professional lives in the future. In this way students see for themselves the use and importance of what they are learning, and when their participation is required they will feel more involved and confident in the educational system. Finally, we present statistical results of studies carried out on different degrees and different subjects over two consecutive years. In the first year we assessed only the final exams, without considering attendance or participation. In the second year we applied the techniques and methods proposed here, and we then compared the two ways of assessing the subjects.
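One plausible reading of the reward scheme can be written down directly; the weights and the 0-10 scale in the sketch below are my assumptions, since the paper does not specify a formula here.

```python
# Hypothetical attendance-and-participation weighting (assumed weights/scale).

def final_mark(exam_mark, attendance_ratio, participation_ratio,
               exam_weight=0.85, attendance_weight=0.10, participation_weight=0.05):
    """Combine an exam mark on a 0-10 scale with attendance/participation bonuses.

    attendance_ratio and participation_ratio are fractions in [0, 1] of
    lectures attended and of participation activities completed."""
    bonus = 10.0 * (attendance_weight * attendance_ratio
                    + participation_weight * participation_ratio)
    return exam_weight * exam_mark + bonus

# A student with a borderline exam (4.8/10) who attended 90% of lectures and
# completed 80% of the participation tasks passes under this weighting:
print(final_mark(4.8, attendance_ratio=0.9, participation_ratio=0.8))  # -> 5.38
```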
Abstract:
I. OVERVIEW 1.1. Introduction Among the various types of electrical disturbances, voltage sags are considered the most frequent power quality problem in electrical systems. This phenomenon is caused by an extreme increase of the current in the system, mainly due to short-circuits or inadequate switching operations in the network. This type of disturbance is basically characterized by two parameters: residual voltage and duration. Typically, a sag is considered to occur when the residual voltage in one or more phases reaches a value between 0.01 and 0.9 pu and lasts up to 60 seconds. For an end user, the most relevant effect of a voltage sag is the interruption or alteration of the operation of their equipment, with electronic devices being the most affected (e.g. computers, variable-speed drives, programmable logic controllers, relays, etc.). Owing to the technology boom of recent decades and the constant drive to automate production processes, the use of electronic components is nowadays indispensable. This makes the effects of voltage sags ever more noticeable to end users, whose demands on the quality of the energy supplied grow accordingly. In general, the study of voltage sags is approached from one of two standpoints: the load or the network. From the point of view of the load, the sensitivity characteristics of the equipment must be known in order to model its response to sudden variations of the supply voltage. From the perspective of the network, the aim is to estimate or obtain adequate information to characterize the network's behaviour in terms of voltage sags. The work presented in this thesis falls under the second aspect, that is, the modelling and estimation of the response of an electric power system to voltage sags. 1.2. Problem statement Although voltage sags are the most frequent power quality problem in networks, it remains difficult for many companies in the electricity sector to analyse this type of disturbance adequately. Among the most common reasons are: - The monitoring time needed to obtain a statistically valid sample of sag records can run to several years. - Limited financial resources for the acquisition and installation of sag monitoring equipment. - The high operating cost of analysing the data from the installed sag monitors. - The restrictions placed on the power quality data held by electricity companies. That is, given the lack of data for analysing voltage sags in greater detail, it is of interest to electricity companies and academia to create reliable methods that deepen the study, estimation and monitoring of this electromagnetic phenomenon. Voltage sags, being mainly caused by random events such as short-circuits, are the result of several exogenous variables, such as: (i) the number of faults of a system element, (ii) the impedance of the contact material, (iii) the fault type, (iv) the location of the fault in the network, (v) the duration of the event, etc.
That is, to formulate any theoretical model of voltage sags adequately, this combined uncertainty of the variables needs to be represented in order to provide realistic and therefore reliable methods for users. 1.3. Objective The aim of this thesis has been to develop various stochastic methods for the study, estimation and monitoring of voltage sags in electric power systems. Specifically, the following areas have been explored in depth: - The realistic modelling of the variables that influence sag characterization. This thesis proposes a method that plausibly represents their quantification and randomness over time using parametric probability distributions. Building on this, a software tool has been created to estimate the severity of voltage sags in a generic power system. - The influence of the input variables on the estimation of voltage sags has been analysed. Here, the study focuses on the variables whose characterization diverges most among existing proposals. - A method has been developed to estimate the number of voltage sags in an unmonitored zone from the information of a limited set of measurements in a power system. To this end, the principles of Bayesian statistics are applied, estimating the most probable number of voltage sags at a site from the sag records of other busbars of the network. - A strategy is proposed to optimize the monitoring of voltage sags in a power system, that is, to guarantee supervision of the system with fewer monitors than the number of busbars in the network. II. STRUCTURE OF THE THESIS To develop the proposals indicated above, this thesis is structured in six chapters, briefly described below. As an introductory chapter, Chapter 1 describes the approach and structure of this thesis, giving a broad view of the problem to be addressed and the scope of each chapter. Chapter 2 presents a brief description of the fundamentals and general concepts of voltage sags, intended to give the reader a better understanding of the terms and indicators most used in analysing sag severity in electrical networks. By way of background, it also summarizes the main characteristics of existing techniques and methods for the prediction and optimal monitoring of voltage sags. Chapter 3 essentially seeks to establish the importance of the variables that determine the frequency and severity of voltage sags. To this end, a sag estimation tool has been implemented which, through a predetermined set of experiments using the technique known as Design of Experiments, analyses the importance of the parameterization of the model's input variables. The analysis is carried out with the analysis of variance (ANOVA) technique, which establishes with mathematical rigour whether or not the characterization of a given variable affects the system response in terms of voltage sags.
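As a concrete illustration of the Chapter 3 workflow (my sketch, not the thesis code), the snippet below runs a one-way ANOVA over synthetic SARFI-90 samples obtained under three hypothetical parameterizations of one input variable; a small p-value would flag that variable's characterization as influential.

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic stand-in for the sag-estimation experiments: yearly SARFI-90
# counts under three hypothetical parameterizations of the fault-impedance
# distribution (e.g. three different mean values). All numbers are invented.

rng = np.random.default_rng(0)
sarfi_low_mean = rng.normal(loc=48.0, scale=4.0, size=30)
sarfi_mid_mean = rng.normal(loc=41.0, scale=4.0, size=30)
sarfi_high_mean = rng.normal(loc=35.0, scale=4.0, size=30)

# One-way ANOVA: does the parameterization significantly shift the index?
f_stat, p_value = f_oneway(sarfi_low_mean, sarfi_mid_mean, sarfi_high_mean)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")  # small p -> influential variable
```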
Chapter 4 proposes a methodology for predicting the sag severity of the whole system from the sag records of a reduced set of busbars of the network. For this purpose, Bayes' conditional probability theorem is used to compute the most probable measurements for the whole system from the information provided by the installed sag monitors. This chapter also reveals an important property of voltage sags: the correlation of the number of sag events across different zones of electrical networks. Chapter 5 develops two methods for the optimal location of voltage sag monitors. The first is a methodological evolution of the observability criterion, adding realism to the sag pseudo-monitoring with which the optimal set of monitors is computed and hence improving the reliability of the method. As an alternative proposal, the correlation property of sag events in a network is used to formulate a method that establishes the sag severity of the whole system from a partial monitoring of that network. Finally, Chapter 6 briefly describes the main contributions of the studies carried out in this thesis, together with several topics for future work. III. RESULTS Based on the tests carried out on the three networks considered, two IEEE test networks of 24 and 118 busbars (IEEE-24 and IEEE-118) plus the 357-busbar electrical system of the Republic of Ecuador (EC-357), the most relevant observations are the following: A. Estimation of voltage sags in the absence of measurements: A stochastic sag estimation method called PEHT is implemented, which represents with greater realism the long-term simulation of sag events in a system. This first proposal of the thesis is considered a key step for the development of the later methods of this work, since it reliably emulates long-term voltage sag records in a generic network. The most relevant novelties of this Voltage Sag Estimation Program (Programa de Estimación de Huecos de Tensión, PEHT) are: - The combined effect of five random input variables is considered to simulate sag events in a long-term pseudo-monitoring. The input variables modelled in the sag characterization in the PEHT are: (i) fault coefficient, (ii) fault impedance, (iii) fault type, (iv) fault location and (v) duration. - Stochastic modelling of the input variables fault impedance and duration in the characterization of sag events. To parameterize these variables, a detailed study of their real behaviour in power systems was carried out, and the statistical function that best represents the random nature of each variable was defined. - Sag severity indicators in common use in the standards, such as the SARFI-X and SARFI-Curve indices, are taken as output variables of the PEHT.
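For readers unfamiliar with the SARFI family: SARFI-X counts, per monitored site and per year, the rms-variation events whose residual voltage falls below X% of nominal. A minimal sketch with a synthetic event list (my illustration of the standard definition, not thesis code):

```python
import numpy as np

# Each record is (residual_voltage_pu, duration_s) for one monitored busbar.
# The event list is synthetic, for illustration only.
events = np.array([
    (0.85, 0.12), (0.45, 0.30), (0.88, 0.08),
    (0.65, 0.90), (0.20, 1.50), (0.92, 0.05),  # 0.92 pu is above the 0.9 pu sag threshold
])

def sarfi_x(events, x_pu, years=1.0):
    """Events per year whose residual voltage drops below x (in pu)."""
    return np.sum(events[:, 0] < x_pu) / years

print(sarfi_x(events, 0.90))  # SARFI-90: counts the five events below 0.9 pu
print(sarfi_x(events, 0.70))  # SARFI-70: counts the three deep events below 0.7 pu
```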
B. Sensitivity analysis of voltage sags: A cause-effect study (sensitivity analysis) is presented for the input variables whose parameterization diverges most among the references on sag estimation in electrical networks. Specifically, the influence of the parameterization of the fault coefficient and fault impedance variables on sag prediction is examined in depth. The most notable conclusions are summarized below: - The precision of the input variable fault coefficient proves to be a non-influential parameter in the long-term estimation of the number of voltage sags (SARFI-90 and SARFI-70). That is, highly precise failure-rate data for the system elements are not required to obtain an adequate sag estimate. - The parameterization of the fault impedance variable proves to be a very sensitive factor in estimating sag severity. For example, increasing the mean value of this random variable considerably decreases the reported sag severity in the network. Moreover, when the standard deviation of the fault impedance is evaluated, a directly proportional relationship is observed between this parameter and the sag severity of the network: increasing the standard deviation of the fault impedance increases the mean and the year-to-year variation of the SARFI-90 and SARFI-70 events. - In view of the sensitivity analysis performed on the fault impedance variable, the reliability of sag estimation methods that omit its effect from the model is considered highly questionable. C. Estimation of voltage sags based on the information from a partially monitored network: A method is developed that uses the records of a partially monitored network to determine the sag severity of the whole power system. From the case studies carried out, the implemented method (PEHT+MP) has the following characteristics: - The methodology proposed in the PEHT+MP combines classical short-circuit theory with several statistical techniques to estimate, from the data of the installed sag monitors, the sag measurements of the unmonitored busbars of a generic network. - The estimation of voltage sags in the unmonitored zone of the network rests on the application of Bayes' conditional probability theorem: based on the observed data (the records of the monitored busbars), the PEHT+MP computes probabilistically the sag severity of the unmonitored busbars of the system. The key parts of the proposed procedure are: (i) the creation of a realistic voltage sag database with the Voltage Sag Estimation Program (PEHT) proposed in the previous chapter; and (ii) the maximum likelihood criterion used to estimate the sag measurements of the unmonitored busbars of the network under evaluation. - The sag measurement predictions of the PEHT+MP are strengthened by the correlation property of voltage sags across different zones of a power system. This intrinsic characteristic of electrical networks significantly constrains the response of strongly correlated zones of the system to a given voltage sag. Since the PEHT+MP is based on probabilistic principles, the reduction of the range of possible sag measurements translates into a better prediction of the sag measurements of the unmonitored zone. - With the data from a relatively small set of monitors, accurate estimates (zero error) of the sag severity of the unmonitored zone were obtained in the three networks studied. - The PEHT+MP can be applied to several types of sag severity indicators, such as the SARFI-X, SARFI-Curve and SEI indices.
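The conditional-probability machinery of the PEHT+MP can be illustrated with a toy empirical version (my sketch, not the thesis implementation): given a database of joint pseudo-records such as the PEHT would generate, condition on the count seen at a monitored busbar and take the most frequent count at the correlated unmonitored busbar.

```python
import numpy as np

# Toy empirical Bayes illustration: all counts and correlations below are
# invented placeholders standing in for a PEHT-generated pseudo-record base.

rng = np.random.default_rng(1)
base = rng.poisson(30, size=5000)             # shared driver -> correlated zones
monitored = base + rng.poisson(5, size=5000)  # yearly sag counts at monitored busbar
unmonitored = base + rng.poisson(5, size=5000)  # counts at the busbar with no monitor

observed = 38                                 # this year's count at the monitored busbar
posterior_counts = unmonitored[monitored == observed]  # condition on the observation

# Maximum a posteriori (most frequent) unmonitored count given the data:
values, freqs = np.unique(posterior_counts, return_counts=True)
print("most probable unmonitored count:", values[np.argmax(freqs)])
```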
D. Optimal location of voltage sag monitors: Two methods are proposed for the strategic placement of the sag monitoring system in a generic network. The first proposal is a methodological evolution of the optimal location of sag monitors based on the observability criterion (LOM+OBS); the second is a method that determines the monitor locations according to the correlation-area criterion (LOM+COR). Each proposed method has a specific objective. The aim of LOM+OBS is to determine the optimal set of monitors that records all the faults giving rise to voltage sags in the network, whereas LOM+COR seeks an optimal monitoring system from which, by applying the PEHT+MP (implemented in the previous chapter), the sag measurements of the whole system under evaluation can be estimated accurately. From the case studies of these optimal monitor location methods on the three networks considered, the most relevant observations are: - Since the sag pseudo-measurements of the optimal location methods (LOM+OBS and LOM+COR) are generated with the PEHT algorithm, the optimization criterion is formulated on the basis of a realistic pseudo-monitoring that accounts for the random nature of voltage sags through the five stochastic variables modelled in the PEHT. This feature of the pseudo-measurement database gives the computed optimal set of monitors greater reliability than similar methods in the literature. - The optimal set of monitors is determined by the needs of the network operator: if the objective is to record all the faults that cause sags in the system, the observability criterion is used for the optimal monitor location; if, instead, the aim is a monitoring system that establishes the sag severity of the whole system from the data of a reduced set of monitors, the correlation criterion is the appropriate one. Specifically, for the LOM+OBS method, based on the observability criterion, the case studies showed the following properties: - As the network size increases, the percentage of monitored busbars tends to decrease. For example, to monitor the faults causing sags in the IEEE-24 network, 100% of the system busbars must be monitored, whereas for the IEEE-118 and EC-357 networks the LOM+OBS method determines that monitoring 89.5% and 65.3% of the system, respectively, satisfies the method's observability criterion. - The LOM+OBS method computes the long-term utilization probability of the optimal set of monitors, thereby establishing a measure of the relevance of each monitor deemed optimal in the network. With this, the level of precision or observability (100%, 95%, etc.) with which the sag-causing faults of the studied network would be detected can be determined; as the required detection precision increases, the number of monitors in the computed optimal set is expected to grow. - The LOM+OBS method proves applicable to any type of power system (radial or meshed), guaranteeing the detection of the faults that cause voltage sags in a system at the stated observability level. For the optimal monitor location method based on the correlation-area criterion (LOM+COR), the various tests led to the following conclusions: - The LOM+COR procedure combines the sag estimation methods of the previous chapters (PEHT and PEHT+MP) with linear optimization techniques to define the optimal location of the sag monitors in a network. That is, the PEHT generates the sag pseudo-records and, on the basis of the stated optimization criterion (correlation area), the LOM+COR formulates and analytically computes the long-term optimal set of monitors for the network. From the information recorded by this optimal set of sag monitors, an accurate prediction of the sag severity of all the system busbars with the PEHT+MP would be guaranteed. - The LOM+COR method requires a relatively small percentage of the system busbars to satisfy the optimization conditions of the correlation-area criterion. For example, for the total number of sags (SARFI-90) in the IEEE-24, IEEE-118 and EC-357 networks, optimal sets of 9, 12 and 17 sag monitors were computed, respectively; that is, only 38%, 10% and 5% of the respective systems would need to be monitored to supervise the SARFI-90 events of the entire network. - The LOM+COR method proves to be a versatile optimization procedure that reduces the size of the sag monitoring system in both radial and meshed networks. By its nature, this optimal location method emulates integral monitoring of the system through the records of a small set of monitors, so it would be applicable to network operators seeking to reduce the installation and operating costs of the sag monitoring system. ABSTRACT I. GENERALITIES 1.1. Introduction Among the various types of electrical disturbances, voltage sags are considered the most common quality problem in power systems. This phenomenon is caused by an extreme increase of the current in the network, primarily caused by short-circuits or inadequate maneuvers in the system.
This type of electrical disturbance is basically characterized by two parameters: residual voltage and duration. Typically, voltage sags occur when the residual voltage, in some phases, reaches a value between 0.01 to 0.9 pu and lasts up to 60 seconds. To an end user, the most important effect of a voltage sag is the interruption or alteration of their equipment operation, with electronic devices the most affected (e.g. computer, drive controller, PLC, relay, etc.). Due to the technology boom of recent decades and the constant search for automating production processes, the use of electronic components is essential today. This fact makes the effects of voltage sags more noticeable to the end user, raising the level of demand for a quality energy supply. In general, the study of voltage sags is usually approached from one of two aspects: the load or the network. From the point of view of the load, it is necessary to know the sensitivity characteristics of the equipment to model their response to sudden changes in power supply voltage. From the perspective of the network, the goal is to estimate or obtain adequate information to characterize the network behavior in terms of voltage sags. In this thesis, the work presented fits into the second aspect; that is, in the modeling and estimation of the response of a power system to voltage sag events. 1.2. Problem Statement Although voltage sags are the most frequent quality supply problem in electrical networks, this type of disturbance remains complex and challenging to analyze properly. Among the most common reasons for this difficulty are: - The sag monitoring time, because it can take up to several years to get a statistically valid sample. - The limitation of funds for the acquisition and installation of sag monitoring equipment. - The high operating costs involved in the analysis of the voltage sag data from the installed monitors. - The restrictions that electrical companies have with the registered power quality data. That is, given the lack of data to further voltage sag analysis, it is of interest to electrical utilities and researchers to create reliable methods to deepen the study, estimation and monitoring of this electromagnetic phenomenon. Voltage sags, being mainly caused by random events such as short-circuits, are the result of various exogenous variables such as: (i) the number of faults of a system element, (ii) the impedance of the contact material, (iii) the fault type, (iv) the fault location, (v) the duration of the event, etc. That is, to properly formulate any theoretical model of voltage sags, it is necessary to represent the combined uncertainty of these variables to provide realistic methods that are reliable for users. 1.3. Objective This thesis has been aimed at developing various stochastic methods for the study, estimation and monitoring of voltage sags in electrical power systems. Specifically, it has deepened the research in the following areas: - This research furthers knowledge in the realistic modeling of the variables that influence sag characterization. This thesis proposes a method to credibly represent the quantification and randomness of the sags in time by using parametric probability distributions. From this, a software tool was created to estimate the severity of voltage sags in a generic power system. - This research also analyzes the influence of the input variables in the estimation of voltage sags.
In this case, the study has focused on the variables of greatest divergence in their characterization among the existing proposals. - A method was developed to estimate the number of voltage sags of an area without monitoring through the information of a limited set of sag monitors in an electrical system. To this end, the principles of Bayesian statistics are applied, estimating the number of sags most likely to happen at a system busbar based on the records of other sag network busbars. - A strategy was developed to optimize the monitoring of voltage sags on a power system. Its purpose is to ensure the monitoring of the system through a number of monitors lower than the number of busbars of the network assessed. II. THESIS STRUCTURE To describe the aforementioned proposals in detail, this Thesis has been structured into six chapters. Below are brief descriptions of them. As an introductory chapter, Chapter 1 provides a description of the approach and structure of this thesis. It presents a wide view of the problem to be treated, in addition to describing the scope of each chapter. In Chapter 2, a brief description of the fundamental and general concepts of voltage sags is presented to provide the reader with a better understanding of the terms and indicators used in the severity analysis of voltage sags in power networks. By way of background, a summary of the main features of existing techniques and methods applied in the prediction and optimal monitoring of voltage sags is also presented. Chapter 3 essentially seeks to establish the importance of the variables that determine the frequency or severity of voltage sags. To do this, a tool to estimate voltage sags is implemented that, through a predetermined set of experiments using the technique called Design of Experiments, assesses the importance of the parameterization of the input variables of the model. The analysis is interpreted by using the technique of analysis of variance (ANOVA), which provides the mathematical rigor to establish whether the characterization of a particular variable affects the system response in terms of voltage sags or not. In Chapter 4, a methodology to predict the severity of voltage sags of an entire system through the sag logs of a reduced set of monitored busbars is proposed. For this, the Bayes conditional probability theorem is used, which calculates the most likely sag severity of the entire system from the information provided by the installed monitors. Also, in this chapter an important property of voltage sags is revealed, namely the correlation of voltage sag events in several zones of a power system. In Chapter 5, two methods for the optimal location of voltage sag monitors are developed. The first one is a methodological development of the observability criterion; it contributes to the realism of the sag pseudo-monitoring with which the optimal set of sag monitors is calculated and, therefore, to the reliability of the proposed method. As an alternative proposal, the correlation property of the sag events of a network is used to propose a method that establishes the sag severity of the entire system from a partial monitoring of the network. Finally, in Chapter 6, a brief description of the main contributions of the studies in this Thesis is given. Additionally, various themes to be developed in future works are described. III. RESULTS.
Based on tests on the three networks presented, two IEEE test networks of 24 and 118 busbars (IEEE-24 and IEEE-118) and the electrical system of the Republic of Ecuador (EC-357), the most important observations are as follows:

A. Estimation of voltage sags in the absence of measurements:

A stochastic voltage sag estimation method, called PEHT, is implemented to represent with greater realism the long-term behaviour of voltage sag events in a system. This first proposal of the thesis is considered a key step for the development of the later methods of this work, as it reliably emulates long-term voltage sag records in a generic network. The main innovations of this voltage sag estimation method are the following:
- The combined effect of five random input variables is considered when simulating voltage sag events in long-term monitoring. The input variables modeled in the characterization of voltage sags in the PEHT are: (i) fault coefficient, (ii) fault impedance, (iii) fault type, (iv) fault location, and (v) fault duration.
- The stochastic modeling of the fault impedance and fault duration input variables is included in the characterization of voltage sag events. To parameterize these variables, a detailed study of their real behaviour in power systems was carried out, and the statistical distribution best suited to the random nature of each variable was defined.
- Sag severity indicators used in standards, such as the SARFI-X and SARFI-Curve indices, are provided as PEHT output variables.

B. Sensitivity analysis of voltage sags:

A cause-effect study (sensitivity analysis) is presented for the input variables whose reference parameterizations diverge most in the estimation of voltage sags in electrical networks. Specifically, it examines the influence of the parameterization of the fault coefficient and fault impedance variables on the voltage sag estimation. The most notable observations are summarized below:
- The accuracy of the fault coefficient input variable is shown to be a non-influential parameter in the long-term estimation of the number of voltage sags (SARFI-90 and SARFI-70). That is, highly accurate fault rate data for system elements are not required for a proper voltage sag estimation.
- The parameterization of the fault impedance variable is shown to be a very sensitive factor in the estimation of voltage sag severity. For example, increasing the average value of this random variable significantly decreases the reported sag severity in the network. Moreover, when assessing the standard deviation of the fault impedance parameter, a direct relationship between this parameter and the voltage sag severity of the network is observed: increasing the fault impedance standard deviation increases both the average and the inter-annual variation of the SARFI-90 and SARFI-70 indices.
- Based on the sensitivity analysis of the fault impedance variable, omitting this variable from the voltage sag estimation would significantly call into question the reliability of the results obtained.
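To make the shape of this stochastic estimation concrete, below is a minimal Python sketch, not the PEHT itself: it samples the five random input variables from illustrative distributions (all parameter values, the fault-type probabilities and the residual-voltage proxy are assumptions made for this example) and counts SARFI-90-style events over many simulated years.

```python
import random

# Minimal Monte Carlo sketch of a PEHT-style sag estimation. Illustrative only:
# all distributions, parameters and the residual-voltage proxy are assumptions.

FAULT_TYPES = ["3ph", "2ph", "2ph-g", "1ph-g"]      # short-circuit types
FAULT_TYPE_PROBS = [0.05, 0.10, 0.15, 0.70]         # assumed relative frequencies

def sample_fault():
    """Sample one random fault: the five stochastic inputs of the model."""
    line = random.randrange(100)                    # (iv) fault location: faulted line index
    position = random.random()                      # position along the faulted line (0..1)
    ftype = random.choices(FAULT_TYPES, FAULT_TYPE_PROBS)[0]  # (iii) fault type
    impedance = random.lognormvariate(1.0, 0.8)     # (ii) fault impedance, ohms (assumed lognormal)
    duration = random.expovariate(1 / 0.1)          # (v) fault duration, seconds (assumed exponential)
    return line, position, ftype, impedance, duration

def residual_voltage(line, position, impedance):
    """Stand-in for the short-circuit calculation a real tool would perform:
    this fake residual voltage rises with 'electrical distance' and fault impedance."""
    distance = abs(line - 50) / 50 + position
    return min(1.0, 0.2 + 0.4 * distance + 0.05 * impedance)

def simulate_year(mean_faults_per_year=120):
    """Count sags below 0.9 pu (a SARFI-90-style count) for one simulated year."""
    n_faults = random.randint(int(0.8 * mean_faults_per_year),
                              int(1.2 * mean_faults_per_year))  # (i) fault coefficient, crudely randomized
    sags = 0
    for _ in range(n_faults):
        line, position, ftype, impedance, duration = sample_fault()
        # fault type and duration would matter for duration-dependent indices;
        # this toy threshold test uses only the residual voltage.
        if residual_voltage(line, position, impedance) < 0.9:
            sags += 1
    return sags

# Long-term estimation: average over many simulated years.
years = [simulate_year() for _ in range(1000)]
print("estimated SARFI-90:", sum(years) / len(years))
```

Under this setup, the sensitivity analysis of section B amounts to re-running the simulation with different mean and standard deviation values for the assumed fault impedance distribution and comparing the resulting long-term SARFI averages and inter-annual spreads.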
C. Voltage sag estimation from the information of a partially monitored network:

A method is developed that uses the voltage sag records of a partially monitored network to estimate the sags of the entire power system. From the case studies performed, the implemented method (PEHT+MP) shows the following characteristics:
- The methodology proposed in the PEHT+MP combines classical short-circuit theory with several statistical techniques to estimate, from the data of the installed sag meters, the sag measurements of the unmonitored busbars of a generic power network.
- The estimation of voltage sags in the unmonitored zone of the network is based on the application of Bayes' conditional probability theorem. That is, based on the observed data (the records of the monitored busbars), the PEHT+MP probabilistically calculates the sag severity at the unmonitored system busbars. The key parts of the proposed procedure are: (i) the creation of a realistic database of voltage sags with the sag estimation program (PEHT); and (ii) the maximum likelihood criterion used to estimate the sag indices of the unmonitored system busbars.
- The voltage sag estimations of the PEHT+MP are strengthened by the correlation property of sag events in power systems. This inherent characteristic of networks significantly constrains the response of strongly correlated system zones to a given voltage sag. Since the PEHT+MP is based on probabilistic principles, this reduction of the range of possible sag measurements translates into a better sag estimation for the unmonitored area of the power system.
- From the data of a set of monitors representing a relatively small portion of the system, accurate estimations (null error) of the sag severity of the unmonitored zones proved feasible in the three networks studied.
- The PEHT+MP can be applied to several types of sag indices, such as SARFI-X, SARFI-Curve, SEI, etc.
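As a toy illustration of the Bayesian step just described (with invented numbers, not thesis data), one can learn the joint behaviour of a monitored and an unmonitored busbar from PEHT-style pseudo-records and then pick the unmonitored sag count that is most likely given what the real monitor observed:

```python
from collections import Counter

# Toy illustration of the Bayesian/maximum-likelihood step of a PEHT+MP-like
# method. The pseudo-records below are invented for the sketch; in practice
# they would come from long-term PEHT simulations of the whole network.

# (sags per year at monitored busbar, sags per year at unmonitored busbar)
pseudo_records = [(12, 14), (12, 14), (13, 15), (11, 13), (12, 14),
                  (14, 17), (13, 16), (12, 15), (11, 14), (13, 15)]

joint = Counter(pseudo_records)   # empirical joint distribution P(monitored, unmonitored)

def map_estimate(observed_monitored):
    """Return the unmonitored sag count that maximizes P(unmonitored | monitored).
    Maximizing the conditional is equivalent to maximizing the joint for the
    fixed observed value, so no normalization is needed."""
    candidates = {u: n for (m, u), n in joint.items() if m == observed_monitored}
    if not candidates:
        raise ValueError("no pseudo-record matches the observed value")
    return max(candidates, key=candidates.get)

# The installed monitor reports 12 sags this year; estimate the unmonitored busbar.
print("most likely unmonitored sag count:", map_estimate(12))   # -> 14
```

The correlation property mentioned above is what makes such an estimator sharp: strongly correlated busbars concentrate the joint distribution on a few (monitored, unmonitored) pairs, narrowing the candidate set for any given observation.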
D. Optimal location of voltage sag monitors in power systems:

Two methods for strategically locating the sag monitors of a generic network are implemented. The first proposal is a methodological development of the optimal location of sag monitors based on the observability criterion (LOM+OBS); the second is a method that determines the sag monitor locations according to the correlation area criterion (LOM+COR). Each proposed method has a specific goal: the purpose of LOM+OBS is to determine the optimal set of sag monitors that records all faults that originate voltage sags in the network, while LOM+COR attempts to define the optimal location of sag monitors for estimating the sag indices of the entire assessed network with the PEHT+MP application. From the case studies of these optimal location methods on the three networks considered, the most relevant observations are described below:
- Since the voltage sag pseudo-measurements of the optimal location methods (LOM+OBS and LOM+COR) are obtained by applying the PEHT algorithm, the optimization criterion is formulated on the basis of a realistic sag pseudo-monitoring that considers the random nature of voltage sags through the five stochastic variables modeled in PEHT. This feature of the pseudo-measurement database of the LOM+OBS and LOM+COR methods provides greater reliability of the calculated optimal set of monitors when compared to similar methods in the literature.
- The optimal set of sag monitors is determined by the network operator's needs. If the goal is to record all faults that originate voltage sags in the system, the observability criterion is used to determine the optimal location of sag monitors (LOM+OBS). If, instead, the objective is to define a monitoring system that allows the sag severity of the system to be established from the information of a limited set of sag monitors, the correlation area criterion is appropriate (LOM+COR).

Specifically, for the LOM+OBS method (based on the observability criterion), the following properties were observed in the case studies:
- As the size of the network increases, the percentage of system busbars that must be monitored decreases. For example, to monitor all the faults that cause sags in the IEEE-24 network, 100% of the system busbars are required; in the IEEE-118 and EC-357 networks, the LOM+OBS method determines that monitoring 89.5% and 65.3% of the system busbars, respectively, fulfils the observability criterion.
- The LOM+OBS method calculates the long-term probability of use of each monitor in the optimal set, establishing a relevance criterion for each sag monitor considered optimal in the network. With this, the level of accuracy or observability (100%, 95%, etc.) with which the faults that cause sags in the studied network are detected can be chosen; as this accuracy level increases, a larger optimal set of sag monitors is to be expected.
- The LOM+OBS method is demonstrated to be applicable to any type of electrical system (radial or meshed), ensuring the detection of the faults that cause voltage sags in a system according to the chosen observability level.

For the optimal location of sag monitors based on the correlation area criterion (LOM+COR), the tests led to the following conclusions:
- The LOM+COR procedure combines the implemented voltage sag estimation algorithms (PEHT and PEHT+MP) with linear optimization techniques to define the optimal location of the sag monitors in a network. That is, the PEHT is used to generate the voltage sag pseudo-records and, from the proposed optimization criterion (correlation area), the LOM+COR formulates and analytically calculates the long-term optimal set of sag monitors of the network. From the information recorded by this optimal set of sag monitors, an accurate prediction of the voltage sag severity at all the busbars of the system is guaranteed by the PEHT+MP.
- The LOM+COR method is shown to be a versatile optimization procedure that reduces the size of the sag monitoring system in both radial and meshed grids. Owing to these characteristics, this optimal location method allows complete system sag monitoring to be emulated through the records of a small optimal set of sag monitors. This new optimization method is therefore applicable to network operators that seek to reduce the installation and operation costs of the voltage sag monitoring system.
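The observability criterion behind LOM+OBS can be read as a covering problem: every fault scenario that causes a sag must be seen by at least one monitor. The sketch below uses a greedy set-cover heuristic on an invented detection table; it shows the shape of such a computation, not the thesis' exact optimization.

```python
# Greedy sketch of monitor placement under an observability criterion
# (a set-cover heuristic; the detection data below are invented for illustration).

# detects[b] = set of fault scenarios whose sag a monitor at busbar b would see
# (in practice derived from PEHT-style pseudo-monitoring: residual voltage below
# a detection threshold at that busbar).
detects = {
    "B1": {1, 2, 3},
    "B2": {3, 4},
    "B3": {4, 5, 6},
    "B4": {1, 6},
    "B5": {2, 5},
}
all_faults = set().union(*detects.values())

def greedy_monitor_set(detects, faults):
    """Repeatedly pick the busbar whose monitor covers the most uncovered faults."""
    uncovered, chosen = set(faults), []
    while uncovered:
        best = max(detects, key=lambda b: len(detects[b] & uncovered))
        if not detects[best] & uncovered:
            break          # remaining faults are unobservable from any busbar
        chosen.append(best)
        uncovered -= detects[best]
    return chosen, uncovered

monitors, unobservable = greedy_monitor_set(detects, all_faults)
print("monitor at:", monitors, "| unobservable faults:", unobservable)
```

A relaxed observability level (e.g. 95%, as mentioned above) would correspond to stopping once that fraction of the fault scenarios is covered, which is how a smaller monitor set can be traded against detection accuracy.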
Resumo:
In vivo antinociception studies demonstrate that deltorphins are opioid peptides with an unusually high blood–brain barrier penetration rate. In vitro, isolated bovine brain microvessels can take up deltorphins through a saturable, nonconcentrative permeation system, which is apparently distinct from previously described systems involved in the transport of neutral amino acids or of enkephalins. Removing Na+ ions from the incubation medium decreases the carrier affinity for deltorphins (−25%) but does not affect the Vmax value of the transport. The nonselective opiate antagonist naloxone inhibits deltorphin uptake by brain microvessels, but neither the selective δ-opioid antagonist naltrindole nor a number of opioid peptides with different affinities for δ- or μ-opioid receptors compete with deltorphins for the transport. Binding studies demonstrate that μ-, δ-, and κ-opioid receptors are undetectable in the microvessel preparation. Preloading the microvessels with L-glutamine results in a transient stimulation of deltorphin uptake. Glutamine-accelerated deltorphin uptake correlates with the rate of glutamine efflux from the microvessels and is abolished by naloxone.
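The saturable, carrier-mediated uptake described here is conventionally summarized by Michaelis-Menten kinetics; as an illustration (the functional form is a standard assumption, not stated in the abstract), the reported Na+ effect, lower carrier affinity with unchanged Vmax, maps onto a larger Michaelis constant:

```latex
% Saturable carrier-mediated uptake (illustrative Michaelis-Menten form):
%   v     = uptake rate
%   [S]   = deltorphin concentration
%   V_max = maximal transport rate
%   K_m   = Michaelis constant (inversely related to carrier affinity)
v = \frac{V_{\max}\,[S]}{K_m + [S]}
% Removing Na+ lowers affinity, i.e. raises K_m, while V_max is unchanged:
% the half-saturating concentration shifts but the plateau rate does not.
```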
Resumo:
We have developed a fluorimetric assay using the dye FM1-43 to determine the rate at which Dictyostelium amoebae endocytose their surface membrane. Our results show that they do so about once every 4–10 min. A clathrin null mutant takes up its surface only ∼30% more slowly, showing that this membrane uptake cannot be caused by clathrin-coated vesicles. Surprisingly, Ax2 and its parent, NC4, which differ in their rates of fluid-phase internalization by ∼60-fold, take up their surfaces at the same rates. These results show that, in axenic cells, the uptake of fluid and of surface area are separate processes. The large activity of this new endocytic cycle in both Ax2 and NC4 amoebae appears capable of delivering sufficient new surface area to advance the cells' fronts during migration.
Resumo:
The effect of low temperature on cell growth, photosynthesis, photoinhibition, and nitrate assimilation was examined in the cyanobacterium Synechococcus sp. PCC 6301 to determine the factor that limits growth. Synechococcus sp. PCC 6301 grew exponentially between 20°C and 38°C, the growth rate decreased with decreasing temperature, and growth ceased at 15°C. The rate of photosynthetic oxygen evolution decreased more slowly with temperature than the growth rate, and more than 20% of the activity at 38°C remained at 15°C. Oxygen evolution was rapidly inactivated at high light intensity (3 mE m−2 s−1) at 15°C. Little or no loss of oxygen evolution was observed under the normal light intensity (250 μE m−2 s−1) for growth at 15°C. The decrease in the rate of nitrate consumption by cells as a function of temperature was similar to the decrease in the growth rate. Cells could not actively take up nitrate or nitrite at 15°C, although nitrate reductase and nitrite reductase were still active. These data demonstrate that growth at low temperature is not limited by a decrease in the rate of photosynthetic electron transport or by photoinhibition, but that inactivation of the nitrate/nitrite transporter limits growth at low temperature.
Resumo:
Enhanced biological phosphorus removal (EBPR) performance is directly affected by the competition between polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs). This study investigates the effects of carbon source on PAO and GAO metabolism. Enriched PAO and GAO cultures were tested with the two volatile fatty acids (VFAs) most commonly found in wastewater systems, acetate and propionate. Four sequencing batch reactors (SBRs) were operated under similar conditions and influent compositions with either acetate or propionate as the sole carbon source. The stimulus for selection of the PAO and GAO phenotypes was provided only through variation of the phosphorus concentration in the feed. The abundance of PAOs and GAOs was quantified using fluorescence in situ hybridisation (FISH). In the acetate-fed PAO and GAO reactors, Candidatus Accumulibacter phosphatis (a known PAO) and Candidatus Competibacter phosphatis (a known GAO) were present in abundance. A novel GAO, likely belonging to the Alphaproteobacteria, was found to dominate the propionate-fed GAO reactor. The results clearly show that there are some very distinctive differences between PAOs and GAOs in their ability to take up acetate and propionate. PAOs enriched with acetate as the sole carbon source were immediately able to take up propionate, likely at a rate similar to that of acetate. However, an enrichment of GAOs with acetate as the sole carbon source took up propionate at a much slower rate (only about 5% of the acetate uptake rate on a COD basis) during a short-term switch in carbon source. A GAO enrichment with propionate as the sole carbon source took up acetate at a rate less than half the propionate uptake rate on a COD basis. These results, along with literature reports showing that PAOs fed with propionate (also dominated by Accumulibacter) can immediately switch to acetate, suggest that PAOs are more adaptable to changes in carbon source than GAOs. This study suggests that the PAO-GAO competition could be influenced in favour of PAOs through the provision of propionate in the feed, or even by regularly switching the dominant VFA species in the wastewater. Further study is necessary to provide greater support for these hypotheses. (c) 2005 Wiley Periodicals, Inc.