953 results for Environmental Impact Modeling.
Abstract:
Human activity strongly affects biodiversity, which is declining at a worrying rate. Invasive species are among the factors driving this decline. Symptomatic of a globalized world where exchange takes place at planetary scale, some animal or plant species are introduced, deliberately or accidentally, by human activity (for example through trade or by travelers). These species thus reach regions they could never have colonized naturally. Once introduced, the absence of competitors can make them particularly harmful. The damage they cause is more or less direct, ranging from health problems (e.g. the very painful stings of fire ants, native to South America and spreading at a staggering pace across the USA, Australia and China) to damage to biodiversity (e.g. the havoc wrought by the Nile perch on the unique diversity of the cichlid fishes of Lake Victoria). It is therefore important to be able to prevent such introductions. Moreover, for the biologist, these species offer a rare opportunity to understand the evolutionary and ecological mechanisms that explain the success of invaders in a world whose equilibria have been upended. Environmental niche models are a particularly useful tool for this problem. By relating species observations to the environmental conditions where they occur, they can predict the potential distribution of invasive species, making it possible to anticipate and better limit their impact. However, they rest on assumptions that are not easy to demonstrate, one of them being that a species' niche remains constant in time and in space. The first aim of my work is to test whether the niche of an invasive species differs between its native range and its introduced range.
Studying 50 plant and 168 mammal species, I show that the niche is largely conserved and that, as a corollary, their distributions can be predicted. The second part of my work seeks to understand how climate change and invasive species will interact, in order to estimate their impact under a warmer climate. Studying the distribution of 49 invasive plant species, I show that mountain regions, so far relatively spared by this problem, will become far more exposed to the risk of biological invasions. I also show how interactions between human activity, global warming and invasive species threaten the wild grapevine in Europe, and propose geographical areas particularly well suited to its conservation. Finally, at a much more local scale, I show that these niche models can be used along a river at an extremely fine resolution (1 meter), which is potentially useful for rationalizing conservation measures in the field. - Biodiversity is significantly and negatively affected by human activity, and invasive species are one of the most important factors in its decline. Intimately linked to the era of global trade, some plant or animal species are accidentally or deliberately introduced by human activity (e.g. through trade or travel). In this way, these species reach areas they could never have reached through natural dispersal. Once naturalized, the lack of competitors can make them highly noxious. Their effects are more or less direct, ranging from health problems (e.g. the harmful sting of fire ants, originating from South America and now spreading throughout the USA, China and Australia) to impacts on biodiversity (e.g. the Nile perch, which devastated one of the richest hotspots of cichlid fish diversity, in Lake Victoria). It is thus important to prevent such harmful introductions.
Moreover, invasive species offer biologists one of the rare opportunities to understand the evolutionary and ecological mechanisms behind the success of invaders in a world where the natural equilibrium is already disturbed. Environmental niche models are particularly useful for tackling this problem. By relating species observations to the environmental conditions where they occur, they can predict the potential distribution of invasive species, allowing better anticipation and thus limiting their impact. However, they rely on strong assumptions, one of the most important being that the modeled niche remains constant through space and time. The first aim of my thesis is to quantify the difference between the native and the invaded niche. By investigating 50 plant and 168 mammal species, I show that the niche is at least partially conserved, supporting reliable predictions of invasives' potential distributions. The second aim of my thesis is to understand the possible interactions between climate change and invasive species, so as to assess their impact under a warmer climate. By studying 49 invasive plant species, I show that mountain areas, which have so far been relatively preserved, will become more suitable for biological invasions. Additionally, I show how interactions between human activity, global warming and invasive species are threatening the wild grapevine in Europe, and propose geographical areas particularly well suited to conservation measures. Finally, at the much finer scale where conservation planning ultimately takes place, I show that it is possible to model the niche at very high resolution (1 meter) in an alluvial area, allowing better prioritization for conservation.
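As a hedged illustration of the niche-comparison step (this is not the thesis's code or data), the overlap between a native and an invaded niche can be quantified with Schoener's D, a standard metric computed on occurrence densities gridded over the same environmental axes; values near 1 indicate a conserved niche, values near 0 a shifted one:

```python
# Illustrative sketch: Schoener's D overlap between two niches represented
# as occurrence densities on a shared environmental grid (e.g. temperature
# x precipitation bins). All numbers below are invented toy values.
import numpy as np

def schoener_d(native_density, invaded_density):
    """Schoener's D: 1 = identical niches, 0 = no overlap."""
    p = native_density / native_density.sum()   # normalize to probabilities
    q = invaded_density / invaded_density.sum()
    return 1.0 - 0.5 * np.abs(p - q).sum()

# Toy densities over a 3x3 environmental grid (hypothetical values):
native = np.array([[4., 2., 0.], [2., 1., 0.], [0., 0., 0.]])
invaded = np.array([[3., 2., 0.], [2., 2., 0.], [0., 1., 0.]])
print(f"Schoener's D = {schoener_d(native, invaded):.2f}")
```

In practice the densities would come from kernel-smoothed occurrence records projected onto the first axes of a climatic ordination, and the observed D would be compared against null randomizations.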
Abstract:
One of the key scrutiny issues of the coming energy era will be the environmental impact of fusion facilities managing about one kilogram of tritium. A potential change in committed-dose regulatory limits, together with the implementation of nuclear design principles for fusion facilities (As Low As Reasonably Achievable, ALARA; Defense in Depth, DiD), could strongly affect the cost of deploying fusion technology. Accurate modeling of the environmental transport of tritium forms (HT, HTO) is therefore of major interest for assessing the dosimetric impact of a fusion facility in accident scenarios. This paper considers several short-term releases of tritium forms (HT and HTO) to the atmosphere from a potential fusion reactor located in the Mediterranean Basin. The work models in detail the dispersion of these tritium forms and their dosimetric impact on selected environmental patterns, both inland and at sea, using real topography and forecast meteorological data fields (ECMWF/FLEXPART). We explore specific values of the HTO/HT ratio at different release heights and examine the influence of meteorological conditions on HTO behavior over 24 hours. For this purpose we used a coupled Lagrangian ECMWF/FLEXPART model that can follow real-time releases of tritium at 10, 30 and 60 meters, together with hourly observations of wind (and, in some cases, precipitation), to provide a short-range approximation of tritium cloud behavior. We assessed inhalation doses and HTO/HT ratios for a representative set of cases during winter 2010 and spring 2011 at the three release heights.
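A minimal sketch of the inhalation-dose bookkeeping behind such assessments, assuming the usual linear chain of concentration, breathing rate, exposure time and dose coefficient. The breathing rate and dose coefficients below are indicative ICRP-style order-of-magnitude values, not the paper's, and the concentrations are invented:

```python
# Hedged sketch (not the paper's model): committed effective dose from
# inhaling a tritium cloud. Constants are indicative, not authoritative.

BREATHING_RATE = 3.3e-4      # m^3/s, reference adult at light activity
DCF = {"HTO": 1.8e-11,       # Sv/Bq inhaled; HTO is roughly 10^4 times
       "HT":  1.8e-15}       # more radiotoxic per Bq than HT

def inhalation_dose(conc_bq_m3, hours, species="HTO"):
    """Committed effective dose (Sv) for a constant air concentration."""
    return conc_bq_m3 * BREATHING_RATE * hours * 3600.0 * DCF[species]

# Toy 24 h exposure to 1e5 Bq/m^3 of each form (hypothetical values):
for sp in ("HTO", "HT"):
    print(f"{sp}: {inhalation_dose(1e5, 24, sp):.2e} Sv")
```

The four-orders-of-magnitude gap between the two coefficients is why the HTO/HT partitioning of a release, and its evolution with meteorology, dominates the dosimetric outcome.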
Abstract:
Data centers are nowadays found in every sector of the world economy. They consist of thousands of servers, serving users globally, 24 hours a day, 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have undergone very significant development. The need to handle efficiently the computational demands of next-generation applications, together with the growing resource demand of traditional applications, has driven the rapid growth and proliferation of data centers. The main drawback of this capacity growth has been the fast and dramatic increase in the energy consumption of these facilities. In 2010, the electricity bill of data centers represented 1.3% of worldwide electricity consumption. In 2012 alone, data center power consumption grew by 63%, reaching 38 GW. A further growth of 17%, up to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions into the atmosphere. This PhD thesis tackles the energy problem by proposing proactive and reactive temperature- and energy-aware techniques that contribute to more efficient data centers. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and about the computing and cooling resources of the data center to optimize consumption. Furthermore, data centers are considered a crucial element within the framework of the running application, optimizing not only the consumption of the data center itself but the overall energy consumption of the application.
The main components of data center consumption are the computing power drawn by the IT equipment and the cooling needed to keep the servers within an operating temperature range that ensures their correct functioning. Because fan power grows with the cube of fan speed, solutions based on over-provisioning cold air to the server generally result in energy inefficiencies. On the other hand, higher processor temperatures lead to higher leakage power, because leakage depends exponentially on temperature. In addition, workload characteristics and resource allocation policies have an important impact on the tradeoffs between leakage current and cooling consumption. The first major contribution of this work is the development of power and temperature models that describe these leakage-cooling tradeoffs, together with strategies to minimize server consumption by jointly managing cooling and workload from a multivariate perspective. When we scale up to the data center level, we observe a similar behavior in terms of the leakage-cooling tradeoff. As the room temperature increases, cooling efficiency improves; however, this increase in room temperature raises the CPU temperature and, therefore, the leakage consumption as well. Moreover, the room exhibits very uneven, unbalanced thermal dynamics due to workload allocation and to the heterogeneity of the IT equipment. The second contribution of this thesis is the proposal of temperature- and heterogeneity-aware allocation techniques that jointly optimize the assignment of tasks and cooling to servers.
These strategies need to be backed by flexible models that can work at runtime and describe the system from a high level of abstraction. In the context of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, for example the data center. It is important to consider the relationships among all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the overall energy cost of the system. The third contribution of this thesis is the development of energy optimizations for the overall application by evaluating the cost of performing part of the required processing at other abstraction levels, ranging from the nodes to the data center, through load-balancing techniques. In summary, the work presented in this thesis contributes to leakage- and cooling-aware server modeling and optimization, and to data center modeling and the development of heterogeneity-aware allocation policies, and it develops mechanisms for the energy optimization of next-generation applications across several abstraction levels. ABSTRACT Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for resources in traditional applications, has driven the rapid proliferation and growth of data centers.
A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity use represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17%, to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves. However, as we increase room temperature, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
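The cubic-fan-power versus exponential-leakage tradeoff described above can be sketched numerically. All constants below are hypothetical stand-ins for models fitted to real measurements, but the qualitative result holds: the energy-optimal fan speed is neither the slowest nor the fastest setting.

```python
# Toy server power model (hypothetical constants, not the thesis's fitted
# models): dynamic power is fixed, leakage grows exponentially with CPU
# temperature, fan power grows with the cube of fan speed, and more airflow
# means a cooler CPU. Minimizing total power trades leakage against cooling.
import numpy as np

P_DYN = 80.0                                    # W, fixed workload power

def total_power(rpm):
    temp = 35.0 + 1.2e5 / rpm                   # toy thermal model (degC)
    leakage = 5.0 * np.exp(0.03 * (temp - 50.0))  # exponential in temperature
    fan = 5e-11 * rpm ** 3                      # cubic in fan speed
    return P_DYN + leakage + fan

rpms = np.arange(1000, 6001, 50)
best = rpms[np.argmin(total_power(rpms))]
print(f"optimal fan speed ~ {best} rpm")        # neither slowest nor fastest
```

Running the fans flat out wastes cubic fan power, while running them slowly lets leakage explode; the joint cooling-and-workload management the thesis proposes searches this kind of surface in more dimensions (per-core allocation, room temperature, heterogeneity).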
Abstract:
How does knowledge management (KM) by a government agency responsible for environmental impact assessment (EIA) potentially contribute to better environmental assessment and management practice? Staff members at government agencies in charge of the EIA process are knowledge workers who perform judgement-oriented tasks highly reliant on individual expertise, but also grounded in the agency's knowledge accumulated over the years. Part of an agency's knowledge can be codified and stored in an organizational memory, but is subject to decay or loss if not properly managed. The EIA agency operating in Western Australia was used as a case study. Its KM initiatives were reviewed, knowledge repositories were identified, and staff were surveyed to gauge the utilisation and effectiveness of such repositories in enabling them to perform EIA tasks. Key elements of KM are the preparation of substantive guidance and spatial information management. It was found that the treatment of cumulative impacts on the environment is very limited and that information derived from project follow-up is not properly captured and stored, and thus not used to create new knowledge and to improve practice and effectiveness. Other opportunities for improving organizational learning include the use of after-action reviews. The lessons about knowledge management in EIA practice gained from the Western Australian experience should be of value to agencies worldwide seeking to understand where best to direct their resources for their own knowledge repositories and environmental management practice. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Constructing highways in dense urban areas is always a challenge. In the Sao Paulo Metropolitan Region, heavy truck traffic contributes to clogging streets and expressways alike. As part of the traffic neither originates in nor heads to the region, a peripheral highway has been proposed to reduce traffic problems. This project, called the Rodoanel, is an expressway approximately 175 km long. The fact that the projected south and north sections would cross catchments that supply most of the metropolis' water demand was strongly disputed and made the environmental permitting process particularly difficult. The agency in charge commissioned a strategic environmental assessment (SEA) of a revamped project, calling it the Rodoanel Programme. However, the SEA report failed to take satisfactory account of significant strategic issues. Among these, the highway's potential effect of inducing urban sprawl over water protection zones is the most critical, as it later emerged as a hurdle to project licensing. The conclusion is that, particularly where no agreed-upon framework for SEA exists, when vertical tiering with downstream project EIA is sought, careful scoping of strategic issues is more than necessary. If agreement on 'what is strategic' is not reached and recognized by influential stakeholders, the unsettled conflicts will be transferred to the project EIA. In such a context, SEA will have added another loop to the usually long road to project approval. (c) 2008 Elsevier Inc. All rights reserved.
Abstract:
Certification of an ISO 14001 Environmental Management System (EMS) is currently an important requirement for enterprises wishing to sell their products in a global market. The system's structure is based on environmental impact evaluation (EIE). However, if an erroneous or inadequate methodology is applied, the entire process may be jeopardized. Many methodologies have been developed for carrying out EIEs; some of them are fairly complex and unsuitable for EMS implementation in an organizational context, principally when small and medium-sized enterprises (SMEs) are involved. The methodology proposed here for EIE is part of a model for implementing an EMS. The methodological approach used was a qualitative exploratory research method based on sources of evidence such as document analysis, semi-structured interviews and participant observation. By adopting a cooperative implementation model based on systems engineering theory, difficulties relating to the implementation of the sub-system were overcome, thus encouraging SMEs to implement EMS. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
OBJECTIVE: To assess the impact of town planning, infrastructure, sanitation and rainfall on the bacteriological quality of domestic water supplies. METHODS: Water samples obtained from deep and shallow wells, boreholes and public taps were cultured to determine the most probable number (MPN) of Escherichia coli and total coliforms using the multiple-tube technique. The presence of enteric pathogens was detected using selective and differential media. Samples were collected during periods of both heavy and low rainfall, and from municipalities that differ with respect to infrastructure planning, town planning and sanitation. RESULTS: Contamination of treated, pipe-distributed water was related to the distance of the collection point from a utility station. Faults in pipelines increased the rate of contamination (p<0.5), mostly in densely populated areas with dilapidated infrastructure. Wastewater from drains was the main source of contamination of pipe-borne water. Shallow wells were more contaminated than deep wells and boreholes, and contamination was higher during periods of heavy rainfall (p<0.05). E. coli and enteric pathogens were isolated from contaminated supplies. CONCLUSIONS: Poor town planning, dilapidated infrastructure and the indiscriminate siting of wells and boreholes contributed to the low bacteriological quality of domestic water supplies. Rainfall accentuated the impact.
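As a hedged sketch of the multiple-tube MPN technique named in the methods (the tube counts below are a textbook-style example, not the study's data), the most probable number is the maximum-likelihood density solving a simple equation over the dilution series:

```python
# Illustrative MPN estimator: at the maximum-likelihood density lam
# (organisms/mL), with t_i tubes of volume v_i and g_i positives,
#   sum_i g_i * v_i / (exp(lam * v_i) - 1) == sum_i (t_i - g_i) * v_i
# The left side decreases in lam, so bisection finds the root.
import math

def mpn_per_ml(volumes, tubes, positives, lo=1e-6, hi=10.0):
    rhs = sum((t - g) * v for v, t, g in zip(volumes, tubes, positives))

    def f(lam):
        return sum(g * v / (math.exp(lam * v) - 1.0)
                   for v, g in zip(volumes, positives) if g) - rhs

    for _ in range(200):                 # bisection on the decreasing f
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 3 tubes each at 10, 1 and 0.1 mL; 3, 1 and 0 positive tubes.
# This lands close to the tabulated value of 43 per 100 mL:
print(f"MPN ~ {100 * mpn_per_ml([10, 1, 0.1], [3, 3, 3], [3, 1, 0]):.0f} per 100 mL")
```

Published MPN tables are precomputed solutions of this equation for standard tube configurations, which is why laboratory practice reads the result off a table rather than solving it each time.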
Abstract:
Social concern about environmental impacts on air, water and soil has grown along with the accelerated growth of pig production. This study aims to characterize air contamination caused by fungi and particles in swine production and, additionally, to draw conclusions about their possible environmental impact. Fifty-six air samples of 50 liters each were collected using the impaction method. Air sampling and particulate matter concentration measurements were performed indoors and also on the outdoor premises. Simultaneously, temperature and relative humidity were monitored according to the International Standard ISO 7726:1998. Aspergillus versicolor presented the highest indoor spore counts (>2000 CFU/m3) and the highest overall prevalence (40.5%), followed by Scopulariopsis brevicaulis (17.0%) and Penicillium sp. (14.1%). All the swine farms showed indoor fungal species different from those identified outdoors, and the most frequent genera outdoors were also different from those indoors. The particle size distribution showed the same tendency on all swine farms (higher concentration values for the PM5 and PM10 sizes). Through the ratio between indoor and outdoor values, it was possible to conclude that CFU/m3 and particle levels had a possible impact on the outdoor measurements.
Abstract:
Journal of Cleaner Production, nº 16, p. 639-645
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Energy and Bioenergy.
Abstract:
Aromatic amines are widely used industrial chemicals; their major sources in the environment include several chemical industry sectors such as oil refining, synthetic polymers, dyes, adhesives, rubber, perfume, pharmaceuticals, pesticides and explosives. They also result from diesel exhaust, the combustion of wood chips and rubber, and tobacco smoke. Some aromatic amines are generated during cooking as well, especially of grilled meat and fish. The intensive use and production of these compounds explains their occurrence in the environment, in air, water and soil, thereby creating a potential for human exposure. Since aromatic amines are potentially carcinogenic and toxic agents, they constitute an important class of environmental pollutants of enormous concern, whose efficient removal is a crucial task for researchers; several methods have therefore been investigated and applied. In this chapter the types and general properties of aromatic amine compounds are reviewed. As aromatic amines continuously enter the environment from various sources and have been designated high-priority pollutants, their presence in the environment must be monitored at concentration levels below 30 mg L-1, compatible with the limits allowed by regulation. Consequently, the most relevant analytical methods to determine the aromatic amine composition of environmental matrices, and to monitor their degradation, are essential and are presented. These include spectroscopy, namely UV/visible and Fourier-transform infrared (FTIR) spectroscopy; chromatography, in particular thin-layer (TLC), high-performance liquid (HPLC) and gas chromatography (GC); capillary electrophoresis (CE); mass spectrometry (MS); and combinations of different methods, including GC-MS, HPLC-MS and CE-MS. Choosing the best method depends on availability, cost, detection limit and sample concentration, as samples sometimes need to be concentrated or pretreated.
However, combined methods may give more complete results based on complementary information. The environmental impact, toxicity and carcinogenicity of many aromatic amines have been reported and are also emphasized in this chapter. Finally, conventional aromatic amine degradation processes and alternative biodegradation processes are highlighted. Parameters affecting biodegradation, the role of different electron acceptors in aerobic and anaerobic biodegradation, and kinetics are discussed. Conventional processes, including extraction, adsorption onto activated carbon, chemical oxidation, advanced oxidation, electrochemical techniques and irradiation, suffer from drawbacks including high costs, the formation of hazardous by-products and low efficiency. Biological processes, taking advantage of processes occurring naturally in the environment, have been developed and tested, and have proved an economical, energy-efficient and environmentally feasible alternative. Aerobic biodegradation is one of the most promising techniques for aromatic amine remediation, but has the drawback that aromatic amines may autooxidize, rather than degrade, once exposed to oxygen. Higher costs, especially due to the power consumed by aeration, can also limit its application. Anaerobic degradation is a novel path for treating a wide variety of aromatic amines, including in industrial wastewater, and is discussed. However, some amines are difficult to degrade under anaerobic conditions, so other electron acceptors, such as nitrate, iron, sulphate, manganese and carbonate, have been tested as alternatives.
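Degradation kinetics of the kind discussed in the chapter are often described, at their simplest, by a first-order model. The rate constant below is hypothetical and not taken from the chapter; this is only a sketch of the arithmetic:

```python
# Illustrative first-order degradation kinetics: dC/dt = -k*C gives
# C(t) = C0 * exp(-k*t) and a half-life of ln(2)/k. The rate constant
# is an invented example value, not a measured one.
import math

def concentration(c0, k_per_day, t_days):
    """Residual concentration after t_days of first-order decay."""
    return c0 * math.exp(-k_per_day * t_days)

k = 0.23                                  # 1/day, hypothetical
half_life = math.log(2) / k
print(f"half-life = {half_life:.1f} days")
print(f"C after 10 d from 30 mg/L: {concentration(30.0, k, 10):.1f} mg/L")
```

Fitting k under different electron acceptors (nitrate, sulphate, iron, and so on) is one way the kinetic comparisons mentioned above are made quantitative.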