993 results for Varying environment


Relevância:

30.00%

Publicador:

Resumo:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


The system formed by the F ring and two close satellites, Prometheus and Pandora, has been analysed since Voyager visited the planet Saturn. During the ring plane crossing in 1995 the satellites were found in positions different from those predicted by the Voyager data. Besides their mutual effects, Prometheus and Pandora are also disturbed by a massive F ring. Showalter et al. [Icarus 100 (1992) 394] proposed that the core of the ring has a mass corresponding to a moonlet varying in size from 15 to 70 km in radius, which can prevent the ring from spreading due to dissipative forces such as Poynting-Robertson drag and collisions. We have divided this work into two parts. Firstly, we analysed the secular interactions between Prometheus, Pandora and a massive F ring using secular theory. Our results show the variation in eccentricity and inclination of the satellites and the F ring, taking into account a massive ring corresponding to moonlets of different sizes. There is also a population of dust particles in the ring in the company of moonlets of different sizes [Icarus 109 (1997) 304]. We also analysed the behaviour of these particles under the effects of Poynting-Robertson drag and radiation pressure. Our results show that the time scale for a dust particle to leave the ring is much shorter than previously predicted, even in the presence of a coorbital moonlet. This result does not agree with the confinement model proposed by Dermott et al. [Nature 284 (1980) 309]. In 2004, the Cassini mission will perform repeated observations of the whole system, including observations of the satellites and the F ring environment. These data will help us to better understand this system. (C) 2003 COSPAR. Published by Elsevier Ltd. All rights reserved.
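The relative strength of radiation pressure on a dust grain, which together with Poynting-Robertson drag sets the removal timescale discussed above, is commonly expressed through the ratio β of radiation force to solar gravity. As an illustrative sketch (not the paper's own computation), the standard approximation of Burns, Lamy & Soter (1979) for a spherical grain is β ≈ 5.7×10⁻⁵ Q_pr/(ρ s), with density ρ in g/cm³ and grain radius s in cm:

```python
# Illustrative sketch (not from the paper): ratio of solar radiation pressure
# to solar gravity for a spherical grain, beta = 5.7e-5 * Qpr / (rho * s),
# with rho in g/cm^3 and grain radius s in cm (Burns, Lamy & Soter 1979).

def beta(s_cm, rho=1.0, qpr=1.0):
    """Radiation-pressure-to-gravity ratio for a spherical dust grain."""
    return 5.7e-5 * qpr / (rho * s_cm)

# A 1-micron grain (s = 1e-4 cm, rho ~ 1 g/cm^3) feels a radiation force
# more than half as strong as solar gravity; a millimetre grain barely does.
for s in (1e-4, 1e-2, 1e-1):
    print(f"s = {s:g} cm -> beta = {beta(s):.2e}")
```

This steep size dependence is why micron-sized ring dust is removed quickly while the hypothesised kilometre-scale moonlet is untouched by radiation forces.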


[EN] We describe the coupling between upper ocean layer variability and size-fractionated phytoplankton distribution in the non-nutrient-limited Bransfield Strait region (BS) of Antarctica. For this purpose we use hydrographic and size-fractionated chlorophyll a data from a transect that crossed 2 fronts and an eddy, together with data from 3 stations located in a deeply mixed region, the Antarctic Sound (AS). In the BS transect, small phytoplankton (<20 μm equivalent spherical diameter [ESD]) accounted for 80% of total chl a and their distribution appeared to be linked to cross-frontal variability. On the deepening upper mixed layer (UML) sides of both fronts we observed a deep subducting column-like structure of small phytoplankton biomass. On the shoaling UML sides of both fronts, where there were signs of restratification, we observed a local shallow maximum of small phytoplankton biomass. We propose that this observed phytoplankton distribution may be a response to the development of frontal vertical circulation cells. In the deep, turbulent environment of the AS, larger phytoplankton (>20 μm ESD) accounted for 80% of total chl a. The proportion of large phytoplankton increased as the depth of the upper mixed layer (Z_UML), and the corresponding rate of vertical mixing, increased. We hypothesize that this change in phytoplankton composition with varying Z_UML is related to competition for light, and results from modification of the light regime caused by vertical mixing.


The built environment is the part of the physical environment made by people and for people. Because the built environment is such a ubiquitous component of the environment, it acts as an important pathway in determining health outcomes. Zoning, a type of urban planning policy, is one of the most important mechanisms connecting the built environment to public health. This policy analysis research paper explores how zoning regulations in Austin, Texas promote or prohibit the development of a healthy built environment. A systematic literature review was obtained from Active Living Research, which contained literature published on the relationships between the built environment, physical activity, and health. The results of these studies identified four components of the built environment associated with health: access to recreational facilities, sprawl and residential density, land use mix, and sidewalks and their walkability. A hierarchy analysis was then performed to demonstrate the association between these aspects of the built environment and health outcomes such as obesity, cardiovascular disease, and general health. Once these associations had been established, the components of the built environment were adapted into the evaluation criteria used to conduct a public health analysis of Austin's zoning ordinance. A total of eighty-eight regulations were identified as related to these components and their varying associations with human health. Eight regulations were projected to have a negative association with health, three would have both a positive and a negative association simultaneously, and nine were indeterminable with the information obtained through the literature review. The remaining sixty-eight regulations were projected to be beneficially associated with human health.
Therefore, it was concluded that Austin's zoning ordinance would have an overwhelmingly positive impact on the public's health, based on the identified associations between the built environment and health outcomes.
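The regulation counts above can be tallied directly; the category labels in this quick check are ours, not the paper's:

```python
# Quick arithmetic check of the regulation counts reported above
# (the category labels are ours, paraphrasing the abstract).
counts = {"positive": 68, "negative": 8, "both": 3, "indeterminable": 9}
total = sum(counts.values())
print(total)                # 88 regulations in all
share = counts["positive"] / total
print(f"{share:.0%}")       # 77% projected as beneficial
```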


The use of coal for fuel in place of oil and natural gas has been increasing in the United States. Typically, users store their reserves of coal outdoors in large piles, and rainfall on the coal creates runoff which may contain materials hazardous to the environment and the public's health. To study this hazard, rainfall on model coal piles was simulated using deionized water and four coals of varying sulfur content. The simulated surface runoffs were collected during 9 rainfall simulations spaced 15 days apart. The runoffs were analyzed for 13 standard water quality parameters, extracted with organic solvents and then analyzed with capillary column GC/MS, and the extracts were tested for mutagenicity with the Ames Salmonella microsomal assay and for clastogenicity with Chinese hamster ovary cells. The runoffs from the high-sulfur coals and the lignite exhibited extremes of pH (acidity), specific conductance, chemical oxygen demand, and total suspended solids; the low-sulfur coal runoffs did not exhibit these extremes. Without treatment, effluents from these high-sulfur coals and lignite would not comply with federal water quality guidelines. Most extracts of the simulated surface runoffs contained at least 10 organic compounds, including polycyclic aromatic hydrocarbons, their methyl and ethyl homologs, olefins, paraffins, and some terpenes. The concentrations of these compounds were generally less than 50 µg/L in most extracts. Some of the extracts were weakly mutagenic and affected both DNA-repair-proficient and -deficient Salmonella strains. The addition of S9 decreased the effect significantly. Extracts of runoffs from the low-sulfur coal were not mutagenic. All extracts were clastogenic. Extracts of runoffs from the high-sulfur coals were both clastogenic and cytotoxic; those from the low-sulfur coal and the lignite were less clastogenic and not cytotoxic. Clastogenicity occurred with and without S9 activation.
Chromosomal lesions included gaps, breaks and exchanges. These data suggest a relationship between the sulfur content of a coal and both its mutagenicity and its clastogenicity. In view of the data presented in this study, runoff from actual coal piles should be investigated for possible genotoxic effects.


This study characterises the shape of the flow separation zone (FSZ) and wake region over large asymmetric bedforms under tidal flow conditions. High resolution bathymetry, flow velocity and turbulence data were measured along two parallel transects in a tidal channel covered with bedforms. The field data are used to verify the applicability of a numerical model for a systematic study using the Delft3D modelling system and test the model sensitivity to roughness length. Three experiments are then conducted to investigate how the FSZ size and wake extent vary depending on tidally-varying flow conditions, water levels and bathymetry. During the ebb, a large FSZ occurs over the steep lee side of each bedform. During the flood, no flow separation develops over the bedforms with flat crests; however, a small FSZ is observed over the steepest part of the crest of some bedforms, where the slope is locally up to 15°. Over a given bedform morphology and constant water levels, no FSZ occurs for velocity magnitudes smaller than 0.1 m s^-1; as the flow accelerates, the FSZ reaches a stable size for velocity magnitudes greater than 0.4 m s^-1. The shape of the FSZ is not influenced by changes in water levels. On the other hand, variations in bed morphology, as recorded from the high-resolution bathymetry collected during the tidal cycle, influence the size and position of the FSZ: a FSZ develops only when the maximum lee side slope over a horizontal distance of 5 m is greater than 10°. The height and length of the wake region are related to the length of the FSZ. The total roughness along the transect lines is an order of magnitude larger during the ebb than during the flood due to flow direction in relation to bedform asymmetry: during the ebb, roughness is created by the large bedforms because a FSZ and a wake develop over the steep lee side.
The results add to the understanding of hydrodynamics of natural bedforms in a tidal environment and may be used to better parameterise small-scale processes in large-scale studies.
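The thresholds reported above can be collected into a small classifier. This is a toy encoding of the abstract's stated criteria, not part of the study itself; the function name and the three-state labels are ours:

```python
# Toy encoding of the thresholds reported in the abstract (function and
# labels are ours): a flow separation zone (FSZ) is taken to develop only
# when the maximum lee-side slope over 5 m exceeds 10 degrees and the
# velocity magnitude is at least 0.1 m/s; its size stabilises above 0.4 m/s.

def fsz_state(max_lee_slope_deg, speed_ms):
    """Classify flow-separation behaviour over a bedform."""
    if max_lee_slope_deg <= 10 or speed_ms < 0.1:
        return "no FSZ"
    if speed_ms < 0.4:
        return "growing FSZ"
    return "stable FSZ"

print(fsz_state(15, 0.05))  # slow flow: no separation
print(fsz_state(8, 0.5))    # flat crest: no separation even at flood speeds
print(fsz_state(15, 0.5))   # steep lee side, fast ebb flow: stable FSZ
```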


Fossil fish teeth from pelagic open ocean settings are considered a robust archive for preserving the neodymium (Nd) isotopic composition of ancient seawater. However, using fossil fish teeth as an archive to reconstruct seawater Nd isotopic compositions in different sedimentary redox environments and in terrigenous-dominated, shallow marine settings is less well established. To address these uncertainties, fish tooth and sediment samples from a middle Eocene section deposited proximal to the East Antarctic margin at Integrated Ocean Drilling Program Site U1356 were analyzed for major and trace element geochemistry, and Nd isotopes. Major and trace element analyses of the sediments reveal changing redox conditions throughout deposition in a shallow marine environment. However, variations in the Nd isotopic composition and rare earth element (REE) patterns of the associated fish teeth do not correspond to redox changes in the sediments. REE patterns in fish teeth at Site U1356 carry a typical mid-REE-enriched signature. However, a consistently positive Ce anomaly marks a deviation from a purely authigenic origin of the REEs in the fish teeth. Neodymium isotopic compositions of cleaned and uncleaned fish teeth fall between modern seawater and local sediments and hence could be authigenic in nature, but could also be influenced by sedimentary fluxes. We conclude that the fossil fish tooth Nd isotope proxy is not sensitive to moderate changes in pore water oxygenation. However, combined studies on sediments, pore waters, fish teeth and seawater are needed to fully understand processes driving the reconstructed signature from shallow marine sections in proximity to continental sources. This article is protected by copyright. All rights reserved.
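The Ce anomaly invoked above is conventionally quantified as Ce/Ce*, comparing shale-normalised Ce with the value expected from its neighbours La and Pr. One common definition uses the geometric mean of the neighbours; the formula choice and the sample values below are ours, not the paper's:

```python
# One common way to quantify a Ce anomaly (formula choice and sample
# values are ours): Ce/Ce* compares shale-normalised Ce with the geometric
# mean of its neighbours La and Pr. Values > 1 indicate a positive anomaly,
# values < 1 a negative anomaly (typical of oxic seawater).

def ce_anomaly(la_n, ce_n, pr_n):
    """Ce/Ce* from shale-normalised La, Ce and Pr concentrations."""
    return ce_n / (la_n * pr_n) ** 0.5

print(ce_anomaly(1.0, 1.5, 1.0))  # > 1: positive Ce anomaly
print(ce_anomaly(1.0, 0.5, 1.0))  # < 1: negative Ce anomaly
```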


Context. The ESA Rosetta spacecraft, currently orbiting around comet 67P/Churyumov-Gerasimenko, has already provided in situ measurements of the dust grain properties from several instruments, particularly OSIRIS and GIADA. We propose adding value to those measurements by combining them with ground-based observations of the dust tail to monitor the overall, time-dependent dust-production rate and size distribution. Aims. To constrain the dust grain properties, we take Rosetta OSIRIS and GIADA results into account, and combine OSIRIS data during the approach phase (from late April to early June 2014) with a large data set of ground-based images that were acquired with the ESO Very Large Telescope (VLT) from February to November 2014. Methods. A Monte Carlo dust tail code, which has already been used to characterise the dust environments of several comets and active asteroids, has been applied to retrieve the dust parameters. Key properties of the grains (density, velocity, and size distribution) were obtained from Rosetta observations: these parameters were used as input of the code to considerably reduce the number of free parameters. In this way, the overall dust mass-loss rate and its dependence on the heliocentric distance could be obtained accurately. Results. The dust parameters derived from the inner coma measurements by OSIRIS and GIADA and from distant imaging using VLT data are consistent, except for the power index of the size-distribution function, which is alpha = -3, instead of alpha = -2, for grains smaller than 1 mm. This is possibly linked to the presence of fluffy aggregates in the coma. The onset of cometary activity occurs at approximately 4.3 AU, with a dust production rate of 0.5 kg/s, increasing up to 15 kg/s at 2.9 AU. This implies a dust-to-gas mass ratio varying between 3.8 and 6.5 for the best-fit model when combined with water-production rates from the MIRO experiment.
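The difference between size-distribution indices alpha = -2 and alpha = -3 matters because it controls where the dust mass resides. For a differential distribution dn/ds ∝ s^alpha, the mass per size interval scales as s^(3+alpha), so the mass fraction below a given size is analytic. The sketch below is our illustration of that point, with hypothetical size bounds, not the paper's calculation:

```python
from math import log

# Illustrative calculation (ours, not the paper's): for a differential size
# distribution dn/ds ~ s**alpha, mass per size interval scales as
# s**(3 + alpha), so the cumulative mass fraction below s_split is analytic.

def mass_fraction_below(s_split, s_min, s_max, alpha):
    p = 4 + alpha  # exponent after integrating s**(3 + alpha)
    if p == 0:
        return log(s_split / s_min) / log(s_max / s_min)
    return (s_split**p - s_min**p) / (s_max**p - s_min**p)

# Grains from 1 micron to 1 cm, split at 1 mm (sizes in cm):
for alpha in (-2.0, -3.0):
    f = mass_fraction_below(0.1, 1e-4, 1.0, alpha)
    print(f"alpha = {alpha}: {f:.1%} of the mass below 1 mm")
```

The steeper alpha = -3 slope puts roughly ten times more of the total mass into sub-millimetre grains than alpha = -2 does, which is why the index matters for the inferred mass-loss rate.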


In the current space environment there is a large number of micrometeoroids and man-made space debris, which poses a risk to the safety of space operations. The situation worsens continuously because of in-orbit collisions of space debris and new satellite launches. A significant part of this debris consists of dead satellites and satellite fragments resulting from explosions and collisions of objects in orbit. Mitigating this problem has become a matter of priority concern for all institutions involved in space operations. Among the existing solutions, electrodynamic tethers (EDT) provide an efficient device for the rapid de-orbiting of satellites in low Earth orbit (LEO) at the end of their useful life. Research on electrodynamic tethers has been very fruitful since the 1970s. Thanks to theoretical studies and to missions demonstrating tether operation in orbit, this technology has developed very rapidly in recent decades. During this period of research, multiple technical problems of various kinds have been identified and overcome. Much of the basic operation of an EDT system depends on its ability to survive micrometeoroids and space debris. A tether can be completely severed by a particle above some minimum diameter. If cut by a particle impact, a tether could itself become a risk to other operating satellites. Unfortunately, after several in-orbit demonstrations, it has not been possible to establish how serious this problem is for system operation. This thesis presents a theoretical analysis of the survivability of tethers in space.
This study demonstrates the advantages of tethers of rectangular cross-section (tapes), in terms of survival probability during the mission, over conventional tethers (wires of circular cross-section). Because of its particular geometry (length much greater than cross-section), a tether may run a relatively high risk of being severed by a single impact with a small particle. An analytical calculation of the fatal impact rate for a cylindrical and a tape tether of equal length and mass, considering the space debris particle flux of NASA's ORDEM2000 model, shows a higher survival probability for tapes. This analysis has been compared with a numerical calculation using the ORDEM2000 and ESA's MASTER2005 flux models. It is also shown that, for equal time in orbit, a tape has a survival probability one and a half orders of magnitude higher than a cylindrical tether of equal mass and length. Moreover, de-orbiting a tape from a given altitude is much faster, because its larger perimeter allows it to capture more current. This is an additional factor that increases the tape's survival probability, since it is exposed to possible debris impacts for less time. For this reason it can be stated that, in a practical sense, the survivability of the tape is quite high compared with that of the cylindrical tether. The second objective of this work is the development of an analytical model, improving the flux approximation of ORDEM2000 and MASTER2009, that allows the fatal impact rate per year to be calculated accurately for a tape over a range of altitudes and inclinations rather than for one particular set of conditions. The number of cuts over a given time is obtained as a function of the tape geometry and the orbit properties.
For the same conditions, the analytical model is compared with the results of the numerical analysis. This scalable model has been essential for optimising tether design for satellite de-orbit missions, varying the satellite mass and the initial orbital altitude. The survivability model has been used to build an objective function for optimising tether design. The objective function is the product of the tether-to-satellite mass ratio and the number of cuts over a given time. Combining the survivability model with a tether dynamic equation involving the Lorentz force eliminates time and expresses the objective function in terms of the tape geometry and the orbit properties. This optimisation model led to the development of a software tool that is in the process of registration by UPM. The final stage of this study is the estimation of the number of fatal impacts on a tape, using for the first time an experimental ballistic limit equation. This equation was developed for tapes and represents the effects of both impact velocity and impact angle. The results show that the tape is highly resistant to space debris impacts and that, for a tape with a defined cross-section, the number of critical impacts due to non-trackable particles is significantly low. ABSTRACT The current space environment, consisting of man-made debris and tiny meteoroids, poses a risk to safe operations in space, and the situation is continuously deteriorating due to in-orbit debris collisions and to new satellite launches. Among these debris a significant portion is due to dead satellites and fragments of satellites resulting from explosions and in-orbit collisions.
Mitigation of space debris has become an issue of first concern for all the institutions involved in space operations. Bare electrodynamic tethers (EDT) can provide an efficient mechanism for rapid de-orbiting of defunct satellites from low Earth orbit (LEO) at end of life. Research on EDTs has been a fruitful field since the 1970s. Thanks to both theoretical studies and in-orbit demonstration missions, this technology has developed very rapidly over the following decades. During this period, several technical issues were identified and overcome. The core functionality of an EDT system greatly depends on its survivability against micrometeoroids and orbital debris, and a tether can itself become a kind of debris for other operating satellites if cut by a particle impact; however, the evidence on this issue remains inconclusive and conflicting even after a number of space demonstrations. A tether can be completely cut by debris above some minimal diameter. This thesis presents a theoretical analysis of the survivability of tethers in space. The study demonstrates the advantages of tape tethers over conventional round wires, particularly regarding survivability during the mission. Because of its particular geometry (length very much larger than cross-sectional dimensions), a tether may have a relatively high risk of being severed by the single impact of small debris. As a first approach to the problem, survival probability has been compared for a round and a tape tether of equal mass and length. The rates of fatal impact of orbital debris on round and tape tethers, evaluated with an analytical approximation to the debris flux modelled by NASA's ORDEM2000, show a much higher survival probability for tapes. A comparative numerical analysis using the debris flux models ORDEM2000 and ESA's MASTER2005 shows good agreement with the analytical result.
It also shows that, for a given time in orbit, a tape has a probability of survival about one and a half orders of magnitude higher than a round tether of equal mass and length. Because de-orbiting from a given altitude is much faster for the tape, thanks to its larger perimeter, its probability of survival in a practical sense is quite high. As the next step, an analytical model derived in this work allows the fatal impact rate per year for a tape tether to be calculated accurately. The model uses power laws for debris-size ranges, in both the ORDEM2000 and MASTER2009 debris flux models, to calculate tape tether survivability at different LEO altitudes. The analytical model, which depends on tape dimensions (width, thickness) and orbital parameters (inclination, altitude), is then compared with fully numerical results for different orbit inclinations, altitudes and tape widths for both ORDEM2000 and MASTER2009 flux data. This scalable model not only estimates the fatal impact count but has proved essential in optimising tether design for satellite de-orbit missions, varying satellite mass and initial orbital altitude and inclination. Within the frame of this dissertation, a simple analysis has finally been presented that, thanks to the survivability model developed, shows the scalable property of tape tethers and allows de-orbit performance to be analysed and compared over a large range of satellite masses and orbit properties. The work explicitly shows the product of the tether-to-satellite mass ratio and the fatal impact count as a function of tether geometry and orbital parameters. Combining the tether dynamic equation involving Lorentz drag with the space debris impact survivability model eliminates time from the expression. Hence the product is independent of the tether de-orbit history and depends only on mission constraints and tether length, width and thickness.
This optimization model finally led to the development of a user-friendly software tool named BETsMA, currently in the process of registration by UPM. As a final step, an estimation of the fatal impact rate on a tape tether has been made, using for the first time an experimental ballistic limit equation that was derived for tapes and accounts for the effects of both impact velocity and impact angle. It is shown that tape tethers are highly resistant to space debris impacts and that, for a tape tether with a defined cross-section, the number of critical events due to impacts with non-trackable debris is always significantly low.
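The geometric intuition behind the tape's advantage can be sketched with a toy model. All assumptions below are ours, not the thesis's: debris flux above diameter D is taken as a simple power law F(>D) ∝ D^-q, and a tether is assumed severed only by particles larger than some fraction of its wide transverse dimension (diameter d for a round wire, width w for a tape). For equal mass and length, w·h = πd²/4, so a thin tape is much wider than the equivalent wire and far fewer debris particles are lethal to it:

```python
from math import pi

# Toy comparison (our assumptions, not the thesis model): cumulative debris
# flux above diameter D is modelled as F(>D) ~ D**-q, and a tether is assumed
# severed only by particles larger than its wide transverse dimension
# (diameter d for a round wire, width w for a tape). Equal mass per unit
# length requires w*h = pi*d**2/4, so a thin tape has w >> d.

def fatal_rate(lethal_size_ratio, q=2.5):
    """Relative fatal-impact rate for a lethal-size threshold (arbitrary units)."""
    return lethal_size_ratio**-q

d = 1.0e-3               # round-wire diameter, m (hypothetical)
h = 5.0e-5               # tape thickness, m (hypothetical)
w = pi * d**2 / (4 * h)  # tape width giving equal mass per unit length

rate_round = fatal_rate(1.0)     # reference: lethal size = d
rate_tape = fatal_rate(w / d)    # lethal size scaled up by w/d
print(f"tape is {w / d:.1f}x wider; fatal-rate ratio = {rate_tape / rate_round:.3g}")
```

The exact survival advantage depends on the flux slope q and the real ballistic limit, which is why the thesis resorts to ORDEM2000/MASTER2009 fluxes and an experimental ballistic limit equation; the toy model only shows why wider geometry at fixed mass helps.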


Awareness of antibiotics in wastewaters and aquatic ecosystems is growing as investigations into alternate pollutants increase and analytical techniques for detecting these chemicals improve. The presence of three antibiotics (ciprofloxacin, norfloxacin and cephalexin) was evaluated in both sewage effluent and environmental waters downstream from a sewage discharge. Bacteria cultured from the sewage bioreactor and receiving waters were tested for resistance against six antibiotics (ciprofloxacin, tetracycline, ampicillin, trimethoprim, erythromycin and trimethoprim/sulphamethoxazole), and the effects of short-term exposure (24 h) to antibiotics on bacterial denitrification rates were examined. Antibiotics were detected entering the sewage treatment plant, with varying levels of removal during the treatment process. Antibiotics were also detected in effluent entering receiving waters and were detectable 500 m from the source. Among the bacteria cultured from the sewage bioreactor, resistance was displayed against all six antibiotics tested, and bacteria cultured from receiving waters were resistant against two of the antibiotics tested. Rates of denitrification were observed to decrease in response to some antibiotics and not to others, though this was only observed at concentrations exceeding those likely to be found in the environment. Findings from this preliminary research indicate that antibiotics are entering our aquatic systems and pose a potential threat to ecosystem function and potentially to human health. (c) 2004 Elsevier Ltd. All rights reserved.


This paper aims to help supply chain managers to determine the value of retailer-supplier partnership initiatives beyond information sharing (IS) according to their specific business environment under time-varying demand conditions. For this purpose, we use integer linear programming models to quantify the benefits that can be accrued by a retailer, a supplier and the system as a whole from a shift in inventory ownership and a shift in decision-making power, compared with IS alone. The results of a detailed numerical study pertaining to a static time horizon reveal that the shift in inventory ownership provides system-wide cost benefits in specific settings, particularly when it induces the retailer to order larger quantities and the supplier also prefers such orders due to significantly high setup and shipment costs. We observe that the relative benefits of the shift in decision-making power are always higher than those of the shift in inventory ownership under all conditions. The value of the shift in decision-making power is greater than IS particularly when the variability of the underlying demand is low and the time-dependent variation in production cost is high. However, when the shipment cost is negligible and the order issuing efficiency of the supplier is low, the cost benefits of the shift in decision-making power beyond IS are not significant. © 2012 Taylor & Francis.
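The paper's integer linear programming models are not reproduced in the abstract. As a stand-in, the setup-versus-holding trade-off that drives such models under time-varying demand can be illustrated with the classic single-echelon Wagner-Whitin dynamic program; all cost figures below are hypothetical:

```python
# The paper's ILP formulations are not given above; this classic
# Wagner-Whitin dynamic program illustrates the setup-vs-holding trade-off
# that such lot-sizing models capture under time-varying demand.
# All numbers are hypothetical.

def wagner_whitin(demand, setup_cost, holding_cost):
    """Minimum total cost of meeting each period's demand, no backlogging."""
    n = len(demand)
    best = [0.0] * (n + 1)  # best[t] = optimal cost for periods 0..t-1
    for t in range(1, n + 1):
        options = []
        for j in range(t):  # last order is placed in period j, covering j..t-1
            hold = sum(holding_cost * (k - j) * demand[k] for k in range(j, t))
            options.append(best[j] + setup_cost + hold)
        best[t] = min(options)
    return best[n]

# With a high setup cost, one large up-front order (100 setup + 50 + 20
# holding = 170) beats two setups (at least 200 in setups alone):
print(wagner_whitin([20, 50, 10], setup_cost=100, holding_cost=1))
```

This is exactly the kind of trade-off that makes larger retailer orders attractive when setup and shipment costs are high, as the numerical study above observes.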


Prior research has established that idiosyncratic volatility of the securities prices exhibits a positive trend. This trend and other factors have made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: (a) increase the efficiency of the portfolio optimization process, (b) implement large-scale optimizations, and (c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC - GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected return of the securities are integrated into the technique through Monte Carlo simulations. Instead of representing the expected returns as deterministic values, they are assigned simulated values based on their historical measures. The time-series of the securities are fitted into a probability distribution that matches the time-series characteristics using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P500 securities as the base, 2000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. Choosing the Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the comparison for benchmarking, the new greedy technique clearly outperforms others using a sample of the S&P500 and the Russell 1000 securities.
The resulting improvements in performance are consistent among five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
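The dissertation's specific greedy algorithm and IDCC-GARCH covariance estimates are not spelled out in the abstract. As a minimal sketch of the general idea, the toy allocator below repeatedly gives a small weight increment to whichever asset yields the lowest portfolio variance, over a fixed (hypothetical) covariance matrix standing in for the time-varying one:

```python
# A minimal sketch of greedy portfolio-weight allocation (our toy version,
# not the dissertation's algorithm; a fixed covariance matrix stands in for
# the IDCC-GARCH estimate): at each step, give a small weight increment to
# whichever asset produces the lowest resulting portfolio variance.

def portfolio_variance(w, cov):
    n = len(w)
    return sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))

def greedy_min_variance(cov, steps=20):
    n = len(cov)
    w = [0.0] * n
    inc = 1.0 / steps
    for _ in range(steps):
        def var_if(i):
            trial = w[:]
            trial[i] += inc
            return portfolio_variance(trial, cov)
        w[min(range(n), key=var_if)] += inc
    return w

# Two uncorrelated assets of equal variance: the greedy allocator splits
# the weight evenly between them.
print(greedy_min_variance([[0.04, 0.0], [0.0, 0.04]]))
```

The greedy step is O(n) per increment, which hints at why such heuristics scale to large-universe optimizations where exact quadratic programming becomes expensive.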



Veterinary medicines (VMs) from the agricultural industry can enter the environment in a number of ways. This includes direct exposure through aquaculture, accidental spillage and disposal, and indirect entry by leaching from manure or runoff after treatment. Many compounds used in animal treatments have ecotoxic properties that may have chronic or sometimes lethal effects when they come into contact with non-target organisms. VMs enter the environment in mixtures, potentially having additive effects. Traditional ecotoxicology tests are used to determine the lethal and sometimes reproductive effects on freshwater and terrestrial organisms. However, organisms used in ecotoxicology tests can be unrepresentative of the populations that are likely to be exposed to the compound in the environment. Most often, tests address single-compound toxicity, but mixture effects may be significant and should be included in ecotoxicology testing. This work investigates the use, measured environmental concentrations (MECs) and potential impact of sea lice treatments on salmon farms in Scotland. Alternative methods for ecotoxicology testing, including mixture toxicity, and the use of in silico techniques to predict the chronic impact of VMs on different species of aquatic organisms were also investigated. The Scottish Environmental Protection Agency (SEPA) provided information on the use of five sea lice treatments from 2008-2011 on Scottish salmon farms. This information was combined with the recently available data on sediment MECs for the years 2009-2012 provided by SEPA using ArcGIS 10.1. In-depth analysis of these data showed that from a total of 55 sites, 30 sites had a MEC higher than the maximum allowable concentration (MAC) as set out by SEPA for emamectin benzoate and 7 sites had a higher MEC than MAC for teflubenzuron.
A number of sites up to 16 km away from the nearest salmon farm reported as using either emamectin benzoate or teflubenzuron measured positive for the two treatments. There was no relationship between current direction and the distribution of the sea lice treatments, nor was there any evidence for alternative sources of the compounds, e.g. land treatments. The sites with MECs higher than the MAC could pose a risk to non-target organisms and disrupt the species dynamics of the area. There was evidence that some marine protected sites might be at risk of exposure to these compounds. To complement this work, the acute mixture toxicity of the 5 sea lice treatments, plus one major metabolite, 3-phenoxybenzoic acid (3PBA), was measured using an assay based on the bioluminescent bacterium Aliivibrio fischeri. When exposed to the 5 sea lice treatments and 3PBA, A. fischeri showed a response to 3PBA, emamectin benzoate and azamethiphos, as well as to combinations of the three. In order to establish any additive effect of the sea lice treatments, the efficacy of two mixture-prediction equations, concentration addition (CA) and independent action (IA), was tested using the results from single-compound dose-response curves. In this instance IA was the more effective prediction method, with a linear-regression confidence interval of 82.6% compared with 22.6% for CA. In silico molecular docking was carried out to predict the chronic effects of 15 VMs (including the five used for sea lice control). Molecular docking has been proposed as an alternative screening method for the chronic effects of large-animal treatments on non-target organisms. The oestrogen receptor alpha (ERα) of 7 non-target bony fish species and of the African clawed frog Xenopus laevis was modelled using SWISS-MODEL.
These models were then ‘docked’ to oestradiol, the synthetic oestrogen ethinylestradiol, two known xenoestrogens, dichlorodiphenyltrichloroethane (DDT) and bisphenol A (BPA), the anti-oestrogen breast cancer treatment tamoxifen and the 15 VMs using AutoDock 4. Based on the results of this work, four VMs were identified as possible xenoestrogens or anti-oestrogens: cypermethrin, deltamethrin, fenbendazole and teflubenzuron. Further investigation of these four VMs using in vitro assays has been suggested as future work. A modified recombinant yeast oestrogen screen (YES) was attempted using the cDNA of the ERα of the zebrafish Danio rerio and the rainbow trout Oncorhynchus mykiss. Due to time constraints and difficulties with the cloning protocols, this work could not be completed. Use of such in vitro assays would allow further investigation of the oestrogenic potential of the highlighted VMs. In conclusion, VMs used as sea lice treatments, such as teflubenzuron and emamectin benzoate, may be more persistent and have a wider range in the environment than previously thought. Mixtures of sea lice treatments have been found to persist together in the environment, and the effects of these mixtures on the bacterium A. fischeri can be predicted using the IA equation. Finally, molecular docking may be a suitable tool to predict chronic endocrine-disrupting effects and identify varying degrees of impact on the ERα of nine species of aquatic organisms.
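The two mixture-prediction equations named in this abstract are standard forms in mixture ecotoxicology (CA corresponds to Loewe additivity, IA to Bliss independence). A minimal sketch of both follows; the log-logistic dose-response form, the Hill slopes and all EC50 values are illustrative assumptions, not values taken from this work:

```python
def effect_loglogistic(conc, ec50, hill):
    """Fraction of maximal effect at concentration `conc` (log-logistic model).

    Illustrative dose-response form; `ec50` and `hill` are hypothetical.
    """
    if conc <= 0:
        return 0.0
    return 1.0 / (1.0 + (ec50 / conc) ** hill)


def ca_mixture_ec50(fractions, ec50s):
    """Concentration addition (CA): EC50 of the mixture from the components'
    EC50s and their relative fractions (fractions must sum to 1)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))


def ia_mixture_effect(concs, ec50s, hills):
    """Independent action (IA): combined effect assuming dissimilar modes of
    action, computed from the product of individual 'unaffected' fractions."""
    unaffected = 1.0
    for c, e, h in zip(concs, ec50s, hills):
        unaffected *= 1.0 - effect_loglogistic(c, e, h)
    return 1.0 - unaffected
```

Under CA the components are assumed to share a mode of action, so concentrations are scaled by potency before being summed; under IA the components act independently, so only the individual effect levels combine.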

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The complexities of evaporation from structurally and mineralogically heterogeneous sandstone (Locharbriggs Sandstone) are investigated through a laboratory-based experiment in which a variety of environmental conditions are simulated. The data reported demonstrate the significance of material-environment interactions for the spatial and temporal variability of evaporative dynamics. Evaporation from porous stone is determined by the interplay between environmental, material and solution properties, which govern the rate and mode by which water is transmitted to, and subsequently removed from, an evaporating surface. Initially, evaporation is marked by high rates of moisture loss controlled by external atmospheric conditions; then, when a critical level of surface moisture content is reached, hydraulic continuity between the stone surface and subsurface is disrupted and the drying front recedes beneath the surface; evaporation rates then decrease and are controlled by the ability of the material to transport water vapour to the surface. Pore size distribution and connectivity, as well as other material properties, control the timing of each stage of evaporation and the nature of the transition.
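The two-stage drying behaviour described above can be sketched as a simple piecewise model; the power-law falling-rate form and every parameter value below are illustrative assumptions, not quantities fitted to the Locharbriggs data:

```python
def evaporation_rate(moisture, critical_moisture, atmospheric_rate, k):
    """Illustrative two-stage drying model.

    Stage I  (moisture >= critical): rate is set by atmospheric demand alone.
    Stage II (moisture <  critical): hydraulic continuity with the surface is
             lost and the rate falls as vapour transport through the stone
             limits supply; a power law in remaining moisture is assumed here.
    """
    if moisture >= critical_moisture:
        return atmospheric_rate
    return atmospheric_rate * (moisture / critical_moisture) ** k
```

The exponent `k` is a hypothetical material parameter standing in for pore-network properties; in this picture, a stone with well-connected pores (small `k`) sustains near-atmospheric rates longer into stage II than one whose vapour pathways close off quickly.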

These experimental data highlight the complexity of evaporation, demonstrating that different regions of the same stone can exhibit varying moisture dynamics during drying and that the rate and nature of evaporative loss differ under different environmental conditions. The results underline the importance of material-environment interactions during drying and show that stone micro-environmental conditions cannot be inferred from ambient data alone.
These data have significance for understanding the spatial distribution of weathering-related morphologies on stone surfaces in both the natural and built environments, where mineralogical and/or structural heterogeneity creates differences in moisture flux and hence variable drying rates. Such differences may provide a clearer explanation for the initiation and subsequent development of complex weathering responses, where areas of significant deterioration can be found alongside areas that exhibit little or no evidence of surface breakdown.