820 results for "Scarcity of available alternatives"


Relevance: 100.00%

Abstract:

Two modal size groups of sexually mature Arctic charr (Salvelinus alpinus), differing in shape and found at different depths in Lake Aigneau in the Canadian sub-Arctic, are described and tested for genetic and ecological differentiation. Forms consisted of a small littoral resident (mean size 21.7 cm) and a large profundal resident (mean size 53.9 cm). Mitochondrial DNA analysis indicated that seven of eight haplotypes were diagnostic for either the littoral or the profundal fish, with 66.6% of the variation found within form groupings. Pairwise tests of microsatellite data indicated significant differences in nine of 12 loci and a significant difference between the forms across all tested loci. Molecular variation was partitioned 84.1% within and 15.9% between forms, suggestive of either restricted interbreeding over time or different allopatric origins. Stable isotope signatures were also significantly different, with the profundal fish having higher δ13C and δ15N values than the littoral fish. Overlap and separation, respectively, in the ranges of form δ13C and δ15N signatures indicated that carbon was obtained from similar sources but that the forms fed at different trophic levels. Littoral fish relied on aquatic insects, predominantly chironomids. Profundal fish were largely piscivorous, including cannibalism. Predominantly empty stomachs and low percent-nitrogen muscle-tissue composition among profundal fish further indicated that feeding activity was limited to the winter, when ice cover increases the density of available prey at depth. The results provide evidence of significant differences between the modal groups, with origins in both genetics and ecology.

Relevance: 100.00%

Abstract:

Studies on the impact of historical, current and future global change require very high-resolution climate data (≤1 km) as a basis for modelled responses, meaning that output from climate models generally requires substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, studies of glacial refugia or of early Holocene plant and animal migration are incomplete, if not impossible. Sea level at the Last Glacial Maximum (LGM) was approximately 125 m lower than today, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce a novel gridded climate dataset for the LGM that is both very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to areas between the current and LGM coastlines. The absolute interpolation error is below 1 °C for 98.9% of all pixels, and below 0.5 °C for 87.8%, within the first two 1-arc-degree distance zones. We use the change factor method with these newly assembled baseline data to downscale five general circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 'bioclimatic' variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited to analysing refugia and migration during the Holocene warming that followed the LGM.
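As a sketch of the change factor (delta) method mentioned above, assuming additive anomalies for temperature and purely hypothetical toy grids (precipitation would normally use a multiplicative ratio instead):

```python
import numpy as np
from scipy.ndimage import zoom

def change_factor_downscale(gcm_lgm_T, gcm_present_T, baseline_hires_T, factor):
    """Additive change-factor (delta) downscaling for temperature.

    gcm_lgm_T, gcm_present_T : coarse-resolution GCM temperature grids (2-D)
    baseline_hires_T         : high-resolution present-day baseline grid
    factor                   : ratio of high-resolution to coarse grid spacing
    """
    # 1. Coarse-scale anomaly between the LGM and present-day GCM runs
    anomaly = gcm_lgm_T - gcm_present_T
    # 2. Interpolate the (smooth) anomaly field to the baseline resolution
    anomaly_hires = zoom(anomaly, factor, order=1)
    # 3. Apply the anomaly to the high-resolution observational baseline
    return baseline_hires_T + anomaly_hires

# Toy example: hypothetical 2x2 GCM grids projected onto a 4x4 baseline
coarse_lgm = np.array([[270.0, 272.0], [268.0, 271.0]])
coarse_now = np.array([[280.0, 281.0], [279.0, 282.0]])
hires_now = np.full((4, 4), 281.0)
lgm_1km = change_factor_downscale(coarse_lgm, coarse_now, hires_now, 2)
```

The key design choice of the method is that only the coarse anomaly is interpolated, while the fine spatial structure comes entirely from the high-resolution baseline.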

Relevance: 100.00%

Abstract:

Many studies argue, based partly on Pb isotopic evidence, that recycled, subducted slabs reside in the mantle source of ocean island basalts (OIB) (Hofmann and White, 1982, doi:10.1016/0012-821X(82)90161-3; Weaver, 1991, doi:10.1016/0012-821X(91)90217-6; Lassiter and Hauri, 1998, doi:10.1016/S0012-821X(98)00240-4). Such models, however, have remained largely untested against actual subduction zone inputs, due to the scarcity of comprehensive measurements of both the radioactive parents (Th and U) and the radiogenic daughter (Pb) in altered oceanic crust (AOC). Here, we discuss new, comprehensive measurements of U, Th, and Pb concentrations in the oldest AOC, ODP Site 801, and consider the effect of subducting this crust on the long-term Pb isotope evolution of the mantle. The upper 500 m of AOC at Site 801 shows >4-fold enrichment in U over pristine glass during seafloor alteration, but no net change in Pb or Th. Without subduction zone processing, ancient AOC would evolve to low 208Pb/206Pb compositions unobserved in the modern mantle (Hart and Staudigel, 1989 [Isotopic characterization and identification of recycled components, in: Crust/Mantle Recycling at Convergence Zones, Eds. S.R. Hart, L. Gülen, NATO ASI Series C: Mathematical and Physical Sciences 258, pp. 15-28, D. Reidel Publishing Company, Dordrecht-Boston, 1989]). Subduction, however, drives U-Th-Pb fractionation as AOC dehydrates in the Earth's interior. Pacific arcs define mixing trends requiring 8-fold enrichment in Pb over U in AOC-derived fluid. A mass balance across the Mariana subduction zone shows that 44-75% of Pb but <10% of U is lost from AOC to the arc, and a further 10-23% of Pb and 19-40% of U is lost to the back-arc. Pb is lost shallow and U deep from subducted AOC, which may be a consequence of the stability of the phases binding these elements during seafloor alteration: U in carbonate and Pb in sulfides. The upper end of these recycling estimates, which reflects maximum arc and back-arc growth rates, removes enough Pb and U from the slab to enable it to evolve rapidly (<<0.5 Ga) to sources suitable to explain the 208Pb/206Pb isotopic array of OIB, although these conditions fail to simultaneously satisfy the 207Pb/206Pb system. Lower growth rates would require additional U loss (29%) at depths beyond the zones of arc and back-arc magmagenesis, which would decrease upper-mantle κ (232Th/238U) over time, consistent with one solution to the "kappa conundrum" (Elliott et al., 1999, doi:10.1016/S0012-821X(99)00077-1). The net effects of alteration (doubling of μ [238U/204Pb]) and subduction (doubling of ω [232Th/204Pb]) are sufficient to create the Pb isotopic signatures of oceanic basalts.
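For reference, the standard single-stage Pb growth equations that define the μ, ω and κ notation used above (a textbook formulation, not specific to this study; T is the start of the evolution interval and t the time, both measured before present):

```latex
% Radiogenic ingrowth of Pb from time T to time t:
\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}(t)
  = \left(\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_{T}
  + \mu\left(e^{\lambda_{238}T} - e^{\lambda_{238}t}\right),
  \qquad \mu = {}^{238}\mathrm{U}/{}^{204}\mathrm{Pb}
\\[4pt]
\frac{^{208}\mathrm{Pb}}{^{204}\mathrm{Pb}}(t)
  = \left(\frac{^{208}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_{T}
  + \omega\left(e^{\lambda_{232}T} - e^{\lambda_{232}t}\right),
  \qquad \omega = {}^{232}\mathrm{Th}/{}^{204}\mathrm{Pb}
\\[4pt]
\kappa = {}^{232}\mathrm{Th}/{}^{238}\mathrm{U} = \omega/\mu,
\qquad \lambda_{238} \approx 1.55125\times10^{-10}\,\mathrm{yr}^{-1},
\quad \lambda_{232} \approx 4.9475\times10^{-11}\,\mathrm{yr}^{-1}
```

This is why doubling μ (alteration) and doubling ω (subduction) directly reshape the 206Pb- and 208Pb-based signatures discussed in the abstract.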

Relevance: 100.00%

Abstract:

Energy is required to maintain physiological homeostasis in response to environmental change. Although responses to environmental stressors frequently are assumed to involve high metabolic costs, the biochemical bases of actual energy demands are rarely quantified. We studied the impact of a near-future scenario of ocean acidification [800 µatm partial pressure of CO2 (pCO2)] during the development and growth of an important model organism in developmental and environmental biology, the sea urchin Strongylocentrotus purpuratus. Size, metabolic rate, biochemical content, and gene expression were not different in larvae growing under control and seawater acidification treatments. Measurements limited to those levels of biological analysis did not reveal the biochemical mechanisms of response to ocean acidification that occurred at the cellular level. In vivo rates of protein synthesis and ion transport increased 50% under acidification. Importantly, the in vivo physiological increases in ion transport were not predicted from total enzyme activity or gene expression. Under acidification, the increased rates of protein synthesis and ion transport that were sustained in growing larvae collectively accounted for the majority of available ATP (84%). In contrast, embryos and prefeeding and unfed larvae in control treatments allocated on average only 40% of ATP to these same two processes. Understanding the biochemical strategies for accommodating increases in metabolic energy demand and their biological limitations can serve as a quantitative basis for assessing sublethal effects of global change. Variation in the ability to allocate ATP differentially among essential functions may be a key basis of resilience to ocean acidification and other compounding environmental stressors.
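The ATP accounting behind the 84% versus 40% figures above is simple budget bookkeeping; a minimal sketch, where the rate values are hypothetical placeholders chosen only to reproduce those fractions, not data from the study:

```python
# Illustrative ATP budget bookkeeping; rates are hypothetical, expressed
# as fractions of total ATP turnover.
def atp_allocation(rate_protein_synthesis, rate_ion_transport, total_atp_turnover):
    """Fraction of total ATP turnover consumed by protein synthesis plus
    ion transport (the two processes tracked in the study)."""
    return (rate_protein_synthesis + rate_ion_transport) / total_atp_turnover

# Under acidification the two processes dominate the budget (~84%),
# versus ~40% on average in control embryos and larvae:
print(atp_allocation(0.55, 0.29, 1.0))   # -> 0.84
print(atp_allocation(0.25, 0.15, 1.0))   # -> 0.40
```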

Relevance: 100.00%

Abstract:

The ecological intensification of crops has been proposed as a response to the growing demand for agricultural and forest resources, in opposition to intensive monocultures. The introduction of mixed cultures, such as mixtures of nitrogen-fixing and non-nitrogen-fixing species, is intended to increase crop yield by improving the nitrogen and phosphorus available in the soil. Interactions between crops have received little attention despite the wide range of advantages that species diversity confers on these systems, such as increased productivity, resilience to disturbance and ecological sustainability. Forests and forestry plantations can play an important role in storing carbon in their tissues, especially in wood that becomes durable products. A simple parameter for analysing the amount of carbon allocated by a plantation is TBCA (total belowground carbon allocation), which, for short periods and mature plantations, can be approximated as the difference between soil carbon efflux and litterfall. Soil respiration depends on a wide range of factors, such as soil temperature, soil water content, soil fertility, and the presence and type of vegetation, among others. The studied orchard is a mixed forestry plantation of hybrid walnuts (Juglans × intermedia Carr.) grown for timber and alders (Alnus cordata (Loisel.) Duby), a nitrogen-fixing species through the actinomycete Frankia alni ((Woronin, 1866) Von Tubeuf 1895). The study area is sited at Restinclières, a green area near Montpellier (southern France). In the present work, soil respiration varied greatly throughout the year, mainly influenced by soil temperature. Soil water content did not significantly influence soil respiration, as it was constant during the measurement period and under no water stress conditions. The distance between the measurement point and the nearest walnut was also a highly influential factor: in general, soil respiration decreased as the distance to the nearest tree increased. The response of soil respiration to alder presence and to fertilizer management (50 kg N·ha-1·yr-1 from 1999 to 2010) was also analysed. Neither treatment significantly influenced soil respiration, although previous studies have reported inhibition of soil respiration under fertilized conditions and high rates of available nitrogen. However, in the cases with significant differences, treatments without fertilization and without alders showed higher respiration rates. The lack of significant differences between treatments may be due to the high coefficient of variation of the soil respiration measurements. Finally, an asynchronous fluctuation was observed between soil respiration and litterfall during the senescence period, possibly due to the slowdown in the emission of root exudates, which are largely related to microbial activity, during senescence.
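In symbols, the TBCA approximation referred to above follows from a steady-state carbon balance of the soil (a standard conservation-of-mass formulation; F denotes an annual carbon flux):

```latex
% Annual carbon balance of the soil, assuming soil C stocks, leaching
% and erosion change negligibly (mature plantation, short period):
%   inputs (litterfall + TBCA) = output (soil CO2 efflux)
F_{\mathrm{soil\;CO_2}} = F_{\mathrm{litterfall}} + \mathrm{TBCA}
\quad\Longrightarrow\quad
\mathrm{TBCA} \approx F_{\mathrm{soil\;CO_2}} - F_{\mathrm{litterfall}}
```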

Relevance: 100.00%

Abstract:

Separating programs into modules is a well-known technique which has proven very useful in program development and maintenance. Starting by introducing a number of possible scenarios, in this paper we study different issues which appear when developing analysis and specialization techniques for modular logic programs. We discuss a number of design alternatives and their consequences for the different scenarios considered, and describe, where applicable, the decisions made in the Ciao system analyzer and specializer. In our discussion we use the module system of Ciao Prolog, both for concreteness and because Ciao Prolog is a second-generation Prolog system which has been designed with global analysis and specialization in mind, and which has a strict module system. The aim of this work is not to provide a theoretical basis for modular analysis and specialization, but rather to discuss some interesting practical issues.

Relevance: 100.00%

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behaviour of the antenna under test (AUT). For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with radiation absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be employed independently of weather conditions and allow interference-free measurements. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art is given, providing a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to transform the measured field from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analysed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, and a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near-field, where a spatial filtering can be applied. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply a spatial filtering. All the alternatives are analysed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analysed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first locates the faulty component that radiates or receives unwanted radiation, making it easier to identify within the measurement environment and later replace; the second computationally removes the leakage effect without requiring the substitution of the faulty component.
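As an illustration of the truncation-error approach described above, here is a minimal, generic Gerchberg-Papoulis-style iterative extrapolation sketch (1-D, NumPy only); it is not the thesis's exact algorithm, and all array names are hypothetical:

```python
import numpy as np

def extrapolate_pattern(measured_spectrum, reliable_mask, aperture_mask,
                        n_iter=50):
    """1-D Gerchberg-Papoulis-style iterative extrapolation sketch.

    measured_spectrum : plane-wave spectrum (far-field pattern), only
                        trustworthy where reliable_mask is True
    reliable_mask     : True inside the non-truncated (reliable) region
    aperture_mask     : True on the AUT-plane samples where the field can
                        be non-zero (finite-support constraint)
    """
    spectrum = measured_spectrum * reliable_mask
    for _ in range(n_iter):
        # Back to the AUT plane and enforce the finite-support constraint
        field = np.fft.ifft(spectrum)
        field *= aperture_mask
        # Forward again; restore the measured samples in the reliable
        # region while keeping the extrapolated ones elsewhere
        spectrum = np.fft.fft(field)
        spectrum = np.where(reliable_mask, measured_spectrum, spectrum)
    return spectrum
```

Each iteration alternates between the two constraints, progressively extrapolating the pattern outside the reliable region; deciding when to stop such iterations is precisely the termination-point question the thesis studies.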

Relevance: 100.00%

Abstract:

Zinc is essential for the healthy growth and reproduction of plants, animals and humans. Zinc deficiency is one of the most widespread micronutrient deficiencies, affecting many crops and agricultural areas. Agronomic biofortification of crops, achieved by increasing the Zn concentration in the plant, is one way to avoid Zn deficiency in animals and humans. Inorganic Zn sources, such as ZnSO4, have traditionally been used, although in recent years Zn complexes have been used as sources of this micronutrient, since they provide high concentrations of soluble and available Zn in soil. However, the aging of the source in the soil can cause significant changes in its availability to plants. When an inorganic source of Zn is added to soil, the more soluble Zn forms lose activity and extractability over time, transforming into more stable and less bioavailable forms. This study examines the residual effect of different natural and synthetic Zn complexes on navy bean and flax crops, under two different moisture conditions (above and below field capacity, respectively) and in two different soils (acid and calcareous). Fertilizers were applied to the previous crop at three different doses (0, 5 and 10 mg Zn kg-1 soil). The easily leachable Zn was estimated by extraction with 0.1 M BaCl2. Under moisture conditions above field capacity, the percentage of leachable Zn was higher in the calcareous soil than in the acid soil. In the navy bean experiment, performed under moisture conditions above field capacity, the amounts of easily leachable Zn extracted were compared with the actual leached Zn. Correlation analysis between the leachable Zn and the estimate was only valid for complexes with high mobility and for each soil separately. Under moisture conditions below field capacity, the concentration of bioavailable, easily leachable Zn showed highly significant positive correlations with the concentration of available soil Zn. Available Zn was estimated with several commonly used extraction methods: DTPA-TEA, AB-DTPA, Mehlich-3 and LMWOAs. These concentrations were higher in the acid soil than in the calcareous one. The different methods used to estimate available Zn showed highly significant positive correlations with each other. The distribution of Zn among the different soil fractions was estimated with different sequential extractions. The sequential extractions showed a decrease between the two crops (the previous and the current) in the most labile Zn fraction and an increase in the concentration of Zn associated with less labile fractions, such as carbonates, oxides and organic matter. Positive and highly significant correlations were obtained between the concentrations of Zn associated with the more labile fractions (WSEX and WS+EXC, navy bean and flax experiments, respectively) and the available Zn concentrations determined by the different methods. Dry matter yield and Zn concentration were determined in the plant. Yield and plant Zn concentration were higher with the residual effect of the higher dose (10 mg Zn kg-1) than with the lower dose (5 mg Zn kg-1), and higher with the lower dose than with no Zn application (control). The increase of Zn concentration in the plant with the Zn treatments, with respect to the control, was greater in the acid soil than in the calcareous one. The Zn concentrations in the plant indicated that, in the calcareous soil, new applications of Zn would be desirable in subsequent crops to maintain suitable concentrations in the plant. The highest Zn concentrations in navy bean plants, grown under moisture conditions above field capacity, were obtained with the residual effect of Zn-HEDTA at the dose of 10 mg Zn kg-1 (280.87 mg Zn kg-1) in the acid soil, and with the residual effect of Zn-DTPA-HEDTA-EDTA at the dose of 10 mg Zn kg-1 (49.89 mg Zn kg-1) in the calcareous soil. In the flax crop, grown under moisture conditions below field capacity, the highest Zn concentrations in the plant were obtained with the residual effect of Zn-AML at the dose of 10 mg Zn kg-1 (224.75 mg Zn kg-1) in the acid soil, and with the residual effect of Zn-EDTA at the dose of 10 mg Zn kg-1 (99.83 mg Zn kg-1) in the calcareous soil. Zn uptake was determined as a combination of yield and plant Zn concentration. Under moisture conditions above field capacity, with leaching, Zn uptake by navy bean decreased in the current crop with respect to the previous crop. However, in the flax crop, under moisture conditions below field capacity, Zn uptake was higher in the current crop than in the previous one. The same trend was observed, in both cases, for the percentage of Zn used by the plant.
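The uptake calculation mentioned above is a simple product of yield and concentration; a minimal sketch with hypothetical numbers (the 49.89 mg kg-1 value is taken from the abstract, the yield and applied amount are placeholders):

```python
def zn_uptake(dry_matter_yield_g, zn_conc_mg_per_kg):
    """Zn uptake (mg) = dry-matter yield (g, converted to kg) x
    plant Zn concentration (mg kg-1)."""
    return dry_matter_yield_g / 1000.0 * zn_conc_mg_per_kg

def zn_use_percentage(uptake_mg, applied_mg):
    """Percentage of the applied Zn recovered in the plant."""
    return 100.0 * uptake_mg / applied_mg

# Hypothetical example: 5 g of dry matter at 49.89 mg Zn kg-1
uptake = zn_uptake(5.0, 49.89)          # -> ~0.25 mg Zn
print(zn_use_percentage(uptake, 50.0))  # vs. 50 mg Zn applied -> ~0.5%
```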

Relevance: 100.00%

Abstract:

Background. In recent years, the number of available informatics resources in medicine has grown exponentially. While specific inventories of such resources have already begun to be developed for Bioinformatics (BI), comparable inventories are not yet available for the Medical Informatics (MI) field, so locating and accessing them currently remains a hard and time-consuming task. Description. We have created a repository of MI resources from the scientific literature, providing free access to its contents through a web-based service. Relevant information describing the resources is automatically extracted from manuscripts published in top-ranked MI journals. We used a pattern matching approach to detect the resources' names and their main features. Detected resources are classified according to three different criteria: functionality, resource type and domain. To facilitate these tasks, we have built three different taxonomies by following a novel approach based on folksonomies and social tagging: we adopted the terminology most frequently used by MI researchers in their publications to create the concepts and hierarchical relationships belonging to the taxonomies. The classification algorithm identifies the categories associated with each resource and annotates it accordingly. The database is then populated with these data after manual curation and validation. Conclusions. We have created an online repository of MI resources to assist researchers in locating and accessing the most suitable resources for performing specific tasks. The database contained 282 resources at the time of writing. We are continuing to expand the number of available resources by taking into account further publications as well as suggestions from users and resource developers.
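To illustrate the kind of pattern matching involved, a minimal sketch; the pattern and example sentence are hypothetical, not the repository's actual extraction rules:

```python
import re

# Toy pattern: a capitalized token followed by "is a/an <resource type>",
# a simplified stand-in for the repository's richer rule set.
RESOURCE_PATTERN = re.compile(
    r"(?P<name>[A-Z][A-Za-z0-9+-]{2,})\s+"
    r"(?:is|was)\s+(?:a|an)\s+"
    r"(?P<type>web-based service|database|tool|software|system)"
)

def detect_resources(abstract: str):
    """Return (name, resource type) pairs matched in an abstract."""
    return [(m.group("name"), m.group("type"))
            for m in RESOURCE_PATTERN.finditer(abstract)]

print(detect_resources(
    "BioMedTK is a web-based service for training classifiers."))
# -> [('BioMedTK', 'web-based service')]
```

In the described system, matches like these would then be classified against the three taxonomies and manually curated before entering the database.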

Relevance: 100.00%

Abstract:

In the mid to long term after a nuclear accident, the contamination of drinking water sources, fish and other aquatic foodstuffs, and irrigation supplies, as well as people's exposure during recreational activities, may create considerable public concern, even though dose assessment may in certain situations indicate lesser importance than other sources, as clearly experienced in the aftermath of past accidents. In such circumstances there are a number of available countermeasure options, ranging from specific chemical treatment of lakes to bans on fish ingestion or on the use of water for crop irrigation. The potential actions can be broadly grouped into four main categories: chemical, biological, physical and social. In some cases a combination of actions may be the optimal strategy, and a decision support system (DSS) like MOIRA-PLUS can be of great help in optimising a decision. A further option is of course not to take any remedial action, although this may also have significant socio-economic repercussions which should be adequately evaluated. MOIRA-PLUS is designed to allow a reliable assessment of the long-term evolution of the radiological situation and of feasible alternative rehabilitation strategies, including an objective evaluation of their social, economic and ecological impacts in a rational and comprehensive manner. MOIRA-PLUS also features a decision analysis methodology, making use of multi-attribute analysis, which can take into account the preferences and needs of different types of stakeholders. The main functions and elements of the system are briefly described. Conclusions from end-users' experiences with the system are also discussed, including exercises involving the organizations responsible for emergency management and the affected services, as well as different local and regional stakeholders. MOIRA-PLUS has proven to be a mature system, user friendly and relatively easy to set up. It can support better decision-making by enabling a realistic evaluation of the complete impacts of possible recovery strategies. Moreover, the interaction with stakeholders has made it possible to identify improvements to the system, which have recently been implemented.

Relevance: 100.00%

Abstract:

This paper describes experiences using remote laboratories for the thorough analysis of a thermal system, including disturbances. Remote laboratories are a commonly used resource for control education at universities: they offer scheduling flexibility and make greater and better use of the available resources. Remote laboratories have been used to control physical devices remotely, and also for transfer-function identification of real equipment. Nevertheless, remote analyses of disturbances have not been carried out before. The aim of this contribution is therefore to apply the experience of remote laboratories to the study of disturbances. Some experiments are presented to demonstrate the effectiveness of remote laboratories for the complete analysis of a thermal system. For remote access to the thermal system, the "Sistema de Laboratorios a Distancia" (SLD) platform was used.
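As an example of the kind of identification experiment such a remote thermal lab supports, a sketch of fitting a first-order-plus-dead-time model to a step response; the synthetic data and parameter values below are hypothetical, not measurements from the SLD system:

```python
import numpy as np
from scipy.optimize import curve_fit

def fopdt_step(t, K, tau, theta, u_step=1.0):
    """Step response of a first-order-plus-dead-time (FOPDT) model,
    y(t) = K*u*(1 - exp(-(t - theta)/tau)) for t >= theta, else 0.
    A common low-order description of thermal plants."""
    y = K * u_step * (1.0 - np.exp(-(t - theta) / tau))
    return np.where(t >= theta, y, 0.0)

# Hypothetical step response as it might be retrieved from the remote lab
t = np.linspace(0, 100, 200)
y_meas = fopdt_step(t, K=2.0, tau=15.0, theta=5.0)
y_meas += np.random.normal(scale=0.02, size=t.size)   # sensor noise

(K, tau, theta), _ = curve_fit(fopdt_step, t, y_meas, p0=[1.0, 10.0, 1.0])
print(f"K={K:.2f}, tau={tau:.1f} s, dead time={theta:.1f} s")
```

A disturbance experiment would follow the same pattern, fitting the response to a disturbance input rather than to the manipulated variable.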

Relevance: 100.00%

Abstract:

Toponomastics is increasingly interested in the subjective role of place names in everyday life. Within Urban Geography, interest in this matter is currently growing, as recent changes in modes of habitation have urged our discipline to find new ways of exploring cities. In this context, the study of how the significance of names is connected to an urban society constitutes a very interesting approach. We believe in the importance of place names as tools for decoding urban areas and societies at the local scale. This consideration has frequently been taken into account in the analysis of exonyms, although exonyms are not exempt from political and practical implications that prevail over their function as tools. The study of toponomastic processes helps us understand how the city works, by analysing the connection between urban landscape, imaginaries and toponyms, which is reflected in the scarcity of some names, in the biased creation of new toponyms, and in the pressure exercised on every place name by tourists, residents and local government to change, maintain or eliminate it. Our case study, Toledo, is one of the oldest cities in Spain, full of myths, stories and histories that can only be understood together with processes of internal evolution of the city, linked to the arrival of new residents and the increasingly noticeable change of its historical landscape. At the local scale, we aim to decode the information about its landscape and its society contained in its toponyms.

Relevance: 100.00%

Abstract:

Current communication systems face many new challenges, such as competing standards and the scarcity of frequency resources; in particular, the development of personal wireless communication systems makes new systems appear faster than ever before, and conventional hardware-based wireless communication systems have difficulty adapting to this situation. The emergence of Software Defined Radio (SDR) enabled a third revolution in wireless communication, moving functionality from hardware to software and providing a flexible, reliable, upgradable, reusable, reconfigurable and low-cost platform. The Universal Software Radio Peripheral (USRP) products are commonly used with the GNU Radio software suite to create complex SDR systems. GNU Radio is a toolkit in which digital signal processing blocks are written in C++ and connected to each other with Python. This makes it easy to develop sophisticated signal processing systems, because many blocks have already been written by others and can quickly be put together to create a complete system. Although GNU Radio is not primarily a simulator, when no RF hardware is available it supports research on signal processing algorithms using pre-stored data or data produced by a signal generator. This thesis introduces the SDR platform from both the hardware (USRP) and the software (GNU Radio) side, as well as some basic modulation techniques used in wireless communication systems. Based on the examples provided by GNU Radio, some related experiments were carried out, for example GSM scanning and FM radio reception on the USRP, and these were improved to a certain degree, building on the experience of other investigators, to observe OFDM spectra and to simulate real-time video transmission. GNU Radio combined with USRP hardware proved to be a valuable lab platform for implementing complex radio system prototypes in a short time.
SUMMARY. Software Defined Radio (SDR) is an emerging technology that is having a revolutionary impact on conventional radio technology. A good example of software radio is the open-source GNU Radio system, which employs a free software development toolkit. In this work a commercial development kit (Ettus Research) was employed, consisting of a signal processing module and simple hardware. The module uses a Linux-based development environment on which a wide variety of software radio applications can be implemented. The development hardware comprises a general-purpose microprocessor, a programmable device (FPGA) and a radio-frequency front end covering 50 to 2200 MHz. This hardware connects to the PC through a USB interface at 8 Mb/s. GNU Radio applications, implemented mainly in the Python programming language, run on top of the Ettus platform, while the signal processing module is built in C++ on a microprocessor with floating-point arithmetic; developers can therefore quickly and easily build high-capacity, real-time wireless communication applications. Although its main function is not to be a simulator, even without RF hardware components GNU Radio supports research on signal processing algorithms based on pre-stored data and data produced by a signal generator. In this Master's thesis the SDR hardware platform (USRP) and software (GNU Radio) were evaluated. For this purpose, some basic modulation techniques in wireless communication systems were employed. Starting from the examples provided by GNU Radio, related experiments were carried out, for example spectrum scanning and demodulation of FM signals, always using the USRP hardware. Once simple applications had been evaluated, complex applications described in the literature were improved and optimized to a certain degree, such as the generation of an OFDM spectrum and the simulation and transmission of real-time video signals. With these results it is now possible to tackle the development of complex applications.
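To give a flavour of the GNU Radio API mentioned above, a minimal flowgraph that runs without any USRP attached; the block choice is illustrative (with hardware present, the file sink would typically be replaced by a UHD sink):

```python
from gnuradio import gr, analog, blocks

class ToneFlowgraph(gr.top_block):
    """Minimal GNU Radio flowgraph: complex cosine source -> throttle ->
    file sink. The throttle limits the rate so the CPU-only graph does
    not free-run; no USRP hardware is required."""
    def __init__(self, samp_rate=32000, freq=1000):
        gr.top_block.__init__(self, "tone_demo")
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, freq, 1.0)
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        sink = blocks.file_sink(gr.sizeof_gr_complex, "tone.dat")
        self.connect(src, throttle, sink)

if __name__ == "__main__":
    import time
    tb = ToneFlowgraph()
    tb.start()
    time.sleep(2)      # let the graph run briefly
    tb.stop()
    tb.wait()
```

This hardware/software split is exactly what the abstract describes: the blocks do the C++ signal processing, while Python only wires them together.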

Relevance: 100.00%

Abstract:

The modal analysis of a structural system consists in computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, system inputs and outputs are used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, etc.) or operational loads (traffic, human loading, etc.). This procedure is usually called Operational Modal Analysis (OMA) and in general consists in fitting a mathematical model to the measured data, assuming the unobserved excitations are realizations of a stationary stochastic process (usually a white noise process). The modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, its application to vibration data is analysed, and it is then compared with another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model estimates the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second model estimates the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• The third model estimates the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure under different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found only in a specific test correspond to the input in that test. The problem is solved using the proposed state space model and the EM algorithm.
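Once a state space model has been estimated (for instance with the EM algorithm), the modal parameters follow from the eigenvalues of the state transition matrix; a minimal sketch under the standard discrete-time assumptions:

```python
import numpy as np

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from the discrete-time
    state transition matrix A of a stochastic state space model
        x_{k+1} = A x_k + w_k,   y_k = C x_k + v_k,
    assuming A was estimated from output-only vibration data sampled
    every dt seconds."""
    mu = np.linalg.eigvals(A)            # discrete-time eigenvalues
    lam = np.log(mu) / dt                # continuous-time eigenvalues
    freqs = np.abs(lam) / (2 * np.pi)    # natural frequencies [Hz]
    zetas = -lam.real / np.abs(lam)      # damping ratios [-]
    keep = lam.imag > 0                  # one of each conjugate pair
    return freqs[keep], zetas[keep]
```

Spurious modes from coloured (non-white) excitations show up in this same eigenvalue set, which is why the third model above separates structure parameters from input parameters across records.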

Relevance: 100.00%

Abstract:

Geologic storage of carbon dioxide (CO2) has been proposed as a viable means of reducing anthropogenic CO2 emissions. Once injection begins, a program for measurement, monitoring, and verification (MMV) of the CO2 distribution is required in order to: a) research key features, effects and processes needed for risk assessment; b) manage the injection process; c) delineate and identify leakage risk and surface escape; d) provide early warnings of failure near the reservoir; and e) verify storage for accounting and crediting. The selection of the monitoring methodology (site characterization, and control and verification in the post-injection phase) is influenced by economic and technological variables. Multiple Criteria Decision Making (MCDM) refers to a methodology developed for making decisions in the presence of multiple criteria. MCDM as a discipline has a relatively short history of only about 40 years, and it has been closely related to advances in computer technology. Evaluation methods and multicriteria decisions involve the selection of a set of feasible alternatives, the simultaneous optimization of several objective functions, and a decision-making process and evaluation procedures that must be rational and consistent. The application of a mathematical model of decision-making helps to find the best solution, establishing mechanisms that facilitate the management of the information generated by the many disciplines involved. Problems in which the decision alternatives are finite are called discrete multicriteria decision problems. Such problems are the most common in practice, and this is the scenario applied here to the problem of selecting sites for storing CO2. Discrete MCDM is used to assess and decide on issues that by nature or design support a finite number of alternative solutions. Recently, multicriteria decision analysis has been applied to prioritize policy incentives for CCS, to assess the role of CCS, and to select potential areas which could be suitable for storage. For these reasons, MCDM has been considered in the monitoring phase of CO2 storage, in order to select suitable technologies that are techno-economically viable. In this paper we identify subsurface gas measurement techniques currently applied in the characterization (pre-injection) phase; MCDM will help decision-makers rank the most suitable techniques for monitoring each specific physico-chemical parameter.
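A minimal sketch of the discrete, weighted-sum variant of MCDM applied to ranking monitoring techniques; all techniques, criteria, scores and weights below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Candidate monitoring techniques and evaluation criteria (hypothetical)
techniques = ["soil-gas probes", "downhole gas sampling", "tracer logging"]
criteria   = ["cost", "sensitivity", "spatial coverage"]

# Score matrix: rows = techniques, columns = criteria, all normalized to
# [0, 1] as benefit-type scores (cheaper is entered as a higher score).
scores = np.array([[0.8, 0.5, 0.6],
                   [0.4, 0.9, 0.3],
                   [0.5, 0.7, 0.8]])
weights = np.array([0.3, 0.5, 0.2])   # criterion weights, must sum to 1

ranking = scores @ weights            # aggregate value of each technique
for tech, s in sorted(zip(techniques, ranking), key=lambda p: -p[1]):
    print(f"{tech}: {s:.2f}")
```

More elaborate discrete MCDM methods (e.g. outranking or distance-to-ideal approaches) replace the weighted sum with other aggregation rules, but the alternatives-by-criteria decision matrix is the same.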