951 results for one-meson-exchange: independent-particle shell model
Abstract:
Bromoform (CHBr3) is an important precursor of atmospheric reactive bromine species that are involved in ozone depletion in the troposphere and stratosphere. In the open ocean, bromoform production is linked to phytoplankton that contain the enzyme bromoperoxidase. Coastal sources of bromoform are higher than open ocean sources. However, open ocean emissions are important because the transport of tracers to higher altitudes, i.e. into the ozone layer, strongly depends on the location of emissions. For example, emissions in the tropics are transported into the upper atmosphere more rapidly than emissions from higher latitudes. The global spatio-temporal features of bromoform emissions are poorly constrained. Here, a global three-dimensional ocean biogeochemistry model (MPIOM-HAMOCC) is used to simulate bromoform cycling in the ocean and emissions into the atmosphere, using recently published data on global atmospheric concentrations (Ziska et al., 2013) as the upper boundary condition. Our simulated surface concentrations of CHBr3 match the observations well. Simulated global annual emissions based on monthly mean model output are lower than previous estimates, including that of Ziska et al. (2013), because the gas exchange reverses when less bromoform is produced in non-blooming seasons. This is the case at higher latitudes, i.e. in the polar regions and the northern North Atlantic. Further model experiments show that future model studies may need to distinguish between different bromoform-producing phytoplankton species, and reveal that the transport of CHBr3 from the coast considerably alters open ocean bromoform concentrations, in particular in the northern sub-polar and polar regions.
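The sign reversal of the gas exchange described above follows directly from a bulk air-sea flux law. The sketch below is a generic illustration of that law, not the MPIOM-HAMOCC parameterization, and all parameter values are invented for illustration.

```python
# Hedged sketch of a bulk air-sea flux law (not the actual MPIOM-HAMOCC
# parameterization): F = k_w * (C_w - C_a / H), where k_w is the gas
# transfer velocity and H is the dimensionless Henry's law constant.
def bromoform_flux(k_w, c_water, c_air, henry):
    """Positive flux = outgassing to the atmosphere; negative = ocean uptake."""
    return k_w * (c_water - c_air / henry)

# When surface-water CHBr3 falls below atmospheric equilibrium
# (e.g. in non-blooming seasons), the flux reverses sign:
outgassing = bromoform_flux(k_w=1e-5, c_water=8.0, c_air=2.0, henry=0.4)  # positive
uptake     = bromoform_flux(k_w=1e-5, c_water=3.0, c_air=2.0, henry=0.4)  # negative
```

The reversal at high latitudes in the model is exactly this sign change: production drops, c_water falls below c_air/henry, and the ocean becomes a sink.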
Abstract:
Specimens of two species of planktic foraminifera, Globigerinoides ruber and Globigerinella siphonifera, were grown under controlled laboratory conditions at a range of temperatures (18-31 °C), salinities (32-44 psu) and pH levels (7.9-8.4). The shells were examined for their calcium isotope compositions (d44/40Ca) and strontium to calcium ratios (Sr/Ca) using Thermal Ionization Mass Spectrometry and Inductively Coupled Plasma Mass Spectrometry. Although the total variation in d44/40Ca (~0.3 per mill) in the studied species is on the same order as the external reproducibility, the data set reveals some apparent trends that are controlled by more than one environmental parameter. There is a well-defined inverse linear relationship between d44/40Ca and Sr/Ca in all experiments, suggesting similar controls on these proxies in foraminiferal calcite independent of species. Analogous to recent results from inorganically precipitated calcite, we suggest that Ca isotope fractionation and Sr partitioning in planktic foraminifera are mainly controlled by precipitation kinetics. This postulation provides us with a unique tool to calculate precipitation rates and draws support from the observation that Sr/Ca ratios are positively correlated with average growth rates. At 25 °C water temperature, precipitation rates in G. siphonifera and G. ruber are calculated to be on the order of 2000 and 3000 µmol/m²/h, respectively. The lower d44/40Ca observed at 29 °C in both species is consistent with increased precipitation rates at high water temperatures. The salinity response of d44/40Ca (and Sr/Ca) in G. siphonifera implies that this species has the highest precipitation rates at the salinity of its natural habitat, whereas increasing salinities appear to trigger higher precipitation rates in G. ruber.
Isotope effects that cannot be explained by precipitation rate in planktic foraminifera can be explained by a biological control, related to a vacuolar pathway for the supply of ions during biomineralization and a pH regulation mechanism in these vacuoles. In the case of an additional pathway via cross-membrane transport, supplying light Ca for calcification, the d44/40Ca of the reservoir is constrained to be -0.2 per mill relative to seawater. Using a Rayleigh distillation model, we calculate that calcification occurs in a semi-open system, where less than half of the Ca supplied by vacuolization is utilized for calcite precipitation. Our findings are relevant for interpreting paleo-proxy data on d44/40Ca and Sr/Ca in foraminifera, as well as for understanding their biomineralization processes.
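The Rayleigh distillation balance invoked above can be made concrete. The sketch below uses the standard accumulated-product form of the Rayleigh equation; the fractionation factor and utilized fractions are illustrative values, not the fitted parameters of the study.

```python
import math

# Hedged sketch of a Rayleigh distillation balance for a semi-open
# calcification reservoir. epsilon is the fractionation (per mill) of the
# instantaneous calcite relative to the reservoir; the accumulated-calcite
# composition follows from mass balance:
#   delta_acc = delta0 - epsilon * f * ln(f) / (1 - f),  f = fraction remaining.
def accumulated_calcite_delta(delta0, epsilon, utilized):
    """d44/40Ca of the accumulated calcite when a fraction `utilized`
    of the reservoir Ca has been precipitated (0 < utilized < 1)."""
    f = 1.0 - utilized  # fraction of Ca still in the reservoir
    return delta0 - epsilon * f * math.log(f) / (1.0 - f)

# With an assumed epsilon of -1.1 per mill and ~40% of the vacuole Ca used
# (a semi-open system, as in the abstract), the calcite is intermediate
# between delta0 + epsilon (open system) and delta0 (fully closed system).
```

Two limits anchor the formula: as utilized → 0 the calcite approaches delta0 + epsilon (full fractionation expressed), and as utilized → 1 it approaches delta0 (no net fractionation, all reservoir Ca consumed).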
Abstract:
Antarctic calcified macroorganisms are particularly vulnerable to ocean acidification because many are weakly calcified, the dissolution rates of calcium carbonate are inversely related to temperature, and high latitude seas are predicted to become undersaturated in aragonite by the year 2100. We examined the post-mortem dissolution rates of aragonitic and calcitic shells from four species of Antarctic benthic marine invertebrates (two bivalves, one limpet, one brachiopod) and the thallus of a limpet shell-encrusting coralline alga exposed to acidified pH (7.4) or non-acidified pH (8.2) seawater at a constant temperature of 4 °C. Within a period of only 14-35 days, shells of all four species held in pH 7.4 seawater had suffered significant dissolution. Despite calcite being 35% less soluble in seawater than aragonite, there was, surprisingly, no consistent pattern of calcitic shells having slower dissolution rates than aragonitic shells. Outer surfaces of shells held in pH 7.4 seawater exhibited deterioration by day 35, and by day 56 there was exposure of aragonitic or calcitic prisms within the shell architecture of three of the macroinvertebrate species. Dissolution of coralline algae was confirmed by differences in weight loss in limpet shells with and without coralline algae. By day 56, thalli of the coralline alga held in pH 7.4 displayed a loss of definition of the conceptacle pores, and cracking was evident at the zone of interface with limpet shells. Experimental studies are needed to evaluate whether there are adequate compensatory mechanisms in these and other calcified Antarctic benthic macroorganisms to cope with anticipated ocean acidification. In their absence, these organisms, and the communities they comprise, are likely to be among the first to experience the cascading impacts of ocean acidification.
Abstract:
A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.
Abstract:
Atmospheric monitoring of high northern latitudes (> 40°N) has shown an enhanced seasonal cycle of carbon dioxide (CO2) since the 1960s but the underlying mechanisms are not yet fully understood. The much stronger increase in high latitudes compared to low ones suggests that northern ecosystems are experiencing large changes in vegetation and carbon cycle dynamics. Here we show that the latitudinal gradient of the increasing CO2 amplitude is mainly driven by positive trends in photosynthetic carbon uptake caused by recent climate change and mediated by changing vegetation cover in northern ecosystems. Our results emphasize the importance of climate-vegetation-carbon cycle feedbacks at high latitudes, and indicate that during the last decades photosynthetic carbon uptake has reacted much more strongly to warming than carbon release processes.
Abstract:
The variable nature of irradiance can produce significant fluctuations in the power generated by large grid-connected photovoltaic (PV) plants. Experimental 1 s data were collected throughout a year from six PV plants, 18 MWp in total. The dependence of short (below 10 min) power fluctuations on PV plant size was then investigated. The analysis focuses on the frequency of fluctuations as well as on the maximum fluctuation value registered. An analytic model able to describe the frequency of a given fluctuation for a certain day is proposed.
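As a rough illustration of the kind of analysis described, a power fluctuation over a window of dt seconds can be defined as the normalised power change across that window, and its frequency as an empirical exceedance rate. This is a hedged sketch; the paper's exact fluctuation definition may differ.

```python
# Hedged sketch (not the paper's exact definitions): the power fluctuation
# over a window of dt seconds, normalised by rated power, and the empirical
# frequency with which it exceeds a given threshold.
def fluctuations(power, dt, p_rated):
    """power: list of 1 s power samples; dt: window length in seconds."""
    return [abs(power[i] - power[i - dt]) / p_rated
            for i in range(dt, len(power))]

def exceedance_frequency(flucts, threshold):
    """Fraction of windows whose fluctuation exceeds `threshold`."""
    return sum(1 for f in flucts if f > threshold) / len(flucts)

# Example: a plant stepping from 100 to 50 (units of power) produces a
# 50% fluctuation in every 10 s window that straddles the step.
p = [100.0] * 60 + [50.0] * 60
fl = fluctuations(p, 10, 100.0)
```

Smoothing with plant size would show up here as smaller maximum fluctuations for the aggregated power series of several plants than for any single plant.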
Abstract:
The Fractal Image Informatics toolbox (Oleschko et al., 2008a; Torres-Argüelles et al., 2010) was applied to extract, classify and model the topological structure and dynamics of surface roughness in two highly eroded catchments of Mexico. Both areas are affected by gully erosion (Sidorchuk, 2005) and characterized by avalanche-like matter transport. Five contrasting morphological patterns were distinguished across the slope of the bare eroded surface of a Faeozem (Queretaro State), while only one roughness pattern (apparently independent of the slope) was documented for an Andosol (Michoacan State). We called these patterns "roughness clusters" and compared them in terms of metrizability, continuity, compactness, topological connectedness (global and local) and invariance, separability, and degree of ramification (Weyl, 1937). All the mentioned topological measurands were correlated with the variance, skewness and kurtosis of the gray-level distribution of digital images. The morphological and spatial dynamics of the roughness clusters were measured and mapped with high precision in terms of fractal descriptors. The Hurst exponent was especially suitable for distinguishing between the structure of the "turtle shell" and "ramification" patterns (sediment-producing zone A of the slope), as well as the "honeycomb" (sediment transport zone B) and the "dinosaur steps" and "corals" (sediment deposition zone C) roughness clusters. Some other structural attributes of the studied patterns were also statistically different and correlated with the variance, skewness and kurtosis of the gray-level distribution of multiscale digital images. The scale invariance of the classified roughness patterns was documented over a range of five image resolutions. We conjecture that the geometrization of erosion patterns in terms of roughness clustering might benefit most semi-quantitative models developed for erosion and sediment yield assessment (de Vente and Poesen, 2005).
Abstract:
A Near Infrared Spectroscopy (NIRS) industrial application was developed by the LPF-Tagralia team and transferred to a Spanish dehydrator company (Agrotécnica Extremeña S.L.) for the classification of dehydrator onion bulbs for breeding purposes. The automated operation of the system has allowed the classification of more than one million onion bulbs during the 2004 to 2008 seasons (Table 1). The performance achieved by the original model (R²=0.65; SEC=2.28 °Brix) was sufficient for qualitative classification thanks to the broad range of variation of the initial population (18 °Brix). Nevertheless, a reduction in the classification performance of the model has been observed over the seasons. One of the reasons put forward is the reduction of the range of variation that naturally occurs during a breeding process; the other is variation in parameters other than the variable of interest, whose effects are probably affecting the measurements [1]. This study applies Independent Component Analysis (ICA) to this highly variable dataset from a NIRS industrial application in order to identify the different sources of variation present across seasons.
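As a hedged sketch of the proposed approach, ICA can be applied to a matrix of NIR spectra (samples x wavelengths) with scikit-learn's FastICA; the preprocessing actually applied to the onion dataset, and the number of sources retained, are not specified in the abstract and are assumptions here.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hedged sketch: decompose spectra into statistically independent sources.
# Rows of `spectra` are samples, columns are wavelengths.
def extract_sources(spectra, n_sources=3, seed=0):
    ica = FastICA(n_components=n_sources, random_state=seed)
    scores = ica.fit_transform(spectra)  # source activations per sample
    loadings = ica.mixing_               # wavelength profile of each source
    return scores, loadings
```

Inspecting the loadings against known interferents (e.g. water bands, scatter effects) is the usual way to attribute each independent component to a physical source of variation.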
Abstract:
The authors, all from UPM, have each been involved in academic or applied work on this subject at different times. Building on the precedent of E. Torroja and A. Páez in Madrid, Spain (probabilistic safety models for concrete, around 1957, a line now continued in the ICOSSAR conferences), author J.M. Antón has been involved since autumn 1967 in European steel construction within CECM. He produced a mathematical model for the superposition of independent loads with reduction factors and, using it, a pattern of load coefficients for codes (Rome, February 1969) that was practically adopted for European construction; at JCSS (Lisbon, February 1974) he suggested unifying the treatment of concrete, steel and aluminium. That model represents each load by a Gumbel type I distribution: the 50-year distribution of one type of load is reduced to 1 year so it can be added to the other independent loads, and the sum is brought back, within Gumbel theory, to a 50-year return period; parallel models exist. A complete reliability system was produced, including nonlinear effects such as buckling, phenomena considered to some extent in the current structural Eurocodes derived from the Model Codes. The author also considered the system within CEB in the presence of hydraulic effects from rivers, floods and the sea, with reference to actual practice. When drafting a road drainage norm for MOPU (Spain), the authors built an optimization model that gives a way to determine the return period, 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in the southeast of Spain modelled with a Gumbel type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using Gumbel type II; the model can be modernized with a wider variety of extreme-value laws. In the MOPU drainage norm, the drafting commission also acted as an expert panel to set a table of return periods for road drainage elements, in effect a complex multi-criteria decision system.
These earlier ideas were used, for example, in major codes and presented at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, namely the selection of the criteria and utility functions involved in Bayesian, multi-criteria or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other aspects, social and financial.
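The Gumbel type I algebra behind the load-combination argument (reducing a 50-year distribution to 1 year and back) rests on the fact that the maximum of N independent Gumbel-distributed years is again Gumbel, with the location shifted by beta·ln(N). A minimal sketch:

```python
import math

# Hedged sketch of the Gumbel type I algebra behind the load-combination
# argument: if annual maxima are Gumbel(mu, beta), the maximum over N
# independent years is Gumbel(mu + beta*ln(N), beta).
def gumbel_cdf(x, mu, beta):
    return math.exp(-math.exp(-(x - mu) / beta))

def location_for_n_years(mu_1yr, beta, n_years):
    """Location parameter of the N-year maximum (scale beta unchanged)."""
    return mu_1yr + beta * math.log(n_years)

def return_level(mu, beta, T):
    """Value exceeded on average once every T years (T-year return level)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

This closure under maxima is what allows one load's 50-year law to be shifted to a 1-year basis, summed with the other independent loads, and the total shifted back to a 50-year return period.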
Abstract:
This paper deals with the detection and tracking of an unknown number of targets using a Bayesian hierarchical model with target labels. To approximate the posterior probability density function, we develop a two-layer particle filter. One deals with track initiation, and the other with track maintenance. In addition, the parallel partition method is proposed to sample the states of the surviving targets.
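For illustration, the bootstrap particle-filter recursion that underlies each such layer can be sketched for a one-dimensional target with Gaussian noise; the paper's two-layer initiation/maintenance logic, target labels, and parallel partition sampling are not reproduced here.

```python
import math
import random

# Hedged sketch of one bootstrap particle-filter step (1-D, near-constant
# position motion model, Gaussian measurement noise). This illustrates the
# generic predict/update/resample recursion, not the paper's scheme.
def particle_filter_step(particles, z, q=0.5, r=1.0):
    # 1. predict: propagate each particle through the motion model
    particles = [x + random.gauss(0.0, q) for x in particles]
    # 2. update: weight particles by the measurement likelihood
    weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. resample: draw a new particle set proportional to the weights
    return random.choices(particles, weights=weights, k=len(particles))
```

The posterior mean is then approximated by the average of the resampled particles; in a multi-target setting, each maintained track would run its own such recursion.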
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
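As a toy illustration of the combination idea for limitation (2), several taggers annotating the same tokens at a common level can be combined by a simple majority vote; real systems use more elaborate combination schemes, and the tag set here is invented.

```python
from collections import Counter

# Hedged toy illustration: combine the outputs of several POS taggers on
# the same token sequence by majority vote, so that individual tagger
# errors are outvoted by the others.
def majority_vote(taggings):
    """taggings: list of tag sequences, one per tagger, all the same length.
    Returns the per-token majority tag."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*taggings)]
```

For instance, if one of three taggers mislabels the second token, the other two outvote it and the combined annotation is correct; this is exactly why the taggers must first share a common annotation schema.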
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Andorra-I is the first implementation of a language based on the Andorra Principle, which states that determinate goals can (and should) be run before other goals, and even in a parallel fashion. This principle has materialized in a framework called the Basic Andorra model, which allows or-parallelism as well as (dependent) and-parallelism for determinate goals. In this report we show that it is possible to further extend this model in order to allow general independent and-parallelism for nondeterminate goals, without greatly modifying the underlying implementation machinery. A simple and easy way to realize such an extension is to make each (nondeterminate) independent goal determinate, by using a special "bagof" construct. We also show that this can be achieved automatically by compile-time translation from original Prolog programs. A transformation that fulfils this objective and which can be easily automated is presented in this report.
Abstract:
Abstract: The proliferation of wireless sensor networks and the variety of envisioned applications associated with them has motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting.
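As a hedged sketch of the refinement step, the centralized Gauss-Newton iteration for RSSI localization under a log-distance path-loss model (rss_i = p0 − 10·n·log10(d_i)) can be written as below; the thesis runs a consensus-based distributed version of this same iteration, and the path-loss parameters here are illustrative assumptions.

```python
import math

# Hedged sketch: centralized Gauss-Newton refinement of a 2-D position
# estimate from RSSI measurements under a log-distance path-loss model.
def gauss_newton_rssi(anchors, rss, x0, p0=-40.0, n=2.0, iters=50):
    x, y = x0
    c = 10.0 * n / math.log(10.0)
    for _ in range(iters):
        jtj = [[0.0, 0.0], [0.0, 0.0]]  # J^T J (2x2 normal matrix)
        jtr = [0.0, 0.0]                # J^T r
        for (ax, ay), z in zip(anchors, rss):
            dx, dy = x - ax, y - ay
            d2 = dx * dx + dy * dy
            # residual of the model m = p0 - 5*n*log10(d^2)
            r = z - (p0 - 5.0 * n * math.log10(d2))
            # gradient of the model wrt (x, y): -c * (dx, dy) / d^2
            jx, jy = -c * dx / d2, -c * dy / d2
            jtj[0][0] += jx * jx; jtj[0][1] += jx * jy
            jtj[1][0] += jy * jx; jtj[1][1] += jy * jy
            jtr[0] += jx * r;     jtr[1] += jy * r
        # solve the 2x2 normal equations by Cramer's rule and step
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        x += (jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det
        y += (jtj[0][0] * jtr[1] - jtj[1][0] * jtr[0]) / det
    return x, y
```

In the distributed variant, the per-anchor sums accumulated above are exactly the quantities the nodes agree on via consensus before each Gauss-Newton step.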
While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their signals add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
Abstract (translated from Spanish): The proliferation of wireless sensor networks, together with the wide variety of possible related applications, has motivated the development of the tools and algorithms needed for cooperative processing in distributed systems. One of the applications that has attracted most interest in the scientific community is localization, in which the nodes of the network try to estimate the position of a target lying within their coverage area. The localization problem is especially challenging when the received signal strength indicator (RSSI) is used as the measurement, since the received signal level does not follow a linear relationship with the target's position. Many current solutions to RSSI-based localization rely on complex centralized schemes such as particle filters, while others use much simpler schemes with lower accuracy. Moreover, in many cases the strategies are centralized, which makes them impractical to implement in sensor networks. From a practical and implementation point of view, for certain scenarios and applications it is convenient to develop alternatives that offer a compromise between complexity and accuracy. Along these lines, instead of directly tackling the estimation of the target position under the maximum likelihood criterion, we propose a suboptimal formulation of the problem that is analytically more tractable and offers the advantage of allowing the localization problem to be solved in a fully distributed way, making it an attractive solution in the context of wireless sensor networks. To this end, distributed processing tools are used, such as consensus algorithms and convex optimization in distributed systems.
For applications requiring a higher degree of accuracy, a strategy is proposed that consists of locally optimizing the likelihood function around the initially obtained estimate. This optimization can be carried out in a decentralized way using a consensus-based version of the Gauss-Newton method, provided the measurement noises at the different nodes are assumed independent. Regardless of the underlying application of the sensor network, a mechanism is needed to gather the data generated by the network. One way to do this is to use one or more special nodes, called sink nodes, which act as information-collection centres and are equipped with additional hardware allowing interaction with the outside of the network. The main disadvantage of this strategy is that such nodes become bottlenecks in terms of traffic and computational capacity. As an alternative, cooperative beamforming techniques can be used, so that the network as a whole can be seen as a single virtual multi-antenna system and can thus exploit the benefits of multi-antenna communications. To this end, the nodes of the network synchronize their transmissions so that constructive interference is produced at the receiver. However, current techniques are based on average and asymptotic results, valid when the number of nodes is very large. For a specific configuration, control over the radiation pattern is lost, possibly causing interference to coexisting systems or spending more power than required. Energy efficiency is a central issue in wireless sensor networks, since the nodes are battery-powered.
It is therefore very important to preserve the battery, avoiding unnecessary replacements and the consequent increase in costs. Under these considerations, a beamforming scheme is proposed that maximizes the useful lifetime of the network, understood as the maximum time the network can remain operational while guaranteeing quality of service (QoS) requirements that allow reliable decoding of the received signal at the base station. Distributed algorithms that converge to the centralized solution are also proposed. Initially, communication with the base station is considered the only cause of energy consumption. This energy-consumption model is then modified to take into account other forms of energy expenditure arising from processes inherent to the operation of the network, such as data acquisition and processing, local communications between nodes, etc. This additional energy consumption is modelled as a random variable at each node. The setting thus changes to a probabilistic scenario that generalizes the deterministic case, and conditions are given under which the problem can be solved efficiently. It is shown that the network lifetime improves significantly under the proposed energy-efficiency criterion.