945 results for Model compliant mechanisms
Abstract:
The marine nitrogen (N) inventory is thought to be stabilized by negative feedback mechanisms that reduce N inventory excursions relative to the more slowly overturning phosphorus inventory. Using a global biogeochemical ocean circulation model, we show that negative feedbacks stabilizing the N inventory cannot persist if N2 fixation and denitrification are closely associated in space. In our idealized model experiments, nitrogen-deficient waters, generated by denitrification, stimulate local N2 fixation activity. However, because of stoichiometric constraints, denitrification of the newly fixed nitrogen leads to a net loss of N. This can enhance the N deficit, thereby triggering additional fixation in a vicious cycle that ultimately leads to a runaway N loss. To break this vicious cycle, and to allow stabilizing negative feedbacks to operate, inputs of new N need to be spatially decoupled from denitrification. Our idealized model experiments suggest that factors such as iron limitation or dissolved organic matter cycling can promote such decoupling and allow for negative feedbacks that stabilize the N inventory. Conversely, close spatial co-location of N2 fixation and denitrification could lead to net N loss.
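To make the stoichiometric argument concrete, the following toy box-model sketch (not the study's circulation model; the nitrate-per-organic-N ratio R and the linear fixation response are illustrative assumptions) shows how co-located fixation and denitrification amplify a nitrogen deficit, while partial spatial decoupling lets it decay:

```python
import numpy as np

# Toy single-box illustration (not the study's circulation model): each unit of
# newly fixed organic N that is remineralized by local denitrification consumes
# R > 1 units of fixed N (nitrate is the oxidant), so co-located fixation and
# denitrification amplify the deficit; routing part of the newly fixed N to
# aerobic remineralization elsewhere (spatial decoupling) breaks the loop.
def run(R=1.3, f_local=1.0, n_steps=50, deficit0=1.0):
    """R and the linear fixation response are illustrative assumptions.
    f_local: fraction of newly fixed N remineralized by local denitrification."""
    deficit = deficit0
    history = [deficit]
    for _ in range(n_steps):
        fixed = deficit                    # fixation responds to the local N deficit
        loss = f_local * R * fixed         # nitrate consumed denitrifying the new organic N
        deficit = max(deficit + loss - fixed, 0.0)
        history.append(deficit)
    return np.array(history)

print(run(f_local=1.0)[-1])  # co-located: deficit grows -> runaway N loss
print(run(f_local=0.5)[-1])  # decoupled: deficit decays -> stabilizing feedback
```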
Abstract:
Ocean acidification (OA) due to the rise in atmospheric CO2 is expected to influence marine primary productivity. To investigate the interactive effects of OA and light changes on diatoms, we grew Phaeodactylum tricornutum under ambient (390 ppmv; LC) and elevated CO2 (1000 ppmv; HC) conditions for 80 generations, and measured its physiological performance under different light levels (60 µmol m⁻² s⁻¹, LL; 200 µmol m⁻² s⁻¹, ML; 460 µmol m⁻² s⁻¹, HL) for another 25 generations. The specific growth rate of the HC-grown cells was about 12-18% higher than that of the LC-grown ones, with the largest difference under the ML level. With increasing light levels, the effective photochemical yield of PSII (Fv'/Fm') decreased, but it was enhanced by the elevated CO2, especially under the HL level. Cells acclimated to the HC condition showed a faster recovery of their photochemical yield of PSII than the LC-grown cells. For the HC-grown cells, the dissolved inorganic carbon or CO2 levels for half-saturation of photosynthesis (K1/2 DIC or K1/2 CO2) increased by 11%, 55% and 32% under the LL, ML and HL levels, respectively, reflecting a light-dependent down-regulation of carbon concentrating mechanisms (CCMs). The link between the stronger CCM down-regulation and the higher growth rate at ML under OA supports the hypothesis that energy saved through CCM down-regulation contributes to the enhanced growth of the diatom.
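As a hedged illustration of what the reported K1/2 shifts imply, a Michaelis-Menten style saturation curve (the 55% increase is taken from the ML result above; the absolute K1/2 and DIC values are hypothetical) shows how a larger half-saturation constant lowers the apparent affinity for inorganic carbon after CCM down-regulation:

```python
# Illustrative only: a Michaelis-Menten style saturation curve of the kind that
# K1/2 values describe. The 55% increase is taken from the ML result above;
# the absolute K1/2 and DIC values are hypothetical.
def photosynthetic_rate(dic, p_max, k_half):
    return p_max * dic / (k_half + dic)

k_lc = 100.0          # hypothetical K1/2 (DIC) for LC-grown cells, µmol/kg
k_hc = 1.55 * k_lc    # 55% higher under HC at ML, i.e. lower carbon affinity
dic = 2000.0          # assumed seawater DIC, µmol/kg
print(photosynthetic_rate(dic, 1.0, k_lc), photosynthetic_rate(dic, 1.0, k_hc))
```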
Abstract:
Intense debate persists about the climatic mechanisms governing hydrologic changes in tropical and subtropical southeast Africa since the Last Glacial Maximum, about 20,000 years ago. In particular, the relative importance of atmospheric and oceanic processes is not firmly established. Southward shifts of the intertropical convergence zone (ITCZ) driven by high-latitude climate changes have been suggested as a primary forcing, whereas other studies infer a predominant influence of Indian Ocean sea surface temperatures on regional rainfall changes. To address this question, a continuous record representing an integrated signal of regional climate variability is required, but has until now been missing. Here we show that remote atmospheric forcing by cold events in the northern high latitudes appears to have been the main driver of hydro-climatology in southeast Africa during rapid climate changes over the past 17,000 years. Our results are based on a reconstruction of precipitation and river discharge changes, as recorded in a marine sediment core off the mouth of the Zambezi River, near the southern boundary of the modern seasonal ITCZ migration. Indian Ocean sea surface temperatures did not exert a primary control over southeast African hydrologic variability. Instead, phases of high precipitation and terrestrial discharge occurred when the ITCZ was forced southwards during Northern Hemisphere cold events, such as Heinrich stadial 1 (around 16,000 years ago) and the Younger Dryas (around 12,000 years ago), or when local summer insolation was high in the late Holocene, i.e., during the last 4,000 years.
Abstract:
Production pathways of the prominent volatile organic halogen compound methyl iodide (CH3I) are not fully understood. Based on observations, production of CH3I via photochemical degradation of organic material or via phytoplankton production has been proposed. Neither correlations between observed biological and environmental variables nor previous biogeochemical modeling has unambiguously identified the source of methyl iodide. In this study, we address the question of source mechanisms with a three-dimensional global ocean general circulation model including biogeochemistry (MPIOM-HAMOCC; MPIOM - Max Planck Institute Ocean Model, HAMOCC - HAMburg Ocean Carbon Cycle model) by carrying out a series of sensitivity experiments. The simulated fields are compared with a newly available global data set. Simulated distribution patterns and emissions of CH3I differ considerably between the two production pathways. The evaluation of our model results against observations shows that, on the global scale, observed surface concentrations of CH3I are best explained by the photochemical production pathway. Our results further emphasize that correlations between CH3I and abiotic or biotic factors do not necessarily provide meaningful insights into its source. Overall, we find a net global annual CH3I air-sea flux of 70-260 Gg/yr. On the global scale, the ocean acts as a net source of methyl iodide for the atmosphere, although in some regions in boreal winter the flux is in the opposite direction (from the atmosphere to the ocean).
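Schematically, the two candidate production pathways compared in the sensitivity experiments can be thought of as source terms of the following form; the exact dependencies and rate constants used in MPIOM-HAMOCC are not reproduced here, so these functions are illustrative placeholders only:

```python
# Schematic placeholders for the two candidate CH3I source terms compared in
# the sensitivity experiments; the exact dependencies and rate constants used
# in MPIOM-HAMOCC are not reproduced here.
def source_photochemical(doc, shortwave_radiation, k_photo):
    """Production scaling with dissolved organic carbon and incident light."""
    return k_photo * doc * shortwave_radiation

def source_biological(primary_production, k_bio):
    """Production tied directly to phytoplankton (primary) production."""
    return k_bio * primary_production
```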
Abstract:
Atmospheric monitoring at high northern latitudes (> 40°N) has shown an enhanced seasonal cycle of carbon dioxide (CO2) since the 1960s, but the underlying mechanisms are not yet fully understood. The much stronger increase at high latitudes compared with low ones suggests that northern ecosystems are experiencing large changes in vegetation and carbon cycle dynamics. Here we show that the latitudinal gradient of the increasing CO2 amplitude is mainly driven by positive trends in photosynthetic carbon uptake caused by recent climate change and mediated by changing vegetation cover in northern ecosystems. Our results emphasize the importance of climate-vegetation-carbon cycle feedbacks at high latitudes, and indicate that over the last decades photosynthetic carbon uptake has responded much more strongly to warming than carbon release processes have.
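For readers unfamiliar with the metric, a minimal sketch of how a seasonal-cycle amplitude and its trend can be quantified from a monthly CO2 record is given below; the array layout and the simple peak-to-trough definition are assumptions, and real analyses typically detrend and harmonic-fit the series first:

```python
import numpy as np

# Illustrative only: quantify the seasonal-cycle amplitude of a monthly CO2
# record and its linear trend. `monthly_co2` is assumed to be a detrended
# series shaped (n_years, 12).
def amplitude_trend(monthly_co2):
    amplitudes = monthly_co2.max(axis=1) - monthly_co2.min(axis=1)  # peak-to-trough per year
    years = np.arange(len(amplitudes))
    slope, _ = np.polyfit(years, amplitudes, 1)   # amplitude change per year (ppm/yr)
    return amplitudes, slope

# Synthetic check: a seasonal cycle whose sine amplitude grows by 0.05 ppm/yr
months = np.arange(12)
series = np.array([(3.0 + 0.05 * y) * np.sin(2 * np.pi * months / 12) for y in range(30)])
print(amplitude_trend(series)[1])  # ~0.1, i.e. peak-to-trough grows at 2 x 0.05 ppm/yr
```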
Abstract:
This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur's factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions to register rigid and nonrigid 3D targets that satisfy the requirement. The second type comprises the compositional registration algorithms, in which the brightness error function is expressed using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, the Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function, and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data. We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are suitable for image registration but not for tracking.
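As a minimal illustration of the "minimize a brightness error norm" formulation underlying all of these algorithms, the sketch below performs one Gauss-Newton update of a pure 2-D translation; it is a deliberately simplified stand-in rather than the thesis's factorization-based or compositional methods, and the nearest-pixel warp is an assumption made for brevity:

```python
import numpy as np

# Simplified sketch of one additive (Lucas-Kanade style) registration step for
# a pure 2-D translation p = [ty, tx]; the nearest-pixel warp and the plain
# Gauss-Newton solve are assumptions made for brevity.
def translation_step(template, image, p):
    """Return an updated p that reduces sum |image(x + p) - template(x)|^2."""
    ty, tx = int(round(p[0])), int(round(p[1]))
    h, w = template.shape
    warped = image[ty:ty + h, tx:tx + w].astype(float)   # crude warp of the image
    error = (warped - template).ravel()                  # brightness error
    gy, gx = np.gradient(warped)                         # image gradients at the warp
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)       # Jacobian w.r.t. [ty, tx]
    dp, *_ = np.linalg.lstsq(J, -error, rcond=None)      # Gauss-Newton increment
    return np.asarray(p, dtype=float) + dp
```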
Abstract:
This paper presents the development of a scientific-technological knowledge transfer model in Mexico, intended to strengthen the limited relations between the scientific and industrial environments. The proposal is based on the analysis of eight organizations (research centers and firms) with varying degrees of skill in the practice of scientific-technological knowledge transfer, carried out using a case study approach. The analysis highlights the synergistic use of the organizational and technological capabilities of each organization as a means of identifying the knowledge transfer mechanisms best suited to establishing cooperative processes and achieving results in R&D and innovation activities.
Abstract:
An accurate characterization of the near-region propagation of radio waves inside tunnels is of practical importance for the design and planning of advanced communication systems. However, there is as yet no consensus on the propagation mechanism in this region. Some authors claim that propagation in this region follows the free-space model, while others interpret it with the multi-mode waveguide model. This paper clarifies the situation in the near-region of arched tunnels by analytically modeling the division point between the two propagation mechanisms. The procedure is based on combining propagation theory with three-dimensional solid geometry. Three groups of measurements are employed to verify the model in different tunnels at different frequencies. Furthermore, simplified models of the division point for five specific application situations are derived to facilitate the use of the model. The results of this paper can help deepen insight into the propagation mechanism within tunnel environments.
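A rough, commonly quoted approximation (not the paper's geometric model for arched tunnels) places the end of the free-space near-region where the first Fresnel zone fills the tunnel cross-section, which the sketch below evaluates for an assumed rectangular cross-section and frequency:

```python
# Rough illustration only (not the paper's geometric model for arched tunnels):
# a commonly quoted approximation places the end of the free-space near-region
# where the first Fresnel zone fills the tunnel cross-section, i.e. at roughly
# max(width^2, height^2) / wavelength.
C = 299_792_458.0  # speed of light, m/s

def division_point_estimate(width_m, height_m, freq_hz):
    wavelength = C / freq_hz
    return max(width_m ** 2, height_m ** 2) / wavelength

# Assumed example: an 8 m wide, 6 m high tunnel at 900 MHz
print(division_point_estimate(8.0, 6.0, 900e6))  # ~192 m
```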
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on lower-level annotation tools and their outputs to generate their own annotations. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
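As a hypothetical illustration of the kind of combination mentioned above (merging annotations produced for a common level so that their errors correct each other), the sketch below merges the POS tags proposed by several taggers for the same tokens by simple majority voting; the tagger outputs and tag set are invented for the example:

```python
from collections import Counter

# Illustrative only: combine POS annotations from several taggers for the same
# token sequence by majority voting; the tagger outputs below are hypothetical.
def combine_pos_tags(annotations):
    """annotations: list of tag sequences, one per tagger, aligned by token."""
    combined = []
    for tags_for_token in zip(*annotations):
        tag, _count = Counter(tags_for_token).most_common(1)[0]
        combined.append(tag)
    return combined

tagger_outputs = [
    ["DET", "NOUN", "VERB"],   # tagger A
    ["DET", "NOUN", "NOUN"],   # tagger B (one hypothetical error)
    ["DET", "NOUN", "VERB"],   # tagger C
]
print(combine_pos_tags(tagger_outputs))  # ['DET', 'NOUN', 'VERB']
```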
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Nitrogen sputtering yields as high as 10^4 atoms/ion are obtained by irradiating N-rich Cu3N films (N concentration: 33 ± 2 at.%) with Cu ions at energies in the range 10–42 MeV. The kinetics of N sputtering as a function of ion fluence is determined at several energies (stopping powers) for films deposited on both glass and silicon substrates. The kinetic curves show that the amount of nitrogen released increases strongly with rising irradiation fluence until reaching a saturation level at a low remaining nitrogen fraction (5–10%), beyond which no further nitrogen reduction is observed. The sputtering rate for nitrogen depletion is found to be independent of the substrate and to increase linearly with electronic stopping power (Se). A stopping power threshold (Sth) of ~3.5 keV/nm for nitrogen depletion has been estimated by extrapolation of the data. The experimental kinetic data have been analyzed within a bulk molecular recombination model. The microscopic mechanisms of the nitrogen depletion process are discussed in terms of a non-radiative exciton decay model. In particular, the estimated threshold is related to the minimum exciton density required to achieve efficient sputtering rates.
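As an illustration of how a depletion rate and a residual fraction can be extracted from kinetic curves of this kind, one could use a generic first-order saturation fit; this is not the bulk molecular recombination model itself, and the data points below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic first-order saturation fit (not the bulk molecular recombination
# model itself): the remaining N fraction decays exponentially with fluence
# toward a residual level, reproducing the reported saturation at 5-10%.
def n_fraction(fluence, sigma, n_res):
    """sigma: effective depletion cross-section; n_res: residual N fraction."""
    return n_res + (1.0 - n_res) * np.exp(-sigma * fluence)

# Hypothetical data (fluence in ions/cm^2, remaining N fraction):
fluence = np.array([0.0, 2e13, 5e13, 1e14, 2e14, 5e14])
n_meas = np.array([1.00, 0.72, 0.45, 0.25, 0.12, 0.07])
popt, _ = curve_fit(n_fraction, fluence, n_meas, p0=[1e-14, 0.05])
print(popt)  # fitted depletion cross-section and residual fraction
```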