Abstract:
Sediments from five Leg 167 drill sites and three piston cores were analyzed for Corg and CaCO3. Oxygen isotope stratigraphy on benthic foraminifers was used to assign age models to these sedimentary records. We find that the northern and central California margin is characterized by k.y.-scale events that can be found in both the CaCO3 and Corg time series. We show that the CaCO3 events are caused by changes in CaCO3 production by plankton, not by dissolution. We also show that these CaCO3 events occur in Marine Isotope Stages (MIS) 2, 3, and 4 during Dansgaard/Oeschger interstadials. They occur most strongly, however, during the MIS 5/4 glaciation and the MIS 2/1 deglaciation. We believe that the link between the northeastern Pacific Ocean and the North Atlantic is primarily transmitted by the atmosphere, not the ocean. The highest CaCO3 production and burial occurs when the surface ocean is somewhat cooler than the modern ocean and the surface mixed layer is somewhat more stable.
Abstract:
Studies on the consequences of ocean acidification for the marine ecosystem have revealed behavioural changes in coral reef fishes exposed to sustained near-future CO2 levels. The changes have been linked to altered function of GABAergic neurotransmitter systems, because the behavioural alterations can be reversed rapidly by treatment with the GABAA receptor antagonist gabazine. Characterization of the molecular mechanisms involved would be greatly aided if these can be examined in a well-characterized model organism with a sequenced genome. It was recently shown that CO2-induced behavioural alterations are not confined to tropical species, but also affect the three-spined stickleback, although an involvement of the GABAA receptor was not examined. Here, we show that loss of lateralization in the stickleback can be restored rapidly and completely by gabazine treatment. This points towards a worrying universality of disturbed GABAA function after high-CO2 exposure in fishes from tropical to temperate marine habitats. Importantly, the stickleback is a model species with a sequenced and annotated genome, which greatly facilitates future studies on underlying molecular mechanisms.
Abstract:
Dating of sediment cores from the Baltic Sea has proven to be difficult due to uncertainties surrounding the 14C reservoir age and a scarcity of macrofossils suitable for dating. Here we present the results of multiple dating methods carried out on cores in the Gotland Deep area of the Baltic Sea. Particular emphasis is placed on the Littorina stage (8 ka ago to the present) of the Baltic Sea and possible changes in the 14C reservoir age of our dated samples. Three geochronological methods are used. Firstly, palaeomagnetic secular variations (PSV) are reconstructed, whereby ages are transferred to PSV features through comparison with varved lake sediment based PSV records. Secondly, lead (Pb) content and stable isotope analysis are used to identify past peaks in anthropogenic atmospheric Pb pollution. Lastly, 14C determinations were carried out on benthic foraminifera (Elphidium spec.) samples from the brackish Littorina stage of the Baltic Sea. Determinations carried out on smaller samples (as low as 4 µg C) employed an experimental, state-of-the-art method involving the direct measurement of CO2 from samples by a gas ion source without the need for a graphitisation step - the first time this method has been performed on foraminifera in an applied study. The PSV chronology, based on the uppermost Littorina stage sediments, produced ten age constraints between 6.29 and 1.29 cal ka BP, and the Pb depositional analysis produced two age constraints associated with the Medieval pollution peak. Analysis of PSV data shows that adequate directional data can be derived from both the present Littorina saline phase muds and Baltic Ice Lake stage varved glacial sediments. Ferrimagnetic iron sulphides, most likely authigenic greigite (Fe3S4), present in the intermediate Ancylus Lake freshwater stage sediments acquire a gyroremanent magnetisation during static alternating field (AF) demagnetisation, preventing the identification of a primary natural remanent magnetisation for these sediments. An inferred marine reservoir age offset (ΔR) is calculated by comparing the foraminifera 14C determinations to a PSV & Pb age model. This ΔR is found to trend towards younger values upwards in the core, possibly due to a gradual change in hydrographic conditions brought about by a reduction in marine water exchange from the open sea due to continued isostatic rebound.
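As a hedged illustration of the last step (the exact procedure is not given in the abstract), a reservoir age offset of this kind is usually defined as the difference between the measured conventional radiocarbon age of a sample and the radiocarbon age predicted by a marine calibration curve at the sample's independently determined calendar age:

\[ \Delta R(t_{\mathrm{cal}}) \;=\; {}^{14}\mathrm{C}_{\mathrm{measured}} \;-\; {}^{14}\mathrm{C}_{\mathrm{marine}}(t_{\mathrm{cal}}) \]

where t_cal is here the calendar age supplied by the PSV & Pb age model and 14C_marine(t_cal) is the value of a marine calibration curve at that age (which curve version the authors used is not stated in the abstract).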
Abstract:
Orbital forcing does not only exert direct insolation effects, but also alters climate indirectly through feedback mechanisms that modify atmosphere and ocean dynamics and meridional heat and moisture transfers. We investigate the regional effects of these changes through a detailed analysis of atmosphere and ocean circulation and heat transports in a coupled atmosphere-ocean-sea ice-biosphere general circulation model (ECHAM5/JSBACH/MPI-OM). We perform long-term quasi-equilibrium simulations under pre-industrial, mid-Holocene (6000 years before present - yBP), and Eemian (125 000 yBP) orbital boundary conditions. Compared to pre-industrial climate, Eemian and Holocene temperatures show generally warmer conditions at higher and cooler conditions at lower latitudes. Changes in sea-ice cover, ocean heat transports, and atmospheric circulation patterns lead to pronounced regional heterogeneity. Over Europe, the warming is most pronounced over the north-eastern part, in accordance with recent reconstructions for the Holocene. We attribute this warming to enhanced ocean circulation in the Nordic Seas and enhanced ocean-atmosphere heat flux over the Barents Shelf, in conjunction with the retreat of sea ice and intensified winter storm tracks over northern Europe.
Abstract:
Montiel Olea and Strzalecki (2014) axiomatically develop an algorithm to infer the parameters of the beta-delta model of cognitive bias (present and future bias). While this is extremely useful, it allows the implied beta to become very large when responses are impatient in future choices relative to present choices, i.e., when there is a strong future bias. I modify the model by further exponentiating the functional form to obtain more reasonable beta values.
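For reference (standard background, not taken from the paper), the beta-delta model of quasi-hyperbolic discounting weights a payoff received at delay t by

\[
D(t) =
\begin{cases}
1, & t = 0,\\
\beta\,\delta^{t}, & t \geq 1,
\end{cases}
\qquad 0 < \delta \leq 1,
\]

so that an estimated beta below 1 indicates present bias and above 1 future bias; the inference problem referred to above is to recover (beta, delta) from observed choices, and the exponentiated modification proposed in the abstract is not reproduced here.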
Abstract:
A study of the manoeuvrability of a riverine support patrol vessel is carried out to derive a mathematical model and simulate manoeuvres with this ship. The vessel is mainly characterized by its wide beam and its unconventional propulsion system, namely a pump-jet type azimuthal propulsion. By processing experimental data and the ship characteristics with diverse formulae to obtain the proper hydrodynamic coefficients and propulsion forces, a system of three differential equations is completed and tuned to carry out simulations of the turning test. The simulation accepts variable speed, jet angle and water depth as input parameters, and its output consists of time series of the state variables and a plot of the simulated path and heading of the ship during the manoeuvre. Thanks to data from full-scale trials previously performed with the studied vessel, a validation process was carried out, which shows a good fit between simulated and full-scale experimental results, especially for the turning diameter.
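The paper's equations and identified coefficients are not given in the abstract; the following is only a minimal sketch, with hypothetical linear hydrodynamic coefficients, thrust and jet position, of how a three-equation (surge-sway-yaw) manoeuvring model of this kind can be integrated in time to produce the path and heading during a turning test:

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical, illustrative values -- NOT the vessel's identified coefficients.
m, Izz = 1.2e5, 4.0e6                  # mass [kg], yaw moment of inertia [kg m^2]
Xu, Yv, Nr = -8.0e3, -2.0e4, -9.0e5    # linear damping derivatives
T = 4.0e4                              # pump-jet thrust [N]
x_jet = -10.0                          # longitudinal position of the jet [m] (assumed)

def rhs(t, state, delta):
    """Surge u, sway v, yaw rate r, heading psi, position (x, y)."""
    u, v, r, psi, x, y = state
    du = (T * np.cos(delta) + Xu * u) / m
    dv = (T * np.sin(delta) + Yv * v) / m
    dr = (x_jet * T * np.sin(delta) + Nr * r) / Izz
    dpsi = r
    dx = u * np.cos(psi) - v * np.sin(psi)
    dy = u * np.sin(psi) + v * np.cos(psi)
    return [du, dv, dr, dpsi, dx, dy]

# Turning test: approach speed 4 m/s, jet angle 25 degrees, 10 minutes of manoeuvre.
sol = solve_ivp(rhs, (0.0, 600.0), [4.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                args=(np.radians(25.0),), max_step=1.0)
u, v, r, psi, x, y = sol.y
print("rough turning-circle extent in transfer direction [m]:", y.max() - y.min())

A real model of this kind would also include added mass, speed-dependent and coupled hydrodynamic terms and a water-depth correction, which are omitted here for brevity.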
Abstract:
The study of matter under conditions of high density, pressure, and temperature is a valuable subject for inertial confinement fusion (ICF), astrophysical phenomena, high-power laser interaction with matter, etc. In all these cases, matter is heated and compressed by strong shocks to high pressures and temperatures, becomes partially or completely ionized via thermal or pressure ionization, and takes the form of dense plasma. The thermodynamics and the hydrodynamics of hot dense plasmas cannot be predicted without knowledge of the equation of state (EOS), which describes how a material reacts to pressure and how much energy is involved. The equation of state therefore often takes the form of pressure and energy as functions of density and temperature. Furthermore, EOS data must be obtained in a timely manner in order to be useful as input to hydrodynamic codes. For this reason, the use of fast, robust and reasonably accurate atomic models is necessary for computing the EOS of a material.
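As a minimal illustration of this functional form (standard background, not the atomic model the authors use), the simplest closure of this kind is the ideal-gas EOS for a fully ionized plasma with mean ionization \(\bar{Z}\) and atomic mass \(A\):

\[
P(\rho, T) = \bigl(1 + \bar{Z}\bigr)\,\frac{\rho\,k_B T}{A\,m_u},
\qquad
E(\rho, T) = \frac{3}{2}\,\bigl(1 + \bar{Z}\bigr)\,\frac{k_B T}{A\,m_u},
\]

where \(m_u\) is the atomic mass unit, \(k_B\) Boltzmann's constant and \(E\) the specific internal energy; realistic EOS models for hot dense plasma add electron degeneracy, Coulomb and cold-curve contributions on top of this baseline.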
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool, as the sketch below illustrates.
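To make this dependency concrete, here is a small, hedged sketch (not part of OntoTag, using NLTK purely as an example toolkit) of a naive sense-tagging step built on top of a POS tagger; any error at the lower (morphosyntactic) level restricts the WordNet lookup to the wrong category and thus propagates to the semantic level:

import nltk
from nltk.corpus import wordnet as wn

# Requires: nltk.download('punkt'), nltk.download('averaged_perceptron_tagger'),
#           nltk.download('wordnet')

def pos_to_wordnet(tag):
    """Map Penn Treebank POS tags to WordNet POS categories."""
    if tag.startswith('J'):
        return wn.ADJ
    if tag.startswith('V'):
        return wn.VERB
    if tag.startswith('R'):
        return wn.ADV
    return wn.NOUN

def naive_sense_tag(sentence):
    """The lower-level annotation (POS) feeds the higher-level one (senses):
    a wrong POS tag sends the WordNet lookup to the wrong category."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)                      # morphosyntactic level
    return [(word, tag, wn.synsets(word, pos_to_wordnet(tag))[:1])
            for word, tag in tagged]                   # crude semantic level

print(naive_sense_tag("The parser annotates the text."))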
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated taking into account models with different degrees of detail. In particular, we have studied this matter from a two-dimensional (2D) finite element model up to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions, which are taken into account in the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem arising from the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
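The abstract does not specify which PSD was used; as a hedged sketch (illustrative parameter values, not those from the paper), a distributed vertical irregularity profile can be synthesized from a one-sided PSD of the common rational form by superposing harmonics with random phases:

import numpy as np

def irregularity_profile(length=500.0, dx=0.25,
                         Av=4.032e-7, Oc=0.8246, Or=0.0206, seed=0):
    """Synthesize a vertical track irregularity profile from a PSD of the
    common rational form S(O) = Av*Oc^2 / ((O^2 + Or^2)*(O^2 + Oc^2)).
    The parameter values here are illustrative, not the ones used in the paper."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length, dx)
    # spatial frequencies [rad/m] and PSD ordinates
    Omega = np.linspace(2 * np.pi / length, 2 * np.pi / (2 * dx), 2000)
    dO = Omega[1] - Omega[0]
    S = Av * Oc**2 / ((Omega**2 + Or**2) * (Omega**2 + Oc**2))
    amp = np.sqrt(2.0 * S * dO)                 # amplitude of each harmonic
    phase = rng.uniform(0.0, 2 * np.pi, Omega.size)
    z = (amp[None, :] * np.cos(x[:, None] * Omega[None, :]
                               + phase[None, :])).sum(axis=1)
    return x, z

x, z = irregularity_profile()
print("RMS vertical irregularity [m]:", z.std())

Profiles generated this way can then be imposed as relative displacements at the wheel-rail contact when integrating the coupled vehicle-track equations in time.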
Abstract:
The derivative nonlinear Schrödinger (DNLS) equation, describing the propagation of circularly polarized Alfvén waves of finite amplitude in a cold plasma, is truncated to explore the coherent, weakly nonlinear, cubic coupling of three waves near resonance, one wave being linearly unstable and the other waves damped. In a reduced three-wave model (equal damping of the daughter waves, three-dimensional flow for two wave amplitudes and one relative phase), no matter how small the growth rate of the unstable wave, there exists a parametric domain in which the flow exhibits chaotic dynamics that is absent for zero growth rate. This hard transition in phase-space behavior occurs for left-hand (LH) polarized waves, paralleling the known fact that only LH time-harmonic solutions of the DNLS equation are modulationally unstable.
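The truncated equations themselves are not reproduced in the abstract; the following is a generic, hedged sketch of coherent three-wave coupling with one linearly unstable wave and two equally damped daughters (the same ingredients as the reduced model above, but not necessarily the paper's exact truncation), integrated numerically:

import numpy as np
from scipy.integrate import solve_ivp

# Generic coherent three-wave parametric interaction (illustrative parameters):
# wave 1 grows linearly at rate gamma, waves 2 and 3 are equally damped at rate nu,
# V is the quadratic coupling and delta the frequency mismatch from exact resonance.
gamma, nu, V, delta = 0.05, 1.0, 1.0, 2.0

def rhs(t, y):
    a1, a2, a3 = y[:3] + 1j * y[3:]
    da1 = gamma * a1 - V * a2 * a3 * np.exp(1j * delta * t)
    da2 = -nu * a2 + V * a1 * np.conj(a3) * np.exp(-1j * delta * t)
    da3 = -nu * a3 + V * a1 * np.conj(a2) * np.exp(-1j * delta * t)
    d = np.array([da1, da2, da3])
    return np.concatenate([d.real, d.imag])

y0 = np.array([1e-3, 1e-3, 1e-3, 0.0, 0.0, 0.0])   # small initial amplitudes
sol = solve_ivp(rhs, (0.0, 400.0), y0, max_step=0.05, rtol=1e-8)
amps = np.abs(sol.y[:3] + 1j * sol.y[3:])
# Depending on gamma, nu, V and delta, the amplitudes either saturate to a fixed
# point / limit cycle or evolve irregularly; sweeping gamma from zero upwards is
# the kind of experiment the abstract's "hard transition" refers to.
print("max |a1|:", amps[0].max())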
Abstract:
The Integral Masonry System (IMS), consisting of intersecting steel trusses along each of the three directions of space in walls and slabs and using any masonry material, had already been backed up by a previous adobe test for seismic areas. This paper presents the comparison between this last test and the adaptation of the IMS using hollow brick. A prototype based on a two-storey model house (6m x 6m x 6m) has been built at two different scales in order to maximize the load and size allowed by the shake table: the first one at half the size of the whole building (3m x 3m x 3m) and the second, a quarter of the real-size building (3m x 3m x 6m). Both tests suffered only mild to moderate damage while supporting the highest seismic action applied by the shake table, the first test without even cracking and the second with very little damage. The thickness of the hollow brick wall and the diameter of the three-dimensional truss reinforcement were scaled to the real-size test in order to ascertain its good structural behaviour in relation to the previous structural model calculations. The aim of this study is to summarize the results of the research collaboration between the ETSAM-UPM and the PUCP, in whose laboratory these tests were carried out.