99 results for PHYSICS EVENT GENERATION
Abstract:
In this article we present a hybrid approach to the automatic summarization of Spanish medical texts. Many systems perform automatic summarization using either statistical or linguistic techniques, but only a few combine both. Our premise is that producing a good summary requires attention to the linguistic properties of the text, but should also exploit the advantages of statistical techniques. We have integrated the Cortex (Vector Space Model) and Enertex (statistical physics) systems, coupled with the Yate term extractor, and the Disicosum system (linguistics). We compared these systems and then integrated them into a hybrid approach. Finally, we applied this hybrid system to a corpus of medical articles and evaluated its performance, obtaining good results.
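As a rough illustration of how a statistical sentence score can be combined with a rule-based one, here is a minimal Python sketch: the cosine-to-document score stands in for a Cortex-style vector-space measure, and the title-overlap rule is only a placeholder for Disicosum-style linguistic scoring. Both scores and the weighting are assumptions for the sketch, not the paper's actual fusion.

```python
import numpy as np

def tf_matrix(sentences, vocab):
    """Term-frequency vectors, one row per sentence (bag of words)."""
    M = np.zeros((len(sentences), len(vocab)))
    index = {w: j for j, w in enumerate(vocab)}
    for i, s in enumerate(sentences):
        for w in s.lower().split():
            if w in index:
                M[i, index[w]] += 1
    return M

def hybrid_scores(sentences, weight=0.5):
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    M = tf_matrix(sentences, vocab)
    doc = M.sum(axis=0)                      # document vector
    norm = np.linalg.norm
    # Statistical score: cosine similarity of each sentence to the document.
    stat = M @ doc / (norm(M, axis=1) * norm(doc) + 1e-12)
    # Stand-in "linguistic" score: fraction of title words covered
    # (a real system would apply discourse rules instead).
    title = sentences[0].lower().split()
    ling = np.array([sum(w in s.lower().split() for w in title) / len(title)
                     for s in sentences])
    return weight * stat + (1 - weight) * ling

sents = ["Medical text summarization.",
         "Statistics alone may miss discourse structure.",
         "Hybrid systems combine both views of the medical text."]
print(hybrid_scores(sents))  # rank sentences, keep the top-k as the summary
```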
Abstract:
In this work we present the results of experimental work on the automatic development of lexical class-based lexica. Our purpose is to assess the use of linguistic, lexical-class-based information as a feature selection methodology for classifiers used in quick lexical development. The results show that the approach can significantly reduce the human effort required to develop language resources.
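A minimal sketch of the workflow the abstract implies, assuming made-up lexical classes and subcategorization-style features (none of these names come from the paper): a classifier proposes a class for each candidate word, so a lexicographer only validates predictions instead of building entries by hand.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Illustrative training pairs: shallow syntactic features -> lexical class.
train = [({"takes_np_object": 1, "takes_that_clause": 0}, "change-of-state"),
         ({"takes_np_object": 0, "takes_that_clause": 1}, "communication"),
         ({"takes_np_object": 1, "takes_that_clause": 1}, "communication")]
X, y = zip(*train)
vec = DictVectorizer()
clf = DecisionTreeClassifier().fit(vec.fit_transform(X), y)

candidate = {"takes_np_object": 0, "takes_that_clause": 1}
print(clf.predict(vec.transform([candidate])))  # -> ['communication']
```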
Abstract:
This paper explores the determinants of female activity from a dynamic perspective. An event-history analysis of the transition from employment to housework was carried out using data from the European Household Panel Survey. Four countries representing different welfare regimes and, more specifically, different family policies were selected for the analysis: Britain, Denmark, Germany and Spain. The results confirm the importance of individual-level factors, which is consistent with an economic approach to female labour supply. Nonetheless, there are significant cross-national differences in how these factors affect the risk of abandoning the labour market. First, the number of transitions is much lower among Danish working women than among British, German or Spanish ones, revealing the relative importance of the universal provision of childcare services vis-à-vis other elements of family policy, such as time or money.
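Event-history analyses of this kind are often estimated as discrete-time hazard models on person-year data; the sketch below is a generic illustration with invented variables (`young_child`, `denmark`) and toy data, not the paper's actual specification.

```python
import pandas as pd
import statsmodels.api as sm

# Each row is a person-year; `exit` = 1 if the woman moved from
# employment to housework that year (all values are made up).
df = pd.DataFrame({
    "exit":        [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1],
    "young_child": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
    "denmark":     [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
})
X = sm.add_constant(df[["young_child", "denmark"]])
model = sm.Logit(df["exit"], X).fit(disp=0)
print(model.params)  # a negative `denmark` coefficient = lower exit hazard
```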
Abstract:
A commentary on the commissioned translation of the book WOTBG, focusing on the samples of the various authors' poetry and on the need to preserve their similarities and their differences.
Abstract:
First-generation models of currency crises have often been criticized because they predict that, in the absence of very large triggering shocks, currency attacks should be predictable and lead to small devaluations. This paper shows that these features of first-generation models are not robust to the inclusion of private information. In particular, it analyzes a generalization of the Krugman-Flood-Garber (KFG) model which relaxes the assumption that all consumers are perfectly informed about the level of fundamentals. In this environment, the KFG equilibrium of zero devaluation is only one of many possible equilibria. In all the other equilibria, the lack of perfect information delays the attack on the currency past the point at which the shadow exchange rate equals the peg, giving rise to unpredictable and discrete devaluations.
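For reference, a standard textbook rendering of the mechanism the paper generalizes (the notation is mine; a linear monetary model with PPP and UIP, foreign variables normalized to zero): under perfect information the attack occurs exactly when the shadow rate reaches the peg, so the devaluation is zero, whereas private information delays the attack past T and the exchange rate jumps discretely.

```latex
\begin{align}
  m_t - s_t &= -\alpha \dot{s}_t, \qquad m_t = d_t + r_t,
      \qquad d_t = d_0 + \mu t, \\
  \tilde{s}_t &= d_t + \alpha\mu
      \quad\text{(shadow rate: the float that would prevail with } r_t = 0\text{)}, \\
  \tilde{s}_T &= \bar{s} \;\Longrightarrow\;
      T = \frac{\bar{s} - d_0}{\mu} - \alpha
      \quad\text{(attack time; } s \text{ jumps by zero at } T\text{)}.
\end{align}
```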
Abstract:
We use a simulation model to study how the diversification of electricity generation portfolios influences wholesale prices. We find that technological diversification generally leads to lower market prices, but that the relationship is mediated by the supply-to-demand ratio. For each demand level there is a threshold at which pivotal-supplier dynamics change, and the pre- and post-threshold dynamics are the cause of non-linearities in the influence of diversification on market prices. The findings are robust to our choice of behavioural parameters and match closed-form solutions where those are available.
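To make the pivotal-supplier mechanism concrete, here is a toy merit-order market in Python; the technology names, capacities and costs are invented, and an agent-based simulation like the paper's is far richer.

```python
# Minimal merit-order sketch: clear the market by stacking capacity in cost
# order, then test whether any firm is pivotal, i.e. demand cannot be met
# without it. Numbers are illustrative only.
techs = {"wind": (30.0, 5.0), "coal": (40.0, 25.0), "gas": (50.0, 45.0)}
#          name: (capacity MW, marginal cost EUR/MWh)

def clear(demand, portfolio):
    stack = sorted(portfolio.values(), key=lambda t: t[1])
    served = 0.0
    for cap, cost in stack:
        served += cap
        if served >= demand:
            return cost          # the marginal unit sets the price
    return None                  # demand exceeds total capacity

def pivotal(demand, portfolio):
    total = sum(cap for cap, _ in portfolio.values())
    return [n for n, (cap, _) in portfolio.items() if total - cap < demand]

for demand in (60, 100):
    print(demand, clear(demand, techs), pivotal(demand, techs))
# As demand approaches total capacity, more firms become pivotal and can
# price above marginal cost -- the threshold effect described above.
```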
Abstract:
A polarizable quantum mechanics/molecular mechanics model has been extended to account for the difference between the macroscopic electric field and the actual electric field felt by the solute molecule. This enables the calculation of effective microscopic properties which can be related to macroscopic susceptibilities directly comparable with experimental results. By separating the discrete local field into two distinct contributions we define two different microscopic properties, the so-called solute and effective properties. The solute properties account for the pure solvent effects, i.e., effects present even when the macroscopic electric field is zero, and the effective properties account for both the pure solvent effects and the effect of the dipoles induced in the solvent by the macroscopic electric field. We present results for the linear and nonlinear polarizabilities of water and acetonitrile in both the gas phase and the liquid phase. For all the properties we find that the pure solvent effect increases the properties, whereas the induced electric field decreases them. Furthermore, we present results for the refractive index, third-harmonic generation (THG), and electric-field-induced second-harmonic generation (EFISH) for liquid water and acetonitrile. We find in general good agreement between the calculated and experimental results for the refractive index and the THG susceptibility. For the EFISH susceptibility, however, the difference between experiment and theory is larger, since the orientational effect arising from the static electric field is not accurately described.
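For orientation, these are the standard continuum local-field relations (Gaussian units) that a discrete local-field treatment refines: the Lorentz local field, and the Lorentz-Lorenz equation connecting the microscopic polarizability to the refractive index at number density N. In the approach described above, the cavity-style factor is replaced by explicitly computed induced dipoles, which is what separates the "solute" from the "effective" properties.

```latex
\begin{align}
  \mathbf{E}_{\mathrm{loc}} &= \mathbf{E}_{\mathrm{mac}}
      + \frac{4\pi}{3}\,\mathbf{P}
      = \frac{n^2 + 2}{3}\,\mathbf{E}_{\mathrm{mac}}, \\
  \frac{n^2 - 1}{n^2 + 2} &= \frac{4\pi}{3}\, N \alpha .
\end{align}
```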
Abstract:
Interdisciplinary frameworks for studying natural hazards and their temporal trends have considerable potential for generating data for risk assessment, land-use planning and, therefore, the sustainable management of resources. This paper focuses on the adjustments required because of the wide variety of scientific fields involved in the reconstruction and characterisation of flood events over the past 1000 years. The aim of this paper is to describe various methodological aspects of the study of flood events in their historical dimension, including the critical evaluation of old documentary and instrumental sources, flood-event classification and hydraulic modelling, and homogeneity and quality-control tests. Standardized criteria for flood classification have been defined and applied to the Isère and Drac floods in France, from 1600 to 1950, and to the Ter, Llobregat and Segre floods in Spain, from 1300 to 1980. The analysis of the Drac and Isère data series from 1600 to the present day showed that extraordinary and catastrophic floods were not distributed uniformly in time. However, the largest floods (general catastrophic floods) were homogeneously distributed in time within the period 1600-1900. No major flood occurred in these rivers during the 20th century. From 1300 to the present day, no homogeneous behaviour was observed for extraordinary floods in the Spanish rivers. The largest floods were uniformly distributed in time within the period 1300-1900 for the Segre and Ter rivers.
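As an illustration only, a standardized historical-flood classification of the kind described can reduce to a small decision rule; the three categories below follow common historical-hydrology usage, and the exact criteria applied to these rivers may differ.

```python
# Hedged sketch of a standardized flood classification from documentary
# evidence; category names are conventional, criteria are illustrative.
def classify_flood(overflows_channel, damages_infrastructure,
                   destroys_infrastructure):
    if destroys_infrastructure:
        return "catastrophic"    # severe destruction (bridges, mills, walls)
    if damages_infrastructure:
        return "extraordinary"   # overbank flow with damage
    if overflows_channel:
        return "ordinary"        # overbank flow, little or no damage
    return "not a flood event"

print(classify_flood(True, True, False))   # -> extraordinary
```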
Abstract:
Within the scope of the European project Hydroptimet (INTERREG IIIB-MEDOCC programme), an intercomparison of limited area models (LAMs) is performed for intense events that caused severe damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for providing a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the regions of Catalonia affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift in the forecast error and to identify the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification using a wider observational data set, is needed to support this statement.
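The non-parametric scores mentioned are computed from the 2x2 contingency table of forecast versus observed threshold exceedances (hits a, false alarms b, misses c, correct negatives d); a compact Python version of the usual set follows.

```python
# Standard contingency-table scores used for QPF verification.
def skill_scores(a, b, c, d):
    n = a + b + c + d
    pod = a / (a + c)                 # probability of detection
    far = b / (a + b)                 # false alarm ratio
    csi = a / (a + b + c)             # critical success index (threat score)
    a_random = (a + b) * (a + c) / n  # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)  # equitable threat score
    return {"POD": pod, "FAR": far, "CSI": csi, "ETS": ets}

print(skill_scores(a=30, b=10, c=20, d=140))
# -> {'POD': 0.6, 'FAR': 0.25, 'CSI': 0.5, 'ETS': 0.4}
```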
Abstract:
The 10 June 2000 event was the largest flash-flood event in the northeast of Spain in the late 20th century, both as regards its meteorological features and its considerable social impact. This paper focuses on the analysis of the structures that produced the heavy rainfall, especially from the point of view of meteorological radar. Since this case is a good example of a Mediterranean flash-flood event, a final objective of this paper is to describe the evolution of the rainfall structure clearly enough to be understood by an interdisciplinary forum. It could then be useful not only for improving conceptual meteorological models, but also for application in downscaling models. The main precipitation structure was a Mesoscale Convective System (MCS) that crossed the region and developed as a consequence of the merging of two previous squall lines. The paper analyses the main meteorological features that led to the development and triggering of the heavy rainfall, with special emphasis on the features of this MCS, its life cycle and its dynamics. To this end, 2-D and 3-D algorithms were applied to the imagery recorded over the complete life cycle of the structures, which lasted approximately 18 h. Mesoscale and synoptic information were also considered. The results show that it was an NS-MCS, quasi-stationary during its mature stage as a consequence of the formation of a convective train, the different displacement directions of the 2-D and 3-D structures, including the propagation of new cells, and the slow movement of the convergence line associated with the Mediterranean mesoscale low.
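A minimal sketch of the first step of such 2-D algorithms, assuming a made-up reflectivity frame and a conventional 40 dBZ convective threshold (the actual algorithms and thresholds used for this event may differ): threshold the field, label connected echoes, and track their centroids from frame to frame.

```python
import numpy as np
from scipy import ndimage

Z = np.array([[10, 12, 45, 48,  9],
              [11, 44, 50, 46, 10],
              [ 9, 12, 47, 13, 11],
              [ 8,  9, 10, 12, 42]])   # reflectivity in dBZ (made-up frame)

mask = Z >= 40                          # convective threshold
labels, n = ndimage.label(mask)         # connected 2-D structures
centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
print(n, centroids)  # 2 structures; matching centroids across frames
                     # gives the displacement of each 2-D structure
```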
Abstract:
Increased production of reactive oxygen species (ROS) in mitochondria underlies major systemic diseases, and this clinical problem stimulates great scientific interest in the mechanism of ROS generation. However, the mechanism of the hypoxia-induced change in ROS production is not fully understood. To analyze this mechanism mathematically in detail, taking into consideration all the possible redox states formed in the process of electron transport, a system of hundreds of differential equations must be constructed, even for respiratory complex III alone. To facilitate such tasks, we developed a new modeling methodology based on the automated construction of large sets of differential equations. The detailed modeling of electron transport in mitochondria allowed the identification of two steady-state modes of operation (bistability) of respiratory complex III under the same microenvironmental conditions. Various perturbations could induce the transition of the respiratory chain from one steady state to the other. While complex III is normally in a low-ROS-producing mode, temporary anoxia can switch it to a high-ROS-producing state that persists after the return to normal oxygen supply. This prediction, which we validated qualitatively in experiments, explains the mechanism of anoxia-induced cell damage. Recognition of the bistability of complex III operation may enable novel therapeutic strategies for oxidative stress, and our modeling method could be widely used in systems biology studies.
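The key methodological idea, automated construction of the equation set, can be illustrated in a few lines of Python: given a list of (reactants, products, rate constant) triples, the right-hand sides of the mass-action ODEs are generated programmatically rather than written by hand. The reaction scheme below is generic, not the actual complex III redox network.

```python
import numpy as np
from scipy.integrate import solve_ivp

# (reactants, products), rate constant -- illustrative scheme only.
reactions = [((["A", "B"], ["C"]), 1.0),
             ((["C"], ["A", "B"]), 0.1),
             ((["C"], ["ROS"]), 0.05)]
species = ["A", "B", "C", "ROS"]
idx = {s: i for i, s in enumerate(species)}

def rhs(t, y):
    """ODE right-hand side assembled automatically from the reaction list."""
    dy = np.zeros_like(y)
    for (reac, prod), k in reactions:
        v = k * np.prod([y[idx[s]] for s in reac])  # mass-action rate
        for s in reac: dy[idx[s]] -= v
        for s in prod: dy[idx[s]] += v
    return dy

sol = solve_ivp(rhs, (0, 50), [1.0, 1.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])  # final state; over a large network, scanning initial
                     # conditions this way is how bistability is revealed
```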
Abstract:
We report the design and validation of simple magnetic tweezers for oscillating ferromagnetic beads at piconewton and nanometer scales. The system is based on a single pair of coaxial coils operating in two sequential modes: permanent magnetization of the beads through a large, brief pulse of magnetic field, and generation of magnetic gradients to produce uniaxial oscillatory forces. With this two-step method, the magnetic moment of the beads remains constant during measurements; the applied force can therefore be computed and varies linearly with the driving signal. No feedback control is required to produce well-defined force oscillations over a wide bandwidth. The design of the coils was optimized to obtain high magnetic fields (280 mT) and gradients (2 T/m) with high homogeneity (5% variation) within the sample. The magnetic tweezers were implemented in an inverted optical microscope with a videomicroscopy-based multiparticle tracking system. The apparatus was validated with 4.5 µm magnetite beads, obtaining forces up to ~2 pN with subnanometer resolution. Applications of the device include microrheology of biopolymers and cell cytoplasm, molecular mechanics, and mechanotransduction in living cells.
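The force calculation rests on the bead's moment m being fixed by the magnetizing pulse, so the axial force is simply m times the field gradient and hence linear in the driving signal. As a consistency check with the figures quoted above (the bead moment of 1e-12 A m^2 is inferred for the sketch, not stated in the abstract):

```latex
\begin{equation}
  F_z = m \,\frac{\partial B_z}{\partial z}
      \approx \left(10^{-12}\,\mathrm{A\,m^2}\right)
              \times \left(2\,\mathrm{T/m}\right)
      = 2\,\mathrm{pN}.
\end{equation}
```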
Abstract:
The need to advance our knowledge of the subatomic world has stimulated the development of new particle colliders. However, the objectives of the next generation of colliders set unprecedented challenges for detector performance. The purpose of this contribution is to present a bidimensional array based on avalanche photodiodes operated in Geiger mode for tracking high-energy particles in future linear colliders. The array can operate in a gated mode to reduce the probability of detecting noise counts that interfere with real events, and low reverse overvoltages are used to lessen the dark count rate. Experimental results demonstrate that the prototype, fabricated in a standard HV-CMOS process, achieves increased efficiency and avoids sensor blindness when the proposed techniques are applied.
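The benefit of gating can be quantified with Poisson statistics: with dark count rate R, the probability of a noise count inside an active window of length t is 1 - exp(-Rt). The numbers in the sketch below are illustrative, not measurements from the prototype.

```python
import math

R = 20e3                               # dark count rate, counts/s (assumed)
for t in (1e-3, 100e-9):               # free-running 1 ms vs. a 100 ns gate
    p = 1 - math.exp(-R * t)           # Poisson probability of >=1 dark count
    print(f"window {t:.0e} s -> P(dark count) = {p:.2e}")
# A short gate synchronized with the beam crossing cuts the noise
# probability by orders of magnitude while still catching real events.
```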
Abstract:
Leakage detection is an important issue in many chemical sensing applications. Threshold-based leakage detection suffers from important drawbacks when sensors exhibit serious drift or are affected by cross-sensitivities. Here we present an adaptive method based on Dynamic Principal Component Analysis (DPCA) that models the relationships between the sensors in the array. In normal conditions, a certain variance distribution characterizes the sensor signals; in the presence of a new source of variance, however, the PCA decomposition changes drastically. To prevent the influence of sensor drift, the model is adaptive and is calculated recursively with minimal computational effort. The behaviour of this technique is studied with synthetic signals and with real signals arising from oil vapour leakages in an air compressor. The results clearly demonstrate the efficiency of the proposed method.
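A structural sketch of such an adaptive DPCA monitor in Python, with invented data, a guessed forgetting factor and an arbitrary detection threshold (the actual recursive update and thresholding will differ in detail): lag the sensor matrix, track the covariance with exponential forgetting, and flag a new variance source via the residual Q statistic.

```python
import numpy as np

def lagged(X, lags=2):
    """Stack each sample with its `lags` predecessors (DPCA data matrix)."""
    return np.hstack([X[lags - k : len(X) - k] for k in range(lags + 1)])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # 4 sensors, normal operation
X[300:] += np.array([3, 0, 3, 0])        # new variance source: the "leak"

D = lagged(X)
lam = 0.99                               # forgetting factor (assumed)
C = np.cov(D[:100].T)                    # initial covariance model
for t in range(100, len(D)):
    w, V = np.linalg.eigh(C)
    P = V[:, -3:]                        # retained principal directions
    r = D[t] - P @ (P.T @ D[t])          # residual outside the PCA model
    Q = r @ r                            # SPE / Q statistic
    if Q > 30:                           # threshold chosen for the demo
        print("possible leak at sample", t)
        break
    C = lam * C + (1 - lam) * np.outer(D[t], D[t])  # recursive update
```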