942 results for Rough set theory
Abstract:
In recent years there has been explosive growth in the development of adaptive, data-driven methods. One efficient data-driven approach is based on statistical learning theory (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical foundation. When applying SRM we try not only to reduce the training error (to fit the available data with a model), but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present, statistical learning theory is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust, nonlinear data models with excellent generalisation ability, which is very important both for monitoring and for forecasting. SVM are particularly effective when the input space is high dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only the support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
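The SRM trade-off described above (fit the data, but penalize model complexity) and the role of support vectors can be illustrated with a minimal linear SVM trained by Pegasos-style stochastic subgradient descent. This is a generic sketch on synthetic data, not the geostatistical toolkit referenced in the abstract; all names and parameters here are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic subgradient descent for a linear SVM.

    Minimizes lam/2 * ||w||^2 + mean hinge loss, i.e. it trades model
    complexity (the regularizer) against training error, in the spirit
    of SRM. Labels y must be in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decreasing step size
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                   # inside the margin: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                            # outside: only the regularizer acts
                w = (1 - eta * lam) * w
    return w, b

# Two well-separated synthetic clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())

# The decision boundary is supported only by points with margin <= 1:
sv = np.where(y * (X @ w + b) <= 1.0)[0]
print("support-vector count:", len(sv), "of", len(X))
```

Only the points indexed by `sv` would change the trained boundary if perturbed, which is what makes sampling optimisation and redundancy quantification possible.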
Abstract:
In earlier work, the present authors have shown that hardness profiles are less dependent on the level of calculation than energy profiles for potential energy surfaces (PESs) with pathological behavior. In contrast to energy profiles, hardness profiles always show the correct number of stationary points. This characteristic has been used to indicate the existence of spurious stationary points on PESs. In the present work, we apply this methodology to the hydrogen fluoride dimer, a classically difficult case for density functional theory methods.
Abstract:
A comparative static study of set-solutions for cooperative TU games is carried out. The analysis focuses on the compatibility between two classical and reasonable properties introduced by Young (1985) in the context of single-valued solutions, namely core selection and coalitional monotonicity. As the main result, it is shown that coalitional monotonicity is incompatible not only with the core-selection property but also with the bargaining-selection property. This new impossibility result reinforces the tradeoff between these kinds of interesting and intuitive economic properties. Positive compatibility results between desirable economic properties are obtained by replacing the core-selection requirement with the core-extension property.
Abstract:
This paper provides an axiomatic framework to compare the D-core (the set of undominated imputations) and the core of a cooperative game with transferable utility. Theorem 1 states that the D-core is the only solution satisfying projection consistency, reasonableness (from above), (*)-antimonotonicity, and modularity. Theorem 2 characterizes the core by replacing (*)-antimonotonicity with antimonotonicity. Moreover, these axioms also characterize the core on the domains of convex games, totally balanced games, balanced games, and superadditive games.
Abstract:
We extend the relativistic mean field theory model of Sugahara and Toki by adding new couplings suggested by modern effective field theories. An improved set of parameters is developed with the goal of testing the ability of models based on effective field theory to describe the properties of finite nuclei and, at the same time, to be consistent with the trends of Dirac-Brueckner-Hartree-Fock calculations at densities away from the saturation region. We compare our calculations with other relativistic nuclear force parameters for various nuclear phenomena.
Abstract:
PURPOSE: The current study tested the applicability of Jessor's problem behavior theory (PBT) in national probability samples from Georgia and Switzerland. Comparisons focused on (1) the applicability of the problem behavior syndrome (PBS) in both developmental contexts, and (2) the applicability of a set of theory-driven risk and protective factors in the prediction of problem behaviors. METHODS: School-based questionnaire data were collected from n = 18,239 adolescents in Georgia (n = 9499) and Switzerland (n = 8740) following the same protocol. Participants rated five measures of problem behaviors (alcohol and drug use, problems because of alcohol and drug use, and deviance), three risk factors (future uncertainty, depression, and stress), and three protective factors (family, peer, and school attachment). Final study samples included n = 9043 Georgian youth (mean age = 15.57; 58.8% female) and n = 8348 Swiss youth (mean age = 17.95; 48.5% female). Data analyses were completed using structural equation modeling, path analyses, and post hoc z-tests for comparisons of regression coefficients. RESULTS: Findings indicated that the PBS replicated in both samples, and that theory-driven risk and protective factors accounted for 13% and 10% of the variance in the PBS in the Georgian and Swiss samples, respectively, net of the effects of demographic variables. Follow-up z-tests provided evidence of differences in the magnitude, but not the direction, of five of six individual paths by country. CONCLUSION: PBT and the PBS find empirical support in these Eurasian and Western European samples; thus, Jessor's theory holds value and promise for understanding the etiology of adolescent problem behaviors outside the United States.
Abstract:
In this contribution we show that a suitably defined nonequilibrium entropy of an N-body isolated system is not, in general, a constant of the motion, and that its variation is bounded, the bounds being determined by the thermodynamic entropy, i.e., the equilibrium entropy. We define the nonequilibrium entropy as a convex functional of the set of n-particle reduced distribution functions (n ≤ N), generalizing the Gibbs fine-grained entropy formula. Additionally, as a consequence of our microscopic analysis, we find that this nonequilibrium entropy behaves as a free entropic oscillator. In the approach to the equilibrium regime, we find relaxation equations of the Fokker-Planck type, in particular for the one-particle distribution function.
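For reference, the Gibbs fine-grained entropy that the nonequilibrium functional above generalizes can be written (a standard textbook formula, not quoted from the paper itself) as:

```latex
S_G(t) = -k_B \int f_N(\mathbf{r}^N, \mathbf{p}^N, t)\,
         \ln f_N(\mathbf{r}^N, \mathbf{p}^N, t)\, d\Gamma
```

where f_N is the full N-particle distribution function and dΓ the phase-space volume element; the entropy described in the abstract replaces the dependence on f_N alone with a convex functional of the reduced distributions f_n, n ≤ N.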
Abstract:
In the last 50 years, there have been approximately 40 events with the characteristics of a financial crisis. The most severe crisis was in 1929, when the financial markets plummeted and US gross domestic product declined by more than 30 percent. A few years ago, a new crisis developed in the United States, but it immediately caused consequences and effects in the rest of the world. This new economic and financial crisis has increased the interest and motivation of the academic community, professors and researchers, to understand the causes and effects of the crisis and to learn from it. This is one of the main reasons for the compilation of this book, which began with a meeting of a group of IAFI researchers from the University of Barcelona, where researchers from Mexico and Spain explain the causes and consequences of the 2007 crisis. For that reason, we believe this set of chapters on methodologies, applications and theories conveniently explains the characteristics and events of past and future financial crises. This book consists of three main sections: the first called "State of the Art and current situation", the second named "Econometric applications to estimate crisis time periods", and the third "Solutions to diminish the effects of the crisis". The first section, which has two chapters, surveys the current research literature on financial crises. Chapter 1 describes and analyzes the models that have historically been used to explain financial crises and proposes alternative methodologies such as Fuzzy Cognitive Maps. Chapter 2 explains the characteristics and details of the 2007 crisis from the US perspective, compares it to the 1929 crisis, and presents some of its effects in Mexico and Latin America. The second section presents two econometric applications to estimate possible crisis periods.
Chapter 3 studies three Latin American countries, Argentina, Brazil and Peru, in the 1994 crisis and estimates multifractal characteristics to identify financial and economic distress. Chapter 4 explains the crisis situations in Argentina (2001), Mexico (1994) and the recent one in the United States (2007), and their effects in other countries, through a financial time-series methodology related to the stock market. The last section offers alternatives for mitigating the effects of a crisis. Chapter 5 explains the effects of financial-system regulation and certain globalization standards on financial stability. Chapter 6 studies the benefits of investor activism as a way to protect personal and national wealth against financial crisis risks.
Abstract:
We present a model in which particles (or individuals of a biological population) disperse with a rest time between consecutive motions (or migrations) that may take several possible values from a discrete set. Particles (or individuals) may also react (or reproduce). We derive a new equation for the effective rest time T˜ of the random walk. Application to the neolithic transition in Europe makes it possible to derive more realistic theoretical values for its wavefront speed than those following from the single-delay framework presented previously [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)]. The new results are consistent with the archaeological observations of this important historical process.
Abstract:
One purpose of the hadron collider under construction at the CERN research centre is to prove the existence of the Higgs boson. The discovery of the Higgs boson would unify the current theory of particle physics and explain how particles acquire their mass. The collider's CMS experiment is designed particularly for the detection of muons. This work concerns the link system of the RPC detector type of the CMS experiment, whose purpose is to process the muon-induced signals coming from the detector and to send the data on collision events deemed important to storage for analysis. In this work, a test environment was implemented for the control and link boards of the link system, with which the mutual compatibility and correct functioning of the different parts of the system can be verified. The first part of the thesis presents the different parts of the detector link system and their roles. The final part goes through the various test methods and analyses the results they produced.
Abstract:
Molecular docking is a computational approach for predicting the most probable position of ligands in the binding sites of macromolecules and constitutes the cornerstone of structure-based computer-aided drug design. Here, we present a new algorithm called Attracting Cavities that allows molecular docking to be performed by simple energy minimizations only. The approach consists in transiently replacing the rough potential energy hypersurface of the protein by a smooth attracting potential driving the ligands into protein cavities. The actual protein energy landscape is reintroduced in a second step to refine the ligand position. The scoring function of Attracting Cavities is based on the CHARMM force field and the FACTS solvation model. The approach was tested on the 85 experimental ligand-protein structures included in the Astex diverse set and achieved a success rate of 80% in reproducing the experimental binding mode starting from a completely randomized ligand conformer. The algorithm thus compares favorably with current state-of-the-art docking programs. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
Abstract:
Understanding and quantifying seismic energy dissipation, which manifests itself in terms of velocity dispersion and attenuation, in fluid-saturated porous rocks is of considerable interest, since it offers the perspective of extracting information with regard to the elastic and hydraulic rock properties. There is increasing evidence to suggest that wave-induced fluid flow, or simply WIFF, is the dominant underlying physical mechanism governing these phenomena throughout the seismic, sonic, and ultrasonic frequency ranges. This mechanism, which can prevail at the microscopic, mesoscopic, and macroscopic scale ranges, operates through viscous energy dissipation in response to fluid pressure gradients and inertial effects induced by the passing wavefield. In the first part of this thesis, we present an analysis of broad-band multi-frequency sonic log data from a borehole penetrating water-saturated unconsolidated glacio-fluvial sediments. An inherent complication arising in the interpretation of the observed P-wave attenuation and velocity dispersion is, however, that the relative importance of WIFF at the various scales is unknown and difficult to unravel. An important generic result of our work is that the levels of attenuation and velocity dispersion due to the presence of mesoscopic heterogeneities in water-saturated unconsolidated clastic sediments are expected to be largely negligible. Conversely, WIFF at the macroscopic scale allows for explaining most of the considered data while refinements provided by including WIFF at the microscopic scale in the analysis are locally meaningful. Using a Monte-Carlo-type inversion approach, we compare the capability of the different models describing WIFF at the macroscopic and microscopic scales with regard to their ability to constrain the dry frame elastic moduli and the permeability as well as their local probability distribution. 
In the second part of this thesis, we explore the issue of determining the size of a representative elementary volume (REV) arising in the numerical upscaling procedures of effective seismic velocity dispersion and attenuation of heterogeneous media. To this end, we focus on a set of idealized synthetic rock samples characterized by the presence of layers, fractures or patchy saturation in the mesoscopic scale range. These scenarios are highly pertinent because they tend to be associated with very high levels of velocity dispersion and attenuation caused by WIFF in the mesoscopic scale range. The problem of determining the REV size for generic heterogeneous rocks is extremely complex and entirely unexplored in the given context. In this pilot study, we have therefore focused on periodic media, which assures the inherent self-similarity of the considered samples regardless of their size and thus simplifies the problem to a systematic analysis of the dependence of the REV size on the applied boundary conditions in the numerical simulations. Our results demonstrate that boundary condition effects are absent for layered media and negligible in the presence of patchy saturation, thus resulting in minimum REV sizes. Conversely, strong boundary condition effects arise in the presence of a periodic distribution of finite-length fractures, thus leading to large REV sizes. In the third part of the thesis, we propose a novel effective poroelastic model for periodic media characterized by mesoscopic layering, which accounts for WIFF at both the macroscopic and mesoscopic scales as well as for the anisotropy associated with the layering. Correspondingly, this model correctly predicts the existence of the fast and slow P-waves as well as quasi and pure S-waves for any direction of wave propagation as long as the corresponding wavelengths are much larger than the layer thicknesses.
The primary motivation for this work is that, for formations of intermediate to high permeability, such as, for example, unconsolidated sediments, clean sandstones, or fractured rocks, these two WIFF mechanisms may prevail at similar frequencies. This scenario, which can be expected to be rather common, cannot be accounted for by existing models for layered porous media. Comparisons of analytical solutions for the P- and S-wave phase velocities and inverse quality factors for wave propagation perpendicular to the layering with those obtained from numerical simulations based on a 1D finite-element solution of the poroelastic equations of motion show very good agreement as long as the assumption of long wavelengths remains valid. A limitation of the proposed model is its inability to account for inertial effects in mesoscopic WIFF when both WIFF mechanisms prevail at similar frequencies. Our results do, however, also indicate that the associated error is likely to be relatively small, as, even at frequencies at which both inertial and scattering effects are expected to be at play, the proposed model provides a solution that is remarkably close to its numerical benchmark.
Abstract:
One main assumption in the theory of rough sets applied to information tables is that elements that exhibit the same information are indiscernible (similar) and form blocks that can be understood as elementary granules of knowledge about the universe. We propose a variant of this concept, defining a measure of similarity between the elements of the universe so that two objects can be considered indiscernible even though they do not share all attribute values, because the knowledge is partial or uncertain. The set of similarities defines the matrix of a fuzzy relation satisfying reflexivity and symmetry but not transitivity, so a partition of the universe is not attained. This problem can be solved by computing its transitive closure, which ensures a partition for each level in the unit interval [0,1]. This procedure generalizes the theory of rough sets depending on the minimum level of similarity accepted. This new point of view increases the rough character of the data because it enlarges the sets of indiscernible objects. Finally, we apply our results to a synthetic (non-real) application in order to highlight the differences and improvements between this methodology and the classical one.
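The closure step described above (repairing a reflexive, symmetric but non-transitive fuzzy relation, then cutting it at a similarity level to recover a partition) can be sketched as follows. The similarity matrix here is hypothetical illustrative data, and the max-min closure is the standard construction, not necessarily the exact variant used in the paper.

```python
import numpy as np

def transitive_closure(R, max_iter=100):
    """Max-min transitive closure of a reflexive, symmetric fuzzy relation R.

    Iterates R <- max(R, R o R), where o is the max-min composition,
    until it stabilizes; the result is a fuzzy similarity relation whose
    alpha-cuts are crisp equivalence relations.
    """
    R = np.asarray(R, dtype=float)
    for _ in range(max_iter):
        # max-min composition: (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        new = np.maximum(R, comp)
        if np.allclose(new, R):
            break
        R = new
    return R

def alpha_cut_partition(R, alpha):
    """Crisp partition (block labels) induced by the alpha-cut of a
    fuzzy similarity relation: objects i, j share a block iff R[i, j] >= alpha."""
    n = len(R)
    crisp = R >= alpha
    labels = -np.ones(n, dtype=int)
    block = 0
    for i in range(n):
        if labels[i] == -1:
            labels[np.where(crisp[i])[0]] = block
            block += 1
    return labels

# Reflexive and symmetric but NOT min-transitive (hypothetical data):
# min(R[0,1], R[1,2]) = 0.5 > R[0,2] = 0.4.
R = np.array([[1.0, 0.8, 0.4],
              [0.8, 1.0, 0.5],
              [0.4, 0.5, 1.0]])
T = transitive_closure(R)
print(T)                                   # T[0, 2] is raised to 0.5
print(alpha_cut_partition(T, alpha=0.6))   # objects 0 and 1 merge; 2 stays alone
```

Lowering alpha coarsens the partition (more objects become indiscernible), which is exactly the "increased rough character" the abstract refers to: the rough-set approximations are then taken over these larger granules.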