974 results for MULTIRESOLUTION APPROXIMATION
Abstract:
We present a combined shape and mechanical anisotropy evolution model for a two-phase inclusion-bearing rock subject to large deformation. A single elliptical inclusion embedded in a homogeneous but anisotropic matrix is used to represent a simplified shape evolution enforced on all inclusions. The mechanical anisotropy develops due to the alignment of elongated inclusions. The effective anisotropy is quantified using the differential effective medium (DEM) approach. The model can be run for any deformation path and an arbitrary viscosity ratio between the inclusion and host phase. We focus on the case of simple shear and weak inclusions. The shape evolution of the representative inclusion is largely insensitive to the anisotropy development and to parameter variations in the studied range. An initial hardening stage is observed up to a shear strain of gamma = 1 irrespective of the inclusion fraction. The hardening is followed by a softening stage related to the developing anisotropy and its progressive rotation toward the shear direction. The traction needed to maintain a constant shear rate exhibits a fivefold drop at gamma = 5 in the limiting case of an inviscid inclusion. Numerical simulations show that our analytical model provides a good approximation to the actual evolution of a two-phase inclusion-host composite. However, the inclusions develop complex sigmoidal shapes resulting in the formation of an S-C fabric. We attribute the observed drop in the effective normal viscosity to this structural development. We study the localization potential in a rock column bearing a varying fraction of inclusions. In the inviscid inclusion case, a strain jump from gamma = 3 to gamma = 100 is observed for a change of the inclusion fraction from 20% to 33%.
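The differential effective medium (DEM) step can be sketched numerically. The snippet below is a minimal, isotropic illustration of the DEM integration scheme only, assuming spherical inclusions and Taylor's dilute estimate for the incremental step; the paper's actual formulation is anisotropic and tracks the evolving elliptical inclusion shape, which is what produces the softening described above. All names and parameter values are illustrative.

```python
import numpy as np

def dem_integrate(eta_host, eta_incl, phi_max, dilute_increment, n_steps=1000):
    """Generic differential-effective-medium (DEM) integration.

    The inclusion phase is added in infinitesimal increments d(phi); after each
    increment the current composite is treated as the new host.  The physics
    enters only through `dilute_increment(eta_eff, eta_incl)`, the dilute-limit
    derivative d(eta)/d(phi) evaluated for the current effective medium.  In
    the paper this step is anisotropic and shape-dependent (elliptical,
    aligning inclusions); here it is a user-supplied callable.
    """
    eta = eta_host
    phis = np.linspace(0.0, phi_max, n_steps + 1)
    dphi = phis[1] - phis[0]
    etas = [eta]
    for phi in phis[:-1]:
        # 1/(1 - phi) corrects for the fact that each increment replaces a bit
        # of the existing composite rather than pure host material.
        eta += dilute_increment(eta, eta_incl) / (1.0 - phi) * dphi
        etas.append(eta)
    return phis, np.array(etas)

# Illustrative dilute step: Taylor's estimate for *spherical* viscous
# inclusions, d(eta)/d(phi) = eta * (5*lam + 2) / (2*(lam + 1)), lam = eta_i/eta.
# (The spherical case does not soften; the softening described in the abstract
# comes from inclusion elongation and alignment, which this toy step ignores.)
taylor = lambda eta, eta_i: eta * (5 * eta_i / eta + 2) / (2 * (eta_i / eta + 1))

phis, etas = dem_integrate(eta_host=1.0, eta_incl=1e-3, phi_max=0.33,
                           dilute_increment=taylor)
print(f"eta_eff / eta_host at 33% inclusions: {etas[-1]:.3f}")
```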
Abstract:
The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. This algorithm, in consequence, shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux reconstruction step, which delivers a fine-scale mass conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporation of appropriate coarse-scale mass-balance equations.
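To make the Schur-complement view concrete, here is a minimal dense-matrix sketch (not the MSFV operator itself): the unknowns are split into hypothetical "fine" and "coarse" sets, the fine block is eliminated exactly, and the coarse system is solved before local back-substitution. The MSFV method replaces the exact elimination by cheap, localized approximations and adds the flux-reconstruction step described above.

```python
import numpy as np

def schur_solve(A, b, coarse_idx):
    """Solve A x = b by eliminating the 'fine' unknowns onto the 'coarse' ones.

    With the permuted block system
        [A_ff  A_fc] [x_f]   [b_f]
        [A_cf  A_cc] [x_c] = [b_c],
    the coarse unknowns satisfy the Schur-complement system
        (A_cc - A_cf A_ff^{-1} A_fc) x_c = b_c - A_cf A_ff^{-1} b_f,
    and the fine unknowns follow by local back-substitution.
    """
    n = A.shape[0]
    fine_idx = np.setdiff1d(np.arange(n), coarse_idx)
    A_ff = A[np.ix_(fine_idx, fine_idx)]
    A_fc = A[np.ix_(fine_idx, coarse_idx)]
    A_cf = A[np.ix_(coarse_idx, fine_idx)]
    A_cc = A[np.ix_(coarse_idx, coarse_idx)]
    b_f, b_c = b[fine_idx], b[coarse_idx]

    A_ff_inv_Afc = np.linalg.solve(A_ff, A_fc)
    A_ff_inv_bf = np.linalg.solve(A_ff, b_f)
    S = A_cc - A_cf @ A_ff_inv_Afc                 # Schur complement
    x_c = np.linalg.solve(S, b_c - A_cf @ A_ff_inv_bf)
    x_f = A_ff_inv_bf - A_ff_inv_Afc @ x_c         # local back-substitution

    x = np.empty(n)
    x[coarse_idx], x[fine_idx] = x_c, x_f
    return x

# Toy 1D Poisson-type system with every 4th unknown taken as 'coarse'.
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = schur_solve(A, b, coarse_idx=np.arange(0, n, 4))
print(np.allclose(x, np.linalg.solve(A, b)))       # True: exact elimination
```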
Abstract:
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all of these applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
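The coarse-correction-plus-line-search idea can be illustrated on a toy problem. The sketch below is not the MR/OPT or FMG/OPT implementation; it minimizes a simple quadratic grid energy (a stand-in for a variational optical-flow functional), approximately solves the restricted problem on a coarser grid, prolongs the resulting correction, and scales it by a one-dimensional line search, mirroring the FMG/OPT step described above. All functions and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy 1D grid energy (quadratic data term + first-order smoothness term).
def energy(u, data, lam=0.5):
    return 0.5 * np.sum((u - data) ** 2) + 0.5 * lam * np.sum(np.diff(u) ** 2)

def gradient(u, data, lam=0.5):
    d = np.diff(u)
    return (u - data) + lam * (np.r_[0.0, d] - np.r_[d, 0.0])

def restrict(v):                        # injection onto the coarse grid
    return v[::2].copy()

def prolong(v, n_fine):                 # linear interpolation to the fine grid
    return np.interp(np.linspace(0, 1, n_fine), np.linspace(0, 1, len(v)), v)

def coarse_corrected_step(u, data, lam=0.5, n_coarse_iters=300, step=0.2):
    """Approximately minimize on the coarse grid, prolong the correction, and
    use it as a search direction whose length is fixed by a line search."""
    u_c0 = restrict(u)
    u_c, data_c = u_c0.copy(), restrict(data)
    for _ in range(n_coarse_iters):      # crude coarse-level solver
        u_c -= step * gradient(u_c, data_c, lam)
    direction = prolong(u_c - u_c0, len(u))            # coarse-grid correction
    line = minimize_scalar(lambda a: energy(u + a * direction, data, lam),
                           bounds=(0.0, 2.0), method="bounded")
    return u + line.x * direction

rng = np.random.default_rng(0)
data = np.sin(np.linspace(0, 3 * np.pi, 129)) + 0.2 * rng.standard_normal(129)
u = np.zeros_like(data)
for _ in range(5):
    u = coarse_corrected_step(u, data)
print(f"energy after 5 coarse-corrected steps: {energy(u, data):.3f}")
```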
Abstract:
This study aimed to use a plantar pressure insole to estimate the three-dimensional ground reaction force (GRF) as well as the frictional torque (T(F)) during walking. Eleven subjects, six healthy and five patients with ankle disease, participated in the study, wearing pressure insoles during several walking trials on a force-plate. The plantar pressure distribution was analyzed, and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and T(F) estimation. Both linear and non-linear approximators were used to estimate the GRF and T(F), based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force-plate were calculated. Our results showed better performance for the non-linear approximation, especially when the STP was included as an input. The smallest errors were observed for the vertical force (4%) and the anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and the frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when data from the same patient were used for learning, the results improved and, in general, only slight differences from the healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained nonlinear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force-plate.
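As a rough illustration of the estimation pipeline described above (10 pressure principal components plus STP feeding a nonlinear approximator), here is a minimal sketch with synthetic data; the variable names, network size and data shapes are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: per stance-phase sample, 24 regional insole
# pressures plus the stance-time percentage (STP), and force-plate targets
# (Fx, Fy, Fz, T_F) used as regression labels.
rng = np.random.default_rng(0)
n_samples = 2000
pressures = rng.random((n_samples, 24))
stp = np.linspace(0, 100, n_samples).reshape(-1, 1)
targets = rng.random((n_samples, 4))          # stand-in for measured GRF + torque

# 10 principal components of the regional pressures, concatenated with STP,
# feed a nonlinear (MLP) approximator.
pca = PCA(n_components=10).fit(pressures)
features = np.hstack([pca.transform(pressures), stp])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(features, targets)

# Estimation for new insole data (here simply the training set again).
estimated = model.predict(features)
rmse = np.sqrt(np.mean((estimated - targets) ** 2, axis=0))
print("per-component RMS error:", rmse.round(3))
```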
Abstract:
The study presented here aims to understand the leisure reality of people aged 50 to 70 in the municipalities of Malla (Catalonia, Spain) and San Juan la Laguna (Sololá, Guatemala) from a humanist perspective, in terms of both conception and practice, and to see how much influence the characteristics of the society in which that leisure unfolds exert on it. To do so, an initial approximation to the concept of leisure was first carried out, followed by a focused review of humanist leisure. On that basis, the study was conducted with a sample of ten people from the municipality of Malla and ten members of San Juan la Laguna, all between 50 and 70 years old and with different economic conditions and lifestyles. The research and analysis of humanist leisure in the contexts of Malla and San Juan la Laguna used a qualitative methodology based on interviews, designed within the framework of Grounded Theory (Glaser and Strauss, 1967). The project also includes an ethnographic component. The results obtained show a significant presence of humanist leisure in the contexts analyzed, although in the case of San Juan la Laguna it remains an element under construction.
Abstract:
This paper proposes a very fast method for blindly initializing a nonlinear mapping which transforms a sum of random variables. The method provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The method can be used successfully for initializing the nonlinearity in post-nonlinear mixtures or in Wiener system inversion, improving algorithm speed and convergence.
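The abstract does not spell out the initialization procedure; one common fast, blind initialization in this setting relies on the central limit theorem: since the unobserved input of the nonlinearity is a sum of random variables, it is close to Gaussian, so mapping the empirical CDF of the observed signal through the inverse Gaussian CDF approximates the inverse nonlinearity. The sketch below implements that Gaussianization idea under this assumption and should not be read as the paper's exact method.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x, eps=1e-6):
    """Blind estimate of the inverse nonlinearity by Gaussianization.

    If the unobserved signal s is a sum of many independent variables, it is
    approximately Gaussian.  Mapping the empirical CDF of the observation
    x = f(s) through the inverse Gaussian CDF therefore approximates f^{-1}
    up to an affine transformation, even when Gaussianity holds only roughly.
    """
    u = rankdata(x) / (len(x) + 1)          # empirical CDF values in (0, 1)
    u = np.clip(u, eps, 1 - eps)
    return norm.ppf(u)

# Toy check: recover a linear image of s from a strongly distorted observation.
rng = np.random.default_rng(0)
s = rng.standard_normal((10000, 8)).sum(axis=1)   # "sum of random variables"
x = np.tanh(0.8 * s) + 0.3 * s**3                 # unknown invertible nonlinearity
s_hat = gaussianize(x)
corr = np.corrcoef(s, s_hat)[0, 1]
print(f"correlation between s and its blind estimate: {corr:.3f}")
```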
Abstract:
Despite the considerable evidence showing that dispersal between habitat patches is often asymmetric, most of the metapopulation models assume symmetric dispersal. In this paper, we develop a Monte Carlo simulation model to quantify the effect of asymmetric dispersal on metapopulation persistence. Our results suggest that metapopulation extinctions are more likely when dispersal is asymmetric. Metapopulation viability in systems with symmetric dispersal mirrors results from a mean field approximation, where the system persists if the expected per patch colonization probability exceeds the expected per patch local extinction rate. For asymmetric cases, the mean field approximation underestimates the number of patches necessary for maintaining population persistence. If we use a model assuming symmetric dispersal when dispersal is actually asymmetric, the estimation of metapopulation persistence is wrong in more than 50% of the cases. Metapopulation viability depends on patch connectivity in symmetric systems, whereas in the asymmetric case the number of patches is more important. These results have important implications for managing spatially structured populations, when asymmetric dispersal may occur. Future metapopulation models should account for asymmetric dispersal, while empirical work is needed to quantify the patterns and the consequences of asymmetric dispersal in natural metapopulations.
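A stochastic patch-occupancy simulation in the spirit described above can be sketched as follows; the colonization/extinction parameterization and all values are illustrative assumptions, not the authors' model. Asymmetric dispersal is encoded simply by a non-symmetric colonization matrix.

```python
import numpy as np

def persistence_probability(col, p_ext, n_steps=500, n_reps=200, seed=0):
    """Monte Carlo patch-occupancy model.

    col[j, i] is the per-step probability that an occupied patch j colonizes an
    empty patch i; an asymmetric matrix (col[j, i] != col[i, j]) encodes
    asymmetric dispersal.  Occupied patches go locally extinct with probability
    p_ext.  Returns the fraction of replicates still occupied after n_steps.
    """
    rng = np.random.default_rng(seed)
    n = col.shape[0]
    extant = 0
    for _ in range(n_reps):
        occ = np.ones(n, dtype=bool)                    # start fully occupied
        for _ in range(n_steps):
            # probability each empty patch escapes colonization by all sources
            no_col = np.prod(np.where(occ[:, None], 1.0 - col, 1.0), axis=0)
            colonized = (~occ) & (rng.random(n) > no_col)
            survived = occ & (rng.random(n) > p_ext)
            occ = colonized | survived
            if not occ.any():
                break
        extant += occ.any()
    return extant / n_reps

# Symmetric vs. asymmetric dispersal with the same total colonization strength.
n = 10
sym = np.full((n, n), 0.02)
np.fill_diagonal(sym, 0.0)
asym = np.triu(np.full((n, n), 0.04), k=1)              # dispersal only "downstream"
print("symmetric :", persistence_probability(sym, p_ext=0.15))
print("asymmetric:", persistence_probability(asym, p_ext=0.15))
```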
Abstract:
Communication routes between human settlements can be simulated by computer, and many studies have been published using several different approaches to the problem. The aim of the present work has been to build a new model capable of simulating a communication route, with the particularity that in this case an approximate idea of its course was already available from the work of other authors. The route in question is the Via Augusta, in the stretch linking the Coll de Panissars and the city of Girona.
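The abstract does not state which computational model is used; a common way to simulate a route between two settlements is a least-cost path over a terrain-derived cost surface (for example, slope-weighted), computed with Dijkstra's algorithm. The sketch below illustrates that generic approach on a toy raster; it is an assumption for illustration, not the model developed in this work.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a cost raster (4-connected grid).

    cost[r, c] is the per-cell traversal cost (e.g., derived from slope);
    returns the path from start to goal as a list of (row, col) cells.
    """
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # reconstruct the path by walking predecessors back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy terrain: a high-cost ridge that the simulated route should skirt.
terrain = np.ones((40, 60))
terrain[10:30, 25:30] = 20.0
route = least_cost_path(terrain, start=(35, 2), goal=(5, 55))
print(f"route length: {len(route)} cells, total cost: "
      f"{sum(terrain[r, c] for r, c in route):.1f}")
```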
Abstract:
By means of computer simulations and solution of the equations of the mode coupling theory (MCT), we investigate the role of the intramolecular barriers on several dynamic aspects of nonentangled polymers. The investigated dynamic range extends from the caging regime characteristic of glass-formers to the relaxation of the chain Rouse modes. We review our recent work on this question, provide new results, and critically discuss the limitations of the theory. Solutions of the MCT for the structural relaxation reproduce qualitative trends of the simulations for weak and moderate barriers. However, a progressive discrepancy is revealed as the limit of stiff chains is approached. This disagreement does not seem to be related to dynamic heterogeneities, which indeed are not enhanced by increasing the barrier strength. Nor is it connected with a breakdown of the convolution approximation for three-point static correlations, which retains its validity for stiff chains. These findings suggest the need for an improvement of the MCT equations for polymer melts. Concerning the relaxation of the chain degrees of freedom, MCT provides a microscopic basis for time scales from chain reorientation down to the caging regime. It rationalizes, from first principles, the observed deviations from the Rouse model on increasing the barrier strength. These include anomalous scaling of relaxation times, long-time plateaux, and a nonmonotonic wavelength dependence of the mode correlators.
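For reference, the convolution approximation mentioned above factorizes the three-point static structure factor into two-point ones (equivalently, it neglects the triplet direct correlation function). In a standard convention, and for wavevectors summing to zero, it reads (the paper's specific conventions may differ):

```latex
S^{(3)}(\mathbf{q}_1,\mathbf{q}_2,\mathbf{q}_3)
  \;=\; \frac{1}{N}\,\bigl\langle \rho_{\mathbf{q}_1}\,\rho_{\mathbf{q}_2}\,\rho_{\mathbf{q}_3}\bigr\rangle
  \;\approx\; S(q_1)\,S(q_2)\,S(q_3),
\qquad \mathbf{q}_1+\mathbf{q}_2+\mathbf{q}_3=\mathbf{0}.
```

The point made in the abstract is that this factorization remains accurate even for stiff chains, so it is not the source of the discrepancy between MCT and the simulations.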
Abstract:
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
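A minimal sketch of an experience-weighted attraction (EWA) update of the kind referred to above is given below, in a Camerer-Ho-style parameterization; the exact variant, parameter names and values used in the paper are not specified here and are assumptions for illustration.

```python
import numpy as np

def ewa_update(A, N, payoffs, chosen, phi=0.9, rho=0.9, delta=0.5):
    """One experience-weighted attraction (EWA) update (Camerer-Ho style).

    A        : current attractions, one entry per action
    N        : current experience weight
    payoffs  : payoff each action would have earned this round, given the
               opponent's realized play
    chosen   : index of the action actually played
    With delta = 0 only the chosen action is reinforced (reinforcement-style
    learning); with delta = 1 forgone payoffs are weighted too (belief-style).
    """
    N_new = rho * N + 1.0
    weight = delta + (1.0 - delta) * (np.arange(len(A)) == chosen)
    A_new = (phi * N * A + weight * payoffs) / N_new
    return A_new, N_new

def choice_probs(A, lam=2.0):
    """Logit (softmax) choice rule over attractions."""
    z = np.exp(lam * (A - A.max()))
    return z / z.sum()

# Toy run for a 2-action learner facing a fluctuating stage game.
rng = np.random.default_rng(0)
A, N = np.zeros(2), 1.0
for t in range(50):
    chosen = rng.choice(2, p=choice_probs(A))
    payoffs = rng.random(2)           # stand-in for environmentally fluctuating payoffs
    A, N = ewa_update(A, N, payoffs, chosen)
print("final action-play probabilities:", choice_probs(A).round(3))
```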
Abstract:
We extend a previous model of the Neolithic transition in Europe [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)] by taking two effects into account: (i) we do not use the diffusion approximation (which corresponds to second-order Taylor expansions), and (ii) we take proper care of the fact that parents do not migrate away from their children (we refer to this as a time-order effect, in the sense that children grow up with their parents before they become adults and can survive and migrate). We also derive a time-ordered, second-order equation, which we call the sequential reaction-diffusion equation, and use it to show that effect (ii) is the more important one, and that both should in general be taken into account to derive accurate results. As an example, we consider the Neolithic transition: the model predictions agree with the observed front speed, and the corrections relative to previous models are important (up to 70%).
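To make point (i) concrete: the classical treatment starts from the exact reproduction-dispersal equation and Taylor-expands it. In one spatial dimension and in standard notation (not necessarily the paper's symbols),

```latex
p(x,\,t+T) - p(x,\,t)
  \;=\; \int \bigl[\,p(x+\Delta,\,t) - p(x,\,t)\,\bigr]\,\phi(\Delta)\,d\Delta \;+\; R_T[p],
```

where T is the generation time, phi the dispersal kernel and R_T the net reproduction over one generation. Expanding p(x+Delta, t) to second order in Delta and p(x, t+T) to first order in T yields Fisher's reaction-diffusion equation, dp/dt = D d^2p/dx^2 + F(p) with D = <Delta^2>/(2T), whose fronts travel at v = 2 sqrt(aD) for logistic growth at initial rate a. Avoiding this truncation and enforcing the time order of reproduction and dispersal is what modifies the predicted front speed in the present model.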
Abstract:
BACKGROUND: The analysis of DNA sequence polymorphisms can provide valuable information on the evolutionary forces shaping nucleotide variation, and gives insight into the functional significance of genomic regions. The recent and ongoing genome projects will radically improve our capability to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. RESULTS: We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have incorporated a graphical user interface. The main features of this software are: i) exhaustive population-genetic analyses, including those based on coalescent theory; ii) analysis adapted to the shallow data generated by high-throughput genome projects; iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; iv) identification of relevant genomic regions by sliding-window and wavelet-multiresolution approaches; v) visualization of the results, integrated with current genome annotations, in commonly available genome browsers. CONCLUSION: VariScan is a powerful and flexible suite of software for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for an exhaustive exploratory analysis of genome-wide DNA polymorphism data.
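As a small illustration of the sliding-window idea in feature iv), the sketch below computes nucleotide diversity (pi) in overlapping windows of a toy alignment; it only illustrates the generic approach and is not VariScan code (VariScan also offers many more statistics and the wavelet-based multiresolution analysis).

```python
import numpy as np

def sliding_window_pi(alignment, window, step):
    """Nucleotide diversity (pi) in sliding windows along an alignment.

    alignment : list of equal-length sequences (strings over A/C/G/T)
    Returns window start positions and the mean per-site pi in each window.
    """
    seqs = np.array([list(s) for s in alignment])
    n, length = seqs.shape
    n_pairs = n * (n - 1) / 2
    diffs = np.zeros(length)
    for i in range(n):                     # average pairwise differences per site
        for j in range(i + 1, n):
            diffs += seqs[i] != seqs[j]
    site_pi = diffs / n_pairs
    starts = np.arange(0, length - window + 1, step)
    return starts, np.array([site_pi[s:s + window].mean() for s in starts])

# Toy alignment: 4 sequences, 24 bp each, with a more variable central region.
cores = ["GGTT", "GCTA", "CGTT", "GGTA"]
alignment = ["A" * 10 + core + "A" * 10 for core in cores]
starts, pi = sliding_window_pi(alignment, window=8, step=4)
for s, p in zip(starts, pi):
    print(f"window {s:2d}-{s + 7:2d}: pi = {p:.3f}")
```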
Abstract:
Summary report. This thesis consists of three essays on optimal dividend strategies; each essay corresponds to one chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957). It is stated as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to the shareholders until the company's ruin. Since de Finetti (1957), the problem has taken several forms and has been solved for different models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently and with a different approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model there is a continuous, constant inflow of money, while the outflows are random: they follow a jump process, namely a compound Poisson process. A typical example of such a model is the surplus of an insurance company, where the inflows and outflows are the premiums and the claims, respectively. The first panel of Figure 1 shows an example. In this thesis only barrier strategies are considered: whenever the surplus exceeds the barrier level b, the excess is distributed to the shareholders as dividends. The second panel of Figure 1 shows the same surplus example once a barrier of level b is introduced, and the third panel shows the cumulative dividends. Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, the optimal barriers are computed for different claim-amount distributions according to two criteria: I) the optimal barrier is computed using the usual criterion, which is to maximize the expectation of the discounted dividends until ruin; II) the optimal barrier is computed using a second criterion, which is to maximize the expectation of the difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to have the shareholders bear the deficit at ruin; this is all the more relevant for an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. The idea of adding a penalty at ruin was generalized in Gerber et al. (2006a). Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has full information about the claim-amount distribution, only its first moments are assumed to be known. The essay develops and examines methods for approximating, in this situation, the level of the optimal barrier under the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use two, three or four of the first moments. Numerical examples allow us to compare the approximations of the optimal barrier level, not only with the exact values but also with each other. Chapter 3: "Optimal dividends with incomplete information". In this third and last essay, we again consider approximation methods for the optimal barrier level when only the first moments of the jump-amount distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other; in contrast to the classical model, however, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would be suitable for a pension fund or a company specializing in discoveries or inventions. Both the "de Vylder" and "diffusion" approximations and the new "gamma" and "gamma process" approximations are explained and analyzed; the new approximations appear to give better results in some cases.
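As a purely illustrative complement (not the de Vylder, diffusion, gamma or gamma-process approximations developed in the thesis), the sketch below estimates by Monte Carlo the quantity being maximized in the classical model of Chapters 1 and 2: the expected present value of dividends paid under a barrier b, for a compound Poisson surplus with exponential claims. All parameter values are hypothetical.

```python
import numpy as np

def expected_discounted_dividends(b, u0=5.0, c=1.5, lam=1.0, claim_mean=1.0,
                                  delta=0.05, horizon=100.0,
                                  n_paths=4000, seed=0):
    """Monte Carlo estimate of the expected present value of dividends paid
    under a barrier strategy at level b in the classical risk model.

    Premiums arrive continuously at rate c; claims form a compound Poisson
    process (rate lam, exponential sizes with mean claim_mean).  While the
    surplus sits at the barrier, the premium income is paid out as dividends;
    payments stop at ruin.  delta is the force of interest used for discounting.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        t, u, pv = 0.0, min(u0, b), max(u0 - b, 0.0)    # possible initial lump sum
        while t < horizon:
            w = rng.exponential(1.0 / lam)              # time until the next claim
            t_hit = (b - u) / c                         # time to reach the barrier
            if t_hit < w:                               # dividends flow at rate c
                start, end = t + t_hit, min(t + w, horizon)
                if end > start:
                    pv += c * (np.exp(-delta * start) - np.exp(-delta * end)) / delta
                u = b
            else:
                u += c * w
            t += w
            if t >= horizon:
                break
            u -= rng.exponential(claim_mean)            # claim occurs at time t
            if u < 0:
                break                                   # ruin: dividends stop
        total += pv
    return total / n_paths

for b in (4.0, 6.0, 8.0, 10.0):
    print(f"b = {b:4.1f}: expected discounted dividends ~ "
          f"{expected_discounted_dividends(b):.3f}")
```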