978 results for Thresholding Approximation
Abstract:
This paper proposes a very fast method for blindly initializing a nonlinear mapping that transforms a sum of random variables. The method provides a surprisingly good approximation even when the basic assumption is not fully satisfied. It can be used successfully to initialize the nonlinearity in post-nonlinear mixtures or in Wiener system inversion, improving algorithm speed and convergence.
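The abstract does not spell out the initialization itself, but a natural reading of the central-limit argument (a sum of random variables is approximately Gaussian) is that the compensating nonlinearity can be initialized by mapping the empirical distribution of the observed signal onto Gaussian quantiles. The sketch below illustrates that idea; the function name `init_nonlinearity` and the rank-based quantile mapping are assumptions for illustration, not the authors' published algorithm.

```python
import numpy as np
from scipy.stats import norm

def init_nonlinearity(x):
    """Blind initialization of an inverting nonlinearity (a sketch).

    Assumes the unobserved linear mixture is approximately Gaussian
    (central limit theorem), so the inverse of the observed nonlinearity
    is estimated by mapping empirical quantiles of x onto Gaussian ones.
    Returns the 'gaussianized' signal and the fitted mapping as a table.
    """
    n = len(x)
    ranks = np.argsort(np.argsort(x))          # rank of each sample, 0..n-1
    u = (ranks + 0.5) / n                      # empirical CDF values in (0, 1)
    g = norm.ppf(u)                            # matching Gaussian quantiles
    order = np.argsort(x)
    return g, (x[order], g[order])             # mapping table (x_k, g_k)

# Example: recover a cubic distortion of a Gaussian signal.
rng = np.random.default_rng(0)
s = rng.standard_normal(10_000)
x = s + 0.3 * s**3                             # post-nonlinear observation
g, _ = init_nonlinearity(x)
print(np.corrcoef(g, s)[0, 1])                 # close to 1: good initial inverse
```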
Abstract:
Despite considerable evidence that dispersal between habitat patches is often asymmetric, most metapopulation models assume symmetric dispersal. In this paper, we develop a Monte Carlo simulation model to quantify the effect of asymmetric dispersal on metapopulation persistence. Our results suggest that metapopulation extinctions are more likely when dispersal is asymmetric. Metapopulation viability in systems with symmetric dispersal mirrors results from a mean field approximation, where the system persists if the expected per-patch colonization probability exceeds the expected per-patch local extinction rate. For asymmetric cases, the mean field approximation underestimates the number of patches necessary for maintaining population persistence. If we use a model assuming symmetric dispersal when dispersal is actually asymmetric, metapopulation persistence is misestimated in more than 50% of cases. Metapopulation viability depends on patch connectivity in symmetric systems, whereas in the asymmetric case the number of patches is more important. These results have important implications for managing spatially structured populations in which asymmetric dispersal may occur. Future metapopulation models should account for asymmetric dispersal, and empirical work is needed to quantify the patterns and consequences of asymmetric dispersal in natural metapopulations.
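The paper's exact simulation model is not given in the abstract; the following is a minimal sketch of the kind of patch-occupancy Monte Carlo it describes. The ring topology and the parameter names (`n_patches`, `p_ext`, the directional colonization rates `c_fwd`/`c_bwd`) are illustrative assumptions; asymmetry is introduced by making dispersal stronger in one direction than the other while holding total dispersal fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_patches=20, p_ext=0.1, c_fwd=0.15, c_bwd=0.15,
             steps=500, runs=200):
    """Fraction of runs in which a ring metapopulation survives `steps`.

    c_fwd / c_bwd: per-step colonization probability exerted on the next /
    previous patch on the ring. c_fwd == c_bwd is symmetric dispersal;
    c_fwd != c_bwd is asymmetric dispersal.
    """
    survived = 0
    for _ in range(runs):
        occ = np.ones(n_patches, dtype=bool)   # start fully occupied
        for _ in range(steps):
            colon_fwd = np.roll(occ, 1) & (rng.random(n_patches) < c_fwd)
            colon_bwd = np.roll(occ, -1) & (rng.random(n_patches) < c_bwd)
            ext = rng.random(n_patches) < p_ext
            occ = (occ & ~ext) | colon_fwd | colon_bwd
            if not occ.any():
                break                          # metapopulation extinction
        survived += occ.any()
    return survived / runs

print("symmetric :", simulate(c_fwd=0.15, c_bwd=0.15))
print("asymmetric:", simulate(c_fwd=0.27, c_bwd=0.03))  # same mean dispersal
```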
Abstract:
Software makes it possible to simulate the communication routes between human settlements; many studies have been published on the subject, using several different approaches to the problem. The aim of the present work was to build a new model capable of simulating a communication route, with the particularity that in this case an approximate idea of its course was already available from the work of other authors. The route in question is the Via Augusta, on the stretch linking the Coll de Panissars and the city of Girona.
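The abstract does not name the algorithm, but route simulation between settlements is typically formulated as a least-cost path over a cost surface derived from terrain. The sketch below shows that general approach with Dijkstra's algorithm on a grid; the slope-penalized cost function and the synthetic elevation model are assumptions for illustration, not the model actually used for the Via Augusta.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra's algorithm over a 2D grid of per-cell traversal costs."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[start] = 0.0
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# Toy terrain: traversal cost grows with the slope of a synthetic elevation model.
elev = np.add.outer(np.linspace(0, 1, 50) ** 2, np.linspace(0, 1, 50))
gy, gx = np.gradient(elev)
cost = 1.0 + 50.0 * np.hypot(gx, gy)          # flat cells are cheap
route = least_cost_path(cost, (0, 0), (49, 49))
print(len(route), "cells on the simulated route")
```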
Abstract:
By means of computer simulations and solution of the equations of the mode coupling theory (MCT), we investigate the role of the intramolecular barriers in several dynamic aspects of nonentangled polymers. The investigated dynamic range extends from the caging regime characteristic of glass-formers to the relaxation of the chain Rouse modes. We review our recent work on this question, provide new results, and critically discuss the limitations of the theory. Solutions of the MCT for the structural relaxation reproduce qualitative trends of simulations for weak and moderate barriers. However, a progressive discrepancy is revealed as the limit of stiff chains is approached. This disagreement does not seem related to dynamic heterogeneities, which indeed are not enhanced by increasing the barrier strength. Nor is it connected with the breakdown of the convolution approximation for three-point static correlations, which retains its validity for stiff chains. These findings suggest the need for an improvement of the MCT equations for polymer melts. Concerning the relaxation of the chain degrees of freedom, MCT provides a microscopic basis for time scales from chain reorientation down to the caging regime. It rationalizes, from first principles, the observed deviations from the Rouse model on increasing the barrier strength. These include anomalous scaling of relaxation times, long-time plateaux, and a nonmonotonic wavelength dependence of the mode correlators.
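As context for the Rouse-mode analysis mentioned above, the sketch below computes chain normal modes and their autocorrelations from a bead trajectory, using the standard discrete Rouse-mode definition. The random-walk trajectory is a synthetic stand-in for real simulation data, so its correlators will not show Rouse dynamics; the point is only the bookkeeping.

```python
import numpy as np

def rouse_modes(r, p_max):
    """Discrete Rouse modes X_p(t) of a chain trajectory.

    r: array (T, N, 3) of bead positions over T frames for an N-bead chain.
    Returns array (T, p_max, 3) with modes p = 1..p_max.
    """
    T, N, _ = r.shape
    n = np.arange(N) + 0.5
    modes = np.empty((T, p_max, 3))
    for p in range(1, p_max + 1):
        w = np.cos(p * np.pi * n / N) / N       # discrete cosine weights
        modes[:, p - 1, :] = np.einsum('tnd,n->td', r, w)
    return modes

def mode_correlator(X):
    """Normalized autocorrelation <X_p(t).X_p(0)> / <X_p(0)^2> per mode."""
    T = X.shape[0]
    c0 = np.einsum('pd,pd->p', X[0], X[0])
    return np.array([np.einsum('pd,pd->p', X[t], X[0]) / c0 for t in range(T)])

# Synthetic stand-in trajectory: a freely diffusing random-walk chain.
rng = np.random.default_rng(2)
steps = rng.normal(scale=0.05, size=(200, 32, 3))
chain0 = np.cumsum(rng.standard_normal((32, 3)), axis=0)
traj = chain0 + np.cumsum(steps, axis=0)
C = mode_correlator(rouse_modes(traj, p_max=4))
print(C[::50])                                 # decay of the first 4 modes
```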
Abstract:
To understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of the learning rules producing behavior. Owing to the intrinsically stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and to formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory to derive differential equations governing action play probabilities, which turn out to have the qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions under which the deterministic approximation is closest to the original stochastic learning process for standard 2-action, 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory to the study of animal learning.
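The experience-weighted attraction (EWA) family the authors build on has a compact update rule; below is a minimal sketch of it for a 2-action game, following the standard Camerer-Ho parametrization. The payoff matrix (a coordination game) and the parameter values are placeholders, not the fluctuating games studied in the paper. Setting delta = 0 recovers reinforcement learning; delta = 1 fully weights forgone payoffs, approaching belief-based learning.

```python
import numpy as np

rng = np.random.default_rng(3)

def ewa_step(A, N, chosen, payoffs, phi=0.9, rho=0.9, delta=0.5, lam=2.0):
    """One experience-weighted attraction update for one learner.

    A: attractions per action; N: experience weight;
    chosen: index of the action played; payoffs: payoff each action would
    have earned this round (realized for `chosen`, forgone for the rest).
    """
    N_new = rho * N + 1.0
    weight = np.where(np.arange(len(A)) == chosen, 1.0, delta)
    A_new = (phi * N * A + weight * payoffs) / N_new
    p = np.exp(lam * A_new)
    return A_new, N_new, p / p.sum()            # logit choice probabilities

# Two EWA learners in a 2x2 coordination game with payoff 1 on a match.
A1, N1, p1 = np.zeros(2), 1.0, np.ones(2) / 2
A2, N2, p2 = np.zeros(2), 1.0, np.ones(2) / 2
for t in range(200):
    a1, a2 = rng.choice(2, p=p1), rng.choice(2, p=p2)
    pay1 = np.array([float(a2 == 0), float(a2 == 1)])  # payoff of each action
    pay2 = np.array([float(a1 == 0), float(a1 == 1)])
    A1, N1, p1 = ewa_step(A1, N1, a1, pay1)
    A2, N2, p2 = ewa_step(A2, N2, a2, pay2)
print(p1, p2)                                   # typically near one corner
```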
Abstract:
We extend a previous model of the Neolithic transition in Europe [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)] by taking two effects into account: (i) we do not use the diffusion approximation (which corresponds to second-order Taylor expansions), and (ii) we take proper care of the fact that parents do not migrate away from their children (we refer to this as a time-order effect, in the sense that it implies that children grow up with their parents before becoming adults who can survive and migrate). We also derive a time-ordered, second-order equation, which we call the sequential reaction-diffusion equation, and use it to show that effect (ii) is the more important of the two, and that in general both should be taken into account to derive accurate results. As an example, we consider the Neolithic transition: the model predictions agree with the observed front speed, and the corrections relative to previous models are important (up to 70%).
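For orientation, the diffusion approximation removed in point (i) leads to the classical Fisher front speed, written below with a the initial growth rate and D the diffusion coefficient. The time-delayed correction with generation time T is quoted, as an assumption, in the form reported for the earlier Fort-Méndez model and should be checked against the papers; the sequential equation of the present work modifies it further.

```latex
% Baseline: Fisher front speed under the diffusion approximation.
% Corrected form: time-delayed (hyperbolic) speed, recovering v_F as T -> 0.
\begin{align}
  v_F &= 2\sqrt{aD}, \\
  v   &= \frac{2\sqrt{aD}}{1 + aT/2}.
\end{align}
```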
Abstract:
Synthesis report. This thesis consists of three essays on optimal dividend strategies; each essay corresponds to a chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The optimal dividend problem goes back to de Finetti (1957). It is stated as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to the shareholders until the company's ruin. Since de Finetti (1957), the problem has taken several forms and has been solved for different models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently, using another approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model there is a continuous, constant inflow of money, while the outflows are random: they follow a jump process, namely a compound Poisson process. A good example of such a model is the surplus of an insurance company, where the inflows are the premiums and the outflows are the claims. The first graph of Figure 1 illustrates an example. In this thesis only barrier strategies are considered: whenever the surplus exceeds the barrier level b, the excess is distributed to the shareholders as dividends. The second graph of Figure 1 shows the same surplus example when a barrier at level b is introduced, and the third graph of that figure shows the cumulative dividends.

Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, optimal barriers are computed for various claim-amount distributions under two criteria: (I) the optimal barrier maximizes the expected discounted dividends until ruin (the usual criterion); (II) the optimal barrier maximizes the expected difference between the discounted dividends until ruin and the deficit at ruin. The essay is inspired by Dickson and Waters (2004), whose idea is to make the shareholders bear the deficit at ruin; this is all the more appropriate for an insurance company, whose ruin should be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow the optimal barrier levels in situations I and II to be compared. This idea of adding a penalty at ruin was generalized in Gerber et al. (2006a).

Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has complete information on the claim-amount distribution, only the first moments of that distribution are assumed known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level under the usual criterion (case I above). The "De Vylder" and "diffusion" approximations are explained and examined; some of these approximations use the first two, three or four moments. Numerical examples allow the approximations of the optimal barrier level to be compared, not only with the exact values but also with one another.

Chapter 3: "Optimal dividends with incomplete information". This third and last essay returns to methods for approximating the optimal barrier level when only the first moments of the jump-amount distribution are known, this time in the dual model. As in the classical model, there is a continuous flow in one direction and a jump process in the other; unlike the classical model, however, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would suit a pension fund, or a company specializing in discoveries or inventions. The "De Vylder" and "diffusion" approximations, as well as the new "gamma" and "gamma process" approximations, are explained and analyzed. The new approximations appear to give better results in some cases.
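To make the objective concrete, here is a minimal sketch of the classical model with a barrier strategy: premiums arrive at rate c, claims follow a compound Poisson process, and any surplus above b is paid out immediately, so the expected discounted dividends until ruin can be estimated by simulation. The parameter values and the exponential claim distribution are illustrative assumptions, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def discounted_dividends(b, u=5.0, c=1.1, lam=1.0, mean_claim=1.0,
                         delta=0.03, horizon=500.0, runs=2000):
    """Monte Carlo estimate of E[discounted dividends until ruin].

    Surplus grows at premium rate c between claims; claims ~ Exp(mean_claim)
    arrive as a Poisson(lam) process; surplus above the barrier b is paid
    out as dividends immediately, so the surplus never exceeds b.
    """
    total = 0.0
    for _ in range(runs):
        paid = max(u - b, 0.0)                  # initial excess paid at once
        t, x = 0.0, min(u, b)
        while t < horizon:
            w = rng.exponential(1.0 / lam)      # waiting time to next claim
            t_hit = (b - x) / c                 # time to reach the barrier
            if t_hit < w:
                # Dividends flow at rate c while the surplus sits at b;
                # discounted value of that stream on [t + t_hit, t + w]:
                paid += c / delta * (np.exp(-delta * (t + t_hit))
                                     - np.exp(-delta * (t + w)))
                x = b
            else:
                x += c * w
            t += w
            x -= rng.exponential(mean_claim)    # claim arrives
            if x < 0:
                break                           # ruin; deficit at ruin is |x|
        total += paid
    return total / runs

for b in (2.0, 5.0, 10.0):
    print(b, round(discounted_dividends(b), 3))
```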
Abstract:
Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can nevertheless be evaluated in daily practice with the trabecular bone score (TBS). TBS is very simple to obtain, by reanalyzing a lumbar DXA scan, and has proven diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine, in daily practice, the CRF and the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. The OsteoLaus cohort (1400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. It is derived from the COLAUS cohort, started in Lausanne in 2003, whose main goal is to obtain information on the epidemiology and genetic determinants of cardiovascular risk in 6700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported here. We included 631 women: mean age 67.4 ± 6.7 years, BMI 26.1 ± 4.6, mean lumbar spine BMD 0.943 ± 0.168 (T-score −1.4 SD), and TBS 1.271 ± 0.103. As expected, the correlation between BMD and site-matched TBS is low (r² = 0.16). The prevalences of VFx grade 2/3, major OP Fx and all OP Fx are 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted ORs (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1), and 1.3 (1.1-1.6) for BMD for the different categories of fractures, and 2.0 (1.4-3.0), 1.9 (1.4-2.5), and 1.4 (1.1-1.7) for TBS, respectively. Only 32 to 37% of women with OP Fx have a BMD < −2.5 SD or a TBS < 1.200; if we combine BMD < −2.5 SD or TBS < 1.200, 54 to 60% of women with an osteoporotic Fx are identified. As in the studies already published, these preliminary results confirm the partial independence between BMD and TBS. More importantly, combining TBS with BMD significantly increases the identification of women with prevalent OP Fx who would have been misclassified by BMD alone. For the first time we have complementary information about fracture (VFA), density (BMD), and micro- and macro-architecture (TBS and HSA) from a single simple, cheap, low-ionizing-radiation device: DXA. Such complementary information is very useful for the patient in daily practice and will likely also have an impact on cost-effectiveness analyses.
Abstract:
We have constructed a forward modelling code in Matlab capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires, frequency and transient soundings, and present new solutions for the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the Fast Hankel Transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. Modelling results are in agreement with those of other authors, but the computation time is longer than that of other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as the quasi-static approximation. Thus, modelling of effects based on changes in the magnetic permeability and the permittivity is also possible.
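To illustrate the second evaluation scheme mentioned (numerical integration between the zeros of the Bessel function), here is a generic sketch in Python rather than the package's Matlab source: the integrand is split at consecutive zeros of J0(kr), so each piece is smooth and the sign-alternating partial sums converge for decaying kernels. The kernel and parameters are illustrative; a known closed form is used as a check.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros

def hankel0(f, r, n_intervals=60):
    """Approximate the Hankel transform  int_0^inf f(k) J0(k r) k dk
    by integrating between consecutive zeros of J0(k r)."""
    zeros = jn_zeros(0, n_intervals) / r        # integrand sign changes here
    breaks = np.concatenate(([0.0], zeros))
    total = 0.0
    for a, b in zip(breaks[:-1], breaks[1:]):
        part, _ = quad(lambda k: f(k) * j0(k * r) * k, a, b)
        total += part
    return total

# Check against a known closed form:
#   int_0^inf exp(-a k) J0(k r) k dk = a / (a^2 + r^2)^(3/2)
a, r = 1.0, 2.0
print(hankel0(lambda k: np.exp(-a * k), r), a / (a**2 + r**2) ** 1.5)
```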
Abstract:
The most suitable approach for benchmarking web accessibility is manual expert evaluation supplemented by automatic analysis tools. But manual evaluation has a high cost and is impractical to apply to large websites; in reality, there is no choice but to rely on automated tools when reviewing large websites for accessibility. The question is: to what extent can the results of the automatic evaluation of a website and of individual web pages be used as an approximation of manual results? This paper presents the initial results of an investigation aimed at answering this question. We performed both manual and automatic evaluations of the accessibility of the web pages of two sites and compared the results. In our data set, automatically retrieved results could indeed be used as an approximation of manual evaluation results.
Abstract:
In May 1999, the European Space Agency (ESA) selected the Earth Explorer Opportunity Soil Moisture and Ocean Salinity (SMOS) mission to obtain global and frequent soil moisture and ocean salinity maps. SMOS' single payload is the Microwave Imaging Radiometer by Aperture Synthesis (MIRAS), an L-band two-dimensional aperture synthesis radiometer with multiangular observation capabilities. At L-band, the brightness temperature sensitivity to sea surface salinity (SSS) is low, approximately 0.5 K/psu at 20 °C, decreasing to 0.25 K/psu at 0 °C, comparable to the sensitivity to wind speed, ~0.2 K/(m/s) at nadir. However, at a given time, the sea state does not depend only on local winds, but also on the local wind history and on the presence of waves traveling in from far away. The Wind and Salinity Experiment (WISE) 2000 and 2001 campaigns were sponsored by ESA to determine the impact of oceanographic and atmospheric variables on the L-band brightness temperature at vertical and horizontal polarizations. This paper presents the results of the analysis of three nonstationary sea state conditions: growing seas, decreasing seas, and the presence of swell. Measured sea surface spectra are compared with theoretical ones computed using the instantaneous wind speed. Differences can be minimized using an "effective wind speed" that makes the theoretical spectrum best match the measured one. The impact on the predicted brightness temperatures is then assessed using the small slope approximation/small perturbation method (SSA/SPM).
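The "effective wind speed" is, in essence, a one-parameter fit: choose the wind speed whose theoretical spectrum best matches the measured one. The sketch below shows that fit; `model_spectrum` is a deliberately simple, hypothetical power-law placeholder, not the sea-surface spectrum model used in the WISE analysis, and the log-space least-squares misfit is a common but assumed choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model_spectrum(k, u10):
    """Placeholder 1D sea-surface spectrum vs wind speed u10.
    Hypothetical shape, for illustration only."""
    return 1e-3 * u10**1.5 * k**-3.0

def effective_wind_speed(k, s_measured, bounds=(0.5, 30.0)):
    """Wind speed whose model spectrum best matches the measurement
    (least squares in log space, a common choice for spectra)."""
    def misfit(u10):
        return np.sum((np.log(model_spectrum(k, u10)) - np.log(s_measured))**2)
    return minimize_scalar(misfit, bounds=bounds, method='bounded').x

# Synthetic 'measurement': spectrum generated at 8 m/s with multiplicative noise.
rng = np.random.default_rng(5)
k = np.linspace(0.1, 2.0, 40)
s_meas = model_spectrum(k, 8.0) * rng.lognormal(sigma=0.1, size=k.size)
print(effective_wind_speed(k, s_meas))          # close to 8.0
```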
A performance lower bound for quadratic timing recovery accounting for the symbol transition density
Abstract:
The symbol transition density in a digitally modulated signal affects the performance of practical synchronization schemes designed for timing recovery. This paper focuses on the derivation of simple performance limits for the estimation of the time delay of a noisy linearly modulated signal in the presence of various degrees of symbol correlation produced by the various transition densities in the symbol streams. The paper develops high- and low-signal-to-noise ratio (SNR) approximations of the so-called (Gaussian) unconditional Cramér–Rao bound (UCRB), as well as general expressions that are applicable in all ranges of SNR. The derived bounds are valid only for the class of quadratic, non-data-aided (NDA) timing recovery schemes. To illustrate the validity of the derived bounds, they are compared with the actual performance achieved by some well-known quadratic NDA timing recovery schemes. The impact of the symbol transition density on the classical threshold effect present in NDA timing recovery schemes is also analyzed. Previous work on performance bounds for timing recovery from various authors is generalized and unified in this contribution.
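As a concrete member of the quadratic NDA class the bounds apply to, the sketch below implements the well-known Oerder-Meyr square-law timing estimator: square the oversampled signal and read the timing off the phase of its spectral line at the symbol rate. The pulse shape, oversampling factor and noise level are illustrative choices; estimator variance versus the derived bounds would be compared on top of a loop like this.

```python
import numpy as np

rng = np.random.default_rng(6)

def oerder_meyr(x, n_os):
    """Square-law NDA timing estimate, as a fraction of a symbol period.

    x: received samples at n_os samples/symbol. The timing is the phase of
    the spectral line of |x|^2 at the symbol rate (quadratic in the data)."""
    n = np.arange(len(x))
    line = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * n / n_os))
    return -np.angle(line) / (2 * np.pi)

# Linearly modulated signal: random BPSK symbols, raised-cosine-like pulse.
n_os, n_sym, tau_true = 8, 2000, 0.31           # true delay: 0.31 symbol
t = np.arange(-4, 4, 1 / n_os) - tau_true
pulse = np.sinc(t) * np.cos(0.5 * np.pi * t) / (1 - t**2 + 1e-12)  # beta = 0.5
syms = rng.choice([-1.0, 1.0], n_sym)
up = np.zeros(n_sym * n_os)
up[::n_os] = syms
x = np.convolve(up, pulse, mode='same')
x += 0.05 * rng.standard_normal(x.shape)        # moderate-SNR noise
print(oerder_meyr(x, n_os))                     # close to 0.31
```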
Abstract:
This paper analyzes the asymptotic performance of maximum likelihood (ML) channel estimation algorithms in wideband code division multiple access (WCDMA) scenarios. We concentrate on systems with periodic spreading sequences (period larger than or equal to the symbol span) where the transmitted signal contains a code division multiplexed pilot for channel estimation purposes. First, the asymptotic covariances of the training-only, semi-blind conditional maximum likelihood (CML) and semi-blind Gaussian maximum likelihood (GML) channel estimators are derived. Then, these formulas are further simplified assuming randomized spreading and training sequences under the approximation of high spreading factors and a high number of codes. The results provide a useful tool to describe the performance of the channel estimators as a function of basic system parameters such as the number of codes, spreading factors, or traffic-to-training power ratio.
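As a reference point for the training-only estimator analyzed first, here is a generic least-squares sketch: build the convolution matrix of the known pilot chips and solve for the channel taps, which is the textbook training-only ML estimate under white Gaussian noise. This is not the paper's semi-blind CML/GML machinery, and all dimensions and the BPSK pilot are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(7)

def ls_channel_estimate(y, pilot, L):
    """Training-only least-squares channel estimate (ML for white noise).

    y: received samples; pilot: known transmitted chips; L: channel length.
    Solves  y ~= A h  where A is the pilot convolution matrix."""
    A = toeplitz(pilot, np.r_[pilot[0], np.zeros(L - 1)])   # (len(pilot), L)
    h_hat, *_ = np.linalg.lstsq(A, y[:len(pilot)], rcond=None)
    return h_hat

L = 4
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)    # true channel
pilot = rng.choice([-1.0, 1.0], 256)                        # known chips
y = np.convolve(pilot, h)[:256]
y += 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(np.round(ls_channel_estimate(y, pilot, L) - h, 2))    # near zero
```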
Abstract:
This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained by minimizing the estimator variance subject to some constraints on the estimator bias. The resulting optimal estimator is found to depend on the observation fourth-order moments, which can be calculated analytically from the known signal model. Unfortunately, in most cases the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The formulation is then applied to track the angle-of-arrival (AoA) of multiple digitally modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods, which are shown to be quadratic in the observed data as well. Simulations have confirmed that the discrete nature of the transmitted symbols can be exploited to considerably improve the discrimination of nearby sources in medium-to-high SNR scenarios.
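For intuition about estimators that are quadratic in the observations, the sketch below scans angle-of-arrival with a conventional (Bartlett) beamformer, whose spectrum depends on the data only through the sample covariance matrix. It is a standard illustration of this class of second-order methods, not the optimal tracker developed in the paper; array size, angles and SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def steering(theta_deg, m, spacing=0.5):
    """ULA steering vector, half-wavelength element spacing by default."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

m, n_snap = 8, 400
angles_true = (-12.0, 25.0)
S = np.array([rng.choice([-1, 1], n_snap) for _ in angles_true])  # BPSK sources
A = np.column_stack([steering(a, m) for a in angles_true])
X = A @ S + 0.3 * (rng.standard_normal((m, n_snap))
                   + 1j * rng.standard_normal((m, n_snap)))
R = X @ X.conj().T / n_snap                     # sample covariance (quadratic)

grid = np.linspace(-90, 90, 721)
spec = np.array([np.real(steering(a, m).conj() @ R @ steering(a, m))
                 for a in grid])
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
top = grid[1:-1][is_peak][np.argsort(spec[1:-1][is_peak])[-2:]]
print(np.sort(top))                             # close to -12 and 25 degrees
```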
Abstract:
This text is a collection of procedures for inserting AutoCAD blocks more efficiently when solving previously classified problems. The FIRST PART describes working protocols that the user must apply manually, while the SECOND PART offers routines programmed in AutoLISP and VisualLISP that free the user from that obligation.

If we left it there, however, it might seem that the same manual methods presented first are the ones that AutoLISP then automates, so it should be made clear that the problems addressed in the FIRST PART, although close to those of the SECOND, are different. The FIRST PART reproduces the content of a monograph (BLOCS I GEOMETRIA: 5 EXERCICIS COMENTATS) that belongs to the support material for the course ELEMENTS DE CAD, taught by the author at the ETS d'Enginyeria de Telecomunicació de Barcelona. Its purpose is to fill the bibliographic gap detected on the geometric side of block insertion, as opposed to the side concerned with the most suitable data structure in each context (embedding drawings with INSERT versus linking them with REFX), which is treated far more extensively elsewhere, by proposing a typological systematization of the cases in which the scale is a linear function of a distance.

The SECOND PART goes further and extends the AutoCAD repertoire with the commands GINSERT, RATREDIT, INSERTOK, INS2D, INS3D, BLOQUEOK, DESCOMPOK, DEF-TRANSF, APL-TRANSF-V and APL-TRANSF-N, of which INS2D and INS3D (INSERTOK is a simplified version of INS2D, for blocks without attributes) are the most innovative contribution and the one that pushes the potential of block insertion the furthest. In one sentence: the goal is to make the insertion of a block (which may be the original, a block consisting of an insertion of the original, or one consisting of an insertion of the preceding one) fit a previously established frame, much as the ESCALA (SCALE) and GIRA (ROTATE) commands, through their Reference option, apply to the selected objects the scaling or rotation needed for a reference element to reach a given size or position.

To pinpoint the core of the problem, one observation is unavoidable: when a 2D block has prudently been referred to an orthogonal unit square, inserting it so that it fits any rectangular frame set up in the drawing is immediate, but it is no longer as easy to chain insertions so that, beyond a simple combination of scaling, rotation and translation, the operation implicitly carries a shear transformation. Clearly, if we insert the block rotated and convert the insertion into a block which we in turn insert again, this time with nonuniform scaling, the image of the original reference square will be a parallelogram. The problem is: given a specific rhomboidal frame already drawn, what rotation must be applied to the first insertion, and what rotation and scale factors to the second, so that the reference square fits the frame? The problem becomes more complicated if, in addition, we want to reuse the result of the first insertion for other parallelograms, organizing a non-redundant system of intermediate insertions. INS2D and INS3D answer these questions (the latter fitting not a parallelogram but a parallelepiped), and they are applicable to blocks with attributes, not only conventional ones (those contained in the block's base plane, the only ones guaranteed to work with the INSERT command) but also attributes placed and oriented freely.
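The chained-insertion question above is, in linear-algebra terms, the factorization of the 2x2 frame matrix into rotation, nonuniform scale and rotation, which is exactly what the singular value decomposition provides. The sketch below (Python rather than AutoLISP, and not the code of INS2D itself) recovers the rotation of the first insertion and the scale factors and rotation of the second from the target parallelogram's edge vectors.

```python
import numpy as np

def insertion_parameters(e1, e2):
    """Decompose the frame matrix M = [e1 e2] (columns: edge vectors of the
    target parallelogram, i.e. images of the unit square's sides) as
        M = R2 @ S @ R1     (rotation . nonuniform scale . rotation)
    via the SVD. Returns (angle1, (sx, sy), angle2) in degrees: rotate the
    first insertion by angle1, block the result, then insert that block
    with scale factors (sx, sy) and rotation angle2."""
    M = np.column_stack([e1, e2]).astype(float)
    if np.linalg.det(M) <= 0:
        raise ValueError("frame is degenerate or reverses orientation")
    U, s, Vt = np.linalg.svd(M)
    if np.linalg.det(U) < 0:            # det(M) > 0 forces det(Vt) < 0 too;
        U[:, 1] *= -1                   # flip matched column/row so both
        Vt[1, :] *= -1                  # factors become pure rotations
    ang = lambda R: np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return ang(Vt), tuple(s), ang(U)

# Rhomboidal frame: a sheared, rotated image of the unit square.
a1, (sx, sy), a2 = insertion_parameters(e1=[2.0, 0.5], e2=[0.3, 1.2])
print(round(a1, 2), round(sx, 3), round(sy, 3), round(a2, 2))

# Check: rebuilding M from the three insertion steps reproduces the frame.
rot = lambda d: np.array([[np.cos(np.radians(d)), -np.sin(np.radians(d))],
                          [np.sin(np.radians(d)),  np.cos(np.radians(d))]])
print(rot(a2) @ np.diag([sx, sy]) @ rot(a1))    # ~ [[2, 0.3], [0.5, 1.2]]
```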