969 results for RANDOM-WALK SIMULATIONS
Abstract:
This paper suggests a method for obtaining efficiency bounds in models containing either only infinite-dimensional parameters or both finite- and infinite-dimensional parameters (semiparametric models). The method is based on a theory of random linear functionals applied to the gradient of the log-likelihood functional, and is illustrated by computing the lower bound for Cox's regression model.
Abstract:
The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY records grouped into 10 monthly classes were analyzed for the additive genetic effect and for the permanent environmental and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects), and mean lactation curve were fitted as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate according to the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
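As a rough illustration of the covariance-function machinery described in this abstract, the sketch below evaluates an orthogonal Legendre basis over the ten monthly test-day classes and maps an assumed random-regression (co)variance matrix onto class-level genetic (co)variances. The matrix values and class coding are placeholders, not estimates from the paper.

```python
# Minimal sketch (not the paper's code): Legendre basis for a random
# regression test-day model; all numeric values are illustrative.
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, dim_min=1, dim_max=10, order=2):
    """Evaluate P_0..P_order at test-day classes rescaled to [-1, 1],
    as is standard in random regression test-day models."""
    x = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
    # Column j holds P_j(x); legval with a unit coefficient vector selects P_j.
    return np.column_stack(
        [legendre.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)]
    )

# Ten monthly classes; a 2nd-order basis for the additive genetic effect,
# as selected in the abstract.
classes = np.arange(1, 11)
Z = legendre_basis(classes, order=2)   # 10 x 3 design matrix
K = np.diag([1.0, 0.3, 0.1])           # assumed RR (co)variance matrix
G = Z @ K @ Z.T                        # genetic (co)variances across classes
print(np.round(np.diag(G), 3))         # genetic variance per monthly class
```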
Abstract:
The development of wireless broadband communication technology has raised interest in its professional use for public safety and crisis management needs. In emergency situations, the existing fixed telecommunication systems are often not available at all, or the capacity they offer is not sufficient. For this reason, a need has arisen for rapidly deployable, self-contained wireless broadband systems. The purpose of this master's thesis is to study wireless ad hoc multi-hop networks from the standpoint of public safety needs and to implement a test bed on which the operation of such a system can be demonstrated and studied in practice. The work examines point-to-point and, in particular, point-to-multipoint communication. The measurements cover the data rate, transmission power, and receiver sensitivity of the test bed. These results are used as simulator parameters so that the simulator results are as realistic as possible and consistent with the test bed. A selection of applications and traffic models matching public safety requirements is then chosen, and their performance is measured under different routing methods on both the test bed and the simulator. The results are evaluated and compared. Among the applications, multicast multi-hop video was chosen as the main subject of the investigations, and it and its characteristics are also to be studied in real field trials.
Abstract:
The objective of this master's thesis was to study how air turbulence affects the state of a uniformly moving web. One example of an industrial application is the fluidized-bed dryer. It is known that the growth of machine speeds, and the resulting increase in air flow velocity, exerts forces on the web and can cause flutter. Flutter leads to dynamic instability, which can be observed when a linear system becomes unstable and passes into nonlinear, bounded oscillation. Flutter degrades product quality and can lead to web breaks. The thesis presents knowledge of the interaction between air and the web, which can be used to develop a simplified model for simulating a moving web in a dryer. The gas-phase flow equations are solved using different turbulence models. The deformation of a viscoelastic web is also examined. Since exact physical and mechanical values for the web are not available in the literature, these properties were tested with different values so that the behavior of the web under stress could be examined. Knowledge of these properties is of primary importance when determining the aeroviscoelastic behavior of the web. Flow simulation is expensive and time consuming, which calls for the adoption of new research methods. In this work, a simplified model containing the properties that describe the air-web interaction is presented as an alternative approach. The model provides information on the effects of nonlinearity and turbulence, and on the complex connection between stability, externally induced oscillation, and self-induced oscillation. At the end of the thesis, an illustrative example is presented that describes the conditions in which the uniform motion of the web becomes unstable. When the pressure fluctuation caused by turbulence exceeds a certain limit, the oscillation of the web grows, changing from random to organized. The results show that turbulence has a large effect and cannot be neglected. The viscoelastic properties of the web must also be taken into account for the behavior of the web to be described accurately.
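Flutter of this kind is often introduced as an effective-damping problem: aerodynamic coupling subtracts from the structural damping, and the oscillation of a mode grows once the effective damping turns negative. The sketch below illustrates that generic mechanism with a single randomly forced mode; the coupling constant, frequency, and noise level are hypothetical and not taken from the thesis.

```python
# Illustrative sketch only: flutter as a damped oscillator whose effective
# damping is reduced by an assumed aerodynamic coupling parameter.
import numpy as np

def simulate_web_mode(aero_coupling, zeta=0.02, omega=2 * np.pi * 5.0,
                      dt=1e-3, steps=20000, noise=1e-3, seed=0):
    """Integrate x'' + 2*(zeta - aero_coupling)*omega*x' + omega^2*x = f(t)
    with small random (turbulent) forcing, semi-implicit Euler."""
    rng = np.random.default_rng(seed)
    x, v, amp = 0.0, 0.0, 0.0
    zeta_eff = zeta - aero_coupling        # flutter once this goes negative
    for _ in range(steps):
        f = noise * rng.standard_normal()
        a = f - 2.0 * zeta_eff * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        amp = max(amp, abs(x))
    return amp

for c in (0.0, 0.01, 0.03):   # below, near, above the stability limit
    print(f"coupling={c:.2f}  max amplitude={simulate_web_mode(c):.3e}")
```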
Abstract:
Intensity Modulated RadioTherapy (IMRT) is a treatment technique that uses modulated beam fluence. IMRT is now widespread in industrialized countries, owing to its improved dose conformation around the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (beamlets) with the same incidence. This technique is called step-and-shoot IMRT. In a clinical context, it is necessary to verify treatment plans before the first irradiation, and plan verification is still an open issue for this technique. Independent monitor unit calculation (representative of the weight of each beamlet) cannot be performed for step-and-shoot IMRT, because beamlet weights are not known a priori but are calculated by inverse planning. Besides, treatment plan verification by comparison with measured data is time consuming and is performed in a simple geometry, usually in a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc, whose model of the linear accelerator head was validated by comparing simulated and measured dose distributions in a large range of situations. The beamlets of an IMRT treatment plan are calculated individually by Monte Carlo, in the exact geometry of the treatment. The dose distributions of the beamlets are then converted into absorbed dose to water per monitor unit. The dose of the whole treatment in each volume element of the patient (voxel) can be expressed as a linear matrix equation of the monitor units and the dose per monitor unit of every beamlet. This equation is solved with a Non-Negative Least Squares fit (NNLS) algorithm. However, not every voxel inside the patient volume can be used to solve this equation, because of computational limitations; several voxel-selection strategies were tested, and the best choice consists in using the voxels inside the Planning Target Volume (PTV). The method presented in this work was tested with eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to clinically equivalent global dose distributions. Thus, this independent monitor unit calculation method for step-and-shoot IMRT is validated and can be used in clinical routine. A similar method could be considered for other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
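A minimal sketch of the NNLS step described above, assuming precomputed per-beamlet dose matrices; the array shapes and values are made up, and scipy.optimize.nnls stands in for whatever solver implementation the authors used.

```python
# Sketch: D[i, j] = Monte Carlo dose per monitor unit of beamlet j in PTV
# voxel i; d[i] = total plan dose in voxel i. NNLS solves
# min ||D @ mu - d|| subject to mu >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 500, 12            # assumed sizes, PTV voxels only
D = rng.random((n_voxels, n_beamlets))    # stand-in for per-beamlet MC dose
mu_true = rng.uniform(5, 50, n_beamlets)  # "true" monitor units
d = D @ mu_true                           # total dose the plan should deliver

mu, residual = nnls(D, d)                 # non-negative monitor units
print("max MU error:", np.abs(mu - mu_true).max(), "residual:", residual)
```

The non-negativity constraint is what makes NNLS the natural choice here: a plain least-squares inversion could return physically meaningless negative monitor units.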
Abstract:
Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. Pruning starts near the time of birth and is completed by the time of sexual maturation. Trigger signals able to induce synaptic pruning could be related to dynamic functions that depend on the timing of action potentials. Spike-timing-dependent synaptic plasticity (STDP) is a change in synaptic strength based on the ordering of pre- and postsynaptic spikes. The relation between synaptic efficacy and synaptic pruning suggests that weak synapses may be modified and removed through competitive "learning" rules. Such a plasticity rule might strengthen the connections among neurons that belong to cell assemblies characterized by recurrent patterns of firing. Conversely, connections that are not recurrently activated might decrease in efficacy and eventually be eliminated. The main goal of our study is to determine whether, and under which conditions, such cell assemblies may emerge out of a locally connected random network of integrate-and-fire units distributed on a 2D lattice receiving background noise and content-related input organized in both temporal and spatial dimensions. The originality of our study lies in the relatively large size of the network (10,000 units), the duration of the experiment (10^6 time units, one time unit corresponding to the duration of a spike), and the application of an original bio-inspired STDP modification rule compatible with hardware implementation. A first batch of experiments was performed to verify that the randomly generated connectivity and the STDP-driven pruning did not show any spurious bias in the absence of stimulation. Among other things, a scale factor was approximated to compensate for the effect of network size on activity. Networks were then stimulated with the spatiotemporal patterns. The analysis of the connections remaining at the end of the simulations, as well as of the time series of the interconnected units' activity, suggests that feed-forward circuits emerge from the initially randomly connected networks by pruning.
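As a hedged sketch of the kind of mechanism described above (a generic pair-based STDP rule, not the paper's hardware-oriented rule), the snippet below applies spike-timing-dependent updates to a single synapse and prunes it if its weight falls below a threshold. All parameter values are illustrative.

```python
# Generic pair-based STDP with weight clipping and pruning of weak synapses.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (time units)
W_PRUNE = 0.05                     # synapses below this weight are removed

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre."""
    if dt >= 0:                                # pre before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)   # post before pre: depress

def update_and_prune(w, pre_spikes, post_spikes):
    """Apply all-pairs STDP to one synapse, then prune it if too weak."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w = np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0)
    return w if w >= W_PRUNE else None         # None marks a pruned synapse

print(update_and_prune(0.5, pre_spikes=[10, 30], post_spikes=[12, 33]))
```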
Abstract:
Active magnetic bearings have recently been intensively developed, because noncontact support offers several advantages over conventional bearings. Thanks to improved materials, control strategies, and electrical components, the performance and reliability of active magnetic bearings keep improving. However, additional bearings, called retainer bearings, still play a vital role in active magnetic bearing applications. The most crucial moment in which the retainer bearings are needed is when the rotor drops from the active magnetic bearings onto the retainer bearings due to component or power failure. Without appropriate knowledge of the retainer bearings, there is a risk that a drop-down situation will be fatal for an active-magnetic-bearing-supported rotor system. This study introduces a detailed simulation model of a rotor system for describing a rotor drop-down onto the retainer bearings. The simulation model couples a finite element model, reduced with component mode synthesis, with detailed bearing models. Electrical components and electromechanical forces are not in the focus of this study. The research reviews the theoretical background of the finite element method with component mode synthesis, which can be used in the dynamic analysis of flexible rotors. The retainer bearings are described using two ball bearing models that include damping and stiffness properties, the oil film, the inertia of the rolling elements, and the friction between races and rolling elements. The first bearing model assumes that the cage of the bearing is ideal and holds the balls precisely in their predefined positions. The second bearing model extends the first and describes the behavior of a cageless bearing; in it, each ball is described using two degrees of freedom. The models introduced in this study are verified against a corresponding actual structure. Using the verified bearing models, the effects of the rotor system parameters on its dynamics during emergency stops are examined. As shown in this study, the misalignment of the retainer bearings has a significant influence on the behavior of the rotor system in a drop-down situation. A stability map of the rotor system is presented as a function of the rotational speed of the rotor and the misalignment of the retainer bearings. In addition, the effects of the parameters of the simulation procedure and of the rotor system on the dynamics of the system are studied.
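For illustration, the snippet below shows the kind of nonlinear ball-race contact force that detailed bearing models of this sort are typically built on (Hertzian stiffness plus viscous damping); the constants are generic placeholders rather than the thesis's identified values.

```python
# Illustrative Hertzian point-contact force for a ball-race contact.
def contact_force(delta, delta_dot, k=1.0e9, c=500.0, n=1.5):
    """F = k*delta^n + c*delta_dot, active only while the ball actually
    penetrates the race (delta > 0); n = 3/2 for a Hertzian point contact."""
    if delta <= 0.0:
        return 0.0
    return k * delta**n + c * delta_dot

# Force during a 5-micron penetration closing at 0.1 m/s:
print(f"{contact_force(5e-6, 0.1):.1f} N")
```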
Abstract:
The dynamical properties of shaken granular materials are important in many industrial applications where shaking is used to mix, segregate, and transport them. In this work a systematic, large-scale simulation study has been performed to investigate the rheology of dense granular media, in the presence of gas, in a three-dimensional vertical cylinder filled with glass balls. The base wall of the cylinder is subjected to sinusoidal oscillation in the vertical direction. The viscoelastic behavior of glass balls during a collision has been studied experimentally using a modified Newton's cradle device. By analyzing the measurements with a numerical model based on the finite element method, the viscous damping coefficient was determined for the glass balls. To obtain detailed information about the interparticle interactions in a shaker, a simplified model for collisions between particles of a granular material was proposed. In order to simulate the flow of the surrounding gas, a formulation of the equations for fluid flow in a porous medium including particle forces was proposed. These equations are solved with the Large Eddy Simulation (LES) technique, using a subgrid model originally proposed for compressible turbulent flows. For a pentagonal prism-shaped container under vertical vibrations, the results show that oscillon-type structures were formed. Oscillons are highly localized particle-like excitations of the granular layer. This self-sustaining state was named by analogy with its closest large-scale analog, the soliton, first documented by J.S. Russell in 1834. The results reported by Bordbar and Zamankhan (2005b) also show that a slightly revised fluctuation-dissipation theorem might apply to shaken sand, which appears to be a system far from equilibrium that can exhibit strong spatial and temporal variations in quantities such as density and local particle velocity. In this light, hydrodynamic-type continuum equations were presented for describing the deformation and flow of dense gas-particle mixtures. The constitutive equation used for the stress tensor provides an effective viscosity with a liquid-like character at low shear rates and a gas-like behavior at high shear rates. Numerical solutions of these hydrodynamic equations were obtained for predicting the flow dynamics of dense mixtures of gas and particles in vertical cylindrical containers. For a heptagonal prism-shaped container under vertical vibrations, the model was found to predict bubbling behavior analogous to that observed experimentally. This bubbling behavior may be explained by the unusual gas pressure distribution found in the bed. In addition, oscillon-type structures were found to be formed in a vertically vibrated, pentagonal prism-shaped container, in agreement with the computer simulation results. These observations suggest that the pressure distribution plays a key role in the deformation and flow of dense mixtures of gas and particles under vertical vibrations. The present models provide greater insight toward the explanation of poorly understood hydrodynamic phenomena in the field of granular flows and dense gas-particle mixtures. The models can be generalized to investigate granular material-container wall interactions, an issue of high interest in industrial applications. Following this approach, ideal processing conditions and powder transport can be designed in industrial systems.
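The shear-rate-dependent effective viscosity described above can be pictured with a simple interpolating law. The sketch below uses a Carreau-type form purely for illustration; it is an assumed stand-in, not the constitutive closure of the paper.

```python
# Effective viscosity that is liquid-like (high) at low shear rates and
# gas-like (low) at high shear rates; Carreau model, illustrative values.
def effective_viscosity(gamma_dot, mu_liquid=10.0, mu_gas=1e-3,
                        lam=1.0, n=0.3):
    """Carreau model: interpolates between mu_liquid and mu_gas."""
    return mu_gas + (mu_liquid - mu_gas) * (
        1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

for g in (0.01, 1.0, 100.0, 1e4):          # shear rates [1/s]
    print(f"gamma_dot={g:>8}: mu_eff={effective_viscosity(g):.4f} Pa*s")
```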
Abstract:
Cooperation and coordination are desirable behaviors that are fundamental for the harmonious development of society. People need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. However, cooperation may easily fall prey to exploitation by selfish individuals who only care about short-term gain. For cooperation to evolve, specific conditions and mechanisms are required, such as kinship, direct and indirect reciprocity through repeated interactions, or external interventions such as punishment. In this dissertation we investigate the effect of the network structure of the population on the evolution of cooperation and coordination. We consider several kinds of static and dynamical network topologies, such as Barabási-Albert, social network models, and spatial networks. We perform numerical simulations and laboratory experiments using the Prisoner's Dilemma and coordination games in order to contrast human behavior with theoretical results. We show by numerical simulations that even a moderate amount of random noise on the links of a Barabási-Albert scale-free network causes a significant loss of cooperation, to the point that cooperation almost vanishes altogether in the Prisoner's Dilemma when the noise rate is high enough. Moreover, when we consider fixed social-like networks, we find that current models of social networks may allow cooperation to emerge and to be at least as robust as in scale-free networks. In the framework of spatial networks, we investigate whether cooperation can evolve and be stable when agents move randomly or perform Lévy flights in a continuous space. We also consider discrete space, adopting purposeful mobility and a binary birth-death process to discover emergent cooperative patterns. The fundamental result is that cooperation may be enhanced when migration is opportunistic or even when agents follow very simple heuristics. In the experimental laboratory, we investigate the issue of social coordination between individuals located on networks of contacts. In contrast to simulations, we find that human players' dynamics do not converge to the efficient outcome more often in a social-like network than in a random network. In another experiment, we study the behavior of people who play a pure coordination game in a spatial environment in which they can move around and changing convention is costly. We find that each convention forms homogeneous clusters and is adopted by approximately half of the individuals. When we provide them with global information, i.e., the number of subjects currently adopting one of the conventions, global consensus is reached in most, but not all, cases. Our results allow us to extract the heuristics used by the participants and to build a numerical simulation model that agrees very well with the experiments. Our findings have important implications for policymakers intending to promote specific, desired behaviors in a mobile population. Furthermore, we carry out an experiment with human subjects playing the Prisoner's Dilemma game in a diluted grid where people are able to move around. In contrast to previous results on purposeful rewiring in relational networks, we find no noticeable effect of mobility in space on the level of cooperation. Clusters of cooperators form momentarily but dissolve within a few rounds, as cooperators at the boundaries stop tolerating being cheated upon. Our results highlight the difficulties that mobile agents have in establishing a cooperative environment in a spatial setting without a device such as reputation or the possibility of retaliation, i.e., punishment. Finally, we test experimentally the evolution of cooperation in social networks, taking into account a setting in which we allow people to make or break links at will. In this work we pay particular attention to whether information on an individual's actions is freely available to potential partners or not. Studying the role of information is relevant because information on other people's actions is often not available for free: a recruiting firm may need to call a job candidate's references, a bank may need to find out about the credit history of a new client, etc. We find that people cooperate almost fully when information on their actions is freely available to their potential partners. Cooperation is less likely, however, if people have to pay about half of what they gain from cooperating with a cooperator. Cooperation declines even further if people have to pay a cost that is almost equivalent to the gain from cooperating with a cooperator. Thus, costly information on potential neighbors' actions can undermine the incentive to cooperate in dynamical networks.
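A minimal sketch of the kind of simulation reported above: the Prisoner's Dilemma on a Barabási-Albert graph with imitate-the-best-neighbor updating. The payoff values and update rule are common textbook defaults, not necessarily those used in the dissertation.

```python
# Prisoner's Dilemma on a scale-free network, synchronous imitation dynamics.
import numpy as np
import networkx as nx

T, R, P, S = 1.8, 1.0, 0.0, 0.0           # temptation, reward, punishment, sucker
payoff = {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}  # 1 = cooperate

rng = np.random.default_rng(2)
G = nx.barabasi_albert_graph(1000, 4, seed=2)
strategy = {v: rng.integers(0, 2) for v in G}

for _ in range(50):                        # synchronous rounds
    score = {v: sum(payoff[(strategy[v], strategy[u])] for u in G[v]) for v in G}
    new = {}
    for v in G:                            # copy the highest-scoring neighbor
        best = max(list(G[v]) + [v], key=lambda u: score[u])
        new[v] = strategy[best]
    strategy = new

print("final cooperation level:", np.mean(list(strategy.values())))
```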
Abstract:
Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals that use information about the payoffs associated with non-tried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which learning rule is evolutionarily stable under pairwise symmetric two-action stochastic repeated games played over an individual's lifetime. Using stochastic approximation theory and simulations, we analyse the learning dynamics on the behavioural timescale and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms in which trial-and-error learners are maintained at a low frequency can also arise. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
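The contrast between the two learning rules can be made concrete with a toy repeated Prisoner's Dilemma: a trial-and-error learner reinforces only the realized payoff of the action it played, while a hypothetical-reinforcement learner also updates the payoff its unplayed action would have earned. The sketch below is illustrative only; the payoff matrix, softmax choice rule, and learning rate are assumptions, not the model's exact specification.

```python
# Two action-value learners facing each other in a repeated 2-action game.
import numpy as np

PD = np.array([[3.0, 0.0],    # row player's payoffs: [C vs C, C vs D]
               [5.0, 1.0]])   #                       [D vs C, D vs D]

def softmax_choice(q, rng, beta=2.0):
    p = np.exp(beta * q); p /= p.sum()
    return rng.choice(2, p=p)

rng = np.random.default_rng(3)
q_trial = np.zeros(2)          # trial-and-error learner's action values
q_hypo = np.zeros(2)           # hypothetical-reinforcement learner's values
alpha = 0.1
for _ in range(5000):
    a = softmax_choice(q_trial, rng)
    b = softmax_choice(q_hypo, rng)
    q_trial[a] += alpha * (PD[a, b] - q_trial[a])   # realized payoff only
    q_hypo += alpha * (PD[:, a] - q_hypo)           # both actions' payoffs
print("trial-and-error values:", q_trial.round(2))
print("hypothetical values:   ", q_hypo.round(2))
```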
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
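The post-processing acceptance step can be summarized in a few lines: run the forward model on each conditional realization and keep it only if the traveltime misfit is within the expected data error. In the sketch below the forward model is a stand-in, since the actual cross-hole radar simulation is not reproduced here.

```python
# Accept only realizations whose simulated traveltimes fit within error.
import numpy as np

def accept_realizations(realizations, forward, t_obs, sigma):
    """Return realizations whose RMS traveltime misfit is within sigma."""
    kept = []
    for m in realizations:
        t_sim = forward(m)
        rms = np.sqrt(np.mean((t_sim - t_obs) ** 2))
        if rms <= sigma:
            kept.append(m)
    return kept

rng = np.random.default_rng(4)
t_obs = rng.normal(50.0, 5.0, size=200)                      # observed times [ns]
fake_forward = lambda m: t_obs + rng.normal(0, m, size=200)  # stand-in physics
models = [0.5, 1.0, 3.0, 8.0]                                # toy "realizations"
print(len(accept_realizations(models, fake_forward, t_obs, sigma=2.0)))
```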
Abstract:
We present computer simulations of a simple bead-spring model for polymer melts with intramolecular barriers. By systematically tuning the strength of the barriers, we investigate their role in the glass transition. Dynamic observables are analyzed within the framework of the mode coupling theory (MCT). Critical nonergodicity parameters, critical temperatures, and dynamic exponents are obtained from consistent fits of simulation data to MCT asymptotic laws. The so-obtained MCT λ-exponent increases from standard values for fully flexible chains to values close to the upper limit for stiff chains. In analogy with systems exhibiting higher-order MCT transitions, we suggest that the observed large λ-values arise from the interplay between two distinct mechanisms for dynamic arrest: general packing effects and polymer-specific intramolecular barriers. We compare simulation results with numerical solutions of the MCT equations for polymer systems, within the polymer reference interaction site model (PRISM) for static correlations. We verify that the approximations introduced by the PRISM are fulfilled by the simulations, with the same quality over the whole range of investigated barrier strengths. The numerical solutions reproduce the qualitative trends of the simulations for the dependence of the nonergodicity parameters and critical temperatures on the barrier strength. In particular, increasing the barrier strength at fixed density increases the localization length and the critical temperature. However, the qualitative agreement between theory and simulation breaks down in the limit of stiff chains. We discuss the possible origin of this feature.
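As a small worked example of fitting MCT asymptotic laws, the sketch below fits the standard power law for the alpha-relaxation time, tau(T) ∝ (T − Tc)^(−gamma), to synthetic data; the numbers stand in for the simulation observables and are not the paper's results.

```python
# Extracting the MCT critical temperature from a power-law fit.
import numpy as np
from scipy.optimize import curve_fit

def mct_tau(T, tau0, Tc, gamma):
    return tau0 * (T - Tc) ** (-gamma)

Tc_true, gamma_true = 0.45, 2.3                  # assumed "true" values
T = np.linspace(0.50, 0.80, 12)
tau = mct_tau(T, 1.0, Tc_true, gamma_true) * np.exp(
    0.03 * np.random.default_rng(5).standard_normal(T.size))

# Bounds keep Tc below min(T) so (T - Tc) stays positive during the fit.
popt, _ = curve_fit(mct_tau, T, tau, p0=(1.0, 0.40, 2.0),
                    bounds=([0.01, 0.0, 0.5], [100.0, 0.499, 5.0]))
print("fitted Tc=%.3f gamma=%.2f" % (popt[1], popt[2]))
```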
Abstract:
Magical ideation and belief in the paranormal are often considered trait-like characteristics; people either believe or they do not. Yet anecdotes indicate that exposure to an anomalous event can turn skeptics into believers. This transformation is likely to be accompanied by altered cognitive functioning, such as impaired judgments of event likelihood. Here, we investigated whether exposure to an anomalous event changes individuals' explicit traditional (religious) and non-traditional (e.g., paranormal) beliefs, as well as cognitive biases that have previously been associated with non-traditional beliefs, e.g., repetition avoidance when producing random numbers in a mental dice task. In a classroom, 91 students saw a magic demonstration after their psychology lecture. Before the demonstration, half of the students were told that the performance would be given by a conjuror (magician group), and the other half that it would be given by a psychic (psychic group). The instruction influenced participants' explanations of the anomalous event: participants in the magician group, as compared to the psychic group, were more likely to explain the event through conjuring abilities, while the reverse was true for psychic abilities. Moreover, these explanations correlated positively with prior traditional and non-traditional beliefs. Finally, we observed that the psychic group showed more repetition avoidance than the magician group, and this effect remained the same regardless of whether it was assessed before or after the magic demonstration. We conclude that pre-existing beliefs and contextual suggestions both influence people's interpretations of anomalous events and the associated cognitive biases. Beliefs and associated cognitive biases likely remain flexible well into adulthood and change with actual life events.
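Repetition avoidance in a mental dice task is typically quantified as the proportion of immediate repetitions, which for a fair die should be about 1/6 ≈ 0.167. The sketch below computes this measure on simulated sequences; the "avoider" behavior is a hypothetical generating process, not the participants' data.

```python
# Proportion of immediate repetitions in a produced die-roll sequence.
import numpy as np

def repetition_rate(seq):
    seq = np.asarray(seq)
    return np.mean(seq[1:] == seq[:-1])

rng = np.random.default_rng(6)
random_rolls = rng.integers(1, 7, size=300)               # truly random baseline
print("random baseline:", repetition_rate(random_rolls))  # ~1/6 = 0.167

# A repetition-avoiding "participant" re-rolls immediate repeats 80% of the time:
seq = [rng.integers(1, 7)]
for _ in range(299):
    r = rng.integers(1, 7)
    while r == seq[-1] and rng.random() < 0.8:
        r = rng.integers(1, 7)
    seq.append(r)
print("avoider:", repetition_rate(seq))                   # well below chance
```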
Abstract:
Substantial collective flow is observed in collisions between lead nuclei at the Large Hadron Collider (LHC), as evidenced by the azimuthal correlations in the transverse momentum distributions of the produced particles. Our calculations indicate that the global v1 flow, which at RHIC peaked at negative rapidities (the so-called third flow component or antiflow), is going to turn toward forward rapidities at the LHC (to the same side and direction as the projectile residue). Potentially this can provide a sensitive barometer for estimating the pressure and transport properties of the quark-gluon plasma. Our calculations also take into account the initial-state center-of-mass rapidity fluctuations, and demonstrate that these are crucial for v1 simulations. In order to better study the transverse momentum dependence of the flow, we suggest a new "symmetrized" v1S(pt) function, and we also propose a new method to disentangle the global v1 flow from the contribution generated by random fluctuations in the initial state. This will enhance the possibilities of studying the collective global v1 flow both in the STAR Beam Energy Scan program and at the LHC.
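Directed flow is the first Fourier coefficient of the azimuthal distribution, v1 = &lt;cos φ&gt;. The sketch below estimates v1 in rapidity bins from toy data and extracts the rapidity-odd (global) component; this symmetrization is an assumed construction for illustration, since the paper's exact v1S(pt) definition is not given in the abstract.

```python
# Directed flow v1(y) from toy particle azimuths, plus its rapidity-odd part.
import numpy as np

rng = np.random.default_rng(7)

def sample_phi(v1):
    """Accept-reject draw from dN/dphi proportional to 1 + 2*v1*cos(phi)."""
    while True:
        phi = rng.uniform(-np.pi, np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * abs(v1)) < 1.0 + 2.0 * v1 * np.cos(phi):
            return phi

y = rng.uniform(-2.0, 2.0, 20000)
phi = np.array([sample_phi(0.05 * yi) for yi in y])   # toy odd profile v1(y) = 0.05*y

edges = np.linspace(-2.0, 2.0, 9)                     # 8 rapidity bins
idx = np.digitize(y, edges) - 1
v1 = np.array([np.cos(phi[idx == i]).mean() for i in range(8)])
v1_odd = 0.5 * (v1 - v1[::-1])      # antisymmetric (global) part in rapidity
print(np.round(v1_odd, 3))
```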
Abstract:
Past temperature variations are usually inferred from proxy data or estimated using general circulation models. Comparisons between climate estimates derived from proxy records and from model simulations help to better understand the mechanisms driving climate variations, and also offer the possibility of identifying deficiencies in both approaches. This paper presents regional temperature reconstructions based on tree-ring maximum density series in the Pyrenees, and compares them with the output of global simulations for this region and with regional climate model simulations conducted for the target region. An ensemble of 24 reconstructions of May-to-September regional mean temperature was derived from 22 maximum-density tree-ring site chronologies distributed over the larger Pyrenees area. Four different tree-ring series standardization procedures were applied, combining two detrending methods: a 300-yr spline and the regional curve standardization (RCS). Additionally, different methodological variants of the regional chronology were generated by using three different aggregation methods. Calibration-verification trials were performed in split periods using two methods: regression and simple variance matching. The resulting set of temperature reconstructions was compared with climate simulations performed with global (ECHO-G) and regional (MM5) climate models. The 24 variants of the May-to-September temperature reconstruction reveal a generally coherent pattern of inter-annual to multi-centennial temperature variations in the Pyrenees region for the last 750 yr. However, some reconstructions display a marked positive trend for the entire length of the reconstruction, indicating that the application of the RCS method to a suboptimal set of samples may lead to unreliable results. Climate model simulations agree with the tree-ring based reconstructions at multi-decadal time scales, suggesting solar variability and volcanism as the main factors controlling preindustrial mean temperature variations in the Pyrenees. Nevertheless, the comparison also highlights differences with the reconstructions, mainly in the amplitude of past temperature variations and in the 20th-century trends. Neither proxy-based reconstructions nor model simulations are able to perfectly track the temperature variations of the instrumental record, suggesting that both approaches still need further improvement.
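Of the two calibration methods mentioned, variance matching is the simpler: the proxy predictor is rescaled so that its mean and variance over the calibration period match the instrumental series. The sketch below shows that step on synthetic series; it is a generic illustration, not the paper's processing chain.

```python
# Variance-matching calibration of a proxy series against instrumental data.
import numpy as np

def variance_match(proxy, target, cal):
    """Rescale proxy so its mean/std over the calibration slice match target."""
    p, t = proxy[cal], target[cal]
    return (proxy - p.mean()) * (t.std(ddof=1) / p.std(ddof=1)) + t.mean()

rng = np.random.default_rng(8)
temp = rng.normal(12.0, 0.8, 150)                             # "instrumental" series
proxy = 0.5 * (temp - temp.mean()) + rng.normal(0, 0.3, 150)  # correlated proxy
cal = slice(100, 150)                                         # calibration period
recon = variance_match(proxy, temp, cal)
print(f"calibrated mean/std: {recon[cal].mean():.2f} {recon[cal].std(ddof=1):.2f}")
```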