948 results for Discrete Choice Experiment
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The timed-initiation paradigm developed by Ghez and colleagues (1997) has revealed two modes of motor planning: continuous and discrete. Continuous responding occurs when targets are separated by less than 60° of spatial angle, and discrete responding occurs when targets are separated by greater than 60°. Although these two modes are thought to reflect the operation of separable strategic planning systems, a new theory of movement preparation, the Dynamic Field Theory, suggests that two modes emerge flexibly from the same system. Experiment 1 replicated continuous and discrete performance using a task modified to allow for a critical test of the single system view. In Experiment 2, participants were allowed to correct their movements following movement initiation (the standard task does not allow corrections). Results showed continuous planning performance at large and small target separations. These results are consistent with the proposal that the two modes reflect the time-dependent “preshaping” of a single planning system.
Abstract:
The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, such as suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows. An important range of applications for the lattice Boltzmann method is formed by microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations which are able to complement the achievements of theory and experiment.
Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced symmetry model. In the case of a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
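The collide-and-stream cycle underlying the method can be sketched in a few lines. The following is a minimal, illustrative D2Q9 BGK implementation, not the fluctuating or tunable-slip formulations developed in the thesis: a body-force-driven channel flow between two no-slip walls realized by full-way bounce-back. All parameter values are arbitrary choices for the sketch.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and each direction's opposite
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny = 32, 17          # channel: periodic in x, solid walls at y = 0, ny-1
tau, g = 0.8, 1e-5       # BGK relaxation time; body force along x
f = np.tile(w[:, None, None], (1, nx, ny))   # fluid at rest, density 1

def equilibrium(rho, ux, uy):
    """Second-order truncated Maxwell-Boltzmann distribution on D2Q9."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

for step in range(400):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision; the body force enters as a Shan-Chen-style velocity shift
    f += -(f - equilibrium(rho, ux + tau*g, uy)) / tau
    # streaming: each population hops along its lattice link
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    # full-way bounce-back at the wall rows -> no-slip boundary condition
    f[:, :, 0], f[:, :, -1] = f[opp][:, :, 0], f[opp][:, :, -1]

ux = (f * c[:, 0, None, None]).sum(axis=0) / f.sum(axis=0)
profile = ux.mean(axis=0)    # x-averaged velocity across the channel
```

The collision conserves mass exactly and the profile develops toward the expected Poiseuille shape: fastest at the channel centre, slowest next to the bounce-back walls.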
Abstract:
The Standard Model of particle physics, which describes three of the four fundamental interactions, has so far agreed very well with the measurements of the experiments at CERN, Fermilab and other research facilities. However, not all questions of particle physics can be answered within this model. For example, the fourth fundamental force, gravity, cannot be incorporated into the Standard Model. Moreover, the Standard Model offers no candidate for dark matter, which according to cosmological measurements makes up about 25% of our universe. Supersymmetry, which introduces a symmetry between fermions and bosons, is regarded as one of the most promising solutions to these open questions. This model gives rise to so-called supersymmetric particles, each of which has a Standard Model particle as its partner. If supersymmetry is realized in nature, one possible model of this symmetry is the R-parity-conserving mSUGRA model. In this model, the lightest supersymmetric particle (LSP) is neutral and weakly interacting, so it cannot be detected directly in the detector; instead, it must be detected indirectly via the energy it carries away, the missing transverse energy (etmiss).
In 2010, the ATLAS experiment will begin the search for new physics at the pp collider LHC, with a centre-of-mass energy of sqrt(s) = 7-10 TeV and a luminosity of 10^32 cm^-2 s^-1. Because of the very high data rate resulting from the roughly 10^8 readout channels of the ATLAS detector at a bunch-crossing rate of 40 MHz, a trigger system is needed to reduce the amount of data to be stored. A compromise must be struck between the available trigger rate and a very high trigger efficiency for the interesting events, since only about one event in 10^8 is of interest for the search for new physics. To meet the requirements on the trigger system, the experiment uses a three-level system, in which by far the largest data reduction takes place at the first trigger level.
This thesis, on the one hand, makes a substantial contribution to the basic understanding of the properties of the missing transverse energy at the first trigger level. On the other hand, it presents methods with which the etmiss trigger efficiency can be determined from data, both for Standard Model processes and for possible mSUGRA scenarios. In optimizing the etmiss trigger thresholds for the first trigger level, the trigger rate at a luminosity of 10^33 cm^-2 s^-1 was fixed at 100 Hz. The trigger optimization required several simulations, into which original development work was incorporated. Using these simulations and the optimization algorithms developed here, it is shown that, despite the low trigger rate, combining the etmiss threshold with lepton or jet trigger thresholds increases the discovery potential (for a signal significance of at least 5 sigma) by up to 66% compared with the existing ATLAS trigger menu at the first trigger level.
Abstract:
This experiment provides students with motivation for the study of quantum mechanics. That microscopic matter exists in quantized states can be demonstrated with modern versions of historic experiments: atomic line spectra (1), resonance potentials, and blackbody radiation. The resonance potentials of mercury were discovered by Franck and Hertz in 1914 (2). Their experiment consisted of bombarding atoms with electrons and detecting the kinetic energy loss of the scattered electrons (3). Prior to the Franck-Hertz experiment, spectroscopic work by Balmer and Rydberg had revealed that atoms emitted radiation at discrete energies. The Franck-Hertz experiment showed directly that quantized energy levels in an atom are real, not just optical artifacts: an atom can be raised to excited states by inelastic collisions with electrons as well as lowered from excited states by emission of photons. The classic Franck-Hertz experiment is carried out with mercury (4-7). Here we present an experiment for the study of resonance potentials using neon.
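The link between resonance potentials and line spectra can be checked with the Planck relation E = hc/λ. A small sketch (constants rounded; the 253.7 nm mercury line and the ~4.9 V spacing of the Franck-Hertz current minima are standard textbook values, not data from this article):

```python
# h*c in eV*nm (rounded); dividing by a wavelength in nm gives eV directly
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

# The mercury 253.7 nm ultraviolet line corresponds to ~4.89 eV, matching
# the ~4.9 V spacing of the current drops in the mercury Franck-Hertz tube.
e_hg = photon_energy_ev(253.7)
```

The same arithmetic applied to neon's higher-lying excited states explains why its Franck-Hertz spacing is several times larger than mercury's.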
Abstract:
Simulation is an important resource for researchers in diverse fields. However, many researchers have found flaws in the methodology of published simulation studies and have described the state of the simulation community as being in a crisis of credibility. This work describes the project of the Simulation Automation Framework for Experiments (SAFE), which addresses the issues that undermine credibility by automating the workflow in the execution of simulation studies. Automation reduces the number of opportunities for users to introduce error in the scientific process, thereby improving the credibility of the final results. Automation also eases the job of simulation users and allows them to focus on the design of models and the analysis of results rather than on the complexities of the workflow.
Abstract:
The transition in Central and Eastern Europe since the late 1980s has provided a testing ground for classic propositions. This project looked at the impact of privatisation on private consumption, using the Czech experiment of voucher privatisation to test the permanent income hypothesis. This form of privatisation moved state assets to individuals and represented an unexpected windfall gain for participants in the scheme. Whether the windfall was consumed or saved offers a clear test of the permanent income hypothesis. Of a total population of 10 million, 6 million Czechs, i.e. virtually every household, participated in the scheme. In a January 1996 survey, 1263 individuals were interviewed, 75% of whom had taken part. The data obtained suggest that only a small quantity of transferred assets was cashed in and spent on consumption, providing support for the permanent income hypothesis. The fraction of the windfall consumed grows with age, as would be predicted from the lower life expectancy of older consumers. The most interesting deviation was for people aged 26 to 35, who apparently consumed more than they would if the windfall were annuitised. As these people are at the stage in their lives when they would otherwise be borrowing to cover consumption related to establishing a family, etc., this is however consistent with the permanent income hypothesis, which predicts that individuals who would otherwise borrow money would use the windfall to avoid doing so.
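The annuitisation benchmark invoked here can be made concrete. Under the permanent income hypothesis, a consumer who spreads a windfall W evenly (in present-value terms) over a remaining horizon of T years at interest rate r consumes W·r/(1 - (1+r)^-T) per year, so a shorter horizon, as for an older consumer, implies a larger annual fraction consumed. A minimal sketch; the function name and all parameter values are illustrative, not taken from the study:

```python
def annuitized_consumption(windfall, r, years):
    """Per-year consumption out of a windfall when it is spread evenly
    (in present-value terms) over the consumer's remaining horizon."""
    return windfall * r / (1 - (1 + r) ** -years)

# An older consumer (10-year horizon) consumes a larger annual slice of a
# 100-unit windfall than a younger one (40-year horizon) at r = 5%.
c_old = annuitized_consumption(100.0, 0.05, 10)    # ~12.95 per year
c_young = annuitized_consumption(100.0, 0.05, 40)  # ~5.83 per year
```

Both exceed the pure-interest flow r·W = 5, since the principal itself is also run down over the finite horizon.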
Abstract:
The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data consisting of a velocity component, offsets, and white and flicker noise (1/f spectrum noise) combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand-picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the 5th-95th percentile range of the velocity bias for automated approaches is 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is 1.8 mm/yr for the manual solutions (most commonly 0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automated solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2-0.4 mm/yr is therefore not robust, and a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before sub-mm/yr velocities for single GPS stations can be routinely interpreted.
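Synthetic series of the kind described can be generated along these lines. The sketch below is not the actual benchmark code of the experiment; it simply combines a linear velocity, Heaviside offsets, white noise, and a spectrally shaped flicker component, with all amplitudes and offset epochs invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def flicker_noise(n, rng):
    """Approximate 1/f (flicker) noise by shaping white noise in the
    frequency domain: amplitude ~ f^(-1/2), so power ~ 1/f.
    Normalized to unit standard deviation."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    spec = rng.standard_normal(freqs.size) + 1j*rng.standard_normal(freqs.size)
    spec[1:] /= np.sqrt(freqs[1:])   # shape the spectrum
    spec[0] = 0.0                    # drop the DC component
    x = np.fft.irfft(spec, n)
    return x / x.std()

n_days = 3650                        # ten years of daily positions
t = np.arange(n_days) / 365.25       # time in years
velocity = 3.0                       # secular rate in mm/yr (illustrative)
offsets = {1200: 8.0, 2500: -5.0}    # day index -> step size in mm

series = velocity * t
for day, size in offsets.items():
    series[day:] += size                         # Heaviside step at each epoch
series += 1.0 * rng.standard_normal(n_days)      # white noise, 1 mm
series += 0.5 * flicker_noise(n_days, rng)       # flicker component, 0.5 mm
```

Comparing short windows on either side of an offset epoch recovers the step size up to the noise, which is the task the blind-test participants faced over the whole series.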
Abstract:
The major aim of this study was to examine the influence of an embedded viscoelastic-plastic layer, at different viscosity values, on accretionary wedges at subduction zones. To quantify the effects of the layer viscosity, we analysed the wedge geometry, accretion mode, thrust systems and mass transport pattern. To this end, we developed a numerical 2D 'sandbox' model utilising the Discrete Element Method. Starting with a simple pure Mohr-Coulomb sequence, we added an embedded viscoelastic-plastic layer within the brittle, undeformed 'sediment' package. This layer followed a Burgers rheology, which simulates the creep behaviour of natural rocks such as evaporites, and was thrust and folded during the subduction process. Testing different bulk viscosity values, from 1 × 10^13 to 1 × 10^14 Pa s, revealed a certain range in which an active detachment evolved within the viscoelastic-plastic layer and decoupled the overlying and underlying brittle strata. This mid-level detachment caused the evolution of a frontally accreted wedge above it and a long underthrusted, subsequently basally accreted sequence beneath it. Both sequences were characterised by specific mass transport patterns depending on the viscosity value used. With decreasing bulk viscosities, thrust systems above this weak mid-level detachment became increasingly symmetrical and particle uplift was reduced, as would be expected for a salt-controlled forearc in nature. Simultaneously, antiformal stacking was favoured over hinterland-dipping in the lower brittle layer, and overturning of the uplifted material increased. Hence, we showed that the viscosity of an embedded detachment strongly influences the mechanics of the whole wedge, both the lower-slope and the upper-slope duplex, as reflected in e.g. the mass transport pattern.
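The creep response of a Burgers body (a Maxwell and a Kelvin-Voigt element in series) under a constant stress σ0 has the closed form ε(t) = σ0[1/E_M + t/η_M + (1/E_K)(1 − exp(−E_K t/η_K))]. A small sketch of this standard formula, not of the study's DEM implementation; parameter values are illustrative, with viscosities echoing the 10^13-10^14 Pa s range tested above:

```python
import numpy as np

def burgers_creep_strain(t, sigma0, E_m, eta_m, E_k, eta_k):
    """Creep strain of a Burgers body (Maxwell + Kelvin-Voigt in series)
    under a constant stress sigma0 applied at t = 0."""
    maxwell = 1.0/E_m + t/eta_m                          # instantaneous + viscous
    kelvin = (1.0/E_k) * (1.0 - np.exp(-E_k*t/eta_k))    # delayed elasticity
    return sigma0 * (maxwell + kelvin)

# Illustrative values: 1 MPa stress, 10 GPa moduli, 1e14 / 1e13 Pa s viscosities
s0, Em, etam, Ek, etak = 1e6, 1e10, 1e14, 1e10, 1e13
e_start = burgers_creep_strain(0.0, s0, Em, etam, Ek, etak)  # elastic step
e_later = burgers_creep_strain(1e5, s0, Em, etam, Ek, etak)  # after creep
```

At t = 0 only the elastic step σ0/E_M remains; the strain then grows monotonically, with the long-term rate set by the Maxwell viscosity η_M, the quantity varied in the study.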
Abstract:
In 2000, the Ramadan school vacation coincided with the original annual exam period of December in Bangladesh. This forced schools to move their final exams forward to November, the month before the harvest begins. 'Ramadan 2000' is thus a natural experiment that reduced the labor demand for children during the exam period. Using household-level panel data from 2000 and 2003, and after controlling for various unobservable variations including individual fixed effects, aggregate year effects, and subdistrict-level year effects, this paper finds evidence of a statistically significant impact of seasonal labor demand on school dropout in Bangladesh among children from agricultural households.
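The identification strategy described (individual fixed effects plus aggregate year effects in a two-wave panel) can be illustrated with synthetic data: first-differencing across the two waves removes the individual effect, and the year effect becomes the constant of the differenced regression. All numbers below are invented for illustration and have nothing to do with the paper's estimates:

```python
import numpy as np

# Two-period panel: dropout_it = beta*labor_demand_it + a_i + g_t + e_it.
# Differencing wave 1 minus wave 0 eliminates the individual effect a_i;
# the year effect g_1 - g_0 survives as the intercept.
rng = np.random.default_rng(1)
n = 500
a_i = rng.standard_normal(n)          # unobserved household heterogeneity
beta = 0.3                            # "true" labor-demand effect (made up)
x0, x1 = rng.standard_normal(n), rng.standard_normal(n)
y0 = beta*x0 + a_i + 0.1*rng.standard_normal(n)
y1 = beta*x1 + a_i + 0.5 + 0.1*rng.standard_normal(n)   # 0.5 = year effect

dy, dx = y1 - y0, x1 - x0
X = np.column_stack([np.ones(n), dx])
alpha_hat, beta_hat = np.linalg.lstsq(X, dy, rcond=None)[0]
```

The slope recovers beta even though a_i is never observed, which is the logic behind the fixed-effects controls in the abstract; the paper's richer subdistrict-by-year effects work the same way with more demeaning.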
Abstract:
The formulation of thermodynamically consistent (TC) time integration methods was introduced via a general procedure based on the GENERIC form of the evolution equations for thermo-mechanical problems. The entropy was reported to be the best choice of thermodynamical variable for easily deriving TC integrators, and the use of the internal energy was shown to involve no excessive complications. However, attempts to use the temperature in the design of GENERIC-based TC schemes have so far been unfruitful. This paper complements that procedure by presenting a TC scheme based on the temperature as the thermodynamical state variable. As a result, the problems which arise from the use of the entropy, chiefly the definition of boundary conditions, are overcome. Moreover, the newly proposed method exhibits the enhanced numerical stability and robustness properties of the entropy formulation.
Abstract:
The relative contribution of genetic and socio-cultural factors in the shaping of behavior is of fundamental importance to biologists and social scientists, yet it has proven to be extremely difficult to study in a controlled, experimental fashion. Here I describe experiments that examined the strength of genetic and cultural (imitative) factors in determining female mate choice in the guppy, Poecilia reticulata. Female guppies from the Paria River in Trinidad have a genetic, heritable preference for the amount of orange body color possessed by males. Female guppies will, however, also copy (imitate) the mate choice of other females in that when two males are matched for orange color, an "observer" female will copy the mate choice of another ("model") female. Three treatments were undertaken in which males differed by an average of 12%, 24%, or 40% of the total orange body color. In all cases, observer females viewed a model female prefer the less colorful male. When males differed by 12% or 24%, observer females preferred the less colorful male and thus copied the mate choice of others, despite a strong heritable preference for orange body color in males. When males differed by 40% orange body color, however, observer females preferred the more colorful male and did not copy the mate choice of the other female. In this system, then, imitation can "override" genetic preferences when the difference between orange body color in males is small or moderate, but genetic factors block out imitation effects when the difference in orange body color in males is large. This experiment provides the first attempt to experimentally examine the relative strength of cultural and genetic preferences for a particular trait and suggests that these two factors moderate one another in shaping social behavior.
Abstract:
The responses of larger (>50 µm in diameter) protozooplankton groups to a phytoplankton bloom induced by in situ iron fertilization (EisenEx) in the Polar Frontal Zone (PFZ) of the Southern Ocean in austral spring are presented. During the 21 days of the experiment, samples were collected from seven discrete depths in the upper 150 m inside and outside the fertilized patch for the enumeration of acantharia, foraminifera, radiolaria, heliozoa, tintinnid ciliates and aplastidic thecate dinoflagellates. Inside the patch, acantharian numbers increased twofold, but only negligibly in surrounding waters. This finding is of major interest, since acantharia are suggested to be involved in the formation of barite (BaSO4) found in sediments, which is a palaeoindicator of both ancient and modern high-productivity regimes. Foraminifera increased significantly in abundance inside and outside the fertilized patch. However, the marked increase of juveniles after a full moon event suggests a lunar periodicity in the reproduction cycle of some foraminiferan species rather than a reproductive response to enhanced food availability. In contrast, adult radiolaria showed no clear trend during the experiment, but juveniles increased threefold, indicating elevated reproduction. Aplastidic thecate dinoflagellates almost doubled in numbers and biomass, but also increased outside the patch. Tintinnid numbers decreased twofold, although biomass remained constant due to a shift in the size spectrum. Empty tintinnid loricae, however, increased by a factor of two, indicating that grazing pressure on this group, mainly by copepods, intensified during EisenEx. The results show that iron-fertilization experiments can shed light on the biology and the role of these larger protists in pelagic ecosystems, which will improve their use as proxies in palaeoceanography.
Abstract:
The speciation of strongly chelated iron during the 22-day course of an iron enrichment experiment in the Atlantic sector of the Southern Ocean deviates strongly from ambient natural waters. Three iron additions (ferrous sulfate solution) were conducted, resulting in elevated dissolved iron concentrations (Nishioka, J., Takeda, S., de Baar, H.J.W., Croot, P.L., Boye, M., Laan, P., Timmermans, K.R., 2005, Changes in the concentration of iron in different size fractions during an iron enrichment experiment in the open Southern Ocean. Marine Chemistry, doi:10.1016/j.marchem.2004.06.040) and significant Fe(II) levels (Croot, P.L., Laan, P., Nishioka, J., Strass, V., Cisewski, B., Boye, M., Timmermans, K.R., Bellerby, R.G., Goldson, L., Nightingale, P., de Baar, H.J.W., 2005, Spatial and temporal distribution of Fe(II) and H2O2 during EisenEx, an open ocean mesoscale iron enrichment. Marine Chemistry, doi:10.1016/j.marchem.2004.06.041). Repeated vertical profiles for dissolved (filtrate < 0.2 µm) Fe(III)-binding ligands indicated a production of chelators in the upper water column induced by the iron fertilizations. Abiotic processes (chemical reactions) and an inductive biologically mediated mechanism were the likely sources of the dissolved ligands, which existed either as inorganic amorphous phases and/or as strong organic chelators. Discrete analysis on ultra-filtered samples (< 200 kDa) suggested that the produced ligands would be principally colloidal in size (> 200 kDa-< 0.2 µm), as opposed to the soluble fraction (< 200 kDa) which dominated prior to the iron infusions. Yet these colloidal ligands would be more transient than the soluble ligands, which may have a longer residence time. The production of dissolved Fe-chelators was generally smaller than the overall increase in dissolved iron in the surface infused mixed layer, leaving a fraction (about 13-40%) of dissolved Fe not bound by these dissolved Fe-chelators.
It is suggested that this fraction would be inorganic colloids. The unexpected persistence of such high inorganic colloid concentrations above inorganic Fe-solubility limits illustrates the peculiar features of the chemical iron cycling in these waters. Obviously, the artificial, roughly hundred-fold increase of overall Fe levels by addition of dissolved inorganic Fe(II) ions yields a major disruption of the natural physical-chemical abundances and reactivity of Fe in seawater. Hence the ensuing responses of the plankton ecosystem, while in themselves significant, are not necessarily representative of a natural enrichment, for example by dry or wet deposition of aeolian dust. Ultimately, the temporal changes of the Fe(III)-binding ligand and iron concentrations were dominated by the mixing events that occurred during EisenEx, with storms leading to more than an order of magnitude dilution of the dissolved ligand and iron concentrations. This had the strongest impact on the colloidal size class (> 200 kDa-< 0.2 µm), where a dramatic decrease of both the colloidal ligand and the colloidal iron levels (Nishioka, J., Takeda, S., de Baar, H.J.W., Croot, P.L., Boye, M., Laan, P., Timmermans, K.R., 2005, Changes in the concentration of iron in different size fractions during an iron enrichment experiment in the open Southern Ocean. Marine Chemistry, doi:10.1016/j.marchem.2004.06.040) was observed.
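The partitioning between ligand-bound and unbound dissolved Fe described above follows from a simple one-ligand-class equilibrium model: with mass balances Fe_T = [Fe'] + [FeL] and L_T = [L'] + [FeL], and a conditional stability constant K' = [FeL]/([Fe'][L']), the complex concentration [FeL] solves a quadratic. The sketch below uses invented, order-of-magnitude concentrations and an assumed K', not values from this study; an excess of total Fe over ligand leaves an unbound fraction, qualitatively like the 13-40% reported above:

```python
import math

def organically_bound_fe(fe_total, l_total, k_cond):
    """[FeL] at equilibrium for a single ligand class, given total Fe and
    ligand (mol/L) and the conditional stability constant
    k_cond = [FeL] / ([Fe'][L'])  in (mol/L)^-1."""
    # Substituting the mass balances into the equilibrium gives:
    # k*FeL^2 - (k*FeT + k*LT + 1)*FeL + k*FeT*LT = 0
    a = k_cond
    b = -(k_cond*fe_total + k_cond*l_total + 1.0)
    c = k_cond*fe_total*l_total
    # smaller root is the physical one (FeL cannot exceed FeT or LT)
    return (-b - math.sqrt(b*b - 4.0*a*c)) / (2.0*a)

# Illustrative post-infusion conditions: 5 nM total Fe, 3 nM strong ligand,
# K' = 1e12 (mol/L)^-1 -> nearly all ligand titrated, ~40% of Fe unbound.
fel = organically_bound_fe(5e-9, 3e-9, 1e12)
unbound_fraction = (5e-9 - fel) / 5e-9
```

With ligand in excess instead, the same function leaves essentially all Fe complexed, which is the ambient-water situation the abstract contrasts against.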