990 results for Nearly Neutral Theory
Abstract:
In the framework of gauged flavour symmetries, new fermions in parity-symmetric representations of the standard model are generically needed to cancel mixed anomalies. The key point is that their masses are also protected by flavour symmetries, and some of them are expected to lie well below the flavour symmetry breaking scale(s), which must occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour-changing neutral currents and CP-violation experiments. We argue that some of these fermions would plausibly acquire masses within LHC reach. If they are taken to be heavy quarks and leptons in (bi-)fundamental representations of the standard model symmetries, their mixings with the light fermions are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to forbid such mixings exactly, by breaking the flavour symmetries down to an exact discrete symmetry, the so-called proton-hexality, originally suggested to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, these heavy particles are long-lived and well suited to current and future LHC searches for quasi-stable hadrons and leptons. In fact, the LHC experiments have already started to look for them.
Abstract:
This dissertation investigates two different aspects of the odd-intrinsic-parity sector of mesonic chiral perturbation theory (mesonic ChPT). First, the one-loop renormalization of the leading term, the so-called Wess-Zumino-Witten action, is carried out. To this end, the complete one-loop part of the theory is first extracted by means of the saddle-point method. Next, all singular one-loop structures are isolated within the heat-kernel technique. Finally, these divergent parts must be absorbed. This requires a most general anomalous Lagrangian of order O(p^6), which is developed systematically. If the chiral group SU(n)_L x SU(n)_R is extended to SU(n)_L x SU(n)_R x U(1)_V, additional monomials come into play. The renormalized coefficients of this Lagrangian, the low-energy constants (LECs), are initially free parameters of the theory that must be fixed individually. By considering a complementary vector-meson model, the amplitudes of suitable processes can be determined and, by comparison with the results of mesonic ChPT, a numerical estimate of some LECs can be made. In the second part, a consistent one-loop calculation is carried out for the anomalous process (virtual) photon + charged kaon -> charged kaon + neutral pion. To check our results, an existing calculation of the reaction (virtual) photon + charged pion -> charged pion + neutral pion is reproduced. Including the estimated values of the relevant LECs, the associated hadronic structure functions can be determined numerically and discussed.
Abstract:
International migration has increased rapidly in the Czech Republic, with more than 150,000 legally registered foreign residents at the end of 1996. A large proportion of these are in Prague: 35% of the total in December 1996. The aim of this project was to enrich the fund of information concerning the "environment", reasons and "mechanisms" behind immigration to the Czech Republic. Mr. Drbohlav looked first at the empirical situation and on this basis set out to test certain well-known migration theories. He focused on four main areas: 1) a detailed description and explanation of the stock of foreign citizens legally settled in Czech territory, concentrating particularly on "economic" migrants; 2) a questionnaire survey targeting a total of 192 Ukrainian workers (98 in the fall of 1995 and 94 in the fall of 1996) working in Prague or its vicinity; 3) a second questionnaire survey of 40 "western" firms (20 in 1996 and 20 in 1997) operating out of Prague; 4) an opinion poll on how the Czech population reacts to foreign workers in the CR. Over 80% of economic immigrants at the end of 1996 were from European countries, 16% from Asia and under 2% from North America. The largest single nationalities were Ukrainians, Slovaks, Vietnamese and Poles. There has been a huge increase in the Ukrainian immigrant community over both space (by region) and time (a ten-fold increase since 1993), and at 40,000 persons this represents one third of all legal immigrants. Indications are that many more live and work there illegally. Young males with low educational/skill levels predominate, in contrast with the more heterogeneous immigration from the "West". The primary reason for this migration is the higher wages in the Czech Republic. In 1994 the figures for GDP adjusted for purchasing power parity were US$ 8,095 for the Czech Republic versus US$ 3,330 for Ukraine as a whole and US$ 1,600 for the Zakarpatye region, from which 49% of the respondents in the survey came.
On an individual level, the average Czech wage is about US$ 330 per month, while 50% of the Ukrainian respondents put their last monthly wage before leaving for the Czech Republic at under US$ 27. The very low level of unemployment in the Czech Republic (fluctuating around 4%) was also mentioned as an important factor. Migration was seen as a way of diversifying the family's sources of income: 49% of the respondents had made their plans together with partners or close relatives, while 45% regularly send remittances to Ukraine (94% do so through friends or relatives). Looking at Ukrainian migration from the point of view of dual labour market theory, these migrants' type and conditions of work, workload and earnings were all significantly worse than in the primary sector, which employs well-educated people and offers them good earnings, job security and benefits. 53% of respondents were working and/or staying in the Czech Republic illegally at the time of the research, 73% worked as unqualified, unskilled or auxiliary workers, 62% worked more than 12 hours a day, and 40% evaluated their working conditions as hard. 51% had no days off, earnings were low in relation to the number of hours worked, and 85% said that their earnings did not increase over time. Nearly half the workers were recruited in Ukraine and only 4% expressed a desire to stay in the Czech Republic. Network theories were also borne out to some extent, as 33% of immigrants came together with friends from the same village, town or region in Ukraine. The number who have relatives working in the Czech Republic is rising, and many wish to invite relatives or children to visit them. The presence of organisations which arrange cross-border migration, including some which resort to procuring illegal documents, also lends some support to the institutional theory.
Mr. Drbohlav found that all the migration theories considered offered some insights into the situation, but that none was sufficient to explain it all. He also points out parallels with many other regions of the world, including Central America, South and North America, Melanesia, Indonesia, East Africa, India, the Middle East and Russia. For the survey of foreign and international firms, those chosen were largely from countries represented by more than one company and were mainly active in market services such as financial and trade services, marketing and consulting. While 48% of the firms had more than 10,000 employees spread through many countries, more than two thirds had fewer than 50 employees in the Czech Republic. Czechs formed more than 80% of general staff in these firms although not more than 50% of senior management, and very few other "easterners" were employed. All companies absolutely denied employing people illegally. The average monthly wage of Czech staff was US$ 850, compared with US$ 6,350 for top managers from the firm's "mother country" and US$ 3,410 for other western managers. The foreign staff were generally highly mobile and were rarely accompanied by their families. Most saw their time in the Czech Republic as positive for their careers, but very few had any intention of remaining there. Factors in the local situation which were evaluated positively included market opportunities, the economic and political environment, the quality of technical and managerial staff, and cheap labour and low production costs. In contrast, the level of business ethics and conduct, the attitude of local and regional authorities, environmental production conditions, the legal environment, financial markets and fiscal policy were rated very low. In the final section of his work Mr. Drbohlav looked at the opinions expressed by the local Czech population in a poll carried out at the beginning of 1997.
This confirmed that international labour migration has become visible in the country, with 43% of respondents knowing at least one foreigner employed by a Czech firm. Perceptions differ according to the region from which the workers come, and those from "the West" are preferred to those coming from further east: 49% described their attitude towards the former as friendly but only 20% felt thus towards the latter. Overall, attitudes towards migrant workers are neutral, although 38% said that such workers should not have the same rights as Czech citizens. Sympathy towards foreign workers tends to increase with education and standard of living, and the relatively positive attitudes towards foreigners in the South Bohemia region contradicted the frequent belief that a lack of experience of international migration lowers positive perceptions of it.
Abstract:
We define an applicative theory of truth TPT which proves totality exactly for the polynomial time computable functions. TPT has natural and simple axioms since nearly all its truth axioms are standard for truth theories over an applicative framework. The only exception is the axiom dealing with the word predicate. The truth predicate can only reflect elementhood in the words for terms that have smaller length than a given word. This makes it possible to achieve the very low proof-theoretic strength. Truth induction can be allowed without any constraints. For these reasons the system TPT has the high expressive power one expects from truth theories. It allows embeddings of feasible systems of explicit mathematics and bounded arithmetic. The proof that the theory TPT is feasible is not easy. It is not possible to apply a standard realisation approach. For this reason we develop a new realisation approach whose realisation functions work on directed acyclic graphs. In this way, we can express and manipulate realisation information more efficiently.
Abstract:
Phylogenetic trees for groups of closely related species often have different topologies, depending on the genes used. One explanation for the discordant topologies is the persistence of polymorphisms through the speciation phase, followed by differential fixation of alleles in the resulting species. The existence of transspecies polymorphisms has been documented for alleles maintained by balancing selection but not for neutral alleles. In the present study, transspecific persistence of neutral polymorphisms was tested in the endemic haplochromine species flock of Lake Victoria cichlid fish. Putative noncoding region polymorphisms were identified at four randomly selected nuclear loci and tested on a collection of 12 Lake Victoria species and their putative riverine ancestors. At all loci, the same polymorphism was found to be present in nearly all the tested species, both lacustrine and riverine. Different polymorphisms at these loci were found in cichlids of other East African lakes (Malawi and Tanganyika). The Lake Victoria polymorphisms must have therefore arisen after the flocks now inhabiting the three great lakes diverged from one another, but before the riverine ancestors of the Lake Victoria flock colonized the Lake. Calculations based on the mtDNA clock suggest that the polymorphisms have persisted for about 1.4 million years. To maintain neutral polymorphisms for such a long time, the population size must have remained large throughout the entire period.
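The persistence estimate above can be checked against standard diffusion theory, in which the mean number of generations until a neutral allele at frequency p is fixed or lost is T(p) = -4·Ne·(p·ln p + (1-p)·ln(1-p)) (Kimura and Ohta). A minimal sketch follows; the two-year generation time is an illustrative assumption, not a figure taken from the study:

```python
import math

def mean_persistence_time(p, n_e):
    """Mean number of generations until a neutral allele at frequency p
    is fixed or lost, from diffusion theory:
    T(p) = -4*Ne*(p*ln(p) + (1-p)*ln(1-p))."""
    return -4.0 * n_e * (p * math.log(p) + (1 - p) * math.log(1 - p))

# How large must Ne be for a polymorphism at p = 0.5 to persist ~1.4 million
# years, assuming a (hypothetical) 2-year generation time for cichlids?
generations = 1.4e6 / 2.0
per_individual = mean_persistence_time(0.5, 1)  # T(0.5) per unit of Ne
n_e_required = generations / per_individual
print(round(n_e_required))  # effective population size of order 10^5
```

With these assumptions a polymorphism segregating for ~1.4 Myr requires an effective population size of a few hundred thousand, consistent with the abstract's conclusion that the population must have remained large throughout.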
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences in the sensitivity of measures to detect the genetic health of degraded populations were obvious. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation that can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate against loss of genetic diversity and assist in long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Future suggested research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and most importantly, combining neutral marker, pollination dynamics, ecological consequences, and progeny fitness assessment within single studies.
Abstract:
We introduce the Survey for Ionization in Neutral Gas Galaxies (SINGG), a census of star formation in H I-selected galaxies. The survey consists of Hα and R-band imaging of a sample of 468 galaxies selected from the H I Parkes All Sky Survey (HIPASS). The sample spans three decades in H I mass and is free of many of the biases that affect other star-forming galaxy samples. We present the criteria for sample selection, list the entire sample, discuss our observational techniques, and describe the data reduction and calibration methods. This paper focuses on 93 SINGG targets whose observations have been fully reduced and analyzed to date. The majority of these show a single emission-line galaxy (ELG). We see multiple ELGs in 13 fields, with up to four ELGs in a single field. All of the targets in this sample are detected in Hα, indicating that dormant (non-star-forming) galaxies with M_HI ≳ 3×10^7 M_⊙ are very rare. A database of the measured global properties of the ELGs is presented. The ELG sample spans 4 orders of magnitude in luminosity (Hα and R band) and Hα surface brightness, nearly 3 orders of magnitude in R surface brightness, and nearly 2 orders of magnitude in Hα equivalent width (EW). The surface brightness distribution of our sample is broader than that of the Sloan Digital Sky Survey (SDSS) spectroscopic sample, the EW distribution is broader than prism-selected samples, and the morphologies found include all common types of star-forming galaxies (e.g., irregular, spiral, blue compact dwarf, starbursts, merging and colliding systems, and even residual star formation in S0 and Sa spirals). Thus, SINGG presents a superior census of star formation in the local universe, suitable for further studies ranging from the analysis of H II regions to determination of the local cosmic star formation rate density.
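Hα imaging surveys of this kind translate measured line fluxes into star formation rates. A common conversion (not necessarily the one SINGG itself adopts) is the Kennicutt (1998) calibration, SFR [M_⊙/yr] = 7.9×10⁻⁴² L(Hα) [erg/s]. A short sketch with purely illustrative numbers:

```python
import math

KENNICUTT_COEFF = 7.9e-42  # (M_sun/yr) per (erg/s), Kennicutt (1998) Halpha calibration
CM_PER_MPC = 3.0857e24     # centimeters in one megaparsec

def halpha_luminosity(flux_cgs, distance_mpc):
    """Convert an observed Halpha flux (erg s^-1 cm^-2) at a given distance
    (Mpc) into a luminosity (erg s^-1): L = 4*pi*d^2 * F."""
    d_cm = distance_mpc * CM_PER_MPC
    return 4.0 * math.pi * d_cm**2 * flux_cgs

def star_formation_rate(flux_cgs, distance_mpc):
    """Star formation rate in M_sun/yr from the Kennicutt (1998) calibration."""
    return KENNICUTT_COEFF * halpha_luminosity(flux_cgs, distance_mpc)

# Illustrative input (not a SINGG measurement): F = 1e-13 erg/s/cm^2 at 10 Mpc
sfr = star_formation_rate(1e-13, 10.0)
print(f"{sfr:.4f} M_sun/yr")
```

The same relation, inverted, shows why Hα non-detections are so constraining: even modest star formation at local-universe distances yields fluxes well above typical narrowband imaging limits.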
Abstract:
A theory is discussed of single-component transport in nanopores, recently developed by Bhatia and coworkers. The theory considers the oscillatory motion of molecules between diffuse wall collisions, arising from the fluid-wall interaction, along with superimposed viscous flow due to fluid-fluid interaction. The theory is tested against molecular dynamics simulations for hydrogen, methane, and carbon tetrafluoride flow in cylindrical nanopores in silica. Although exact at low densities, the theory performs well even at high densities, with the density dependency of the transport coefficient arising from viscous effects. Such viscous effects are reduced at high densities because of the large increase in viscosity, which explains the maximum in the transport coefficient with increase in density. Further, it is seen that in narrow pore sizes of less than two molecular diameters, where a complete monolayer cannot form on the surface, the mutual interference of molecules on opposite sides of the cross section can reduce the transport coefficient, and lead to a maximum in the transport coefficient with increasing density. The theory is also tested for the case of partially diffuse reflection and shows the viscous contribution to be negligible when the reflection is nearly specular. (c) 2005 American Institute of Chemical Engineers AIChE J, 52: 29-38, 2006.
Abstract:
Like classic Signal Detection Theory (SDT), the more recent optimal Binary Signal Detection Theory (BSDT) and the Neural Network Assembly Memory Model (NNAMM) based on it can successfully reproduce Receiver Operating Characteristic (ROC) curves, although the BSDT/NNAMM parameters (intensity of cue and neuron threshold) and the classic SDT parameters (perception distance and response bias) are essentially different. In the present work, BSDT/NNAMM optimal likelihood and posterior probabilities are analyzed analytically and used to generate ROCs and modified (posterior) mROCs, as well as the optimal overall likelihood and posterior. It is shown that, for the description of basic discrimination experiments in psychophysics within the BSDT, a 'neural space' can be introduced in which sensory stimuli are represented as neural codes and decision processes are defined; that the BSDT's isobias curves can simultaneously be interpreted as universal psychometric functions satisfying the Neyman-Pearson objective; that the just noticeable difference (jnd) can be defined and interpreted as an atom of experience; and that near-neutral values of bias are observers' natural choice. The uniformity or no-priming hypothesis, concerning the 'in-mind' distribution of false-alarm probabilities during ROC or overall probability estimations, is introduced. The BSDT's and classic SDT's sensitivity, bias, and their ROC and decision spaces are compared.
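For reference, the ROC curves of the classic equal-variance Gaussian SDT mentioned above follow directly from the perception distance d' and the response bias (criterion) c: hit rate = Φ(d'/2 - c) and false-alarm rate = Φ(-d'/2 - c). A minimal sketch of that classic model (the BSDT's own parametrization is different and not shown here):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def roc_point(d_prime, criterion):
    """Hit and false-alarm rates in the equal-variance Gaussian SDT model."""
    hit = phi(d_prime / 2.0 - criterion)
    fa = phi(-d_prime / 2.0 - criterion)
    return hit, fa

# Trace an ROC for d' = 1 by sweeping the criterion from liberal to strict
criteria = [x * 0.5 for x in range(-6, 7)]
roc = [roc_point(1.0, c) for c in criteria]

# The neutral-bias point (c = 0) sits on the minor diagonal of the ROC square
unbiased_hit, unbiased_fa = roc_point(1.0, 0.0)
print(f"hit={unbiased_hit:.3f}, fa={unbiased_fa:.3f}")
```

Sweeping c traces the whole ROC for a fixed sensitivity; isobias curves are obtained by instead fixing c and varying d'.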
Abstract:
The author briefly sums up the main ideas and problems connected with the pricing of derivative products. The theory of derivative pricing exploits redundancy among the products on the market to determine relative product prices. But this can be done only in a complete market, and so only in a complete market can the concept of utility functions be omitted from the theory and the practice built upon it; for that reason, the principle of risk-neutral pricing is misleading. To put it another way, the theory of derivative products can free itself from the concept of utility functions only at the price of placing restrictions on the market structure that do not hold in reality. It is essential to emphasize this both in market practice and in teaching.
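The completeness argument can be made concrete with the standard one-period binomial model: with two states and two traded assets (stock and bond), every payoff is replicable, so its price is the replication cost, and the risk-neutral probability q emerges as a by-product with no utility function involved. A minimal sketch with purely illustrative numbers:

```python
def replicate_and_price(s0, u, d, r, payoff_up, payoff_down):
    """One-period binomial market: stock S0 moves to S0*u or S0*d, bond grows
    at riskless rate r. Returns the replication cost and, for comparison, the
    price from the risk-neutral probability q = (1+r-d)/(u-d)."""
    delta = (payoff_up - payoff_down) / (s0 * u - s0 * d)  # shares of stock held
    bond = (payoff_up - delta * s0 * u) / (1.0 + r)        # cash in the bond
    price = delta * s0 + bond                              # cost of the portfolio
    q = (1.0 + r - d) / (u - d)                            # risk-neutral probability
    rn_price = (q * payoff_up + (1 - q) * payoff_down) / (1.0 + r)
    return price, rn_price

# A call struck at 100 on a stock with S0 = 100, u = 1.2, d = 0.8, r = 5%
price, rn_price = replicate_and_price(100.0, 1.2, 0.8, 0.05, 20.0, 0.0)
print(round(price, 4), round(rn_price, 4))  # the two prices coincide
```

With more states than traded assets (an incomplete market) no unique q exists, which is exactly the abstract's point: discarding utility functions rests on restrictive assumptions about market structure.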
Abstract:
An ab initio/RRKM study of the reaction mechanisms and product branching ratios of the ethynyl (C2H) and cyano (CN) radicals with unsaturated hydrocarbons is performed. The reactions studied apply to cold conditions such as planetary atmospheres, including Titan's, the interstellar medium (ISM), icy bodies and molecular clouds. The reactions of C2H and CN additions to gaseous unsaturated hydrocarbons are an active area of study. NASA's Cassini/Huygens mission found a high concentration of C2H and CN, from photolysis of ethyne (C2H2) and hydrogen cyanide (HCN) respectively, in the organic haze layers of the atmosphere of Titan. The reactions involved in the atmospheric chemistry of Titan lead to a vast array of larger, more complex intermediates and products, and may also serve as a chemical model of Earth's primordial atmospheric conditions. The C2H and CN additions are rapid and exothermic, and often occur barrierlessly at various carbon sites of unsaturated hydrocarbons. The reaction mechanism is proposed on the basis of the resulting potential energy surface (PES), which includes all the possible intermediates and transition states that can occur, and all the products that lie on the surface. The B3LYP/6-311G(d,p) level of theory is employed to determine optimized electronic structures, moments of inertia, vibrational frequencies, and zero-point energies. These are followed by single-point higher-level CCSD(T)/cc-pVTZ calculations, including extrapolations to the complete basis set (CBS) limit, for the reactants and products. A microcanonical RRKM study predicts single-collision (zero-pressure limit) rate constants for all reaction paths on the potential energy surface, which are then used to compute the branching ratios of the resulting products. These theoretical calculations are conducted either jointly or in parallel with experimental work to elucidate the chemical composition of Titan's atmosphere, the ISM, and cold celestial bodies.
Abstract:
As the world population grows past seven billion and global challenges persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein the climate of urban areas is warmer than that of their surrounding rural environs. The UHI has become increasingly important within the study of the earth sciences, given the increased focus on climate change and as the majority of humans now live in urban areas.
Chapter 2 makes a novel contribution to the historical context of sustainability. Sustainability, as a concept characterizing the relationship between humans and nature, emerged in the mid-to-late 20th century as a response to the same findings used to characterize the Anthropocene. Emerging from the human-nature relationships that came before it, evidence is provided suggesting that sustainability was enabled by technology and a reorientation of world-view, and that it is unique in its global boundary, systematic approach and ambition for both well-being and the continued availability of resources and Earth-system function. Sustainability is, further, an ambition with wide appeal, making it one of the first normative concepts of the Anthropocene.
Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within the academy. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While this dichotomy is clear in the literature, survey data suggest that it is not as apparent in the minds of practitioners.
In Chapter 4, the UHI is first addressed at the synoptic mesoscale. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on Earth. Nearly half of those people live in small-to-medium-sized cities, an understudied scale in urban climate research. Widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and its secular trend over the last sixty years evaluated. Under isolated ideal conditions, the findings indicate a UHI of 5.3 ± 0.97 °C to be present in the study area, the magnitude of which is growing over time.
Although urban heat islands (UHI) are well studied, there remain no panaceas for local scale mitigation and adaptation methods, therefore continued attention to characterization of the phenomenon in urban centers of different scales around the globe is required. In Chapter 5, a local scale analysis of the canopy layer and surface UHI in a medium sized city in North Carolina, USA is conducted using multiple methods including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensity depending on the method of observation: 8.7 °C from the stationary urban sensors; 6.9 °C from mobile transects; and, 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive dynamics in the study region.
Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island (UHI) effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban form characterization methods. Methods evaluated include local climate zones (LCZ), national land cover database (NLCD) classes and urban cluster analysis (UCA), to determine their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors; 2) mobile transects; and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated with the UHI intensity and distribution in the study area. Further, while the UCA method is not useful directly, the variables included in the method are predictive based on regression analysis, so the potential for better model design exists. Land cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.
Chapter 7 provides a summary of findings and offers a brief analysis of their implications, both for the scientific discourse generally and for the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings to practice and to evaluate their adoption more fully. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations such as human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these pathways include more robust evaluations of theoretical and actual practice. Within the UHI context, they include development of an integrated urban form characterization model, application of the study methodology in other geographic areas and at different scales, and use of novel experimental methods including distributed sensor networks and citizen science.
Abstract:
The thesis presents experimental results, simulations, and theory on turbulence excited in magnetized plasmas near the ionosphere's upper hybrid layer. The results include: the first experimental observations of super-small striations (SSS) excited by the High-Frequency Auroral Research Project (HAARP); the first detection of high-frequency (HF) waves from the HAARP transmitter over a distance of 16x10^3 km; the first simulations indicating that upper hybrid (UH) turbulence excites electron Bernstein waves associated with all nearby gyroharmonics; and simulation results indicating that the resulting bulk electron heating near the upper hybrid (UH) resonance is caused primarily by electron Bernstein waves parametrically excited near the first gyroharmonic. On the experimental side, we present two sets of experiments performed at the HAARP heating facility in Alaska. In the first set of experiments, we present the first detection of super-small (cm-scale) striations (SSS) at the HAARP facility. We detected density structures smaller than 30 cm for the first time through a combination of satellite and ground-based measurements. In the second set of experiments, we present the results of a novel diagnostic implemented at the Ukrainian Antarctic Station (UAS) Vernadsky. The technique allowed the detection of the HAARP signal at a distance of nearly 16 Mm, and established that the HAARP signal was injected into the ionospheric waveguide by direct scattering off of dekameter-scale density structures induced by the heater. On the theoretical side, we present results of Vlasov simulations near the upper hybrid layer. These results are consistent with the bulk heating required by previous work on the theory of the formation of descending artificial ionospheric layers (DAILs), and with the new observations of DAILs at HAARP's upgraded effective radiated power (ERP).
The simulations include frequency sweeps, and demonstrate that the heating changes from bulk heating between gyroharmonics to tail acceleration as the pump frequency is swept through the fourth gyroharmonic. These simulations are in good agreement with experiments. We also incorporate test-particle simulations that isolate the effects of specific wave modes on heating, and we find important contributions from both electron Bernstein waves and upper hybrid waves, the former of which have not yet been detected by experiments and have not previously been explored as a driver of heating. In presenting these results, we analyzed data from HAARP diagnostics and assisted in planning the second round of experiments. We integrated the data into a picture of experiments that demonstrated the detection of SSS, hysteresis effects in stimulated electromagnetic emission (SEE) features, and the direct scattering of the HF pump into the ionospheric waveguide. We performed simulations and analyzed simulation data to build an understanding of collisionless heating near the upper hybrid layer, and we used these simulations to show that bulk electron heating at the upper hybrid layer is possible, as required by current theories of DAIL formation. We wrote a test-particle simulation to isolate the effects of electron Bernstein waves and upper hybrid waves on collisionless heating, and integrated this code to work with both the output of the Vlasov simulations and the input for simulations of DAIL formation.
Decoherence models for discrete-time quantum walks and their application to neutral atom experiments
Abstract:
We discuss decoherence in discrete-time quantum walks in terms of a phenomenological model that distinguishes spin and spatial decoherence. We identify the dominant mechanisms that affect quantum-walk experiments realized with neutral atoms walking in an optical lattice. From the measured spatial distributions, we determine with good precision the amount of decoherence per step, which provides a quantitative indication of the quality of our quantum walks. In particular, we find that spin decoherence is the main mechanism responsible for the loss of coherence in our experiment. We also find that the sole observation of ballistic (instead of diffusive) expansion in position space is not a good indicator of the range of coherent delocalization. We provide further physical insight by distinguishing the effects of short- and long-time spin dephasing mechanisms. We introduce the concept of coherence length in the discrete-time quantum walk, which quantifies the range of spatial coherences. Unexpectedly, we find that quasi-stationary dephasing does not modify the local properties of the quantum walk, but instead affects spatial coherences. For a visual representation of decoherence phenomena in phase space, we have developed a formalism based on a discrete analogue of the Wigner function. We show that the effects of spin and spatial decoherence differ dramatically in momentum space.
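The contrast between ballistic and diffusive expansion can be illustrated with a toy model (not the authors' phenomenological model): a coherent 1D Hadamard walk spreads with a position standard deviation growing linearly in the step number t, whereas in the limit of complete spin dephasing every step the walk reduces to a classical balanced random walk with σ = √t. A minimal sketch:

```python
import math

def hadamard_step(state):
    """One coherent step: Hadamard coin on the spin, then a spin-dependent shift.
    `state` maps (position, spin) to a complex amplitude; spin 0 moves left, 1 right."""
    h = 1.0 / math.sqrt(2.0)
    new = {}
    for (x, s), amp in state.items():
        # Hadamard coin: H|0> = (|0> + |1>)/sqrt(2), H|1> = (|0> - |1>)/sqrt(2)
        for s2, coef in ((0, h), (1, h if s == 0 else -h)):
            key = (x - 1 if s2 == 0 else x + 1, s2)
            new[key] = new.get(key, 0j) + coef * amp
    return new

def position_std(steps, dephased=False):
    """Position standard deviation after `steps`. With dephased=True we take the
    limit of complete spin decoherence per step, i.e. a classical balanced walk."""
    if dephased:
        return math.sqrt(steps)  # diffusive spreading: sigma = sqrt(t)
    # Symmetric initial coin state (|0> + i|1>)/sqrt(2) localized at the origin
    state = {(0, 0): 1 / math.sqrt(2) + 0j, (0, 1): 1j / math.sqrt(2)}
    for _ in range(steps):
        state = hadamard_step(state)
    probs = {}
    for (x, _), amp in state.items():
        probs[x] = probs.get(x, 0.0) + abs(amp) ** 2
    mean = sum(x * p for x, p in probs.items())
    return math.sqrt(sum((x - mean) ** 2 * p for x, p in probs.items()))

coherent, classical = position_std(30), position_std(30, dephased=True)
print(f"coherent: {coherent:.2f}, dephased: {classical:.2f}")
```

As the abstract cautions, this ballistic signature alone does not certify the range of coherent delocalization: partial dephasing can leave the expansion close to ballistic while still destroying long-range spatial coherences.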