997 results for weak-strong uniqueness
Abstract:
Magneto-transport measurements have been carried out on three heavily Si delta-doped In0.52Al0.48As/In0.53Ga0.47As/In0.52Al0.48As single quantum well samples in which two subbands were occupied by electrons. Weak anti-localization (WAL) was found in these high-electron-mobility systems. The strong Rashba spin-orbit (SO) coupling is due to the high structure inversion asymmetry (SIA) of the quantum wells. Because fitting our experimental results with the full WAL theory is complicated, we obtained the Rashba SO coupling constant α and the zero-field spin splitting Δ0 by an approximate approach. The results are consistent with those obtained from Shubnikov-de Haas (SdH) oscillation analysis. The observation of the WAL effect in a high-electron-mobility system suggests that finding a practical approach for deducing α and Δ0 is important for designing future spintronics devices that exploit the Rashba SO coupling.
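For orientation, the relations commonly used to connect these quantities (a standard textbook estimate, not a formula quoted from the abstract) tie the zero-field splitting to the coupling constant through the Fermi wavevector of the relevant subband:

$$\Delta_0 = 2\alpha k_F, \qquad k_F = \sqrt{2\pi n_s},$$

where $n_s$ is the sheet electron density of that subband, for example as extracted from the SdH oscillation frequency.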
Abstract:
Electron cyclotron resonance (CR) measurements have been carried out in magnetic fields up to 32 T to study electron-phonon interaction in two heavily modulation-delta-doped GaAs/Al0.3Ga0.7As single-quantum-well samples. No measurable resonant magnetopolaron effects were observed in either sample in the region of the GaAs longitudinal optical (LO) phonons. However, when the CR frequency is above the LO-phonon frequency, $\omega_{LO} = E_{LO}/\hbar$, at high magnetic fields (B > 27 T), electron CR exhibits a strong avoided-level-crossing splitting for both samples at frequencies close to $\omega_{LO} + (E_2 - E_1)/\hbar$, where $E_2$ and $E_1$ are the energies of the bottoms of the second and first subbands, respectively. The energy separation between the two branches is large, with a minimum separation of 40 cm^-1 occurring at around 30.5 T. A detailed theoretical analysis, which includes a self-consistent calculation of the band structure and the effects of electron-phonon interaction on the CR, shows that this type of splitting is due to a three-level resonance between the second Landau level of the first electron subband and the lowest Landau level of the second subband plus one GaAs LO phonon. The absence of occupation effects in the final states and weak screening for this three-level process yields a large energy separation even in the presence of high electron densities. Excellent agreement between the theory and the experimental results is obtained.
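A minimal reading of the stated resonance condition, assuming ideal Landau ladders with spacing $\hbar\omega_c$ in both subbands (an assumption made here for illustration), is that the CR final state $|1, n{=}1\rangle$ becomes degenerate with the one-phonon state $|2, n{=}0\rangle + \hbar\omega_{LO}$:

$$E_1 + \tfrac{3}{2}\hbar\omega_c \;=\; E_2 + \tfrac{1}{2}\hbar\omega_c + \hbar\omega_{LO} \;\Longrightarrow\; \hbar\omega_c = \hbar\omega_{LO} + (E_2 - E_1),$$

which is the frequency at which the avoided crossing is reported.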
Abstract:
Electron cyclotron resonance (CR) has been studied in magnetic fields up to 32 T in two heavily modulation-delta-doped GaAs/Al0.3Ga0.7As single quantum well samples. Little effect on electron CR is observed in either sample in the region of resonance with the GaAs LO phonons. However, above the LO-phonon energy $E_{LO}$ at B > 27 T, electron CR exhibits a strong avoided-level-crossing splitting for both samples at energies close to $E_{LO} + (E_2 - E_1)$, where $E_2$ and $E_1$ are the energies of the bottoms of the second and first subbands, respectively. The energy separation between the two branches is large, reaching a minimum of about 40 cm^-1 around 30.5 T for both samples. This splitting is due to a three-level resonance between the second Landau level (LL) of the first electron subband and the lowest LL of the second subband plus an LO phonon. The large splitting in the presence of high electron densities is due to the absence of occupation (Pauli-principle) effects in the final states and weak screening for this three-level process. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
Abstract:
Separation of acidic compounds by ion-exchange capillary electrochromatography (IE-CEC) with a strong anion-exchange packing as the stationary phase was studied. The electroosmotic flow (EOF) in strong anion-exchange CEC changed moderately with increasing eluent ionic strength and decreasing eluent pH, but the acetonitrile concentration in the eluent had almost no effect on the EOF. The EOF in strong anion-exchange CEC with a low-pH eluent was much larger than that in RP-CEC with Spherisorb-ODS as the stationary phase. The retention of the acidic compounds on the strong anion-exchange packing was relatively weak because they were only partially ionized, and both chromatographic and electrophoretic processes contributed to the separation. The retention of the acidic compounds decreased with increasing phosphate buffer and acetonitrile concentrations in the eluent and with decreasing applied voltage, and the acidic compounds could even elute before the void time. These factors also contributed significantly to the separation selectivity; the tested acidic compounds could be separated rapidly with a column efficiency of more than 220 000 plates/m under the optimized separation conditions. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern internet, the weak consistency and coherence provisions of current web protocols are becoming a more pressing problem and are drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms which strengthen the consistency of caches in the web, focusing upon their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information which allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches that are already stale in light of what has already been seen. The mechanism requires no deviation from the existing client-server communication model, and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic.
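The core idea, a client cache invalidating entries once any response reveals a newer version of a shared underlying resource, can be sketched in a few lines. This is a minimal illustration with hypothetical field names, not the actual BTC wire format or algorithm from the paper:

```python
# Minimal sketch of the basis-token idea: the server tags each response with
# the version of every underlying ("basis") resource it was built from; the
# client cache then drops any cached entity built from a basis version older
# than one it has already observed elsewhere. Field names are hypothetical.

class BTCClientCache:
    def __init__(self):
        self.entries = {}        # url -> (body, basis_tokens)
        self.latest_seen = {}    # basis resource id -> highest version seen

    def store(self, url, body, basis_tokens):
        """basis_tokens: dict mapping basis resource id -> version number."""
        self.entries[url] = (body, basis_tokens)
        for rid, ver in basis_tokens.items():
            self.latest_seen[rid] = max(self.latest_seen.get(rid, 0), ver)
        self._evict_obsolete()

    def _evict_obsolete(self):
        # Any cached entity built from an outdated basis version is known stale.
        stale = [u for u, (_, basis) in self.entries.items()
                 if any(ver < self.latest_seen.get(rid, 0)
                        for rid, ver in basis.items())]
        for u in stale:
            del self.entries[u]

    def get(self, url):
        entry = self.entries.get(url)
        return entry[0] if entry else None
```

For example, a response for `/page` annotated with `{"article:42": 7}` would cause eviction of a cached `/index` that had been assembled from `article:42` at version 6, without the server tracking any per-client state.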
Abstract:
A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection is called. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is not an accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, therefore allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called λweak that includes weak references and is derived from Morrisett, Felleisen, and Harper’s λgc. λgc formalizes the notion of garbage collection by means of a rewrite rule. Such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite-rule in a language with weak references introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program result. We define conditions that allow other garbage collection algorithms to co-exist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of “stuck evaluation”) and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
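The behaviour formalized by λweak is the one exposed by the weak-reference APIs of the languages listed above; a short Python sketch (using the standard `weakref` module) illustrates how collection can change what a program observes:

```python
import gc
import weakref

class Node:
    """A trivially collectable object."""
    def __init__(self, name):
        self.name = name

kept = Node("kept")
temp = Node("collectable")
weak = weakref.ref(temp)     # weak reference: does not keep temp alive
print(weak().name)           # "collectable": object still reachable via temp

del temp                     # drop the last strong reference
gc.collect()                 # in CPython the object is reclaimed even without this

# A weak reference cannot prevent collection, so dereferencing it now yields None.
# The observable result therefore depends on whether and when collection has run,
# which is exactly the non-determinism the garbage-collection rewrite rule captures.
print(weak() is None)        # True once the object has been collected
print(kept.name)             # strong references still pin their objects
```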
Abstract:
Using the theory of Eliashberg and Nambu for strong-coupling superconductors, we have calculated the gap function for a model superconductor and a selection of real superconductors including the elements Al, Sn, Tl, Nb, In, Pb and Hg and one alloy, Bi2Tl. We have determined the temperature-dependent gap edge in each and found that in materials with strong electron-phonon coupling ($\lambda \gtrsim 1.20$), not only is the gap edge double valued but it also departs significantly from the BCS form and develops a shoulder-like structure which may, in some cases, denote a gap edge exceeding the $T = 0$ value. These computational results support the insights obtained by Leavens in an analytic consideration of the general problem. Both the shoulder and the double value arise from a common origin seated in the form of the gap function in strongly coupled materials at finite temperatures. From the calculated gap function, we can determine the densities of states in the materials and the form of the tunneling current-voltage characteristics for junctions with these materials as electrodes. By way of illustration, results are shown for the contrasting cases of Sn ($\lambda=0.74$) and Hg ($\lambda=1.63$). The reported results are distinct in several ways from BCS predictions and provide an incentive for determinative experimental studies with techniques such as tunneling and far-infrared absorption.
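The connection from the calculated gap function to the tunneling characteristics goes through the quasiparticle density of states; a standard strong-coupling form, quoted here as general background with the complex, energy-dependent $\Delta(E)$ of Eliashberg theory rather than as this paper's own expression, is

$$\frac{N_s(E)}{N(0)} \;=\; \mathrm{Re}\!\left[\frac{|E|}{\sqrt{E^2 - \Delta^2(E)}}\right],$$

which reduces to the BCS density of states when $\Delta$ is real and energy independent.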
Abstract:
The carbon-to-oxygen ratio (C/O) in a planet provides critical information about its primordial origins and subsequent evolution. A primordial C/O greater than 0.8 causes a carbide-dominated interior, as opposed to the silicate-dominated composition found on Earth; the atmosphere can also differ from those in the Solar System. The solar C/O is 0.54 (ref. 3). Here we report an analysis of dayside multi-wavelength photometry of the transiting hot Jupiter WASP-12b (ref. 6) that reveals C/O >= 1 in its atmosphere. The atmosphere is abundant in CO. It is depleted in water vapour and enhanced in methane, each by more than two orders of magnitude compared to a solar-abundance chemical-equilibrium model at the expected temperatures. We also find that the extremely irradiated atmosphere (T > 2,500 K) of WASP-12b lacks a prominent thermal inversion (or stratosphere) and has very efficient day-night energy circulation. The absence of a strong thermal inversion is in stark contrast to theoretical predictions for the most highly irradiated hot-Jupiter atmospheres.
Abstract:
The past decade has seen growing interest in the problems posed by weak instrumental variables in the econometric literature, that is, situations in which the instrumental variables are only weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on consumption-based asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], in which the instrumental variables are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables that are assumed exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if instrument endogeneity poses major difficulties for statistical inference, can one propose test procedures that select instruments when they are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics, the Anderson-Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter governing instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (that is, they detect the presence of invalid instruments) regardless of their quality (strong or weak). We also describe cases in which this consistency may fail, but where the asymptotic distribution is modified in a way that can lead to level distortions even in large samples. This includes, in particular, cases where the two-stage least-squares estimator remains consistent but the tests are asymptotically invalid.
Next, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments). The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests and on the Revankar-Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (level) and under the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis provides several insights as well as extensions of earlier procedures. Indeed, the finite-sample characterization of the distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (the level is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. We also present a Monte Carlo analysis indicating that: (1) the ordinary least-squares estimator is more efficient than two-stage least squares when the instruments are weak and endogeneity is moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should be applied only when one is confident of having strong instruments. Hence Guggenberger's (2008) conclusions are mixed and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relation between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of this test that remains valid with non-Gaussian errors. Unlike the usual exogeneity test procedures (the Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to address a problem common in empirical work, namely testing the partial exogeneity of a subset of variables.
We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and endogeneity is moderate. We also show that this test can serve as an instrument-selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education explains her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost, and that the fuel price is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the initial or extended Wald test allows one to build confidence regions and to test linear restrictions on the covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for constructing confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control the level and have power even when the instruments are weak. This allows us to propose an instrument-selection procedure that is valid even in the presence of an identification problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator with partial IV estimators. Our simulations show that: (1) like the ordinary least-squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and endogeneity is moderate; (2) the pre-test estimators perform very well overall compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relation between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are very weak in the second [Bound (1995), Doko and Dufour (2009)]. In line with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
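One of the identification-robust statistics named in this abstract, the Anderson-Rubin (1949) statistic, is simple enough to sketch directly. The sketch below is the textbook homoskedastic version for a model with no included exogenous regressors, with illustrative variable names; it is not the thesis's code and does not include its heteroskedasticity-robust or instrument-endogeneity extensions:

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, Y, Z, beta0):
    """Textbook homoskedastic Anderson-Rubin test of H0: beta = beta0
    in y = Y @ beta + u with instruments Z (n x k); no included exogenous
    regressors, an assumption made here for brevity."""
    n, k = Z.shape
    u0 = y - Y @ beta0                      # residuals imposed by the null
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # projection onto the instrument space
    fitted = Pz @ u0
    rss_in = fitted @ fitted                # variation of u0 explained by Z
    rss_out = u0 @ u0 - rss_in              # variation orthogonal to Z
    ar = (rss_in / k) / (rss_out / (n - k))
    pvalue = 1.0 - stats.f.cdf(ar, k, n - k)
    return ar, pvalue

# Example: with valid instruments the test keeps its level even when they are
# weak, because it never requires estimating beta.
rng = np.random.default_rng(0)
n = 500
Z = rng.standard_normal((n, 3))
v = rng.standard_normal(n)
Y = Z @ np.array([0.05, 0.0, 0.0]) + v          # weakly identified endogenous regressor
u = 0.5 * v + rng.standard_normal(n)            # endogeneity: corr(u, v) != 0
y = 1.0 * Y + u
print(anderson_rubin_test(y, Y.reshape(-1, 1), Z, np.array([1.0])))
```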
Abstract:
Superparamagnetic nanocomposites based on γ-Fe2O3 and sulphonated polystyrene were synthesised by an ion-exchange process, and structural characterisation was carried out using X-ray diffraction. Cobalt doping of the γ-Fe2O3 lattice was effected in situ, with the doping level varied over the range 1–10 atomic per cent. Optical absorption studies show a band gap of 2.84 eV, which is blue shifted by 0.64 eV compared to the reported value for bulk samples (2.2 eV). This is explained on the basis of weak quantum confinement. Further size reduction can result in strong confinement, which can yield transparent magnetic nanocomposites because of further blue shifting. The band gap is red shifted by the addition of cobalt to the lattice, and this red shift increases with increasing doping. The observed red shift can be attributed to strain in the lattice caused by the anisotropy induced by the cobalt addition. Thus, band-gap tuning by blue shifting is aided by weak exciton confinement, while further red shifting of the band gap is achieved by cobalt doping.
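The weak-confinement blue shift invoked above is commonly estimated with a Brus-type effective-mass expression, quoted here as general background (symbols defined below) rather than as the authors' model:

$$E_g^{\mathrm{nano}} \;\approx\; E_g^{\mathrm{bulk}} + \frac{\hbar^2\pi^2}{2R^2}\left(\frac{1}{m_e^*}+\frac{1}{m_h^*}\right) - \frac{1.8\,e^2}{4\pi\varepsilon\varepsilon_0 R},$$

where $R$ is the particle radius, $m_e^*$ and $m_h^*$ are the carrier effective masses, and $\varepsilon$ is the dielectric constant; the confinement term grows as $R$ shrinks, which is why further size reduction pushes the gap further to the blue.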
Abstract:
In this paper we show that if the electrons in a quantum Hall sample are subjected to a constant electric field in the plane of the material, comparable in magnitude to the background magnetic field on the system of electrons, a multiplicity of edge states localized at different regions of space is produced in the sample. The actions governing the dynamics of these edge states are obtained starting from the well-known Schrödinger field theory for a system of nonrelativistic electrons, where on top of the constant background electric and magnetic fields, the electrons are further subject to slowly varying weak electromagnetic fields. In the regions between the edges, dubbed the "bulk," the fermions can be integrated out entirely and the dynamics expressed in terms of a local effective action involving the slowly varying electromagnetic potentials. It is further shown how the bulk action is gauge noninvariant in a particular way, and how the edge states conspire to restore the U(1) electromagnetic gauge invariance of the system. In the edge action we obtain a heretofore unnoticed gauge-invariant term that depends on the particular edge. We argue that this term may be detected experimentally, as different edges respond differently to a monochromatic probe because of it.
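As background on the gauge-variance mechanism invoked here (a generic statement of the standard anomaly-inflow picture, with notation introduced for illustration rather than quoted from the paper), integrating out gapped bulk fermions in a Hall phase yields a Chern-Simons-type effective action whose gauge variation localizes on the boundary and must be cancelled by the edge-state action:

$$S_{\mathrm{bulk}}[A] = \frac{\sigma_{xy}}{2}\int_M d^3x\,\epsilon^{\mu\nu\lambda} A_\mu \partial_\nu A_\lambda, \qquad \delta_{\alpha} S_{\mathrm{bulk}} = \frac{\sigma_{xy}}{2}\int_{\partial M} d^2x\,\epsilon^{\nu\lambda}\,\alpha\,\partial_\nu A_\lambda ,$$

under $A_\mu \to A_\mu + \partial_\mu\alpha$, with the surviving boundary term compensated by the edge modes.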
Abstract:
The P1-P1 finite element pair is known to allow the existence of spurious pressure (surface elevation) modes for the shallow water equations and to be unstable for mixed formulations. We show that this behavior is strongly influenced by the strong or weak enforcement of the impermeability boundary conditions. A numerical analysis of the Stommel model is performed for both P1-P1 and P1^NC-P1 mixed formulations. Steady and transient test cases are considered. We observe that the P1-P1 element exhibits stable discrete solutions with weak boundary conditions or with fully unstructured meshes. (c) 2005 Elsevier Ltd. All rights reserved.
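For context on what "strong" versus "weak" enforcement of impermeability typically means in this finite-element setting (generic background with notation introduced here, not taken from the paper): strong enforcement builds the no-normal-flow condition $\mathbf{u}\cdot\mathbf{n}=0$ into the discrete velocity space, while weak enforcement keeps the corresponding boundary flux term in the weak form of the elevation equation, so it vanishes only weakly. With test function $\phi$, total depth $H$ and velocity $\mathbf{u}$, integration by parts of the continuity equation gives

$$\int_\Omega \phi\,\partial_t\eta \,d\Omega \;-\; \int_\Omega H\,\mathbf{u}\cdot\nabla\phi \,d\Omega \;+\; \int_{\partial\Omega} \phi\, H\,\mathbf{u}\cdot\mathbf{n}\,ds \;=\; 0,$$

where the boundary integral is identically zero under strong enforcement and is retained under weak enforcement.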
Abstract:
We analyze ionospheric convection patterns over the polar regions during the passage of an interplanetary magnetic cloud on January 14, 1988, when the interplanetary magnetic field (IMF) rotated slowly in direction and had a large amplitude. Using the assimilative mapping of ionospheric electrodynamics (AMIE) procedure, we combine simultaneous observations of ionospheric drifts and magnetic perturbations from many different instruments into consistent patterns of high-latitude electrodynamics, focusing on the period of northward IMF. By combining satellite data with ground-based observations, we have generated one of the most comprehensive data sets yet assembled and used it to produce convection maps for both hemispheres. We present evidence that a lobe convection cell was embedded within normal merging convection during a period when the IMF By and Bz components were large and positive. As the IMF became predominantly northward, a strong reversed convection pattern (afternoon-to-morning potential drop of around 100 kV) appeared in the southern (summer) polar cap, while convection in the northern (winter) hemisphere became weak and disordered with a dawn-to-dusk potential drop of the order of 30 kV. These patterns persisted for about 3 hours, until the IMF rotated significantly toward the west. We interpret this behavior in terms of a recently proposed merging model for northward IMF under solstice conditions, for which lobe field lines from the hemisphere tilted toward the Sun (summer hemisphere) drape over the dayside magnetosphere, producing reverse convection in the summer hemisphere and impeding direct contact between the solar wind and field lines connected to the winter polar cap. The positive IMF Bx component present at this time could have contributed to the observed hemispheric asymmetry. Reverse convection in the summer hemisphere broke down rapidly after the ratio |By/Bz| exceeded unity, while convection in the winter hemisphere strengthened. A dominant dawn-to-dusk potential drop was established in both hemispheres when the magnitude of By exceeded that of Bz, with potential drops of the order of 100 kV, even while Bz remained northward. The later transition to southward Bz produced a gradual intensification of the convection, but a greater qualitative change occurred at the transition through |By/Bz| = 1 than at the transition through Bz = 0. The various convection patterns we derive under northward IMF conditions illustrate all possibilities previously discussed in the literature: nearly single-cell and multicell, distorted and symmetric, ordered and unordered, and sunward and antisunward.
Weak intermolecular interactions in an ionically bound molecular adsorbate: cyclopentadienyl/Cu(111)
Abstract:
The dissociative adsorption of cyclopentadiene (C5H6) on Cu(111) yields a cyclopentadienyl (Cp) species with strongly anionic characteristics. The Cp potential energy surface and frictional coupling to the substrate are determined from measurements of the dynamics of the molecule together with density functional calculations. The molecule is shown to occupy degenerate threefold adsorption sites, and molecular motion is characterized by a low diffusional energy barrier of 40 +/- 3 meV with strong frictional dissipation. Repulsive dipole-dipole interactions are not detected despite charge transfer from substrate to adsorbate.
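To give a feel for what a 40 meV diffusion barrier implies, an Arrhenius estimate of the site-to-site hop rate is sketched below; the attempt frequency is an assumed typical value, not a number from the abstract:

```python
import math

K_B_EV = 8.617e-5          # Boltzmann constant in eV/K

def hop_rate(barrier_ev, temperature_k, attempt_hz=1e12):
    """Arrhenius estimate of the site-to-site hop rate.
    attempt_hz ~ 1e12-1e13 Hz is a typical assumed attempt frequency."""
    return attempt_hz * math.exp(-barrier_ev / (K_B_EV * temperature_k))

# With the measured 40 meV barrier the molecule hops very rapidly even well
# below room temperature, consistent with the highly mobile picture above.
for T in (100, 200, 300):
    print(f"T = {T:3d} K: ~{hop_rate(0.040, T):.2e} hops/s")
```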
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using these bounds, we show that the sensitivities of both formulations are related to the error-variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
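A toy illustration of the central quantity studied here, the condition number of a 4DVAR Hessian, can be built for the linear strong-constraint case, where the Hessian is available in closed form as $\mathbf{S} = \mathbf{B}^{-1} + \sum_i (\mathbf{H}\mathbf{M}^i)^{\mathsf T}\mathbf{R}^{-1}(\mathbf{H}\mathbf{M}^i)$. The matrices, sizes and covariances below are illustrative choices, not the configurations analysed in the thesis:

```python
import numpy as np

# Toy linear strong-constraint 4DVAR Hessian and its condition number.
n, n_obs_times = 20, 5
rng = np.random.default_rng(1)

M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy linear model propagator
H = np.eye(n)[::2]                                     # observe every other state variable
B = 0.5 * np.eye(n)                                    # background-error covariance
R = 0.1 * np.eye(H.shape[0])                           # observation-error covariance

# Hessian of J(x0) = 1/2 ||x0-xb||_{B^-1}^2 + 1/2 sum_i ||y_i - H M^i x0||_{R^-1}^2
S = np.linalg.inv(B)
Mi = np.eye(n)
for _ in range(n_obs_times):
    G = H @ Mi                                         # observation of the propagated state
    S += G.T @ np.linalg.inv(R) @ G
    Mi = M @ Mi

print("condition number of the Hessian:", np.linalg.cond(S))
# A large condition number signals slow, less accurate Gauss-Newton inner-loop
# solves and high sensitivity to the assimilation parameters (B, R, window length).
```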