979 results for Modelli pseudo-hermitiani, non-unitary conformal field theory, c-theorem
Abstract:
Inclusive doubly differential cross sections $d^2\sigma_{pA}/dx_F\,dp_T^2$ as a function of Feynman-x ($x_F$) and transverse momentum ($p_T$) for the production of $K^0_S$, $\Lambda$ and $\bar{\Lambda}$ in proton-nucleus interactions at 920 GeV are presented. The measurements were performed by HERA-B in the negative $x_F$ range (−0.12
Abstract:
A study of the angular distributions of leptons from decays of $J/\psi$'s produced in p-C and p-W collisions at $\sqrt{s} = 41.6$ GeV has been performed in the $J/\psi$ Feynman-x region −0.34
Abstract:
Using mean field theory, we have studied Bose-Fermi mixtures in a one-dimensional optical lattice in the case of an attractive boson-fermion interaction. We consider that the fermions are in the degenerate regime and that the laser intensities are such that quantum coherence across the condensate is ensured. We discuss the effect of the optical lattice on the critical rotational frequency for vortex line creation in the Bose-Einstein condensate, as well as how it affects the stability of the boson-fermion mixture. A reduction of the critical frequency for nucleating a vortex is observed as the strength of the applied laser is increased. The onset of instability of the mixture occurs for a sizably lower number of fermions in the presence of a deep optical lattice.
Abstract:
Topological order has proven a useful concept to describe quantum phase transitions which are not captured by the Ginzburg-Landau type of symmetry-breaking order. However, lacking a local order parameter, topological order is hard to detect. One way to detect it is via direct observation of the anyonic properties of its excitations, which are usually discussed in the thermodynamic limit but have so far not been observed in macroscopic quantum Hall samples. Here we consider a system of a few interacting bosons confined to the lowest Landau level by a gauge potential, and theoretically investigate vortex excitations in order to identify the topological properties of different ground states. Our investigation demonstrates that even in surprisingly small systems anyonic properties are able to characterize the topological order. In addition, focusing on a system in the Laughlin state, we study the robustness of its anyonic behavior in the presence of tunable finite-range interactions acting as a perturbation. A clear signal of a transition to a different state is reflected by the system's anyonic properties.
Abstract:
A search for charmless three-body decays of $B^0$ and $B^0_s$ mesons with a $K^0_S$ meson in the final state is performed using pp collision data, corresponding to an integrated luminosity of 1.0 fb$^{-1}$, collected at a centre-of-mass energy of 7 TeV by the LHCb experiment. Branching fractions of the $B^0_{(s)} \to K^0_S h^+ h'^-$ decay modes ($h^{(\prime)} = \pi, K$), relative to the well measured $B^0 \to K^0_S \pi^+ \pi^-$ decay, are obtained. First observations of the decay modes $B^0_s \to K^0_S K^\pm \pi^\mp$ and $B^0_s \to K^0_S \pi^+ \pi^-$ and confirmation of the decay $B^0 \to K^0_S K^\pm \pi^\mp$ are reported. The following relative branching fraction measurements and limits are obtained:
$\mathcal{B}(B^0 \to K^0_S K^\pm \pi^\mp)/\mathcal{B}(B^0 \to K^0_S \pi^+ \pi^-) = 0.128 \pm 0.017\,\mathrm{(stat.)} \pm 0.009\,\mathrm{(syst.)}$,
$\mathcal{B}(B^0 \to K^0_S K^+ K^-)/\mathcal{B}(B^0 \to K^0_S \pi^+ \pi^-) = 0.385 \pm 0.031\,\mathrm{(stat.)} \pm 0.023\,\mathrm{(syst.)}$,
$\mathcal{B}(B^0_s \to K^0_S \pi^+ \pi^-)/\mathcal{B}(B^0 \to K^0_S \pi^+ \pi^-) = 0.29 \pm 0.06\,\mathrm{(stat.)} \pm 0.03\,\mathrm{(syst.)} \pm 0.02\,(f_s/f_d)$,
$\mathcal{B}(B^0_s \to K^0_S K^\pm \pi^\mp)/\mathcal{B}(B^0 \to K^0_S \pi^+ \pi^-) = 1.48 \pm 0.12\,\mathrm{(stat.)} \pm 0.08\,\mathrm{(syst.)} \pm 0.12\,(f_s/f_d)$,
$\mathcal{B}(B^0_s \to K^0_S K^+ K^-)/\mathcal{B}(B^0 \to K^0_S \pi^+ \pi^-) \in [0.004; 0.068]$ at 90% CL.
Abstract:
The results of searches for $B^0_{(s)} \to J/\psi\, p\bar{p}$ and $B^+ \to J/\psi\, p\bar{p}\pi^+$ decays are reported. The analysis is based on a data sample, corresponding to an integrated luminosity of 1.0 fb$^{-1}$ of pp collisions, collected with the LHCb detector. An excess with 2.8σ significance is seen for the decay $B^0_s \to J/\psi\, p\bar{p}$, and an upper limit on the branching fraction is set at the 90% confidence level: $\mathcal{B}(B^0_s \to J/\psi\, p\bar{p}) < 4.8 \times 10^{-6}$, which is the first such limit. No significant signals are seen for $B^0 \to J/\psi\, p\bar{p}$ and $B^+ \to J/\psi\, p\bar{p}\pi^+$ decays, for which the corresponding limits are set: $\mathcal{B}(B^0 \to J/\psi\, p\bar{p}) < 5.2 \times 10^{-7}$, which significantly improves the existing limit, and $\mathcal{B}(B^+ \to J/\psi\, p\bar{p}\pi^+) < 5.0 \times 10^{-7}$, which is the first limit on this branching fraction.
Abstract:
A study of the $D^+\pi^-$, $D^0\pi^+$ and $D^{*+}\pi^-$ final states is performed using pp collision data, corresponding to an integrated luminosity of 1.0 fb$^{-1}$, collected at a centre-of-mass energy of 7 TeV with the LHCb detector. The $D_1(2420)^0$ resonance is observed in the $D^{*+}\pi^-$ final state and the $D^*_2(2460)$ resonance is observed in the $D^+\pi^-$, $D^0\pi^+$ and $D^{*+}\pi^-$ final states. For both resonances, their properties and spin-parity assignments are obtained. In addition, two natural parity and two unnatural parity resonances are observed in the mass region between 2500 and 2800 MeV. Further structures in the region around 3000 MeV are observed in the $D^{*+}\pi^-$, $D^+\pi^-$ and $D^0\pi^+$ final states.
Abstract:
Prompt production of the charmonium $\chi_{c0}$, $\chi_{c1}$ and $\chi_{c2}$ mesons is studied using proton-proton collisions at the LHC at a centre-of-mass energy of $\sqrt{s} = 7$ TeV. The $\chi_c$ mesons are identified through their decay to $J/\psi\gamma$, with $J/\psi \to \mu^+\mu^-$, using photons that converted in the detector. A data sample, corresponding to an integrated luminosity of 1.0 fb$^{-1}$ collected by the LHCb detector, is used to measure the relative prompt production rate of $\chi_{c1}$ and $\chi_{c2}$ in the rapidity range 2.0 < y < 4.5 as a function of the $J/\psi$ transverse momentum from 3 to 20 GeV/c. First evidence for $\chi_{c0}$ meson production at a high-energy hadron collider is also presented.
Abstract:
The experiment introduces undergraduate students to crystal field theory. The electronic spectra of the octahedral [Ni(L)n]2+ complexes (L = H2O, dmso, NH3 and en) obtained in the experiment are used to calculate the 10Dq and B parameters. The experiment shows how these parameters can be calculated and correlated with the nature of the ligands and the ligand-field strengths they produce.
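As an illustration of the kind of calculation this abstract refers to, here is a minimal sketch (not part of the original experiment) using the standard relations for an octahedral d8 ion such as Ni(II): the lowest spin-allowed band gives 10Dq = ν1, and the Racah parameter follows from 15B = ν2 + ν3 − 3ν1. The band maxima in the example are illustrative values only.

```python
# Minimal sketch: crystal-field parameters for an octahedral d8 ion (e.g. Ni(II))
# from the three spin-allowed d-d band maxima. All band positions below are
# illustrative assumptions, not measured values from the experiment.

def crystal_field_parameters(nu1, nu2, nu3):
    """Return (10Dq, B) in cm^-1 for an octahedral d8 ion.

    nu1: 3A2g -> 3T2g band maximum (cm^-1), equal to 10Dq.
    nu2: 3A2g -> 3T1g(F) band maximum (cm^-1).
    nu3: 3A2g -> 3T1g(P) band maximum (cm^-1).
    Uses the standard relation 15B = nu2 + nu3 - 3*nu1.
    """
    ten_dq = nu1
    racah_b = (nu2 + nu3 - 3.0 * nu1) / 15.0
    return ten_dq, racah_b


# Hypothetical band maxima for an aqua complex (illustrative only).
ten_dq, racah_b = crystal_field_parameters(8500.0, 13800.0, 25300.0)
print(f"10Dq = {ten_dq:.0f} cm^-1, B = {racah_b:.0f} cm^-1")
```

Comparing the resulting 10Dq values across the ligands is what places them on the spectrochemical series, which is the correlation the abstract mentions.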
Abstract:
It is a well-known phenomenon that the constant-amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit, defined as the maximum stress at the notch, is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of empirical formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to contain a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated; it is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient has to be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give adequate results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches; the use of linear elastic fracture mechanics therefore becomes questionable.
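The statistical part of the approach summarized above can be sketched numerically. The following minimal example (the Weibull parameters, the threshold ΔK_th and the geometry factor Y are illustrative assumptions, not values from the thesis) takes a Weibull parent population of initiated crack depths, computes the median of the largest crack among n initiations, and converts it into a fatigue-limit estimate with linear elastic fracture mechanics.

```python
import numpy as np

# Illustrative Weibull parent population of initiated crack depths (mm).
SHAPE, SCALE_MM = 1.5, 0.05          # assumed values, not from the thesis

def median_max_crack(n_cracks, shape=SHAPE, scale_mm=SCALE_MM):
    """Median of the largest crack among n independent initiations.

    If F(a) is the parent CDF, the maximum of n cracks has CDF F(a)**n;
    solving F(a)**n = 0.5 for a Weibull parent gives the value below.
    """
    p = 0.5 ** (1.0 / n_cracks)
    return scale_mm * (-np.log(1.0 - p)) ** (1.0 / shape)

def fatigue_limit_mpa(a_mm, delta_k_th=6.0, geometry_y=0.73):
    """LEFM estimate of the stress range at the fatigue limit (MPa):
    delta_sigma = delta_K_th / (Y * sqrt(pi * a)), with a in metres and
    delta_K_th in MPa*sqrt(m). Both constants are assumed values."""
    a_m = a_mm * 1.0e-3
    return delta_k_th / (geometry_y * np.sqrt(np.pi * a_m))

# A larger stressed surface area means more initiated cracks, hence a larger
# expected worst crack and a lower fatigue limit -- the statistical size effect.
for n in (10, 100, 1000):
    a_max = median_max_crack(n)
    print(f"n = {n:4d}: worst crack ~ {a_max:.3f} mm, "
          f"fatigue limit ~ {fatigue_limit_mpa(a_max):.0f} MPa")
```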
Abstract:
The goal of this study was to develop, for Lappeenranta University of Technology, a rapidly and agilely moving press that performs punching and forming operations. The equipment is intended to be integrated into a production line in which an extruder produces a soft, non-metallic material continuously. The theoretical part presents the alternatives for the press, the control system, the tool and the positioning equipment. In the empirical part, the individual components of the system are selected on the basis of price, speed, durability, usability and size. At the end of the study, the speed of the complete system is compared against the maximum speed of the material produced by the extruder. The limiting factor for the speed of the system is the stroke rate of the press, on account of which the feed rate of the material produced by the extruder has to be reduced to slightly less than half of the target speed. The weak point in the durability of the system is the equipment intended for moving the machining components. The press equipment is intended to be used eight hours a day, five days a week, 52 weeks a year. Based on this usage profile, the service life of the complete system is several years.
Abstract:
This is a sociological study of the views of officers in the Swedish Army and its Amphibious Forces on tactics in Irregular Warfare (IW), in particular Counterinsurgency (COIN). IW comprises struggles in which the militarily weaker party uses an indirect approach with smaller units and integrates the civilian and military dimensions in a spectrum of violence including subversion, terrorism, guerrilla warfare and infantry actions. IW is the main style of armed warfare in insurgencies. COIN comprises the combined political, military, economic, social and legal actions in countering insurgencies. Data were collected by means of interviews with almost all (n = 43) officers who were either commanding battalions or rifle and manoeuvre companies while undergoing training for general warfare and international operations. The main theoretical and methodological inspiration is the tradition of research on social fields inaugurated by the French sociologist Pierre Bourdieu. The statistical technique used is Multiple Correspondence Analysis. As background and context, an inquiry inspired by the Begriffsgeschichte (conceptual history) tradition explores the genesis and development of understandings of the term Irregular Warfare. The research question is: how can contemporary Swedish military thought on tactics in Irregular Warfare be characterized using descriptive patterns, mapped in relation to background factors and normative standards? The most significant findings are that two main opposing notions separate the officers' views on tactics in Irregular Warfare: (1) a focus on larger, combat-oriented and collectively operating military units versus smaller and larger, more intelligence-oriented and dispersed operating units, and (2) a focus on military tasks and kinetic effects versus military and civilian tasks as well as "soft" effects. The distribution of these views can be presented as a two-dimensional space structured by the two axes. This space represents four categories of tactics, partly diverging from normative military standards for Counterinsurgency. This social space of standpoints shows different structural tendencies for background factors of a social and cultural character, particularly dominant with respect to military backgrounds, international mission experiences and civilian education. Compared to military standards for Counterinsurgency, the two tactical types characterized by a Regular Warfare mind-set stand out as counter-normative. Signs of creative thought on military practice and theory, as well as a still persistent Regular Warfare doxa, are apparent. Power struggles might thus develop, affecting the transformation towards a broadened warfare culture with an enhanced focus also on Irregular Warfare. The results do not support research arguing for a convergence of military thought in the European transformation of armed forces. The main argument goes beyond tactics and suggests sociological analysis of reciprocal effects regarding strategy, operational art, tactics and leadership, concerning the mind-set and preferences for Regular, Irregular and Hybrid Warfare.
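To illustrate the statistical technique named in the abstract, here is a minimal, self-contained sketch of Multiple Correspondence Analysis, implemented as correspondence analysis of the one-hot indicator matrix. The variables, categories and answers are invented for illustration; they are not the interview data of the study.

```python
import numpy as np
import pandas as pd

# Invented categorical answers from a handful of respondents (illustration only).
answers = pd.DataFrame({
    "unit_focus": ["large_combat", "small_dispersed", "small_dispersed",
                   "large_combat", "small_dispersed"],
    "task_focus": ["kinetic", "soft_effects", "soft_effects",
                   "kinetic", "kinetic"],
    "background": ["mech_infantry", "rangers", "amphibious",
                   "mech_infantry", "rangers"],
})

# MCA = correspondence analysis of the indicator (one-hot) matrix.
Z = pd.get_dummies(answers).to_numpy(dtype=float)
P = Z / Z.sum()                                      # correspondence matrix
r = P.sum(axis=1)                                    # row masses (respondents)
c = P.sum(axis=0)                                    # column masses (categories)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)    # principal axes via SVD

# Respondent coordinates on the first two axes: a two-dimensional
# "space of standpoints" of the kind the abstract describes.
row_coords = (U / np.sqrt(r)[:, None]) * sv
print(row_coords[:, :2].round(3))
```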
Abstract:
Software quality has become an important research subject, not only in the Information and Communication Technology sphere, but also in other industries at large where software is applied. Software quality is not happenstance; it is defined, planned and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of the human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data were collected from 13 software companies and comprise 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but that the human factors involved in the software development processes determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality construction is an information-intensive process in which organizational structures, mode of operation and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction undertakings.
Abstract:
Rooted in the field of mathematics education, our thesis focuses on the "work on error" (travail de l'erreur) carried out by three teachers in their first year of practice. Freed from the constraints associated with the initial teacher-training system, these subjects fully assume their new role within the regular classroom. Among other things, they are responsible for teaching arithmetic and, more precisely, Euclidean division. Their responsibilities include detecting erroneous procedures and intervening on them. The "work on error" is the specific expression designating this twofold task (Portugais 1995). Using a research design combining observation and interview methods, we document teaching sessions in order to identify the situations in which our primary-school teachers detect errors in pupils' algorithmic procedures and subsequently deploy intervention strategies. We show how these two activities are coordinated by describing the choices, decisions and actions implemented by our subjects. This allows us to set out how these novice teachers organize their conduct around the actual treatment of arithmetic errors. Drawing on the theory of conceptual fields (Vergnaud 1991), we reveal the implicit knowledge mobilized by our subjects and highlight the cognitive mechanisms underlying this professional activity. We can thus account, at least in part, for the conceptualization work carried out in situ. This analytical work makes it possible to propose the existence of a scheme of the work on error in these beginning teachers, and also to specify its nature and functioning. By exploring the cognitive side of teaching activity, our thesis opens a new perspective on the theme of detecting and intervening on computational errors in long division.
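As a purely illustrative aside (not part of the thesis), the detection half of the "work on error" can be stated as checking the invariant of Euclidean division: a pupil's quotient q and remainder r for a ÷ b are correct only if a = b·q + r and 0 ≤ r < b. The example answers below are hypothetical.

```python
def is_correct_euclidean_division(a: int, b: int, q: int, r: int) -> bool:
    """True iff (q, r) is the result of the Euclidean division of a by b,
    i.e. a = b*q + r with 0 <= r < b."""
    return a == b * q + r and 0 <= r < b

# Hypothetical pupil answers for 437 divided by 12 (correct: q = 36, r = 5).
print(is_correct_euclidean_division(437, 12, 36, 5))    # True
print(is_correct_euclidean_division(437, 12, 35, 17))   # False: remainder not reduced
```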