966 results for Final state Interaction
Abstract:
A final-state-effects formalism suitable to analyze the high-momentum response of Fermi liquids is presented and used to study the dynamic structure function of liquid ³He. The theory, developed as a natural extension of the Gersch-Rodriguez formalism, incorporates the Fermi statistics explicitly through a new additive term which depends on the semidiagonal two-body density matrix. The use of a realistic momentum distribution, calculated using the diffusion Monte Carlo method, and the inclusion of this additive correction allow for good agreement with available deep-inelastic neutron scattering data.
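For orientation, the zeroth order of Gersch-Rodriguez-type expansions is the impulse approximation, in which the high-momentum-transfer response is fixed entirely by the momentum distribution n(p); in standard y-scaling conventions (which may differ in detail from those of the paper):

    S_{\mathrm{IA}}(q,\omega) = \frac{m}{q}\, J_{\mathrm{IA}}(y),
    \qquad y = \frac{m}{q}\left(\omega - \frac{q^{2}}{2m}\right),
    \qquad J_{\mathrm{IA}}(y) = 2\pi \int_{|y|}^{\infty} p\, n(p)\, \mathrm{d}p ,

with n(p) normalized to unity; final-state effects such as those studied here enter as corrections to this limit.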
Abstract:
We study the (K-, p) reaction on nuclei with a 1 GeV/c momentum kaon beam, paying special attention to the region of emitted protons with kinetic energy above 600 MeV, which was used to claim a deeply attractive kaon nucleus optical potential. Our model describes the nuclear reaction in the framework of a local density approach, and the calculations are performed following two different procedures: one based on a many-body method using the Lindhard function, the other on a Monte Carlo simulation. The simulation method offers the flexibility to account for processes other than kaon quasielastic scattering, such as K- absorption by one and two nucleons with hyperon production, and allows consideration of the final-state interactions of the K-, the p, and all other primary and secondary particles on their way out of the nucleus, as well as the weak decay of the produced hyperons into pi N. We find a limited sensitivity of the cross section to the strength of the kaon optical potential. We also point out a serious drawback in the experimental setup: the requirement of detecting, together with the energetic proton, at least one charged particle in the decay counter surrounding the target. We find that this requirement appreciably distorts the shape of the original cross section, to the point of invalidating the claims made in the experimental paper on the strength of the kaon nucleus optical potential.
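To illustrate how a Monte Carlo simulation of this kind treats the propagation of outgoing particles (a generic toy sketch with assumed, purely illustrative parameters; not the authors' code): a particle is stepped through the nuclear medium and interaction points are sampled from the local mean free path lambda = 1/(sigma * rho).

    import math, random

    SIGMA_FM2 = 3.0   # assumed in-medium cross section [fm^2] (~30 mb), illustrative only
    RHO0 = 0.17       # nuclear saturation density [fm^-3]
    R_NUC = 5.0       # assumed nuclear radius [fm]

    def density(r):
        """Sharp-surface density profile; realistic codes use e.g. a Woods-Saxon shape."""
        return RHO0 if r < R_NUC else 0.0

    def escapes_without_interaction(r0=0.0, step=0.05):
        """Propagate a particle radially outward, sampling an interaction in each
        step from the local mean free path lambda = 1/(sigma * rho)."""
        r = r0
        while r < R_NUC:
            p_int = 1.0 - math.exp(-SIGMA_FM2 * density(r) * step)
            if random.random() < p_int:
                return False          # rescattered inside the nucleus
            r += step
        return True                   # left the nucleus without interacting

    n_try = 20_000
    n_esc = sum(escapes_without_interaction() for _ in range(n_try))
    print(f"escape probability ~ {n_esc / n_try:.3f}")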
Abstract:
Summary: Detection, analysis and monitoring of slope movements by high-resolution digital elevation models.

Slope movements, such as rockfalls, rockslides, shallow landslides or debris flows, are frequent in many mountainous areas. These natural hazards endanger inhabitants and infrastructure, making it necessary to assess the hazard and risk caused by these phenomena. This PhD thesis explores various approaches using digital elevation models (DEMs), particularly high-resolution DEMs created by aerial or terrestrial laser scanning (TLS), that contribute to the assessment of slope movement hazard at regional and local scales.

The regional detection of areas prone to rockfalls and large rockslides uses different morphologic criteria or geometric instability factors derived from DEMs, i.e. the steepness of the slope, the presence of discontinuities that enable a sliding mechanism, and the denudation potential. The combination of these factors leads to a map of susceptibility to rockfall initiation that is in good agreement with field studies, as shown with the example of the Little Mill Campground area (Utah, USA). Another case study, in the Illgraben catchment in the Swiss Alps, highlighted the link between areas with a high denudation potential and actual rockfall areas.

Techniques for a detailed analysis and characterization of slope movements based on high-resolution DEMs have been developed for specific, localized sites, i.e. ancient slide scars, presently active instabilities or potential slope instabilities. The analysis of a site's characteristics mainly focuses on rock slopes and includes structural analyses (orientation of discontinuities); estimation of spacing, persistence and roughness of discontinuities; failure mechanisms based on the structural setting; and volume calculations. For the volume estimation, a new 3D approach was tested to reconstruct the topography before a landslide or to construct the basal failure surface of an active or potential instability. The rockslides at Åknes, Tafjord and Rundefjellet in western Norway were principally used as study sites to develop and test the different techniques.

The monitoring of slope instabilities investigated in this PhD thesis is essentially based on multitemporal (or sequential) high-resolution DEMs, in particular sequential point clouds acquired by TLS. The changes in topography due to slope movements can be detected and quantified by sequential TLS datasets, notably by shortest-distance comparisons revealing the 3D slope movements over the entire region of interest. A detailed analysis of rock slope movements is based on the affine transformation between an initial and a final state of the rock mass and its decomposition into translational and rotational movements. Monitoring using TLS was very successful on the fast-moving Eiger rockslide in the Swiss Alps, as well as on the active rockslides of Åknes and Nordnesfjellet (northern Norway). One of the main achievements on the Eiger and Åknes rockslides is to combine the site's morphology and structural setting with the measured slope movements to produce coherent instability models. Both case studies also highlighted a strong control of the structures in the rock mass on the sliding directions.
TLS was also used to monitor slope movements in soils, such as landslides in sensitive clays in Québec (Canada), shallow landslides on river banks (Sorge River, Switzerland) and a debris flow channel (Illgraben).

The PhD thesis underlines the broad uses of high-resolution DEMs, and especially of TLS, in the detection, analysis and monitoring of slope movements. Future studies should explore in more depth the different techniques and approaches developed and used in this PhD, improve them and better integrate the findings in current hazard assessment practices and in slope stability models.
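The translation/rotation decomposition of block movements mentioned above can be illustrated with the standard SVD-based (Kabsch) estimate of the rigid transform between matched points of two sequential point clouds; this is a generic sketch, not the thesis' own implementation.

    import numpy as np

    def rigid_transform(P, Q):
        """Best-fit rotation R and translation t with Q ~ R @ P + t,
        for matched N x 3 point sets P (initial epoch) and Q (final epoch)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t

    # toy example: a rock block rotated by 2 degrees about z and slightly shifted
    rng = np.random.default_rng(0)
    P = rng.uniform(0, 10, (500, 3))
    a = np.radians(2.0)
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
    Q = P @ Rz.T + np.array([0.05, -0.02, 0.01])
    R, t = rigid_transform(P, Q)
    print("recovered translation:", t)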
Abstract:
An accidental burst of a pressure vessel is an uncontrollable and explosion-like batch process; in this study it is called an explosion. The destructive effect of a pressure vessel explosion is related to the amount of energy released in it. However, in the field of pressure vessel safety, a mutual understanding concerning the definition of explosion energy has not yet been achieved. In this study the definition of isentropic exergy is presented. Isentropic exergy is the greatest possible destructive energy which can be obtained from a pressure vessel explosion when its state changes in an isentropic way from the initial to the final state. After the change process, the gas has the same pressure and flow velocity as the environment. Isentropic exergy differs from common exergy in that the process is assumed to be isentropic and the final gas temperature usually differs from the ambient temperature. The explosion process is so fast that there is no time for the significant heat exchange assumed in common exergy; therefore an explosion is better characterized by isentropic exergy. Isentropic exergy is a characteristic of a pressure vessel and is simple to calculate. It can also be defined for any thermodynamic system, such as the shock wave system developing around an exploding pressure vessel. At the beginning of the explosion process the shock wave system has the same isentropic exergy as the pressure vessel. When the system expands into the environment, its isentropic exergy decreases because of the increase of entropy in the shock wave. The shock wave system contains the pressure vessel gas and a growing amount of ambient gas. The destructive effect of the shock wave on surrounding structures decreases as its distance from the starting point increases. This arises firstly from the fact that the shock wave system is distributed over a larger space; secondly, the increase of entropy in the shock waves reduces the amount of isentropic exergy. Equations for the change of isentropic exergy in shock waves are derived. By means of isentropic exergy and known flow theories, equations describing the pressure of the shock wave as a function of distance are derived, and a method is proposed as an application of these equations. The method is applicable to all shapes of pressure vessels in general use, such as spheres, cylinders and tubes. The results of this method are compared to measurements made by various researchers and to accident reports on pressure vessel explosions. The test measurements are found to agree with the proposed method, and the findings in the accident reports do not contradict it.
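For the special case of an ideal gas with constant heat-capacity ratio γ, the work of a closed isentropic expansion from the vessel state (p1, V1) down to the ambient pressure p0 gives a closed-form instance of such an isentropic explosion-energy measure (the thesis' exact definition may differ in details):

    E_{s} = \frac{p_{1}V_{1} - p_{0}V_{2}}{\gamma - 1},
    \qquad V_{2} = V_{1}\left(\frac{p_{1}}{p_{0}}\right)^{1/\gamma} ,

where V2 is the gas volume after isentropic expansion to ambient pressure.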
Abstract:
A weak version of the cosmic censorship hypothesis is implemented as a set of boundary conditions on exact semiclassical solutions of two-dimensional dilaton gravity. These boundary conditions reflect low-energy matter from the strong-coupling region and also serve to stabilize the vacuum of the theory against decay into negative-energy states. Information about low-energy incoming matter can be recovered in the final state, but at high energy black holes are formed and inevitably lead to information loss at the semiclassical level.
Abstract:
A search for charmless three-body decays of B0 and B0s mesons with a K0S meson in the final state is performed using pp collision data, corresponding to an integrated luminosity of 1.0 fb−1, collected at a centre-of-mass energy of 7 TeV by the LHCb experiment. Branching fractions of the B0(s) → K0S h+ h′− decay modes (h(′) = π, K), relative to the well-measured B0 → K0S π+π− decay, are obtained. First observations of the decay modes B0s → K0S K±π∓ and B0s → K0S π+π−, and confirmation of the decay B0 → K0S K±π∓, are reported. The following relative branching fraction measurements or limits are obtained:

B(B0 → K0S K±π∓) / B(B0 → K0S π+π−) = 0.128 ± 0.017 (stat.) ± 0.009 (syst.),
B(B0 → K0S K+K−) / B(B0 → K0S π+π−) = 0.385 ± 0.031 (stat.) ± 0.023 (syst.),
B(B0s → K0S π+π−) / B(B0 → K0S π+π−) = 0.29 ± 0.06 (stat.) ± 0.03 (syst.) ± 0.02 (fs/fd),
B(B0s → K0S K±π∓) / B(B0 → K0S π+π−) = 1.48 ± 0.12 (stat.) ± 0.08 (syst.) ± 0.12 (fs/fd),
B(B0s → K0S K+K−) / B(B0 → K0S π+π−) ∈ [0.004; 0.068] at 90% CL.
Abstract:
We present the results of stereoscopic observations of the satellite galaxy Segue 1 with the MAGIC Telescopes, carried out between 2011 and 2013. With almost 160 hours of good-quality data, this is the deepest observational campaign on any dwarf galaxy performed so far in the very-high-energy range of the electromagnetic spectrum. We search this large data sample for signals of dark matter particles in the mass range between 100 GeV and 20 TeV. For this we use the full likelihood analysis method, which provides optimal sensitivity to characteristic gamma-ray spectral features, like those expected from dark matter annihilation or decay. In particular, we focus our search on gamma-rays produced from different final-state Standard Model particles, annihilation with internal bremsstrahlung, monochromatic lines and box-shaped signals. Our results represent the most stringent constraints on the annihilation cross-section or decay lifetime obtained from observations of satellite galaxies for masses above a few hundred GeV. In particular, our strongest limit (95% confidence level) corresponds to a ~500 GeV dark matter particle annihilating into τ+τ−, and is of order ⟨σannv⟩ ≃ 1.2 × 10−24 cm3 s−1, a factor ~40 above the thermal value.
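Such limits follow from comparing the measured gamma-ray counts with the flux expected from annihilating dark matter; schematically, in the conventions commonly used for dwarf-galaxy searches (not necessarily identical to the paper's):

    \frac{\mathrm{d}\Phi}{\mathrm{d}E} =
    \frac{\langle \sigma_{\mathrm{ann}} v \rangle}{8\pi m_{\chi}^{2}}\,
    \frac{\mathrm{d}N_{\gamma}}{\mathrm{d}E}\, J ,
    \qquad
    J = \int_{\Delta\Omega} \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \rho_{\chi}^{2}\, \mathrm{d}l ,

where mχ is the dark matter particle mass, dNγ/dE the photon spectrum per annihilation, and J the astrophysical factor of the target.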
Abstract:
It is well known that the Neolithic transition spread across Europe at a speed of about 1 km/yr. This result has been previously interpreted as a range expansion of the Neolithic driven mainly by demic diffusion (whereas cultural diffusion played a secondary role). However, a long-standing problem is whether this value (1 km/yr) and its interpretation (mainly demic diffusion) are characteristic only of Europe or universal, i.e. intrinsic features of Neolithic transitions all over the world. So far, Neolithic spread rates outside Europe have been barely measured, and rates substantially faster than 1 km/yr have not been previously reported. Here we show that the transition from hunting and gathering to herding in southern Africa spread at a rate of about 2.4 km/yr, i.e. about twice as fast as the European Neolithic transition. Thus the value 1 km/yr is not a universal feature of Neolithic transitions in the world. Resorting to a recent demic-cultural wave-of-advance model, we also find that the main mechanism at work in the southern African Neolithic spread was cultural diffusion (whereas demic diffusion played a secondary role). This is in sharp contrast to the European Neolithic. Our results further suggest that Neolithic spread rates could be mainly driven by cultural diffusion in cases where the final state of the transition is herding/pastoralism (as in southern Africa) rather than farming and stockbreeding (as in Europe).
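For context, in the simplest Fisher wave-of-advance picture (which the demic-cultural model used here generalizes), the front speed is

    v = 2\sqrt{a\,D} ,

with a the initial growth rate of the population and D its diffusion (mobility) coefficient. Illustrative values of a ≈ 0.03 yr−1 and D ≈ 10 km2/yr give v ≈ 1.1 km/yr, of the order of the European rate; these numbers are for orientation only, not taken from the paper.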
Abstract:
Validation and verification operations encounter various challenges in the product development process. Requirements for an increasing development cycle pace place new demands on the component development process. Verification and validation usually represent the largest activities, consuming up to 40-50% of R&D resources. This research studies validation and verification as part of the case company's component development process. The target is to define a framework that can be used to evaluate and develop validation and verification capability in display module development projects. The definition and background of validation and verification are reviewed, together with theories of project management, systems, organisational learning and causality. The framework and key findings of this research are presented, and a feedback system based on the framework is defined and implemented at the case company. The research is divided into a theory part, conducted as a literature review, and an empirical part, carried out as a case study using constructive and design research methods. A framework for capability evaluation and development was defined and developed as the result of this research. A key finding was that a double-loop learning approach combined with the validation and verification V+ model enables the definition of a feedback reporting solution. As additional results, some minor changes to the validation and verification process were proposed. A few concerns are expressed about the validity and reliability of this study, the most important one being the selected research method and the selected model itself: the final state can be normative, the researcher may set study results before the actual study, and in the initial state the researcher may describe expectations for the study. Finally, the reliability and validity of this work are discussed.
Abstract:
Pulse Response Based Control (PRBC) is a recently developed minimum-time control method for flexible structures. The flexible behavior of the structure is represented through a set of discrete-time sequences, which are the responses of the structure to rectangular force pulses applied by the actuators that control the structure. The set of pulse responses, the desired outputs, and the force bounds form a numerical optimization problem, whose solution is a minimum-time piecewise-constant control sequence for driving the system to a desired final state. The method was developed for driving positive semi-definite systems. In case the system is positive definite, some final states of the system may not be reachable. Necessary conditions for reachability of the final states are derived for systems with a finite number of degrees of freedom, and numerical results are presented that confirm the derived analytical conditions. Numerical simulations of maneuvers of distributed parameter systems have shown a relationship between the error in the estimated minimum control time and the sampling interval.
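A minimal sketch of how such a minimum-time problem can be posed numerically (an illustrative formulation under simplifying assumptions, not the authors' PRBC code): for a discrete-time model x_{k+1} = A x_k + B u_k with bounded piecewise-constant inputs, the smallest horizon N for which a final state is reachable can be found by a sequence of feasibility linear programs.

    import numpy as np
    from scipy.optimize import linprog

    def min_time_horizon(A, B, x0, xf, u_max, N_max=200):
        """Smallest N such that xf is reachable from x0 in N bounded steps.
        Feasibility of  sum_k A^(N-1-k) B u_k = xf - A^N x0  is checked as an
        LP with |u_k| <= u_max; returns (N, u) or (None, None)."""
        n = A.shape[0]
        for N in range(1, N_max + 1):
            cols, M = [], np.eye(n)
            for _ in range(N):            # M runs through A^j, j = 0..N-1
                cols.append(M @ B)
                M = M @ A
            G = np.hstack(cols[::-1])     # [A^(N-1)B ... A B  B]
            b = xf - np.linalg.matrix_power(A, N) @ x0
            m = G.shape[1]
            res = linprog(c=np.zeros(m), A_eq=G, b_eq=b,
                          bounds=[(-u_max, u_max)] * m, method="highs")
            if res.success:
                return N, res.x           # feasible: minimum horizon found
        return None, None

    # toy double integrator (dt = 0.1), rest-to-rest maneuver
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    N, u = min_time_horizon(A, B, np.zeros(2), np.array([1.0, 0.0]), u_max=1.0)
    print("minimum horizon:", N)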
Abstract:
Chaos is a subject of topical interest and has been studied in great detail in relation to its relevance in almost all branches of science, including the physical, chemical and biological fields. Chaos in the literal sense signifies utter confusion, but the scientific community distinguishes deterministic chaos from white noise. Deterministic chaos refers to the complex behaviour of systems governed by deterministic laws, whose behaviour often becomes unpredictable in the long run. This unpredictability arises from the sensitivity of the system to its initial conditions, for which the essential requirement is nonlinearity of the system. The only method for determining the future of such systems is to numerically simulate the final state from a set of initial conditions. Synchronisation
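A minimal numerical illustration of this sensitivity (a generic example, not taken from the text): two trajectories of the chaotic logistic map started 10−10 apart diverge to order-one separation within a few dozen iterations.

    # sensitivity to initial conditions in the logistic map x -> r x (1 - x)
    r = 4.0                        # parameter value in the chaotic regime
    x, y = 0.3, 0.3 + 1e-10        # two almost identical initial conditions
    for n in range(60):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if n % 10 == 9:
            print(f"n = {n+1:2d}   separation = {abs(x - y):.3e}")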
Abstract:
During recent years, quantum information processing and the study of N-qubit quantum systems have attracted a lot of interest, both in theory and experiment. Apart from the promise of performing efficient quantum information protocols, such as quantum key distribution, teleportation or quantum computation, these investigations have also revealed a great deal of difficulties which still need to be resolved in practice. Quantum information protocols rely on the application of unitary and non-unitary quantum operations that act on a given set of quantum mechanical two-state systems (qubits) to form (entangled) states, in which the information is encoded. The overall system of qubits is often referred to as a quantum register. Today the entanglement in a quantum register is known as the key resource for many protocols of quantum computation and quantum information theory. However, despite the successful demonstration of several protocols, such as teleportation or quantum key distribution, there are still many open questions as to how entanglement affects the efficiency of quantum algorithms or how it can be protected against noisy environments. To facilitate the simulation of such N-qubit quantum systems and the analysis of their entanglement properties, we have developed the Feynman program. The program package provides all necessary tools to define and deal with quantum registers, quantum gates and quantum operations. Using an interactive and easily extendible design within the framework of the computer algebra system Maple, the Feynman program is a powerful toolbox not only for teaching the basic and more advanced concepts of quantum information but also for studying their physical realization in the future. To this end, the Feynman program implements a selection of algebraic separability criteria for bipartite and multipartite mixed states as well as the most frequently used entanglement measures from the literature. Additionally, the program supports work with quantum operations and their associated (Jamiolkowski) dual states. Based on the implementation of several popular decoherence models, we provide tools especially for the quantitative analysis of quantum operations. As an application of the developed tools we further present two case studies in which the entanglement of two atomic processes is investigated. In particular, we have studied the change of the electron-ion spin entanglement in atomic photoionization and the photon-photon polarization entanglement in the two-photon decay of hydrogen. The results show that both processes are, in principle, suitable for the creation and control of entanglement. Apart from process-specific parameters like the initial atom polarization, it is mainly the process geometry which offers a simple and effective instrument to adjust the final-state entanglement. Finally, for the case of the two-photon decay of hydrogenlike systems, we study the difference between nonlocal quantum correlations, as given by the violation of the Bell inequality, and the concurrence as a true entanglement measure.
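As an illustration of the entanglement measures mentioned above, Wootters' concurrence of a two-qubit density matrix can be computed in a few lines (a generic NumPy sketch, independent of the Feynman program's Maple implementation):

    import numpy as np

    def concurrence(rho):
        """Wootters concurrence C(rho) = max(0, l1 - l2 - l3 - l4), where the
        l_i are the sqrt-eigenvalues of rho (sy x sy) rho* (sy x sy)."""
        sy = np.array([[0, -1j], [1j, 0]])
        R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
        lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    # maximally entangled Bell state: concurrence ~ 1.0
    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    print(concurrence(np.outer(psi, psi.conj())))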
Abstract:
The diffusion of astrophysical magnetic fields in conducting fluids in the presence of turbulence depends on whether magnetic fields can change their topology via reconnection in highly conducting media. Recent progress in understanding fast magnetic reconnection in the presence of turbulence indicates that the magnetic field behavior in computer simulations and in turbulent astrophysical environments is similar, as far as magnetic reconnection is concerned. This makes it meaningful to perform MHD simulations of turbulent flows in order to understand the diffusion of magnetic fields in astrophysical environments. Our studies of magnetic field diffusion in turbulent media reveal interesting new phenomena. First of all, our three-dimensional MHD simulations initiated with anti-correlated magnetic field and gaseous density exhibit at later times a de-correlation of the magnetic field and density, which corresponds well to observations of the interstellar medium. While earlier studies stressed the role of either ambipolar diffusion or time-dependent turbulent fluctuations in de-correlating magnetic field and density, we obtain a permanent de-correlation with a one-fluid code, i.e., without invoking ambipolar diffusion. In addition, in the presence of gravity and turbulence, our three-dimensional simulations show a decrease of the magnetic flux-to-mass ratio as the gaseous density at the center of the gravitational potential increases. We observe this effect both when we start with equilibrium distributions of gas and magnetic field and when we follow the evolution of collapsing, dynamically unstable configurations. Thus, the process of turbulent magnetic field removal should be applicable both to quasi-static subcritical molecular clouds and cores and to violently collapsing supercritical entities. The increase of the gravitational potential as well as of the magnetization of the gas increases the segregation of mass and magnetic flux in the saturated final state of the simulations, supporting the notion that the reconnection-enabled diffusivity relaxes the magnetic field + gas system in the gravitational field to its minimal energy state. This effect is expected to play an important role in star formation, from its initial stages of concentrating interstellar gas to the final stages of accretion onto the forming protostar. In addition, we benchmark our codes by studying heat transfer in magnetized compressible fluids and confirm the high rates of turbulent advection of heat obtained in an earlier study.
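For reference, the flux-to-mass evolution reported here is conventionally discussed in terms of the mass-to-flux ratio normalized by its critical value for gravitational collapse (a standard convention; the paper's normalization may differ):

    \lambda \equiv \frac{M/\Phi}{(M/\Phi)_{\mathrm{crit}}},
    \qquad (M/\Phi)_{\mathrm{crit}} \simeq \frac{1}{2\pi\sqrt{G}} ,

with λ < 1 (subcritical) clouds magnetically supported against collapse and λ > 1 (supercritical) clouds free to collapse.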
Abstract:
To understand the recent Brookhaven National Laboratory experiment E788 on 4ΛHe, we have outlined a simple theoretical framework, based on the independent-particle shell model, for the one-nucleon-induced nonmesonic weak decay spectra. Basically, the shapes of all the spectra are tailored by the kinematics of the corresponding phase space, depending very weakly on the dynamics, which is gauged here by the one-meson-exchange potential. In spite of the straightforwardness of the approach, good agreement with data is achieved. This might be an indication that the final-state interactions and the two-nucleon-induced processes are not very important in the decay of this hypernucleus. We have also found that the π + K exchange potential with soft vertex form-factor cutoffs (Λπ ≈ 0.7 GeV, ΛK ≈ 0.9 GeV) is able to account simultaneously for the available experimental data related to Γp and Γn for 4ΛH, 4ΛHe, and 5ΛHe.
Abstract:
We study the effects of final-state interactions in two-proton emission by nuclei. Our approach is based on the solution of the time-dependent Schrödinger equation. We show that the final relative energy between the protons is substantially influenced by the final-state interactions. We also show that alternative correlation functions can be constructed showing large sensitivity to the spin of the diproton system.
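A generic sketch of the split-operator technique commonly used to integrate the time-dependent Schrödinger equation on a grid (an illustrative example with ħ = m = 1, not the authors' code): a 1D Gaussian wave packet is propagated across a square barrier.

    import numpy as np

    # grid and initial Gaussian wave packet with mean momentum k0 = 1 (hbar = m = 1)
    N, L, dt = 2048, 200.0, 0.05
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    psi = np.exp(-(x + 40) ** 2 / 10 + 1j * 1.0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

    V = np.where(np.abs(x) < 1.0, 0.6, 0.0)   # square barrier, height 0.6

    # Strang splitting: exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2) per step
    expV = np.exp(-0.5j * V * dt)
    expT = np.exp(-0.5j * k ** 2 * dt)
    for _ in range(2000):
        psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

    dx = L / N
    print("transmitted fraction:", np.sum(np.abs(psi[x > 1.0]) ** 2) * dx)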