943 results for Monte-Carlo simulation, Rod-coil block copolymer, Tetrapod polymer mixture


Relevance:

100.00%

Publisher:

Abstract:

Structural durability is an important criterion that must be evaluated for every type of structure. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability, especially when these structures are built in aggressive atmospheres. Chloride ingress triggers the corrosion of the reinforcement; therefore, by modelling this phenomenon, the corrosion process, and hence the structural durability, can be better evaluated. Corrosion begins when a threshold chloride concentration is reached at the steel reinforcement bars. Despite the robustness of several models proposed in the literature, deterministic approaches fail to predict the corrosion initiation time accurately, owing to the inherent randomness of the process. In this regard, structural durability can be represented more realistically using probabilistic approaches. This paper addresses the probabilistic analysis of corrosion initiation time in reinforced concrete structures exposed to chloride penetration. Chloride penetration is modelled using Fick's diffusion law, which simulates the chloride diffusion process while accounting for time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the first-order reliability method, with a direct coupling approach. Several examples are presented to study these phenomena. Moreover, a simplified method is proposed to determine optimal values for the concrete cover.
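The approach described above can be sketched numerically. Under the common simplifying assumptions of a semi-infinite medium and a constant surface concentration, Fick's second law has the closed-form solution C(x,t) = C_s [1 - erf(x / (2 sqrt(D t)))], and the initiation probability follows from sampling the random inputs. All distributions and parameter values below are illustrative placeholders, not figures from the paper:

```python
import math
import random

def chloride_conc(x_cm, t_years, c_s, d_cm2yr):
    """Fick's 2nd-law solution for a semi-infinite medium with
    constant surface concentration c_s."""
    return c_s * (1.0 - math.erf(x_cm / (2.0 * math.sqrt(d_cm2yr * t_years))))

def prob_corrosion_initiated(t_years, n_samples=100_000, seed=1):
    """Crude Monte Carlo estimate of P[C(cover, t) >= C_th].
    All distributions below are illustrative, not from the paper."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        cover = max(rng.gauss(4.0, 0.5), 0.5)    # concrete cover [cm]
        d     = max(rng.gauss(0.6, 0.1), 1e-3)   # diffusion coeff. [cm^2/yr]
        c_s   = max(rng.gauss(0.9, 0.1), 0.0)    # surface chloride [% wt.]
        c_th  = max(rng.gauss(0.4, 0.05), 1e-3)  # threshold [% wt.]
        if chloride_conc(cover, t_years, c_s, d) >= c_th:
            failures += 1
    return failures / n_samples

pf_10 = prob_corrosion_initiated(10.0)   # probability after 10 years
pf_50 = prob_corrosion_initiated(50.0)   # probability after 50 years
```

The estimated probability grows with exposure time; this is the quantity one would compare against a target reliability level when selecting the concrete cover.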

Relevance:

100.00%

Publisher:

Abstract:

Oregano is a plant rich in essential oil and widely used as a spice in food preparation. The objective of this paper was to analyze the viability of irrigating oregano in Presidente Prudente, São Paulo state, Brazil, including economic risk factors, their effect on the total cost of irrigation, and the different types of pumping. Monte Carlo simulation was used to study the economic factors: fixed cost, labor, maintenance, pumping, and water. The use of irrigation for oregano in the region of Presidente Prudente is recommended because of its economic feasibility and reduced risk. The average benefit/cost ratios for all water depths tested were higher than 1, indicating viability. Irrigation carried lower risk than the non-irrigated crop. The micro-irrigation system showed the greatest sensitivity to changes in equipment prices combined with variation in the useful life of the system. The oregano selling price was the most important factor for annual net profit, while the cost of water had the least influence on total cost. Owing to the high application frequency characteristic of drip irrigation, there was no difference between the time-of-use electricity tariffs classified as green and blue, which apply different rates to energy consumption and demand according to the hour of the day and the time of year. For the studied region, drip irrigation management of oregano with daily application of 100% of Class A pan evaporation, using an electric motor under either the blue or the green tariff, is recommended.
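As an illustration of how such an economic risk analysis works, the sketch below samples uncertain cost components and revenue and estimates the distribution of the benefit/cost ratio. Every distribution, price, and yield is a hypothetical placeholder, not a figure from the study:

```python
import random

def benefit_cost_samples(n=50_000, seed=7):
    """Monte Carlo over uncertain annual cost components and revenue
    (all distributions are hypothetical placeholders)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n):
        fixed   = rng.uniform(800, 1200)   # annualized equipment cost
        labor   = rng.uniform(150, 300)
        maint   = rng.uniform(100, 250)
        pumping = rng.uniform(200, 400)    # energy for pumping
        water   = rng.uniform(30, 80)      # least-influential factor
        total_cost = fixed + labor + maint + pumping + water
        yield_kg = max(rng.gauss(900, 120), 0.0)  # irrigated yield
        price    = max(rng.gauss(4.0, 0.8), 0.0)  # selling price (dominant)
        ratios.append(yield_kg * price / total_cost)
    return ratios

ratios = benefit_cost_samples()
mean_bc = sum(ratios) / len(ratios)                 # average B/C ratio
p_viable = sum(r > 1.0 for r in ratios) / len(ratios)  # P[B/C > 1]
```

A mean ratio above 1 together with a high P[B/C > 1] is the kind of evidence the study uses to call irrigation economically viable and low-risk.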

Relevance:

100.00%

Publisher:

Abstract:

Polynomial Chaos Expansion (PCE) is widely recognized as a flexible tool for representing different types of random variables and processes. However, applications to real, experimental data are still limited. In this article, PCE is used to represent the random time evolution of metal corrosion growth in marine environments. The PCE coefficients are determined so as to represent the data of 45 corrosion coupons tested by Jeffrey and Melchers (2001) at Taylors Beach, Australia. The accuracy of the representation and the possibilities for model extrapolation are considered in the study. Results show that reasonably accurate smooth representations of the corrosion process can be obtained; the accuracy is limited mainly because a smooth model is used to represent non-smooth corrosion data. Random corrosion leads to time-variant reliability problems, due to resistance degradation over time, and such problems are not trivial to solve, especially under random process loading. Two example problems are solved herein, showing how the developed PCE representations can be employed in the reliability analysis of structures subject to marine corrosion. Monte Carlo simulation is used to solve the resulting time-variant reliability problems; an accurate and more computationally efficient solution is also presented.
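To make the idea of a PCE representation concrete, the sketch below estimates Hermite-chaos coefficients of a scalar lognormal variable by Monte Carlo projection and checks them against the known closed form; the corrosion application expands time-dependent coefficients in the same basis. This is a generic illustration, not the authors' procedure:

```python
import math
import random

def herm(i, x):
    """Probabilists' Hermite polynomials He_i(x), via the recurrence
    He_i = x*He_{i-1} - (i-1)*He_{i-2}."""
    if i == 0:
        return 1.0
    if i == 1:
        return x
    return x * herm(i - 1, x) - (i - 1) * herm(i - 2, x)

def pce_coeffs(sample_y, order, n=200_000, seed=3):
    """Estimate PCE coefficients y_i = E[Y He_i(xi)] / i! by Monte
    Carlo projection, for Y = sample_y(xi) with xi ~ N(0,1).
    (E[He_i(xi)^2] = i! gives the normalization.)"""
    rng = random.Random(seed)
    sums = [0.0] * (order + 1)
    for _ in range(n):
        xi = rng.gauss(0.0, 1.0)
        y = sample_y(xi)
        for i in range(order + 1):
            sums[i] += y * herm(i, xi)
    return [s / n / math.factorial(i) for i, s in enumerate(sums)]

# lognormal test case: exact coefficients are exp(s^2/2) * s^i / i!
s = 0.3
coeffs = pce_coeffs(lambda xi: math.exp(s * xi), order=3)
exact = [math.exp(s * s / 2) * s ** i / math.factorial(i) for i in range(4)]
```

The rapid decay of the coefficients with order is what makes a low-order PCE an economical surrogate in the subsequent reliability analysis.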

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the probabilistic analysis of corrosion initiation time in reinforced concrete structures exposed to chloride ion penetration. Structural durability is an important criterion that must be evaluated for every type of structure, especially when these structures are built in aggressive atmospheres. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability; by modelling this phenomenon, corrosion of the reinforcement can be better estimated and prevented. Corrosion begins when a threshold chloride concentration is reached at the steel reinforcement bars. Despite the robustness of several models proposed in the literature, deterministic approaches fail to predict the corrosion initiation time accurately, owing to the inherent randomness of the process. In this regard, durability can be represented more realistically using probabilistic approaches. A probabilistic analysis of chloride ion penetration is presented in this paper. The penetration is simulated using Fick's second law of diffusion, which represents the chloride diffusion process while accounting for time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the First Order Reliability Method (FORM) with a direct coupling approach. Several examples are presented to study these phenomena, and a simplified method is proposed to determine optimal values for the concrete cover.

Relevance:

100.00%

Publisher:

Abstract:

Abstract Background HBV genotype F is primarily found in indigenous populations from South America and is classified into four subgenotypes (F1 to F4). Subgenotype F2a is the most common among genotype F cases in Brazil. The aim of this study was to characterize HBV genotype F2a circulating in 16 patients from São Paulo, Brazil. Samples were collected between 2006 and 2012 and sent to Hospital Israelita Albert Einstein. A fragment of 1306 bp partially comprising the HBsAg and DNA polymerase coding regions was amplified and sequenced. Viral sequences were genotyped by phylogenetic analysis using reference sequences from GenBank (n=198), including 80 classified as subgenotype F2a. Bayesian Markov chain Monte Carlo simulation implemented in BEAST v.1.5.4 was applied to obtain the best possible estimates using the GTR+G+I model of nucleotide substitution. Findings Three groups of subgenotype F2a sequences were identified: 1) 10 sequences from São Paulo state; 2) 3 sequences from Rio de Janeiro state and one from São Paulo state; 3) 8 sequences from the West Amazon Basin. Conclusions These results show for the first time the distribution of subgenotype F2a in Brazil. Studying the spread and dynamics of subgenotype F2a in Brazil will require a larger number of samples from different regions, as the subgenotype is found in almost all Brazilian populations studied so far. We cannot infer the origin of these different groups with certainty owing to the lack of available sequences. Nevertheless, our data suggest that the common origin of these groups probably dates back a long time.

Relevance:

100.00%

Publisher:

Abstract:

There is special interest in incorporating metallic nanoparticles into a surrounding dielectric matrix to obtain composites with desirable characteristics such as surface plasmon resonance, useful in photonics and sensing, and controlled surface electrical conductivity. We investigated nanocomposites produced by metallic ion implantation into insulating substrates, where the implanted metal self-assembles into nanoparticles. During implantation, the excess of metal atom concentration above the solubility limit leads to nucleation and growth of metal nanoparticles, driven by the temperature and temperature gradients within the implanted sample, including the beam-induced thermal characteristics. The nanoparticles nucleate near the maximum of the implantation depth profile (the projected range), which can be estimated by computer simulation using TRIDYN, a Monte Carlo simulation program based on the TRIM (Transport and Range of Ions in Matter) code that accounts for compositional changes in the substrate due to two factors: previously implanted dopant atoms and sputtering of the substrate surface. Our study suggests that the nanoparticles form a two-dimensional array buried a few nanometers below the substrate surface. Specifically, we studied the Au/PMMA (polymethylmethacrylate), Pt/PMMA, Ti/alumina, and Au/alumina systems. Transmission electron microscopy of the implanted samples showed the metallic nanoparticles formed in the insulating matrix. The nanocomposites were characterized by measuring the resistivity of the composite layer as a function of the implanted dose. These experimental results were compared with a model based on percolation theory, in which electron transport through the composite is explained by conduction through a random resistor network formed by the metallic nanoparticles. Excellent agreement was found between the experimental results and the predictions of the theory.
In all cases it was possible to conclude that conduction is due solely to percolation (i.e., to conducting elements in geometric contact) and that the contribution of tunneling conduction is negligible.
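The percolation picture invoked above can be illustrated with a minimal site-percolation Monte Carlo: occupied sites stand in for metallic nanoparticles, and the layer conducts once an occupied cluster geometrically spans it. The grid size, occupation probabilities, and trial counts below are arbitrary illustrative choices:

```python
import random

def spans(grid_n, p, rng):
    """One realization of 2-D site percolation on an n x n grid:
    does an occupied cluster connect the left edge to the right edge?"""
    occ = [[rng.random() < p for _ in range(grid_n)] for _ in range(grid_n)]
    seen = [[False] * grid_n for _ in range(grid_n)]
    stack = [(r, 0) for r in range(grid_n) if occ[r][0]]
    for r, _ in stack:
        seen[r][0] = True
    while stack:  # depth-first flood fill from the left edge
        r, c = stack.pop()
        if c == grid_n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid_n and 0 <= cc < grid_n \
                    and occ[rr][cc] and not seen[rr][cc]:
                seen[rr][cc] = True
                stack.append((rr, cc))
    return False

def spanning_probability(p, grid_n=40, trials=200, seed=11):
    rng = random.Random(seed)
    return sum(spans(grid_n, p, rng) for _ in range(trials)) / trials

low = spanning_probability(0.45)   # below the site threshold p_c ~ 0.593
high = spanning_probability(0.75)  # above it
```

The sharp rise of the spanning probability between the two occupation levels mirrors the onset of conductivity at the percolation threshold observed in the resistivity-versus-dose measurements.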

Relevance:

100.00%

Publisher:

Abstract:

In this work, we have used a combination of atomistic simulation methods to explore the effects of confinement on water molecules between silica surfaces. First, the mechanical properties of water severely confined (~3 Å) between two silica (alpha-quartz) surfaces were determined from first-principles calculations within density functional theory (DFT). Simulated annealing methods were employed because of the complex potential energy surface and the difficulty of avoiding local minima. Our results suggest that much of the stiffness of the material (46%) remains even after the insertion of a water monolayer into the silica. Second, in order to access time scales typical of confined systems, classical molecular dynamics was used to determine the dynamical properties of water confined in cylindrical silica pores with diameters from 10 to 40 Å. In this case we varied the passivation of the silica surface from 13% to 100% SiOH, with the other terminations being SiOH2 and SiOH3; the distribution of the different terminations was obtained with a Monte Carlo simulation. The simulations indicate a lowering of the diffusion coefficient as the diameter decreases, due to the structuring of the hydrogen bonds of the water molecules; we have also obtained the density profiles of the confined water and the interfacial tension.

Relevance:

100.00%

Publisher:

Abstract:

Master's degree in Intelligent Systems and Numerical Applications in Engineering (Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, SIANI)

Relevance:

100.00%

Publisher:

Abstract:

The inherent stochastic character of most physical quantities involved in engineering models has led to ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed; however, it is widely acknowledged that the only universal method available to solve any kind of stochastic mechanics problem accurately is Monte Carlo simulation. One of the key parts of implementing this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis, an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability distribution. Its computational efficiency and robustness are also very good, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined, and desired properties of "translation fields", including crossing rates and distributions of extremes. The second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In cases of missing data, and of damage affecting only a small portion of a bar, Genetic Algorithms have proved to be an effective tool for solving the problem.
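A one-dimensional sketch of the "translation field" idea: generate a stationary Gaussian sample by the spectral-representation (random-phase) method, then map its marginal through the inverse CDF of the target distribution. The PSD, discretization, and exponential target marginal below are illustrative and far simpler than the multi-dimensional, multi-variate algorithm developed in the thesis:

```python
import math
import random

def gaussian_process_sample(n, dx, psd, n_waves=256, rng=None):
    """Spectral-representation sample of a zero-mean, unit-variance
    stationary Gaussian process (random amplitudes fixed, phases random)."""
    rng = rng or random.Random()
    dw = 0.05
    ws = [(k + 0.5) * dw for k in range(n_waves)]
    raw = [psd(w) * dw for w in ws]
    norm = math.sqrt(sum(raw))               # normalize to unit variance
    amps = [math.sqrt(2.0 * a) / norm for a in raw]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in ws]
    return [sum(a * math.cos(w * i * dx + ph)
                for a, w, ph in zip(amps, ws, phases)) for i in range(n)]

def translate(g, inv_cdf):
    """Translation field: push the Gaussian marginal through the
    target inverse CDF, point by point."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return [inv_cdf(phi(x)) for x in g]

rng = random.Random(5)
g = gaussian_process_sample(2000, 0.1, lambda w: math.exp(-w * w), rng=rng)
# target marginal: exponential with mean 1 -> inverse CDF is -ln(1-u)
f = translate(g, lambda u: -math.log(max(1.0 - u, 1e-12)))
mean_f = sum(f) / len(f)
```

The mapped sample keeps the Gaussian field's dependence structure while acquiring the prescribed (here exponential, hence non-negative) marginal, which is the defining property of a translation field.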

Relevance:

100.00%

Publisher:

Abstract:

This thesis focuses on the LIBOR Market Model for interest rates. The appendices provide the necessary mathematical theory. In the main chapters we first define the principal interest rates and the financial instruments relevant to interest rate models; we then set up the LIBOR market model, demonstrate its existence, derive the dynamics of the forward LIBOR rates, and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR rates. This model too is justified by a theoretical demonstration, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. We therefore derive various analytical approximation formulae for pricing swaptions in the LIBOR market model, and we explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to the markets for both caps and swaptions, together with several examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
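The consistency between the LIBOR market model and Black's cap formula mentioned above can be checked numerically: under its own forward measure, a forward LIBOR rate is a driftless lognormal martingale, so a Monte Carlo caplet price should reproduce Black's formula. The market inputs below (rate, strike, volatility, accrual, discount factor) are arbitrary illustrative values:

```python
import math
import random

def black_caplet(f0, k, sigma, t, tau, df):
    """Black's formula for a caplet on a forward LIBOR rate with
    expiry t, accrual tau, and discount factor df to payment."""
    d1 = (math.log(f0 / k) + 0.5 * sigma * sigma * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return df * tau * (f0 * n(d1) - k * n(d2))

def mc_caplet(f0, k, sigma, t, tau, df, n_paths=200_000, seed=9):
    """Monte Carlo under the forward measure: the forward rate is a
    driftless lognormal martingale, F_t = F_0 exp(-s^2 t/2 + s sqrt(t) Z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        f_t = f0 * math.exp(-0.5 * sigma * sigma * t + sigma * math.sqrt(t) * z)
        total += max(f_t - k, 0.0)
    return df * tau * total / n_paths

p_black = black_caplet(0.03, 0.03, 0.2, 2.0, 0.5, 0.95)
p_mc = mc_caplet(0.03, 0.03, 0.2, 2.0, 0.5, 0.95)
```

For the full model with many forward rates, the simulation must additionally carry the measure-dependent drift terms; the single-rate case above is the special situation in which no drift correction is needed.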

Relevance:

100.00%

Publisher:

Abstract:

The AMANDA-II detector is designed primarily for the directionally resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective rise in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae and the experimental data of supernova SN1987A were studied. In addition, the sensitivities of the optical modules were redetermined. For this purpose, the energy losses of charged particles in the South Polar ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules with unsuitable behavior happens in real time; however, the behavior of the modules must be judged from buffered data for this purpose, so the analysis of the data from the qualified modules cannot take place without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals for several minutes for later evaluation.
Since the noise-rate data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be chosen freely in units of 500 ms. In the course of this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s, and one with a time base of 10 s. This maximizes the sensitivity for signals with a characteristic exponential decay time of 3 s while preserving good sensitivity over a wide range of exponential decay times. These analyses were investigated in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. Based on the measured data, the expected signals from supernovae were simulated. From a comparison between this simulation, the measured data of the years 2000 to 2003, and the simulation of the expected statistical background, it can be concluded at a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of expected events from the statistical background at this level is less than one in a million; nevertheless, one such event was measured. With the chosen significance threshold, 74% of all potential supernova progenitor stars in the Galaxy are monitored. In combination with the last result published by the AMANDA collaboration, an upper limit of only 2.6 supernovae per year is obtained.
Within the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate increase before an alert about the detection of a supernova candidate is sent. The monitored fraction of stars in the Galaxy is then 81%, but the false-alarm rate rises to about 2 events per week. The alert messages are transmitted to the northern hemisphere via an Iridium modem and are soon to contribute to SNEWS, the worldwide network for the early detection of supernovae.
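A schematic of the detection principle: the summed optical-module noise rate is binned on the analysis time base, and a supernova candidate is flagged when a bin exceeds the background expectation by the required number of standard deviations. The background rate and signal amplitude below are illustrative stand-ins, not AMANDA-II values; only the 3 s decay time and the 7.4-sigma threshold are taken from the text:

```python
import math
import random

def simulate_counts(n_bins, bin_s, mu_hz, t0_s, amp_hz, tau_s, rng):
    """Summed noise counts per analysis bin: Poisson background
    (Gaussian approximation, counts are large) plus, from t0_s on,
    a signal rate amp_hz * exp(-(t - t0_s)/tau_s) integrated per bin."""
    counts = []
    for i in range(n_bins):
        t = i * bin_s
        lam = mu_hz * bin_s
        if t >= t0_s:
            a = t - t0_s
            lam += amp_hz * tau_s * (math.exp(-a / tau_s)
                                     - math.exp(-(a + bin_s) / tau_s))
        counts.append(rng.gauss(lam, math.sqrt(lam)))
    return counts

def max_significance(counts, mu_hz, bin_s):
    """Largest upward deviation, in standard deviations, from the
    known background expectation."""
    lam = mu_hz * bin_s
    sigma = math.sqrt(lam)
    return max((c - lam) / sigma for c in counts)

rng = random.Random(2)
MU_HZ = 300_000.0  # illustrative summed noise rate, not AMANDA's
bg = simulate_counts(200, 0.5, MU_HZ, 1e9, 0.0, 3.0, rng)        # no burst
sn = simulate_counts(200, 0.5, MU_HZ, 30.0, 20_000.0, 3.0, rng)  # burst at 30 s
s_bg = max_significance(bg, MU_HZ, 0.5)
s_sn = max_significance(sn, MU_HZ, 0.5)
```

Running the same counts through several time bases (0.5 s, 4 s, 10 s) and keeping the maximum significance is what the multi-time-base search does to stay sensitive over a range of decay times.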

Relevance:

100.00%

Publisher:

Abstract:

The three-spectrometer facility at the Mainz Institut für Kernphysik has been extended by an additional spectrometer, which is distinguished by its short construction length and is therefore called the Short-Orbit Spectrometer (SOS). At the nominal distance of the SOS from the target (66 cm), the detected particles travel a mean path length of 165 cm between the reaction point and the detector. For pion production near threshold, this raises the survival probability of charged pions with a momentum of 100 MeV/c from 15% to 73% compared with the large spectrometers. Consequently, the systematic error ("muon contamination"), for instance in the planned measurement of the weak form factors G_A(Q²) and G_P(Q²), is significantly reduced. The focus of this work is the drift chamber of the SOS. Its low mass per unit area (0.03% X_0), chosen to reduce small-angle scattering, is optimized for the detection of low-energy pions. Owing to the novel geometry of the detector, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient method for calibrating the drift-distance/drift-time relation, which is represented by cubic splines, was implemented. The resolution of the tracking detector is 76 µm for the position coordinate and 0.23° for the angular coordinate in the dispersive plane (most probable error), and correspondingly 110 µm and 0.29° in the non-dispersive plane. To map the detector coordinates back to the reaction point, the inverse transfer matrix of the spectrometer was determined. For this purpose, electrons quasielastically scattered off protons in the ¹²C nucleus were used, whose initial angles were defined by a sieve collimator. This yields experimental values for the mean angular resolution at the target of sigma_phi = 1.3 mrad and sigma_theta = 10.6 mrad.
Since the momentum calibration of the SOS can only be carried out by means of quasielastic scattering (a two-arm experiment), the contribution of the proton arm to the width of the missing-mass peak has to be estimated in a Monte Carlo simulation and deconvolved. For now, it can only be estimated that the momentum resolution is certainly better than 1%.
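The survival probabilities quoted above follow directly from the relativistic decay law P = exp(-L / (γβ cτ)). A quick check with PDG values for the charged pion (m ≈ 139.57 MeV/c², cτ ≈ 7.80 m) reproduces the 73% figure for the 165 cm flight path of the SOS; the ~10.6 m path used below for the large spectrometers is inferred from the quoted 15%, not stated in the text:

```python
import math

M_PI = 139.570   # charged-pion mass [MeV/c^2] (PDG)
C_TAU = 780.45   # charged-pion c*tau [cm] (PDG)

def survival_probability(p_mev, path_cm):
    """Probability that a charged pion of momentum p traverses path_cm
    before decaying: exp(-L / (gamma*beta*c*tau)), gamma*beta = p/(m c)."""
    gamma_beta = p_mev / M_PI
    return math.exp(-path_cm / (gamma_beta * C_TAU))

p_sos = survival_probability(100.0, 165.0)    # SOS flight path
p_long = survival_probability(100.0, 1060.0)  # assumed ~10.6 m path
```

The two values (~74% and ~15%) match the abstract's comparison and quantify why the short orbit suppresses the muon contamination from in-flight pion decay.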

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we consider three different models for strongly correlated electrons, namely a multi-band Hubbard model as well as the spinless Falicov-Kimball model, both with a semi-elliptical density of states in the limit of infinite dimensions d, and the attractive Hubbard model on a square lattice in d=2. In the first part, we study a two-band Hubbard model with unequal bandwidths and anisotropic Hund's rule coupling (J_z-model) in the limit of infinite dimensions within the dynamical mean-field theory (DMFT). Here, the DMFT impurity problem is solved using quantum Monte Carlo (QMC) simulations. Our main result is that the J_z-model describes the occurrence of an orbital-selective Mott transition (OSMT), in contrast to earlier findings. We investigate the model with a high-precision DMFT algorithm, developed as part of this thesis, which supplements QMC with a high-frequency expansion of the self-energy. The main advantage of this scheme is the extraordinary accuracy of the numerical solutions, which can be obtained already with moderate computational effort, so that studies of multi-orbital systems within DMFT+QMC are strongly improved. We also found that a suitably defined Falicov-Kimball (FK) model exhibits an OSMT, revealing the close connection of the Falicov-Kimball physics to the J_z-model in the OSM phase. In the second part of this thesis we study the attractive Hubbard model in two spatial dimensions within second-order self-consistent perturbation theory. This model is considered on a square lattice at finite doping and at low temperatures. Our main result is that the predictions of first-order perturbation theory (Hartree-Fock approximation) are renormalized by a factor of order unity even at arbitrarily weak interaction (U -> 0). The renormalization factor q can be evaluated as a function of the filling n for 0 < n < 1. In the limit n -> 0, the q-factor vanishes, signaling the divergence of self-consistent perturbation theory in this limit.
Thus we present the first asymptotically exact results at weak-coupling for the negative-U Hubbard model in d=2 at finite doping.

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v9.3) toolkit, has been developed, and validation tests have been performed. We used LVDG4 to determine the active vetoing and shielding power of LVD, in order to evaluate the feasibility of hosting a dark matter detector in its innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a great number of neutrons near the core. The second conclusion is that if LVD is used as an active veto for muons, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for Monte Carlo simulations of neutron production in the heavy materials that are often used as shields in low-background experiments.

Relevance:

100.00%

Publisher:

Abstract:

The radiative decay of a hyperon into a lighter hyperon and a photon allows the structure of the electroweak interaction of hadrons to be probed. For this purpose, the decay asymmetry $\alpha$ is considered. It describes the distribution of the daughter hyperon with respect to the polarization $\vec{P}$ of the parent hyperon via $dN / d\cos\Theta \propto 1 + \alpha |\vec{P}| \cos\Theta$, where $\Theta$ is the angle between $\vec{P}$ and the momentum of the daughter hyperon. Of particular interest is the radiative decay $\Xi^0 \to \Lambda \gamma$, for which all calculations at the quark level predict a positive asymmetry, whereas a negative asymmetry of $\alpha_{\Lambda\gamma} = -0.73 \pm 0.17$ had been measured so far. The aim of this work was to check the previous measurements and to determine the asymmetry with significantly higher precision. Furthermore, the decay asymmetry of the radiative decay $\Xi^0 \to \Sigma^0 \gamma$ was determined, and the well-known decay $\Xi^0 \to \Lambda \pi^0$ was used to test the analysis method employed. During the 2002 data-taking period, the NA48/1 experiment at CERN specifically recorded rare $K_S$ and hyperon decays. This yielded the world's largest sample of $\Xi^0$ decays, from which about 52,000 $\Xi^0 \to \Lambda \gamma$ decays, 15,000 $\Xi^0 \to \Sigma^0 \gamma$ decays, and 4 million $\Xi^0 \to \Lambda \pi^0$ decays were extracted with only small background. The corresponding $\bar{\Xi}^0$ decays were likewise recorded, with about one tenth of the above event counts.
The decay asymmetries were determined by comparing the measured data with a detailed detector simulation, leading to the following results of this work: $\alpha_{\Lambda\gamma} = -0.701 \pm 0.019_{stat} \pm 0.064_{sys}$, $\alpha_{\Sigma^0\gamma} = -0.683 \pm 0.032_{stat} \pm 0.077_{sys}$, $\alpha_{\Lambda\pi^0} = -0.439 \pm 0.002_{stat} \pm 0.056_{sys}$, $\alpha_{\bar\Lambda\gamma} = 0.772 \pm 0.064_{stat} \pm 0.066_{sys}$, $\alpha_{\bar\Sigma^0\gamma} = 0.811 \pm 0.103_{stat} \pm 0.135_{sys}$, $\alpha_{\bar\Lambda\pi^0} = 0.451 \pm 0.005_{stat} \pm 0.057_{sys}$. The uncertainty of the $\Xi^0 \to \Lambda\gamma$ decay asymmetry was thus reduced to about one third. Its negative sign, and hence the contradiction with the predictions of the quark-model calculations, is thereby confirmed beyond doubt. With the $\bar{\Xi}^0$ asymmetries measured here for the first time, limits could additionally be set on a possible CP violation in $\Xi^0$ decays, which would imply $\alpha_{\Xi^0} \neq -\alpha_{\bar{\Xi}^0}$.
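The asymmetry extraction can be illustrated with a toy Monte Carlo: generate cos Θ from dN/dcos Θ ∝ 1 + α|P| cos Θ and recover α from the first moment, E[cos Θ] = α|P|/3. The polarization value below is a hypothetical placeholder, and the real analysis compares data with a full detector simulation rather than using this moment estimator:

```python
import math
import random

def sample_costheta(alpha, pol, n, rng):
    """Accept-reject sampling from dN/dcos(theta) ∝ 1 + alpha*|P|*cos(theta)."""
    out = []
    fmax = 1.0 + abs(alpha * pol)  # envelope of the (unnormalized) pdf
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) <= 1.0 + alpha * pol * c:
            out.append(c)
    return out

def estimate_alpha(costhetas, pol):
    """Moment estimator: for this pdf, E[cos(theta)] = alpha*|P|/3."""
    return 3.0 * (sum(costhetas) / len(costhetas)) / pol

rng = random.Random(4)
POL = 0.4  # hypothetical |P|, for illustration only
data = sample_costheta(-0.70, POL, 52_000, rng)  # ~52k Lambda-gamma decays
alpha_hat = estimate_alpha(data, POL)
```

With 52,000 events, the statistical spread of this toy estimator comes out of the same order as the quoted statistical uncertainty on $\alpha_{\Lambda\gamma}$.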