907 results for PRECISION EXPERIMENTS
Abstract:
Along with its mass, the lifetime is one of the principal characteristics of a particle. The mean lifetime of the Xi0 hyperon, which can be predicted theoretically from the mean lifetime of the Xi- hyperon via the Delta I = 1/2 rule, has already been determined experimentally several times. The most recent measurement, from 1977, however, carries a relative uncertainty of 5%, which can be improved considerably with data from newer experiments. The mean lifetime is an important parameter in the determination of the matrix element Vus of the Cabibbo-Kobayashi-Maskawa matrix in semileptonic Xi0 decays. In 2002, a high-intensity data-taking run was carried out with the NA48 detector, during which, among other data, about 10^9 Xi0 decay candidates were recorded. Of these, 192,000 events of the type Xi0 -> Lambda pi0 were reconstructed in this work, and 107,000 events were used to determine the mean lifetime by comparison with simulated events. To avoid systematic errors, the lifetime was determined in ten energy intervals by comparing measured and simulated data. The result is considerably more precise than previous measurements and deviates from the literature value (tau = (2.90 +- 0.09) * 10^(-10) s) by (+4.99 +- 0.50(stat) +- 0.58(syst))%, corresponding to 1.7 standard deviations. The lifetime is found to be tau = (3.045 +- 0.015(stat) +- 0.017(syst)) * 10^(-10) s. In the same way, the lifetime of the Anti-Xi0 hyperon could be measured for the first time with the available data, yielding tau = (3.042 +- 0.045(stat) +- 0.017(syst)) * 10^(-10) s.
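The size of the quoted deviation can be checked directly from the two lifetime values (a consistency check only; the abstract's own error combination is not reproduced here):

```latex
\frac{\tau_{\mathrm{meas}}}{\tau_{\mathrm{lit}}} - 1
  = \frac{3.045 \times 10^{-10}\,\mathrm{s}}{2.90 \times 10^{-10}\,\mathrm{s}} - 1
  \approx +5.0\,\%
```

in agreement with the quoted central value of (+4.99 +- 0.50(stat) +- 0.58(syst))%.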
Abstract:
The present dissertation focuses on the measurement of non-methane organic carbon compounds (NMOC) and their exchange in biosphere-atmosphere interactions. To assess the accuracy, precision, and reproducibility of NMOC analysis, two intercomparison experiments were carried out during the present study. These experiments comprised the sampling of NMOCs on graphitised carbon blacks followed by gas-chromatographic analysis, as well as the sampling of short-chain carbonyl compounds on solid-phase extraction cartridges and their analysis by high-pressure liquid chromatography. To investigate the exchange of NMOCs between vegetation and the atmosphere, plant enclosure studies were performed on two European deciduous tree species. These measurements were conducted during two consecutive summer seasons by applying the above-specified techniques to sunlit and shaded leaves of European beech (Fagus sylvatica L., a monoterpene emitter) and sunlit leaves of English oak (Quercus robur L., an isoprene emitter). Owing to its broad geographical distribution, the impact of European beech on the European monoterpene budget was characterized by a model simulation. In addition, an instrument was developed that is capable of measuring the total amount of NMOC exchanged in biosphere-atmosphere interactions. The instrument was tested under laboratory conditions and was evaluated against an independent method in branch enclosure measurements.
Abstract:
There is hardly a more precise description of nature than that provided by the Standard Model of elementary particles (SM). With few exceptions, it is able to describe the physics of the matter and gauge fields. Nevertheless, one seeks a more comprehensive theory that, for example, also incorporates gravity, describes neutrino oscillations, and settles further open questions. To come a step closer to such a theory, the present work employs an effective power-series ansatz to describe the physics of the Standard Model and of new phenomena. New physics is parameterized by a mass parameter and a set of new coupling constants. At lowest order one recovers the familiar SM; higher-order terms in the coupling constants describe effects beyond the SM. Certain symmetry requirements lead to a definite number of effective operators of mass dimension six, which underlie the calculations presented here. We first compute decay widths and cross sections for a selection of processes in a model that extends the SM by a single new effective operator. Under the assumption that the additional contribution to an observable lies within the experimental uncertainty, we use available experimental results from leptonic and semileptonic precision measurements to derive exclusion limits on the new couplings as functions of the mass parameter. The results presented here enable physicists, on the one hand, to judge for which measured observables an increase in precision would be worthwhile in order to obtain better exclusion limits, and, on the other hand, to identify which processes are interesting with regard to discoveries of new physics.
Abstract:
Computer simulations have become an important tool in physics. Systems in the solid state, in particular, have been investigated extensively with the help of modern computational methods. This thesis focuses on the simulation of hydrogen-bonded systems, using quantum chemical methods combined with molecular dynamics (MD) simulations. MD simulations are carried out to investigate the energetics and structure of a system under conditions that include physical parameters such as temperature and pressure. Ab initio quantum chemical methods have proven capable of predicting spectroscopic quantities. The combination of these two features still represents a methodological challenge. Furthermore, conventional MD simulations treat the nuclei as classical particles. Not only motional effects, but also the quantum nature of the nuclei are expected to influence the properties of a molecular system. This work aims at a more realistic description of properties that are accessible via NMR experiments. With the help of the path-integral formalism, the quantum nature of the nuclei has been incorporated and its influence on the NMR parameters explored. The effect on both the NMR chemical shift and the nuclear quadrupole coupling constants (NQCC) is presented for intra- and intermolecular hydrogen bonds. The second part of this thesis presents the computation of electric field gradients within the Gaussian and Augmented Plane Waves (GAPW) framework, which allows for all-electron calculations in periodic systems. This recent development improves the accuracy of many calculations compared to the pseudopotential approximation, which treats the core electrons as part of an effective potential. In combination with MD simulations of water, the NMR longitudinal relaxation times for 17O and 2H have been obtained. The results show good agreement with experiment.
Finally, an implementation of the calculation of the stress tensor in the quantum chemical program suite CP2K is presented. This enables MD simulations under constant-pressure conditions, which is demonstrated with a series of liquid-water simulations that shed light on the influence of the exchange-correlation functional on the density of the simulated liquid.
Abstract:
Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this thesis, we present a method for simulating observations that produces realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key inputs for cosmic-shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric catalogues of galaxies, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify that clusters with masses above 10^14.2 solar masses can be detected with a low rate of false detections.
Abstract:
Herbicides are becoming emergent contaminants in Italian surface, coastal, and ground waters due to their intensive use in agriculture. In marine environments, herbicides have adverse effects on non-target organisms such as primary producers, resulting in oxygen depletion and decreased primary productivity. Alterations of species composition in algal communities can also occur because of the different sensitivities of the species. In the present thesis, the effects on different algal species of herbicides widely used in the Northern Adriatic Sea were studied. The main goal of this work was to study the influence of temperature on algal growth in the presence of the triazine herbicide terbuthylazine (TBA), and the cellular responses adopted to counteract the toxic effects of the pollutant (Chapters 1 and 2). Simulation models applicable to environmental management are needed to organize and track information in a way that would not otherwise be possible and to simulate an ecological perspective. The data collected from laboratory experiments were used to simulate algal responses to TBA exposure under increasing temperature conditions (Chapter 3). Part of the thesis work was carried out abroad. The work presented in Chapter 4 focused on the effect of high light on growth, toxicity, and mixotrophy of the ichthyotoxic species Prymnesium parvum. In addition, a mesocosm experiment was conducted to study the synergistic effect of the pollutant emamectin benzoate with other anthropogenic stressors, such as oil pollution and induced phytoplankton blooms (Chapter 5).
Abstract:
One of the main goals of the COMPASS experiment at CERN is the determination of the gluon polarisation in the nucleon. It is determined from spin asymmetries in the scattering of 160 GeV/c polarised muons on a polarised LiD target.
The gluon polarisation is accessed by selecting photon-gluon fusion (PGF) events. The PGF process can be tagged through hadrons with high transverse momenta or through charmed hadrons in the final state. The advantage of the open-charm channel is that, in leading order, the PGF process is the only process for charm production, so no physical background contributes to the selected data sample.
This thesis presents a measurement of the gluon polarisation from the COMPASS data taken in the years 2002-2004. In the analysis, charm production is tagged through a reconstructed D0 meson decaying via $D^0 \to K^-\pi^+$ (and charge conjugates). The reconstruction is done on a combinatorial basis. The background of wrong track pairs is reduced using kinematic cuts on the reconstructed D0 candidate and the particle-identification information from the Ring Imaging Cherenkov counter. In addition, the event sample is separated into D0 candidates for which a soft pion from the decay of a D* meson to a D0 meson is found, and D0 candidates without this tag. Due to the small mass difference between the D* meson and the D0 meson, the signal purity of the D*-tagged sample is about 7 times higher than that of the untagged sample.
The gluon polarisation is measured from the event asymmetries for the different spin configurations of the COMPASS target. To improve the statistical precision of the final result, the events in the final sample are weighted.
This method yields an average value of the gluon polarisation over the x-range covered by the data. For the COMPASS data from 2002-2004, the resulting value of the gluon polarisation is $
Abstract:
“Plasmon” is a synonym for the collective oscillation of the conduction electrons in a metal nanoparticle (excited by an incoming light wave), which causes strong optical responses such as efficient light scattering. The scattering cross-section as a function of the light wavelength depends not only on the material, size, and shape of the nanoparticle, but also on the refractive index of the embedding medium. For this reason, plasmonic nanoparticles are interesting candidates for sensing applications. Here, two novel setups for rapid spectral investigations of single nanoparticles and several sensing experiments are presented.

Both novel setups are based on an optical microscope operated in dark-field mode. In the fast single-particle spectroscopy (fastSPS) setup, the entrance pinhole of the coupled spectrometer is replaced by a liquid crystal device (LCD) acting as a spatially addressable electronic shutter. This improvement allows, for the first time, the automatic and continuous investigation of several particles in parallel. The second novel setup (RotPOL) uses a rotating wedge-shaped polarizer and encodes the full polarization information of each particle within one image, which reveals the symmetry of the particles and their plasmon modes. Both setups are used to observe nanoparticle growth in situ on a single-particle level and to extract quantitative data on nanoparticle growth.

Using the fastSPS setup, I investigate the membrane coating of gold nanorods in aqueous solution and show unequivocally the subsequent detection of protein binding to the membrane. This binding process leads to a spectral shift of the particle's resonance due to the higher refractive index of the protein compared to water.
Hence, the nanosized addressable sensor platform allows for local analysis of protein interactions with biological membranes as a function of the lateral composition of phase-separated membranes.

The sensitivity to changes in the environmental refractive index depends on the particles' aspect ratio. On the basis of simulations and experiments, I show the existence of an optimal aspect-ratio range between 3 and 4 for gold nanorods in sensing applications. A further sensitivity increase can only be reached by chemical modification of the gold nanorods. This can be achieved by synthesizing an additional porous gold cage around the nanorods, resulting in a plasmon sensitivity up to 50% higher for these “nanorattles” than for gold nanorods with the same resonance wavelength. Another possibility is to coat the gold nanorods with a thin silver shell. This reduces the single particle's spectral linewidth by about 30%, which improves the resolution of the observable shift.

The silver coating evokes the interesting effect of reducing the ensemble plasmon linewidth by changing the relation connecting particle shape and plasmon resonance wavelength. This change, which I term plasmonic focusing, leads to less variation of resonance wavelengths for the same particle size distribution, as I show experimentally and theoretically.

In a system of two coupled nanoparticles, the plasmon modes of the transversal and longitudinal axes depend on the refractive index of the surrounding solution, but only the latter is influenced by the interparticle distance. I show that monitoring both modes provides a self-calibrating system in which interparticle distance variations and changes of the environmental refractive index can be determined with high precision.
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations. The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water-vapor supersaturations (S = 0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study. The mean CCN number concentrations (NCCN,S) observed in polluted mega-city air and biomass-burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S=0.068% to 16,000 cm−3 at S=1.27%, about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles, were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of the organic and inorganic aerosol mass fractions.
At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the mixing state of the aerosol particles needs to be known for highly accurate predictions of NCCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ=0.3 was used in conjunction with variable size-distribution data. With a constant average size distribution, however, the deviations increased to 100% or more. The measurement and model results demonstrate that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary-layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for an efficient description of the CCN activity of atmospheric aerosols.
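The single-parameter κ-Köhler closure described above can be illustrated with a minimal sketch. Assuming the widely used approximate formula for the critical supersaturation of a dry particle with hygroscopicity κ (Kelvin parameter A from standard water properties at ~25 °C; all constants illustrative, not the thesis' calibration values):

```python
import math

def critical_supersaturation(d_dry_m, kappa, T=298.15):
    """Critical supersaturation (in %) from the single-parameter
    kappa-Koehler approximation: s_c = sqrt(4 A^3 / (27 kappa d^3))."""
    sigma_w = 0.072      # surface tension of water, J/m^2
    M_w = 0.018015       # molar mass of water, kg/mol
    rho_w = 997.0        # density of water, kg/m^3
    R = 8.314            # gas constant, J/(mol K)
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)   # Kelvin parameter, m
    s_c = math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))
    return 100.0 * s_c   # supersaturation in percent

# A 100 nm particle with kappa = 0.3 activates near S ~ 0.2 %,
# inside the measured range S = 0.068 % to 1.27 %
print(round(critical_supersaturation(100e-9, 0.3), 2))
```

Smaller or less hygroscopic particles require a higher supersaturation to activate, which is the basis of the size-resolved activation curves mentioned above.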
Abstract:
Nuclear masses are an important quantity for the study of nuclear structure, since they reflect the sum of all nucleonic interactions. Among the many experimental techniques for precise mass measurements, the Penning trap reaches the highest precision. Moreover, absolute mass measurements can be performed using carbon, the atomic-mass standard, as a reference. The new double-Penning-trap mass spectrometer TRIGA-TRAP has been installed and commissioned within this thesis work; it is the very first experimental setup of this kind located at a nuclear reactor. New technical developments have been carried out, such as a reliable non-resonant laser ablation ion source for the production of carbon cluster ions, and further ones, like a non-destructive ion detection technique for single-ion measurements, are ongoing. The reactor will provide neutron-rich fission products that are important for nuclear astrophysics, especially the r-process. Prior to the on-line coupling to the reactor, TRIGA-TRAP has already performed off-line mass measurements on stable and long-lived isotopes and will continue this program. The main focus within this thesis was on certain rare-earth nuclides in the well-established region of deformation around N~90. Another field of interest is mass measurements on actinoids to test mass models and to provide direct links to the mass standard. Within this thesis, the mass of 241Am could be measured directly for the first time.
Abstract:
Hypernuclear physics is currently attracting renewed interest, due to the important role of hypernuclear spectroscopy (hyperon-hyperon and hyperon-nucleon interactions) as a unique tool to describe baryon-baryon interactions in a unified way and to understand their short-range behaviour.

Hypernuclear research will be one of the main topics addressed by the PANDA experiment at the planned Facility for Antiproton and Ion Research (FAIR). Thanks to the use of stored $\overline{p}$ beams, copious production of double-$\Lambda$ hypernuclei is expected at the PANDA experiment, which will enable high-precision $\gamma$ spectroscopy of such nuclei for the first time. At PANDA, excited states of $\Xi^-$ hypernuclei will be used as a basis for the formation of double-$\Lambda$ hypernuclei. For their detection, a dedicated hypernuclear detector setup is planned. This setup consists of a primary nuclear target for the production of $\Xi^-\overline{\Xi}$ pairs, a secondary active target for hypernucleus formation and the identification of associated decay products, and a germanium array detector to perform $\gamma$ spectroscopy.

In the present work, the feasibility of performing high-precision $\gamma$ spectroscopy of double-$\Lambda$ hypernuclei at the PANDA experiment has been studied by means of a Monte Carlo simulation. For this purpose, the design and simulation of the dedicated detector setup as well as of the mechanism to produce double-$\Lambda$ hypernuclei have been optimized together with the performance of the whole system.

In addition, the production yields of double hypernuclei in excited particle-stable states have been evaluated within a statistical decay model.

A strategy for the unique assignment of various newly observed $\gamma$-transitions to specific double hypernuclei has been successfully implemented by combining the predicted energy spectra of each target with the measurement of the two pion momenta from the subsequent weak decays of a double hypernucleus.

For background handling, a method based on time measurement has also been implemented. However, the percentage of tagged events related to the production of $\Xi^-\overline{\Xi}$ pairs varies between 20% and 30% of the total number of produced events of this type. As a consequence, further measures have to be taken to increase the tagging efficiency by a factor of 2.

The contribution of background reactions to the radiation damage of the germanium detectors has also been studied within the simulation. Additionally, a test of the degradation of the energy resolution of the germanium detectors in the presence of a magnetic field has been performed. No significant degradation of the energy resolution or of the electronics was observed. A correlation between rise time and pulse shape has been used to correct the measured energy.

Based on the present results, one can say that $\gamma$ spectroscopy of double-$\Lambda$ hypernuclei at the PANDA experiment appears feasible. A further improvement of the statistics is needed for the background-rejection studies. Moreover, a more realistic layout of the hypernuclear detectors has been suggested, using the results of these studies, to achieve a better balance between the physical and the technical requirements.
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package; the problem of the aliasing effect in the generation of low-resolution maps; and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code BolPol, with that of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and low-variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code has instead been applied to the 408 MHz and 1.42 GHz surveys, focusing on the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, photonic crystals, is exploited to develop a new polarization-splitter device, and its performance is compared to the devices in use today.
Abstract:
In case of a violation of CPT and Lorentz symmetry, the minimal Standard Model Extension (SME) of Kostelecky and coworkers predicts sidereal modulations of atomic transition frequencies as the Earth rotates relative to a Lorentz-violating background field. One method to search for these modulations is the so-called clock-comparison experiment, in which the frequencies of co-located clocks are compared as they rotate with respect to the fixed stars. In this work an experiment is presented in which polarized 3He and 129Xe gas samples in a glass cell serve as clocks, whose nuclear spin precession frequencies are detected with the help of highly sensitive SQUID sensors inside a magnetically shielded room. The unique feature of this experiment is that the spins precess freely, with transverse relaxation times of up to 4.4 h for 129Xe and 14.1 h for 3He. To be sensitive to Lorentz-violating effects, the influence of external magnetic fields is canceled via the weighted difference of the 3He and 129Xe frequencies or phases. The Lorentz-violating SME parameters for the neutron are determined from a fit to the phase-difference data of seven spin-precession measurements of 12 to 16 hours length. The fit gives an upper limit on the equatorial component of the neutron parameter b_n of 3.7×10^(−32) GeV at the 95% confidence level. This value is not limited by the signal-to-noise ratio but by the strong correlations between the fit parameters. To reduce the correlations and thereby improve the sensitivity of future experiments, it will be necessary to change the time structure of the weighted phase difference, which can be realized by increasing the 129Xe relaxation time.
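The magnetic-field cancellation via the weighted He/Xe phase difference can be sketched numerically. This is a minimal illustration, not the experiment's analysis code: the gyromagnetic ratios are approximate, and the drifting field and the small extra 3He frequency (standing in for a hypothetical non-magnetic effect) are invented for the demonstration.

```python
import math

# Approximate gyromagnetic ratios in rad s^-1 uT^-1; only their ratio matters.
GAMMA_HE = 203.789   # 3He (illustrative)
GAMMA_XE = 73.997    # 129Xe (illustrative)

def weighted_phase_difference(duration=3600.0, dt=1.0, delta_nu_he=1e-5):
    """Integrate the precession phases of 3He and 129Xe in a slowly
    drifting magnetic field and return, at each step, the weighted
    difference phi_He - (gamma_He/gamma_Xe) * phi_Xe."""
    field_phase = 0.0          # integral of B(t) dt
    diffs = []
    for i in range(int(duration / dt) + 1):
        t = i * dt
        phi_he = GAMMA_HE * field_phase + 2 * math.pi * delta_nu_he * t
        phi_xe = GAMMA_XE * field_phase
        diffs.append(phi_he - (GAMMA_HE / GAMMA_XE) * phi_xe)
        B = 1.0 + 1e-4 * math.sin(2 * math.pi * t / 1800.0)  # drifting field, uT
        field_phase += B * dt
    return diffs

dphi = weighted_phase_difference()   # dt = 1 s, so index i equals time t
# After removing the deliberately injected non-magnetic term, everything the
# drifting magnetic field contributed has cancelled to rounding level.
res = max(abs(d - 2 * math.pi * 1e-5 * i) for i, d in enumerate(dphi))
print(res < 1e-6)
```

The cancellation is exact by construction, which is why any residual sidereal modulation in the weighted difference can be attributed to non-magnetic (e.g. Lorentz-violating) couplings.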
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional-class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements in this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity-management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the findings, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
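Schemes (1) and (2) can be illustrated with a toy sketch of the acquisition loop. All names, the trivial "parser", and the crude positional heuristic below are invented for illustration and bear no relation to the actual HPSG-based system:

```python
# Toy sketch of the loop: G + L + C -> S (parse), then G + L + S -> L' (learn).

def parse(grammar, lexicon, utterance):
    """Return a trivially 'analysed' structure: (word, category) pairs,
    guessing 'unknown' for words missing from the lexicon."""
    return [(w, lexicon.get(w, "unknown")) for w in utterance.split()]

def learn(lexicon, structures):
    """Derive an improved lexicon L' from the analysed structures, here by
    labelling any unknown word that follows a determiner as a noun."""
    new_lex = dict(lexicon)
    for sent in structures:
        for i, (word, cat) in enumerate(sent):
            if cat == "unknown" and i > 0 and sent[i - 1][1] == "det":
                new_lex[word] = "noun"   # crude positional heuristic
    return new_lex

grammar = None                              # stands in for G
lexicon = {"the": "det", "dog": "noun"}     # initial L
corpus = ["the dog", "the cat"]             # input C

structures = [parse(grammar, lexicon, u) for u in corpus]   # (1)
lexicon2 = learn(lexicon, structures)                       # (2)
print(lexicon2["cat"])   # newly acquired entry
```

A real system would of course replace both functions with full grammatical analysis and principled entry construction; the sketch only shows the data flow from C through S back into L'.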