4 results for Deep Belief Network, Deep Learning, Gaze, Head Pose, Surveillance, Unsupervised Learning
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties such as verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L:

    G + L + C → S    (1)

The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

    G + L + S → L'    (2)

Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule (a minimal code sketch of this acquisition loop appears after this abstract).

The thesis describes the design and quality of a prototype for such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To illustrate four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision as to whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision.

The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted with this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the findings and the motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.
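To make formulas (1) and (2) concrete, here is a minimal Python sketch of such an acquisition loop. It is purely illustrative: the names (Lexicon, analyze, learn, toy_grammar) and the naive valency heuristic are assumptions for this example, not the thesis's ANALYZE-LEARN-REDUCE implementation or its HPSG parser.

```python
from dataclasses import dataclass, field

@dataclass
class Analysis:
    """One linguistically motivated structure in S (toy stand-in)."""
    facts: list  # (lexeme, property) pairs observed in this parse

@dataclass
class Lexicon:
    entries: dict = field(default_factory=dict)  # lexeme -> set of properties

    def update(self, lexeme, prop):
        self.entries.setdefault(lexeme, set()).add(prop)

    def retract(self, lexeme, prop):
        # Learn-Alpha also demands revising falsely acquired knowledge.
        self.entries.get(lexeme, set()).discard(prop)

def analyze(grammar, lexicon, utterance):
    """(1) G + L + C -> S: parse an utterance into candidate structures."""
    return grammar(utterance, lexicon)  # the grammar is any callable here

def learn(lexicon, structures):
    """(2) G + L + S -> L': mine the analyses to improve the lexicon."""
    for s in structures:
        for lexeme, prop in s.facts:
            lexicon.update(lexeme, prop)
    return lexicon

def acquisition_loop(grammar, lexicon, corpus):
    for utterance in corpus:
        lexicon = learn(lexicon, analyze(grammar, lexicon, utterance))
    return lexicon

# Toy usage: a "grammar" that tags each verb with a naive valency guess.
def toy_grammar(utterance, lexicon):
    words = utterance.split()
    facts = [(w, f"valency:{len(words) - 1}") for w in words if w.endswith("s")]
    return [Analysis(facts)]

lex = acquisition_loop(toy_grammar, Lexicon(), ["Kim sleeps", "Kim sees Lee"])
print(lex.entries)  # {'sleeps': {'valency:1'}, 'sees': {'valency:2'}}
```

The retract method hints at the revision requirement of Learn-Alpha: an intelligent learner must be able to withdraw falsely acquired properties, not merely accumulate new ones.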
Abstract:
In the catchment of the Hunte river (NW German Basin, Lower Saxony), it was investigated whether landscape genesis has been influenced by tectonic movements of the upper crust. Crustal movements in the area of a major fault-block boundary led to an uplift of the Weichselian lower terrace (average uplift rate of ~0.5 mm/a over the last 12,000 years). A tectonic influence on the present land surface can be observed above a Permian salt pillow, where the gradient of the Holocene floodplain reverses. Crustal movements have very probably produced preferred orientations that are detectable at the base of the Tertiary and in the present landscape (0-5° and 90-95°). The northward drainage of the Hunte appears to be caused by an active, northward-directed tilting of the NW German Basin. High linear correlation coefficients between the depth of the Tertiary base and the elevation of the present land surface point to an active tilting of the basin. Basin subsidence possibly controlled the accumulation of the Weichselian lower terrace, since the recent basin subsidence agrees with the average sedimentation rate of the lower-terrace body. Investigations of a closed depression point to an active sagging structure, since anomalies in the geological subsurface coincide with the topographic position of the structure.
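As a quick arithmetic check on the quoted uplift rate (an illustration only, not part of the thesis):

```python
# Implied total uplift from the average rate quoted in the abstract.
uplift_rate_mm_per_year = 0.5   # average uplift rate at the block boundary
duration_years = 12_000         # period over which the rate is averaged

total_uplift_m = uplift_rate_mm_per_year * duration_years / 1000.0
print(f"implied total uplift: ~{total_uplift_m:.0f} m")  # ~6 m
```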
Abstract:
A numerical model for studying the influences of deep convective cloud systems on photochemistry was developed, based on a non-hydrostatic meteorological model and chemistry from a global chemistry transport model. The transport of trace gases, the scavenging of soluble trace gases, and the influence of lightning-produced nitrogen oxides (NOx = NO + NO2) on the local ozone-related photochemistry were investigated in a multi-day case study for an oceanic region in the tropical western Pacific. Model runs considering the influences of large-scale flows, previously neglected in multi-day cloud-resolving and single-column model studies of tracer transport, showed that the influence of the mesoscale subsidence (between clouds) on trace gas transport had been considerably overestimated in those studies. The simulated vertical transport and scavenging of highly soluble tracers were found to depend on the initial profiles, reconciling contrasting results from two previous studies. Influences of the modeled uptake of trace gases by hydrometeors in the liquid and the ice phase were studied in some detail for a small number of atmospheric trace gases, and novel aspects concerning the role of the retention coefficient (i.e., the fraction of a dissolved trace gas that is retained in the ice phase upon freezing) in the vertical transport of highly soluble gases were illuminated; a small sketch of this partitioning follows the abstract. Including lightning NOx production inside a 500 km 2-D model domain was found to be important for the NOx budget and caused small to moderate changes in the domain-averaged ozone concentrations. A number of sensitivity studies showed that the fraction of lightning-associated NOx lost through photochemical reactions in the vicinity of the lightning source was considerable, but depended strongly on assumptions about the magnitude and altitude of the lightning NOx source. In contrast to a suggestion from an earlier study, it was argued that the near-zero upper-tropospheric ozone mixing ratios observed close to the study region were most probably not caused by the formation of NO associated with lightning. Instead, it was argued, in agreement with suggestions from other studies, that the deep convective transport of ozone-poor air masses from the relatively unpolluted marine boundary layer, air masses that had most likely been advected horizontally over relatively large distances both before and after encountering deep convection, probably played a role. In particular, it was suggested that the ozone profiles observed during CEPEX (Central Equatorial Pacific Experiment) were strongly influenced by the deep convection and the larger-scale flow associated with the intra-seasonal oscillation.
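To make the role of the retention coefficient concrete, here is a minimal Python sketch of the partitioning it describes. The function name and variables are hypothetical; this is only the bookkeeping implied by the definition above, not the model's actual microphysics code.

```python
def partition_on_freezing(dissolved_mass, retention_coefficient):
    """Split the trace-gas mass dissolved in a droplet when it freezes.

    retention_coefficient: fraction of the dissolved gas retained in the
    ice phase upon freezing (0 = fully released back to the gas phase,
    1 = fully retained and carried with the ice).
    """
    retained_in_ice = retention_coefficient * dissolved_mass
    released_to_gas = (1.0 - retention_coefficient) * dissolved_mass
    return retained_in_ice, released_to_gas

# For a highly soluble gas: with r = 1 the gas stays scavenged and can be
# removed by precipitating ice; with r = 0 it re-enters the gas phase and
# can be carried further upward by the convective updraft.
ice, gas = partition_on_freezing(dissolved_mass=1.0, retention_coefficient=0.5)
print(ice, gas)  # 0.5 0.5
```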
Abstract:
Deep convection by pyro-cumulonimbus clouds (pyroCb) can transport large amounts of forest-fire smoke into the upper troposphere and lower stratosphere. Here, results from numerical simulations of such deep convective smoke transport are presented. The structure, shape, and injection height of the pyroCb simulated for a specific case study are in good agreement with observations. The model results confirm that substantial amounts of smoke are injected into the lower stratosphere. Small-scale mixing processes at the cloud top result in a significant enhancement of smoke injection into the stratosphere. Sensitivity studies show that the release of sensible heat by the fire plays an important role in the dynamics of the pyroCb. Furthermore, the convection is found to be very sensitive to the background meteorological conditions. While the abundance of aerosol particles acting as cloud condensation nuclei (CCN) has a strong influence on the microphysical structure of the pyroCb, the CCN effect on the convective dynamics is rather weak. The release of latent heat dominates the overall energy budget of the pyroCb; since most of the cloud water originates from moisture entrained from the background atmosphere, the fire-released moisture contributes only marginally to the convection dynamics (a toy comparison of these heating terms follows the abstract). Sufficient fire heating, favorable meteorological conditions, and small-scale mixing processes at the cloud top are identified as the key ingredients for troposphere-to-stratosphere transport by pyroCb convection.
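To illustrate the kind of per-parcel energy bookkeeping behind these statements, here is a toy comparison of the sensible and latent heating terms. All numbers are round illustrative assumptions chosen only to show the orders of magnitude involved; they are not values from the simulations.

```python
# Toy per-kilogram energy comparison for a pyroCb updraft parcel.
C_P = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
L_V = 2.5e6    # latent heat of condensation of water, J/kg

delta_T_fire = 5.0     # ASSUMED fire-induced warming of the parcel, K
q_condensed = 5.0e-3   # ASSUMED condensed water per kg of air (kg/kg),
                       # mostly entrained ambient moisture

sensible_J_per_kg = C_P * delta_T_fire   # triggers the updraft
latent_J_per_kg = L_V * q_condensed      # dominates the overall budget

print(f"sensible heating: {sensible_J_per_kg / 1e3:.1f} kJ/kg")  # ~5.0 kJ/kg
print(f"latent heating:   {latent_J_per_kg / 1e3:.1f} kJ/kg")    # ~12.5 kJ/kg
```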