909 results for ambiguity aversion
Abstract:
Piezoelectric ceramics, such as PZT, can generate subnanometric displacements, but in order to generate multi-micrometric displacements they must either be driven by high voltages (hundreds of volts), operate at a mechanical resonance frequency (in a narrow band), or have large dimensions (tens of centimeters). A piezoelectric flextensional actuator (PFA) is a device with small dimensions that can be driven by reduced voltages and can operate on the nano- and micro-scales. Interferometric techniques are well suited to the characterization of these devices, because there is no mechanical contact in the measurement process and they offer high sensitivity, bandwidth and dynamic range. In this work a low-cost open-loop homodyne Michelson interferometer is used to experimentally detect the nanovibrations of PFAs, based on spectral analysis of the interferometric signal. Building on the well-known J1...J4 phase demodulation method, a new and improved version is proposed with the following characteristics: it is direct and self-consistent, it is immune to fading, and it does not suffer from phase ambiguity. The proposed method has a resolution similar to that of the modified J1...J4 method (0.18 rad); however, compared with the modified method, its dynamic range is 20% larger, it does not require algorithms to correct the algebraic signs of the Bessel functions, and there are no singularities when the static phase shift between the interferometer arms equals an integer multiple of π/2 rad. Electronic noise and random phase drifts due to ambient perturbations are taken into account in the analysis of the method. The characterization of the PFA nanopositioner was based on the linearity between the applied voltage and the resulting displacement, on the displacement frequency response, and on the determination of the main resonance frequencies.
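The abstract does not reproduce the improved demodulation formula, but the classical J1...J4 relation on which it builds is standard and illustrates the approach. The following Python sketch (function and variable names are illustrative assumptions, as is the He-Ne wavelength mentioned in the comment) recovers the dynamic phase amplitude x from the magnitudes of the first four harmonics of the detected signal; the singularities at static phase shifts of nπ/2 arise because the sin φ0 and cos φ0 factors cancel only in ratios of this form.

```python
import numpy as np
from scipy.fft import rfft

def j1j4_modulation_depth(signal, fs, f_drive):
    """Classical J1...J4 estimate of the phase modulation depth x (rad).

    Assumes a detected signal I(t) = A + B*cos(phi0 + x*sin(2*pi*f_drive*t)),
    whose harmonics at k*f_drive have magnitudes proportional to |J_k(x)|
    times sin(phi0) (odd k) or cos(phi0) (even k).
    """
    n = len(signal)
    spectrum = np.abs(rfft(signal))
    bin_of = lambda k: int(round(k * f_drive * n / fs))
    V1, V2, V3, V4 = (spectrum[bin_of(k)] for k in (1, 2, 3, 4))
    # Bessel recurrences J1 + J3 = (4/x) J2 and J2 + J4 = (6/x) J3 give
    # x^2 = 24 V2 V3 / ((V1 + V3)(V2 + V4)), independent of phi0, A and B.
    return np.sqrt(24 * V2 * V3 / ((V1 + V3) * (V2 + V4)))

# For a Michelson interferometer the displacement amplitude follows as
# d = x * wavelength / (4 * np.pi), e.g. with a 632.8e-9 m He-Ne laser.
```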
Abstract:
Society's increasing aversion to technological risks requires the development of inherently safer and environmentally friendlier processes, while still assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Although the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements in alternative design options, to allow trade-offs among contradictory aspects, and to prevent "risk shift". In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools are specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities for chemical and industrial processes in which substances dangerous to humans and the environment are used or stored. The tools are mainly intended for the "conceptual" and "basic design" stages, when the project is still open to changes (owing to the large number of degrees of freedom), which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools makes a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system is based on a set of leading key performance indicators (KPIs), which assess the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs are based on impact models (including complex ones), but are easy and swift to apply in practice. Their full evaluation is possible even from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in the different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based both on the calculation of the expected consequences of potential accidents and on the evaluation of the hazards related to equipment. The methodology overcomes several problems of previously proposed methods for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for assessing the hazards related to the formation of undesired substances in chemical systems undergoing "out of control" conditions. In the assessment of layout plans, ad hoc tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated by application to a large number of case studies covering different kinds of design activities (choice of materials; design of the process, of the plant, and of the layout) and different types of processes and plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of the isomers of nitrobenzaldehyde) provided the input data needed to demonstrate the method for the inherent safety assessment of materials.
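The abstract does not give the KPI formulas; as a loudly hypothetical illustration of the kind of aggregation such a design support system performs, the sketch below normalizes each impact indicator against a site-specific reference burden and combines the results with policy weights. All names, values and weights are invented for illustration only.

```python
def sustainability_profile(kpis, references, weights):
    """Aggregate key performance indicators (KPIs) into a single score.

    kpis       : {name: raw impact value} for one design option
    references : {name: site-specific reference burden} for normalization
    weights    : {name: weight reflecting the sustainability policy}
    Lower scores indicate a more sustainable design option.
    """
    total = 0.0
    for name, value in kpis.items():
        normalized = value / references[name]   # dimensionless impact index
        total += weights[name] * normalized
    return total

# Hypothetical comparison of two design options:
option_a = {"economic": 1.2e6, "societal": 3.4, "environmental": 0.8}
option_b = {"economic": 1.5e6, "societal": 1.1, "environmental": 0.5}
refs     = {"economic": 1.0e6, "societal": 2.0, "environmental": 1.0}
policy   = {"economic": 0.4, "societal": 0.3, "environmental": 0.3}
best = min((option_a, option_b),
           key=lambda o: sustainability_profile(o, refs, policy))
```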
Abstract:
Sociology of work in Italy revived at the end of WWII, after thirty years of forced oblivion. This thesis examines the history of the discipline by considering three paths it followed from its revival up to its institutionalization: the influence of the productivity drive, the role of trade unions, and the activity of the first young researchers. The European Productivity Agency's Italian office, the Comitato Nazionale per la Produttività (CNP), propagandized studies on management and on the effects of industrialization on work and society. The academics, technicians and psychologists who worked for the CNP began rethinking sociology of work, but the managerial use of sociology was unacceptable to both trade unions and young researchers. The 'free union' CISL therefore created a school in Florence with an eager attention to the social sciences as a medium for becoming a new model of union, while the Marxist CGIL, despite its ideological aversion to sociology, finally accepted the lexicon of the social sciences in order to explain the changes in work and to resist the employers' association offensive. On the other hand, political and social engagement led a first generation of sociologists to study social phenomena in recently industrialized Italy by means of sociological analysis. Finally, the thesis investigates the cultural transfers from France, whose industrial sociology (sociologie du travail) was considered a reference point in continental Europe. Besides the wide influence of French sociologie, financially aided by planning institutions in order to employ it in the industrial reconstruction, other minor experiences influenced Italian sociology of work, such as the social surveys carried out by worker-priests in the suburbs of industrial cities and the heterodox Marxism of the review 'Socialisme ou Barbarie'.
Abstract:
Longstanding taxonomic ambiguity and uncertainty surround the identification of the common (M. mustelus) and blackspotted (M. punctulatus) smooth-hounds in the Adriatic Sea. The lack of a clear and accurate method of morphological identification, leading to frequent misidentification, prevents the collation of species-specific landings and survey data for these fishes and hampers the delineation of the species' distribution ranges and stock boundaries. In this context, adequate species-specific conservation and management strategies cannot be applied without risking population decline and local extinction. In this thesis I investigated the molecular ecology of the two smooth-hound sharks, which are abundant in the demersal trawl surveys carried out in the NC Adriatic Sea to monitor and assess fishery resources. Ecological and evolutionary relationships were assessed by two molecular tests: a DNA barcoding analysis to improve species identification (and consequently the knowledge of their spatial ecology and taxonomy), and a hybridization assay based on the nuclear codominant marker ITS2 to evaluate reproductive interactions (hybridization or gene introgression). The smooth-hound sharks (N = 208) were collected during the MEDITS 2008 and 2010 campaigns along the Italian and Croatian coasts of the Adriatic Sea, in the Sicilian Channel, and in the Algerian fisheries. Since identification based on morphological characters is not highly reliable, I performed a molecular identification of the specimens, producing for each one the cytochrome oxidase subunit 1 (COI) gene sequence (ca. 640 bp long) and comparing it with reference sequences from different databases (GenBank and BOLD). From these molecular ID data I inferred the distribution of the two target species in the NC Adriatic Sea. In almost all of the MEDITS hauls I found no evidence of species sympatry. The data collected during the MEDITS survey showed an almost completely separate distribution of M. mustelus (confined along the Italian coasts) and M. punctulatus (confined along the Croatian coasts); only one haul (in the Gulf of Venice, where the ranges of the species probably overlap) contained catches of both species. Although these results suggested that no interaction occurs between the two target species, at least during the summertime (the period in which the MEDITS survey is carried out), I still wanted to know whether there were inter-species reproductive interactions, so I developed a simple molecular genetic method to detect hybridization. This method is based on DNA sequence polymorphism among species at the nuclear ribosomal Internal Transcribed Spacer 2 (ITS2) locus. Its application to the 208 specimens collected raised important questions regarding the ecology of these two species in the Adriatic Sea: the results showed signs of hybridization and/or gene introgression in two sharks collected during the 2008 trawl survey and one collected during the 2010 survey, along the Italian and Croatian coasts. Should the hybrid nature of these individuals be confirmed, a spatiotemporal overlap of the mating behaviour and ecology of the two species must occur. At the spatial level, the northern part of the Adriatic Sea (an area where the two species occur with a high frequency of immature individuals) could well serve as a common nursery area for both species.
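The barcoding step amounts to comparing each query COI sequence against reference sequences and assigning the species with the highest similarity. In practice this is done with BLAST/BOLD identification engines; the following Python sketch is a simplified, hypothetical stand-in (ungapped pairwise identity, an assumed 98% threshold) that illustrates the logic only.

```python
def pairwise_identity(seq_a, seq_b):
    """Fraction of identical positions over the shared length (no gaps)."""
    length = min(len(seq_a), len(seq_b))
    matches = sum(a == b for a, b in zip(seq_a[:length], seq_b[:length]))
    return matches / length

def assign_species(query, references, threshold=0.98):
    """Assign a query COI sequence to the closest reference species.

    references: {species_name: reference COI sequence}
    Returns (species, identity), or (None, identity) below the threshold.
    """
    species, best = max(
        ((name, pairwise_identity(query, ref)) for name, ref in references.items()),
        key=lambda pair: pair[1],
    )
    return (species if best >= threshold else None), best
```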
Abstract:
This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are: network synchronization, position estimation and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning this issue, i.e. the position estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and this information is then used to build a new SDP formulation of the localization problem. Finally, a flip-ambiguity-robust network localization algorithm is derived and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any given pair of nodes must rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for the position tracking of a user in a WLAN-based indoor localization system.
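The thesis' SDP formulation is not reproduced in the abstract; as background for the flip-ambiguity problem it targets, here is a minimal, hypothetical sketch of plain range-based localization by nonlinear least squares. With sparse or incomplete ranging, the residual has mirror-image local minima, so the estimate can "flip" across the line through the anchors depending on the initial guess, which is exactly what an SDP relaxation is designed to avoid.

```python
import numpy as np
from scipy.optimize import least_squares

def localize(anchors, ranges, x0):
    """Range-based position estimate by nonlinear least squares.

    anchors: (k, 2) array of known anchor positions
    ranges : (k,) measured distances to the unknown node
    x0     : initial guess; with few anchors the cost surface has
             mirror-image ('flipped') minima, the flip-ambiguity problem.
    """
    residual = lambda x: np.linalg.norm(anchors - x, axis=1) - ranges
    return least_squares(residual, x0).x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
true_pos = np.array([4.0, 3.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 3)
estimate = localize(anchors, ranges, x0=np.array([5.0, 5.0]))
```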
Abstract:
The quark condensate is a fundamental free parameter of Chiral Perturbation Theory ($\chi$PT), since it determines the relative size of the mass and momentum terms in the power expansion. In order to confirm or contradict the assumption of a large quark condensate, on which $\chi$PT is based, experimental tests are needed. In particular, the $S$-wave $\pi\pi$ scattering lengths $a_0^0$ and $a_0^2$ can be predicted precisely within $\chi$PT as a function of this parameter and can be measured very cleanly in the decay $K^{\pm} \to \pi^{+} \pi^{-} e^{\pm} \overset{\scriptscriptstyle(-)}{\nu}_e$ ($K_{e4}$). About one third of the data collected in 2003 and 2004 by the NA48/2 experiment were analysed and 342,859 $K_{e4}$ candidates were selected. The background contamination in the sample could be reduced down to 0.3% and could be estimated directly from the data, by selecting events with the same signature as $K_{e4}$ but requiring for the electron the opposite charge with respect to the kaon, the so-called "wrong sign" events. This is a clean background sample, since the kaon decay with $\Delta S = -\Delta Q$, which would be the only source of signal, can only take place through two weak decays and is therefore strongly suppressed. The Cabibbo-Maksymowicz variables, used to describe the kinematics of the decay, were computed under the assumption of a fixed kaon momentum of 60 GeV/$c$ along the $z$ axis, so that the neutrino momentum could be obtained without ambiguity. The measurement of the form factors and of the $\pi\pi$ scattering length $a_0^0$ was performed in a single step by comparing the five-dimensional distributions of data and MC in the kinematic variables. The MC distributions were corrected in order to properly take into account the trigger and selection efficiencies of the data and the background contamination. The following parameter values were obtained from a binned maximum likelihood fit, where $a_0^2$ was expressed as a function of $a_0^0$ according to the prediction of chiral perturbation theory:
$f'_s/f_s = 0.133 \pm 0.013(\mathrm{stat}) \pm 0.026(\mathrm{syst})$,
$f''_s/f_s = -0.041 \pm 0.013(\mathrm{stat}) \pm 0.020(\mathrm{syst})$,
$f_e/f_s = 0.221 \pm 0.051(\mathrm{stat}) \pm 0.105(\mathrm{syst})$,
$f'_e/f_s = -0.459 \pm 0.170(\mathrm{stat}) \pm 0.316(\mathrm{syst})$,
$\tilde{f}_p/f_s = -0.112 \pm 0.013(\mathrm{stat}) \pm 0.023(\mathrm{syst})$,
$g_p/f_s = 0.892 \pm 0.012(\mathrm{stat}) \pm 0.025(\mathrm{syst})$,
$g'_p/f_s = 0.114 \pm 0.015(\mathrm{stat}) \pm 0.022(\mathrm{syst})$,
$h_p/f_s = -0.380 \pm 0.028(\mathrm{stat}) \pm 0.050(\mathrm{syst})$,
$a_0^0 = 0.246 \pm 0.009(\mathrm{stat}) \pm 0.012(\mathrm{syst}) \pm 0.002(\mathrm{theor})$,
where the statistical uncertainty only includes the effect of the data statistics and the theoretical uncertainty is due to the width of the allowed band for $a_0^2$.
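The fit described above compares binned five-dimensional data and MC distributions. As a loudly simplified, one-dimensional toy illustration of a binned maximum-likelihood fit (not the NA48/2 analysis code; all names are hypothetical), the Poisson negative log-likelihood of observed counts against a parameter-dependent expectation can be minimized numerically:

```python
import numpy as np
from scipy.optimize import minimize

def binned_nll(theta, data_counts, mc_template):
    """Binned Poisson negative log-likelihood (up to a constant).

    data_counts: observed counts per bin
    mc_template: function theta -> expected counts per bin (in the real
                 analysis: reweighted MC including trigger/selection
                 efficiencies and background contamination)
    """
    mu = np.clip(mc_template(theta), 1e-12, None)
    return np.sum(mu - data_counts * np.log(mu))

# Toy one-parameter example: fitting a normalisation factor
rng = np.random.default_rng(0)
expected = lambda theta: theta[0] * np.array([100.0, 80.0, 60.0, 40.0])
data = rng.poisson(expected([1.1]))
fit = minimize(binned_nll, x0=[1.0], args=(data, expected))
```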
Abstract:
The thesis develops the theoretical proposals of Cognitive Linguistics concerning metaphor and proposes a possible application of them in the classroom. Cognitive Linguistics provides the interpretive framework of the research, starting from its main concepts: the integrated perspective, embodiment, the centrality of semantics, and the attention paid to psycholinguistics and neuroscience. Within this panorama, an idea of metaphor takes shape as a meeting point between language and thought, as an organizing criterion for knowledge, and as a fundamental cognitive tool in learning processes. At the didactic level, metaphor proves indispensable both as an operational tool and as an object of reflection. The cognitivist approach can provide useful indications on how to structure a teaching unit on metaphor. The present work investigates in particular the didactic use of non-verbal stimuli in strengthening the metaphorical competence of middle-school students. Advertising was chosen as the starting material for two reasons: the widespread use of rhetorical strategies in advertising, and the communicative specificity of the genre, which allows a clear disambiguation of phenomena that, in other contexts, could not be analyzed with the same univocity. A workshop aimed at improving students' metaphorical competence is therefore presented, drawing on two complementary strategies: on the one hand, an explanation inspired by cognitivist models, both in the terminology employed and in the (usage-based) mode of analysis; on the other, training with visual metaphors in advertising, comprising an analysis phase and a production phase. A test, divided into specific tasks, was used to objectify as far as possible the students' progress at the end of the training, but also to detect difficulties and strengths in the analysis with respect both to the contexts of use (literary and conventional) and to the linguistic forms taken by the metaphor (nominal, verbal, adjectival).
Abstract:
The aim of the thesis is to investigate the topic of semantic under-determinacy, i.e. the failure of the semantic content of certain expressions to determine a truth-evaluable utterance content. In the first part of the thesis, I engage with the problem of setting semantic under-determinacy apart from other phenomena such as ambiguity, vagueness and indexicality. As I argue, the feature that distinguishes semantic under-determinacy from these phenomena is that it is explainable solely in terms of under-articulation. In the second part of the thesis, I discuss how communication is possible despite the semantic under-determinacy of language. I discuss a number of answers that have been offered: (i) the Radical Contextualist explanation, which emphasises the role of pragmatic processes in utterance comprehension; (ii) the Indexicalist explanation in terms of hidden syntactic positions; (iii) the Relativist account, which regards sentences as true or false relative to extra coordinates in the circumstances of evaluation (besides possible worlds). In the final chapter, I propose an account of the comprehension of utterances of semantically under-determined sentences in terms of conceptual constraints, i.e. ways of organising information which regulate thought and discourse on certain matters. Conceptual constraints help the hearer to work out the truth-conditions of an utterance of a semantically under-determined sentence. Their role is clearly semantic, in that they contribute to "what is said" (rather than to "what is implied"); however, they do not respond to any syntactic constraint. The view I propose therefore differs, on the one hand, from Radical Contextualism, because it stresses the role of semantics-governed processes as opposed to pragmatics-governed processes; on the other hand, it differs from Indexicalism in that it does not endorse any commitment to hidden syntactic positions; and it differs from Relativism in that it maintains a monadic notion of truth.
Abstract:
What is reference? The answer I defend is that reference is an act involving a speaker, a linguistic expression and a specific object, on a given occasion of use. In the first chapter, I frame the debate on reference historically by opposing the satisfactional model à la Russell to the referential model à la Donnellan. I introduce the Russellian theory of proper names and definite descriptions and defend the thesis that referential uses are characterized by a direction of fit opposite to that of the satisfactional model. In the second chapter, I argue that reference is an action that can be felicitous or infelicitous, depending on whether or not the speaker respects its constraints. I analyze two necessary conditions of reference: that there be a causal link between speaker, expression and referent, and that the words be used conventionally. Normally, one speaks of reference failure only when the alleged referent does not exist, whereas I propose to use the expression for infelicitous acts of reference. The second and third chapters put several kinds of expressions on a par with respect to reference. I insist on the contextual dependence of proper names and definite descriptions (whether used referentially or attributively). Two of the arguments offered rest on homophonous and homographic names and on incomplete definite descriptions. Finally, I synthesize the preceding points into an original proposal. The referential act, whose possibility of failure I have defended, also depends on being directed towards communication. To illustrate the point, I compare the process of instituting a convention with the use of an already instituted convention. The project is to give an account of reference balanced between language use centred on the subject and its ties to the world, on the one hand, and linguistic expressions as tools for obtaining results within a given community, on the other. The referential act, I argue, has different degrees of effectiveness depending on all these elements.
Abstract:
The optical resonances of metallic nanoparticles placed at nanometer distances from a metal plane were investigated. At certain wavelengths, these "sphere-on-plane" systems become resonant with the incident electromagnetic field, and huge enhancements of the field are predicted, localized in the small gaps created between the nanoparticle and the plane. An experimental architecture to fabricate sphere-on-plane systems was successfully achieved in which, in addition to the commonly used alkanethiols, polyphenylene dendrimers were used as molecular spacers to separate the metallic nanoparticles from the metal planes. They allow for a defined nanoparticle-plane separation, and some are functionalized with a chromophore core, which is thereby positioned exactly in the gap. The metal planes used in the system architecture consisted of evaporated thin films of gold or silver. Evaporated gold or silver films have a smooth interface with their substrate and a rougher top surface. To investigate the influence of surface roughness on the optical response of such a film, two gold films were prepared with a smooth and a rough side which were as similar as possible. Surface plasmons were excited in the Kretschmann configuration both on the rough and on the smooth side. The reflectivity of each individual measurement could be well modeled by a single gold film; to describe both sides consistently, however, the film has to be modeled as two layers with significantly different optical constants. The smooth side, although polycrystalline, had an optical response that was very similar to a monocrystalline surface, while for the rough side the standard response of evaporated gold is retrieved. For investigations of thin non-absorbing dielectric films, though, this heterogeneity introduces only a negligible error. To determine the resonant wavelength of the sphere-on-plane systems, a strategy was developed which is based on multi-wavelength surface plasmon spectroscopy experiments in the Kretschmann configuration. The resonant behavior of the system led to characteristic changes in the surface plasmon dispersion. A quantitative analysis was performed by calculating the polarisability per unit area α/A, treating the sphere-on-plane systems as an effective layer. This approach completely avoids the ambiguity in the determination of thickness and optical response of thin films in surface plasmon spectroscopy. Equal area densities of polarisable units yielded identical responses irrespective of the thickness of the layer they are distributed in. The parameter range in which the evaluation of surface plasmon data in terms of α/A is applicable was determined for a typical experimental situation. It was shown that this analysis yields reasonable quantitative agreement with a simple theoretical model of the sphere-on-plane resonators and reproduces the results of standard extinction experiments, while having a higher information content and a significantly increased signal-to-noise ratio. With the objective of acquiring a better quantitative understanding of the dependence of the resonance wavelength on the geometry of the sphere-on-plane systems, different systems were fabricated in which the gold nanoparticle size, the type of spacer and the ambient medium were varied, and the resonance wavelength of each system was determined. The gold nanoparticle radius was varied in the range from 10 nm to 80 nm. It could be shown that the polyphenylene dendrimers can be used as molecular spacers to fabricate systems which support gap resonances.
The resonance wavelength of the systems could be tuned in the optical region between 550 nm and 800 nm. Based on a simple analytical model, a quantitative analysis was developed to relate the systems' geometry to the resonant wavelength, and surprisingly good agreement of this simple model with the experiment was found without any adjustable parameters. The key feature ascribed to sphere-on-plane systems is a very large electromagnetic field localized in volumes in the nanometer range. Experiments towards a quantitative understanding of the field enhancements taking place in the gap of the sphere-on-plane systems were performed by monitoring the increase in fluorescence of a metal-supported monolayer of a dye-loaded dendrimer upon decoration of the surface with nanoparticles. The metal used (gold or silver), the mean colloid size and the surface roughness were varied. Large silver crystallites on evaporated silver surfaces led to the most pronounced fluorescence enhancements, of the order of 10^4. They constitute a very promising sample architecture for the study of field enhancements.
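As a rough quantitative anchor for the effective-layer analysis described above, the sketch below evaluates the quasi-static (Clausius-Mossotti) dipole polarizability of an isolated metal sphere and the corresponding polarisability per unit area α/A for a given surface coverage. This is only the single-sphere ingredient under stated assumptions: the gap resonance itself requires the sphere-plane image interaction treated in the thesis, and the numerical dielectric values in the example are illustrative.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sphere_polarizability(radius, eps_particle, eps_medium):
    """Quasi-static (Clausius-Mossotti) dipole polarizability of a sphere.

    Resonant when Re(eps_particle) approaches -2*eps_medium; the nearby
    metal plane shifts this resonance via the image interaction (not
    included in this sketch).
    """
    return (4 * np.pi * EPS0 * radius**3
            * (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium))

def alpha_per_area(radius, eps_particle, eps_medium, spheres_per_m2):
    """Polarisability per unit area alpha/A of a sparse sphere layer."""
    return spheres_per_m2 * sphere_polarizability(radius, eps_particle, eps_medium)

# Illustrative numbers: a 40 nm gold sphere (eps roughly -11 + 1.3j near
# 600 nm; tabulated values vary) in water (eps = 1.77), 10 spheres/um^2.
alpha_over_A = alpha_per_area(40e-9, -11 + 1.3j, 1.77, 1e13)
```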
Abstract:
The formation of a market price for an asset can be understood as a superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable, in statistical physics, to the emergence of macroscopic properties brought about by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian. This leads to empirical peculiarities of the price process, among them scaling behaviour, non-trivial correlation functions, and temporally clustered volatility. The present work focuses on the analysis of financial time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioural patterns of market participants manifest themselves on short time scales: the reaction to a given price path is not purely random; rather, similar price paths evoke similar reactions. Starting from the investigation of complex correlations in financial time series, the question is addressed of which properties change at the transition from a positive trend to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and specifically in physical systems. Over nine orders of magnitude in time, these properties are also independent of the market analyzed: trends lasting only seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive procedures, a substantial reduction of computing time is achieved by parallelization on a graphics-card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card, with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
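Since the abstract mentions porting the Ising model to a GPU, here is a deliberately simplified, CPU-only Metropolis sampler of the model as a point of reference (not the thesis code; lattice size and temperature are illustrative). A GPU port typically parallelizes this with checkerboard updates, since each sublattice of the checkerboard couples only to the other.

```python
import numpy as np

def metropolis_ising(L=64, beta=0.44, steps=100_000, rng=None):
    """Metropolis sampling of the 2D Ising model on an L x L torus (J = 1).

    beta ~ 0.4407 is the critical inverse temperature of the 2D model.
    """
    rng = rng or np.random.default_rng()
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours with periodic boundaries
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

magnetization = abs(metropolis_ising().mean())
```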
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, compared to having only a ground-based TEM dataset and/or only borehole data.
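The thesis' actual AEM inversion workflow is not specified in the abstract; as a generic illustration of why stacking complementary datasets reduces ambiguity, the sketch below performs a linearized, smoothness-regularized (Occam-style) least-squares inversion in which joint inversion is simply the stacking of forward operators and data vectors. All symbols are hypothetical placeholders.

```python
import numpy as np

def regularized_inversion(G, d, alpha):
    """Minimize ||G m - d||^2 + alpha * ||D m||^2 for the model m, with D a
    first-difference roughness operator (smooth resistivity model).
    Linearized, single-step sketch only.
    """
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    A = G.T @ G + alpha * (D.T @ D)
    return np.linalg.solve(A, G.T @ d)

# Joint inversion of two complementary datasets (e.g. airborne and ground
# TEM) amounts to stacking their sensitivities and data vectors:
# m = regularized_inversion(np.vstack([G_aem, G_ground]),
#                           np.concatenate([d_aem, d_ground]), alpha=0.1)
```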
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the findings, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
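To make formulas (1) and (2) above concrete, here is a purely schematic Python sketch of the acquisition cycle; every function name is a hypothetical placeholder, not the ANALYZE-LEARN-REDUCE implementation.

```python
def acquisition_loop(grammar, lexicon, corpus, parse, learn, revise):
    """Schematic of the two formulas from the abstract:
    G + L + C -> S  (deep parsing of the input), and
    G + L + S -> L' (exploiting the structures to improve the lexicon).

    parse(grammar, lexicon, utterance) -> set of structures S (may be empty)
    learn(lexicon, structures)         -> lexical facts inferred from S
    revise(lexicon, facts)             -> L', retracting falsely acquired
                                          entries (the revision capability
                                          Learn-Alpha demands)
    """
    for utterance in corpus:
        structures = parse(grammar, lexicon, utterance)
        if structures:
            facts = learn(lexicon, structures)
            lexicon = revise(lexicon, facts)
    return lexicon
```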
Abstract:
This study traces structures of human togetherness within a systematic-comparative approach 'on the way to the Other', against the background of Musil's novel 'Der Mann ohne Eigenschaften' (The Man Without Qualities); it points to the dangers of increasingly self-centred thinking about identity by bringing a selection of philosophical thinkers of the early twentieth century, on the basis of a poetic orientation, into a conversation that unfortunately never took place in reality: Ulrich, the protagonist of the novel, takes on in this study, alongside his poetic-orienting function, the role of a companion; he guides the reader through the work and, 'on the way to the Other', connects philosophical currents with Musil's novel. The 'way to the Other' begins with the metaphor of the 'conflict of the two trees' that Ulrich notices within himself: under both trees, human togetherness is explored by presenting, analyzing and comparing phenomenological approaches. The 'tree of hard entanglement' stands for distanced cognition; Husserl's intentionality and intersubjectivity lead into a 'concert of solitary monads'. The 'tree of shadows and dreams', represented by Klages, stands for a fusing, mystical-pathic mode of experience that likewise isolates human beings. The two trees are joined in the 'encounter between the trees', in the human togetherness of Ulrich and his sister Agathe; here, to stay within the image, the 'tree of life' flourishes on the ground of the 'necessity of the Thou for the I'. This tree is presented with respect to its roots: the approaches of Feuerbach, Dilthey and Plessner point to the communality, historicity and eccentricity of the human being. This is followed by an analysis of the structure of the tree: here Löwith's approach points to the ontological-constitutional ambiguity inherent in the human being. In the crown of the 'tree of life', the dialogical thinkers Buber, Rosenzweig and Rosenstock-Huessy search for equiprimordiality in the 'sphere of the between' and follow the path from becoming human in the 'sphere of the between' to a lived, presupposition-rich fellow-humanity within the horizon of spoken language. Comparative considerations reveal diverging tendencies, which are condensed in the conclusion: under the philosophical-substantive aspect, it is shown why human beings 'under both trees' remain in solitary limitation and finitude, while in the 'encounter between the trees', in human togetherness, they attain freedom and infinity: 'attitude versus embeddedness' decides between an isolated and a successful life. Under the philosophical-cultural aspect, traces are uncovered in Musil's novel 'Der Mann ohne Eigenschaften' which suggest that Musil wanted to convey dialogical thinking 'incognito' through his novel; the longing for human togetherness awakened therein must be answered for in life, between people, concretely, and again and again...
Abstract:
The aim of this dissertation is to demonstrate how theory and practice are linked in translation. Translating the essay Light Years Ahead helped me to understand this connection and to develop the two main theses of this work: the possibility for the translator to choose among the different theories, without granting any one of them absolute supremacy, and the diversity of the non-fiction genre. The first chapter therefore focuses on the different theories of translation, presented in a way which suggests that each might be the completion and development of another. The second chapter deals with the peculiar issues of non-fiction translation, with particular attention to the way in which this genre gathers elements of other text types. Despite this variety, it is also claimed that the overarching function of an essay is always the informative one. This idea led me to simplify the Italian version of the text I translated (Light Years Ahead) and make it more intelligible. In the third chapter, this last point is discussed, together with my considerations about the function, the dominant aspect and the cultural analysis of the text, with particular regard to how the quality of the English translation affected my choices. In the fourth chapter I include some examples from the translation which best demonstrate the distinctive variety of styles of non-fiction texts and the translator's freedom to choose, case by case, the theory that suits the text best. Finally, I also include three examples which represent a sort of defeat for me: three points where the ambiguity of the text obliged me to drop some information for the sake of the dominant informative function.