912 results for Applied linguistics. Discourse Processing. Understanding. Narratives. EJA
Abstract:
The thesis develops the theoretical proposals of Cognitive Linguistics concerning metaphor and proposes a possible application of them in the educational field. Cognitive linguistics constitutes the interpretive framework of the research, starting from its main concepts: the integrated perspective, embodiment, the centrality of semantics, and the attention to psycholinguistics and neuroscience. Within this panorama, an idea of metaphor gains strength as a meeting point between language and thought, as an organizing criterion of knowledge, and as a fundamental cognitive tool in learning processes. At the educational level, metaphor proves indispensable both as an operational tool and as an object of reflection. The cognitivist approach can provide useful indications on how to structure a teaching unit on metaphor. The present work investigates in particular the didactic use of non-verbal stimuli in strengthening the metaphorical competence of middle-school students. Advertising was chosen as the starting material for two reasons: the widespread use of rhetorical strategies in advertising and the communicative specificity of the genre, which allows a clear disambiguation of phenomena that, in other contexts, could not be analyzed with the same lack of ambiguity. We therefore present a workshop aimed at improving students' metaphorical competence that relies on two complementary strategies: on the one hand, an explanation inspired by cognitivist models, both in the terminology employed and in the mode of analysis (usage-based); on the other, training with visual metaphors in advertising, comprising an analysis phase and a production phase. A test, divided into specific tasks, was used to objectify as far as possible the students' progress at the end of the training, but also to detect difficulties and strengths in analysis with respect both to contexts of use (literary and conventional) and to the linguistic forms taken by metaphor (nominal, verbal, adjectival).
Abstract:
Over the past years, the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process carried out by immersing cellular tissue in a hypertonic solution. The diffusion of water from the vegetable tissue into the solution is usually accompanied by the simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. In the first step (vacuum increase), the pressure in the solid-liquid system is reduced and the gas in the product pores expands, partially flowing out. When atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation allows specific solutes to be introduced into the tissue, e.g. antioxidants, pH regulators, preservatives, cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation) by following a multianalytical approach. Macro (low-frequency nuclear magnetic resonance), micro (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, allowed evaluation of the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue and (iii) the cell compartments. Isothermal calorimetry, respiration and photosynthesis determinations made it possible to investigate the metabolic changes upon the application of osmotic dehydration or vacuum impregnation. The proposed multianalytical approach should enable both better design of processing technologies and better estimation of their effects on the tissue.
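As a point of reference for the pressure-restoration step described above, the fraction of the pore volume filled by the external liquid is often estimated with an ideal-gas compression argument (a simplified form of Fito's hydrodynamic mechanism model; whether the thesis uses this model is not stated here):

\[ X \approx \varepsilon_e \left( 1 - \frac{p_1}{p_2} \right) \]

where \( \varepsilon_e \) is the effective porosity of the tissue, \( p_1 \) the absolute pressure during the vacuum step and \( p_2 \) the restored (atmospheric) pressure. A deeper vacuum (smaller \( p_1 \)) leaves less residual gas to re-expand, so a larger fraction of the pore volume is impregnated.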
Abstract:
The interaction between disciplines in the study of human population history is of primary importance, profiting from the biological and cultural characteristics of humankind. In fact, data from genetics, linguistics, archaeology and cultural anthropology can be combined to allow for a broader research perspective. This multidisciplinary approach is here applied to the study of the prehistory of sub-Saharan African populations: in this continent, where Homo sapiens originally started its evolution and diversification, understanding the patterns of human variation has a crucial relevance. For this dissertation, molecular data are interpreted and complemented with a major contribution from linguistics: linguistic data are compared to the genetic data, and the research questions are contextualized within a linguistic perspective. In the four articles proposed, we analyze Y-chromosome SNP and STR profiles and full mtDNA genomes on a representative number of samples to investigate key questions of African human variability. Some of these questions address i) the amount of genetic variation on a continental scale and the effects of the widespread migration of Bantu speakers, ii) the extent of ancient population structure that has been lost in present-day populations, iii) the colonization of the southern edge of the continent together with the degree of population contact/replacement, and iv) the prehistory of the diverse Khoisan ethnolinguistic groups, which have traditionally been understudied despite representing one of the most ancient divergences of the modern human phylogeny. Our results uncover a deep level of genetic structure within the continent and a multilayered pattern of contact between populations. These case studies represent a valuable contribution to the debate on our prehistory and open up further research threads.
Abstract:
In the present thesis, a new diagnosis methodology based on advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index that allows tracking individual fault components in a single frequency band is defined. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each single fault frequency component is shifted into a prefixed frequency band. Then, the discrete wavelet transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time under any transient operating condition;
- no speed/slip measurement or estimation is required;
- higher accuracy in filtering frequency components around the fundamental in the case of rotor faults;
- reduced likelihood of false indications, since confusion with other fault harmonics is avoided (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band);
- low memory requirements, thanks to the low sampling frequency;
- reduced processing latency (no repeated sampling operations are required).
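As an illustration of the two steps described above (frequency sliding followed by wavelet-based energy extraction), the sketch below shows one possible minimal implementation in Python; the mother wavelet, decomposition level and synthetic signal are illustrative assumptions, not values taken from the thesis:

import numpy as np
import pywt

def fault_energy_index(signal, fs, f_fault, wavelet="db8", level=5):
    """Shift the component at f_fault toward DC ("frequency sliding"),
    then quantify its energy in the wavelet approximation band.
    Wavelet and level are illustrative choices only."""
    t = np.arange(len(signal)) / fs
    # Complex demodulation: the tracked fault component ends up near 0 Hz
    slid = signal * np.exp(-2j * np.pi * f_fault * t)
    # Wavelet-decompose the in-phase and quadrature parts separately
    a_re = pywt.wavedec(slid.real, wavelet, level=level)[0]
    a_im = pywt.wavedec(slid.imag, wavelet, level=level)[0]
    # Energy-based index over the lowest (approximation) band
    return (np.sum(a_re**2) + np.sum(a_im**2)) / len(a_re)

# Synthetic example: 50 Hz fundamental plus a small fault-related component at 130 Hz
fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 130 * t)
print(fault_energy_index(current, fs, f_fault=130.0))

With these illustrative settings the approximation band (0-31.25 Hz) contains only the shifted fault component, so the index grows with the square of the fault amplitude while the fundamental is rejected.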
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while they are fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S    (1)

The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L'    (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To mention four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system, b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment, c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input, and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the Learn-Alpha design rule is postulated. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the findings, the motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
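To make formulas (1) and (2) concrete, the toy Python sketch below acquires a minimal "lexicon" of verbal valency frames from parsed observations; every name and data structure in it is a hypothetical placeholder and does not reflect the thesis' HPSG-based ANALYZE-LEARN-REDUCE implementation:

from collections import Counter, defaultdict

def parse(utterance):
    """Toy stand-in for G + L + C -> S: label each verb occurrence as having
    a direct object or not (a naive placeholder, not a real parser)."""
    tokens = utterance.split()
    verb = tokens[1]                      # naive: second token is the verb
    has_object = len(tokens) > 2          # naive: anything after it is an object
    return [(verb, has_object)]

def acquire_lexicon(corpus, threshold=0.8):
    """Toy stand-in for G + L + S -> L': derive valency frames from counts."""
    evidence = defaultdict(Counter)
    for utterance in corpus:
        for verb, has_object in parse(utterance):      # formula (1)
            evidence[verb][has_object] += 1
    lexicon = {}
    for verb, counts in evidence.items():              # formula (2)
        total = sum(counts.values())
        # Commit to a frame only once the evidence is strong enough; weaker
        # evidence stays revisable, loosely echoing the Learn-Alpha idea of
        # revising falsely acquired knowledge.
        if counts[True] / total >= threshold:
            lexicon[verb] = "transitive"
        elif counts[False] / total >= threshold:
            lexicon[verb] = "intransitive"
        else:
            lexicon[verb] = "undetermined"
    return lexicon

corpus = ["she sleeps", "he reads books", "they read novels", "it sleeps"]
print(acquire_lexicon(corpus))

The threshold parameter mirrors, in a very rough way, the question of when the system has seen 'enough' input before committing to a lexical entry.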
Abstract:
The relatively young discipline of astronautics represents one of the scientifically most fascinating and technologically advanced achievements of our time. Human exploration of space not only offers extraordinary research possibilities but also places high demands on man and technology. The space environment provides many attractive experimental tools for understanding fundamental mechanisms in the natural sciences. It has been shown that reduced gravity and elevated radiation in particular, two distinctive factors in space, significantly influence the behavior of biological systems. For this reason, one of the key objectives on board an Earth-orbiting laboratory is research in the field of life sciences, covering the broad range from botany, human physiology and crew health up to biotechnology. The Columbus Module is the only European low-gravity platform that allows researchers to perform ambitious experiments over a continuous time frame of up to several months. Biolab is part of the initial outfitting of the Columbus Laboratory; it is a multi-user facility supporting research in the field of biology, e.g. the effect of microgravity and space radiation on cell cultures, micro-organisms, small plants and small invertebrates. The Biolab IEC are projects designed to work in the automatic part of Biolab. At the moment, the TO-53 department of Airbus Defence & Space (formerly Astrium) has two experiments in phase C/D of development, and they are the subject of this thesis: CELLRAD and CYTOSKELETON. They will be launched in soft configuration, that is, packed inside a block of foam whose task is to reduce the launch loads on the payload. Until 10 years ago, payloads launched in soft configuration were assumed to be structurally safe by themselves, and a specific structural analysis could be waived for them; with the opening of the launcher market to private companies (which are not under the direct control of the international space agencies), the requirements on the verification of payloads have changed and become much more conservative. In 2012 a new random vibration environment was introduced by the new SpaceX launch specification, which turns out to be particularly challenging for soft-launched payloads. The latest ESA specification requires structural analysis of the payload for combined loads (random vibration, quasi-steady acceleration and pressure). The aim of this thesis is to create FEM models able to reproduce the launch configuration, to verify that all the margins of safety are positive, and to show how they change because of the new SpaceX random environment. Where the results are negative, improved design solutions are implemented. Based on the FEM results, a study of the joints has been carried out and, when needed, a crack growth analysis has been performed.
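For reference, structural verification typically reports a margin of safety of the form below; the exact formulation and the factors of safety prescribed by the ESA and SpaceX specifications are not reproduced here:

\[ \mathrm{MoS} = \frac{\text{allowable load}}{FS \times \text{applied load}} - 1 \]

where FS is the required factor of safety. A positive MoS for every combined load case (random vibration, quasi-steady acceleration and pressure) means the design passes the verification with the required margin.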
Abstract:
Our research asked the following main questions: how do the characteristics of professional service firms allow them to innovate successfully, exploiting through exploring, by combining internal and external factors of innovation, and how do these ambidextrous organisations perceive these factors; and how do successful innovators in professional service firms use corporate entrepreneurship models in their new service development processes? With the goal of shedding light on innovation in professional knowledge-intensive business service firms (PKIBS), we conducted a qualitative analysis of ten globally acting law firms providing business legal services. We analyse the internal and external factors of innovation that are critical for PKIBS' innovation and suggest how these firms become ambidextrous in a changing environment. Our findings show that this kind of firm has a particular type of ambidexterity due to its specific characteristics. As PKIBS are very dependent on their human capital, their governance structure, and the high expectations of their clients, their ambidexterity is structural but also contextual at the same time. In addition, we suggest three types of corporate entrepreneurship models that international PKIBS use to enhance innovation in turbulent environments. We looked at how law firms going through turbulent environments were using corporate entrepreneurship activities as part of their strategies to be more innovative. Using a visual mapping methodology, we identified three types of innovation patterns in the law firms. We suggest that corporate entrepreneurship models depend on the successful application of mainly three elements: who participates in corporate entrepreneurship initiatives; what formal processes enhance these initiatives; and what policies are applied to this type of behaviour.
Abstract:
The present work deals with the causes of operating instructions and other instructions being incomprehensible, hard to understand, or misunderstood, both theoretically, through an evaluation of the relevant specialist literature, and practically, through an empirical study of three information products. To illustrate the impact of dysfunctional instructions, the work first presents the legal framework governing operating instructions. It then explains the thematically relevant communication theories, the fundamental communication models, and the central theories of cognitive science on text processing and text comprehension, which form the basis for the reading and user tests that were carried out. The practical study illustrates the manifold and omnipresent causes of the dysfunctional reception of instructions and, in view of the potentially dangerous consequences, argues for conducting user tests both to retrospectively eliminate communication failures and to prospectively strengthen problem awareness when writing operating instructions.
Abstract:
In recent years, hot water treatment (HW) has emerged as an effective and safe approach for managing postharvest decay. This study reports the effect of HW (60°C for 60 s and 45°C for 10 min) on brown rot and blue mould, respectively. Peaches were found to be more thermotolerant than apple fruit, whereas Penicillium expansum was more heat resistant than Monilinia spp. In semi-commercial and commercial trials, the inhibition of brown rot in naturally infected peaches was higher than 78% after 6 days at 0°C plus 3 days at 20°C. Moreover, in laboratory trials a 100% reduction of disease incidence was obtained by treating artificially infected peaches 6-12 h after inoculation, revealing a curative effect of HW. The expression levels of selected genes were evaluated by qRT-PCR. Specifically, the cell wall genes (β-GAL, PL, PG, PME) showed a general decrease in expression level, whereas PAL, CHI, HSP70 and ROS-scavenging genes were induced in treated peaches compared to the control ones. By contrast, HW applied to fruit before artificial inoculation was found to increase brown rot susceptibility. This aspect might be due to an increase in fruit VOC emission, as revealed by PTR-ToF-MS analysis. In addition, a microarray experiment was conducted to analyze the molecular mechanisms underlying the apple response to heat. Our results showed that the largest number of induced heat shock protein (HSP), heat shock cognate protein (HSC) and heat shock transcription factor (HSTF) genes was found at 1 and 4 hours after the treatment. These genes, required for the thermotolerance process, could be involved in the induced resistance response. The hypothesis was confirmed by a 30% reduction of blue mould disease in apples artificially inoculated 1 and 4 hours after the treatment. In order to improve peach quality and disease management during storage, an innovative tool was also used: the DA-meter.
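For context, relative expression levels measured by qRT-PCR are commonly reported with the 2^(-ΔΔCt) convention; whether this particular quantification scheme was used in the study is not stated in the abstract:

\[ \Delta\Delta C_t = \Delta C_{t,\text{treated}} - \Delta C_{t,\text{control}}, \qquad \Delta C_t = C_{t,\text{target}} - C_{t,\text{reference}}, \qquad \text{fold change} = 2^{-\Delta\Delta C_t} \]

Under this convention, a negative ΔΔCt (fold change above 1) corresponds to induction in treated fruit, as reported for PAL, CHI, HSP70 and the ROS-scavenging genes, while a positive ΔΔCt corresponds to repression, as for the cell wall genes.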
Abstract:
This thesis focuses on the paleomagnetic rotation pattern inside the deforming zone of strike-slip faults, and on the kinematics and geodynamics describing it. The paleomagnetic investigation carried out along both the LOFZ and the fore-arc sliver (38°-42°S, southern Chile) revealed an asymmetric rotation pattern. East of the LOFZ and adjacent to it, rotations are up to 170° clockwise (CW) and fade out ~10 km east of the fault. West of the LOFZ at 42°S (Chiloé Island) and around 39°S (Villarrica domain), systematic counterclockwise (CCW) rotations have been observed, while at 40°-41°S (Ranco-Osorno domain) and adjacent to the LOFZ, CW rotations reach up to 136° before evolving to CCW rotations at ~30 km from the fault. These data suggest a direct relation with plate coupling at the subduction interface. Zones of high coupling produce a wide deforming zone (~30 km) west of the LOFZ characterized by CW rotations. Low coupling implies a weak LOFZ and a fore-arc dominated by CCW rotations related to NW-sinistral fault kinematics. The rotation pattern is consistent with quasi-continuous crustal kinematics. However, it seems unlikely that lower-crustal flow can control block rotation in the upper crust, considering the cold and thick fore-arc crust. I suggest that the rotations are a consequence of forces applied directly on the block edges and along the main fault, within the upper crust. Farther south, in the Austral Andes (54°S), I measured the anisotropy of magnetic susceptibility (AMS) of 22 Upper Cretaceous to Upper Eocene sites from the internal domains of the Magallanes fold-thrust belt. The data document continuous compression from the Early Cretaceous until the Late Oligocene. AMS data also show that the tectonic inversion of Jurassic extensional faults during the Late Cretaceous compressive phase may have controlled the Cenozoic kinematic evolution of the Magallanes fold-thrust belt, yielding slip partitioning.
Abstract:
Aerosol particles are important actors in the Earth's atmosphere and climate system. They scatter and absorb sunlight, serve as nuclei for water droplets and ice crystals in clouds and precipitation, and are a subject of concern for public health. Atmospheric aerosols originate from both natural and anthropogenic sources, and emissions resulting from human activities have the potential to influence the hydrological cycle and climate. An assessment of the extent and impacts of this human influence requires a sound understanding of the natural aerosol background. This dissertation addresses the composition, properties, and atmospheric cycling of biogenic aerosol particles, which represent a major fraction of the natural aerosol burden. The main focal points are: (i) studies of the autofluorescence of primary biological aerosol particles (PBAP) and its application in ambient measurements, and (ii) X-ray microscopic and spectroscopic investigations of biogenic secondary organic aerosols (SOA) from the Amazonian rainforest.

Autofluorescence of biological material has received increasing attention in atmospheric science because it allows real-time monitoring of PBAP in ambient air; however, it is associated with high uncertainty. This work aims at reducing this uncertainty through a comprehensive characterization of the autofluorescence properties of relevant biological materials. Fluorescence spectroscopy and microscopy were applied to analyze the fluorescence signatures of pure biological fluorophores, potential non-biological interferences, and various types of reference PBAP. Characteristic features and fingerprint patterns were found and provide support for the operation, interpretation, and further development of PBAP autofluorescence measurements. Online fluorescence detection and offline fluorescence microscopy were jointly applied in a comprehensive bioaerosol field measurement campaign that provided unprecedented insights into PBAP-linked biosphere-atmosphere interactions in a North American semi-arid forest environment. Rain showers were found to trigger massive bursts of PBAP, including high concentrations of biological ice nucleators that may promote further precipitation and can be regarded as part of a bioprecipitation feedback cycle in the climate system.

In the pristine tropical rainforest air of the Amazon, most cloud and fog droplets form on biogenic SOA particles, but the composition, morphology, mixing state and origin of these particles are hardly known. X-ray microscopy and spectroscopy (STXM-NEXAFS) revealed distinctly different types of secondary organic matter (carboxyl- vs. hydroxy-rich) with internal structures that indicate a strong influence of phase segregation and of cloud and fog processing on SOA formation and aging. In addition, nanometer-sized potassium-rich particles emitted by microorganisms and vegetation were found to act as seeds for the condensation of SOA. Thus, the influence of forest biota on the atmospheric abundance of cloud condensation nuclei appears to be more direct than previously assumed. Overall, the results of this dissertation suggest that biogenic aerosols, clouds and precipitation are indeed tightly coupled through a bioprecipitation cycle, and that advanced microscopic and spectroscopic techniques can provide detailed insights into these mechanisms.
Abstract:
The fabrication of polymer solar cells from the aqueous phase represents an attractive alternative to the conventional solvent-based formulation. The advantages of solar cells prepared from aqueous solution lie particularly in the environmentally friendly production process and in the possibility of generating printable optoelectronic devices. The processability of hydrophobic semiconductors in an aqueous medium is achieved by dispersing the materials in the form of nanoparticles. The transfer of the semiconductors into a dispersion is carried out via the solvent evaporation method. The idea of using particle-based solar cells has already been implemented, but a precise characterization of the particles and a comprehensive understanding of the entire fabrication process were lacking. The aim of this work is therefore to gain detailed insight into the production process of particle-based solar cells, to uncover possible weaknesses and eliminate them, and thus to improve future applications. For the fabrication of solar cells from aqueous dispersions, poly(3-hexylthiophene-2,5-diyl)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM) was used as the donor/acceptor system. The investigations focused on the one hand on the particle morphology and on the other hand on the generation of a suitable particle layer. Both parameters affect the solar cell efficiency. The morphology was determined both spectroscopically via photoluminescence measurements and visually by electron microscopy. In this way the particle morphology could be fully elucidated, revealing parallels to the structure of solvent-based solar cells. In addition, a dependence of the morphology on the preparation temperature was observed, which allows simple control of the particle structure. For the formation of the particle layer, direct as well as interface-mediated coating methods were employed. Of these techniques, however, only spin coating proved to be a viable method for transferring the particles from the dispersion into a homogeneous film. Furthermore, the post-treatment of the particle layer by ethanol washing and thermal annealing was a focus of this work. Both measures had a positive effect on the efficiency of the solar cells and contributed decisively to an improvement of the cells. Overall, the findings obtained provide a detailed overview of the challenges that arise when using water-based dispersions. The requirements of particle-based solar cells could be laid open, and this enabled the fabrication of a solar cell with an efficiency of 0.53%. However, this result does not yet represent the optimum and still leaves room for improvement.
Abstract:
Granular matter, also known as bulk solids, consists of discrete particles with sizes between micrometers and meters. Such materials are present in many industrial applications as well as in daily life, for example in food processing, pharmaceutics or the oil and mining industries. When handling granular matter, the bulk solids are stored, mixed, conveyed or filtered. These techniques are based on observations in macroscopic experiments, i.e. rheological examinations of the bulk properties. Despite ample investigation of bulk mechanics, the relation between single-particle motion and macroscopic behavior is still not well understood. Exploring the microscopic properties on the single-particle level requires 3D imaging techniques.

The objective of this work was the investigation of single-particle motion in a bulk system in 3D under an external mechanical load, i.e. compression and shear. During the mechanical load, the structural and dynamical properties of these systems were examined with confocal microscopy. For this purpose, new granular model systems in the wet and dry state were designed and prepared. As the particles are solid bodies, their motion is described by six degrees of freedom. To explore their entire motion with all degrees of freedom, a technique to visualize the rotation of spherical, micrometer-sized particles in 3D was developed.

One focus of this dissertation was a model system for dry cohesive granular matter. In such systems the particle motion during compression of the granular matter was investigated. In general, the rotation of single particles was the more sensitive parameter compared to the translation. In regions with large structural changes, the rotation had an earlier onset than the translation. In granular systems under shear, shear dilatation and shear-zone formation were observed. Globally, the granular sediments showed a shear behavior already known from classical shear experiments, for example with Jenike cells. Locally, shear-zone formation was enhanced when a pre-diluted region existed near the applied load. In regions with constant volume fraction, mixing between the different particle layers occurred. In particular, an exchange of particles between the currently flowing region and the non-flowing region was observed.

The second focus was on model systems for wet granular matter, in which an additional binding liquid is added to the particle suspension. To examine the 3D structure of the binding liquid on the micrometer scale independently from the particles, a second illumination and detection beam path was implemented. Shear and compression experiments on wet clusters and bulk systems showed completely different dynamics compared to the dry cohesive model systems. In a Pickering-emulsion-like system, large structural changes predominantly occurred in the local environment of binding-liquid droplets. These large local structural changes were due to an energy interplay between the energy stored in the binding droplet during its deformation and the binding energy of particles at the droplet interface.

Confocal microscopy in combination with nanoindentation gave new insights into the single-particle motion and dynamics of granular systems under a mechanical load. These novel experimental results can help to improve the understanding of the relationship between bulk properties of granular matter, such as volume fraction or yield stress, and the dynamics at the single-particle level.
Abstract:
The present study is concerned with exploring the linguistic identity construction of Barack Obama and Hillary Clinton in the context of the 2008 US Democratic Party primaries. Their speeches are examined in order to detect the aspects of identity that each politician resorted to in the process of projecting a political identity. The study, however, takes a special interest in the ways in which gender identity is projected by Obama and Clinton. Moreover, the notions of gender bias as well as gender representations are also investigated.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while EGR distribution effects have been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
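One common way to compensate for the transport delays mentioned above is to estimate the delay between an engine-side reference signal and the lagged analyzer signal by cross-correlation and then shift the measurement accordingly; the numpy sketch below illustrates that generic approach (it is an assumption for illustration, not the specific method developed in the thesis):

import numpy as np

def align_for_transport_delay(reference, measured, fs, max_delay_s=2.0):
    """Estimate the transport delay of `measured` relative to `reference`
    by cross-correlation and return the delay-compensated signal.

    reference : signal sampled at the engine (e.g. a fueling command)
    measured  : lagged signal (e.g. opacity at the analyzer), same length and rate
    fs        : sampling frequency [Hz]
    """
    max_lag = int(max_delay_s * fs)
    ref = reference - np.mean(reference)
    mea = measured - np.mean(measured)
    # Correlate over positive lags only: the measurement can only arrive late
    lags = np.arange(0, max_lag + 1)
    corr = np.array([np.dot(ref[: len(ref) - k], mea[k:]) for k in lags])
    delay = lags[np.argmax(corr)]
    # Shift the measurement back in time by the estimated delay
    aligned = np.roll(measured, -delay)
    if delay > 0:
        aligned[-delay:] = aligned[-delay - 1]   # hold the last valid sample
    return aligned, delay / fs

# Example with a synthetic 0.5 s delay at 10 Hz sampling
fs = 10.0
t = np.arange(0, 30, 1 / fs)
ref = np.sin(0.3 * t) + 0.1 * np.random.randn(len(t))
meas = np.roll(ref, int(0.5 * fs))
aligned, delay_s = align_for_transport_delay(ref, meas, fs)
print(f"estimated delay: {delay_s:.2f} s")

A first-order sensor lag would additionally need to be inverted or modeled, which this sketch does not attempt.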