937 results for Text analysis


Relevance: 30.00%

Abstract:

“This presentation utilizes correspondence theory to analyze African American undergraduate student access to and completion of higher education in the United States. Findings from this research are presented and policy recommendations affecting Black student enrollment and graduation are discussed.”

Relevance: 30.00%

Abstract:

Methods from statistical physics, such as those involving complex networks, have been increasingly used in the quantitative analysis of linguistic phenomena. In this paper, we represented pieces of text with different levels of simplification as co-occurrence networks and found that topological regularity correlated negatively with textual complexity. Furthermore, in less complex texts the distance between concepts, represented as nodes, tended to decrease. The complex network metrics were treated with multivariate pattern recognition techniques, which allowed us to distinguish between original texts and their simplified versions. For each original text, two simplified versions were generated manually with an increasing number of simplification operations. As expected, the distinction was easier for the strongly simplified versions, for which the most relevant metrics were node strength, shortest paths, and diversity. Also, the discrimination of complex texts improved with higher hierarchical network metrics, pointing to the usefulness of considering wider contexts around the concepts. Though the accuracy of the distinction was not as high as that of methods using deep linguistic knowledge, the complex network approach is still useful for rapid screening of texts whenever assessing complexity is essential to guarantee accessibility for readers with limited reading ability.
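
The paper works with metrics of exactly this kind; as an illustrative sketch only (not the authors' code), a word co-occurrence network and two of the metrics named above, node strength and shortest paths, can be computed with the networkx library:

```python
# Illustrative sketch (not the paper's code): build a word co-occurrence
# network and compute node strength and average shortest path length.
import networkx as nx

def cooccurrence_network(text, window=2):
    """Link words appearing within `window` positions of each other."""
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:
            if w != v:
                weight = g[w][v]["weight"] + 1 if g.has_edge(w, v) else 1
                g.add_edge(w, v, weight=weight)
    return g

g = cooccurrence_network("the cat sat on the mat and the cat slept")
node_strength = dict(g.degree(weight="weight"))     # strength of each concept
mean_distance = nx.average_shortest_path_length(g)  # distance between concepts
print(node_strength, mean_distance)
```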

Relevance: 30.00%

Abstract:

We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized according to their target input material (either single texts or collections of texts) and their focus, which may be on displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users to handle results from a query posed to a search engine. We describe the approaches adopted by distinct techniques and briefly review the strategies they employ to obtain meaningful text models; we discuss how they extract the information required to produce representative visualizations, the tasks they intend to support, the interaction issues involved, and their strengths and limitations. Finally, we present a summary of the techniques, highlighting their goals and distinguishing characteristics. We also briefly discuss some open problems and research directions in the fields of visual text mining and text analytics.
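
As a toy illustration of the simplest content-displaying strategy covered by such surveys (tag-cloud style summaries of a single document), term weights can come from plain frequency counts; this sketch is hypothetical and not taken from any of the reviewed techniques:

```python
# Hypothetical sketch: rank terms by frequency as the basis of a tag-cloud
# style summary of a single document.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are"}

def salient_terms(text, k=10):
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    counts = Counter(t for t in tokens if t.isalpha() and t not in STOPWORDS)
    return counts.most_common(k)  # (term, weight) pairs drive font size in a cloud

print(salient_terms("Visualization helps users explore collections of text documents."))
```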

Relevance: 30.00%

Abstract:

Background: In normal aging, a decrease in the syntactic complexity of written production is usually associated with cognitive deficits. This study aimed to analyze the quality of older adults' textual production, as indicated by verbal fluency (number of words) and grammatical complexity (number of ideas), in relation to gender, age, schooling, and cognitive status. Methods: From a probabilistic sample of community-dwelling people aged 65 years and above (n = 900), 577 were selected on the basis of their responses to the Mini-Mental State Examination (MMSE) sentence-writing item, which were submitted to content analysis; 323 were excluded because they left the item blank or produced illegible or meaningless responses. Education-adjusted cut-off scores for the MMSE were used to classify the participants as cognitively impaired or unimpaired. Total and subdomain MMSE scores were computed. Results: 40.56% of the participants whose answers to the MMSE sentence item were excluded from the analyses had cognitive impairment, compared to 13.86% of those whose answers were included. The excluded participants were older and less educated. Women and those older than 80 years had the lowest MMSE scores. There was no statistically significant relationship between gender, age, schooling, and textual performance. There was a modest but significant correlation between the number of words written and the scores in the Language subdomain. Conclusions: The results suggest a strong influence of schooling and age on MMSE sentence performance. Failing to write a sentence may suggest cognitive impairment; however, the instruction for the MMSE sentence item, i.e. to produce a simple sentence, may limit its clinical interpretation.
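
The study's actual education-adjusted cut-offs are not reproduced here; purely to illustrate the classification logic, a sketch with placeholder thresholds:

```python
# Hypothetical sketch of education-adjusted MMSE classification.
# The cut-off values below are placeholders, NOT those used in the study.
def cognitively_impaired(mmse_total: int, years_of_schooling: int) -> bool:
    cutoff = 18 if years_of_schooling == 0 else 24  # placeholder thresholds
    return mmse_total < cutoff

print(cognitively_impaired(mmse_total=21, years_of_schooling=0))  # False
```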

Relevance: 30.00%

Abstract:

The thesis consists of three independent parts.

Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1.

Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform.

Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
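
For reference, the standard definitions behind Part I (not quoted verbatim from the thesis): the amoeba is the image of the zero set under the coordinatewise log-modulus map, and the Ronkin function is the average of log|f| over the torus above each point:

```latex
\[
  \operatorname{Log}(z_1,\dots,z_n) = (\log|z_1|,\dots,\log|z_n|), \qquad
  \mathcal{A}_f = \operatorname{Log}\bigl(f^{-1}(0)\bigr),
\]
\[
  N_f(x) = \frac{1}{(2\pi i)^n}
  \int_{\operatorname{Log}^{-1}(x)} \frac{\log\lvert f(z)\rvert}{z_1 \cdots z_n}\,
  dz_1 \cdots dz_n .
\]
```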

Relevance: 30.00%

Abstract:

Membrane proteins are a large and important class of proteins. They are responsible for several of the key functions in a living cell, e.g. transport of nutrients and ions, cell-cell signaling, and cell-cell adhesion. Despite their importance, it has not been possible to study their structure and organization in much detail because of the difficulty of obtaining 3D structures. In this thesis, theoretical studies of membrane protein sequences and structures have been carried out by analyzing existing experimental data. The data come from several sources, including sequence databases, genome sequencing projects, and 3D structures. Prediction of membrane-spanning regions by hydrophobicity analysis is a key technique used in several of the studies. A novel method for this is also presented and compared to other methods. The primary questions addressed in the thesis are: What properties are common to all membrane proteins? What is the overall architecture of a membrane protein? What properties govern the integration into the membrane? How many membrane proteins are there and how are they distributed in different organisms? Several of the findings have now been backed up by experiments. An analysis of the large family of G-protein coupled receptors pinpoints differences in the length and amino acid composition of loops between proteins with and without a signal peptide, and also differences between extra- and intracellular loops. Known 3D structures of membrane proteins have been studied in terms of hydrophobicity, distribution of secondary structure and amino acid types, position-specific residue variability, and differences between loops and membrane-spanning regions. An analysis of several fully and partially sequenced genomes from eukaryotes, prokaryotes, and archaea has been carried out. Several differences in membrane protein content between organisms were found, the most important being the total number of membrane proteins and the distribution of membrane proteins with a given number of transmembrane segments. Of the properties that were found to be similar in all organisms, the most obvious is the bias in the distribution of positive charges between the extra- and intracellular loops. Finally, an analysis of homologues to membrane proteins with known topology uncovered two related, multi-spanning proteins with opposite predicted orientations. The predicted topologies were verified experimentally, providing a first example of "divergent topology evolution".
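
Hydrophobicity analysis for predicting membrane-spanning segments is classically a sliding-window average over a residue scale; as a minimal sketch (using the well-known Kyte-Doolittle scale rather than the thesis's own method):

```python
# Minimal sketch: sliding-window hydrophobicity profile (Kyte-Doolittle scale).
# Windows whose mean exceeds a threshold are candidate transmembrane segments.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydrophobicity_profile(seq, window=19):
    scores = [KD[a] for a in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

def candidate_tm_segments(seq, window=19, threshold=1.6):
    profile = hydrophobicity_profile(seq, window)
    return [i for i, v in enumerate(profile) if v > threshold]

# Toy sequence: a hydrophobic (L-rich) stretch flanked by polar residues.
print(candidate_tm_segments("M" + "L" * 25 + "KDERS" * 6))
```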

Relevance: 30.00%

Abstract:

Type Ia supernovae have been successfully used as standardized candles to study the expansion history of the Universe. In the past few years, these studies led to the exciting result of an accelerated expansion caused by the repelling action of some sort of dark energy. This result has been confirmed by measurements of the cosmic microwave background radiation, the large-scale structure, and the dynamics of galaxy clusters. The combination of all these experiments points to a "concordance model" of the Universe with flat large-scale geometry and a dominant component of dark energy. However, several points related to supernova measurements need careful analysis in order to establish the validity of the concordance model beyond doubt. As the amount and quality of data increase, the need to control possible systematic effects which may bias the results becomes crucial. Also important is improving our knowledge of the physics of supernova events to assure, and possibly refine, their calibration as standardized candles. This thesis addresses some of those issues through the quantitative analysis of supernova spectra. The emphasis is put on a careful treatment of the data and on the definition of spectral measurement methods. The comparison of measurements for a large set of spectra from nearby supernovae is used to study their homogeneity and to search for spectral parameters which may further refine the calibration of the standardized candle. One such parameter is found to reduce the dispersion in the distance estimation of a sample of supernovae to below 6%, a precision comparable with the current lightcurve-based calibration, and obtained in an independent manner. Finally, the comparison of spectral measurements from nearby and distant objects is used to test for possible evolution of the intrinsic brightness of type Ia supernovae with cosmic time.
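
One widely used kind of quantitative spectral measurement for SNe Ia is the pseudo-equivalent width of an absorption feature, measured against a pseudo-continuum drawn between the feature's edges; the sketch below illustrates the idea and does not reproduce the thesis's specific measurement definitions:

```python
# Sketch: pseudo-equivalent width (pEW) of an absorption feature, measured
# against a linear pseudo-continuum between two anchor wavelengths.
import numpy as np

def pseudo_equivalent_width(wave, flux, lam_blue, lam_red):
    """pEW of the feature between lam_blue and lam_red, in units of `wave`."""
    mask = (wave >= lam_blue) & (wave <= lam_red)
    w, f = wave[mask], flux[mask]
    # Linear pseudo-continuum through the feature's two edge points.
    continuum = np.interp(w, [w[0], w[-1]], [f[0], f[-1]])
    depth = 1.0 - f / continuum
    # Trapezoidal integration of the normalized absorption depth.
    return float(((depth[:-1] + depth[1:]) / 2 * np.diff(w)).sum())
```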

Relevance: 30.00%

Abstract:

In this thesis some multivariate spectroscopic methods for the analysis of solutions are proposed. Spectroscopy and multivariate data analysis form a powerful combination for obtaining both quantitative and qualitative information, and it is shown how spectroscopic techniques in combination with chemometric data evaluation can be used to obtain rapid, simple and efficient analytical methods. These methods, which combine spectroscopic analysis, a high level of automation and chemometric data evaluation, can lead to analytical methods with a high analytical capacity, for which the term high-capacity analysis (HCA) is suggested. It is further shown how chemometric evaluation of the multivariate data in chromatographic analyses decreases the need for baseline separation. The thesis is based on six papers, and the chemometric tools used are experimental design, principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), partial least squares regression (PLS) and parallel factor analysis (PARAFAC). The analytical techniques utilised are scanning ultraviolet-visible (UV-Vis) spectroscopy, diode array detection (DAD) used in non-column chromatographic diode array UV spectroscopy, high-performance liquid chromatography with diode array detection (HPLC-DAD) and fluorescence spectroscopy. The proposed methods are exemplified in the analysis of pharmaceutical solutions and serum proteins. In Paper I a method is proposed for determining the content and identity of the active compound in pharmaceutical solutions by means of UV-Vis spectroscopy, orthogonal signal correction and multivariate calibration with PLS and SIMCA classification. Paper II proposes a new method for the rapid determination of pharmaceutical solutions using non-column chromatographic diode array UV spectroscopy, i.e. a conventional HPLC-DAD system without any chromatographic column connected. Paper III investigates the ability of a control sample of known content and identity to diagnose and correct errors in multivariate predictions, something that, together with the use of multivariate residuals, can make it possible to use the same calibration model over time. In Paper IV a method is proposed for the simultaneous determination of serum proteins with fluorescence spectroscopy and multivariate calibration. Paper V proposes a method for determining chromatographic peak purity by means of PCA of HPLC-DAD data. In Paper VI PARAFAC is applied to decompose DAD data of some partially separated peaks into the pure chromatographic, spectral and concentration profiles.
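
As an illustration of the chemometric core shared by these papers (not their actual code), PCA of a matrix of spectra, with rows as samples and columns as wavelengths, can be sketched with scikit-learn:

```python
# Sketch: PCA of UV-Vis spectra (rows = samples, columns = wavelengths).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 300))      # stand-in for measured spectra

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)       # sample scores: cluster/classify these
loadings = pca.components_                # wavelength loadings per component
print(pca.explained_variance_ratio_)
```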

Relevance: 30.00%

Abstract:

The purpose of this work is to define and calculate a factor of collapse for the traditional method of designing sheet pile walls, and to identify the parameters that most influence a finite element model of this problem. The text is structured as follows: chapters 1 to 5 analyze a series of topics needed to understand the problem, while the considerations most directly related to the purpose of the work are reported in chapters 6 to 10. The first part of the document covers the following topics: what a sheet pile wall is, which design codes apply to these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis, and finally the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, addressing the second part of the purpose of the work. Comparing the results of a laboratory test on a cantilever sheet pile wall in sandy soil with those provided by a finite element model of the same problem, we concluded that: in modelling a sandy soil, attention should be paid to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behavior of the structure-soil system; other parameters, like the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path followed in the second part of the text is as follows. We analyzed two different structures: the first supports an excavation of 4 m, the second an excavation of 7 m. Both structures are first designed using the traditional method; they are then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangents of the initial friction angle and of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the values of the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found the factor of collapse to lie between 1.25 and 2. With some considerations, reported in the text, these values can be compared with the safety factor proposed by the design code (linked to the friction angle of the soil).
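
Restating the definition just given in symbols, with φ′ the soil friction angle:

```latex
\[
  F_c \;=\; \frac{\tan \varphi'_{\text{initial}}}{\tan \varphi'_{\text{collapse}}}
\]
```

This mirrors the strength-reduction definition of a safety factor, which is why the values found (mostly between 1.25 and 2) can be set against the code's friction-angle-based safety factor.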

Relevance: 30.00%

Abstract:

Design of a Social Intelligence and Sentiment Analysis system for a company in the consumer goods sector.

Relevance: 30.00%

Abstract:

The problem of prediction, i.e. the search for predictive patterns within data, has been studied extensively, and many robust and efficient methodologies have been developed, all based on the analysis of structured numerical information. Textual information, on the other hand, is strongly unstructured. An immediate conclusion would therefore be that predictive analysis on textual data requires methods completely different from the well-known data mining techniques. A prediction problem can, however, be solved with those same methods: textual data and documents can be transformed into numerical values, for instance by encoding the absence or presence of terms, making an efficient reuse of the existing techniques possible. Text mining thus enables the combination of concepts from extremely heterogeneous application fields. Given the immense and continuously growing amount of textual data available, for instance on the World Wide Web, driven by the pervasive use of smartphones and computers, the application fields of textual analysis become innumerable. The advent and spread of social networks and of the practice of microblogging allow people to share opinions and moods, creating a textual corpus of enormous size that is updated daily. The emerging techniques of Sentiment Analysis, or Opinion Mining, analyze the emotional state or the type of opinion expressed in a textual document. Through them one can, for example, extract indicators of the mood of an individual, or of a set of individuals, creating a representation of the social emotional state. Can the evolution of the social emotional state macroscopically condition the unfolding of global events? Studies in Behavioral Economics and Finance establish a link between emotional state, decision-making ability, and economic indicators. Thanks to the available techniques and to the continuously updated mass of textual data concerning the mood of millions of individuals, it becomes possible to analyze such correlations. In this study, a system is built for predicting the variations of stock market indices, based on textual data extracted from the microblogging platform Twitter in the form of public tweets; the system includes techniques for improving the prediction based on the study of text similarity, categorizing each text's actual contribution to the prediction.
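
The presence/absence encoding described above is exactly what a binary bag-of-words vectorizer provides; a minimal sketch with scikit-learn and toy data (the system's real features, labels, and index data are not reproduced here):

```python
# Sketch: turn raw tweets into binary presence/absence term vectors and fit a
# standard classifier, illustrating how textual prediction reuses data mining.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["market looks great today", "terrible day for stocks",
          "feeling optimistic about the economy", "fear and panic everywhere"]
index_up = [1, 0, 1, 0]   # toy labels: did the index rise afterwards?

vectorizer = CountVectorizer(binary=True)   # 1 = term present, 0 = absent
X = vectorizer.fit_transform(tweets)

model = LogisticRegression().fit(X, index_up)
print(model.predict(vectorizer.transform(["optimistic market today"])))
```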

Relevance: 30.00%

Abstract:

The uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modeling. This study deals with a new approach to geotechnical modeling which relies on the stochastic generation of different soil layer distributions following a Boolean logic; the method has therefore been called BoSG (Boolean Stochastic Generation). In this way, it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. When building a geotechnical model, it is common to discard some stratigraphic data in order to simplify the model itself, assuming that the significance of the results of the modeling procedure will not be affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, it can be used to determine the most significant zones, where further investigations and surveys would be most effective in building the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a purpose-coded MATLAB program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations over the different data files in order to maximize the number of samples. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D and an actual landslide, namely the Mortisa mudslide (Cortina d'Ampezzo, BL, Italy). However, it could be extended to numerous other cases, especially for hydrogeological analysis and landslide stability assessment in different geological and geomorphological contexts.
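
A minimal sketch of the Boolean randomization step, written in Python rather than the study's MATLAB code and using hypothetical grid dimensions and lens counts: rectangular lenses of a second material are scattered through a uniform matrix, one realization per generated text file.

```python
# Sketch: one BoSG-style realization, with random rectangular lenses of
# material 1 interdigitated in a uniform matrix of material 0.
import numpy as np

def bosg_realization(nx=80, nz=40, n_lenses=6, max_w=20, max_h=4, seed=None):
    rng = np.random.default_rng(seed)
    grid = np.zeros((nz, nx), dtype=int)          # 0 = matrix material
    for _ in range(n_lenses):
        w, h = rng.integers(2, max_w), rng.integers(1, max_h)
        x, z = rng.integers(0, nx - w), rng.integers(0, nz - h)
        grid[z:z + h, x:x + w] = 1                # 1 = interdigitated material
    return grid

# Each realization would be written to a text file and fed to the FLAC model.
np.savetxt("realization_000.txt", bosg_realization(seed=0), fmt="%d")
```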

Relevance: 30.00%

Abstract:

The BBC series SHERLOCK was one of Britain's most-exported television productions in 2011 and has been translated into many languages worldwide. One of the challenges in translating it is posed by the series' on-screen text overlays ("inserts"). The inserts verbalize the protagonist's thoughts, depict written and digital communication, and stand out through their visual salience, in places serving as the only carriers of verbal communication, which makes them an important aesthetic and narrative device in the series. Interestingly, all stylistic properties of the original inserts are preserved in the translation. This thesis investigates, on the one hand, how on-screen text in film can be described theoretically and, on the other, how it can be translated in practice as was done in the German version of SHERLOCK. For the theoretical description, the inserts in SHERLOCK are first contrasted with subtitling norms along relevant basic semiotic dimensions. Furthermore, the relationship between inserts and the film image is explored. To this end, it is examined how well various approaches to text-image relationships from linguistics, comics studies, translation studies, and typography can account for the inserts in SHERLOCK. The practical part examines the translation of the inserts. The translation process for the German version is reconstructed on the basis of an expert interview with the series' dubbing author, who was also responsible for the wording of the inserts. Finally, specific translation problems of the inserts from the second season of SHERLOCK are discussed. It emerges that subtitling norms are not suited to describing inserts, since they are heavily restricted in dimensions such as position, graphic design, animation, sound effects, and also timing. This can be explained by the historically shaped understanding of subtitles as accompanying material added (of necessity) to the finished film image and sequence in a way meant to disturb as little as possible, whereas the inserts in SHERLOCK were in some cases even assigned a central place in the image and scene composition as early as the shooting stage. With respect to text-image relationships, the greatest parallels are found with approaches from comics studies, since there, too, written text is embedded in the image rather than the other way around. However, even these approaches are insufficient for describing movement and sound. Exploring the explanatory reach of further promising concepts, such as interface and usability, remains a goal for future studies. From the expert interview it can be concluded that the translation of inserts is a new, as yet unstandardized procedure in which idiosyncratic practical solutions are used for cross-language communication between the various parties in the process. For high-quality productions, the involvement of graphic designers is indispensable for replacement insert translation as well, at least for the creation of new inserts as translations of filmed text (displays). Here the theoretically possible synergies between language and image experts have not yet been fully exploited. There is also room for improvement regarding the provision of careful documentation of the source-language version, which would be relevant as reference material for the translation, in particular also for purposes of international quality assurance. The translated inserts in the German version are overall of very high quality. Translation problems arise with the genre-typical element of codes, which are challenging because of their compactness and their multiple references to the film. Besides further familiar translation problems such as intertextual references and realia, the question repeatedly arises of how much of the insert and display text shown in the original needs to be translated. For reasons of visual consistency, new inserts became necessary for the translation of displays. The question arises in particular for filler texts, which serve to represent text and to extend the boundaries of the fictional world, but involve great translation effort for minimal significance to the plot.

Relevance: 30.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but is not accounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
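
Transport-delay handling of the kind mentioned can be sketched as a cross-correlation alignment between a command signal and the lagged sensor response; this is an illustrative sketch, not the authors' procedure:

```python
# Sketch: estimate a transport delay by cross-correlating a command signal
# with the delayed sensor response, then realign the sensor data.
import numpy as np

def estimate_delay(command, sensed):
    """Delay (in samples) of `sensed` relative to `command`."""
    c = command - command.mean()
    s = sensed - sensed.mean()
    corr = np.correlate(s, c, mode="full")
    return int(corr.argmax()) - (len(c) - 1)

rng = np.random.default_rng(1)
noise = rng.normal(size=530)        # toy broadband "true" signal
command = noise[30:]                # commanded input, 500 samples
sensed = noise[:-30]                # same signal delayed by 30 samples

lag = estimate_delay(command, sensed)
aligned = sensed[lag:]              # drop the delay to realign with command
print(lag)                          # -> 30
```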