878 results for Data-Information-Knowledge Chain


Relevance:

40.00%

Publisher:

Abstract:

Owing to its particular position and complex geological history, the Northern Apennines have long been regarded as a natural laboratory for many kinds of investigation. It is nevertheless difficult to combine all the available knowledge about the Northern Apennines into a single picture that explains the structural and geological setting that produced them. The main goal of this thesis is to bring together all the information on the deformation of this region, in the crust and at depth, and to describe a geodynamical model that accounts for it. To do so, we analyzed the pattern of deformation in the crust and in the mantle. In both cases the deformation was studied using information recovered from earthquakes, although with different techniques. The shallower deformation was studied using seismic moment tensor information. For this purpose we used the method described in Arvidsson and Ekstrom (1998), which, by including surface waves in the inversion [and not only body waves, as in the Centroid Moment Tensor method of Dziewonski et al. (1981)], allows seismic source parameters to be determined for earthquakes with magnitudes as small as 4.0. We applied this tool to the Northern Apennines and, through this activity, built up the Italian CMT dataset (Pondrelli et al., 2006) and the pattern of seismic deformation, using the Kostrov (1974) method on a regular grid of 0.25 degree cells. We obtained a map of lateral variations of the pattern of seismic deformation in different depth layers, taking into account the fact that shallow earthquakes (within 15 km depth) occur everywhere in the region, while most events with deeper hypocenters (15-40 km) occur only in the outer part of the belt, on the Adriatic side. For the analysis of the deep deformation, i.e. that occurring in the mantle, we used the anisotropy information characterizing the structure below the Northern Apennines. Anisotropy is a property of the Earth that, in the crust, is due to the presence of aligned fluid-filled cracks or alternating isotropic layers with different elastic properties, whereas in the mantle the most important cause of seismic anisotropy is the lattice-preferred orientation (LPO) of mantle minerals such as olivine. Olivine is a highly anisotropic mineral and tends to align its fast crystallographic axis (a-axis) parallel to the asthenospheric flow, as a response to the finite strain induced by geodynamic processes. The seismic anisotropy pattern of a region is measured using the shear-wave splitting phenomenon (the seismological analogue of optical birefringence). Here we apply the Sileny and Plomerova (1996) approach to teleseismic earthquakes recorded at stations located in the study region. The results are analyzed in terms of their lateral and vertical variations to better define the Earth structure beneath the Northern Apennines. We find different anisotropic domains, a Tuscany one and an Adria one, with a pattern of seismic anisotropy that varies laterally in a way similar to the seismic deformation. Moreover, beneath the Adriatic region the distribution of the splitting parameters is so complex that it requires a dedicated analysis. We therefore applied to our data the code of Menke and Levin (2003), which makes it possible to search for structural models with multilayer anisotropy. We found that the structure beneath the Po Plain is probably even more complicated than expected.
On the basis of the results obtained for this thesis, combined with those of previous works, we suggest that the slab roll-back that created the Apennines and opened the Tyrrhenian Sea evolved differently at the northern boundary of the Northern Apennines than in its southern part. In particular, the trench retreat developed primarily south of our study region, with an eastward roll-back. In the northern portion of the orogen, after a first stage during which the retreat was perpendicular to the trench, it became oblique with respect to the structure.
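The Kostrov summation behind the crustal deformation maps is compact enough to sketch. The Python snippet below is only an illustration, not the thesis's own code: the shear modulus, layer thickness, catalogue time span and the toy moment tensors are assumed values chosen to make the example run.

```python
import numpy as np

MU = 3.0e10   # shear modulus in Pa (assumed crustal value)

def kostrov_strain_rate(moment_tensors, cell_area_km2, thickness_km, years, mu=MU):
    """Average seismic strain-rate tensor in a grid cell (Kostrov, 1974):
    sum of the event moment tensors divided by 2 * mu * cell volume * time span."""
    volume_m3 = cell_area_km2 * thickness_km * 1.0e9          # km^3 -> m^3
    m_sum = np.sum([np.asarray(m, dtype=float) for m in moment_tensors], axis=0)
    return m_sum / (2.0 * mu * volume_m3 * years)             # strain rate per year

# toy example: two identical strike-slip events (Mw ~ 4.6) in one 0.25-degree cell
m0 = 1.0e16                                                   # scalar moment in N m
mt = np.array([[0.0, m0, 0.0],
               [m0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
cell_area = (0.25 * 111.0) ** 2                               # rough cell area in km^2
print(kostrov_strain_rate([mt, mt], cell_area, thickness_km=15.0, years=30.0))
```

Summing the tensors of all events falling in a cell and dividing by 2μVT gives the average seismic strain-rate tensor for that cell, which is the quantity mapped on the 0.25-degree grid.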

Relevance:

40.00%

Publisher:

Abstract:

Since the end of the 19th century, geodesy has contributed greatly to the knowledge of regional tectonics and fault movement through its ability to measure, at sub-centimetre precision, the relative positions of points on the Earth's surface. Nowadays the systematic analysis of geodetic measurements in actively deforming regions therefore represents one of the most important tools in the study of crustal deformation over different temporal scales [e.g., Dixon, 1991]. This dissertation focuses on motion that can be observed geodetically with classical terrestrial position measurements, particularly triangulation and leveling observations. The work is divided into two sections: an overview of the principal methods for estimating the long-term accumulation of elastic strain from terrestrial observations, and an overview of the principal methods for rigorously inverting surface coseismic deformation fields for source geometry, with tests on synthetic deformation data sets and applications in two tectonically active regions of the Italian peninsula. For the analysis of long-term elastic strain accumulation, triangulation data were available from a geodetic network across the Messina Straits area (southern Italy) for the period 1971-2004. From the resulting angle changes, the shear strain rates and the orientation of the principal axes of the strain-rate tensor were estimated. The computed average annual shear strain rates for 1971-2004 are γ̇1 = 113.89 ± 54.96 nanostrain/yr and γ̇2 = -23.38 ± 48.71 nanostrain/yr, with the orientation of the most extensional strain (θ) at N140.80° ± 19.55°E. These results suggest that the first-order strain field of the area is dominated by extension in the direction perpendicular to the trend of the Straits, supporting the hypothesis that the Messina Straits could represent an area of active concentrated deformation. The orientation of θ agrees well with GPS deformation estimates calculated over a shorter time interval, is consistent with previous preliminary GPS estimates [D'Agostino and Selvaggi, 2004; Serpelloni et al., 2005], and is also similar to the direction of the 1908 (MW 7.1) earthquake slip vector [e.g., Boschi et al., 1989; Valensise and Pantosti, 1992; Pino et al., 2000; Amoruso et al., 2002]. Thus, the measured strain rate can be attributed to active extension across the Messina Straits, corresponding to a relative extension rate ranging from less than 1 mm/yr up to ~2 mm/yr within the portion of the Straits covered by the triangulation network. These results are consistent with the hypothesis that the Messina Straits is an important active geological boundary between the Sicilian and Calabrian domains and support previous preliminary GPS-based estimates of strain rates across the Straits, which show that the active deformation is distributed over a broader area. Finally, preliminary dislocation modelling has shown that, although the current geodetic measurements do not resolve the geometry of the dislocation models, they resolve well the rate of interseismic strain accumulation across the Messina Straits and give useful information about the locking depth of the shear zone. Geodetic data, triangulation and leveling measurements of the 1976 Friuli (NE Italy) earthquake, were available for the inversion of coseismic source parameters.
From the observed angle and elevation changes, the source parameters of the seismic sequence were estimated in a joint inversion using the simulated annealing algorithm. The computed optimal uniform-slip elastic dislocation model consists of a 30° north-dipping, shallow (depth 1.30 ± 0.75 km) fault plane with an azimuth of 273°, accommodating reverse dextral slip of about 1.8 m. The hypocentral location and inferred fault plane of the main event are thus consistent with the activation of Periadriatic overthrusts or other related thrust faults such as the Gemona-Kobarid thrust. The geodetic data set therefore excludes the source solutions of Aoudia et al. [2000], Peruzza et al. [2002] and Poli et al. [2002], which consider the Susans-Tricesimo thrust as the source of the May 6 event. The best-fit source model is instead more consistent with the solution of Pondrelli et al. [2001], who proposed the activation of other thrusts located further north of the Susans-Tricesimo thrust, probably on Periadriatic-related thrust faults. The main characteristics of the leveling and triangulation data are fit by the optimal single-fault model; that is, these results are consistent with a first-order rupture process characterized by the progressive rupture of a single fault system. A single uniform-slip fault model, however, does not seem to reproduce some minor complexities of the observations, and some residual signals not modelled by the optimal single-fault-plane solution were observed. In particular, the single-fault-plane model does not reproduce some minor features of the leveling deformation field along route 36, south of the main uplift peak; a second fault seems to be necessary to reproduce these residual signals. By assuming movement along some mapped thrusts located south of the inferred optimal single-plane solution, the residual signal has been successfully modelled. In summary, the inversion results presented in this thesis are consistent with the activation of some Periadriatic-related thrusts for the main events of the sequence, and with a minor role of the southward thrust systems of the middle Tagliamento plain.
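The abstract names simulated annealing as the search engine of the joint inversion but does not show it. The sketch below is a generic annealing loop over a box of fault parameters, with the elastic-dislocation forward model left as a user-supplied (hypothetical) function rather than the thesis's actual predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(params, forward, d_obs, sigma):
    """Weighted least-squares misfit between observed and predicted data."""
    return np.sum(((d_obs - forward(params)) / sigma) ** 2)

def simulated_annealing(forward, d_obs, sigma, bounds,
                        n_iter=20000, t0=1.0, cooling=0.9995):
    """Search uniform-slip dislocation parameters by simulated annealing.

    bounds : (n_params, 2) array of lower/upper limits for each parameter
             (e.g. depth, dip, strike, slip); forward() is a user-supplied
             predictor of angle and elevation changes (hypothetical here).
    """
    bounds = np.asarray(bounds, dtype=float)
    m = bounds.mean(axis=1)                     # start in the middle of the box
    e = misfit(m, forward, d_obs, sigma)
    best_m, best_e, temp = m.copy(), e, t0
    for _ in range(n_iter):
        step = 0.05 * (bounds[:, 1] - bounds[:, 0]) * rng.standard_normal(m.size)
        trial = np.clip(m + step, bounds[:, 0], bounds[:, 1])
        e_trial = misfit(trial, forward, d_obs, sigma)
        # always accept downhill moves; accept uphill moves with Boltzmann probability
        if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
            m, e = trial, e_trial
            if e < best_e:
                best_m, best_e = m.copy(), e
        temp *= cooling                          # slowly cool the acceptance temperature
    return best_m, best_e
```

In practice the forward function would compute the predicted angle and elevation changes for a uniform-slip rectangular dislocation (e.g. an Okada-type model), and sigma would hold the observation uncertainties of the triangulation and leveling data.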

Relevance:

40.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the field of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The issue of the quality of the voltage provided by utilities and influenced by customers at the various points of a network has emerged only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study has focused on electromagnetic transients affecting line voltages. The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by the technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art of methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks to which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study has been carried out with the aim of providing an alternative to the transducer in use, offering equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
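The abstract describes locating the disturbance source from the times at which remote stations detect the transient, but does not spell out the formula. A textbook two-terminal travelling-wave scheme, not necessarily the exact method developed in the thesis, conveys the idea, assuming GPS-synchronized time tags and an assumed propagation velocity.

```python
def locate_fault_two_terminal(t_a_us, t_b_us, line_length_km, v_km_per_us=0.29):
    """Distance of the fault from terminal A (km) on a single line section.

    t_a_us, t_b_us : synchronized arrival times (microseconds) of the transient
                     wavefront at the two ends of the line
    v_km_per_us    : assumed travelling-wave propagation velocity
    """
    return 0.5 * (line_length_km + v_km_per_us * (t_a_us - t_b_us))

# fault 3 km from terminal A on a 10 km line: A sees the wavefront ~13.8 us before B
print(locate_fault_two_terminal(t_a_us=0.0, t_b_us=13.79, line_length_km=10.0))  # ~3.0 km
```

The combined uncertainty discussed in chapter 5 would then follow from the uncertainties on the time tags, the line length and the assumed propagation velocity.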

Relevance:

40.00%

Publisher:

Abstract:

In recent years, a growing number of researchers have focused their attention on developing strategies to characterize the ADMET properties of drug candidates as early as possible. This trend stems from the awareness that about half of the drugs under development fail to reach the market because of shortcomings in their ADME characteristics, and that at least half of the molecules that do reach the market still show some toxicological or ADME problem [1]. Indeed, it matters little how active or specific a molecule may be: to become a drug, it must be well absorbed, distributed throughout the organism, metabolized neither too quickly nor too slowly, and completely eliminated. Moreover, the molecule and its metabolites should not be toxic to the organism. It is therefore clear that a rapid determination of ADMET parameters at early stages of drug development saves time and money, allowing the most promising compounds to be selected at once and those with unfavourable characteristics to be discarded. This thesis fits into this context and shows the application of a simple technique, biochromatography, to rapidly characterize the binding of libraries of compounds to human serum albumin (HSA). It also shows the use of another, independent technique, circular dichroism, which allows the same drug-protein systems to be studied in solution, providing additional information on the stereochemistry of the binding process. HSA is the most abundant protein in blood. It acts as a carrier for a large number of molecules, both endogenous (e.g. bilirubin, thyroxine, steroid hormones, fatty acids) and xenobiotic. It also increases the solubility of lipophilic molecules that are poorly soluble in aqueous media, such as taxanes. Binding to HSA is generally stereoselective and occurs at high-affinity binding sites. It is also well known that competition between drugs, or between a drug and endogenous metabolites, can significantly change their free fraction, modifying their activity and toxicity. Because of these properties, HSA can influence both the pharmacokinetic and the pharmacodynamic properties of drugs. It is not unusual for an entire drug-development project to be abandoned because of too high an affinity for HSA, a too short half-life, or poor distribution due to weak binding to HSA. From a pharmacokinetic point of view, therefore, HSA is the most important transport protein in plasma. A large number of publications demonstrate the reliability of the biochromatographic technique in studying biorecognition phenomena between proteins and small molecules [2-6]. My work focused mainly on the use of biochromatography as a method to evaluate the HSA-binding characteristics of several series of compounds of pharmaceutical interest, and on the improvement of this technique. To gain a better understanding of the binding mechanisms of the molecules studied, the same drug-HSA systems were also studied by circular dichroism (CD). Initially, HSA was immobilized on a packed epoxy-silica column (50 x 4.6 mm internal diameter), using a procedure previously reported in the literature [7], with some minor modifications.
Briefly, immobilization was carried out by recirculating, through a previously packed column, an HSA solution under defined pH and ionic-strength conditions. The column was then characterized with respect to the amount of correctly immobilized protein by frontal analysis of L-tryptophan [8]. Next, racemic solutions of molecules known to bind HSA enantioselectively were injected onto the column, to verify that the immobilization procedure had not modified the binding properties of the protein. After characterization, the column was used to determine the binding percentage of a small series of HIV protease inhibitors (PIs) and to identify their binding site(s). The binding percentage was calculated from the capacity factor (k) of the samples. The value of this parameter in aqueous mobile phase was extrapolated linearly from the plot of log k against the percentage (v/v) of 1-propanol in the mobile phase. Only for two of the five compounds analyzed could the value of k be measured directly in the absence of organic solvent. All the PIs analyzed showed a high binding percentage to HSA: in particular, the value for ritonavir, lopinavir and saquinavir was greater than 95%. These results agree with literature data obtained with an optical biosensor [9]. They are also consistent with the significant reduction of the inhibitory activity of these compounds observed in the presence of HSA, a reduction that appears to be larger for the compounds that bind the protein more strongly [10]. Competition studies were then performed by zonal chromatography. In this method a solution of a competitor at known concentration is used as the mobile phase, while small amounts of the analyte are injected onto the HSA-functionalized column. The competitors were selected on the basis of their selective binding to one of the main binding sites of the protein. In particular, sodium salicylate, ibuprofen and sodium valproate were used as markers of site I, site II and the bilirubin site, respectively. These studies showed independent binding of the PIs to sites I and II, while weak anticooperativity was observed for the bilirubin site. The same drug-protein system was finally investigated in solution by circular dichroism. In particular, the change in the induced CD signal of an equimolar [HSA]/[bilirubin] complex was monitored upon addition of aliquots of ritonavir, chosen as representative of the series. The results confirm the slight anticooperativity for the bilirubin site previously observed in the biochromatographic studies. Subsequently, the same protocol described above was applied to a monolithic epoxy-silica column (50 x 4.6 mm) to evaluate the reliability of the monolithic support for biochromatographic applications. The monolithic support showed good chromatographic characteristics in terms of back pressure, efficiency and stability, as well as reliability in the determination of HSA binding parameters.
This column was used to determine the HSA binding percentage of a series of polyaminoquinones developed within a research project on Alzheimer's disease. All the compounds showed a binding percentage higher than 95%. Moreover, a correlation was observed between the binding percentage and the characteristics of the side chain (length and number of amino groups). Competition studies on these compounds were then carried out by circular dichroism, which revealed an anticooperative effect of the polyaminoquinones at sites I and II, whereas binding was independent with respect to the bilirubin site. The knowledge acquired with the monolithic support described above was applied to a shorter epoxy-silica column (10 x 4.6 mm). The method used to determine the binding percentage in the previous studies relies on data obtained from several experiments, so considerable time is needed before the final result is available. The use of a shorter column reduces the retention times of the analytes, so that the determination of the HSA binding percentage becomes much faster, moving from a medium-throughput analysis to a high-throughput screening (HTS) analysis. Moreover, the reduction of the analysis time makes it possible to avoid the use of organic solvents in the mobile phase. After characterizing the 10 mm column with the same method described for the other columns, a series of standards was injected at different mobile-phase flow rates, to evaluate the possibility of using high flow rates. The column was then used to estimate the binding percentage of a series of molecules with different chemical characteristics. The possibility of using such a short column for competition studies was also evaluated, and the binding of a series of compounds to site I was investigated. Finally, the stability of the column after extensive use was assessed. The use of chromatographic supports functionalized with albumins of different origin (rat, dog, guinea pig, hamster, mouse, rabbit) can be proposed as a future application of these HTS columns. Indeed, the possibility of obtaining information on the binding of drug candidates to the different albumins would allow a better comparison between data obtained from in vitro experiments and data obtained from animal experiments, facilitating the subsequent extrapolation to humans, with the speed of an HTS method. It would also reduce the number of animals used in experiments. Several works in the literature demonstrate the reliability of columns functionalized with albumins of different origin [11-13]: the use of shorter columns could broaden their applications.
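The conversion from retention data to a binding percentage can be sketched as follows. The snippet assumes the simple chromatographic relation bound fraction = k / (1 + k) and a linear extrapolation of log k to 0% 1-propanol; the retention values are invented for illustration, and the actual calibration used in the thesis may differ.

```python
import numpy as np

def extrapolate_logk(propanol_percent, log_k):
    """Linear extrapolation of log k to a purely aqueous mobile phase (0% 1-propanol)."""
    slope, intercept = np.polyfit(propanol_percent, log_k, 1)
    return intercept                      # log k at 0% organic modifier

def percent_bound(log_k_aqueous):
    """Percentage of analyte associated with the immobilized HSA, taking the
    bound fraction as k / (1 + k)."""
    k = 10.0 ** log_k_aqueous
    return 100.0 * k / (1.0 + k)

# illustrative (made-up) retention data at 4%, 6% and 8% (v/v) 1-propanol
logk0 = extrapolate_logk([4.0, 6.0, 8.0], [1.45, 1.20, 0.95])
print(round(percent_bound(logk0), 1))     # ~99% for this synthetic example
```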

Relevance:

40.00%

Publisher:

Abstract:

Ontology design and population (core aspects of semantic technologies) have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a great deal of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and thus speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as Named Entity Recognition, Entity Resolution, Taxonomy and Relation Extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking the entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing, with the ability to logically understand the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
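The final step of such a pipeline, turning extracted facts into linked data, can be sketched with rdflib. The extraction itself (named entity recognition, frame detection) is only stubbed here with hypothetical pre-extracted (subject, relation, object) tuples, and the namespace is a placeholder rather than any schema actually produced in the work.

```python
from rdflib import Graph, Namespace

# hypothetical facts as they might come out of an NER + frame-extraction pipeline
facts = [
    ("Dante_Alighieri", "author_of", "Divina_Commedia"),
    ("Divina_Commedia", "written_in", "Italian"),
]

EX = Namespace("http://example.org/onto/")   # placeholder namespace, not a real schema
g = Graph()
g.bind("ex", EX)

for subj, rel, obj in facts:
    g.add((EX[subj], EX[rel], EX[obj]))      # one RDF triple per extracted fact

print(g.serialize(format="turtle"))          # linked data ready to extend a knowledge base
```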

Relevance:

40.00%

Publisher:

Abstract:

A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool consisting of data weighting and model scaling has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough, qualitative and/or quantitative interpretation. This information also serves as a priori information to design models for the inversion and/or to assist the interpretation of inversion results. The results of theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results give more insight into the genesis, physics and posteruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization. Smaller (weaker) secondary gravity and magnetic anomalies superimposed on the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the posteruptive development. Contrary to postulates referring to kimberlite pipes, there is no general systematic relationship between diameter and height, nor between the geophysical anomaly and the dimensions of the maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, most likely supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a long-lasting phreatomagmatic process and hence to the formation of large structures. For maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through the interaction of magma from small and shallow magma chambers (probably segregated magmas) with small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, and the dip angle of the diatreme walls is similar to that of kimberlite pipes, lying between 70° and 85°.
Note that these numerical characteristics, especially the dip angle, hold for those maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
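The damped nonlinear least-squares scheme with data weighting and model scaling mentioned above can be sketched generically. The update below is a standard Levenberg-Marquardt-style step, and the weighting and scaling matrices are placeholders, so it illustrates the class of method rather than the thesis's exact formulation.

```python
import numpy as np

def damped_gn_step(forward, jacobian, m, d_obs, W, S, damping):
    """One iteration of damped nonlinear least squares with data weighting W
    and model scaling S (generic sketch).

    forward(m)  -> predicted data vector for model m
    jacobian(m) -> sensitivity matrix d(predicted)/d(model)
    W : diagonal data-weighting matrix (e.g. inverse data standard deviations)
    S : diagonal model-scaling matrix balancing parameters of different units
    """
    r = d_obs - forward(m)                          # residuals drive the update
    J = jacobian(m) @ S                             # scale columns so parameters are comparable
    A = J.T @ W @ J + damping * np.eye(J.shape[1])  # damping stabilizes the normal equations
    dm_scaled = np.linalg.solve(A, J.T @ W @ r)
    return m + S @ dm_scaled                        # map the scaled update back to model units
```

Iterating this step, lowering the damping when a trial step reduces the weighted residual and raising it otherwise, is the usual way such a scheme converges to geometry and rock-property estimates of the kind discussed above.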

Relevance:

40.00%

Publisher:

Abstract:

In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community. In particular it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously exposed formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose the query constraints; fully customizable and highly structured query results having a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
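The thesis defines its own query language, whose syntax is not reproduced in the abstract. Purely as a modern analogue, the rdflib/SPARQL sketch below shows the kind of content-aware request the text describes, over a hypothetical metadata vocabulary that is not the actual HELM schema.

```python
from rdflib import Graph

# hypothetical HELM-style metadata: each document carries RDF triples describing
# which mathematical objects it refers to (the vocabulary below is illustrative only)
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/helm/> .
ex:doc1 ex:refersTo ex:NaturalNumbers .
ex:doc2 ex:refersTo ex:RealNumbers .
""", format="turtle")

# content-aware request: "documents that refer to the natural numbers",
# expressed here in SPARQL as a stand-in for the thesis's own query language
results = g.query("""
PREFIX ex: <http://example.org/helm/>
SELECT ?doc WHERE { ?doc ex:refersTo ex:NaturalNumbers . }
""")
for row in results:
    print(row.doc)
```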

Relevance:

40.00%

Publisher:

Abstract:

Nowadays communication is switching from a centralized scenario, in which communication media such as newspapers, radio and TV programs produce information and people are just consumers, to a completely different, decentralized scenario, in which everyone is potentially an information producer through the use of social networks, blogs and forums that allow real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques such as Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This makes it possible to determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, making it attractive with respect to previous, more sophisticated techniques. Every technique discussed has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, for both single-domain and cross-domain tasks, in 2-class (i.e. positive vs. negative) Document Sentiment Classification. However, there is still room for improvement: this work also indicates how performance could be enhanced, namely that a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also address validating these results in tasks with more than 2 classes.
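The abstract does not detail the Markov Chain based model. The sketch below is a minimal word-bigram variant, with one transition table per class and add-one smoothing, intended only to convey the general idea of scoring a document by the likelihood its word transitions receive under each class's chain; it is not the thesis's exact model.

```python
from collections import defaultdict
import math

class MarkovSentimentClassifier:
    """Minimal word-bigram Markov-chain sentiment classifier (illustrative sketch)."""

    def __init__(self):
        self.trans = {}          # class label -> {(w1, w2): count}
        self.vocab = set()

    def fit(self, docs, labels):
        for words, label in zip(docs, labels):
            table = self.trans.setdefault(label, defaultdict(int))
            self.vocab.update(words)
            for w1, w2 in zip(words, words[1:]):
                table[(w1, w2)] += 1
        return self

    def _log_likelihood(self, words, label):
        table = self.trans[label]
        v = len(self.vocab)
        score = 0.0
        for w1, w2 in zip(words, words[1:]):
            w1_total = sum(c for (a, _), c in table.items() if a == w1)
            # add-one smoothed transition probability P(w2 | w1) under this class
            score += math.log((table.get((w1, w2), 0) + 1) / (w1_total + v))
        return score

    def predict(self, words):
        return max(self.trans, key=lambda label: self._log_likelihood(words, label))

clf = MarkovSentimentClassifier().fit(
    [["great", "movie"], ["awful", "movie"]], ["pos", "neg"])
print(clf.predict(["great", "movie"]))   # -> pos
```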

Relevance:

40.00%

Publisher:

Abstract:

Local to regional climate anomalies are to a large extent determined by the state of the atmospheric circulation. Knowledge of large-scale sea level pressure (SLP) variations in former times is therefore crucial when addressing past climate changes across Europe and the Mediterranean. However, currently available SLP reconstructions lack data from the ocean, particularly in the pre-1850 period. Here we present a new statistically derived gridded seasonal SLP dataset at 5° × 5° resolution covering the eastern North Atlantic, Europe and the Mediterranean area (40°W–50°E; 20°N–70°N) back to 1750, built from terrestrial instrumental pressure series and marine wind information from ship logbooks. For the period 1750–1850, the new SLP reconstruction provides a more accurate representation of the strength of the winter westerlies, as well as of the location and variability of the Azores High, than currently available multiproxy pressure field reconstructions. These findings strongly support the potential of ship logbooks as an important source for determining past circulation variations, especially for the pre-1850 period. This new dataset can further be used for dynamical studies relating large-scale atmospheric circulation to temperature and precipitation variability over the Mediterranean and Eurasia, for comparison with outputs from GCMs, and for detection and attribution studies.
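The reconstruction is described only as statistically derived from terrestrial pressure series and logbook wind information. One common way to build such a gridded product is principal-component regression, sketched below with NumPy; the function names and array shapes are illustrative assumptions, not the paper's actual code or configuration.

```python
import numpy as np

def calibrate_and_reconstruct(predictors_cal, slp_grid_cal, predictors_early, n_pcs=5):
    """Sketch of a statistical SLP-field reconstruction via principal-component
    regression (the paper's actual method may differ in detail).

    predictors_cal   : (n_seasons_cal, n_series) station pressure / wind indices
    slp_grid_cal     : (n_seasons_cal, n_gridpoints) gridded SLP in the calibration period
    predictors_early : (n_seasons_early, n_series) the same predictors before 1850
    """
    # EOF/PC decomposition of the calibration-period SLP field
    clim = slp_grid_cal.mean(axis=0)
    U, s, Vt = np.linalg.svd(slp_grid_cal - clim, full_matrices=False)
    pcs = U[:, :n_pcs] * s[:n_pcs]                 # leading principal components
    eofs = Vt[:n_pcs]                              # corresponding spatial patterns

    # least-squares regression from the predictors to each leading PC
    X = np.column_stack([np.ones(len(predictors_cal)), predictors_cal])
    beta, *_ = np.linalg.lstsq(X, pcs, rcond=None)

    # apply the regression to the early predictors and rebuild the gridded field
    X_early = np.column_stack([np.ones(len(predictors_early)), predictors_early])
    return X_early @ beta @ eofs + clim            # reconstructed seasonal SLP fields
```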

Relevance:

40.00%

Publisher:

Abstract:

During the past 20 years or so, more has become known about the properties of khat and about its pharmacology and its physiological and psychological effects on humans. At the same time, however, its reputation for social and recreational use in traditional contexts has hindered the dissemination of knowledge about its detrimental effects in terms of mortality. This paper focuses on this particular deficit and adds to the knowledge base by reviewing the scant literature that does exist on mortality associated with the trade and use of khat. We sought all peer-reviewed papers relating to deaths associated with khat. From an initial list of 111, we identified 15 items meeting our selection criteria. Examination of these revealed 61 further relevant items. These were supplemented with published reports and with newspaper and other media reports. A conceptual framework was then developed for classifying mortality associated with each stage of the plant's journey, from cultivation and transportation to consumption and its effects on the human body. The model is demonstrated with concrete examples drawn from the above sources. These highlight a number of issues for which more substantive statistical data are needed, including population-based studies of the physiological and psychological determinants of khat-related fatalities. Khat-consuming communities, and the health professionals charged with their care, should be more aware of the physiological and psychological effects of khat, together with the risks for morbidity and mortality associated with its use. There is also a need for information to be collected at international and national levels on other causes of death associated with khat cultivation, transportation and trade. Both these dimensions need to be understood.

Relevance:

40.00%

Publisher:

Abstract:

Researchers examining the effects of programs, in this case a state-level pharmaceutical assistance program for the elderly, sometimes must rely on multiple methods of data collection. Two-stage data collection (e.g., a telephone interview followed by a mail questionnaire) was used to obtain a full range of information. Older age groups were found to participate less frequently in the telephone interview, while certain demographic factors characterized mail questionnaire nonparticipants, all of which supports past research. Results also show that those in the poorest health are less likely to participate in the mail survey. Combining the two methods did not result in high attrition, suggesting that innovation can be successfully employed. Knowledge of the bias associated with each method will aid in targeting special groups.

Relevance:

40.00%

Publisher:

Abstract:

The new knowledge environments of the digital age are often described as places where we are all closely read, with our buying habits, location, and identities available to advertisers, online merchants, the government, and others through our use of the Internet. This is represented as a loss of privacy in which these entities learn about our activities and desires, using means that were unavailable in the pre-digital era. This article argues that the reciprocal nature of digital networks means 1) that the privacy issues that we face online are not radically different from those of the pre-Internet era, and 2) that we need to reconceive of close reading as an activity of which both humans and computer algorithms are capable.