947 results for XML, Schema matching


Relevance:

20.00%

Publisher:

Abstract:

Cooperation and the sharing of cataloguing and bibliographic information in automated environments became possible only with the creation and adoption of the MARC21 interchange format. However, advances in information and communication technologies, together with the growing use of the Internet and of databases and data banks, created the need to develop tools that optimize the organization, retrieval, and interchange of information. XML is one such development, whose purpose is to facilitate the management, storage, and transmission of data over the Internet. In this context, a literature review was conducted to analyze the MARC21 interchange format and the XML markup language as tools for consolidating automated cooperative cataloguing, comparing their flexibility in the storage, organization, retrieval, and interchange of data over the Internet. This research made it possible to disseminate to the library community, through a literature review, what has been discussed internationally about MARC21 and XML.
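The MARC-to-XML relationship the abstract discusses can be illustrated with a short sketch that serializes a bibliographic record in a MARC-inspired XML structure. The field tags follow MARC conventions (245 = title, 100 = main author), but the record data and the simplified schema are hypothetical, not actual MARCXML:

```python
import xml.etree.ElementTree as ET

# A minimal sketch of serializing a bibliographic record in a
# MARC-inspired XML structure (hypothetical example data).
record = ET.Element("record")
leader = ET.SubElement(record, "leader")
leader.text = "00000nam a2200000 a 4500"

def add_datafield(rec, tag, code, value):
    """Add a MARC-style datafield with a single subfield."""
    df = ET.SubElement(rec, "datafield", tag=tag, ind1=" ", ind2=" ")
    sf = ET.SubElement(df, "subfield", code=code)
    sf.text = value
    return df

add_datafield(record, "245", "a", "Cataloguing in the digital age")  # title
add_datafield(record, "100", "a", "Doe, Jane")                       # author

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)
```

Because the record is plain XML, it can be transmitted, stored, and queried with generic XML tooling, which is precisely the flexibility argument made for XML over the binary MARC exchange format.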

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Candidemia in critically ill patients is usually a severe and life-threatening condition with a high crude mortality. Very few studies have focused on the impact of candidemia on ICU patient outcome, and the attributable mortality remains controversial. This study was carried out to determine the attributable mortality of ICU-acquired candidemia in critically ill patients using propensity score matching analysis. Methods: A prospective observational study was conducted of all consecutive non-neutropenic adult patients admitted for at least seven days to 36 ICUs in Spain, France, and Argentina between April 2006 and June 2007. The probability of developing candidemia was estimated using a multivariate logistic regression model. Each patient with ICU-acquired candidemia was matched with two control patients with the nearest available Mahalanobis metric matching within the calipers defined by the propensity score. Standardized difference tests (SDT) for each variable before and after matching were calculated. Attributable mortality was determined by a modified Poisson regression model adjusted for those variables that still showed imbalance, defined as an SDT > 10%. Results: Thirty-eight candidemias were diagnosed in 1,107 patients (34.3 episodes/1,000 ICU patients). Patients with and without candidemia had an ICU crude mortality of 52.6% versus 20.6% (P < 0.001) and a crude hospital mortality of 55.3% versus 29.6% (P = 0.01), respectively. In the propensity-matched analysis, the corresponding figures were 51.4% versus 37.1% (P = 0.222) and 54.3% versus 50% (P = 0.680). After controlling for residual confounding with the Poisson regression model, the relative risk (RR) of ICU- and hospital-attributable mortality from candidemia was RR 1.298 (95% confidence interval (CI) 0.88 to 1.98) and RR 1.096 (95% CI 0.68 to 1.69), respectively.
Conclusions: ICU-acquired candidemia in critically ill patients is not associated with an increase in either ICU or hospital mortality.
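The matching step described above can be sketched in a few lines. This is a simplified 1:1 nearest-neighbour version (propensity scores from a hand-rolled logistic regression, matching within a caliper) rather than the study's 1:2 Mahalanobis-metric matching, and all data and variable names are synthetic:

```python
import numpy as np

# Sketch of propensity-score matching: a hand-rolled logistic model
# estimates each patient's probability of the exposure (here, candidemia),
# then each exposed case is matched to the control whose score is nearest
# within a caliper. Data are synthetic; this simplifies the 1:2
# Mahalanobis matching actually used in the study.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                      # hypothetical covariates
treated = (X[:, 0] + rng.normal(size=n) > 1.0)   # synthetic exposure

def propensity_scores(X, y, lr=0.1, steps=2000):
    """Logistic regression by gradient descent; returns P(exposed | X)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

ps = propensity_scores(X, treated.astype(float))
caliper = 0.2 * ps.std()
controls = np.flatnonzero(~treated)
pairs = []
for t in np.flatnonzero(treated):
    d = np.abs(ps[controls] - ps[t])
    if d.min() <= caliper:
        pairs.append((t, controls[d.argmin()]))
print(f"matched {len(pairs)} of {treated.sum()} exposed cases")
```

Comparing outcomes within the matched pairs, rather than across the full cohort, is what separates the attributable mortality from the crude mortality difference.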

Relevance:

20.00%

Publisher:

Abstract:

In the development of information systems, numerous technologies have become established and must be used in a combined and, ideally, synergistic way. On the one hand, relational database management systems allow efficient and effective management of persistent, shared, transactional data. On the other, object-oriented tools and methods (programming languages, but also analysis and design methodologies) enable effective development of application logic. It is useful in this context to explain what is meant by information system and computer system. Information system: the set of people, technological resources, and business procedures whose task is to produce and preserve the information needed to operate and manage the enterprise. Computer system: the set of computing tools used for the automatic processing of information, in order to support the functions of the information system. In other words, the computer system collects, processes, stores, and exchanges information by means of Information and Communication Technologies (ICT): computers, peripherals, communication media, and programs. The computer system is therefore a component of the information system. The information obtained by processing data must be stored somewhere, so that it persists after processing; computing provides the means to do so. Data are raw informational material, not (yet) processed by the receiver, and can be discovered, searched for, collected, and produced. They are the raw material we have available, or produce, to build our communication processes. A company's data are its treasure and represent its evolutionary history.
At the beginning of this introduction it was noted that several technologies have become established in the development of computer systems and that, in particular, the use of relational database management systems provides effective and efficient management of persistent data. In computing, data persistence is the property of data to outlive the execution of the program that created them. Otherwise, data would only be held in RAM and would be lost when the computer is switched off. In programming, persistence means the ability of data structures to survive the execution of a single program; this requires saving them to a non-volatile storage device, for example a file system or a database. In this thesis a system was developed that can manage a hierarchical or relational database, allowing the import of data described by a DTD grammar. Chapter 1 examines in more detail what is meant by information system, the client-server model, and data security. Chapter 2 discusses the Java programming language, databases, and XML files. Chapter 3 describes UML, a language for analysis and modelling, with explicit reference to the developed project. Chapter 4 describes the project that was implemented and the technologies and tools used.
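The kind of XML-to-relational import the thesis describes can be sketched in a few lines. Here Python's standard library stands in for the Java implementation, the XML layout is hypothetical, and no actual DTD validation is performed (the document is only assumed to conform to its grammar):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of importing XML records into a relational table. The XML layout
# and column names are hypothetical; validation against a DTD is assumed
# to have happened upstream.
xml_doc = """<library>
  <book><title>Basi di dati</title><year>2009</year></book>
  <book><title>Java in a Nutshell</title><year>2014</year></book>
</library>"""

conn = sqlite3.connect(":memory:")          # persistent file path in practice
conn.execute("CREATE TABLE book (title TEXT, year INTEGER)")

root = ET.fromstring(xml_doc)
rows = [(b.findtext("title"), int(b.findtext("year")))
        for b in root.iter("book")]
conn.executemany("INSERT INTO book VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM book").fetchone()[0]
print(count)  # 2
```

Writing through a database rather than keeping the parsed tree in memory is exactly the persistence point made above: the imported data survive the program that created them.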

Relevance:

20.00%

Publisher:

Abstract:

Environmental management includes many components, among which we can include Environmental Management Systems (EMS), environmental reporting and analysis, environmental information systems, and environmental communication. In this work two applications are presented: the development and implementation of an Environmental Management System in local administrations, according to the European scheme "EMAS", and the analysis of a territorial energy system through scenario building and environmental sustainability assessment. Both applications share the same objective, the quest for more scientifically sound elements; in fact, both EMS and energy planning are often characterized by localism and poor comparability. Emergy synthesis, proposed by ecologist H.T. Odum and described in his book "Environmental Accounting: Emergy and Environmental Decision Making" (1996), has been chosen and applied as an environmental evaluation tool, in order to complete the analysis with an assessment of the "global value" of goods and processes. In particular, emergy synthesis has been applied in order to improve the evaluation of the significance of environmental aspects in an EMS, and in order to evaluate the environmental performance of three scenarios of future evolution of the energy system. Regarding EMS, this work discusses the application of an EMS together with the CLEAR methodology for environmental accounting, in order to improve the identification of environmental aspects; data on the environmental aspects, and the significant ones, for 4 local authorities are also presented, together with a preliminary proposal for integrating the assessment of the significance of environmental aspects with emergy synthesis.
Regarding the analysis of an energy system, this work presents the characterization of the current situation together with the overall energy balance and the evaluation of greenhouse gas emissions; moreover, three scenarios of future evolution are described and discussed. The scenarios were built with the support of the LEAP software ("Long Term Energy Alternatives Planning System" by SEI, the Stockholm Environment Institute). Finally, the emergy synthesis of the current situation and of the three scenarios is shown.
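The core arithmetic of an emergy synthesis can be sketched as follows: each input flow is converted to a common unit (solar emjoules, seJ) by multiplying it by its unit emergy value, or transformity. The flows and transformities below are hypothetical illustrative numbers, not data from this work:

```python
# Sketch of an emergy account in the spirit of Odum's synthesis: each input
# flow is expressed in solar emjoules (seJ) by multiplying it by its unit
# emergy value (transformity). All numbers are hypothetical illustrations.
inputs = {
    # name: (annual flow in J, transformity in seJ/J)
    "sunlight":    (5.0e15, 1.0),
    "fuel":        (2.0e12, 6.6e4),
    "electricity": (8.0e11, 1.6e5),
}

emergy = {name: flow * uev for name, (flow, uev) in inputs.items()}
total = sum(emergy.values())

for name, sej in sorted(emergy.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {sej:10.3e} seJ  ({100 * sej / total:5.1f}%)")
```

Putting every input on the seJ scale is what allows heterogeneous aspects (direct energy use, purchased goods, environmental services) to be ranked by a single "global value", which is the integration with EMS significance assessment proposed above.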

Relevance:

20.00%

Publisher:

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors and consecutive fixes are often far apart (~2 km), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
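The routing step can be illustrated with a minimal A* search on a toy road graph. The graph, node coordinates, and edge costs are hypothetical; the straight-line heuristic is admissible for distance-based edge costs, so the first path found is optimal:

```python
import heapq
import math

# Sketch of the routing step: A* search on a small road graph to find the
# shortest path between the road segments matched to two consecutive GPS
# fixes. Graph, coordinates, and costs are hypothetical.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
edges = {"A": [("B", 1.0), ("C", 1.6)], "B": [("C", 1.0), ("D", 1.5)],
         "C": [("D", 1.0)], "D": []}

def heuristic(u, v):
    """Straight-line distance: never overestimates road distance."""
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {}                              # node -> best known cost
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best.get(node, math.inf) <= g:
            continue
        best[node] = g
        for nxt, w in edges[node]:
            heapq.heappush(frontier, (g + w + heuristic(nxt, goal),
                                      g + w, nxt, path + [nxt]))
    return None, math.inf

path, cost = a_star("A", "D")
print(path, cost)  # ['A', 'B', 'D'] 2.5
```

In the real application the candidate start and end segments come from the Bayesian map-matching step, and the routing cost can weigh travel time or likelihood rather than pure distance.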

Relevance:

20.00%

Publisher:

Abstract:

In this work I present the analysis carried out on the XML documents that were provided to us. More specifically, the work focuses on the bibliographic references contained in each document, and its goal is to process the extracted information so that it can be exported in RDF (Resource Description Framework) format. The XML (eXtensible Markup Language) documents provided come from Elsevier, one of the largest publishers of scientific articles organized in specialized journals.
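The XML-to-RDF transformation described can be sketched as follows. The XML layout is a hypothetical simplification, not Elsevier's actual schema, and the output uses Dublin Core-style predicates in N-Triples form under a made-up base namespace:

```python
import xml.etree.ElementTree as ET

# Sketch of extracting bibliographic references from an XML fragment and
# serializing them as RDF N-Triples. The XML layout and the base namespace
# are hypothetical simplifications.
xml_doc = """<article id="a1">
  <bibliography>
    <reference id="r1"><title>Deep Learning</title><year>2015</year></reference>
    <reference id="r2"><title>The Semantic Web</title><year>2001</year></reference>
  </bibliography>
</article>"""

DCT = "http://purl.org/dc/terms/"
BASE = "http://example.org/ref/"   # hypothetical namespace

root = ET.fromstring(xml_doc)
triples = []
for ref in root.iter("reference"):
    subj = f"<{BASE}{ref.get('id')}>"
    triples.append(f'{subj} <{DCT}title> "{ref.findtext("title")}" .')
    triples.append(f'{subj} <{DCT}date> "{ref.findtext("year")}" .')

ntriples = "\n".join(triples)
print(ntriples)
```

Once the references are triples, they can be merged across articles and queried with standard RDF tooling, which is the point of moving from publisher-specific XML to RDF.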

Relevance:

20.00%

Publisher:

Abstract:

The success of XML has renewed interest in change control for trees and semi-structured data. The main needs are managing document revisions, querying and monitoring changes, and efficiently exchanging documents and their updates. The changes that occur between two versions of a document are unknown to the system, so a diffing algorithm is used to build a delta that represents them. Various diffing algorithms have been proposed. Some take the tree structure of XML documents into account, while others do not. Moreover, some algorithms can find a more concise sequence of edits, which improves the quality of change monitoring and querying. Other approaches to tracking changes in XML documents exist that differ from diffing algorithms but obtain nearly identical results and offer change querying that is easier for human users: there are editing programs with change-tracking tools, which allow multiple authors to edit different versions of documents simultaneously while recording in real time all the changes they make. In this work I study the various tools and compare their results through experiments conducted on XML documents deliberately modified to exhibit specific kinds of changes. There are also several proposals for delta formats to represent changes in XML, but no standard yet exists. I present the main proposals based on their specifications, their implementations, and the results of the experiments. The goal is to provide an assessment of the quality of the tools and, on this basis, to guide users in choosing the appropriate solution for their applications.
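A structure-aware diff of the kind discussed can be illustrated with a toy recursive walk that emits a flat delta of update/insert/delete operations; this is far simpler than the algorithms compared in the work, which also detect moves and minimize the edit sequence:

```python
import xml.etree.ElementTree as ET

# Toy structure-aware diff between two XML trees: a recursive walk that
# compares children positionally and emits update/insert/delete operations
# as a flat delta. Real diffing algorithms are considerably smarter.
def diff(old, new, path="/"):
    ops = []
    if old.tag != new.tag or (old.text or "").strip() != (new.text or "").strip():
        ops.append(("update", path + old.tag, (new.text or "").strip()))
    old_kids, new_kids = list(old), list(new)
    child_path = path + old.tag + "/"
    for i in range(max(len(old_kids), len(new_kids))):
        if i >= len(old_kids):
            ops.append(("insert", child_path + new_kids[i].tag, i))
        elif i >= len(new_kids):
            ops.append(("delete", child_path + old_kids[i].tag, i))
        else:
            ops.extend(diff(old_kids[i], new_kids[i], child_path))
    return ops

v1 = ET.fromstring("<doc><title>Draft</title><p>hello</p></doc>")
v2 = ET.fromstring("<doc><title>Final</title><p>hello</p><p>bye</p></doc>")
delta = diff(v1, v2)
print(delta)
```

Because children are matched purely by position, a single insertion at the front of a long sibling list would be reported as many spurious updates; avoiding such non-minimal deltas is exactly what distinguishes the better algorithms evaluated in this work.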

Relevance:

20.00%

Publisher:

Abstract:

This dissertation models the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. To this end, we propose a mechanism under a new market structure, which we prefer to call semi-centralization. In chapter 1, we give a brief summary of matching theory, presenting the first examples in the history of matching together with the most influential papers and mechanisms. In chapter 2, we propose our mechanism. In its real-life application, namely Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification has a crucial effect on the results: the new mechanism is what we call a middle mechanism. On one subdomain it coincides with the original basic mechanism, while on the other it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known roommate problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert it into a semi-centralized two-sided game, because our basic mechanism is designed for that framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that our mechanism easily and simply tells us whether a profile lacks stability, by using purified orderings. Finally, we show a method to find all stable matchings when several exist: simply run the mechanism for each of the top agents in the social preference.
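Gale and Shapley's deferred-acceptance algorithm, the benchmark the proposed mechanism is compared against, can be sketched as follows (the preference profiles are hypothetical, with students proposing to universities of unit capacity):

```python
# Sketch of Gale and Shapley's deferred-acceptance algorithm in the
# one-to-one case. Preference profiles are hypothetical.
def gale_shapley(proposer_prefs, receiver_prefs):
    """Proposer-optimal stable matching; prefs: name -> ordered list."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                                  # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # best not-yet-tried option
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:    # r prefers the newcomer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)                      # rejected; tries next choice
    return {p: r for r, p in match.items()}

students = {"s1": ["u1", "u2"], "s2": ["u1", "u2"]}
universities = {"u1": ["s2", "s1"], "u2": ["s1", "s2"]}
print(gale_shapley(students, universities))  # {'s2': 'u1', 's1': 'u2'}
```

The resulting matching is stable: no student-university pair prefers each other to their assigned partners, which is the property against which the semi-centralized mechanism's outcomes are measured.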

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we investigate several phenomenologically important properties of top-quark pair production at hadron colliders. We calculate double differential cross sections in two different kinematical setups, pair invariant-mass (PIM) and single-particle inclusive (1PI) kinematics. In pair invariant-mass kinematics we present results for the double differential cross section with respect to the invariant mass of the top-quark pair and the top-quark scattering angle. Working in the threshold region, where the pair invariant mass M is close to the partonic center-of-mass energy sqrt{hat{s}}, we are able to factorize the partonic cross section into contributions from different energy regions. We use renormalization-group (RG) methods to resum large threshold logarithms to next-to-next-to-leading-logarithmic (NNLL) accuracy. On a technical level this is done using effective field theories, such as heavy-quark effective theory (HQET) and soft-collinear effective theory (SCET). The same techniques are applied in 1PI kinematics, leading to a calculation of the double differential cross section with respect to the transverse momentum pT and the rapidity of the top quark. We restrict the phase space such that only soft emission of gluons is possible, and perform an NNLL resummation of threshold logarithms. The resulting analytical expressions enable us to precisely predict several observables, and a substantial part of this thesis is devoted to their detailed phenomenological analysis. Matching our results in the threshold regions to the exact ones at next-to-leading order (NLO) in fixed-order perturbation theory allows us to make predictions at NLO+NNLL order in RG-improved perturbation theory, and at approximate next-to-next-to-leading order (NNLO) in fixed-order perturbation theory. We give numerical results for the invariant-mass distribution of the top-quark pair, and for the top-quark transverse-momentum and rapidity spectra.
We predict the total cross section, separately for both kinematics. Using these results, we analyze subleading contributions to the total cross section in 1PI and PIM originating from power corrections to the leading terms in the threshold expansions, and compare them to previous approaches. We then combine our PIM and 1PI results for the total cross section, thereby eliminating uncertainties due to these corrections. The combined predictions for the total cross section are presented as a function of the top-quark mass in the pole, minimal-subtraction (MS), and 1S mass schemes. In addition, we calculate the forward-backward (FB) asymmetry at the Tevatron in the laboratory and ttbar rest frames as a function of the rapidity and the invariant mass of the top-quark pair at NLO+NNLL. We also give binned results for the asymmetry as a function of the invariant mass and the rapidity difference of the ttbar pair, and compare those to recent measurements. As a last application we calculate the charge asymmetry at the LHC as a function of a lower rapidity cut-off for the top and anti-top quarks.
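Schematically, the threshold factorization underlying the resummation takes the generic SCET form below. This is a sketch of the general structure only (arguments, color indices, and the convolution over soft radiation are suppressed), not the exact formula derived in the work:

```latex
% Schematic PIM threshold factorization: a hard function H and a soft
% function S, each a matrix in color space, whose RG evolution resums
% the threshold logarithms to NNLL.
\frac{\mathrm{d}^2\hat{\sigma}}{\mathrm{d}M\,\mathrm{d}\cos\theta}
  \;\sim\;
  \mathrm{Tr}\!\left[\,\mathbf{H}\!\left(M,\cos\theta,\mu\right)\,
  \mathbf{S}\!\left(\sqrt{\hat{s}}-M,\mu\right)\right]
```

Solving the RG equations for H and S at their natural scales and evolving them to a common scale is what exponentiates the large logarithms of the small ratio between the soft and hard scales.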