958 results for Linked Data Open Data Linked Open Data RDF dataset Linked Data Browser Application-Oriented Indexes


Relevance:

100.00%

Publisher:

Abstract:

Open Data (literally "open data") is the school of thought (and the related movement) that seeks to answer the need for legally open data, that is, data freely reusable by the consumer for any purpose. The goal of Open Data can be achieved by law, as in the USA, where information generated by the federal public sector is in the public domain, or by choice of the rights holders, through suitable licences. To motivate the need for data in an open format, we can use an analogy: Open Data is to Linked Data as the Internet is to the Web. Open Data, then, is the infrastructure (or platform) that Linked Data needs in order to create the network of inferences among the various data scattered across the Web. Linked Data, in other words, is a technology that is by now fairly mature and has great potential, but it needs large masses of interlinked data to become concretely useful. This has in part already been achieved, and is being improved, thanks to projects such as DBpedia or FreeBase. In parallel with the contributions of online communities, another important piece, a sort of very valuable bulk upload, could come from the availability of large masses of public data, ideally already linked by the institutions themselves, or at least made available in a structured form, helping to reach a critical mass of Linked Data. Building on this substrate, namely the actual availability of the data and their full (legal) reusability, Linked Data can offer a powerful representation of those data in terms of relationships (links): in this sense, Linked Data and Open Data converge and reach their full realization in the Linked Open Data approach. The goal of this thesis is to examine and present the foundations of how Linked Open Data works and the contexts in which it is used.
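
To make the linking idea concrete, here is a minimal sketch of how a dataset becomes Linked Data: resources get URIs, statements become RDF triples, and an owl:sameAs link ties a local resource to DBpedia. It uses the Python rdflib library; the example.org namespace and the resource names are invented for illustration.

```python
# A minimal Linked Data sketch using rdflib (pip install rdflib).
# The example.org namespace and resource names are invented; only the
# DBpedia namespace is real.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/")               # hypothetical dataset
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
g.bind("ex", EX)
g.bind("dbr", DBR)

# Describe a local resource with plain RDF triples...
g.add((EX.rome, RDF.type, EX.City))
g.add((EX.rome, RDFS.label, Literal("Rome", lang="en")))

# ...and link it into the Linked Open Data cloud via owl:sameAs.
g.add((EX.rome, OWL.sameAs, DBR.Rome))

print(g.serialize(format="turtle"))
```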

Relevance:

100.00%

Publisher:

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey and register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
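
As a concrete illustration of the IPCW correction assessed in the simulation study, the sketch below reweights a Kaplan-Meier estimator by the inverse of each subject's estimated probability of remaining uncensored. It is a deliberately simplified, static variant (the method proper models the censoring process over time), with simulated data and invented variable names.

```python
# Simplified IPCW sketch: reweight a Kaplan-Meier fit by the inverse of the
# estimated probability of remaining uncensored (dependent censoring).
# Assumes a single covariate x drives censoring; all names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                           # covariate tied to censoring
event_time = rng.exponential(scale=np.exp(0.3 * x))
censor_time = rng.exponential(scale=np.exp(-0.8 * x))
time = np.minimum(event_time, censor_time)
observed = (event_time <= censor_time).astype(int)

# Model P(uncensored | x) and form inverse-probability weights.
censor_model = LogisticRegression().fit(x.reshape(-1, 1), observed)
p_uncensored = censor_model.predict_proba(x.reshape(-1, 1))[:, 1]
weights = 1.0 / np.clip(p_uncensored, 0.05, 1.0)  # truncate extreme weights

kmf = KaplanMeierFitter()
kmf.fit(time, event_observed=observed, weights=weights)
print(kmf.survival_function_.tail())
```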

Relevance:

100.00%

Publisher:

Abstract:

Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
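
The core pattern described here, application structure and runtime behaviour driven by content data rather than hard-coded logic, can be sketched briefly: a registry of component classes plus a declarative document that selects and configures them. The component names and configuration schema below are hypothetical and not taken from the Fluid project.

```python
# Sketch of a data-driven component assembly: a JSON document (content data)
# selects and configures components from a registry. Names are hypothetical.
import json

REGISTRY = {}

def component(name):
    """Register a component class under a data-addressable name."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@component("renderer")
class Renderer:
    def __init__(self, width=640, height=480):
        self.width, self.height = width, height
    def run(self):
        print(f"rendering at {self.width}x{self.height}")

@component("physics")
class Physics:
    def __init__(self, gravity=-9.81):
        self.gravity = gravity
    def run(self):
        print(f"stepping physics, g={self.gravity}")

# The "empowered data format": content describing the application itself.
content = json.loads("""
[{"type": "renderer", "width": 1280, "height": 720},
 {"type": "physics"}]
""")

app = [REGISTRY[c.pop("type")](**c) for c in content]
for comp in app:
    comp.run()
```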

Relevance:

100.00%

Publisher:

Abstract:

Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). The paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on the cyclic extension of the RCPS/TC (Resource Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in opposite directions. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
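
A toy version of the timing argument, assuming an invented period and slot layout rather than anything from the paper: with each cluster active once per TDCS period, a flow that traverses clusters against the activation order must wait out extra periods at every hop.

```python
# Toy illustration of the TDCS timing argument: each cluster is active once
# per period, so a multi-hop flow waits for each cluster's next active slot
# and its end-to-end delay can span several periods. Slot layout is invented.
PERIOD = 100                                   # TDCS period (time units)
active_slot = {"C0": 10, "C1": 40, "C2": 70}   # one active slot per cluster

def end_to_end_delay(route, release=0):
    """Time from release until the last cluster on the route has forwarded."""
    t = release
    for cluster in route:
        slot = active_slot[cluster]
        # wait for the next activation of this cluster at or after time t
        periods_elapsed = max(0, -(-(t - slot) // PERIOD))  # ceil division
        t = slot + periods_elapsed * PERIOD
    return t - release

# A flow following the activation order fits in one period...
print(end_to_end_delay(["C0", "C1", "C2"]))   # 70
# ...while the opposite direction spans multiple periods.
print(end_to_end_delay(["C2", "C1", "C0"]))   # 210
```

This is why, as the abstract notes, lengthening the period saves energy but stretches the delay of flows running against the cluster activation order.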

Relevance:

100.00%

Publisher:

Abstract:

This paper provides empirical evidence that continuous-time models with one volatility factor are, under some conditions, able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
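
For intuition, the following sketch simulates a discrete-time analogue of a one-factor logarithmic stochastic volatility model with an optional feedback term through which past returns move volatility. It is an illustrative toy with invented parameters, not the paper's continuous-time specification or its EMM estimator.

```python
# Discrete-time toy of a one-factor log stochastic volatility model with an
# optional feedback term (past returns feed into volatility). Parameter
# values are illustrative, not estimates from the paper.
import numpy as np

def simulate_log_sv(n=2000, mu=-0.5, phi=0.97, sigma_h=0.2,
                    feedback=0.0, seed=1):
    rng = np.random.default_rng(seed)
    h = np.empty(n)           # log variance factor
    r = np.empty(n)           # returns
    h[0] = mu
    for t in range(n):
        r[t] = np.exp(h[t] / 2.0) * rng.standard_normal()
        if t + 1 < n:
            # AR(1) log volatility plus a feedback channel from returns
            h[t + 1] = (mu + phi * (h[t] - mu)
                        + feedback * r[t] + sigma_h * rng.standard_normal())
    return r

for fb in (0.0, -0.3):
    r = simulate_log_sv(feedback=fb)
    # excess kurtosis as a crude volatility-clustering diagnostic
    k = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
    print(f"feedback={fb}: excess kurtosis = {k:.2f}")
```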

Relevance:

100.00%

Publisher:

Abstract:

This paper develops a methodology to estimate entire population distributions from bin-aggregated sample data. We do this through the estimation of the parameters of mixtures of distributions that allow for maximal parametric flexibility. The statistical approach we develop enables comparisons of the full distributions of height data from potential army conscripts across France's 88 departments for most of the nineteenth century. These comparisons are made by testing for differences of means and for stochastic dominance. Corrections for possible measurement errors are also devised by taking advantage of the richness of the data sets. Our methodology is of interest to researchers working on historical as well as contemporary bin-aggregated or histogram-type data, with which much work is still done, since much of the publicly available information comes in that form, often owing to political sensitivity and/or confidentiality concerns.
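
The estimation step can be illustrated directly: with bin-aggregated data, the likelihood of a mixture is built from bin probabilities (differences of the mixture CDF at the bin edges) rather than from individual observations. Below is a minimal two-component normal mixture version with invented bin edges and counts; mass outside the recorded bins is ignored for simplicity.

```python
# Fit a two-component normal mixture to bin-aggregated (histogram) data by
# maximizing the likelihood of the bin counts. Bin edges and counts are
# invented for illustration.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

edges = np.array([150, 155, 160, 165, 170, 175, 180, 185])  # e.g. cm
counts = np.array([12, 48, 130, 210, 160, 70, 20])

def bin_probs(params):
    w = 1.0 / (1.0 + np.exp(-params[0]))            # mixture weight in (0,1)
    m1, m2 = params[1], params[2]
    s1, s2 = np.exp(params[3]), np.exp(params[4])   # positive scales
    cdf = (w * stats.norm.cdf(edges, m1, s1)
           + (1 - w) * stats.norm.cdf(edges, m2, s2))
    return np.diff(cdf)   # probability of landing in each bin

def neg_loglik(params):
    p = np.clip(bin_probs(params), 1e-12, None)
    return -np.sum(counts * np.log(p))

start = np.array([0.0, 163.0, 172.0, np.log(4.0), np.log(4.0)])
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 5000})
print(fit.x)   # logit-weight, means, log-scales of the fitted mixture
```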

Relevance:

100.00%

Publisher:

Abstract:

A comment on the article "Local sensitivity analysis for compositional data with application to soil texture in hydrologic modelling" written by L. Loosvelt and co-authors. The present comment centres on three specific points. The first relates to the fact that the authors avoid the use of ilr-coordinates. The second refers to a possible generalization of sensitivity analysis when the input parameters are compositional. The third aims to show that the role of the Dirichlet distribution in the sensitivity analysis is irrelevant.
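
For readers unfamiliar with the coordinates the comment advocates, this sketch computes ilr-coordinates of a composition from its centred log-ratio using an orthonormal basis built from a Helmert matrix. The helper name and the soil-texture-like example composition are ours, not from the commented article.

```python
# Sketch: isometric log-ratio (ilr) coordinates of a composition, built from
# the centred log-ratio and a Helmert-type orthonormal basis.
import numpy as np
from scipy.linalg import helmert

def ilr(x):
    """Map a composition (positive parts, any total) to ilr coordinates."""
    x = np.asarray(x, dtype=float)
    clr = np.log(x) - np.log(x).mean()          # centred log-ratio
    V = helmert(len(x))                         # (D-1) x D, rows sum to zero
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-length rows
    return V @ clr

# A soil-texture-like composition: sand, silt, clay (illustrative values).
print(ilr([0.6, 0.3, 0.1]))
```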

Relevance:

100.00%

Publisher:

Abstract:

Compositional random vectors are fundamental tools in the Bayesian analysis of categorical data. Many of the issues that are discussed with reference to the statistical analysis of compositional data have a natural counterpart in the construction of a Bayesian statistical model for categorical data. This note builds on the idea of cross-fertilization of the two areas recommended by Aitchison (1986) in his seminal book on compositional data. Particular emphasis is put on the problem of what parameterization to use.
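
A minimal instance of the connection the note discusses: in a Bayesian model for categorical data, a Dirichlet prior (itself a distribution over compositions) combined with multinomial counts updates in closed form. The counts below are invented.

```python
# Conjugate Bayesian update for categorical data: a Dirichlet prior on the
# probability composition combined with multinomial counts. Counts invented.
import numpy as np

prior = np.array([1.0, 1.0, 1.0])     # Dirichlet(1,1,1): uniform on simplex
counts = np.array([18, 7, 5])         # observed category counts

posterior = prior + counts            # Dirichlet posterior parameters
mean = posterior / posterior.sum()    # posterior mean composition
print("posterior parameters:", posterior)
print("posterior mean:", mean)

# Draws from the posterior are themselves compositional random vectors.
rng = np.random.default_rng(0)
print(rng.dirichlet(posterior, size=3))
```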

Relevance:

100.00%

Publisher:

Abstract:

Analytical potential energy functions are reported for HOX (X=F, Cl, Br, I). The surface for HOF predicts two metastable minima as well as the equilibrium configuration. These correspond to HFO (bent) and OHF (linear). Ab initio calculations performed for the HOF surface confirm these predictions. Comparisons are drawn between the two sets of results, and a vibrational analysis is undertaken for the hydrogen bonded OHF species. For HOCl, one further minimum is predicted, corresponding to HClO (bent), the parameters for which compare favourably with those reported from ab initio studies. In contrast, only the equilibrium configurations are predicted to be stable for HOBr and HOI.

Relevance:

100.00%

Publisher:

Abstract:

Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented, using the shallow water model as an example.
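
A reduced example of the traditional technique the paper starts from is a linear cost model: predict time per step from counted floating-point operations and memory traffic against machine peaks, then compare with a measurement. All machine numbers and operation counts below are invented placeholders, not HECToR or Opteron data.

```python
# Minimal analytical performance model: predict time per step from counted
# flops and memory traffic, then compare with a measurement. All numbers are
# invented placeholders.
def predicted_step_time(flops, bytes_moved, peak_flops, peak_bw):
    """Simple bound assuming compute and memory costs do not overlap."""
    return flops / peak_flops + bytes_moved / peak_bw

# Hypothetical per-step costs for a shallow water model on one core.
nx, ny = 512, 512
flops = 65 * nx * ny            # counted floating-point ops per step
bytes_moved = 9 * 8 * nx * ny   # 9 double-precision arrays touched

t_pred = predicted_step_time(flops, bytes_moved,
                             peak_flops=9.2e9, peak_bw=5.3e9)
t_meas = 0.0061                 # hypothetical measured seconds per step
print(f"predicted {t_pred*1e3:.2f} ms, measured {t_meas*1e3:.2f} ms, "
      f"ratio {t_meas/t_pred:.2f}")
```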

Relevance:

100.00%

Publisher:

Abstract:

We propose a geoadditive negative binomial model (Geo-NB-GAM) for regional count data that allows us to address simultaneously some important methodological issues, such as spatial clustering, nonlinearities, and overdispersion. This model is applied to the study of location determinants of inward greenfield investments that occurred during 2003–2007 in 249 European regions. After presenting the data set and showing the presence of overdispersion and spatial clustering, we review the theoretical framework that motivates the choice of the location determinants included in the empirical model, and we highlight some reasons why the relationship between some of the covariates and the dependent variable might be nonlinear. The subsequent section first describes the solutions proposed by previous literature to tackle spatial clustering, nonlinearities, and overdispersion, and then presents the Geo-NB-GAM. The empirical analysis shows the good performance of Geo-NB-GAM. Notably, the inclusion of a geoadditive component (a smooth spatial trend surface) permits us to control for spatial unobserved heterogeneity that induces spatial clustering. Allowing for nonlinearities reveals, in keeping with theoretical predictions, that the positive effect of agglomeration economies fades as the density of economic activities reaches some threshold value. However, no matter how dense the economic activity becomes, our results suggest that congestion costs never overcome positive agglomeration externalities.
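
A reduced form of such a model can be sketched with statsmodels: a negative binomial GAM whose smooth terms are splines in the regions' coordinates, a simplified additive stand-in for the paper's smooth spatial trend surface. The data are simulated and the code is an assumption-laden sketch, not the authors' estimation code.

```python
# Sketch of a geoadditive negative binomial model: NB likelihood, a linear
# covariate, plus smooth spline terms in (lon, lat). Simulated data only.
import numpy as np
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(0)
n = 249                                   # one row per region
lon, lat = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
density = rng.uniform(0, 5, n)            # e.g. economic density covariate

# True process: nonlinear spatial trend + covariate effect + overdispersion.
mu = np.exp(0.4 * density + np.sin(lon / 2) + 0.3 * lat)
y = rng.negative_binomial(n=5, p=5 / (5 + mu))   # mean mu, overdispersed

X = sm.add_constant(density)
spatial = BSplines(np.column_stack([lon, lat]), df=[6, 6], degree=[3, 3])
model = GLMGam(y, exog=X, smoother=spatial,
               family=sm.families.NegativeBinomial(alpha=0.2))
res = model.fit()
print(res.summary())
```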

Relevance:

100.00%

Publisher:

Abstract:

The present work deals with the synthesis and characterization of polymers with redox-functional phenothiazine side chains. Phenothiazine and its derivatives are small redox units whose reversible redox behaviour is associated with electrochromic properties. A special feature of phenothiazines is the formation of stable radical cations in the oxidized state. Phenothiazines can therefore act as bistable molecules and switch between two stable redox states. This switching process is accompanied by a colour change.

This work describes the synthesis of novel phenothiazine polymers by radical polymerization. Phenothiazine derivatives were covalently bound to aliphatic and aromatic polymer chains. This was accomplished via two different synthetic routes. The first route uses vinyl monomers with phenothiazine functionality for direct polymerization. The second route uses amine-modified phenothiazine derivatives to functionalize polymers carrying active-ester side chains in a polymer-analogous reaction.

Owing to their electron-donor properties, polymers with redox-functional phenothiazine side chains are suitable candidates for use as cathode materials. To assess their suitability, phenothiazine polymers were employed as electrode materials in lithium battery cells. The polymers showed good capacities of roughly 50-90 Ah/kg as well as fast charging times in the battery cell. Notably, the charging speeds are 5-10 times higher than those of conventional lithium batteries. With respect to the number of charge and discharge cycles, the polymers achieved good results in long-term stability tests. Overall, the polymers withstand 500 charging cycles with only minor changes in the initial values of charging time and capacity. The long-term stability is directly related to the stability of the radical. Stabilization of the radical cations was achieved by lengthening the side chain at the nitrogen atom of the phenothiazine and at the polymer backbone. Such alkyl substitution increases the radical stability through stronger interaction with the aromatic ring and thus improves battery performance with respect to stability over charge and discharge cycles.

Furthermore, the practical application of bistable phenothiazine polymers as a storage medium for high data densities was investigated. For this purpose, thin films of the polymer on conductive substrates were electrochemically oxidized. The electrochemical oxidation was carried out by atomic force microscopy in combination with conductive microscope tips. Using this technique, it was possible to oxidize the surface of the polymer on the nanoscale and thus change its local conductivity. In this way, patterns of different sizes could be written lithographically and detected through the change in their conductivity. The writing process changed only the local conductivity without affecting the topography of the polymer film. Moreover, the patterns proved to be particularly stable, both mechanically and over time.

Finally, new synthetic strategies were developed to produce surfaces that are both mechanically stable and redox-functional. Grafted polymer brushes with redox-functional phenothiazine side chains were prepared by surface-initiated atom transfer radical polymerization and analysed by X-ray methods and atomic force microscopy. One of the synthetic strategies starts from grafted active-ester brushes, which can subsequently be modified with redox-functional groups in a following step. This approach is particularly promising and makes it possible to anchor different functional groups on the active-ester brushes. By using cross-linking groups, the mechanical stability of such polymer films can thus be optimized alongside their redox properties.

Relevance:

100.00%

Publisher:

Abstract:

We consider inference in randomized studies, in which repeatedly measured outcomes may be informatively missing due to dropout. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the dropout mechanism and posit an exponential tilt model that links non-identifiable and identifiable distributions. This model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by and applied to data from the Breast Cancer Prevention Trial.
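
The role of the exponential tilt can be shown on a toy discrete outcome: the outcome distribution among dropouts is taken proportional to the identified distribution times exp(alpha * y), with the non-identified alpha supplied by expert opinion, and the full-data mean traced as alpha varies. All numbers below are invented.

```python
# Toy exponential tilt sensitivity analysis: the outcome distribution for
# dropouts is the observed-data distribution tilted by exp(alpha * y), with
# the non-identified alpha taken from expert opinion. Numbers are invented.
import numpy as np

y = np.array([0.0, 1.0, 2.0, 3.0])        # possible outcome values
p_obs = np.array([0.1, 0.3, 0.4, 0.2])    # identified: completers' dist.
drop_rate = 0.25                          # fraction of subjects dropping out

def tilted(p, alpha):
    w = p * np.exp(alpha * y)
    return w / w.sum()

for alpha in (-1.0, 0.0, 1.0):            # alpha = 0 recovers MAR
    p_miss = tilted(p_obs, alpha)
    full_mean = (1 - drop_rate) * (p_obs @ y) + drop_rate * (p_miss @ y)
    print(f"alpha={alpha:+.1f}: full-data mean = {full_mean:.3f}")
```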

Relevance:

100.00%

Publisher:

Abstract:

Tritium activity in LiPb mock-up material irradiated in Frascati: measurement and MCNP results