889 results for Open Data, Dati Aperti, Open Government Data


Relevance:

90.00%

Publisher:

Abstract:

Open Data is a useful tool that is steadily gaining importance in society. In this thesis we illustrate its usefulness through the development of a mobile application that uses such data to provide information on the environmental state of air quality and pollen in Emilia Romagna, drawing on the datasets published by a well-known public body (Arpa Emilia Romagna). The mobile application is built on a Web Service that manages the various data-handling steps and stores the data in a MongoDB database. The Web Service was in turn designed to be made available to developers, public bodies, and private citizens for future studies and development in this area.
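A minimal sketch of the kind of service the abstract describes, assuming a Flask HTTP endpoint backed by a local MongoDB instance; the route names, database name, and record fields are illustrative assumptions, not the thesis code.

```python
# Hypothetical sketch: expose open air-quality records via HTTP and store them in MongoDB.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
collection = MongoClient("mongodb://localhost:27017")["air_quality"]["measurements"]

@app.route("/measurements", methods=["POST"])
def add_measurement():
    # Store one open-data record, e.g. {"station": "Parma", "pollutant": "PM10", "value": 31.2}
    collection.insert_one(request.get_json())
    return jsonify({"status": "stored"}), 201

@app.route("/measurements/<station>", methods=["GET"])
def get_measurements(station):
    # Return every stored record for a station, dropping Mongo's internal _id field.
    return jsonify(list(collection.find({"station": station}, {"_id": 0})))

if __name__ == "__main__":
    app.run()
```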

Relevance:

80.00%

Publisher:

Abstract:

Open Educational Resources (OER) are teaching, learning and research materials that have been released under an open licence that permits online access and re-use by others. The 2012 Paris OER Declaration encourages the open licensing of educational materials produced with public funds. Digital data and data sets produced as a result of scientific and non-scientific research are an increasingly important category of educational materials. This paper discusses the legal challenges that arise when publicly funded research data is made available as OER, stemming from intellectual property rights, confidentiality and information privacy laws, and the lack of a legal duty to ensure data quality. If these legal challenges are not understood, addressed and effectively managed, they may impede and restrict access to and re-use of research data. This paper identifies some of the legal challenges that need to be addressed and describes ten proposed best practices recommended for adoption so that publicly funded research data can be made available for access and re-use as OER.

Relevance:

80.00%

Publisher:

Abstract:

This special issue of the Journal of Urban Technology brings together five articles that are based on presentations given at the Street Computing Workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction conference (OZCHI 2009). Our own article introduces the Street Computing vision and explores the potential, challenges, and foundations of this research trajectory. In order to do so, we first look at the currently available sources of information and discuss their link to existing research efforts. Section 2 then introduces the notion of Street Computing and our research approach in more detail. Section 3 looks beyond the core concept itself and summarizes related work in this field of interest. We conclude by introducing the papers that have been contributed to this special issue.

Relevance:

80.00%

Publisher:

Abstract:

After nearly fifteen years of the open access (OA) movement and its hard-fought struggle for a more open scholarly communication system, publishers are realizing that business models can be both open and profitable. Making journal articles available under an OA license is becoming an accepted strategy for maximizing the value of content to both research communities and the businesses that serve them. The first post in this two-part blog series celebrating Data Innovation Day looks at the role that data innovation is playing in the shift to open access for journal articles.

Relevance:

80.00%

Publisher:

Abstract:

Enterprises, both public and private, have rapidly begun to exploit the benefits of enterprise resource planning (ERP) combined with business analytics and "open data sets", which are often outside the control of the enterprise, to gain further efficiencies, build new service operations and increase business activity. In many cases, these business activities are based on relevant software systems hosted in a "cloud computing" environment. "Garbage in, garbage out", or "GIGO", is a term dating from the 1960s, long used to describe the problems of unqualified dependency on information systems. A more pertinent variation arose somewhat later, namely "garbage in, gospel out", signifying that with large-scale information systems, such as ERP and the use of open datasets in a cloud environment, verifying the authenticity of the data sets used may be almost impossible, resulting in dependence upon questionable results. Illicit data set "impersonation" becomes a reality. At the same time, the ability to audit such results may be an important requirement, particularly in the public sector. This paper discusses the need for enhanced identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment and analyses some current technologies that are offered and may be appropriate. However, severe limitations to addressing these requirements have been identified, and the paper proposes further research work in the area.

Relevance:

80.00%

Publisher:

Abstract:

Enterprise resource planning (ERP) systems are rapidly being combined with "big data" analytics processes and publicly available "open data sets", which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as to identify new opportunities. Moreover, these activities are now largely based on relevant software systems hosted in a "cloud computing" environment. The more than 50-year-old phrase expressing mistrust in computer systems, "garbage in, garbage out" or "GIGO", describes the problems of unqualified and unquestioning dependency on information systems. A more relevant interpretation arose somewhat later, namely "garbage in, gospel out", signifying that with large-scale information systems based around ERP, open datasets and "big data" analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable and unverifiable results. Illicit "impersonation" of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. The pressing need for enhanced identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment is discussed in this paper, and some appropriate technologies currently on offer are examined. However, severe limitations in addressing the problems identified are found, and the paper proposes further necessary research work for the area. (Note: This paper is based on an earlier unpublished paper/presentation "Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a 'Cloud' Computing Environment: A Review and Proposal" presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)

Relevance:

80.00%

Publisher:

Abstract:

A number of online algorithms have been developed that have small additional loss (regret) compared to the best "shifting expert". In this model, there is a set of experts, and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case where the loss of each expert is iid and there is a gap between the mean losses of the best and second-best experts. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask which simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
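As a concrete reference point for the algorithms named above, the following is a minimal sketch of the worst-case-safe Hedge update on a fixed set of experts (not the shifting-expert, FlipFlop, or bandit variants); the learning rate and the synthetic iid losses are illustrative assumptions.

```python
# Hedge / exponentially weighted forecaster: keep a weight per expert and
# decay each weight exponentially in its observed loss.
import numpy as np

def hedge(losses, eta=0.5):
    """losses: (T, K) array of per-trial losses for K experts; returns the forecaster's total expected loss."""
    T, K = losses.shape
    weights = np.ones(K)
    total = 0.0
    for t in range(T):
        p = weights / weights.sum()          # probability of following each expert
        total += p @ losses[t]               # expected loss incurred at trial t
        weights *= np.exp(-eta * losses[t])  # exponential weight update
    return total

rng = np.random.default_rng(0)
losses = rng.random((1000, 5))               # easy, iid data
best_expert_loss = losses.sum(axis=0).min()
print("regret vs. best fixed expert:", hedge(losses) - best_expert_loss)
```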

Relevance:

80.00%

Publisher:

Abstract:

This paper addresses the development of trust in the use of Open Data through the incorporation of appropriate authentication and integrity parameters, for use by end-user Open Data application developers, in an architecture for trustworthy Open Data Services. The advantage of this architecture is that it is far more scalable and is not another certificate-based hierarchy, with its attendant problems of certificate revocation management. With the use of a Public File, if the key is compromised, it is a simple matter for the single responsible entity to replace the key pair with a new one and re-perform the data file signing process. Under the proposed architecture, the Open Data environment does not interfere with any internal security schemes that might be employed by the entity. However, the architecture incorporates, when needed, parameters from the entity, e.g. the person who authorized publication as Open Data, at the time that datasets are created or added.
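A minimal sketch of the sign-and-verify step such an architecture relies on, using Ed25519 from the Python cryptography package; the choice of signature algorithm, the dataset bytes, and the idea of distributing raw public-key bytes through the Public File are illustrative assumptions.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The publishing entity generates a key pair and signs the dataset file.
private_key = Ed25519PrivateKey.generate()
data = b"station,pollutant,value\nexample,PM10,31.2\n"   # stand-in dataset contents
signature = private_key.sign(data)

# The raw public-key bytes are what would be listed in the Public File.
published_key = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# A consumer rebuilds the key from the Public File entry and verifies the dataset;
# verify() raises InvalidSignature if the data or the key has been replaced.
Ed25519PublicKey.from_public_bytes(published_key).verify(signature, data)
print("signature verified")
```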

Relevance:

80.00%

Publisher:

Abstract:

The Open Service Network for Marine Environmental Data (NETMAR) project uses semantic web technologies in its pilot system, which aims to allow users to search, download and integrate satellite, in situ and model data from open ocean and coastal areas. The semantic web is an extension of the fundamental ideas of the World Wide Web, building a web of data through the annotation of metadata and data with hyperlinked resources. Within the framework of the NETMAR project, an interconnected semantic web resource was developed to aid data and web service discovery and to validate Open Geospatial Consortium Web Processing Service orchestration. A second semantic resource was developed to support the interoperability of coastal web atlases across jurisdictional boundaries. This paper outlines the approach taken to producing the resource registry used within the NETMAR project and demonstrates the use of these semantic resources to support user interactions with systems. Such interconnected semantic resources increase the ability to share and disseminate data by facilitating interoperability between data providers. The formal representation of geospatial knowledge to advance geospatial interoperability is a growing research area, and tools and methods such as those outlined in this paper have the potential to support these efforts.
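A minimal sketch of the kind of vocabulary traversal such a semantic resource supports, using rdflib over an invented SKOS fragment; the concept URIs and labels are illustrative and are not NETMAR registry content.

```python
# Load a small SKOS vocabulary and find the concepts narrower than "temperature",
# the sort of query used for parameter discovery across linked vocabularies.
from rdflib import Graph

TTL = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
<http://example.org/param/sst> a skos:Concept ;
    skos:prefLabel "sea surface temperature"@en ;
    skos:broader <http://example.org/param/temperature> .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

QUERY = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label WHERE {
  ?concept skos:broader <http://example.org/param/temperature> ;
           skos:prefLabel ?label .
}
"""
for row in g.query(QUERY):
    print(row.concept, row.label)
```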

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a framework for a telecommunications interface which allows data from sensors embedded in Smart Grid applications to be reliably archived in an appropriate time-series database. The challenge in doing so is two-fold: firstly, the various formats in which sensor data are represented; secondly, the problems of telecommunications reliability. A prototype of the authors' framework is detailed which showcases its main features in a case study featuring Phasor Measurement Units (PMUs) as the application. Useful analysis of PMU data is achieved whenever data from multiple locations can be compared on a common time axis. The prototype highlights the framework's reliability, extensibility and adoptability, features which are largely deferred from industry standards for data representation to proprietary database solutions. The open source framework presented provides link reliability for any type of Smart Grid sensor and is interoperable with both existing proprietary and open database systems. These features allow researchers and developers to focus on the core of their real-time or historical analysis applications, rather than having to spend time interfacing with complex protocols.
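A minimal sketch (not the authors' framework) of the common-time-axis idea: samples arriving from different locations are archived under normalised UTC timestamps so that they can be compared directly; the location names and one-second bucketing are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timezone

archive = defaultdict(dict)  # timestamp -> {location: measurement}

def store(location, timestamp, value):
    """Normalise the timestamp to whole-second UTC and file the sample under it."""
    ts = timestamp.astimezone(timezone.utc).replace(microsecond=0)
    archive[ts][location] = value

store("substation_a", datetime.now(timezone.utc), 12.7)   # e.g. a phasor angle in degrees
store("substation_b", datetime.now(timezone.utc), 12.9)

# Samples sharing a timestamp can now be compared on a common time axis.
for ts, by_location in sorted(archive.items()):
    print(ts.isoformat(), by_location)
```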

Relevance:

80.00%

Publisher:

Abstract:

In this paper, an open source solution for the measurement of temperature and ultrasonic signals (RF-lines) is proposed. This software is an alternative to expensive commercial data acquisition software, enabling the user to tune applications to particular acquisition architectures. The collected ultrasonic and temperature signals were used for non-invasive temperature estimation using neural networks. The existence of precise temperature estimators is essential for the safe and effective application of thermal therapies in humans; if such estimators exist, effective controllers can be developed for the therapeutic instrumentation. In previous work, the time-shifts between RF-line echoes were extracted and used to build neural network estimators. The obtained estimators successfully represent the temperature in the time-space domain, achieving a maximum absolute error below the threshold value defined for hyperthermia/diathermia applications.
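A minimal sketch of the time-shift extraction step mentioned above, estimated by cross-correlating a synthetic echo with a delayed copy of itself; the sampling frequency, echo shape, and applied shift are illustrative assumptions.

```python
import numpy as np

fs = 40e6                                       # assumed sampling frequency: 40 MHz
t = np.arange(2000) / fs                        # 50 microsecond acquisition window
envelope = np.exp(-((t - 25e-6) ** 2) / (2 * (1e-6) ** 2))
echo = envelope * np.sin(2 * np.pi * 5e6 * t)   # 5 MHz pulse centred at 25 us

true_shift = 12                                 # simulate the echo arriving 12 samples later
delayed = np.roll(echo, true_shift)

# The lag at which the cross-correlation peaks recovers the shift in samples.
corr = np.correlate(delayed, echo, mode="full")
estimated_shift = np.argmax(corr) - (len(echo) - 1)
print(estimated_shift, estimated_shift / fs)    # -> 12 samples, i.e. 0.3 microseconds
```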

Relevance:

80.00%

Publisher:

Abstract:

The topic of Linked Open Data has received a great deal of attention in the library field in recent years. Libraries are running a wide variety of projects to put Linked Open Data to productive use for their institutions and their users. The starting point for this work is the claim that Linked Open Data can unlock its greatest potential in the library field. It examines to what extent this statement also applies to public libraries and shows what possibilities could arise from it. The work introduces the fundamentals of Linked Open Data (LOD) and reviews developments in the library field, with particular attention to initiatives dealing with library metadata and the current state of development of LOD-capable library systems. It then presents a selection of LOD datasets that provide library metadata or whose data can be used as enrichment information in catalogue applications. Subsequently, the OpenCat project of the public library of Fresnes (France) and the LOD project at the Deichmanske Bibliothek in Oslo (Norway) are presented. This is followed by a look at the possibilities that could be realised through the use of LOD in public libraries, along with initial recommendations for action for public libraries.

Relevance:

80.00%

Publisher:

Abstract:

Contains abstract

Relevance:

80.00%

Publisher:

Abstract:

Diverse attitudes to open data