848 results for Peer-to-peer databases


Relevance:

100.00%

Publisher:

Abstract:

In a peer-to-peer network, nodes interact with each other by sharing resources, services and information. Many applications have been built on such networks; one class of these applications is peer-to-peer databases. Peer-to-peer database systems allow the sharing of unstructured data and can integrate data from several sources without large investments, because existing repositories are reused. However, the high flexibility and dynamicity of the network, together with the absence of centralized information management, make the process of locating information among the many participants complex. In this context, this paper presents original contributions through a proposed architecture for a routing system that uses the Ant Colony algorithm to optimize the search for desired information, supported by ontologies that add semantics to shared data. This enables integration among heterogeneous databases while reducing message traffic on the network without losses in the number of responses, confirmed by a 22.5% improvement in that number. © 2011 IEEE.
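The Ant Colony routing idea described above can be sketched in a few lines. This is a minimal illustration under assumed names (`Node`, `route_query`, the pheromone constants) and an assumed topology, not the paper's implementation: a query "ant" walks the overlay choosing neighbours with probability proportional to pheromone, successful paths are reinforced, and periodic evaporation lets stale routes fade.

```python
import random

EVAPORATION = 0.1   # fraction of pheromone lost per round (illustrative)
DEPOSIT = 1.0       # pheromone added along a successful path (illustrative)

class Node:
    def __init__(self, name, keywords=()):
        self.name = name
        self.keywords = set(keywords)   # data this peer can answer about
        self.pheromone = {}             # neighbour name -> pheromone level

    def add_neighbor(self, other):
        self.pheromone.setdefault(other.name, 1.0)

def choose_next(node, rng):
    """Pick a neighbour with probability proportional to its pheromone."""
    names = list(node.pheromone)
    weights = [node.pheromone[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

def route_query(nodes, start, keyword, ttl=8, rng=random):
    """Forward a query ant hop by hop; reinforce the path on success."""
    path, current = [start], start
    for _ in range(ttl):
        if keyword in nodes[current].keywords:
            # success: deposit pheromone along the traversed path
            for a, b in zip(path, path[1:]):
                nodes[a].pheromone[b] += DEPOSIT
            return path
        current = choose_next(nodes[current], rng)
        path.append(current)
    return None  # query failed within the TTL

def evaporate(nodes):
    """Decay all pheromone so unused routes gradually fade."""
    for node in nodes.values():
        for k in node.pheromone:
            node.pheromone[k] *= (1.0 - EVAPORATION)
```

In the system described above each forwarding decision would also weigh the ontology-based semantic match of the neighbour; this sketch omits that and keeps only the pheromone mechanics.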

Relevance:

100.00%

Publisher:

Abstract:

CDS/ISIS, an advanced non-numerical information storage and retrieval software package, was developed by UNESCO. With the emergence of WWW technology, most information activities are becoming Web-centric. Libraries and information providers are taking advantage of these Internet developments to provide access to their resources and information on the Web. A number of tools are now available for publishing CDS/ISIS databases on the Internet. One such tool is the WWWISIS Web gateway software, developed by BIREME, Brazil. This paper illustrates porting sample records from a bibliographic database into CDS/ISIS, and then publishing this database on the Internet using WWWISIS.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we derive an approach for the effective utilization of thermodynamic data in phase-field simulations. The most widely used methodology for multi-component alloys follows the work of Eiken et al. (2006), wherein an extrapolative scheme is used in conjunction with the TQ interface to derive the driving force for phase transformation; a corresponding simpler method, based on the formulation of a parabolic free-energy model incorporating all the thermodynamics, was laid out for binary alloys in the work of Folch and Plapp (2005). In the following, we extend this latter approach to multi-component alloys in the framework of the grand-potential formalism. The coupling is applied to binary eutectic solidification in the Cr-Ni alloy and to two-phase solidification in the ternary eutectic Al-Cr-Ni alloy. A thermodynamic justification underpins the formulation and places it in the context of the bigger picture of Integrated Computational Materials Engineering. (C) 2015 Elsevier Ltd. All rights reserved.
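The parabolic free-energy construction being extended here can be summarised in a short derivation; the symbols (amplitude $A_{i}^{\alpha}$, equilibrium composition $c_{i,\mathrm{eq}}^{\alpha}$, offset $B^{\alpha}$) follow common Folch–Plapp-style notation and are illustrative rather than the paper's exact ones. For each phase $\alpha$ and component $i$,

```latex
f^{\alpha}(\boldsymbol{c}) = \sum_{i} \frac{A_{i}^{\alpha}}{2}
  \bigl(c_{i} - c_{i,\mathrm{eq}}^{\alpha}\bigr)^{2} + B^{\alpha},
\qquad
\mu_{i} = \frac{\partial f^{\alpha}}{\partial c_{i}}
  = A_{i}^{\alpha}\bigl(c_{i} - c_{i,\mathrm{eq}}^{\alpha}\bigr),
```

so inverting for $c_{i}(\mu_{i})$ and Legendre-transforming gives the phase grand potential used in the grand-potential formalism:

```latex
\omega^{\alpha}(\boldsymbol{\mu})
  = f^{\alpha} - \sum_{i}\mu_{i}c_{i}
  = -\sum_{i}\left(\frac{\mu_{i}^{2}}{2A_{i}^{\alpha}}
      + \mu_{i}\,c_{i,\mathrm{eq}}^{\alpha}\right) + B^{\alpha}.
```

The appeal of the parabolic form is visible here: the grand potential and the driving force between phases become closed-form expressions in the chemical potentials, with all thermodynamic input reduced to fitting $A_{i}^{\alpha}$, $c_{i,\mathrm{eq}}^{\alpha}$ and $B^{\alpha}$ per phase.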

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the spatial data handling procedures used to create a vector database of the Connecticut shoreline from Coastal Survey Maps. The appendix contains detailed information on how the procedures were implemented using Geographic Transformer Software 5 and ArcGIS 8.3. The project was a joint project of the Connecticut Department of Environmental Protection and the University of Connecticut Center for Geographic Information and Analysis.

Relevance:

100.00%

Publisher:

Abstract:

Natural Language Interfaces to Query Databases (NLIDBs) have been an active research field since the 1960s. However, they have not been widely adopted. This article explores some of the biggest challenges and approaches for building NLIDBs and proposes techniques to reduce implementation and adoption costs. The article describes AskMe*, a new system that leverages some of these approaches and adds an innovative feature: query-authoring services, which lower the entry barrier for end users. The advantages of these approaches are demonstrated experimentally. The results confirm that, even when AskMe* is automatically reconfigured for multiple domains, its accuracy is comparable to that of domain-specific NLIDBs.

Relevance:

100.00%

Publisher:

Abstract:

A significant amount of information stored in databases around the world can be shared through peer-to-peer databases. This yields a large knowledge base without the need for large investments, because existing databases and the infrastructure already in place are reused. However, the structural characteristics of peer-to-peer networks make the process of finding such information complex. Moreover, these databases are often heterogeneous in their schemas but semantically similar in their content. A good peer-to-peer database system should allow users to access information from databases scattered across the network and receive only the information genuinely related to their topic of interest. This paper proposes using ontologies in peer-to-peer database queries to represent the semantics inherent in the data. The main contributions of this work are enabling integration between heterogeneous databases, improving the performance of such queries, and applying the Ant Colony optimization algorithm to the problem of locating information on peer-to-peer networks, which yields an 18% improvement in results. © 2011 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

The development of new technologies that use peer-to-peer networks grows every day, aiming to meet the need for sharing information, resources and database services around the world. Among them are peer-to-peer databases, which exploit peer-to-peer networks to manage distributed knowledge bases, allowing the sharing of information that is semantically related but syntactically heterogeneous. However, given the structural characteristics of these networks, it is a challenge to ensure efficient search for information without compromising the autonomy of each node or the flexibility of the network. On the other hand, some studies propose using ontology semantics to assign a standardized categorization to information. The main original contribution of this work is addressing this problem with a proposal for query optimization supported by the Ant Colony algorithm and classification through ontologies. The results show that this strategy enables semantic support for searches in peer-to-peer databases, expanding the results without compromising network performance. © 2011 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

Pacific Journalism Review has consistently, and to a good standard, honoured its 1994 founding goal: to be a credible peer-reviewed journal in the Asia-Pacific region, probing developments in journalism and media, and supporting journalism education. Global, it considers new media and social movements; ‘regional’, it promotes vernacular media, human freedoms and sustainable development. To ask how it developed, the method for this article was to research the archive, noting authors, subject matter and themes. The article concludes that one answer is the journal’s collegiate approach: hundreds of academics, journalists and others have been invited to contribute. A second has been the dedication of its one principal editor, Professor David Robie, always somehow providing resources—at Port Moresby, Suva, and now Auckland—with a consistent editorial stance. Eclectic, not partisan, it has nevertheless been vigilant over rights, such as monitoring the Fiji coups d’état. Watching through a media lens, it follows a ‘Pacific way’, handling hard information through understanding and consensus. It has 237 subscriptions and is indexed in seven databases. Open source, it receives more than 1,000 site visits weekly. With a ‘clientele’ mostly in Australia, New Zealand and ‘Oceania’, it extends much further afield. From 1994 to 2014, 701 articles and reviews were published; it now publishes more than 24 scholarly articles each year.

Relevance:

90.00%

Publisher:

Abstract:

This study discusses the scope of historical earthquake analysis in low-seismicity regions. Examples of non-damaging earthquake reports are given from the Eastern Baltic (Fennoscandian) Shield in north-eastern Europe from the 16th to the 19th centuries. The information available for past earthquakes in the region is typically sparse and cannot be increased through a careful search of the archives. This study applies recommended rigorous methodologies of historical seismology developed using ample data to the sparse reports from the Eastern Baltic Shield. Attention is paid to the context of reporting, the identity and role of the authors, the circumstances of the reporting, and the opportunity to verify the available information by collating the sources. We evaluate the reliability of oral earthquake recollections and develop criteria for cases when a historical earthquake is attested to by a single source. We propose parametric earthquake scenarios as a way to deal with sparse macroseismic reports and as an improvement to existing databases.

Relevance:

90.00%

Publisher:

Abstract:

In the current study, an epidemiological analysis is carried out by means of a literature survey in groups identified as being at higher risk of DDIs, as well as in other cases, to explore patterns of DDIs and the factors affecting them. The structure of the FDA Adverse Event Reporting System (FAERS) database is studied and analyzed in detail to identify issues and challenges in mining drug-drug interactions. The necessary pre-processing algorithms are developed based on this analysis, and the Apriori algorithm is modified to suit the process. Finally, the modules are integrated into a tool to identify DDIs. The results are validated against a standard drug-interaction database: 31% of the associations obtained were identified as new, and 69% matched existing interactions. This match clearly indicates the validity of the methodology and its applicability to similar databases. Formulating the results using generic drug names expands their relevance to a global scale; this global applicability helps health-care professionals worldwide to observe caution during the various stages of drug administration, thus considerably enhancing pharmacovigilance.
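The frequent-itemset step at the core of an Apriori-style search for co-reported drugs can be illustrated on toy reports. The paper's actual modification of Apriori and its FAERS pre-processing are not reproduced here; `frequent_pairs`, `MIN_SUPPORT` and the sample data are assumptions made for the sketch.

```python
from itertools import combinations

MIN_SUPPORT = 0.5  # fraction of reports a drug pair must appear in (illustrative)

def frequent_pairs(reports, min_support=MIN_SUPPORT):
    """Return drug pairs co-reported in at least min_support of the reports.

    reports: iterable of sets of drug names, one set per adverse-event report.
    Returns {(drug_a, drug_b): support} with names in alphabetical order.
    """
    n = len(reports)
    counts = {}
    for drugs in reports:
        # count each unordered pair of distinct drugs in this report once
        for pair in combinations(sorted(set(drugs)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Toy usage: four fabricated reports, not real FAERS data
reports = [
    {"warfarin", "aspirin"},
    {"warfarin", "aspirin", "ibuprofen"},
    {"aspirin", "ibuprofen"},
    {"warfarin", "aspirin"},
]
candidates = frequent_pairs(reports)
```

A full Apriori run would iterate this pruning step to larger itemsets, and a DDI tool would then filter the surviving associations against a reference interaction database, as the abstract describes.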

Relevance:

90.00%

Publisher:

Abstract:

This article is concerned with the risks associated with the monopolisation of information that is available from a single source only. Although there is a longstanding consensus that sole-source databases should not receive protection under the EU Database Directive, and there are legislative provisions to ensure that lawful users have access to a database’s contents, Ryanair v PR Aviation challenges this assumption by affirming that the use of non-protected databases can be restricted by contract. Owners of non-protected databases can contractually exclude lawful users from taking the benefit of statutorily permitted uses, because such databases are not covered by the legislation that declares this kind of contract null and void. We argue that this judgment is not consistent with the legislative history and can have a profound impact on the functioning of the digital single market, where new information services, such as meta-search engines or price-comparison websites, base their operation on the systematic extraction and re-utilisation of materials available from online sources. This is an issue that the Commission should address in a forthcoming evaluation of the Database Directive.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we address the challenge of gender classification using large databases of images, with two goals. The first objective is to evaluate whether the error rate decreases compared to smaller databases. The second goal is to determine whether the classifier that provides the best classification rate for one database improves the classification results for other databases, that is, the cross-database performance.

Relevance:

80.00%

Publisher:

Abstract:

In recent years, considerable effort has gone into quantifying the reuse and recycling potential of waste generated by residential construction. Unfortunately, less information is available for the commercial refurbishment sector. It is hypothesised that significant economic and environmental benefits can be derived from closer monitoring of the commercial construction waste stream. With the aim of assessing these benefits, the authors are involved in ongoing case studies to record both current standard practice and the most effective means of improving the eco-efficiency of materials use in office building refurbishments. This paper focuses on the issues involved in developing methods for obtaining the necessary information on better waste management practices and establishing benchmark indicators. The need to create databases to establish benchmarks of waste minimisation best practice in commercial construction is stressed. Further research will monitor the delivery of case study projects and the levels of reuse and recycling achieved in directly quantifiable ways.

Relevance:

80.00%

Publisher:

Abstract:

EMR (Electronic Medical Record) is an emerging technology that closely blends non-IT and IT areas; one way of linking the two is to construct databases. Nowadays, EMR supports patients before and after treatment and must satisfy all stakeholders, such as practitioners, nurses, researchers, administrators, financial departments and so on. For database maintenance, the DAS (Data as a Service) model is one outsourcing solution. However, there are scalability and strategy issues when planning to use the DAS model properly. We constructed three kinds of databases: plain-text, MS built-in encryption (an in-house model) and a custom AES (Advanced Encryption Standard) DAS model, scaling from 5K to 2560K records. To make the custom AES-DAS perform better, we also devised a Bucket Index using a Bloom filter. The simulation showed that response times increased arithmetically at first but, after a certain threshold, increased exponentially. In conclusion, if the database model is close to an in-house model, vendor technology is a good way to obtain consistent query response times. If the model is a DAS model, outsourcing the database is easy, but techniques such as the Bucket Index enhance its utilization. To obtain faster query response times, database design considerations such as the choice of field types are also important. This study suggests that cloud computing would be the next DAS model to satisfy the scalability and security issues.
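The Bucket Index idea can be sketched with a tiny Bloom filter. This is an assumed, simplified design rather than the study's code: `BloomBucket`, `M` and `K` are illustrative names and parameters. The point is that a client can cheaply test whether a bucket of encrypted rows may contain a value, with no false negatives and a small false-positive rate, before fetching and decrypting anything.

```python
import hashlib

M, K = 256, 3  # filter size in bits and number of hash functions (illustrative)

def _hashes(value):
    """Derive K bit positions for a value from salted SHA-256 digests."""
    for i in range(K):
        digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
        yield int.from_bytes(digest[:8], "big") % M

class BloomBucket:
    """Bloom filter summarising which plaintext keys a bucket may hold."""

    def __init__(self):
        self.bits = 0  # M-bit filter packed into one integer

    def add(self, value):
        for h in _hashes(value):
            self.bits |= 1 << h

    def may_contain(self, value):
        # False positives are possible; false negatives are not.
        return all(self.bits >> h & 1 for h in _hashes(value))
```

In a DAS setting the server would store one such filter per bucket alongside the AES-encrypted rows; a query first probes the filters and only the buckets that answer "maybe" are transferred and decrypted, which is how an index like this reduces response times.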