980 results for Untouchable Databases


Relevance:

20.00%

Publisher:

Abstract:

Based on recent developments in occupational health and a review of industry practices, it is argued that integrated exposure database and surveillance systems hold considerable promise for improving workplace health and safety. A foundation from which to build practical and effective exposure surveillance systems is proposed based on the integration of recent developments in electronic exposure databases, the codification of exposure assessment practice, and the theory and practice of public health surveillance. The merging of parallel, but until now largely separate, efforts in these areas into exposure surveillance systems combines unique strengths from each subdiscipline. The promise of exposure database and surveillance systems, however, is yet to be realized. Exposure surveillance practices in general industry are reviewed based on the published literature as well as an Internet survey of three prominent industrial hygiene e-mail lists. Although the benefits of exposure surveillance are many, relatively few organizations use electronic exposure databases, and even fewer have active exposure surveillance systems. Implementation of exposure databases and surveillance systems can likely be improved by the development of systems that are more responsive to workplace or organizational-level needs. An overview of exposure database software packages provides guidance to readers considering the implementation of commercially available systems. Strategies for improving the implementation of exposure database and surveillance systems are outlined. A companion report in this issue on the development and pilot testing of a workplace-level exposure surveillance system concretely illustrates the application of the conceptual framework proposed.

Relevance:

20.00%

Publisher:

Abstract:

We outline methods for integrating epidemiologic and industrial hygiene data systems for the purpose of exposure estimation, exposure surveillance, worker notification, and occupational medicine practice. We present examples of these methods from our work at the Rocky Flats Plant, a former nuclear weapons facility that fabricated plutonium triggers for nuclear weapons and is now being decontaminated and decommissioned. The weapons production processes exposed workers to plutonium, gamma photons, neutrons, beryllium, asbestos, and several hazardous chemical agents, including chlorinated hydrocarbons and heavy metals. We developed a job exposure matrix (JEM) for estimating exposures to 10 chemical agents in 20 buildings for 120 different job categories over a production history spanning 34 years. With the JEM, we estimated lifetime chemical exposures for about 12,000 of the 16,000 former production workers. We show how the JEM database is used to estimate cumulative exposures over different time periods for epidemiological studies and to provide notification and determine eligibility for a medical screening program developed for former workers. We designed an industrial hygiene data system for maintaining exposure data for current cleanup workers. We describe how this system can be used for exposure surveillance and linked with the JEM and databases on radiation doses to develop lifetime exposure histories and to determine appropriate medical monitoring tests for current cleanup workers. We also present time-line-based graphical methods for reviewing and correcting exposure estimates and reporting them to individual workers.
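A job exposure matrix of the kind described can be sketched as a simple indexed structure; the jobs, building codes, agents, intensity scores, and the decade-keyed layout below are invented placeholders for illustration, not the plant's actual categories:

```python
# Hypothetical JEM sketch: exposure intensities indexed by
# (job, building, decade); cumulative exposure for a worker is the sum of
# intensity * years over that worker's job history, per agent.
from collections import defaultdict

# JEM entries: (job, building, decade) -> {agent: intensity score}
jem = {
    ("machinist", "B771", 1960): {"carbon tetrachloride": 3, "beryllium": 1},
    ("machinist", "B771", 1970): {"carbon tetrachloride": 2, "beryllium": 1},
    ("chemist", "B371", 1970): {"carbon tetrachloride": 1},
}

def cumulative_exposure(history):
    """Sum intensity * years for each agent over a worker's job history."""
    totals = defaultdict(float)
    for job, building, decade, years in history:
        for agent, intensity in jem.get((job, building, decade), {}).items():
            totals[agent] += intensity * years
    return dict(totals)

worker = [("machinist", "B771", 1960, 8), ("machinist", "B771", 1970, 5)]
# carbon tetrachloride: 3*8 + 2*5 = 34; beryllium: 1*8 + 1*5 = 13
print(cumulative_exposure(worker))
```

The same lookup supports both uses the abstract mentions: summing over a restricted set of decades gives period-specific estimates for epidemiology, while thresholds on the lifetime totals can drive screening-program eligibility.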

Relevance:

20.00%

Publisher:

Abstract:

Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are (i) a summary of the inherent properties of infrared imaging which makes this modality promising in the context of face recognition; (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies; (iii) a description of the main databases of infrared facial images available to the researcher; and lastly (iv) a discussion of the most promising avenues for future research. © 2014 Elsevier Ltd.

Relevance:

20.00%

Publisher:

Abstract:

Child sexual abuse has a serious impact on victims, their families and the broader community. As such, there is a critical need for sound research evidence to inform specialist responses. Increasingly, researchers are utilising administrative databases to track outcomes of individual cases across health, justice and other government agencies. There are unique advantages to this approach, including the ability to access a rich source of information at a population-wide level. However, the potential limitations of utilising administrative databases have not been fully explored. Because these databases were created originally for administrative rather than research purposes, there are significant problems with using this data at face value for research projects. We draw on our collective research experience in child sexual abuse to highlight common problems that have emerged when applying administrative databases to research questions. Some of the problems discussed include identification of relevant cases, ensuring reliability and dealing with missing data. Our article concludes with recommendations for researchers and policy-makers to enhance data quality.

Relevance:

20.00%

Publisher:

Abstract:

Database query verification schemes provide correctness guarantees for database queries. Typically such guarantees are required and advisable where queries are executed on untrusted servers. This need to verify query results, even though they may have been executed on one’s own database, is something new that has arisen with the advent of cloud services. The traditional model of hosting one’s own databases on one’s own servers did not require such verification because the hardware and software were both entirely within one’s control, and therefore fully trusted. However, with the economical and technological benefits of cloud services beckoning, many are now considering outsourcing both data and execution of database queries to the cloud, despite obvious risks. This survey paper provides an overview into the field of database query verification and explores the current state of the art in terms of query execution and correctness guarantees provided for query results. We also provide indications towards future work in the area.

Relevance:

20.00%

Publisher:

Abstract:

Database query verification schemes attempt to provide authenticity, completeness, and freshness guarantees for queries executed on untrusted cloud servers. A number of such schemes currently exist in the literature, allowing query verification for queries that are based on matching whole values (such as numbers, dates, etc.) or for queries based on keyword matching. However, there is a notable gap in the research with regard to query verification schemes for pattern-matching queries. Our contribution here is to provide such a verification scheme that provides correctness guarantees for pattern-matching queries executed on the cloud. We describe a trivial scheme and show how it does not provide completeness guarantees, and then proceed to describe our scheme based on efficient primitives such as cryptographic hashing and Merkle hash trees along with suffix arrays. We also provide experimental results based on a working prototype to show the practicality of our scheme.
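As an illustration of the Merkle-hash-tree primitive the abstract mentions, the generic textbook construction (not the paper's full pattern-matching scheme) lets a client verify that a returned record belongs to a tree whose signed root it trusts:

```python
# Minimal Merkle hash tree sketch: the server returns a record plus sibling
# hashes along the path to the root; the client recomputes the root and
# compares it against a trusted (e.g. signed) root hash.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return a list of levels: leaf hashes first, root level last."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        path.append(level[sib] if sib < len(level) else level[index])
        index //= 2
    return path

def verify(leaf, index, path, root):
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

records = [b"ant", b"bee", b"cat", b"dog"]
levels = build_tree(records)
root = levels[-1][0]
assert verify(b"cat", 2, proof(levels, 2), root)      # genuine record passes
assert not verify(b"cow", 2, proof(levels, 2), root)  # tampered record fails
```

The proof size is logarithmic in the number of records, which is what makes this primitive practical for verifying individual query results against large outsourced databases.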

Relevance:

20.00%

Publisher:

Abstract:

http://digitalcommons.winthrop.edu/dacusfocus/1027/thumbnail.jpg

Relevance:

20.00%

Publisher:

Abstract:

As enterprises constantly grow and the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application or the procurement application must connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several difficulties because applications may not have been designed and implemented with integration in mind. Regarding integration issues, two-tier software systems, composed of the database tier and the "front-end" tier (interface), have shown some limitations. As a solution to overcome the two-tier limitations, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the "front-end" tier (or simply the application), three main benefits emerge. The first benefit is that the division of software systems into three tiers enables increased integration capabilities with other systems. The second benefit is that modifications to individual tiers may be carried out without necessarily affecting the other tiers and integrated systems, and the third benefit, a consequence of the others, is that fewer maintenance tasks are required in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service Oriented Architecture, combined with middleware.
These two technologies, blended with middleware, resulted in the development of the Swoat framework (Service and Semantic Web Oriented ArchiTecture) and lead to the following four synergic advantages: (1) they allow the creation of loosely coupled systems, decoupling the database from "front-end" tiers and therefore reducing maintenance; (2) the database schema is transparent to "front-end" tiers, which are only aware of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) the service request by the "front-end" tier focuses on 'what' data is needed rather than on 'where' and 'how' it is obtained, reducing application development time for developers.
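The "what, not where and how" principle in advantage (4) can be sketched as a thin middleware layer that hides the physical schema behind a domain model. The table and column names below are invented, and this is a generic illustration, not the Swoat API:

```python
# Hypothetical middleware sketch: the front-end asks for *what* data it
# wants in domain-model terms; only the middleware knows *where* and *how*
# that maps onto the physical database schema.
import sqlite3

# Mapping from domain-model entities to the physical schema, hidden from
# clients: entity -> (table name, {domain field: physical column}).
DOMAIN_MODEL = {
    "Customer": ("tbl_cust_v2", {"name": "cust_nm", "city": "cust_city"}),
}

def service_request(conn, entity):
    """Return rows as domain-model dicts; a schema change (e.g. renaming
    tbl_cust_v2) only touches DOMAIN_MODEL, never the front-end tier."""
    table, fields = DOMAIN_MODEL[entity]
    cols = ", ".join(fields.values())
    rows = conn.execute(f"SELECT {cols} FROM {table}").fetchall()
    return [dict(zip(fields.keys(), r)) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_cust_v2 (cust_nm TEXT, cust_city TEXT)")
conn.execute("INSERT INTO tbl_cust_v2 VALUES ('Ana', 'Funchal')")
print(service_request(conn, "Customer"))
```

The front-end only ever sees the domain-model field names (`name`, `city`), which is the decoupling that advantages (1) and (2) describe.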

Relevance:

20.00%

Publisher:

Abstract:

DBMODELING is a relational database of annotated comparative protein structure models and their metabolic pathway characterization. It is focused on enzymes identified in the genomes of Mycobacterium tuberculosis and Xylella fastidiosa. The main goal of the present database is to provide structural models to be used in docking simulations and drug design. However, since the accuracy of structural models is highly dependent on sequence identity between template and target, it is necessary to make clear to the user that only models which show high structural quality should be used in such efforts. Molecular modeling of these genomes generated a database in which all structural models were built using alignments presenting more than 30% sequence identity, generating models with medium and high accuracy. All models in the database are publicly accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools. The DBMODELING user interface provides user-friendly menus, so that all information can be obtained in one stop from any web browser. Furthermore, DBMODELING also provides a docking interface, which allows the user to carry out geometric docking simulations against the molecular models available in the database. There are three other important homology model databases: MODBASE, SWISSMODEL, and GTOP. The main applications of these databases are described in the present article. © 2007 Bentham Science Publishers Ltd.

Relevance:

20.00%

Publisher:

Abstract:

In a peer-to-peer network, the nodes interact with each other by sharing resources, services and information. Many applications have been developed using such networks, one class of which is peer-to-peer databases. Peer-to-peer database systems allow the sharing of unstructured data and are able to integrate data from several sources without the need for large investments, because existing repositories are used. However, the high flexibility and dynamicity of the network, as well as the absence of centralized management of information, make the process of locating information among the various participants in the network complex. In this context, this paper presents original contributions through a proposed architecture for a routing system that uses the Ant Colony algorithm to optimize the search for desired information, supported by ontologies that add semantics to shared data. This enables integration among heterogeneous databases while seeking to reduce message traffic on the network without causing losses in the number of responses, confirmed by an improvement of 22.5% in this number. © 2011 IEEE.
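The pheromone bookkeeping behind ant-colony routing can be sketched as follows; the class, parameter values and update rule are assumptions for illustration, not the paper's actual design:

```python
# Ant-colony routing sketch: each peer keeps a pheromone level per
# neighbour. Queries follow pheromone-weighted random choices; paths that
# found answers are reinforced, and all pheromone slowly evaporates.
import random

class Peer:
    def __init__(self, name, neighbours):
        self.name = name
        self.pheromone = {n: 1.0 for n in neighbours}  # uniform at start

    def next_hop(self, rng):
        """Pick a neighbour with probability proportional to its pheromone."""
        total = sum(self.pheromone.values())
        r = rng.random() * total
        for n, p in self.pheromone.items():
            r -= p
            if r <= 0:
                return n
        return n  # guard against floating-point leftovers

    def reinforce(self, neighbour, deposit=0.5, evaporation=0.1):
        """Evaporate everywhere, then deposit pheromone on a successful hop."""
        for n in self.pheromone:
            self.pheromone[n] *= (1.0 - evaporation)
        self.pheromone[neighbour] += deposit

peer = Peer("p1", ["p2", "p3"])
for _ in range(5):
    peer.reinforce("p3")  # p3 keeps answering queries successfully
assert peer.pheromone["p3"] > peer.pheromone["p2"]
```

Evaporation is what lets the network adapt when peers leave or their content changes: routes that stop producing answers gradually lose their attraction instead of being followed forever.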

Relevance:

20.00%

Publisher:

Abstract:

Multi-relational data mining enables pattern mining from multiple tables. Existing multi-relational association rule mining algorithms are not able to process large volumes of data, because the amount of memory required exceeds the amount available. The proposed algorithm, MR-Radix, presents a framework that optimizes memory usage. It also uses the concept of partitioning to handle large volumes of data. The original contribution of this proposal is to enable superior performance compared to other related algorithms and, moreover, to successfully conclude the task of mining association rules in large databases, bypassing the problem of available memory. One of the tests showed that MR-Radix uses fourteen times less memory than GFP-growth. © 2011 IEEE.
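The partitioning idea can be illustrated with a generic two-pass candidate-and-recount scheme; this is a textbook partition-mining sketch, not MR-Radix's actual radix-tree framework:

```python
# Partition-based frequent itemset mining sketch (itemsets of size 1-2):
# pass 1 mines each partition independently, so only one partition's counts
# are ever held in memory; pass 2 rescans the data to recount the surviving
# candidates globally. A globally frequent itemset must be locally frequent
# in at least one partition, so no true answer is lost.
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, partition_size=2):
    candidates = set()
    # Pass 1: mine partitions independently to keep memory bounded.
    for start in range(0, len(transactions), partition_size):
        part = transactions[start:start + partition_size]
        local_min = max(1, min_support * len(part) // len(transactions))
        counts = Counter()
        for t in part:
            for k in (1, 2):
                counts.update(frozenset(c) for c in combinations(sorted(t), k))
        candidates |= {i for i, c in counts.items() if c >= local_min}
    # Pass 2: one full scan recounts only the surviving candidates.
    global_counts = Counter()
    for t in transactions:
        for item_set in candidates:
            if item_set <= set(t):
                global_counts[item_set] += 1
    return {i for i, c in global_counts.items() if c >= min_support}

tx = [["a", "b"], ["a", "b"], ["a", "c"], ["b", "c"]]
print(frequent_itemsets(tx, min_support=3))  # {a} and {b} occur 3 times each
```

The memory saving comes from pass 1: the full candidate counts for the whole database never coexist in memory, only the counts for one partition at a time plus the (much smaller) surviving candidate set.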

Relevance:

20.00%

Publisher:

Abstract:

Includes bibliography