879 results for Security of data
Abstract:
Technological progress (the development of new measurement instruments such as software, satellites, and computers, together with the falling cost of storage media) allows organizations to produce and acquire large amounts of data in a short time. Because of this data volume, research organizations become potentially vulnerable to the impacts of the information explosion. One solution adopted by some organizations is the use of information system tools to support the documentation, retrieval, and analysis of data. In the scientific domain, these tools are designed to store different metadata standards (data about data). During the development of these tools, the adoption of standards such as the Unified Modeling Language (UML) stands out; its diagrams support the modeling of different aspects of the software. The objective of this study is to present an information system tool that supports the documentation of organizational data through metadata and to highlight the software modeling process using UML. The study covers the Digital Geospatial Metadata Standard, widely used for data cataloging by scientific organizations around the world, and the dynamic and static UML diagrams such as use case, sequence, and class diagrams. The development of information system tools can be a way to promote the organization and dissemination of scientific data. However, the modeling process requires special attention to the development of interfaces that will encourage the use of these tools.
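To illustrate how such a tool might store and retrieve records against a metadata standard, the sketch below models a few identification fields loosely inspired by geospatial metadata standards. All field names, the sample record, and the search helper are invented for illustration and are not taken from the tool described above.

```python
from dataclasses import dataclass, field

@dataclass
class GeospatialMetadata:
    # Identification fields loosely inspired by geospatial metadata
    # standards; the field names here are illustrative only.
    title: str
    abstract: str
    originator: str
    keywords: list = field(default_factory=list)
    west: float = -180.0
    east: float = 180.0
    south: float = -90.0
    north: float = 90.0

    def matches(self, term: str) -> bool:
        """Naive retrieval helper: case-insensitive title/keyword search."""
        t = term.lower()
        return t in self.title.lower() or any(t in k.lower() for k in self.keywords)

record = GeospatialMetadata(
    title="Rainfall grid, Amazon basin",
    abstract="Monthly rainfall estimates derived from satellite data.",
    originator="Example Research Organization",
    keywords=["rainfall", "satellite", "Amazon"],
)
print(record.matches("satellite"))  # True
```

A real tool would persist such records and index them for retrieval; the point here is only that a metadata standard reduces, in software terms, to a documented schema that search and documentation features can rely on.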
Abstract:
The objective of this work is to present a preliminary investigation of the accuracy of the results of a transmitter geographic location system developed using the software of the Brazilian data collection system. A set of Doppler shift measurements from a single satellite pass, considering one Data Collection Platform (DCP) and a network of ground reception stations, is called a data reception network. Thus, the Brazilian data collection network, by using multiple reception stations, would allow an increase in the amount of collected data, with a consequent improvement in the accuracy and reliability of the computed locations and, therefore, a larger number of valid and more precise locations. The results and analyses were obtained under two conditions: the first considered a practical scenario with real data and simulated ideal data, in order to compare the results for the same satellite pass, transmitter, and two known reception stations; the second considered ideal conditions simulated from measurements of a fixed transmitter, three reception stations, and two satellites. The results obtained using the data reception network were very satisfactory. The study showed the importance of installing new ground reception stations distributed across the national territory in order to increase the number of measurements and, consequently, the number of valid and more precise locations.
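The location method rests on the Doppler shift measured during a satellite pass: the shift is largest when the satellite is near the horizon, crosses zero at the point of closest approach, and changes sign as the satellite recedes. A minimal sketch of the first-order relation is shown below; the transmit frequency and the velocity profile are illustrative values, not data from the study.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f0: float, radial_velocity: float) -> float:
    """First-order Doppler shift in Hz; positive while the satellite approaches
    (negative radial velocity, i.e. decreasing range)."""
    return -f0 * radial_velocity / C

# Toy pass: radial velocity sweeps from approach (-7 km/s) to recession (+7 km/s).
# 401.65 MHz is a typical DCP uplink frequency, used here only as an example.
velocities = [-7000.0, -3000.0, 0.0, 3000.0, 7000.0]
shifts = [doppler_shift(401.65e6, v) for v in velocities]
```

The zero crossing of the measured shift pins down the time of closest approach, and the slope of the Doppler curve constrains the distance to the transmitter; combining curves from multiple reception stations (or satellites) is what over-determines the position and improves precision.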
Abstract:
The DO experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any single institution. Moreover, the widely scattered geographical distribution of DO collaborators poses further serious difficulties for the optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments, like the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in DO by developing a grid in the DO Southern Analysis Region (DOSAR), DOSAR-Grid, using the available resources within it and a home-grown local task manager, McFarm. We will present the architecture in which the DOSAR-Grid is implemented, the technology employed, the functionality of the grid, and the experience of operating the grid in simulation, reprocessing, and data analyses for a currently running HEP experiment.
Abstract:
Literature data about the female reproductive system of some species in the subfamily Ponerinae are presented. The objective of our project, to compile a report containing the largest possible amount of data on the reproductive system in Ponerinae for a better understanding of the reproductive biology of these insects, was accomplished.
Abstract:
The contents of some nutrients in 35 Brazilian green and roasted coffee samples were determined by flame atomic absorption spectrometry (Ca, Mg, Fe, Cu, Mn, and Zn), flame atomic emission photometry (Na and K), and the Kjeldahl method (N) after preparing the samples by wet digestion procedures using i) a digester heating block and ii) a conventional microwave oven system with pressure and temperature control. The accuracy of the procedures was checked using three standard reference materials (National Institute of Standards and Technology: SRM 1573a Tomato Leaves, SRM 1547 Peach Leaves, and SRM 1570a Trace Elements in Spinach). Analysis of the data by t-test showed that the results obtained by microwave-assisted digestion were more accurate than those obtained by the block digester at the 95% confidence level. In addition to better accuracy, other favorable characteristics found were lower analytical blanks, lower reagent consumption, and shorter digestion time. Exploratory analysis of the results using Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) showed that Na, K, Ca, Cu, Mg, and Fe were the principal elements for discriminating between green and roasted coffee samples. ©2007 Sociedade Brasileira de Química.
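As a rough illustration of the kind of comparison reported above, a paired t statistic can be computed directly when both digestion procedures are applied to the same reference materials. The recovery values below are invented for the example and are not the paper's data.

```python
import math

def paired_t(a, b):
    """Paired t statistic for two sets of measurements on the same samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical recoveries (%) of one element from a reference material.
microwave = [98.5, 99.1, 97.8, 98.9, 99.3]
block     = [95.2, 96.0, 94.8, 95.5, 96.1]
t = paired_t(microwave, block)
```

The statistic is then compared against the critical t value for n-1 degrees of freedom at the chosen confidence level (95% in the study) to decide whether the two procedures differ significantly.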
Abstract:
Spanish document available at the Library
Abstract:
Includes bibliography
Abstract:
This work presents a methodological proposal for the acquisition of biometric data through telemetry, basing its development on action research and a case study. Nowadays, qualified physical evaluation professionals have to use specific devices to obtain biometric signals and data. Most of the time, these devices are expensive and difficult to use and handle. Therefore, the methodological proposal was elaborated in order to conceptually develop a biotelemetric device able to acquire the desired biometric signals (oximetry, biometrics, body temperature, and pedometry), which are essential to the field of physical evaluation. The existing biometric sensors, the possible means of remote signal transmission, and the available computer systems were surveyed so that data acquisition would be possible. The methodological proposal for the remote acquisition of biometric signals is structured in four modules: acquisitor of biometric data; converter and transmitter of biometric signals; receiver and processor of biometric signals; and generator of interpretative graphs. The modules aim at producing interpretative graphs of human biometric signals. In order to validate the proposal, a functional prototype was developed and is presented throughout this work.
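The four-module structure described above amounts to an acquisition/encode/decode/report pipeline. The sketch below mirrors that structure with stub functions; the module names come from the abstract, but the signal values, serialization format, and text summary (standing in for the interpretative graphs) are invented.

```python
# Hypothetical four-module pipeline mirroring the proposal's structure.

def acquire():            # Module 1: acquisitor of biometric data (stub values)
    return {"spo2": 0.97, "temp_c": 36.6, "steps": 4200}

def encode(sample):       # Module 2: converter/transmitter (serialize for a radio link)
    return ",".join(f"{k}={v}" for k, v in sorted(sample.items()))

def decode(frame):        # Module 3: receiver/processor (parse the transmitted frame)
    return {k: float(v) for k, v in (p.split("=") for p in frame.split(","))}

def summarize(sample):    # Module 4: interpretative output (text in place of graphs)
    return f"SpO2 {sample['spo2']:.0%}, {sample['temp_c']} °C, {sample['steps']:.0f} steps"

report = summarize(decode(encode(acquire())))
```

The design choice worth noting is that modules 2 and 3 form an inverse pair, so the processing side can be developed and tested independently of the physical sensors and radio hardware.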
Abstract:
The strategic management of information plays a fundamental role in organizational management, since decision-making is driven by the need for survival in a highly competitive market. Companies are constantly concerned with information transparency and good practices of corporate governance (CG), which, in turn, govern relations between the controlling power of the company and investors. In this context, this article presents the relationship between the disclosure of information by joint-stock companies through XBRL and the open data model adopted by the Brazilian government, a model that boosted the publication of the Information Access Law (Lei de Acesso à Informação), No. 12,527 of 18 November 2011. Access to information should be permeated by a mediation policy in order to support investors' knowledge construction and decision-making. XBRL is the main model for the publication of financial information. The use of XBRL through the new semantic standards created for Linked Data strengthens information dissemination and creates mechanisms for analyzing and cross-referencing data against the different open databases available on the Internet, adding value to the data/information accessed by civil society.
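The cross-referencing idea behind Linked Data can be shown with a toy triple store: a financial fact expressed in XBRL terms is linked to an open government dataset, and a query follows that link to enrich the company record. All identifiers, prefixes, and values below are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical triples linking an XBRL-style financial fact to open government data.
triples = [
    ("ex:CompanyA", "xbrl:concept", "us-gaap:Revenues"),
    ("ex:CompanyA", "xbrl:value", "1000000"),
    ("ex:CompanyA", "owl:sameAs", "gov:Entity42"),
    ("gov:Entity42", "gov:sector", "energy"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against a simple pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Cross-reference: follow owl:sameAs to enrich company data with open data.
same = query("ex:CompanyA", "owl:sameAs")
sector = query(same[0][2], "gov:sector")
```

In practice this is what RDF stores and SPARQL endpoints do at scale: because both datasets use shared identifiers and vocabularies, the join across independently published databases requires no bespoke integration code.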
Abstract:
The Amazonian lowlands include large patches of open vegetation which contrast sharply with the rainforest, and the origin of these patches has been debated. This study focuses on a large area of open vegetation in northern Brazil, where δ13C and, in some instances, C/N analyses of the organic matter preserved in late Quaternary sediments were used to achieve floristic reconstructions over time. The main goal was to determine when the modern open vegetation started to develop in this area. The variability in δ13C data derived from nine cores ranges from -32.2 to -19.6‰, but with nearly 60% of the data above -26.5‰. The most enriched values were detected only in ecotone and open vegetated areas. The development of open vegetation communities was asynchronous, varying between estimated ages of 6400 and 3000 cal a BP. This suggests that the origin of the studied patches of open vegetation might be linked to the sedimentary dynamics of a late Quaternary megafan system. As sedimentation ended, this vegetation type became established over the megafan surface. In addition, the data presented here show that the presence of C4 plants must be used carefully as a proxy to interpret dry paleoclimatic episodes in Amazonian areas. Copyright © 2012 John Wiley & Sons, Ltd.
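The interpretive step behind such reconstructions is a threshold rule: more enriched (less negative) δ13C values point to a C4 (open vegetation) contribution, more depleted values to C3 (forest) dominance. A deliberately crude sketch, using the -26.5‰ value mentioned in the abstract as the cutoff (real studies use mixing models and corroborating proxies, not a single threshold):

```python
def vegetation_signal(d13c: float, threshold: float = -26.5) -> str:
    """Crude classification of sedimentary organic matter by δ13C (in ‰):
    values above the threshold suggest a C4 (open vegetation) contribution,
    values below it suggest C3 (forest) dominance. The cutoff mirrors the
    value quoted in the abstract and is illustrative only."""
    return "C4/open" if d13c > threshold else "C3/forest"

# The end-member values here match the range quoted in the abstract;
# the intermediate values are invented.
core_values = [-32.2, -28.0, -26.0, -22.4, -19.6]
labels = [vegetation_signal(v) for v in core_values]
```

The abstract's closing caution applies directly to such a rule: a C4 signal can reflect sedimentary dynamics (as on the megafan surface) rather than a dry climate, so the threshold alone cannot distinguish the two explanations.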
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data.
Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
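The connector-with-transformation-rules idea can be sketched as a mapping between two tools that name and encode the same quantity differently. The record, the field names, and the log2-to-fold-change conversion below are all hypothetical examples, not taken from the methodology's actual scenarios.

```python
# Minimal sketch of a software connector mediating between two hypothetical
# gene expression tools that use different field names and value encodings.

SOURCE_RECORD = {"gene_id": "BRCA1", "log2_ratio": 1.5}

# Transformation rules: target field -> (source field, conversion function).
RULES = {
    "gene": ("gene_id", str),
    "fold_change": ("log2_ratio", lambda x: 2 ** x),  # undo the log2 encoding
}

def connect(record, rules):
    """Apply transformation rules to translate a record between tools."""
    return {target: fn(record[src]) for target, (src, fn) in rules.items()}

translated = connect(SOURCE_RECORD, RULES)
```

Grounding both field vocabularies in a shared reference ontology is what lets such rules be written (or checked) systematically rather than rediscovered ad hoc for every pair of tools.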
Abstract:
An important requirement for computer systems developed for the agricultural sector is to handle the heterogeneity of data generated in different processes. Most problems related to this heterogeneity arise from the lack of a standard across the different computing solutions proposed. An efficient solution is to create a single standard for data exchange. The study of the actual process involved in cotton production was based on research developed by the Brazilian Agricultural Research Corporation (EMBRAPA) that reports all phases as the result of compiling several theoretical and practical studies related to the cotton crop. The proposal of a standard starts with the identification of the most important classes of data involved in the process and includes an ontology, that is, the systematization of concepts related to the production of cotton fiber, which results in a set of classes, relations, functions, and instances. The results are used as a reference for the development of computational tools, transforming implicit knowledge into applications that support the knowledge described. This research is based on data from the Midwest of Brazil. The cotton process was chosen as a case study because Brazil is one of the major players in this segment and several improvements are required for system integration.
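An ontology of classes, relations, functions, and instances can be represented very simply in code. The fragment below invents a handful of class and relation names for the cotton production domain purely to show the shape of such a structure; they are not the classes identified in the study.

```python
# Toy ontology fragment for cotton production (class and relation names invented).
ontology = {
    "classes": {"Field", "Planting", "Harvest", "FiberLot"},
    "relations": [
        ("Planting", "occursIn", "Field"),
        ("Harvest", "follows", "Planting"),
        ("FiberLot", "producedBy", "Harvest"),
    ],
}

def related(cls):
    """Classes reachable from cls via exactly one relation."""
    return {obj for subj, _, obj in ontology["relations"] if subj == cls}
```

Once the shared classes and relations are fixed, any two systems in the production chain can exchange records typed against them, which is the integration benefit the abstract argues for.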