986 results for Store information
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing the pattern of information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam must be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the readout light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest-quality pattern storage would be achieved with a thin 0.5 mm medium; however, such a medium would also remove the degradation of the patterns and the subsequent recovery process.
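Returning to the non-contact temperature measurement described above: a minimal sketch of how counting the birefringence-induced intensity oscillations translates into a temperature change, assuming each full oscillation corresponds to a 2π change of the retardation 2π·Δn·L/λ. The wavelength, crystal length and thermo-optic birefringence coefficient below are placeholder values, not the ones used in the thesis.

```python
def temperature_change_from_oscillations(n_oscillations,
                                         wavelength=633e-9,       # probe wavelength (m), placeholder
                                         crystal_length=10e-3,    # optical path in the crystal (m), placeholder
                                         d_birefringence_dT=4e-5  # |d(delta n)/dT| (1/K), placeholder
                                         ):
    """Each counted intensity oscillation corresponds to a temperature step of
    roughly lambda / (L * |d(delta n)/dT|); the total change is that step times
    the number of oscillations (illustrative sketch only)."""
    dT_per_oscillation = wavelength / (crystal_length * d_birefringence_dT)
    return n_oscillations * dT_per_oscillation

# e.g. 20 counted fringes during heating
print(f"{temperature_change_from_oscillations(20):.1f} K")
```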
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
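As context for the FD-BPM modelling discussed in this abstract, a minimal sketch of one paraxial propagation step: the induced index change is applied as a thin phase screen and diffraction is handled by a Crank-Nicolson finite-difference step. All values (wavelength, background index, stripe width, the toy photorefractive response) are placeholders rather than the thesis parameters.

```python
import numpy as np
from scipy.linalg import solve_banded

def fd_bpm_step(E, delta_n, dx, dz, wavelength=532e-9, n0=2.2):
    """One paraxial FD-BPM step for dE/dz = (i/2k) d2E/dx2 + i*k0*delta_n*E
    (illustrative sketch with placeholder optical parameters)."""
    k0 = 2 * np.pi / wavelength
    k = k0 * n0
    # 1) thin phase screen for the induced refractive index change delta_n(x)
    E = E * np.exp(1j * k0 * delta_n * dz)
    # 2) Crank-Nicolson finite-difference step for the diffraction term
    r = 1j * dz / (4 * k * dx**2)
    n = E.size
    ab = np.zeros((3, n), dtype=complex)      # banded form of (I - r*D2)
    ab[0, 1:] = -r
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r
    rhs = (1 - 2 * r) * E                     # (I + r*D2) E with zero boundaries
    rhs[1:-1] += r * (E[2:] + E[:-2])
    return solve_banded((1, 1), ab, rhs)

# Example: a stripe-patterned write beam in an initially uniform crystal
x = np.linspace(-0.5e-3, 0.5e-3, 1024)
dx = x[1] - x[0]
stripes = (np.cos(2 * np.pi * x / 60e-6) > 0).astype(float)   # 30 µm stripes (placeholder)
E = stripes * np.exp(-(x / 0.3e-3) ** 2) + 0j
for _ in range(200):
    dn = -1e-4 * np.abs(E)**2 / (np.abs(E)**2).max()           # toy photorefractive response
    E = fd_bpm_step(E, dn, dx, dz=5e-6)
```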
Abstract:
This work addresses the elaboration of user requirements as part of the adaptation of the information system SIGA-Adm (Sistema de Informação de Gestão Acadêmica-Administrativo), proposed for the University Hospital of the Universidade Federal de Juiz de Fora. SIGA-Adm has been in use at the Universidade Federal de Juiz de Fora since 2007. However, for it to be effectively applicable within the University Hospital, an investigation of the work processes inside the HU and an effort to improve those processes are necessary, in order to avoid adopting an information system that does not correspond to the reality found there. The elaboration of user requirements for this adaptation is intended as an instrument to mediate the discussion around developing SIGA-Adm for the HU/UFJF. The requirements presented originated in the materials management process, as a result of applying the Process Analysis and Improvement Methodology (MAMP) to the University Hospital's supply chain between 2004 and 2005. Monitoring materials management is a strategy used by the Sentinel Network (Rede Sentinela) to develop post-market surveillance actions for health products, such as investigation of use, product registration and withdrawal of products from the market. The University Hospital of UFJF has been a member of the Brazilian Network of Sentinel Hospitals, implemented by the Agência Nacional de Vigilância Sanitária (ANVISA), since 2001. Post-market surveillance actions for health products, under technovigilance, are shared by ANVISA and the members of the Sentinel Network through information on technical complaints and/or adverse events that could cause harm to the population. The research indicates that SIGA-Adm can store information about products that allows them to be traced under normal and/or abnormal conditions of use by the institution, contributing to the practice of technovigilance. Furthermore, it is expected that building SIGA-Adm/HU-UFJF will allow the existing systems at the HU to be integrated.
Abstract:
Engineering companies face many challenges today, such as increased competition, higher expectations from consumers and decreasing product lifecycle times. This means that product development times must be reduced to meet these challenges. Concurrent engineering, reuse of engineering knowledge and the use of advanced methods and tools are among the ways of reducing product development times. Concurrent engineering is crucial in making sure that products are designed with all issues considered simultaneously. The reuse of engineering knowledge allows existing solutions to be reused. It can also help to avoid the mistakes made in previous designs. Computer-based tools are used to store information, automate tasks, distribute work, perform simulation and so forth. This research concerns the evaluation of tools that can be used to support the design process. These tools are evaluated in terms of the capture of information generated during the design process. This information is vital to allow the reuse of knowledge. Present CAD systems store only information on the final definition of the product, such as geometry, materials and manufacturing processes. Product Data Management (PDM) systems can manage all this CAD information along with other product-related information. The research includes the evaluation of two PDM systems, Windchill and Metaphase, using the design of a single-handed water tap as a case study. The two PDMs were then compared with PROSUS/DDM using the same case study. PROSUS is the Process-Based Support System proposed by [Blessing 94]; the Design Data Model (DDM) is the product data model that incorporates PROSUS. The results look promising. PROSUS/DDM is able to capture most design information and structure and present it logically. The design process and product information are related and stored within the DDM structure. The PDMs can capture most design information, but information from early stages of design is stored only as unstructured documentation. Some problems were found with PROSUS/DDM. A proposal is made that may make it possible to resolve these problems, but this will require further research.
Abstract:
Personal communication devices are increasingly equipped with sensors that are able to collect and locally store information from their environs. The mobility of users carrying such devices, and hence the mobility of sensor readings in space and time, opens new horizons for interesting applications. In particular, we envision a system in which the collective sensing, storage and communication resources, and mobility of these devices could be leveraged to query the state of (possibly remote) neighborhoods. Such queries would have spatio-temporal constraints which must be met for the query answers to be useful. Using a simplified mobility model, we analytically quantify the benefits from cooperation (in terms of the system's ability to satisfy spatio-temporal constraints), which we show to go beyond simple space-time tradeoffs. In managing the limited storage resources of such cooperative systems, the goal should be to minimize the number of unsatisfiable spatio-temporal constraints. We show that Data Centric Storage (DCS), or "directed placement", is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose "amorphous placement", in which sensory samples are cached locally, and shuffling of cached samples is used to diffuse the sensory data throughout the whole network. We evaluate conditions under which directed versus amorphous placement strategies would be more efficient. These results lead us to propose a hybrid placement strategy, in which the spatio-temporal constraints associated with a sensory data type determine the most appropriate placement strategy for that data type. We perform an extensive simulation study to evaluate the performance of directed, amorphous, and hybrid placement protocols when applied to queries that are subject to timing constraints. Our results show that directed placement is better for queries with moderately tight deadlines, whereas amorphous placement is better for queries with looser deadlines, and that under most operational conditions, the hybrid technique gives the best compromise.
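A minimal sketch of the kind of per-data-type hybrid decision rule described above; the function, the deadline threshold and the connectivity flag are hypothetical placeholders rather than parameters taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class SensoryDataType:
    name: str
    deadline_s: float   # spatio-temporal constraint attached to queries for this data type

def choose_placement(data_type: SensoryDataType, network_connected: bool,
                     tight_deadline_s: float = 30.0) -> str:
    """Pick a placement strategy per sensory data type (illustrative sketch):
    directed (DCS-style) placement for tight deadlines on a well-connected
    network, amorphous (local caching plus shuffling) otherwise."""
    if network_connected and data_type.deadline_s <= tight_deadline_s:
        return "directed"
    return "amorphous"

print(choose_placement(SensoryDataType("temperature", deadline_s=10.0), network_connected=True))
print(choose_placement(SensoryDataType("air-quality", deadline_s=600.0), network_connected=True))
```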
Abstract:
We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). We assume the full information model, where the Byzantine nodes have complete knowledge about the entire state of the network at every round (including random choices made by all the nodes), have unbounded computational power and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may rewire the topology in every round and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an $O(\log^3 n)$ round algorithm that achieves Byzantine leader election under the presence of up to $O(n^{1/2 - \epsilon})$ Byzantine nodes (for a small constant $\epsilon > 0$) and a churn of up to $O(\sqrt{n}/\mathrm{polylog}(n))$ nodes per round (where $n$ is the stable network size). The algorithm elects a leader with probability at least $1-n^{-\Omega(1)}$ and guarantees that it is an honest node with probability at least $1-n^{-\Omega(1)}$; assuming the algorithm succeeds, the leader's identity will be known to a $1-o(1)$ fraction of the honest nodes. Our algorithm is fully distributed, lightweight, and simple to implement. It is also scalable, as it runs in polylogarithmic (in $n$) time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.
Abstract:
The complexity of modern SCADA networks and their associated cyber-attacks requires an expressive but flexible way of representing both domain knowledge and collected intrusion alerts, with the ability to integrate them for enhanced analytical capabilities and a better understanding of attacks. This paper proposes an ontology-based approach for contextualized intrusion alerts in SCADA networks. In this approach, three security ontologies were developed to represent and store information on intrusion alerts, Modbus communications, and Modbus attack descriptions. This information is correlated into enriched intrusion alerts using simple ontology logic rules written in the Semantic Query-enhanced Web Rule Language (SQWRL). The contextualized alerts give analysts the means to better understand evolving attacks and to uncover the semantic relationships between sequences of individual attack events. The proposed system is illustrated by two use case scenarios.
Abstract:
This paper discusses the changes brought by the communication revolution to teaching and learning in the scope of LSP. Its aim is to provide insight into how teaching, which used to be bi-dimensional, turned into a multidimensional system, gathering complementary resources that have transformed, in an incredibly short time, the ways we receive, share and store information, for instance as professionals, and keep in touch with our peers. The increasing rise of electronic publications and the incredible boom of social and professional networks, search engines, blogs, listservs, forums, e-mail blasts, Facebook pages, YouTube content, tweets and apps have reshaped the way information is conveyed. Classes have ceased to be predictable and have been empowered by digital platforms and numerous different data repositories (TILDE, IATE, LINGUEE, and many other terminological data banks) that have definitively transformed the academic world in general and tertiary education in particular. There is a bulk of information to be digested by students, who are no longer passive but instead responsible for, and active in, their academic outcomes. The question is whether, given that overflow, they possess the tools to select only what is accurate and important for a given subject or assignment. With the reduction in the number of course years in most degrees after the implementation of Bologna, and the shrinking of curricular content, can students still develop critical thinking? Both teaching and learning rely on digital resources to improve the speed at which knowledge spreads, but have those changes been effective in really promoting communication? Furthermore, with the growing number of apps that have already been developed, and will continue to appear, for learning foreign languages, for translation and for other purposes, will students still feel the need to learn these skills once they have those apps? These are some of the questions we would like to discuss in our paper.
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, Energy branch.
Abstract:
Deep learning is a rapidly growing research area within machine learning that has achieved impressive results on tasks ranging from image classification to speech, including language modeling. Recurrent neural networks, a sub-class of deep architectures, appear particularly promising. Recurrent networks can capture the temporal structure in data. They potentially have the ability to learn correlations between events that are far apart in time and to store information indefinitely in their internal memory. In this work, we first try to understand why depth is useful. Similarly to other work in the literature, our results show that deep models can be more efficient at representing certain families of functions than shallow models. Unlike that work, we carry out our theoretical analysis on deep feedforward (acyclic) networks with piecewise-linear activation functions, since this type of model currently represents the state of the art on various classification tasks. The second part of this thesis concerns the learning process. We analyze several recently proposed optimization techniques, such as Hessian-free optimization, natural gradient descent and Krylov subspace descent. We propose the theoretical framework of generalized trust-region methods and show that several of these recently developed algorithms can be viewed from this perspective. We argue that some members of this family of approaches may be better suited than others to non-convex optimization. The last part of this document focuses on recurrent neural networks. We first study the concept of memory and attempt to answer the following questions: can recurrent networks exhibit unbounded memory? Can this behavior be learned? We show that this is possible if hints are provided during training. We then explore two problems specific to training recurrent networks, namely the vanishing and the exploding gradient. Our analysis concludes with a solution to the exploding gradient problem that involves bounding the norm of the gradient. We also propose a regularization term designed specifically to reduce the vanishing gradient problem. On a synthetic dataset, we show empirically that these mechanisms can allow recurrent networks to learn autonomously to memorize information for an indefinite period of time. Finally, we explore the notion of depth in recurrent neural networks. Compared with feedforward networks, the definition of depth in recurrent networks is often ambiguous. We propose different ways of adding depth to recurrent networks and evaluate these proposals empirically.
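As an illustration of the gradient-norm bounding mentioned above for the exploding gradient problem, a minimal NumPy sketch follows; the function name and the threshold value are placeholders for illustration, not taken from the thesis.

```python
import numpy as np

def clip_gradient_norm(grads, threshold=1.0):
    """Rescale a list of gradient arrays so that their global L2 norm
    does not exceed `threshold` (illustrative sketch)."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > threshold:
        scale = threshold / (total_norm + 1e-12)
        grads = [g * scale for g in grads]
    return grads

# Example: two parameter gradients whose combined norm exceeds the threshold
g_W, g_b = np.array([[3.0, 4.0]]), np.array([12.0])
print([g.tolist() for g in clip_gradient_norm([g_W, g_b], threshold=1.0)])
```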
Abstract:
Objective: In order to strengthen preventive strategies in an integrated way, this study aims to characterize the occupational accidents that occurred in a company in the hydrocarbons sector and to identify possible risk factors related to these events. Methodology: A retrospective descriptive study was carried out using the SIGA and AUDICOMP databases for the period July 2010 to June 2013, which store information on occupational accidents among workers linked to Petrobras, Colombia. Our outcome variable was the number of occupational accidents as a function of work experience and length of employment with the company, stratified by demographic characteristics, characteristics of the position held and injured body region. Measures of central tendency and dispersion were calculated for the continuous variables and proportions for the categorical variables; the odds ratio (OR) of having an accident within less than 1 year, or between 1 and 5 years, of employment was estimated. Results: 457 accidents occurred; 96% (95% CI 94.2-97.8) involved men, and the 25-34 age decade (36.8%; 95% CI 32.4-41.2) and manual labourer positions (35.3%; 95% CI 30.9-39.6) were the most frequent. Being a manual labourer (95% CI 2.11-2.65) and having less than one year of work experience (95% CI 1.78-2.33) were the main factors related to an accident within the first year of employment; the model with the highest AUC was that for men aged 18-24 years, hired as manual labourers and with less than one year of work experience (AUC 0.973; 95% CI 0.865-0.995). Conclusions: Men aged 18-24 years, hired as manual labourers and with less than one year of work experience, had a higher risk of having an accident within less than one year of being hired. The proposed model helped to identify workers with a high probability of having an accident within the first year of employment.
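For reference, a minimal sketch of how an odds ratio of the kind reported above, with its 95% confidence interval, is computed from a 2×2 table; the counts below are made up for illustration and are not the study's data.

```python
import math

# Hypothetical 2x2 table (illustrative counts only, not from the study):
# rows: < 1 year of experience vs >= 1 year; columns: accident yes / no
a, b = 120, 80    # exposed: accidents, no accidents
c, d = 90, 160    # unexposed: accidents, no accidents

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)              # standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```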
Abstract:
For those few readers who do not know, CAFS is a system developed by ICL to search through data at speeds of several million characters per second. Its full name is Content Addressable File Store Information Search Processor, CAFS-ISP or CAFS for short. It is an intelligent hardware-based search engine, currently available with both ICL's 2966 family of computers and the recently announced Series 39, operating within the VME environment. It uses content addressing techniques to perform fast searches of data or text stored on discs: almost all fields are equally accessible as search keys. Software in the mainframe generates a search task; the CAFS hardware performs the search and returns the hit records to the mainframe. Because special hardware is used, the searching process is very much more efficient than searching performed by any software method. Various software interfaces are available which allow CAFS to be used in many different situations. CAFS can be used with existing systems without significant change. It can be used to make online enquiries of mainframe files or databases, or directly from user-written high-level language programs. These interfaces are outlined in the body of the report.
Abstract:
Online geographic databases have been growing steadily as they have become a crucial source of information for both social networks and safety-critical systems. Since the quality of such applications is largely related to the richness and completeness of their data, it becomes imperative to develop adaptable and persistent storage systems, able to make use of several sources of information as well as enabling the fastest possible response from them. This work creates a shared and extensible geographic model, able to retrieve and store information from the major spatial sources available. A geographic-based system also has very high requirements in terms of scalability, computational power and domain complexity, causing several difficulties for a traditional relational database as the number of results increases. NoSQL systems provide valuable advantages for this scenario, in particular graph databases, which are capable of modeling vast amounts of inter-connected data while providing a very substantial increase in performance for several spatial requests, such as finding shortest-path routes and performing relationship lookups with high concurrency. In this work, we analyze the current state of geographic information systems and develop a unified geographic model, named GeoPlace Explorer (GE). GE is able to import and store spatial data from several online sources at a symbolic level in both a relational and a graph database, on which several stress tests were performed in order to identify the advantages and disadvantages of each database paradigm.
Abstract:
Spiking neural networks - networks that encode information in the timing of spikes - are emerging as a new approach in the artificial neural networks paradigm, arising from cognitive science. One of these new models is the pulsed neural network with radial basis function, a network able to store information in the axonal propagation delays of its neurons. Learning algorithms have been proposed for this model that aim to map input pulses onto output pulses. Recently, a new method was proposed to encode constant data into a temporal sequence of spikes, stimulating deeper studies in order to establish the abilities and frontiers of this new approach. However, a well-known problem of this kind of network is the high number of free parameters - more than 15 - that must be properly configured or tuned in order to allow network convergence. This work presents, for the first time, a new learning function for training this network that allows the automatic configuration of one of the key network parameters: the synaptic weight decreasing factor.
Abstract:
Pulsed (spiking) neural networks - networks that use a temporal encoding of information - have emerged as a promising approach within the connectionist paradigm arising from cognitive science. One of these new models is the pulsed neural network with radial basis function, which is able to store information in the axonal delay times of its neurons. A learning algorithm has been successfully applied to this pulsed network, which proved capable of mapping a sequence of input pulses onto a sequence of output pulses. More recently, a method based on Gaussian receptive fields was proposed to encode constant data into a sequence of temporal pulses. This method made it possible for the network to handle computational data. The learning process of this new network is not yet fully understood, and deeper investigation is needed to situate this model within the context of machine learning and to establish the abilities and limitations of this network. This work presents an investigation of this new classifier and a study of its capacity to cluster data in three dimensions, particularly seeking to establish its domains of application and horizons in the field of computer vision.
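As an illustration of the Gaussian receptive field encoding referred to in these two abstracts, a minimal sketch of turning a constant value into a set of spike times (one per receptive field, strong activation firing early and weak activation firing late); the number of fields, overlap factor and time window are arbitrary placeholders, not the parameters used in the cited work.

```python
import numpy as np

def encode_scalar_as_spike_times(x, x_min=0.0, x_max=1.0,
                                 n_fields=8, overlap=1.5, t_max=10.0):
    """Encode the constant value x into one spike time per Gaussian receptive
    field: the stronger a field's activation, the earlier its spike.
    All parameter values are illustrative placeholders."""
    centers = np.linspace(x_min, x_max, n_fields)
    sigma = overlap * (x_max - x_min) / (n_fields - 1)
    activation = np.exp(-0.5 * ((x - centers) / sigma) ** 2)   # in (0, 1]
    spike_times = t_max * (1.0 - activation)                   # map to [0, t_max)
    return spike_times

# Example: two nearby values yield similar but distinguishable spike patterns
print(np.round(encode_scalar_as_spike_times(0.30), 2))
print(np.round(encode_scalar_as_spike_times(0.35), 2))
```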
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)