924 results for Data storage equipment


Relevance:

90.00%

Publisher:

Abstract:

Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed along with an algorithmic implementation to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
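As a rough illustration of the data-locality constraint described above (not the paper's metaheuristic), the sketch below greedily assigns each comparison task to a node that already stores both input items, balancing load normalized by an assumed node speed; the node names, speeds and data placements are invented for the example.

```python
# Minimal sketch of data-aware scheduling for all-to-all comparison (not the
# paper's algorithm): every comparison task (i, j) must run on a node that
# already stores both items i and j, and faster nodes receive more tasks.
from itertools import combinations

# Hypothetical inputs: relative node speeds and the data items placed on each node.
node_speed = {"n1": 2.0, "n2": 1.0, "n3": 1.0}
placement = {                       # item -> nodes holding a replica
    "A": {"n1", "n2"}, "B": {"n1", "n3"},
    "C": {"n2", "n3"}, "D": {"n1", "n2", "n3"},
}

load = {n: 0.0 for n in node_speed}            # accumulated work per node
schedule = {}                                  # task -> node

for i, j in combinations(sorted(placement), 2):
    candidates = placement[i] & placement[j]   # data locality: both items present
    if not candidates:
        raise ValueError(f"no node stores both {i} and {j}; placement must be fixed")
    # Greedy load balancing: pick the candidate with the least normalized load.
    best = min(candidates, key=lambda n: load[n] / node_speed[n])
    schedule[(i, j)] = best
    load[best] += 1.0                          # unit-cost comparison task

print(schedule)
print({n: load[n] / node_speed[n] for n in load})  # normalized loads
```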

Relevance:

90.00%

Publisher:

Abstract:

A computer-controlled laser writing system for optical integrated circuits and data storage is described. The system is characterized by holographic (649F) and high-resolution plates. A minimum linewidth of 2.5 µm is obtained by controlling the system parameters. We show that this system can also be used for data storage applications.

Relevance:

90.00%

Publisher:

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O and network bandwidth. This thesis bridges these two topics, and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: What is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with arbitrary number of parities and optimal rebuilding.
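A toy worked example of the rebuilding question (our own construction, not the thesis's codes): with two XOR parities over two information columns, a single erased information column can be rebuilt by reading only half of the surviving symbols.

```python
# Toy illustration of rebuilding a single erased information column of a
# 2-parity array code by reading only half of the surviving data. Two
# information columns a, b of two bits each, plus a row parity r and a
# "zigzag" parity z over GF(2) (XOR).
a = [1, 0]
b = [0, 1]
r = [a[0] ^ b[0], a[1] ^ b[1]]   # row parity
z = [a[0] ^ b[1], a[1] ^ b[0]]   # diagonal (zigzag) parity

# Column a is erased. Naively, decoding reads all of b and r (4 of the 6
# surviving symbols). Using both parities we read only 3 of 6, i.e. 1/2:
a0 = r[0] ^ b[0]                 # reads r[0], b[0]
a1 = z[1] ^ b[0]                 # reads z[1], b[0] (already read)
assert [a0, a1] == a             # rebuilt from 3 distinct symbols out of 6
```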

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only a subset of the n cells is used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem called the universal cycle problem, where a universal cycle is a sequence of integers generating all possible partial permutations.
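The following sketch illustrates only the basic rank modulation mapping assumed above, not the proposed bounded or partial variants: charge levels induce a permutation, and data is rewritten with push-to-the-top operations. The charge values are invented for the example.

```python
# Sketch of the basic rank modulation idea: information is carried by the
# permutation induced by ranking cell charge levels, and cells are programmed
# with "push-to-the-top" operations (raise one cell above all others).
def induced_permutation(levels):
    """Return cell indices ordered from highest to lowest charge."""
    return sorted(range(len(levels)), key=lambda i: levels[i], reverse=True)

def push_to_top(levels, cell):
    """Program `cell` to exceed every other cell's charge (no erase needed)."""
    levels[cell] = max(levels) + 1.0

levels = [0.3, 1.7, 0.9, 2.4]          # analog charge levels of n = 4 cells
print(induced_permutation(levels))     # [3, 1, 2, 0]

push_to_top(levels, 0)                 # rewrite data by pushing cell 0 to the top
print(induced_permutation(levels))     # [0, 3, 1, 2]
```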

Relevance:

90.00%

Publisher:

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
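A minimal Monte Carlo sketch of the symmetric-allocation case described above; the budget, number of nodes and access-model parameters are illustrative assumptions, not values from the thesis.

```python
# Monte Carlo sketch of the allocation problem: a budget T (the object itself
# has size 1) is spread evenly over m of the n nodes; a collector accesses a
# random r-subset and succeeds if the accessed coded data sums to at least 1.
import random

def recovery_probability(n=10, r=4, budget=2.0, nonempty=5, trials=100_000):
    share = budget / nonempty                      # symmetric allocation
    stored = [share] * nonempty + [0.0] * (n - nonempty)
    hits = 0
    for _ in range(trials):
        accessed = random.sample(stored, r)        # random subset of r nodes
        if sum(accessed) >= 1.0:                   # enough coded symbols to decode
            hits += 1
    return hits / trials

# Compare a few symmetric allocations of the same budget:
for m in (2, 5, 10):
    print(m, round(recovery_probability(nonempty=m), 3))
```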

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.

Relevance:

90.00%

Publisher:

Abstract:

The document reports on Phase 1 of a definition study to appraise options for developing fish tracking equipment, in particular tags and data logging systems, in order to improve the efficiency of the Environment Agency's tracking studies and to obtain a greater understanding of fish biology. Covered in this report are radio telemetry, acoustic telemetry, High Resolution Position Fixing, data storage and archival tags, and other fish tracking systems such as biosonics.

Relevance:

90.00%

Publisher:

Abstract:

This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel, and instead to reuse approaches to distributed heterogeneous data architectures and the lessons learned from that work, applying them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture, based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.

Relevance:

90.00%

Publisher:

Abstract:

Information Technology (IT) is increasingly vital within organizations, acting as the engine that supports the business. For most organizations, the operation and development of IT rest on dedicated infrastructures (internal or external) known as Data Centres (DC). These infrastructures concentrate an organization's data processing and storage equipment and are therefore increasingly challenged with respect to factors such as scalability, availability, fault tolerance, performance, available or provisioned resources, security, energy efficiency and, inevitably, the associated costs. With the emergence of cloud computing and virtualization technologies, a whole range of new ways to address these challenges opens up. Under this new paradigm, new opportunities for DC consolidation arise, which may in turn pose new challenges for DC managers. It is therefore unrealistic, to say the least, for organizations simply to eliminate their DCs or to transform them to the highest quality standards. Organizations must optimize their DCs; however, an efficient project of this nature, capable of supporting the demands of the market, the needs of the business and the pace of technological change, requires solutions that are complex and costly both to implement and to manage. This work arises in that context. With the aim of studying DCs, it begins with a study of the subject, detailing the concept, its historical evolution, topology, architecture and the existing standards that govern DCs. The study then details some of the main trends shaping the future of DCs. Building on the theoretical knowledge from that study, a methodology for evaluating DCs based on decision criteria is developed. The study culminates in an analysis of a new technological solution and the evaluation of three possible implementation scenarios: the first based on keeping the current DC; the second based on deploying the new solution in another DC under an external hosting arrangement; and the third based on an IaaS deployment.

Relevance:

90.00%

Publisher:

Abstract:

A remote data acquisition and analysis system developed for fisheries and related environmental studies is reported. It consists of three units. The first, a multichannel remote data acquisition unit, is installed at the remote site and powered by a rechargeable battery; it acquires 16-channel environmental data and stores it in battery-backed RAM. The second unit, the field data analyser, is used for in situ display and analysis of the data stored in the backed-up RAM. The third unit, the laboratory data analyser, is an IBM-compatible PC-based unit for detailed analysis and interpretation of the data after the RAM unit is brought to the laboratory. The data collected using the system have been analysed and presented in the form of a graph. The system timer, operating at negligibly low current, switches on power to the entire remotely operated system at a prefixed time interval of 2 hours. Data storage at the remote site in low-power battery-backed RAM, with retrieval and analysis of the data on a PC, is the speciality of the system. The remotely operated system takes about 7 seconds, including the 5-second stabilization time, to acquire and store data, and is well suited to remote operation on a rechargeable battery. The system can store 16-channel data scanned at 2-hour intervals for 10 days in 2K of backed-up RAM, with a memory expansion facility for 8K RAM.
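A quick back-of-the-envelope check that the quoted figures are consistent, assuming one byte per channel reading (the abstract does not state the sample width):

```python
# Consistency check of the quoted storage figures, assuming one byte per
# channel reading.
channels = 16
scans_per_day = 24 // 2          # one scan every 2 hours
days = 10

bytes_needed = channels * scans_per_day * days
print(bytes_needed)              # 1920 bytes, which fits in the 2K backed-up RAM
print(bytes_needed <= 2 * 1024)  # True
```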

Relevance:

90.00%

Publisher:

Abstract:

Data is one of the domains in grid research; it deals with the storage, replication, and management of large data sets in a distributed environment. All-data-to-all-sites replication schemes such as read-one-write-all (ROWA) and the tree grid structure (TGS) are the popular techniques used for replication and management of data in this domain. However, these techniques have weaknesses in terms of data storage capacity and data access time, because a number of sites must 'agree' in common to execute certain transactions. In this paper, we propose an all-data-to-some-sites scheme called the neighbor replication on triangular grid (NRTG) technique, in which only neighboring sites hold the replicated data; this minimizes the storage capacity while providing high update availability. The technique also tolerates failures such as server failure, site failure and even network partitioning, using remote procedure calls (RPC).
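A small sketch of the "replicate only to neighbors" idea; the abstract does not specify the exact grid layout or quorum rule, so the triangular layout and the adjacency below are assumptions made purely for illustration.

```python
# Sketch of neighbor-only replication on a triangular arrangement of sites.
# The layout and adjacency are assumptions, not the NRTG definition.
def triangle_layout(rows):
    """Nodes arranged as a triangle: row r holds r + 1 nodes, keyed (r, c)."""
    return [(r, c) for r in range(rows) for c in range(r + 1)]

def neighbors(node, nodes):
    """Adjacent nodes in the triangle: left/right in the same row, plus the
    nodes assumed adjacent in the rows above and below."""
    r, c = node
    cand = [(r, c - 1), (r, c + 1), (r - 1, c - 1), (r - 1, c),
            (r + 1, c), (r + 1, c + 1)]
    return [n for n in cand if n in nodes]

nodes = set(triangle_layout(5))           # 15 sites
primary = (2, 1)                          # site that owns the data item
replica_set = {primary, *neighbors(primary, nodes)}
print(sorted(replica_set))                # the item is stored on 7 of the 15
                                          # sites, instead of all 15 as in ROWA
```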

Relevance:

90.00%

Publisher:

Abstract:

Data in grid research deals with storage, replication, and management of large data sets in a distributed environment. All-data-to-all-sites replication schemes, like Read-One Write-All (ROWA) and the Tree Grid Structure (TGS), are popular techniques in grid computing. However, these techniques have weaknesses in data storage capacity and data access time. In this paper, we propose the all-data-to-some-sites scheme called the 'Neighbour Replication on Triangular Grid' (NRTG) technique. The proposed scheme minimises the storage capacity as well as the data access time while providing high update availability. It also tolerates failures such as server and site failures.

Relevance:

90.00%

Publisher:

Abstract:

Since Licklider in the 1960s [27], influential proponents of networked computing have envisioned electronic information in terms of a relatively small (even singular) number of 'sources', distributed through technologies such as the Internet. Most recently, Levy writes, in Becoming Virtual, that "in cyberspace, since any point is directly accessible from any other point, there is an increasing tendency to replace copies of documents with hypertext links. Ultimately, there will only need to be a single physical exemplar of the text" [13 p.61]. Hypertext implies, in theory, the end of 'the copy', and the multiplication of access points to the original. But, in practice, the Internet abounds with copying, both large and small scale, both as conscious human practice, and also as autonomous computer function. Effective and cheap data storage encourages computer users to keep anything of use they have downloaded, lest the links they have found 'break'; and browsers don't 'browse' the Internet - they download copies of everything to client machines. Not surprisingly, there is significant regulation against 'copying' - regulation that constrains our understanding of 'copying' to maintain a legal fiction of the 'original' for the purposes of intellectual property protection. In this paper, I will first demonstrate, by a series of examples, how 'copying' is more than just copyright infringement of music and software, but is a defining, multi-faceted feature of Internet behaviour. I will then argue that the Internet produces an interaction between dematerialised, digital data and human subjectivity and desire that fundamentally challenges notions of originality and copy. Walter Benjamin noted about photography: "one can make any number of prints [from a negative]; to ask for the 'authentic' print makes no sense" [4 p.224]. In cyberspace, I conclude, it makes no sense to ask which one is the copy.

Relevance:

90.00%

Publisher:

Abstract:

Graduate Program in Digital Television: Information and Knowledge - FAAC

Relevance:

90.00%

Publisher:

Abstract:

Due to the advancement of information technology in general, and databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the analysis phase, with respect to data processing modelling. We have used, as a starting point, a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides us with all the possible ways to automatically cross multidimensional model data. Using these, we propose use case diagrams and descriptions, which can be considered patterns representing the DSS functionality with regard to processing the DW data on which the DSS is based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide in the development of DSS.
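A small sketch of one plausible reading of "crossing" multidimensional data, namely enumerating the dimension subsets over which a fact can be aggregated; the dimension and measure names are invented for illustration, and this is not the authors' algorithm.

```python
# Every subset of the dimensions of a data warehouse fact defines one way of
# crossing / aggregating its data (one cuboid of the data cube). Dimension and
# measure names are hypothetical.
from itertools import combinations

dimensions = ["Time", "Product", "Store"]
measure = "SalesAmount"

crossings = []
for k in range(len(dimensions) + 1):
    for dims in combinations(dimensions, k):
        crossings.append(dims)

for dims in crossings:
    group_by = ", ".join(dims) if dims else "(grand total)"
    print(f"SUM({measure}) grouped by {group_by}")
# 2^3 = 8 crossings, one candidate use case / report per crossing
```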

Relevance:

90.00%

Publisher:

Abstract:

Currently, one of the main goals of technological research is the development of solutions that benefit the environment. This work aims to demonstrate the reuse, and consequently the extension of the service life, of lead-acid batteries of the kind commonly installed in motor vehicles, and to benefit remote locations and users for whom investment in transmission lines is geographically and economically unfeasible, using sunlight as the energy source. The most failure-prone part of such a system, however, is the batteries themselves, precisely because their service life is short (around 3 years for an automotive battery) compared with the rest of the system. For a unit that has already been used, the likelihood of failure is even higher. In order to diagnose faults and prevent a single battery from compromising the operation of the system as a whole, the project considers electricity generation by photovoltaic cells and also includes a microcontroller-based system for data acquisition using an ATmega/Arduino microcontroller, current measurement with Allegro Systems Hall-effect sensors, relays to switch each battery in and out of the circuit, and an alert system that tells the end user which battery has failed and needs to be repaired and/or replaced. The project was installed on Ilha dos Arvoredos, SP, approximately 2.0 km off the mainland coast. Solar cells and a battery bank were installed in order to study the behaviour of the batteries. The program was able to diagnose and isolate one battery that was failing, preventing it from harming the system as a whole. Because of the difficulty of access imposed by the geography, an SD card was chosen to store the data acquired by the Arduino; the data were later compiled and analysed. From the results presented, we can conclude that it is possible to use new and used batteries in the same system, in such a way that if any battery fails, the system will isolate that unit on its own.