886 results for Distributed File System


Relevance:

90.00%

Publisher:

Abstract:

Working memory is the process of actively maintaining a representation of information for a brief period of time so that it is available for use. In monkeys, visual working memory involves the concerted activity of a distributed neural system, including posterior areas in visual cortex and anterior areas in prefrontal cortex. Within visual cortex, ventral stream areas are selectively involved in object vision, whereas dorsal stream areas are selectively involved in spatial vision. This domain specificity appears to extend forward into prefrontal cortex, with ventrolateral areas involved mainly in working memory for objects and dorsolateral areas involved mainly in working memory for spatial locations. The organization of this distributed neural system for working memory in monkeys appears to be conserved in humans, though some differences between the two species exist. In humans, as compared with monkeys, areas specialized for object vision in the ventral stream have a more inferior location in temporal cortex, whereas areas specialized for spatial vision in the dorsal stream have a more superior location in parietal cortex. Displacement of both sets of visual areas away from the posterior perisylvian cortex may be related to the emergence of language over the course of brain evolution. Whereas areas specialized for object working memory in humans and monkeys are similarly located in ventrolateral prefrontal cortex, those specialized for spatial working memory occupy a more superior and posterior location within dorsal prefrontal cortex in humans than in monkeys. As in posterior cortex, this displacement in frontal cortex also may be related to the emergence of new areas to serve distinctively human cognitive abilities.

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
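
To make the decentralised estimation idea concrete, here is a minimal sketch (not the thesis's own algorithm) of one Luenberger-style observer per structural partition of a discrete-time system; the system coefficients, observer gains, and the exchange of estimates between the two software processes are all assumed for illustration.

```python
# Hypothetical two-partition discrete-time system (scalar sub-states):
#   x1[k+1] = a11*x1 + a12*x2 + b1*u,  y1 = x1
#   x2[k+1] = a21*x1 + a22*x2 + b2*u,  y2 = x2
a11, a12, a21, a22 = 0.9, 0.1, 0.05, 0.8
b1, b2 = 1.0, 0.5
l1, l2 = 0.5, 0.4   # local observer gains (assumed to stabilise the error)

def local_observer_step(x_hat, x_hat_remote, u, y, a_loc, a_cpl, b, l):
    """Decentralised Luenberger update: local dynamics plus the coupling
    term obtained by communicating the remote partition's estimate."""
    return a_loc * x_hat + a_cpl * x_hat_remote + b * u + l * (y - x_hat)

# One estimation step; each software process only needs its own measurement
# and the other partition's communicated estimate.
x1_hat, x2_hat = 0.0, 0.0
u, y1, y2 = 1.0, 0.2, 0.1
x1_hat, x2_hat = (local_observer_step(x1_hat, x2_hat, u, y1, a11, a12, b1, l1),
                  local_observer_step(x2_hat, x1_hat, u, y2, a22, a21, b2, l2))
```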

Relevance:

90.00%

Publisher:

Abstract:

The immune system is perhaps the largest yet most diffuse and distributed somatic system in vertebrates. It plays vital roles in fighting infection and in the homeostatic control of chronic disease. As such, the immune system in both pathological and healthy states is a prime target for therapeutic interventions by drugs, both small-molecule and biologic. Comprising both the innate and adaptive immune systems, human immunity is awash with potential unexploited molecular targets. Key examples include the pattern recognition receptors of the innate immune system and the major histocompatibility complex of the adaptive immune system. Moreover, the immune system is also the source of many current and, hopefully, future drugs, of which the prime example is the monoclonal antibody, the most exciting and profitable type of present-day drug moiety. This brief review explores the identity and synergies of the hierarchy of drug targets represented by the human immune system, with particular emphasis on the emerging paradigm of systems pharmacology.

Relevance:

90.00%

Publisher:

Abstract:

A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limit. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, and only the informational content is compressed. Thus, the compressed data is made transparent to existing software libraries, which often rely on the functional content to work. Secondly, a context-free bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they have shown substantial improvement in performance and significant reduction in system resource requirements.
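
To make the content-aware idea concrete, here is a deliberately simplified sketch (not the thesis's CaPC codec): each record's informational content is compressed, while the functional content, the newline record delimiters, is left intact so that line-oriented tooling such as Hadoop's input splitters can still find record boundaries. The base64 wrapping is purely illustrative and sacrifices some compression gain.

```python
import base64
import zlib

def compress_records(text: str) -> str:
    """Compress each record's payload but keep '\n' delimiters untouched,
    so line-oriented tooling can still split the stream into records."""
    out = []
    for record in text.split("\n"):
        payload = zlib.compress(record.encode("utf-8"))
        # base64 keeps the compressed bytes free of '\n' (illustrative only)
        out.append(base64.b64encode(payload).decode("ascii"))
    return "\n".join(out)

def decompress_records(blob: str) -> str:
    return "\n".join(
        zlib.decompress(base64.b64decode(line)).decode("utf-8")
        for line in blob.split("\n")
    )

data = "alice\t42\nbob\t17"
assert decompress_records(compress_records(data)) == data
```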

Relevance:

90.00%

Publisher:

Abstract:

To explore the feasibility of processing Compact Muon Solenoid (CMS) analysis jobs across the wide area network, the FIU CMS Tier-3 center and the Florida CMS Tier-2 center designed a remote data access strategy. A Kerberized Lustre test bed was installed at the Tier-2, designed to provide storage resources to private-facing worker nodes at the Tier-3. However, the Kerberos security layer is not capable of authenticating resources behind a private network. As a remedy, an xrootd server was installed on a public-facing node at the Tier-3 to export the file system to the private-facing worker nodes. We report the performance of CMS analysis jobs processed by the Tier-3 worker nodes accessing data from the Kerberized Lustre file system. The processing performance of this configuration is benchmarked against a direct connection to the Lustre file system and, separately, against a configuration where the xrootd server is near the Lustre file system.
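
As a purely hypothetical illustration of this kind of setup (the host name and path below are invented, not from the paper), a private-facing worker node would fetch its input through the public xrootd door rather than mounting Lustre directly, for example by shelling out to the standard xrdcp client:

```python
import subprocess

# Hypothetical endpoints: a private-facing worker node copies an input file
# through the public-facing xrootd door instead of mounting Lustre directly.
XROOTD_DOOR = "root://xrootd.tier3.example.edu"               # assumed host
REMOTE_FILE = "/lustre/cms/store/user/analysis/events.root"   # assumed path

subprocess.run(
    ["xrdcp", f"{XROOTD_DOOR}/{REMOTE_FILE}", "events.root"],
    check=True,
)
```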

Relevance:

90.00%

Publisher:

Abstract:

In this paper, a review of radio-over-fiber (RoF) technology is conducted to support the exploding growth of mobile broadband. An RoF system will provide a platform for a distributed antenna system (DAS) as a fronthaul for long term evolution (LTE) technology. A higher splitting ratio from a macrocell is required to support a large DAS topology, hence higher optical launch power (OLP) is the right approach. However, high OLP generates undesired nonlinearities, namely stimulated Brillouin scattering (SBS). Three different approaches to mitigating the SBS process are covered in this paper, and the solutions ultimately provided an additional 4 dB of link budget.
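
For intuition about why the extra link budget matters (illustrative numbers, not from the paper): an ideal 1:N optical splitter attenuates by 10·log10(N) dB, so recovered budget translates directly into a larger supportable split ratio.

```python
import math

def splitter_loss_db(n_ways: int) -> float:
    """Ideal power-splitting loss of a 1:N optical splitter, in dB."""
    return 10 * math.log10(n_ways)

# Illustrative only: loss grows with the DAS splitting ratio...
for n in (8, 16, 32):
    print(f"1:{n} splitter -> {splitter_loss_db(n):.1f} dB")

# ...so an extra 4 dB of budget supports roughly a 2.5x larger split ratio.
print(f"{10 ** (4 / 10):.2f}x")
```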

Relevance:

80.00%

Publisher:

Abstract:

Formal Concept Analysis is an unsupervised machine learning technique that has successfully been applied to document organisation by considering documents as objects and keywords as attributes. The basic algorithms of Formal Concept Analysis then allow an intelligent information retrieval system to cluster documents according to keyword views. This paper investigates the scalability of this idea. In particular, we present the results of applying spatial data structures to large datasets in Formal Concept Analysis. Our experiments are motivated by the application of Formal Concept Analysis to a virtual filesystem [11,17,15], in particular the libferris [1] Semantic File System. This paper presents customizations to an RD-Tree Generalized Index Search Tree based index structure to better support the application of Formal Concept Analysis to large data sources.
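
A minimal sketch of the documents-as-objects, keywords-as-attributes idea (toy data, naive enumeration; the paper's contribution is the spatial index, not this algorithm): a formal concept is a pair of document and keyword sets that are closed under the two derivation operators.

```python
from itertools import combinations

# Toy formal context: documents (objects) mapped to keywords (attributes).
context = {
    "doc1": {"filesystem", "semantic"},
    "doc2": {"filesystem", "index"},
    "doc3": {"semantic", "index"},
}
all_keywords = set().union(*context.values())

def extent(keywords):
    """Documents that contain every keyword in the set."""
    return {d for d, kws in context.items() if keywords <= kws}

def intent(docs):
    """Keywords shared by every document in the set."""
    if not docs:
        return set(all_keywords)
    return set.intersection(*(context[d] for d in docs))

# Naive concept enumeration: close every keyword subset.
concepts = set()
for r in range(len(all_keywords) + 1):
    for combo in combinations(sorted(all_keywords), r):
        e = frozenset(extent(set(combo)))
        concepts.add((e, frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(e), "<->", sorted(i))
```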

Relevance:

80.00%

Publisher:

Abstract:

The present work aims to contribute to improving the efficiency of water transport and distribution systems, which can be achieved by recovering the potential energy that, in certain situations, exists in excess in gravity-fed pipelines. Although the question has already been addressed in several studies, the energy savings it can deliver justify examining every opportunity, especially in our country, whose energy dependence on foreign sources is well known. However, solutions that involve installing turbines in water-supply pipelines naturally cause some apprehension among the utilities that manage them, since such installations can jeopardise the integrity of the pipelines and, consequently, the water supply. In this context, the study of control models specific to this equipment may contribute to a wider deployment of solutions that improve the efficiency of water-supply systems through the installation of hydroelectric generators, which have the dual role of flow control and energy production. The study and simulation of the control models presented in this work show that it is possible to guarantee the safety of the pipelines while producing electricity with turbines installed in them. It is therefore worth deepening this type of study in order to obtain control models that, under the stated premises, make it possible to optimise energy production.
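
As a back-of-the-envelope illustration of the energy at stake (the flow and head values below are assumed, not from the thesis), the recoverable hydraulic power follows the standard relation P = ρ·g·Q·H·η for flow Q, excess head H and turbine efficiency η:

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def recoverable_power_kw(flow_m3s: float, excess_head_m: float,
                         efficiency: float = 0.85) -> float:
    """Hydraulic power recoverable by an in-line turbine, in kW."""
    return RHO * G * flow_m3s * excess_head_m * efficiency / 1000.0

# Assumed example: 0.2 m^3/s of flow with 30 m of excess head.
print(f"{recoverable_power_kw(0.2, 30.0):.0f} kW")  # ~50 kW
```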

Relevance:

80.00%

Publisher:

Abstract:

Scheduling resolution requires the intervention of highly skilled human problem-solvers. This is a very hard and challenging domain because current systems are becoming more and more complex, distributed, interconnected and subject to rapid change. A natural evolution from current computing towards Autonomic Computing is to provide systems with self-managing ability with minimal human interference. This paper addresses the resolution of complex scheduling problems using cooperative negotiation. A multi-agent, autonomic, meta-heuristics-based framework with self-configuring capabilities is proposed.

Relevance:

80.00%

Publisher:

Abstract:

Consider the problem of disseminating data from an arbitrary source node to all other nodes in a distributed computer system, such as a wireless sensor network (WSN). We assume that wireless broadcast is used and that nodes do not know the topology. We propose new protocols which disseminate data faster and use fewer broadcasts than the simple broadcast protocol.
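
For reference (the paper's improved protocols are not described here), the simple broadcast baseline is plain flooding: every node rebroadcasts a message exactly once on first receipt. A toy round-based simulation over an assumed topology:

```python
from collections import deque

# Assumed topology: adjacency list of which nodes hear each broadcast.
neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def flood(source: int) -> int:
    """Simple broadcast baseline: each node rebroadcasts once on first
    receipt. Returns the total number of broadcasts used."""
    seen, queue, broadcasts = {source}, deque([source]), 0
    while queue:
        node = queue.popleft()
        broadcasts += 1                      # node transmits once
        for nb in neighbours[node]:
            if nb not in seen:               # first receipt -> will relay
                seen.add(nb)
                queue.append(nb)
    return broadcasts

print(flood(0))  # 5 broadcasts to reach all 5 nodes in this toy graph
```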

Relevance:

80.00%

Publisher:

Abstract:

Monitoring the behaviour of programs during execution is needed in several application contexts: for example, to verify the use of computational resources during execution, to compute metrics that better define the application's profile, to better identify at which points of the execution the causes of deviations from a program's desired behaviour lie, and, in other cases, to control the configuration of the application or of the system that supports its execution. This technique has been applied both to sequential programs and to distributed programs. In particular, in the case of parallel computations, given the complexity due to their non-determinism, these techniques have been the best source of information for understanding the execution of the application, both in terms of its correctness and in the evaluation of its performance and use of computational resources. The main difficulties in developing and adopting monitoring tools lie in the complexity of parallel and distributed computing systems and in the need to develop specific solutions for each platform, each architecture, and each goal. Nevertheless, there are generic functionalities which, if present in all cases, can help the development of new tools and their adaptation to different computing environments. This dissertation proposes a model to support the observation and control of parallel and distributed applications (DAMS - Distributed Applications Monitoring System). The model defines an abstract monitoring architecture based on a minimal core on top of which sit sets of services that implement the functionalities required in each usage scenario. Its organisation in abstraction layers and its capacity for modular extension support the development of sets of functionalities that can be shared by distinct tools. The proposed model also eases the development of observation and control tools on top of different execution-support platforms. This dissertation presents examples of the use of the model, and of the infrastructure that supports it, in several observation and control scenarios. It also describes the experimentation carried out, based on prototypes developed on two distinct computing platforms.
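
A minimal sketch of the core-plus-services architecture described above (the names and interfaces are invented for illustration and are not the DAMS API): the core only registers services and routes monitoring events, while observation and control functionalities are layered on as pluggable services.

```python
from typing import Callable, Dict, List

Event = dict                          # e.g. {"kind": "cpu", "node": 1, "value": 0.95}
Service = Callable[[Event], None]     # a pluggable observation/control service

class MonitoringCore:
    """Minimal monitoring core: it only registers services and routes events;
    all observation and control functionality lives in the services."""

    def __init__(self) -> None:
        self._services: Dict[str, List[Service]] = {}

    def register(self, kind: str, service: Service) -> None:
        self._services.setdefault(kind, []).append(service)

    def dispatch(self, event: Event) -> None:
        for service in self._services.get(event["kind"], []):
            service(event)

def profiling_service(event: Event) -> None:       # observation layer
    print("profile:", event)

def throttling_service(event: Event) -> None:      # control layer
    if event["value"] > 0.9:
        print("control: throttle node", event["node"])

core = MonitoringCore()
core.register("cpu", profiling_service)
core.register("cpu", throttling_service)
core.dispatch({"kind": "cpu", "node": 1, "value": 0.95})
```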

Relevance:

80.00%

Publisher:

Abstract:

Consider a distributed computer system comprising many computer nodes interconnected by a controller area network (CAN) bus. We prove that if priorities are assigned to message streams using rate-monotonic (RM) assignment and the requested capacity of the CAN bus does not exceed 25%, then all deadlines are met.
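
A hedged illustration of how the stated bound would be applied (the message parameters are invented): sum each stream's worst-case transmission time over its period and check the total against 25% of the bus capacity.

```python
# Hypothetical CAN message streams: (transmission time Ci in ms, period Ti in ms),
# with rate-monotonic priorities (shorter period = higher priority).
streams = [(0.5, 10.0), (0.5, 20.0), (1.0, 50.0)]

utilisation = sum(c / t for c, t in streams)
print(f"bus utilisation = {utilisation:.1%}")   # 9.5%

# Per the result summarised above, utilisation <= 25% under RM priorities
# implies every message meets its deadline.
assert utilisation <= 0.25
```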

Relevance:

80.00%

Publisher:

Abstract:

Consider a distributed computer system such that every computer node can perform a wireless broadcast and, when it does so, all other nodes receive this message. The computer nodes take sensor readings, but individual sensor readings are not very important; it is important, however, to compute aggregated quantities of these sensor readings. We show that a prioritized medium access control (MAC) protocol for wireless broadcast can compute simple aggregated quantities in a single transaction, and more complex quantities with many (but still a small number of) transactions. This leads to significant improvements in time complexity and, as a consequence, a similar reduction in energy consumption.
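
A simplified sketch of how dominance-based medium access can compute an aggregate such as MAX in a single transaction (a toy model, not the paper's exact protocol): every node uses its sensor reading as its arbitration priority, bits are resolved most-significant first on a wired-OR channel, and a node withdraws as soon as it sends a recessive bit while the channel carries a dominant one; the value that survives arbitration is the maximum.

```python
def wired_or_max(readings, bits=8):
    """Toy dominance arbitration: all nodes transmit their reading bit by bit,
    most significant first, on a wired-OR channel where a 1 overrides any 0.
    A node that sent 0 but hears 1 withdraws; the survivors hold the maximum."""
    active = list(readings)
    result = 0
    for shift in range(bits - 1, -1, -1):
        channel = max((r >> shift) & 1 for r in active)  # wired-OR of sent bits
        result = (result << 1) | channel
        active = [r for r in active if ((r >> shift) & 1) == channel]
    return result

readings = [17, 4, 200, 33]
assert wired_or_max(readings) == max(readings)   # MAX in one "transaction"
```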

Relevance:

80.00%

Publisher:

Abstract:

One important step in the design of air stripping operations for the removal of VOCs is the choice of operating conditions, which are based on the phase ratio. This parameter directly sets the stripping factor and the efficiency of the operation. Its value has an upper limit determined by the flooding regime, which is predicted using empirical correlations, namely the one developed by Eckert. This type of approach is not suitable for the development of algorithms. Using a pilot-scale column and a suitable solution, the pressure drop was determined under different operating conditions and the experimental values were compared with the estimates. This particular research will be incorporated into a global model for simulating the dynamics of air stripping using a multi-variable distributed parameter system.
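
For context on how the phase ratio sets the stripping factor (illustrative values, not the thesis's data): the stripping factor is S = H·(G/L), with H the dimensionless Henry's constant of the VOC and G/L the gas-to-liquid flow ratio.

```python
def stripping_factor(henry_dimensionless: float,
                     gas_flow_m3h: float, liquid_flow_m3h: float) -> float:
    """Stripping factor S = H * (G / L); S > 1 is needed for effective
    VOC removal, and raising G/L is limited by column flooding."""
    return henry_dimensionless * gas_flow_m3h / liquid_flow_m3h

# Assumed example: a VOC with dimensionless H ~ 0.4 near 25 C and an
# air-to-water ratio of 20 gives S = 8, comfortably above 1.
print(stripping_factor(0.4, 200.0, 10.0))  # 8.0
```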