837 results for Distributed operating systems (Computers) - Design


Relevance:

100.00%

Publisher:

Abstract:

This paper experimentally demonstrates that, for two representative indoor distributed antenna system (DAS) scenarios, existing radio-over-fiber (RoF) DAS installations can support the capacity advantages of broadband 3 × 3 multiple-input-multiple-output (MIMO) radio services without requiring additional fibers or multiplexing schemes. This holds for both single- and multiple-user cases, with either a single base station or multiple base stations. First, a theoretical example illustrates that there is a negligible improvement in signal-to-noise ratio (SNR) when using a MIMO DAS with all N spatial streams replicated at N remote antenna units (RAUs), compared with a MIMO DAS in which only one of the N streams is replicated at each RAU, for N ≤ 4. It is then experimentally confirmed that a 3 × 3 MIMO DAS offers improved capacity and throughput compared with a 3 × 3 MIMO collocated antenna system (CAS) for the single-user case in two typical indoor DAS scenarios, i.e., one with significant line-of-sight (LOS) propagation and the other with entirely non-line-of-sight (NLOS) propagation; the improvements in capacity are 3.2% and 4.1%, respectively. Experimental channel measurements then confirm that the 3 × 3 configuration with three spatial streams per antenna unit offers a negligible capacity increase over the 3 × 3 configuration with a single spatial stream per antenna unit: the former layout provides an increase of only ∼1% in the median channel capacity in both the single- and multiple-user scenarios. With 20 users and three base stations, a MIMO DAS using the latter layout offers median aggregate capacities of 259 and 233 bit/s/Hz for the LOS and NLOS scenarios, respectively. It is concluded that DAS installations can further enhance the capacity offered to multiple users by multiple 3 × 3 MIMO-enabled base stations, and that designing future DASs to support broadband 3 × 3 MIMO may not require significant upgrades to existing installations for small numbers of spatial streams. © 2013 IEEE.
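The capacity figures quoted above (in bit/s/Hz) follow from the standard MIMO channel capacity expression with equal power allocation across the transmit streams. As a minimal illustration, the sketch below evaluates that expression for a randomly generated 3 × 3 channel; the channel matrix and the 20 dB SNR are illustrative assumptions, not the measured channels from the paper.

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Shannon capacity (bit/s/Hz) of a MIMO link with equal power per stream:
    C = log2 det(I + (SNR / Nt) * H H^H)."""
    n_rx, n_tx = H.shape
    gram = np.eye(n_rx) + (snr_linear / n_tx) * (H @ H.conj().T)
    sign, logdet = np.linalg.slogdet(gram)   # gram is Hermitian positive definite
    return logdet / np.log(2)

# Illustrative 3x3 Rayleigh-fading channel at 20 dB SNR (assumed, not measured data)
rng = np.random.default_rng(0)
H = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)
print(f"{mimo_capacity(H, snr_linear=10 ** (20 / 10)):.2f} bit/s/Hz")
```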

Relevance:

100.00%

Publisher:

Abstract:

Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] to periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux Operating System [HSPN98, SPH 98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode in which only real-time tasks are scheduled. We give an overview of the technical issues we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
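To make the admit/reject idea concrete, the sketch below shows a minimal budget-based admission check: each task receives an execution allowance per aggregation window, and a job is admitted only if it fits within the task's remaining budget, which is what enforces task isolation. This is an illustrative simplification under assumed names and fields, not the SRMS algorithm or the KURT Linux API.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float        # release period (s)
    allowance: float     # execution budget per aggregation window (the QoS contract)
    budget: float = 0.0  # budget remaining in the current window

def replenish(task: Task) -> None:
    """Refill the task's budget at the start of each aggregation window."""
    task.budget = task.allowance

def admit_job(task: Task, requested_time: float) -> bool:
    """Admit a job only if it fits within the task's remaining budget, so an
    overrunning task cannot consume time reserved for other tasks (isolation)."""
    if requested_time <= task.budget:
        task.budget -= requested_time
        return True
    return False  # rejected: run at background priority or drop, per policy

# Example: a 40 ms-period task with a 20 ms allowance admits one of two 15 ms jobs
video = Task("video", period=0.04, allowance=0.02)
replenish(video)
print(admit_job(video, 0.015), admit_job(video, 0.015))  # True False
```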

Relevance:

100.00%

Publisher:

Abstract:

The availability of a very accurate dependence graph for a scalar code is the basis for the automatic generation of an efficient parallel implementation. The strategy for this task, which is encapsulated in a comprehensive data partitioning code generation algorithm, is described. This algorithm involves partitioning the data, calculating assignment ranges for the partitioned arrays, adding a comprehensive set of execution control masks, altering loop limits, and adding and optimising communications for all data. In this context, the development and implementation of strategies to merge communications wherever possible has proved an important feature in producing efficient parallel implementations for numerical mesh-based codes. The code generation strategies described here are embedded within the Computer Aided Parallelisation Tools (CAPTools) software as a key part of a toolkit for automating as much as possible of the parallelisation process for mesh-based numerical codes. The algorithms used enable the parallelisation of real computational mechanics codes with only minor user interaction and without any prior manual customisation of the serial code to suit the parallelisation tool.
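As a minimal illustration of the execution control mask idea, the sketch below restricts a simple 1-D array update to the index range a process owns; the variable names and the Jacobi-style update are hypothetical, not CAPTools-generated code.

```python
def masked_sweep(a, b, low, high):
    """One update sweep over a partitioned 1-D array. The conditional is the
    execution control mask: each process computes only the indices it owns,
    i.e. those inside its assignment range [low, high]."""
    for i in range(1, len(a) - 1):
        if low <= i <= high:                      # execution control mask
            a[i] = 0.5 * (b[i - 1] + b[i + 1])
    return a

# Example: the process owning indices 1..4 of a 10-element array (assumed range)
a = [0.0] * 10
b = [float(i) for i in range(10)]
print(masked_sweep(a, b, low=1, high=4))
```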

Relevance:

100.00%

Publisher:

Abstract:

Realizing scalable performance on high-performance computing systems is not straightforward even for single-phenomenon codes (such as computational fluid dynamics [CFD]). The task is magnified considerably when the target software involves the interactions of a range of phenomena that have distinctive solution procedures involving different discretization methods. The key issues of retaining data integrity and preserving the ordering of the calculation procedures are significant. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes with iterative solution procedures. A mesh partitioning-based SPMD approach is used. However, since different variables use distinct discretization schemes, distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous/synchronous communications) using a variety of strategies for mapping the mesh partition onto the processor topology.
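The sketch below illustrates the mesh-partitioned SPMD pattern in its simplest form: each process refreshes the halo (overlap) values it needs from neighbouring partitions, then updates only the cells it owns. It is a schematic outline under assumed data structures, not the actual multiphysics code or JOSTLE output.

```python
def spmd_sweep(owned_cells, halo, neighbours, fetch_halo, update_cell):
    """One iterative-solver sweep on a single mesh partition (SPMD style).
    `fetch_halo(nbr)` stands in for a synchronous exchange with a neighbouring
    partition; `update_cell` is the local finite-volume update."""
    for nbr in neighbours:
        halo[nbr] = fetch_halo(nbr)   # refresh overlap values first
    for cell in owned_cells:
        update_cell(cell, halo)       # then compute only the owned cells
```

With asynchronous communications, the halo exchanges would instead be posted as non-blocking operations and overlapped with updates of interior cells that do not depend on the halo.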

Relevance:

100.00%

Publisher:

Relevance:

100.00%

Publisher:

Abstract:

Systematic principal component analysis (PCA) methods are presented in this paper for reliable islanding detection in power systems with significant penetration of distributed generators (DGs), where synchrophasors recorded by phasor measurement units (PMUs) are used for system monitoring. Existing islanding detection methods such as rate-of-change-of-frequency (ROCOF) and vector shift are fast because they process only local information; however, with the growth in installed DG capacity they suffer from several drawbacks. Incumbent genset islanding detection cannot distinguish a system-wide disturbance from an islanding event, leading to maloperation. The problem is even more significant when, owing to the high penetration of DGs, the grid does not have sufficient inertia to limit frequency excursions under system faults or stress. To tackle these problems, this paper introduces PCA methods for islanding detection. A simple control chart is established for intuitive visualization of the transients. A recursive PCA (RPCA) scheme is proposed as a reliable extension of the PCA method that reduces false alarms for time-varying processes. To further reduce the computational burden, approximate linear dependence condition (ALDC) errors are calculated to update the associated PCA model. The proposed PCA and RPCA methods are verified by detecting abnormal transients occurring in the UK utility network.
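As a minimal sketch of the PCA monitoring idea (not the recursive RPCA update or the ALDC test from the paper), the snippet below fits PCA on normal-operation measurements and computes the squared prediction error (SPE/Q statistic) for new samples; values well above those seen in normal operation would be flagged on the control chart. The data and the number of retained components are illustrative assumptions.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on normal-operation synchrophasor data (rows = samples)."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_components].T               # retained loading vectors
    return mu, P

def q_statistic(x, mu, P):
    """Squared prediction error (SPE/Q): the part of a sample not explained by
    the retained components; a large value flags an abnormal transient."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

# Illustrative use with synthetic data and 2 retained components (assumed)
rng = np.random.default_rng(0)
X_normal = rng.standard_normal((200, 6))
mu, P = fit_pca(X_normal, n_components=2)
print(q_statistic(X_normal[0], mu, P), q_statistic(X_normal[0] + 5.0, mu, P))
```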

Relevance:

100.00%

Publisher:

Abstract:

The main topic of this thesis is the multi-user interference cancellation problem for systems with distributed antennas. Accordingly, it opens with an overview of the main properties of a distributed antenna system. This description includes an analytical study of the impact of connecting the system's users to additional distributed antennas. That analysis shows that the system property most important for obtaining the maximum gain from connecting additional transmit antennas is spatial symmetry, and that cell-edge users benefit the most. These results are confirmed by simulation. The multi-user interference cancellation problem is considered for both the one-dimensional case (i.e., without coding) and the multidimensional case (i.e., with coding). For the one-dimensional case, a non-linear precoding algorithm is proposed and evaluated, with the objective of minimizing the bit error rate. Both the single-carrier and multi-carrier cases are addressed, as well as the collocated and distributed antenna scenarios. It is shown that the proposed scheme can be viewed as an extension of the well-known zero-forcing scheme, whose performance is proved to be a lower bound for the generalized scheme. The algorithm is evaluated for different scenarios by simulation, which indicates near-optimal performance with low complexity. For the multidimensional case, a scheme for performing binary dirty paper coding based on double-layer codes is proposed. In developing this scheme, lossy source compression is considered as a sub-problem. Simulation results indicate reliable transmission close to the Shannon limit.
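Since the proposed precoder is presented as a generalization of zero-forcing, the snippet below sketches that zero-forcing baseline: the channel pseudo-inverse with a total-power normalization, which removes multi-user interference at the cost of a possible power penalty. The channel matrix and dimensions are illustrative, not the thesis's simulation setup.

```python
import numpy as np

def zero_forcing_precoder(H):
    """Baseline zero-forcing precoder: the pseudo-inverse of the downlink channel,
    so each user receives its own stream free of multi-user interference."""
    W = np.linalg.pinv(H)
    return W / np.linalg.norm(W, 'fro')   # normalise total transmit power

# Illustrative 4-user, 4-antenna channel (random, for demonstration only)
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))         # ~diagonal: interference suppressed
```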

Relevance:

100.00%

Publisher:

Abstract:

Project work submitted for the Master's degree in Informatics and Computer Engineering (Engenharia Informática e de Computadores)

Relevance:

100.00%

Publisher:

Abstract:

From the Brookshear Chapter on Operating Systems

Relevance:

100.00%

Publisher:

Abstract:

Based on the Brookshear textbook (Pearson copyright) files, with additional slides by hcd

Relevance:

100.00%

Publisher:

Abstract:

A new methodology was created to measure the energy consumption and related greenhouse gas (GHG) emissions of a computer operating system (OS) across different device platforms. The methodology involved the direct power measurement of devices under different activity states. In order to cover all aspects of an OS, the methodology included measurements in various OS modes whilst, uniquely, also incorporating measurements taken while running an array of defined software activities, so as to include OS application management features. The methodology was demonstrated on a laptop and a phone that could each run multiple OSs; the results confirmed that the OS can significantly impact the energy consumption of devices. In particular, testing new versions of the Microsoft Windows OS highlighted significant differences between OS versions on the same hardware. The developed methodology could enable a greater awareness of energy consumption during both the software development and software marketing processes.
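The core arithmetic behind such a methodology is simple: integrate the measured power over the time spent in each activity state, then convert energy to emissions with a grid carbon-intensity factor. The sketch below shows that calculation with made-up numbers; the state names, power draws, durations and the 0.233 kg CO2e/kWh factor are assumptions for illustration, not the paper's measurements.

```python
# Per-state mean power draw (watts) and daily durations (hours) - illustrative only
measurements = {
    "idle": 4.2,
    "video_playback": 9.8,
    "file_copy": 7.1,
}
durations_h = {"idle": 6.0, "video_playback": 1.5, "file_copy": 0.5}
carbon_intensity = 0.233   # kg CO2e per kWh (assumed grid emission factor)

# Energy = sum of power x time per activity state, converted from Wh to kWh
energy_kwh = sum(p * durations_h[state] for state, p in measurements.items()) / 1000.0
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.3f} kWh/day, {emissions_kg:.3f} kg CO2e/day")
```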