967 results for Non-uniform array
Abstract:
The effects of channel inequality on nonlinear signal switching in a nonlinear optical fiber loop mirror (NOLM) were investigated. It was found that channel-to-channel amplitude differences in optical time division multiplexing (OTDM) have a strong impact on the switching behavior of individual channels in a 2R regenerator. Depending on the inter-channel amplitude difference, the optical pulses in different channels experience either suppression or enhancement of amplitude noise. It was concluded that appropriate control of channel uniformity in OTDM transmitters is required to support stable long-haul transmission in 2R-regenerated systems.
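For intuition, a minimal sketch of the standard NOLM switching characteristic (the Doran-Wood transfer function, with illustrative parameters not taken from the paper) shows why the operating point matters: where the output-versus-input power curve is flat, amplitude noise is suppressed; where it is steep, noise is enhanced.

```python
# Sketch of the standard NOLM power transfer curve (Doran-Wood form).
# alpha, gamma and the loop length are illustrative values, not the paper's.
import numpy as np

def nolm_transmission(p_in, alpha=0.4, gamma=0.0015, loop_length=1000.0):
    """Fraction of input peak power transmitted by the loop mirror."""
    phase = (1.0 - 2.0 * alpha) * gamma * loop_length * p_in
    return 1.0 - 2.0 * alpha * (1.0 - alpha) * (1.0 + np.cos(phase))

p_in = np.linspace(0.1, 10.0, 100)       # input peak power (arbitrary units)
p_out = nolm_transmission(p_in) * p_in
slope = np.gradient(p_out, p_in)         # <1: noise suppressed, >1: enhanced
print("slope range:", slope.min(), slope.max())
```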
Abstract:
Non-uniform B-spline dictionaries on a compact interval are discussed in the context of sparse signal representation. For each given partition, dictionaries of B-spline functions for the corresponding spline space are built by dividing the partition into subpartitions and joining together the bases of the associated subspaces. The resulting slightly redundant dictionaries are composed of B-spline functions of broader support than those of the B-spline basis for the same space. Such dictionaries are meant to assist in the construction of adaptive sparse signal representations through a combination of stepwise optimal greedy techniques.
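As a rough illustration of the construction (a sketch only: the paper's subpartition scheme and boundary handling are not reproduced), the following assumes cubic splines and keeps only interior atoms. B-splines on a coarser subpartition have broader support; pooling the atoms from two subpartitions gives a slightly redundant dictionary.

```python
# Sketch: join B-spline atoms from coarser subpartitions of a knot partition
# into a redundant dictionary. The alternate-knot subpartition rule here is
# illustrative, not the paper's exact scheme.
import numpy as np
from scipy.interpolate import BSpline

def bspline_atoms(knots, degree=3):
    """Interior B-splines of the given degree on the knot sequence `knots`."""
    k = np.asarray(knots, dtype=float)
    # each atom lives on a window of degree+2 consecutive knots
    return [BSpline.basis_element(k[i:i + degree + 2], extrapolate=False)
            for i in range(len(k) - degree - 1)]

partition = np.linspace(0.0, 1.0, 11)                       # partition of [0, 1]
sub_a = np.concatenate(([0.0], partition[1:-1:2], [1.0]))   # odd interior knots
sub_b = np.concatenate(([0.0], partition[2:-1:2], [1.0]))   # even interior knots

dictionary = bspline_atoms(sub_a) + bspline_atoms(sub_b)    # redundant dictionary
x = np.linspace(0.0, 1.0, 5)
print(len(dictionary), "atoms; first atom:", np.nan_to_num(dictionary[0](x)))
```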
Abstract:
The impact of the design of sharp non-uniform fiber Bragg gratings on system performance was presented. The evolution of the Q-value of the worst channel with propagation distance was shown. The results suggested that, to apply approximately flat-dispersion gratings as inline filters in a periodic system, some post-compensation must be included to account for the extra dispersion introduced by the gratings.
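The required post-compensation is simple bookkeeping; a sketch with placeholder numbers (none taken from the paper):

```python
# Back-of-envelope sketch: each inline grating leaves some residual
# dispersion, so after N spans a post-compensation element must cancel
# the accumulated total. All values are illustrative placeholders.
n_spans = 20
grating_residual_ps_nm = 3.0       # residual dispersion per grating (ps/nm)
dcf_dispersion_ps_nm_km = -100.0   # compensating fibre dispersion (ps/nm/km)

total_residual = n_spans * grating_residual_ps_nm
dcf_length_km = -total_residual / dcf_dispersion_ps_nm_km
print(f"accumulated residual: {total_residual} ps/nm -> "
      f"post-compensation: {dcf_length_km:.2f} km of DCF")
```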
Abstract:
Next-generation networks are likely to be non-uniform in all their aspects, including the number of lightpaths carried per link, the number of wavelengths per link, the number of fibres per link, the asymmetry of the links, and the traffic flows. Routing and wavelength allocation models generally assume that the optical network is uniform and that the number of wavelengths per link is constant. In practice, however, some nodes and links carry heavy traffic, and additional wavelengths are needed on those links. We study a wavelength-routed optical network based on the UK JANET topology in which traffic demands between nodes are assumed to be non-uniform, and investigate how network capacity can be increased by locating congested links and suggesting cost-effective upgrades. Different traffic demand patterns, hop distances, numbers of wavelengths per link, and routing algorithms are considered. Numerical results show that a 95% increase in network capacity is possible by overlaying fibre on just 5% of existing links. We conclude that non-uniform traffic allocation can help localize traffic in nodes and links deep in the network core, and that provisioning additional resources there can efficiently and cost-effectively increase network capacity. © 2013 IEEE.
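A minimal sketch of the upgrade heuristic (toy topology and loads, not the JANET data or the paper's algorithms): rank links by utilization and overlay fibre on the busiest few percent.

```python
# Sketch: locate the most congested links and double their wavelength
# capacity, modelling an overlaid fibre pair on ~5% of links.
def upgrade_congested_links(link_load, link_capacity, fraction=0.05):
    """Return a new capacity map with extra fibre on the busiest links."""
    n_upgrades = max(1, int(len(link_load) * fraction))
    busiest = sorted(link_load,
                     key=lambda e: link_load[e] / link_capacity[e],
                     reverse=True)[:n_upgrades]
    upgraded = dict(link_capacity)
    for edge in busiest:
        upgraded[edge] *= 2        # overlay one extra fibre pair
    return upgraded

load = {("A", "B"): 38, ("B", "C"): 12, ("C", "D"): 35, ("A", "D"): 7}
capacity = {edge: 40 for edge in load}   # wavelengths per link
print(upgrade_congested_links(load, capacity))
```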
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems: memory-intensive applications allocate data across the system's distributed memory banks, and the automatic memory manager collects garbage left in those banks. The garbage collector may need to access remote memory banks, which incurs access latency and can saturate the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation in the HotSpot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness that exists naturally in connected objects comprising a root object and its reachable set, 'rooted sub-graphs'. Second, it leverages this locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. The new NUMA-aware garbage collector is evaluated using seven benchmarks from the established real-world DaCapo benchmark suite, the widely used SPECjbb benchmark, a Neo4j graph-database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing HotSpot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: it rests on outdated assumptions and generates a constant thread count, yet it is still used in the production version of HotSpot. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring it yields better collection throughput than the default policy. Fifth, the dissertation designs and implements a runtime technique that uses heuristics drawn from dynamic collection behaviour to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance on the DaCapo benchmarks.
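The per-cycle thread-count idea lends itself to a small illustration. The sketch below is not the dissertation's heuristic; it only shows the shape of such a policy, sizing the thread count from observed collection behaviour rather than using a static default.

```python
# Illustrative sketch: choose a GC thread count per collection cycle from
# observed tracing throughput, clamped to the available hardware parallelism.
import os

def gc_threads_for_cycle(live_bytes, bytes_per_thread_ms, pause_target_ms,
                         max_threads=os.cpu_count()):
    """Threads needed to trace `live_bytes` within the pause target."""
    needed = live_bytes / (bytes_per_thread_ms * pause_target_ms)
    return max(1, min(max_threads, round(needed)))

# e.g. 2 GiB live data, ~1 MiB/ms tracing per thread, 200 ms pause target
print(gc_threads_for_cycle(2 * 1024**3, 1024**2, 200))
```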
Abstract:
The utilization of solar energy by photovoltaic (PV) systems has received much research and development (R&D) attention across the globe, and a large number of PV arrays have been installed over the past decades. Since installed PV arrays often operate in harsh environments, non-uniform aging can occur and adversely affect the performance of PV systems, especially in the middle and late periods of their service life. Given the high cost of replacing aged PV modules with new ones, it is appealing to improve the energy efficiency of aged PV systems instead. For this purpose, this paper presents a PV module reconfiguration strategy that achieves the maximum power generation from non-uniformly aged PV arrays without significant investment. The proposed strategy is based on the cell-unit structure of PV modules, the operating voltage limit of the grid-connected converter, and the resulting bucket effect on the maximum short-circuit current. The objectives are to analyze all potential reorganization options of the PV modules, find the maximum power point, and express it as a proposition. This proposition is further developed into a novel, implementable algorithm that calculates the maximum power generation and the corresponding reconfiguration of the PV modules. The immediate benefits of this reconfiguration are increased total power output and maximum-power-point voltage information for global maximum power point tracking (MPPT). A PV array simulation model is used to illustrate the proposed method in three different cases, and an experimental rig is built to verify its effectiveness. The proposed method opens an effective approach to condition-based maintenance of aging PV arrays.
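The bucket effect admits a compact illustration. The sketch below is a simplification with made-up module currents; the paper's algorithm additionally respects the cell-unit structure and the converter voltage limits.

```python
# Sketch of the bucket effect: in a series string the current is capped by
# the weakest module, so total power depends on how modules are grouped.
# Grouping similar modules together illustrates why reconfiguration helps.
def array_power(strings, string_voltage=300.0):
    """Series-parallel array: each string's current is its minimum module current."""
    return string_voltage * sum(min(s) for s in strings)

currents = [8.9, 5.1, 8.7, 5.3, 8.8, 5.0]   # aged, non-uniform module currents (A)

naive = [currents[0:3], currents[3:6]]       # as-installed grouping
sorted_c = sorted(currents)
regrouped = [sorted_c[0:3], sorted_c[3:6]]   # group similar modules together

print(f"naive: {array_power(naive):.0f} W   "
      f"regrouped: {array_power(regrouped):.0f} W")
```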
Abstract:
The thesis is divided into two main parts. The first part comprises Chapters 2 and 3; the second part comprises Chapters 4 and 5. The first part concerns the sampling of continuous non-uniform distributions with a fixed, guaranteed level of accuracy. In 1976, Knuth and Yao showed how to sample exactly from any discrete distribution using only a source of independent and identically distributed unbiased bits. The first part of this thesis generalizes, in a sense, the theory of Knuth and Yao to continuous non-uniform distributions once the accuracy is fixed. Its results include a lower bound, as well as upper bounds for generic algorithms such as inversion and discretization. In addition, a new simple proof of the main result of Knuth and Yao's original article is among the results of this thesis. The second part concerns the resolution of a problem in communication complexity theory, a problem born with the advent of quantum computing. Given a discrete distribution parameterized by a real vector of dimension N and a network of N computers with access to a source of independent and identically distributed unbiased bits, where each computer holds exactly one of the N parameters, a distributed protocol is established to sample exactly from that distribution.
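For flavour, a minimal sketch of exact discrete sampling from i.i.d. unbiased bits, in the spirit of the inversion algorithms studied in the first part (this is not the thesis's construction): refine a dyadic interval one random bit at a time until it falls inside a single cell of the CDF.

```python
# Exact discrete sampling from unbiased bits via lazy dyadic refinement.
# Fractions keep the CDF comparisons exact rather than floating point.
import random
from fractions import Fraction
from itertools import accumulate

def sample(probs, random_bit=lambda: random.getrandbits(1)):
    """Exact sample from `probs` (Fractions summing to 1) using unbiased bits."""
    cdf = list(accumulate(probs))
    low, width = Fraction(0), Fraction(1)
    while True:
        width /= 2                   # consume one bit, halving the interval
        if random_bit():
            low += width
        high = low + width
        for i, c in enumerate(cdf):  # does the interval sit inside one cell?
            if high <= c and (i == 0 or low >= cdf[i - 1]):
                return i

probs = [Fraction(1, 3), Fraction(1, 6), Fraction(1, 2)]
print([sample(probs) for _ in range(10)])
```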
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition, and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology: application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios, and these results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
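A minimal sketch of that methodology, with made-up timings standing in for Cray XE6 measurements: benchmark the two work types at a few local domain sizes, then interpolate to predict an unbenchmarked deployment.

```python
# Sketch: benchmark-driven prediction by interpolating measured costs of the
# two work types (array updates and halo exchanges). Timings are placeholders.
import numpy as np

sizes      = np.array([128, 256, 512, 1024])        # local domain edge lengths
compute_ms = np.array([0.9, 3.4, 13.8, 55.0])       # loop-based array updates
halo_ms    = np.array([0.2, 0.4, 0.7, 1.3])         # nearest-neighbour exchange

def predict_step_time(local_size):
    """Interpolate both cost components for an unbenchmarked domain size."""
    return (np.interp(local_size, sizes, compute_ms)
            + np.interp(local_size, sizes, halo_ms))

# e.g. a decomposition giving each task a 700x700 local domain
print(f"predicted: {predict_step_time(700):.1f} ms/step")
```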
Abstract:
This is the first paper in a two-part series devoted to studying the Hausdorff dimension of invariant sets of non-uniformly hyperbolic, non-conformal maps. Here we consider a general abstract model, which we call piecewise smooth maps with holes. We show that the Hausdorff dimension of the repeller is strictly less than the dimension of the ambient manifold. Our approach also provides information on escape rates and the dynamical dimension of the repeller.