986 results for dynamic storage allocation
Abstract:
This thesis examines three different but related problems in the broad area of portfolio management for long-term institutional investors, focusing mainly on the case of pension funds. The first idea (Chapter 3) is the application of a novel numerical technique, robust optimization, to a real-world pension scheme (the Universities Superannuation Scheme, USS) for the first time. The corresponding empirical results are supported by many robustness checks and several benchmarks, such as the Bayes-Stein and Black-Litterman models (also applied for the first time in a pension ALM framework), the Sharpe and Tint model, and the actual USS asset allocations. The second idea, presented in Chapter 4, is the investigation of whether the selection of the portfolio construction strategy matters in the SRI industry, an issue of great importance for long-term investors. This study applies a variety of optimal and naïve portfolio diversification techniques to the same SRI-screened universe and gives some answers to the question of which portfolio strategies tend to create superior SRI portfolios. Finally, the third idea (Chapter 5) compares the performance of a real-world pension scheme (USS) before and after the recent major changes in the pension rules under different dynamic asset allocation strategies and the fixed-mix portfolio approach, and quantifies the redistributive effects between various stakeholders. Although this study deals with a specific pension scheme, the methodology can be applied by other major pension schemes in countries, such as the UK and USA, that have changed their rules.
Abstract:
By considering a network of dissipative quantum harmonic oscillators, we deduce and analyse the optimum topologies that are able to store quantum superposition states, protecting them from decoherence, for the longest period of time. The storage is dynamic, in that the states to be protected evolve through the network before being retrieved in the oscillator where they were prepared. We compute the decoherence time during the dynamic storage process and demonstrate that, for a particular regime of parameters, it is proportional to the number of oscillators in the network.
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit the cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios or with non-stationary noise. However, such methods have high computational complexity, which directly raises the power consumption of devices that often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption, based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We establish a strict relation between the energy savings and common parallel system metrics. Simulation results show that our strategy promises very significant savings in actual devices.
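The energy argument in this abstract can be illustrated with a small back-of-the-envelope model. The sketch below is not from the paper: it assumes dynamic power P = C·V²·f and a roughly linear voltage-frequency scaling, which yields the familiar ~1/p² energy ratio when the same work is split across p slower cores.

```python
# Illustrative sketch (not from the paper): energy of one core at frequency f
# versus p cores at f/p, assuming dynamic power P = C * V^2 * f and that the
# supply voltage scales roughly linearly with frequency (V proportional to f).

def dynamic_energy(cap, volt, freq, seconds):
    """Energy in joules for dynamic power P = C * V^2 * f over a time span."""
    return cap * volt**2 * freq * seconds

def energy_ratio(p):
    """Energy of p cores at f/p relative to one core at f, same total work."""
    cap, volt, freq, t = 1.0, 1.0, 1.0, 1.0   # normalised single-core case
    serial = dynamic_energy(cap, volt, freq, t)
    # Each of the p cores runs at f/p with a proportionally lower voltage; the
    # wall-clock time stays the same because the work is split p ways.
    parallel = p * dynamic_energy(cap, volt / p, freq / p, t)
    return parallel / serial

for p in (1, 2, 4, 8):
    print(f"p = {p}: parallel/serial energy ratio = {energy_ratio(p):.4f}")   # ~1/p^2
```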
Abstract:
Nowadays, networks must support applications such as distance learning, electronic commerce, Internet access, intranets and extranets, voice over IP (Internet Protocol), and many others. These new applications, carrying data, voice, and video traffic, require high bandwidth and Quality of Service (QoS). The ATM (Asynchronous Transfer Mode) technology, together with dynamic resource allocation methods, offers network connections that guarantee QoS parameters such as minimum losses and delays. This paper presents a system that uses Network Management Functions together with dynamic resource allocation to provision end-to-end QoS parameters for rt-VBR connections.
Abstract:
A CMOS memory cell for dynamic storage of analog data, suitable for LVLP applications, is proposed. Information is stored as the gate voltage of the input transistor of a gain-boosted triode transconductor. The enhanced output resistance improves the accuracy of reading out the sampled currents. Additionally, a four-quadrant multiplication between the input of the transconductor's regulation amplifier and the stored voltage is provided. The design complies with a low-voltage 1.2 μm N-well CMOS fabrication process. For a 1.3 V supply, C_CELL = 3.6 pF and the sampled current range is 0.25 μA ≤ I_SAMPLE ≤ 0.75 μA. The specified retention time is 1.28 ms and corresponds to a charge variation of 1% due to junction leakage at 75°C. A range of MR simulations confirms the circuit performance. The absolute read-out error is below 0.40%, while the four-quadrant multiplier nonlinearity at full scale is 8.2%. Maximum stand-by consumption is 3.6 μW/cell.
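As a rough consistency check of the quoted figures, the junction-leakage current implied by the retention spec can be estimated from I = C·ΔV/t. The stored voltage is not given in the abstract, so the value below assumes roughly 1 V; the result is only indicative.

```python
# Back-of-the-envelope check (assumption: the stored gate voltage is ~1 V; the
# abstract does not state it). Leakage current implied by a 1% charge loss on
# C_CELL = 3.6 pF within the 1.28 ms retention time.

C_CELL = 3.6e-12      # farads
V_STORED = 1.0        # volts (assumed, not given in the abstract)
T_RET = 1.28e-3       # seconds
DROOP = 0.01          # 1% allowed charge variation

I_LEAK = C_CELL * DROOP * V_STORED / T_RET
print(f"implied junction leakage ~ {I_LEAK * 1e12:.1f} pA")   # ~28 pA
```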
Abstract:
Transactional memory (TM) is a new synchronization mechanism devised to simplify parallel programming, thereby helping programmers to unleash the power of current multicore processors. Although software implementations of TM (STM) have been extensively analyzed in terms of runtime performance, little attention has been paid to an equally important constraint faced by nearly all computer systems: energy consumption. In this work we conduct a comprehensive study of energy and runtime tradeoffs in software transactional memory systems. We characterize the behavior of three state-of-the-art lock-based STM algorithms, along with three different conflict resolution schemes. As a result of this characterization, we propose a DVFS-based technique that can be integrated into the resolution policies so as to improve the energy-delay product (EDP). Experimental results show that our DVFS-enhanced policies are indeed beneficial for applications with high contention levels. Improvements of up to 59% in EDP can be observed in this scenario, with an average EDP reduction of 16% across the STAMP workloads. © 2012 IEEE.
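The abstract does not detail how DVFS is coupled to the conflict resolution policies; the following is only a minimal sketch of the general idea, in which a contention manager requests a lower frequency while a repeatedly aborted transaction backs off. The set_frequency hook is hypothetical, not a real library call.

```python
# Minimal sketch of the idea (not the paper's implementation): lower the DVFS
# state while a transaction backs off after repeated conflicts, then restore
# the nominal frequency before retrying.

import random
import time

NOMINAL_MHZ = 2400
LOW_MHZ = 1200

def set_frequency(mhz):
    """Hypothetical DVFS hook; a real system would use a platform-specific interface."""
    pass

def run_transaction(tx_body, max_retries=16):
    """tx_body() returns True on commit, False on abort."""
    for attempt in range(max_retries):
        if tx_body():
            set_frequency(NOMINAL_MHZ)
            return True
        # High contention detected: slow the core down while waiting, trading
        # a little latency for a better energy-delay product.
        if attempt >= 2:
            set_frequency(LOW_MHZ)
        time.sleep(random.uniform(0, 2 ** attempt) * 1e-6)   # exponential backoff
    set_frequency(NOMINAL_MHZ)
    return False
```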
Abstract:
This work presents a solution to the problem of connection admission control and dynamic resource allocation in IEEE 802.16 networks by modelling it as a Markov Decision Process (MDP) that uses the concept of bandwidth degradation, which is based on the differentiated bandwidth requirements of the IEEE 802.16 service classes. For the MDP performance criterion, different rewards are assigned to each service class, so that each flow receives differentiated treatment. In this way, it is possible to evaluate the optimal policy, obtained through a value iteration algorithm, considering aspects such as the average degradation level of the service classes, resource utilization, and the blocking probability of each service class as a function of the system load. The results show that the proposed Markovian control method is able to prioritize the service classes considered most relevant to the system.
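The optimal policy above is computed by value iteration. A generic value-iteration sketch is given below; the state space, per-class rewards and bandwidth-degradation transitions of the paper's MDP are not reproduced here and would have to be supplied as the transition table P.

```python
# Generic value-iteration sketch (not the paper's exact model).
# P[s][a] is a list of (probability, next_state, reward) triples.

import numpy as np

def value_iteration(P, n_states, n_actions, gamma=0.95, tol=1e-6):
    V = np.zeros(n_states)
    while True:
        Q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                Q[s, a] = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # optimal value function and policy
        V = V_new
```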
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure. However, optical networks today are still far from being directly configurable to offer network services and need to be enriched with more user-oriented functionalities, and current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the Optical Network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies gives users and applications access to abstracted levels of information regarding the advanced network services on offer. This thesis faces the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, this work has focused on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to drive the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both technologies promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This thesis addresses this open issue by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
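How exactly the scheduling problem reduces to a min/max computation is not spelled out in the abstract; the sketch below is one plausible reading, a horizon-based (LAUC-style) channel scheduler in which each decision is a single max over per-wavelength horizons. It is an illustration under that assumption, not the thesis design.

```python
# Illustrative horizon-based scheduler sketch (an assumption about the kind of
# min/max computation referred to). Each output wavelength keeps a "horizon",
# the time at which it becomes free. A burst is assigned to the latest-available
# channel that is still free before the burst arrives; if none fits, it is dropped.

def schedule_burst(horizons, arrival, duration):
    """horizons: list of per-channel free times; returns chosen channel or None."""
    eligible = [(h, ch) for ch, h in enumerate(horizons) if h <= arrival]
    if not eligible:
        return None                       # no channel free in time: drop the burst
    _, best = max(eligible)               # max() picks the channel with the smallest idle gap
    horizons[best] = arrival + duration   # advance that channel's horizon
    return best

# Example: three wavelengths, a burst arriving at t=10 lasting 4 time units.
chans = [3.0, 8.0, 12.0]
print(schedule_burst(chans, arrival=10.0, duration=4.0))   # -> 1
print(chans)                                               # -> [3.0, 14.0, 12.0]
```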
Abstract:
Identifying the drivers of species diversity is a major challenge in understanding and predicting the dynamics of species-rich semi-natural grasslands. In temperate grasslands in particular, changes in land use and their consequences, i.e. increasing fragmentation, the ongoing loss of habitat, and the declining importance of regional processes such as seed dispersal by livestock, are considered key drivers of the diversity loss witnessed within the last decades. It is a largely unresolved question to what degree current temperate grassland communities already reflect a decline of regional processes such as longer-distance seed dispersal. Answering this question is challenging, since it requires both a mechanistic approach to community dynamics and a sufficient data base that allows general patterns to be identified. Here, we present results of a local individual- and trait-based community model that was initialized with plant functional types (PFTs) derived from an extensive empirical data set of species-rich grasslands within the 'Biodiversity Exploratories' in Germany. The processes driving the model included above- and belowground competition, dynamic resource allocation to shoots and roots, clonal growth, grazing, and local seed dispersal. To test for the impact of regional processes, we also simulated seed input from a regional species pool. Model output, with and without regional seed input, was compared with empirical community response patterns along a grazing gradient. Simulated response patterns of changes in PFT richness, Shannon diversity, and biomass production matched the observed grazing response patterns surprisingly well when only local processes were considered. Even low levels of additional regional seed input led to stronger deviations from the empirical community patterns. While these findings cannot rule out that regional processes other than those considered in the modelling study potentially play a role in shaping the local grassland communities, our comparison indicates that European grasslands are largely isolated, i.e. local mechanisms explain the observed community patterns to a large extent.
Abstract:
Plants have evolved intricate strategies to withstand attacks by herbivores and pathogens. Although it is known that plants change their primary and secondary metabolism in leaves to resist and tolerate aboveground attack, there is little awareness of the role of roots in these processes. This is surprising given that plant roots are responsible for the synthesis of plant toxins, play an active role in environmental sensing and defense signaling, and serve as dynamic storage organs that allow regrowth. Hence, studying roots is essential for a solid understanding of resistance and tolerance to leaf-feeding insects and pathogens. Here, we highlight this function of roots in plant resistance to aboveground attackers, with a special focus on systemic signaling and insect herbivores.
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-80s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers).
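As an illustration of the last item, dynamic task allocation via work stealing, here is a minimal (sequential, single-threaded) sketch of the owner/thief deque discipline; it is not taken from any of the systems surveyed in the paper.

```python
# Minimal work-stealing sketch (illustrative only): each worker pushes and pops
# tasks at the bottom of its own deque; an idle worker steals from the top of a
# victim's deque, which leaves old, typically large tasks for thieves and
# recent, typically small tasks for the owner.

import random
from collections import deque

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.tasks = deque()

    def push(self, task):
        self.tasks.append(task)        # owner works at the bottom (right end)

    def pop(self):
        return self.tasks.pop() if self.tasks else None

    def steal_from(self, victim):
        return victim.tasks.popleft() if victim.tasks else None   # thief takes the top

def run(workers):
    """Drain all deques, recording which worker executed which task."""
    done = []
    while any(w.tasks for w in workers):
        for w in workers:
            task = w.pop()
            if task is None:           # empty deque: try to steal from a random victim
                victims = [v for v in workers if v is not w and v.tasks]
                task = w.steal_from(random.choice(victims)) if victims else None
            if task is not None:
                done.append((w.wid, task))
    return done
```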
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
Abstract:
Third Generation cellular communication systems are expected to support a mixed cell architecture in which picocells, microcells and macrocells are used to achieve full coverage and increase the spectral capacity. Supporting higher numbers of mobile terminals and the use of smaller cells will result in an increase in the number of handovers, and consequently an increase in the time delays required to perform these handovers. Higher time delays will generate call interruptions and forced terminations, particularly for time-sensitive applications like real-time multimedia and data services. Currently in the Global System for Mobile communications (GSM), the handover procedure is initiated and performed by the fixed part of the Public Land Mobile Network (PLMN). The mobile terminal is only capable of detecting candidate base stations suitable for the handover; it is the role of the network to interrogate a candidate base station for a free channel. Handover signalling is exchanged via the fixed network, and the time delay required to perform the handover is greatly affected by the levels of teletraffic handled by the network. In this thesis, a new handover strategy is developed to reduce the total time delay for handovers in a microcellular system. The handover signalling is diverted from the fixed network to the air interface to prevent extra delays due to teletraffic congestion and to allow the mobile terminal to exchange signalling directly with the candidate base station. The new strategy utilises the Packet Reservation Multiple Access (PRMA) technique as a mechanism to transfer the control of the handover procedure from the fixed network to the mobile terminal. Simulation results are presented to show a dramatic reduction in the handover delay compared to those obtained using fixed channel allocation and dynamic channel allocation schemes.
Abstract:
Fibre overlay is a cost-effective technique to alleviate wavelength blocking in some links of a wavelength-routed optical network by increasing the number of wavelengths in those links. In this letter, we investigate the effects of overlaying fibre in an all-optical network (AON) based on the GÉANT2 topology. The constraint-based routing and wavelength assignment (CB-RWA) algorithm locates where cost-efficient upgrades should be implemented. Through numerical examples, we demonstrate that the network capacity improves by 25 per cent by overlaying fibre on 10 per cent of the links, and by 12 per cent by providing hop-reduction links comprising 2 per cent of the links. For the upgraded network, we also show the impact of dynamic traffic allocation on the blocking probability. Copyright © 2010 John Wiley & Sons, Ltd.
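As a first-order illustration of why adding wavelengths to a congested link lowers its blocking, the Erlang-B recursion below shows the per-link trend. The offered load of 32 Erlangs is an arbitrary assumption, and the letter's network-wide CB-RWA results are not reproduced by this simple single-link model.

```python
# Hedged illustration (not from the letter): Erlang-B blocking probability for a
# single link treated as a group of interchangeable wavelength channels.

def erlang_b(offered_load, channels):
    """Blocking probability for `offered_load` Erlangs offered to `channels` servers."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

load = 32.0                                    # Erlangs offered to one link (assumed)
for wavelengths in (32, 40, 48):
    print(f"{wavelengths} wavelengths: blocking = {erlang_b(load, wavelengths):.4f}")
```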