984 results for Power allocation
Abstract:
This dissertation examined resource allocation mechanisms in several network topologies, including infrastructure wireless networks, non-infrastructure wireless networks, and wire-cum-wireless networks. Different networks may have different resource constraints. Based on current technologies and implementation models, utility functions, game theory, and a modern control algorithm were introduced to balance power, bandwidth, and customer satisfaction in the system. In infrastructure wireless networks, a utility function was applied to the Third Generation (3G) cellular network, with the network attempting to maximize total utility. In this dissertation, revenue maximization was set as the objective. Compared with previous work on utility maximization, revenue maximization is more practical for cellular network operators to implement. Pricing strategies were studied, and algorithms were given to find the optimal price combination of power and rate that maximizes profit without degrading Quality of Service (QoS) performance. In non-infrastructure wireless networks, power capacity is limited by the small size of the nodes. In such a network, nodes must carry traffic not only for themselves but also for their neighbors, so power management becomes the most important issue for overall network performance. Our innovative utility-function-based routing algorithm sets up a flexible framework for users with different concerns in the same network. The algorithm allows users to make trade-offs among multiple resource parameters, and its flexibility makes it a suitable solution for large-scale non-infrastructure networks. This dissertation also covers non-cooperation problems: by combining game theory and utility functions, equilibrium points can be found among rational users, enhancing cooperation in the network. Finally, a wire-cum-wireless network architecture was introduced.
This network architecture can support multiple services over multiple networks with smart resource allocation methods. Although a SONET-to-WiMAX case was used for the analysis, the mathematical procedure and resource allocation scheme could serve as universal solutions for infrastructure, non-infrastructure, and combined networks.
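As a toy illustration of the revenue-maximization idea above (not the dissertation's actual pricing model), the sketch below grid-searches a pair of prices for power and rate against a hypothetical linear demand curve, discarding price pairs that would push served demand below a QoS floor:

```python
# Toy sketch: an operator searches a price grid for power (p_power) and rate
# (p_rate) to maximize revenue. The linear demand curve, QoS floor, and grid
# are illustrative assumptions, not the dissertation's model.

def demand(price, base=100.0, slope=8.0):
    """Hypothetical linear demand: units requested at a given unit price."""
    return max(base - slope * price, 0.0)

def revenue(p_power, p_rate):
    return p_power * demand(p_power) + p_rate * demand(p_rate)

def best_prices(qos_min_demand=20.0, grid=None):
    """Grid-search the price pair; discard pairs whose demand violates QoS."""
    grid = grid or [i * 0.5 for i in range(0, 25)]
    best = None
    for pp in grid:
        for pr in grid:
            if demand(pp) < qos_min_demand or demand(pr) < qos_min_demand:
                continue  # serving too few units would degrade QoS
            r = revenue(pp, pr)
            if best is None or r > best[0]:
                best = (r, pp, pr)
    return best
```

With these illustrative numbers the optimum sits at the unconstrained revenue peak; tightening `qos_min_demand` pushes the optimal prices down, which is the trade-off between profit and QoS that the abstract describes.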
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics, and Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid, and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation on the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
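A job-to-resource allocation policy in the spirit described above could, hypothetically, look like the following sketch: each job is dispatched to the partner site minimizing a weighted estimate of monetary cost and queue delay. The site fields, weights, and cost model are illustrative assumptions, not the protocol actually implemented in this work.

```python
# Hypothetical site-selection policy: pick the site with the lowest weighted
# combination of execution cost (hours * price) and current queue delay.

def pick_site(job_hours, sites, price_weight=1.0, delay_weight=1.0):
    """Return the name of the site with the lowest weighted cost for this job."""
    def cost(site):
        money = price_weight * job_hours * site["price_per_hour"]
        delay = delay_weight * site["queue_wait_hours"]
        return money + delay
    return min(sites, key=cost)["name"]

sites = [
    {"name": "siteA", "price_per_hour": 2.0, "queue_wait_hours": 0.5},
    {"name": "siteB", "price_per_hour": 1.0, "queue_wait_hours": 6.0},
]
print(pick_site(3.0, sites))  # short job: the long queue outweighs the price
```

For a short job the expensive-but-idle site wins; for a long job the cheap site's queue delay is amortized, which captures the slowdown-versus-cost trade-off the abstract mentions.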
Abstract:
Pension funds have been part of the private sector since the 1850s. Defined Benefit (DB) pension plans, under which a company promises to make regular contributions to investment accounts held for participating employees in order to pay a promised lifelong annuity, are significant capital markets participants, amounting to 2.3 trillion dollars in 2010 (Federal Reserve Board, 2013). In 2006, Statement of Financial Accounting Standards No. 158 (SFAS 158), Employers' Accounting for Defined Benefit Pension and Other Postemployment Plans, shifted information concerning funding status and pension asset/liability composition from disclosure in the footnotes to recognition in the financial statements. I add to the literature by being the first to examine the effect of recent pension reform during the financial crisis of 2008-09. This dissertation comprises three related essays. In my first essay, I investigate whether investors assign different pricing multiples to the various classes of pension assets when valuing firms. The pricing multiples on all classes of assets are significantly different from each other, but only investments in bonds and equities were value-relevant during the recent financial crisis. Consistent with investors viewing pension liabilities as liabilities of the firm, the pricing multiples on pension liabilities are significantly larger than those on non-pension liabilities. The only pension costs significantly associated with firm value are the actual rate of return and interest expense. In my second essay, I investigate the role of accruals in predicting future cash flows, extending the Barth et al. (2001a) model of the accrual process. Using market value of equity as a proxy for cash flows, the results of this study suggest that aggregate accounting amounts mask how the components of earnings affect investors' ability to predict future cash flows. Disaggregating pension earnings components and accruals results in an increase in predictive power.
During the 2008-2009 financial crisis, however, investors placed a greater (and negative) weight on the incremental information contained in the individual components of accruals. The inferences are robust to alternative specifications of accruals. Finally, in my third essay I investigate how investors view underfunded plans. On average, investors view deficits arising from underfunded plans as belonging to the firm, reward firms with fully or over-funded pension plans, and encourage firms with underfunded pension plans to fund them. Investors also encourage conservative pension asset allocations to mitigate firm risk, and smaller firms are perceived as being better able to handle the risk associated with underfunded plans. During the financial crisis of 2008-2009, underfunded status had a lower negative association with market value. In all three models, there are significant differences between the pre- and post-SFAS 158 periods. These results are robust to various scenarios of the timing of the financial crisis and to an alternative measure of funding.
Abstract:
Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains, and show that the proposed algorithms perform better than current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms both a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated with exactly one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization approach in which the activation fractions and the user association are optimized in an alternating manner.
The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in both cases. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this NP-hard optimization problem, we propose a solution scheme based on a difference-of-submodular-functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference-of-convex-functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate its advantage by studying its performance under variation of different network topology parameters.
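For reference, the alpha-fairness utility named above has a standard closed form: U_a(x) = x^(1-a)/(1-a) for a != 1 and log(x) for a = 1, with a = 0 recovering sum-throughput maximization and large a approaching max-min fairness. A minimal sketch of the objective (the textbook definition, independent of the dissertation's specific solver):

```python
import math

# Standard alpha-fair utility of a positive rate; the network objective is
# the sum of per-user utilities over all served rates.

def alpha_utility(rate, alpha):
    if rate <= 0:
        raise ValueError("rate must be positive")
    if alpha == 1.0:
        return math.log(rate)
    return rate ** (1.0 - alpha) / (1.0 - alpha)

def network_utility(rates, alpha):
    """Total utility that the joint association/activation problem maximizes."""
    return sum(alpha_utility(r, alpha) for r in rates)
```

Note that for a >= 1 the utility is negative and increasing toward zero, so maximizing it still rewards higher rates while penalizing starved users heavily.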
Abstract:
As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising way to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs, with a primary focus on two areas: low power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow that directly optimizes for clock power. We also investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike the common assumption in 2D ICs that shutdown gates are cheap and can therefore be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that clock power is minimized, and show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions that have not been investigated in the past.
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of electrical and reliability properties, improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, we investigate a dynamic resilience management (DRM) scheme that adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition at runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances the DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push them toward mainstream acceptance in the near future.
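A hypothetical control loop in the spirit of the DRM scheme described above might look as follows; the thresholds, sensor inputs, and frequency steps are invented for illustration and are not the dissertation's actual policy:

```python
# Illustrative DRM-style loop: when the DRAM layer's thermal or voltage-noise
# reading crosses a threshold, step the CPU frequency down ("borrow"
# resilience); otherwise step it back up to recover performance.

FREQ_STEPS_GHZ = [1.0, 1.5, 2.0, 2.5, 3.0]

def drm_step(freq_idx, dram_temp_c, noise_mv,
             temp_limit=85.0, noise_limit=50.0):
    """Return the next frequency index given current DRAM sensor readings."""
    if dram_temp_c > temp_limit or noise_mv > noise_limit:
        return max(freq_idx - 1, 0)  # throttle: cooler die, quieter rails
    return min(freq_idx + 1, len(FREQ_STEPS_GHZ) - 1)  # recover performance
```

Calling `drm_step` once per control epoch yields the adaptive frequency-scaling behavior the abstract describes: the CPU only pays a performance cost while DRAM conditions actually threaten resilience.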
Abstract:
We compare auctioning and grandfathering as allocation mechanisms for emission permits when there is a secondary market with market power and firms have private information about their own abatement technologies. Based on real-life cases such as the EU ETS, we consider a multi-unit, multi-bid uniform auction. At the auction, each firm anticipates its role in the secondary market, either as a leader or a follower. This role affects each firm's valuation of the permits (valuations are not common across firms) as well as its bidding strategy, and it precludes the auction from generating a cost-effective allocation of permits, as occurs in simpler auction models. Auctioning tends to be more cost-effective than grandfathering when the firms' abatement cost functions are sufficiently different from one another, especially if the follower has lower abatement costs than the leader and the dispersion of marginal costs is large enough.
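A minimal sketch of how a multi-unit, multi-bid uniform auction of this kind clears, under common textbook conventions (units awarded in descending price order, with the lowest accepted bid setting the single uniform price); the paper's actual pricing and tie-breaking rules may differ:

```python
# Clear a uniform-price auction: bids are (bidder, price, quantity) triples.
# Award units highest price first; every winner pays the same clearing price.

def clear_uniform_auction(bids, supply):
    """Return ({bidder: units won}, clearing_price)."""
    ordered = sorted(bids, key=lambda b: -b[1])  # highest price first
    remaining = supply
    allocations = {}
    clearing_price = 0.0
    for bidder, price, qty in ordered:
        if remaining <= 0:
            break
        won = min(qty, remaining)
        allocations[bidder] = allocations.get(bidder, 0) + won
        clearing_price = price  # price of the last (lowest) accepted bid
        remaining -= won
    return allocations, clearing_price

bids = [("leader", 12.0, 30), ("follower", 10.0, 40), ("leader", 8.0, 20)]
alloc, price = clear_uniform_auction(bids, supply=60)
```

Because all winners pay the lowest accepted price, a firm anticipating a strong secondary-market position can shade its bids, which is one channel through which the strategic effects described above arise.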
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification, and the relationship between visual indices (VI) and a fluorescence camera (FC) in detecting plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque, with and without disclosing, was assessed using VI. Images were obtained with the FC and a digital camera under both conditions, and the area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed with the Kruskal-Wallis and Kappa tests to compare the different sample conditions and to assess inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of the area covered by disclosed plaque in the FC images presented the highest discriminatory power. CONCLUSION: The Turesky index and FC images with disclosing provide good reliability and discriminatory power in quantifying dental plaque.
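The inter-examiner reproducibility above was assessed with the Kappa statistic; for two raters scoring the same samples, Cohen's kappa has the standard form below (the data in the test is illustrative, not the study's):

```python
# Cohen's kappa for two raters: kappa = (p_o - p_e) / (1 - p_e), where p_o is
# observed agreement and p_e the agreement expected by chance from each
# rater's marginal label frequencies.

def cohens_kappa(rater1, rater2):
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    labels = set(rater1) | set(rater2)
    # chance agreement under independent raters with their marginal rates
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.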
Abstract:
This work presents an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) presents, uniquely, a null determinant at the natural frequencies. In comparison with the classical DSM, the formulation presented here has some major advantages: local mode shapes are preserved in the formulation, so that for any positive frequency the DSM will never be ill-conditioned; and, in the absence of poles, it is possible to employ the secant method for a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. To avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
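The secant iteration on the DSM determinant can be sketched as follows, here applied to a toy stand-in determinant (a clamped-free axial bar, whose nondimensional natural-frequency condition is cos w = 0) rather than the paper's full plane frame DSM:

```python
import math

# Root-finding on det(DSM(w)) = 0 with the secant method. det_dsm below is a
# toy stand-in with roots at w = pi/2, 3*pi/2, ...; the real DSM determinant
# is a transcendental function of frequency.

def det_dsm(w):
    return math.cos(w)  # stand-in for det(DSM(w))

def secant_root(f, w0, w1, tol=1e-12, max_iter=50):
    """Iterate w2 = w1 - f(w1) * (w1 - w0) / (f(w1) - f(w0)) until converged."""
    for _ in range(max_iter):
        f0, f1 = f(w0), f(w1)
        if abs(f1 - f0) < 1e-300:
            break  # flat secant; cannot divide
        w2 = w1 - f1 * (w1 - w0) / (f1 - f0)
        if abs(w2 - w1) < tol:
            return w2
        w0, w1 = w1, w2
    return w1

print(secant_root(det_dsm, 1.0, 2.0))  # converges near pi/2
```

This derivative-free iteration is what makes the determinant formulation attractive: only evaluations of det(DSM(w)) are needed, with the paper's scaling and limiting frequencies guarding against overflow.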
Abstract:
Much of the social science literature about South African cities fails to represent their complex spectrum of sexual practices and associated identities. The unintended effect of such representations is that a compulsory heterosexuality is naturalised in, and reiterative with, dominant constructions of blackness in townships. In this paper, we argue that the assertion of discreet lesbian and gay identities in the black townships of a South African city such as Cape Town is influenced by the historical racial and socio-economic divides that have marked the urban landscape. In their efforts to recoup a positive sense of gendered personhood, residents have constructed a moral economy anchored in reproductive heterosexuality. We draw upon ethnographic data to show how sexual minorities live their lives vicariously in spaces they have prised open within the extant sex/gender binary. They are able to assert the identities of moffie and man-vrou (mannish woman) without threatening the dominant ideology of heterosexuality.
Abstract:
We describe the design and implementation of a high-voltage pulse power supply (pulser) that supports the operation of a repetitively pulsed filtered vacuum arc plasma deposition facility in plasma immersion ion implantation and deposition (Mepiiid) mode. Negative pulses (micropulses) of up to 20 kV in magnitude and 20 A peak current are provided in gated pulse packets (macropulses) over a broad range of possible pulse widths and duty cycles. Application of the system, consisting of the filtered vacuum arc and the high-voltage pulser, is demonstrated by forming diamond-like carbon (DLC) thin films with and without the substrate bias provided by the pulser. Significantly enhanced film/substrate adhesion is observed when the pulser is used to induce interface mixing between the DLC film and the underlying Si substrate. (C) 2010 American Institute of Physics. [doi:10.1063/1.3518969]
Abstract:
At the 2008 Summer Olympics in Beijing, Usain Bolt broke the world record for the 100 m sprint. Just one year later, at the 2009 World Championships in Athletics in Berlin, he broke it again. A few months after Beijing, Eriksen [Am. J. Phys. 77, 224-228 (2009)] studied Bolt's performance and predicted that Bolt could have run about one-tenth of a second faster, which was confirmed in Berlin. In this paper we extend the analysis of Eriksen to model Bolt's velocity time dependence for the Beijing 2008 and Berlin 2009 records. We deduce the maximum force, the maximum power, and the total mechanical energy produced by Bolt in both races. Surprisingly, we conclude that all of these values were smaller in 2009 than in 2008.
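A sketch of this kind of kinematic analysis, assuming a simple exponential-approach velocity model v(t) = vmax * (1 - exp(-t/tau)) (a common choice in sprint modeling; the model and parameters here are illustrative, not the fitted Beijing or Berlin values):

```python
import math

# Under v(t) = vmax * (1 - exp(-t / tau)), the instantaneous mechanical power
# delivered to the runner's center of mass (ignoring drag) is m * v * dv/dt.
# Analytically this peaks at t = tau * ln(2), with value m * vmax**2 / (4 * tau).

def velocity(t, vmax, tau):
    return vmax * (1.0 - math.exp(-t / tau))

def acceleration(t, vmax, tau):
    return (vmax / tau) * math.exp(-t / tau)

def mech_power(t, mass, vmax, tau):
    return mass * velocity(t, vmax, tau) * acceleration(t, vmax, tau)

def peak_power(mass, vmax, tau, dt=0.001, t_end=10.0):
    """Numerically scan the race for the maximum of m * v * dv/dt."""
    steps = int(t_end / dt)
    return max(mech_power(i * dt, mass, vmax, tau) for i in range(steps + 1))
```

The numerical scan should reproduce the closed-form peak, and the race's kinetic energy at top speed is simply m * vmax**2 / 2, which is how quantities like maximum power and mechanical energy can be deduced from a fitted velocity curve.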
Abstract:
Rheological properties of adherent cells are essential for their physiological functions, and microrheological measurements on living cells have shown that their viscoelastic responses follow a weak power law over a wide range of time scales. This power law is also influenced by the mechanical prestress borne by the cytoskeleton, suggesting that cytoskeletal prestress determines the cell's viscoelasticity, but the biophysical origins of this behavior are largely unknown. We have recently developed a stochastic two-dimensional model of an elastically jointed chain that links the power-law rheology to the prestress. Here we use a similar approach to study the creep response of a prestressed three-dimensional elastically jointed chain as a viscoelastic model of the semiflexible polymers that comprise the prestressed cytoskeletal lattice. Using a Monte Carlo based algorithm, we show that numerical simulations of the chain's creep behavior closely correspond to the behavior observed experimentally in living cells. The power-law creep behavior results from a finite-speed propagation of free energy from the chain's end points toward the center of the chain in response to an externally applied stretching force. The property that links the power law to the prestress is the chain's stiffening with increasing prestress, which originates from entropic and enthalpic contributions. These results indicate that the essential features of cellular rheology can be explained by the viscoelastic behaviors of individual semiflexible polymers of the cytoskeleton.
Abstract:
Gaussianity and statistical isotropy of the Universe are modern cosmology's minimal set of hypotheses. In this work we introduce a new statistical test to detect observational deviations from this minimal set. By defining the temperature correlation function over the whole celestial sphere, we are able to independently quantify both angular and planar dependence (modulations) of the CMB temperature power spectrum over different slices of this sphere. Given that planar dependence leads to further modulations of the usual angular power spectrum C(l), this test can potentially reveal richer structures in the morphology of the primordial temperature field. We have also constructed an unbiased estimator for this angular-planar power spectrum which naturally generalizes the estimator for the usual C(l)'s. With the help of a chi-square analysis, we have used this estimator to search for observational deviations from statistical isotropy in WMAP's five-year release data set (ILC5), where we found only slight anomalies on the angular scales l = 7 and l = 8. Since this angular-planar statistic is model-independent, it is ideal for searches of statistical anisotropy (e.g., contaminations from the galactic plane) and for characterizing non-Gaussianities.
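For context, the estimator for the usual C(l)'s that the angular-planar estimator generalizes is the standard average of squared harmonic coefficients over m; a minimal sketch of that textbook formula (not the paper's generalized angular-planar estimator):

```python
# Standard angular power spectrum estimator: for harmonic coefficients a_lm of
# a temperature map, C_hat(l) = sum over m of |a_lm|^2, divided by (2l + 1).

def cl_estimator(alm):
    """alm: dict mapping (l, m) -> complex coefficient, with m = -l..l.
    Returns dict mapping l -> estimated C(l)."""
    totals = {}
    for (l, m), a in alm.items():
        totals[l] = totals.get(l, 0.0) + abs(a) ** 2
    return {l: total / (2 * l + 1) for l, total in totals.items()}
```

The angular-planar statistic described in the abstract refines this by retaining dependence on slices of the sphere instead of averaging isotropically over all m.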