961 results for financial constraints
Distributed Switch-and-Stay Combining in Cognitive Relay Networks under Spectrum Sharing Constraints
Abstract:
In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
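The three-stage excitation chain described above can be sketched in a few lines. The Gaussian driving pulse, the first-order high-pass standing in for the source mechanics, and the identity injection stage (a soft source) are all illustrative assumptions, not the paper's actual filter designs:

```python
import numpy as np

fs = 44100                       # sample rate in Hz (illustrative)
n = np.arange(256)

# 1) Driving pulse: a Gaussian pulse (an assumed, generic choice).
pulse = np.exp(-0.5 * ((n - 32) / 8.0) ** 2)

# 2) Source mechanics: a first-order high-pass y[k] = x[k] - x[k-1] + a*y[k-1],
#    standing in here for the pulsating-sphere mechanics; its zero gain at DC
#    avoids zero- and low-frequency artifacts in the grid.
a = 0.995
mech = np.zeros_like(pulse)
for k in range(len(pulse)):
    x_prev = pulse[k - 1] if k > 0 else 0.0
    y_prev = mech[k - 1] if k > 0 else 0.0
    mech[k] = pulse[k] - x_prev + a * y_prev

# 3) Injection stage: identity here, i.e. a soft source added to one
#    pressure node; hard and transparent sources correspond to different
#    injection filters in the same cascade.
src = mech
```

The point of the cascade is separation of concerns: the same driving pulse can be reused while the mechanics or injection filters are swapped to obtain hard, soft, or transparent behavior.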
Abstract:
Hundsalm ice cave, located at 1520 m altitude in a karst region of western Austria, contains up to 7-m-thick deposits of snow, firn and congelation ice. Wood fragments exposed in the lower parts of an ice and firn wall were radiocarbon accelerator mass spectrometry (AMS) dated. Although the local stratigraphy is complex, the 19 individual dates - the largest currently available radiocarbon dataset for an Alpine ice cave - allow constraints to be placed on the accumulation and ablation history of the cave ice. Most of the cave was either ice free or contained only a small firn and ice body during the 'Roman Warm Period'; dates of three wood fragments mark the onset of firn and ice build-up in the 6th and 7th century AD. In the central part of the cave, the oldest samples date back to the 13th century and record ice growth coeval with the onset of the 'Little Ice Age'. The majority of the ice and firn deposit, albeit compromised by a disturbed stratigraphy, appears to have been formed during the subsequent centuries, as supported by wood samples from the 15th to the 17th century. The oldest wood remains found so far inside the ice are from the end of the Bronze Age and imply that local relics of prehistoric ice may be preserved in this cave. The wood record from Hundsalm ice cave shows parallels to the Alpine glacier history of the last three millennia, for example, the lack of preserved wood remains from periods of known glacier minima, and underscores the potential of firn and ice in karst cavities as a long-term palaeoclimate archive, one that has been degrading at an alarming rate in recent years. © The Author(s) 2013.
Abstract:
We consider a collision-sensitive secondary system that intends to opportunistically aggregate and utilize spectrum of a primary system to achieve higher data rates. In such opportunistic spectrum access, secondary transmissions can collide with primary transmissions. When the secondary system aggregates more channels for data transmission, collisions may occur more frequently, limiting the performance obtained by opportunistic spectrum aggregation. In this context, the dynamic spectrum aggregation problem is formulated to maximize the ergodic channel capacity under a constraint on the tolerable collision level. To solve the problem, we develop the optimal spectrum aggregation approach, deriving closed-form expressions for the collision probability in terms of the primary user traffic load, the secondary user transmission interval, and the random number of sub-channels aggregated. Our results show that aggregating only a subset of the sub-channels can be the better choice, depending on the ratio of the collision sensitivity requirement to the primary user traffic.
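The abstract does not reproduce the paper's closed-form expressions, but the trade-off it describes can be sketched under a simple stand-in model: independent Poisson primary arrivals at rate lam per sub-channel and a fixed secondary transmission interval T (both assumptions for illustration only):

```python
import math

def collision_prob(n, lam, T):
    """P(at least one primary arrival hits any of n aggregated
    sub-channels during a secondary transmission of length T),
    assuming independent Poisson primary traffic per sub-channel."""
    return 1.0 - math.exp(-n * lam * T)

def max_channels(lam, T, eps, n_available):
    """Largest number of aggregated sub-channels whose collision
    probability stays within the tolerable level eps."""
    best = 0
    for n in range(1, n_available + 1):
        if collision_prob(n, lam, T) <= eps:
            best = n
    return best

# Example: lam = 0.5 arrivals/s, T = 0.1 s, tolerable level eps = 0.2.
# Only a subset of the 10 available sub-channels should be aggregated.
n_best = max_channels(0.5, 0.1, 0.2, 10)
```

Because the collision probability grows with the number of aggregated sub-channels while capacity grows with it too, the constraint caps aggregation below the full set of idle channels, which is the qualitative result the abstract reports.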
Abstract:
One of the principal tasks facing post-crash academic political economy is to analyse patterns of ideational change and the conditions that produce such change. What has been missing from the existing literature on ideational change at times of crisis, however, is a sense of how processes of persuasive struggle unfold, and of how the success of the 'norm entrepreneurs' arguing for ideational change is shaped by two contextual variables: the most immediate material symptoms and problems that a crisis displays (the variety of crisis), and the institutional character of the policy subsystem within which agents have to operate to effect change. Introducing these two variables into our accounts of persuasive struggle and ideational change enables us to deepen our understanding of the dynamics of ideational change at times of crisis. The article identifies that quite rapid and radical intellectual change has been evident in the field of financial regulation, in the form of an embrace of a macroprudential frame. In contrast, in the field of macroeconomic policy - both monetary and fiscal - many pre-crash beliefs remain prominent, there is evidence of ideational stickiness and inertia, and despite some policy experimentation, overarching policy frameworks and their rationales have not been overhauled. The article applies Peter Hall's framework of three orders of policy change to help illuminate and explain the variation in patterns of change in the fields of financial regulation and macroeconomic policy since the financial crash of 2008. The different patterns of ideational change in macroeconomic policy and financial regulation in the post-crash period can be explained by the timing and variety of crisis; the sequencing of policy change; and institutional political differences between micro policy subsystems and macro policy systems.
Abstract:
The paper addresses the issue of bandwidth choice in semiparametric estimation of the long memory parameter of a univariate time series process. The focus is on the properties of forecasts from the long memory model. A variety of cross-validation methods based on out-of-sample forecasting properties are proposed. These procedures are used for the choice of bandwidth and subsequent model selection. Simulation evidence demonstrating the advantage of the proposed methodology is presented.
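The abstract does not specify which semiparametric estimator or cross-validation criterion is used; a minimal sketch of the idea, assuming the log-periodogram (GPH) estimator of the long memory parameter d and one-step out-of-sample forecast MSE as the cross-validation criterion, might look like this:

```python
import numpy as np

def gph_estimate(x, m):
    """Log-periodogram (GPH) estimate of the long memory parameter d,
    using the first m Fourier frequencies (m is the bandwidth)."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())
    I = (np.abs(dft[1:m + 1]) ** 2) / (2 * np.pi * n)  # periodogram
    y = np.log(I)
    z = -2.0 * np.log(2.0 * np.sin(freqs / 2.0))       # regressor; slope = d
    zc = z - z.mean()
    return zc @ (y - y.mean()) / (zc @ zc)

def one_step_forecast(history, d, p=50):
    """One-step ARFIMA(0, d, 0) forecast via the AR(inf) expansion of
    (1 - L)^d, truncated at p lags."""
    pi = np.zeros(p + 1)
    pi[0] = 1.0
    for j in range(1, p + 1):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    lags = history[::-1][:p]                 # x_{t-1}, x_{t-2}, ...
    return -np.dot(pi[1:len(lags) + 1], lags)

def cv_bandwidth(x, bandwidths, holdout=50):
    """Pick the bandwidth whose estimated d gives the smallest
    out-of-sample one-step forecast MSE on a holdout segment."""
    best_m, best_mse = None, np.inf
    for m in bandwidths:
        d = gph_estimate(x[:-holdout], m)
        errs = [x[t] - one_step_forecast(x[:t], d)
                for t in range(len(x) - holdout, len(x))]
        mse = float(np.mean(np.square(errs)))
        if mse < best_mse:
            best_m, best_mse = m, mse
    return best_m
```

The bandwidth governs the usual bias-variance trade-off in semiparametric long memory estimation; scoring candidate bandwidths by forecast performance, rather than by in-sample fit, is the core of the cross-validation idea described above.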
Abstract:
We have calculated 90% confidence limits on the steady-state rate of catastrophic disruptions of main belt asteroids in terms of the absolute magnitude at which one catastrophic disruption occurs per year as a function of the post-disruption increase in brightness (Δm) and the subsequent brightness decay rate (τ). The confidence limits were calculated using the brightest unknown main belt asteroid (V = 18.5) detected with the Pan-STARRS1 telescope. We measured Pan-STARRS1's catastrophic disruption detection efficiency over a 453-day interval using the Pan-STARRS Moving Object Processing System (MOPS) and a simple model for the catastrophic disruption event's photometric behavior in a small aperture centered on the event. We then calculated the contours in the ranges from and encompassing measured values from known cratering and disruption events and our model's predictions. Our simplistic catastrophic disruption model suggests that and which would imply that H0 ≳ 28, strongly inconsistent with H0,B2005 = 23.26±0.02 predicted by Bottke et al. (Bottke, W.F., Durda, D.D., Nesvorný, D., Jedicke, R., Morbidelli, A., Vokrouhlický, D., Levison, H.F. [2005]. Icarus 179, 63-94) using purely collisional models. However, if we assume that H0 = H0,B2005, our results constrain , inconsistent with our simplistic impact-generated catastrophic disruption model. We postulate that the solution to the discrepancy is that >99% of main belt catastrophic disruptions in the size range to which this study was sensitive (∼100 m) are not impact-generated, but are instead due to fainter rotational breakups, of which the recently discovered disrupted asteroids P/2013 P5 and P/2013 R3 are probable examples. We estimate that current and upcoming asteroid surveys may discover up to 10 catastrophic disruptions per year brighter than V = 18.5.
Abstract:
Energy consumption and total cost of ownership are daunting challenges for datacenters, because they scale disproportionately with performance. Datacenters running financial analytics may incur extremely high operational costs in order to meet the performance and latency requirements of their hosted applications. Recently, ARM-based microservers have emerged as a viable alternative to high-end servers, promising scalable performance via scale-out approaches and low energy consumption. In this paper, we investigate the viability of ARM-based microservers for option pricing, using the Monte Carlo and Binomial Tree kernels. We compare an ARM-based microserver against a state-of-the-art x86 server. We define application-related but platform-independent energy and performance metrics to compare these platforms fairly in the context of datacenters for financial analytics, and give insight into the particular requirements of option pricing. Our experiments show that by scaling out energy-efficient compute nodes within a 2U rack-mounted unit, an ARM-based microserver consumes as little as about 60% of the energy per option priced compared to an x86 server, despite having significantly slower cores. We also find that the ARM microserver scales well enough to meet a high fraction of market throughput demand, while consuming up to 30% less energy than an Intel server.
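A Monte Carlo option pricing kernel of the kind benchmarked above can be sketched briefly; the European-call payoff, geometric Brownian motion dynamics, and the specific parameter values are illustrative assumptions, not the study's actual workload configuration:

```python
import math
import random

def mc_european_call(S0, K, r, sigma, T, n_paths=100_000, seed=42):
    """Monte Carlo price of a European call under geometric Brownian
    motion: simulate terminal prices, average discounted payoffs."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    payoff_sum = 0.0
    for _ in range(n_paths):
        # terminal price from one simulated GBM path
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

# Illustrative at-the-money call: spot 100, strike 100, r = 5%,
# sigma = 20%, one year to expiry.
price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

Such a kernel is embarrassingly parallel across paths, which is why it suits the scale-out, many-slow-core approach the paper evaluates; an energy-per-option metric like the one the authors define would divide measured energy by the number of options priced.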
Abstract:
Considering the development of aerospace composite components, designing for reduced manufacturing layup cost and structural complexity is increasingly important. While the advantage of composite materials is the ability to tailor designs to various structural loads for minimum mass, the challenge is obtaining a design that is manufacturable and minimizes local ply incompatibility. The focus of the presented research is understanding how the relationships between mass, manufacturability and design complexity, under realistic loads and design requirements, can be affected by enforcing ply continuity in the design process. Presented is a series of sizing case studies on an upper wing cover, designed using conventional analyses and the tabular laminate design process. Introducing skin ply continuity constraints can generate skin designs with minimal ply discontinuities, fewer ply drops and larger ply areas than designs not constrained for continuity. However, the reduced design freedom associated with the addition of these constraints results in a weight penalty over the total wing cover. Perhaps more interestingly, when considering manual hand layup, the reduced design complexity does not translate into a reduced recurring manufacturing cost. In contrast, heavier wing cover designs appear to take more time to lay up regardless of the laminate design complexity. © 2012 AIAA.
Abstract:
Microbial habitats that contain an excess of carbohydrate in the form of sugar are widespread in the microbial biosphere. Depending on the type of sugar, prevailing water activity and other substances present, sugar-rich environments can be highly dynamic or relatively stable, osmotically stressful, and/or destabilizing for macromolecular systems, and can thereby strongly impact the microbial ecology. Here, we review the microbiology of different high-sugar habitats, including their microbial diversity and physicochemical parameters, which act to impact microbial community assembly and constrain the ecosystem. Saturated sugar beet juice and floral nectar are used as case studies to explore the differences between the microbial ecologies of low and higher water-activity habitats respectively. Nectar is a paradigm of an open, dynamic and biodiverse habitat populated by many microbial taxa, often yeasts and bacteria such as, amongst many others, Metschnikowia spp. and Acinetobacter spp., respectively. By contrast, thick juice is a relatively stable, species-poor habitat and is typically dominated by a single, xerotolerant bacterium (Tetragenococcus halophilus). A number of high-sugar habitats contain chaotropic solutes (e.g. ethyl acetate, phenols, ethanol, fructose and glycerol) and hydrophobic stressors (e.g. ethyl octanoate, hexane, octanol and isoamyl acetate), all of which can induce chaotropicity-mediated stresses that inhibit or prevent multiplication of microbes. Additionally, temperature, pH, nutrition, microbial dispersion and habitat history can determine or constrain the microbiology of high-sugar milieux. Findings are discussed in relation to a number of unanswered scientific questions.
Abstract:
The global financial crisis has led many regulators and lawmakers to rethink current versus optimal financial market structures and activities, prompting a variety of ideas, some of them radical, about deleveraging and downsizing finance. This paper focuses on the flaws and shortcomings of regulatory reforms of finance and on the necessity of, and scope for, more radical transformative strategies. With 'crisis economics' back, the most developed countries, including the EU member states, are still on the edge of disaster and confronted with systemic risk. Changes in financial regulation adopted in the aftermath of the financial meltdown have not been radical enough to transform the overall system of finance-driven capitalism into a more sustainable system with a more embedded finance. The paper discusses financialisation in order to understand the development trends in finance over the past decades and examines various theories that describe the typical trends and patterns in financial regulation. By focusing on a limited number of regulatory reforms in the European Union, the limitations of current reforms and the need for additional transformative strategies to overcome the finance-driven accumulation regime are explored. Finally, the regulatory space for such transformative strategies, and for taming finance in times of crisis, austerity, and increased public protest potential, is analysed.