890 results for real option analysis
Abstract:
We utilize top polarization in the process e⁺e⁻ → tt̄ at the International Linear Collider (ILC) with transverse beam polarization to probe interactions of the scalar and tensor type beyond the standard model and to disentangle their individual contributions. Ninety percent confidence level limits on the interactions with realistic integrated luminosity are presented and are found to improve by an order of magnitude compared to the case when the spin of the top quark is not measured. Sensitivities of the order of a few times 10⁻³ TeV⁻² for the real and imaginary parts of both scalar and tensor couplings at √s = 500 and 800 GeV with an integrated luminosity of 500 fb⁻¹ and completely polarized beams are shown to be possible. A powerful model-independent framework for inclusive measurements is employed to describe the spin-momentum correlations, and their C, P, and T properties are presented in a technical appendix.
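The couplings constrained here enter as four-fermion contact terms. One common parametrization of such scalar and tensor interactions (the notation below is an assumption for illustration, not necessarily the paper's) is

$$\mathcal{L}_{\mathrm{eff}} = \frac{S}{\Lambda^2}\,(\bar{e}e)(\bar{t}t) + \frac{T}{\Lambda^2}\,(\bar{e}\sigma_{\mu\nu}e)(\bar{t}\sigma^{\mu\nu}t) + \mathrm{h.c.},$$

with complex S and T, so that the quoted sensitivities of a few times 10⁻³ TeV⁻² apply to the real and imaginary parts of S/Λ² and T/Λ².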
Abstract:
We address risk-minimizing option pricing in a regime-switching market where the floating interest rate depends on a finite-state Markov process. The growth rate and the volatility of the stock also depend on the Markov process. Using the minimal martingale measure, we show that the locally risk-minimizing prices for certain exotic options satisfy a system of Black-Scholes partial differential equations with appropriate boundary conditions. We find the corresponding hedging strategies and the residual risk. We also develop suitable numerical methods to compute option prices.
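For concreteness, here is the form such a coupled system typically takes (the notation is assumed for illustration, not taken from the paper). With a regime process on {1, …, k} with generator Q = (q_ij), regime-dependent rate r_i and volatility σ_i, the price φ_i(t, s) in regime i of a claim with terminal payoff K(s) solves

$$\frac{\partial \varphi_i}{\partial t} + r_i s\,\frac{\partial \varphi_i}{\partial s} + \frac{1}{2}\sigma_i^2 s^2\,\frac{\partial^2 \varphi_i}{\partial s^2} + \sum_{j=1}^{k} q_{ij}\,\varphi_j = r_i\,\varphi_i, \qquad \varphi_i(T, s) = K(s),$$

one Black-Scholes-type equation per regime, coupled through the generator term. Exotic options contribute the additional boundary conditions, e.g. φ_i(t, B) = 0 for a knock-out barrier at level B.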
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places, and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called "load index" and "balance index," which characterize the resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy, so that the system utilization is maximized and the workload distribution among the computing resources is balanced.
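As a rough illustration of what such measures capture (the formulas below are plausible stand-ins; the paper's exact definitions may differ):

```python
import statistics

def load_index(utilizations):
    # Hypothetical stand-in: mean utilization across the computing resources.
    return statistics.mean(utilizations)

def balance_index(utilizations):
    # Hypothetical stand-in: min/max utilization ratio; 1.0 indicates a
    # perfectly uniform workload distribution.
    peak = max(utilizations)
    return min(utilizations) / peak if peak > 0 else 1.0

busy = [0.82, 0.75, 0.91]   # fraction of time each resource is busy
print(load_index(busy), balance_index(busy))
```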
Abstract:
The forestry sector provides a number of climate change mitigation options. Apart from this ecological benefit, it has significant social and economic relevance. Implementation of forestry options requires large investments and sustained long-term planning. Thus there is a need for a detailed analysis of forestry options to understand their implications for the stock and flow of carbon, required investments, the value of forest wealth, contribution to GNP and livelihood, demand management, employment and foreign trade. There is also a need to evaluate the additional spending on forestry by analysing the environmental (particularly carbon abatement), social and economic benefits. The biomass needs of India are expected to increase by two to three times by 2020. Depending upon the forest types, ownership patterns and land use patterns, feasible forestry options are identified. Among many supply options, a mix of management options, species choices and organisational set-ups is found to be feasible to meet the 'demand based needs'. A comparative static framework is used to analyze the macro-economic impacts. Forestry accounts for 1.84% of GNP in India. It is characterized by significant forward industrial linkages and minimal backward linkages. Forestry generates about 36 million person-years of employment annually. India imports Rs. 15 billion worth of forest-based materials annually. Implementation of the demand-based forestry options can lead to a number of ecological, economic and institutional changes. The notable ones are: enhancement of the C stock from 9,578 to 17,094 Mt and of net annual C-sequestration from 73 to 149 Mt after accounting for all emissions; a trebling of the output of the forestry sector from Rs. 49 billion to Rs. 146 billion annually; an increase in the GDP contribution of forestry from Rs. 32 billion to Rs. 105 billion over a period of 35 years; an increase in the annual employment level by 23 million person-years; emergence of forestry as a net contributor of foreign exchange through trading of forestry products; and an increase in the economic value of the forest capital stock by Rs. 7,260 billion, with a cost-benefit analysis showing forestry to be a profitable option. Implementation of forestry options calls for an understanding of current forest policies and barriers, which are analyzed, and a number of policy options are suggested.
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 k samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor such as a desktop computer may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
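A quick back-of-envelope check of the data volumes and the per-processor load quoted above (the even 8-electrodes-per-processor split is an assumed partitioning):

```python
# Data rate of the 64-electrode array at 20 k samples/s, 16 bits per sample.
electrodes = 64
rate_hz = 20_000
bytes_per_sample = 2
per_second = electrodes * rate_hz * bytes_per_sample      # bytes/s
print(f"{per_second / 1e6:.2f} MB/s")                     # 2.56 MB/s
print(f"{per_second * 300 / 1e6:.0f} MB in 5 minutes")    # ~768 MB

# One 200 ms segment across all electrodes, split evenly over 8 processors.
segment_samples = int(0.2 * rate_hz) * electrodes
print(segment_samples // 8, "samples per processor per segment")  # 32000
```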
Abstract:
We propose a new abstract domain for static analysis of executable code. Concrete states are abstracted using circular linear progressions (CLPs). CLPs model computations using a finite word length, as is seen in any real-life processor. The finite abstraction allows handling overflow scenarios in a natural and straightforward manner. Abstract transfer functions have been defined for a wide range of operations, which makes this domain easily applicable for analyzing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects, which is used by the analysis to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.
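A minimal sketch of the idea, encoding a CLP as (base, step, count) over a 32-bit word (a hypothetical encoding; the paper's representation and transfer functions are more general):

```python
from dataclasses import dataclass

W = 32
MOD = 1 << W

@dataclass(frozen=True)
class CLP:
    """Circular linear progression {base + i*step (mod 2^W) : 0 <= i < count}."""
    base: int
    step: int
    count: int

    def concretize(self):
        return {(self.base + i * self.step) % MOD for i in range(self.count)}

def add_const(a: CLP, c: int) -> CLP:
    # Adding a constant just shifts the progression; wrap-around is absorbed
    # by the modular base, so overflow needs no special case.
    return CLP((a.base + c) % MOD, a.step, a.count)

a = CLP(base=MOD - 4, step=4, count=4)          # wraps past zero: {2^32-4, 0, 4, 8}
print(sorted(a.concretize()))
print(sorted(add_const(a, 8).concretize()))     # {4, 8, 12, 16}
```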
Abstract:
In this paper, we give a brief review of pattern classification algorithms based on discriminant analysis. We then apply these algorithms to classify movement direction based on multivariate local field potentials recorded from a microelectrode array in the primary motor cortex of a monkey performing a reaching task. We obtain prediction accuracies between 55% and 90%, depending on the method, all significantly above the chance level of 12.5%.
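A sketch of this kind of pipeline using scikit-learn's LDA on synthetic stand-in data (the array dimensions and class structure are illustrative, not the recorded dataset; chance level 1/8 = 12.5% corresponds to eight movement directions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_directions = 400, 96, 8

# Trials x features (e.g., LFP samples), one of 8 directions per trial.
y = rng.integers(0, n_directions, size=n_trials)
X = rng.normal(size=(n_trials, n_features)) + 0.1 * y[:, None]  # weak class signal

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"accuracy: {scores.mean():.2f} (chance = {1 / n_directions:.3f})")
```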
Abstract:
Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic as well as unbounded access paths due to recursive data structures. For the sake of precision, we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability, we prune formulas continually as they get propagated, reducing to true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach, and present an evaluation of it on a set of ten real Java programs. Our results show that the set of design features we have incorporated enables the analysis to (a) explore long, inter-procedural paths to verify each dereference, with (b) reasonable accuracy, and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
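A toy sketch of the backward, demand-driven propagation idea on a two-statement program (the statement forms and the formula domain, a set of 'v == null' conjuncts, are drastic simplifications of the paper's abstract lattice):

```python
def wp(stmt, formula):
    """Weakest precondition of `formula` over one statement, by substitution."""
    kind, lhs, rhs = stmt
    if kind == "assign":                 # lhs = rhs
        return frozenset(rhs if v == lhs else v for v in formula)
    if kind == "new":                    # lhs = new T(): lhs is non-null after
        return None if lhs in formula else formula
    return formula

# Program: p = new T(); q = p; q.f   -- is the dereference of q safe?
program = [("new", "p", None), ("assign", "q", "p")]
formula = frozenset({"q"})               # "q == null" at the dereference point
for stmt in reversed(program):
    formula = wp(stmt, formula)
    if formula is None:                  # pre-condition became unsatisfiable
        print("dereference verified safe")
        break
else:
    print("unsafe only if at entry:", " and ".join(f"{v} == null" for v in formula))
```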
Abstract:
This paper presents a method for the placement of Phasor Measurement Units (PMUs) that ensures the monitoring of vulnerable buses, which are identified through a transient stability analysis of the overall system. Real-time monitoring of phase angles across different nodes indicates the proximity to instability; this purpose is best served if the PMUs are placed at the more vulnerable buses. The issue is to identify the key buses where the PMUs should be placed when the transient stability prediction, considering various disturbances, is taken into account. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set containing the key buses identified from the transient stability analysis. Results on the IEEE 14-bus system are presented to illustrate the proposed approach.
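Such placement problems usually take the familiar covering-ILP shape; one common formulation (an assumption for illustration, the paper's constraint set may differ) is

$$\min_{x \in \{0,1\}^n} \sum_{j=1}^{n} x_j \quad \text{subject to} \quad \sum_{j=1}^{n} a_{ij}\,x_j \ge 1 \quad \forall i \in V,$$

where x_j = 1 if a PMU is placed at bus j, a_ij = 1 if a PMU at bus j can observe bus i (j = i or j adjacent to i), and V is the set of vulnerable buses identified by the transient stability analysis.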
Abstract:
Results from elasto-plastic numerical simulations of jointed rocks using both the equivalent continuum and discrete continuum approaches are presented and compared with experimental measurements. Initially, triaxial compression tests on different types of rocks with wide variation in uniaxial compressive strength are simulated using both approaches and the results are compared. The applicability and the relative merits and limitations of both approaches for the simulation of jointed rocks are discussed. It is observed that both approaches are reasonably good at predicting the real response. However, the equivalent continuum approach predicts somewhat higher stiffness values at low strains. Considering the modelling effort involved in the discrete continuum approach for problems with complex geometry, it is suggested that a proper equivalent continuum model can be used without compromising much on the accuracy of the results. A numerical analysis of a tunnel in Japan is then carried out using the continuum approach. The predicted deformations compare well with the field measurements and with the predictions from the discontinuum analysis.
Abstract:
Drought is the most crucial environmental factor limiting the productivity of many crop plants. Exploring novel genes and gene combinations is of primary importance in plant drought tolerance research. Stress-tolerant genotypes/species are known to express novel stress-responsive genes with unique functional significance. Hence, identification and characterization of stress-responsive genes from these tolerant species might be a reliable option for engineering drought tolerance. Safflower has been found to be a relatively drought-tolerant crop and was thus chosen for characterizing the genes expressed under drought stress. In the present study, we evaluated the differential drought tolerance of two cultivars of safflower, namely A1 and Nira, using selective physiological marker traits, and identified cultivar A1 as relatively drought tolerant. To identify the drought-responsive genes, we constructed a stress-subtracted cDNA library from cultivar A1 following subtractive hybridization. Analysis of ~1,300 cDNA clones resulted in the identification of 667 unique drought-responsive ESTs. A protein homology search revealed that 521 (78%) of the 667 ESTs showed significant similarity to known sequences in the database; the majority of these were previously identified as drought stress-related genes and were found to be involved in a variety of cellular functions ranging from stress perception to cellular protection. The remaining 146 (22%) ESTs were not homologous to known sequences in the database and were therefore considered to be unique and novel drought-responsive genes of safflower. Since safflower is a stress-adapted oil-seed crop, this observation has great relevance. In addition, to validate the differential expression of the identified genes, expression profiles of selected clones were analyzed using dot blot (reverse northern) and northern blot analysis. We showed that these clones were differentially expressed under different abiotic stress conditions. The implications of the analyzed genes for abiotic stress tolerance are discussed in our study.
Abstract:
The top polarization at the International Linear Collider (ILC) with transverse beam polarization is utilized in the process e⁺e⁻ → tt̄ to probe interactions of the scalar and tensor type beyond the Standard Model and to disentangle their individual contributions. Confidence level limits of 90% are presented on the interactions with realistic integrated luminosity and are found to improve by an order of magnitude compared to the case when the spin of the top quark is not measured. Sensitivities of the order of a few times 10⁻³ TeV⁻² for the real and imaginary parts of both scalar and tensor couplings at √s = 500 and 800 GeV with an integrated luminosity of 500 fb⁻¹ and completely polarized beams are shown to be possible.
Abstract:
Knowledge of a program's worst-case execution time (WCET) is essential in validating real-time systems and helps in effective scheduling. One popular approach used in industry is to measure the execution time of program components on the target architecture and combine them using static analysis of the program. Measurements need to be taken in the least intrusive way in order to avoid affecting the accuracy of the estimated WCET. Several programs exhibit phase behavior, wherein the program's dynamic execution is observed to be composed of phases. Each phase, being distinct from the others, exhibits homogeneous behavior with respect to cycles per instruction (CPI), data cache misses, etc. In this paper, we show that phase behavior has important implications for timing analysis. We make use of the homogeneity of a phase to reduce instrumentation overhead while ensuring that the accuracy of the WCET is not largely affected. We propose a model for estimating WCET using static worst-case instruction counts of individual phases and a function of the measured average CPI. We describe a WCET analyzer built on this model which targets two different architectures. The WCET analyzer is observed to give safe estimates for most benchmarks considered in this paper. The tightness of the WCET estimates is observed to improve for most benchmarks compared to Chronos, a well-known static WCET analyzer.
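A sketch of the phase-based estimation model in code form (the multiplicative safety factor and the per-phase combination below are assumptions for illustration; the paper's model may differ):

```python
def wcet_estimate(phases, clock_hz, safety=1.2):
    """phases: list of (static worst-case instruction count, measured avg CPI)
    per phase; `safety` is an assumed inflation factor on the measured CPI."""
    cycles = sum(ic * cpi for ic, cpi in phases)
    return safety * cycles / clock_hz          # seconds

# Three hypothetical phases of a program on a 1 GHz target.
phases = [(2_000_000, 1.4), (5_000_000, 0.9), (1_000_000, 2.1)]
print(f"estimated WCET: {wcet_estimate(phases, clock_hz=1e9) * 1e3:.2f} ms")
```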
Abstract:
Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization enabler, and the rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which the information is processed within each of the above two steps can lead to efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, prioritizing the constraint processing, can lead to even higher improvements in how fast the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection, and Bloom filter based points-to analysis. In each case, we report significant improvements in the analysis times (33%, 47%, 44%, and 20%, respectively) as well as in the memory requirements for a large suite of programs, including the SPEC 2000 benchmarks and five large open source programs.
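A minimal sketch of prioritized constraint processing on a copy-constraints-only inclusion solver (the priority heuristic here, smaller source set first, is purely illustrative; the paper's scheme is more elaborate):

```python
import heapq
from collections import defaultdict

def solve(base, copies):
    """base: (var, obj) pairs for var = &obj; copies: (dst, src) pairs for
    the inclusion constraint pts(dst) >= pts(src)."""
    pts = defaultdict(set)
    for var, obj in base:
        pts[var].add(obj)
    heap = [(len(pts[src]), i) for i, (_dst, src) in enumerate(copies)]
    heapq.heapify(heap)
    while heap:
        _, i = heapq.heappop(heap)
        dst, src = copies[i]
        if not pts[dst] >= pts[src]:
            pts[dst] |= pts[src]
            # pts(dst) grew: re-queue every constraint that reads from dst
            for j, (_d, s) in enumerate(copies):
                if s == dst:
                    heapq.heappush(heap, (len(pts[s]), j))
    return dict(pts)

print(solve(base=[("a", "o1"), ("b", "o2")],
            copies=[("c", "a"), ("c", "b"), ("d", "c")]))
```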
Abstract:
A network of ship-mounted real-time Automatic Weather Stations integrated with the Indian geosynchronous satellites [Indian National Satellites (INSATs)] 3A and 3C, named the Indian National Centre for Ocean Information Services Real-Time Automatic Weather Stations (I-RAWS), is established. The purpose of I-RAWS is to measure surface meteorological-ocean parameters and transmit the data in real time in order to validate and refine the forcing parameters (obtained from different meteorological agencies) of the Indian Ocean Forecasting System (INDOFOS). Preliminary validation and intercomparison of analyzed products obtained from the National Centre for Medium Range Weather Forecasting and the European Centre for Medium-Range Weather Forecasts, using the data collected from I-RAWS, were carried out. The I-RAWS was mounted on board the oceanographic research vessel Sagar Nidhi during a cruise across three oceanic regimes, namely the tropical Indian Ocean, the extratropical Indian Ocean, and the Southern Ocean. The results obtained from this validation and intercomparison, and their implications with special reference to the use of atmospheric model data for forcing the ocean model, are discussed in detail. It is noticed that the performance of the analysis products from both atmospheric models is similar and good; however, the European Centre for Medium-Range Weather Forecasts' air temperature over the extratropical Indian Ocean and wind speed over the Southern Ocean are marginally better.