34 results for Market analysis strategy
Abstract:
We consider the issue of the top quark Yukawa coupling measurement in a model-independent and general case, with the inclusion of CP violation in the coupling. Arguably the best process to study this coupling is the associated production of the Higgs boson along with a $t\bar{t}$ pair at a machine like the International Linear Collider (ILC). While detailed analyses of the sensitivity of the measurement, assuming a Standard Model (SM)-like coupling, are available in the context of the ILC and conclude that the coupling could be pinned down to about the 10% level with modest luminosity, our investigations show that the picture could be different for a more general coupling. The modified Lorentz structure results in a changed functional dependence of the cross section on the coupling, and this, together with the change in the cross section itself, leads to considerable deviation in the sensitivity. Our studies of the ILC at center-of-mass energies of 500 GeV, 800 GeV and 1000 GeV show that moderate CP mixing in the Higgs sector could change the sensitivity to about 20%, while it could worsen to 75% in cases that accommodate more dramatic changes in the coupling. Detailed consideration of the decay distributions points to the need to revisit the analysis strategy followed in the SM case before applying it to a model-independent measurement of the top quark Yukawa coupling. This study strongly suggests that a joint analysis of the CP properties and the Yukawa coupling would be the way forward at the ILC, and that caution must be exercised in the measurement of the Yukawa coupling and the conclusions drawn from it.
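For concreteness, a commonly used parameterization of a CP-mixed top Yukawa interaction (a standard form in the literature, not necessarily the exact convention adopted in this work) is
\[
\mathcal{L}_{t\bar{t}H} \;=\; -\frac{y_t}{\sqrt{2}}\,\bar{t}\,\big(\cos\phi + i\gamma_5\,\sin\phi\big)\,t\,H ,
\]
where $\phi$ is the CP-mixing phase: $\phi = 0$ reproduces the SM scalar coupling and $\phi = \pi/2$ a purely pseudoscalar one, and the $e^+e^- \to t\bar{t}H$ cross section then depends on both $y_t$ and $\phi$, which is the changed functional dependence referred to above.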
Abstract:
We study an s-channel resonance R as a viable candidate to fit the diboson excess reported by ATLAS. We compute the contribution of the ~2 TeV resonance R to semileptonic and leptonic final states at the 13 TeV LHC. To explain the absence of an excess in the semileptonic channel, we explore the possibility that the particle R decays to additional light scalars, X, X or X, Y. A modified analysis strategy is proposed to study the three-particle final state of the resonance decay and to identify the decay channels of X. Associated production of R with gauge bosons is studied in detail to identify the production mechanism of R. We construct comprehensive categories of vector and scalar beyond-the-Standard-Model particles that may play the role of R, X and Y, and find alternative channels to fix the new couplings and to search for these particles.
Abstract:
We address the asymptotic analysis of option pricing in a regime-switching market, where the risk-free interest rate, the growth rate and the volatility of the stocks depend on a finite-state Markov chain. We study two regimes of the chain, namely when the chain moves very fast compared with the underlying asset price and when it moves very slowly. Using quadratic hedging and an asymptotic expansion, we derive corrections to the locally risk-minimizing option price.
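A minimal version of the underlying dynamics (the standard regime-switching setup; the precise scaling conventions of the paper may differ) is
\[
dS_t \;=\; S_t\,\big(\mu(X_t)\,dt + \sigma(X_t)\,dW_t\big),
\]
where $X_t$ is a finite-state Markov chain with rate matrix $Q$; the fast regime corresponds to replacing $Q$ by $Q/\varepsilon$ and the slow regime by $\varepsilon Q$, and the corrections to the locally risk-minimizing price are then obtained as an expansion in the small parameter $\varepsilon$.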
Abstract:
In this paper, we present an improved load distribution strategy, for arbitrarily divisible processing loads, that minimizes the processing time in a distributed linear network of communicating processors through efficient utilization of their front-ends. Closed-form solutions are derived, with the processing load originating at the boundary and at the interior of the network, under some important conditions on the arrangement of processors and links in the network. An asymptotic analysis is carried out to explore the ultimate performance limits of such networks. Two important theorems are stated regarding the optimal load sequence and the optimal load origination point. A comparative study of this new strategy against an earlier strategy is also presented.
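The following sketch illustrates the flavour of the optimization under a deliberately simplified store-and-forward timing model of a linear chain (hypothetical rates; this is neither the front-end timing model nor the closed-form solution of the paper): it numerically searches for the load split that minimizes the makespan.

```python
import numpy as np
from scipy.optimize import minimize

# Toy divisible-load split over a linear chain of processors. Load starts at P1,
# each link forwards everything destined for the downstream processors, and each
# processor computes its share once it has arrived. Rates are made up.
w = np.array([1.0, 1.2, 0.8, 1.5])    # per-unit computation times of P1..P4
z = np.array([0.2, 0.3, 0.25])        # per-unit communication times of links 1..3

def finish_times(alpha):
    """Finish times of all processors for a given load split."""
    recv = np.zeros(len(alpha))
    for i in range(1, len(alpha)):
        # link i-1 carries all load destined for P_i, P_{i+1}, ...
        recv[i] = recv[i - 1] + alpha[i:].sum() * z[i - 1]
    return recv + alpha * w

def makespan(alpha):
    alpha = np.abs(alpha) / np.abs(alpha).sum()   # project onto the simplex
    return finish_times(alpha).max()

res = minimize(makespan, x0=np.full(4, 0.25), method="Nelder-Mead")
alpha_opt = np.abs(res.x) / np.abs(res.x).sum()
print(np.round(alpha_opt, 3), round(makespan(res.x), 3))
```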
Abstract:
Background: Tuberculosis still remains one of the largest killer infectious diseases, warranting the identification of newer targets and drugs. Identification and validation of appropriate targets for designing drugs are critical steps in drug discovery and are at present major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of their targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data that are now available, computational approaches can be powerful means of obtaining shortlists of possible targets for further experimental validation. Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for the survival of M. tuberculosis are first identified, followed by comparative genomics with the host and, finally, a novel structural analysis of the binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as to 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, the shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed. Conclusion: The pipeline provides a rational schema for drug target identification that is likely to have a high rate of success, which is expected to save enormous amounts of money, resources and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of our integrated approach, highlighting in particular the importance of systems-level analyses. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
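As an illustration of the kind of multi-criteria shortlisting such a pipeline performs (hypothetical flags and records; the real pipeline derives these from the interactome, reactome, essentiality, comparative-genomics and binding-site analyses described above):

```python
import pandas as pd

# Toy shortlisting step: combine boolean criteria derived from upstream analyses.
# Gene names and flag values below are placeholders, not real targetTB data.
proteins = pd.DataFrame([
    {"gene": "geneA", "essential": True,  "network_critical": True,
     "host_homolog": False, "gut_flora_similar": False, "targetable_pocket": True},
    {"gene": "geneB", "essential": True,  "network_critical": False,
     "host_homolog": True,  "gut_flora_similar": False, "targetable_pocket": True},
    {"gene": "geneC", "essential": False, "network_critical": True,
     "host_homolog": False, "gut_flora_similar": True,  "targetable_pocket": False},
])

shortlist = proteins[
    (proteins["essential"] | proteins["network_critical"])   # needed for survival
    & ~proteins["host_homolog"]                               # avoid human homologs
    & ~proteins["gut_flora_similar"]                          # spare gut flora
    & proteins["targetable_pocket"]                           # structurally targetable
]
print(shortlist["gene"].tolist())                             # -> ['geneA']
```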
Abstract:
We address risk-minimizing option pricing in a semi-Markov modulated market, where the floating interest rate depends on a finite-state semi-Markov process. The growth rate and the volatility of the stock also depend on the semi-Markov process. Using the Föllmer–Schweizer decomposition, we find the locally risk-minimizing price for European options and the corresponding hedging strategy. We also develop suitable numerical methods for computing option prices.
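For reference, the Föllmer–Schweizer decomposition writes a square-integrable claim $H$ as (standard form; the notational conventions of the paper may differ)
\[
H \;=\; H_0 + \int_0^T \xi^H_t\, dS_t + L_T ,
\]
with $L$ a martingale strongly orthogonal to the martingale part of $S$; the locally risk-minimizing strategy holds $\xi^H_t$ units of the stock, and the associated value process gives the locally risk-minimizing price.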
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss of data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective, in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and we propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running time over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
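As a toy illustration of the precision loss being targeted (a deliberately simplified sketch, not the product-automaton construction or the Scale implementation), consider constant propagation over the usual flat lattice:

```python
# Constant-propagation meet over the flat lattice {TOP, constant, BOTTOM},
# showing why a control-flow merge loses precision and how duplicating the
# code below the merge (the restructuring idea) recovers it.

TOP = "top"          # undefined
BOTTOM = "bottom"    # not a constant

def meet(a, b):
    """Meet of two lattice values for one variable."""
    if a == TOP:
        return b
    if b == TOP:
        return a
    return a if a == b else BOTTOM

# Two predecessors of a merge point assign different constants to x:
x_from_then, x_from_else = 2, 3

# At the merge the facts are combined and the constant is lost,
# so 'x + 1' after the merge cannot be folded:
print(meet(x_from_then, x_from_else))        # -> 'bottom'

# After restructuring, 'x + 1' is duplicated into each predecessor, where x is
# a single known constant, so it folds to 3 and 4 at compile time:
print(x_from_then + 1, x_from_else + 1)      # -> 3 4
```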
Abstract:
India's energy challenges are multi-pronged. They are manifested in growing demand for modern energy carriers, a fossil-fuel-dominated energy system facing a severe resource crunch, the need to create access to quality energy for the large section of the deprived population, vulnerable energy security, local and global pollution regimes and the need to sustain economic development. Renewable energy is considered one of the most promising alternatives. Recognizing this potential, India has been implementing one of the largest renewable energy programmes in the world. Among the renewable energy technologies, bioenergy has a large and diverse portfolio, including efficient biomass stoves, biogas, biomass combustion and gasification, process heat and liquid fuels. India has also formulated and implemented a number of innovative policies and programmes to promote bioenergy technologies. However, according to some preliminary studies, the success rate is marginal compared with the available potential. This limited success is a clear indicator of the need for a serious reassessment of the bioenergy programme. Further, a realization of the need to adopt a sustainable energy path to address the above challenges will be the guiding force in this reassessment. In this paper an attempt is made to consider the potential of bioenergy to meet rural energy needs: (1) biomass combustion and gasification for electricity; (2) biomethanation for cooking energy (gas) and electricity; and (3) efficient wood-burning devices for cooking. The paper focuses on analysing the effectiveness of bioenergy in creating this rural energy access and its sustainability in the long run by assessing: the demand for bioenergy and the potential that could be created; technologies, status of commercialization, and technology transfer and dissemination in India; economic and environmental performance and impacts; and bioenergy policies, regulatory measures and barrier analysis. The whole assessment aims at presenting bioenergy as an integral part of a sustainable energy strategy for India. The results show that bioenergy technology (BET) alternatives compare favourably with the conventional ones. The cost comparisons show that the unit costs of BET alternatives are in the range of 15-187% of those of the conventional alternatives. The climate change benefits in terms of carbon emission reductions are to the tune of 110 T C per year, provided the available potential of BETs is utilized.
Abstract:
Provision of modern energy services for cooking (with gaseous fuels) and lighting (with electricity) is an essential component of any policy aiming to address health, education or welfare issues; yet it gets little attention from policy-makers. Secure, adequate, low-cost energy of quality and convenience is core to the delivery of these services. The present study analyses the energy consumption pattern of the Indian domestic sector and examines the urban-rural divide and the income-energy linkage. A comprehensive analysis is carried out to estimate the cost of providing modern energy services to everyone by 2030. A public-private-partnership-driven business model, with entrepreneurship at its core, is developed with institutional, financing and pricing mechanisms for the diffusion of energy services. This approach, termed EMPOWERS (entrepreneurship model for provision of wholesome energy-related basic services), if adopted, can facilitate large-scale dissemination of energy-efficient and renewable technologies, such as small-scale biogas/biofuel plants and distributed power generation technologies, to provide clean, safe, reliable and sustainable energy to rural households and the urban poor. It is expected to integrate the processes of market transformation and entrepreneurship development, involving government, NGOs, financial institutions and community groups as stakeholders. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
We propose a self-regularized pseudo-time marching scheme to solve the ill-posed, nonlinear inverse problem associated with the diffuse propagation of coherent light in a tissue-like object. In particular, in the context of diffuse correlation tomography (DCT), we consider the recovery of mechanical property distributions from partial and noisy boundary measurements of light intensity autocorrelation. We prove the existence of a minimizer for the Newton algorithm after establishing the existence of weak solutions for the forward equation of light amplitude autocorrelation and its Fréchet derivative and adjoint. The asymptotic stability of the solution of the ordinary differential equation obtained through the introduction of the pseudo-time is also analyzed. We show that the asymptotic solution obtained through pseudo-time marching converges to the optimal solution provided the Hessian of the forward equation is positive definite in a neighborhood of the optimal solution. The superior noise tolerance and the regularization-insensitive nature of the pseudo-dynamic strategy are demonstrated through numerical simulations in the context of both DCT and diffuse optical tomography. (C) 2010 Optical Society of America.
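A minimal sketch of the pseudo-time marching idea on a generic nonlinear least-squares inverse problem is given below (the toy forward map is an assumption; the paper's forward model is the diffusion equation for the light-amplitude autocorrelation, and its Newton/Hessian-based variant is not reproduced here):

```python
import numpy as np

# Pseudo-time marching for min_x 0.5*||f(x) - y||^2: explicit Euler steps along
# the flow dx/dtau = -J(x)^T (f(x) - y). Toy monotone forward map for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def f(x):                                   # toy forward map (assumption)
    return A @ x + 0.1 * np.tanh(x)

def jac(x):                                 # its Jacobian
    return A + 0.1 * np.diag(1.0 - np.tanh(x) ** 2)

x_true = np.array([1.0, -2.0])
y = f(x_true)                               # synthetic "measurement"

x = np.zeros(2)                             # initial guess
dtau = 0.05                                 # pseudo-time step
for _ in range(2000):                       # pseudo-time marching (explicit Euler)
    x = x - dtau * jac(x).T @ (f(x) - y)

print(np.round(x, 4))                       # close to x_true = [1, -2]
```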
Abstract:
In this paper we introduce a nonlinear detector based on the phenomenon of suprathreshold stochastic resonance (SSR). We first present a model (an array of 1-bit quantizers) that exhibits the SSR phenomenon. We then use this model as a pre-processor to the conventional matched filter. We employ the Neyman-Pearson (NP) detection strategy and compare the performance of the matched filter, the SSR-based detector and the optimal detector. Although the proposed detector is not optimal, for non-Gaussian noises with heavy tails (leptokurtic noise) it performs better than the matched filter. In situations where the noise is known to be leptokurtic but its exact distribution is not available, the proposed detector is therefore a better choice than the matched filter.
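The following is a minimal sketch of such an SSR front end: an array of 1-bit quantizers, each adding independent internal noise before thresholding, whose summed output is then correlated with the known template as in a matched filter. All parameters (array size, noise levels, test signal) are illustrative assumptions, not the settings analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssr_preprocess(x, n_quantizers=63, quantizer_noise_std=1.0, threshold=0.0):
    """Sum of N 1-bit quantizer outputs, each with independent internal noise."""
    internal = rng.normal(0.0, quantizer_noise_std, size=(n_quantizers,) + x.shape)
    return np.sum(x + internal > threshold, axis=0)   # integer in [0, N]

# Toy detection problem: known template s in heavy-tailed (Laplacian) channel noise.
s = np.sin(2 * np.pi * 0.05 * np.arange(100))         # known template
noise = rng.laplace(0.0, 1.0, size=s.shape)           # leptokurtic channel noise
x_h1 = 0.5 * s + noise                                # observation, signal present

y = ssr_preprocess(x_h1)                              # SSR front end
statistic = np.dot(y - y.mean(), s)                   # correlate with the template

# In an NP test, 'statistic' is compared with a threshold chosen to meet a target
# false-alarm probability (estimated, e.g., from noise-only runs).
print(statistic)
```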
Abstract:
We provide a comparative performance analysis of network architectures for beacon-enabled Zigbee sensor clusters using the CSMA/CA MAC defined in the IEEE 802.15.4 standard, organised as (i) a star topology and (ii) a two-hop topology. We provide analytical models for obtaining performance measures such as mean network delay and mean node lifetime. We find that the star topology is substantially superior to the two-hop topology in both delay and lifetime performance.
Abstract:
A posteriori error estimation and adaptive refinement techniques for the fracture analysis of 2-D/3-D crack problems represent the state of the art. The objective of the present paper is to propose a new a posteriori error estimator based on the strain energy release rate (SERR) or stress intensity factor (SIF) at the crack tip region, and to use it along with the stress-based error estimator available in the literature for the region away from the crack tip. The proposed a posteriori error estimator is called the K-S error estimator. Further, an adaptive mesh refinement (h-refinement) strategy which can be used with the K-S error estimator is proposed for the fracture analysis of 2-D crack problems. The performance of the proposed a posteriori error estimator and of the h-adaptive refinement strategy is demonstrated using 4-noded, 8-noded and 9-noded plane stress finite elements. The proposed error estimator, together with the h-adaptive refinement strategy, will facilitate automation of the fracture analysis process and provide reliable solutions.
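A minimal sketch of an h-adaptive loop driven by per-element error indicators is given below; the toy 1-D indicator, which grows near a "crack tip" at x = 0, is a placeholder for the K-S and stress-based estimators, which are not implemented here.

```python
import math

def toy_indicator(a, b):
    """Larger for bigger elements and for elements closer to the tip at x = 0."""
    h = b - a
    dist = max(min(abs(a), abs(b)), 1e-3)
    return h / math.sqrt(dist)

def refine(mesh, theta=0.5):
    """Bisect every element whose indicator exceeds theta * max indicator."""
    etas = [toy_indicator(a, b) for a, b in mesh]
    cutoff = theta * max(etas)
    new_mesh = []
    for (a, b), eta in zip(mesh, etas):
        if eta >= cutoff:
            m = 0.5 * (a + b)
            new_mesh.extend([(a, m), (m, b)])
        else:
            new_mesh.append((a, b))
    return new_mesh

mesh = [(-1.0 + 0.25 * i, -1.0 + 0.25 * (i + 1)) for i in range(8)]  # uniform start
for _ in range(4):                     # a few adaptive passes
    mesh = refine(mesh)
print(len(mesh), "elements; smallest near the tip:", min(b - a for a, b in mesh))
```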
Abstract:
The objective of the present paper is to select the best compromise irrigation planning strategy for the case study of the Jayakwadi irrigation project, Maharashtra, India. A four-phase methodology is employed. In phase 1, separate linear programming (LP) models are formulated for the three objectives, namely net economic benefits, agricultural production and labour employment. In phase 2, nondominated (compromise) irrigation planning strategies are generated using the constraint method of multiobjective optimisation. In phase 3, a Kohonen neural network (KNN)-based classification algorithm is employed to sort the nondominated irrigation planning strategies into smaller groups. In phase 4, a multicriterion analysis (MCA) technique, namely compromise programming, is applied to rank the strategies obtained from phase 3. It is concluded that the above integrated methodology is effective for modelling multiobjective irrigation planning problems and that the present approach can be extended to situations where the number of irrigation planning strategies is even larger. (c) 2004 Elsevier Ltd. All rights reserved.
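The sketch below illustrates phase 2, the constraint (epsilon-constraint) method, on a made-up two-objective LP (hypothetical crops, coefficients and resource limits, not the Jayakwadi data): one objective is optimized while the other is imposed as a constraint whose level is swept to trace out nondominated strategies.

```python
import numpy as np
from scipy.optimize import linprog

benefit = np.array([3.0, 2.0])        # net benefit per unit area of crop 1, 2
labour  = np.array([1.0, 4.0])        # labour employment per unit area
A_ub = [[2.0, 1.0],                   # water use  <= 10
        [1.0, 1.0]]                   # land       <= 6
b_ub = [10.0, 6.0]

nondominated = []
for eps in np.linspace(4.0, 24.0, 6): # required labour level (constrained objective)
    res = linprog(c=-benefit,                         # maximize benefit
                  A_ub=A_ub + [list(-labour)],        # subject to labour >= eps
                  b_ub=b_ub + [-eps],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    if res.success:
        x = res.x
        nondominated.append((benefit @ x, labour @ x, x))

for b, l, x in nondominated:
    print(f"benefit={b:5.1f}  labour={l:5.1f}  areas={np.round(x, 2)}")
```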
Abstract:
Hybrid elements, which are based on a two-field variational formulation with the displacements and stresses interpolated separately, are known to deliver very high accuracy and to alleviate, to a large extent, the problems of locking that plague standard displacement-based formulations. The choice of the stress interpolation functions is of course critical in ensuring the high accuracy and robustness of the method. Generally, an attempt is made to keep the stress interpolation to the minimum number of terms that ensures that the stiffness matrix has no spurious zero-energy modes, since it is known that the stiffness increases with the number of terms. Although such a strategy of keeping the number of interpolation terms to a minimum works very well in static problems, it either results in instabilities or fails to converge in transient problems. This is because choosing the stress interpolation functions merely on the basis of removing spurious energy modes can violate some basic principles that interpolation functions should obey. In this work, we address the issue of choosing the interpolation functions based on such basic principles of interpolation theory and mechanics. Although this procedure results in the use of more terms than the minimum (and hence in slightly increased stiffness) in many elements, we show that the performance continues to be far superior to that of displacement-based formulations and, more importantly, that it also results in considerably increased robustness.
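For reference, hybrid stress elements of this kind are typically derived from a two-field Hellinger-Reissner type functional (a standard form; sign conventions and loading terms vary, and this is not necessarily the exact statement used in the paper),
\[
\Pi_{HR}(\mathbf{u},\boldsymbol{\sigma}) \;=\; \int_{\Omega}\Big(\boldsymbol{\sigma}:\boldsymbol{\varepsilon}(\mathbf{u}) \;-\; \tfrac{1}{2}\,\boldsymbol{\sigma}:\mathbf{S}:\boldsymbol{\sigma}\Big)\,d\Omega \;-\; \int_{\Gamma_t}\bar{\mathbf{t}}\cdot\mathbf{u}\,d\Gamma ,
\]
with $\mathbf{S}$ the compliance tensor; the displacements $\mathbf{u}$ and stresses $\boldsymbol{\sigma}$ are interpolated independently, and it is the number and form of the stress interpolation terms entering this functional that the discussion above concerns.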