5 results for Users of Financial Statements

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. It has therefore been suggested that critical slowing down may serve as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US markets (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time-series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions occurring in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions, which can occur even when the system is far from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may raise false alarms.
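The two candidate warning signals contrasted in the abstract, rising variance versus critical slowing down (rising lag-1 autocorrelation), can be sketched with rolling-window indicators. This is an illustrative sketch on synthetic data, not the authors' analysis; the window length and the noise model with growing perturbation strength are assumptions:

```python
import numpy as np

def rolling_indicators(series, window):
    """Rolling variance and lag-1 autocorrelation over a sliding window.

    Rising variance is the precursor the abstract highlights; rising
    lag-1 autocorrelation would indicate critical slowing down, which
    the markets studied did not show.
    """
    variances, autocorrs = [], []
    for start in range(len(series) - window + 1):
        w = series[start:start + window]
        variances.append(np.var(w))
        # lag-1 autocorrelation within the window
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

# Synthetic series: noise whose amplitude grows over time, mimicking a
# gradually increasing strength of stochastic perturbations.
rng = np.random.default_rng(0)
n = 1000
series = rng.standard_normal(n) * np.linspace(0.5, 2.0, n)
var, ac1 = rolling_indicators(series, window=200)
```

On such data the rolling variance trends upward while the lag-1 autocorrelation stays near zero, which is exactly the separation between the two signals that the abstract reports for real markets.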

Relevance:

100.00%

Publisher:

Abstract:

This paper analyses environmental and socio-economic barriers to plantation activities at the local and regional levels and investigates the potential for carbon finance to stimulate increased rates of forest plantation on wasteland, i.e., degraded lands, in southern India. Building on multidisciplinary field work and results from the GCOMAP model, the aim is to (1) identify and characterize the barriers to plantation activities in four agro-ecological zones in the state of Karnataka and (2) investigate what would be required to overcome these barriers and enhance the plantation rate and productivity. The results show that rehabilitation of the wasteland through plantation activities is not only possible but also anticipated by the local population, and would lead to positive environmental and socio-economic effects at the local level. However, in many cases, the establishment of plantation activities is hindered by a lack of financial resources, low land productivity and water scarcity. Based on the model used and the results from the field work, it can be concluded that certified emission reductions such as carbon credits or other compensatory systems may help to overcome the financial barrier; however, the price would need to rise significantly for these measures to have any large-scale impact.

Relevance:

100.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph-clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the two devices happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and an edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and an experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
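The kernel-mapping idea the abstract describes (weighing each kernel's per-device cost against the data-transfer penalty for dependencies placed on the other device) can be illustrated with a toy greedy pass. The kernel names, cost numbers and the greedy rule below are hypothetical stand-ins, not MEGHA's actual heuristics:

```python
def map_kernels(kernels, deps, transfer_cost):
    """Greedily place each kernel on the cheaper device.

    kernels: {name: (cpu_time, gpu_time)} in topological order.
    deps: {name: [names of producer kernels it consumes]}.
    A dependency placed on the other device adds transfer_cost.
    """
    placement = {}
    for name, (cpu_t, gpu_t) in kernels.items():
        cpu_cost = cpu_t + sum(transfer_cost for d in deps.get(name, [])
                               if placement.get(d) == "gpu")
        gpu_cost = gpu_t + sum(transfer_cost for d in deps.get(name, [])
                               if placement.get(d) == "cpu")
        placement[name] = "cpu" if cpu_cost <= gpu_cost else "gpu"
    return placement

kernels = {
    "init":    (1.0, 5.0),   # scalar, control-flow heavy: CPU-friendly
    "stencil": (50.0, 4.0),  # data parallel: GPU-friendly
    "reduce":  (6.0, 2.0),
}
deps = {"stencil": ["init"], "reduce": ["stencil"]}
placement = map_kernels(kernels, deps, transfer_cost=3.0)
```

Even this toy version captures the trade-off: "stencil" goes to the GPU despite paying one transfer from the CPU-resident "init", and "reduce" then stays on the GPU to avoid a second transfer.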

Relevance:

100.00%

Publisher:

Abstract:

Elettra is one of the first third-generation storage rings, recently upgraded to routinely operate in top-up mode at both 2.0 and 2.4 GeV. The facility hosts four dedicated beamlines for crystallography: two open to users and two under construction, expected to be ready for public use in 2015. In service since 1994, XRD1 is a general-purpose diffraction beamline. The light source for this wide-energy-range (4-21 keV) beamline is a permanent-magnet wiggler. XRD1 covers experiments ranging from grazing-incidence X-ray diffraction to macromolecular crystallography, and from industrial applications of powder diffraction to X-ray phasing with long wavelengths. The bending-magnet powder diffraction beamline MCX has been open to users since 2009, with a focus on microstructural investigations and studies under non-ambient conditions. A superconducting wiggler delivers a high photon flux to a new, fully automated beamline dedicated to macromolecular crystallography and to a branch beamline hosting a high-pressure powder X-ray diffraction station (both currently under construction). Users of the latter experimental station will have access to a specialized sample-preparation laboratory, shared with the SISSI infrared beamline. A high-throughput crystallization platform equipped with an imaging system for remote viewing, evaluation and scoring of macromolecular crystallization experiments has also been established and is open to the user community.