87 results for Multi-model inference
at Indian Institute of Science - Bangalore - India
Abstract:
Climate projections for the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) are made using the newly developed representative concentration pathways (RCPs) under the Coupled Model Intercomparison Project 5 (CMIP5). This article provides multi-model and multi-scenario temperature and precipitation projections for India for the period 1860-2099 based on the new climate data. We find that the CMIP5 ensemble mean climate is closer to the observed climate than any individual model. The key findings of this study are: (i) under the business-as-usual (between RCP6.0 and RCP8.5) scenario, mean warming in India is likely to be in the range 1.7-2 degrees C by the 2030s and 3.3-4.8 degrees C by the 2080s relative to pre-industrial times; (ii) all-India precipitation under the business-as-usual scenario is projected to increase from 4% to 5% by the 2030s and from 6% to 14% towards the end of the century (2080s) compared to the 1961-1990 baseline; (iii) while precipitation projections are generally less reliable than temperature projections, model agreement in precipitation projections increases from RCP2.6 to RCP8.5 and from short- to long-term projections, indicating that long-term precipitation projections are generally more robust than their short-term counterparts; and (iv) there is a consistent positive trend in the frequency of extreme precipitation days (e.g. > 40 mm/day) for the 2060s and beyond. These new climate projections should be used in future assessments of climate change impacts and in adaptation planning. There is a need to consider not just the mean climate projections but also the more important extreme projections, both in impact studies and in adaptation planning.
Abstract:
The impact of future climate change on the glaciers in the Karakoram and Himalaya (KH) is investigated using CMIP5 multi-model temperature and precipitation projections, and a relationship between glacial accumulation-area ratio and mass balance developed for the region from the last 30 to 40 years of observational data. We estimate that the current glacial mass balance (year 2000) for the entire KH region is -6.6 +/- 1 Gt a^-1, which decreases about sixfold to -35 +/- 2 Gt a^-1 by the 2080s under the high-emission scenario RCP8.5. However, under the low-emission scenario RCP2.6 the glacial mass loss only doubles, to -12 +/- 2 Gt a^-1, by the 2080s. We also find that 10.6% and 27% of the glaciers could face 'eventual disappearance' by the end of the century under RCP2.6 and RCP8.5 respectively, underscoring the threat to water resources under high-emission scenarios.
Abstract:
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influences propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This model accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends beyond the capability of existing approaches.
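As an illustrative sketch of the seating-style Gibbs update that underlies Chinese Restaurant Process models (this is a plain CRP, not the paper's Dynamic Multi-Relational variant; the concentration parameter `alpha` and integer table labels are assumptions for the example):

```python
import random
from collections import Counter

def crp_gibbs_step(assignments, alpha=1.0):
    """One Gibbs sweep over a plain Chinese Restaurant Process:
    each item is removed from its table and re-seated at an existing
    table with probability proportional to that table's occupancy,
    or at a new table with probability proportional to alpha."""
    for i in range(len(assignments)):
        counts = Counter(assignments)
        counts[assignments[i]] -= 1          # remove item i from its table
        if counts[assignments[i]] == 0:
            del counts[assignments[i]]
        tables = list(counts) + [max(counts) + 1]      # existing + fresh label
        weights = [counts[t] for t in counts] + [alpha]
        assignments[i] = random.choices(tables, weights=weights)[0]
    return assignments

random.seed(0)
z = crp_gibbs_step([0] * 20, alpha=1.0)
print(len(z))  # 20 table assignments after one sweep
```

An online, multi-threaded sampler as in the paper would interleave such sweeps with streaming data; this sketch shows only the core re-seating step.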
Abstract:
Recent studies have shown that changes in global mean precipitation are larger for solar forcing than for CO2 forcing of similar magnitude. In this paper, we use an atmospheric general circulation model to show that the differences originate from differing fast responses of the climate system. We estimate the adjusted radiative forcing and fast response using Hansen's "fixed-SST forcing" method. The total climate system response is calculated using mixed-layer simulations with the same model. Our analysis shows that the fast response is almost 40% of the total response for a few key variables such as precipitation and evaporation. We further demonstrate that the hydrologic sensitivity, defined as the change in global mean precipitation per unit warming, is the same for the two forcings when the fast responses are excluded from the definition of hydrologic sensitivity, suggesting that the slow response (feedback) of the hydrological cycle is independent of the forcing mechanism. Based on our results, we recommend that the fast and slow responses be compared separately in multi-model intercomparisons to discover and understand robust responses in the hydrologic cycle. The significance of this study for geoengineering is discussed.
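The decomposition the abstract describes can be illustrated with a line of arithmetic (all numbers below are invented for illustration, not taken from the study): the hydrologic sensitivity is computed only after the fast, forcing-dependent part of the precipitation change is subtracted out.

```python
# Hypothetical numbers, chosen only to illustrate the decomposition:
dP_total = 2.0   # total % change in global mean precipitation
dP_fast = 0.8    # fast (forcing-dependent) part, ~40% of the total
dT_slow = 1.5    # surface warming (K) associated with the slow response

# Hydrologic sensitivity with the fast response excluded (% per K):
sensitivity = (dP_total - dP_fast) / dT_slow
print(round(sensitivity, 3))  # 0.8
```

The paper's claim is that this slow-response quotient comes out the same for solar and CO2 forcing, even though dP_fast differs between them.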
Abstract:
A new automata model M_{r,k}, with a conceptually significant innovation in the form of multi-state alternatives at each instance, is proposed in this study. Computer simulations of the M_{r,k} model in the context of feature selection in an unsupervised environment have demonstrated the superiority of the model over similar models without this multi-state-choice innovation.
Abstract:
We have proposed a general method for finding the exact analytical solution of the multi-channel curve-crossing problem in the presence of delta-function couplings. We have analysed the case where a potential energy curve couples to a continuum (in energy) of potential energy curves.
Abstract:
Existing models for dmax predict that, in the limit of μd → ∞, dmax increases with the 3/4 power of μd. Further, at low values of interfacial tension, dmax becomes independent of σ even at moderate values of μd. However, experiments contradict both predictions: they show that the dependence of dmax on μd is much weaker and that, even at very low values of σ, dmax does not become independent of it. A model is proposed to explain these results. The model assumes that a drop circulates in a stirred vessel along with the bulk fluid and repeatedly passes through a deformation zone followed by a relaxation zone. In the deformation zone, the turbulent inertial stress tends to deform the drop, while the viscous stress generated in the drop and the interfacial stress resist deformation. The relaxation zone is characterized by the absence of turbulent stress, and hence the drop tends to relax back to the undeformed state. It is shown that a circulating drop, starting with some initial deformation, either reaches a steady state or breaks in one or several cycles. dmax is defined as the maximum size of a drop which, starting from an undeformed initial state in the first cycle, passes through the deformation zone an infinite number of times without breaking. The model predictions reduce to those of Lagisetty et al. (1986) for moderate values of μd and σ. The model successfully predicts the reduced dependence of dmax on μd at high values of μd as well as the dependence of dmax on σ at low values of σ. The data available in the literature on dmax could be predicted with greater accuracy by the model in comparison with existing models and correlations.
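The deformation/relaxation cycle described above can be sketched as a toy iteration (the stress balance and relaxation factor here are illustrative stand-ins, not the paper's constitutive equations; all parameter names and values are assumptions):

```python
def drop_fate(d, tau_turb, sigma, mu_d, n_cycles=1000, crit=1.0):
    """Toy sketch of the cycle model: a drop of size d starts undeformed
    and alternates between a deformation zone, where turbulent stress
    tau_turb drives deformation x against interfacial (sigma/d) and
    viscous (mu_d-dependent) resistance, and a relaxation zone where
    the deformation partially decays.  Returns True if the drop
    survives n_cycles without x exceeding the critical value crit."""
    x = 0.0
    for _ in range(n_cycles):
        # deformation zone: net driving stress grows the deformation
        drive = tau_turb - sigma / d - mu_d * x
        x = x + max(0.0, drive)
        if x >= crit:
            return False    # drop breaks during this cycle
        x *= 0.5            # relaxation zone: partial recovery
    return True             # settles into a steady cycle: survives

print(drop_fate(0.5, tau_turb=1.0, sigma=1.0, mu_d=0.1))   # True: survives
print(drop_fate(10.0, tau_turb=1.0, sigma=1.0, mu_d=0.1))  # False: breaks
```

In this picture dmax would be found by bisecting on d between the surviving and breaking regimes.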
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features with spatial resolution less than the size of the image pixels. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Even though LMMs have shown acceptable results, in practice linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process in which the endmembers are first derived from an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed to a multi-layer perceptron (MLP) architecture as input to train the neurons, which further refine the abundance estimates to account for the non-linear nature of the mixing classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to actual class proportions, indicating that individual class abundances obtained from AL-NLMM are very close to the real observations.
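The linear step of such a pipeline can be sketched as follows (this shows only the LMM abundance solve, step 2; the endmember matrix and pixel spectrum are synthetic, and a plain least-squares solve with post-hoc constraints stands in for the paper's unmixing algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_classes = 200, 3
E = rng.random((n_bands, n_classes))   # endmember spectra as columns
true_f = np.array([0.5, 0.3, 0.2])     # true sub-pixel class fractions
pixel = E @ true_f                     # noise-free linearly mixed spectrum

# LMM abundance estimation: solve pixel ≈ E f in the least-squares sense,
# then enforce the physical non-negativity and sum-to-one constraints.
f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
f = np.clip(f, 0.0, None)
f /= f.sum()
print(np.round(f, 3))                  # recovers the true fractions
```

In AL-NLMM these linear abundances would then be refined by an MLP trained on samples with known proportions to capture the non-linear mixing.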
Abstract:
The impact of global warming on daily rainfall is examined using atmospheric variables from five General Circulation Models (GCMs) and a stochastic downscaling model. Daily rainfall at eleven raingauges over the Malaprabha catchment of India and National Centers for Environmental Prediction (NCEP) reanalysis data at grid points over the catchment for the continuous period 1971-2000 (current climate) are used to calibrate the downscaling model. The downscaled rainfall simulations obtained using GCM atmospheric variables corresponding to the IPCC-SRES (Intergovernmental Panel on Climate Change - Special Report on Emissions Scenarios) A2 emission scenario for the same period are used to validate the results. Following this, future downscaled rainfall projections are constructed and examined for two 20-year time slices, viz. 2055 (i.e. 2046-2065) and 2090 (i.e. 2081-2100). The model results show reasonable skill in simulating the rainfall over the study region for the current climate. The downscaled rainfall projections indicate no significant changes in the rainfall regime in this catchment in the future. More specifically, a 2% decrease by 2055 and a 5% decrease by 2090 in monsoon (JJAS) rainfall compared to the current climate (1971-2000) are noticed under global warming conditions. Also, pre-monsoon (JFMAM) rainfall is projected to increase by 2% in 2055 and 6% in 2090, and post-monsoon (OND) rainfall by 2% in 2055 and 12% in 2090, over the region. On an annual basis, slight decreases of 1% and 2% are noted for 2055 and 2090, respectively.
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to find out which instruments are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
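The core idea of EM-estimated mixture weights over fixed component models can be sketched in a few lines (here 1-D Gaussians stand in for the pre-trained per-instrument tMMs, and all means, spreads, and sample counts are illustrative assumptions):

```python
import numpy as np

def em_weights(x, means, sds, n_iter=50):
    """Re-estimate only the mixture weights by EM, holding the
    component densities (the 'instrument models') fixed."""
    w = np.full(len(means), 1.0 / len(means))   # uniform initialisation
    for _ in range(n_iter):
        # E-step: responsibility of each fixed model for each sample
        dens = np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) / sds
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights become the average responsibilities
        w = r.mean(axis=0)
    return w

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(5, 1, 300)])
w = em_weights(x, means=np.array([0.0, 5.0]), sds=np.array([1.0, 1.0]))
print(np.round(w, 2))   # close to [0.7, 0.3]: weights track activity
```

In the paper the analogous weights, tracked over time, yield the Instrument Activity Graph.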
Abstract:
The Grating Compression Transform (GCT) is a two-dimensional analysis of the speech signal which has been shown to be effective for multi-pitch tracking in speech mixtures. Multi-pitch tracking methods using GCT apply a Kalman filter framework to obtain pitch tracks, which requires training of the filter parameters using true pitch tracks. We propose an unsupervised method for obtaining multiple pitch tracks. In the proposed method, multiple pitch tracks are modeled using the time-varying means of a Gaussian mixture model (GMM), referred to as a TVGMM. The TVGMM parameters are estimated using multiple pitch values at each frame in a given utterance, obtained from different patches of the spectrogram using GCT. We evaluate the performance of the proposed method on all-voiced speech mixtures as well as random speech mixtures having well-separated and close pitch tracks. TVGMM achieves multi-pitch tracking with 51% and 53% of multi-pitch estimates having error <= 20% for random mixtures and all-voiced mixtures, respectively. TVGMM also results in lower root mean squared error in pitch track estimation compared to that of Kalman filtering.
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by bootstrap, which is also used to draw inferences about the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real as well as a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that in this model, application of the bootstrap results in significant improvement over the simple maximum likelihood estimates.
Abstract:
We formulate the problem of detecting the constituent instruments in a polyphonic music piece as a joint decoding problem. From monophonic data, parametric Gaussian Mixture Hidden Markov Models (GM-HMMs) are obtained for each instrument. We propose a method to use the above models in a factorial framework, termed the Factorial GM-HMM (F-GM-HMM). The states are jointly inferred to explain the evolution of each instrument in the mixture observation sequence. The dependencies are decoupled using a variational inference technique. We show that the joint time evolution of all instruments' states can be captured using the F-GM-HMM. We compare the performance of the proposed method with that of the Student's-t mixture model (tMM) and the GM-HMM in an existing latent variable framework. Experiments on two- to five-instrument polyphonies, with 8 instrument models trained on the RWC dataset and tested on the RWC and TRIOS datasets, show that F-GM-HMM gives an advantage over the other considered models in segments containing co-occurring instruments.