235 results for Statistical parameters
Abstract:
Diketopyrrolopyrrole (DPP)-containing copolymers have gained considerable interest in organic optoelectronics, with great potential in organic photovoltaics. In this work, DPP-based statistical copolymers with slightly different bandgap energies and varying donor-acceptor ratios are investigated using monochromatic photocurrent spectroscopy and Fourier-transform photocurrent spectroscopy (FTPS). The statistical copolymer with a lower DPP fraction, when blended with a fullerene derivative, shows the signature of an inter-charge-transfer complex state in photocurrent spectroscopy. Furthermore, the absorption spectrum of the blended sample with a lower DPP fraction changes as a function of an external bias, qualitatively similar to the quantum-confined Stark effect, from which we estimate the exciton binding energy. The statistical copolymer with a higher DPP fraction shows no signal of the inter-charge-transfer states and yields a higher external quantum efficiency in a photovoltaic structure. To gain insight into the origin of the observed charge transfer transitions, we present theoretical studies using density-functional theory and time-dependent density-functional theory for the two pristine DPP-based statistical monomers.
Abstract:
This paper attempts to unravel any relations that may exist between turbulent shear flows and statistical mechanics through a detailed numerical investigation in the simplest case where both can be well defined. The flow considered for the purpose is the two-dimensional (2D) temporal free shear layer with a velocity difference ΔU across it, statistically homogeneous in the streamwise direction (x) and evolving from a plane vortex sheet in the direction normal to it (y), in a domain periodic in x with period L and unbounded in y. Extensive computer simulations of the flow are carried out through appropriate initial-value problems for a "vortex gas" comprising N point vortices of the same strength (γ = LΔU/N) and sign. Such a vortex gas is known to provide weak solutions of the Euler equation. More than ten different initial-condition classes are investigated using simulations involving up to 32,000 vortices, with ensemble averages evaluated over up to 10³ realizations and integration over 10⁴ L/ΔU. The temporal evolution of such a system is found to exhibit three distinct regimes. In Regime I the evolution is strongly influenced by the initial condition, sometimes lasting a significant fraction of L/ΔU. Regime III is a long-time domain-dependent evolution towards a statistically stationary state, via "violent" and "slow" relaxations [P.-H. Chavanis, Physica A 391, 3657 (2012)], over flow time scales of order 10² and 10⁴ L/ΔU, respectively (for N = 400). The final state involves a single structure that stochastically samples the domain, possibly constituting a "relative equilibrium." The vortex distribution within the structure follows a nonisotropic truncated form of the Lundgren-Pointin (L-P) equilibrium distribution (with negatively high temperatures; L-P parameter λ close to -1). The central finding is that, in the intermediate Regime II, the spreading rate of the layer is universal over the wide range of cases considered here; its value (in terms of momentum thickness) is (0.0166 ± 0.0002) ΔU. Regime II, extensively studied in the turbulent shear flow literature as a self-similar "equilibrium" state, is, however, part of the rapid nonequilibrium evolution of the vortex-gas system, which we term "explosive" as it lasts less than one L/ΔU. Regime II also exhibits significant values of N-independent two-vortex correlations, indicating that current kinetic theories that neglect correlations or consider them as O(1/N) cannot describe this regime. The evolution of the layer thickness in the present simulations in Regimes I and II agrees with experimental observations of spatially evolving (3D Navier-Stokes) shear layers. Further, the vorticity-stream-function relations in Regime III are close to those computed in 2D Navier-Stokes temporal shear layers [J. Sommeria, C. Staquet, and R. Robert, J. Fluid Mech. 233, 661 (1991)]. These findings suggest the dominance of what may be called the Kelvin-Biot-Savart mechanism in determining the growth of the free shear layer through large-scale momentum and vorticity dispersal.
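As a concrete illustration of the vortex-gas setup described above, the sketch below advances N identical point vortices using the classical Biot-Savart sum for a singly periodic vortex row, u - iv = -(iγ/2L) Σ cot(π(z_j - z_k)/L), starting from a perturbed vortex sheet. This is a minimal toy under stated assumptions, not the authors' code; parameter values are illustrative, and momentum thickness and two-vortex correlations would be accumulated over an ensemble of such runs.

```python
import numpy as np

def velocities(z, gamma, L):
    """Biot-Savart velocities for identical point vortices, periodic in x.

    Classical result for a singly periodic vortex row:
        u - i v = -(i * gamma / (2 L)) * sum_{k != j} cot(pi (z_j - z_k) / L)
    """
    dz = z[:, None] - z[None, :]
    np.fill_diagonal(dz, 1.0)              # placeholder; zeroed below
    cot = 1.0 / np.tan(np.pi * dz / L)
    np.fill_diagonal(cot, 0.0)             # a vortex induces no velocity on itself
    return np.conj(-1j * gamma / (2 * L) * cot.sum(axis=1))

# Vortex sheet of strength DeltaU, discretized into N vortices of circulation
# gamma = L * DeltaU / N (values below are illustrative, not the paper's runs).
N, L, DeltaU, dt = 400, 1.0, 1.0, 1e-3
gamma = L * DeltaU / N
rng = np.random.default_rng(0)
z = np.arange(N) * L / N + 1j * 1e-3 * rng.standard_normal(N)

for _ in range(2_000):                     # ~2 L/DeltaU of flow time
    z += dt * velocities(z, gamma, L)      # forward Euler; use RK4 in practice
    z = z.real % L + 1j * z.imag           # wrap x back into [0, L)

print("layer thickness (std of y):", z.imag.std())
```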
Abstract:
Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on many parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. It has been observed that satisfactory performance of sparse recovery algorithms requires a minimum number of measurements, and this minimum differs from algorithm to algorithm. In many applications, the number of measurements is unlikely to meet this requirement, and any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, it has also been observed that the performance of sparse recovery algorithms depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the performance degradation of sparse recovery algorithms in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which operate on different principles, we can improve the sparse signal recovery. We present theoretical analysis to derive sufficient conditions for performance improvement of the proposed schemes, and we demonstrate the advantage of the proposed methods through numerical simulations on both synthetic and real signals.
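The abstract does not spell out the fusion rule, so the sketch below is one plausible minimal instance of the idea: run two recovery algorithms built on different principles (greedy OMP and iterative hard thresholding here), take the union of their estimated supports, solve least squares on that union, and keep the K largest entries. All names, algorithms, and parameter values are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedy size-K support estimate."""
    r, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs
    return set(support)

def iht(A, y, K, iters=200):
    """Iterative hard thresholding: a size-K estimate from a different principle."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step size for stability
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += mu * A.T @ (y - A @ x)
        x[np.argsort(np.abs(x))[:-K]] = 0.0       # keep only the K largest entries
    return set(np.flatnonzero(x))

def fuse(A, y, K, supports):
    """Fuse candidate supports: LS on their union, then prune back to K entries."""
    union = sorted(set().union(*supports))
    xs, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[union] = xs
    x[np.argsort(np.abs(x))[:-K]] = 0.0
    return x

m, n, K = 40, 100, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
y = A @ x0 + 0.01 * rng.standard_normal(m)
x_hat = fuse(A, y, K, [omp(A, y, K), iht(A, y, K)])
print("relative recovery error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```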
Abstract:
Classification of the pharmacologic activity of a chemical compound is an essential step in any drug discovery process. We develop two new atom-centered fragment descriptors (vertex indices): one based solely on topological considerations, without discriminating atom or bond types, and another based on topological and electronic features. We also assess their usefulness by devising a method to rank and classify molecules with regard to their antibacterial activity. The classification performance of our method is found to be superior to that of two previous studies on large heterogeneous data sets for hit-finding and hit-to-lead studies, even though we use far fewer parameters. It is found that for hit-finding studies topological features (simple graph) alone provide significant discriminating power, while for the hit-to-lead process a small but consistent improvement can be made by additionally including electronic features (colored graph). Our approach is simple, interpretable, and suitable for the design of molecules, as we do not use any physicochemical properties. The singular use of vertex indices as descriptors, a novel range-based feature extraction, and rigorous statistical validation are the key elements of this study.
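The paper's two descriptors are not reproduced here, but the general idea of an atom-centered topological vertex index, a purely graph-based per-atom number computed on a hydrogen-suppressed molecular graph, can be sketched as follows. The Randić-style form and the small graph are stand-ins of my choosing, not the paper's indices.

```python
import math

# Hydrogen-suppressed molecular graph as an adjacency list: a six-membered
# ring (atoms 0-5) with one substituent atom (6). Purely illustrative.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4],
       4: [3, 5, 6], 5: [4, 0], 6: [4]}

def vertex_index(adj, i):
    """Atom-centered topological index: sum over neighbours j of
    1/sqrt(deg(i) * deg(j)), i.e. a per-vertex Randic-style term."""
    di = len(adj[i])
    return sum(1.0 / math.sqrt(di * len(adj[j])) for j in adj[i])

descriptors = {i: round(vertex_index(adj, i), 3) for i in adj}
print(descriptors)   # ring atoms vs. the substituent get distinct values
```

An electronic ("colored graph") variant would additionally weight each term by atom-type-dependent parameters.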
Abstract:
An attempt has been made to quantify the variability in the seismic activity rate across the whole of India and adjoining areas (0–45°N and 60–105°E) using an earthquake database compiled from various sources. Both historical and instrumental data were compiled, and a complete catalog of Indian earthquakes up to 2010 has been prepared. Region-specific earthquake magnitude scaling relations correlating different magnitude scales were derived to develop a homogeneous earthquake catalog for the region on a unified moment magnitude scale. The dependent events (75.3%) in the raw catalog have been removed, and the effect of aftershocks on the variation of the b value has been quantified. The study area was divided into 2,025 grid points (1° × 1°), and the spatial variation of the seismicity across the region has been analyzed considering all the events within a 300 km radius of each grid point. A significant decrease in the seismic b value was seen when the declustered catalog was used, which illustrates that a larger proportion of the dependent events in the earthquake catalog are associated with lower magnitude events. A list of 203,448 earthquakes (including aftershocks and foreshocks) that occurred in the region over the period from 250 B.C. to 2010 A.D., with all available details, has been uploaded to the website http://www.civil.iisc.ernet.in/~sreevals/resource.htm.
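For reference, the standard maximum-likelihood b-value estimator (Aki 1965, with Utsu's correction for magnitude binning) that underlies analyses of this kind can be sketched as below; the synthetic catalog is purely illustrative. Comparing such an estimate on a raw versus a declustered catalog is what produces the b-value shift reported above.

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's binning correction:
    b = log10(e) / (mean(M) - (mc - dm/2)), over events with M >= mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with b = 1, complete above mc - dm/2
# and binned to dm = 0.1 magnitude units (purely illustrative).
rng = np.random.default_rng(42)
raw = 2.95 + rng.exponential(scale=1.0 / np.log(10), size=50_000)
mags = np.round(raw / 0.1) * 0.1
print(f"estimated b ~ {b_value_mle(mags, mc=3.0):.2f}")   # close to 1.0
```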
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that it encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
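To make the SF idea concrete, here is a minimal two-sided gradient estimator with the Gaussian kernel, which is the q → 1 member of the q-Gaussian family the article studies. This is a toy sketch of the estimator only, not the article's two-timescale projected-gradient algorithm, and the objective below is a stand-in for a noisy system simulation.

```python
import numpy as np

def sf_gradient(f, x, beta=0.05, n_samples=2000, rng=None):
    """Two-sided smoothed-functional gradient estimate, Gaussian kernel:
    grad f(x) ~ E[ eta * (f(x + beta*eta) - f(x - beta*eta)) / (2*beta) ],
    with eta ~ N(0, I). Other q-Gaussians swap in heavier/lighter tails."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)
        g += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2 * beta)
    return g / n_samples

f = lambda v: float(v @ v)          # toy objective; in the article f would be
x = np.array([1.0, -2.0])           # a noisy simulation of a stochastic system
print(sf_gradient(f, x))            # ~ true gradient 2x = [2, -4]
```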
Abstract:
The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required by the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular-value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses and assess the statistical efficiency of the reconstruction techniques by comparing their performance against the Cramér-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction-noise ratio offered by the proposed method in comparison with the standard approach.
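As an illustration of the parametric machinery involved, a bare-bones Prony estimator for a sum of exponentials is sketched below. This is only the noiseless core; the paper's pipeline wraps it in the FRI sampling model and applies Cadzow denoising before this step, and the two-exponential test signal is hypothetical.

```python
import numpy as np

def prony(y, K):
    """Classical Prony: y[n] = sum_k a_k z_k^n obeys a K-tap linear recursion
    whose characteristic roots are the z_k. Solve the recursion by least
    squares, root its polynomial, then fit amplitudes on a Vandermonde system."""
    N = len(y)
    A = np.column_stack([y[K - i - 1:N - i - 1] for i in range(K)])
    p, *_ = np.linalg.lstsq(A, -y[K:], rcond=None)
    z = np.roots(np.concatenate(([1.0], p)))
    V = np.vander(z, N, increasing=True).T
    a, *_ = np.linalg.lstsq(V, y, rcond=None)
    return z, a

# Toy "two-layer" test: two complex exponentials with known parameters.
n = np.arange(64)
y = 1.5 * np.exp(1j * 0.3 * n) + 0.7 * np.exp(1j * 0.8 * n)
z_est, a_est = prony(y, K=2)
print(np.angle(z_est), np.abs(a_est))   # ~ {0.3, 0.8} and {1.5, 0.7}, in some order
```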
Abstract:
The potential of Citrobacter freundii, a Gram-negative bacterium, for the remediation of hexavalent chromium (Cr(VI)) and trivalent chromium (Cr(III)) from aqueous solutions was investigated. Bioremediation of Cr(VI) involved both biosorption and bioreduction processes, whereas only biosorption was observed in the case of Cr(III) bioremediation. In the Cr(VI) bioremediation studies, about 59% biosorption was achieved at an equilibrium time of 2 h, an initial Cr(VI) concentration of 4 mg/L, pH 1, and a biomass loading of 5×10¹¹ cells/mL. The remaining 41% was found to be in the form of Cr(III) ions owing to bioreduction of Cr(VI) by the bacteria, resulting in the absence of Cr(VI) ions in the residue and thereby meeting the USEPA specifications. Similar studies were carried out using a Cr(III) solution at an equilibrium time of 2 h, a Cr(III) concentration of 4 mg/L, pH 3, and a biomass loading of 6.3×10¹¹ cells/mL, wherein a maximum biosorption of about 30% was achieved.
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) simulations. Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling model to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performance of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the two grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) than the spatial interpolation technique used in earlier studies.
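The MLR-plus-PCA branch of such a downscaling model is simple to sketch. The synthetic data, array shapes, and component count below are placeholders; the paper's full workflow adds FCM clustering, the other regression variants, and the SVM.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical arrays: rows are months, columns are gridded predictor values
# (MSLP, air temperature, geopotential height, humidity, U wind, V wind).
rng = np.random.default_rng(0)
X_reanalysis = rng.standard_normal((360, 600))        # 30 years, monthly
rainfall = np.maximum(X_reanalysis[:, :5].sum(axis=1)
                      + 0.5 * rng.standard_normal(360), 0.0)

downscaler = make_pipeline(PCA(n_components=20), LinearRegression())
downscaler.fit(X_reanalysis, rainfall)                # calibrate on reanalysis
print("training R^2:", downscaler.score(X_reanalysis, rainfall))
# For projections, GCM predictors would first be regressed onto the reanalysis
# grid (the MLR linking step described above), then fed to the fitted model:
# rainfall_future = downscaler.predict(X_gcm_linked)
```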
Abstract:
Friction stir processing (FSP) is emerging as one of the most capable severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP on the path to its commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize the major independent parameters of the process, such as plunge depth, tool rotation speed, and traverse speed. Tensile properties of the optimally friction stir processed sample were correlated with microstructural characterization carried out using scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times that of the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which is attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods owing to the complex interactions between workpiece, tool, and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone that provided better mechanical properties than the base metal.
Abstract:
The Grating Compression Transform (GCT) is a two-dimensional analysis of the speech signal that has been shown to be effective for multi-pitch tracking in speech mixtures. Multi-pitch tracking methods using the GCT apply a Kalman filter framework to obtain pitch tracks, which requires training the filter parameters on true pitch tracks. We propose an unsupervised method for obtaining multiple pitch tracks. In the proposed method, multiple pitch tracks are modeled using the time-varying means of a Gaussian mixture model (GMM), referred to as a TVGMM. The TVGMM parameters are estimated using multiple pitch values at each frame in a given utterance, obtained from different patches of the spectrogram using the GCT. We evaluate the performance of the proposed method on all-voiced speech mixtures as well as random speech mixtures having well-separated and close pitch tracks. The TVGMM achieves multi-pitch tracking with 51% and 53% of multi-pitch estimates having error ≤ 20% for random mixtures and all-voiced mixtures, respectively. The TVGMM also yields lower root mean squared error in pitch track estimation than Kalman filtering.
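A TVGMM can be sketched as an EM fit in which each mixture component's mean is a polynomial in time. This is a minimal reading of the model described above; the polynomial parameterization, degrees, initialization, and synthetic data are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def tvgmm_fit(t, f, K=2, deg=3, iters=50, rng=None):
    """EM for a GMM whose K means vary with time as degree-`deg` polynomials:
    f ~ sum_k w_k N(poly_k(t), sigma_k^2). Inputs t, f are the pooled
    (frame-time, pitch-candidate) pairs, e.g. from GCT spectrogram patches."""
    rng = rng or np.random.default_rng(0)
    n = len(f)
    w, sig2 = np.full(K, 1.0 / K), np.full(K, f.var())
    coef = []
    for _ in range(K):                       # initialize each track on a random half
        idx = rng.choice(n, n // 2, replace=False)
        coef.append(np.polyfit(t[idx], f[idx], deg))
    for _ in range(iters):
        # E-step: responsibilities of each track for each pitch candidate
        mu = np.stack([np.polyval(c, t) for c in coef])          # (K, n)
        logp = (np.log(w)[:, None] - 0.5 * np.log(2 * np.pi * sig2)[:, None]
                - 0.5 * (f - mu) ** 2 / sig2[:, None])
        logp -= logp.max(axis=0)                                 # stabilize
        r = np.exp(logp)
        r /= r.sum(axis=0)
        # M-step: weights, weighted polynomial fit per track, variances
        w = r.mean(axis=1)
        coef = [np.polyfit(t, f, deg, w=np.sqrt(r[k]) + 1e-12) for k in range(K)]
        mu = np.stack([np.polyval(c, t) for c in coef])
        sig2 = np.maximum((r * (f - mu) ** 2).sum(axis=1) / r.sum(axis=1), 1e-6)
    return coef, w, sig2

# Two synthetic pitch tracks with 3 noisy candidates per frame.
rng = np.random.default_rng(1)
frames = np.repeat(np.linspace(0.0, 1.0, 200), 3)
truth = np.where(rng.random(600) < 0.5, 120 + 60 * frames, 220 - 40 * frames)
cands = truth + 5 * rng.standard_normal(600)
coef, w, sig2 = tvgmm_fit(frames, cands, K=2)
print(np.polyval(coef[0], [0.0, 1.0]), np.polyval(coef[1], [0.0, 1.0]))
```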
Abstract:
Diffusion couple experiments are conducted to study phase evolution in the Co-rich part of the Co-Ni-Ta phase diagram. This helps to examine the available phase diagram and to propose a correction to the stability of the Co2Ta phase based on compositional measurements and X-ray analysis. The growth rate of this phase decreases with an increase in Ni content, and the same is reflected in the estimated integrated interdiffusion coefficients of the components in this phase. The possible reasons for this change are discussed in terms of defects, crystal structure, and the driving forces for diffusion. The diffusion rate of Co in the Co2Ta phase at the Co-rich composition is higher because of the greater number of Co-Co bonds present compared to Ta-Ta bonds and the presence of Co antisites accommodating the deviation from stoichiometry. The decrease in the diffusion coefficients upon Ni addition indicates that Ni preferentially replaces Co antisites, thereby decreasing the diffusion rate.
Abstract:
Materials with widely varying molecular topologies that exhibit liquid crystalline properties have attracted considerable attention in recent years. ¹³C NMR spectroscopy is a convenient method for studying such novel systems. In this approach, the assignment of the spectrum is the first step, which is a non-trivial problem. Towards this end, we propose here a method that enables the carbon skeleton of the different sub-units of the molecule to be traced unambiguously. The proposed method uses a heteronuclear correlation experiment to detect pairs of nearby carbons with attached protons in the liquid crystalline core through correlation of the carbon chemical shifts to the double-quantum coherences of protons generated through the dipolar coupling between them. Supplemented by experiments that identify non-protonated carbons, the method leads to a complete assignment of the spectrum. We initially apply this method to assign the ¹³C spectrum of the liquid crystal 4-n-pentyl-4'-cyanobiphenyl oriented in the magnetic field. We then utilize the method to assign the aromatic carbon signals of a thiophene-based liquid crystal, thereby enabling the local order parameters of the molecule to be estimated and the mutual orientation of the different sub-units to be obtained.
Abstract:
Advances in forest carbon mapping have the potential to greatly reduce uncertainties in the global carbon budget and to facilitate effective emissions mitigation strategies such as REDD+ (Reducing Emissions from Deforestation and Forest Degradation). Though broad-scale mapping is based primarily on remote sensing data, the accuracy of the resulting forest carbon stock estimates depends critically on the quality of field measurements and calibration procedures. The mismatch in spatial scales between field inventory plots and the larger pixels of current and planned remote sensing products for forest biomass mapping is of particular concern, as it has the potential to introduce errors, especially if forest biomass shows strong local spatial variation. Here, we used 30 large (8-50 ha) globally distributed permanent forest plots to quantify the spatial variability in aboveground biomass density (AGBD, in Mg ha⁻¹) at spatial scales ranging from 5 to 250 m (0.025-6.25 ha), and to evaluate the implications of this variability for calibrating remote sensing products using simulated remote sensing footprints. We found that local spatial variability in AGBD is large for standard plot sizes, averaging 46.3% for replicate 0.1 ha subplots within a single large plot, and 16.6% for 1 ha subplots. AGBD showed weak spatial autocorrelation at distances of 20-400 m, with autocorrelation higher in sites with greater topographic variability and statistically significant in half of the sites. We further show that when field calibration plots are smaller than the remote sensing pixels, the high local spatial variability in AGBD leads to a substantial "dilution" bias in the calibration parameters, a bias that cannot be removed with standard statistical methods. Our results suggest that topography should be explicitly accounted for in future sampling strategies and that much care must be taken in designing calibration schemes if remote sensing of forest carbon is to achieve its promise.
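The dilution effect has the flavor of classical regression attenuation, which a short Monte Carlo can illustrate. The linear forward model and all numbers below are made up for the toy; this is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
agbd_pixel = rng.gamma(shape=4.0, scale=75.0, size=n)     # "true" 1-ha pixel AGBD
# The remote-sensing metric responds to the pixel mean (toy linear model).
signal = 10.0 + 0.02 * agbd_pixel + rng.normal(0.0, 0.5, n)
# A 0.1-ha field plot samples the pixel with ~46% local variability.
agbd_plot = agbd_pixel + rng.normal(0.0, 0.463 * agbd_pixel.mean(), n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

b_true = slope(agbd_pixel, signal)    # calibration slope with exact pixel AGBD
b_plot = slope(agbd_plot, signal)     # same fit using small-plot AGBD
print(f"attenuation factor: {b_plot / b_true:.2f}")       # < 1: "dilution" bias
```

The attenuation factor approaches Var(pixel) / (Var(pixel) + Var(plot error)), which averaging over more plots does not remove, consistent with the bias described above.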
Abstract:
Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes, with each node associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episodes) or trivial (parallel episodes). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been based on the framework of statistical hypothesis testing. This paper presents a method for assessing the statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds on the non-overlapped frequency beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders; an injective episode is one in which event types are not allowed to repeat. It is then pointed out how the method can be extended to the class of all episodes. The significance threshold calculations for general partial-order episodes proposed here also generalize the existing significance results for serial episodes. Through simulation studies, we illustrate the usefulness of these statistical thresholds in pruning uninteresting patterns.
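For intuition, a frequency threshold of the kind described can be estimated by Monte Carlo under an i.i.d. null for the simplest (serial) case. The paper derives analytical thresholds covering general partial orders; the sketch below does not, and the alphabet, sequence length, and significance level are illustrative.

```python
import numpy as np

def nonoverlapped_frequency(seq, episode):
    """Greedy count of non-overlapped occurrences of a serial episode."""
    count, j = 0, 0
    for s in seq:
        if s == episode[j]:
            j += 1
            if j == len(episode):
                count, j = count + 1, 0
    return count

def significance_threshold(episode, alphabet, n, alpha=0.05, trials=2000, rng=None):
    """(1 - alpha) quantile of the null distribution of the non-overlapped
    frequency in an i.i.d. uniform event stream of length n."""
    rng = rng or np.random.default_rng(0)
    freqs = [nonoverlapped_frequency(rng.choice(alphabet, n), episode)
             for _ in range(trials)]
    return np.quantile(freqs, 1 - alpha)

thr = significance_threshold(list("ABC"), list("ABCDE"), n=1000)
print("declare A->B->C significant if its non-overlapped frequency exceeds", thr)
```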