989 results for Statistical Robustness
Abstract:
Diketopyrrolopyrrole (DPP)-containing copolymers have attracted considerable interest in organic optoelectronics and show great potential for organic photovoltaics. In this work, DPP-based statistical copolymers with slightly different bandgap energies and varying donor-acceptor ratios are investigated using monochromatic photocurrent spectroscopy and Fourier-transform photocurrent spectroscopy (FTPS). The statistical copolymer with the lower DPP fraction, when blended with a fullerene derivative, shows the signature of an inter charge transfer complex state in photocurrent spectroscopy. Furthermore, the absorption spectrum of the blended sample with the lower DPP fraction changes as a function of an external bias, qualitatively similar to the quantum-confined Stark effect, from which we estimate the exciton binding energy. The statistical copolymer with the higher DPP fraction shows no signal of the inter charge transfer states and yields a higher external quantum efficiency in a photovoltaic structure. To gain insight into the origin of the observed charge transfer transitions, we present theoretical studies using density-functional theory and time-dependent density-functional theory for the two pristine DPP-based statistical monomers.
Abstract:
This paper attempts to unravel any relations that may exist between turbulent shear flows and statistical mechanics through a detailed numerical investigation of the simplest case in which both can be well defined. The flow considered for the purpose is the two-dimensional (2D) temporal free shear layer with a velocity difference ΔU across it, statistically homogeneous in the streamwise direction (x) and evolving from a plane vortex sheet in the direction normal to it (y), in a domain that is periodic in x with period L and unbounded in y. Extensive computer simulations of the flow are carried out through appropriate initial-value problems for a "vortex gas" comprising N point vortices of the same strength (γ = L ΔU/N) and sign. Such a vortex gas is known to provide weak solutions of the Euler equation. More than ten different initial-condition classes are investigated using simulations involving up to 32 000 vortices, with ensemble averages evaluated over up to 10^3 realizations and integration over 10^4 L/ΔU. The temporal evolution of such a system is found to exhibit three distinct regimes. In Regime I the evolution is strongly influenced by the initial condition, sometimes lasting a significant fraction of L/ΔU. Regime III is a long-time domain-dependent evolution towards a statistically stationary state, via "violent" and "slow" relaxations [P.-H. Chavanis, Physica A 391, 3657 (2012)], over flow time scales of order 10^2 and 10^4 L/ΔU, respectively (for N = 400). The final state involves a single structure that stochastically samples the domain, possibly constituting a "relative equilibrium." The vortex distribution within the structure follows a nonisotropic truncated form of the Lundgren-Pointin (L-P) equilibrium distribution (with negatively high temperatures; L-P parameter λ close to -1). The central finding is that, in the intermediate Regime II, the spreading rate of the layer is universal over the wide range of cases considered here. The value (in terms of momentum thickness) is 0.0166 ± 0.0002 times ΔU. Regime II, extensively studied in the turbulent shear flow literature as a self-similar "equilibrium" state, is, however, part of the rapid nonequilibrium evolution of the vortex-gas system, which we term "explosive" as it lasts less than one L/ΔU. Regime II also exhibits significant values of N-independent two-vortex correlations, indicating that current kinetic theories that neglect correlations or consider them as O(1/N) cannot describe this regime. The evolution of the layer thickness in the present simulations in Regimes I and II agrees with the experimental observations of spatially evolving (3D Navier-Stokes) shear layers. Further, the vorticity-stream-function relations in Regime III are close to those computed in 2D Navier-Stokes temporal shear layers [J. Sommeria, C. Staquet, and R. Robert, J. Fluid Mech. 233, 661 (1991)]. These findings suggest the dominance of what may be called the Kelvin-Biot-Savart mechanism in determining the growth of the free shear layer through large-scale momentum and vorticity dispersal.
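A minimal sketch (not the authors' code) of how such a vortex-gas simulation can be advanced in time, assuming the standard Biot-Savart kernel for a row of identical point vortices periodic in x with period L; the function names, the explicit Euler step, and the perturbed-sheet initial condition are illustrative choices only:

import numpy as np

def periodic_vortex_velocities(z, gamma, L):
    """Conjugate velocities induced on each point vortex by all the others,
    for identical vortices in a domain periodic in x with period L.
    z: complex positions x + i*y; gamma: common circulation per vortex."""
    dz = z[:, None] - z[None, :]                           # pairwise separations
    np.fill_diagonal(dz, 1.0)                              # placeholder to avoid division by zero
    w = -1j * gamma / (2.0 * L) / np.tan(np.pi * dz / L)   # periodic Biot-Savart kernel
    np.fill_diagonal(w, 0.0)                               # no self-induced velocity
    return np.conj(w.sum(axis=1))                          # dz/dt = u + i*v

def euler_step(z, gamma, L, dt):
    """One explicit Euler step; a higher-order integrator would be used in practice."""
    return z + dt * periodic_vortex_velocities(z, gamma, L)

# Example: N vortices on a slightly perturbed plane vortex sheet of strength Delta_U.
N, L, Delta_U = 400, 1.0, 1.0
gamma = L * Delta_U / N
rng = np.random.default_rng(0)
z = np.linspace(0.0, L, N, endpoint=False) + 1j * 1e-3 * rng.standard_normal(N)
for _ in range(100):
    z = euler_step(z, gamma, L, dt=1e-3)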
Abstract:
To combine the advantages of both stability- and optimality-based designs, a single network adaptive critic (SNAC) aided nonlinear dynamic inversion approach is presented in this paper. Here, the gains of a dynamic inversion controller are selected in such a way that the resulting controller behaves very close to a pre-synthesized SNAC controller in the output regulation sense. Because SNAC is based on optimal control theory, it makes the dynamic inversion controller operate nearly optimally. More importantly, it retains the two major benefits of dynamic inversion, namely (i) a closed-form expression of the controller and (ii) easy scalability to command tracking applications without knowing the reference commands a priori. An extended architecture is also presented in this paper that adapts online to system modeling and inversion errors, as well as reduced control effectiveness, thereby leading to enhanced robustness. The strengths of this hybrid method of applying SNAC to optimize a nonlinear dynamic inversion controller are demonstrated by considering a benchmark problem in robotics, that is, a two-link robotic manipulator system. Copyright (C) 2013 John Wiley & Sons, Ltd.
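For reference, the standard dynamic inversion law that such a design builds on can be written as follows (a schematic only; the paper's contribution is the SNAC-based selection of the gain and the online adaptation, which are not reproduced here). For an affine system \dot{x} = f(x) + g(x)\,u with regulated output y = h(x), the controller

    u_{DI} = (L_g h(x))^{-1} \left[ \dot{y}_{ref} + K\,(y_{ref} - y) - L_f h(x) \right]

enforces the linear error dynamics \dot{e} + K e = 0 for e = y_{ref} - y, which is what gives the closed-form expression and the command-tracking scalability mentioned above; SNAC then provides the near-optimal choice of K.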
Abstract:
The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required for the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular-value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses and assess the statistical efficiency of the reconstruction techniques by comparing their performance against the Cramér-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction-noise ratio offered by the proposed method in comparison with the standard approach.
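A compact sketch of this kind of FRI processing, assuming the spectral measurements are modelled as a sum of complex exponentials s[m] ~ sum_l a_l exp(-2j z_l dk m) with K layers at depths z_l; the function names, array shapes, and the wavenumber step dk are illustrative and not taken from the paper:

import numpy as np
from scipy.linalg import hankel, svd

def cadzow_denoise(s, K, n_iter=20):
    """Iterated SVD (Cadzow) denoising: enforce rank K (the assumed number of layers)
    on the Hankel matrix of the spectral samples, then map back to a sequence by
    averaging anti-diagonals."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    r = N // 2
    for _ in range(n_iter):
        H = hankel(s[:r], s[r - 1:])                     # H[i, j] = s[i + j]
        U, sv, Vh = svd(H, full_matrices=False)
        Hk = (U[:, :K] * sv[:K]) @ Vh[:K, :]             # rank-K truncation
        acc = np.zeros(N, dtype=complex)
        cnt = np.zeros(N)
        for i in range(Hk.shape[0]):
            for j in range(Hk.shape[1]):
                acc[i + j] += Hk[i, j]
                cnt[i + j] += 1
        s = acc / cnt
    return s

def prony_depths(s, K, dk):
    """Annihilating-filter (Prony) step: recover the K layer depths from the roots
    of the filter that annihilates the exponential model."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    H = hankel(s[:N - K], s[N - K - 1:])                 # (N-K) x (K+1), H[m, j] = s[m+j]
    _, _, Vh = svd(H)
    h = Vh[-1].conj()                                    # null-space vector, ascending powers
    u = np.roots(h[::-1])                                # roots ~ exp(-2j * dk * z_l)
    return np.sort(np.mod(-np.angle(u), 2 * np.pi) / (2 * dk))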
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and from simulations of the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3). Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performance of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) than the spatial interpolation technique used in earlier studies.
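A minimal sketch of the dimensionality-reduction-plus-regression idea, using scikit-learn with PCA followed by an SVM regressor; the FCM clustering step is omitted, and the function name, predictor layout, and hyperparameter values are illustrative assumptions rather than the models calibrated in the paper:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_downscaling_model(X, y, n_components=10):
    """X: (n_months, n_predictors) large-scale NCEP/GCM predictor fields;
    y: (n_months,) observed monthly rainfall at one station.
    Standardize, reduce with PCA, then regress with an RBF-kernel SVM."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, y)
    return model

# Downscaled projection: model.predict(X_future) with GCM-simulated predictors.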
Abstract:
Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes, with each node associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episode) or trivial (parallel episode). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been based on the framework of statistical hypothesis testing. This paper presents a method of assessing the statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds on the non-overlapped frequency beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders; an injective episode is one in which event types are not allowed to repeat. Later, it is pointed out how the method can be extended to the class of all episodes. The significance threshold calculations for general partial order episodes proposed here also generalize the existing significance results for serial episodes. Through simulation studies, the usefulness of these statistical thresholds in pruning uninteresting patterns is illustrated. (C) 2014 Elsevier Inc. All rights reserved.
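As a generic illustration of the hypothesis-testing idea (not the paper's non-overlapped-frequency threshold, which accounts for the episode's partial-order structure), a frequency threshold under a simple null model might be computed as:

from scipy.stats import norm

def frequency_threshold(p_null, n_windows, alpha=0.05):
    """If, under the null model, the episode occurs in a window with probability
    p_null, its count over n_windows is roughly binomial; declare the episode
    significant when the observed frequency exceeds this Gaussian-approximation bound."""
    mean = n_windows * p_null
    std = (n_windows * p_null * (1.0 - p_null)) ** 0.5
    return mean + norm.ppf(1.0 - alpha) * std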
Abstract:
We formulate a natural model of loops and isolated vertices for arbitrary planar graphs, which we call the monopole-dimer model. We show that the partition function of this model can be expressed as a determinant. We then extend the method of Kasteleyn and Temperley-Fisher to calculate the partition function exactly in the case of rectangular grids. This partition function turns out to be a square of a polynomial with positive integer coefficients when the grid lengths are even. Finally, we analyse this formula in the infinite volume limit and show that the local monopole density, free energy and entropy can be expressed in terms of well-known elliptic functions. Our technique is a novel determinantal formula for the partition function of a model of isolated vertices and loops for arbitrary graphs.
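For context, the classical Kasteleyn-Temperley-Fisher result that this approach generalizes counts the dimer coverings of an m x n rectangular grid (mn even) as

    Z_{m \times n} = \prod_{j=1}^{\lceil m/2 \rceil} \prod_{k=1}^{\lceil n/2 \rceil} \left( 4\cos^2\frac{j\pi}{m+1} + 4\cos^2\frac{k\pi}{n+1} \right);

the monopole-dimer partition function itself, and its square-of-a-polynomial structure for even grid lengths, is given in the paper and is not reproduced here.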
Abstract:
It is well known that wrist pulse signals contain information about the health status of a person, and hence diagnosis based on pulse signals has long assumed great importance. In this paper, the efficacy of signal processing techniques in extracting useful information from wrist pulse signals is demonstrated using signals recorded under two different experimental conditions, viz. a before-lunch condition and an after-lunch condition. We have used Pearson's product-moment correlation coefficient, which is an effective measure of phase synchronization, to make a statistical analysis of wrist pulse signals. Contour plots and box plots are used to illustrate various differences. Two-sample t-tests show that the correlations differ in a statistically significant way between the groups. Results show that the correlation coefficient is effective in distinguishing the changes taking place after having lunch. This paper demonstrates the ability of wrist pulse signals to detect changes occurring under two different conditions. The study assumes importance in view of the limited literature available on the analysis of wrist pulse signals in the context of food intake and also in view of its potential health care applications.
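A minimal sketch of the statistical pipeline described above, using SciPy's Pearson correlation and two-sample t-test; the data layout (one correlation value per subject and condition) and the function name are illustrative assumptions:

import numpy as np
from scipy.stats import pearsonr, ttest_ind

def pairwise_correlations(signals):
    """Pearson correlation between every pair of wrist-pulse recordings in a group
    (e.g. all before-lunch signals), used as a phase-synchronization measure."""
    n = len(signals)
    return np.array([[pearsonr(signals[i], signals[j])[0] for j in range(n)]
                     for i in range(n)])

# Compare the two conditions: 'before' and 'after' are hypothetical arrays of
# per-subject correlation values for the before-lunch and after-lunch groups.
# t_stat, p_value = ttest_ind(before, after)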
Abstract:
In this paper, we report drain-extended MOS device design guidelines for RF power amplifier (RF PA) applications. A complete RF PA circuit in a 28-nm CMOS technology node, with the matching and biasing network, is used as a test vehicle to validate the RF performance improvement obtained by systematic device design. A complete RF PA with 0.16-W/mm power density is reported experimentally. By simultaneously improving device and circuit performance, a 45% improvement in the circuit RF power gain, a 25% improvement in the power-added efficiency at 1-GHz frequency, and a 5x improvement in the electrostatic discharge robustness are reported experimentally.
Abstract:
In this paper, we consider the problem of power allocation in the MIMO wiretap channel for secrecy in the presence of multiple eavesdroppers. Perfect knowledge of the destination channel state information (CSI) and only statistical knowledge of the eavesdroppers' CSI are assumed. We first consider the MIMO wiretap channel with Gaussian input. Using Jensen's inequality, we transform the secrecy rate max-min optimization problem into a single maximization problem. We use the generalized singular value decomposition and transform the problem into a concave maximization problem that maximizes the sum secrecy rate of scalar wiretap channels subject to linear constraints on the transmit covariance matrix. We then consider the MIMO wiretap channel with finite-alphabet input. We show that the transmit covariance matrix obtained for the case of Gaussian input, when used in the MIMO wiretap channel with finite-alphabet input, can lead to zero secrecy rate at high transmit powers. We then propose a power allocation scheme with an additional power constraint which alleviates this secrecy rate loss problem and gives non-zero secrecy rates at high transmit powers.
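Once the MIMO channel has been decomposed into parallel scalar wiretap channels (as the generalized singular value decomposition step above yields), the sum secrecy rate maximization can be sketched numerically as below; the gains a_i (legitimate link) and b_i (eavesdropper link), the simple total-power constraint, and the SLSQP solver are illustrative simplifications of the constrained problem actually solved in the paper:

import numpy as np
from scipy.optimize import minimize

def secrecy_power_allocation(a, b, P_total):
    """Allocate power across parallel scalar wiretap channels to maximize
    sum_i [log2(1 + a_i p_i) - log2(1 + b_i p_i)] subject to sum_i p_i <= P_total."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)

    def neg_sum_secrecy(p):
        return -np.sum(np.log2(1.0 + a * p) - np.log2(1.0 + b * p))

    cons = ({"type": "ineq", "fun": lambda p: P_total - p.sum()},)
    res = minimize(neg_sum_secrecy, np.full(n, P_total / n),
                   bounds=[(0.0, P_total)] * n, constraints=cons, method="SLSQP")
    return res.x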
Abstract:
Diffusion (a measure of dynamics) and entropy (a measure of disorder in the system) are found to be intimately correlated in many systems, and the correlation is often strongly non-linear. We explore the origin of this complex dependence by studying the diffusion of a point Brownian particle on a model potential energy surface characterized by ruggedness. If we assume that the ruggedness has a Gaussian distribution, then for this model one can obtain the excess entropy exactly in any dimension. By using the expression for the mean first passage time, we present a statistical mechanical derivation of the well-known and well-tested scaling relation proposed by Rosenfeld between diffusion and excess entropy. In anticipation that the Rosenfeld diffusion-entropy scaling (RDES) relation may continue to be valid in higher dimensions (where the mean first passage time approach is not available), we carry out an effective medium approximation (EMA) based analysis of the effective transition rate and hence of the effective diffusion coefficient. We show that the EMA expression can be used to derive the RDES scaling relation for any dimension higher than unity. However, RDES is shown to break down in the presence of spatial correlation among the energy landscape values. (C) 2015 AIP Publishing LLC.
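A worked one-dimensional version of the argument (prefactors and the higher-dimensional EMA treatment are left to the paper): for Gaussian ruggedness of variance \varepsilon^2, Zwanzig's mean-first-passage-time result for diffusion on a rough potential gives

    D_{eff} = D_0 \exp\!\left[ -(\varepsilon / k_B T)^2 \right],

while the excess entropy of the same Gaussian ruggedness is S_{ex}/k_B = -\tfrac{1}{2} (\varepsilon / k_B T)^2, so that D_{eff} = D_0 \exp(2 S_{ex}/k_B), an exponential dependence of the Rosenfeld form D^{*} = A \exp(\alpha S_{ex}/k_B).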