898 results for Orthogonal polynomials of a discrete variable
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. Yet GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, covering both equal- and unequal-probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. In the case of a strong assisting model, however, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting-model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error, and coverage rate, and thus provides a good alternative to the standard estimator.
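As a concrete illustration, here is a minimal Python sketch (not the thesis's exact estimator) of a logistic-assisted GREG estimate of a class proportion, assuming the auxiliary variable is known for every population unit and a simple random sample without replacement; the domain structure is omitted for brevity, and all data are simulated.

```python
# Minimal sketch of a logistic-assisted GREG (L-GREG-style) estimator of a
# class proportion, assuming population-wide auxiliary information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, n = 10_000, 500
x = rng.normal(size=(N, 1))                    # population auxiliary variable
p = 1 / (1 + np.exp(-(0.5 + 2.0 * x[:, 0])))   # true response probabilities
y = rng.binomial(1, p)                         # binary study variable
s = rng.choice(N, size=n, replace=False)       # SRSWOR sample
pi = n / N                                     # inclusion probabilities

fit = LogisticRegression().fit(x[s], y[s])     # sample-fit assisting model
yhat = fit.predict_proba(x)[:, 1]              # predictions for all N units

# GREG form: synthetic total over the population plus the
# Horvitz-Thompson-weighted sum of sample prediction errors.
t_lgreg = yhat.sum() + ((y[s] - yhat[s]) / pi).sum()
print(t_lgreg / N, y.mean())                   # estimated vs. true proportion
```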
Abstract:
In this paper, we consider the design and bit-error performance analysis of linear parallel interference cancellers (LPIC) for multicarrier (MC) direct-sequence code division multiple access (DS-CDMA) systems. We propose an LPIC scheme in which we estimate and cancel the multiple access interference (MAI) based on the soft decision outputs on individual subcarriers, and the interference-cancelled outputs on the different subcarriers are combined to form the final decision statistic. We scale the MAI estimate on each subcarrier by a weight before cancellation. In order to choose these weights optimally, we derive exact closed-form expressions for the bit-error rate (BER) at the output of the different stages of the LPIC, which we minimize to obtain the optimum weights for each stage. In addition, using an alternative approach involving the characteristic function of the decision variable, we derive BER expressions for the weighted LPIC scheme, the matched filter (MF) detector, the decorrelating detector, and the minimum mean square error (MMSE) detector for the considered multicarrier DS-CDMA system. We show that the proposed BER-optimized weighted LPIC scheme performs better than the MF detector and the conventional LPIC scheme (where the weights are taken to be unity), and close to the decorrelating and MMSE detectors.
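The core weighted-cancellation recursion can be sketched for a synchronous single-carrier CDMA model as follows; the subcarrier combining of the actual MC DS-CDMA scheme is omitted, and the weight value here is an ad hoc placeholder rather than the BER-optimized weight derived in the paper.

```python
# Minimal sketch of a weighted linear parallel interference canceller (LPIC)
# for a synchronous CDMA model with random binary signatures.
import numpy as np

rng = np.random.default_rng(1)
K, L = 8, 64                                  # users, spreading length
S = np.sign(rng.standard_normal((L, K))) / np.sqrt(L)  # unit-norm signatures
R = S.T @ S                                   # cross-correlation matrix
b = rng.choice([-1.0, 1.0], size=K)           # transmitted bits
sigma = 0.3
y = R @ b + S.T @ (sigma * rng.standard_normal(L))     # matched-filter outputs

w = 0.7                                       # cancellation weight (ad hoc here;
                                              # the paper optimizes it via BER)
soft = y.copy()                               # stage 1 = matched filter
for _ in range(3):                            # later LPIC stages
    mai = (R - np.diag(np.diag(R))) @ soft    # MAI estimate for each user
    soft = y - w * mai                        # weighted cancellation
print(np.sign(soft) == b)                     # per-user decisions vs. truth
```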
Abstract:
The use of precoding transforms such as the Hadamard transform and phase alteration for peak-to-average power ratio (PAPR) reduction in OFDM systems is well known. In this paper we propose the use of the Inverse Discrete Fourier Transform (IDFT) and the Hadamard transform as precoding transforms in MIMO-OFDM systems to achieve a low PAPR. We show that while our IDFT-based approach does not disturb the diversity gains of the MIMO-OFDM system (spatial, temporal, and frequency diversity gains), it offers a better trade-off between PAPR reduction and ML decoding complexity than Hadamard-transform precoding. We study in detail the amount of PAPR reduction achieved, using both the IDFT and the Hadamard transform, for the following two recently proposed full-diversity Space-Frequency coded MIMO-OFDM systems: (i) W. Su, Z. Safar, M. Olfat, K. J. R. Liu (IEEE Trans. on Signal Processing, Nov. 2003), and (ii) W. Su, Z. Safar, K. J. R. Liu (IEEE Trans. on Information Theory, Jan. 2005).
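The effect of precoding on PAPR can be illustrated with a small single-antenna simulation; the Space-Frequency coding of the MIMO-OFDM systems studied in the paper is omitted, and the subcarrier count and constellation are arbitrary choices.

```python
# Minimal sketch comparing the PAPR of plain, Hadamard-precoded, and
# IDFT-precoded OFDM on random QPSK symbols (single antenna for brevity).
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(2)
N = 64                                          # subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # unitary DFT matrix
H = hadamard(N) / np.sqrt(N)                    # normalized Hadamard matrix

for name, P in [("none", np.eye(N)), ("Hadamard", H), ("IDFT", F.conj().T)]:
    tx = np.fft.ifft(P @ qpsk) * np.sqrt(N)     # OFDM-modulate precoded symbols
    print(f"{name:8s} PAPR = {papr_db(tx):.2f} dB")
```

Because the cascade of a unitary IDFT precoder and the OFDM IDFT reduces to a permutation of the input symbols, the IDFT-precoded signal inherits the constant envelope of the QPSK constellation, which is why the last line prints a PAPR near 0 dB.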
Abstract:
Light scattering, or the scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, of planetary atmospheres, and of cometary or interplanetary dust. Our Earth is constantly monitored from artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information: there is always the possibility of making in situ measurements. Satellite-based remote sensing is, however, superior in speed and coverage, provided that the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and, therefore, be used to optimize quality while reducing material costs. With paper, for instance, pilot experiments on an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis rigorously solve the interaction of light with materials that have wavelength-scale structures. These methods are computationally demanding, so their speed and accuracy play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code. In addition, a novel method is presented for the numerical computation of the orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets, and the possible problems arising from the finite size of the model target, are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media and in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented, and the results are applied to optimizing the structure of paper. The same methods can, however, be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
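For concreteness, the polarizability prescription at the heart of most discrete-dipole-approximation codes is the Clausius-Mossotti relation; production codes add radiative and lattice-dispersion corrections on top of it. A minimal sketch, with illustrative material parameters:

```python
# Clausius-Mossotti polarizability assigned to each dipole on the cubic
# lattice of a discrete-dipole-approximation (DDA) target.
import numpy as np

def clausius_mossotti(eps, d):
    """Polarizability of one dipole on a cubic lattice of spacing d."""
    return 3 * d**3 / (4 * np.pi) * (eps - 1) / (eps + 2)

m = 1.5 + 0.01j                  # complex refractive index (illustrative)
eps = m**2                       # relative permittivity
wavelength = 0.55                # micrometres
d = wavelength / (10 * abs(m))   # rule of thumb: ~10 dipoles per internal wavelength
print(clausius_mossotti(eps, d))
```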
Abstract:
Process control systems are designed for a closed-loop peak magnitude of 2 dB, which corresponds to a damping coefficient (ζ) of approximately 0.5. With this specified constraint, the designer should choose and/or design the loop components to maintain a constant relative stability. However, the manipulative variable in almost all chemical processes is the flow rate of a process stream. Since the gains and time constants of the process are functions of the manipulative variable, a constant relative stability cannot be maintained. Until now, this problem has been overcome either by selecting proper control valve flow characteristics or by gain scheduling of the controller parameters. If the wrong control valve is selected, however, the result is a considerable loss of controllability, and the control system may even become unstable. To overcome these problems, a compensator device that can restore the relative stability of the control system is proposed. This compensator is similar to a dynamic nonlinear controller that has both online and offline information on several factors related to the control system. The design and analysis of the proposed compensator are discussed in this article. Finally, the performance of the compensator is validated by applying it to a two-tank blending process. It is observed that by using the compensator, the relative stability of the process control system can be restored to a great extent despite changes in the manipulative flow rate.
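The quoted rule of thumb can be checked numerically, assuming the standard second-order relation between the closed-loop resonant peak and the damping coefficient, M_p = 1/(2ζ√(1-ζ²)):

```python
# Solve the second-order resonant-peak relation for the damping coefficient
# corresponding to a 2 dB closed-loop peak magnitude.
import numpy as np

mp_db = 2.0
mp = 10 ** (mp_db / 20)            # 2 dB -> linear peak magnitude ~1.26
# 2*zeta*sqrt(1 - zeta^2) = 1/mp; substitute u = zeta^2 and take the
# underdamped (smaller) root of u^2 - u + c^2 = 0 with c = 1/(2*mp).
c = 1 / (2 * mp)
zeta = np.sqrt((1 - np.sqrt(1 - 4 * c**2)) / 2)
print(zeta)                        # ~0.45, i.e. roughly 0.5 as stated
```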
Abstract:
This paper may be considered a sequel to one of our earlier works on the development of an upwind algorithm for meshless solvers. While the earlier work dealt with the development of an inviscid solution procedure, the present work focuses on its extension to viscous flows. A robust viscous discretization strategy is chosen based on the positivity of a discrete Laplacian. This work projects the meshless solver as a viable Cartesian grid methodology. The point distribution required for the meshless solver is obtained from a hybrid Cartesian gridding strategy. Considering in particular the importance of a hybrid Cartesian mesh for RANS computations, the difficulties encountered in a conventional least-squares-based discretization strategy are highlighted. In this context, the importance of discretization strategies that exploit the local structure of the grid is presented, along with a suitable point-sorting strategy. Of particular interest are the proposed discretization strategies (both inviscid and viscous) within the structured grid block: a rotated update for the inviscid part and a positive, Green-Gauss-based update for the viscous part. Both procedures conveniently avoid the ill-conditioning associated with a conventional least-squares procedure in the critical region of the structured grid block. The robustness and accuracy of this strategy are demonstrated on a number of standard test cases, including a multi-element airfoil. The computational efficiency of the proposed meshless solver is also demonstrated. (C) 2010 Elsevier Ltd. All rights reserved.
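For context, here is a minimal sketch of the conventional least-squares gradient estimate at a meshless point, the building block whose ill-conditioning on one-sided or highly anisotropic stencils motivates the paper's rotated and Green-Gauss-based updates; the point cloud below is random, purely for illustration.

```python
# Least-squares gradient of a scalar field at a meshless point from its
# scattered neighbours (exact for a linear field).
import numpy as np

def ls_gradient(p0, pts, f0, f):
    """Least-squares fit of grad(u) at p0 from neighbour values f at pts."""
    dX = pts - p0                 # neighbour offsets, shape (m, 2)
    dF = f - f0                   # value differences, shape (m,)
    g, *_ = np.linalg.lstsq(dX, dF, rcond=None)
    return g                      # approximate (du/dx, du/dy)

rng = np.random.default_rng(3)
p0 = np.zeros(2)
pts = rng.uniform(-0.1, 0.1, size=(8, 2))           # random stencil
u = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1]     # linear test field
print(ls_gradient(p0, pts, u(p0), u(pts)))          # ~ [3, -2]
```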
Abstract:
We consider the Kramers problem for a long chain polymer trapped in a biased double-well potential. Initially the polymer is in the less stable well, and it can escape to the other well by the motion of its N beads across the barrier to attain the configuration of lower free energy. In one dimension we simulate the crossing and show that the results agree with the kink mechanism suggested earlier. In three dimensions, it has not been possible to obtain an analytical 'kink solution' for an arbitrary potential; however, one can assume that the solution of the nonlinear equation has the form of a kink and then find a corresponding double-well potential in three dimensions. To verify the kink mechanism, we carry out simulations of the dynamics of a discrete Rouse polymer model in a double well in three dimensions. We find that the crossing time is proportional to the chain length, in agreement with the kink mechanism. The shape of the kink is also in agreement with the analytical solution in both one and three dimensions.
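A minimal sketch of the kind of Brownian-dynamics simulation involved, for an overdamped discrete Rouse chain in a one-dimensional biased double well; the potential and all parameters are illustrative, not those of the paper.

```python
# Overdamped Brownian dynamics of a discrete Rouse chain (N beads joined by
# harmonic springs) in a biased double well V(x) = x^4 - 2x^2 - bias*x.
import numpy as np

rng = np.random.default_rng(4)
N, dt, steps = 50, 1e-3, 100_000
k, kT, bias = 25.0, 0.3, 1.0
x = np.full(N, -0.9)                        # chain starts in the shallower well

def force(x):
    dV = 4 * x**3 - 4 * x - bias            # -dV/dx gives the well force
    spring = np.zeros_like(x)
    spring[1:] += k * (x[:-1] - x[1:])      # Rouse springs between neighbours
    spring[:-1] += k * (x[1:] - x[:-1])
    return -dV + spring

for _ in range(steps):                      # Euler-Maruyama integration
    x += dt * force(x) + np.sqrt(2 * kT * dt) * rng.standard_normal(N)
print(np.mean(x > 0))                       # fraction of beads that have crossed
```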
Abstract:
In correlation filtering we attempt to remove the component of the aeromagnetic field that is closely related to the topography. The magnetization vector is assumed to be spatially variable, but it can be successively estimated under the additional assumption that the magnetic component due to topography is uncorrelated with the magnetic signal of deeper origin. The correlation filtering was tested on a synthetic example: the filtered field compares very well with the known signal of deeper origin. We have also applied the method to real data from the south Indian shield. The performance of correlation filtering is shown to be superior in situations where the direction of magnetization is variable, for example where the remanent magnetization is dominant.
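The idea can be sketched as a windowed regression, with the window-local scale acting as a spatially varying magnetization estimate; the signals below are synthetic stand-ins, not the method's actual implementation.

```python
# Windowed regression sketch of correlation filtering: subtract the part of
# the observed field that is locally correlated with a topography predictor.
import numpy as np

rng = np.random.default_rng(5)
n, win = 512, 64
topo_mag = np.sin(np.linspace(0, 20, n))           # field predicted from topography
deep = 0.3 * np.sin(np.linspace(0, 3, n))          # signal of deeper origin
obs = 1.5 * topo_mag + deep + 0.05 * rng.standard_normal(n)

residual = np.empty(n)
for i in range(0, n, win):                         # local (window-wise) regression
    t, o = topo_mag[i:i+win], obs[i:i+win]
    A = np.column_stack([t, np.ones_like(t)])      # scale + offset per window
    (scale, offset), *_ = np.linalg.lstsq(A, o, rcond=None)
    residual[i:i+win] = o - scale * t              # remove the correlated part
print(np.corrcoef(residual, deep)[0, 1])           # near 1: deep signal recovered
```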
Abstract:
A comprehensive scheme has been developed for predicting the radiation from an engine exhaust and its incidence on an arbitrarily located sensor. Existing codes have been modified to simulate the flows inside nozzles and jets. A novel view-factor computation scheme has been applied to determine the radiosities of the discrete panels of a diffuse, gray nozzle surface. The narrowband model has been used to model the radiation from the gas inside the nozzle and from the nonhomogeneous jet. The gas radiation from the nozzle, together with the nozzle surface radiosities, has been used as the boundary condition on the jet radiation. Geometric modeling techniques have been developed to identify and isolate the nozzle surface panels and the gas columns of the nozzle and jet in order to determine the radiation signals incident on the sensor. The scheme has been validated for intensity and heat-flux predictions, and some results of practical importance have been generated to establish its viability for infrared signature analysis of jets.
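For reference, the geometric kernel underlying such panel-based radiosity computations is the differential patch-to-patch view factor, F_12 ≈ cosθ_1 cosθ_2 A_2 / (π r²); a minimal sketch:

```python
# Differential view factor between two small planar panels, the quantity a
# panel-based view-factor/radiosity scheme evaluates pairwise.
import numpy as np

def patch_view_factor(c1, n1, c2, n2, a2):
    """View factor from a small patch (centre c1, unit normal n1) to a
    second small patch (centre c2, unit normal n2, area a2)."""
    d = c2 - c1
    r = np.linalg.norm(d)
    cos1 = np.dot(n1, d) / r
    cos2 = np.dot(n2, -d) / r
    if cos1 <= 0 or cos2 <= 0:
        return 0.0                    # patches do not face each other
    return cos1 * cos2 * a2 / (np.pi * r**2)

c1, n1 = np.array([0., 0., 0.]), np.array([0., 0., 1.])
c2, n2 = np.array([0., 0., 1.]), np.array([0., 0., -1.])
print(patch_view_factor(c1, n1, c2, n2, a2=0.01))   # two directly facing patches
```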
Abstract:
9-Anthryl and 1-pyrenyl terpyridines (1 and 2, respectively), key precursors for the design of novel fluorescent sensors, have been synthesized and characterized by H-1 NMR, mass spectrometry, and X-ray crystallography. Twisted molecular conformations were observed for both 1 and 2 in their single-crystal structures. Energy-minimization calculations for 1 and 2 using the semi-empirical AM1 method show that the 'twisted' conformation is intrinsic to these systems. We observe interconnected networks of edge-to-face CH...pi interactions, which appear to be cooperative in nature, in each of the crystal structures. The two twisted molecules, although bearing differently shaped polyaromatic hydrocarbon substituents, show similar patterns of edge-to-face CH...pi interactions. The systems described here comprise two aromatic surfaces that are almost orthogonal to each other. This twisted or orthogonal nature of the molecules leads to the formation of interesting multi-directional ladder-like supramolecular organizations. A combination of edge-to-face and face-to-face packing modes helps to stabilize these motifs. The ladder-like architecture in 1 is helical in nature. (C) 2002 Published by Elsevier Science B.V.
Abstract:
In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is applied prior to statistical downscaling to reduce systematic bias in the means and variances of GCM predictors relative to the observations or to National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that while it may reduce the bias in the mean and variance of a predictor variable, it is much harder to accommodate the bias in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. When predicting hydrologic scenarios, such uncorrected bias must be accounted for; otherwise it propagates into the computations for subsequent years. In this study, a statistical method based on the equi-probability transformation is applied after downscaling to remove the bias of the predicted hydrologic variable relative to the observed hydrologic variable over a baseline period. The model is applied to the prediction of monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
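A minimal sketch of the equi-probability transformation (quantile mapping): each downscaled value is mapped to the observed-baseline quantile with the same non-exceedance probability under the model baseline. The gamma-distributed data below are synthetic stand-ins for streamflow, not the study's data.

```python
# Empirical quantile mapping (equi-probability transformation) for bias
# correction of a modelled hydrologic variable against an observed baseline.
import numpy as np

rng = np.random.default_rng(6)
obs_base = rng.gamma(2.0, 50.0, 1000)      # observed baseline streamflow
mod_base = rng.gamma(2.0, 65.0, 1000)      # biased model baseline
mod_future = rng.gamma(2.2, 65.0, 200)     # biased model projection

def quantile_map(x, model_ref, obs_ref):
    # non-exceedance probability of x under the model reference CDF...
    probs = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    # ...inverted through the observed reference CDF
    return np.quantile(obs_ref, np.clip(probs, 0.0, 1.0))

corrected = quantile_map(mod_future, mod_base, obs_base)
print(mod_base.mean(), obs_base.mean(), corrected.mean())
```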
Abstract:
This paper presents a novel algorithm for the compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher-frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method achieves a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
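Two of the pipeline's stages, the DCT and DPCM with uniform quantization, can be sketched as follows; the pole-zero Steiglitz-McBride modelling step is omitted, and the waveform is a toy stand-in for an ECG.

```python
# Sketch of DCT followed by DPCM (difference coding + uniform quantization)
# and the corresponding reconstruction.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(7)
t = np.linspace(0, 2, 720)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)  # toy signal

coeffs = dct(ecg, norm="ortho")                 # transform stage
step = 0.05
diffs = np.diff(coeffs, prepend=0.0)            # DPCM: encode differences
code = np.round(diffs / step).astype(int)       # uniform quantization (the bitstream)

recon = idct(np.cumsum(code * step), norm="ortho")   # decode + inverse DCT
print(np.sqrt(np.mean((ecg - recon) ** 2)))     # reconstruction RMS error
```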
Abstract:
Multivariate neural data provide the basis for assessing interactions in brain networks. Among myriad connectivity measures, Granger causality (GC) has proven to be statistically intuitive, easy to implement, and capable of generating meaningful results. Although its application to functional MRI (fMRI) data is increasing, several factors have been identified that appear to hinder its neural interpretability: (a) latency differences in the hemodynamic response function (HRF) across brain regions, (b) low sampling rates, and (c) noise. Recognizing that in basic and clinical neuroscience it is often the change of a dependent variable (e.g., GC) between experimental conditions, or between normal and pathological states, that is of interest, we address the question of whether there exist systematic relationships between GC at the fMRI level and GC at the neural level. Simulated neural signals were convolved with a canonical HRF, down-sampled, and corrupted with noise to generate simulated fMRI data. As the coupling parameters in the model were varied, fMRI GC and neural GC were calculated and their relationship examined. Three main results were found: (1) GC following HRF convolution is a monotonically increasing function of neural GC; (2) this monotonicity can be reliably detected as a positive correlation when realistic fMRI temporal resolution and noise levels are used; and (3) although the detectability of monotonicity declines in the presence of HRF latency differences, it recovers substantially after correcting for those differences. These results suggest that Granger causality is a viable technique for analyzing fMRI data when the questions are appropriately formulated.
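For reference, a minimal sketch of pairwise time-domain Granger causality as the log ratio of restricted to full residual variances from least-squares AR fits; the model order is fixed at 1, and the HRF convolution, down-sampling, and noise steps of the simulation study are omitted.

```python
# Pairwise Granger causality GC(x -> y) = log(var_restricted / var_full)
# from order-1 autoregressive fits on a simulated coupled pair.
import numpy as np

rng = np.random.default_rng(8)
T, c = 5000, 0.4                           # samples, x -> y coupling strength
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                      # simulate the coupled AR(1) system
    x[t] = 0.5 * x[t-1] + rng.standard_normal()
    y[t] = 0.5 * y[t-1] + c * x[t-1] + rng.standard_normal()

def resid_var(target, *preds):
    """Residual variance of target[t] regressed on preds[t-1]."""
    X = np.column_stack([p[:-1] for p in preds])
    b, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    return np.var(target[1:] - X @ b)

gc_xy = np.log(resid_var(y, y) / resid_var(y, y, x))
print(gc_xy)                               # > 0: x Granger-causes y
```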
Abstract:
The Lovász θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovász θ function is equivalent to a kernel learning problem related to the one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovász θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems on large graphs, e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach to this problem involves computing the θ function; however, it does not scale because of the SDP computation. We show that a random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a "common orthogonal labelling", which extends the notion of an orthogonal labelling of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a Multiple Kernel Learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order-of-magnitude improvement in scalability over the state of the art.
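One standard SDP formulation of the θ function, θ(G) = max ⟨J, X⟩ subject to tr(X) = 1, X_ij = 0 on edges, and X positive semidefinite, can be written directly in cvxpy; for the 5-cycle it recovers Lovász's classic value √5.

```python
# Lovasz theta of a small graph via its standard SDP formulation (cvxpy).
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]      # C5, the 5-cycle

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]          # PSD, unit trace
constraints += [X[i, j] == 0 for i, j in edges]   # zero on every edge
prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
prob.solve()
print(prob.value, np.sqrt(5))                     # ~2.236 for C5
```

The cost of this SDP grows quickly with n, which is exactly the scalability bottleneck that motivates the paper's one-class-SVM approximation.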
Abstract:
Influenza hemagglutinin (HA) is the primary target of the humoral response during infection and vaccination. Current influenza vaccines typically fail to elicit or boost broadly neutralizing antibodies (bnAbs), thereby limiting their efficacy. Although several bnAbs bind to the conserved stem domain of HA, focusing the immune response on this conserved stem in the presence of the immunodominant, variable head domain of HA is challenging. We report the design of a thermotolerant, disulfide-free, trimeric HA stem-fragment immunogen that mimics the native, prefusion conformation of HA and binds conformation-specific bnAbs with high affinity. The immunogen elicited bnAbs that neutralized highly divergent group 1 (H1 and H5 subtypes) and group 2 (H3 subtype) influenza virus strains in vitro. Stem immunogens designed from unmatched, highly drifted influenza strains conferred robust protection against a lethal heterologous A/Puerto Rico/8/34 virus challenge in vivo. Soluble bacterial expression of such designed immunogens allows rapid scale-up during pandemic outbreaks.