957 results for linear prediction signal subspace fitting
Abstract:
Apical membrane antigen 1 (AMA-1) is considered a major candidate antigen for a malaria vaccine. Previous immunoepidemiological studies of naturally acquired immunity to Plasmodium vivax AMA-1 (PvAMA-1) have shown a higher prevalence of specific antibodies to domain II (DII) of AMA-1. In the present study, we confirmed that specific antibody responses from naturally infected individuals were highly reactive to both full-length AMA-1 and DII. We also demonstrated a strong association between AMA-1 and DII IgG and IgG subclass responses. We analyzed the primary sequence of PvAMA-1 for B-cell linear epitopes co-occurring with intrinsically unstructured/disordered regions (IURs). The B-cell epitope spanning amino acids 290-307 of PvAMA-1 (SASDQPTQYEEEMTDYQK), which had the highest prediction scores, was identified in domain II and selected for chemical synthesis and immunological testing. The antigenicity of the synthetic peptide was assessed by serological analysis using sera from P. vivax-infected individuals known to be reactive to the PvAMA-1 ectodomain only, to domain II only, or to both antigens. Although the synthetic peptide was recognized by all serum samples specific to domain II, sera reactive only to the full-length protein showed 58.3% positivity. Moreover, after depletion of antibodies specific to the synthetic peptide, IgG reactivity against PvAMA-1 and domain II was reduced by 18% and 33%, respectively (P = 0.0001 for both). These results indicate that the linear epitope SASDQPTQYEEEMTDYQK is highly antigenic during natural human infection and is an important antigenic region of domain II of PvAMA-1, supporting its possible future use in pre-clinical studies.
Abstract:
A new, simple method to design linear-phase finite impulse response (FIR) digital filters, based on the steepest-descent optimization method, is presented in this paper. Starting from the specification of the desired frequency response and a maximum approximation error, a nearly optimum digital filter is obtained. Tests have shown that this method is an alternative to traditional ones such as Frequency Sampling and Parks-McClellan, particularly when the desired frequency response is other than a brick-wall response. (C) 2011 Elsevier Inc. All rights reserved.
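The abstract above does not spell out the optimization details, so the following Python sketch only illustrates the general idea under stated assumptions: the amplitude response of a type-I (odd-length, symmetric) linear-phase FIR filter is linear in its free coefficients, so steepest descent on the squared approximation error over a frequency grid drives the design toward the least-squares solution. Function names, the step size, and the example specification are illustrative, not taken from the paper.

import numpy as np

def design_fir_gd(num_taps, desired, w_grid, lr=0.1, n_iter=5000):
    # Type-I linear-phase FIR design by steepest descent; num_taps must be odd.
    M = (num_taps - 1) // 2
    b = np.zeros(M + 1)                              # b[0] = h[M], b[k] = 2*h[M-k]
    C = np.cos(np.outer(w_grid, np.arange(M + 1)))   # amplitude A(w) = C @ b
    for _ in range(n_iter):
        err = C @ b - desired                        # approximation error on the grid
        b -= lr * (C.T @ err) / len(w_grid)          # steepest-descent step
    return np.concatenate([b[:0:-1] / 2, [b[0]], b[1:] / 2])  # symmetric taps

w = np.linspace(0.0, np.pi, 512)                     # frequency grid (rad/sample)
target = (w <= 0.3 * np.pi).astype(float)            # ideal low-pass, cutoff 0.3*pi
h = design_fir_gd(31, target, w)

A maximum-error criterion, as the paper's specification suggests, would weight the gradient toward the peak-error frequencies instead of using the plain squared error.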
Abstract:
This paper deals with the problem of state prediction for descriptor systems subject to bounded uncertainties. The problem is stated in terms of the optimization of an appropriate quadratic functional, which is well suited to deriving not only the robust predictor for descriptor systems but also the one for standard state-space systems. Numerical examples are included to demonstrate the performance of this new filter. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Numerical methods related to Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated via their representation in orthonormal bases. Nowadays, on serial computers, the Arnoldi method is considered a reliable technique for constructing such bases. However, although easily parallelizable, this technique does not scale as well as expected because of its communication requirements. In this work we examine alternative methods aimed at overcoming this drawback. Since they retrieve upon completion the same information as Arnoldi's algorithm, they enable us to design a wide family of stable and scalable Krylov approximation methods for various parallel environments. We present timing results obtained from their implementation on two distributed-memory multiprocessor supercomputers: the Intel Paragon and the IBM Scalable POWERparallel SP2. (C) 1997 by John Wiley & Sons, Ltd.
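For context, this is the baseline procedure the paper seeks scalable alternatives to: the Arnoldi process builds the orthonormal Krylov basis one vector at a time, and each inner orthogonalization step is a global reduction, which is the communication bottleneck on distributed-memory machines. A minimal dense-matrix sketch in Python (NumPy); it is not one of the paper's parallel variants.

import numpy as np

def arnoldi(A, v0, m):
    # Build an orthonormal basis V of K_m(A, v0) = span{v0, A v0, ..., A^(m-1) v0},
    # together with the Hessenberg matrix H satisfying A @ V[:, :m] = V @ H.
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt: each dot product
            H[i, j] = V[:, i] @ w          # is a global reduction when V is
            w = w - H[i, j] * V[:, i]      # distributed across processors
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # happy breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H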
Abstract:
This work aims to compare different nonlinear functions for describing the growth curves of Nelore females. The growth curve parameters, their (co)variance components, and environmental and genetic effects were estimated jointly through a Bayesian hierarchical model. In the first stage of the hierarchy, 4 nonlinear functions were compared: Brody, Von Bertalanffy, Gompertz, and logistic. The analyses were carried out using 3 different data sets to check goodness of fit when animals have few records. Three different assumptions about the SD of fitting errors were considered: constancy throughout the trajectory, linear increase until 3 yr of age and constancy thereafter, and variation following the nonlinear function applied in the first stage of the hierarchy. Comparisons of overall goodness of fit were based on the Akaike information criterion, the Bayesian information criterion, and the deviance information criterion. Goodness of fit at different points of the growth curve was compared by applying Gelfand's check function. The posterior means of adult BW ranged from 531.78 to 586.89 kg. Greater estimates of adult BW were observed when the fitting error variance was considered constant along the trajectory. The models were not suitable to describe the SD of fitting errors at the beginning of the growth curve. All functions provided less accurate predictions at the beginning of growth, and predictions were more accurate after 48 mo of age. The prediction of adult BW using nonlinear functions can be accurate when growth curve parameters and their (co)variance components are estimated jointly. The hierarchical model used in the present study can be applied to the prediction of mature BW in herds in which a portion of the animals are culled before adult age. The Gompertz, Von Bertalanffy, and Brody functions were adequate to establish mean growth patterns and to predict the adult BW of Nelore females. The Brody model was more accurate in predicting the birth weight of these animals and presented the best overall goodness of fit.
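The four mean functions compared in the first stage of the hierarchy have standard three-parameter forms. As a simplified, non-Bayesian illustration (the study fits a full hierarchical model with jointly estimated (co)variance components), the Python sketch below fits each function by nonlinear least squares to synthetic weight records; the parameterizations, starting values and data are illustrative assumptions only.

import numpy as np
from scipy.optimize import curve_fit

# Common parameterizations: A = adult weight (kg), b = integration constant,
# k = maturation rate; t = age in months.
def brody(t, A, b, k):           return A * (1 - b * np.exp(-k * t))
def von_bertalanffy(t, A, b, k): return A * (1 - b * np.exp(-k * t)) ** 3
def gompertz(t, A, b, k):        return A * np.exp(-b * np.exp(-k * t))
def logistic(t, A, b, k):        return A / (1 + b * np.exp(-k * t))

t = np.linspace(0, 96, 40)                     # synthetic records, 0-96 mo of age
w = gompertz(t, 550, 3.0, 0.08) + np.random.default_rng(0).normal(0, 15, t.size)
for f in (brody, von_bertalanffy, gompertz, logistic):
    p, _ = curve_fit(f, t, w, p0=(550, 2.0, 0.05), maxfev=10000)
    rss = np.sum((w - f(t, *p)) ** 2)          # compare residual sums of squares
    print(f.__name__, p.round(3), round(rss, 1))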
Abstract:
Any given n × n matrix A is shown to be a restriction, to the A-invariant subspace, of a nonnegative N × N matrix B of spectral radius ρ(B) arbitrarily close to ρ(A). A difference inclusion x_{k+1} ∈ A x_k, where A is a compact set of matrices, is asymptotically stable if and only if A can be extended to a set B of nonnegative matrices B with ‖B‖_1 < 1 or ‖B‖_∞ < 1. Similar results are derived for differential inclusions.
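As a quick numerical illustration of the stability criterion (not of the extension construction itself), the Python sketch below draws random products from a compact set of nonnegative matrices whose ∞-norms are below 1 and shows a trajectory of the difference inclusion contracting to zero; the two matrices are arbitrary examples.

import numpy as np

rng = np.random.default_rng(1)
B_set = [np.array([[0.4, 0.3], [0.2, 0.5]]),
         np.array([[0.1, 0.6], [0.5, 0.3]])]
# for a nonnegative matrix, the infinity-norm is the largest row sum
assert all(b.sum(axis=1).max() < 1 for b in B_set)

x = np.array([1.0, 1.0])
for _ in range(50):
    x = B_set[rng.integers(len(B_set))] @ x   # one trajectory of x_{k+1} in {B x_k}
print(np.linalg.norm(x, np.inf))              # contracts toward 0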
Abstract:
We present an ultra-high bandwidth all-optical digital signal regeneration device concept utilising non-degenerate parametric interaction in a one-dimensional waveguide. Performance is analysed in terms of re-amplification, re-timing, and re-shaping (including centre frequency correction) of time domain multiplexed signals. Bandwidths of 10-100 THz are achievable. (C) 2001 Published by Elsevier Science B.V.
Abstract:
The suitable use of an array antenna at the base station of a wireless communications system can improve the signal-to-interference ratio (SIR). In general, the SIR is a function of the direction of arrival of the desired signal and depends on the configuration of the array, the number of elements, and their spacing. In this paper, we consider a uniform linear array antenna and study the effect of varying the number of its elements and the inter-element spacing on the SIR performance. (C) 2002 Wiley Periodicals, Inc.
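As an illustration of the dependence the paper studies, the Python sketch below computes the SIR of a conventional (delay-and-sum) beamformer on a uniform linear array for one desired and one interfering plane wave; the angles, spacing and beamforming choice are illustrative assumptions, not necessarily the paper's setup.

import numpy as np

def steering(n_elem, d, theta):
    # Steering vector of an n_elem-element ULA; d in wavelengths, theta from broadside.
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def sir_db(n_elem, d, theta_des, theta_int):
    # SIR of a delay-and-sum beamformer steered at theta_des, single interferer.
    w = steering(n_elem, d, theta_des) / n_elem
    g_des = np.abs(w.conj() @ steering(n_elem, d, theta_des)) ** 2
    g_int = np.abs(w.conj() @ steering(n_elem, d, theta_int)) ** 2
    return 10 * np.log10(g_des / g_int)

# Vary the element count (desired signal at 10 deg, interferer at 40 deg)
for n in (4, 8, 16):
    print(n, "elements:", round(sir_db(n, 0.5, np.radians(10), np.radians(40)), 1), "dB")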
Abstract:
A finite-element method is used to study the elastic properties of random three-dimensional porous materials with highly interconnected pores. We show that Young's modulus, E, is practically independent of Poisson's ratio of the solid phase, ν_s, over the entire solid fraction range, and Poisson's ratio, ν, becomes independent of ν_s as the percolation threshold is approached. We represent this behaviour of ν in a flow diagram. This interesting but approximate behaviour is very similar to the exactly known behaviour in two-dimensional porous materials. In addition, the behaviour of ν versus ν_s appears to imply that information in the dilute porosity limit can affect behaviour in the percolation threshold limit. We summarize the finite-element results in terms of simple structure-property relations, instead of tables of data, to make it easier to apply the computational results. Without using accurate numerical computations, one is limited to various effective medium theories and rigorous approximations like bounds and expansions. The accuracy of these equations is unknown for general porous media. To verify a particular theory it is important to check that it predicts both isotropic elastic moduli, i.e. prediction of Young's modulus alone is necessary but not sufficient. The subtleties of Poisson's ratio behaviour actually provide a very effective method for showing differences between the theories and demonstrating their ranges of validity. We find that for moderate- to high-porosity materials, none of the analytical theories is accurate and, at present, numerical techniques must be relied upon.
Abstract:
The characteristics of the sharkskin surface instability for linear low-density polyethylene are studied as a function of film blowing processing conditions. By means of scanning electron microscopy and surface profilometry, it is found that, for the standard industrial die geometry studied, sharkskin occurs only on the inside of the film bubble. Previous work suggests that this instability may be due to critical extensional stress levels at the exit of the die. Isothermal integral viscoelastic simulations of the annular extrusion process are reported and confirm that the extensional stress at the die exit is large enough to cause local melt rupture. However, the extensional stress level at the outer die wall also predicts melt rupture of the outside bubble surface, which contradicts the experimental findings. A significant temperature gradient is expected to exist across the die gap at the exit of the die, due to the external heating of the die and the low thermal conductivity of the polymer melt. It is shown that a gradient of 20 °C is required for sharkskin to appear only on the inner bubble surface.
Abstract:
We report the first steps of a collaborative project between the University of Queensland, Polyflow, Michelin, SK Chemicals, and RMIT University on the simulation, validation and application of a recently introduced constitutive model designed to describe branched polymers. Whereas much progress has been made on predicting the complex flow behaviour of many polymers, in particular linear ones, it sometimes appears difficult to predict simultaneously shear-thinning and extensional strain-hardening behaviour using traditional constitutive models. Recently, a new viscoelastic model based on molecular topology was proposed by McLeish and Larson (1998). We explore the predictive power of a differential multi-mode version of the pom-pom model for the flow behaviour of two commercial polymer melts: a (long-chain branched) low-density polyethylene (LDPE) and a (linear) high-density polyethylene (HDPE). The model responses are compared to elongational recovery experiments published by Langouche and Debbaut (1999), and to start-up of simple shear flow and stress relaxation after simple and reverse step strain experiments carried out in our laboratory.
Abstract:
We compare Bayesian methodology utilizing the freeware package BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling as implemented by BUGS to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty in detecting the correct model when the additive genetic effect was low (between 10 and 20%) or moderate (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%) even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
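The simulated twin data described above can be sketched with a liability-threshold ACE model: each twin's latent liability is the sum of additive genetic (A), common environment (C) and unique environment (E) components, and the binary phenotype is the thresholded liability. The Python below is a minimal stand-in for the paper's simulation design, with all settings illustrative.

import numpy as np

def simulate_binary_twins(n_pairs, a2, c2, mz, threshold=0.0, seed=0):
    # a2, c2: additive-genetic and common-environment variance fractions;
    # unique environment takes the remainder. MZ pairs share all additive
    # genetic effects; DZ pairs share half (genetic correlation 0.5).
    rng = np.random.default_rng(seed)
    e2 = 1.0 - a2 - c2
    C = rng.normal(0.0, np.sqrt(c2), n_pairs)
    A_shared = rng.normal(0.0, np.sqrt(a2), n_pairs)
    if mz:
        A1 = A2 = A_shared
    else:
        r = 0.5
        A1 = np.sqrt(r) * A_shared + np.sqrt(1 - r) * rng.normal(0, np.sqrt(a2), n_pairs)
        A2 = np.sqrt(r) * A_shared + np.sqrt(1 - r) * rng.normal(0, np.sqrt(a2), n_pairs)
    E1 = rng.normal(0.0, np.sqrt(e2), n_pairs)
    E2 = rng.normal(0.0, np.sqrt(e2), n_pairs)
    return (A1 + C + E1 > threshold).astype(int), (A2 + C + E2 > threshold).astype(int)

y1, y2 = simulate_binary_twins(2000, a2=0.4, c2=0.2, mz=True)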
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing tradition of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last being a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
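As a minimal sketch of the two-step modelling process described above, the Python below uses scikit-learn on synthetic data standing in for the cardiac dataset: a shallow decision tree screens for a parsimonious subset of variables, and a parametric model is then fitted on that subset for prediction. The variable counts, model choices and settings are illustrative assumptions, not the paper's analysis.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 1710 patients, binary morbidity outcome, 20 candidate variables
X, y = make_classification(n_samples=1710, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: non-parametric screen with a decision tree to rank candidate variables
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
top = np.argsort(tree.feature_importances_)[::-1][:5]   # keep a parsimonious subset

# Step 2: final parametric predictive model on the selected variables
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)
print("held-out accuracy:", round(clf.score(X_te[:, top], y_te), 3))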
Abstract:
MCM-41 periodic mesoporous silicates with a high degree of structural ordering are synthesized and used as model adsorbents to study the isotherm prediction of nitrogen adsorption. The nitrogen adsorption isotherm at 77 K for a macroporous silica is measured and used in high-resolution α_s-plot comparative analysis to determine the external surface area, total surface area and primary mesopore volume of the MCM-41 materials. Adsorption equilibrium data of nitrogen on the different pore size MCM-41 samples (pore diameters from 2.40 to 4.92 nm) are also obtained. Based on Broekhoff and de Boer's thermodynamic analysis, the nitrogen adsorption isotherms for the different pore size MCM-41 samples are interpreted using a novel strategy, in which the parameters of an empirical expression, used to represent the potential of interaction between the adsorbate and adsorbent, are obtained by fitting only the multilayer region prior to capillary condensation for C16 MCM-41. Subsequently the entire isotherm, including the phase transition, is predicted for all the different pore size MCM-41 samples without any fitting. The results show that the predictions of multilayer adsorption and total adsorbed amount are in good agreement with the experimental isotherms. The predictions of the relative pressure corresponding to the capillary equilibrium (coexistence) transition agree remarkably well with experimental data on the adsorption branch even for hysteretic isotherms, confirming that this is the branch appropriate for pore size distribution analysis. The impact of pore radius on the adsorption film thickness and capillary coexistence pressure is also investigated and found to agree with the experimental data. (C) 2003 Elsevier Inc. All rights reserved.
Abstract:
We study the implications for two-Higgs-doublet models of the recent announcement at the LHC giving a tantalizing hint for a Higgs boson of mass 125 GeV decaying into two photons. We require that the experimental result be within a factor of 2 of the theoretical standard model prediction, and analyze the type I and type II models as well as the lepton-specific and flipped models, subject to this requirement. It is assumed that there is no new physics other than two Higgs doublets. In all of the models, we display the allowed region of parameter space taking the recent LHC announcement at face value, and we analyze the W⁺W⁻, ZZ, bb̄, and τ⁺τ⁻ expectations in these allowed regions. Throughout the entire range of parameter space allowed by the γγ constraint, the numbers of events for Higgs decays into WW, ZZ, and bb̄ are not changed from the standard model by more than a factor of 2. In contrast, in the lepton-specific model, decays to τ⁺τ⁻ are very sensitive across the entire γγ-allowed region.