71 results for Explicit method, Mean square stability, Stochastic orthogonal Runge-Kutta, Chebyshev method


Relevance: 100.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, among them a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property, the polynomial-order-reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, because it relies on the minimum mean-square error criterion.
To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
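The least-mean p-norm idea behind these algorithms can be illustrated with a minimal transversal (FIR) sketch. The thesis applies it to the lattice structure, but the core update, which replaces the squared-error cost of LMS with a fractional lower order moment E|e|^p, is the same; the filter order, step size, and test signal below are illustrative assumptions.

```python
import math
import random

def lmp_filter(x, d, order=2, mu=0.05, p=1.5):
    """Least-mean p-norm (LMP) adaptive FIR filter: stochastic-gradient
    minimisation of E|e|^p (a fractional lower order moment).
    For p = 2 this reduces to the classic LMS update."""
    w = [0.0] * order
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                      # tapped delay line
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        # gradient of |e|^p: p * |e|^(p-1) * sign(e)
        g = p * abs(e) ** (p - 1) * math.copysign(1.0, e) if e else 0.0
        w = [wi + mu * g * ui for wi, ui in zip(w, u)]
    return w
```

For a toy system-identification problem with a two-tap target filter, the weights converge close to the true taps even with p < 2, which is what makes the p-norm criterion usable for impulsive inputs.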

Relevance: 100.00%

Publisher:

Abstract:

Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture the fundamentally discrete and stochastic nature of cellular biology - most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, and of course the corresponding probabilities and numbers of each molecular species must all remain positive. While accurately serving this purpose, the SSA can be computationally inefficient due to very small time steps, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of the Euler-Maruyama, Milstein and even higher order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
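For a single decay reaction, the exact SSA and the Poisson τ-leap approximation it motivates can be sketched in a few lines; the reaction, rate, and step size are illustrative assumptions, and the paper's contribution is to extend such leaps to Itô-Taylor and Runge-Kutta analogues.

```python
import math
import random

def ssa_decay(n0, c, t_end, rng):
    """Exact Gillespie SSA for the decay reaction A -> 0 with rate constant c."""
    n, t = n0, 0.0
    while n > 0:
        a = c * n                      # propensity of the single reaction
        t += rng.expovariate(a)        # exponential waiting time to next firing
        if t > t_end:
            break
        n -= 1
    return n

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_decay(n0, c, t_end, tau, rng):
    """Poisson tau-leap: fire Poisson(a*tau) reactions per step,
    clamping the state at zero to keep molecule counts non-negative."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        n = max(n - poisson(c * n * tau, rng), 0)
        t += tau
    return n
```

Both estimators should reproduce the analytic mean n0·exp(-c·t) of the decay process, with the leap method trading a small bias for far fewer random draws per path.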

Relevance: 100.00%

Publisher:

Abstract:

The Balanced method was introduced as a class of quasi-implicit methods, based upon the Euler-Maruyama scheme, for solving stiff stochastic differential equations. We extend the Balanced method to introduce a class of stable strong order 1.0 numerical schemes for solving stochastic ordinary differential equations. We derive convergence results for this class of numerical schemes. The asymptotic stability of this class of schemes is illustrated and compared with that of contemporary schemes of strong order 1.0. We present some evidence on parameter selection with respect to minimising the error convergence terms. Furthermore, we provide a convergence result for general Balanced-style schemes of higher orders.
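A minimal sketch of one Balanced (quasi-implicit) Euler-Maruyama step for geometric Brownian motion, using the common control-weight choice c0 = |a|, c1 = |b|. These weights and the test equation are illustrative assumptions; the paper's strong order 1.0 extension adds correction terms not shown here.

```python
import math
import random

def balanced_euler_gbm(x0, a, b, dt, n_steps, rng):
    """Balanced Euler scheme for dX = a*X dt + b*X dW.
    Each step solves X_{n+1} = X_n + aX dt + bX dW + C (X_n - X_{n+1})
    with C = |a| dt + |b| |dW|, which damps stiff drift/diffusion terms."""
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        c = abs(a) * dt + abs(b) * abs(dw)     # balancing control term
        x = (x + a * x * dt + b * x * dw + c * x) / (1.0 + c)
    return x
```

A quick sanity check is the mean: for dX = a X dt + b X dW, E[X_T] = x0·exp(a·T), which the scheme reproduces up to a small discretisation bias.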

Relevance: 100.00%

Publisher:

Abstract:

High-speed broadband internet access is widely recognised as a catalyst to social and economic development. However, the provision of broadband Internet services with existing solutions to rural populations, scattered over extensive geographical areas, remains both an economic and technical challenge. As a feasible solution, the Commonwealth Scientific and Industrial Research Organization (CSIRO) proposed a highly spectrally efficient, innovative and cost-effective fixed wireless broadband access technology, which uses analogue TV frequency spectrum and Multi-User MIMO (MU-MIMO) technology with Orthogonal Frequency Division Multiplexing (OFDM). MIMO systems have emerged as a promising solution to the increasing demand for higher data rates, better quality of service, and higher network capacity. However, the performance of MIMO systems can be significantly affected by the type of propagation environment (e.g., indoor, outdoor urban, or outdoor rural) and the operating frequency. For instance, the large spectral efficiencies associated with MIMO systems, which assume a rich scattering environment as found in urban areas, may not be attainable in all propagation environments, such as outdoor rural environments, due to lower scatterer densities. Since this is the first time a MU-MIMO-OFDM fixed broadband wireless access solution has been deployed in a rural environment, questions arise from both theoretical and practical standpoints; for example, what capacity gains are available for the proposed solution under realistic rural propagation conditions? Currently, no comprehensive channel measurement and capacity analysis results are available for MU-MIMO-OFDM fixed broadband wireless access systems which employ large-scale multiple antennas at the Access Point (AP) and analogue TV frequency spectrum in rural environments.
Moreover, according to the literature, no deterministic MU-MIMO channel models exist that define rural wireless channels by accounting for terrain effects. This thesis fills the aforementioned knowledge gaps with channel measurements, channel modeling and comprehensive capacity analysis for MU-MIMO-OFDM fixed wireless broadband access systems in rural environments. For the first time, channel measurements were conducted in a rural farmland near Smithton, Tasmania using CSIRO's broadband wireless access solution. A novel deterministic MU-MIMO-OFDM channel model, which can be used for accurate performance prediction of rural MU-MIMO channels with dominant Line-of-Sight (LoS) paths, was developed in this research. Results show that the proposed solution can achieve 43.7 bits/s/Hz at a Signal-to-Noise Ratio (SNR) of 20 dB in rural environments. Based on channel measurement results, this thesis verifies that the deterministic channel model accurately predicts channel capacity in rural environments with a Root Mean Square (RMS) error of 0.18 bits/s/Hz. Moreover, this study presents a comprehensive capacity analysis of rural MU-MIMO-OFDM channels using experimental, simulated and theoretical models. Based on the validated deterministic model, the effects of capacity variation with different user distribution angles (θ) around the AP were further analysed. For instance, when SNR = 20 dB, the capacity increases from 15.5 bits/s/Hz to 43.7 bits/s/Hz as θ increases from 10° to 360°. Strategies to mitigate these capacity degradation effects, employing a suitable user grouping method, are also presented. Outcomes of this thesis have already been used by CSIRO scientists to determine optimum user distribution angles around the AP, and are of great significance for researchers and MU-MIMO-OFDM system developers seeking to understand the advantages and potential capacity gains of MU-MIMO systems in rural environments.
Also, results of this study are useful for further improving the performance of MU-MIMO-OFDM systems in rural environments. Ultimately, this knowledge contribution will be useful in delivering efficient, cost-effective high-speed wireless broadband systems that are tailor-made for rural environments, thus improving the quality of life and economic prosperity of rural populations.
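Spectral-efficiency figures such as 43.7 bits/s/Hz at 20 dB SNR come from the standard MIMO capacity formula C = log2 det(I + (SNR/Nt)·H·H^H) with equal power per transmit antenna. A minimal pure-Python sketch (the channel matrix here is an illustrative placeholder, not the thesis's measured data):

```python
import math

def mimo_capacity(H, snr):
    """Capacity in bits/s/Hz of one flat-fading MIMO channel realization:
    C = log2 det(I + (snr/Nt) * H * H^H), equal power across Nt antennas.
    H is a list of rows of complex gains (Nr x Nt)."""
    nr, nt = len(H), len(H[0])
    # G = I + (snr/nt) * H * H^H  (Hermitian positive definite)
    G = [[(1.0 if i == j else 0.0) +
          (snr / nt) * sum(H[i][k] * H[j][k].conjugate() for k in range(nt))
          for j in range(nr)] for i in range(nr)]
    # determinant via Gaussian elimination (no pivoting needed for HPD G)
    A = [row[:] for row in G]
    det = 1.0 + 0.0j
    for col in range(nr):
        piv = A[col][col]
        det *= piv
        for r in range(col + 1, nr):
            f = A[r][col] / piv
            for c2 in range(col, nr):
                A[r][c2] -= f * A[col][c2]
    return math.log2(det.real)
```

For an identity 2x2 channel at 20 dB (snr = 100) this gives 2·log2(1 + 50) ≈ 11.3 bits/s/Hz; rich scattering (well-conditioned H) pushes the determinant, and hence capacity, up.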

Relevance: 100.00%

Publisher:

Abstract:

Objective: To investigate limb loading and dynamic stability during squatting in the early functional recovery of total hip arthroplasty (THA) patients. Design: Cohort study. Setting: Inpatient rehabilitation clinic. Participants: A random sample of 61 THA patients (34♂/27♀; 62±9 yrs, 77±14 kg, 174±9 cm) was assessed twice, 13.2±3.8 days (PRE) and 26.6±3.3 days post-surgery (POST), and compared with a healthy reference group (REF) (22♂/16♀; 47±12 yrs; 78±20 kg; 175±10 cm). Interventions: THA patients received two weeks of standard inpatient rehabilitation. Main Outcome Measure(s): Inter-limb vertical force distribution and dynamic stability during the squat maneuver, as defined by the root mean square (RMS) of the center of pressure in antero-posterior and medio-lateral directions, of operated (OP) and non-operated (NON) limbs. Self-reported function was assessed via the FFb-H-OA 2.0 questionnaire. Results: At PRE, unloading of the OP limb was 15.8% greater (P<.001, d=1.070) and antero-posterior and medio-lateral center of pressure RMS were 30-34% higher in THA than REF (P<.05). Unloading was reduced by 12.8% towards a more equal distribution from PRE to POST (P<.001, d=0.874). Although medio-lateral stability improved between PRE and POST (OP: 14.8%, P=.024, d=0.397; NON: 13.1%, P=.015, d=0.321), antero-posterior stability was not significantly different. Self-reported physical function improved by 15.8% (P<.001, d=0.965). Conclusion(s): THA patients unload the OP limb and are dynamically more unstable during squatting in the early rehabilitation phase following total hip replacement than healthy adults. Although loading symmetry and medio-lateral stability improved to the level of healthy adults with rehabilitation, antero-posterior stability remained impaired. Measures of dynamic stability and load symmetry during squatting provide quantitative information that can be used to clinically monitor early functional recovery from THA.
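The stability outcome used here, RMS of the centre-of-pressure trace about its mean in each direction, is a one-line computation; a minimal sketch with an illustrative trace:

```python
import math

def rms_about_mean(trace):
    """RMS of a centre-of-pressure trace about its mean value, the
    dynamic-stability measure computed per direction (AP or ML)."""
    m = sum(trace) / len(trace)
    return math.sqrt(sum((v - m) ** 2 for v in trace) / len(trace))
```

Applied to the AP and ML coordinate series of each limb's force-plate recording, a larger value indicates more sway, i.e. less dynamic stability.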

Relevance: 100.00%

Publisher:

Abstract:

PURPOSE: Previous research demonstrating that specific performance outcome goals can be achieved in different ways is functionally significant for springboard divers whose performance environment can vary extensively. This body of work raises questions about the traditional approach of balking (terminating the takeoff) by elite divers aiming to perform only identical, invariant movement patterns during practice. METHOD: A 12-week training program (2 times per day; 6.5 hr per day) was implemented with 4 elite female springboard divers to encourage them to adapt movement patterns under variable takeoff conditions and complete intended dives, rather than balk. RESULTS: Intraindividual analyses revealed small increases in variability in the board-work component of each diver's pretraining and posttraining program reverse-dive takeoffs. No topological differences were observed between movement patterns of dives completed pretraining and posttraining. Differences were noted in the amount of movement variability under different training conditions (evidenced by higher normalized root mean square error indexes posttraining). An increase in the number of completed dives (from 78.91%-86.84% to 95.59%-99.29%) and a decrease in the frequency of balked takeoffs (from 13.16%-19.41% to 0.63%-4.41%) showed that the elite athletes were able to adapt their behaviors during the training program. These findings coincided with greater consistency in the divers' performance during practice as scored by qualified judges. CONCLUSION: Results suggested that on completion of training, athletes were capable of successfully adapting their movement patterns under more varied takeoff conditions to achieve greater consistency and stability of performance outcomes.

Relevance: 100.00%

Publisher:

Abstract:

This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
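The physical-measure dynamics of the standard square-root (Heston) stochastic volatility model can be simulated with a full-truncation Euler scheme, the kind of path simulation a particle filter repeats for each particle; the parameter values in the test are illustrative, not the paper's estimates.

```python
import math
import random

def simulate_heston_path(s0, v0, kappa, theta, sigma, rho, r, dt, n_steps, rng):
    """Full-truncation Euler scheme for the square-root SV model:
        dS = r S dt + sqrt(v) S dW1
        dv = kappa (theta - v) dt + sigma sqrt(v) dW2,  corr(dW1, dW2) = rho
    Truncating v at zero inside the square roots keeps the scheme real-valued."""
    s, v = s0, v0
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        vp = max(v, 0.0)                               # full truncation
        s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma * math.sqrt(vp * dt) * z2
    return s, v
```

A standard sanity check is the drift: under these dynamics E[S_T] = s0·exp(r·T), which the Monte Carlo average should approach. In a particle filter, thousands of such paths are propagated per time step, which is why the paper offloads the work to GPUs.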

Relevance: 100.00%

Publisher:

Abstract:

The paper presents a geometry-free approach to assess the variation of covariance matrices of undifferenced triple-frequency GNSS measurements and its impact on positioning solutions. Four independent geometry-free/ionosphere-free (GFIF) models formed from original triple-frequency code and phase signals allow for effective computation of variance-covariance matrices using real data. Variance Component Estimation (VCE) algorithms are implemented to obtain the covariance matrices for three pseudorange and three carrier-phase signals epoch by epoch. Covariance results from triple-frequency BeiDou System (BDS) and GPS data sets demonstrate that the estimated standard deviation varies consistently with the amplitude of the actual GFIF error time series. Single point positioning (SPP) results from BDS ionosphere-free measurements at four MGEX stations demonstrate an improvement of up to about 50% in the Up direction relative to results based on mean square statistics. Additionally, a more extensive SPP analysis at 95 global MGEX stations based on GPS ionosphere-free measurements shows an average improvement of about 10% relative to the traditional results. This finding provides a preliminary confirmation that adequate consideration of the variation of covariance leads to improved GNSS state solutions.
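As background for the GFIF models, the classic dual-frequency ionosphere-free code combination illustrates how frequency-weighted differencing cancels the first-order ionospheric delay; the paper's triple-frequency combinations generalize this idea. The GPS L1/L2 frequencies and synthetic delay below are illustrative values.

```python
def ionosphere_free(p1, p2, f1, f2):
    """Dual-frequency ionosphere-free code combination:
        IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)
    Because the first-order ionospheric delay scales as 1/f^2, it cancels
    exactly in this combination, leaving the geometric range (plus noise)."""
    return (f1 ** 2 * p1 - f2 ** 2 * p2) / (f1 ** 2 - f2 ** 2)
```

Feeding in two pseudoranges built from the same geometric range plus a 1/f^2 delay recovers the range to numerical precision, at the cost of amplified measurement noise, which is exactly where realistic per-signal covariances from VCE pay off.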

Relevance: 100.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001, Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001, Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m).
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy which was gauged by a low Root Mean Square error across subsections and gradients. 
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
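The position-over-time speed estimate validated alongside the Doppler-shift method can be sketched from consecutive latitude/longitude fixes using a spherical-Earth (haversine) distance; the 1 Hz logging rate and coordinates below are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes
    (spherical approximation, mean Earth radius 6371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes, dt=1.0):
    """Speed (m/s) between consecutive (lat, lon) fixes logged every dt
    seconds - the 'change in GPS position over time' method; the
    Doppler-shift speed is reported directly by the receiver instead."""
    return [haversine_m(*fixes[i], *fixes[i + 1]) / dt
            for i in range(len(fixes) - 1)]
```

Because this method differentiates noisy positions, it is the one that degrades slightly on curved paths, consistent with the lower correlations reported for it in the first study.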

Relevance: 100.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
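The subband decomposition that the fixed wavelet-packet structure builds on can be sketched with a single level of the 2-D Haar transform; Haar is used here only as the simplest stand-in, since the thesis's actual filter bank and decomposition tree are not reproduced here.

```python
def haar2d(img):
    """One level of the 2-D Haar wavelet transform: average/difference along
    rows, then along columns, returning the (LL, LH, HL, HH) subbands of an
    even-sized greyscale image (list of lists of floats)."""
    def step(vec):
        a = [(vec[2 * i] + vec[2 * i + 1]) / 2 for i in range(len(vec) // 2)]
        d = [(vec[2 * i] - vec[2 * i + 1]) / 2 for i in range(len(vec) // 2)]
        return a, d

    low, high = [], []                 # row-wise lowpass / highpass halves
    for row in img:
        a, d = step(row)
        low.append(a)
        high.append(d)

    def cols(mat):
        h, w = len(mat), len(mat[0])
        a = [[0.0] * w for _ in range(h // 2)]
        d = [[0.0] * w for _ in range(h // 2)]
        for c in range(w):
            av, dv = step([mat[r][c] for r in range(h)])
            for r in range(h // 2):
                a[r][c], d[r][c] = av[r], dv[r]
        return a, d

    LL, LH = cols(low)
    HL, HH = cols(high)
    return LL, LH, HL, HH
```

Recursing on LL (or, for a wavelet packet, on any subband) yields the decomposition tree; the thesis's fixed structure chooses which subbands to split based on information-packing and visual criteria.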

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE To examine the psychometric properties of a Chinese version of the Problem Areas In Diabetes (PAID-C) scale. RESEARCH DESIGN AND METHODS The reliability and validity of the PAID-C were evaluated in a convenience sample of 205 outpatients with type 2 diabetes. Confirmatory factor analysis, Bland-Altman analysis, and Spearman's correlations facilitated the psychometric evaluation. RESULTS Confirmatory factor analysis confirmed a one-factor structure of the PAID-C (χ2/df ratio = 1.894, goodness-of-fit index = 0.901, comparative fit index = 0.905, root mean square error of approximation = 0.066). The PAID-C was associated with A1C (rs = 0.15; P < 0.05) and diabetes self-care behaviors in general diet (rs = −0.17; P < 0.05) and exercise (rs = −0.17; P < 0.05). The 4-week test-retest reliability demonstrated satisfactory stability (rs = 0.83; P < 0.01). CONCLUSIONS The PAID-C is a reliable and valid measure to determine diabetes-related emotional distress in Chinese people with type 2 diabetes.

Relevance: 100.00%

Publisher:

Abstract:

Background The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure descriptor that represents the extent of long-range contacts and is considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. Results We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance: local sequence in the form of PSI-BLAST profiles; local sequence plus amino acid composition; local sequence plus molecular weight; local sequence plus secondary structure predicted by PSIPRED; local sequence plus molecular weight and amino acid composition; local sequence plus molecular weight and predicted secondary structure; and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences.
Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance with the CC to 0.57 and an RMSE of 0.79. In addition, combining the predicted secondary structure by PSIPRED was found to significantly improve the prediction performance and could yield the best prediction accuracy with a CC of 0.60 and RMSE of 0.78, which provided at least comparable performance compared with the other existing methods. Conclusion The SVR method shows a prediction performance competitive with or at least comparable to the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool in extracting the protein sequence-structure relationship and in estimating the protein structural profiles from amino acid sequences.
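The SVR-plus-CC/RMSE evaluation loop described above can be sketched as follows; synthetic features stand in for the PSI-BLAST-profile encodings, and all names and parameter choices here are illustrative, not the paper's setup:

```python
# Sketch of support vector regression scored by Pearson CC and RMSE.
# Synthetic data stands in for per-residue sequence features.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Toy stand-in: 300 "residues" x 20 profile features, noisy linear target.
X = rng.normal(size=(300, 20))
w = rng.normal(size=20) / np.sqrt(20)          # keep target variance ~1
y = X @ w + rng.normal(scale=0.3, size=300)

# Train on the first 200 samples, evaluate on the held-out 100.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X[:200], y[:200])
pred = model.predict(X[200:])

cc = np.corrcoef(pred, y[200:])[0, 1]          # Pearson correlation coefficient
rmse = np.sqrt(mean_squared_error(y[200:], pred))
print(f"CC = {cc:.2f}, RMSE = {rmse:.2f}")
```

In the study itself, `X` would be the windowed PSI-BLAST profile (optionally augmented with molecular weight, composition, and predicted secondary structure) and `y` the observed RWCO values.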

Resumo:

Obesity is a major public health problem in both developed and developing countries. The body mass index (BMI) is the most common index used to define obesity. The universal application of the same BMI classification across different ethnic groups is being challenged due to the inability of the index to differentiate fat mass (FM) from fat-free mass (FFM) and the recognized ethnic differences in body composition. A better understanding of the body composition of Asian children from different backgrounds would help to clarify the obesity-related health risks of people in this region. Moreover, the limitations of the BMI underscore the necessity of using, where possible, more accurate measures of body fat assessment in research and clinical settings in addition to BMI, particularly for monitoring prevention and treatment efforts. The aim of the first study was to determine the ethnic difference in the relationship between BMI and percent body fat (%BF) in pre-pubertal Asian children from China, Lebanon, Malaysia, the Philippines, and Thailand. A total of 1039 children aged 8-10 y were recruited from the five countries using a non-random purposive sampling approach aiming to encompass a wide BMI range. Percent body fat (%BF) was determined using the deuterium dilution technique to quantify total body water (TBW) and subsequently derive proportions of FM and FFM. The study highlighted the sex and ethnic differences in the relationship between BMI and %BF in Asian children from different countries. Girls had approximately 4.0% higher %BF than boys at a given BMI. Filipino boys tended to have a lower %BF than their Chinese, Lebanese, Malay and Thai counterparts at the same age and BMI level (corrected mean %BF was 25.7±0.8%, 27.4±0.4%, 27.1±0.6%, 27.7±0.5%, 28.1±0.5% for Filipino, Chinese, Lebanese, Malay and Thai boys, respectively), although they differed significantly only from Thai and Malay boys.
Thai girls had approximately 2.0% higher %BF values than their Chinese, Lebanese, Filipino and Malay counterparts at a given BMI (with no significant difference among the latter four ethnic groups; corrected mean %BF was 31.1±0.5%, 28.6±0.4%, 29.2±0.6%, 29.5±0.6%, 29.5±0.5% for Thai, Chinese, Lebanese, Malay and Filipino girls, respectively). However, the ethnic difference in the BMI-%BF relationship varied by BMI. Compared with Caucasians, Asian children had a BMI 3-6 units lower for a given %BF. More than one third of the obese Asian children in the study were not identified using the WHO classification, and more than half were not identified using the International Obesity Task Force (IOTF) classification. However, use of the Chinese classification increased the sensitivity by 19.7%, 18.1%, 2.3%, 2.3%, and 11.3% for Chinese, Lebanese, Malay, Filipino and Thai girls, respectively. A further aim of the first study was to determine the ethnic difference in body fat distribution in pre-pubertal Asian children from China, Lebanon, Malaysia, and Thailand. The skinfold thicknesses, height, weight, waist circumference (WC) and total adiposity (as determined by the deuterium dilution technique) of 922 children from the four countries were assessed. Chinese boys and girls had a trunk-to-extremity skinfold thickness ratio similar to that of their Thai counterparts, and both groups had higher ratios than the Malays and Lebanese at a given total FM. At a given BMI, both Chinese and Thai boys and girls had a higher WC than Malays and Lebanese (corrected mean WC was 68.1±0.2 cm, 67.8±0.3 cm, 65.8±0.4 cm, 64.1±0.3 cm for Chinese, Thai, Lebanese and Malay boys, respectively; 64.2±0.2 cm, 65.0±0.3 cm, 62.9±0.4 cm, 60.6±0.3 cm for Chinese, Thai, Lebanese and Malay girls, respectively). Chinese boys and girls had a lower trunk-fat-adjusted subscapular/suprailiac skinfold ratio compared with their Lebanese and Malay counterparts.
The second study aimed to develop and cross-validate bioelectrical impedance analysis (BIA) prediction equations of TBW and FFM for Asian pre-pubertal children from China, Lebanon, Malaysia, the Philippines, and Thailand. Data on height, weight, age, gender, and the resistance and reactance measured by BIA were collected from 948 Asian children (492 boys and 456 girls) aged 8-10 y from the five countries. The deuterium dilution technique was used as the criterion method for the estimation of TBW and FFM. The BIA equations were developed in the validation group (630 children randomly selected from the total sample) using stepwise multiple regression analysis and cross-validated in a separate group (318 children) using the Bland-Altman approach. Age, gender and ethnicity influenced the relationship between the resistance index (RI = height²/resistance), TBW and FFM. The BIA prediction equation for the estimation of TBW was: TBW (kg) = 0.231×Height² (cm)/Resistance (Ω) + 0.066×Height (cm) + 0.188×Weight (kg) + 0.128×Age (yr) + 0.500×Sex (male=1, female=0) − 0.316×Ethnicity (Thai ethnicity=1, others=0) − 4.574, and for the estimation of FFM: FFM (kg) = 0.299×Height² (cm)/Resistance (Ω) + 0.086×Height (cm) + 0.245×Weight (kg) + 0.260×Age (yr) + 0.901×Sex (male=1, female=0) − 0.415×Ethnicity (Thai ethnicity=1, others=0) − 6.952. The R² was 88.0% (root mean square error, RMSE = 1.3 kg) for the TBW equation and 88.3% (RMSE = 1.7 kg) for the FFM equation. No significant difference was found between measured and predicted TBW, or between measured and predicted FFM, for the whole cross-validation sample (bias = −0.1±1.4 kg, pure error = 1.4±2.0 kg for TBW; bias = −0.2±1.9 kg, pure error = 1.8±2.6 kg for FFM). However, the prediction equations tended to overestimate TBW/FFM at lower levels and underestimate it at higher levels of TBW/FFM.
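The two BIA prediction equations above can be transcribed directly as Python functions (height in cm, resistance in ohms, weight in kg, age in years; sex coded male=1/female=0; ethnicity coded Thai=1/others=0). The example input values at the bottom are purely illustrative:

```python
# The study's BIA prediction equations for total body water (TBW) and
# fat-free mass (FFM), transcribed term by term from the abstract.

def predict_tbw(height_cm, resistance_ohm, weight_kg, age_yr, sex, thai):
    """TBW (kg) from the study's BIA equation."""
    ri = height_cm ** 2 / resistance_ohm  # resistance index, RI
    return (0.231 * ri + 0.066 * height_cm + 0.188 * weight_kg
            + 0.128 * age_yr + 0.500 * sex - 0.316 * thai - 4.574)

def predict_ffm(height_cm, resistance_ohm, weight_kg, age_yr, sex, thai):
    """FFM (kg) from the study's BIA equation."""
    ri = height_cm ** 2 / resistance_ohm
    return (0.299 * ri + 0.086 * height_cm + 0.245 * weight_kg
            + 0.260 * age_yr + 0.901 * sex - 0.415 * thai - 6.952)

# Illustrative values only: a hypothetical 9-year-old non-Thai boy,
# 135 cm, 30 kg, measured resistance 700 ohms.
tbw = predict_tbw(135, 700, 30, 9, sex=1, thai=0)
ffm = predict_ffm(135, 700, 30, 9, sex=1, thai=0)
print(f"predicted TBW = {tbw:.1f} kg, FFM = {ffm:.1f} kg")
```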
The accuracy of the general equations for TBW and FFM compared favorably with both BMI-specific and ethnic-specific equations. There were significant differences between the TBW and FFM predicted from external BIA equations derived from Caucasian populations and the measured values in Asian children. There were three specific aims of the third study. The first was to explore the relationship between obesity and the metabolic syndrome and its component abnormalities in Chinese children. A total of 608 boys and 800 girls aged 6-12 y were recruited from four cities in China. Three definitions of the pediatric metabolic syndrome and abnormalities were used, including the International Diabetes Federation (IDF) definition and the National Cholesterol Education Program (NCEP) definition for adults as modified by Cook et al. and de Ferranti et al. The prevalence of the metabolic syndrome varied with the definition used: it was highest under the de Ferranti definition (5.4%, 24.6% and 42.0% for normal-weight, overweight and obese children, respectively), followed by the Cook definition (1.5%, 8.1%, and 25.1%, respectively) and the IDF definition (0.5%, 1.8% and 8.3%, respectively). Overweight and obese children had a higher risk of developing the metabolic syndrome compared with normal-weight children (odds ratios varied with the definition, from 3.958 to 6.866 for overweight children and from 12.640 to 26.007 for obese children). Overweight and obesity also increased the risk of developing metabolic abnormalities. Central obesity and high triglycerides (TG) were the most common abnormalities, while hyperglycemia was the least frequent in Chinese children regardless of the definition used. The second purpose was to determine the best obesity index among BMI, %BF, WC and waist-to-height ratio (WHtR) for the prediction of cardiovascular (CV) risk factor clustering across a 2-y follow-up in Chinese children.
Height, weight, WC, %BF as determined by BIA, blood pressure, TG, high-density lipoprotein cholesterol (HDL-C), and fasting glucose were collected at baseline and 2 years later in 292 boys and 277 girls aged 8-10 y. The percentage of children who remained overweight/obese after 2 years, defined on the basis of BMI, WC, WHtR and %BF, was 89.7%, 93.5%, 84.5%, and 80.4%, respectively. Obesity indices at baseline correlated significantly with TG, HDL-C, and blood pressure at both baseline and 2 years later, with similar strengths of correlation. BMI at baseline explained the greatest variance in later blood pressure. WC at baseline explained the greatest variance in later HDL-C and glucose, while WHtR at baseline was the main predictor of later TG. Receiver-operating characteristic (ROC) analysis explored the ability of the four indices to identify the later presence of CV risk. Overweight/obese children defined on the basis of BMI, WC, WHtR or %BF were more likely to develop CV risk 2 years later, with relative risk (RR) scores of 3.670, 3.762, 2.767, and 2.804, respectively. The final purpose of the third study was to develop age- and gender-specific percentiles of WC and WHtR, and cut-off points of WC and WHtR for the prediction of CV risk, in Chinese children. Smoothed percentile curves of WC and WHtR were produced for 2830 boys and 2699 girls aged 6-12 y randomly selected from southern and northern China using the LMS method. The optimal age- and gender-specific thresholds of WC and WHtR for the prediction of cardiovascular risk factor clustering were derived in a sub-sample (n=1845) by ROC analysis. Age- and gender-specific WC and WHtR percentiles were constructed. The WC thresholds were at the 90th and 84th percentiles for Chinese boys and girls, respectively, with sensitivity and specificity ranging from 67.2% to 83.3%.
The WHtR thresholds were at the 91st and 94th percentiles for Chinese boys and girls, respectively, with sensitivity and specificity ranging from 78.6% to 88.9%. The cut-offs for both WC and WHtR were age- and gender-dependent. In conclusion, this thesis quantifies the ethnic differences in the BMI-%BF relationship and in body fat distribution between Asian children of different origins, and confirms the necessity of considering ethnic differences in body composition when developing BMI and other obesity-index criteria for obesity in Asian children. Moreover, ethnicity is also important in BIA prediction equations. In addition, the WC and WHtR percentiles and thresholds for the prediction of CV risk in Chinese children differ from those of other populations. Although WC and WHtR showed no advantage over BMI or %BF in the prediction of CV risk, obese children had a higher risk of developing the metabolic syndrome and its abnormalities than normal-weight children regardless of the obesity index used.
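The ROC-based threshold derivation described in this abstract amounts to scanning candidate cut-offs and keeping the one with the best sensitivity/specificity trade-off. A common criterion for that trade-off is Youden's J = sensitivity + specificity − 1; the sketch below uses it on synthetic waist-circumference data (all values illustrative, not the study's):

```python
# Sketch of ROC-style optimal cut-off selection via Youden's J.
# Synthetic waist-circumference (WC) data; distributions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
wc_neg = rng.normal(60, 5, 500)   # children without CV risk-factor clustering
wc_pos = rng.normal(70, 5, 100)   # children with CV risk-factor clustering
wc = np.concatenate([wc_neg, wc_pos])
label = np.concatenate([np.zeros(500), np.ones(100)])

best_j, best_cut = -1.0, None
for cut in np.unique(wc):
    sens = np.mean(wc[label == 1] >= cut)   # sensitivity at this cut-off
    spec = np.mean(wc[label == 0] < cut)    # specificity at this cut-off
    j = sens + spec - 1                     # Youden's J statistic
    if j > best_j:
        best_j, best_cut = j, cut
print(f"optimal cut-off = {best_cut:.1f} cm (Youden J = {best_j:.2f})")
```

The chosen cut-off can then be located within the smoothed percentile curves, as the study does when reporting thresholds at the 90th/84th (WC) and 91st/94th (WHtR) percentiles.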

Resumo:

Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least squares (RLS) algorithm is proposed for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations of the proposed adaptive equalizer are conducted using a training-sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when using adaptive equalization with the RLS algorithm compared with the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms proved significantly beneficial for MU-MIMO-OFDM systems.
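A training-sequence-driven LMS equalizer of the kind described above can be sketched in a few lines. This is a minimal single-carrier baseband illustration, not the paper's MU-MIMO-OFDM system; the channel taps, filter length, and step size are all assumptions chosen for the example:

```python
# Minimal LMS adaptive linear equalizer trained on a known symbol sequence.
# Channel, tap count, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

# BPSK training symbols through a simple frequency-selective FIR channel.
symbols = rng.choice([-1.0, 1.0], size=2000)
channel = np.array([1.0, 0.4, 0.2])
received = np.convolve(symbols, channel)[:len(symbols)]
received += rng.normal(scale=0.05, size=len(symbols))   # additive noise

n_taps, mu = 7, 0.01       # equalizer length and LMS step size
w = np.zeros(n_taps)
for n in range(n_taps - 1, len(symbols)):
    x = received[n - n_taps + 1:n + 1][::-1]  # tap-delay line, newest first
    e = symbols[n] - w @ x                    # error vs. known training symbol
    w += mu * e * x                           # LMS update: w <- w + mu*e*x

# Hard-decision BER over the last 500 symbols, after convergence.
idx = np.arange(len(symbols) - 500, len(symbols))
out = np.array([w @ received[n - n_taps + 1:n + 1][::-1] for n in idx])
ber = np.mean(np.sign(out) != symbols[idx])
print(f"post-convergence BER = {ber:.3f}")
```

An RLS equalizer replaces the scalar-step update with a gain computed from a recursively updated inverse correlation matrix, converging faster at higher per-iteration cost, which matches the LMS/RLS comparison the abstract reports.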

Resumo:

In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques for the treatment of coupled slow and fast reactions in stochastic chemical kinetics, and present a new approach that couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of the LacZ and LacY proteins in E. coli, and conclude with a discussion of the significance of this work. (C) 2004 Elsevier Ltd. All rights reserved.
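The stochastic simulation algorithm (SSA) mentioned above draws exact exponentially distributed waiting times between individual reaction firings. A minimal sketch for the simplest possible system, a single isomerization A → B (the rate constant and molecule counts are illustrative, not from the paper):

```python
# Minimal Gillespie-style SSA for the single reaction A -> B.
# Rate constant and initial count are illustrative examples.
import numpy as np

rng = np.random.default_rng(3)

def ssa_decay(a0, k, t_end):
    """Exact SSA trajectory for A -> B with rate constant k."""
    t, a = 0.0, a0
    times, counts = [t], [a]
    while a > 0 and t < t_end:
        propensity = k * a                      # a(x) = k * #A for A -> B
        t += rng.exponential(1.0 / propensity)  # exact time to next firing
        a -= 1                                  # fire the reaction once
        times.append(t)
        counts.append(a)
    return np.array(times), np.array(counts)

times, counts = ssa_decay(a0=1000, k=0.5, t_end=10.0)
print(f"{len(times) - 1} reaction events simulated")
```

The multi-scale methods the paper surveys trade this one-event-at-a-time exactness for speed: Poisson Runge-Kutta (tau-leaping-style) methods fire a Poisson-distributed batch of reactions per step for medium-abundance species, while for the fast regime the dynamics approach a stochastic differential equation treatable by methods such as the balanced Euler scheme.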