194 results for Sequential error ratio
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals, a topic that plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part II. In Part I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The HT can be realized only approximately, using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed-point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed-point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behavior of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTL has been analyzed in additive Gaussian noise.
Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are important in both synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be used. For IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions more suitable for this purpose has been proposed. The kernels of this class are time-only (one-dimensional), rather than time-lag (two-dimensional) kernels; hence the class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
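The substitution at the heart of Part I can be illustrated numerically. The Python sketch below (with an illustrative sampling rate, delay, and test frequencies that are not taken from the thesis) shows that a fixed delay of tau seconds shifts a tone of frequency f by 2*pi*f*tau radians, a signal-dependent amount, whereas a Hilbert transformer shifts every frequency by the same 90 degrees.

```python
import numpy as np
from scipy.signal import hilbert

def lead_deg(x, q):
    """Average phase (degrees) by which x leads q, from analytic signals."""
    return np.degrees(np.angle(np.vdot(hilbert(q), hilbert(x))))

fs, tau = 10_000.0, 1e-3              # sampling rate and delay (illustrative)
t = np.arange(0, 0.5, 1.0 / fs)
for f in (100.0, 250.0):              # two input tones (illustrative)
    s = np.sin(2 * np.pi * f * t)
    q_ht = np.imag(hilbert(s))        # Hilbert-transformer quadrature
    q_td = np.sin(2 * np.pi * f * (t - tau))   # time-delay "quadrature"
    print(f"f={f:5.1f} Hz: HT shift ~ {lead_deg(s, q_ht):5.1f} deg, "
          f"delay shift ~ {lead_deg(s, q_td):5.1f} deg (= 360*f*tau)")
```

With tau = 1 ms, a 100 Hz tone is shifted by 36 degrees and a 250 Hz tone by exactly 90 degrees: the quadrature path becomes frequency dependent, which is the signal-dependent phase shift referred to above.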
Abstract:
This work investigates the computer modelling of the photochemical formation of smog products, such as ozone and aerosol, in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organisation (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to J_NO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error, as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal J_NO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross section and quantum yields. A computer model based on this method was developed to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted for variations in light intensity due to transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation by clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments performed in the CSIRO smog chambers. Three stages of aerosol formation are identified in plots of total particulate volume versus time: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models developed from the experimental data are presented. One model is empirically based on the observed discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favourably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in an increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1, due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased due to more OH radicals from the reaction of water with O(1D) from ozone photolysis. Increased temperature results in an increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while an increased photolysis rate results in an increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate.
Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed / mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
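As a concrete reading of the method described above, the sketch below numerically integrates actinic flux, absorption cross section and quantum yield over wavelength to obtain a photolysis rate coefficient, then applies the three in-chamber correction factors named in the abstract. All numerical values are hypothetical placeholders, not CSIRO chamber data.

```python
import numpy as np

# J = integral over wavelength of (actinic flux * cross section * quantum yield).
wl = np.linspace(290.0, 420.0, 261)                   # wavelength, nm
flux = 1e14 * np.ones_like(wl)                        # actinic flux, photons cm^-2 s^-1 nm^-1 (assumed)
sigma = 2e-19 * np.exp(-((wl - 400.0) / 40.0) ** 2)   # absorption cross section, cm^2 (assumed shape)
phi = np.clip((420.0 - wl) / 20.0, 0.0, 1.0)          # quantum yield, tailing off near 420 nm (assumed)

J_ex = np.trapz(flux * sigma * phi, wl)               # ex-chamber rate coefficient, s^-1
# In-chamber adjustment by the factors named in the abstract (hypothetical values):
transmittance, floor_albedo, cloud_atten = 0.90, 1.08, 0.85
J_in = J_ex * transmittance * floor_albedo * cloud_atten
print(f"J_NO2 ex-chamber ~ {J_ex:.2e} s^-1, in-chamber ~ {J_in:.2e} s^-1")
```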
Abstract:
A review of the main rolling models is conducted to assess their suitability for modelling the foil rolling process. Two such models are Fleck and Johnson's Hertzian model and Fleck, Johnson, Mear and Zhang's Influence Function model. Both of these models are approximated through the use of perturbation methods, which reduces computation time compared with the full numerical solution. The Hertzian model was approximated using the ratio of the yield stress of the strip to the plane-strain Young's modulus of the rolls as the small perturbation parameter. The Influence Function model approximation takes advantage of the solution of the well-known Aerofoil Integral Equation to gain insight into how the choice of interior boundary points affects the stability of the numerical solution of the model's equations. These approximations require less computation than their full models and, in the case of the Hertzian approximation, introduce only a small error in the predictions of roll force and roll torque. Hence the Hertzian approximate method is suitable for on-line control. The predictions from the Influence Function approximation underestimate those of the numerical results; the approximation of the pressure in the plastic reduction regions is the main source of this error.
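The perturbation approach can be sketched in generic form; the small parameter below is the ratio named in the abstract, while the series itself is a standard regular-perturbation ansatz rather than the authors' actual equations.

```latex
% Generic regular-perturbation sketch (not the authors' exact series).
% Small parameter: yield stress of the strip over the plane-strain
% Young's modulus of the rolls.
\[
  \varepsilon = \frac{\sigma_Y}{E'} \ll 1, \qquad
  p(x) = p_0(x) + \varepsilon\, p_1(x) + \varepsilon^{2} p_2(x) + O(\varepsilon^{3}),
\]
% Substituting the series into the model equations and collecting powers of
% \varepsilon yields a hierarchy of simpler problems, which is the source of
% the reported saving in computation time.
```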
Abstract:
An algorithm based on the concept of combining Kalman filter and Least Error Square (LES) techniques is proposed in this paper. The algorithm is intended to estimate signal attributes such as amplitude, frequency and phase angle in online mode. The technique can be used in protection relays, digital AVRs, DGs, DSTATCOMs, FACTS devices and other power electronics applications. The Kalman filter is modified to operate on a fictitious input signal and provides precise estimation results that are insensitive to noise and other disturbances. At the same time, the LES system is arranged to operate in critical transient cases to compensate for the delay and inaccuracy associated with the response of the standard Kalman filter. Practical considerations such as the effect of noise, higher-order harmonics, and the computational cost of the algorithm are considered and tested in the paper. Several computer simulations and a laboratory test are presented to highlight the usefulness of the proposed method. Simulation results show that the proposed technique can simultaneously estimate the signal attributes even when the signal is highly distorted due to the presence of non-linear loads and noise.
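As a minimal illustration of the least-squares stage of such estimators, the Python sketch below fits the amplitude and phase of a noisy sinusoid of known nominal frequency by linear least squares; the frequency value, window length and noise level are hypothetical, and the paper's modified Kalman filter stage is not reproduced here.

```python
import numpy as np

# Model: s(t) = A*cos(2*pi*f0*t + phi) + noise
#             = a*cos(2*pi*f0*t) + b*sin(2*pi*f0*t), with a = A*cos(phi), b = -A*sin(phi)
fs, f0 = 1000.0, 50.0                 # sampling rate and nominal frequency (assumed)
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(0)
s = 10 * np.cos(2 * np.pi * f0 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

# Linear least-squares fit of the in-phase/quadrature coefficients (a, b)
H = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
(a, b), *_ = np.linalg.lstsq(H, s, rcond=None)

amplitude = np.hypot(a, b)            # estimated amplitude
phase = np.arctan2(-b, a)             # estimated phase angle, rad
print(f"A ~ {amplitude:.2f}, phi ~ {phase:.3f} rad")   # expect ~10, ~0.3
```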
Abstract:
OBJECTIVE To examine the psychometric properties of a Chinese version of the Problem Areas In Diabetes (PAID-C) scale.
RESEARCH DESIGN AND METHODS The reliability and validity of the PAID-C were evaluated in a convenience sample of 205 outpatients with type 2 diabetes. Confirmatory factor analysis, Bland-Altman analysis, and Spearman's correlations facilitated the psychometric evaluation.
RESULTS Confirmatory factor analysis confirmed a one-factor structure of the PAID-C (χ2/df ratio = 1.894, goodness-of-fit index = 0.901, comparative fit index = 0.905, root mean square error of approximation = 0.066). The PAID-C was associated with A1C (rs = 0.15; P < 0.05) and diabetes self-care behaviors in general diet (rs = −0.17; P < 0.05) and exercise (rs = −0.17; P < 0.05). The 4-week test-retest reliability demonstrated satisfactory stability (rs = 0.83; P < 0.01).
CONCLUSIONS The PAID-C is a reliable and valid measure to determine diabetes-related emotional distress in Chinese people with type 2 diabetes.
Abstract:
It is possible to estimate the depth of focus (DOF) of the eye directly from wavefront measurements using various retinal image quality metrics (IQMs). In such methods, DOF is defined as the range of defocus error that degrades the retinal image quality, calculated from an IQM, to a certain level of its maximum value. Although different retinal image quality metrics are in use, only two arbitrary threshold levels have been adopted to date: 50% and 80%. There has been limited study of the relationship between these threshold levels and the actual measured DOF. We measured the subjective DOF in a group of 17 normal subjects and used the through-focus augmented visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, as the IQM. For each subject, a VSOTF threshold level was derived that would match the subjectively measured DOF. A significant correlation was found between the subject's estimated threshold level and the root mean square of the higher-order aberrations (HOA RMS) (Pearson's r = 0.88, p < 0.001). This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations.
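The thresholding step described above can be sketched as follows: given a through-focus image-quality curve (here a made-up Gaussian stand-in for the VSOTF, not real wavefront data), the DOF is the width of the defocus interval over which the metric stays above a chosen fraction of its peak.

```python
import numpy as np

defocus = np.linspace(-2.0, 2.0, 401)        # defocus, dioptres
iqm = np.exp(-(defocus / 0.6) ** 2)          # illustrative through-focus metric

def dof_at_threshold(defocus, iqm, level):
    """Width of the defocus interval where iqm >= level * max(iqm)."""
    above = defocus[iqm >= level * iqm.max()]
    return above.max() - above.min()

for level in (0.5, 0.8):                     # the two conventional levels
    print(f"threshold {level:.0%}: DOF ~ {dof_at_threshold(defocus, iqm, level):.2f} D")
```

The study's contribution is, in effect, to replace the fixed `level` with a per-subject value predicted from that subject's HOA RMS via the reported linear correlation.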
Abstract:
Continuum diffusion models are often used to represent the collective motion of cell populations. Most previous studies have simply used linear diffusion to represent collective cell spreading, while others have found that degenerate nonlinear diffusion provides a better match to experimental cell density profiles. The cell modeling literature offers no guidance as to which approach is more appropriate for representing the spreading of cell populations, nor is it known which experimental measurements can distinguish between situations where these two models are appropriate. Here we provide a link between individual-based and continuum models using a multi-scale approach in which we analyze the collective motion of a population of interacting agents in a generalized lattice-based exclusion process. For round agents that occupy a single lattice site, we find that the relevant continuum description of the system is a linear diffusion equation, whereas for elongated rod-shaped agents that occupy L adjacent lattice sites we find that the relevant continuum description is connected to the porous media equation (PME). The exponent in the nonlinear diffusivity function is related to the aspect ratio of the agents. Our work provides a physical connection between modeling collective cell spreading and the use of either the linear diffusion equation or the PME to represent cell density profiles. Our results suggest that when continuum models are used to represent cell population spreading, care should be taken to account for variations in cell aspect ratio, because different aspect ratios lead to different continuum models.
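For reference, the two continuum limits contrasted above have the following generic forms; the exact dependence of the exponent on the aspect ratio is a result of the paper and is not reproduced here.

```latex
% Illustrative continuum descriptions (generic forms only).
\[
  \frac{\partial C}{\partial t} = D\,\nabla^{2} C
  \quad \text{(round agents: linear diffusion)},
  \qquad
  \frac{\partial C}{\partial t}
  = \nabla \cdot \bigl( D_{0}\, C^{\,n}\, \nabla C \bigr)
  \quad \text{(rod-shaped agents: PME)},
\]
% where C(x,t) is the agent density and the exponent n depends on the
% aspect ratio L of the agents.
```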
Abstract:
Objective Uterine Papillary Serous Carcinoma (UPSC) is uncommon and accounts for less than 5% of all uterine cancers; therefore the majority of evidence about the benefits of adjuvant treatment comes from retrospective case series. We conducted a prospective multi-centre non-randomized phase 2 clinical trial using four cycles of adjuvant paclitaxel plus carboplatin chemotherapy followed by pelvic radiotherapy, in order to evaluate the tolerability and safety of this approach.
Methods This trial enrolled newly diagnosed, previously untreated patients with stage 1b–4 (FIGO 1988) UPSC with a papillary serous component of at least 30%. Paclitaxel (175 mg/m2) and carboplatin (AUC 6) were administered on day 1 of each 3-week cycle for 4 cycles. Chemotherapy was followed by external beam radiotherapy to the whole pelvis (50.4 Gy over 5.5 weeks). Completion and toxicity of treatment (Common Toxicity Criteria, CTC) and quality-of-life measures were the primary outcome indicators.
Results Twenty-nine of 31 patients completed treatment as planned. Dose reduction was needed in 9 patients (29%), treatment delay in 7 (23%), and treatment cessation in 2 patients (6.5%). Grade 3 or 4 hematologic toxicity occurred in 19% (6/31) of patients. Patients' self-reported quality of life remained stable throughout treatment. Thirteen of the 29 patients with stage 1–3 disease (44.8%) recurred (average follow-up 28.1 months, range 8–60 months).
Conclusion This multimodal treatment is feasible, safe and reasonably well tolerated, and would be suitable for use in multi-institutional prospective randomized clinical trials incorporating novel therapies in patients with UPSC.
Abstract:
When performances are evaluated, they are very often presented in sequential order. Previous research suggests that the sequential presentation of alternatives may induce systematic biases in the way performances are evaluated, a phenomenon that has scarcely been studied in economics. Using a large dataset of performance evaluations from the Idol television series (N = 1522), this paper presents new evidence on the systematic biases in sequential evaluation of performances and on the psychological phenomena underlying these biases.
Abstract:
Learning to operate algebraically is a complex process that depends upon extending arithmetic knowledge to the more advanced concepts of algebra. Current research has shown a gap between arithmetic and algebraic knowledge and suggests a pre-algebraic level as a step between the two knowledge types. This paper examines arithmetic and algebraic knowledge from a cognitive perspective in an effort to determine what constitutes a pre-algebraic level of understanding. Results of a longitudinal study designed to investigate students' readiness for algebra are presented; thirty-three students in Grades 7, 8, and 9 participated. A model for the transition from arithmetic to pre-algebra to algebra is proposed, and students' understanding of the relevant knowledge is discussed.
Abstract:
The computation of compact and meaningful representations of high-dimensional sensor data has recently been addressed through the development of Nonlinear Dimensional Reduction (NLDR) algorithms. The numerical implementation of spectral NLDR techniques typically leads to a symmetric eigenvalue problem that is solved by traditional batch eigensolution algorithms. The application of such algorithms in real-time systems necessitates the development of sequential algorithms that perform feature extraction online. This paper presents an efficient online NLDR scheme, Sequential-Isomap, based on incremental singular value decomposition (SVD) and the Isomap method. Example simulations demonstrate the validity and significant potential of this technique in real-time applications such as autonomous systems.
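For orientation, the batch computation that Sequential-Isomap makes incremental looks as follows: a minimal sketch using scikit-learn's Isomap on a toy manifold; the incremental SVD update itself is not shown, and the dataset and parameters are illustrative.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Batch Isomap: geodesic distances over a neighborhood graph, then a
# (batch) eigensolution to embed the data. Sequential-Isomap replaces
# this batch step with incremental SVD so new samples embed online.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # 3-D toy "sensor" data
Y = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(Y.shape)    # (1000, 2): compact 2-D features recovered from the manifold
```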
Abstract:
A randomized, double-blind study was conducted to evaluate the safety, tolerability and immunogenicity of a live attenuated Japanese encephalitis chimeric virus vaccine (JE-CV) co-administered with live attenuated yellow fever (YF) vaccine (YF-17D strain; Stamaril®, Sanofi Pasteur) or administered successively. Participants (n = 108) were randomized to receive YF vaccine followed by JE-CV 30 days later, JE-CV followed by YF vaccine 30 days later, or co-administration of JE-CV and YF vaccine, followed or preceded by placebo 30 days later or earlier. Placebo was used in a double-dummy fashion to ensure masking. Neutralizing antibody titers against JE-CV, YF-17D and selected wild-type JE virus strains were determined using a 50% serum-dilution plaque reduction neutralization test. Seroconversion was defined as the appearance of a neutralizing antibody titer above the assay cut-off post-immunization when not present pre-injection at day 0, or at least a four-fold rise in neutralizing antibody titer between the pre-injection day 0 sample and later post-vaccination samples. There were no serious adverse events. Most adverse events (AEs) after JE-CV vaccination were mild to moderate in intensity, and similar to those reported following YF vaccination. Seroconversion to JE-CV was 100% and 91% in the JE/YF and YF/JE sequential vaccination groups, respectively, compared with 96% in the co-administration group. All participants seroconverted to the YF vaccine and retained neutralizing titers above the assay cut-off at month six. Neutralizing antibodies against the JE vaccine were detected in 82–100% of participants at month six. These results suggest that both vaccines may be successfully co-administered simultaneously or 30 days apart.