323 results for Gaussian type quadrature formula for sums


Relevance:

20.00%

Publisher:

Abstract:

Studies have examined the associations between cancers and circulating 25-hydroxyvitamin D [25(OH)D], but little is known about the impact of different laboratory practices on 25(OH)D concentrations. We examined the potential impact of delayed blood centrifuging, choice of collection tube, and type of assay on 25(OH)D concentrations. Blood samples from 20 healthy volunteers underwent alternative laboratory procedures: four centrifuging times (2, 24, 72, and 96 h after blood draw); three types of collection tubes (red top serum tube, two different plasma anticoagulant tubes containing heparin or EDTA); and two types of assays (DiaSorin radioimmunoassay [RIA] and chemiluminescence immunoassay [CLIA/LIAISON®]). Log-transformed 25(OH)D concentrations were analyzed using generalized estimating equation (GEE) linear regression models. We found no difference in 25(OH)D concentrations by centrifuging times or type of assay. There was some indication of a difference in 25(OH)D concentrations by tube type in CLIA/LIAISON®-assayed samples, with concentrations in heparinized plasma (geometric mean, 16.1 ng ml−1) higher than those in serum (geometric mean, 15.3 ng ml−1) (p = 0.01), but the difference was significant only after substantial centrifuging delays (96 h). Our study suggests that immediate processing of blood samples after collection is unnecessary, as is mandating a particular tube type or assay.
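The log-scale analysis described above can be illustrated with a short sketch. The data below are simulated around the study's reported geometric means; only the 20-volunteer paired design is taken from the abstract, and a full GEE model would additionally account for within-subject correlation across the four centrifuging times:

```python
import numpy as np

# Hypothetical paired 25(OH)D measurements (ng/ml) for 20 volunteers, one value
# per tube type; values are simulated around the reported geometric means
# (15.3 serum, 16.1 heparinized plasma), not the study's actual data.
rng = np.random.default_rng(1)
serum = rng.lognormal(np.log(15.3), 0.15, 20)
heparin = serum * rng.lognormal(np.log(16.1 / 15.3), 0.02, 20)

def geometric_mean(x):
    # exp of the mean of the logs -- the summary matching a log-scale analysis
    return float(np.exp(np.mean(np.log(x))))

print(geometric_mean(serum), geometric_mean(heparin))

# Paired comparison on the log scale: the mean log-ratio estimates the
# multiplicative tube-type effect.
log_ratio = np.log(heparin) - np.log(serum)
print(np.exp(log_ratio.mean()))  # estimated heparin/serum concentration ratio
```

Working on the log scale turns the multiplicative tube-type effect into an additive one, which is why geometric (not arithmetic) means are the natural summary here.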

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To investigate whether wearing different presbyopic vision corrections alters the pattern of eye and head movements when viewing dynamic driving-related traffic scenes. Methods: Participants included 20 presbyopes (mean age: 56±5.7 years) who had no experience of wearing presbyopic vision corrections (i.e. all were single vision wearers). Eye and head movements were recorded while wearing five different vision corrections: single vision lenses (SV), progressive addition spectacle lenses (PALs), bifocal spectacle lenses (BIF), monovision (MV) and multifocal contact lenses (MTF CL) in random order. Videotape recordings of traffic scenes of suburban roads and expressways (with edited targets) were presented as dynamic driving-related stimuli, and digital numeric display panels were included as near visual stimuli (simulating a speedometer and radio). Eye and head movements were recorded using the faceLAB™ system, and the accuracy of target identification was also recorded. Results: The magnitude of eye movements while viewing the driving-related traffic scenes was greater when wearing BIF and PALs than MV and MTF CL (p≤0.013). The magnitude of head movements was greater when wearing SV, BIF and PALs than MV and MTF CL (p<0.0001), and the number of saccades was significantly higher for BIF and PALs than MV (p≤0.043). Target recognition accuracy was poorer for all vision corrections when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze (p=0.008), and PALs gave better performance than MTF CL (p=0.043). Conclusions: Different presbyopic vision corrections alter eye and head movement patterns. In particular, the larger magnitude of eye and head movements and greater number of saccades associated with the presbyopic spectacle corrections may impact driving performance.

Relevance:

20.00%

Publisher:

Abstract:

Research examining post-trauma pathology indicates that negative outcomes can differ as a function of the type of trauma experienced. Comparable research has yet to be published on positive post-trauma changes. Ninety-four survivors of trauma, forming three groups, completed the Posttraumatic Growth Inventory (PTGI) and Impact of Events Scale-Revised (IES-R). Groups comprised survivors of i) sexual abuse, ii) motor vehicle accidents, and iii) bereavement. Results indicated differences in growth between the groups, with the bereaved reporting higher levels of growth than the other survivors, while sexual abuse survivors demonstrated higher levels of PTSD symptoms than the other groups. However, this did not preclude sexual abuse survivors from also reporting moderate levels of growth. Results are discussed in relation to fostering growth through clinical practice.

Relevance:

20.00%

Publisher:

Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier.
Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcoming of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
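The hybrid GMM mean supervector SVM classifier mentioned above can be sketched in a few lines with scikit-learn. This is a toy illustration, not the thesis's system: the two-dimensional "feature frames", the 4-component UBM, the utterance counts, and the relevance factor are all illustrative assumptions, with standard relevance-MAP mean adaptation standing in for the full GMM-UBM pipeline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical 2-D "cepstral" frames: one target speaker and an impostor pool.
target_utts = [rng.normal(1.0, 1.0, size=(200, 2)) for _ in range(4)]
impostor_utts = [rng.normal(-1.0, 1.0, size=(200, 2)) for _ in range(4)]

# Universal background model (UBM) trained on pooled data.
ubm = GaussianMixture(n_components=4, covariance_type='diag',
                      random_state=0).fit(np.vstack(target_utts + impostor_utts))

def supervector(frames, relevance=16.0):
    """Relevance-MAP adapt the UBM means to one utterance and stack them."""
    resp = ubm.predict_proba(frames)                 # (T, K) responsibilities
    n_k = resp.sum(axis=0)                           # soft counts per component
    x_k = resp.T @ frames / np.maximum(n_k, 1e-8)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]
    return (alpha * x_k + (1 - alpha) * ubm.means_).ravel()

X = np.array([supervector(u) for u in target_utts + impostor_utts])
y = [1] * len(target_utts) + [0] * len(impostor_utts)

# Discriminative speaker model: a linear SVM separating the target's
# supervectors from the impostor background.
svm = SVC(kernel='linear').fit(X, y)
pred = svm.predict([supervector(rng.normal(1.0, 1.0, size=(200, 2)))])
print(pred)
```

The key idea is that MAP adaptation keeps the supervector dimensions aligned across utterances (every utterance shifts the *same* UBM components), which is what makes a fixed-length SVM input out of variable-length audio.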

Relevance:

20.00%

Publisher:

Abstract:

The broad objective of the study was to better understand anxiety among adolescents in Kolkata city, India. Specifically, the study compared anxiety across gender, school type, socio-economic background and mothers' employment status. The study also examined adolescents' perceptions of quality time with their parents. A group of 460 adolescents (220 boys and 240 girls), aged 13-17 years, was recruited to participate in the study via a multi-stage sampling technique. The data were collected using a self-report semi-structured questionnaire and a standardized psychological test, the State-Trait Anxiety Inventory. Results show that anxiety was prevalent in the sample, with 20.1% of boys and 17.9% of girls found to be suffering from high anxiety. More boys were anxious than girls (p<0.01). Adolescents from Bengali medium schools were more anxious than adolescents from English medium schools (p<0.01). Adolescents belonging to the middle class (middle socio-economic group) suffered more anxiety than those from both high and low socio-economic groups (p<0.01). Adolescents with working mothers were found to be more anxious (p<0.01). Results also show that a substantial proportion of the adolescents perceived that they did not receive quality time from fathers (32.1%) and mothers (21.3%). A large number of them also did not feel comfortable sharing their personal issues with their parents (60.0% for fathers and 40.0% for mothers).
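The group comparisons reported above rest on independent-samples mean tests. A minimal sketch with simulated STAI-style scores follows; the group sizes match the study's 220 boys and 240 girls, but the score distributions are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated State-Trait Anxiety Inventory scores; group sizes follow the
# study, the means and spreads are invented for illustration only.
boys = rng.normal(45, 10, 220)
girls = rng.normal(42, 10, 240)

def welch_t(a, b):
    # Welch's t statistic for a difference in means with unequal variances.
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

print(welch_t(boys, girls))
```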

Relevance:

20.00%

Publisher:

Abstract:

Insight into the unique structure of hydrotalcites has been obtained using Raman spectroscopy. Gallium-containing hydrotalcites of formula Mg4Ga2(CO3)(OH)12•4H2O (2:1 Ga-HT) to Mg8Ga2(CO3)(OH)20•4H2O (4:1 Ga-HT) have been successfully synthesised and characterised by X-ray diffraction and Raman spectroscopy. The d(003) spacing varied from 7.83 Å for the 2:1 hydrotalcite to 8.15 Å for the 3:1 gallium-containing hydrotalcite. Raman spectroscopy, complemented with selected infrared data, has been used to characterise the synthesised gallium-containing hydrotalcites of formula Mg6Ga2(CO3)(OH)16•4H2O. Raman bands observed at around 1046, 1048 and 1058 cm-1 were attributed to the symmetric stretching modes of the (CO32-) units. Multiple ν3 (CO32-) antisymmetric stretching modes are found at around 1346, 1378, 1446, 1464 and 1494 cm-1. The splitting of this mode indicates that the carbonate anion is in a perturbed state. Raman bands observed at 710 and 717 cm-1, assigned to the ν4 (CO32-) modes, support the concept of multiple carbonate species in the interlayer.
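For reference, the basal spacings quoted above can be converted to diffraction angles with Bragg's law, nλ = 2d sin θ. The sketch below assumes a Cu Kα source (λ = 1.5406 Å), which the abstract does not specify:

```python
import numpy as np

# Bragg's law with n = 1: 2-theta = 2 * arcsin(lambda / (2 d)).
# Assumed Cu K-alpha wavelength in angstroms (not stated in the abstract).
wavelength = 1.5406

def two_theta(d):
    return np.degrees(2 * np.arcsin(wavelength / (2 * d)))

# Reported d(003) spacings (angstroms) for the two compositions.
for label, d in {'2:1 Ga-HT': 7.83, '3:1 Ga-HT': 8.15}.items():
    print(label, round(two_theta(d), 2))
```

The larger basal spacing of the 3:1 material shifts its (003) reflection to a lower angle, which is how the change in interlayer dimension shows up in the diffraction pattern.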

Relevance:

20.00%

Publisher:

Abstract:

Aims. To explore differences in self-care behaviour according to demographic and illness characteristics, and relationships among self-care behaviour, demographic and illness characteristics, efficacy expectations and outcome expectations of people with type 2 diabetes in Taiwan. Background. Most people with diabetes in Taiwan do not control their disease appropriately. Enhanced self-efficacy towards managing disease can be an effective way of improving disease control, as proposed by the self-efficacy model, which provides a useful framework for understanding adherence to self-care behaviours. Design and methods. The sample comprised 145 patients with type 2 diabetes aged 30 years or more from diabetes outpatient clinics in Taipei. Data were collected using a self-administered questionnaire designed for this study and analysed using one-way ANOVA, t-tests, Pearson product-moment correlation and hierarchical regression. Results. Significant differences in self-care behaviour were found by complications (t = −2·52, p < 0·01) and patient education (t = −1·96, p < 0·05). Self-care behaviour was significantly and positively correlated with duration of diabetes (r = 0·36, p < 0·01), efficacy expectations (r = 0·54, p < 0·01) and outcome expectations (r = 0·44, p < 0·01). A total of 39·1% of the variance in self-care behaviour was explained by duration of diabetes, efficacy expectations and outcome expectations. Conclusions. Findings support the use of the self-efficacy model as a framework for understanding adherence to self-care behaviour. Relevance to clinical practice. Using self-efficacy theory when designing patient education interventions for people with type 2 diabetes will enhance self-management routines and assist in reducing major complications in the future.
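The hierarchical-regression result (a share of variance explained by duration plus the two expectation scores) can be illustrated with a blockwise R² computation. The data below are simulated with arbitrary coefficients; only the sample size of 145 follows the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 145  # matches the study's sample size; the data are simulated

# Hypothetical predictors and outcome (arbitrary scales and coefficients).
duration = rng.uniform(1, 20, n)          # years with diabetes
efficacy = rng.normal(50, 10, n)          # efficacy expectation score
outcome = rng.normal(50, 10, n)           # outcome expectation score
self_care = 0.3 * duration + 0.4 * efficacy + 0.2 * outcome + rng.normal(0, 8, n)

def r_squared(X, y):
    # OLS with an intercept; R^2 = 1 - SS_res / SS_tot.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Hierarchical (blockwise) entry: duration first, expectations second.
r2_step1 = r_squared(duration[:, None], self_care)
r2_step2 = r_squared(np.column_stack([duration, efficacy, outcome]), self_care)
print(r2_step1, r2_step2 - r2_step1)  # block-1 R^2 and the increment
```

The increment r2_step2 − r2_step1 is the quantity hierarchical regression reports: the extra variance attributable to the expectation scores after duration of diabetes has already been entered.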

Relevance:

20.00%

Publisher:

Abstract:

Insight into the unique structure of layered double hydroxides has been obtained using a combination of X-ray diffraction and thermal analysis. Indium-containing hydrotalcites of formula Mg4In2(CO3)(OH)12•4H2O (2:1 In-LDH) through to Mg8In2(CO3)(OH)18•4H2O (4:1 In-LDH), with variation in the Mg:In ratio, have been successfully synthesised. The d(003) spacing varied from 7.83 Å for the 2:1 LDH to 8.15 Å for the 3:1 indium-containing layered double hydroxide. Distinct mass loss steps attributed to dehydration, dehydroxylation and decarbonation are observed for the indium-containing hydrotalcite. Dehydration occurs over the temperature range from ambient to 205 °C. Dehydroxylation takes place in a series of steps over the 238 to 277 °C temperature range. Decarbonation occurs between 763 and 795 °C. The dehydroxylation and decarbonation steps depend upon the Mg:In ratio. The formation of indium-containing hydrotalcites and their thermal activation provides a method for the synthesis of indium oxide based catalysts.

Relevance:

20.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that, by using this technique, a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which yields desirable results for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
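A minimal version of the stochastic gradient lattice recursion this abstract builds on is sketched below for a first-order predictor on a simulated AR(1) input. The step size, signal length, and AR coefficient are illustrative choices, and the thesis's least-mean p-norm variant would replace the squared-error gradient with a fractional lower-order one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(1) input x[n] = 0.8 x[n-1] + w[n]; its optimal first
# reflection coefficient equals the lag-1 correlation, 0.8.
N, a = 20000, 0.8
w = rng.normal(size=N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]

def sg_lattice(x, order=1, mu=1e-3):
    """Single pass of the stochastic gradient lattice algorithm."""
    k = np.zeros(order)            # adaptive reflection coefficients
    b_prev = np.zeros(order + 1)   # backward errors from the previous sample
    for sample in x:
        f = np.empty(order + 1)
        b = np.empty(order + 1)
        f[0] = b[0] = sample
        for m in range(1, order + 1):
            f[m] = f[m - 1] - k[m - 1] * b_prev[m - 1]   # forward error
            b[m] = b_prev[m - 1] - k[m - 1] * f[m - 1]   # backward error
            # gradient step minimising E[f_m^2 + b_m^2]
            k[m - 1] += mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
        b_prev = b
    return k

k_hat = sg_lattice(x)
print(k_hat)
```

The modular structure praised in the abstract is visible here: each stage sees only the forward/backward errors of the previous stage, so higher orders are added by extending the inner loop rather than re-deriving the whole filter.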

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed, in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
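The generalized Gaussian model for wavelet-coefficient statistics can be illustrated with a simple shape estimator. Note that the thesis fits the shape parameter by nonlinear least squares; the sketch below instead uses the classical moment-matching ratio E[x²]/(E|x|)², solved by bisection, as a stand-in:

```python
import numpy as np
from math import gamma

def ggd_ratio(beta):
    # For a zero-mean generalized Gaussian with shape parameter beta,
    # E[x^2] / (E|x|)^2 = Gamma(1/beta) * Gamma(3/beta) / Gamma(2/beta)^2.
    # beta = 2 gives the Gaussian (ratio pi/2); beta = 1 the Laplacian (ratio 2).
    return gamma(1.0 / beta) * gamma(3.0 / beta) / gamma(2.0 / beta) ** 2

def estimate_shape(x, lo=0.1, hi=5.0, iters=60):
    """Moment-matching shape estimate by bisection; the ratio function is
    monotonically decreasing in beta, so bisection converges."""
    target = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) > target:
            lo = mid      # ratio too large -> true beta is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check on Gaussian data, where the true shape parameter is 2.
rng = np.random.default_rng(0)
beta_hat = estimate_shape(rng.normal(size=100000))
print(beta_hat)
```

In the compression context, subbands with heavy-tailed (sharply peaked) coefficient histograms yield shape estimates well below 2, and that estimate drives the quantizer design for each subband.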

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered in Part-I of this thesis. For FM signals, the approach of time-frequency analysis is considered in Part-II. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
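The delay-based quadrature idea can be illustrated in open loop with a short sketch. This is a toy illustration, not the TDTL itself (which closes a feedback loop around the phase detector): for a single tone, a delay of roughly a quarter period makes the delayed samples an approximate quadrature version of the input, playing the role of the Hilbert transformer in the arctan phase detector. The tone frequency, phase, and signal length are arbitrary choices:

```python
import numpy as np

# Single-tone input with a phase to recover; w0 is the frequency in rad/sample.
w0, theta = 0.3, 1.1
n = np.arange(2000)
x = np.sin(w0 * n + theta)

# Choose the delay D so that w0 * D is close to pi/2.  The delayed samples
# are then an approximate 90-degree-shifted version of the input, standing in
# for the Hilbert transformer of the conventional tanlock loop.
D = int(round((np.pi / 2) / w0))
xn, xd = x[D:], x[:-D]            # x[n] and x[n - D], aligned

# Arctan phase detection on the (in-phase, quadrature) pair:
# x[n] = sin(phi_n), x[n - D] ~= -cos(phi_n)  =>  phi_n ~= arctan2(x[n], -x[n-D]).
phi = np.unwrap(np.arctan2(xn, -xd))
freq_est = np.mean(np.diff(phi))
print(freq_est)  # close to w0 = 0.3
```

Because w0·D only approximates π/2 (D must be an integer), the recovered phase carries a small periodic ripple; this signal-dependence of the delay-induced shift is exactly what distinguishes the TDTL analysis from the ideal-HT case.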
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
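The peak-tracking step at the heart of TFD-based IF estimation can be sketched as follows. For simplicity this uses a plain short-time Fourier magnitude as the time-frequency distribution on a simulated linear-FM signal; a quadratic TFD from the T-class would sharpen the energy concentration, but the peak-picking is the same. All signal parameters are arbitrary choices for the sketch:

```python
import numpy as np

# Simulated analytic linear-FM (chirp) signal: IF(t) = f0 + rate * t.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
f0, rate = 50.0, 100.0
sig = np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t ** 2))

# Peak-tracking IF estimate: in each window, the frequency of the energy
# peak of the time-frequency representation is taken as the IF.
win, hop = 64, 16
freqs = np.fft.fftfreq(win, 1 / fs)
if_est, centres = [], []
for start in range(0, len(sig) - win, hop):
    seg = sig[start:start + win] * np.hanning(win)
    if_est.append(freqs[np.argmax(np.abs(np.fft.fft(seg)))])
    centres.append((start + win / 2) / fs)

true_if = f0 + rate * np.array(centres)
err = np.max(np.abs(np.array(if_est) - true_if))
print(err)  # worst-case IF error in Hz, on the order of the FFT bin spacing
```

For a multicomponent signal one would track several local maxima per time slice instead of the single global peak, which is where the resolution of the underlying TFD becomes critical.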