97 results for consistent and asymptotically normal estimators

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Abstract:

We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
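
The core of the induced smoothing device can be shown compactly. Below is a minimal sketch assuming a Gehan-type rank estimating function for a linear model: the indicator inside the double sum is replaced by a normal CDF whose scale is induced by a working covariance of the estimator, making the estimating function differentiable so that standard root-finding and plug-in variance formulas apply. The simulated data, the names and the choice of working covariance are illustrative, not taken from the paper.

```python
# A minimal sketch of the induced-smoothing idea for a Gehan-type rank
# estimating function.  All names and data below are illustrative.
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
n, p = 200, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors

def smoothed_gehan_score(beta, Gamma):
    """Smoothed estimating function: the indicator I(e_i <= e_j) in the
    Gehan score is replaced by a normal CDF whose scale is induced by
    the working covariance Gamma of the estimator."""
    e = y - X @ beta
    dX = X[:, None, :] - X[None, :, :]          # x_i - x_j, shape (n, n, p)
    de = e[None, :] - e[:, None]                # e_j - e_i
    r = np.sqrt(np.einsum('ijk,kl,ijl->ij', dX, Gamma, dX)) + 1e-12
    w = norm.cdf(de / r)                        # smooth surrogate for I(e_i <= e_j)
    return np.einsum('ijk,ij->k', dX, w) / n**2

Gamma = np.eye(p) / n                           # simple initial working covariance
beta_hat = fsolve(smoothed_gehan_score, x0=np.zeros(p), args=(Gamma,))
print("estimate:", beta_hat)
```

Because the smoothed score is differentiable in beta, its Jacobian can be evaluated directly, which is what yields the covariance-matrix estimates the abstract refers to; the full method also iterates the working covariance Gamma.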

Relevance:

100.00%

Abstract:

Consider a general regression model with an arbitrary and unknown link function and a stochastic selection variable that determines whether the outcome variable is observable or missing. The paper proposes U-statistics that are based on kernel functions as estimators for the directions of the parameter vectors in the link function and the selection equation, and shows that these estimators are consistent and asymptotically normal.
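
The abstract does not spell out its estimator, but the flavour of such kernel-based U-statistics can be illustrated with a classical member of the family: the density-weighted average derivative estimator of Powell, Stock and Stoker (1989), which recovers the direction of the index vector in a single-index model with an unknown link. The sketch below is that standard estimator on assumed simulated data, not the paper's own construction; since only the direction is identified, overall scale factors are irrelevant.

```python
# Illustrative kernel U-statistic for the direction of the index vector in
# a single-index model y = g(x'beta) + u, with g unknown.  This is the
# density-weighted average derivative estimator (Powell, Stock & Stoker,
# 1989), shown as one classical example of this class of estimators.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 2
X = rng.normal(size=(n, d))
beta = np.array([2.0, 1.0])
y = np.tanh(X @ beta) + 0.1 * rng.normal(size=n)   # unknown monotone link

h = n ** (-1.0 / (d + 4))                          # rule-of-thumb bandwidth

# Pairwise scaled differences and the Gaussian kernel gradient K'(u) = -u K(u)
u = (X[:, None, :] - X[None, :, :]) / h            # (n, n, d)
K = np.exp(-0.5 * (u ** 2).sum(axis=2)) / (2 * np.pi) ** (d / 2)
Kgrad = -u * K[:, :, None] / h                     # d/dx of K((x_i - x_j)/h)

dy = y[:, None] - y[None, :]                       # y_i - y_j for all pairs
delta = -(Kgrad * dy[:, :, None]).sum(axis=(0, 1)) / (n * (n - 1) * h ** d)

print("estimated direction:", delta / np.linalg.norm(delta))
print("true direction:     ", beta / np.linalg.norm(beta))
```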

Relevance:

100.00%

Abstract:

PURPOSE: To determine whether participants with normal visual acuity and no ophthalmoscopic signs of age-related maculopathy (ARM) in either eye, who are carriers of the CFH, LOC387715 and HTRA1 high-risk genotypes (“gene-positive”), have impaired rod- and cone-mediated mesopic visual function compared with persons who do not carry the risk genotypes (“gene-negative”).

METHODS: Fifty-three Caucasian study participants (mean age 55.8 ± 6.1 years) were genotyped for CFH, LOC387715/ARMS2 and HTRA1 polymorphisms. We genotyped single nucleotide polymorphisms (SNPs) in the CFH (rs380390), LOC387715/ARMS2 (rs10490924) and HTRA1 (rs11200638) genes using Applied Biosystems optimised TaqMan assays. We determined the critical fusion frequency (CFF) mediated by cones alone (long-, middle- and short-wavelength-sensitive cones; LMS) and by the combined activities of cones and rods (LMSR). The stimuli were generated using a 4-primary photostimulator that provides independent control of photoreceptor excitation under mesopic light levels. Visual function was further assessed using standard clinical tests, flicker perimetry and microperimetry.

RESULTS: The mesopic CFF mediated by rods and cones (LMSR) was significantly reduced in gene-positive compared with gene-negative participants after correction for age (p=0.03). Cone-mediated CFF (LMS) was not significantly different between gene-positive and gene-negative participants. There were no significant associations between flicker perimetry or microperimetry and genotype.

CONCLUSIONS: This is the first study to relate ARM risk genotypes to mesopic visual function in clinically normal persons. These preliminary results could become clinically important, as mesopic vision may be used to document sub-clinical retinal changes in persons with risk genotypes and to determine whether those persons progress to manifest disease.

Relevance:

100.00%

Abstract:

In general, the biological activation of nephrocarcinogenic chlorinated hydrocarbons proceeds via conjugation with glutathione. It has mostly been assumed that the main site of initial conjugation is the liver, followed by a mandatory transfer of intermediates to the kidney. It was therefore of interest to study the enzyme activities of subgroups of glutathione transferases (GSTs) in renal cancers and the surrounding normal renal tissues of the same individuals (n = 21). For genotyping the individuals with respect to known polymorphic GST isozymes, the following substrates with differential specificity were used: 1-chloro-2,4-dinitrobenzene for overall GST activity (except GST θ); 7-chloro-4-nitrobenzo-2-oxa-1,3-diazole for GST α; 1,2-dichloro-4-nitrobenzene for GST μ; ethacrynic acid and 4-vinylpyridine for GST π; and methyl chloride for GST θ. In general, the normal tissues were able to metabolize the test substrates. A general decrease in individual GST enzyme activities was apparent in the course of cancerization, and in some (exceptional) cases individual activities expressed in the normal renal tissue were lost in the tumour tissue. The GST enzyme activities in tumours were independent of tumour stage and of the age and gender of the patients. There was little influence of the known polymorphisms of GSTM1, GSTM3 and GSTP1 on the activities towards the test substrates, whereas the influence of the GSTT1 polymorphism on the activity towards methyl chloride was straightforward. In general, the present findings support the concept that the initial GST-dependent bioactivation step of nephrocarcinogenic chlorinated hydrocarbons may take place in the kidney itself. This should be a consideration in toxicokinetic modelling.

Relevance:

100.00%

Abstract:

We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
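
To make the linear special case concrete, here is a minimal sketch of the iteration for balanced longitudinal data: a generalized-least-squares step for the regression parameter alternates with a moment re-estimate of the unspecified within-subject covariance, so each pass is a weighted least-squares fit, i.e. iterative reweighted least squares. The simulated data and names are illustrative, not from the paper.

```python
# Iterative estimating equations in the linear special case: alternate a
# GLS step for beta with a moment update of the within-subject covariance V.
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 300, 4, 2                        # subjects, obs per subject, covariates
X = rng.normal(size=(n, m, p))
beta_true = np.array([1.0, -1.0])
V_true = 0.5 * np.eye(m) + 0.5             # exchangeable within-subject covariance
y = np.einsum('imp,p->im', X, beta_true) + \
    rng.multivariate_normal(np.zeros(m), V_true, size=n)

beta, V = np.zeros(p), np.eye(m)           # start from working independence
for _ in range(100):
    Vinv = np.linalg.inv(V)
    A = np.einsum('isa,st,itb->ab', X, Vinv, X)
    b = np.einsum('isa,st,it->a', X, Vinv, y)
    beta_new = np.linalg.solve(A, b)       # GLS step given the current V
    r = y - np.einsum('imp,p->im', X, beta_new)
    V = r.T @ r / n                        # moment update of the covariance
    if np.linalg.norm(beta_new - beta) < 1e-10:
        break
    beta = beta_new

print("estimate:", beta_new)
```

In practice the convergence claim in the paper is what licenses stopping this loop after a few passes; the sketch simply iterates to numerical tolerance.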

Relevance:

100.00%

Abstract:

Non-small cell lung cancer exhibits a diverse range of molecular and pathological features. This may be due in part to the critical interaction between normal cells and lung cancer cells, which can result in ‘normal’ cells acting in a malignant fashion. This project aims to identify the pathways responsible for this altered ‘normal’ behaviour.

Relevance:

100.00%

Abstract:

Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but are unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques, with special attention given to ease of implementation and the comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations.
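
As a concrete instance of the benchmark case, the sketch below shows exact maximum likelihood for the Ornstein–Uhlenbeck equation, whose Gaussian transition density is available in closed form (the Cox–Ingersoll–Ross case would instead need its noncentral chi-squared transition density or an approximation). Parameter values and the optimizer choice are illustrative.

```python
# Exact transition-density MLE for the Ornstein-Uhlenbeck process
#   dX = kappa * (mu - X) dt + sigma dW,
# using X_{t+dt} | X_t ~ N(mu + (X_t - mu) e^{-kappa dt},
#                          sigma^2 (1 - e^{-2 kappa dt}) / (2 kappa)).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
kappa, mu, sigma, dt, n = 2.0, 0.5, 0.3, 1 / 252, 5000

# Simulate exactly from the discrete-time AR(1) representation of OU.
a = np.exp(-kappa * dt)
s = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + (x[t - 1] - mu) * a + s * rng.normal()

def neg_loglik(theta):
    k, m, sg = theta
    if k <= 0 or sg <= 0:
        return np.inf
    e = np.exp(-k * dt)
    mean = m + (x[:-1] - m) * e
    var = sg ** 2 * (1 - e ** 2) / (2 * k)
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

fit = minimize(neg_loglik, x0=[1.0, 0.0, 0.1], method='Nelder-Mead')
print("kappa, mu, sigma =", fit.x)
```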

Relevance:

100.00%

Abstract:

This paper proposes a linear quantile regression analysis method for longitudinal data that combines the between- and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. The proposed method therefore yields more efficient parameter estimates than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is introduced to obtain the parameter estimates and their variances. Under some regularity conditions, the estimators derived by the induced smoothing method are consistent and asymptotically normal. A number of simulation studies are carried out to evaluate the performance of the proposed method; the results indicate that the efficiency gain is substantial, especially when strong within-subject correlations exist. Finally, a dataset from audiology growth research is used to illustrate the proposed methodology.
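
The smoothing step can be illustrated in the simplest setting the paper builds on: the working-independence quantile regression estimating function, whose non-smooth indicator is replaced by a normal CDF with an induced scale so that ordinary root-finding applies. The working covariance, the simulated data and the names below are illustrative placeholders, not the paper's combined between/within construction.

```python
# Induced smoothing for the working-independence quantile regression score:
# I(y - x'beta <= 0) is replaced by a normal CDF with an induced scale.
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

rng = np.random.default_rng(4)
n, p, tau = 400, 2, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)     # median regression: tau = 0.5

Gamma = np.eye(p) / n                      # working covariance of beta-hat

def smoothed_score(beta):
    e = y - X @ beta
    h = np.sqrt(np.einsum('ij,jk,ik->i', X, Gamma, X))  # induced scales
    psi = tau - norm.cdf(-e / h)           # smooth surrogate for tau - I(e <= 0)
    return X.T @ psi / n

beta_hat = fsolve(smoothed_score, x0=np.zeros(p))
print("estimate:", beta_hat)
```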

Relevance:

100.00%

Abstract:

Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operations, and this is especially so for ultrafine particles. This limited knowledge is due to the scientific complexities associated with measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques that take into account aircraft plume dispersion and dilution, as well as the various particle dynamics that can affect measurements of the engine plume from an operational aircraft. To address this problem, a novel mobile emission measurement method called the Plume Capture and Analysis System (PCAS) was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground-level operations, including landing, taxiing, takeoff and idle. It uses a sampling bag to temporarily store a sample, providing sufficient time for sensitive but slow instrumental techniques to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in developing the technique included assessing the various particle loss and deposition mechanisms that are active during storage in the PCAS. Laboratory-based assessment of the method showed that the bag sampling technique can accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and to plume concentration was conducted in the airfield with taxiing aircraft. The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds, and that it is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of aircraft plumes during the various modes of the landing and takeoff (LTO) cycle (idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors, and size distributions were determined for a range of Boeing and Airbus aircraft, as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE covering all modes of the aircraft LTO cycle and ground running procedures (GRP).
Measuring the actual duration of aircraft activity in each mode of operation (time-in-mode) and compiling a comprehensive matrix of gas and particle emission rates, as a function of aircraft type and engine thrust level in real-world situations, were crucial steps in developing the inventory. The significance of the resulting matrix of emission rates lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number. In summary, this PhD thesis presents for the first time a comprehensive study of particle and NOx emission factors and rates, along with particle size distributions, from aircraft operations, and provides a basis for estimating such emissions at other airports. This is a significant addition to scientific knowledge of particle emissions from aircraft operations, since standard particle number emission rates are not currently available for aircraft activities.
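
The inventory construction itself is simple arithmetic once the matrix of emission rates and the time-in-mode measurements exist: annual emissions are the per-mode rates weighted by time in mode and by annual aircraft movements. The toy sketch below uses entirely hypothetical numbers, not results from the thesis.

```python
# Illustrative arithmetic only: how per-mode emission rates and measured
# time-in-mode combine into an annual inventory.  All numbers are
# hypothetical placeholders.
# annual emission = sum over modes of (rate * time-in-mode) * annual movements

modes = {
    # mode:    (particle number emission rate [1/s], time in mode [s])
    "idle":    (1.0e14, 600),
    "taxi":    (2.0e14, 900),
    "takeoff": (1.0e16,  40),
    "landing": (5.0e15,  60),
}
annual_movements = 80_000      # hypothetical LTO cycles per year

annual_pn = sum(rate * t for rate, t in modes.values()) * annual_movements
print(f"annual particle number emission: {annual_pn:.2e} particles")
```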

Relevance:

100.00%

Abstract:

The main aim of radiotherapy is to deliver a dose of radiation that is high enough to destroy the tumour cells while minimising the damage to normal healthy tissues. Clinically, this has been achieved by assigning a prescription dose to the tumour volume and a set of dose constraints on critical structures. Once an optimal treatment plan has been achieved, the dosimetry is assessed using the physical parameters of dose and volume. There has been interest in using radiobiological parameters to evaluate and predict the outcome of a treatment plan in terms of both a tumour control probability (TCP) and a normal tissue complication probability (NTCP). In this study, simple radiobiological models available in a commercial treatment planning system were used to compare three-dimensional conformal radiotherapy (3D-CRT) and intensity-modulated radiotherapy (IMRT) treatments of the prostate. Initially, both 3D-CRT and IMRT were planned for 2 Gy/fraction to a total dose of 60 Gy to the prostate. The sensitivity of the TCP and the NTCP to both conventional dose escalation and hypofractionation was investigated. The biological responses were calculated using the Källman S-model, and the complication-free tumour control probability (P+) was generated from the combined NTCP and TCP response values. It has been suggested that the alpha/beta ratio for prostate carcinoma cells may be lower than for most other tumour cell types; the effect of this on the modelled biological response for the different fractionation schedules was also investigated.
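
The study evaluates plans with the Källman S-model inside a commercial planning system; as a self-contained illustration of the same kind of reasoning, the sketch below uses the simpler and widely known Poisson TCP model with linear-quadratic cell kill, TCP = exp(-N0 * exp(-alpha * D * (1 + d/(alpha/beta)))), where D is total dose and d is dose per fraction. The clonogen number, the calibration target and the hypofractionated schedule are assumptions chosen only to make the alpha/beta sensitivity visible, not values from the study.

```python
# Poisson TCP with linear-quadratic cell kill, used to show why a low
# prostate alpha/beta changes the case for hypofractionation.
import numpy as np
from scipy.optimize import brentq

N0 = 1e7                                   # assumed clonogen number

def tcp(alpha, total_dose, dose_per_fraction, ab_ratio):
    kill = alpha * total_dose * (1 + dose_per_fraction / ab_ratio)
    return np.exp(-N0 * np.exp(-kill))

for ab in (10.0, 1.5):                     # typical tumour vs. low prostate alpha/beta
    # calibrate alpha so the conventional plan (30 x 2 Gy = 60 Gy) gives TCP = 0.60
    alpha = brentq(lambda a: tcp(a, 60.0, 2.0, ab) - 0.60, 0.01, 1.0)
    hypo = tcp(alpha, 51.6, 4.3, ab)       # 12 x 4.3 Gy, an illustrative schedule
    print(f"alpha/beta = {ab:4.1f}: TCP(30 x 2 Gy) = 0.60, "
          f"TCP(12 x 4.3 Gy) = {hypo:.2f}")
```

With these assumed constants the hypofractionated schedule gains far more tumour control when alpha/beta is low, which is exactly why the prostate alpha/beta question matters for the fractionation schedules investigated.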

Relevance:

100.00%

Abstract:

It has been argued that intentional first year curriculum design has a critical role to play in enhancing first year student engagement, success and retention (Kift, 2008). A fundamental first year curriculum objective should be to assist students to make a successful transition to assessment in higher education. Scott (2006) has identified that ‘relevant, consistent and integrated assessment … [with] prompt and constructive feedback’ is particularly relevant to student retention generally, while Nicol (2007) suggests that ‘lack of clarity regarding expectations in the first year, low levels of teacher feedback and poor motivation’ are key issues in the first year. At the very minimum, if we expect first year students to become independent and self-managing learners, they need to be supported in their early development and acquisition of tertiary assessment literacies (Orrell, 2005). Critical to this attainment is the necessity to alleviate early anxieties around assessment information, instructions, guidance and performance. This includes, for example, inducting students thoroughly into the academic languages and assessment genres they will encounter as the vehicles for evidencing learning success, and making expectations about the quality of this evidence clear. Most importantly, students should receive regular formative feedback on their work early in their program of study, to aid their learning and to provide information to both students and teachers on progress and achievement. Leveraging research conducted under an ALTC Senior Fellowship that has sought to articulate a research-based ‘transition pedagogy’ (Kift & Nelson, 2005) – a guiding philosophy for intentional first year curriculum design and support that carefully scaffolds and mediates the first year learning experience for contemporary heterogeneous cohorts – this paper discusses theoretical and practical strategies and examples that should assist in implementing good assessment and feedback practices across a range of disciplines in the first year.

Relevance:

100.00%

Abstract:

PURPOSE: The aim of this study was to further evaluate the validity and clinical meaningfulness of appetite sensations as predictors of overall energy intake and body weight loss.

METHODS: Men (n=176) and women (n=139) involved in six weight loss studies were selected to participate in this study. Visual analogue scales were used to measure appetite sensations before and after a fixed test meal. Fasting appetite sensations, the 1 h post-prandial area under the curve (AUC) and the satiety quotient (SQ) were used as predictors of energy intake and body weight loss. Two separate measures of energy intake were used: a buffet-style ad libitum test lunch and a three-day self-report dietary record.

RESULTS: The 1 h post-prandial AUCs for all appetite sensations represented the strongest predictors of ad libitum test lunch energy intake (p < 0.001). These associations were more consistent and pronounced for women than for men. Only the SQ for fullness was associated with ad libitum test lunch energy intake in women. Similar but weaker relationships were found between appetite sensations and the three-day self-reported energy intake. Weight loss was associated with changes in appetite sensations (p < 0.01), and the best predictors of body weight loss were fasting desire to eat, hunger and prospective food consumption (PFC) (p < 0.01).

CONCLUSIONS: These results demonstrate that appetite sensations are relatively useful predictors of spontaneous energy intake, free-living total energy intake and body weight loss. They also confirm that the SQ for fullness predicts energy intake, at least in women.
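
For concreteness, the two appetite summary measures can be computed as follows; the satiety quotient is shown in its usual form (post-meal change in rating per unit of meal energy, after Green et al., 1997), though exact conventions for sign and averaging window vary across studies. The VAS ratings and meal energy below are hypothetical.

```python
# Hypothetical 100-mm visual-analogue 'fullness' ratings around a test meal.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 50, 60])            # minutes post-meal
fullness = np.array([20, 65, 60, 55, 52, 50, 48])    # mm on the VAS
meal_energy_kcal = 600.0

# 1-h post-prandial AUC by the trapezoidal rule (mm * min)
auc_1h = ((fullness[1:] + fullness[:-1]) / 2 * np.diff(t)).sum()

# satiety quotient: rating change per 100 kcal consumed (mm / 100 kcal)
sq = (fullness[1] - fullness[0]) / (meal_energy_kcal / 100)

print(f"1-h post-prandial AUC: {auc_1h:.0f} mm*min")
print(f"satiety quotient:      {sq:.1f} mm per 100 kcal")
```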

Relevance:

100.00%

Abstract:

The theory of nonlinear dynamic systems provides new methods for handling complex systems, and chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory.

In modern industrialized countries, several hundred thousand people die of sudden cardiac death every year. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials; it contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system, and HRV analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are nonlinear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of nonlinear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and of four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an analysis of variance (ANOVA) test; the results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects spanning five different kinds of cardiac disease condition. The classifier achieved a sensitivity of 90% and a specificity of 89%, and the system is ready to run on larger data sets.

In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onset would help patients and observers to take appropriate precautions, and various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by the higher order spectra (HOS) has been reported to be a promising approach for differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. The classifiers achieved 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns that are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
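
The basic HOS computation behind such features is the bispectrum, estimated here by the direct FFT method: segment the signal, Fourier transform each segment, and average X(f1) X(f2) X*(f1+f2). The sketch below pairs it with a toy SVM step on synthetic signals (white noise versus a quadratically phase-coupled signal, which produces a bispectral peak); the data, the three summary features and the labels are placeholders, not the thesis's HRV/EEG features.

```python
# Direct bispectrum estimate plus a toy HOS-feature SVM classification step.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def bispectrum(x, nfft=64):
    """Average X(f1) X(f2) conj(X(f1+f2)) over mean-removed segments."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    F = np.fft.fft(segs - segs.mean(axis=1, keepdims=True), axis=1)
    k = nfft // 4                       # keep f1 + f2 inside the computed band
    B = np.zeros((k, k), dtype=complex)
    for f1 in range(k):
        for f2 in range(k):
            B[f1, f2] = np.mean(F[:, f1] * F[:, f2] * np.conj(F[:, f1 + f2]))
    return B

def hos_features(x):
    B = np.abs(bispectrum(x))
    P = B / B.sum()                     # normalized magnitude distribution
    entropy = -(P * np.log(P + 1e-12)).sum()
    return [B.mean(), B.max(), entropy]

rng = np.random.default_rng(5)

def coupled(n, nfft=64):
    """Quadratic phase coupling at bins 6 and 10 -> bispectral peak at (6, 10)."""
    t = np.arange(n)
    ph1, ph2 = rng.uniform(0, 2 * np.pi, 2)
    w1, w2 = 2 * np.pi * 6 / nfft, 2 * np.pi * 10 / nfft
    return (np.cos(w1 * t + ph1) + np.cos(w2 * t + ph2)
            + np.cos((w1 + w2) * t + ph1 + ph2) + 0.5 * rng.normal(size=n))

feats = [hos_features(rng.normal(size=4096)) for _ in range(40)] + \
        [hos_features(coupled(4096)) for _ in range(40)]
labels = [0] * 40 + [1] * 40

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
clf.fit(feats[::2], labels[::2])        # train on every second example
print("toy accuracy:", clf.score(feats[1::2], labels[1::2]))
```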

Relevance:

100.00%

Abstract:

Melt electrospinning is an aspect of electrospinning with relatively little published literature, although the technique avoids the solvent accumulation and/or toxicity of solution electrospinning, which makes it favoured in certain applications. In the study reported, we melt-electrospun blends of poly(ε-caprolactone) (PCL) and an amphiphilic diblock copolymer consisting of poly(ethylene glycol) and PCL segments (PEG-block-PCL). A custom-made electrospinning apparatus was built, and various combinations of instrument parameters such as voltage and polymer feeding rate were investigated. Melt electrospinning of the pure PEG-block-PCL copolymer did not produce consistent and uniform fibres, owing to its low molecular weight, while blends of PCL and PEG-block-PCL, for some parameter combinations and certain weight ratios of the two components, produced continuous fibres significantly thinner (average diameter of ca 2 µm) than pure PCL. The PCL fibres obtained had average diameters ranging from 6 to 33 µm, and the meshes were uniform for the lowest voltage employed, while mesh uniformity decreased when the voltage was increased. This approach shows that PCL, and blends of PEG-block-PCL and PCL, can be readily processed by melt electrospinning to obtain fibrous meshes with varied average diameters and morphologies that are of interest for tissue engineering purposes. Copyright © 2010 Society of Chemical Industry