906 results for Multi-instance and multi-sample fusion


Relevance: 100.00%

Abstract:

The causal relationship between Human Papilloma Virus (HPV) infection and cervical cancer has motivated the development, and further improvement, of prophylactic vaccines against this virus. 70% of cervical cancers, 80% of which occur in low-resource countries, are associated with HPV16 and HPV18 infection, with 13 additional HPV types, classified as high-risk, responsible for the remaining 30% of tumors. Current vaccines, Cervarix® (GlaxoSmithKline) and Gardasil® (Merck), are based on virus-like particles (VLPs) obtained by self-assembly of the major capsid protein L1. Despite their undisputed immunogenicity and safety, the fact that the protection afforded by these vaccines is largely limited to the cognate serotypes included in the vaccine (HPV16 and 18, plus five additional viral types incorporated into a newly licensed nonavalent vaccine), along with high production costs and reduced thermal stability, is pushing the development of second-generation HPV vaccines based on the minor capsid protein L2. The broader protection afforded by the use of L2 cross-neutralizing epitopes, plus a marked reduction of production costs due to bacterial expression of the antigens and a considerable increase in thermal stability, could strongly enhance vaccine distribution and usage in low-resource countries. Previous studies from our group identified three tandem repeats of the L2 aa. 20-38 peptide as a strongly immunogenic epitope when exposed on the scaffold protein thioredoxin (Trx). The aim of this thesis work is the improvement of the Trx-L2 vaccine formulation with regard to cross-protection and thermostability, in order to identify an antigen suitable for a phase I clinical trial. By testing Trx from different microorganisms, we selected P. furiosus thioredoxin (PfTrx) as the optimal scaffold because of its sustained peptide epitope constraining capacity and striking thermal stability (24 hours at 100°C). Alternative production systems, such as secretory Trx-L2 expression in the yeast P. pastoris, have also been set up and evaluated as possible means to further increase production yields, with a concomitant reduction of production costs. Limitations in immune responsiveness caused by MHC class II polymorphisms (as observed, for example, in different mouse strains) have been overcome by introducing promiscuous T-helper (Th) epitopes, e.g., PADRE (Pan DR Epitope), at both ends of PfTrx. This allowed us to obtain fairly strong immune responses even in mice (C57BL/6) normally unresponsive to the basic Trx-L2 vaccine. Cross-protection was not increased, however. I thus designed, produced and tested novel multi-epitope formulations consisting of 8 and 11 L2(20-38) epitopes derived from different HPV types, tandemly joined into a single thioredoxin molecule ("concatemers"). To try to further increase immunogenicity, I also fused our 8X and 11X PfTrx-L2 concatemers to the N-terminus of an engineered complement-binding protein (C4bp), capable of spontaneously assembling into ordered heptameric structures and previously validated as a molecular adjuvant. Fusion to C4bp indeed improved antigen presentation, with a fairly significant increase in both immunogenicity and cross-protection. Another important issue I addressed is the reduction of vaccine doses per treatment, which can be achieved by increasing immunogenicity while also allowing for a delayed release of the antigen. I obtained preliminary, yet quite encouraging, results in this direction with a novel solid-phase vaccine formulation consisting of the basic PfTrx-L2 vaccine and its C4bp fusion derivative adsorbed to mesoporous silica rods (MSR).

Relevance: 100.00%

Abstract:

This work concerns the development of a proton-induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam pulsing system and its counting rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam pulsing operation. The characteristic X-rays were detected with a high-resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with its counting rate capability of 25, 18 and 10 kpps at main amplifier time constants of 2, 4 and 8 µs respectively, enables thick targets to be analysed more readily. The reproducibility of the on-demand beam pulsing system operation was checked by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by characteristic X-ray fluorescence, which occurs when the characteristic X-rays of one element lie energetically above the absorption edge of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, has been demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (RBS) technique to obtain depth profiles of the elements in the upper micron of the sample is discussed.
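
As a rough illustration of why the pile-up continuum matters at the counting rates quoted above, the sketch below estimates the pulse pile-up fraction for Poisson arrivals using the standard 1 - exp(-2*R*tau) approximation; the factor relating the effective resolving time to the amplifier time constant is an assumption for illustration, not a figure from the work.

```python
# Back-of-the-envelope pile-up estimate for Poisson pulse arrivals: the chance
# that another pulse falls within the resolving time tau of a given pulse is
# roughly 1 - exp(-2*R*tau). Rate/time-constant pairs mirror those quoted above.
import math

pulse_width_factor = 6            # assumed: resolving time ~ 6x the shaping time constant
for rate_kpps, tau_us in [(25, 2), (18, 4), (10, 8)]:
    rate = rate_kpps * 1e3        # counts per second
    tau = pulse_width_factor * tau_us * 1e-6
    pileup = 1.0 - math.exp(-2.0 * rate * tau)
    print(f"{rate_kpps} kpps, {tau_us} us shaping: ~{pileup:.0%} of events piled up without pulsing")
```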

Relevance: 100.00%

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham’s razor principle of parsimony (non-plurality), tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
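
As a hedged illustration of the "large p, small n" point above (not an example from the paper), the sketch below contrasts ordinary least squares with a penalized (lasso) fit on synthetic data in which predictors far outnumber observations; the data, penalty strength and sparsity level are arbitrary choices.

```python
# When p >> n, unpenalized least squares interpolates the training data and
# spreads weight over nearly all predictors; a sparsity-inducing penalty
# (the lasso) is one of the standard remedies from the taxonomy above.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 50, 500                                 # far more predictors than samples
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                                 # only 5 truly relevant predictors
y = X @ beta + rng.standard_normal(n)

ols = LinearRegression().fit(X, y)             # ill-posed: minimum-norm interpolation
lasso = Lasso(alpha=0.5).fit(X, y)             # penalized, sparse solution

print("OLS nonzero coefficients:  ", int(np.sum(np.abs(ols.coef_) > 1e-8)))
print("Lasso nonzero coefficients:", int(np.sum(np.abs(lasso.coef_) > 1e-8)))
```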

Relevance: 100.00%

Abstract:

During Ocean Drilling Program (ODP) Leg 180, 11 sites were drilled in the vicinity of the Moresby Seamount to study processes associated with the transition from continental rifting to seafloor spreading in the Woodlark Basin. This paper presents thermochronologic (40Ar/39Ar, 238U/206Pb, and fission track) results from igneous rocks recovered during ODP Leg 180 that help constrain the latest Cretaceous to present-day tectonic development of the Woodlark Basin. Igneous rocks recovered (primarily from Sites 1109, 1114, 1117, and 1118) consist predominantly of diabase and metadiabase, with minor basalt and gabbro. Zircon ion microprobe analyses gave a 238U/206Pb age of 66.4 ± 1.5 Ma, interpreted to date crystallization of the diabase. 40Ar/39Ar plagioclase apparent ages vary considerably according to the degree to which the diabase was altered subsequent to crystallization. The least altered sample (from Site 1109) yielded a plagioclase isochron age of 58.9 ± 5.8 Ma, interpreted to represent cooling following intrusion. The most altered sample (from Site 1117) yielded an isochron age of 31.0 ± 0.9 Ma, interpreted to represent a maximum age for the timing of subsequent hydrothermal alteration. The diabase has not been thermally affected by Miocene-Pliocene rift-related events, supporting our inference that these rocks have remained at shallow and cool levels in the crust (i.e., upper plate) since they were partially reset as a result of middle Oligocene hydrothermal alteration. These results suggest that crustal extension in the vicinity of the Moresby Seamount, immediately west of the active seafloor spreading tip, is being accommodated by normal faulting within latest Cretaceous to early Paleocene oceanic crust. Felsic clasts provide additional evidence for middle Miocene and Pliocene magmatic events in the region. Two rhyolitic clasts (from Sites 1110 and 1111) gave zircon 238U/206Pb ages of 15.7 ± 0.4 Ma and provide evidence for Miocene volcanism in the region. 40Ar/39Ar total fusion ages on single grains of K-feldspar from these clasts yielded younger apparent ages of 12.5 ± 0.2 and 14.4 ± 0.6 Ma due to variable sericitization of K-feldspar phenocrysts. 238U/206Pb zircon, 40Ar/39Ar K-feldspar and biotite total fusion, and apatite fission track analysis of a microgranite clast (from Site 1108) provide evidence for the existence of a rapidly cooled 3.0 to 1.8 Ma granitic protolith. The clast may have been transported longitudinally from the west (e.g., from the D'Entrecasteaux Islands). Alternatively, it may have been derived from a more proximal, but presently unknown, source in the vicinity of the Moresby Seamount.

Relevance: 100.00%

Abstract:

In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, natural disaster insurance actuarial work, quality control schemes, etc. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles judged by interested authorities. Such information regarding extremes serves as essential guidance to interested authorities in decision-making processes. However, in such a context, data are usually skewed in nature, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is a branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing methods in EVT for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance in cases where the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we shall be concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external computer-generated independent samples. Since more data are used, real as well as artificial, the method under certain conditions produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
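
For context, the sketch below shows the classical peaks-over-threshold estimate of a small exceedance probability and an extreme quantile using a Generalized Pareto fit in SciPy; it is not the dissertation's out-of-sample semiparametric method, and the data, threshold and target levels are illustrative assumptions.

```python
# Classical EVT baseline: fit a Generalized Pareto Distribution (GPD) to the
# excesses over a high threshold u, then extrapolate tail probabilities and
# quantiles beyond the range of the observed sample.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
sample = rng.pareto(3.0, size=2000) + 1.0      # illustrative heavy-tailed data

u = np.quantile(sample, 0.95)                  # high threshold
excesses = sample[sample > u] - u
n, n_u = len(sample), len(excesses)

shape, loc, scale = genpareto.fit(excesses, floc=0)   # location fixed at 0

def exceedance_prob(x):
    """Estimate P(X > x) for x above the threshold u."""
    return (n_u / n) * genpareto.sf(x - u, shape, loc=loc, scale=scale)

def extreme_quantile(p):
    """Estimate the quantile x_p with P(X > x_p) = p, for small p."""
    return u + genpareto.ppf(1 - p * n / n_u, shape, loc=loc, scale=scale)

print(exceedance_prob(10.0), extreme_quantile(1e-4))
```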

Relevance: 100.00%

Abstract:

Single-walled carbon nanotubes (SWNTs) were incorporated into polymer nanocomposites based on poly(3-octylthiophene) (P3OT), thermoplastic polyurethane (TPU), or a blend of the two. Thermogravimetry demonstrated the success of the purification procedure employed in the chemical treatment of the SWNTs prior to composite preparation. Stable dispersions of SWNTs in chloroform were obtained by non-covalent interactions with the dissolved polymers. The composites exhibited glass transition temperatures, melting temperatures and heats of fusion that changed relative to those of the pure polymers. This behavior is discussed in terms of interactions between the nanotubes and the polymers. The room-temperature conductivity of the blend (TPU-P3OT) with SWNTs is higher than that of the P3OT/SWNT composite.

Relevance: 100.00%

Abstract:

While LRD (living donation to a genetically/emotionally related recipient) is well established in Australia, LAD (living anonymous donation to a stranger) is rare. Given the increasing use of LAD overseas, Australia is likely to follow suit. Understanding the determinants of people’s willingness for LAD is essential but has been infrequently studied in Australia. Consequently, we compared the determinants of people’s LRD and LAD willingness, and assessed whether these determinants differed according to the type of living donation. We surveyed 487 health students about their LRD and LAD willingness, attitudes, identity, prior experience with blood and organ donation, deceased donation preference, and demographics. We used Structural Equation Modelling (SEM) to identify the determinants of willingness for LRD and LAD, and paired-sample t-tests to examine differences in LRD and LAD attitudes, identity, and willingness. Mean differences in willingness (LRD 5.93, LAD 3.92), attitudes (LRD 6.43, LAD 5.53), and identity (LRD 5.69, LAD 3.58) were statistically significant. Revised SEM models provided a good fit to the data (LRD: χ2(41) = 67.67, p = 0.005, CFI = 0.96, RMSEA = 0.04; LAD: χ2(40) = 79.64, p < 0.001, CFI = 0.95, RMSEA = 0.05) and explained 45% and 54% of the variation in LRD and LAD willingness, respectively. Four common determinants of LRD and LAD willingness emerged: identity, attitude, past blood donation, and knowing a deceased donor. Religious affiliation and deceased donation preference also predicted LAD willingness. Identifying similarities and differences in these determinants can inform future efforts aimed at understanding people’s LRD and LAD willingness and the evaluation of potential living donor motives. Notably, this study highlights the importance of people’s identification as a living donor as a motive underlying their willingness to donate their organs while living.

Relevance: 100.00%

Abstract:

This paper reports on a comparative study of students and non-students that investigates which psycho-social factors influence intended donation behaviour within a single organisation that offers multiple forms of donation activity. Additionally, the study examines which media channels are more important for encouraging donation. A self-administered survey instrument was used and a sample of 776 respondents recruited. Logistic regressions and a Chow test were used to determine statistically significant differences between the groups. For donating money, importance of charity and attitude towards charity influence students, whereas only importance of need significantly influences non-students. For donating time, no significant influences were found for non-students; however, importance of charity and attitude towards charity were significant for students. Importance of need was significant for both students and non-students for donating goods, with importance of charity also significant for students. Telephone and television channels were important for both groups. However, Internet, email and short messaging services were more important for students, providing opportunities to enhance this group’s perceptions of the importance of the charity and the importance of the need, which ultimately impact on their attitudes towards the charity. These differences highlight the importance of charities focussing on those motivations and attitudes that are important to a particular target segment and communicating through appropriate media channels for these segments.

Relevance: 100.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found to be present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from the R/S analysis, periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based only on the second moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
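
As context for the MF-DFA procedure described in the first part of the abstract, the sketch below computes a q-th-order fluctuation function and the generalized Hurst exponent h(q) on synthetic returns; the segment sizes, detrending order, q values and input series are illustrative choices, not those used in the thesis.

```python
# Minimal MF-DFA sketch: integrate the demeaned series, detrend non-overlapping
# segments with a local polynomial, form the q-th-order fluctuation function
# F_q(s), and read h(q) off the slope of log F_q(s) versus log s.
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                   # integrated, demeaned series
    F = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        rms = []
        # detrend segments taken from both ends of the series
        for block in (profile[:n_seg * s].reshape(n_seg, s),
                      profile[-n_seg * s:].reshape(n_seg, s)):
            t = np.arange(s)
            for seg in block:
                coeffs = np.polyfit(t, seg, order)      # local polynomial trend
                resid = seg - np.polyval(coeffs, t)
                rms.append(np.mean(resid ** 2))         # squared fluctuation per segment
        rms = np.asarray(rms)
        for i, q in enumerate(q_values):
            if q == 0:
                F[i, j] = np.exp(0.5 * np.mean(np.log(rms)))
            else:
                F[i, j] = np.mean(rms ** (q / 2.0)) ** (1.0 / q)
    return F

# h(2) plays the role of the Hurst exponent: roughly 0.5 for short memory,
# above 0.5 for long memory in the returns.
rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)                     # placeholder for price returns
scales = np.array([16, 32, 64, 128, 256])
q_values = np.array([-2, 0, 2])
Fq = mfdfa(returns, scales, q_values)
h_q = [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0] for i in range(len(q_values))]
print(dict(zip(q_values.tolist(), np.round(h_q, 2))))
```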

Relevance: 100.00%

Abstract:

Frontline employee behaviours are recognised as vital for achieving a competitive advantage for service organisations. The services marketing literature has comprehensively examined ways to improve frontline employee behaviours in service delivery and recovery. However, limited attention has been paid to frontline employee behaviours that favour customers in ways that go against organisational norms or rules. This study examines these behaviours by introducing the behavioural concept of Customer-Oriented Deviance (COD). COD is defined as “frontline employees exhibiting extra-role behaviours that they perceive to defy existing expectations or prescribed rules of higher authority through service adaptation, communication and use of resources to benefit customers during interpersonal service encounters.” This thesis develops a COD measure and examines the key determinants of these behaviours from a frontline employee perspective.

Existing research on similar behaviours, which originated in the positive deviance and pro-social behaviour domains, has limitations and is considered inadequate for examining COD in the services context. The absence of a well-developed body of knowledge on non-conforming service behaviours has implications for both theory and practice. The provision of ‘special favours’ increases customer satisfaction, but the over-servicing of customers is also counterproductive for service delivery and costly for the organisation. Despite these implications of non-conforming service behaviours, there is little understanding of the nature of these behaviours and their key drivers. This research addresses inadequacies in prior research on the positive deviance, pro-social and pro-customer literature to develop the theoretical foundation of COD. The concept of positive deviance, which has predominantly been used to study organisational behaviours, is applied within a services marketing setting. Further, this research addresses previous limitations in the pro-social and pro-customer behavioural literature, which has examined limited forms of behaviour with no clear understanding of the nature of these behaviours. Building upon these literature streams, this research adopts a holistic approach to the conceptualisation of COD. It addresses previous shortcomings in the literature by providing a well-bounded definition, a psychometrically sound measure of COD, and a conceptually well-founded model of COD.

The concept of COD was examined across three separate studies, based on the theoretical foundations of role theory and social identity theory. Study 1 was exploratory and based on in-depth interviews using the Critical Incident Technique (CIT). The aim of Study 1 was to understand the nature of COD and qualitatively identify its key drivers. Thematic analysis was conducted, and two potential dimensions of COD behaviours, Deviant Service Adaptation (DSA) and Deviant Service Communication (DSC), were revealed in the analysis. In addition, themes representing the potential influences on COD were broadly classified as individual factors, situational factors, and organisational factors.

Study 2 was a scale development procedure that involved the generation and purification of items for the measure based on two student samples working in customer service roles (pilot sample, N = 278; initial validation sample, N = 231). The results of the reliability and Exploratory Factor Analysis (EFA) assessments on the pilot sample suggested the scale had poor psychometric properties. As a result, major revisions were made to the item wordings, and new items were developed based on the literature to reflect a new dimension, Deviant Use of Resources (DUR). The revised items were tested on the initial validation sample, with the EFA suggesting a four-factor structure of COD.

The aim of Study 3 was to further purify the COD measure and test for nomological validity based on its theoretical relationships with key antecedents and similar constructs (key correlates). The theoretical model of COD, consisting of nine hypotheses, was tested on retail and hospitality samples of frontline employees (retail N = 311; hospitality N = 305) drawn from a market research panel using an online survey. The data were analysed using Structural Equation Modelling (SEM). The results provided support for a re-specified second-order three-factor model of COD consisting of 11 items. Overall, the COD measure was found to be reliable and valid, demonstrating convergent validity, discriminant validity and marginal partial invariance for the factor loadings. The results showed support for nomological validity, although the antecedents had differing impacts on COD across samples. Specifically, empathy and perspective-taking, role conflict, and job autonomy significantly influenced COD in the retail sample, whereas empathy and perspective-taking, risk-taking propensity and role conflict were significant predictors in the hospitality sample. In addition, customer orientation-selling orientation, the altruistic dimension of organisational citizenship behaviours, workplace deviance, and social desirability responding were found to correlate with COD.

This research makes several contributions to theory. First, the findings of this thesis extend the literature on positive deviance, pro-social and pro-customer behaviours. Second, the research provides an empirically tested model which describes the antecedents of COD. Third, this research contributes by providing a reliable and valid measure of COD. Finally, the research investigates the differential effects of the key antecedents on COD in different service sectors. The research findings also contribute to services marketing practice. Based on the research findings, service practitioners can better understand the phenomenon of COD and utilise the measurement tool to calibrate COD levels within their organisations. Knowledge of the key determinants of COD will help improve recruitment and training programs and drive internal initiatives within the firm.

Relevance: 100.00%

Abstract:

Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter is known to have serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions.

Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions.

Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume).

Discussion: A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries.

Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data upon which to derive emission factors of their own.

Recommendations and perspectives: In urban areas, motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. It is critical, in order to manage this major pollution source, that methods are available to quantify the full size range of particles emitted, for traffic modelling and health impact assessments.

Relevance: 100.00%

Abstract:

Understanding the relationship between diet, physical activity and health in humans requires accurate measurement of body composition and daily energy expenditure. Stable isotopes provide a means of measuring total body water (TBW) and total daily energy expenditure (TEE) under free-living conditions. While the use of isotope ratio mass spectrometry (IRMS) for the analysis of 2H (deuterium) and 18O (oxygen-18) is well established in the field of human energy metabolism research, numerous questions remain regarding the factors which influence analytical and measurement error using this methodology. This thesis comprised four studies with the following emphases. The aim of Study 1 was to determine the analytical and measurement error of the IRMS with regard to sample handling under certain conditions. Study 2 involved the comparison of TEE using two commonly employed equations. Further, saliva and urine samples, collected at different times, were used to determine whether clinically significant differences would occur. Study 3 was undertaken to determine the appropriate collection times for TBW estimates and derived body composition values. Finally, Study 4 was a single case study investigating whether TEE measures are affected when the human condition changes due to altered exercise and water intake.

The aim of Study 1 was to validate laboratory approaches to measuring isotopic enrichment to ensure accurate (to international standards), precise (reproducible across three replicate samples) and linear (isotope ratio constant over the expected concentration range) results. This established the machine variability of the IRMS equipment in use at Queensland University for both TBW and TEE. Using either 0.4 mL or 0.5 mL sample volumes for both oxygen-18 and deuterium was statistically acceptable (p > 0.05) and showed a within-analysis variance of 5.8 δVSMOW units for deuterium and 0.41 δVSMOW units for oxygen-18. This variance was used as "within-analytical noise" to determine sample deviations. It was also found that there was no influence of equilibration time on oxygen-18 or deuterium values when comparing the minimum (oxygen-18: 24 hr; deuterium: 3 days) and maximum (14 days for both oxygen-18 and deuterium) equilibration times. With regard to preparation using the vacuum line, any order of preparation is suitable, as the TEE values fall within 8% of each other regardless of preparation order. An 8% variation in TEE values is acceptable due to biological and technical errors (Schoeller, 1988). However, for the automated line, deuterium must be assessed first, followed by oxygen-18, as the automated line does not evacuate tubes but merely refills them with an injection of gas for a predetermined time. Any fractionation (which may occur for both isotopes) would cause a slight elevation in the values and hence a lower TEE.

The purpose of the second and third studies was to investigate the use of IRMS to measure TEE and TBW and to validate the current IRMS practices with regard to the sample collection times for urine and saliva, the use of two TEE equations from different research centres, and the body composition values derived from these TEE and TBW values. Following the collection of fasting baseline urine and saliva samples, 10 people (8 women, 2 men) were dosed with a doubly labeled water dose comprising 1.25 g of 10% oxygen-18 and 0.1 g of 100% deuterium per kg body weight. Samples were collected hourly for 12 hr on the first day, and then morning, midday and evening samples were collected for the next 14 days. The samples were analyzed using an isotope ratio mass spectrometer. For TBW, the time to equilibration was determined using three commonly employed data analysis approaches. Isotopic equilibration was reached in 90% of the sample by hour 6, and in 100% of the sample by hour 7. With regard to the TBW estimations, the optimal time for urine collection was found to be between hours 4 and 10, where there was no significant difference between values. In contrast, statistically significant differences in TBW estimations were found for hours 1-3 and 11-12 when compared with hours 4-10. Most of the individuals in this study were in equilibrium after 7 hours. The TEE equations of Prof. Dale Schoeller (Chicago, USA, IAEA) and Prof. K. Westerterp were compared with that of Prof. Andrew Coward (Dunn Nutrition Centre). When comparing values derived from samples collected in the morning and evening, there was no effect of collection time or equation on the resulting TEE values.

The fourth study was a pilot study (n = 1) to test the variability in TEE resulting from manipulations of fluid consumption and level of physical activity of the magnitude that may be expected in a sedentary adult. Physical activity levels were manipulated by increasing the number of steps per day to mimic the increases that may result when a sedentary individual commences an activity program. The study comprised three sub-studies completed on the same individual over a period of 8 months. There were no significant changes in TBW across the studies, even though the elimination rates changed with the supplemented water intake and additional physical activity. The extra activity may not have been sufficiently strenuous, nor the water intake high enough, to cause a significant change in TBW and hence in the CO2 production and TEE values. The measured TEE values show good agreement with estimates calculated from an RMR of 1455 kcal/day, a DIT of 10% of TEE, and activity based on measured steps. The covariance values tracked when plotting the residuals were found to be representative of "well-behaved" data and are indicative of the analytical accuracy. The ratio and product plots were found to reflect the water turnover and CO2 production and thus could, with further investigation, be employed to identify changes in physical activity.
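
As a hedged sketch of the core doubly labelled water calculation referred to above, the code below derives isotope elimination rates from log-linear fits of enrichment versus time and converts them to CO2 production and energy expenditure. The equation shown follows one commonly cited Schoeller-type form rather than the specific equations compared in the thesis, and the enrichments, body water pool and respiratory quotient are assumed values for illustration only.

```python
# Doubly labelled water sketch: kO and kH are the elimination rate constants of
# oxygen-18 and deuterium (slopes of log enrichment vs time); their difference,
# scaled by the body water pool, gives CO2 production, and a Weir-type equation
# with an assumed respiratory quotient converts that to energy expenditure.
import numpy as np

days = np.array([1, 3, 5, 7, 9, 11, 13], dtype=float)
d18O = 300 * np.exp(-0.12 * days)              # assumed enrichments above baseline (per mil)
d2H  = 250 * np.exp(-0.10 * days)

kO = -np.polyfit(days, np.log(d18O), 1)[0]     # oxygen-18 elimination rate (1/day)
kH = -np.polyfit(days, np.log(d2H), 1)[0]      # deuterium elimination rate (1/day)

N = 2200.0                                     # assumed body water pool (mol)
rGf = 1.05 * N * (kO - kH)                     # fractionated gaseous water loss
rCO2 = (N / 2.078) * (kO - kH) - 0.0246 * rGf  # CO2 production (mol/day), Schoeller-type form

RQ = 0.85                                      # assumed respiratory quotient
VCO2_L = 22.4 * rCO2                           # litres of CO2 per day at STP
TEE_kcal = VCO2_L * (1.106 + 3.941 / RQ)       # Weir equation, kcal/day
print(round(kO, 3), round(kH, 3), round(rCO2, 1), round(TEE_kcal))
```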

Relevance: 100.00%

Abstract:

Aims: The aim of this cross-sectional study is to explore levels of physical activity and sitting behaviour amongst a sample of pregnant Australian women (n = 81), and to investigate whether reported levels of physical activity and/or time spent sitting were associated with depressive symptom scores after controlling for potential covariates.

Methods: Study participants were women who attended the antenatal clinic of a large Brisbane maternity hospital between October and November 2006. Data relating to participants' current levels of physical activity, sitting behaviour, depressive symptoms, demographic characteristics and exposure to known risk factors for depression during pregnancy were collected via an on-site survey, a follow-up telephone interview (approximately one week later) and post-delivery access to participant hospital records.

Results: Participants were aged 29.5 (± 5.6) years and were mostly partnered (86.4%) with a gross household income above $26,000 per annum (88.9%). Levels of physical activity were generally low, with only 28.4% of participants reporting sufficient total activity and 16% reporting sufficient planned (leisure-time) activity. The sample mean for depressive symptom scores measured by the Hospital Anxiety and Depression Scale (HADS-D) was 6.38 (± 2.55). The mean depressive symptom scores for participants who reported total moderate-to-vigorous activity levels of sufficient, insufficient, and none were 5.43 (± 1.56), 5.82 (± 1.77) and 7.63 (± 3.25), respectively. Hierarchical multivariable linear regression modelling indicated that, after controlling for covariates, a statistically significant difference of 1.09 points was observed between the mean depressive symptom scores of participants who reported sufficient total physical activity and those who reported engaging in no moderate-to-vigorous activity in a typical week (p = 0.05), but this did not reach the criterion for a clinically meaningful difference. Total physical activity contributed 2.2% of the total 30.3% of explained variance within this model. The other main contributors to explained variance in the multivariable regression models were anxiety symptom scores and the number of existing children. Further, a trend was observed between higher levels of planned sitting behaviour and higher depressive symptom scores (p = 0.06); this correlation was not clinically meaningful. Planned sitting contributed 3.2% of the total 31.3% of explained variance. The number of regression covariates and the limited sample size led to a less than ideal ratio of covariates to participants, probably attenuating this relationship. Specific information about the sitting-based activities in which participants engaged may have provided greater insight into the relationship between planned sitting and depressive symptoms, but these data were not captured by the present study.

Conclusions: The finding that higher levels of physical activity were associated with lower levels of depressive symptoms is consistent with the existing literature in pregnant women, and with a larger body of evidence based on general population samples. Although this result was not considered clinically meaningful, the criterion for a clinically meaningful result was an a priori decision based on quality-of-life literature in non-pregnant populations and may not truly reflect a difference in symptoms that is meaningful to pregnant women. Further investigation to establish clinically meaningful criteria for continuous depressive symptom data in pregnant women is required. This result may have implications for prevention and management options for depression during pregnancy. The observed trend between planned sitting and depressive symptom scores is consistent with literature based on leisure-time sitting behaviour in general population samples, and suggests that further research in this area, with larger samples of pregnant women and more specific sitting data, is required to explore potential associations between activities such as television viewing and depressive symptoms, as this may be an area of behaviour that is amenable to modification.

Relevance: 100.00%

Abstract:

Atopic dermatitis (AD) is a chronic inflammatory skin condition, characterized by intense pruritus, with a complex aetiology comprising multiple genetic and environmental factors. It is a common chronic health problem among children and, along with other allergic conditions, is increasing in prevalence within Australia and in many countries worldwide. Successful management of childhood AD poses a significant and ongoing challenge to parents of affected children. Episodic and unpredictable, AD can have profound effects on children’s physical and psychosocial wellbeing and quality of life, and that of their caregivers and families. Where concurrent child behavioural problems and parenting difficulties exist, parents may have particular difficulty achieving adequate and consistent performance of the routine management tasks that promote the child’s health and wellbeing. Despite frequent reports of behaviour problems in children with AD, past research has neglected the importance of child behaviour to parenting confidence and competence with treatment. Parents of children with AD are also at risk of experiencing depression, anxiety, parenting stress, and parenting difficulties. Although these factors have been associated with difficulty in managing other childhood chronic health conditions, the nature of these relationships in the context of child AD management has not been reported. This study therefore examined relationships between child, parent, and family variables, and parents’ management of child AD and difficult child behaviour, using social cognitive and self-efficacy theory as a guiding framework. The study was conducted in three phases. It employed a quantitative, cross-sectional study design, accessing a community sample of 120 parents of children with AD, and a sample of 64 child-parent dyads recruited from a metropolitan paediatric tertiary referral centre. In Phase One, instruments designed to measure parents’ self-reported performance of AD management tasks (the Parents’ Eczema Management Scale, PEMS) and parents’ outcome expectations of task performance (the Parents’ Outcome Expectations of Eczema Management Scale, POEEMS) were adapted from the Parental Self-Efficacy with Eczema Care Index (PASECI). In Phase Two, these instruments were used to examine relationships between child, parent, and family variables, and parents’ self-efficacy, outcome expectations, and self-reported performance of AD management tasks. Relationships between child, parent, and family variables, parents’ self-efficacy for managing problem behaviours, and reported parenting practices were also examined. This latter focus was extended in Phase Three, in which relationships between observed child and parent behaviour, and parent-reported self-efficacy for managing both child AD and problem behaviours, were explored. Phase One demonstrated the reliability of both PEMS and POEEMS, and confirmed that PASECI was reliable and valid with modification as detailed. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks and to managing the child’s symptoms and behaviour. Factor analysis was also applied to POEEMS, resulting in a three-factor structure, with factors relating to independent management of AD by the parent, involving healthcare professionals in management, and involving the child in management of AD.
Parents’ self-efficacy and outcome expectations had a significant influence on self-reported task performance. In Phase Two, relationships emerged between parents’ self-efficacy and self-reported performance of AD management tasks, and AD severity, child behaviour difficulties, parent depression and stress, conflict over parenting issues, and parents’ relationship satisfaction. Using multiple linear regressions, significant proportions of variation in parents’ self-efficacy and self-reported performance of AD management tasks were explained by child behaviour difficulties and parents’ formal education, and self-efficacy emerged as a likely mediator for the relationships between both child behaviour and parents’ education, and performance of AD management tasks. Relationships were also found between parents’ self-efficacy for managing difficult child behaviour and use of dysfunctional parenting strategies, and child behaviour difficulties, parents’ depression and stress, conflict over parenting issues, and relationship satisfaction. While significant proportions of variation in self-efficacy for managing child behaviour were explained by both child behaviour and family income, family income was the only variable to explain a significant proportion of variation in parent-reported use of dysfunctional parenting strategies. Greater use of dysfunctional parenting strategies (both lax and authoritarian parenting) was associated with more severe AD. Parents reporting lower self-efficacy for managing AD also reported lower self-efficacy for managing difficult child behaviour; likewise, less successful self-reported performance of AD management tasks was associated with greater use of dysfunctional parenting strategies. When child and parent behaviour was directly observed in Phase Three, more aversive child behaviour was associated with lower self-efficacy, less positive outcome expectations, and poorer self-reported performance of AD management tasks by parents. Importantly, there were strong positive relationships between these variables (self-efficacy, outcome expectations, and self-reported task performance) and parents’ observed competence when providing treatment to their child. Less competent performance was also associated with greater parent-reported child behaviour difficulties, parent depression and stress, parenting conflict, and relationship dissatisfaction. Overall, this study revealed the importance of child behaviour to parents’ confidence and practices in the contexts of child AD and child behaviour management. Parents of children with concurrent AD and behavioural problems are at particular risk of having low self-efficacy for managing their child’s AD and difficult behaviour. Children with more severe AD are also at higher risk of behaviour problems, and thus represent a high-risk group of children whose parents may struggle to manage the disease successfully. As one of the first studies to examine the role and correlates of parents’ self-efficacy in child AD management, this study identified a number of potentially modifiable factors that can be targeted to enhance parents’ self-efficacy, and improve parent management of child AD. In particular, interventions should focus on child behaviour and parenting issues to support parents caring for children with AD and improve child health outcomes. 
In future, findings from this research will assist healthcare teams to identify parents most in need of support and intervention, and inform the development and testing of targeted multidisciplinary strategies to support parents caring for children with AD.