916 results for "Preweaning average daily gain"


Relevance: 20.00%

Abstract:

The purpose of this preliminary study was to determine the relevance of categorizing load regime data to assess the functional output and usage of the prosthesis of lower-limb amputees. The objectives were (a) to introduce a categorization of load regime, (b) to present some descriptors of each activity, and (c) to report the results for a case. The load applied on the osseointegrated fixation of one transfemoral amputee was recorded using a portable kinetic system for 5 hours. The periods of directional locomotion, localized locomotion, and stationary loading occurred during 44%, 34%, and 22% of the recording time and accounted for 51%, 38%, and 12% of the duration of the periods of activity, respectively. The absolute maximum force during directional locomotion, localized locomotion, and stationary loading was 19%, 15%, and 8% of body weight on the anteroposterior axis; 20%, 19%, and 12% on the mediolateral axis; and 121%, 106%, and 99% on the long axis. A total of 2,783 gait cycles were recorded. Approximately 10% more gait cycles and 50% more total impulse were identified than with conventional analyses. The proposed categorization and apparatus have the potential to complement conventional instruments, particularly for difficult cases.
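The categorization rests on a simple idea: windows of the load recording in which the long-axis force varies cyclically correspond to locomotion, while near-constant loading corresponds to stationary activity. A minimal sketch of that idea in Python (the window length and coefficient-of-variation cutoff are illustrative assumptions, not the study's apparatus or thresholds):

```python
import math

def categorise_windows(long_axis_force, window=50, cv_threshold=0.15):
    """Label each window 'locomotion' if the long-axis force varies
    strongly (cyclic gait loading), else 'stationary'."""
    labels = []
    for start in range(0, len(long_axis_force) - window + 1, window):
        chunk = long_axis_force[start:start + window]
        mean = sum(chunk) / window
        var = sum((x - mean) ** 2 for x in chunk) / window
        cv = (var ** 0.5) / mean if mean else 0.0  # coefficient of variation
        labels.append("locomotion" if cv > cv_threshold else "stationary")
    return labels

# Synthetic demo: 50 samples of cyclic gait loading, then 50 of steady standing load.
gait = [100 + 60 * math.sin(2 * math.pi * i / 10) for i in range(50)]
standing = [100.0] * 50
labels = categorise_windows(gait + standing)  # ['locomotion', 'stationary']
```

A real classifier would also separate directional from localized locomotion, which this sketch does not attempt.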

Relevance: 20.00%

Abstract:

Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
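The optimistic criterion can be implemented with an interval value iteration: at each backup, transition mass within the interval bounds is pushed greedily toward high-value successor states. A minimal sketch under that interpretation (the two-state example and its interval bounds are invented for illustration, not taken from the paper):

```python
def optimistic_dist(bounds, values):
    """Pick the transition distribution, within interval bounds
    {state: (lo, hi)}, that maximises expected successor value:
    start at the lower bounds, then pour the remaining mass into
    the highest-valued states first."""
    dist = {s: lo for s, (lo, hi) in bounds.items()}
    remaining = 1.0 - sum(dist.values())
    for s in sorted(bounds, key=lambda t: values[t], reverse=True):
        lo, hi = bounds[s]
        add = min(hi - lo, remaining)
        dist[s] += add
        remaining -= add
    return dist

def optimistic_value_iteration(states, actions, reward, bounds, gamma=0.9, iters=300):
    """Discounted value iteration for a BMDP under the optimistic criterion."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(reward[s][a] + gamma * sum(p * V[t] for t, p in
                    optimistic_dist(bounds[s][a], V).items())
                    for a in actions)
             for s in states}
    return V

# Two-state example with one action; intervals express parameter uncertainty.
states, actions = ["A", "B"], ["go"]
reward = {"A": {"go": 0.0}, "B": {"go": 1.0}}
bounds = {"A": {"go": {"A": (0.2, 0.6), "B": (0.4, 0.8)}},
          "B": {"go": {"A": (0.0, 0.3), "B": (0.7, 1.0)}}}
V = optimistic_value_iteration(states, actions, reward, bounds)
```

The pessimistic criterion is the mirror image (mass poured into low-value states first); the paper's contribution is extending such analyses from the discounted to the average-reward setting.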

Relevance: 20.00%

Abstract:

As English increasingly becomes one of the most commonly spoken languages in the world for a variety of economic, social and cultural reasons, education is impacted by globalisation, the internationalisation of universities and the diversity of learners in classrooms. The challenge for educators is to find more effective ways of teaching the English language so that students are better able to create meaning and communicate in the target language, and to transform knowledge and understanding into relevant skills for a rapidly changing world. This research focuses broadly on English language education underpinned by social constructivist principles informing communicative language teaching and, in particular, interactive peer learning approaches. An intervention of interactive peer-based learning in two case study contexts, English as a Foreign Language (EFL) undergraduates in a Turkish university and English as a Second Language (ESL) undergraduates in an Australian university, investigates what students gain from the intervention. The methodology, drawing on qualitative data from student reflective logs, focus group interviews and researcher field notes, emphasises student voice. The cross-case comparative study indicates that interactive peer-based learning enhances a range of learning outcomes for both cohorts, including engagement, communicative competence and diagnostic feedback, as well as assisting the development of inclusive social relationships, civic skills, confidence and self-efficacy. These learning outcomes facilitate better adaptation to a new learning environment and culture. An iterative instructional matrix tool is a useful product of the research for first-year university experiences, teacher training, raising awareness of diversity, building learning communities, and differentiating the curriculum. The study demonstrates that English language learners can experience positive impact through peer-based learning, which holds important implications for Australian universities and higher education.

Relevance: 20.00%

Abstract:

PURPOSE: To examine the relationship between contact lens (CL) case contamination and various potential predictive factors. METHODS: 74 subjects were fitted with lotrafilcon B (CIBA Vision) CLs on a daily wear basis for 1 month. Subjects were randomly assigned one of two polyhexamethylene biguanide (PHMB) preserved disinfecting solutions with the corresponding regular lens case. Clinical evaluations were conducted at lens delivery and after 1 month, when cases were collected for microbial culture. A CL care non-compliance score was determined through administration of a questionnaire, and the volume of solution used was calculated for each subject. Data were examined using backward stepwise binary logistic regression. RESULTS: 68% of cases were contaminated; 35% were moderately or heavily contaminated, and 36% contained gram-negative bacteria. Case contamination was significantly associated with subjective dryness symptoms (OR 4.22, CI 1.37–13.01) (P<0.05). There was no association between contamination and subject age, ethnicity, gender, average wearing time, amount of solution used, non-compliance score, CL power or subjective redness (P>0.05). The effect of lens care system on case contamination approached significance (P=0.07). Failure to rinse the case with disinfecting solution following CL insertion (OR 2.51, CI 0.52–12.09) and not air drying the case (OR 2.31, CI 0.39–13.35) were positively associated with contamination; however, these associations did not reach statistical significance. CONCLUSIONS: Our results suggest that case contamination may influence subjective comfort. It is difficult to predict the development of case contamination from clinical factors alone; the efficacy of CL solutions, bacterial resistance to disinfection and biofilm formation are likely to play a role. Further evaluation of these factors will improve our understanding of the development of case contamination and its clinical impact.
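The reported associations take the form of odds ratios with confidence intervals from the logistic regression; for a single binary factor, the unadjusted version can be computed directly from a 2x2 table. A hedged sketch with made-up counts (not the study's data), using the standard log-OR (Woolf) confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: dryness symptoms vs. case contamination.
or_, lo, hi = odds_ratio_ci(20, 5, 30, 19)
```

The study's ORs are adjusted estimates from the multivariable model, so they would not match a raw 2x2 calculation; the sketch only shows where the OR/CI form comes from.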

Relevance: 20.00%

Abstract:

Background: Rapid weight gain in infancy is an important predictor of obesity in later childhood. Our aim was to determine which modifiable variables are associated with rapid weight gain in early life. Methods: Subjects were healthy infants enrolled in NOURISH, a randomised controlled trial evaluating an intervention to promote positive early feeding practices. This analysis used the birth and baseline data for NOURISH. Birthweight was collected from hospital records, and infants were also weighed at the baseline assessment, before randomisation, when they were aged 4-7 months. Infant feeding practices and demographic variables were collected from the mother using a self-administered questionnaire. Rapid weight gain was defined as an increase in weight-for-age Z-score (using WHO standards) above 0.67 SD from birth to baseline assessment, which is interpreted clinically as crossing centile lines on a growth chart. Variables associated with rapid weight gain were evaluated using a multivariable logistic regression model. Results: Complete data were available for 612 infants (88% of the total sample recruited) with a mean (SD) age of 4.3 (1.0) months at baseline assessment. After adjusting for mother's age, smoking in pregnancy, BMI and education, and infant birthweight, age, gender and introduction of solid foods, the only two modifiable factors significantly associated with rapid weight gain were formula feeding [OR=1.72 (95%CI 1.01-2.94), P=0.047] and feeding on schedule [OR=2.29 (95%CI 1.14-4.61), P=0.020]. Male gender and lower birthweight were non-modifiable factors associated with rapid weight gain. Conclusions: This analysis supports the contention that there is an association between formula feeding, feeding to schedule and weight gain in the first months of life. Mechanisms may include the actual content of formula milk (e.g. higher protein intake) or differences in feeding styles, such as feeding to schedule, which increase the risk of overfeeding. Trial Registration: Australian Clinical Trials Registry ACTRN12608000056392
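The rapid-weight-gain definition can be made concrete with the WHO LMS (Box-Cox) transformation: a weight-for-age z-score is computed at birth and at baseline, and a change above 0.67 SD is flagged. A small sketch of that calculation; the LMS parameter values below are invented for illustration, whereas a real analysis would use the WHO reference tables:

```python
def z_score(weight, L, M, S):
    """Weight-for-age z-score from LMS (Box-Cox) reference parameters:
    z = ((weight / M)**L - 1) / (L * S)."""
    return ((weight / M) ** L - 1) / (L * S)

def rapid_weight_gain(z_birth, z_baseline, cutoff=0.67):
    """True if the change in weight-for-age z-score exceeds the cutoff,
    i.e. the infant has crossed upward through centile lines."""
    return (z_baseline - z_birth) > cutoff

# Hypothetical LMS values at birth and at ~4 months (not WHO table values).
z0 = z_score(3.3, 0.35, 3.3, 0.14)   # exactly the reference median -> z = 0
z1 = z_score(8.0, 0.20, 7.0, 0.12)
rapid = rapid_weight_gain(z0, z1)     # gain well above +0.67 SD
```

The 0.67 SD cutoff corresponds roughly to the width of one centile band on standard growth charts, which is why it is read clinically as "crossing centile lines".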

Relevance: 20.00%

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters.

Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The issue of global warming calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and microturbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or AC-AC converters. The interconnection of DG systems to the power system as a compensator or as a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of the generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current. Noise and harmonic distortion can also degrade the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing digital samples of power system signals. Thus, the targeted scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents from the estimates provided.

An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency, and the Phasor Measurement (PM) technique is one of the well-known advanced techniques used for estimating it. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks, followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared against the PM technique studied comprehensively in Chapter 2.

An algorithm based on the concept of Kalman filtering is proposed in Chapter 3, intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter; the estimated frequency is passed to the Kalman filter to be used in building the transition matrices. The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm based on the concept of Kalman filtering is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is again modified to operate on the output signal of the same FIR filter; however, unlike in Chapter 3, the frequency estimation unit is not segregated and interacts with the Kalman filter: the estimated frequency is passed to the Kalman filter, while the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit.

Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The filter applied has a structure similar to a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3; however, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being found by trial and error. The simulation results for the estimated signal parameters are improved owing to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. Large, sudden changes in the signal parameters at such critical transients are not tracked well by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking these changes. Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6, and the ability of the proposed algorithm is also verified on real data obtained from a prototype test object.

Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6; however, this time an optimal digital filter is designed instead of the simple summation FIR filter, and new initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. The ability of this algorithm is likewise verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of the generated currents or voltages very well. The proposed compensation scheme also prevents distortion of the voltage at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
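The Kalman-filtering chapters share a common core: model the signal as a rotating phasor whose in-phase and quadrature components form the state, and update that state from noisy samples. A minimal sketch of this idea for a single sinusoid of known frequency (plain Python; the fixed process/measurement noise values are chosen for illustration, and this is not the thesis's modified filter or its FIR pre-filtering stage):

```python
import math

def mat_mul(A, B):  # 2x2 matrix product
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)] for i in range(2)]

def mat_T(A):       # 2x2 transpose
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def kalman_sinusoid(samples, freq, fs, q=1e-6, r=1e-2):
    """Track a sinusoid's in-phase/quadrature state x = [a, b] with a
    2-state Kalman filter: the state rotates by w = 2*pi*freq/fs each
    sample, and the scalar measurement is y = a + noise. Returns the
    amplitude estimate sqrt(a^2 + b^2) at each sample."""
    w = 2 * math.pi * freq / fs
    c, s = math.cos(w), math.sin(w)
    F = [[c, -s], [s, c]]                      # phasor rotation per sample
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    amps = []
    for y in samples:
        # predict: x = F x, P = F P F^T + Q
        x = [F[0][0] * x[0] + F[0][1] * x[1], F[1][0] * x[0] + F[1][1] * x[1]]
        P = mat_mul(mat_mul(F, P), mat_T(F))
        P[0][0] += q
        P[1][1] += q
        # update with scalar measurement (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = y - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        amps.append(math.hypot(x[0], x[1]))
    return amps

# Clean 50 Hz test tone sampled at 1 kHz: the amplitude estimate settles near 2.0.
tone = [2.0 * math.cos(2 * math.pi * 50 * k / 1000) for k in range(400)]
amps = kalman_sinusoid(tone, freq=50, fs=1000)
```

This assumes the frequency is known; the thesis's algorithms additionally estimate the frequency and feed it into the transition matrix F, which is what the separate (or interacting) frequency estimation units are for.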

Relevance: 20.00%

Abstract:

The idea of body weight regulation implies that a biological mechanism exerts control over energy expenditure and food intake. This is a central tenet of energy homeostasis. However, the source and identity of the controlling mechanism have not been identified, although it is often presumed to be some long-acting signal related to body fat, such as leptin. Using a comprehensive experimental platform, we have investigated the relationship between biological and behavioural variables in two separate studies over a 12-week intervention period in obese adults (total n 92). All variables were measured objectively and with a similar degree of scientific control and precision, including anthropometric factors, body composition, RMR and cumulative energy consumed at individual meals across the whole day. Results showed that meal size and daily energy intake (EI) were significantly correlated with fat-free mass (FFM; P values 0·02–0·05) but not with fat mass (FM) or BMI (P values 0·11–0·45) (study 1, n 58). In study 2 (n 34), FFM (but not FM or BMI) predicted meal size and daily EI under two distinct dietary conditions (high-fat and low-fat). These data appear to indicate that, under these circumstances, some signal associated with lean mass (but not FM) exerts a determining effect over self-selected food consumption. This signal may be postulated to interact with a separate class of signals generated by FM. This finding may have implications for investigations of the molecular control of food intake and body weight, and for the management of obesity.

Relevance: 20.00%

Abstract:

For many people, a relatively large proportion of daily exposure to a multitude of pollutants may occur inside an automobile. A key determinant of exposure is the amount of outdoor air entering the cabin (i.e. the air change or flow rate). We have quantified this parameter in six passenger vehicles, ranging in age from 18 years to <1 year, at three vehicle speeds and under four different ventilation settings. Average infiltration into the cabin with all operable air entry pathways closed was between 1 and 33.1 air changes per hour (ACH) at a vehicle speed of 60 km/h, and between 2.6 and 47.3 ACH at 110 km/h, with these results representing the most (2005 Volkswagen Golf) and least air-tight (1989 Mazda 121) vehicles, respectively. Average infiltration into stationary vehicles parked outdoors varied between ~0 and 1.4 ACH and was moderately related to wind speed. Measurements were also performed under an air recirculation setting with low fan speed, while airflow rate measurements were conducted under two non-recirculating ventilation settings with low and high fan speeds. The windows were closed in all cases, and over 200 measurements were performed. The results can be applied to estimate pollutant exposure inside vehicles.
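Air change rates of this kind are commonly derived from tracer-gas decay: with the cabin sealed, the concentration of a tracer (e.g. CO2) above its outdoor level decays exponentially at the air change rate. A small sketch of that standard calculation (the concentrations below are made-up values, not the study's measurements):

```python
import math

def air_changes_per_hour(c_start, c_end, minutes, c_outdoor=0.0):
    """ACH from tracer-gas decay: C(t) - Cout = (C0 - Cout) * exp(-ACH * t),
    so ACH = ln((C0 - Cout) / (C1 - Cout)) / t, with t in hours."""
    return math.log((c_start - c_outdoor) / (c_end - c_outdoor)) / (minutes / 60.0)

# Hypothetical cabin CO2 decay: 2000 ppm -> 800 ppm in 5 min, outdoor 400 ppm.
ach = air_changes_per_hour(2000.0, 800.0, 5.0, c_outdoor=400.0)
# ln(1600/400) / (5/60) = 12 * ln(4), roughly 16.6 ACH
```

Subtracting the outdoor concentration matters: it is the excess concentration, not the absolute one, that decays exponentially.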

Relevance: 20.00%

Abstract:

Introduction: Feeding on demand supports an infant's innate capacity to respond to hunger and satiety cues and may promote later self-regulation of intake. Our aim was to examine whether feeding style (on demand vs to schedule) is associated with weight gain in early life. Methods: Participants were first-time mothers of healthy term infants enrolled in NOURISH, an RCT evaluating an intervention to promote positive early feeding practices. Baseline assessment occurred when infants were aged 2-7 months. Infants who could be categorised clearly as fed on demand or fed to schedule (by mothers' self-report) were included in the logistic regression analysis. The model was adjusted for gender, breastfeeding, and maternal age, education and BMI. Weight gain was defined as a positive difference between baseline and birthweight z-scores (WHO standards), which indicates tracking above the birth weight percentile. Results: Data from 356 infants with a mean age of 4.4 (SD 1.0) months were available. Of these, 197 (55%) were fed on demand and 42 (12%) were fed to schedule. There was no statistically significant association between feeding style and weight gain [OR=0.72 (95%CI 0.35-1.46), P=0.36]. Formula-fed infants were three times more likely to be fed to schedule, and formula feeding was independently associated with increased weight gain [OR=2.02 (95%CI 1.11-3.66), P=0.021]. Conclusion: In this preliminary analysis the association between feeding style and weight gain did not reach statistical significance; however, the effect size may be clinically relevant, and future analysis will include the full study sample (N=698).

Relevance: 20.00%

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data are usually available from computed tomography (CT) scans of human cadavers, which generally represent the above-60-year age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exist from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups are required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are typically not trained to perform; therefore, an accurate but relatively simple segmentation method is required for CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI imaging. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. Because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts arising from random movements of the subject's limbs. One such artefact is the step artefact, believed to occur from random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design.

As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated by these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was addressed using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method; a surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T and 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer, and the images obtained using identical protocols were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm.

The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding value for the single-threshold method was 0.24 mm, and the difference in accuracy between the two methods was statistically significant. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with 0.18 mm for CT-based models; the difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations, and is therefore a potential alternative to the current gold standard, CT imaging.
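The single-level vs. multilevel thresholding comparison reduces to a simple idea: one global intensity cutoff for the whole bone, or a different cutoff per anatomical region. A toy sketch on a synthetic intensity grid (the region split and threshold values are invented; a real pipeline operates on CT/MRI voxel data and derives the thresholds from a selection method rather than hard-coding them):

```python
def threshold_segment(image, threshold):
    """Single-level intensity thresholding: 1 = bone, 0 = background."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def multilevel_segment(image, region_thresholds):
    """Per-region thresholding: region_thresholds maps a row range
    (r0, r1), e.g. a proximal/diaphyseal/distal slab, to its own cutoff."""
    out = [row[:] for row in image]
    for (r0, r1), t in region_thresholds.items():
        for r in range(r0, r1):
            out[r] = [1 if px >= t else 0 for px in image[r]]
    return out

# Synthetic 4x3 "scan": brighter proximal rows, dimmer distal rows.
scan = [[10, 200, 30],
        [220, 210, 40],
        [90, 95, 20],
        [100, 110, 15]]
mask = multilevel_segment(scan, {(0, 2): 150, (2, 4): 80})
# -> [[0, 1, 0], [1, 1, 0], [1, 1, 0], [1, 1, 0]]
```

Note that the single global cutoff of 150 would miss the dimmer distal bone entirely, which is the motivation for region-specific thresholds.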

Relevance: 20.00%

Abstract:

Purpose. To investigate whether diurnal variation occurs in retinal thickness measures derived from spectral domain optical coherence tomography (SD-OCT). Methods. Twelve healthy adult subjects had retinal thickness measured with SD-OCT every 2 h over a 10 h period. At each measurement session, three average B-scan images were derived from a series of multiple B-scans (each from a 5 mm horizontal raster scan along the fovea, containing 1500 A-scans/B-scan) and analyzed to determine the thickness of the total retina as well as the thickness of the outer retinal layers. Average thickness values were calculated at the foveal center, for the 0.5 mm diameter foveal region, and for the temporal and nasal parafovea (each 1.5 mm from the foveal center). Results. Total retinal thickness did not exhibit significant diurnal variation in any of the considered retinal regions (p > 0.05). Evidence of significant diurnal variation was found in the thickness of the outer retinal layers (p < 0.05), with the most prominent changes observed in the photoreceptor layers at the foveal center. The photoreceptor inner and outer segment layer thickness exhibited a mean peak-to-trough amplitude of daily change of 7 ± 3 μm at the foveal center. The peak in thickness was typically observed at the third measurement session (mean measurement time, 13:06). Conclusions. Total retinal thickness measured with SD-OCT does not exhibit evidence of significant variation over the course of the day; however, small but significant diurnal variation occurs in the thickness of the foveal outer retinal layers.

Relevance: 20.00%

Abstract:

Advances in safety research, which seek to improve the collective understanding of motor vehicle crash causes and contributing factors, rest upon the pursuit of numerous lines of research inquiry. The research community has focused considerable attention on analytical methods development (negative binomial models, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might logically seek to know which lines of inquiry would provide the most significant improvements in understanding crash causation and/or prediction. It is the contention of this paper that the exclusion of important variables (causal variables or surrogate measures of them) causes omitted variable bias in model estimation, and that this is an important and neglected line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models, yet they offer significant opportunities to better understand the contributing factors and/or causes of crashes. This study examines the role of important variables (other than Annual Average Daily Traffic (AADT)) that are generally omitted from intersection crash prediction models. In addition to the geometric and traffic regulatory information of the intersection, the proposed model includes many spatial factors, such as local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools, representing a mix of potential environmental and human factors that are theoretically important but rarely used. Results suggest that these variables, in addition to AADT, have significant explanatory power and that their exclusion leads to omitted variable bias. Evidence is provided that variable exclusion overstates the effect of minor road AADT by as much as 40% and that of major road AADT by 14%.
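The omitted-variable-bias argument can be illustrated in a few lines: if crashes depend on both AADT and a spatial factor that is correlated with AADT, regressing on AADT alone absorbs the spatial effect into the AADT coefficient. A deterministic toy example (all coefficients and data are invented; real crash models would use count regressions such as negative binomial, not OLS):

```python
def ols_slope(x, y):
    """OLS slope of y on x (with intercept), via centred sums."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Invented data: crashes = 1.0*AADT + 2.0*spatial, where spatial = 0.5*AADT.
aadt = [1.0, 2.0, 3.0, 4.0, 5.0]
spatial = [0.5 * a for a in aadt]
crashes = [1.0 * a + 2.0 * s for a, s in zip(aadt, spatial)]

biased = ols_slope(aadt, crashes)   # AADT-only model
delta = ols_slope(aadt, spatial)    # how the omitted variable tracks AADT
# biased slope = true beta (1.0) + omitted beta (2.0) * delta (0.5) = 2.0
```

This is the classic bias formula (biased coefficient = true coefficient plus the omitted variable's effect times its regression on the included one), and it is exactly the mechanism by which excluding spatial factors overstates the AADT effects reported above.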