918 results for Parametric and semiparametric methods
Abstract:
Objectives: The aim of this study was to determine the antimicrobial resistance patterns of 125 Campylobacter jejuni and 27 Campylobacter coli isolates from 39 Queensland broiler farms. Methods: Two methods, a disc diffusion assay and an agar-based MIC assay, were used. The disc diffusion was performed and interpreted as previously described (Huysmans MB, Turnidge JD. Disc susceptibility testing for thermophilic campylobacters. Pathology 1997; 29: 209–16), whereas the MIC assay was performed according to CLSI (formerly NCCLS) methods and interpreted using DANMAP criteria. Results: In both assays, no C. jejuni or C. coli isolates were resistant to ciprofloxacin or chloramphenicol, no C. coli were resistant to nalidixic acid, and no C. jejuni were resistant to erythromycin. In the MIC assay, no C. jejuni isolate was resistant to nalidixic acid, whereas three isolates (2.4%) were resistant in the disc assay. The highest levels of resistance of the C. jejuni isolates were recorded for tetracycline (19.2% by MIC and 18.4% by disc) and ampicillin (19.2% by MIC and 17.6% by disc). The C. coli isolates gave very similar results (tetracycline resistance 14.8% by both MIC and disc; ampicillin resistance 7.4% by MIC and 14.8% by disc). Conclusions: This work has shown that the majority of C. jejuni and C. coli isolates were susceptible to the six antibiotics tested by both disc diffusion and MIC methods. Disc diffusion represents a suitable alternative methodology to agar-based MIC methods for poultry Campylobacter isolates.
Abstract:
Tillage is defined here in a broad sense, including disturbance of the soil and crop residues, wheel traffic and sowing opportunities. In sub-tropical, semi-arid cropping areas in Australia, tillage systems have evolved from intensively tilled bare fallow systems, with high soil losses, to reduced and no tillage systems. In recent years, the use of controlled traffic has also increased. These conservation tillage systems are successful in reducing water erosion of soil and sediment-bound chemicals. Control of runoff of dissolved nutrients and weakly sorbed chemicals is less certain. Adoption of new practices appears to have been related to practical and economic considerations, and the new systems proved more profitable after a considerable period of research and development. However, challenges remain. One challenge is to ensure that systems that reduce soil erosion, which may involve greater use of chemicals, do not degrade water quality in streams. Another challenge is to ensure that systems that improve water entry do not increase drainage below the crop root zone, which would increase the risk of salinity. Better understanding of how tillage practices influence soil hydrology, runoff and erosion processes should lead to better tillage systems and enable better management of risks to water quality and soil health. Finally, the need to determine the effectiveness of in-field management practices in achieving stream water quality targets in large, multi-land use catchments will challenge our current knowledge base and the tools available.
Abstract:
Nitrogen (N) is the largest agricultural input in many Australian cropping systems and applying the right amount of N in the right place at the right physiological stage is a significant challenge for wheat growers. Optimizing N uptake could reduce input costs and minimize potential off-site movement. Since N uptake is dependent on soil and plant water status, ideally, N should be applied only to areas within paddocks with sufficient plant available water. To quantify N and water stress, spectral and thermal crop stress detection methods were explored using hyperspectral, multispectral and thermal remote sensing data collected at a research field site in Victoria, Australia. Wheat was grown over two seasons with two levels of water inputs (rainfall/irrigation) and either four levels (in 2004; 0, 17, 39 and 163 kg/ha) or two levels (in 2005; 0 and 39 kg/ha N) of nitrogen. The Canopy Chlorophyll Content Index (CCCI) and modified Spectral Ratio planar index (mSRpi), two indices designed to measure canopy-level N, were calculated from canopy-level hyperspectral data in 2005. They accounted for 76% and 74% of the variability of crop N status, respectively, just prior to stem elongation (Zadoks 24). The Normalised Difference Red Edge (NDRE) index and CCCI, calculated from airborne multispectral imagery, accounted for 41% and 37% of variability in crop N status, respectively. Greater scatter in the airborne data was attributable to the difference in scale of the ground and aerial measurements (i.e., small area plant samples against whole-plot means from imagery). Nevertheless, the analysis demonstrated that canopy-level theory can be transferred to airborne data, which could ultimately be of more use to growers. Thermal imagery showed that mean plot temperatures of rainfed treatments were 2.7 °C warmer than irrigated treatments (P < 0.001) at full cover. 
For partially vegetated fields, the two-dimensional Crop Water Stress Index (2D CWSI) was calculated using the Vegetation Index-Temperature (VIT) trapezoid method to reduce the contribution of soil background to image temperature. Results showed rainfed plots were consistently more stressed than irrigated plots. Future work is needed to improve the ability of the CCCI and VIT methods to detect N and water stress and to apply both indices simultaneously at the paddock scale to test whether N can be targeted based on water status. Use of these technologies has significant potential for maximising the spatial and temporal efficiency of N applications for wheat growers. ‘Ground-breaking Stuff’: Proceedings of the 13th Australian Society of Agronomy Conference, 10-14 September 2006, Perth, Western Australia.
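The indices above are simple band-ratio calculations. As a hedged sketch (the reflectance values and CCCI bounds are hypothetical; the study derives its CCCI bounds from the NDVI/NDRE planar relationship, which is simplified here to fixed limits):

```python
def ndre(nir: float, red_edge: float) -> float:
    """Normalised Difference Red Edge index."""
    return (nir - red_edge) / (nir + red_edge)


def ccci(nir: float, red_edge: float, ndre_min: float, ndre_max: float) -> float:
    """Canopy Chlorophyll Content Index: NDRE rescaled between the minimum
    (N-stressed) and maximum (N-sufficient) NDRE expected for the canopy."""
    return (ndre(nir, red_edge) - ndre_min) / (ndre_max - ndre_min)


# Hypothetical canopy reflectances: NIR = 0.45, red edge = 0.30
print(round(ndre(0.45, 0.30), 3))            # 0.2
print(round(ccci(0.45, 0.30, 0.1, 0.5), 3))  # 0.25
```

In practice NDRE is computed from a near-infrared band (around 790 nm) and a red-edge band (around 720 nm); the exact bands depend on the sensor.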
Abstract:
A series of secondary and tertiary amide-substituted diselenides were synthesized and studied for their GPx-like antioxidant activities using H2O2, Cum-OOH, and tBuOOH as substrates and PhSH as thiol co-substrate. The effect of substitution at the free -NH group of the amide moiety in the sec-amide-based diselenides on GPx activity was analyzed by detailed experimental and theoretical methods. It is observed that substitution at the free -NH group significantly enhances the GPx-like activities of the sec-amide-based diselenides, mainly by reducing the Se⋯O nonbonded interactions. The reduction in strength of the Se⋯O interaction upon introduction of N,N-dialkyl substituents not only prevents the undesired thiol exchange reactions, but also reduces the stability of selenenyl sulfide intermediates. This leads to a facile disproportionation of the selenenyl sulfide to the corresponding diselenide, which enhances the catalytic activity. The mechanistic investigations indicate that the reaction of diselenides having sec- or tert-amide moieties with PhSH is extremely slow, indicating that the first step of the catalytic cycle involves the reaction between the diselenides and peroxide to produce the corresponding selenenic and seleninic acids.
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and exogenous N2. 
Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working-memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
Abstract:
Introduction: Research identifies truck drivers as being at high risk of chronic disease. For most truck drivers, their workplace is their vehicle. Truck drivers’ health is impacted by the limitations of this unique working environment, including reduced opportunities for physical activity and the intake of healthy foods. Workplaces are widely recognised as effective platforms for health promotion. However, the effectiveness of traditional and contemporary health promotion interventions in truck drivers’ novel workplace is unknown. Methods: This project worked with six transport industry workplaces in Queensland, Australia over a two-year period. Researchers used Participatory Action Research (PAR) processes to engage truck drivers and workplace managers in the implementation and evaluation of six workplace health promotion interventions. These interventions were designed to support truck drivers to increase their physical activity and access to healthy foods at work. They included traditional health promotion interventions such as a free fruit initiative, a ten thousand steps challenge, personal health messages and workplace posters, and a contemporary social media intervention. Participants were engaged via focus groups, interviews and mixed-methods surveys. Results: The project achieved positive changes in truck drivers’ health knowledge and health behaviours, particularly related to nutrition. There were positive changes in truck drivers’ self-reported health rating, body mass index (BMI) and readiness to make health-related lifestyle changes. There were also positive changes in truck drivers reporting their workplace as a key source of health information. These changes were underpinned by a positive shift in the culture of participating workplaces. Truck drivers’ perceptions of their workplace valuing, encouraging, modelling and facilitating healthy nutrition and physical activity behaviours improved. PAR processes enabled researchers to develop relationships with workplace managers, contextualise interventions and deliver rigorous outcomes. Despite the novelty of truck drivers’ mobile workplace, traditional health promotion interventions were more effective than contemporary ones. Conclusion: In this workplace health promotion project targeting a ‘hard-to-reach’ group of truck drivers, a combination of well-designed traditional workplace interventions and the PAR process resulted in positive health outcomes.
Abstract:
The introgression of domestic dog genes into dingo populations threatens the genetic integrity of 'pure' dingoes. However, dingo conservation efforts are hampered by difficulties in distinguishing between dingoes and hybrids in the field. This study evaluates consistency in the status of hybridisation (i.e. dingo, hybrid or dog) assigned by genetic analyses, skull morphology and visual assessments. Of the 56 south-east Queensland animals sampled, 39 (69.6%) were assigned the same status by all three methods, 10 (17.9%) by genetic and skull methods, four (7.1%) by genetic and visual methods, and two (3.6%) by skull and visual methods. Pair-wise comparisons identified a significant relationship between genetic and skull methods, but not between either of these and visual methods. Results from surveying 13 experienced wild dog managers showed that hybrids were more easily identified by visual characters than were dingoes. A more reliable visual assessment can be developed by determining the relationship between (1) genetics and phenotype, by sampling wild dog populations, and (2) the expression of visual characteristics arising from different proportions and breeds of domestic dog genes, by breeding trials. Culling obvious hybrids based on visual characteristics, such as sable and patchy coat colours, should slow the process of hybridisation.
Abstract:
Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in the neuropsychological evaluation of MS patients. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from Seinäjoki Central Hospital, Department of Neurology, and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up; additionally, a sample of 10 patients and controls were evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among relapsing-remitting MS patients in the present study was 42%. Impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among MS patients at a group level. However, the practice effects in cognitive tests were less pronounced among MS patients than healthy controls. 
At an individual level, the spectrum of MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment, and 69% of patients were correctly classified as cognitively impaired or unimpaired when comprehensive neuropsychological assessment was used as a "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive, but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
Abstract:
This study examines boundaries in health care organizations. Boundaries are sometimes considered things to be avoided in everyday living. This study suggests that boundaries can be important temporally and spatially emerging locations of development, learning, and change in inter-organizational activity. Boundaries can act as mediators of cultural and social formations and practices. The data of the study was gathered in an intervention project during the years 2000-2002 in Helsinki, in which the care of 26 patients with multiple and chronic illnesses was improved. The project used the Change Laboratory method, a research-assisted method for developing work. The research questions of the study are: (1) What are the boundary dynamics of development, learning, and change in health care for patients with multiple and chronic illnesses? (2) How do individual patients experience boundaries in their health care? (3) How are the boundaries of health care constructed and reconstructed in social interaction? (4) What are the dynamics of boundary crossing in the experimentation with the new tools and new practice? The methodology of the study, the ethnography of the multi-organizational field of activity, draws on cultural-historical activity theory and anthropological methods. The ethnographic fieldwork involves multiple research techniques and a collaborative strategy for gathering research data. The data of this study consists of observations, interviews, transcribed intervention sessions, and patients' health documents. According to the findings, the care of patients with multiple and chronic illnesses emerges as fragmented by divisions between patient and professionals, between medical specialties, and between levels of the health care organization. These boundaries have a historical origin in the Finnish health care system. As a consequence of these boundaries, patients frequently experience uncertainty and neglect in their care. 
However, the boundaries of a single patient's care were transformed in the Change Laboratory discussions among patients, professionals and researchers. In these discussions, the questioning of the prevailing boundaries was triggered by the observation of gaps in inter-organizational care. Transformation of the prevailing boundaries was achieved through implementation of the collaborative care agreement tool and the practice of negotiated care. However, the new tool and practice did not expand into general use during the project. The study identifies two complementary models for the development of health care organization in Finland: the 'care package model', based on productivity and process models adopted from engineering, and the 'model of negotiated care', based on co-configuration and the public good.
Abstract:
Objective: Attention deficit hyperactivity disorder (ADHD) is a life-long condition, but because of its historical status as a self-remitting disorder of childhood, empirically validated and reliable methods for the assessment of adults are scarce. In this study, the validity and reliability of the Wender Utah Rating Scale (WURS) and the Adult Problem Questionnaire (APQ), which survey childhood and current symptoms of ADHD, respectively, were studied in a Finnish sample. Methods: The self-rating scales were administered to adults with an ADHD diagnosis (n = 38), healthy control participants (n = 41), and adults diagnosed with dyslexia (n = 37). Items of the self-rating scales were subjected to factor analyses, after which the reliability and discriminatory power of the subscales, derived from the factors, were examined. The effects of group and gender on the subscales of both rating scales were studied. Additionally, the effect of age on the subscales of the WURS was investigated. Finally, the diagnostic accuracy of the total scores was studied. Results: On the basis of the factor analyses, a four-factor structure for the WURS and five-factor structure for the APQ had the best fit to the data. All of the subscales of the APQ and three of the WURS achieved sufficient reliability. The ADHD group had the highest scores on all of the subscales of the APQ, whereas two of the subscales of the WURS did not statistically differ between the ADHD and the Dyslexia group. None of the subscales of the WURS or the APQ was associated with the participant's gender. However, one subscale of the WURS describing dysthymia was positively correlated with the participant's age. With the WURS, the probability of a correct positive classification was .59 in the current sample and .21 when the relatively low prevalence of adult ADHD was taken into account. The probabilities of correct positive classifications with the APQ were .71 and .23, respectively. 
Conclusions: The WURS and the APQ can provide accurate and reliable information about childhood and adult ADHD symptoms, given some important constraints. Classifications made on the basis of the total scores are reliable predictors of an ADHD diagnosis only in populations with a high proportion of ADHD and a low proportion of other similar disorders. The subscale scores can provide detailed information about an individual's symptoms if the characteristics and limitations of each domain are taken into account. Improvements are suggested for two subscales of the WURS.
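The drop from .59/.71 in the sample to .21/.23 in the population reflects the standard prevalence adjustment of positive predictive value via Bayes' theorem. A minimal sketch with illustrative sensitivity and specificity figures (not the values reported in the study):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: probability that a positive screen is a
    true case, given the base rate (prevalence) of the condition."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)


# The same screening accuracy looks very different at different base rates:
print(round(ppv(0.80, 0.85, 0.50), 2))  # 0.84 in a balanced clinical sample
print(round(ppv(0.80, 0.85, 0.04), 2))  # 0.18 at a ~4% population prevalence
```

This is why the abstract stresses that total-score classifications are reliable only in populations with a high proportion of ADHD.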
Abstract:
Objective Vast amounts of injury narratives are collected daily, are available electronically in real time, and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. Methods This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. Results The range of applications and utility of narrative text has increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semi-automatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value and reduced the need for human coding to less than one-third of cases in one large occupational injury database. Conclusion The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of ‘big injury narrative data’ opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice.
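A human-machine approach of this kind pairs an automated classifier with human coding of low-confidence narratives. A minimal sketch of that routing logic, using a toy Naive Bayes bag-of-words classifier with invented narratives and codes (not the actual system, features or data from the case study):

```python
import math
from collections import Counter, defaultdict

# Invented training narratives with mechanism codes (illustrative only).
TRAIN = [
    ("slipped on wet floor and fell", "fall"),
    ("fell from ladder while painting", "fall"),
    ("hand caught in conveyor machine", "machinery"),
    ("finger crushed by press machine", "machinery"),
]


def train(data):
    """Collect per-class word counts, class counts and the vocabulary."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in data:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab


def classify(text, word_counts, class_counts, vocab, threshold=0.9):
    """Return (code, confidence); route to a human when confidence is low."""
    total_docs = sum(class_counts.values())
    log_scores = {}
    for label in class_counts:
        # log prior + Laplace-smoothed log likelihoods
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        log_scores[label] = score
    # Normalise log scores into posterior probabilities (softmax).
    peak = max(log_scores.values())
    exp_scores = {k: math.exp(v - peak) for k, v in log_scores.items()}
    total = sum(exp_scores.values())
    best = max(exp_scores, key=exp_scores.get)
    confidence = exp_scores[best] / total
    if confidence >= threshold:
        return best, confidence
    return "HUMAN_REVIEW", confidence


model = train(TRAIN)
print(classify("worker fell on wet floor", *model))  # classified as 'fall'
```

Raising the confidence threshold shifts work back to human coders; the case study's "less than one-third of cases" figure corresponds to tuning this trade-off on a real database.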
Abstract:
Objective: To explore the effect of education and training on the delivery of alcohol screening, brief intervention and referral to high-risk patients in a hospital setting. Main outcome measures included delivery of training and practice change in relation to staff performing alcohol screening, brief intervention and referrals. Methods: Observational study design using mixed methods set in a tertiary referral hospital. Pre-post assessment of medical records and semi-structured interviews with key informants. Results: Routine screening for substance misuse (9% pre / 71.4% post) and wellbeing concerns (6.6% pre / 15% post) was more frequent following the introduction of resources and staff participation in educational workshops. There was no evidence of a concomitant increase in delivery of brief intervention or referrals to services. Implementation challenges, including time constraints and staff attitudes, and enablers, such as collaboration and visible pathways, were identified. Conclusion: Rates of patient screening increased; however, barriers to delivery of brief intervention and referrals remained. Implementation strategies targeting the specific barriers and enablers identified are required to improve the application of secondary prevention for patients in acute settings. Implications: Educational training, formalised liaison between services, systematised early intervention protocols, and continuous quality improvement processes will progress service delivery in this area.
Abstract:
Purpose Transient changes in corneal topography associated with soft and conventional or reverse geometry rigid contact lens wear have been well documented; however, only a few studies have examined the influence of scleral contact lens wear upon the cornea. Therefore, in this study, we examined the influence of modern miniscleral contact lenses, which land entirely on the sclera and overlying tissues, upon anterior corneal curvature and optics. Methods Anterior corneal topography and elevation data were acquired using Scheimpflug imaging (Pentacam HR, Oculus) immediately prior to and following 8 hours of miniscleral contact lens wear in 15 young healthy adults (mean age 22 ± 3 years, 8 East Asian, 7 Caucasian) with normal corneae. Corneal diurnal variations were accounted for using data collected on a dedicated measurement day without contact lens wear. Corneal clearance was quantified using an optical coherence tomographer (RS-3000, Nidek) following lens insertion and after 8 hours of lens wear. Results Although corneal clearance was maintained throughout the 8 hour lens wear period, significant corneal flattening (up to 0.08 ± 0.04 mm) was observed, primarily in the superior mid-peripheral cornea, which resulted in a slight increase in against-the-rule corneal astigmatism (mean +0.02/-0.15 x 94 for an 8 mm diameter). Higher order aberration terms of horizontal coma, vertical coma and spherical aberration all underwent significant changes for an 8 mm corneal diameter (p ≤ 0.01), which typically resulted in a decrease in RMS error values (mean change in total higher order RMS -0.035 ± 0.046 µm for an 8 mm diameter). There was no association between the magnitude of change in central or mid-peripheral corneal clearance during lens wear and the observed changes in corneal curvature (p > 0.05). 
However, Asian participants displayed a significantly greater reduction in corneal clearance (p = 0.04) and greater superior-nasal corneal flattening compared to Caucasians (p = 0.048). Conclusions Miniscleral contact lenses that vault the cornea induce significant changes in anterior corneal surface topography and higher order aberrations following 8 hours of lens wear. The region of greatest corneal flattening was observed in the superior-nasal mid-periphery, more so in Asian participants. Practitioners should be aware that corneal measurements obtained following miniscleral lens removal may mask underlying corneal steepening.
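The "total higher order RMS" values quoted above combine individual Zernike aberration terms by root-sum-of-squares. A small sketch of that combination (the coefficient values, in micrometres, are invented for illustration):

```python
import math


def total_rms(zernike_coefficients):
    """Total RMS wavefront error from normalised Zernike coefficients:
    the square root of the sum of the squared coefficients."""
    return math.sqrt(sum(c * c for c in zernike_coefficients))


# e.g. horizontal coma, vertical coma, spherical aberration (um)
print(round(total_rms([0.12, -0.09, 0.04]), 3))  # 0.155
```

A change in an individual term can therefore either raise or lower the total RMS depending on its sign and magnitude, which is consistent with the reported overall decrease despite significant changes in individual terms.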
Abstract:
Aim: Birds play a major role in the dispersal of seeds of many fleshy-fruited invasive plants. The fruits that birds choose to consume are influenced by fruit traits. However, little is known of how the traits of invasive plant fruits contribute to invasiveness or to their use by frugivores. We aim to gain a greater understanding of these relationships to improve invasive plant management. Location: South-east Queensland, Australia. Methods: We measured a variety of fruit morphology, pulp nutrient and phenology traits of a suite of bird-dispersed alien plants. Frugivore richness of these aliens was derived from the literature. Using regressions and multivariate methods, we investigated relationships between fruit traits, frugivore richness and invasiveness. Results: Plant invasiveness was negatively correlated with fruit size, and all highly invasive species had quite similar fruit morphology [smaller fruits, seeds of intermediate size and few (<10) seeds per fruit]. Lower pulp water content was the only pulp nutrient trait associated with invasiveness. There were strong positive relationships between the diversity of bird frugivores and plant invasiveness, and between the diversity of bird frugivores in the study region and that in another part of the plants' alien range. Main conclusions: Our results suggest that weed risk assessments (WRA) and predictions of invasive success for bird-dispersed plants can be improved. Scoring criteria for WRA regarding fruit size would need to be system-specific, depending on the fruit-processing capabilities of local frugivores. Frugivore richness could be quantified in the plant's natural range, its invasive range elsewhere, or predicted based on functionally similar fruits.
Abstract:
The critical stream power criterion may be used to describe the incipient motion of cohesionless particles on plane sediment beds. The governing equation relating "critical stream power" to "shear Reynolds number" is developed using the present experimental data as well as data from several other sources. Simultaneously, a resistance equation relating the "particle Reynolds number" to the "shear Reynolds number" is developed for plane sediment beds in wide channels with little or no transport. Using these relations, a procedure is developed for the design of plane sediment beds such that, when any two of the four design variables (particle size, energy/friction slope, flow depth, and discharge per unit width in the channel) are known, the remaining two can be predicted. Finally, a straightforward design procedure using design tables/design curves and analytical methods is presented to solve six possible design problems.
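The basic quantities behind such a design procedure can be sketched with the standard wide-channel approximations: shear velocity u* = sqrt(g h S), shear Reynolds number R* = u* d / nu, and stream power per unit bed area omega = rho g q S. The numerical inputs below are illustrative, not taken from the paper:

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
RHO = 1000.0    # water density, kg/m^3
NU = 1.0e-6     # kinematic viscosity of water, m^2/s


def shear_velocity(depth, slope):
    """u* = sqrt(g h S); in a wide channel the hydraulic radius ~ flow depth."""
    return math.sqrt(G * depth * slope)


def shear_reynolds(u_star, particle_size):
    """Shear Reynolds number R* = u* d / nu."""
    return u_star * particle_size / NU


def unit_stream_power(discharge_per_width, slope):
    """Stream power per unit bed area, omega = rho g q S (W/m^2)."""
    return RHO * G * discharge_per_width * slope


# Example: depth 0.5 m, slope 0.001, 2 mm particles, q = 0.4 m^2/s
u_star = shear_velocity(0.5, 0.001)
print(round(u_star, 4))                         # 0.07 m/s
print(round(shear_reynolds(u_star, 0.002), 1))  # 140.1
print(round(unit_stream_power(0.4, 0.001), 3))  # 3.924 W/m^2
```

With two of the four design variables fixed, the remaining two follow from solving the critical stream power and resistance relations simultaneously; the sketch above only evaluates the individual quantities.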