853 results for procedural deficit hypothesis (PDH)
Abstract:
This chapter explores the possibility and exigencies of employing hypotheses, or educated guesses, as the basis for ethnographic research design. The authors’ goal is to examine whether using hypotheses might provide a path to resolve some of the challenges to knowledge claims produced by ethnographic studies. Through resolution of the putative division between qualitative and quantitative research traditions, it is argued that hypotheses can serve as inferential warrants in qualitative and ethnographic studies.
Abstract:
Background Procedural sedation and analgesia (PSA) is used to attenuate the pain and distress that may otherwise be experienced during diagnostic and interventional medical or dental procedures. As the risk of adverse events increases with the depth of sedation induced, frequent monitoring of level of consciousness is recommended. Level of consciousness is usually monitored during PSA with clinical observation. Processed electroencephalogram-based depth of anaesthesia (DoA) monitoring devices provide an alternative method to monitor level of consciousness that can be used in addition to clinical observation. However, there is uncertainty as to whether their routine use in PSA would be justified. Rigorous evaluation of the clinical benefits of DoA monitors during PSA, including comprehensive syntheses of the available evidence, is therefore required. One potential clinical benefit of using DoA monitoring during PSA is that the technology could improve patient safety by reducing sedation-related adverse events, such as death or permanent neurological disability. We hypothesise that earlier identification of lapses into deeper than intended levels of sedation using DoA monitoring leads to more effective titration of sedative and analgesic medications, and results in a reduction in the risk of adverse events caused by the consequences of over-sedation, such as hypoxaemia. The primary objective of this review is to determine whether using DoA monitoring during PSA in the hospital setting improves patient safety by reducing the risk of hypoxaemia (defined as an arterial partial pressure of oxygen below 60 mmHg or percentage of haemoglobin that is saturated with oxygen [SpO2] less than 90%). Other potential clinical benefits of using DoA monitoring devices during sedation will be assessed as secondary outcomes.
Methods/design Electronic databases will be systematically searched for randomized controlled trials comparing the use of depth of anaesthesia monitoring devices with clinical observation of level of consciousness during PSA. Language restrictions will not be imposed. Screening, study selection and data extraction will be performed by two independent reviewers. Disagreements will be resolved by discussion. Meta-analyses will be performed if suitable. Discussion This review will synthesise the evidence on an important potential clinical benefit of DoA monitoring during PSA within hospital settings.
Abstract:
Objective To identify the prevalence of and risk factors for inadvertent hypothermia after procedures performed with procedural sedation and analgesia in a cardiac catheterisation laboratory. Design Single-centre, prospective observational study. Setting Tertiary care private hospital in Australia. Participants A convenience sample of 399 patients undergoing elective procedures with procedural sedation and analgesia was included. Propofol infusions were used when an anaesthetist was present. Otherwise, bolus doses of either midazolam or fentanyl, or a combination of these medications, were used. Interventions None. Measurements and main results Hypothermia was defined as a temperature <36.0°C. Multivariate logistic regression was used to identify risk factors. Hypothermia was present after 23.3% (n=93; 95% confidence interval [CI] 19.2%-27.4%) of 399 procedures. Sedative regimens with the highest prevalence of hypothermia were any regimen that included propofol (n=35; 40.2%; 95% CI 29.9%-50.5%) and the use of fentanyl combined with midazolam (n=23; 20.3%; 95% CI 12.9%-27.7%). The difference in mean temperature from pre- to post-procedure was -0.27°C (standard deviation [SD] 0.45). Receiving propofol (odds ratio [OR] 4.6; 95% CI 2.5-8.6), percutaneous coronary intervention (OR 3.2; 95% CI 1.7-5.9), body mass index <25 (OR 2.5; 95% CI 1.4-4.4) and being hypothermic prior to the procedure (OR 4.9; 95% CI 2.3-10.8) were independent predictors of post-procedural hypothermia. Conclusions A moderate prevalence of hypothermia was observed. The small absolute change in temperature observed may not be clinically important. More research is needed to increase confidence in our estimates of hypothermia in sedated patients and its impact on clinical outcomes.
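The prevalence interval quoted above (93 hypothermic patients out of 399, 95% CI 19.2%-27.4%) is consistent with a normal-approximation (Wald) interval for a proportion; the authors' exact method is not stated, so the following is an illustrative sketch, not their code:

```python
import math

def wald_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return p - z * se, p + z * se

# 93 hypothermic patients out of 399 procedures
low, high = wald_ci(93, 399)
print(f"{93/399:.1%} (95% CI {low:.1%}-{high:.1%})")
```

This reproduces an interval very close to the one reported; small discrepancies in the last digit would be expected if the authors used an exact or score-based interval instead.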
Abstract:
A commentary on Whiteness studies, linguistic and cultural minority and Indigenous studies in early childhood language and literacy socialization. When the literature on ‘Whiteness’ first emerged in the 1990s, I was offended and skeptical. As an Asian who has lived in White-dominant cultures most of my life, my reflex was to say something like: “Yeah – they want to be ‘special’ too. After all our struggles to get beyond an unmarked place of deficit in the fields of disciplinary knowledge and social sciences – now they want ‘Whiteness’ as their own ethnic studies”...
Abstract:
This chapter examines the personal reflections and experiences of several pre-service and newly graduated teachers, including Kristie, who were involved in the NETDS program. Their documented professional journeys, which include descriptions of struggling when their privileged, taken-for-granted ways of being were destabilized, and grappling with tensions related to their own predispositions and values, are investigated in the context of Whiteness and privilege theory.
Abstract:
Resolving species relationships and confirming diagnostic morphological characters for insect clades that are highly plastic, and/or include morphologically cryptic species, is crucial for both academic and applied reasons. Within the true fly (Diptera) family Chironomidae, a most ubiquitous freshwater insect group, the genera Cricotopus Wulp, 1874 and Paratrichocladius Santos-Abreu, 1918 have long been taxonomically confusing. Indeed, until recently the Australian fauna had been examined in just two unpublished theses: most species were known by informal manuscript names only, with no concept of relationships. Understanding species limits, and the associated ecology and evolution, is essential to address taxonomic sufficiency in biomonitoring surveys. Immature stages are collected routinely, but tolerance is generalized at the genus level, despite marked variation among species. Here, we explored this issue using a multilocus molecular phylogenetic approach, including the standard mitochondrial barcode region, and tested explicitly for phylogenetic signal in ecological tolerance of species. Additionally, we addressed biogeographical patterns by conducting Bayesian divergence time estimation. We sampled all but one of the now recognized Australian Cricotopus species and tested monophyly using representatives from other austral and Asian locations. Cricotopus is revealed as paraphyletic by the inclusion of a nested monophyletic Paratrichocladius, with in-group diversification beginning in the Eocene. Previous morphological species concepts are largely corroborated, but some additional cryptic diversity is revealed. No significant relationship was observed between the phylogenetic position of a species and its ecology, implying either that tolerance to deleterious environmental impacts is a convergent trait among many Cricotopus species or that sensitive and restricted taxa have diversified into more narrow niches from a widely tolerant ancestor.
Abstract:
Background Although there are many structural neuroimaging studies of attention-deficit/hyperactivity disorder (ADHD) in children, there are inconsistencies across studies and no consensus regarding which brain regions show the most robust area or volumetric reductions relative to control subjects. Our goal was to statistically analyze structural imaging data via a meta-analysis to help resolve these issues. Methods We searched the MEDLINE and PsycINFO databases through January 2005. Studies must have been written in English, used magnetic resonance imaging, and presented the means and standard deviations of regions assessed. Data were extracted by one of the authors and verified independently by another author. Results Analyses were performed using STATA with the metan, metabias, and metainf programs. A meta-analysis including all regions across all studies indicated global reductions for ADHD subjects compared with control subjects (standardized mean difference = 0.408, p < .001). Regions most frequently assessed and showing the largest differences included cerebellar regions, the splenium of the corpus callosum, total and right cerebral volume, and right caudate. Several frontal regions assessed in only two studies also showed large significant differences. Conclusions This meta-analysis provides a quantitative analysis of neuroanatomical abnormalities in ADHD and information that can be used to guide future studies.
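The standardized mean difference pooled in the meta-analysis above is computed per study from group means and standard deviations. A minimal sketch of the usual form (Cohen's d with a pooled SD; the input values here are hypothetical, not the review's data):

```python
import math

def smd(m1: float, sd1: float, n1: int, m2: float, sd2: float, n2: int) -> float:
    """Cohen's d: difference in group means scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical example: control mean 10.0, ADHD mean 9.2, common SD 2.0, n = 30 per group
print(smd(10.0, 2.0, 30, 9.2, 2.0, 30))  # ≈ 0.4 SD units
```

Meta-analysis tools such as STATA's metan typically apply a small-sample correction (Hedges' g) on top of this quantity before pooling.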
Abstract:
Objective Diagnosing attention deficit hyperactivity disorder (ADHD) in adults is difficult when diagnosticians cannot establish an onset before the DSM-IV criterion of age 7 or if the number of symptoms recalled does not achieve DSM’s diagnosis threshold. Method The authors addressed the validity of DSM-IV’s age-at-onset and symptom threshold criteria by comparing four groups of adults: 127 subjects with full ADHD who met all DSM-IV criteria for childhood-onset ADHD, 79 subjects with late-onset ADHD who met all criteria except the age-at-onset criterion, 41 subjects with subthreshold ADHD who did not meet full symptom criteria for ADHD, and 123 subjects without ADHD who did not meet any criteria. The authors hypothesized that subjects with late-onset and subthreshold ADHD would show patterns of psychiatric comorbidity, functional impairment, and familial transmission similar to those seen in subjects with full ADHD. Results Subjects with late-onset and full ADHD had similar patterns of psychiatric comorbidity, functional impairment, and familial transmission. Most children with late onset of ADHD (83%) were younger than 12. Subthreshold ADHD was milder and showed a different pattern of familial transmission than the other forms of ADHD. Conclusions The data about the clinical features of probands and the pattern of transmission of ADHD among relatives found little evidence for the validity of subthreshold ADHD among such subjects, who reported a lifetime history of some symptoms that never met DSM-IV’s threshold for diagnosis. In contrast, the results suggested that late-onset adult ADHD is valid and that DSM-IV’s age-at-onset criterion is too stringent.
Abstract:
Background Diagnosing attention-deficit/hyperactivity disorder (ADHD) in adults is difficult when the diagnostician cannot establish an onset prior to the DSM-IV criterion of age 7 or if the number of symptoms recalled does not achieve the DSM-IV threshold for diagnosis. Because neuropsychological deficits are associated with ADHD, we addressed the validity of the DSM-IV age at onset and symptom threshold criteria by using neuropsychological test scores as external validators. Methods We compared four groups of adults: 1) full ADHD subjects met all DSM-IV criteria for childhood-onset ADHD; 2) late-onset ADHD subjects met all criteria except the age at onset criterion; 3) subthreshold ADHD subjects did not meet full symptom criteria; and 4) non-ADHD subjects did not meet any of the above criteria. Results Late-onset and full ADHD subjects had similar patterns of neuropsychological dysfunction. By comparison, subthreshold ADHD subjects showed few neuropsychological differences with non-ADHD subjects. Conclusions Our results, showing similar neuropsychological underpinnings in subjects with late-onset and full ADHD, suggest that the DSM-IV age at onset criterion may be too stringent. Our data also suggest that ADHD subjects who failed to ever meet the DSM-IV threshold for diagnosis have a milder form of the disorder.
Abstract:
Objective To test the hypothesis that the age at onset of bipolar disorder would identify a developmental subtype of bipolar disorder in adults characterized by increased levels of irritability, chronic course, rapid cycling, and comorbidity with attention deficit hyperactivity disorder. Methods Forty-four adult subjects diagnosed with bipolar disorder were selected from large family studies of youth with and without attention deficit hyperactivity disorder. These subjects were stratified by age at onset in childhood (younger than 13 years; n = 8, 18%), adolescence (13–18 years; n = 12, 27%), or adulthood (older than 19 years; n = 24, 55%). All subjects were administered structured diagnostic interviews and a brief cognitive battery. Results In contrast with adult-onset bipolar disorder, child-onset bipolar disorder was associated with a longer duration of illness, more irritability than euphoria, a mixed presentation, a more chronic or rapid-cycling course, and increased comorbidity with childhood disruptive behavior disorders and anxiety disorders. Conclusion Stratification by age at onset of bipolar disorder identified subgroups of adult subjects with differing clinical correlates. This pattern of correlates is consistent with findings documented in children with pediatric bipolar disorder and supports the hypothesis that child-onset bipolar disorder may represent a developmental subtype of the disorder.
Abstract:
Two types of welfare states are compared in this article. Differences in procedural rights for the young unemployed at the level of service delivery are analyzed. In Australia, rights are regulated through a rigid procedural justice system. The young unemployed within the social assistance system in Sweden encounter staff with high discretionary powers, which makes the legal status of the unemployed weak but, on the other hand, makes the system more flexible. Despite the differences, there is striking convergence in how the young unemployed describe how discretionary power among street-level staff affects their procedural rights. This can be understood as a consequence of the similar professional norms, work customs and occupational cultures of street-level staff, and of a basic logic of conditionality in all developed welfare states, where procedural rights are tightly coupled with responsibilities.
Abstract:
Background The various cell types and their relative numbers in multicellular organisms are controlled by growth factors and related extracellular molecules which affect genetic expression pathways. However, these substances may have inhibitory or stimulatory effects (or both) on cell division and cell differentiation depending on the cellular environment. It is not known how cells respond to these substances in such an ambiguous way. Many cellular effects have been investigated and reported using cell culture from cancer cell lines in an effort to define normal cellular behaviour using these abnormal cells. A model is offered to explain the harmony of cellular life in multicellular organisms involving interacting extracellular substances. Methods A basic model was proposed based on asymmetric cell division, and evidence to support the hypothetical model was accumulated from the literature. In particular, relevant evidence for the Insulin-Like Growth Factor system was selected from the published data, especially from certain cell lines, to support the model. The evidence has been selected in an attempt to provide a picture of normal cellular responses, derived from the cell lines. Results The formation of a pair of coupled cells by asymmetric cell division is an integral part of the model, as is the interaction of couplet molecules derived from these cells. Each couplet cell will have a receptor to measure the amount of the couplet molecule produced by the other cell; each cell will be receptor-positive or receptor-negative for the respective receptors. The couplet molecules will form a binary complex whose level is also measured by the cell. The hypothesis is heavily supported by a selective collection of circumstantial evidence and by some direct evidence. The basic model can be expanded to other cellular interactions.
Conclusions These couplet cells and interacting couplet molecules can be viewed as a mechanism that provides a controlled and balanced division-of-labour between the two progeny cells, and, in turn, their progeny. The presence or absence of a particular receptor for a couplet molecule will define a cell type and the presence or absence of many such receptors will define the cell types of the progeny within cell lineages.
Abstract:
Objective: Attention deficit hyperactivity disorder (ADHD) is a life-long condition, but because of its historical status as a self-remitting disorder of childhood, empirically validated and reliable methods for the assessment of adults are scarce. In this study, the validity and reliability of the Wender Utah Rating Scale (WURS) and the Adult Problem Questionnaire (APQ), which survey childhood and current symptoms of ADHD, respectively, were studied in a Finnish sample. Methods: The self-rating scales were administered to adults with an ADHD diagnosis (n = 38), healthy control participants (n = 41), and adults diagnosed with dyslexia (n = 37). Items of the self-rating scales were subjected to factor analyses, after which the reliability and discriminatory power of the subscales, derived from the factors, were examined. The effects of group and gender on the subscales of both rating scales were studied. Additionally, the effect of age on the subscales of the WURS was investigated. Finally, the diagnostic accuracy of the total scores was studied. Results: On the basis of the factor analyses, a four-factor structure for the WURS and five-factor structure for the APQ had the best fit to the data. All of the subscales of the APQ and three of the WURS achieved sufficient reliability. The ADHD group had the highest scores on all of the subscales of the APQ, whereas two of the subscales of the WURS did not statistically differ between the ADHD and the Dyslexia group. None of the subscales of the WURS or the APQ was associated with the participant's gender. However, one subscale of the WURS describing dysthymia was positively correlated with the participant's age. With the WURS, the probability of a correct positive classification was .59 in the current sample and .21 when the relatively low prevalence of adult ADHD was taken into account. The probabilities of correct positive classifications with the APQ were .71 and .23, respectively. 
Conclusions: The WURS and the APQ can provide accurate and reliable information about childhood and adult ADHD symptoms, given some important constraints. Classifications made on the basis of the total scores are reliable predictors of an ADHD diagnosis only in populations with a high proportion of ADHD and a low proportion of other similar disorders. The subscale scores can provide detailed information about an individual's symptoms if the characteristics and limitations of each domain are taken into account. Improvements are suggested for two subscales of the WURS.
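The drop in the probability of a correct positive classification when the low population prevalence of adult ADHD is taken into account (e.g. from .59 to .21 for the WURS) follows directly from Bayes' rule. A minimal sketch with assumed sensitivity and specificity values (the study's actual operating characteristics are not reproduced here):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disorder | positive screen) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same screen looks far better in an enriched clinical sample than in the population:
print(ppv(0.80, 0.80, 0.50))  # prevalence 50%: PPV 0.80
print(ppv(0.80, 0.80, 0.05))  # prevalence 5%: PPV ~0.17
```

This is why the abstract stresses that total-score classifications are reliable only in populations with a high proportion of ADHD: the screen's accuracy is unchanged, but the base rate dominates.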
Abstract:
The enemy release hypothesis predicts that native herbivores will either prefer or cause more damage to native than introduced plant species. We tested this using preference and performance experiments in the laboratory and surveys of leaf damage caused by the magpie moth Nyctemera amica on co-occurring native and introduced species of fireweed (Senecio) in eastern Australia. In the laboratory, ovipositing females and feeding larvae preferred the native S. pinnatifolius over the introduced S. madagascariensis. Larvae performed equally well on foliage of S. pinnatifolius and S. madagascariensis: pupal weights did not differ between insects reared on the two species, but growth rates were significantly faster on S. pinnatifolius. In the field, foliage damage was significantly greater on native S. pinnatifolius than on introduced S. madagascariensis. These results support the enemy release hypothesis, and suggest that the failure of native consumers to switch to introduced species contributes to their invasive success. Both plant species experienced reduced, rather than increased, levels of herbivory when growing in mixed populations, as opposed to pure stands in the field; thus, there was no evidence that apparent competition occurred.
Abstract:
Water regulations have decreased irrigation water supplies in Nebraska and other areas of the U.S. Great Plains. When the available water is insufficient to meet crop water requirements during the entire growing cycle, it becomes critical to know the proper irrigation timing that would maximize yields and profits. This study evaluated the effect of the timing of a deficit-irrigation allocation (150 mm) on crop evapotranspiration (ETc), yield, water use efficiency (WUE = yield/ETc), irrigation water use efficiency (IWUE = yield/irrigation), and dry mass (DM) of corn (Zea mays L.) irrigated with subsurface drip irrigation in the semiarid climate of North Platte, NE. During 2005 and 2006, a total of sixteen irrigation treatments (eight each year) were evaluated, which received different percentages of the water allocation during July, August, and September. During both years, all treatments resulted in no crop stress during the vegetative period and stress during the reproductive stages, which affected ETc, DM, yield, WUE and IWUE. Among treatments, ETc varied by 7.2 and 18.8%; yield by 17 and 33%; WUE by 12 and 22%; and IWUE by 18 and 33% in 2005 and 2006, respectively. Yield and WUE both increased linearly with ETc and with ETc/ETp (ETp = seasonal ETc with no water stress), and WUE increased linearly with yield. The yield response factor (ky) averaged 1.50 over the two seasons. Irrigation timing affected the DM of the plant, grain, and cob, but not that of the stover. It also affected the percentage of DM partitioned to the grain (harvest index), which increased linearly with ETc and averaged 56.2% over the two seasons, but did not affect the percentage allocated to the cob or stover. Irrigation applied in July had the highest positive coefficient of determination (R2) with yield. This high positive correlation decreased considerably for irrigation applied in August, and became negative for irrigation applied in September.
The best positive correlation between the soil water deficit factor (Ks) and yield occurred during weeks 12-14 from crop emergence, during the "milk" and "dough" growth stages. Yield was poorly correlated with stress during weeks 15 and 16, and the correlation became negative after week 17. Dividing the 150 mm allocation about evenly among July, August, and September was a good strategy, resulting in the highest yields in 2005 but not in 2006. Applying a larger proportion of the allocation in July was a good strategy during both years, whereas applying a large proportion of the allocation in September was not. The different results obtained between years indicate that flexible irrigation scheduling techniques should be adopted, rather than relying on fixed timing strategies.
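The yield response factor ky reported above is the slope in the standard FAO-33 linear relation between relative evapotranspiration deficit and relative yield loss. A minimal sketch of that relation (the ETa/ETp values here are illustrative, not the study's data):

```python
def relative_yield(ky: float, et_actual: float, et_potential: float) -> float:
    """FAO-33 yield response: 1 - Ya/Yp = ky * (1 - ETa/ETp), returns Ya/Yp."""
    return 1.0 - ky * (1.0 - et_actual / et_potential)

# With the study's two-season average ky of 1.50, a 10% seasonal ET deficit
# costs about 15% of potential yield:
print(relative_yield(1.50, 540.0, 600.0))  # ETa/ETp = 0.90 -> Ya/Yp = 0.85
```

A ky above 1, as found here, means yield falls proportionally faster than ET, which is consistent with the study's conclusion that the timing and allocation of the deficit matter.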