903 results for Follow-up Studies
Abstract:
This clinical study focused on the effects of childhood specific language impairment (SLI) on daily functioning in later life. SLI is a neurobiological disorder with a genetic predisposition that manifests as poor language production, comprehension, or both in a child with age-level non-verbal intelligence and no other known cause for deficient language development. With a prevalence of around 7%, it is among the most common developmental disorders of childhood. Negative long-term effects, such as problems in learning and behavior, are frequent. Follow-up studies have seldom focused on self-perception of daily functioning and participation, which are considered important in the International Classification of Functioning, Disability, and Health (ICF). To investigate the self-perceived aspects of everyday functioning in individuals with childhood receptive SLI compared with age- and gender-matched control populations, the 15D, 16D, and 17D health-related quality of life (HRQoL) questionnaires were applied. These generic questionnaires include 15, 16, and 17 dimensions, respectively, and yield both a single index score and a profile with values on each dimension. Information on different life domains (rehabilitation, education, employment, etc.) was collected from each age group with separate questionnaires. The study groups comprised adults, adolescents (12-16 years), and pre-adolescents (8-11 years) who had received a diagnosis of receptive SLI and had been examined, usually before school age, at the Department of Phoniatrics of Helsinki University Central Hospital, where children with language deficits of various etiologies are examined and treated by a multidisciplinary team. The adult respondents included 33 subjects with a mean age of 34 years. Measured with the 15D, the subjects perceived their HRQoL to be nearly as good as that of their controls, but on the dimensions of speech, usual activities, mental functioning, and distress they were significantly worse off. They lived with their parents (19%) or were pensioned (26%) significantly more often than the adult Finnish population on average. Adults with self-perceived problems in finding words and in remembering instructions, manifestations of persistent language impairment, showed inferior everyday functioning compared with the rest of the study group. Of the adolescents and pre-adolescents, 48 and 51, respectively, responded. The majority in both groups had received special education or extra educational support at school. All had attended speech therapy at some point; at the time of the study only one adolescent, but every third pre-adolescent, still received speech therapy. The 16D scores of the adolescents and the 17D scores of the pre-adolescents did not differ from those of their controls. The 16D profiles differed on some dimensions: subjects were significantly worse off on the dimension of mental functioning but better off on the dimension of vitality than controls. Of the 17D dimensions, the study group was significantly worse off on speech, whereas the control group reported significantly more problems in sleeping. Of the childhood performance measures investigated, low verbal intelligence quotient (VIQ), which is often considered to reflect receptive language impairment, was significantly associated in adult subjects with some of the self-perceived problems, such as problems in usual activities and mental functioning. The 15D, 16D, and 17D questionnaires served well in measuring self-perceived HRQoL.
Such standardized measures with population reference values are especially important for conforming to the ICF guidelines. In the future these questionnaires could perhaps be used on a more individual level in the follow-up of children in clinics, and even in special schools and classes, to detect those children at greatest risk of negative long-term effects and of diminished well-being regarding daily functioning and participation.
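To make the scoring mechanics concrete, below is a minimal sketch of how a 15D-type single index score and dimension profile can be derived from per-dimension responses. The dimension names follow the published 15D instrument, but the level utilities and equal weights are illustrative placeholders, not the actual Finnish population-based valuation weights.

```python
# Minimal sketch of a 15D-type scoring routine. Dimension names follow the
# published 15D instrument; the level utilities and equal weights below are
# illustrative placeholders, NOT the real population-based valuation weights.

DIMENSIONS = [
    "mobility", "vision", "hearing", "breathing", "sleeping", "eating",
    "speech", "excretion", "usual_activities", "mental_function",
    "discomfort", "depression", "distress", "vitality", "sexual_activity",
]

# Hypothetical utility per response level (1 = no problems ... 5 = worst).
LEVEL_VALUE = {1: 1.0, 2: 0.8, 3: 0.55, 4: 0.3, 5: 0.0}

# Hypothetical importance weights, one per dimension, summing to 1.
WEIGHT = {d: 1.0 / len(DIMENSIONS) for d in DIMENSIONS}

def index_score(responses: dict) -> float:
    """Single index score in [0, 1] from per-dimension levels (1-5)."""
    return sum(WEIGHT[d] * LEVEL_VALUE[responses[d]] for d in DIMENSIONS)

def profile(responses: dict) -> dict:
    """Per-dimension values, i.e. the '15D profile'."""
    return {d: LEVEL_VALUE[responses[d]] for d in DIMENSIONS}

# Example: a respondent who is worse off on speech and distress.
example = {d: 1 for d in DIMENSIONS}
example.update(speech=3, distress=2)
print(round(index_score(example), 3))
```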
Abstract:
The project consisted of two long-term follow-up studies of preterm children addressing the question of whether intrauterine growth restriction affects the outcome. Assessment at 5 years of age of 203 children with a birth weight less than 1000 g born in Finland in 1996-1997 showed that 9% of the children had cognitive impairment, 14% cerebral palsy, and 4% needed a hearing aid. The intelligence quotient was lower (p < 0.05) than the reference value. Thus, 20% exhibited major and 19% minor disabilities, and 61% had no functional abnormalities. Being small for gestational age (SGA) was associated with sub-optimal growth later. Among children born before 27 gestational weeks, the SGA children had more neuropsychological disabilities than those born appropriate for gestational age (AGA). In another cohort with birth weight less than 1500 g assessed at 5 years of age, echocardiography showed a thickened interventricular septum and a decreased left ventricular end-diastolic diameter in both SGA- and AGA-born children. They also had a higher systolic blood pressure than the reference. Laser-Doppler flowmetry showed different endothelium-dependent and -independent vasodilation responses in the AGA children compared with those of the controls. SGA was not associated with cardiovascular abnormalities. Auditory event-related potentials (AERPs) were recorded using an oddball paradigm with frequency deviants (standard tone 500 Hz, deviant 750 Hz with 10% probability). At term, the P350 was smaller in SGA and AGA infants than in controls. At 12 months, the automatic change detection peak (mismatch negativity, MMN) was observed in the controls. However, the preterm infants had a difference positivity that correlated with their neurodevelopment scores. At 5 years of age, the P1 deflection, which reflects primary auditory processing, was smaller, and the MMN larger, in the preterm than in the control children. Even with a challenging paradigm or a distraction paradigm, P1 was smaller in the preterm than in the control children. The SGA and AGA children showed similar AERP responses. Prematurity is a major risk factor for abnormal brain development. Preterm children showed signs of cardiovascular abnormality, suggesting that prematurity per se may carry a risk for later morbidity. The small positive amplitudes in AERPs suggest persistently altered auditory processing in the preterm infants.
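For readers unfamiliar with the paradigm, here is a short sketch of how such an oddball tone sequence might be generated. The 500 Hz standard, 750 Hz deviant, and 10% deviant probability come from the abstract; the rule that two deviants never occur back-to-back is a common MMN convention assumed here for illustration, not something the abstract states.

```python
import numpy as np

# Sketch of the frequency-oddball sequence described above: 500 Hz standards
# with 750 Hz deviants at 10% probability. The no-consecutive-deviants rule
# is an assumed convention, not stated in the abstract.

rng = np.random.default_rng(0)

def oddball_sequence(n_trials: int = 500, p_deviant: float = 0.10):
    tones = []
    for _ in range(n_trials):
        preceded_by_standard = not tones or tones[-1] == 500
        is_deviant = preceded_by_standard and rng.random() < p_deviant
        tones.append(750 if is_deviant else 500)
    return np.array(tones)

seq = oddball_sequence()
print(f"deviant rate: {(seq == 750).mean():.3f}")  # close to 0.10
```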
Abstract:
The offshore shelf and canyon habitats of the OCNMS (Fig. 1) are areas of high primary productivity and biodiversity that support extensive groundfish fisheries. Recent acoustic surveys conducted in these waters have indicated the presence of hard-bottom substrates believed to harbor unique deep-sea coral and sponge assemblages. Such fauna are often associated with shallow tropical waters; however, an increasing number of studies around the world have recorded them in deeper, cold-water habitats at both northern and southern latitudes. These habitats are of tremendous value as sites of recruitment for commercially important fishes. Yet, ironically, studies have shown how the gear used in offshore demersal fishing, as well as other commercial operations on the seafloor, can cause severe physical disturbance to resident benthic fauna. Due to their exposed structure, slow growth and recruitment rates, and long life spans, deep-sea corals and sponges may be especially vulnerable to such disturbances, requiring very long periods to recover. Potential effects of fishing and other commercial operations in such critical habitats, and the need to define appropriate strategies for the protection of these resources, have been identified as a high-priority management issue for the sanctuary. To begin addressing this issue, an initial pilot survey was conducted June 1-12, 2004 at six sites in offshore waters of the OCNMS (Fig. 2; average depths of 147-265 m) to explore for the presence of deep-sea coral/sponge assemblages and to look for evidence of potential anthropogenic impacts in these critical habitats. The survey was conducted on the NOAA Ship McARTHUR-II using the Navy's Phantom DHD2+2 remotely operated vehicle (ROV), which was equipped with a video camera, lasers, and a manipulator arm for the collection of voucher specimens. At each site, a 0.1-m2 grab sampler also was used to collect samples of sediments for the analysis of macroinfauna (>1.0 mm), total organic carbon (TOC), grain size, and chemical contaminants. Vertical profiles of salinity, dissolved oxygen (DO), temperature, and pressure were recorded at each site with a small SeaCat conductivity-temperature-depth (CTD) profiler. Niskin bottles attached to the CTD also obtained near-bottom water samples in support of a companion study of microbial indicators of coral health and general ecological condition across these sites. All samples except the sediment-contaminant samples are being analyzed with present project funds. Original cruise plans included a total of 12 candidate stations to investigate (Fig. 3). However, inclement weather and equipment failures restricted the sampling to half of these sites. In spite of the limited sampling, the work completed was sufficient to address key project objectives and included several significant scientific observations. Foremost, the cruise was successful in demonstrating the presence of target deepwater coral species in these waters. Patches of the rare stony coral Lophelia pertusa, more characteristic of deepwater coral/sponge assemblages in the North Atlantic, were observed for the first time in OCNMS at a site in 271 meters of water. A large proportion of these corals consisted of dead and broken skeletal remains, and a broken gorgonian (soft coral) also was observed nearby. The source of these disturbances is not known. However, observations from several sites included evidence of bottom-trawl marks in the sediment and derelict fishing gear (long lines).
Preliminary results also support the view that these areas are important reservoirs of marine biodiversity and of value as habitat for demersal fishes. For example, onboard examination of 18 bottom-sediment grabs revealed benthic infaunal species representative of 14 different invertebrate phyla. Twenty-eight species of fishes from 11 families, including 11 (possibly 12) species of commercially important rockfishes, also were identified from ROV video footage. These initial discoveries have sparked considerable interest in follow-up studies to learn more about the spatial extent of these assemblages and the magnitude of potential impacts from commercial fishing and other anthropogenic activities in the area. It is essential to expand our knowledge of these deep-sea communities and their vulnerability to potential environmental risks in order to determine the most appropriate management strategies. The survey was conducted under a partnership between NOAA's National Centers for Coastal Ocean Science (NCCOS) and National Marine Sanctuary Program (NMSP) and included scientists from NCCOS, OCNMS, and several other West Coast state, academic, private, and tribal research institutions (see Section 4 for a complete listing of participating scientists).
Abstract:
My thesis studies how people pay attention to other people and the environment. How does the brain figure out what is important and what are the neural mechanisms underlying attention? What is special about salient social cues compared to salient non-social cues? In Chapter I, I review social cues that attract attention, with an emphasis on the neurobiology of these social cues. I also review neurological and psychiatric links: the relationship between saliency, the amygdala and autism. The first empirical chapter then begins by noting that people constantly move in the environment. In Chapter II, I study the spatial cues that attract attention during locomotion using a cued speeded discrimination task. I found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The more ecologically valid the motion features became (e.g., temporal expansion of each object, spatial depth structure implied by distribution of the size of the objects), the stronger the attentional effects. However, compared to inanimate objects and cues, people preferentially attend to animals and faces, a process in which the amygdala is thought to play an important role. To directly compare social cues and non-social cues in the same experiment and investigate the neural structures processing social cues, in Chapter III, I employ a change detection task and test four rare patients with bilateral amygdala lesions. All four amygdala patients showed a normal pattern of reliably faster and more accurate detection of animate stimuli, suggesting that advantageous processing of social cues can be preserved even without the amygdala, a key structure of the “social brain”. People not only attend to faces, but also pay attention to others’ facial emotions and analyze faces in great detail. Humans have a dedicated system for processing faces and the amygdala has long been associated with a key role in recognizing facial emotions. In Chapter IV, I study the neural mechanisms of emotion perception and find that single neurons in the human amygdala are selective for subjective judgment of others’ emotions. Lastly, people typically pay special attention to faces and people, but people with autism spectrum disorders (ASD) might not. To further study social attention and explore possible deficits of social attention in autism, in Chapter V, I employ a visual search task and show that people with ASD have reduced attention, especially social attention, to target-congruent objects in the search array. This deficit cannot be explained by low-level visual properties of the stimuli and is independent of the amygdala, but it is dependent on task demands. Overall, through visual psychophysics with concurrent eye-tracking, my thesis found and analyzed socially salient cues and compared social vs. non-social cues and healthy vs. clinical populations. Neural mechanisms underlying social saliency were elucidated through electrophysiology and lesion studies. I finally propose further research questions based on the findings in my thesis and introduce my follow-up studies and preliminary results beyond the scope of this thesis in the very last section, Future Directions.
Abstract:
Specific anti-polysaccharide antibody deficiency (SPAD) is an immune disorder whose diagnostic criteria have not yet been clearly defined. One hundred and seventy-six children evaluated for recurrent respiratory tract infections were analysed retrospectively. For each subject, specific anti-pneumococcal antibodies had been measured with two enzyme-linked immunosorbent assays (ELISAs): an overall assay (OA) using the 23-valent pneumococcal polysaccharide vaccine (23-PPSV) as the detecting antigen, and a serotype-specific assay (SSA) using purified pneumococcal polysaccharides of individual serotypes (14, 19F, and 23F). Antibody levels were measured before (n = 176) and after (n = 93) immunization with the 23-PPSV. Before immunization, low titres were found for 138 of 176 patients (78%) with the OA, compared with 20 of 176 patients (11%) with the SSA. We found a significant correlation between OA and SSA results. After immunization, 88% (71 of 81) of the patients considered responders in the OA were also responders in the SSA; 93% (71 of 76) of the patients classified as responders according to the SSA were also responders in the OA. SPAD was diagnosed in 8% (seven of 93) of patients on the basis of the absence of a response in both tests. We therefore propose using the OA as a screening test for SPAD before 23-PPSV immunization. After immunization, the SSA should be used only in cases of a low response in the OA. Only the absence of, or a very low, antibody response detected by both tests should be used as a diagnostic criterion for SPAD.
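The concordance figures above imply a simple 2x2 responder table; the short sketch below reconstructs it from the reported counts alone (n = 93 immunized patients: 81 OA responders, 76 SSA responders, 71 responders in both, 7 in neither).

```python
# The 2x2 responder table implied by the figures above (n = 93 immunized
# patients): 81 OA responders, 76 SSA responders, 71 responders in both
# tests, and 7 patients responding in neither (the SPAD diagnoses).

both, oa_only, ssa_only, neither = 71, 10, 5, 7
n = both + oa_only + ssa_only + neither                                       # 93

print(f"OA responders also SSA responders: {both / (both + oa_only):.0%}")    # 88%
print(f"SSA responders also OA responders: {both / (both + ssa_only):.0%}")   # 93%
print(f"SPAD (no response in either test): {neither / n:.0%}")                # 8%
```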
Abstract:
Hypogammaglobulinemia (hypo-Ig) and low mannose-binding protein (MBP) levels might be involved in the infectious risk in renal transplantation. In 152 kidney transplant recipients treated with calcineurin inhibitors (CNI) and mycophenolate mofetil (MMF), we prospectively recorded the incidence of hypogammaglobulinemia and low MBP levels during the first year. Their influence on infectious complications was evaluated in 92 patients at 3 and 12 months (T3 and T12). The proportion of patients with deficiencies increased significantly: hypo-IgG, 6% (T0), 45% (T3), and 30% (T12) (P < 0.001); hypo-MBP, 5%, 11%, and 12% (P = 0.035). Hypo-IgG at T3 was not associated with an increased incidence of first-year infections. A significantly higher proportion of patients with combined hypogammaglobulinemia [IgG + (IgA and/or IgM)] at T3 and with isolated hypo-IgG at T0 developed infections by T3 compared with patients free of these deficits (P < 0.05). Low MBP levels at T3 were associated with more sepsis and viral infections. Hypogammaglobulinemia is frequent during the first year after renal transplantation in patients treated with a CNI and MMF. Hypo-IgG at T0 and combined Ig deficits at T3 were associated with more infections. MBP deficiency might emerge as an important determinant of the post-transplant infectious risk.
Abstract:
PURPOSE: To compare the efficacy of paclitaxel versus doxorubicin given as single agents in first-line therapy of advanced breast cancer (primary end point: progression-free survival [PFS]) and to explore the degree of cross-resistance between the two agents. PATIENTS AND METHODS: Three hundred thirty-one patients were randomized to receive either paclitaxel 200 mg/m(2) as a 3-hour infusion every 3 weeks, or doxorubicin 75 mg/m(2) as an intravenous bolus every 3 weeks. Seven courses were planned unless progression or unacceptable toxicity occurred first. Patients who progressed within the seven courses underwent early cross-over to the alternative drug, while delayed cross-over was optional for the remaining patients at the time of disease progression. RESULTS: Objective response in first-line therapy was significantly better (P = .003) for doxorubicin (response rate [RR], 41%) than for paclitaxel (RR, 25%), with doxorubicin achieving a longer median PFS (7.5 months for doxorubicin v 3.9 months for paclitaxel, P < .001). In second-line therapy, cross-over to doxorubicin (91 patients) and to paclitaxel (77 patients) gave response rates of 30% and 16%, respectively. The median survival durations of 18.3 months for doxorubicin and 15.6 months for paclitaxel were not significantly different (P = .38). The doxorubicin arm had greater toxicity, but this was counterbalanced by better symptom control. CONCLUSION: At the dosages and schedules used in the present study, doxorubicin achieves better disease and symptom control than paclitaxel in first-line treatment. Doxorubicin and paclitaxel are not totally cross-resistant, which supports further investigation of these drugs in combination or in sequence, both in advanced disease and in the adjuvant setting.
Abstract:
BACKGROUND: The potential cardiotoxicity of the doxorubicin-paclitaxel regimen, when paclitaxel is given shortly after the end of the anthracycline infusion, is an issue of concern, as suggested by small single-institution Phase II studies. METHODS: In a large multicenter Phase III trial, 275 anthracycline-naive metastatic breast carcinoma patients were randomized to receive either doxorubicin (60 mg/m(2)) followed 30 minutes later by paclitaxel (175 mg/m(2) as a 3-hour infusion; AT) or a standard doxorubicin-cyclophosphamide regimen (AC; 60/600 mg/m(2)). Both treatments were given once every 3 weeks for a maximum of six cycles. Close cardiac monitoring was implemented in the study design. RESULTS: Congestive heart failure (CHF) occurred in three patients in the AT arm and in one patient in the AC arm (P = 0.62). Decreases in left ventricular ejection fraction to below the lower limit of normal were documented in 33% of AT and 19% of AC patients and were not predictive of CHF development. CONCLUSIONS: AT is devoid of excessive cardiac risk among metastatic breast carcinoma patients when the maximum planned cumulative dose of doxorubicin does not exceed 360 mg/m(2).
Abstract:
BACKGROUND AND PURPOSE: Docetaxel is an active agent in the treatment of metastatic breast cancer. We evaluated the feasibility of docetaxel-based sequential and combination regimens as adjuvant therapies for patients with node-positive breast cancer. PATIENTS AND METHODS: Three consecutive groups of patients with node-positive breast cancer or locally advanced disease, aged 70 years or younger, received one of the following regimens: (a) sequential A→T→CMF: doxorubicin 75 mg/m2 every 3 weeks x 3, followed by docetaxel 100 mg/m2 every 3 weeks x 3, followed by i.v. CMF on days 1 + 8 every 4 weeks x 3; (b) sequential accelerated A→T→CMF: A and T administered at the same doses every 2 weeks; (c) combination therapy: doxorubicin 50 mg/m2 + docetaxel 75 mg/m2 every 3 weeks x 4, followed by CMF x 4. When indicated, radiotherapy was administered during or after CMF, and tamoxifen was started after the end of CMF. RESULTS: Seventy-nine patients have been treated. Median age was 48 years. A 30% rate of early treatment discontinuation was observed in patients receiving the sequential accelerated therapy (23% during A→T), principally due to severe skin toxicity. Median relative dose-intensity was 100% in the three treatment arms. The incidence of grade 3-4 major toxicities among treated patients was as follows: skin toxicity, (a) 5%, (b) 27%, (c) 0%; stomatitis, (a) 20%, (b) 20%, (c) 3%. The incidence of neutropenic fever was (a) 30%, (b) 13%, (c) 48%. After a median follow-up of 18 months, no late toxicity has been reported. CONCLUSIONS: The accelerated sequential A→T→CMF treatment is not feasible due to an excess of skin toxicity. The sequential non-accelerated and combination regimens are feasible and are under evaluation in a phase III trial of adjuvant therapy.
Abstract:
AIM: To examine whether smokers who reduce their quantity of cigarettes smoked between two periods are more or less likely to quit subsequently. STUDY DESIGN: Data come from the Health and Retirement Study, a nationally representative survey of older Americans aged 51-61 in 1991 and followed every 2 years from 1992 to 1998. The 2064 participants smoking at baseline and at the first follow-up comprise the main sample. MEASUREMENTS: Smoking cessation by 1996 is examined as the primary outcome. A secondary outcome is relapse by 1998. Spontaneous changes in smoking quantity between the first two waves make up the key predictor variables. Control variables include gender, age, education, race, marital status, alcohol use, psychiatric problems, acute or chronic health problems, and smoking quantity. FINDINGS: Large (over 50%) and even moderate (25-50%) reductions in quantity smoked between 1992 and 1994 prospectively predict an increased likelihood of cessation in 1996 compared with no change in quantity (OR 2.96, P < 0.001 and OR 1.61, P < 0.01, respectively). Additionally, those who reduced and then quit were somewhat less likely to relapse by 1998 than those who did not reduce in the 2 years prior to quitting. CONCLUSIONS: Successfully reducing the quantity of cigarettes smoked appears to have a beneficial effect on future cessation likelihood, even after controlling for initial smoking level and other variables known to affect smoking cessation. These results indicate that the harm-reduction strategy of reduced smoking warrants further study.
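As a sketch of the kind of model behind these odds ratios, the following hypothetical Python/statsmodels snippet regresses 1996 cessation on the categorized 1992-1994 reduction plus the listed controls. The file name and column names are invented for illustration; the HRS data themselves require registered access.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of the analysis behind the reported odds ratios:
# cessation by 1996 regressed on the categorized 1992-1994 reduction in
# quantity smoked plus the listed controls. File and column names invented.

df = pd.read_csv("hrs_smokers.csv")  # one row per baseline smoker (n = 2064)

# reduction_cat: 'none' (reference), 'moderate' (25-50%), 'large' (>50%)
fit = smf.logit(
    "quit_1996 ~ C(reduction_cat, Treatment('none')) + gender + age"
    " + education + C(race) + married + alcohol_use + psych_problems"
    " + health_problems + baseline_quantity",
    data=df,
).fit()

print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```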
Abstract:
Multiple functions of the beta2-adrenergic receptor (ADRB2) and angiotensin-converting enzyme (ACE) genes warrant studies of their associations with aging-related phenotypes. We focus on multimarker analyses and on the effects of compound genotypes of two polymorphisms in the ADRB2 gene, rs1042713 and rs1042714, and 11 polymorphisms of the ACE gene, on the risk of an aging-associated phenotype, myocardial infarction (MI). We used data from a genotyped sample of the Framingham Heart Study Offspring (FHSO) cohort (n = 1500) followed for about 36 years with six examinations. The ADRB2 rs1042714 (C→G) polymorphism and two moderately correlated (r² = 0.77) ACE polymorphisms, rs4363 (A→G) and rs12449782 (A→G), were significantly associated with the risk of MI in this aging cohort in multimarker models. Predominantly linked ACE genotypes exhibited opposite effects on MI risk; e.g., the AA (rs12449782) genotype had a detrimental effect, whereas the predominantly linked AA (rs4363) genotype exhibited a protective effect. This trade-off occurs as a result of the opposite effects of rare compound genotypes of the ACE polymorphisms with a single dose of the AG heterozygote. The genetic trade-off is further augmented by the selective modulating effect of the rs1042714 ADRB2 polymorphism. The associations were not altered by adjustment for common MI risk factors. The results suggest that the effects of single specific genetic variants of the ADRB2 and ACE genes on MI can be readily altered by gene-gene and/or gene-environment interactions, especially in large heterogeneous samples. Multimarker genetic analyses should benefit studies of complex aging-associated phenotypes.
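For context, the r² quoted for rs4363 and rs12449782 is the standard pairwise linkage-disequilibrium measure. One common approximation computes it as the squared Pearson correlation of allele-dosage genotypes, sketched below on synthetic data (the FHSO genotypes are not public, so the vectors here are illustrative only).

```python
import numpy as np

# Sketch of pairwise LD r^2 for two SNPs, approximated as the squared
# Pearson correlation of allele dosages (0/1/2 copies of the G allele).
# The dosage vectors are synthetic stand-ins, not FHSO data.

rng = np.random.default_rng(1)
rs4363 = rng.integers(0, 3, size=1500)          # dosages for 1500 subjects
shuffled = rng.permutation(rs4363)
# rs12449782 mostly mirrors rs4363, producing a moderate correlation:
rs12449782 = np.where(rng.random(1500) < 0.9, rs4363, shuffled)

r2 = np.corrcoef(rs4363, rs12449782)[0, 1] ** 2
print(f"r^2 = {r2:.2f}")
```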
Variation in use of surveillance colonoscopy among colorectal cancer survivors in the United States.
Abstract:
BACKGROUND: Clinical practice guidelines recommend colonoscopies at regular intervals for colorectal cancer (CRC) survivors. Using data from a large, multi-regional, population-based cohort, we describe the rate of surveillance colonoscopy and its association with geographic, sociodemographic, clinical, and health-services characteristics. METHODS: We studied CRC survivors enrolled in the Cancer Care Outcomes Research and Surveillance (CanCORS) study. Eligible survivors were diagnosed between 2003 and 2005, had surgery for CRC with curative intent, and were alive without recurrence 14 months after surgery. Data came from patient interviews and medical record abstraction. We used a multivariate logit model to identify predictors of colonoscopy use. RESULTS: Despite guidelines recommending surveillance, only 49% of the 1423 eligible survivors received a colonoscopy within 14 months after surgery. We observed large differences across regions (38% to 57%). Survivors who received surveillance colonoscopy were more likely to have colon cancer rather than rectal cancer (OR = 1.41, 95% CI: 1.05-1.90), to have visited a primary care physician (OR = 1.44, 95% CI: 1.14-1.82), and to have received adjuvant chemotherapy (OR = 1.75, 95% CI: 1.27-2.41). Compared with survivors with no comorbidities, survivors with moderate or severe comorbidities were less likely to receive surveillance colonoscopy (OR = 0.69, 95% CI: 0.49-0.98 and OR = 0.44, 95% CI: 0.29-0.66, respectively). CONCLUSIONS: Despite guidelines, more than half of CRC survivors did not receive surveillance colonoscopy within 14 months of surgery, with substantial variation by site of care. The associations with primary care visits and adjuvant chemotherapy suggest that access to care following surgery affects cancer surveillance.
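A minimal sketch of the multivariate logit described above, with odds ratios and 95% confidence intervals recovered from the fitted coefficients, is shown below. The file and column names are hypothetical stand-ins for the CanCORS variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the multivariate logit described above. File and column names
# are hypothetical stand-ins for the CanCORS analytic variables.

df = pd.read_csv("cancors_survivors.csv")  # one row per eligible survivor

fit = smf.logit(
    "surveillance_colonoscopy ~ colon_vs_rectal + pcp_visit"
    " + adjuvant_chemo + C(comorbidity) + C(region)",
    data=df,
).fit()

ci = fit.conf_int()  # columns 0 and 1 hold the lower and upper bounds
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "95% CI low": np.exp(ci[0]),
    "95% CI high": np.exp(ci[1]),
})
print(or_table.round(2))
```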
Abstract:
BACKGROUND: Stroke is one of the most disabling and costly impairments of adulthood in the United States. Stroke patients clearly benefit from intensive inpatient care, but due to the high cost there is considerable interest in implementing interventions to reduce hospital lengths of stay. Early-discharge rehabilitation programs require coordinated, well-organized home-based rehabilitation, yet a lack of sufficient information about the home setting impedes successful rehabilitation. This trial examines a multifaceted telerehabilitation (TR) intervention that uses telehealth technology to simultaneously evaluate the home environment, assess the patient's mobility skills, initiate rehabilitative treatment, prescribe exercises tailored for stroke patients, and provide periodic goal-oriented reassessment, feedback, and encouragement. METHODS: We describe an ongoing Phase II, 2-arm, 3-site randomized controlled trial (RCT) that determines primarily the effect of TR on physical function, and secondarily its effect on disability, falls-related self-efficacy, and patient satisfaction. Fifty participants with a diagnosis of ischemic or hemorrhagic stroke will be randomly assigned to one of two groups: (a) TR or (b) usual care. The TR intervention uses a combination of three videotaped visits and five telephone calls, an in-home messaging device, and additional telephonic contact as needed over a 3-month study period to provide a progressive rehabilitative intervention with a treatment goal of safe functional mobility of the individual within an accessible home environment. Dependent variables will be measured at baseline and at 3 and 6 months, and analyzed with a linear mixed-effects model across all time points. DISCUSSION: For patients recovering from stroke, the use of TR to provide home assessments and follow-up training in prescribed equipment has the potential to effectively supplement existing home health services, assist the transition to home, and increase efficiency. This may be particularly relevant when patients live in remote locations, as is the case for many veterans. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT00384748.
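A minimal sketch of the planned mixed-effects analysis is shown below: physical function measured at baseline, 3, and 6 months, modeled with a random intercept per participant. The long-format file and column names are hypothetical assumptions, not the trial's actual data dictionary.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the planned analysis: physical function at baseline, 3, and
# 6 months, modeled with a linear mixed-effects model (random intercept
# per participant). File and column names are hypothetical.

df = pd.read_csv("tr_trial_long.csv")  # one row per participant per visit

model = smf.mixedlm(
    "physical_function ~ C(arm) * C(month)",  # arm: TR vs. usual care
    data=df,
    groups=df["participant_id"],              # random intercept per subject
)
result = model.fit()
print(result.summary())
```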
Abstract:
The authors of this study evaluated a structured 10-session psychosocial support-group intervention for newly HIV-diagnosed pregnant South African women. Participants were expected to display increases in HIV disclosure, self-esteem, active coping, and positive social support, and decreases in depression, avoidant coping, and negative social support. Three hundred sixty-one pregnant HIV-infected women were recruited from four antenatal clinics in Tshwane townships from April 2005 to September 2006. Using a quasi-experimental design, assessments were conducted at baseline and at two and eight months post-intervention. A series of random-effects regression analyses were conducted, with the three assessment points treated as a random effect of time. At both follow-ups, the rate of disclosure in the intervention group was significantly higher than that of the comparison group (p < 0.001). Compared with the comparison group at the first follow-up, the intervention group displayed higher levels of active coping (t = 2.68, p < 0.05) and lower levels of avoidant coping (t = -2.02, p < 0.05), and those who attended at least half of the intervention sessions exhibited improved self-esteem (t = 2.11, p < 0.05). Group interventions tailored for newly HIV-positive pregnant women, implemented in resource-limited settings, may accelerate the process of adjusting to one's HIV status, but may not have sustained benefits over time.
Abstract:
Externalizing behavior problems of 124 adolescents were assessed across Grades 7-11. In Grade 9, participants were also assessed across social-cognitive domains after imagining themselves as the object of provocations portrayed in six videotaped vignettes. Participants responded to vignette-based questions representing multiple processes of the response decision step of social information processing. Phase 1 of our investigation supported a two-factor model of the response evaluation process within response decision (response valuation and outcome expectancy). Phase 2 showed significant relations between the set of these response decision processes, as well as response selection, measured in Grade 9 and (a) externalizing behavior in Grade 9 and (b) externalizing behavior in Grades 10-11, even after controlling for externalizing behavior in Grades 7-8. These findings suggest that on-line behavioral judgments about aggression play a crucial role in the maintenance and growth of aggressive response tendencies in adolescence.
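A minimal sketch of the Phase 2 analysis pattern is shown below: Grades 10-11 externalizing behavior regressed on the Grade 9 response decision measures while controlling for Grades 7-8 externalizing behavior. The file and column names are hypothetical, chosen only to mirror the variables named above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the Phase 2 analysis: later externalizing behavior regressed
# on Grade 9 response decision measures, controlling for earlier
# externalizing behavior. File and column names are hypothetical.

df = pd.read_csv("adolescents.csv")  # one row per adolescent (n = 124)

fit = smf.ols(
    "ext_grades_10_11 ~ response_valuation + outcome_expectancy"
    " + response_selection + ext_grades_7_8",
    data=df,
).fit()
print(fit.summary())
```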