92 results for 13-120
Abstract:
Over the last 15 years, the supernova community has endeavoured to directly identify progenitor stars for core-collapse supernovae discovered in nearby galaxies. These precursors are often visible as resolved stars in high-resolution images from space- and ground-based telescopes. The discovery rate of progenitor stars is limited by the local supernova rate and the availability and depth of archive images of galaxies, with 18 detections of precursor objects and 27 upper limits. This review compiles these results (from 1999 to 2013) in a distance-limited sample and discusses the implications of the findings. The vast majority of the detections of progenitor stars are of type II-P, II-L, or IIb, with one type Ib progenitor system detected and many more upper limits for progenitors of Ibc supernovae (14 in all). The data for these 45 supernova progenitors illustrate a remarkable deficit of high-luminosity stars above an apparent limit of log L/L⊙ ≃ 5.1 dex. For a typical Salpeter initial mass function, one would expect to have found 13 high-luminosity and high-mass progenitors by now. There is, possibly, only one object in this time- and volume-limited sample that is unambiguously high-mass (the progenitor of SN2009ip), although the nature of that supernova is still debated. The possible biases due to the influence of circumstellar dust, the luminosity analysis, and sample selection methods are reviewed. It does not appear likely that these can explain the missing high-mass progenitor stars. This review concludes that the community's work to date shows that the observed populations of supernovae in the local Universe are not, on the whole, produced by high-mass (M ≳ 18 M⊙) stars. Theoretical explosions of model stars also predict that black hole formation and failed supernovae tend to occur above an initial mass of M ≃ 18 M⊙. 
The models also suggest there is no simple single mass division for neutron-star or black-hole formation and that there are islands of explodability for stars in the 8-120 M⊙ range. The observational constraints are quite consistent with the bulk of stars above M ≃ 18 M⊙ collapsing to form black holes with no visible supernovae.
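The Salpeter expectation quoted above can be sketched as a quick numerical check. This is an illustrative calculation, not the review's own analysis: the 150 M⊙ upper mass cut and the application to all 45 progenitors are assumptions, so the result is only indicative.

```python
# Fraction of core-collapse progenitors above 18 Msun for a Salpeter IMF,
# dN/dM ~ M^-2.35, between an 8 Msun lower and an assumed 150 Msun upper cut.

def salpeter_fraction(m_lo, m_hi, m_min=8.0, m_max=150.0, alpha=2.35):
    """Fraction of stars with mass in [m_lo, m_hi] out of [m_min, m_max]."""
    def integral(a, b):
        p = 1.0 - alpha  # exponent after integrating M^-alpha
        return (b**p - a**p) / p
    return integral(m_lo, m_hi) / integral(m_min, m_max)

frac_high = salpeter_fraction(18.0, 150.0)  # ~0.3 of progenitors above 18 Msun
expected_high_mass = 45 * frac_high         # expected detections in the sample
```

With these assumptions the expected count comes out around 13-15, of the same order as the ~13 high-mass progenitors the review says should have been found, with the exact value depending on the adopted upper mass cut.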
Abstract:
Relative strengths of surface interaction for individual carbon atoms in acyclic and cyclic hydrocarbons adsorbed on alumina surfaces are determined using chemically resolved 13C nuclear magnetic resonance (NMR) T1 relaxation times. The ratio of relaxation times for the adsorbed atoms T1,ads to the bulk liquid relaxation time T1,bulk provides an indication of the mobility of the atom. Hence a low T1,ads/T1,bulk ratio indicates a stronger surface interaction. The carbon atoms associated with unsaturated bonds in the molecules are seen to exhibit a larger reduction in T1 on adsorption relative to the aliphatic carbons, consistent with adsorption occurring through the carbon-carbon multiple bonds. The relaxation data are interpreted in terms of proximity of individual carbon atoms to the alumina surface and adsorption conformations are inferred. Furthermore, variations of interaction strength and molecular configuration have been explored as a function of adsorbate coverage, temperature, surface pre-treatment, and in the presence of co-adsorbates. This relaxation time analysis is appropriate for studying the behaviour of hydrocarbons adsorbed on a wide range of catalyst support and supported-metal catalyst surfaces, and offers the potential to explore such systems under realistic operating conditions when multiple chemical components are present at the surface.
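The ratio analysis described above can be sketched in a few lines; the T1 values here are invented for illustration and are not measurements from the study.

```python
# Hypothetical 13C T1 values (seconds) for two carbon environments in an
# adsorbed hydrocarbon; a lower T1,ads/T1,bulk ratio indicates reduced
# mobility and hence a stronger interaction with the alumina surface.
t1_values = {
    "olefinic C":  {"ads": 0.8, "bulk": 8.0},  # unsaturated carbon
    "aliphatic C": {"ads": 3.0, "bulk": 6.0},
}

ratios = {atom: v["ads"] / v["bulk"] for atom, v in t1_values.items()}
strongest = min(ratios, key=ratios.get)  # carbon closest to the surface
```

For these made-up numbers the olefinic carbon has the lower ratio (0.1 vs. 0.5), which is the pattern the abstract reports for carbons in unsaturated bonds.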
Abstract:
Secondary or late graft failure has been defined as the development of inadequate marrow function after initial engraftment has been achieved. We describe a case of profound marrow aplasia occurring 13 years after sibling allogeneic bone marrow transplantation for chronic myeloid leukaemia (CML) in first chronic phase. Although the patient remained a complete donor chimera, thereby suggesting that an unselected infusion of donor peripheral blood stem cells (PBSC) or bone marrow might be indicated, the newly acquired aplasia was thought to be immune in aetiology and some immunosuppression was therefore considered appropriate. Rapid haematological recovery was achieved after the infusion of unselected PBSC from the original donor following conditioning with anti-thymocyte globulin (ATG).
Abstract:
Ex vivo T cell depletion of allogeneic grafts is associated with a high (up to 80%) rate of mixed chimerism (MC) posttransplantation. The number of transplanted progenitor cells is an important factor in achieving complete donor chimerism in the T cell depletion setting. Use of granulocyte colony-stimulating factor (G-CSF) peripheral blood allografts allows the administration of large numbers of CD34+ cells. We studied the chimeric status of 13 patients who received allogeneic CD34+-selected peripheral blood progenitor cell transplants (allo-PBPCT/CD34+) from HLA-identical sibling donors. Patients were conditioned with cyclophosphamide (120 mg/kg) and total-body irradiation (13 Gy in four fractions). Apheresis products were T cell-depleted by the immunoadsorption avidin-biotin method. The median numbers of CD34+ and CD3+ cells infused were 2.8×10⁶/kg (range 1.9-8.6×10⁶/kg) and 0.4×10⁶/kg (range 0.3-1×10⁶/kg), respectively. Molecular analysis of the engraftment was performed using polymerase chain reaction (PCR) amplification of highly polymorphic short tandem repeat (PCR-STR) sequences in peripheral blood samples. MC was detected in two (15%) of 13 patients. These two patients relapsed at 8 and 10 months after transplant, respectively. The remaining 11 patients showed complete donor chimerism and were in clinical remission after a maximum follow-up period of 24 months (range 6-24 months). These results were compared with those obtained in 10 patients who were treated with T cell-depleted bone marrow transplantation by means of elutriation and who received the same conditioning treatment and similar amounts of CD3+ cells (median 0.45×10⁶/kg; not significant) but a lower number of CD34+ cells (median 0.8×10⁶/kg; p = 0.001). MC was documented in six of 10 patients (60%), which was significantly higher than in the allo-PBPCT/CD34+ group (p = 0.04). 
We conclude that a high frequency of complete donor chimerism is achieved in patients receiving allo-PBPCT/CD34+ and that this is most likely due to the high number of progenitor cells administered.
Abstract:
Inland waters are of global biogeochemical importance, receiving carbon inputs of ~ 4.8 Pg C y⁻¹. Of this, 12 % is buried, 18 % is transported to the oceans, and 70 % supports aquatic secondary production. However, the mechanisms that determine the fate of organic matter (OM) in these systems are poorly defined. One important aspect is the formation of organo-mineral complexes in aquatic systems and their potential as a route for OM transport and burial vs. their potential use as organic carbon (C) and nitrogen (N) sources. Organo-mineral particles form by sorption of dissolved OM to freshly eroded mineral surfaces and may contribute to ecosystem-scale particulate OM fluxes. We tested the availability of mineral-sorbed OM as a C & N source for streamwater microbial assemblages and streambed biofilms. Organo-mineral particles were constructed in vitro by sorption of 13C:15N-labelled amino acids to hydrated kaolin particles, and microbial degradation of these particles was compared with equivalent doses of 13C:15N-labelled free amino acids. Experiments were conducted in 120 ml mesocosms over 7 days using biofilms and streamwater sampled from the Oberer Seebach stream (Austria), tracing assimilation and mineralization of the 13C and 15N labels from mineral-sorbed and dissolved amino acids. Here we present data on the effects of organo-mineral sorption upon amino acid mineralization and its C:N stoichiometry. Organo-mineral sorption had a significant effect upon microbial activity, restricting C and N mineralization by both the biofilm and streamwater treatments. Distinct differences in community response were observed, with both dissolved and mineral-stabilized amino acids playing an enhanced role in the metabolism of the streamwater microbial community. Mineral sorption of amino acids differentially affected C & N mineralization and reduced the C:N ratio of the dissolved amino acid pool. 
The present study demonstrates that organo-mineral complexes restrict microbial degradation of OM and may, consequently, alter the carbon and nitrogen cycling dynamics within aquatic ecosystems.
Abstract:
Over 1 million km² of seafloor experience permanent low-oxygen conditions within oxygen minimum zones (OMZs). OMZs are predicted to grow as a consequence of climate change, potentially affecting oceanic biogeochemical cycles. The Arabian Sea OMZ impinges upon the western Indian continental margin at bathyal depths (150-1500 m), producing a strong depth-dependent oxygen gradient at the seafloor. The influence of the OMZ upon the short-term processing of organic matter by sediment ecosystems was investigated using in situ stable-isotope pulse-chase experiments. These deployed doses of 13C:15N-labelled organic matter onto the sediment surface at four stations from across the OMZ (water depths 540-1100 m; [O2] = 0.35-15 μM). To prevent experimentally induced anoxia, the mesocosms were not sealed. The 13C and 15N labels were traced into sediment, bacteria, and fauna, and 13C into sediment porewater DIC and DOC. However, the DIC and DOC flux to the water column could not be measured, limiting our capacity to obtain a mass balance for C in each experimental mesocosm. Linear Inverse Modeling (LIM) provides a method to obtain a mass-balanced model of carbon flow that integrates stable-isotope tracer data with community biomass and biogeochemical flux data from a range of sources. Here we present an adaptation of the LIM methodology used to investigate how ecosystem structure influenced carbon flow across the Indian margin OMZ. We demonstrate how oxygen conditions affect food-web complexity, affecting the linkages between the bacteria, foraminifera, and metazoan fauna, and their contributions to benthic respiration. The food-web models demonstrate how changes in ecosystem complexity are associated with oxygen availability across the OMZ and allow us to obtain a complete carbon budget for the stations where stable-isotope labelling experiments were conducted.
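The mass-balance idea behind LIM can be illustrated with a toy system. Real food-web LIMs involve many more flows, are usually underdetermined, and add inequality constraints, so this two-flow example with invented numbers only shows the linear-algebra core: measured totals constrain unknown carbon flows through linear balances.

```python
# Two unknown carbon flows (into bacteria and into fauna, mg C m^-2 d^-1)
# constrained by two linear mass balances:
#   c_bac + c_fau          = total tracer uptake
#   0.5*c_bac + 0.25*c_fau = measured community respiration
# (0.5 and 0.25 are assumed respired fractions for each group.)
total_uptake = 10.0
respiration = 4.0

a11, a12, b1 = 1.0, 1.0, total_uptake
a21, a22, b2 = 0.5, 0.25, respiration

det = a11 * a22 - a12 * a21         # solve the 2x2 system by Cramer's rule
c_bac = (b1 * a22 - a12 * b2) / det
c_fau = (a11 * b2 - b1 * a21) / det
```

When, as in the study, some fluxes cannot be measured, the system becomes underdetermined and LIM instead seeks a non-negative flow vector consistent with all balances and bounds.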
Abstract:
Inland waters are of global biogeochemical importance. They receive carbon inputs of ~ 4.8 Pg C/y, of which 12 % is buried, 18 % is transported to the oceans, and 70 % supports aquatic secondary production. However, the mechanisms that determine the fate of organic matter (OM) in these systems are poorly defined. One aspect of this is the formation of organo-mineral complexes in aquatic systems and their potential as a route for OM transport and burial vs. their use as carbon (C) and nitrogen (N) sources within aquatic systems. Organo-mineral particles form by sorption of dissolved OM to freshly eroded mineral surfaces and may contribute to ecosystem-scale particulate OM fluxes. We experimentally tested the availability of mineral-sorbed OM as a C & N source for streamwater microbial assemblages and streambed biofilms. Organo-mineral particles were constructed in vitro by sorption of 13C:15N-labelled amino acids to hydrated kaolin particles, and microbial degradation of these particles was compared with equivalent doses of 13C:15N-labelled free amino acids. Experiments were conducted in 120 ml mesocosms over 7 days using biofilms and water sampled from the Oberer Seebach stream (Austria). Each incubation experienced a 16:8 light:dark regime, with metabolism monitored via changes in oxygen concentrations between photoperiods. The relative fate of the organo-mineral particles was quantified by tracing the mineralization of the 13C and 15N labels and their incorporation into microbial biomass. Here we present the initial results of 13C-label mineralization, incorporation, and retention within the dissolved organic carbon pool. The results indicate that 514 (± 219) μmol/mmol of the 13C:15N-labelled free amino acids were mineralized over the 7-day incubations. By contrast, 186 (± 97) μmol/mmol of the mineral-sorbed amino acids were mineralized over a similar period. 
Thus, organo-mineral complexation reduced amino acid mineralization by ~ 60 %, with no differences observed between the streamwater and biofilm assemblages. Throughout the incubations, biofilms were observed to leach dissolved organic carbon (DOC). However, within the streamwater assemblage the presence of both organo-mineral particles and kaolin particles was associated with significant DOC removal (-1.7 % and -7.5 %, respectively). Consequently, the study demonstrates that mineral and organo-mineral particles can limit the availability of DOC in aquatic systems, providing nucleation sites for flocculation and fresh mineral surfaces that facilitate OM sorption. The formation of these organo-mineral particles subsequently restricts microbial OM degradation, potentially altering the transport and facilitating the burial of OM within streams.
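The quoted ~60 % reduction follows directly from the two mineralization yields reported above; this check uses only the central values and ignores the stated uncertainties.

```python
# Mineralization yields from the abstract (μmol mineralized per mmol added).
free_aa_mineralized = 514.0    # free amino acids
sorbed_aa_mineralized = 186.0  # mineral-sorbed amino acids

# Fractional reduction caused by organo-mineral complexation.
reduction = 1.0 - sorbed_aa_mineralized / free_aa_mineralized  # ≈ 0.64
```

The central values give a ≈64 % reduction, consistent with the "~ 60 %" figure once the ±219 and ±97 uncertainties on the two yields are taken into account.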
Abstract:
Background The use of technology in healthcare settings is on the increase and may represent a cost-effective means of delivering rehabilitation. Reductions in treatment time and delivery in the home are also thought to be benefits of this approach. Children and adolescents with brain injury often experience deficits in memory and executive functioning that can negatively affect their school work, social lives, and future occupations. Effective interventions that can be delivered at home, without the need for high-cost clinical involvement, could provide a means to address a current lack of provision. We have systematically reviewed studies examining the effects of technology-based interventions for the rehabilitation of deficits in memory and executive functioning in children and adolescents with acquired brain injury. Objectives To assess the effects of technology-based interventions compared to placebo intervention, no treatment, or other types of intervention, on the executive functioning and memory of children and adolescents with acquired brain injury. Search methods We ran the search on 30 September 2015. We searched the Cochrane Injuries Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R), EMBASE Classic + EMBASE (OvidSP), ISI Web of Science (SCI-EXPANDED, SSCI, CPCI-S, and CPSI-SSH), CINAHL Plus (EBSCO), two other databases, and clinical trials registers. We also searched the internet, screened reference lists, and contacted authors of included studies. Selection criteria Randomised controlled trials comparing the use of a technological aid for the rehabilitation of children and adolescents with memory or executive-functioning deficits with placebo, no treatment, or another intervention. 
Data collection and analysis Two review authors independently reviewed titles and abstracts identified by the search strategy. Following retrieval of full-text manuscripts, two review authors independently performed data extraction and assessed the risk of bias. Main results Four studies (involving 206 participants) met the inclusion criteria for this review. Three studies, involving 194 participants, assessed the effects of online interventions to target executive functioning (that is, monitoring and changing behaviour, problem solving, planning, etc.). These studies, which were all conducted by the same research team, compared online interventions against a 'placebo' (participants were given internet resources on brain injury). The interventions were delivered in the family home with additional support or training, or both, from a psychologist or doctoral student. The fourth study investigated the use of a computer program to target memory in addition to components of executive functioning (that is, attention, organisation, and problem solving). No information on the study setting was provided; however, a speech-language pathologist, teacher, or occupational therapist accompanied participants. Two studies assessed adolescents and young adults with mild to severe traumatic brain injury (TBI), while the remaining two studies assessed children and adolescents with moderate to severe TBI. Risk of bias We assessed the risk of selection bias as low for three studies and unclear for one study. Allocation bias was high in two studies, unclear in one study, and low in one study. Only one study (n = 120) was able to conceal allocation from participants, therefore overall selection bias was assessed as high. One study took steps to conceal assessors from allocation (low risk of detection bias), while the other three did not do so (high risk of detection bias). 
Primary outcome 1: Executive functioning: Technology-based intervention versus placebo Results from meta-analysis of three studies (n = 194) comparing online interventions with a placebo for children and adolescents with TBI favoured the intervention immediately post-treatment (standardised mean difference (SMD) -0.37, 95% confidence interval (CI) -0.66 to -0.09; P = 0.62; I² = 0%). (As there is no 'gold standard' measure in the field, we have not translated the SMD back to any particular scale.) This result is thought to represent only a small to medium effect size (using Cohen's rule of thumb, where 0.2 is a small effect, 0.5 a medium one, and 0.8 or above is a large effect); this is unlikely to have a clinically important effect on the participant. The fourth study (n = 12) reported differences between the intervention and control groups on problem solving (an important component of executive functioning). No means or standard deviations were presented for this outcome, therefore an effect size could not be calculated. The quality of evidence for this outcome according to GRADE was very low. This means future research is highly likely to change the estimate of effect. Primary outcome 2: Memory One small study (n = 12) reported a statistically significant difference in improvement in sentence recall between the intervention and control group following an eight-week remediation programme. No means or standard deviations were presented for this outcome, therefore an effect size could not be calculated. Secondary outcomes Two studies (n = 158) reported on anxiety/depression as measured by the Child Behavior Checklist (CBCL) and were included in a meta-analysis. We found no evidence of an effect with the intervention (mean difference -5.59, 95% CI -11.46 to 0.28; I² = 53%). The GRADE quality of evidence for this outcome was very low, meaning future research is likely to change the estimate of effect. A single study sought to record adverse events and reported none. 
Two studies reported on use of the intervention (range 0 to 13 and 1 to 24 sessions). One study reported on social functioning/social competence and found no effect. The included studies reported no data for other secondary outcomes (that is, quality of life and academic achievement). Authors' conclusions This review provides low-quality evidence for the use of technology-based interventions in the rehabilitation of executive functions and memory for children and adolescents with TBI. As all of the included studies contained relatively small numbers of participants (12 to 120), our findings should be interpreted with caution. The involvement of a clinician or therapist, rather than the use of the technology itself, may have led to the success of these interventions. Future research should seek to replicate these findings with larger samples, in other regions, using ecologically valid outcome measures, and with reduced clinician involvement.
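For readers unfamiliar with the effect-size measure used in the meta-analysis above, a standardised mean difference is the between-group difference in means divided by the pooled standard deviation. The sketch below uses invented group statistics, not data from the included studies.

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference: (treatment - control) / pooled SD."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical executive-function scores where lower = fewer problems,
# so a negative SMD favours the intervention (cf. the pooled -0.37).
d = smd(mean_t=48.0, sd_t=10.0, n_t=30, mean_c=52.0, sd_c=10.0, n_c=30)
```

Under Cohen's rule of thumb this d of -0.4 would, like the review's pooled estimate, count as a small-to-medium effect.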
Abstract:
Background: Poor follow-up after cataract surgery in developing countries makes assessment of operative quality uncertain. We aimed to assess two strategies to measure visual outcome: recording the visual acuity of all patients 3 or fewer days postoperatively (early postoperative assessment), and recording that of only those patients who returned for the final follow-up examination after 40 or more days without additional prompting. Methods: Each of 40 centres in ten countries in Asia, Africa, and Latin America recruited 40-120 consecutive surgical cataract patients. Operative-eye best-corrected visual acuity and uncorrected visual acuity were recorded before surgery, 3 or fewer days postoperatively, and 40 or more days postoperatively. Clinics logged whether each patient had returned for the final follow-up examination without additional prompting, had to be actively encouraged to return, or had to be examined at home. Visual outcome for each centre was defined as the proportion of patients with uncorrected visual acuity of 6/18 or better minus the proportion with uncorrected visual acuity of 6/60 or worse, and was calculated for each participating hospital with results from the early assessment of all patients and the late assessment of only those returning unprompted, with results from the final follow-up assessment for all patients used as the standard. Findings: Of 3708 participants, 3441 (93%) had final follow-up vision data recorded 40 or more days after surgery, 1831 of whom (51% of the 3581 total participants for whom mode of follow-up was recorded) had returned to the clinic without additional prompting. Visual outcomes by hospital from early postoperative and final follow-up assessment for all patients were highly correlated (Spearman's rs = 0.74, p < 0.0001). 
Visual outcomes from final follow-up assessment for all patients and for only those who returned without additional prompting were also highly correlated (rs = 0.86, p < 0.0001), even for the 17 hospitals with unprompted return rates of less than 50% (rs = 0.71, p = 0.002). When we divided hospitals into the top 25%, middle 50%, and bottom 25% by visual outcome, classification based on final follow-up assessment for all patients was the same as that based on early postoperative assessment for 27 (68%) of 40 centres, and the same as that based on data from patients who returned without additional prompting in 31 (84%) of 37 centres. Use of glasses to optimise vision at the time of the early and late examinations did not further improve the correlations. Interpretation: Early vision assessment for all patients and follow-up assessment only for patients who return to the clinic without prompting are valid measures of operative quality in settings where follow-up is poor. Funding: ORBIS International, Fred Hollows Foundation, Helen Keller International, International Association for the Prevention of Blindness Latin American Office, Aravind Eye Care System. © 2013 Congdon et al. Open Access article distributed under the terms of CC BY.
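The per-centre outcome index defined in the methods (proportion with uncorrected VA of 6/18 or better minus proportion with 6/60 or worse) is straightforward to compute; the acuity list below is invented for illustration.

```python
def visual_outcome(acuities, good=6 / 18, poor=6 / 60):
    """acuities: decimal VA values, e.g. 6/18 -> 0.33; returns the index."""
    n = len(acuities)
    p_good = sum(va >= good for va in acuities) / n  # 6/18 or better
    p_poor = sum(va <= poor for va in acuities) / n  # 6/60 or worse
    return p_good - p_poor

# Hypothetical uncorrected acuities for six operated eyes at one centre.
sample = [6 / 6, 6 / 9, 6 / 18, 6 / 24, 6 / 60, 6 / 120]
outcome = visual_outcome(sample)  # 3/6 good - 2/6 poor = 1/6
```

Because good outcomes add to the index and poor outcomes subtract from it, a centre is rewarded for high acuity and penalised for blindness-range results in a single number.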
Abstract:
Purpose. To determine the 5-year incidence and visual outcome of cataract surgery in an adult urban Chinese population. Methods. A comprehensive eye examination was performed at baseline and 5 years later on subjects participating in a population-based study. Incident cataract surgery was defined as having undergone surgery in either eye during the 5-year period. Postoperative visual impairment (PVI) was defined as visual acuity (VA) <6/18 based on both presenting VA (PVA) and best corrected VA (BCVA) in the operated eye. Results. Among the 1405 baseline participants, 75% (924) of survivors were seen at the 5-year follow-up visit. Forty-four returning participants (62 eyes) had undergone incident cataract surgery, an incidence of 4.84% (95% confidence interval [CI] 3.53-6.44). Detailed medical and surgical records were available for 54/62 (87.1%) eyes, and of these, 13/54 (24.1%) had an immediate preoperative visual acuity <6/120. All recorded surgeries were performed at tertiary-level hospitals with phacoemulsification and foldable intraocular lens implantation. Those undergoing cataract surgery were more educated (P < 0.05) and had poorer baseline PVA in the worse-seeing eye (P < 0.001) than 54 persons with baseline PVA <6/18 due to cataract who had not had surgery. Among the 62 operated eyes, 22.6% (14/62) had PVI based on PVA and 9.6% (6/62) based on BCVA. Conclusions. Despite a somewhat lower incidence, outcomes of cataract surgery in urban southern China are comparable with developed countries and better than for rural China. In urban China, emphasis should be on improving access to surgery. (Invest Ophthalmol Vis Sci. 2012;53:7936-7942) © 2012 The Association for Research in Vision and Ophthalmology, Inc.
Abstract:
PURPOSE: To describe the prevalence of different types of cataract and their association with visual acuity in a Tanzanian population aged 40 years and older. METHODS: A prevalence survey for lens opacity, glaucoma, and visual impairment was carried out on all residents aged 40 and older of six villages in Kongwa, Tanzania. One examiner graded the lens for presence of nuclear (NSC), posterior subcapsular (PSC), and cortical cataract (CC), using the new WHO Simplified Cataract Grading System. Visual acuity was measured in each eye, both presenting and best corrected, using an illiterate E chart. RESULTS: The proportion of eligible subjects participating was 90% (3268/3641). The prevalence of cataract was as follows: NSC, 15.6%; CC, 8.8%; and PSC, 1.9%. All types of cataract increased with age, from NSC, 1.7%; CC, 2.4%; and PSC, 0.4% for those aged 40 to 49 years to NSC, 59.2%; CC, 23.5%; and PSC, 5.9% for those aged 70 years and older (P < 0.0001 for all cataract types, χ² test for trend). Cataract prevalence was higher among women than men for NSC (P = 0.0001), but not for CC (P = 0.15) or PSC (P = 0.25), after adjusting for age. Prevalence rates of visual impairment (BCVA < 6/12), US blindness (≤ 6/60), and WHO blindness (< 6/120) for this population were 13.3%, 2.1%, and 1.3%, respectively. Older age and each of the major types of pure and mixed cataract were independently associated with worse vision in regression modeling. CONCLUSIONS: Unlike African-derived populations in Salisbury and Barbados, NSC rather than CC was most prevalent in this African population. The seemingly lower prevalence of CC may to some extent be explained by different grading schemes, differential availability of cataract surgery, the younger mean age of the Tanzanian subjects, and a higher prevalence of NSC in this population.
Abstract:
PURPOSE: To evaluate the prevalence and causes of visual impairment among Chinese children aged 3 to 6 years in Beijing. DESIGN: Population-based prevalence survey. METHODS: Presenting and pinhole visual acuity were tested using picture optotypes or, in children with pinhole vision < 6/18, a Snellen tumbling E chart. Comprehensive eye examinations and cycloplegic refraction were carried out for children with pinhole vision < 6/18 in the better-seeing eye. RESULTS: All examinations were completed on 17,699 children aged 3 to 6 years (95.3% of sample). Subjects with bilateral correctable low vision (presenting vision < 6/18 correctable to ≥ 6/18) numbered 57 (0.322%; 95% confidence interval [CI], 0.237% to 0.403%), while 14 (0.079%; 95% CI, 0.038% to 0.120%) had bilateral uncorrectable low vision (best-corrected vision of < 6/18 and ≥ 3/60), and 5 subjects (0.028%; 95% CI, 0.004% to 0.054%) were bilaterally blind (best-corrected acuity < 3/60). The etiology of 76 cases of visual impairment included: refractive error in 57 children (75%), hereditary factors (microphthalmos, congenital cataract, congenital motor nystagmus, albinism, and optic nerve disease) in 13 children (17.1%), amblyopia in 3 children (3.95%), and cortical blindness in 1 child (1.3%). The cause of visual impairment could not be established in 2 (2.63%) children. The prevalence of visual impairment did not differ by gender, but correctable low vision was significantly (P < 0.0001) more common among urban as compared with rural children. CONCLUSION: The leading causes of visual impairment among Chinese preschool-aged children are refractive error and hereditary eye diseases. A higher prevalence of refractive error is already present among urban as compared with rural children in this preschool population.
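As a worked example of the quoted intervals, a normal-approximation 95 % CI for the correctable low-vision prevalence (57 of 17,699) can be computed as below; the published interval may have used a different (e.g. exact) method, so small discrepancies in the last digit are expected.

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a normal-approximation 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1.0 - p) / n)  # standard error of a proportion
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(57, 17699)  # ~0.322%, CI roughly 0.24% to 0.41%
```

The point estimate reproduces the abstract's 0.322%, and the approximate interval lands close to the quoted 0.237% to 0.403%.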