974 results for Lane use control signals.
Abstract:
It has been hypothesized that children and adolescents might be more vulnerable to possible health effects from mobile phone exposure than adults. We investigated whether mobile phone use is associated with brain tumor risk among children and adolescents.
Abstract:
Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents.
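A minimal sketch of the kind of simulation this abstract describes (not the CEFALO protocol itself; the call-count distribution and the "heavy use" cut-off below are purely illustrative) shows how differential recall error alone can distort an odds ratio even when the true effect is null:

```python
# Monte Carlo sketch: differential over-reporting of mobile phone use by
# cases (+9%) and controls (+34%) biases the "heavy use" odds ratio away
# from the true null value of 1.0. All distributions/cut-offs are made up.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_controls = 350, 650

# True weekly call counts, identical in both groups (true OR = 1)
true_cases = rng.lognormal(mean=2.0, sigma=0.8, size=n_cases)
true_controls = rng.lognormal(mean=2.0, sigma=0.8, size=n_controls)

# Self-reports with differential recall error
reported_cases = true_cases * 1.09
reported_controls = true_controls * 1.34

# Classify "heavy use" from self-reports with an arbitrary cut-off
cutoff = np.percentile(np.concatenate([true_cases, true_controls]), 75)
a = (reported_cases > cutoff).sum()      # exposed cases
b = (reported_cases <= cutoff).sum()     # unexposed cases
c = (reported_controls > cutoff).sum()   # exposed controls
d = (reported_controls <= cutoff).sum()  # unexposed controls

observed_or = (a * d) / (b * c)
# With controls over-reporting more than cases, the null OR is pushed below 1.
print(f"Observed OR under a true OR of 1.0: {observed_or:.2f}")
```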
Abstract:
Complete resection of grade II gliomas might prolong survival but is not always possible. The goal of the study was to evaluate the location of unexpected grade II glioma remnants after assumed complete removal with intraoperative (iop) MRI and to assess the reasons for their non-detection.
Abstract:
HIV-1 sequence diversity is affected by selection pressures arising from host genomic factors. Using paired human and viral data from 1071 individuals, we ran >3000 genome-wide scans, testing for associations between host DNA polymorphisms, HIV-1 sequence variation and plasma viral load (VL), while considering human and viral population structure. We observed significant human SNP associations to a total of 48 HIV-1 amino acid variants (p < 2.4 × 10⁻¹²). All associated SNPs mapped to the HLA class I region. Clinical relevance of host and pathogen variation was assessed using VL results. We identified two critical advantages to the use of viral variation for identifying host factors: (1) association signals are much stronger for HIV-1 sequence variants than VL, reflecting the ‘intermediate phenotype’ nature of viral variation; (2) association testing can be run without any clinical data. The proposed genome-to-genome approach highlights sites of genomic conflict and is a strategy generally applicable to studies of host–pathogen interaction.
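For illustration only, a single cell of such a genome-to-genome scan could look like the sketch below, which tests one host SNP against one viral amino acid variant with Fisher's exact test on simulated data; the real analysis additionally models human and viral population structure and corrects across >3000 genome-wide scans:

```python
# One SNP-by-amino-acid association test on simulated carrier/variant labels.
# The variable names and the simulated data are hypothetical.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
n = 1071                                   # paired host/virus samples
snp_carrier = rng.integers(0, 2, size=n)   # 1 = host carries the SNP allele
aa_variant = rng.integers(0, 2, size=n)    # 1 = virus carries the amino acid variant

table = np.array([
    [np.sum((snp_carrier == 1) & (aa_variant == 1)),
     np.sum((snp_carrier == 1) & (aa_variant == 0))],
    [np.sum((snp_carrier == 0) & (aa_variant == 1)),
     np.sum((snp_carrier == 0) & (aa_variant == 0))],
])
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2e}")
# Genome-wide, only hits below a multiplicity-corrected threshold
# (p < 2.4e-12 in the abstract) would be reported.
```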
Abstract:
OBJECTIVE Use of diuretics has been associated with an increased risk of gout. Data on different types of diuretics are scarce. We undertook this study to investigate the association between use of loop diuretics, thiazide or thiazide-like diuretics, and potassium-sparing agents and the risk of developing incident gout. METHODS We conducted a retrospective population-based case-control analysis using the General Practice Research Database established in the UK. We identified case patients who were diagnosed as having incident gout between 1990 and 2010. One control patient was matched to each case patient for age, sex, general practice, calendar time, and years of active history in the database. We used conditional logistic regression to calculate odds ratios (ORs) and 95% confidence intervals (95% CIs), and we adjusted for potential confounders. RESULTS We identified 91,530 incident cases of gout and the same number of matched controls. Compared to past use of diuretics from each respective drug class, adjusted ORs for current use of loop diuretics, thiazide diuretics, thiazide-like diuretics, and potassium-sparing diuretics were 2.64 (95% CI 2.47-2.83), 1.70 (95% CI 1.62-1.79), 2.30 (95% CI 1.95-2.70), and 1.06 (95% CI 0.91-1.23), respectively. Combined use of loop diuretics and thiazide diuretics was associated with the highest relative risk estimates of gout (adjusted OR 4.65 [95% CI 3.51-6.16]). Current use of calcium channel blockers or losartan slightly attenuated the risk of gout in patients who took diuretics. CONCLUSION Use of loop diuretics, thiazide diuretics, and thiazide-like diuretics was associated with an increased risk of incident gout, although use of potassium-sparing agents was not.
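As a hedged illustration of the reported effect sizes, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a hypothetical 2×2 table; it does not reproduce the matched, confounder-adjusted conditional logistic regression used in the study:

```python
# Unadjusted odds ratio and Wald 95% CI from a 2x2 exposure-by-outcome table.
# Cell counts are invented for illustration only.
import numpy as np

a, b = 400, 600    # cases: currently exposed / not exposed (hypothetical)
c, d = 200, 800    # controls: currently exposed / not exposed (hypothetical)

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```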
Abstract:
The Earth’s carbon and hydrologic cycles are intimately coupled by gas exchange through plant stomata [1-3]. However, uncertainties in the magnitude [4-6] and consequences [7,8] of the physiological responses [9,10] of plants to elevated CO2 in natural environments hinder modelling of terrestrial water cycling and carbon storage [11]. Here we use annually resolved long-term δ13C tree-ring measurements across a European forest network to reconstruct the physiologically driven response of intercellular CO2 (Ci) caused by atmospheric CO2 (Ca) trends. When removing meteorological signals from the δ13C measurements, we find that trees across Europe regulated gas exchange so that for one ppmv atmospheric CO2 increase, Ci increased by ~0.76 ppmv, most consistent with moderate control towards a constant Ci/Ca ratio. This response corresponds to twentieth-century intrinsic water-use efficiency (iWUE) increases of 14 ± 10 and 22 ± 6% at broadleaf and coniferous sites, respectively. An ensemble of process-based global vegetation models shows similar CO2 effects on iWUE trends. Yet, when operating these models with climate drivers reintroduced, despite decreased stomatal opening, 5% increases in European forest transpiration are calculated over the twentieth century. This counterintuitive result arises from lengthened growing seasons, enhanced evaporative demand in a warming climate, and increased leaf area, which together oppose effects of CO2-induced stomatal closure. Our study questions changes to the hydrological cycle, such as reductions in transpiration and air humidity, hypothesized to result from plant responses to anthropogenic emissions.
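For readers unfamiliar with the underlying quantities, the standard leaf gas-exchange relations commonly used in such tree-ring studies (shown here in simplified form, without mesophyll or post-photosynthetic terms) link δ13C discrimination, Ci/Ca and iWUE:

```latex
% Simplified discrimination model: a ~ 4.4 per mil (diffusion),
% b ~ 27 per mil (carboxylation)
\Delta^{13}\mathrm{C} \;\approx\; a + (b - a)\,\frac{C_i}{C_a}
\\[6pt]
% Intrinsic water-use efficiency (assimilation per unit stomatal
% conductance); the factor 1.6 is the H2O/CO2 diffusivity ratio
\mathrm{iWUE} \;=\; \frac{A}{g_s} \;=\; \frac{C_a - C_i}{1.6}
\;=\; \frac{C_a}{1.6}\left(1 - \frac{C_i}{C_a}\right)
```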
Abstract:
Background: There is evidence that drinking during residential treatment is related to various factors, such as patients’ general control beliefs and self-efficacy, as well as to external control of alcohol use by program staff and situations where there is temptation to drink. As alcohol use during treatment has been shown to be associated with the resumption of alcohol use after discharge from residential treatment, we aimed to investigate how these variables are related to alcohol use during abstinence-oriented residential treatment programs for alcohol use disorders (AUD). Methods: In total, 509 patients who entered 1 of 2 residential abstinence-oriented treatment programs for AUD were included in the study. After detoxification, patients completed a standardized diagnostic procedure including interviews and questionnaires. Drinking was assessed by patients’ self-report of at least 1 standard drink or by positive breathalyzer testing. The 2 residential programs were categorized as high or low control according to the average number of tests per patient. Results: Regression analysis revealed a significant interaction effect between internal and external control, suggesting that patients with high internal locus of control and high frequency of control by staff demonstrated the least alcohol use during treatment (16.7%), while patients with low internal locus of control in programs with low external control were more likely to use alcohol during treatment (45.9%). No effects were found for self-efficacy and temptation. Conclusions: As alcohol use during treatment is most likely associated with poor treatment outcomes, external control may improve treatment outcomes and particularly support patients with low internal locus of control, who show the highest risk for alcohol use during treatment. High external control may complement high internal control to improve alcohol use prevention while in treatment. Key Words: Alcohol Dependence, Alcohol Use, Locus of Control, Alcohol Testing.
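A minimal sketch of the interaction analysis described in the Results, using simulated data and hypothetical variable names (drank, internal_loc, external_control), could be written as a logistic regression with an interaction term:

```python
# Logistic regression with an internal-by-external control interaction on
# simulated patient data; coefficients used to simulate the outcome are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "internal_loc": rng.normal(3.0, 1.0, n),     # internal locus-of-control score
    "external_control": rng.integers(0, 2, n),   # 1 = high-control program
})
# Drinking risk falls with internal control, more so under high external control
logit_p = (0.5
           - 0.4 * df["internal_loc"]
           - 0.3 * df["external_control"]
           - 0.3 * df["internal_loc"] * df["external_control"])
df["drank"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("drank ~ internal_loc * external_control", data=df).fit(disp=0)
print(model.params)   # the ':' term is the internal-by-external interaction
```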
Abstract:
BACKGROUND CONTEXT Several randomized controlled trials (RCTs) have compared patient outcomes of anterior (cervical) interbody fusion (AIF) with those of total disc arthroplasty (TDA). Because RCTs have known limitations with regard to their external validity, the comparative effectiveness of the two therapies in daily practice remains unknown. PURPOSE This study aimed to compare patient-reported outcomes after TDA versus AIF based on data from an international spine registry. STUDY DESIGN AND SETTING A retrospective analysis of registry data was carried out. PATIENT SAMPLE Inclusion criteria were degenerative disc or disc herniation of the cervical spine treated by single-level TDA or AIF, no previous surgery, and a Core Outcome Measures Index (COMI) completed at baseline and at least 3 months' follow-up. Overall, 987 patients were identified. OUTCOME MEASURES Neck and arm pain relief and COMI score improvement were the outcome measures. METHODS Three separate analyses were performed to compare TDA and AIF surgical outcomes: (1) mimicking an RCT setting, with admission criteria typical of those in published RCTs, a 1:1 matched analysis was carried out in 739 patients; (2) an analysis was performed on 248 patients outside the classic RCT spectrum, that is, with one or more typical RCT exclusion criteria; (3) a subgroup analysis of all patients with additional follow-up longer than 2 years (n=149). RESULTS Matching resulted in 190 pairs with an average follow-up of 17 months that had no residual significant differences for any patient characteristics. Small but statistically significant differences in outcome were observed in favor of TDA, which are potentially clinically relevant. Subgroup analyses of atypical patients and of patients with longer-term follow-up showed no significant differences in outcome between the treatments. CONCLUSIONS The results of this observational study were in accordance with those of the published RCTs, suggesting substantial pain reduction both after AIF and TDA, with slightly greater benefit after arthroplasty. The analysis of atypical patients suggested that, in patients outside the spectrum of clinical trials, both surgical interventions appeared to work to a similar extent to that shown for the cohort in the matched study. Also, in the longer-term perspective, both therapies resulted in similar benefits to the patients.
Abstract:
In June 1995 a case-control study was initiated by the Texas Department of Health among Mexican American women residing in the fourteen counties of the Texas-Mexico border. Case women had carried infants with neural tube defects. Control women had given birth to infants without neural tube defects. The case-control protocol included a general questionnaire which elicited information regarding illnesses experienced and antibiotics taken from three months prior to conception to three months after conception. An assessment of the associations between periconceptional diarrhea and the risk of neural tube defects indicated that the unadjusted association of diarrhea and risk of neural tube defect was significant (OR = 3.3, CI = 1.4–7.6). The unadjusted association of use of oral antimicrobials and risk of neural tube defect was also significant (OR = 3.4, CI = 1.6–7.3). These associations persisted among women who had no fever during the periconceptional period and were present irrespective of folate intake. Diarrhea was associated with an increased risk of NTD independent of use of antimicrobials. The converse was also true; antimicrobials were associated with an increased risk of NTD independent of diarrhea. Further research regarding these potentially modifiable risk factors is warranted. Replication of these findings could result in interventions in addition to folate supplementation.
Abstract:
In this paper we develop a simple economic model to analyze the use of a policy that combines a voluntary approach to controlling nonpoint-source pollution with a background threat of an ambient tax if the voluntary approach is unsuccessful in meeting a pre-specified environmental goal. We first consider the case where the policy is applied to a single farmer, and then extend the analysis to the case where the policy is applied to a group of farmers. We show that in either case such a policy can induce cost-minimizing abatement without the need for farm-specific information. In this sense, the combined policy approach is not only more effective in protecting environmental quality than a pure voluntary approach (which does not ensure that water quality goals are met) but also less costly than a pure ambient tax approach (since it entails lower information costs). However, when the policy is applied to a group of farmers, we show that there is a potential tradeoff in the design of the policy. In this context, lowering the cutoff level of pollution used for determining total tax payments increases the likely effectiveness of the combined approach but also increases the potential for free riding. By setting the cutoff level equal to the target level of pollution, the regulator can eliminate free riding and ensure that cost-minimizing abatement is the unique Nash equilibrium under which the target is met voluntarily. However, this cutoff level also ensures that zero voluntary abatement is a Nash equilibrium. In addition, with this cutoff level the equilibrium under which the target is met voluntarily will not strictly dominate the equilibrium under which it is not. We show that all results still hold if the background threat instead takes the form of reducing government subsidies if a pre-specified environmental goal is not met.
Abstract:
The present work examines the relationship between pH-induced changes in growth and stable isotopic composition of coccolith calcite in two coccolithophore species with a geological perspective. These cells (Gephyrocapsa oceanica and Coccolithus pelagicus) with differing physiologies and vital effects possess a growth optimum corresponding to average pH of surface seawater in the geological period during their first known occurrence. Diminished growth rates outside of their optimum pH range are explained by the challenge of proton translocation into the extracellular environment at low pH, and enhanced aqueous CO2 limitation at high pH. These diminished growth rates correspond to a lower degree of oxygen isotopic disequilibrium in G. oceanica. In contrast, the slower growing and ancient species C. pelagicus, which typically precipitates near-equilibrium calcite, does not show any modulation of oxygen isotope signals with changing pH. In CO2-utilizing unicellular algae, carbon and oxygen isotope compositions are best explained by the degree of utilization of the internal dissolved inorganic carbon (DIC) pool and the dynamics of isotopic re-equilibration inside the cell. Thus, the "carbonate ion effect" may not apply to coccolithophores. This difference with foraminifera can be traced to different modes of DIC incorporation into these two distinct biomineralizing organisms. From a geological perspective, these findings have implications for refining the use of oxygen isotopes to infer more reliable sea surface temperatures (SSTs) from fossil carbonates, and contribute to a better understanding of how climate-relevant parameters are recorded in the sedimentary archive.
Abstract:
A real case of an underground railway in the vicinity of residential buildings has been analyzed. The study has been based on a two-dimensional BEM model including a tunnel and a typical building. The soil properties were obtained using geophysical techniques. After a sensitivity study, the model has been simplified and validated by comparison with "in situ" measurements. Using this simplified model, a parametric study has been done including trenches and walls of different materials and different depths at two different distances from the tunnel. The reductions obtained with the different solutions can then be compared.
Abstract:
The interest in LED lighting has been growing recently due to the high efficacy, lifetime and ruggedness that this technology offers. However, the key element to guarantee those parameters with these new electronic devices is to keep the working temperature of the semiconductor crystal under control. This paper proposes a LED lamp design that fulfils the requirements of PV lighting systems, whose main quality criterion is reliability. It uses directly as a power supply a non-stabilized constant voltage source, such as batteries. An electronic control architecture is used to regulate the current applied to the LED matrix according to its temperature and the output voltage of the batteries with two pulse-width modulation (PWM) signals. The first one connects and disconnects the LEDs to the power supply, and the second one connects and disconnects several emitters to the electric circuit, changing its overall impedance. A prototype of the LED lamp has been implemented and tested at different temperatures and battery voltages.
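The two-duty-cycle control idea can be sketched as follows; the derating thresholds and the control_step logic are illustrative assumptions, not the paper's actual design:

```python
# Illustrative two-PWM control step: one duty cycle gates the supply switch
# as a function of battery voltage, the other sheds part of the LED matrix
# when the crystal temperature exceeds a limit. All constants are assumptions.
T_MAX_C = 85.0      # assumed maximum crystal/heatsink temperature
V_NOMINAL = 12.0    # assumed nominal battery voltage

def control_step(temp_c, v_batt):
    """Return (supply_duty, matrix_duty) PWM duty cycles in the range 0..1."""
    # PWM 1: lower the supply duty cycle as the battery voltage rises above nominal
    supply_duty = min(1.0, V_NOMINAL / max(v_batt, 1e-3))
    # PWM 2: keep the whole matrix in-circuit while cool, derate above the limit
    if temp_c < T_MAX_C:
        matrix_duty = 1.0
    else:
        matrix_duty = max(0.0, 1.0 - (temp_c - T_MAX_C) / 20.0)
    return supply_duty, matrix_duty

print(control_step(60.0, 13.2))   # cool crystal, slightly high battery voltage
print(control_step(95.0, 11.8))   # hot crystal: part of the matrix is shed
```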