Abstract:
Optimal tax formulas expressed in "sufficient statistics" are usually calibrated under the assumption that the relevant tax elasticities are unaffected by other available policy instruments. In practice, though, tax authorities have many more instruments than the mere tax rates, and tax elasticities are functions of all these policy instruments. In this paper we provide evidence that tax elasticities are extremely sensitive to a particular policy instrument: the level of tax enforcement. We exploit a natural experiment that took place in France in 1983, when the tax administration tightened the requirements to claim charitable deductions. The reform led to a substantial drop in the amount of contributions reported to the administration, which can be credibly attributed to overreporting of charitable contributions before the reform, rather than to a real change in giving behaviour. We show that the reform was also associated with a substantial decline in the absolute value of the elasticity of reported contributions. This finding allows us to partially identify the elasticity of overreporting contributions, which is shown to be large, below -2 in the lax enforcement regime. We further show, using bunching of taxpayers at kink points of the tax schedule, that the elasticity of taxable income also experienced a significant decline after the reform. Our results suggest that optimizing the tax rate for a given tax elasticity when other policy instruments are not optimized can lead to misleading conclusions when tax authorities have another instrument that could set the tax elasticity itself at its optimal level, as in Kopczuk and Slemrod [2002].
Abstract:
The objective of this work was to evaluate the effect of grazing interval and period of evaluation on tissue turnover in Tanzania grass pastures (Panicum maximum cv. Tanzania) and to ascertain whether herbage accumulation rate can be used as a criterion to establish a defoliation schedule for this grass in the Southeast of Brazil. A randomized block design with a split-plot arrangement was used. The effect of three grazing intervals was evaluated within seven periods between October 1995 and September 1996. Responses monitored were leaf and stem elongation rates, leaf senescence rate, stem length, and tiller density. Net herbage accumulation rate was calculated using tissue turnover data. The grazing intervals for Tanzania grass should be around 38 days between October and April (spring and early autumn) and 28 days during the reproductive phase of the grass (April/May). Between May and September (late autumn and winter), the grazing interval should be around 48 days. Herbage accumulation rate is not a good criterion to establish defoliation time for Tanzania grass. Studies on the effects of stem production on grazing efficiency, animal intake and forage quality are needed to improve Tanzania grass management.
Abstract:
Background: Bone health is a concern when treating early stage breast cancer patients with adjuvant aromatase inhibitors. Early detection of patients (pts) at risk of osteoporosis and fractures may be helpful for starting preventive therapies and selecting the most appropriate endocrine therapy schedule. We present statistical models describing the evolution of lumbar and hip bone mineral density (BMD) in pts treated with tamoxifen (T), letrozole (L) and sequences of T and L. Methods: Available dual-energy x-ray absorptiometry exams (DXA) of pts treated in trial BIG 1-98 were retrospectively collected from Swiss centers. Treatment arms: A) T for 5 years, B) L for 5 years, C) 2 years of T followed by 3 years of L and D) 2 years of L followed by 3 years of T. Pts without DXA were used as a control for detecting selection biases. Patients randomized to arm A were subsequently allowed an unplanned switch from T to L. Allowing for variations between DXA machines and centres, two repeated measures models, using a covariance structure that allows for different times between DXA, were used to estimate changes in hip and lumbar BMD (g/cm2) from trial randomization. Prospectively defined covariates at the time of trial randomization, considered as fixed effects in the multivariable models in an intention-to-treat analysis, were: age, height, weight, hysterectomy, race, known osteoporosis, tobacco use, prior bone fracture, prior hormone replacement therapy (HRT), bisphosphonate use and previous neo-/adjuvant chemotherapy (ChT). Similarly, the T-scores for lumbar and hip BMD measurements were modeled using a per-protocol approach (allowing for the treatment switch in arm A), specifically studying the effect of each therapy on the T-score percentage. Results: A total of 247 out of 546 pts had between 1 and 5 DXA; a total of 576 DXA were collected. Numbers of DXA measurements per arm were: arm A 133, B 137, C 141 and D 135. The median follow-up time was 5.8 years.
Significant factors positively correlated with lumbar and hip BMD in the multivariate analysis were weight, previous HRT use, neo-/adjuvant ChT, hysterectomy and height. Significant factors negatively correlated in the models were osteoporosis, treatment arm (B/C/D vs. A), time since endocrine therapy start, age and smoking (current vs. never). Modeling the T-score percentage, differences from T to L were -4.199% (p = 0.036) and -4.907% (p = 0.025) for the hip and lumbar measurements respectively, before any treatment switch occurred. Conclusions: Our statistical models describe the lumbar and hip BMD evolution for pts treated with L and/or T. The results for both localisations confirm that, contrary to expectation, the sequential schedules do not seem less detrimental to BMD than L monotherapy. The estimated difference in BMD T-score percentage is at least 4% from T to L.
Abstract:
The Institute has professionals with extensive experience in training, specifically in the field of police and emergency-services training. It also has very talented people; above all, our institution has public professionals with a desire to serve, who love the security and emergency professions and want to provide responders with the best knowledge to make them better professionals every day. In the quest for continuous improvement in training, e-learning first gained a presence at the Institute in 2009. The virtual training methodology has become a facilitator for the training of various professionals, avoiding geographical displacement and easing class scheduling.
Abstract:
Purpose (1) To identify work-related stressors that are associated with psychiatric symptoms in a Swiss sample of policemen and (2) to develop a model for identifying officers at risk of developing mental health problems. Method The study design is cross-sectional. A total of 354 male police officers answered a questionnaire assessing a wide spectrum of work-related stressors. Psychiatric symptoms were assessed using the "TST questionnaire" (Langner in J Health Hum Behav 4, 269-276, 1962). Logistic regression with a backward procedure was used to identify a set of variables collectively associated with high scores for psychiatric symptoms. Results A total of 42 (11.9%) officers had a high score for psychiatric symptoms. Nearly all potential stressors considered were significantly associated (at P < 0.05) with a high score for psychiatric symptoms. A significant model including 6 independent variables was identified: lack of support from superior and organization OR = 3.58 (1.58-8.13), self-perception of bad quality work OR = 2.99 (1.35-6.59), inadequate work schedule OR = 2.84 (1.22-6.62), high mental/intellectual demand OR = 2.56 (1.12-5.86), age (in decades) OR = 1.82 (1.21-2.73), and score for physical environment complaints OR = 1.30 (1.03-1.64). Conclusions Most of the work stressors considered are associated with psychiatric symptoms. Prevention should target the most frequent stressors most strongly associated with symptoms. Complaints of police officers about stressors should receive proper consideration from the management of public administration. Such complaints might be the expression of psychiatric caseness requiring medical assistance. Particular attention should be given to police officers complaining about many of the stressors identified in this study's multivariable model. [Authors]
Abstract:
Methadone is a 50:50 mixture of two enantiomers and (R)-methadone accounts for the majority of its opioid effect. The aim of this study was to determine whether a blood concentration of (R)-methadone can be associated with therapeutic response in addict patients in methadone maintenance treatment. Trough plasma concentrations of (R)-, (S)- and (R,S)-methadone were measured in 180 patients in maintenance treatment. Therapeutic response was defined by the absence of illicit opiate or cocaine in urine samples collected during a 2-month period prior to blood sampling. A large interindividual variability of (R)-methadone concentration-to-dose-to-weight ratios was found (mean, S.D., median, range: 112, 54, 100, 19-316 ng x kg/ml x mg). With regard to the consumption of illicit opiate (but not of cocaine), a therapeutic response was associated with (R)- (at 250 ng/ml) and (R,S)-methadone (at 400 ng/ml) but not with (S)-methadone concentrations. A higher specificity was calculated for (R)- than for (R,S)-methadone, as the number of non-responders above this threshold divided by the total number of non-responders was higher for (R,S)-methadone (19%) than for (R)-methadone (7%). The results support the use of therapeutic drug monitoring of (R)-methadone in cases of continued intake of illicit opiates. Due to the variability of methadone concentration-to-dose-to-weight ratios, theoretical doses of racemic methadone could be as small as 55 mg/day and as large as 921 mg/day to produce a plasma (R)-methadone concentration of 250 ng/ml in a 70-kg patient. This demonstrates the importance of individualizing methadone treatment.
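The quoted dose range follows directly from the reported concentration-to-dose-to-weight ratios. A minimal sketch of the arithmetic (Python, illustrative only, assuming the ratio units of ng·kg/(ml·mg) as reported):

```python
def racemic_dose_mg_per_day(target_conc_ng_ml, weight_kg, ratio_ng_kg_per_ml_mg):
    """Invert the concentration-to-dose-to-weight ratio:
    ratio = concentration * weight / dose  =>  dose = concentration * weight / ratio."""
    return target_conc_ng_ml * weight_kg / ratio_ng_kg_per_ml_mg

# Reported ratio range: 19-316 ng*kg/(ml*mg); target 250 ng/ml (R)-methadone in a 70-kg patient.
low = racemic_dose_mg_per_day(250, 70, 316)   # patient at the high end of the ratio range
high = racemic_dose_mg_per_day(250, 70, 19)   # patient at the low end of the ratio range
print(round(low), round(high))  # -> 55 921
```

The ~17-fold spread between the two endpoint doses is exactly the interindividual variability argument the abstract makes for individualized dosing.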
Abstract:
In order to determine whether 5-fluorouracil (5FU) could potentiate the effect of radioimmunotherapy (RIT), nude mice bearing subcutaneous human colon carcinoma xenografts were treated with 1 or 2 intravenous injection(s) of subtherapeutic doses of 131I-labeled F(ab')2 from anti-carcinoembryonic antigen monoclonal antibodies combined with 5 daily intraperitoneal injections of 5FU. Control mice received either 131I F(ab')2 alone, 5FU alone or no treatment. RIT alone induced significant tumor regression, while 5FU alone gave only minimal tumor growth inhibition. The combined treatment also resulted in long-term tumor regression, with tumors remaining significantly smaller than in the RIT-alone group. There was, however, no significant difference in tumor recurrence time between the groups treated with RIT alone or with RIT + 5FU. Myelotoxicity, the major side effect of RIT, detected by the decrease of peripheral white blood cells (WBC), was shown to be almost identical between the groups receiving only RIT or only 5FU. Surprisingly, there was no cumulative bone marrow toxicity in animals which received 5FU before RIT. Furthermore, in the latter group, the WBC levels after RIT were significantly higher than in the control group receiving only RIT. Taken together, the results demonstrate the higher therapeutic efficacy of RIT as compared to 5FU in this model. They do not show, however, that the combination of the two forms of treatment can induce longer tumor remission. Interestingly, the WBC results suggest that 5FU given before RIT can have a radioprotective effect on bone marrow, possibly by selecting radioresistant bone marrow stem cells.
Abstract:
The Electro-Reflective Measuring Apparatus (ERMA) was developed by the Minnesota Department of Highways in 1974 to measure the retro-reflective characteristics of pavement marking materials. Minnesota researchers recommended that due to the increased cost of pavement marking materials and reduced availability of these materials, ERMA can and should be used as a maintenance management tool to determine when painting is necessary rather than according to a fixed time schedule. The Iowa DOT Office of Materials built an ERMA device patterned after Minnesota's design in 1976. Subsequent efforts to calibrate and correlate this ERMA device to District Paint Foremen ratings proved unsuccessful, and ERMA modification or abandonment was recommended in 1979. Lyman Moothart, Materials Lab. Tech. 4, modified the ERMA device in 1980 and correlation attempts to District Paint Foremen ratings conducted in November 1980 have been moderately successful. A Paint/No Paint ERMA value has been established which will identify about 90% of the painting needs but will also include about 40% of the marking lines not needing repainting. The Office of Maintenance should establish a trial ERMA program to study the accuracy and potential cost savings of using ERMA to identify pavement marking needs.
Abstract:
BACKGROUND: Long-term side-effects and cost of HIV treatment motivate the development of simplified maintenance. Monotherapy with ritonavir-boosted lopinavir (LPV/r-MT) is the most widely studied strategy. However, efficacy of LPV/r-MT in compartments remains to be shown. METHODS: Randomized controlled open-label trial comparing LPV/r-MT with continued treatment for 48 weeks in treated patients with fully suppressed viral load. The primary endpoint was treatment failure in the central nervous system [cerebrospinal fluid (CSF)] and/or genital tract. Treatment failure in blood was defined as two consecutive HIV RNA levels more than 400 copies/ml. RESULTS: The trial was prematurely stopped when six patients on monotherapy (none in continued treatment-arm) demonstrated a viral failure in blood. At study termination, 60 patients were included, 29 randomized to monotherapy and 13 additional patients switched from continued treatment to monotherapy after 48 weeks. All failures occurred in patients with a nadir CD4 cell count below 200/microl and within the first 24 weeks of monotherapy. Among failing patients, all five patients with a lumbar puncture had an elevated HIV RNA load in CSF and four of six had neurological symptoms. Viral load was fully resuppressed in all failing patients after resumption of the original combination therapy. No drug resistant virus was found. The only predictor of failure was low nadir CD4 cell count (P < 0.02). CONCLUSION: Maintenance of HIV therapy with LPV/r alone should not be recommended as a standard strategy; particularly not in patients with a CD4 cell count nadir less than 200/microl. Further studies are warranted to elucidate the role of the central nervous system compartment in monotherapy-failure.
Abstract:
OBJECTIVE: To investigate whether HIV-infected patients on a stable and fully suppressive combination antiretroviral therapy (cART) regimen could safely be monitored less often than the current recommendation of every 3 months. DESIGN: Two thousand two hundred and forty patients from the EuroSIDA study who maintained a stable and fully suppressed cART regimen for 1 year were included in the analysis. METHODS: Risk of treatment failure, defined by viral rebound, fall in CD4 cell count, development of a new AIDS-defining illness, serious opportunistic infection or death, in the 12 months following a year of a stable and fully suppressed regimen was assessed. RESULTS: One hundred thirty-one (6%) patients experienced treatment failure in the 12 months following a year of stable therapy; viral rebound occurred in 99 (4.6%) patients. After 3, 6 and 12 months, patients had a 0.3% [95% confidence interval (CI) 0.1-0.5], 2.2% (95% CI 1.6-2.8) and 6.0% (95% CI 5.0-7.0) risk of treatment failure, respectively. Patients who spent more than 80% of their time on cART with fully suppressed viraemia prior to baseline had a 38% reduced risk of treatment failure, hazard ratio 0.62 (95% CI 0.42-0.90, P = 0.01). CONCLUSION: Patients who have responded well to cART and are on a well tolerated and durably fully suppressive cART regimen have a low chance of experiencing treatment failure in the next 3-6 months. Therefore, in this subgroup of otherwise healthy patients, it may be reasonable to extend visit intervals to 6 months, with cost and time savings to both the treating clinics and the patients.
Abstract:
At present, there is little fundamental guidance available to assist contractors in choosing when to schedule saw cuts on joints. To conduct pavement finishing and sawing activities effectively, however, contractors need to know when a concrete mixture is going to reach initial set, or when the sawing window will open. Previous research investigated the use of the ultrasonic pulse velocity (UPV) method to predict the saw-cutting window for early entry sawing. The results indicated that the method has the potential to provide effective guidance to contractors as to when to conduct early entry sawing. The aim of this project was to conduct similar work to observe the correlation between initial setting and conventional sawing time. Sixteen construction sites were visited in Minnesota and Missouri over a two-year period. At each site, initial set was determined using a p-wave propagation technique with a commercial device. Calorimetric data were collected using a commercial semi-adiabatic device at the majority of the sites. Concrete samples were collected in front of the paver and tested using both methods with equipment that was set up next to the pavement during paving. The data collected revealed that the UPV method looks promising for both early entry and conventional sawing in the field; early entry and conventional sawing times can both be predicted for the range of mixtures tested.
Abstract:
OBJECTIVE: To estimate the effect of combined antiretroviral therapy (cART) on mortality among HIV-infected individuals after appropriate adjustment for time-varying confounding by indication. DESIGN: A collaboration of 12 prospective cohort studies from Europe and the United States (the HIV-CAUSAL Collaboration) that includes 62 760 HIV-infected, therapy-naive individuals followed for an average of 3.3 years. Inverse probability weighting of marginal structural models was used to adjust for measured confounding by indication. RESULTS: Two thousand and thirty-nine individuals died during the follow-up. The mortality hazard ratio was 0.48 (95% confidence interval 0.41-0.57) for cART initiation versus no initiation. In analyses stratified by CD4 cell count at baseline, the corresponding hazard ratios were 0.29 (0.22-0.37) for less than 100 cells/microl, 0.33 (0.25-0.44) for 100 to less than 200 cells/microl, 0.38 (0.28-0.52) for 200 to less than 350 cells/microl, 0.55 (0.41-0.74) for 350 to less than 500 cells/microl, and 0.77 (0.58-1.01) for 500 cells/microl or more. The estimated hazard ratio varied with years since initiation of cART from 0.57 (0.49-0.67) for less than 1 year since initiation to 0.21 (0.14-0.31) for 5 years or more (P value for trend <0.001). CONCLUSION: We estimated that cART halved the average mortality rate in HIV-infected individuals. The mortality reduction was greater in those with worse prognosis at the start of follow-up.
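The inverse-probability-weighting idea named in the abstract can be illustrated on a toy discrete population (all numbers hypothetical, not the study's data): weighting each observation by 1/P(treatment | confounder) removes confounding by indication, which is why the weighted comparison recovers the causal effect while the naive comparison does not.

```python
# Toy population: binary confounder L, treatment A, deterministic outcome Y = A + 2*L,
# so the true causal effect of A is +1. "Sicker" subjects (L=1) are treated more often,
# which biases the naive treated-vs-untreated comparison. All values are hypothetical.
strata = [
    # (P(L), P(A=1|L), L)
    (0.5, 0.2, 0),
    (0.5, 0.8, 1),
]

def mean_outcome(treated, weighted):
    num = den = 0.0
    for p_l, p_a1, l in strata:
        p_a = p_a1 if treated else 1.0 - p_a1
        mass = p_l * p_a                      # joint probability of this (L, A) cell
        w = 1.0 / p_a if weighted else 1.0    # inverse-probability weight
        y = (1 if treated else 0) + 2 * l
        num += mass * w * y
        den += mass * w
    return num / den

naive = mean_outcome(True, False) - mean_outcome(False, False)
ipw   = mean_outcome(True, True)  - mean_outcome(False, True)
print(naive, ipw)  # naive is biased upward (2.2); IPW recovers the true effect (1.0)
```

In the actual analysis the weights are estimated (e.g. by logistic regression of cART initiation on time-varying covariates) and plugged into a weighted survival model; this sketch only shows why the reweighting works.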
Abstract:
Aims: To compare the frequency of life events in the year preceding illness onset in a series of Conversion Disorder (CD) patients with those of a matched control group, and to characterize the nature of those events in terms of "escape" potential. Traditional models of CD hypothesise that relevant stressful experiences are "converted" into physical symptoms to relieve psychological pressure, and that the resultant disability allows "escape" from the stressor, providing some advantage to the individual. Methods: The Life Events and Difficulties Schedule (LEDS) is a validated semi-structured interview designed to minimise recall and interviewer bias through rigorous assessment and independent rating of events. An additional "escape" rating was developed. Results: In the year preceding onset in 25 CD patients (mean age 38.9 years ± 8) and a similar matched period in 13 controls (mean age 36.2 years ± 10), no significant difference was found in the proportion of subjects having ≥ 1 severe event (CD 64%, controls 38%; p=0.2). In the last month preceding onset, a higher proportion of patients than controls experienced ≥1 severe event (52% vs 15%, odds ratio 5.95 (CI: 1.09-32.57)). Patients were also markedly more likely than controls to have experienced a severe escape event in the month preceding onset (44% vs 7%, odds ratio 9.43 (CI: 1.06-84.04)). Conclusion: Preliminary data from this ongoing study suggest that the time frame (preceding month) and the nature ("escape") of the events may play an important role in identifying key events related to CD onset.
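The escape-event odds ratio can be reproduced from the reported proportions; the cell counts used below (11 of 25 patients, 1 of 13 controls) are inferred from the percentages and sample sizes, not stated in the abstract:

```python
def odds_ratio(exposed_cases, exposed_total, control_cases, control_total):
    """Odds ratio from a 2x2 table: (a/b) / (c/d) = a*d / (b*c)."""
    a, b = exposed_cases, exposed_total - exposed_cases
    c, d = control_cases, control_total - control_cases
    return (a * d) / (b * c)

# 44% of 25 CD patients = 11 with a severe escape event; 7% of 13 controls ~= 1 (inferred).
print(round(odds_ratio(11, 25, 1, 13), 2))  # -> 9.43
```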
Abstract:
BACKGROUND: The outcome of diffuse large B-cell lymphoma has been substantially improved by the addition of the anti-CD20 monoclonal antibody rituximab to chemotherapy regimens. We aimed to assess, in patients aged 18-59 years, the potential survival benefit provided by a dose-intensive immunochemotherapy regimen plus rituximab compared with standard treatment plus rituximab. METHODS: We did an open-label randomised trial comparing dose-intensive rituximab, doxorubicin, cyclophosphamide, vindesine, bleomycin, and prednisone (R-ACVBP) with subsequent consolidation versus standard rituximab, doxorubicin, cyclophosphamide, vincristine, and prednisone (R-CHOP). Random assignment was done with a computer-assisted randomisation-allocation sequence with a block size of four. Patients were aged 18-59 years with untreated diffuse large B-cell lymphoma and an age-adjusted international prognostic index equal to 1. Our primary endpoint was event-free survival. Our analyses of efficacy and safety were of the intention-to-treat population. This study is registered with ClinicalTrials.gov, number NCT00140595. FINDINGS: One patient withdrew consent before treatment and 54 did not complete treatment. After a median follow-up of 44 months, our 3-year estimate of event-free survival was 81% (95% CI 75-86) in the R-ACVBP group and 67% (59-73) in the R-CHOP group (hazard ratio [HR] 0·56, 95% CI 0·38-0·83; p=0·0035). 3-year estimates of progression-free survival (87% [95% CI, 81-91] vs 73% [66-79]; HR 0·48 [0·30-0·76]; p=0·0015) and overall survival (92% [87-95] vs 84% [77-89]; HR 0·44 [0·28-0·81]; p=0·0071) were also increased in the R-ACVBP group. 82 (42%) of 196 patients in the R-ACVBP group experienced a serious adverse event compared with 28 (15%) of 183 in the R-CHOP group. Grade 3-4 haematological toxic effects were more common in the R-ACVBP group, with a higher proportion of patients experiencing a febrile neutropenic episode (38% [75 of 196] vs 9% [16 of 183]). 
INTERPRETATION: Compared with standard R-CHOP, intensified immunochemotherapy with R-ACVBP significantly improves survival of patients aged 18-59 years with diffuse large B-cell lymphoma with low-intermediate risk according to the International Prognostic Index. Haematological toxic effects of the intensive regimen were more frequent but manageable. FUNDING: Groupe d'Etudes des Lymphomes de l'Adulte and Amgen.