911 results for "critical period for weed control"
Abstract:
Aim: This study histologically analysed the effect of autogenous platelet-rich plasma (PRP), prepared according to a new semiautomatic system, on the healing of autogenous bone (AB) grafts placed in surgically created critical-size defects (CSD) in rabbit calvaria. Material and Methods: Sixty rabbits were divided into three groups: C, AB and AB/PRP. A CSD was created in the calvarium of each animal. In Group C (control), the defect was filled with a blood clot only. In Group AB (autogenous bone graft), the defect was filled with particulate autogenous bone. In Group AB/PRP (autogenous bone graft with platelet-rich plasma), it was filled with particulate autogenous bone combined with PRP. All groups were divided into subgroups (n=10) and euthanized at 4 or 12 weeks post-operatively. Histometric and histologic analyses were performed. Data were statistically analysed (ANOVA, t-test, p < 0.05). Results: Group C presented significantly less bone formation than Groups AB and AB/PRP at both periods of analysis (p < 0.001). At 4 weeks, Group AB/PRP showed a statistically greater amount of bone formation than Group AB (64.44 ± 15.0% versus 46.88 ± 14.15%; p = 0.0181). At 12 weeks, no statistically significant differences were observed between Groups AB and AB/PRP (75.0 ± 8.11% versus 77.90 ± 8.13%; p > 0.05). Notably, the amount of new bone formation in Group AB/PRP at 4 weeks was similar to that of Group AB at 12 weeks (p > 0.05). Conclusion: Within its limitations, the present study indicated that (i) AB and AB/PRP significantly improved bone formation and (ii) the beneficial effect of PRP was limited to an initial healing period of 4 weeks.
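As a minimal sketch of the group comparison this abstract describes (a two-sample test on percent new bone formation at 4 weeks), the Python snippet below uses hypothetical per-animal values generated to match the reported means and standard deviations; they are placeholders, not the study's raw data, and the choice of Welch's t-test is an assumption.

```python
# Hypothetical sketch: compare percent new bone formation at 4 weeks
# between Group AB and Group AB/PRP. Values are illustrative placeholders
# drawn to match the reported means/SDs, not the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ab = rng.normal(46.88, 14.15, size=10)      # Group AB subgroup, n=10
ab_prp = rng.normal(64.44, 15.0, size=10)   # Group AB/PRP subgroup, n=10

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(ab_prp, ab, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```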
Abstract:
Fluoride was introduced into dentistry over 70 years ago, and it is now recognized as the main factor responsible for the dramatic decline in caries prevalence observed worldwide. However, excessive fluoride intake during the period of tooth development can cause dental fluorosis. So that the maximum benefit of fluoride for caries control can be achieved with the minimum risk of side effects, a thorough understanding of the mechanisms by which fluoride promotes caries control is necessary. In the 1980s, it was established that fluoride controls caries mainly through its topical effect. Fluoride present at low, sustained concentrations (sub-ppm range) in the oral fluids during an acidic challenge is able to adsorb to the surface of the apatite crystals, inhibiting demineralization. When the pH is re-established, traces of fluoride in solution make it highly supersaturated with respect to fluorhydroxyapatite, which speeds up the process of remineralization. The mineral formed under the nucleating action of the partially dissolved minerals then preferentially includes fluoride and excludes carbonate, rendering the enamel more resistant to future acidic challenges. Topical fluoride can also provide antimicrobial action. Fluoride concentrations as found in dental plaque have biological activity on critical virulence factors of S. mutans in vitro, such as acid production and glucan synthesis, but the in vivo implications of this are still not clear. Evidence also supports a systemic mechanism of caries inhibition by fluoride in the pit and fissure surfaces of permanent first molars when it is incorporated into these teeth pre-eruptively.
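The supersaturation argument can be made explicit with a hedged sketch in standard solution-chemistry notation (this formulation is illustrative and not taken from the abstract itself): a fluid is supersaturated with respect to a mineral phase when its ion activity product exceeds the solubility product.

```latex
% Degree of saturation (DS) with respect to an apatite phase:
%   DS > 1  => supersaturated (mineral tends to precipitate)
%   DS < 1  => undersaturated (mineral tends to dissolve)
\[
  \mathrm{DS} = \frac{\mathrm{IAP}}{K_{sp}}, \qquad
  \mathrm{IAP}_{\mathrm{FHA}} =
    (\mathrm{Ca}^{2+})^{10}\,(\mathrm{PO_4}^{3-})^{6}\,
    (\mathrm{OH}^{-})^{2-x}\,(\mathrm{F}^{-})^{x}
\]
% Because the K_sp of fluorhydroxyapatite (FHA, Ca10(PO4)6(OH)2-xFx) is
% lower than that of hydroxyapatite, trace F- can leave oral fluid
% supersaturated with respect to FHA even near hydroxyapatite saturation.
```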
Abstract:
Graduate program in Agronomy (Plant Production) - FCAV
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Metabolic disturbances are quite common in critically ill patients. Glycemic control appears to be an important adjuvant therapy in such patients. In addition, disorders of lipid metabolism are associated with worse prognoses. The purpose of this study was to investigate the effects that two different glycemic control protocols have on lipid profile and metabolism. We evaluated 63 patients hospitalized for severe sepsis or septic shock, over the first 72 h of intensive care. Patients were randomly allocated to receive conservative glycemic control (target range 140-180 mg/dl) or intensive glycemic control (target range 80-110 mg/dl). Serum levels of low-density lipoprotein, high-density lipoprotein, triglycerides, total cholesterol, free fatty acids, and oxidized low-density lipoprotein were determined. In both groups, serum levels of low-density lipoprotein, high-density lipoprotein, and total cholesterol were below normal, whereas those of free fatty acids, triglycerides, and oxidized low-density lipoprotein were above normal. At 4 h after admission, free fatty acid levels were higher in the conservative group than in the intensive group, progressively decreasing in both groups until hour 48 and continuing to decrease until hour 72 only in the intensive group. Oxidized low-density lipoprotein levels were elevated in both groups throughout the study period. Free fatty acids respond to intensive glycemic control and, because of their high toxicity, can be a therapeutic target in patients with sepsis.
Abstract:
For virtually all hospitals, utilization rates are a critical managerial indicator of efficiency and are determined in part by turnover time. Turnover time is defined as the time elapsed between surgeries, during which the operating room is cleaned and prepared for the next surgery. Lengthier turnover times result in lower utilization rates, thereby hindering hospitals’ ability to maximize the number of patients that can be attended to. In this thesis, we analyze operating room data from a two-year period provided by Evangelical Community Hospital in Lewisburg, Pennsylvania, to understand the variability of the turnover process. From the recorded data provided, we derive our best estimation of turnover time. Recognizing the importance of being able to properly model turnover times in order to improve the accuracy of scheduling, we seek to fit distributions to the set of turnover times. We find that log-normal and log-logistic distributions are well suited to turnover times, although further research must validate this finding. We propose that the choice of distribution depends on the hospital and, as a result, a hospital must choose whether to use the log-normal or the log-logistic distribution. Next, we use statistical tests to identify variables that may potentially influence turnover time. We find that there does not appear to be a correlation between surgery time and turnover time across doctors. However, there are statistically significant differences between the mean turnover times across doctors. The final component of our research entails analyzing and explaining the benefits of introducing control charts as a quality control mechanism for monitoring turnover times in hospitals. Although widely instituted in other industries, control charts are not widely adopted in healthcare environments, despite their potential benefits. A major component of our work is the development of control charts to monitor the stability of turnover times. These charts can be easily instituted in hospitals to reduce the variability of turnover times. Overall, our analysis uses operations research techniques to analyze turnover times and identify ways of lowering the mean turnover time and the variability in turnover times. We provide valuable insight into a component of the surgery process that has received little attention, but can significantly affect utilization rates in hospitals. Most critically, an ability to more accurately predict turnover times and a better understanding of the sources of variability can result in improved scheduling and heightened hospital staff and patient satisfaction. We hope that our findings can apply to many other hospital settings.
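The two modeling steps named in this abstract, distribution fitting and control charting, can be sketched compactly. In the hedged Python sketch below, scipy.stats.fisk stands in for the log-logistic distribution and an individuals (Shewhart) chart supplies control limits; the turnover minutes are invented placeholders, not the hospital's records.

```python
# Sketch: fit log-normal and log-logistic distributions to turnover times
# (minutes) and derive control-chart limits. Data are illustrative
# placeholders, not the Evangelical Community Hospital records.
import numpy as np
from scipy import stats

turnover = np.array([22, 31, 27, 45, 19, 38, 26, 33, 52, 24,
                     29, 41, 35, 23, 48, 30, 28, 36, 21, 44], float)

# Maximum-likelihood fits; fixing loc=0 keeps both models two-parameter.
ln_shape, _, ln_scale = stats.lognorm.fit(turnover, floc=0)
ll_shape, _, ll_scale = stats.fisk.fit(turnover, floc=0)  # fisk = log-logistic

# Compare fits via the Kolmogorov-Smirnov statistic (lower = closer fit).
ks_ln = stats.kstest(turnover, "lognorm", args=(ln_shape, 0, ln_scale)).statistic
ks_ll = stats.kstest(turnover, "fisk", args=(ll_shape, 0, ll_scale)).statistic
print(f"log-normal KS = {ks_ln:.3f}, log-logistic KS = {ks_ll:.3f}")

# Individuals control chart for monitoring turnover times:
# center line +/- 3 * (mean moving range / d2), with d2 = 1.128 for n = 2.
mr = np.abs(np.diff(turnover))
center = turnover.mean()
sigma_hat = mr.mean() / 1.128
print(f"UCL = {center + 3*sigma_hat:.1f}, LCL = {max(center - 3*sigma_hat, 0):.1f}")
```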
Abstract:
OBJECTIVE: To review trial design issues related to control groups. DESIGN: Review of the literature with specific reference to critical care trials. MAIN RESULTS AND CONCLUSIONS: Performing randomized controlled trials in the critical care setting presents specific problems: studies include patients with rapidly lethal conditions, the majority of intensive care patients suffer from syndromes rather than from well-definable diseases, the severity of such syndromes cannot be precisely assessed, and the treatment consists of interacting therapies. Interactions between physiology, pathophysiology, and therapies are at best marginally understood and may have a major impact on study design and interpretation of results. Selection of the right control group is crucial for the interpretation and clinical implementation of results. Studies comparing new interventions with current ones, or comparing different levels of current treatments, face the problem of defining "usual care." Usual care controls without any constraints typically include substantial heterogeneity. Constraints on the usual therapy may help to reduce some of this variation. Inclusion of unrestricted usual care groups may help to enhance safety. Practice misalignment is a novel problem in which patients receive a treatment that is the direct opposite of usual care; it occurs when fixed-dose interventions are used in situations where care is normally titrated. Practice misalignment should be considered in the design and interpretation of studies on titrated therapies.
Abstract:
Financial, economic, and biological data collected from cow-calf producers who participated in the Illinois and Iowa Standardized Performance Analysis (SPA) programs were used in this study. Data were collected for the 1996 through 1999 calendar years, with each herd within year representing one observation. This resulted in a final database of 225 observations (117 from Iowa and 108 from Illinois) from commercial herds ranging in size from 20 to 373 cows. Two analyses were conducted, one utilizing financial cost of production data, the other economic cost of production data. Each observation was analyzed as the difference from the mean for that given year. The dependent variable utilized in both the financial and economic models as an indicator of profit was return to unpaid labor and management per cow (RLM). Used as independent variables were the five factors that make up total annual cow cost: feed cost, operating cost, depreciation cost, capital charge, and hired labor, all on an annual cost per cow basis. In the economic analysis, family labor was also included. Production factors evaluated as independent variables in both models were calf weight, calf price, cull weight, cull price, weaning percentage, and calving distribution. Herd size and investment were also analyzed. All financial factors analyzed were significantly correlated to RLM (P < .10) except cull weight and cull price. All economic factors analyzed were significantly correlated to RLM (P < .10) except calf weight, cull weight, and cull price. Results of the financial prediction equation indicate that eight measurements are capable of explaining over 82 percent of the farm-to-farm variation in RLM. Feed cost is the overriding factor driving RLM in both the financial and economic stepwise regression analyses; in both, over 50 percent of the herd-to-herd variation in RLM could be explained by feed cost. Financial feed cost is correlated (P < .001) to operating cost, depreciation cost, and investment. Economic feed cost is correlated (P < .001) with investment and operating cost, as well as capital charge. Operating cost, depreciation, and capital charge were all negatively correlated (P < .10) to herd size, and positively correlated (P < .01) to feed cost in both analyses. Operating costs were positively correlated with capital charge and investment (P < .01) in both analyses. In the financial regression model, depreciation cost was the second critical factor, explaining almost 9 percent of the herd-to-herd variation in RLM, followed by operating cost (5 percent). Calf weight had a greater impact than calf price on RLM in both the financial and economic regression models. Calf weight was the fourth indicator of RLM in the financial model and was similar in magnitude to operating cost. Investment was not a significant variable in either regression model; however, it was highly correlated to a number of the significant cost variables, including feed cost, depreciation cost, and operating cost (P < .001, financial; P < .10, economic). Cost factors were far more influential in driving RLM than production, reproduction, or producer-controlled marketing factors. Of these cost factors, feed cost had by far the largest impact. As producers focus attention on factors that affect the profitability of the operation, feed cost is the most critical control point because it was responsible for over 50 percent of the herd-to-herd variation in profit.
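A minimal sketch of the kind of herd-level regression this abstract describes, with RLM as the dependent variable and annual per-cow cost components as explanatory variables, follows; the observations, the three-variable model, and the coefficient interpretation are all illustrative assumptions, not the SPA data or the thesis's stepwise procedure.

```python
# Sketch: regress a profit indicator (RLM, $/cow) on per-cow cost
# components. Herd-level observations are fabricated placeholders.
import numpy as np

# Columns: feed cost, operating cost, depreciation cost ($/cow/year)
X = np.array([[210,  85, 40],
              [260,  95, 55],
              [185,  70, 35],
              [300, 110, 60],
              [240,  90, 45],
              [275, 100, 50]], float)
rlm = np.array([120, 60, 150, 10, 85, 45], float)  # return to labor/mgmt

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(rlm)), X])
coef, *_ = np.linalg.lstsq(A, rlm, rcond=None)
pred = A @ coef
r2 = 1 - ((rlm - pred) ** 2).sum() / ((rlm - rlm.mean()) ** 2).sum()
print("intercept and slopes:", np.round(coef, 3))
print(f"R^2 = {r2:.3f}")  # share of herd-to-herd variation in RLM explained
```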
Abstract:
In June 1995 a case-control study was initiated by the Texas Department of Health among Mexican American women residing in the fourteen counties of the Texas-Mexico border. Case women had carried infants with neural tube defects. Control women had given birth to infants without neural tube defects. The case-control protocol included a general questionnaire which elicited information regarding illnesses experienced and antibiotics taken from three months prior to conception to three months after conception. An assessment of the associations between periconceptional diarrhea and the risk of neural tube defects indicated that the unadjusted association of diarrhea and risk of neural tube defect was significant (OR = 3.3, CI = 1.4–7.6). The unadjusted association of use of oral antimicrobials and risk of neural tube defect was also significant (OR = 3.4, CI = 1.6–7.3). These associations persisted among women who had no fever during the periconceptional period and were present irrespective of folate intake. Diarrhea was associated with an increased risk of NTD independent of use of antimicrobials. The converse was also true; antimicrobials were associated with an increased risk of NTD independent of diarrhea. Further research regarding these potentially modifiable risk factors is warranted. Replication of these findings could result in interventions in addition to folate supplementation.
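As an illustration of the case-control measure reported here, the sketch below computes an odds ratio and an approximate 95% confidence interval (Woolf's logit method) from a 2x2 exposure table; the counts are hypothetical and do not reproduce the study's OR = 3.3.

```python
# Sketch: odds ratio with a Woolf (logit) 95% CI from a 2x2 table.
# Counts are hypothetical, not the Texas Department of Health study data.
import math

a, b = 24, 16    # exposed (periconceptional diarrhea): cases, controls
c, d = 76, 184   # unexposed: cases, controls

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```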
Abstract:
The recently discovered aging-dependent large accumulation of point mutations in the human fibroblast mtDNA control region raised the question of their occurrence in postmitotic tissues. In the present work, analysis of biopsied or autopsied human skeletal muscle revealed the absence or only minimal presence of those mutations. By contrast, surprisingly, most of the 26 individuals 53 to 92 years old, without a known history of neuromuscular disease, exhibited in muscle, at mtDNA replication control sites, an accumulation of two new point mutations, i.e., A189G and T408A, which were absent or marginally present in 19 individuals younger than 34 years. These two mutations were not found in fibroblasts from 22 subjects 64 to 101 years of age (T408A), or were present in only three subjects in very low amounts (A189G). Furthermore, in several older individuals exhibiting an accumulation in muscle of one or both of these mutations, they were nearly absent in other tissues, whereas the most frequent fibroblast-specific mutation (T414G) was present in skin, but not in muscle. Among eight additional individuals exhibiting partial denervation of their biopsied muscle, four subjects >80 years old had accumulated the two muscle-specific point mutations, which were, conversely, present at only very low levels in four subjects ≤40 years old. The striking tissue specificity of the muscle mtDNA mutations detected here and their mapping at critical sites for mtDNA replication strongly point to the involvement of a specific mutagenic machinery and to the functional relevance of these mutations.
Abstract:
Epidemics of soil-borne plant disease are characterized by patchiness because of restricted dispersal of inoculum. The density of inoculum within disease patches depends on a sequence comprising local amplification during the parasitic phase followed by dispersal of inoculum by cultivation during the intercrop period. The mechanisms that control patch size, shape, and persistence have received very little rigorous attention in epidemiological theory. Here we derive a model for dispersal of inoculum in soil by cultivation that takes into account the discrete stochastic nature of the system in time and space. Two parameters, the probability of movement and the mean dispersal distance, characterize lateral dispersal of inoculum by cultivation. The dispersal parameters are used in combination with the characteristic area and dimensions of host plants to identify criteria that control the shape and size of disease patches. We derive a critical value of the probability of movement for the formation of cross-shaped patches and show that this is independent of the amount of inoculum. We examine the interaction between local amplification of inoculum by parasitic activity and subsequent dilution by dispersal, and identify criteria whereby asymptomatic patches may persist as inoculum falls below the threshold necessary for symptoms to appear in the subsequent crop. The model is motivated by the spread of rhizomania, an economically important soil-borne disease of sugar beet. However, the results have broad applicability to a very wide range of diseases that survive as discrete units of inoculum. The application of the model to the patch dynamics of weed seeds and to local introductions of genetically modified seeds is also discussed.
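The two dispersal parameters named in this abstract lend themselves to a compact simulation. The hedged Python sketch below moves discrete inoculum units along a row of field cells: each unit moves with probability p, over a distance drawn from a geometric distribution with the stated mean. The grid size, parameter values, and single cultivation direction are illustrative assumptions, not the paper's model.

```python
# Sketch: stochastic dispersal of discrete inoculum units by cultivation.
# Each unit moves with probability p_move; displacement along the
# cultivation direction is geometric with mean mean_dist (cells).
import numpy as np

rng = np.random.default_rng(1)
n_cells, p_move, mean_dist = 50, 0.3, 2.0

# Start with a single infested patch in the middle of the field row.
inoculum = np.zeros(n_cells, int)
inoculum[n_cells // 2] = 1000

def cultivate(field, p, d, rng):
    """One cultivation pass: each unit moves forward with probability p."""
    new = np.zeros_like(field)
    for cell, count in enumerate(field):
        moves = rng.binomial(count, p)   # units that move at all
        new[cell] += count - moves       # units left in place
        if moves:
            # Geometric displacement (support 1, 2, ...) with mean d.
            steps = rng.geometric(1.0 / d, size=moves)
            dests = np.clip(cell + steps, 0, len(field) - 1)
            np.add.at(new, dests, 1)     # units at the edge pile up there
    return new

for _ in range(5):  # five intercrop cultivation passes
    inoculum = cultivate(inoculum, p_move, mean_dist, rng)

print("occupied cells:", np.flatnonzero(inoculum))
print("inoculum left at source:", inoculum[n_cells // 2])
```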