950 results for Increasing hazard ratio
Abstract:
Background: The prevalence of systemic lupus erythematosus (SLE) patients requiring renal replacement therapy (RRT) is increasing, but data on clinical outcomes are scarce. Data on technique failure and peritoneal dialysis (PD)-related infections are even scarcer, despite SLE patients being considered at high risk for infections. The aim of our study was to compare the clinical outcomes of SLE patients on PD with those of non-SLE patients in a large PD cohort. Methods: We conducted a nationwide prospective observational study using the BRAZPD II cohort. We identified all patients on PD for more than 90 days. Within that subset, all patients with SLE as the primary renal disease were matched with PD patients without SLE for comparison of clinical outcomes, namely patient mortality, technique survival and time to first peritonitis; all outcomes were analyzed taking competing risks into account. Results: Out of a total of 9907 patients, we identified 102 incident SLE patients with more than 90 days on PD. After matching, the groups consisted of 92 patients with SLE and 340 matched controls. Mean age was 46.9 ± 16.8 years, 77.3% were female and 58.1% were Caucasian. After adjustment, the subdistribution hazard ratio for SLE was 1.06 (95% CI 0.55-2.05) for mortality, 1.01 (95% CI 0.54-1.91) for technique failure and 1.40 (95% CI 0.92-2.11) for time to first peritonitis. The probability of competing events was similar between groups for all three outcomes. Conclusion: PD was a safe and equally successful therapy for SLE patients compared with matched non-SLE patients.
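The comparison above rests on subdistribution hazard ratios, which describe how covariates shift the cumulative incidence of an event when competing events can preclude it. As a purely illustrative sketch, not a reproduction of the BRAZPD II matched Fine-Gray analysis, the Python code below estimates a nonparametric cumulative incidence function on synthetic data; all variable names and numbers are hypothetical.

```python
# Minimal sketch (synthetic data, not the BRAZPD II dataset) of the cumulative
# incidence function under competing risks, the quantity that
# subdistribution-hazard (Fine-Gray) models describe.
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Aalen-Johansen-type CIF estimator.

    time  : follow-up times
    event : 0 = censored, 1, 2, ... = cause of failure
    cause : event type of interest
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.argsort(time)
    time, event = time[order], event[order]

    times = np.unique(time[event > 0])           # distinct event times
    surv = 1.0                                    # all-cause KM survival S(t-)
    cif = 0.0
    out_t, out_cif = [], []
    for t in times:
        at_risk = np.sum(time >= t)
        d_any = np.sum((time == t) & (event > 0))
        d_cause = np.sum((time == t) & (event == cause))
        cif += surv * d_cause / at_risk           # increment for cause of interest
        surv *= 1.0 - d_any / at_risk             # update all-cause survival
        out_t.append(t)
        out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# toy data: cause 1 = event of interest (e.g. first peritonitis), cause 2 = competing event
rng = np.random.default_rng(0)
t = rng.exponential(24, size=200)
e = rng.choice([0, 1, 2], size=200, p=[0.3, 0.4, 0.3])
times, cif = cumulative_incidence(t, e, cause=1)
print(f"Estimated CIF of cause 1 at last event time: {cif[-1]:.3f}")
```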
Abstract:
Literature reports dealing with the dietary electrolyte ratio (K+Cl)/Na are rare, although the concept was proposed by Mongin in 1981. Thus, its application as a nutritional strategy in feed formulation, which usually meets only the minimum nutritional recommendations for Na, K and Cl, appears to be limited. The objective of this study was to evaluate the performance of broilers fed diets with different dietary electrolyte balances (DEB, Na+K-Cl) and dietary electrolyte ratios (DER) from 1 to 21 d of age. A total of 1575 male 1-d-old broiler chicks were randomly assigned to 5 treatments with 9 replicates of 35 chicks each. The treatments consisted of diets with five electrolyte combinations (DEB/DER: 150/3, 250/2, 250/3, 250/4 and 350/3), obtained using NaCl, NaHCO3, KCl, K2SO4 and CaCl2. All diets were corn-soybean meal based and formulated to meet or exceed the NRC (1994) requirements. Chicks had ad libitum access to feed and water in floor pens with wood shavings as litter. Body weight, feed intake and feed conversion ratio were measured at 21 d of age. Only feed conversion was significantly affected (P = 0.0142) by the electrolyte combinations (DEB and DER). The DEB and DER levels were chosen so that the data could be fitted to a response surface covering increasing levels of DEB (150–350 mEq/kg) and narrow to broad DER ratios (2:1–4:1). Canonical analysis of the response surface yielded stationary points of 942.02 g for body weight (DEB = 255.77 mEq/kg and DER = 2.73:1), 1200.02 g for feed intake (DEB = 251.69 mEq/kg and DER = 3.51:1) and 1.35 for feed conversion (DEB = 254.62 mEq/kg and DER = 3.06:1). The results indicated that the best performance was obtained with DEB between 251 and 255 mEq/kg and DER between 2.73:1 and 3.5:1.
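The stationary points quoted above come from canonical analysis of a fitted second-order response surface: the quadratic model y = b0 + b'x + x'Bx is estimated by least squares and the stationary point solves b + 2Bx = 0. The sketch below illustrates this calculation on synthetic DEB/DER data; the numbers are hypothetical, not the trial's dataset.

```python
# Illustrative sketch (synthetic data) of locating the stationary point in
# canonical response-surface analysis: fit y = b0 + b'x + x'Bx and solve grad = 0.
import numpy as np

rng = np.random.default_rng(1)
deb = rng.uniform(150, 350, 60)          # hypothetical DEB levels, mEq/kg
der = rng.uniform(2, 4, 60)              # hypothetical DER levels
# hypothetical response with an interior optimum near DEB = 255, DER = 3
y = 940 - 0.002 * (deb - 255) ** 2 - 20 * (der - 3) ** 2 + rng.normal(0, 2, 60)

# second-order model: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(deb), deb, der, deb ** 2, der ** 2, deb * der])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

b = np.array([b1, b2])
B = np.array([[b11, b12 / 2],
              [b12 / 2, b22]])
x_stat = -0.5 * np.linalg.solve(B, b)    # stationary point: dy/dx = b + 2Bx = 0
y_stat = b0 + b @ x_stat + x_stat @ B @ x_stat

eigvals = np.linalg.eigvalsh(B)          # all negative => the stationary point is a maximum
print("Stationary point (DEB, DER):", np.round(x_stat, 2))
print("Predicted response:", round(float(y_stat), 2), "eigenvalues:", np.round(eigvals, 4))
```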
Abstract:
In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point can be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions and is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point of the hazard function can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators. From the asymptotic normality of the maximum likelihood estimators, confidence intervals can also be obtained. Considering the exponentiated Weibull distribution for the lifetime data, we have different forms for the hazard function: constant, increasing, unimodal, decreasing or bathtub-shaped. This model gives great flexibility of fit, but there are no analytic expressions for the change-point of the hazard function. We therefore use Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
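As a rough illustration of the approach described, not the authors' implementation, the sketch below draws posterior samples of the exponentiated Weibull parameters with a random-walk Metropolis algorithm and, for each draw, locates the hazard change-point numerically on a grid; the data, priors and tuning constants are all assumptions.

```python
# Sketch: posterior of the hazard change-point for the exponentiated Weibull (EW)
# distribution via random-walk Metropolis.  Synthetic data and a flat prior on the
# log-parameters are assumptions, not the authors' setup.
import numpy as np

def ew_logpdf(x, alpha, theta, sigma):
    z = (x / sigma) ** alpha
    return (np.log(theta) + np.log(alpha) - np.log(sigma)
            + (alpha - 1) * np.log(x / sigma) - z
            + (theta - 1) * np.log1p(-np.exp(-z)))

def ew_logsf(x, alpha, theta, sigma):
    # log(1 - F) = log(1 - (1 - e^{-z})^theta), computed stably via log1p/expm1
    z = (x / sigma) ** alpha
    log_cdf = theta * np.log1p(-np.exp(-z))
    return np.log(-np.expm1(log_cdf))

def hazard(x, alpha, theta, sigma):
    return np.exp(ew_logpdf(x, alpha, theta, sigma) - ew_logsf(x, alpha, theta, sigma))

rng = np.random.default_rng(2)
data = rng.lognormal(mean=1.0, sigma=0.6, size=300)   # hypothetical lifetimes (unimodal hazard)

def log_post(p):                               # p = log(alpha), log(theta), log(sigma)
    a, th, s = np.exp(p)
    return np.sum(ew_logpdf(data, a, th, s))   # flat prior on the log scale (assumption)

p = np.log([1.0, 1.0, np.median(data)])
lp = log_post(p)
draws = []
for i in range(20000):                         # random-walk Metropolis
    prop = p + rng.normal(0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        p, lp = prop, lp_prop
    if i >= 10000 and i % 10 == 0:             # burn-in and thinning
        draws.append(np.exp(p))

grid = np.linspace(0.05, 15, 1500)
# argmax of the hazard on a grid = mode of a unimodal hazard (the change-point)
change_points = [grid[np.argmax(hazard(grid, *d))] for d in draws]
print("Posterior mean change-point:", round(float(np.mean(change_points)), 3))
print("95% credible interval:", np.round(np.percentile(change_points, [2.5, 97.5]), 3))
```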
Abstract:
Lattice calculations of the QCD trace anomaly at temperatures T < 160 MeV have been shown to match hadron resonance gas model calculations, which include an exponentially rising hadron mass spectrum. In this paper we perform a more detailed comparison of the model calculations to lattice data that confirms the need for an exponentially increasing density of hadronic states. Also, we find that the lattice data is compatible with a hadron density of states that goes as ρ(m) ~ m^(-a) exp(m/T_H) at large m with a > 5/2 (where T_H ≈ 167 MeV). With this specific subleading contribution to the density of states, heavy resonances are most likely to undergo two-body decay (instead of multiparticle decay), which facilitates their inclusion into hadron transport codes. Moreover, estimates for the shear viscosity and the shear relaxation time coefficient of the hadron resonance model computed within the excluded volume approximation suggest that these transport coefficients are sensitive to the parameters that define the hadron mass spectrum.
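For orientation only, the sketch below evaluates the contribution of a continuum density of states of the quoted form, ρ(m) ~ m^(-a) exp(m/T_H), to the trace anomaly in the Boltzmann approximation, where a single species of mass m and degeneracy g contributes (g/2π²)(m/T)³K₁(m/T) to (ε-3p)/T⁴. The normalization, mass cutoffs and temperatures are hypothetical choices, not the paper's fitted values.

```python
# Back-of-the-envelope sketch (hypothetical normalization and cutoffs) of how a
# continuum density of states rho(m) = A * m**(-a) * exp(m / T_H) contributes to
# the hadron-resonance-gas trace anomaly in the Boltzmann approximation.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

T_H = 0.167   # Hagedorn temperature in GeV (value quoted in the abstract)
a = 2.75      # spectrum exponent, a > 5/2
A = 0.5       # hypothetical normalization of the continuum spectrum
M0 = 2.0      # GeV; hypothetical mass above which the continuum spectrum is used
M1 = 8.0      # GeV; upper cutoff for the numerical integral

def single_species(m, T, g=1.0):
    """Trace anomaly (e - 3p)/T^4 of one species, Boltzmann approximation."""
    x = m / T
    return g / (2.0 * np.pi ** 2) * x ** 3 * kv(1, x)

def continuum(T):
    """Continuum contribution: integral of rho(m) * single_species(m, T) dm."""
    integrand = lambda m: A * m ** (-a) * np.exp(m / T_H) * single_species(m, T)
    val, _ = quad(integrand, M0, M1)
    return val

for T in (0.130, 0.145, 0.155):   # temperatures below T_H, in GeV
    print(f"T = {T*1000:.0f} MeV: continuum (e-3p)/T^4 ~ {continuum(T):.4f}")
```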
Abstract:
The generalized failure rate of a continuous random variable has demonstrable importance in operations management. If the valuation distribution of a product has an increasing generalized failure rate (that is, the distribution is IGFR), then the associated revenue function is unimodal, and when the generalized failure rate is strictly increasing, the global maximum is uniquely specified. The assumption that the distribution is IGFR is thus useful and frequently invoked in the recent pricing, revenue, and supply chain management literature. This note contributes to the IGFR literature in several ways. First, we investigate the prevalence of the IGFR property for left and right truncations of valuation distributions. Second, we extend the IGFR notion to discrete distributions and contrast it with the continuous distribution case. The note also addresses two errors in the previous IGFR literature. Finally, for future reference, we analyze all common (continuous and discrete) distributions for the prevalence of the IGFR property, and derive and tabulate their generalized failure rates.
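To make the definitions concrete: the generalized failure rate is g(x) = x f(x)/(1 - F(x)), and the revenue function R(p) = p(1 - F(p)) satisfies R'(p) = (1 - F(p))(1 - g(p)), so a strictly increasing g gives a unique interior maximizer at g(p*) = 1. The sketch below checks this numerically for a lognormal valuation distribution, a hypothetical choice; the note's own tabulations are not reproduced here.

```python
# Quick numerical illustration of the IGFR property: the generalized failure
# rate g(x) = x*f(x)/(1 - F(x)) and the revenue function R(p) = p*(1 - F(p))
# for a lognormal valuation distribution, whose failure rate is not monotone
# but whose generalized failure rate is increasing.
import numpy as np
from scipy.stats import lognorm

dist = lognorm(s=0.8, scale=1.0)          # hypothetical valuation distribution
x = np.linspace(0.01, 10, 2000)

g = x * dist.pdf(x) / dist.sf(x)          # generalized failure rate
revenue = x * dist.sf(x)                  # expected revenue at price x

print("g(x) increasing on the grid:", bool(np.all(np.diff(g) > 0)))
p_star = x[np.argmax(revenue)]
print(f"Revenue-maximizing price ~ {p_star:.3f}, revenue ~ {revenue.max():.3f}")
# Strictly increasing g implies the interior maximizer of R is unique and
# satisfies the first-order condition g(p*) = 1.
print(f"g at the maximizer ~ {g[np.argmax(revenue)]:.3f} (should be close to 1)")
```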
Abstract:
The municipality of San Juan La Laguna, Guatemala is home to approximately 5,200 people and is located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used multiple attributes at every landslide and non-landslide point and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open source data mining software Weka: filtered subset, information gain, gain ratio and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F measure and area under the receiver operating characteristic (ROC) curve. The algorithm BayesNet yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan. Landslides from Tropical Storm Agatha in 2010 were used to independently validate this study's multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
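As an analogous sketch only, the code below mirrors the abstract's workflow (attribute ranking, classifier training, and scoring with precision, recall, F-measure and ROC AUC) in scikit-learn on synthetic features rather than the authors' Weka pipeline and San Juan dataset; scikit-learn has no direct BayesNet counterpart, so only decision-tree and logistic-regression models are shown.

```python
# Analogous workflow sketched in scikit-learn, not the authors' Weka pipeline;
# the feature matrix here is synthetic, standing in for attributes such as
# slope, curvature and distance to streams.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=600, n_features=11, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# attribute ranking (information-gain-like criterion)
mi = mutual_info_classif(X_tr, y_tr, random_state=0)
print("attribute ranking (best first):", np.argsort(mi)[::-1])

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    prob = model.predict_proba(X_te)[:, 1]
    print(f"{name}: precision={precision_score(y_te, pred):.2f} "
          f"recall={recall_score(y_te, pred):.2f} "
          f"F={f1_score(y_te, pred):.2f} "
          f"AUC={roc_auc_score(y_te, prob):.2f}")
```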
Abstract:
Hodgkin lymphoma (HL) risk is elevated among persons infected with HIV (PHIV) and has been suggested to have increased in the era of combined antiretroviral therapy (cART). Among 14,606 PHIV followed for more than 20 years in the Swiss HIV Cohort Study (SHCS), determinants of HL were investigated using 2 different approaches, namely a cohort and a nested case-control study, estimating hazard ratios (HRs) and matched odds ratios, respectively. Forty-seven incident HL cases occurred during 84,611 person-years of SHCS follow-up. HL risk was significantly higher among men having sex with men (HR vs intravenous drug users = 2.44, 95% confidence interval [CI], 1.13-5.24) but did not vary by calendar period (HR for 2002-2007 vs 1995 or earlier = 0.65, 95% CI, 0.29-1.44) or cART use (HR vs nonusers = 1.02, 95% CI, 0.53-1.94). HL risk tended to increase with declining CD4+ cell counts, but these differences were not significant. A lower CD4+/CD8+ ratio at SHCS enrollment or 1 to 2 years before HL diagnosis, however, was significantly associated with increased HL risk. In conclusion, HL risk does not appear to be increasing in recent years or among PHIV using cART in Switzerland, and there was no evidence that HL risk is increased in the setting of improved immunity.
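For readers unfamiliar with the cohort approach, the sketch below fits a Cox proportional hazards model with the lifelines library on synthetic data to recover a hazard ratio for a binary exposure; the data, the assumed true log-hazard ratio of 0.6 and the variable names are hypothetical and unrelated to the SHCS analysis (the matched case-control odds ratios are not reproduced).

```python
# Illustrative only: estimating a hazard ratio with a Cox proportional hazards
# model on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
exposure = rng.integers(0, 2, n)                    # hypothetical binary risk-group indicator
baseline = rng.exponential(20, n)
time = baseline / np.exp(0.6 * exposure)            # assumed true log-HR = 0.6
censor = rng.exponential(25, n)
observed = (time <= censor).astype(int)
duration = np.minimum(time, censor)

df = pd.DataFrame({"duration": duration, "event": observed, "exposure": exposure})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)                           # HR for 'exposure' should be ~ exp(0.6) ~ 1.8
```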
Abstract:
Antineutrophil cytoplasmic antibodies directed against bactericidal/permeability-increasing protein (BPI), an inhibitor of a lipopolysaccharide of gram-negative bacteria, are a common feature of chronic neutrophilic inflammatory processes such as cystic fibrosis. We investigated whether serum and salivary anti-BPI autoantibodies also appear in the course of acute pneumonia in 24 otherwise healthy children. Nine (38%) and four (17%) patients had detectable serum anti-BPI immunoglobulin G (IgG) (≥4 IU mL⁻¹) and IgA (ratio ≥1.2), respectively, on the day of hospital admission (day 0). There was no increase in the rate of occurrence or the concentration of these antibodies in the convalescent sera obtained on day 30. The presence of anti-BPI IgG on admission did not correlate with inflammatory markers (peripheral white blood cell count, C-reactive protein) or temperature on admission. Also, salivary anti-BPI IgA, determined on days 0, 3-5 and 30, did not appear during the course of acute pneumonia. In summary, a substantial proportion of previously healthy children have pre-existing anti-BPI IgG autoantibodies. Acute neutrophilic infection, i.e. pneumonia, however, neither triggered the appearance of new antibodies nor boosted the concentrations of pre-existing ones. Thus, in typical acute pneumonia in children, autoantibodies directed against BPI may not have clinical significance.
Abstract:
To test a system with milk flow-controlled pulsation, milk flow was recorded in 29 Holstein cows during machine milking. The three treatments were routine milking (including a pre-stimulation of 50-70 s), milking with a minimum of teat preparation, and milking with a milk flow-controlled b-phase, i.e. with the b-phase of the pulsation cycle gradually elongated with increasing milk flow rate and shortened again during decreasing milk flow. For data evaluation the herd was divided into three groups based on the peak flow rate at routine milking (group 1: <3.2 kg/min; group 2: 3.2-4.5 kg/min; group 3: >4.5 kg/min). Compared with routine milking, milking with milk flow-controlled b-phase caused a significant elevation of the peak flow rate, and the incline phase lasted longer, especially in cows with a peak flow rate of >3.2 kg/min at routine milking. In milking with a minimum of teat preparation the incline phase lasted longer compared with the other two treatments. Bimodality of milk flow, i.e. delayed milk ejection at the start of milking, was most frequent at milking with a minimum of teat preparation. No significant differences between routine milking and milking with milk flow-controlled b-phase were detected for any other milking characteristics. In summary, milking with milk flow-controlled b-phase changes the course of milk removal, although mainly in cows with high peak flow rates.
Abstract:
One hundred eighty-nine mixed breed beef heifers from 13 consignors enrolled in the MACEP heifer development project were utilized in this study. Heifers were synchronized by feeding 0.5 mg melengestrol acetate (MGA) per head per day for 14 days followed by an injection of prostaglandin F2α (PGF2α; 25 mg Lutalyse®) 17 days after the last MGA feeding. Each heifer was fitted with a Heatwatch® transmitter on the morning of PGF2α administration to facilitate detection of estrus. Vaginal conductivity measurements were taken using an Ovatec® probe every 12 hours for 96 hours beginning at the time of PGF2α injection. Heifers randomly assigned to produce a female calf were inseminated near the onset of estrus (as indicated by probe values of ≤55 on the decline). Heifers randomly assigned to produce a male calf were inseminated approximately 24 hours after the onset of estrus (as indicated by probe values of ≥60 on the incline). All heifers not inseminated by 96 hours after PGF2α were mass inseminated in an attempt to impregnate as many heifers as possible. Heifers that were diagnosed as pregnant as a result of the artificial insemination were subjected to ultrasonography for fetal sex determination. Only 70 of the 189 heifers (37.0%) exhibited estrus according to Heatwatch®, and incidence of estrus was influenced by heifer average daily gain, reproductive tract score, and disposition score. Heifers receiving a disposition score of 3 (78.7) had a higher (P<.05) probe reading at AI than those receiving a disposition score of 1 or 2 (70.8 and 72.5, respectively). Heifers with probe readings at insemination of 80-84 and >84 had lower (P<.05) pregnancy rates to AI (13.6 and 0.0%, respectively) than heifers with probe readings in the ranges of <60, 60-64, 65-69, 70-74, and 75-79 (35.7, 40.9, 31.4, 35.3, and 26.9%, respectively). Heifers that were bred when probe values were increasing had a lower (P<.05) percentage of male fetuses (34.4%) than those bred during a period of decreasing probe values (69.2% male fetuses). These results demonstrate that a vaginal conductivity probe may be a useful tool to determine an insemination time that could potentially alter calf sex ratio.
Abstract:
The reconstruction of past flash floods in ungauged basins leads to a high level of uncertainty, which increases if other processes are involved such as the transport of large wood material. An important flash flood occurred in 1997 in Venero Claro (Central Spain), causing significant economic losses. The wood material clogged bridge sections, raising the water level upstream. The aim of this study was to reconstruct this event, analysing the influence of woody debris transport on the flood hazard pattern. Because the reach in question was affected by backwater effects due to bridge clogging, using only high water marks or palaeostage indicators may overestimate discharges, and so other methods are required to estimate peak flows. Therefore, the peak discharge was estimated (123 ± 18 m³ s⁻¹) using indirect methods, but one-dimensional hydraulic simulation was also used to validate these indirect estimates through an iterative process (127 ± 33 m³ s⁻¹) and to reconstruct the bridge obstruction to obtain the blockage ratio during the 1997 event (~48%) and the bridge clogging curves. Rainfall-runoff modelling with stochastic simulation of different rainfall field configurations also helped to confirm that a peak discharge greater than 150 m³ s⁻¹ is very unlikely to occur and that the estimated discharge range is consistent with the estimated rainfall amount (233 ± 27 mm). It was observed that the backwater effect due to the obstruction (water level ~7 m) made the 1997 flood (~35-year return period) equivalent to the 50-year flood. This allowed the equivalent return period to be defined as the recurrence interval of an event of specified magnitude which, where large woody debris is present, is equivalent in water depth and extent of flooded area to a more extreme event of greater magnitude. These results highlight the need to include obstruction phenomena in flood hazard analysis.
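One common indirect method for post-flood peak-discharge estimation is the slope-area approach based on Manning's equation, Q = (1/n) A R^(2/3) S^(1/2) with R = A/P. The sketch below shows such an estimate for a hypothetical surveyed cross-section; the geometry, slope and roughness values are illustrative assumptions, not the Venero Claro survey data, and the paper's specific indirect method is not reproduced.

```python
# Generic slope-conveyance (Manning) sketch of an indirect peak-discharge
# estimate; all channel data below are hypothetical.
import numpy as np

def manning_discharge(area, wetted_perimeter, slope, n_roughness):
    """Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A / P (SI units)."""
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope) / n_roughness

# hypothetical high-water-mark cross-section: A in m^2, P in m, S dimensionless
A, P, S = 38.0, 24.0, 0.02
for n in (0.040, 0.050, 0.060):        # plausible roughness range for a mountain stream
    print(f"n = {n:.3f}: Q ~ {manning_discharge(A, P, S, n):.0f} m^3/s")
```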
Abstract:
The potential effects of climate change on natural risks are widely discussed, but the formulation of strategies for adapting risk management practice to climate change requires knowledge of the related risks for people and economic values. The main goals of this work were (1) the development of a method for analysing and comparing risks induced by different natural hazard types, (2) highlighting the most relevant natural hazard processes and related damages, (3) the development of an information system for monitoring the temporal development of natural hazard risk and (4) the visualisation of the resulting information for the wider public. A comparative exposure analysis provides the basis for pointing out the hot spots of natural hazard risk in the province of Carinthia, Austria. An analysis of flood risks in all municipalities provides the basis for setting priorities in the planning of flood protection measures. The methods form the basis for a monitoring system that periodically observes the temporal development of natural hazard risks. This makes it possible, firstly, to identify situations in which natural hazard risks are rising and, secondly, to differentiate between the most relevant factors responsible for the increasing risks. The factors that most influence natural risks can thus be made evident.