894 results for probability of error
Abstract:
Both public and private insurance for long-term care are undeveloped in some European countries, such as Spain, and empirical evidence is still limited. This paper aims at examining the determinants of the demand for long-term care (LTC) coverage in Spain using contingent valuation techniques. Our findings indicate that only one-fifth of the population is willing to pay to secure LTC coverage, that coverage decisions are significantly affected by private information asymmetry, and that housing tenure, by giving rise to self-insurance, reduces the probability of insurance being hypothetically purchased.
Abstract:
BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for selective crossover to letrozole of patients (n=619) in the tamoxifen arm. Comparison of sequential treatments to letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered at clinicaltrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence.
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, 77·3% for disease-free survival; 87·5%, 87·7%, 85·9% for overall survival; 89·9%, 88·7%, 88·1% for DRFI; and 86·1%, 85·3%, 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, letrozole monotherapy reduces breast-cancer recurrence and mortality compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
Abstract:
Disturbances affect metapopulations directly through reductions in population size and indirectly through habitat modification. We consider how metapopulation persistence is affected by different disturbance regimes and the way in which disturbances spread, when metapopulations are compact or elongated, using a stochastic spatially explicit model which includes metapopulation and habitat dynamics. We discover that the risk of population extinction is larger for spatially aggregated disturbances than for spatially random disturbances. By changing the spatial configuration of the patches in the system--leading to different proportions of edge and interior patches--we demonstrate that the probability of metapopulation extinction is smaller when the metapopulation is more compact. Both of these results become more pronounced when colonization connectivity decreases. Our results have important management implications, as edge patches, which are invariably considered to be less important, may play an important role as disturbance refugia.
Abstract:
This work examines behavioural relationships between young females (potential queens) and workers in a multi-nest population (supercolony) of Formica lugubris. Each nest contains hundreds of functional queens but the colony is initiated by a single foundress (secondary polygyny). Thus, recruitment of new queens into the nests is part of the population dynamics. Substantial variation in worker response towards introduced female sexuals, ranging from execution to complete acceptance, is demonstrated. The mating status of the introduced females has a clear effect on the worker response: virgin females are accepted with about twice the probability of inseminated females. When native alates are present in a nest, all introduced females are accepted with higher probability than when the native alates are absent, later in the season. No effect of distance (between donor and recipient nests) on the worker reaction was found within the supercolony borders. Proximate mechanisms and selective forces regulating the recruitment process are discussed in light of these findings.
Abstract:
OBJECTIVE: This study sought to determine the prevalence of transactional sex among university students in Uganda and to assess the possible relationship between transactional sex and sexual coercion, physical violence, mental health, and alcohol use. METHODS: In 2010, 1954 undergraduate students at a Ugandan university responded to a self-administered questionnaire that assessed mental health, substance use, physical violence and sexual behaviors including sexual coercion and transactional sex. The prevalence of transactional sex was assessed, and logistic regression analysis was performed to measure the associations between various risk factors and reporting transactional sex. RESULTS: Approximately 25% of the study sample reported having taken part in transactional sex, with more women reporting having accepted money, gifts or some compensation for sex, and more men reporting having paid, given a gift or otherwise compensated for sex. Sexual coercion in men and women was significantly associated with having accepted money, gifts or some compensation for sex. Men who were victims of physical violence in the last 12 months had a higher probability of having accepted money, gifts or some compensation for sex than other men. Women who were victims of sexual coercion reported a greater likelihood of having paid, given a gift or otherwise compensated for sex. Respondents who had been victims of physical violence in the last 12 months, engaged in heavy episodic drinking or had poor mental health status were more likely to have paid, given a gift or otherwise compensated for sex. CONCLUSIONS: University students in Uganda are at high risk of transactional sex. Young men and women may be equally vulnerable to the risks and consequences of transactional sex and should be included in program initiatives to prevent transactional sex. The role of sexual coercion, physical violence, mental health, and alcohol use should be considered when designing interventions for countering transactional sex.
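The kind of logistic regression analysis described above can be sketched as follows. This is a minimal illustration on simulated data with hypothetical predictor names and effect sizes (the survey data are not reproduced here), not the study's actual model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Hypothetical binary predictors: sexual coercion, physical violence, heavy drinking
X = rng.integers(0, 2, size=(n, 3))
logit_p = -1.0 + X @ np.array([0.8, 0.5, 0.4])          # assumed effect sizes
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)  # simulated outcome

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(model.params))      # odds ratios for intercept and each predictor
print(np.exp(model.conf_int()))  # 95% confidence intervals for the odds ratios
```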
Variability of soil fertility properties in areas planted to sugarcane in the State of Goias, Brazil
Abstract:
Soil sampling should provide an accurate representation of a given area so that recommendations for amendment of soil acidity, fertilization and soil conservation may be drafted to increase yield and improve the use of inputs. The aim of this study was to evaluate the variability of soil fertility properties of Oxisols in areas planted to sugarcane in the State of Goias, Brazil. Two areas of approximately 8,100 m² each were selected, representing two fields of the Goiasa sugarcane mill in Goiatuba. The sugarcane crop had a row spacing of 1.5 m, and subsamples were taken with a Dutch auger from 49 points in the rows and 49 between the rows, at depths of 0.0-0.2 and 0.2-0.4 m, for a total of 196 subsamples for each area. The samples were individually subjected to chemical analyses of soil fertility (pH in CaCl2, potential acidity, organic matter, P, K, Ca and Mg) and particle size analysis. The number of subsamples required to compose a sample within acceptable error ranges of 5, 10, 20 and 40 % of each property was computed from the coefficients of variation and the Student t-value for 95 % confidence. The soil properties under analysis exhibited different variabilities: high (P and K), medium (potential acidity, Ca and Mg) and low (pH, organic matter and clay content). Most of the properties analyzed showed an error of less than 20 % for a group of 20 subsamples, except for P and K, which could show an error greater than 40 % around the mean. The extreme variability in phosphorus, particularly at the depth of 0.2-0.4 m, attributed to banded application of high rates of P fertilizers at planting, places limitations on assessment of its availability due to the high number of subsamples required for a composite sample.
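The subsample calculation described above is typically done with the standard formula n = (t · CV / D)², where CV is the coefficient of variation (%) and D the acceptable error around the mean (%). A minimal sketch, assuming that formula and illustrative values rather than the study's measurements:

```python
from scipy import stats

def n_subsamples(cv_percent, error_percent, n_pilot=196):
    """Subsamples needed for a given acceptable error at 95 % confidence."""
    t = stats.t.ppf(0.975, df=n_pilot - 1)   # two-sided Student t-value
    return (t * cv_percent / error_percent) ** 2

# e.g. a high-variability property (CV = 60 %) at the 20 % error level:
print(round(n_subsamples(60, 20)))   # ~35 subsamples
```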
Abstract:
In recent years, numerous studies have demonstrated the toxic effects of organic micropollutants on the species of our lakes and rivers. Most of these studies, however, focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are far from negligible. This doctoral thesis therefore addressed models for predicting the environmental risk that such cocktails pose to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used, so as to propose adaptations for a better estimation of the risk. In the first part of this work, the risk posed by mixtures of pesticides and pharmaceuticals to the Rhône and to Lake Geneva was established using approaches envisioned, notably, in European legislation. These are screening approaches, i.e., they provide a general assessment of mixture risk. Such an approach highlights the most problematic substances, i.e., those contributing most to the toxicity of the mixture; in our case, these were essentially four pesticides. The study also shows that all substances, even at trace levels, contribute to the effect of the mixture. This finding has implications for environmental management: all sources of pollutants must be reduced, not only the most problematic ones. The proposed approach nevertheless suffers from an important conceptual bias, which makes its use beyond screening questionable and would require an adaptation of the assessment factors employed. The second part of the study focused on the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for assessment of the ecosystem as a whole. Their use should therefore proceed through a species-by-species calculation, which is rarely done owing to the scarcity of available ecotoxicological data. The aim was thus to compare, using randomly generated values, the risk calculated by a rigorous species-by-species method with the classical calculation in which the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditionally used approach; this work nevertheless identified cases in which the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on communities in situ. A two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the study period, from 2004 to 2009, this herbicide toxicity decreased from 4 % of species affected to less than 1 %. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community. Statistical analysis was used to isolate other factors that can influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time. Interestingly, some of these species had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist to predict the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in the functioning of ecosystems. These models nonetheless have limits and underlying assumptions that must be borne in mind when they are applied. - For several years now, scientists as well as society at large have been concerned by the risk organic micropollutants may pose to the aquatic environment. Indeed, much research has shown the toxic effects these substances may induce in organisms living in our lakes and rivers, especially when they are exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e. considered individually. The same also goes for the current European regulations concerning the environmental risk assessment procedures for these substances. But aquatic organisms are typically exposed every day, simultaneously, to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-level assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value, generally because of two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied across the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities in the aquatic system was investigated.
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. Indeed, mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, and not to several species aggregated simultaneously in SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets, to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that using CA directly on SSDs may lead to underestimation of the mixture concentration affecting 5% or 50% of species, especially when the substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimation, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, for common real cases of ecotoxicity data, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep this source of error in mind, especially when the SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, the long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, next to other classical limnological parameters such as nutrients. To reach this goal, the mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years, using the concentration addition and response addition models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species revealed to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviour in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
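The first-tier screening the thesis applies can be illustrated with a toy concentration-addition calculation: the mixture risk quotient is the sum of each measured environmental concentration (MEC) over its predicted no-effect concentration (PNEC), and a value above 1 flags a potential risk. The substance names and values below are illustrative, not the Lake Geneva or Rhône data:

```python
# Concentration-addition screening: RQ_mix = sum(MEC_i / PNEC_i)
mec_ug_l  = {"herbicide_A": 0.10, "herbicide_B": 0.02, "pharma_C": 0.005}
pnec_ug_l = {"herbicide_A": 0.30, "herbicide_B": 0.01, "pharma_C": 0.100}

rq_mix = sum(mec_ug_l[s] / pnec_ug_l[s] for s in mec_ug_l)
print(f"mixture risk quotient = {rq_mix:.2f}")   # > 1 flags a potential risk

# Per-substance contribution, to spot the main drivers of the mixture risk:
for s in mec_ug_l:
    print(s, f"{mec_ug_l[s] / pnec_ug_l[s] / rq_mix:.0%}")
```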
Abstract:
The erosive capacity of rainfall can be expressed by an index, and knowing it allows recommendation of soil management and conservation practices to reduce water erosion. The objective of this study was to calculate various indices of rainfall erosivity in Lages, Santa Catarina, Brazil, identify the best one, and determine its temporal distribution. The study was conducted at the Center of Agricultural and Veterinary Sciences, Lages, Santa Catarina, using daily rainfall charts from 1989 to 2012. Using the computer program Chuveros, 107 erosivity indices were obtained, based on the maximum intensity over durations of 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 70, 80, 90, 100, 110, 120, 135, 150, 165, 180, 210, and 240 min and on the combination of these intensities with the kinetic energy obtained by the equations of Brown & Foster, Wagner & Massambani, and Wischmeier & Smith. The indices of the time period from 1993 to 2012 were correlated with the respective soil losses from the standard plot of the Universal Soil Loss Equation (USLE) in order to select the erosivity index for the region. Erosive rainfall accounted for 83 % of the mean annual total volume of 1,533 mm. The erosivity index (R factor) of rainfall recommended for Lages is the EI30, whose mean annual value is 5,033 MJ mm ha-1 h-1, and of this value, 66 % occurs from September to February. Mean annual erosivity has a return period estimated at two years with a 50 % probability of occurrence.
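For reference, a storm's EI30 combines its kinetic energy with its maximum 30-min intensity. A minimal sketch, assuming the Brown & Foster unit-energy equation in the form commonly used with the USLE, e = 0.29(1 − 0.72·exp(−0.05·i)) MJ ha⁻¹ mm⁻¹; the storm segments are invented for illustration, not study data:

```python
import math

# (duration in min, intensity in mm/h) for consecutive storm segments
segments = [(10, 30.0), (30, 60.0), (20, 12.0)]

def storm_ei30(segments, i30_mm_h):
    """EI30 in MJ mm ha^-1 h^-1 via the Brown & Foster unit-energy equation."""
    energy = 0.0
    for minutes, intensity in segments:
        depth = intensity * minutes / 60.0                         # mm of rain
        e_unit = 0.29 * (1.0 - 0.72 * math.exp(-0.05 * intensity)) # MJ/ha per mm
        energy += e_unit * depth                                   # MJ/ha
    return energy * i30_mm_h

# Here the 30-min middle segment is the most intense 30-min window (60 mm/h):
print(round(storm_ei30(segments, i30_mm_h=60.0), 1))
```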
Abstract:
We study a class of models of correlated random networks in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices. We find analytical expressions for the main topological properties of these models as a function of the distribution of hidden variables and the probability of connecting vertices. The expressions obtained are checked by means of numerical simulations in a particular example. The general model is extended to describe a practical algorithm to generate random networks with an a priori specified correlation structure. We also present an extension of the class, to map nonequilibrium growing networks to networks with hidden variables that represent the time at which each vertex was introduced in the system.
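A minimal sketch of this class of models: draw a hidden variable for each vertex from a distribution, then connect each pair independently with a probability given by a kernel of the two hidden variables. The exponential distribution and multiplicative kernel below are illustrative choices, not the paper's particular example:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500
h = rng.exponential(scale=10.0, size=N)      # hidden variables ~ rho(h)

def r(hi, hj, c=2e-4):
    """Connection probability kernel r(h, h')."""
    return min(1.0, c * hi * hj)

adj = np.zeros((N, N), dtype=bool)
for i in range(N):
    for j in range(i + 1, N):
        if rng.random() < r(h[i], h[j]):     # edge drawn independently per pair
            adj[i, j] = adj[j, i] = True

# For this kernel, mean degree ~ (N-1) * c * <h>^2 when the cap at 1 rarely binds
print(adj.sum(axis=1).mean())
```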
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier be capable of splitting the subgroups of classes of each binary problem. However, a linear classifier cannot be guaranteed to model convex regions, and nonlinear classifiers also fail to handle some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training size.
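A minimal sketch of the plain ECOC machinery this paper builds on (not its subclass-splitting design): each class gets a binary code word, one base learner is trained per code position, and prediction picks the class whose code word is closest in Hamming distance to the learners' joint output. The data, code matrix, and base classifier are illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_informative=6, n_classes=4,
                           random_state=0)

# 4 classes x 5 binary problems (hand-made code matrix, entries in {0, 1})
code = np.array([[1, 0, 1, 0, 1],
                 [0, 1, 1, 0, 0],
                 [1, 1, 0, 1, 0],
                 [0, 0, 0, 1, 1]])

# One binary learner per code position (column)
learners = [LogisticRegression(max_iter=1000).fit(X, code[y, b])
            for b in range(code.shape[1])]

def predict(X_new):
    bits = np.column_stack([clf.predict(X_new) for clf in learners])
    hamming = np.abs(bits[:, None, :] - code[None, :, :]).sum(axis=2)
    return hamming.argmin(axis=1)         # label of the closest code word

print((predict(X) == y).mean())           # training accuracy of the ensemble
```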
Abstract:
Background: Mortality among patients who complete tuberculosis (TB) treatment is still high among vulnerable populations. The objective of the study was to identify the probability of death and its predictive factors in a cohort of successfully treated TB patients. Methods: A population-based retrospective longitudinal study was performed in Barcelona, Spain. All patients who successfully completed TB treatment, with culture confirmation and available drug susceptibility testing, between 1995 and 1997 were retrospectively followed up until December 31, 2005 by the Barcelona TB Control Program. Socio-demographic, clinical, microbiological and treatment variables were examined. Mortality, TB Program and AIDS registries were reviewed. Kaplan-Meier and Cox regression methods with time-dependent covariates were used for the survival analysis, calculating hazard ratios (HR) with 95% confidence intervals (CI). Results: Among the 762 included patients, the median age was 36 years, 520 (68.2%) were male, 178 (23.4%) were HIV-infected, and 208 (27.3%) were alcohol abusers. Of the 134 (17.6%) injecting drug users (IDU), 123 (91.8%) were HIV-infected. A total of 30 (3.9%) recurrences and 173 deaths (22.7%) occurred (mortality rate: 3.4/100 person-years of follow-up). The predictors of death were: age between 41 and 60 years (HR: 3.5; CI: 2.1-5.7), age greater than 60 years (HR: 14.6; CI: 8.9-24), alcohol abuse (HR: 1.7; CI: 1.2-2.4) and HIV-infected IDU (HR: 7.9; CI: 4.7-13.3). Conclusions: Mortality among TB patients who completed treatment is associated with vulnerable populations such as the elderly, alcohol abusers, and HIV-infected IDU. We therefore need to fight against poverty, and to promote and develop interventions and social policies directed towards these populations to improve their survival.
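The survival workflow used here (Kaplan-Meier curves and Cox models yielding HRs with 95% CIs) can be sketched with the lifelines library; its bundled Rossi recidivism dataset stands in for the study cohort, which is not public, and this simple model omits the time-dependent covariates the study used:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()                  # bundled example cohort (recidivism)

cph = CoxPHFitter()
cph.fit(rossi, duration_col="week", event_col="arrest")
cph.print_summary()                   # hazard ratios exp(coef) with 95% CI
```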
Abstract:
Background Patients with cirrhosis in Child-Pugh class C or those in class B who have persistent bleeding at endoscopy are at high risk for treatment failure and a poor prognosis, even if they have undergone rescue treatment with a transjugular intrahepatic portosystemic shunt (TIPS). This study evaluated the earlier use of TIPS in such patients. Methods We randomly assigned, within 24 hours after admission, a total of 63 patients with cirrhosis and acute variceal bleeding who had been treated with vasoactive drugs plus endoscopic therapy to treatment with a polytetrafluoroethylene-covered stent within 72 hours after randomization (early-TIPS group, 32 patients) or continuation of vasoactive-drug therapy, followed after 3 to 5 days by treatment with propranolol or nadolol and long-term endoscopic band ligation (EBL), with insertion of a TIPS if needed as rescue therapy (pharmacotherapy-EBL group, 31 patients). Results During a median follow-up of 16 months, rebleeding or failure to control bleeding occurred in 14 patients in the pharmacotherapy-EBL group as compared with 1 patient in the early-TIPS group (P=0.001). The 1-year actuarial probability of remaining free of this composite end point was 50% in the pharmacotherapy-EBL group versus 97% in the early-TIPS group (P<0.001). Sixteen patients died (12 in the pharmacotherapy-EBL group and 4 in the early-TIPS group, P=0.01). The 1-year actuarial survival was 61% in the pharmacotherapy-EBL group versus 86% in the early-TIPS group (P<0.001). Seven patients in the pharmacotherapy-EBL group received TIPS as rescue therapy, but four died. The number of days in the intensive care unit and the percentage of time in the hospital during follow-up were significantly higher in the pharmacotherapy-EBL group than in the early-TIPS group. No significant differences were observed between the two treatment groups with respect to serious adverse events. Conclusions In these patients with cirrhosis who were hospitalized for acute variceal bleeding and at high risk for treatment failure, the early use of TIPS was associated with significant reductions in treatment failure and in mortality. (Current Controlled Trials number, ISRCTN58150114.)
Abstract:
Background and aims: Previous clinical trials suggest that adding non-selective beta-blockers improves the efficacy of endoscopic band ligation (EBL) in the prevention of recurrent bleeding, but no study has evaluated whether EBL improves the efficacy of beta-blockers + isosorbide-5-mononitrate. The present study was aimed at evaluating this issue in a multicentre randomised controlled trial (RCT) and at correlating changes in the hepatic venous pressure gradient (HVPG) during treatment with clinical outcomes. Methods: 158 patients with cirrhosis, admitted because of variceal bleeding, were randomised to receive nadolol + isosorbide-5-mononitrate alone (Drug; n=78) or combined with EBL (Drug+EBL; n=80). HVPG measurements were performed at randomisation and after 4-6 weeks on medical therapy. Results: Median follow-up was 15 months. One-year probability of recurrent bleeding was similar in both groups (33% vs 26%; p=0.3). There were no significant differences in survival or need of rescue shunts. Overall adverse events, and those requiring hospital admission, were significantly more frequent in the Drug+EBL group. Recurrent bleeding was significantly more frequent in HVPG non-responders than in responders (HVPG reduction ≥20% or to ≤12 mm Hg). Among non-responders, recurrent bleeding was similar in patients treated with Drug or Drug+EBL. Conclusions: Adding EBL to pharmacological treatment did not reduce recurrent bleeding, the need for rescue therapy, or mortality, and was associated with more adverse events. Furthermore, adding EBL to drug therapy did not reduce the high rebleeding risk of HVPG non-responders.
Abstract:
BACKGROUND AND PURPOSE: There is no strong evidence that all ischaemic stroke types are associated with high cardiovascular risk. Our aim was to investigate whether all ischaemic stroke types are associated with high cardiovascular risk. METHODS: All consecutive patients with ischaemic stroke registered in the Athens Stroke Registry between 1 January 1993 and 31 December 2010 were categorized according to the TOAST classification and were followed up for up to 10 years. Outcomes assessed were cardiovascular and all-cause mortality, myocardial infarction, stroke recurrence, and a composite cardiovascular outcome consisting of myocardial infarction, angina pectoris, acute heart failure, sudden cardiac death, stroke recurrence and aortic aneurysm rupture. The Kaplan-Meier product limit method was used to estimate the probability of each end-point in each patient group. Cox proportional hazards models were used to determine the independent covariates of each end-point. RESULTS: Two thousand seven hundred and thirty patients were followed up for 48.1 ± 41.9 months. The cumulative probabilities of 10-year cardiovascular mortality in patients with cardioembolic stroke [46.6%, 95% confidence interval (CI) 40.6-52.8], lacunar stroke (22.1%, 95% CI 16.2-28.0) or undetermined stroke (35.2%, 95% CI 27.8-42.6) were either similar to or higher than those of patients with large-artery atherosclerotic stroke (LAA) (28.7%, 95% CI 22.4-35.0). Compared with LAA, all other TOAST types had a higher probability of 10-year stroke recurrence. In Cox proportional hazards analysis, compared with patients with LAA, patients with any other stroke type were associated with similar or higher risk for the outcomes of overall mortality, cardiovascular mortality, stroke recurrence and composite cardiovascular outcome. CONCLUSIONS: Large-artery atherosclerotic stroke and cardioembolic stroke are associated with the highest risk for future cardiovascular events, with the latter carrying at least as high a risk as LAA stroke.
Abstract:
Background: Screening of elevated blood pressure (BP) in children has been advocated to identify hypertension early. However, identification of children with sustained elevated BP is challenging because of high BP variability. The value of an elevated BP measure during childhood and adolescence for the prediction of future elevated BP is not well described. Objectives: We assessed the positive (PPV) and negative (NPV) predictive values of high BP for sustained elevated BP in cohorts of children of the Seychelles, a rapidly developing island state in the African region. Methods: Serial school-based surveys of weight, height, and BP were conducted yearly between 1998 and 2006 among all students of the country in four school grades (kindergarten [G0, mean age (SD): 5.5 (0.4) yr], G4 [9.2 (0.4) yr], G7 [12.5 (0.4) yr] and G10 [15.6 (0.5) yr]). We constituted three cohorts of children examined twice at a 3-4 year interval: 4,557 children examined at G0 and G4, 6,198 at G4 and G7, and 6,094 at G7 and G10. The same automated BP measurement devices were used throughout the study. BP was measured twice at each exam and averaged. Obesity and elevated BP were defined using the CDC criteria (BMI ≥95th sex- and age-specific percentile) and the NHBPEP criteria (BP ≥95th sex-, age-, and height-specific percentile), respectively. Results: Prevalence of obesity was 6.1% at G0, 7.1% at G4, 7.5% at G7, and 6.5% at G10. Prevalence of elevated BP was 10.2% at G0, 9.9% at G4, 7.1% at G7, and 8.7% at G10. Among children with elevated BP at the initial exam, the PPV for still having elevated BP at the later exam was low but increased with age: 13% between G0 and G4, 19% between G4 and G7, and 27% between G7 and G10. Among obese children with elevated BP, the PPV was higher: 33%, 35% and 39%, respectively. Overall, the probability for children with normal BP to remain in that category 3-4 years later (NPV) was 92%, 95%, and 93%, respectively. By comparison, the PPV for children initially obese to remain obese was much higher, at 71%, 71%, and 62% (G7-G10), respectively. The NPV (i.e. the probability of remaining at normal weight) was 94%, 96%, and 98%, respectively. Conclusion: During childhood and adolescence, an elevated BP at one occasion is a weak predictor of sustained elevated BP 3-4 years later. In obese children, it is a better predictor.
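The PPV and NPV reported above come from a 2x2 table of screening result versus status 3-4 years later. A minimal sketch; the counts are invented to roughly reproduce the reported G7-G10 values (27% PPV, 93% NPV), not taken from the study:

```python
def ppv_npv(tp, fp, fn, tn):
    """PPV = P(elevated later | elevated now); NPV = P(normal later | normal now)."""
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative counts for the G7 -> G10 cohort (n = 6094):
ppv, npv = ppv_npv(tp=143, fp=387, fn=390, tn=5174)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")   # PPV = 27%, NPV = 93%
```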