795 results for Risk management practices
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit, and that this upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid-price control policy. Computational experiments indicate that our approach is quite fast, scales to industrial problems, and can provide significant improvements over standard benchmarks.
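A minimal sketch of the randomized-LP idea (not the paper's exact formulation; fares, capacities, show-up probabilities and the demand model below are all invented for illustration): the LP optimal value is concave in the demand right-hand side, so averaging the LP value over sampled demand vectors gives an upper bound that is at least as tight as the deterministic LP evaluated at the mean demand.

```python
# Illustrative sketch of a randomized LP bound for network revenue management
# with product-specific show-up probabilities. All data are hypothetical.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

fares = np.array([400.0, 250.0, 180.0])   # revenue per product
show_up = np.array([0.95, 0.90, 0.80])    # product-specific show-up probability
A = np.array([[1, 1, 0],                  # leg-product incidence matrix
              [0, 1, 1]])
cap = np.array([100.0, 120.0])            # leg capacities
mean_demand = np.array([60.0, 80.0, 90.0])

def lp_value(demand):
    """Optimal revenue when bookings are capped by `demand` and the
    expected show-ups on each leg must fit within capacity."""
    res = linprog(c=-fares,               # linprog minimizes, so negate
                  A_ub=A * show_up,       # expected show-ups per leg
                  b_ub=cap,
                  bounds=[(0, d) for d in demand])
    return -res.fun

# Deterministic LP bound: plug in the mean demand.
dlp_bound = lp_value(mean_demand)

# Randomized LP bound: average the LP value over sampled demand vectors.
samples = rng.poisson(mean_demand, size=(500, len(fares)))
rlp_bound = np.mean([lp_value(d) for d in samples])

# Concavity of the LP value in demand (Jensen) gives rlp_bound <= dlp_bound,
# i.e. the randomized bound is at least as tight in this simplified model.
print(f"deterministic LP bound: {dlp_bound:.1f}")
print(f"randomized LP bound:    {rlp_bound:.1f}")
```

In such formulations, the dual variables of the leg-capacity constraints are the natural starting point for a bid-price control policy.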
Abstract:
This paper describes the interdisciplinary work done in Uspantán, Guatemala, a city vulnerable to natural hazards. We investigated local responses to landslides that occurred in 2007 and 2010 and had a strong impact on the local community. We show a complete example of a systemic approach that incorporates physical, social and environmental aspects in order to understand risks. The objective of this work is to present the combination of social and geological data (mapping) and to describe the methodology used for the identification and assessment of risk. The article discusses both the limitations and the methodological challenges encountered when conducting interdisciplinary research. This work shows the benefits of addressing risk from an interdisciplinary perspective, in particular how integrating the social sciences can help identify new phenomena and natural hazards and assess risk; it gives a practical example of how data from different fields can be integrated. What is innovative is the use of mapping to combine qualitative and quantitative data: by coupling approaches, we could associate a hazard map with qualitative data gathered through interviews with the population. This map is an important document for the authorities, as it allows them to be aware of the most dangerous zones, the affected families, and the places where intervention is most urgent.
Abstract:
Biologic agents have substantially advanced the treatment of immunological disorders, including chronic inflammatory and autoimmune diseases. However, these drugs are often associated with adverse events (AEs), including allergic, immunological and other unwanted reactions. AEs can affect almost any organ or system in the body and can occur immediately, within minutes to hours, or with a delay of several days or more after initiation of biologic therapy. Although some AEs are a direct consequence of the functional inhibition of biologic-agent-targeted antigens, the pathogenesis of other AEs results from a drug-induced imbalance of the immune system, intermediary factors and cofactors, a complexity that complicates their prediction. Herein, we review the AEs associated with biologic therapy most relevant to rheumatic and immunological diseases, and discuss their underlying pathogenesis. We also include our recommendations for the medical management of such AEs. Increased understanding and improved risk management of AEs induced by biologic agents will enable better use of these versatile immune-response modifiers.
Abstract:
BACKGROUND: There is an emerging knowledge base on the effectiveness of strategies to close the knowledge-practice gap. However, less is known about how attributes of an innovation and other contextual and situational factors facilitate or impede an innovation's adoption. The Healthy Heart Kit (HHK) is a risk management and patient education resource for the prevention of cardiovascular disease (CVD) and the promotion of cardiovascular health. Although previous studies have demonstrated the HHK's content validity and practical utility, no published study has examined physicians' uptake of the HHK and the factors that shape its adoption. OBJECTIVES: Conceptually informed by Rogers' Diffusion of Innovations theory and the Theory of Planned Behaviour, this study had two objectives: (1) to determine whether specific attributes of the HHK as well as contextual and situational factors are associated with physicians' intention to use and actual usage of the HHK; and (2) to determine whether any contextual and situational factors are associated with individual or environmental barriers that prevent uptake of the HHK among those physicians who do not plan to use it. METHODS: A sample of 153 physicians, recruited from among all family physicians in the province of Alberta, Canada who responded to an invitation letter, participated in the study. Participating physicians were sent an HHK, and two months later a study questionnaire assessed factors relating to the physicians' clinical practice, attributes of the HHK (relative advantage, compatibility, complexity, trialability, observability), confidence and control in using the HHK, barriers to use, and individual attributes. All measures were used in a path analysis employing a causal model based on Rogers' Diffusion of Innovations theory and the Theory of Planned Behaviour. RESULTS: 115 physicians (a follow-up rate of 75%) completed the questionnaire. Use of the HHK was associated with intention to use the HHK, relative advantage, and years of experience. Relative advantage and the observability of the HHK's benefits were also significantly associated with physicians' intention to use the HHK. Physicians working in solo medical practices reported experiencing more individual and environmental barriers to using the HHK. CONCLUSION: The results of this study suggest that future information innovations must demonstrate an advantage over current resources, and the research evidence supporting the innovation must be clearly visible. The findings also suggest that the innovation adoption process has a social element, and that collegial interactions and discussions may facilitate that process. These results could be valuable for knowledge translation researchers and health promotion developers in future innovation adoption planning.
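A minimal sketch of the path-analytic idea on synthetic data (variable names and effect sizes are hypothetical, not the study's estimates; the actual model has more constructs): two stacked OLS regressions, one predicting intention from innovation attributes, and one predicting actual use from intention plus direct effects.

```python
# Toy path analysis: attributes -> intention -> use, with direct effects.
import numpy as np

rng = np.random.default_rng(1)
n = 115  # follow-up sample size reported in the abstract

relative_advantage = rng.normal(size=n)
observability = rng.normal(size=n)
years_experience = rng.normal(size=n)

# Synthetic data generated to mimic the reported associations.
intention = (0.5 * relative_advantage + 0.3 * observability
             + rng.normal(scale=0.5, size=n))
use = (0.6 * intention + 0.2 * relative_advantage
       + 0.15 * years_experience + rng.normal(scale=0.5, size=n))

def ols(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]                              # drop intercept

# Path 1: attributes of the innovation -> intention to use
p1 = ols(intention, np.column_stack([relative_advantage, observability]))
# Path 2: intention (+ direct effects) -> actual use
p2 = ols(use, np.column_stack([intention, relative_advantage, years_experience]))

print("attributes -> intention:", np.round(p1, 2))
print("intention, RA, experience -> use:", np.round(p2, 2))
```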
Abstract:
Despite numerous discussions, workshops, reviews and reports about the responsible development of nanotechnology, information describing the health and environmental risks of engineered nanoparticles or nanomaterials is severely lacking and thus insufficient for completing rigorous risk assessments of their use. However, since preliminary scientific evaluations indicate that there are reasonable suspicions that activities involving nanomaterials might have damaging effects on human health, the precautionary principle must be applied. Public and private institutions as well as industries have the duty to adopt preventive and protective measures proportionate to the risk intensity and the desired level of protection. In this work, we present a practical, 'user-friendly' procedure for university-wide safety and health management of nanomaterials, developed as a multi-stakeholder effort (government, accident insurance, researchers and experts in occupational safety and health). The process starts with a schematic decision tree that classifies the nano laboratory into one of three hazard classes, similar to a control banding approach (from Nano 3, the highest hazard, to Nano 1, the lowest). Classifying laboratories into risk classes would require considering actual or potential exposure to the nanomaterial as well as statistical data on the health effects of exposure. Because these data (as well as exposure limits for each individual material) are not available, risk classes could not be determined. For each hazard level we then provide a list of required risk mitigation measures (technical, organizational and personal). The target 'users' of this safety and health methodology are researchers and safety officers, who can rapidly determine the precautionary hazard class of their activities and the corresponding adequate safety and health measures. We have succeeded in convincing scientists engaged in nano-activities that adequate safety measures and management promote innovation and discovery by ensuring a safe environment, even when working with very novel products; the proposed measures are seen not as constraints but as a support to their research. This methodology is being implemented at the Ecole Polytechnique de Lausanne in over 100 research labs dealing with nanomaterials, and in our opinion it would be useful to other research and academic institutions as well.
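A hypothetical sketch of the decision-tree/control-banding step (the criteria, thresholds and mitigation lists below are invented for illustration; the actual scheme is more detailed):

```python
# Toy decision tree: classify a nano lab into a precautionary hazard class
# (Nano 3 = highest, Nano 1 = lowest) and look up required measures.
def hazard_class(handles_free_nanoparticles: bool,
                 airborne_release_possible: bool,
                 quantity_grams_per_day: float) -> str:
    if handles_free_nanoparticles and airborne_release_possible:
        return "Nano 3"   # highest hazard: free particles that can aerosolize
    if handles_free_nanoparticles or quantity_grams_per_day > 1.0:
        return "Nano 2"   # intermediate: free handling or larger quantities
    return "Nano 1"       # lowest hazard: e.g. nanomaterials fixed in a matrix

MITIGATION = {  # required measures per class (illustrative subset)
    "Nano 3": ["glove box or closed system", "full PPE", "access control"],
    "Nano 2": ["fume hood", "gloves and lab coat", "labelled waste stream"],
    "Nano 1": ["standard lab hygiene", "basic training"],
}

cls = hazard_class(True, False, 0.1)
print(cls, "->", MITIGATION[cls])
```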
Abstract:
This article has an immediate predecessor, upon which it is based and with which readers must be familiar: Towards a Theory of the Credit-Risk Balance Sheet (Vallverdú, Somoza and Moya, 2006). That article conceptualises the Balance Sheet on the basis of the duality of a credit-based transaction; it deals with its theoretical foundations, providing evidence of a causal credit-risk duality, that is, a true causal relationship, and analyses its characteristics, properties, and static and dynamic behaviour. The present article, a logical continuation of the previous one, studies the evolution of the structure of the Credit-Risk Balance Sheet as a consequence of a business's dynamics in the credit area. Given the Credit-Risk Balance Sheet of a company at any given time, it attempts to estimate, by means of sequential analysis, its structural evolution, showing its usefulness in the management and control of credit and risk. To do this, it draws, with the necessary adaptations, on the by-now classic works of Palomba and Cutolo. The establishment of the corresponding transformation matrices allows one to move from an initial balance-sheet structure to a final, future one, to understand trends in the credit-risk situation, and to make its monitoring and control possible, basic elements in providing support for risk management.
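A worked toy example of the transformation-matrix mechanism the article builds on (categories and figures are invented, not taken from the article or from Palomba and Cutolo): the balance-sheet structure is a vector of proportions, and a column-stochastic matrix maps the structure in one period to the structure in the next.

```python
# Toy projection of a credit-risk balance-sheet structure over time.
import numpy as np

categories = ["current receivables", "overdue receivables", "written off"]

# T[i, j] = share of category j that moves to category i in one period
# (columns sum to 1, so total proportions are preserved).
T = np.array([[0.80, 0.10, 0.00],
              [0.15, 0.60, 0.00],
              [0.05, 0.30, 1.00]])

structure = np.array([0.90, 0.08, 0.02])   # initial balance-sheet structure

for t in range(3):                          # project three periods ahead
    structure = T @ structure
    print(f"t={t + 1}:", dict(zip(categories, np.round(structure, 3))))
```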
Abstract:
This study deals with the psychological processes underlying the selection of an appropriate strategy during exploratory behavior. A new device was used to assess sexual dimorphisms in spatial abilities that do not depend on spatial rotation, map reading or directional vector extraction capacities. Moreover, it makes it possible to investigate exploratory behavior as a specific response to novelty that trades off risk and reward. Risk management under uncertainty was assessed through both spontaneous searching strategies and signal detection capacities. The results on exploratory behavior, detection capacities, and decision-making strategies seem to indicate that women's exploratory behavior is based on risk reduction, while men's behavior does not appear to be influenced by this variable. This difference was interpreted as a difference in information processing that modifies beliefs concerning the likelihood of uncertain events, and therefore influences risk evaluation.
Abstract:
In recent years, numerous studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. Most of these studies, however, focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are not negligible. This doctoral thesis therefore examined models for predicting the environmental risk that these cocktails pose to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of the mixtures of pesticides and pharmaceuticals present in the Rhône and in Lake Geneva was established using approaches envisioned, notably, in European legislation. These are screening approaches, that is, approaches providing a general evaluation of mixture risk. Such an approach makes it possible to single out the most problematic substances, those contributing most to the toxicity of the mixture; in our case these were essentially four pesticides. The study also shows that all substances, even in minute traces, contribute to the effect of the mixture. This observation has implications for environmental management: it means that all sources of pollutants must be reduced, not only the most problematic ones. The proposed approach, however, also has an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the safety factors employed. In the second part, the study turned to the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for an evaluation of the ecosystem as a whole. Their use should therefore proceed species by species, which is rarely done owing to the lack of available ecotoxicological data. The aim was thus to compare, using randomly generated values, the risk calculated by the rigorous species-by-species method with the risk calculated in the classical way, in which the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditional approach; this work nevertheless identified cases in which the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on communities in situ. A two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, 2004 to 2009, this herbicide toxicity decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community. To do this, statistical analysis was used to isolate other factors that could influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time. Interestingly, some of them had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species, and that these can be used to explain the role of such substances in ecosystem functioning. These models nevertheless have limits and underlying assumptions that it is important to consider when applying them.

For several years now, scientists as well as society at large have been concerned about the risk organic micropollutants may pose to aquatic environments. Indeed, many studies have shown the toxic effects these substances can induce in organisms living in our lakes and rivers, especially when they are exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e. considered individually, and the same holds for the environmental risk assessment procedures in current European regulations. Yet aquatic organisms are typically exposed every day, and simultaneously, to thousands of organic compounds, and the toxic effects of these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, and the outcomes of these applications are discussed. These first-level assessments showed that the mixture risk for the studied cases rapidly exceeded the critical value, an exceedance generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with these mixtures are not negligible even without considering the main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities in aquatic systems was investigated.
These methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. Mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, not to several species aggregated simultaneously in an SSD. Hence, a more rigorous procedure for mixture risk assessment would be to apply the CA or RA model to each species separately first and, in a second step, to combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially generated datasets, in order to characterize the robustness of the traditional approach of applying the models directly to species sensitivity distributions. The results showed that using CA directly on SSDs may underestimate the mixture concentration affecting 5% or 50% of species, especially when the substances show a large standard deviation in their species sensitivity distribution. Applying RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, for common real cases of ecotoxicity data, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and tends, if anything, to slightly overestimate the risk. These results can be taken as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep this source of error in mind, especially when the SSDs show a data distribution outside the range determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, the long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that appeared to be influenced, positively or negatively, by the decrease in toxicity in the lake over time showed similar behaviour in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses.
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
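A sketch of the comparison at the heart of the second part, on synthetic data (log-normal species sensitivities; all numbers invented, not the thesis dataset): concentration addition applied species by species and then aggregated into an SSD, versus concentration addition applied directly to the substance-level HC5 values.

```python
# Compare "CA per species, then SSD" with "SSD per substance, then CA".
import numpy as np

rng = np.random.default_rng(42)
n_species, n_substances = 50, 3
p = np.array([0.5, 0.3, 0.2])      # mixture composition (concentration fractions)

# Log-normal species sensitivities (EC50s) per substance; the spread (sigma)
# is the factor the thesis identifies as driving the underestimation.
mu = np.array([1.0, 2.0, 3.0])     # log-mean EC50 per substance
sigma = np.array([1.5, 1.5, 1.5])
ec50 = np.exp(mu + sigma * rng.normal(size=(n_species, n_substances)))

def hc5(values):
    """5th percentile of a sensitivity distribution, i.e. the concentration
    hazardous to 5% of species."""
    return np.percentile(values, 5)

# Rigorous route: CA for each species first, SSD afterwards.
ec50_mix_per_species = 1.0 / (p / ec50).sum(axis=1)
hc5_rigorous = hc5(ec50_mix_per_species)

# Traditional route: SSD per substance first, CA on the HC5s afterwards.
hc5_per_substance = np.array([hc5(ec50[:, k]) for k in range(n_substances)])
hc5_traditional = 1.0 / (p / hc5_per_substance).sum()

print(f"HC5, CA per species then SSD: {hc5_rigorous:.3f}")
print(f"HC5, CA directly on SSDs:     {hc5_traditional:.3f}")
```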
Abstract:
In the evaluation of soil quality for agricultural use, soil structure is one of the most important properties, influenced not only by climate, biological activity, and management practices but also by mechanical and physico-chemical forces acting in the soil. The purpose of this study was to evaluate the influence of conventional agricultural management on the structure and microstructure of a Latossolo Vermelho distroférrico típico (Rhodic Hapludox) in an experimental area planted to maize. Soil morphology was described using the crop profile method by identifying the distinct structural volumes called Morphologically Homogeneous Units (MHUs). For comparison, we also described a profile in an adjacent area without agricultural use and under natural regrowth, referred to as Memory. We took undisturbed samples from the main MHUs to prepare thin sections and blocks of soil for micromorphological and micromorphometrical analyses. Application of the crop profile method showed the occurrence of the following structural types in both profiles analyzed: loose (L), fragmented (F) and continuous (C). In the Memory soil profile, the fragmented structures were classified as Fptμ∆+tf and Fmt∆μ, whose micromorphology shows an enaulic-porphyric (porous) relative distribution with a great deal of biological activity, as indicated by the presence of vughs and channels. Lower down, from 0.20 to 0.35 m, there was a continuous soil volume (sub-type C∆μ), with a subangular block microstructure and an enaulic-porphyric relative distribution, though in this case more compact, with aggregate coalescence and less biological activity. The micromorphometrical study of the Memory Plot soil showed the predominance of complex pores in NAM (15.03 %), Fmt∆μ (11.72 %), and Fptμ∆+tf (7.73 %), and rounded pores in C∆μ (8.21 %). In the soil under conventional agricultural management, we observed fragmented structures similar to the Memory Plot from 0.02 to 0.20 m, followed by a volume with a compact continuous structure (C∆μ), without visible porosity and with few roots. In the MHUs under conventional management, a reduction in packing pores (40 %) was observed, mainly in the continuous units (C). The microstructure had well-defined blocks, with the occurrence of planar pores and less evidence of biological activity. In conclusion, the morphological and micromorphological analyses of the soil profiles studied offered complementary information regarding soil structural quality, especially concerning the changes in pore types as a result of mechanical stress undergone by the soil.
Abstract:
OBJECTIVE: The Healthy Heart Kit (HHK) is a risk management and patient education kit for the prevention of cardiovascular disease (CVD) and the promotion of cardiovascular health. There are currently no published data examining predictors of HHK use by physicians. The main objective of this study was to examine the association between physicians' characteristics (socio-demographic, cognitive, and behavioural) and use of the HHK. METHODS: All registered family physicians in Alberta (n=3068) were invited to participate in the "Healthy Heart Kit" study. Consenting physicians (n=153) received the Kit and were requested to use it for two months. At the end of this period, a questionnaire collected data on the frequency of Kit use by physicians, as well as socio-demographic, cognitive, and behavioural variables pertaining to the physicians. RESULTS: The questionnaire was returned by 115 physicians (follow-up rate = 75%). On a scale ranging from 0 to 100, the mean score of Kit use was 61 [SD=26]. A multiple linear regression showed that "agreement with the Kit" and the degree of "confidence in using the Kit" were strongly associated with Kit use, together explaining 46% of the variability in Kit use. Time since graduation was inversely associated with Kit use, and a trend was observed for smaller practices to be associated with lower use. CONCLUSION: Given these findings, future research and practice should explore innovative strategies to gain initial agreement among physicians to employ such clinical tools. Participation of older physicians and solo practitioners in this process should be emphasized.
Abstract:
A score system integrating the evolution of efficacy and tolerability over time was applied to a subpopulation of the STRATHE trial, a trial performed according to a parallel-group design with double-blind, random allocation to either a fixed-dose combination strategy (perindopril/indapamide 2 mg/0.625 mg, with the possibility to increase the dose to 3 mg/0.935 mg and 4 mg/1.250 mg if needed, n = 118), a sequential monotherapy approach (atenolol 50 mg, followed by losartan 50 mg and amlodipine 5 mg if needed, n = 108), or a stepped-care strategy (valsartan 40 mg, followed by valsartan 80 mg and valsartan 80 mg + hydrochlorothiazide 12.5 mg if needed, n = 103). The aim was to lower blood pressure below 140/90 mmHg within a 9-month period. The treatment could be adjusted after 3 and 6 months. Only patients in whom the study protocol was strictly applied were included in this analysis. At completion of the trial, the total score averaged 13.1 +/- 70.5 (mean +/- SD) with the fixed-dose combination strategy, compared with -7.2 +/- 81.0 with the sequential monotherapy approach and -17.5 +/- 76.4 with the stepped-care strategy. In conclusion, the use of a score system allows the comparison of antihypertensive therapeutic strategies, taking efficacy and tolerability into account at the same time. In the STRATHE trial, the best results were observed with the fixed-dose combination of low doses of an angiotensin-converting enzyme inhibitor (perindopril) and a diuretic (indapamide).
Abstract:
The objective of this work is to calculate the amount of the periodic pure premium that the reinsurer should charge the cedent in a finite risk reinsurance contract in a stochastic financial environment. The problem of the convolution of the different random variables involved in the premium calculation is solved by Monte Carlo simulation of claims trajectories for the reinsurer, subsequently applying, to each simulated trajectory, the financial decision criteria of expectation, variance and deviation. For the variance and deviation criteria, we propose using a stochastic recurrence equation to avoid the dependence problem that exists between the stochastic capitalization factors, obtaining the reinsurance premium as a function of the reinsurer's level of risk aversion and the volatility of the interest rate. Keywords: finite risk, stochastic environment, recurrence equation, Monte Carlo simulation, periodic pure premium.
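A minimal sketch of the Monte Carlo approach under the expectation criterion, assuming a simple compound-Poisson claims model and independent annual rates (both hypothetical; the paper's specification, including the stochastic recurrence equation for the variance and deviation criteria, differs):

```python
# Monte Carlo estimate of a periodic pure premium: the constant annual
# premium P whose expected discounted value matches expected discounted claims.
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_years = 10_000, 5

# Stochastic annual rates around 3%; the cumulative product over the years
# is where the dependence between capitalization factors appears.
rates = rng.normal(loc=0.03, scale=0.01, size=(n_paths, n_years))
discount = np.cumprod(1.0 / (1.0 + rates), axis=1)  # discounts year t+1 to 0

# Aggregate annual claims: compound Poisson frequency x log-normal severity.
freq = rng.poisson(lam=2.0, size=(n_paths, n_years))
claims = np.array([[rng.lognormal(mean=10.0, sigma=0.8, size=f).sum()
                    for f in path] for path in freq])

# Expectation criterion: E[PV(premiums)] = E[PV(claims)], premium paid at
# the end of each year.
pv_claims = (claims * discount).sum(axis=1)
annuity = discount.sum(axis=1)
premium = pv_claims.mean() / annuity.mean()

print(f"periodic pure premium (expectation criterion): {premium:,.0f}")
```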
Abstract:
Most local agencies in Iowa currently make their pavement treatment decisions based on limited experience, due primarily to the lack of a systematic decision-making framework and a decision-aid tool. The lack of objective condition assessment data on agency pavements also contributes to this problem. This study developed a systematic pavement treatment selection framework for local agencies to assist them in selecting the most appropriate treatment and to help justify their maintenance and rehabilitation decisions. The framework is based on an extensive literature review of various pavement treatment techniques in terms of their technical applicability and limitations, relevant practices of neighboring states, and the results of a survey of local agencies. The treatment selection framework involves three steps: pavement condition assessment, selection of technically feasible treatments using decision trees, and selection of the most appropriate treatment considering return on investment (ROI) and other, non-economic factors. An Excel-based spreadsheet tool that automates the treatment selection framework was also developed, along with a standalone user guide for the tool. The Pavement Treatment Selection Tool (PTST) for Local Agencies allows users to enter the severity and extent levels of existing distresses and then recommends a set of technically feasible treatments. The tool also evaluates the ROI of each feasible treatment and, if necessary, can evaluate the non-economic value of each treatment option to help determine the most appropriate treatment for the pavement. It is expected that the framework and tool will help local agencies significantly improve their pavement asset management practices and make better, more defensible economic decisions on pavement treatment selection.
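A hypothetical sketch of the three-step logic (condition assessment, decision-tree screening, ROI ranking); the distress thresholds, treatments, costs and service lives below are invented for illustration and do not come from the PTST:

```python
# Toy treatment selection: screen feasible treatments from condition data,
# then rank them by a simple ROI proxy (life gained per dollar).
from dataclasses import dataclass

@dataclass
class Treatment:
    name: str
    cost_per_sqyd: float
    expected_life_yrs: float

TREATMENTS = [
    Treatment("crack seal", 0.5, 3),
    Treatment("chip seal", 2.0, 6),
    Treatment("thin overlay", 6.0, 10),
    Treatment("reconstruction", 40.0, 30),
]

def feasible(pci: float, rutting_in: float) -> list[Treatment]:
    """Step 2: decision-tree screening from condition data (step 1)."""
    if pci < 40:
        return [t for t in TREATMENTS if t.name == "reconstruction"]
    if rutting_in > 0.5:
        return [t for t in TREATMENTS
                if t.name in ("thin overlay", "reconstruction")]
    return [t for t in TREATMENTS
            if t.name in ("crack seal", "chip seal", "thin overlay")]

def best_by_roi(options: list[Treatment]) -> Treatment:
    """Step 3: rank by expected life gained per dollar spent."""
    return max(options, key=lambda t: t.expected_life_yrs / t.cost_per_sqyd)

options = feasible(pci=62, rutting_in=0.3)
print("feasible:", [t.name for t in options])
print("recommended:", best_by_roi(options).name)
```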