905 results for Probability of fixation
Abstract:
In the present study, 470 children less than 72 months of age and presenting acute diarrhea were examined to identify associated enteropathogenic agents. Viruses were the pathogens most frequently found in stools of infants with diarrhea, including 111 cases of rotavirus (23.6% of the total diarrhea cases) and 30 cases of adenovirus (6.3%). The second group was diarrheogenic Escherichia coli (86 cases, 18.2%), followed by Salmonella sp (44 cases, 9.3%) and Shigella sp (24 cases, 5.1%). Using the PCR technique to differentiate the pathogenic categories of E. coli, it was possible to identify 29 cases (6.1%) of enteropathogenic E. coli (EPEC). Of these, 10 (2.1%) were typical EPEC and 19 (4.0%) atypical EPEC. In addition, there were 26 cases (5.5%) of enteroaggregative E. coli, 21 cases (4.4%) of enterotoxigenic E. coli, 7 cases (1.4%) of enteroinvasive E. coli (EIEC), and 3 cases (0.6%) of enterohemorrhagic E. coli. When comparing the frequencies of diarrheogenic E. coli, EPEC was the only category for which significant differences were found between the diarrhea and control groups. A low frequency of EIEC was found; thus, EIEC cannot be considered a potential etiologic agent of diarrhea. Simultaneous infections with two pathogens were found in 39 diarrhea cases but not in controls, suggesting associations among potential enteropathogens in the etiology of diarrhea. The frequency of association among diarrheogenic E. coli strains was significantly higher than the probability of their random association, suggesting the presence of facilitating factor(s).
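As an illustration of the final point, the following is a minimal sketch (not part of the study) of how the expected number of dual infections under independence can be computed from the marginal frequencies quoted above; the observed count of 39 would then be compared against this expectation.

```python
# Illustrative sketch (not from the study): expected co-infections under
# independence, using marginal frequencies reported in the abstract.
n = 470                      # diarrhea cases examined
p_rota = 111 / n             # rotavirus frequency (23.6%)
p_depec = 86 / n             # diarrheogenic E. coli frequency (18.2%)

# If the two pathogens occurred independently, the chance that a given
# child carries both is the product of the marginal probabilities.
p_both = p_rota * p_depec
expected_coinfections = n * p_both
print(f"Expected dual infections under independence: {expected_coinfections:.1f}")

# Comparing the observed count of dual infections against this expectation
# (e.g. with a binomial or chi-square test) is one way to ask whether
# pathogens associate more often than chance alone would predict.
```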
Abstract:
Several companies are trying to improve their operational efficiency by implementing an enterprise resource planning (ERP) system that makes it possible to control the resources of the company in real time. However, the success of the implementation project is not a foregone conclusion; a significant proportion of these projects end in failure, one way or another. Therefore, it is important to investigate ERP system implementation more closely in order to increase understanding of the factors influencing ERP system success and to improve the probability of a successful ERP implementation project. Consequently, this study was initiated because a manufacturing case company wanted to review the success of their ERP implementation project. To be exact, the case company hoped to gain both information about the success of the project and insight for future implementation improvement. This study investigated ERP success specifically by examining factors that influence ERP key-user satisfaction. User satisfaction is one of the most commonly applied indicators of information system success. The research data was mainly collected by conducting theme interviews. The subjects of the interviews were six key-users of the newly implemented ERP system. The interviewees were closely involved in the implementation project. Furthermore, they act as representative users who utilize the new system in everyday business processes. The collected data was analyzed by thematizing. Both data collection and analysis were guided by a theoretical frame of reference. This frame was based on previous research on the subject. The results of the study aligned with the theoretical framework to a large extent. The four principal factors influencing key-user satisfaction were change management, contractor service, the key-user’s system knowledge and the characteristics of the ERP product itself. One of the most significant contributions of the research is that it confirmed the existence of a connection between change management and ERP key-user satisfaction. Furthermore, it discovered two new sub-factors influencing contractor-service-related key-user satisfaction. In addition, the research findings indicated that in order to improve the current level of key-user satisfaction, the case company should pay special attention to system functionality improvement and to enhancing the key-users’ knowledge. During similar implementation projects in the future, it would be important to assure the success of change management and contractor service related processes.
Abstract:
The aim of this study was to analyze the association of different clinical contributors of hypoxic-ischemic encephalopathy with NOS3 gene polymorphisms. A total of 110 children with hypoxic-ischemic encephalopathy and 128 control children were selected for this study. The association of gender, gestational age, birth weight, Apgar score, cranial ultrasonography, and magnetic resonance imaging findings with genotypic data of six haplotype-tagging single nucleotide polymorphisms and the most commonly investigated rs1800779 and rs2070744 polymorphisms was analyzed. The TGT haplotype of the rs1800783, rs1800779, and rs2070744 polymorphisms was associated with hypoxic-ischemic encephalopathy. Children with the TGT haplotype were infants below 32 weeks of gestation and they had the most severe brain damage. An increased incidence of the TT genotype of the NOS3 rs1808593 SNP was found in the group of hypoxic-ischemic encephalopathy patients with medium and severe brain damage. The probability of brain damage was twice as high in children with the TT genotype as in children with the TG genotype of the same polymorphism. Furthermore, the T allele of the same polymorphism was twice as frequent in children with lower Apgar scores. This study strongly suggests associations of NOS3 gene polymorphisms with the intensity of brain damage and the severity of the clinical picture in affected children.
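A brief illustrative sketch of how a "twice as high" figure of this kind is typically derived from a genotype-by-outcome table; the counts below are invented for illustration and are not the study's data.

```python
# Hypothetical 2x2 table (counts invented for illustration only) comparing
# brain damage by rs1808593 genotype, to show how a risk ratio like the
# "twice as high" figure in the abstract is typically derived.
tt_damage, tt_no_damage = 30, 20     # TT genotype
tg_damage, tg_no_damage = 25, 58     # TG genotype

risk_tt = tt_damage / (tt_damage + tt_no_damage)
risk_tg = tg_damage / (tg_damage + tg_no_damage)
risk_ratio = risk_tt / risk_tg

odds_ratio = (tt_damage * tg_no_damage) / (tt_no_damage * tg_damage)
print(f"Risk ratio: {risk_ratio:.2f}")
print(f"Odds ratio: {odds_ratio:.2f}")
```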
Abstract:
Our goal is to gain a better understanding of the different kinds of dependencies behind the high-level capability areas. The models are suitable for investigating present-state capabilities or future developments of capabilities in the context of technology forecasting. Three levels are necessary for a model describing the effects of technologies on military capabilities. These levels are capability areas, systems and technologies. The contribution of this paper is to present one possible model for interdependencies between technologies. Modelling interdependencies between technologies is the last building block in constructing a quantitative model for technological forecasting that includes the necessary levels of abstraction. This study supplements our previous research, and as a result we present a model for the whole process of capability modelling. As in our earlier studies, capability is defined as the probability of a successful task or operation or of the proper functioning of a system. In order to obtain numerical data to demonstrate our model, we administered a questionnaire to a group of defence technology researchers in which interdependencies between seven representative technologies were surveyed. Because of the small number of questionnaire participants and the general uncertainties involved in subjective evaluations, only rough conclusions can be drawn from the numerical results.
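The abstract does not spell out the model's arithmetic, so the following is only a guessed, minimal sketch under an independence assumption: capability taken as the probability that every required system functions.

```python
# Minimal illustrative sketch (the actual model in the paper is not
# specified here): capability as the probability that every required
# system functions, with independent systems assumed.
system_success = {"sensors": 0.90, "communications": 0.85, "platform": 0.95}

capability = 1.0
for p in system_success.values():
    capability *= p            # independence assumption, purely illustrative

print(f"Capability (probability of a successful operation): {capability:.3f}")
```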
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods’ applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods, which are reliable and based on scientific proof, in terms of the following criteria?
- Accuracy, i.e. the probability of detecting deception successfully
- Ease of Use, i.e. the ease of applying the method correctly
- Time Required to apply the method reliably
- No Need for Special Equipment
- Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof? What kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria (illustrated in the sketch following this abstract). In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are concentrated on a scenario where roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
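The Analytic Hierarchy Process (AHP) weighting step mentioned above can be illustrated with a short sketch; the criteria match those listed in the abstract, but the pairwise judgments are hypothetical, not the ones elicited in the study.

```python
import numpy as np

# Illustrative AHP weighting sketch (judgments are hypothetical). Rows and
# columns: Accuracy, Ease of Use, Time Required, No Special Equipment,
# Unobtrusiveness.
A = np.array([
    [1,   3,   5,   7,   5],
    [1/3, 1,   3,   5,   3],
    [1/5, 1/3, 1,   3,   1],
    [1/7, 1/5, 1/3, 1,   1/3],
    [1/5, 1/3, 1,   3,   1],
], dtype=float)

# The principal eigenvector of the pairwise comparison matrix gives the weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (Saaty): CI = (lambda_max - n) / (n - 1); RI for n = 5 is 1.12.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 1.12
print("Criteria weights:", np.round(w, 3))
print(f"Consistency ratio: {cr:.3f} (values below 0.10 are usually acceptable)")
```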
Abstract:
Surface size analyses of Twenty and Sixteen Mile Creeks, the Grand and Genesee Rivers and Cazenovia Creek show three distinct types of bed-surface sediment: 1) a "continuous" armor coat with a mean size of -6.5 phi and coarser, 2) a "discontinuous" armor coat with a mean size of approximately -6.0 phi and 3) a bed with no armor coat with a mean surface size of -5.0 phi and finer. The continuous armor coat completely covers and protects the subsurface from the flow. The discontinuous armor coat is composed of intermittently spaced surface clasts, which provide the subsurface with only limited protection from the flow. The bed with no armor coat allows complete exposure of the subsurface to the flow. The subsurface beneath the continuous armor coats of Twenty and Sixteen Mile Creeks is possibly modified by a "vertical winnowing" process when the armor coat is penetrated. This process results in a well-developed, inversely graded sediment sequence. Vertical winnowing is reduced beneath the discontinuous armor coats of the Grand and Genesee Rivers. The reduction of vertical winnowing results in more poorly developed inverse grading than that found in Twenty and Sixteen Mile Creeks. The streambed of Cazenovia Creek is normally not armored, resulting in a homogeneous subsurface which shows no modification by vertical winnowing. This streambed forms during waning or moderate flows, suggesting it does not represent the maximum competence of the stream. Each population of grains in the subsurface layers of Twenty and Sixteen Mile Creeks has been modified by vertical winnowing and does not represent a mode of transport. Each population in the subsurface layers beneath a discontinuous armor coat may partially reflect a transport mode. These layers are still inversely graded, suggesting that each population is affected to some degree by vertical winnowing. The populations for sediment beneath a surface which is not armored are probably indicative of transport modes because such sediment has not been modified by vertical winnowing. Bed photographs taken in each of the five streams before and after the 1982-83 snow-melt show that the probability of movement for the surface clasts is a function of grain size. The greatest probability of clast movement and the greatest scour depth in this study were recorded on Cazenovia Creek in areas where no armor coat is present. The scour depth in the armored beds of Twenty and Sixteen Mile Creeks is related to the probability of movement for a given mean surface size.
Abstract:
It is well documented that the majority of tuberculosis (TB) cases diagnosed in Canada are related to foreign-born persons from TB high-burden countries. The Canadian Seasonal Agricultural Workers Program (SAWP) operating with Mexico allows migrant workers to enter the country with a temporary work permit for up to 8 months. Pre-immigration screening of these workers by both clinical examination and chest X-ray (CXR) reduces the risk of introducing cases of active pulmonary TB to Canada, but screening for latent TB infection (LTBI) is not routinely done. Studies carried out in industrialized nations with high immigration from TB-endemic countries report lifetime LTBI reactivation rates of around 10%, but little is known about reactivation rates within TB-endemic countries, where new infections (or reinfections) may be impossible to distinguish from reactivation. Migrant populations like the SAWP workers, who spend considerable amounts of time in both Canada and TB-endemic rural areas in Mexico, are a unique population in terms of TB epidemiology. However, to our knowledge no studies have been undertaken to explore the existence of LTBI among Mexican workers, the probability of reactivation, or the workers' exposure to TB cases while back in their communities before returning the following season. Being aware of their LTBI status may help workers to exercise healthy behaviours to avoid TB reactivation and therefore continue to access the SAWP. In order to assess the prevalence of LTBI and associated risk factors among Mexican migrant workers, a preliminary cross-sectional study was designed to involve a convenience sample of the Niagara Region's Mexican workers in 2007. Research ethics clearance was granted by Brock University. Individual questionnaires were administered to collect socio-demographic and TB-related epidemiological data as well as TB knowledge and awareness levels. Cellular immunity to M. tuberculosis was assessed by both an interferon-γ release assay (IGRA), QuantiFERON-TB Gold In-Tube (QFT™), and the tuberculin skin test (TST) using the Mantoux method. A total of 82 Mexican workers (out of 125 invited) completed the study. Most participants were male (80%) and their age ranged from 22 to 65 years (mean 38.5). The prevalence of LTBI was 34% using TST and 18% using QFT™. As previously reported, TST (using a ~10 mm cut-off) showed a sensitivity of 93.3% and a specificity of 79.1%. These findings cannot at present predict the probability of progression to active TB; only longitudinal cohort studies of this population can ascertain this outcome. However, based on recent publications, IGRA-positive individuals may have up to a 14% probability of reactivation within the next two years. Although, according to the SAWP guidelines, all workers undergo TB screening before entering or re-entering Canada, CXR examination requirements proved to be inconsistent for this population: whereas 100% of the workers coming to Canada for the first time reported having the procedure done, only 31% of returning participants reported having had a CXR in the past year. None of the participants reported ever having a CXR compatible with TB, which was consistent with the fact that none had ever been diagnosed with active pulmonary TB and with only 3.6% reporting close contact with a person with active TB in their lifetime. Although Mexico reports that 99% of the population is fully immunized against TB within the first year of age, only 85.3% of participants reported receiving BCG vaccine in childhood.
Conversely, even though TST is not part of routine TB screening in endemic countries, a surprisingly high 25.6% reported having received a TST in the past. With regard to TB knowledge and awareness, 74% of the studied population had previous knowledge about (active) TB, 42% correctly identified active TB symptomatology, 4.8% identified the correct route of transmission, 4.8% knew about the existence of LTBI, 3.6% knew that latent TB could reactivate, and 48% recognized TB as treatable and curable. Of all variables explored as potential risk factors for LTBI, age was the only one which showed statistical significance. Significant associations could not be proven for other known variables (such as sex, TB contact, and history of TB), probably because of the small sample size and the homogeneity of the sample. Screening for LTBI by TST (high sensitivity) followed by confirmation with QFT™ (high specificity) appears to be a good strategy, especially for immigrants from TB high-burden countries. After educational sessions, workers positive for LTBI gained greater knowledge about the signs and symptoms of TB reactivation as well as the risk factors commonly associated with reactivation. Additionally, they were more likely to attend their annual health check-up and request a CXR exam to monitor for TB reactivation.
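As a worked illustration of why a two-step TST-then-QFT strategy is attractive, the following applies Bayes' rule to the test characteristics quoted in the abstract, with the QFT-based 18% prevalence assumed; this is not an analysis performed in the study itself.

```python
# Illustrative calculation of predictive values from the test characteristics
# quoted in the abstract (TST sensitivity 93.3%, specificity 79.1%) and an
# assumed LTBI prevalence of 18% (the QFT-based estimate). Bayes' rule only.
sens, spec, prev = 0.933, 0.791, 0.18

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"Positive predictive value of TST alone: {ppv:.2f}")
print(f"Negative predictive value of TST alone: {npv:.2f}")
# A modest PPV is the usual argument for confirming positive TSTs with a more
# specific test such as QFT before acting on the result.
```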
Abstract:
Affiliation: Département de Biochimie, Université de Montréal
Abstract:
My aim in the present paper is to develop a new kind of argument in support of the ideal of liberal neutrality. This argument combines some basic moral principles with a thesis about the relationship between the correct standards of justification for a belief/action and certain contextual factors. The idea is that the level of importance of what is at stake in a specific context of action determines how demanding the correct standards to justify an action based on a specific set of beliefs ought to be. In certain exceptional contexts, where the seriousness of harm in case of mistake and the level of an agent's responsibility for the outcome of his action are especially high, a very small probability of making a mistake should be recognized as a good reason to avoid acting on beliefs that we nonetheless affirm with a high degree of confidence and that actually justify our action in other contexts. The further steps of the argument consist in proving 1) that the state's fundamental policies are such a case of exceptional context, 2) that perfectionist policies are the type of actions we should avoid, and 3) that policies that satisfy neutral standards of justification are not affected by the reasons that lead to the rejection of perfectionist policies.
Abstract:
Aminoglycoside antibiotics are highly valuable bactericidal agents with broad-spectrum efficacy against Gram-positive and Gram-negative pathogens, and several natural and semisynthetic members of the family have been clinically important since 1950. Nobel Prize-winning crystallographic work on the ribosome has shown how their diverse polyaminated structures are adapted to target an RNA helix in the decoding center of the 30S subunit of the bacterial ribosome. Their interference with the affinity and kinetics of the tRNA selection and proofreading steps induces low-fidelity protein synthesis and inhibition of translocation, establishing a vicious circle of antibiotic accumulation and membrane stress. In response to these pressures, bacterial pathogens have evolved and disseminated an array of enzymatic and efflux resistance mechanisms: N-acetyltransferases, O-phosphotransferases and O-nucleotidyltransferases that target the hydroxyl and amino groups on the aminoglycoside core; methyltransferases that target the ribosomal binding site; and active efflux pumps for the selective elimination of aminoglycosides, which are used by Gram-negative strains. The most problematic pathogens, which today show strong resilience against most classes of antibiotics and are on the verge of pan-resistance, have been named the ESKAPE bacteria, a mnemonic for Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacteriaceae. The global distribution of strains with resistance mechanisms against the clinical standard aminoglycosides, such as tobramycin, amikacin and gentamicin, ranges from 20 to 60% of clinical isolates. Thus, aminoglycosides of the 4,6-disubstituted 2-deoxystreptamine type are inadequate as broad-spectrum anti-infective therapies. However, the family of 4,5-disubstituted aminoglycosides, including butirosin, neomycin and paromomycin, whose structures are more complex, could offer an alternative. Colleagues in the Hanessian group and collaborators at Achaogen Inc. have shown that certain analogues of paromomycin and neomycin, modified by deoxygenation at the 3' and 4' positions and by substitution with the N1-α-hydroxy-γ-aminobutyramide (HABA) side chain from butirosin, could yield very promising antibiotics. Chapter 4 of this dissertation presents the design and development of a semisynthetic strategy to produce new, improved 4,5-disubstituted aminoglycosides, inspired by biosynthetic modifications of sisomicin, that frustrate globally distributed bacterial resistance mechanisms. This synthetic route relies on a palladium-catalyzed Tsuji-type hydrogenolysis reaction, first developed on monosaccharide models and subsequently applied to generate a set of hybrid aminoglycosides between neomycin and sisomicin. Structure-activity studies of the various analogues of this new class were evaluated against a panel of 26 bacterial strains expressing enzymatic and efflux resistance mechanisms spanning the full set of ESKAPE pathogens.
Two of the hybrid antibiotics showed excellent antibacterial coverage, and this study identified promising candidates for preclinical development. Aminoglycoside antibiotic therapy is always associated with a probability of nephrotoxic complications. The toxicity potential of each aminoglycoside can be broadly correlated with its number of amino groups and deoxygenations. A long-standing hypothesis in the field holds that the principal interactions are made through ammonium salt groups, so that adjusting pKa parameters could produce faster dissociation from their targets, more efficient clearance and, overall, less nephrotoxic analogues. Chapter 5 of this dissertation presents the design and asymmetric synthesis of N1-HABA side chains β-substituted by mono- and bis-fluorination. Chains with γ-N pKa values in the range of 10 to 7.5 were installed on a tetradeoxygenated neomycin to produce advanced antibiotics. Despite the considerable reduction in γ-N pKa, the broad bactericidal spectrum was not significantly affected for the isosteric fluorinated analogues. Moreover, structure-toxicity studies assessed with Achaogen's proprietary apoptosis assay showed that the new β,β-difluoro-N1-HABA chain is less harmful in a human kidney HK2 cell model and is promising for the development of neomycin-type antibiotics with improved therapeutic properties. The final chapter of this dissertation presents the proposal and validation of a biomimetic synthesis by spontaneous assembly of aminoglycoside 66-40C, a C2-symmetric, 16-membered macrocyclic bis-imine dimer. The proposed structure of the macrocycle was refined by NMR spectroscopy to an anti-parallel trans,trans-bis-azadiene system. Calculations indicate that the anomeric effect of the α-glycosidic bond between rings A and B provides the pre-organization for the 6'-aldehydo sisomicin monomer and favors the observed macrocyclic product. Spontaneous assembly in water was studied through the dimerization of three different analogues and through crossover experiments, which demonstrated the generality and stability of the aminoglycoside 66-40C macrocyclic motif.
Abstract:
The goal of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and thus to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more precise inference than classical asymptotic theory, or inference that would otherwise be impossible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyze its validity and its application. The first chapter posits a simple model with a single parameter and addresses the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which takes into account both the individual dimension and the time dimension, is valid for these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity. Resampling in the time dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model. Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary across both time and individuals. Using a two-way error components model, the ordinary least squares estimator and the residual bootstrap method, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the time dimension is valid only for the sub-vector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the entire parameter vector. The third chapter re-examines the difference-in-differences estimator exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of certain public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States of America from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence causes serious test size distortions when evaluating the impact of public policies using panel data.
One of the recommended solutions is to use the bootstrap method. The double resampling method developed in this thesis corrects the test size problem and thus allows the impact of public policies to be evaluated correctly.
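A minimal sketch of the double-resampling idea, resampling independently in the individual and time dimensions for the mean of a toy panel; this is an assumed simplification, not the thesis's exact procedure.

```python
import numpy as np

def double_resample_mean(y, n_boot=999, seed=0):
    """Bootstrap the overall mean of panel data y (individuals x periods)
    by resampling individuals and time periods independently -- a minimal
    sketch of the double-resampling idea described in the abstract."""
    rng = np.random.default_rng(seed)
    n, t = y.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        rows = rng.integers(0, n, size=n)     # resample in the individual dimension
        cols = rng.integers(0, t, size=t)     # resample in the time dimension
        stats[b] = y[np.ix_(rows, cols)].mean()
    return stats

# Toy panel with both individual and time heterogeneity.
rng = np.random.default_rng(1)
y = rng.normal(size=(50, 20)) + rng.normal(size=(50, 1)) + rng.normal(size=(1, 20))
draws = double_resample_mean(y)
print("95% bootstrap interval for the mean:",
      np.percentile(draws, [2.5, 97.5]).round(3))
```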
Abstract:
Insomnia, common in the geriatric population, is typically treated with benzodiazepines, which can increase the risk of falls. Cognitive-behavioural therapy (CBT) is a non-pharmacological intervention with equivalent efficacy and no side effects. In this thesis, the cost of benzodiazepines (BZD) is compared with that of CBT for the treatment of insomnia in an elderly population, with and without consideration of the additional cost generated by BZD-related falls. A decision tree model was designed and applied from the health care system perspective over a one-year period. The probabilities of falls, emergency department visits, and hospitalization with and without hip fracture, as well as cost and utility data, were collected from a literature review. Cost-consequence, cost-utility and potential-savings analyses were performed. Probabilistic and deterministic sensitivity analyses accounted for uncertainty in the data estimates. BZD treatment costs 30% less than CBT if fall-related costs are not considered (CAN$231 vs CAN$335 per person per year). When fall-related costs are taken into account, CBT turns out to be the cheaper option (an absolute saving of CAN$177 per person per year; CAN$1,357 with BZD vs CAN$1,180 with CBT). CBT dominated BZD use, with an average saving of CAN$25,743 per QALY because of the smaller number of falls observed with CBT. The results of the savings analyses suggest that if CBT replaced BZD treatment, the direct annual saving for the treatment of insomnia would be CAN$441 million, with a cumulative saving of CAN$112 billion over a five-year period. According to the sensitivity analysis, BZD treatment costs on average CAN$1,305, standard deviation $598 (range: 245-2,625) per person per year, whereas CBT costs on average CAN$1,129, standard deviation $514 (range: 342-2,526) per person per year. Current reimbursement policies that cover pharmacological rather than non-pharmacological treatments for insomnia in the elderly do not produce cost savings and are not ethically recommendable from a health care system perspective.
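A minimal expected-cost sketch of the decision-tree comparison described above; the treatment costs are the figures quoted in the abstract, while the fall probabilities and fall-related cost are invented placeholders, since the abstract does not report them.

```python
# Minimal expected-cost sketch of the decision-tree comparison in the abstract.
# Treatment costs are the figures quoted (CAN$ per person per year); the fall
# probabilities and fall-related cost are invented placeholders.
treatment_cost = {"BZD": 231.0, "CBT": 335.0}
p_fall = {"BZD": 0.25, "CBT": 0.19}          # hypothetical annual fall probabilities
cost_per_fall = 4500.0                       # hypothetical downstream cost of a fall

for arm in ("BZD", "CBT"):
    expected = treatment_cost[arm] + p_fall[arm] * cost_per_fall
    print(f"{arm}: expected annual cost per person = CAN${expected:,.0f}")
# With fall-related costs folded in, the cheaper drug can become the more
# expensive strategy overall, which is the pattern the abstract reports.
```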
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components due to insufficient data or the vague characteristics of the basic event. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values of the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
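A small illustrative example of the fault tree arithmetic involved; the event names and probabilities below are hypothetical and do not come from the chlorine-release tree developed in the thesis.

```python
# Illustrative fault-tree arithmetic (hypothetical events and probabilities):
# OR gates combine as 1 - prod(1 - p), AND gates as prod(p), assuming
# independent basic events.
from math import prod

def or_gate(*p):  return 1 - prod(1 - x for x in p)
def and_gate(*p): return prod(p)

valve_leak      = 1e-3
gasket_failure  = 5e-4
operator_error  = 2e-3
alarm_failure   = 1e-2

release_at_source = or_gate(valve_leak, gasket_failure, operator_error)
unmitigated_release = and_gate(release_at_source, alarm_failure)
print(f"P(release at source)   = {release_at_source:.2e}")
print(f"P(unmitigated release) = {unmitigated_release:.2e}")
```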
Abstract:
Warships are generally sleek and slender, with V-shaped sections and a block coefficient below 0.5, compared to the fuller forms and higher values of commercial ships. They normally operate in the higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with the minimum power. Therefore the structural design and analysis methods are different from those for commercial ships. Certain design guidelines have been given in documents such as the Naval Engineering Standards, and one of the new developments in this regard is the introduction of classification society rules for the design of warships. The marine environment imposes subjective and objective uncertainties on ship structures. The uncertainties in loads, material properties, etc. make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required. Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analysis of the three models has been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values. The structure has been found adequate in all the cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such comparison has been realized for the inter-stiffener plating model with the other two models. Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES. Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.
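A compact sketch of the First Order Second Moment calculation mentioned above, using a normal resistance-load margin; the numbers are illustrative, not the hull-module values from the thesis.

```python
import math

# First Order Second Moment (FOSM) sketch for a resistance-minus-load margin
# M = R - S with independent normal R and S; the values are illustrative.
mu_R, sigma_R = 320.0, 32.0     # e.g. ultimate strength (MPa)
mu_S, sigma_S = 210.0, 42.0     # e.g. applied stress (MPa)

beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)

# Standard normal CDF via the complementary error function; P_f = Phi(-beta).
p_failure = 0.5 * math.erfc(beta / math.sqrt(2))
print(f"Reliability index beta = {beta:.2f}")
print(f"Probability of failure = {p_failure:.2e}")
```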
Abstract:
Usually, under rainfed conditions the growing period falls within the humid months. Hence, for agricultural planning, knowledge about the variability of the duration of the humid seasons is very much needed. The crucial problem affecting agriculture is the persistence in receiving a specific amount of rainfall during a short period. Agricultural operations and decision making are highly dependent on the probability of receiving given amounts of rainfall; such periods should match the water requirements of the different phenological phases of the crops. While prolonged dry periods during sensitive phases are detrimental to crop growth and lower the yields, excess rainfall causes soil erosion and loss of soil nutrients. These factors point to the importance of evaluating wet and dry spells. In this study, weekly rainfall data have been analysed to estimate the probabilities of wet and dry periods at all selected stations of each agroclimatic zone, and the crop growth potential of the growing seasons has been analysed. The thesis consists of six chapters.
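One common way to estimate wet- and dry-spell probabilities from weekly rainfall is a first-order Markov chain; the sketch below uses made-up data and a hypothetical wet-week threshold, since the abstract does not give the thesis's exact criteria.

```python
import numpy as np

# Illustrative first-order Markov-chain treatment of weekly rainfall
# (threshold and data are made up for demonstration).
weekly_rain_mm = np.array([2, 0, 35, 60, 48, 5, 0, 0, 22, 70, 55, 12, 0, 3, 40])
wet = weekly_rain_mm >= 20          # hypothetical wet-week threshold (mm)

p_wet = wet.mean()                                  # unconditional probability of a wet week
p_wet_given_wet = wet[1:][wet[:-1]].mean()          # P(wet | previous week wet)
p_wet_given_dry = wet[1:][~wet[:-1]].mean()         # P(wet | previous week dry)

print(f"P(wet week)           = {p_wet:.2f}")
print(f"P(wet | previous wet) = {p_wet_given_wet:.2f}")
print(f"P(wet | previous dry) = {p_wet_given_dry:.2f}")
```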