126 results for Routh-Hurwitz criterion
at Université de Lausanne, Switzerland
Abstract:
This study presents a classification criterion for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether a plantation is legal. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed, and classification is performed on the basis of the decision maker's specifications (i.e., prior probability distributions on cannabis type and consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criterion.
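A minimal sketch of the decision step described above, assuming the Bayes factor has already been computed from the GC-MS model; the prior probability, the two losses, and the function name are illustrative placeholders, not the study's actual specifications.

```python
def classify_chemotype(bayes_factor, prior_drug=0.5, loss_miss=10.0, loss_false_alarm=1.0):
    """Classify a seedling as drug type or fiber type by minimizing expected loss.

    bayes_factor     : P(data | drug type) / P(data | fiber type) from the GC-MS model
    prior_drug       : prior probability of drug type (placeholder value)
    loss_miss        : loss for calling a drug-type plant legal (placeholder)
    loss_false_alarm : loss for calling a fiber-type plant illegal (placeholder)
    """
    posterior_odds = bayes_factor * prior_drug / (1.0 - prior_drug)
    # Declare 'drug type' when the expected loss of declaring 'fiber type'
    # (posterior_odds * loss_miss) exceeds that of declaring 'drug type'.
    return "drug type" if posterior_odds * loss_miss > loss_false_alarm else "fiber type"

print(classify_chemotype(bayes_factor=0.3))  # 'drug type' under these asymmetric losses
```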
Abstract:
BACKGROUND: A primary goal of clinical pharmacology is to understand the factors that determine the dose-effect relationship and to use this knowledge to individualize drug dose. METHODS: A principle-based criterion is proposed for deciding among alternative individualization methods. RESULTS: Safe and effective variability defines the maximum acceptable population variability in drug concentration around the population average. CONCLUSIONS: A decision on whether patient covariates alone are sufficient, or whether therapeutic drug monitoring in combination with target concentration intervention is needed, can be made by comparing the remaining population variability after a particular dosing method with the safe and effective variability.
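A hedged sketch of the comparison stated in the conclusions, formulated here (as an assumption) in terms of coefficients of variation of concentration; the variable names are illustrative.

```python
def dosing_method_is_sufficient(residual_cv, safe_effective_cv):
    """True when a dosing method leaves population variability in concentration
    within the safe and effective variability (SEV) bound.

    residual_cv       : remaining population CV of concentration after the method
    safe_effective_cv : maximum acceptable population CV around the average (SEV)
    """
    return residual_cv <= safe_effective_cv

# Example: covariate-based dosing leaves a 45% CV while the SEV is 30%, so
# therapeutic drug monitoring with target concentration intervention is needed.
print(dosing_method_is_sufficient(residual_cv=0.45, safe_effective_cv=0.30))  # False
```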
Abstract:
The authors evaluated ten years of surgical intensive care at the Lausanne University Hospital (CHUV). Irreversible coagulopathy (IC) is the predominant cause of death for the polytraumatized patient. Acidosis, hypothermia, and coagulation disorders are crucial elements of this coagulopathy. The authors looked for a criterion to identify patients dying of IC. In a retrospective study, laboratory results for pH, TP, PTT, and thrombocyte count, together with the need for blood transfusion units, were checked at each major step of the primary evaluation and treatment of the polytraumatized patients. These results were considered critical according to criteria from the literature (30). The authors conclude that the appearance of a third critical value may be useful for identifying the polytraumatized patient at risk of dying of IC. This criterion may also guide the trauma team in selecting a damage control surgery (DCS) approach. The criterion was then introduced into an algorithm involving the Emergency Department, the operating room, and the Intensive Care Unit. It is a new tool for directing the patient, at the crucial moment, to the appropriate hospital structure.
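The "third critical value" rule lends itself to a simple tally, sketched below; the laboratory thresholds are loudly hypothetical placeholders, since the study's actual cutoffs come from the cited literature and are not reproduced here.

```python
# Hypothetical thresholds for flagging a value as critical (placeholders only).
CRITICAL = {
    "ph":        lambda v: v < 7.2,    # acidosis (placeholder cutoff)
    "tp_pct":    lambda v: v < 50,     # prothrombin (TP), % (placeholder)
    "ptt_s":     lambda v: v > 60,     # partial thromboplastin time, s (placeholder)
    "platelets": lambda v: v < 90e3,   # thrombocyte count per microliter (placeholder)
    "rbc_units": lambda v: v >= 10,    # transfused blood units (placeholder)
}

def at_risk_of_ic(labs):
    """Flag a polytraumatized patient once a third laboratory value turns critical."""
    n_critical = sum(test(labs[name]) for name, test in CRITICAL.items() if name in labs)
    return n_critical >= 3

print(at_risk_of_ic({"ph": 7.1, "tp_pct": 40, "ptt_s": 75}))  # True
```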
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computation assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the software varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is attracting growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
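A minimal sketch of the weighted scoring grid described above; the criterion names follow the abstract, but the weights and the per-program scores are illustrative assumptions, not the survey's actual grid.

```python
# Illustrative weights per criterion (assumed, not the survey's actual values).
WEIGHTS = {"pharmacokinetic relevance": 0.35, "user friendliness": 0.25,
           "computing aspects": 0.15, "interfacing": 0.15, "storage": 0.10}

def weighted_score(scores, weights=WEIGHTS):
    """Aggregate per-criterion scores (0-10) into one weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical scores for two of the surveyed tools (placeholder numbers).
ratings = {"MwPharm":  {"pharmacokinetic relevance": 9, "user friendliness": 8,
                        "computing aspects": 8, "interfacing": 6, "storage": 7},
           "TCIWorks": {"pharmacokinetic relevance": 8, "user friendliness": 9,
                        "computing aspects": 7, "interfacing": 7, "storage": 6}}

ranking = sorted(ratings, key=lambda s: weighted_score(ratings[s]), reverse=True)
print(ranking)  # the order depends entirely on the assumed scores above
```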
Abstract:
OBJECTIVE: To test the accuracy of a new pulse oximeter sensor based on transmittance and reflectance. This sensor makes transillumination of tissue unnecessary and allows measurements on the hand, forearm, foot, and lower limb. DESIGN: Prospective, open, nonrandomized criterion standard study. SETTING: Neonatal intensive care unit, tertiary care center. PATIENTS: Sequential sample of 54 critically ill neonates (gestational age 27 to 42 wks; postnatal age 1 to 28 days) with arterial catheters in place. MEASUREMENTS AND MAIN RESULTS: A total of 99 comparisons between pulse oximetry and arterial saturation were obtained. Comparison of femoral or umbilical arterial blood with transcutaneous measurements on the lower limb (n = 66) demonstrated an excellent correlation (r² = 0.96). The mean difference was +1.44 ± 3.51% (SD) (range −11% to +8%). Comparison of the transcutaneous values with the radial artery saturation from the corresponding upper limb (n = 33) revealed a correlation coefficient of 0.94 with a mean error of +0.66 ± 3.34% (range −6% to +7%). The mean difference between noninvasive and invasive measurements was least with the test sensor on the hand, intermediate on the calf and arm, and greatest on the foot. The mean error and its standard deviation were slightly larger for arterial saturation values < 90% than for values ≥ 90%. CONCLUSION: Accurate pulse oximetry saturation can be acquired from the hand, forearm, foot, and calf of critically ill newborns using this new sensor.
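A small sketch of the agreement statistics reported above (mean difference between pulse-oximetry and arterial saturation, its SD, and its range); the paired readings below are illustrative values, not study data.

```python
import numpy as np

def agreement_stats(spo2, sao2):
    """Bias (mean difference), SD, and range between SpO2 and arterial SaO2, in %."""
    diff = np.asarray(spo2, float) - np.asarray(sao2, float)
    return diff.mean(), diff.std(ddof=1), diff.min(), diff.max()

# Illustrative paired readings (placeholders, not the study's measurements).
spo2 = [96, 91, 88, 99, 94]
sao2 = [95, 92, 86, 98, 92]
bias, sd, lo, hi = agreement_stats(spo2, sao2)
print(f"bias {bias:+.2f}%, SD {sd:.2f}%, range {lo:+.0f}% to {hi:+.0f}%")
```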
Abstract:
The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic, elastic-perfectly-plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model, based on a quadratic elastic potential and on Hill's quadratic yield criterion, is then developed and implemented in a computer program with bone mechanics applications in view. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
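A hedged sketch of an implicit projection (return mapping) of this general kind, simplified to linear elasticity, perfect plasticity, and a quadratic Hill-type yield function in Voigt notation; the matrices and yield stress are illustrative placeholders, and the paper's actual formulation also covers nonlinear elastic laws.

```python
import numpy as np
from scipy.optimize import brentq

def return_map(sig_tr, C, P, sigma_y):
    """Backward-Euler projection of a trial stress onto the quadratic yield surface
    f(sigma) = sigma' P sigma - sigma_y**2 = 0 (perfect plasticity, linear elasticity).

    With associative flow (the factor 2 in the gradient is absorbed into dlam),
    sigma = sig_tr - dlam * C @ P @ sigma, giving the closed form
    sigma(dlam) = inv(I + dlam * C @ P) @ sig_tr; the plastic multiplier dlam
    is the root of the scalar consistency condition g(dlam) = 0.
    """
    I = np.eye(len(sig_tr))
    f = lambda s: s @ P @ s - sigma_y**2
    if f(sig_tr) <= 0.0:
        return sig_tr                                  # elastic step, no projection
    sig = lambda dlam: np.linalg.solve(I + dlam * C @ P, sig_tr)
    g = lambda dlam: f(sig(dlam))
    hi = 1.0
    while g(hi) > 0.0:                                 # bracket the root
        hi *= 2.0
    return sig(brentq(g, 0.0, hi))

# Tiny illustrative check in a 2-component stress space (placeholder moduli).
C = np.diag([200.0, 80.0])                             # elastic stiffness (assumed)
P = np.diag([1.0, 3.0])                                # Hill-type anisotropy (assumed)
sig = return_map(np.array([1.5, 0.8]), C, P, sigma_y=1.0)
print(sig, sig @ P @ sig)                              # second value ~ sigma_y**2
```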
Abstract:
BACKGROUND: According to recent guidelines, patients with coronary artery disease (CAD) should undergo revascularization if significant myocardial ischemia is present. Both cardiovascular magnetic resonance (CMR) and fractional flow reserve (FFR) allow reliable ischemia assessment, and in combination with the anatomical information provided by invasive coronary angiography (CXA), such a work-up sets the basis for the decision to revascularize or not. The cost-effectiveness ratios of these two strategies are compared. METHODS: Strategy 1) CMR to assess ischemia followed by CXA in ischemia-positive patients (CMR + CXA); Strategy 2) CXA followed by FFR in angiographically positive stenoses (CXA + FFR). The costs, evaluated from the third-party payer perspective in Switzerland, Germany, the United Kingdom (UK), and the United States (US), included public prices of the different outpatient procedures and costs induced by procedural complications and by diagnostic errors. The effectiveness criterion was the correct identification of hemodynamically significant coronary lesion(s) (= significant CAD) complemented by full anatomical information. Test performances were derived from the published literature. Cost-effectiveness ratios for both strategies were compared for hypothetical cohorts with different pretest likelihoods of significant CAD. RESULTS: CMR + CXA and CXA + FFR were equally cost-effective at a pretest likelihood of CAD of 62% in Switzerland, 65% in Germany, 83% in the UK, and 82% in the US, with costs of CHF 5'794, € 1'517, £ 2'680, and $ 2'179 per patient correctly diagnosed. Below these thresholds, CMR + CXA showed lower costs per patient correctly diagnosed than CXA + FFR. CONCLUSIONS: The CMR + CXA strategy is more cost-effective than CXA + FFR below a CAD prevalence of 62%, 65%, 83%, and 82% for the Swiss, German, UK, and US health care systems, respectively. These findings may help to optimize resource utilization in the diagnosis of CAD.
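A hedged sketch of the kind of threshold analysis reported above: the expected cost divided by the probability of a correct diagnosis, as a function of pretest likelihood, with the crossover located numerically. All costs and test accuracies below are assumed placeholders, not the study's figures.

```python
import numpy as np

def cost_per_correct_dx(p, cost, sens, spec):
    """Expected cost per correctly diagnosed patient at pretest likelihood p."""
    return cost(p) / (p * sens + (1 - p) * spec)

# Illustrative inputs (assumed): CMR + CXA does CXA only in CMR-positive patients,
# so its cost grows with the probability of a positive CMR; CXA + FFR is flat here.
cmr_cxa = lambda p: 900 + (p * 0.89 + (1 - p) * 0.13) * 1500
cxa_ffr = lambda p: 1500 + 600

p = np.linspace(0.05, 0.95, 19)
c1 = cost_per_correct_dx(p, cmr_cxa, sens=0.89, spec=0.87)
c2 = cost_per_correct_dx(p, cxa_ffr, sens=0.95, spec=0.93)
crossover = p[np.argmin(np.abs(c1 - c2))]
print(f"strategies break even near pretest likelihood {crossover:.2f}")
```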
Abstract:
This thesis examines the use of industrial symbiosis in developing countries and studies its potential to stimulate sustainable regional development in rural areas across Western Africa. In particular, when industrial symbiosis is instituted between a factory and the surrounding population, evaluation tools are required to ensure the project achieves truly sustainable development. Existing tools developed in industrialized countries are not entirely suited to assessing projects in developing countries. Indeed, the implicit hypotheses behind such tools reflect the socioeconomic context in which they were designed. The goal of this thesis is to develop a methodological framework for evaluating the sustainability of industrial symbiosis projects in developing countries. To accomplish this, I followed a case study about the implementation of industrial symbiosis in northern Nigeria, participating as an observer since 2007. AshakaCem, a cement works of the Lafarge group, must confront many issues associated with violence committed by the local rural population. Thus, the company decided to adopt a new approach inspired by the concepts of industrial symbiosis. The project involves replacing up to 10% of the fossil fuel used to heat limestone with biomass produced by local farmers. To avoid jeopardizing the fragile security of regional food supplies, farmers are taught ways to combat erosion and naturally fertilize the soil. They can then use biomass cultivation to improve their subsistence crops. Through this industrial symbiosis, AshakaCem pursues social objectives (to lay the necessary foundations for regional development), but also environmental ones (to reduce its overall CO2 emissions) and economic ones (to reduce its energy costs). The company is firmly rooted in a view of sustainable development that is conditional upon the project's execution. By observing this symbiosis and drawing on existing tools, I note that assessing the sustainability of projects in developing countries requires evaluation criteria that are specific to each project. Indeed, using generic criteria results in an assessment that is too far removed from what is needed and from the local reality.
Thus, by drawing inspiration from such internationally known tools as Life Cycle Analysis and the Global Reporting Initiative, I define a generic methodological framework for the participative establishment of an evaluation methodology specific to each project. The strategy follows six phases that are fulfilled iteratively so as to improve the evaluation methodology and the project itself as it moves forward. During these phases, the social, economic, and environmental needs and objectives of the stakeholders are identified, grouped, ranked, and expressed as criteria for evaluation. Quantitative or qualitative indicators are then defined for each of these criteria. One of the characteristics of this strategy is to define a five-point evaluation scale, the same for each indicator, to reflect a goal that was completely reached (++) or not reached at all (--). Applying the methodological framework to the Nigerian symbiosis yielded four economic criteria, four socioeconomic criteria, and six environmental criteria to assess. A total of 22 indicators were defined to characterize the criteria. Evaluating these indicators showed that the project meets the sustainability goals set for the majority of criteria. Four indicators had a neutral result (0); a fifth showed that one criterion had not been met (--). These results can be explained by the fact that the project is still only in its pilot phase and has therefore not yet reached its optimum size and scope. Follow-up over several years will make it possible to ensure these gaps are filled. The methodological framework presented in this thesis is a participative evaluation tool that can be used in a broader context than developing countries. Its generic nature makes it a very good tool for defining criteria and follow-up indicators for sustainable development.
Abstract:
Sampling issues represent a topic of ongoing interest to the forensic science community, essentially because of their crucial role in laboratory planning and working protocols. For this purpose, the forensic literature has described thorough (Bayesian) probabilistic sampling approaches, which are now widely implemented in practice. They allow one, for instance, to obtain probability statements that parameters of interest (e.g., the proportion of a seizure of items that present particular features, such as an illegal substance) satisfy particular criteria (e.g., a threshold or an otherwise limiting value). Currently, many approaches allow one to derive probability statements relating to a population proportion, but questions of how a forensic decision maker - typically a client of a forensic examination or a scientist acting on behalf of a client - ought actually to decide about a proportion or a sample size have remained largely unexplored to date. The research presented here addresses methodology from decision theory that may help to cope usefully with the wide range of sampling issues typically encountered in forensic science applications. The procedures explored in this paper enable scientists to address a variety of concepts such as the (net) value of sample information, the (expected) value of sample information or the (expected) decision loss. All of these aspects relate directly to questions that are regularly encountered in casework. Besides probability theory and Bayesian inference, the proposed approach requires some additional elements from decision theory that may increase the effort needed for practical implementation. In view of this challenge, the present paper emphasises the merits of graphical modelling concepts, such as decision trees and Bayesian decision networks, which can support forensic scientists in applying the methodology in practice. How this may be achieved is illustrated with several examples. The graphical devices invoked here also serve the purpose of supporting the discussion of the similarities, differences and complementary aspects of existing Bayesian probabilistic sampling criteria and the decision-theoretic approach proposed throughout this paper.
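A hedged sketch of one decision-theoretic quantity mentioned above, the expected value of sample information (EVSI) for deciding whether a seizure proportion exceeds a threshold; the Beta prior, the threshold, and the two losses are illustrative assumptions, not values from the paper.

```python
from scipy.stats import beta, betabinom

def evsi(n, a, b, theta0, loss_over, loss_under):
    """Expected value of sampling n items before deciding whether the population
    proportion theta exceeds theta0, under a Beta(a, b) prior and 0/L losses
    for the two possible wrong decisions."""
    def bayes_risk(a_post, b_post):
        p_below = beta.cdf(theta0, a_post, b_post)
        # expected loss of declaring 'over' vs 'under'; take the better decision
        return min(loss_over * p_below, loss_under * (1 - p_below))
    prior_risk = bayes_risk(a, b)
    # Preposterior analysis: average the posterior Bayes risk over the
    # Beta-Binomial predictive distribution of the count k of positive items.
    post_risk = sum(betabinom.pmf(k, n, a, b) * bayes_risk(a + k, b + n - k)
                    for k in range(n + 1))
    return prior_risk - post_risk

# Example: value of inspecting 10 items for a 50% threshold (assumed inputs).
print(evsi(n=10, a=1, b=1, theta0=0.5, loss_over=1.0, loss_under=1.0))
```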
Abstract:
An ab initio structure prediction approach adapted to the peptide-major histocompatibility complex (MHC) class I system is presented. Based on structure comparisons of a large set of peptide-MHC class I complexes, a molecular dynamics protocol is proposed that uses simulated annealing (SA) cycles to sample the conformational space of the peptide in its fixed MHC environment. A set of 14 peptide-human leukocyte antigen (HLA) A0201 and 27 peptide-non-HLA A0201 complexes for which X-ray structures are available is used to test the accuracy of the prediction method. For each complex, 1000 peptide conformers are obtained from the SA sampling. A graph-theory clustering algorithm based on heavy-atom root-mean-square deviation (RMSD) values is applied to the sampled conformers. The clusters are ranked using cluster size, mean effective or conformational free energies, with solvation free energies computed using the Generalized Born MV 2 (GB-MV2) and Poisson-Boltzmann (PB) continuum models. The final conformation is chosen as the center of the best-ranked cluster. With conformational free energies, the overall prediction success is 83% using a 1.00 Angstrom RMSD criterion to the crystal structure for main-chain atoms, and 76% using a 1.50 Angstrom RMSD criterion for heavy atoms. The prediction success is even higher for the set of 14 peptide-HLA A0201 complexes: 100% of the peptides have main-chain RMSD values ≤ 1.00 Angstrom and 93% of the peptides have heavy-atom RMSD values ≤ 1.50 Angstrom. This structure prediction method can be applied to complexes of natural or modified antigenic peptides in their MHC environment with the aim of performing rational structure-based optimizations of tumor vaccines.
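A minimal sketch of RMSD-based graph clustering of the kind described above; the cutoff value and the use of connected components as clusters are assumptions, since the abstract does not specify the exact graph algorithm.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_conformers(rmsd, cutoff=1.0):
    """Cluster conformers whose pairwise RMSD (square matrix, Angstroms) is below
    the cutoff; return the largest cluster's members and its center, i.e. the
    conformer with the lowest mean RMSD to the rest of that cluster."""
    adj = csr_matrix(rmsd < cutoff)                    # edge where conformers are close
    _, labels = connected_components(adj, directed=False)
    largest = np.argmax(np.bincount(labels))
    members = np.flatnonzero(labels == largest)
    sub = rmsd[np.ix_(members, members)]
    center = members[np.argmin(sub.mean(axis=1))]
    return members, center

# Tiny illustrative RMSD matrix for 4 conformers (values assumed).
rmsd = np.array([[0.0, 0.4, 0.7, 3.0],
                 [0.4, 0.0, 0.5, 3.2],
                 [0.7, 0.5, 0.0, 2.9],
                 [3.0, 3.2, 2.9, 0.0]])
members, center = cluster_conformers(rmsd)
print(members, center)   # -> [0 1 2], center conformer 1
```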
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. The traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables influencing exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, to be interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and makes it possible to evaluate, to a certain extent, the model selection uncertainty that is seldom mentioned in current practice.
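A small sketch of the Akaike-weight computation described above, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i the AIC difference to the best model, followed by a model-averaged prediction; the AIC values and per-model predictions are illustrative placeholders.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: relative support for each model in the candidate set."""
    delta = np.asarray(aic, float) - np.min(aic)       # AIC differences
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Illustrative AIC values for four candidate exposure models (assumed numbers).
aic = [1012.3, 1010.1, 1015.8, 1010.9]
w = akaike_weights(aic)
print(np.round(w, 3))          # ~[0.162, 0.485, 0.028, 0.325]; second model dominates

# Model-averaged prediction: weight each model's prediction by its Akaike weight.
preds = np.array([2.1, 1.8, 2.6, 1.9])                # per-model predictions (assumed)
print(float(w @ preds))                               # multimodel average
```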
International consensus conference on PFAPA syndrome: Evaluation of a new set of diagnostic criteria
Abstract:
The PFAPA syndrome is characterized by periodic fever associated with pharyngitis, cervical adenitis and/or aphthous stomatitis, and belongs to the auto-inflammatory diseases. Diagnostic criteria are based on clinical features and the exclusion of other periodic fever syndromes. An analysis of a large cohort of patients has shown weaknesses in these criteria, and there is a lack of international consensus. An international conference was held in Morges in November 2008 to propose a new set of classification criteria based on a consensus among experts in the field. We aimed to verify the applicability of the new set of classification criteria. 80 patients diagnosed with PFAPA syndrome from 3 pediatric rheumatology centers (Genoa, Lausanne and Geneva) were included in the study. A detailed description of the clinical and laboratory features was obtained. The new classification criteria and the current diagnostic criteria were applied to the patients. Only 43/80 patients (53.8%) fulfilled all criteria of the new classification. 31 patients were excluded because they did not meet one of the 7 classification criteria, 8 because of 2 criteria, and one because of 3 criteria. When we applied the current criteria to the same patients, 11/80 patients (13%) needed to be excluded. 8/80 patients (10%) were excluded by both sets. Exclusion was related to only some of the criteria. Number of patients failing each criterion (new classification criteria/current diagnostic criteria): age (1/6), symptoms between episodes (2/2), delayed growth (3/3), main symptoms (21/0), periodicity, length of fever, interval between episodes, and length of disease (19/0). The application of some of the new criteria was not easy, as they were both very restrictive and required precise information from the patients. Our work has shown that the new set of classification criteria can be applied to patients suspected of PFAPA syndrome, but it seems to be more restrictive than the current diagnostic criteria. Further validation work is needed on this new set of classification criteria to determine whether they discriminate well between PFAPA and other recurrent fever syndromes.
Abstract:
We conceptualize new ways to qualify what themes should dominate the future international business and management (IB/IM) research agenda by examining three questions: Whom should we ask? What should we ask, and which selection criteria should we apply? What are the contextual forces? Our main findings are the following: (1) wider perspectives from academia and practice would benefit both rigor and relevance; (2) four key forces are climate change, globalization, inequality, and sustainability; and (3) we propose scientific mindfulness as the way forward for generating themes in IB/IM research. Scientific mindfulness is a holistic, cross-disciplinary, and contextual approach, whereby researchers need to make sense of multiple perspectives with the betterment of society as the ultimate criterion.