63 results for lock and key model


Relevance: 100.00%

Abstract:

The wreck U Pezzo, excavated in the Saint Florent Gulf in northern Corsica, was identified as the pink Saint Etienne, a merchant ship which sank on January 31, 1769. In order to determine the composition of the organic materials used to coat the hull or to waterproof different parts of the pink, a study of several samples, using molecular biomarker and carbon isotopic analysis, was initiated. The results revealed that the remarkable yellow coat covering the outside planks of the ship's bottom under the water line is composed of sulfur, tallow (of ox and not of cetacean origin) and black pitch, a mixture called "couroi" or "stuff". Onboard ropes had been subjected to a tarring treatment with pitch. Hairs mixed with pitch were identified in samples collected between the two layers of the hull or under the sheathing planking. The study also provides a key model for the weathering of pitch, as different degrees of degradation were found between the surface and the heart of several samples. Accordingly, molecular parameters for alteration were proposed. Furthermore, novel mixed esters between terpenic and diterpenic alcohols and the major free fatty acids (C14:0, C16:0, C18:0) were detected in the yellow coat.

Relevance: 100.00%

Abstract:

This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted from the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods and, at the same time, to reduce the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
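
For readers unfamiliar with the technique, the sketch below illustrates generic dense optical-flow registration of an atlas slice to a patient slice using scikit-image; it is not the authors' simultaneous segmentation and registration model, and the function and variable names are ours.

```python
# A minimal sketch of optical-flow-based atlas-to-patient registration,
# in the spirit of (but much simpler than) the paper's model.
import numpy as np
from skimage.registration import optical_flow_tvl1
from skimage.transform import warp

def register_atlas(atlas_slice, patient_slice):
    """Estimate a dense displacement field mapping atlas -> patient."""
    # v, u are the per-pixel row/column displacement components
    v, u = optical_flow_tvl1(patient_slice, atlas_slice)
    rows, cols = atlas_slice.shape
    row_grid, col_grid = np.meshgrid(np.arange(rows), np.arange(cols),
                                     indexing="ij")
    # Warp the atlas (and, by the same field, any structure labeled in it,
    # e.g. an STN mask) into the patient's space.
    warped = warp(atlas_slice, np.array([row_grid + v, col_grid + u]),
                  mode="edge")
    return warped, (v, u)
```

Any point defined in atlas coordinates (such as an STN target) can then be mapped through the estimated displacement field into patient coordinates.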

Relevance: 100.00%

Abstract:

The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, industrial automation, embodied in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-per-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the SoC- and software-platform-based disruptions in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base, thanks to strong technology-enabled customer lock-in and customers' high risk exposure, since production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels formerly occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.

Relevance: 100.00%

Abstract:

The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. In particular, there is a paucity of research offering guidance on the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotation. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, the images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows a likelihood ratio to be derived and explored under known states of affairs (both in cases where the mark is known to have been left by the donor who gave the model and, conversely, in cases where the mark is established to originate from a different source). To assess the system performance, a first dataset containing 1229 donors, compiled during the FearID research project, was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of the hit list. For print-to-print comparisons, the results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.
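
To make the scoring idea concrete, here is a minimal sketch of a normalized 2D correlation proximity score and a score-based likelihood ratio. Image alignment, donor modelling from multiple prints and the real score distributions are omitted, and the Gaussian score model is an illustrative assumption rather than the system's documented method.

```python
# Sketch: proximity score between an aligned mark and a donor-model image,
# and a score-based likelihood ratio estimated from same-source vs
# different-source score samples (modeled here, by assumption, as Gaussians).
import numpy as np
from scipy.stats import norm

def proximity_score(mark, model):
    """Normalized 2D correlation coefficient between two aligned images."""
    a = mark - mark.mean()
    b = model - model.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

def likelihood_ratio(score, same_source_scores, diff_source_scores):
    """LR = p(score | same source) / p(score | different source)."""
    h1 = norm(*norm.fit(same_source_scores))   # fitted same-source density
    h2 = norm(*norm.fit(diff_source_scores))   # fitted different-source density
    return h1.pdf(score) / h2.pdf(score)
```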

Relevance: 100.00%

Abstract:

EXECUTIVE SUMMARY: Evaluating the information security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of information security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective, because it does not take into consideration the necessity of a global and systemic multidimensional approach to information security evaluation, even though the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to assess all dimensions of security holistically, in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented, based on a methodological evaluation framework in which information security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and to the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected result of evaluations performed according to it. Chapter 2 focuses on the definition of information security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic, baseline information security program are defined. On this basis, the most common roots of trust in information security are identified. Chapter 3 analyses the difference and the relationship between the concepts of information risk management and security management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of information security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed so as to provide an assurance-related platform together with three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined information security dimensions: the organizational dimension, the functional dimension, the human dimension, and the legal dimension. Each information security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:

- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension.

The second phase concerns the evaluation of each information security dimension by:

- implementing the evaluation model, based on the elements identified for each dimension within the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization to define its own security requirements.

Part Three of this dissertation contains the final remarks, supporting resources and annexes. With reference to the objectives of the thesis, the final remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The supporting resources comprise the bibliographic resources that were used to elaborate and justify our approach. The annexes include the relevant topics identified in the literature that illustrate certain aspects of our approach. Our information security evaluation model is based on, and integrates, different information security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of information security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that information security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model that can be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool within a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their information security. RÉSUMÉ: General context of the thesis. Evaluating security in general, and information security in particular, has become for organizations not only a crucial mission but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address separately the different aspects making up information security. We believe that this way of evaluating security is inefficient, because it does not take into account the interactions between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We identified the need for a global, integrated, systemic and multidimensional approach to information security evaluation. Indeed, and this is the starting point of our thesis, we show that only a global treatment of security makes it possible to meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new evaluation paradigm for security, designed to satisfy the effectiveness and efficiency needs of a given organization. We propose a model that aims to evaluate all the dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model rests on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers information security from a global perspective. Structure of the thesis and topics covered: our document is structured in three parts. The first, entitled "The problem of information security evaluation", consists of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal and invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented, so as to outline the expected results of this model. Chapter 2 focuses on the definition of the notion of information security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference baseline in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in what constitutes the holistic character of security, as well as the constituent elements of a security reference level, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships that exist between the risk management and security management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way in which it meets the evaluation requirements presented earlier. In this chapter the underlying concepts of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform offering a certain level of assurance, relying on three evaluation attributes, namely: "trust structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art of research and the literature, of the various existing methods, and of the norms and standards most common in the security domain. On this basis, three different evaluation levels are constructed, namely the assurance level, the quality level and the maturity level, which form the basis for evaluating the overall security state of an organization. The second part, "Application of the information security assurance evaluation model by security domain", also consists of four chapters. The evaluation model already constructed and analysed is, in this part, placed in a specific context according to the four predefined security dimensions: the organizational dimension, the functional dimension, the human dimension, and the legal dimension. Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation:

- identification of the key elements of the evaluation;
- identification of the "Focus Areas" for each dimension, which represent the issues found in that dimension;
- identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that help resolve or reduce the impact of the risks.

The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of implementing the general evaluation model for the dimension concerned by:

- building on the elements specified in the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.

On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be regarded as a reference baseline for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work together with final remarks. This last part is completed by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal addresses a genuine, as yet unresolved problem that all organizations face, regardless of their size and profile.
It would allow organizations to specify their particular requirements regarding the security level to be met, and to instantiate an evaluation process specific to their needs, so that they can make sure their information security is managed appropriately, thereby gaining a certain level of confidence in the degree of protection provided. We have integrated into our model the best know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while providing answers to the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool stemming from a coherent evaluation approach. Consequently, our evaluation system can be implemented internally by the organization itself, without recourse to additional resources, and likewise gives it the opportunity to better govern its information security.
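
Purely as an illustration of the two-phase structure (dimensions, Focus Areas, Specific Factors, maturity levels) and of the weakest-link premise, a hypothetical encoding might look like the following. The names and the min() aggregation rule are our assumptions, not the thesis's actual catalogue or scoring rules.

```python
# Hypothetical sketch of the ISAAM-style structure described above: each
# dimension groups Focus Areas, each Focus Area groups Specific Factors
# scored on a maturity scale; the weakest link determines the level.
from dataclasses import dataclass, field

@dataclass
class FocusArea:
    name: str
    specific_factors: dict = field(default_factory=dict)  # factor -> level 0..5

    def maturity(self) -> int:
        return min(self.specific_factors.values())

@dataclass
class Dimension:
    name: str
    focus_areas: list = field(default_factory=list)

    def maturity(self) -> int:
        return min(fa.maturity() for fa in self.focus_areas)

org = Dimension("Organizational", [
    FocusArea("Security governance", {"Policy defined": 3, "Roles assigned": 2}),
    FocusArea("Awareness", {"Training program": 4}),
])
# Overall security is only as strong as the weakest link:
print(org.maturity())  # -> 2
```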

Relevance: 100.00%

Abstract:

Forensic intelligence is a distinct dimension of forensic science. Forensic intelligence processes have mostly been developed to address either a specific type of trace or a specific problem. Even though these empirical developments have led to successes, they are trace-specific in nature and contribute to the creation of silos that hamper the establishment of a more general and transversal model. Forensic intelligence has shown some important perspectives, but more general developments are required to address persistent challenges. This will ensure the progress of the discipline as well as its widespread implementation in the future. This paper demonstrates that the description of forensic intelligence processes, their architectures, and the methods for building them can, at a certain level, be abstracted from the type of trace considered. A comparative analysis is made between two forensic intelligence approaches developed independently in Australia and in Europe for monitoring apparently very different kinds of problems: illicit drugs and false identity documents. An inductive effort is pursued to identify similarities and to outline a general model. Besides breaking barriers between apparently separate fields of study in forensic science and intelligence, this transversal model would assist in defining forensic intelligence, its role and place in policing, and in identifying its contributions and limitations. The model will facilitate the paradigm shift from the current case-by-case reactive attitude towards a proactive approach by serving as a guideline for the use of forensic case data in an intelligence-led perspective. A follow-up article will specifically address issues related to comparison processes, decision points and organisational issues regarding forensic intelligence (part II).

Relevance: 100.00%

Abstract:

Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
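
For orientation, the toy sampler below shows the general shape of such a pixel-based MCMC inversion: a Gaussian data likelihood plus a first-difference smoothness (model-structure) constraint, explored with single-pixel Metropolis updates. The linear operator G stands in for the real plane-wave EM forward solver, and all parameter values are illustrative.

```python
# Toy pixel-based Metropolis inversion: L2 data misfit + smoothness prior.
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 10, 10
m_true = np.zeros((ny, nx)); m_true[3:7, 3:7] = 1.0
G = rng.normal(size=(40, ny * nx))              # stand-in forward operator
d_obs = G @ m_true.ravel() + rng.normal(scale=0.1, size=40)

def log_post(m, sigma=0.1, lam=5.0):
    resid = d_obs - G @ m.ravel()
    ll = -0.5 * ((resid / sigma) ** 2).sum()    # Gaussian likelihood
    # first-difference smoothness constraint on the pixel grid
    rough = (np.diff(m, axis=0) ** 2).sum() + (np.diff(m, axis=1) ** 2).sum()
    return ll - lam * rough

m = np.zeros((ny, nx))
lp = log_post(m)
for it in range(20000):
    prop = m.copy()
    i, j = rng.integers(ny), rng.integers(nx)
    prop[i, j] += rng.normal(scale=0.2)         # single-pixel random walk
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        m, lp = prop, lp_prop
```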

Relevance: 100.00%

Abstract:

The evolution of cooperation is thought to be promoted by pleiotropy, whereby cooperative traits are coregulated with traits that are important for personal fitness. However, this hypothesis faces a key challenge: what happens if mutation targets a cooperative trait specifically rather than the pleiotropic regulator? Here, we explore this question with the bacterium Pseudomonas aeruginosa, which cooperatively digests complex proteins using elastase. We empirically measure and theoretically model the fate of two mutants, one missing the whole regulatory circuit behind elastase production and the other with only the elastase gene mutated, relative to the wild type (WT). We first show that, when elastase is needed, neither of the mutants can grow if the WT is absent. Consistent with previous findings, we also show that regulatory gene mutants can grow faster than the WT when there are no pleiotropic costs. However, we find that mutants lacking only elastase production do not outcompete the WT, because the individual cooperative trait has a low cost. We argue that the intrinsic architecture of molecular networks makes pleiotropy an effective way to stabilize cooperative evolution. Although individual cooperative traits experience loss-of-function mutations, these mutations may yield only weak benefits, and so need not undermine the protection conferred by pleiotropy.
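
A toy public-goods model along these lines can reproduce the qualitative result: a regulator mutant saves the full cost of cooperation, while an elastase-only mutant saves just the small cost of the single trait. The sketch below is our simplified illustration with made-up parameter values, not the authors' fitted model.

```python
# Toy public-goods competition: producers (WT) pay a cost for a shared
# benefit; mutants avoid part or all of that cost. Parameters are invented.
import numpy as np

def compete(f_wt=0.5, b=1.0, c_regulator=0.3, c_elastase=0.05,
            generations=50, mutant="elastase_only"):
    """Return the WT frequency trajectory against one mutant type."""
    # The regulatory-circuit mutant saves the full cost; the elastase-only
    # mutant saves just the small cost of the single cooperative trait.
    saved = c_regulator if mutant == "regulator" else c_elastase
    f = f_wt
    traj = [f]
    for _ in range(generations):
        public_good = b * f                    # benefit scales with producers
        w_wt = 1 + public_good - c_regulator   # producers pay the full cost
        w_mut = 1 + public_good - (c_regulator - saved)
        f = f * w_wt / (f * w_wt + (1 - f) * w_mut)
        traj.append(f)
    return np.array(traj)

# With a cheap cooperative trait, the elastase-only mutant barely gains:
print(compete(mutant="elastase_only")[-1], compete(mutant="regulator")[-1])
```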

Relevance: 100.00%

Abstract:

OBJECTIVE: To provide an update to the original Surviving Sepsis Campaign clinical management guidelines, "Surviving Sepsis Campaign Guidelines for Management of Severe Sepsis and Septic Shock," published in 2004. DESIGN: Modified Delphi method with a consensus conference of 55 international experts, several subsequent meetings of subgroups and key individuals, teleconferences, and electronic-based discussion among subgroups and among the entire committee. This process was conducted independently of any industry funding. METHODS: We used the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations. A strong recommendation (1) indicates that an intervention's desirable effects clearly outweigh its undesirable effects (risk, burden, cost) or clearly do not. Weak recommendations (2) indicate that the tradeoff between desirable and undesirable effects is less clear. The grade of strong or weak is considered of greater clinical importance than a difference in letter level of quality of evidence. In areas without complete agreement, a formal process of resolution was developed and applied. Recommendations are grouped into those directly targeting severe sepsis, recommendations targeting general care of the critically ill patient that are considered high priority in severe sepsis, and pediatric considerations. RESULTS: Key recommendations, listed by category, include early goal-directed resuscitation of the septic patient during the first 6 hrs after recognition (1C); blood cultures before antibiotic therapy (1C); imaging studies performed promptly to confirm potential source of infection (1C); administration of broad-spectrum antibiotic therapy within 1 hr of diagnosis of septic shock (1B) and severe sepsis without septic shock (1D); reassessment of antibiotic therapy with microbiology and clinical data to narrow coverage, when appropriate (1C); a usual 7-10 days of antibiotic therapy guided by clinical response (1D); source control with attention to the balance of risks and benefits of the chosen method (1C); administration of either crystalloid or colloid fluid resuscitation (1B); fluid challenge to restore mean circulating filling pressure (1C); reduction in rate of fluid administration with rising filling pressures and no improvement in tissue perfusion (1D); vasopressor preference for norepinephrine or dopamine to maintain an initial target of mean arterial pressure ≥ 65 mm Hg (1C); dobutamine inotropic therapy when cardiac output remains low despite fluid resuscitation and combined inotropic/vasopressor therapy (1C); stress-dose steroid therapy given only in septic shock after blood pressure is identified to be poorly responsive to fluid and vasopressor therapy (2C); recombinant activated protein C in patients with severe sepsis and clinical assessment of high risk for death (2B, except 2C for postoperative patients).
In the absence of tissue hypoperfusion, coronary artery disease, or acute hemorrhage, target a hemoglobin of 7-9 g/dL (1B); a low tidal volume (1B) and limitation of inspiratory plateau pressure strategy (1C) for acute lung injury (ALI)/acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure in acute lung injury (1C); head-of-bed elevation in mechanically ventilated patients unless contraindicated (1B); avoiding routine use of pulmonary artery catheters in ALI/ARDS (1A); to decrease days of mechanical ventilation and ICU length of stay, a conservative fluid strategy for patients with established ALI/ARDS who are not in shock (1C); protocols for weaning and sedation/analgesia (1B); using either intermittent bolus sedation or continuous infusion sedation with daily interruptions or lightening (1B); avoidance of neuromuscular blockers, if at all possible (1B); institution of glycemic control (1B), targeting a blood glucose < 150 mg/dL after initial stabilization (2C); equivalency of continuous veno-venous hemofiltration and intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1A); use of stress ulcer prophylaxis to prevent upper gastrointestinal bleeding using H2 blockers (1A) or proton pump inhibitors (1B); and consideration of limitation of support where appropriate (1D). Recommendations specific to pediatric severe sepsis include greater use of physical examination therapeutic end points (2C); dopamine as the first drug of choice for hypotension (2C); steroids only in children with suspected or proven adrenal insufficiency (2C); and a recommendation against the use of recombinant activated protein C in children (1B). CONCLUSIONS: There was strong agreement among a large cohort of international experts regarding many level 1 recommendations for the best current care of patients with severe sepsis. Evidence-based recommendations regarding the acute management of sepsis and septic shock are the first step toward improved outcomes for this important group of critically ill patients.

Relevance: 100.00%

Abstract:

Background Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the training set. Results Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits based on the presence of fever and myalgia. The area under the receiver operating characteristic curves (AUROCC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2 and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group compared with only 38% for Model 1 and 54% for Model 3. Conclusions A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROCC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
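
For readers who want to reproduce the general workflow, the sketch below fits a shallow classification tree on simulated symptom data with a 70/30 development/validation split and reports the validation AUROC. The data and coefficients are invented, so it mirrors the procedure, not the study's results.

```python
# Sketch of the CART workflow: shallow decision tree on symptom predictors,
# evaluated by AUROC on a held-out validation split. Data are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.normal(37.5, 0.8, n),     # temperature (deg C)
    rng.integers(0, 2, n),        # chills (0/1)
    rng.integers(0, 2, n),        # cough (0/1)
    rng.integers(0, 2, n),        # myalgia (0/1)
])
# Simulated influenza status driven mainly by fever and chills
logit = -40 + X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# 70% development / 30% validation, as in the study design
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
tree = DecisionTreeClassifier(max_leaf_nodes=7, min_samples_leaf=30)
tree.fit(X_dev, y_dev)
print("validation AUROC:",
      roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1]))
```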

Relevance: 100.00%

Abstract:

The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model of the network controlling the CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented in the software GINsim, which enables the definition, analysis, and simulation of logical regulatory graphs.
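
The core objects here, the asynchronous state transition graph and attractors as terminal strongly connected components, are easy to illustrate on a toy Boolean network. The sketch below uses a textbook two-gene toggle switch rather than the CD4+ T cell model, and plain networkx rather than GINsim.

```python
# Toy illustration: asynchronous state transition graph (STG) of a Boolean
# network, with attractors found as terminal strongly connected components.
import itertools
import networkx as nx

# Boolean rules: each component's next value as a function of the state
rules = [
    lambda s: int(not s[1]),   # gene 0 is repressed by gene 1
    lambda s: int(not s[0]),   # gene 1 is repressed by gene 0
]

stg = nx.DiGraph()
for state in itertools.product([0, 1], repeat=len(rules)):
    stg.add_node(state)
    for i, f in enumerate(rules):       # asynchronous updating:
        target = f(state)               # one component changes at a time
        if target != state[i]:
            nxt = list(state); nxt[i] = target
            stg.add_edge(state, tuple(nxt))

# Attractors = terminal SCCs (no edge leaving the component)
condensed = nx.condensation(stg)
attractors = [condensed.nodes[n]["members"]
              for n in condensed.nodes if condensed.out_degree(n) == 0]
print(attractors)   # the two stable states (0, 1) and (1, 0)
```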

Relevance: 100.00%

Abstract:

A sound statistical methodology is presented for modelling the correspondence between the characteristics of individuals, their thermal environment, and their thermal sensation. The proposed methodology substantially improves on that developed by P.O. Fanger by formulating a more general and precise model of thermal comfort. It enables the model to be estimated from a sample of data in which all the parameters of comfort vary at the same time, which is not possible with the approach adopted by Fanger. Moreover, the present model remains valid when thermal conditions are far from optimal.
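
One way to realize such a model, shown below as a sketch on simulated data, is an ordered-logit regression of thermal sensation votes on simultaneously varying personal and environmental covariates. The specification and coefficients are illustrative assumptions, not the paper's own.

```python
# Sketch: ordinal regression of thermal sensation votes on personal and
# environmental covariates, fitted from (here, simulated) survey data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "air_temp": rng.normal(23, 3, n),       # deg C
    "clothing": rng.normal(0.7, 0.2, n),    # clo
    "metabolic": rng.normal(1.2, 0.2, n),   # met
})
latent = 0.5 * (df["air_temp"] - 23) + 1.0 * df["clothing"] \
         + rng.logistic(size=n)
# thermal sensation vote on a 5-point scale (-2 cold .. +2 hot)
df["vote"] = pd.cut(latent, [-np.inf, -2, -0.7, 0.7, 2, np.inf],
                    labels=[-2, -1, 0, 1, 2]).astype(int)

model = OrderedModel(df["vote"],
                     df[["air_temp", "clothing", "metabolic"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```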

Relevance: 100.00%

Abstract:

Introduction: The presence of intra-articular basic calcium phosphate (BCP) crystals, including OCP, carbonated apatite, hydroxyapatite and tricalcium phosphate crystals, is associated with severe osteoarthritis and destructive arthropathies such as Milwaukee shoulder. Although BCP crystals have elicited mitogenic, anabolic and catabolic responses in vitro, their intra-articular effect had never been assessed. Objective: To determine the effects of OCP crystals in joints in vivo. Methods: OCP crystals (200 µg in 20 µl PBS) were injected into the right knee joint of wild-type mice treated or not with the IL-1R antagonist anakinra, or of mice deficient for the inflammasome proteins ASC and NALP3; the contralateral knee joint, injected with 20 µl of PBS, served as a control. Four days and 17 days after crystal injection, mice were sacrificed and the knee joints dissected. Histological scoring for synovial inflammation and characterisation of macrophages, neutrophils and T cells were performed. Technetium (Tc) uptake was measured at 6 h, 1 day and 4 days after OCP injection. Cartilage degradation was evaluated by safranin O staining and VDIPEN immunohistochemistry. Results: The intra-articular localisation of the injected OCP crystals was evidenced by von Kossa staining performed on non-decalcified samples embedded in methyl methacrylate. Injection of OCP crystals into knee joints led, at day 4, to an inflammatory response with intense macrophage staining and some neutrophil recruitment in the synovial membrane. This synovitis was not accompanied by increased Tc uptake into the knee joint, uptake being similar in the OCP crystal-injected and control knees at all time points investigated (6 h, 1 day, 4 days). The histological modifications persisted over 17 days, with additional fibrosis evidenced at this later time point. The OCP crystal-induced synovitis was totally IL-1α- and IL-1β-independent, as shown by the absence of inhibitory effects of anakinra injected into wild-type mice. Accordingly, OCP crystal-induced synovitis was similar in ASC-/- and NALP3-/- mice, as no alterations of inflammation were demonstrated between these groups of mice. Concerning cartilage matrix degradation, OCP crystals induced a strong breakdown of proteoglycans 4 and 17 days after injection, as measured by the loss of red staining from safranin O-stained sections of cartilage surfaces. In addition, we also measured advanced, MMP-mediated cartilage matrix destruction, as evidenced by VDIPEN staining of cartilage. OCP-mediated cartilage degradation was similar in all experimental conditions tested (WT + anakinra, and ASC- or NALP3-deficient mice). Conclusion: These data indicate in vivo that the intra-articular presence of OCP crystals is associated with cartilage destruction along with synovial inflammation. This is an interesting new model of BCP crystal-related destructive arthropathy that will allow new therapies for this disease to be assessed.

Relevance: 100.00%

Abstract:

In my first project, I analyzed the role of the amiloride-sensitive epithelial sodium channel (ENaC) in the skin during wound healing. ENaC is present in the skin, and a function in keratinocyte differentiation and barrier formation has been demonstrated. Previous findings suggested that ENaC might be implicated in keratinocyte migration, although its role in wound healing had not yet been analyzed. Using skin-specific (K14-Cre) conditional ENaC knockout and overexpressing mice, I determined the wound closure kinetics and performed morphometric measurements. The time course of wound repair was not significantly different in knockouts or transgenics compared with control mice, and the morphology of the closing wound was not altered. In my second project, I studied the glucocorticoid-induced leucine zipper (GILZ, Tsc22d3). GILZ is widely expressed, and an important role has been predicted in immunity, adipogenesis and renal sodium handling. Mice were generated that constitutively lack all the functional domains of the Gilz gene. In these mice, the expression of GILZ mRNA transcripts and protein was completely abolished in all tissues tested. Surprisingly, the knockout mice survived. To test whether GILZ mimics glucocorticoid action, we studied its implication in T- and B-cell development and in a model of sepsis. We measured cytokine secretion in different inflammatory models, including peritoneal and bone marrow-derived macrophages, splenocytes and a model of sepsis. In all our experiments, cytokine secretion from GILZ-deficient cells was not different from controls. From 6 months onwards, knockout mice contained significantly less body fat and were lighter. Following sodium and water deprivation experiments, water and salt homeostasis was preserved. Sterility of knockout males was associated with severe testis dysplasia and smaller seminiferous tubules; the numbers of Sertoli and germ cells were reduced, and increased apoptosis, but not increased cell proliferation, was evidenced. The interstitial Leydig cell population was augmented, and higher plasma FSH and testosterone levels were found. Interestingly, the expression of the target gene Pparγ2 was diminished in the testis and in the liver, but not in the skin, kidney or fat. The Tsc22d1 mRNA transcript level was upregulated in the testis, but not in kidney or fat tissue. In most tissues, except the testis, GILZ-deficient mice reveal functional redundancy amongst members of the Tsc22d family or genes involved in the same regulatory pathways. In summary, contrary to the published in vitro data, GILZ does not play the crucial role attributed to it in immunology or inflammation, but we identified a novel function in spermatogenesis. -- In my first project, I analyzed the role of the amiloride-sensitive epithelial sodium channel (ENaC) in skin wound healing. ENaC is present in the skin and has a function in keratinocyte differentiation and barrier formation. Studies suggested that ENaC could be involved in keratinocyte migration; however, its role in wound healing had not yet been studied. Using mice that overexpress ENaC or are knockout for ENaC specifically in the skin (K14-Cre), I analyzed the wound closure time and studied the morphology of the healing wound. In both the overexpressing and the knockout mice, the closure speed and the morphology of the wound were identical to those of control mice.
In my second project, I studied the glucocorticoid-induced leucine zipper (GILZ, Tsc22d3). GILZ is widely expressed, and an important role has been predicted in immunity, adipogenesis and renal sodium transport. Mice were generated in which the functional domains of the Gilz gene are eliminated. GILZ expression, at both mRNA and protein level, was completely abolished in all tissues tested. Surprisingly, these knockout mice survive. To test whether GILZ mimics the effects of glucocorticoids, we studied its involvement in T- and B-cell development as well as in a model of sepsis. We measured cytokine secretion in various inflammation models, such as peritoneal or bone marrow-derived macrophages, splenocytes, and a model of sepsis. In all our experiments, cytokine secretion from GILZ-deficient cells was similar to controls. From 6 months onwards, knockouts contained significantly less fat and were lighter. Following sodium and water deprivation, salt and water homeostasis was preserved. Knockout males were sterile, with severe testicular dysplasia; the seminiferous tubules were smaller and contained reduced numbers of Sertoli and germ cells. Apoptosis was increased in these cells, but cell proliferation was not. The number of Leydig cells was also higher, as were FSH and testosterone levels. Expression of the target gene Pparγ2 was decreased in the testis and liver, but not in the skin, kidney or adipose tissue. Tsc22d1 mRNA was more highly expressed in the testis, but not in the kidney or adipose tissue. In most tissues, except the testis, the knockout mice revealed functional redundancy of other members of the Tsc22d family or of genes involved in the same regulatory pathways. In summary, contrary to in vitro data, GILZ does not play an essential role in immunology, but we have identified a new function in spermatogenesis.

Relevance: 100.00%

Abstract:

Objective. Mandibular osteoradionecrosis (ORN) is a serious complication of radiotherapy (RT) in head and neck cancer patients. The aim of this study was to analyze the incidence of and risk factors for mandibular ORN in squamous cell carcinoma (SCC) of the oral cavity and oropharynx. Study Design. Case series with chart review. Setting. University tertiary care center for head and neck oncology. Subjects and Methods. Seventy-three patients treated for stage I to IV SCC of the oral cavity and oropharynx between 2000 and 2007, with a minimum follow-up of 2 years, were included in the study. Treatment modalities included both RT with curative intent and adjuvant RT following tumor surgery. The log-rank test and the Cox model were used for univariate and multivariate analyses. Results. The incidence of mandibular ORN was 40% at 5 years. In univariate analysis, the following risk factors were identified: oral cavity tumors (P < .01), bone invasion (P < .02), any surgery prior to RT (P < .04), and bone surgery (P < .0001). In multivariate analysis, mandibular surgery proved to be the most important risk factor and the only one reaching statistical significance (P < .0002). Conclusion. Mandibular ORN is a frequent long-term complication of RT for oral cavity and oropharynx cancers. Mandibular surgery before irradiation is the only independent risk factor. These aspects must be considered when planning treatment for these tumors.
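
As an illustration of this analysis pipeline, the sketch below runs a univariate log-rank test and a multivariate Cox model with the lifelines library; the file and column names are hypothetical placeholders for the study's variables.

```python
# Sketch of the survival analysis described above, on a hypothetical
# data frame with one row per patient (column names are placeholders).
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("orn_cohort.csv")  # hypothetical cohort file

# Univariate: time to ORN with vs without mandibular (bone) surgery
surg = df[df["bone_surgery"] == 1]
no_surg = df[df["bone_surgery"] == 0]
lr = logrank_test(surg["months_to_orn"], no_surg["months_to_orn"],
                  event_observed_A=surg["orn_event"],
                  event_observed_B=no_surg["orn_event"])
print(lr.p_value)

# Multivariate Cox proportional hazards model over the candidate factors
cph = CoxPHFitter()
cph.fit(df[["months_to_orn", "orn_event", "oral_cavity_site",
            "bone_invasion", "any_surgery", "bone_surgery"]],
        duration_col="months_to_orn", event_col="orn_event")
cph.print_summary()
```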