125 results for Univariate Analysis box-jenkins methodology


Relevance: 100.00%

Abstract:

BACKGROUND/OBJECTIVES: Preoperative nutrition has been shown to reduce morbidity after major gastrointestinal (GI) surgery in selected patients at risk. In a recent randomized trial (NCT00512213), however, almost half of the patients did not consume the recommended dose of the nutritional intervention. The present study aimed to identify risk factors for noncompliance. SUBJECTS/METHODS: Demographic (n=5) and nutritional (n=21) parameters for this retrospective analysis were obtained from a prospectively maintained database. The outcome of interest was compliance with the allocated intervention (ingestion of ≥11 of 15 preoperative oral nutritional supplement units). Uni- and multivariate analyses of potential risk factors for noncompliance were performed. RESULTS: The final analysis included 141 patients with complete data sets. Fifty-nine patients (42%) were considered noncompliant. Univariate analysis identified low C-reactive protein levels (P=0.015), decreased recent food intake (P=0.032) and, as a trend, low hemoglobin (P=0.065) and low pre-albumin (P=0.056) levels as risk factors for decreased compliance. However, none of these was retained as an independent risk factor after multivariate analysis. Interestingly, 17 potential explanatory parameters, such as upper GI cancer, weight loss, reduced appetite and co-morbidities, did not show any significant correlation with reduced intake of nutritional supplements. CONCLUSIONS: Reduced compliance with preoperative nutritional interventions remains a major issue because the expected benefit depends on the actual intake. Seemingly obvious reasons could not be retained as valid explanations. Compliance thus seems to be primarily a question of will and information; the importance of nutritional supplementation needs to be emphasized through specific patient education.
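As an illustration of the univariate screen described above, an odds ratio with a normal-approximation confidence interval can be computed for a dichotomized risk factor; the counts below are invented for illustration, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
    a/b = noncompliant with/without the risk factor,
    c/d = compliant with/without the risk factor."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for "decreased recent food intake" (illustration only)
or_, lo, hi = odds_ratio_ci(30, 29, 35, 47)
```

An OR above 1 with a CI excluding 1 would flag the factor for entry into the multivariate model.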

Relevance: 100.00%

Abstract:

BACKGROUND: Roux-en-Y gastric bypass (RYGBP), one of the most commonly performed bariatric procedures, remains a technically challenging operation associated with significant morbidity in high-risk patients. This study was conducted to identify predictors of complications after laparoscopic RYGBP. METHODS: Our prospectively established database was reviewed for 30-day and in-hospital complications, graded according to a validated scoring system (Clavien-Dindo) and separated into minor (Clavien-Dindo I-IIIa) and major (Clavien-Dindo IIIb-IV) complications. Patient- and procedure-related factors were analyzed using univariate analysis. Factors significantly associated with morbidity were entered into a multivariate analysis to identify independent predictors. RESULTS: Between 1999 and 2012, 1573 patients (374 men and 1199 women) underwent laparoscopic RYGBP. Mean age was 41 years, and mean body mass index (BMI) was 44.5 kg/m². One hundred fifty-nine procedures were reoperations. One hundred fifty patients (9.5 %) developed at least one complication, and 43 (2.7 %) had major complications, leading to death in one case (0.06 %). Risk factors for morbidity were male gender (p = 0.006) and limited overall experience of the team (p < 0.0001). Prolonged 3-day antibiotic therapy was associated with significantly reduced overall (p < 0.0001) and major (p = 0.005) complication rates. Major complications were associated with smoking (p = 0.016). CONCLUSIONS: The most significant individual risk factors for early complications after RYGBP are male gender, limited surgical experience, and single-dose antibiotic prophylaxis. RYGBP should be performed by experienced teams. Smoking should be discontinued before surgery. Prolonged antibiotic therapy could be considered, especially if a circular stapled gastrojejunostomy is performed with the anvil introduced transorally.

Relevance: 100.00%

Abstract:

PURPOSE: Thoracic fat has been associated with an increased risk of coronary artery disease (CAD). As endothelium-dependent vasoreactivity is a surrogate of cardiovascular events and is impaired early in atherosclerosis, we aimed to assess the possible relationship between thoracic fat volume (TFV) and endothelium-dependent coronary vasomotion. METHODS: Fifty healthy volunteers without known CAD or major cardiovascular risk factors (CRFs) prospectively underwent ⁸²Rb cardiac PET/CT to quantify myocardial blood flow (MBF) at rest and the MBF responses to cold pressor testing (CPT-MBF) and adenosine (i.e., stress-MBF). TFV was measured by a 2D volumetric CT method, and common laboratory blood tests (glucose and insulin levels, HOMA-IR, cholesterol, triglycerides, hsCRP) were performed. Relationships between CPT-MBF, TFV and other CRFs were assessed using non-parametric Spearman rank correlation testing and multivariate linear regression analysis. RESULTS: All 50 participants (58 ± 10 years) had normal stress-MBF (2.7 ± 0.6 mL/min/g; 95 % CI: 2.6-2.9) and myocardial flow reserve (2.8 ± 0.8; 95 % CI: 2.6-3.0), excluding underlying CAD. Univariate analysis revealed a significant inverse relation between absolute CPT-MBF and sex (ρ = -0.47, p = 0.0006), triglyceride (ρ = -0.32, p = 0.024) and insulin levels (ρ = -0.43, p = 0.0024), HOMA-IR (ρ = -0.39, p = 0.007), BMI (ρ = -0.51, p = 0.0002) and TFV (ρ = -0.52, p = 0.0001). The MBF response to adenosine was also correlated with TFV (ρ = -0.32, p = 0.026). On multivariate analysis, TFV emerged as the only significant predictor of the MBF response to CPT (p = 0.014). CONCLUSIONS: TFV is significantly correlated with endothelium-dependent and -independent coronary vasomotion. A high thoracic fat burden might negatively influence the MBF response to CPT and to adenosine stress, even in persons without CAD, suggesting a link between thoracic fat and future cardiovascular events.
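Spearman's rank correlation, used for the univariate analysis above, is simply Pearson's correlation computed on ranks. A minimal stdlib sketch with invented TFV/CPT-MBF pairs:

```python
import math

def ranks(xs):
    """Average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# Invented data: larger thoracic fat volume, smaller CPT-MBF response
tfv = [40, 55, 62, 71, 90]
cpt_mbf = [1.9, 1.6, 1.5, 1.3, 1.0]
rho = spearman(tfv, cpt_mbf)  # -1.0 for strictly monotone decreasing data
```

A ρ near -1, as in the invented data, mirrors the inverse TFV/CPT-MBF relation reported above.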

Relevance: 40.00%

Abstract:

BACKGROUND AND STUDY AIMS: Appropriate use of colonoscopy is a key component of quality management in gastrointestinal endoscopy. In an update of a 1998 publication, the 2008 European Panel on the Appropriateness of Gastrointestinal Endoscopy (EPAGE II) defined appropriateness criteria for various colonoscopy indications. This introductory paper deals with methodology, general appropriateness, and a review of colonoscopy complications. METHODS: The RAND/UCLA Appropriateness Method was used to evaluate the appropriateness of various diagnostic colonoscopy indications, with 14 multidisciplinary experts using a scale from 1 (extremely inappropriate) to 9 (extremely appropriate). Evidence reported in a comprehensive updated literature review informed these decisions. Consolidation of the ratings into three appropriateness categories (appropriate, uncertain, inappropriate) was based on the median and the heterogeneity of the votes. The experts then met to discuss areas of disagreement in the light of existing evidence, followed by a second rating round, with a subsequent third voting round on necessity criteria using much more stringent criteria (i.e., colonoscopy is deemed mandatory). RESULTS: Overall, 463 indications were rated, with 55 %, 16 % and 29 % of them judged appropriate, uncertain and inappropriate, respectively. Perforation and hemorrhage rates, as reported in 39 studies, were in general < 0.1 % and < 0.3 %, respectively. CONCLUSIONS: The updated EPAGE II criteria constitute an aid to clinical decision-making but should in no way replace individual judgment. Detailed panel results are freely available on the internet (www.epage.ch) and will thus constitute a reference source of information for clinicians.
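The consolidation rule described in METHODS (median plus heterogeneity of the votes) can be sketched as follows. The disagreement cut-off of four panelists in each extreme third is an assumption chosen for illustration, not the panel's published rule:

```python
from statistics import median

def classify(votes, extreme=4):
    """Collapse 9-point appropriateness votes into three categories.
    'extreme' is an assumed disagreement cut-off for a 14-member panel."""
    low = sum(1 for v in votes if v <= 3)
    high = sum(1 for v in votes if v >= 7)
    if low >= extreme and high >= extreme:  # panel disagreement
        return "uncertain"
    m = median(votes)
    if m >= 7:
        return "appropriate"
    if m <= 3:
        return "inappropriate"
    return "uncertain"

verdict = classify([8, 9, 7, 8, 8, 9, 7, 8, 7, 9, 8, 8, 7, 9])
```

A polarized panel (half voting 1-3, half voting 7-9) falls into "uncertain" regardless of the median.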

Relevance: 40.00%

Abstract:

This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Typically, environmental phenomena can be modelled as stochastic point processes, where each event, e.g. a forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, and topographic, climatic and meteorological features can also be used to characterise the studied phenomenon. The space-time pattern characterisation thereby represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance, socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point-process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are global in character and do not consider complex spatial constraints, high variability and the multivariate nature of the events.
Therefore, we propose a statistical framework that takes into account the complexities of the geographical space in which phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, specifically for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructure, and one for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena, and to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular of forest fire occurrences. This thesis thus responds to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in five scientific journals. National and international collaborations were also established and successfully accomplished.
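Among the global clustering measures named above, the Morisita index is the simplest to sketch: partition the study area into Q quadrats, count events per quadrat, and compare the observed co-occurrence with the random expectation (values near 1 suggest randomness; values approaching Q indicate strong clustering). The counts below are toy data:

```python
def morisita(counts):
    """Morisita index for a list of per-quadrat event counts."""
    q = len(counts)
    n = sum(counts)
    return q * sum(c * (c - 1) for c in counts) / (n * (n - 1))

even = morisita([5, 5, 5, 5])        # below 1: regular (over-dispersed) pattern
clustered = morisita([20, 0, 0, 0])  # equals Q: maximal clustering
```

The clustered case reaches the index's upper bound Q because all events fall in a single quadrat.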

Relevance: 30.00%

Abstract:

The level of information provided by ink evidence to the criminal and civil justice system is limited. The limitations arise from the weakness of the interpretative framework currently used, as proposed in ASTM standards 1422-05 and 1789-04 on ink analysis. It is proposed to use the likelihood ratio from Bayes' theorem to interpret ink evidence. Unfortunately, when considering the analytical practices defined in the ASTM standards on ink analysis, it appears that current practice does not allow for the level of reproducibility and accuracy required by a probabilistic framework. Such a framework relies on the evaluation of the statistics of ink characteristics using an ink reference database and on the objective measurement of similarities between ink samples. A complete research programme was designed to (a) develop a standard methodology for analysing ink samples in a more reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in a forensic context. This report focuses on the first of the three stages. A calibration process, based on a standard dye ladder, is proposed to improve the reproducibility of ink analysis by HPTLC when inks are analysed at different times and/or by different examiners. The impact of this process on the variability between repetitive analyses of ink samples under various conditions is studied. The results show significant improvements in the reproducibility of ink analysis compared to traditional calibration methods.
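The likelihood-ratio interpretation proposed above weighs the same observation under two propositions (same ink source vs. different sources). A toy score-based sketch; the Gaussian score distributions are invented for illustration, not estimated from any ink database:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score, same=(0.95, 0.03), diff=(0.70, 0.10)):
    """LR = p(score | same source) / p(score | different sources).
    The (mean, sd) pairs are invented for illustration."""
    return normal_pdf(score, *same) / normal_pdf(score, *diff)

lr_high = likelihood_ratio(0.94)  # > 1: supports the same-source proposition
lr_low = likelihood_ratio(0.60)   # < 1: supports the different-sources proposition
```

In casework the two score distributions would be estimated from the reference database, which is precisely why the reproducibility addressed in this report matters.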

Relevance: 30.00%

Abstract:

Continuing developments in science and technology mean that the amounts of information forensic scientists are able to provide for criminal investigations is ever increasing. The commensurate increase in complexity creates difficulties for scientists and lawyers with regard to evaluation and interpretation, notably with respect to issues of inference and decision. Probability theory, implemented through graphical methods, and specifically Bayesian networks, provides powerful methods to deal with this complexity. Extensions of these methods to elements of decision theory provide further support and assistance to the judicial system. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian decision networks for the evaluation and interpretation of scientific findings in forensic science, and for the support of decision-makers in their scientific and legal tasks. Includes self-contained introductions to probability and decision theory. Develops the characteristics of Bayesian networks, object-oriented Bayesian networks and their extension to decision models. Features implementation of the methodology with reference to commercial and academically available software. Presents standard networks and their extensions that can be easily implemented and that can assist in the reader's own analysis of real cases. Provides a technique for structuring problems and organizing data based on methods and principles of scientific reasoning. Contains a method for the construction of coherent and defensible arguments for the analysis and evaluation of scientific findings and for decisions based on them. Is written in a lucid style, suitable for forensic scientists and lawyers with minimal mathematical background. Includes a foreword by Ian Evett. 
The clear and accessible style of this second edition makes this book ideal for all forensic scientists, applied statisticians and graduate students wishing to evaluate forensic findings from the perspective of probability and decision analysis. It will also appeal to lawyers and other scientists and professionals interested in the evaluation and interpretation of forensic findings, including decision making based on scientific information.

Relevance: 30.00%

Abstract:

SUMMARY: Eukaryotic DNA interacts with nuclear proteins via non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome is not uniform, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
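The two analysis ideas described above - filtering artifactual hot-spots and unbiased random sampling of sequence tags - can be sketched as follows (positions, cut-off and sample size are invented):

```python
import random
from collections import Counter

def filter_hotspots(tag_positions, max_per_pos=10):
    """Drop tags mapping to positions with implausibly deep pile-ups,
    a simple stand-in for the hot-spot filtering described above."""
    counts = Counter(tag_positions)
    return [t for t in tag_positions if counts[t] <= max_per_pos]

def subsample(tags, n, seed=0):
    """Fixed-size uniform random sample of tags, reproducible via seed."""
    return random.Random(seed).sample(tags, n)

# Toy data: one artifactual hot-spot position plus 100 ordinary tags
tags = [123456] * 50 + list(range(1000, 1100))
clean = filter_hotspots(tags)
draw = subsample(clean, 20)
```

Repeated draws with different seeds would support the resampling-based inference the thesis describes.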

Relevance: 30.00%

Abstract:

This paper explores the impact of citizens' motivation to vote on the pattern of fiscal federalism. If the only concern of instrumental citizens were the outcome, they would have little incentive to vote, because the probability that a single vote changes an electoral outcome is usually minuscule. If voters turn out in large numbers to derive intrinsic value from the act of voting, how will these voters choose when considering the role local jurisdictions should play? The first section of the paper assesses the weight that expressive voters attach to an instrumental evaluation of alternative outcomes. Predictions are tested with reference to a case study analysis of the way Swiss voters assessed the role their local jurisdiction should play. The relevance of this analysis is also assessed with reference to the choices that voters express when considering other local issues. Textbook analysis of fiscal federalism is premised on the assumption that voters register choice just as 'consumers' reveal demand for services in a market, but how robust is this analogy?

Relevance: 30.00%

Abstract:

SUMMARY: EBBP is a poorly characterized member of the RBCC/TRIM family (RING finger, B-box, coiled-coil/tripartite motif). It is ubiquitously expressed, but particularly high levels are found in keratinocytes. There is evidence that EBBP is involved in inflammatory processes, since it can interact with pro-interleukin-1β (proIL-1β) in human macrophages and keratinocytes, and its downregulation results in reduced secretion of IL-1β. IL-1β activation and secretion require the proteolytic cleavage of proIL-1β by caspase-1, which in turn is activated by a protein complex called the inflammasome. As it has been demonstrated that EBBP can bind two different proteins of the inflammasome (NALP1 and caspase-1), we assumed that EBBP plays a role in the regulation of inflammation and that the inflammasome, which had so far only been described in inflammatory cells, may also exist in keratinocytes. Indeed, I could show in my thesis that the inflammasome components are expressed in human keratinocytes at the RNA and protein level, and also in vivo in human epidermis. After irradiation with a physiological dose of UVB, keratinocytes activated proIL-1β and secreted proIL-1α, IL-1β, proIL-18 and inflammasome proteins, although all these proteins lack a classical signal peptide. The secretion was dependent on caspase-1 activity, but not on de novo protein synthesis. Knock-down of NALP1 and -3, caspase-1 and -5, EBBP and Asc strongly reduced the secretion of IL-1β, demonstrating that inflammasome proteins are directly involved in the maturation of this cytokine in keratinocytes as well. These results demonstrate for the first time the presence of an active inflammasome in non-professional immune cells. Moreover, they show that UV irradiation is a stimulus for inflammasome activation in keratinocytes. For the analysis of the in vivo functions of EBBP, transgenic mice overexpressing EBBP in the epidermis were generated.
To examine the influence of EBBP overexpression on inflammatory processes, we subjected the mice to different inflammation-inducing challenges. Wound healing, UVB irradiation and delayed hypersensitivity were tested, but we did not observe any phenotype in the K14-EBBP mice. In addition, a conditional ebbp knockout mouse has been obtained, which will allow the effects of EBBP gene deletion to be determined in different tissues and organs.

Relevance: 30.00%

Abstract:

Limited information is available regarding the methodology required to characterize hashish seizures and to assess the presence or absence of a chemical link between two seizures. This casework report presents the methodology applied to assess whether two different police seizures came from the same block before it was split. The chemical signature was extracted using GC-MS analysis, and the implemented methodology consists of a study of intra- and inter-variability distributions based on measuring the similarity of chemical profiles across a number of hashish seizures using the Pearson correlation coefficient. Different statistical scenarios (i.e., combinations of data pretreatment techniques and selections of target compounds) were tested to find the most discriminating one. Seven compounds showing high discrimination capability were selected, to which a specific statistical data pretreatment was applied. Based on the results, the statistical model built for comparing the hashish seizures leads to low error rates. Therefore, the implemented methodology is suitable for the chemical profiling of hashish seizures.
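The similarity measurement named above - Pearson's correlation between two seizures' target-compound profiles - can be sketched with invented peak areas for seven compounds:

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# Invented normalised peak areas for seven target compounds
block_a     = [0.32, 0.11, 0.25, 0.08, 0.09, 0.10, 0.05]
block_a_rep = [0.31, 0.12, 0.24, 0.08, 0.10, 0.10, 0.05]  # same block, re-analysed
block_b     = [0.10, 0.30, 0.05, 0.22, 0.08, 0.20, 0.05]  # unrelated seizure

intra = pearson(block_a, block_a_rep)  # near 1: same source
inter = pearson(block_a, block_b)      # markedly lower
```

Comparing the observed correlation against the intra- and inter-variability distributions is what drives the linkage decision in the report.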

Relevance: 30.00%

Abstract:

Sampling issues represent a topic of ongoing interest to the forensic science community, essentially because of their crucial role in laboratory planning and working protocols. For this purpose, the forensic literature has described thorough (Bayesian) probabilistic sampling approaches, which are now widely implemented in practice. They allow one, for instance, to obtain probability statements that parameters of interest (e.g., the proportion of a seizure of items that present particular features, such as an illegal substance) satisfy particular criteria (e.g., a threshold or an otherwise limiting value). Currently, many approaches allow one to derive probability statements relating to a population proportion, but questions of how a forensic decision maker - typically a client of a forensic examination or a scientist acting on behalf of a client - ought actually to decide about a proportion or a sample size have remained largely unexplored to date. The research presented here draws on methodology from decision theory that may help to cope usefully with the wide range of sampling issues typically encountered in forensic science applications. The procedures explored in this paper enable scientists to address a variety of concepts, such as the (net) value of sample information, the (expected) value of sample information and the (expected) decision loss. All of these aspects relate directly to questions that are regularly encountered in casework. Besides probability theory and Bayesian inference, the proposed approach requires some additional elements from decision theory that may increase the effort needed for practical implementation. In view of this challenge, the present paper emphasises the merits of graphical modelling concepts, such as decision trees and Bayesian decision networks, which can support forensic scientists in applying the methodology in practice. How this may be achieved is illustrated with several examples.
The graphical devices invoked here also serve the purpose of supporting the discussion of the similarities, differences and complementary aspects of existing Bayesian probabilistic sampling criteria and the decision-theoretic approach proposed throughout this paper.
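For the probability statements described above, a Beta-binomial model is the standard starting point: with a Beta(a, b) prior, observing k positive items among n sampled yields a Beta(a + k, b + n - k) posterior for the seizure proportion. A Monte Carlo sketch using only the standard library (the prior and the counts are illustrative):

```python
import random

def prob_exceeds(k, n, theta0, a=1.0, b=1.0, draws=20000, seed=1):
    """Posterior P(theta > theta0) under a Beta(a, b) prior,
    estimated by Monte Carlo from the Beta(a + k, b + n - k) posterior."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(a + k, b + n - k) > theta0 for _ in range(draws))
    return hits / draws

# 9 of 10 inspected items test positive: is over half the seizure illicit?
p = prob_exceeds(9, 10, 0.5)
```

The decision-theoretic extension discussed in the paper would compare such posterior probabilities against losses attached to each possible decision, rather than against a fixed probability threshold.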

Relevance: 30.00%

Abstract:

Purpose - The purpose of this paper is to document the outcome of a three-year global supply chain improvement initiative at a multi-national producer of branded sporting goods that is transforming from a holding structure into an integrated company. The case company comprises seven internationally well-known sport brands, which form a diverse set of independent sub-cases to which the same supply chain metrics and change project approach were applied to improve supply chain performance. Design/methodology/approach - Using an in-depth case study and statistical analysis, the paper analyzes across the brands how supply chain complexity (SKU count), supply chain type (make or buy) and seasonality affect completeness and punctuality of deliveries, and inventory, as the change project progresses. Findings - Results show that reducing supply chain complexity improves delivery performance but has no impact on inventory. Supply chain type has no impact on service level, but brands with in-house production are better at improving inventory than those with outsourced production. Non-seasonal business units improve service faster than seasonal ones, yet there is no impact on inventory. Research limitations/implications - The longitudinal data used for the analysis are biased by the general business trend, yet the rich data from different cases and three years of data collection enable generalization to a certain level. Practical implications - The in-depth case study serves as an example for other companies of how to initiate a supply chain improvement project across business units with tangible results. Originality/value - The seven sub-cases, with their different characteristics, to which the same improvement initiative was applied provide a unique basis for longitudinal analysis of supply chain complexity, type and seasonality.

Relevance: 30.00%

Abstract:

Network analysis naturally relies on graph theory and, more particularly, on the use of node and edge metrics to identify the salient properties in graphs. When building visual maps of networks, these metrics are turned into useful visual cues or are used interactively to filter out parts of a graph while querying it, for instance. Over the years, analysts from different application domains have designed metrics to serve specific needs. Network science is an inherently cross-disciplinary field, which leads to the publication of metrics with similar goals; different names and descriptions of their analytics often mask the similarity between two metrics that originated in different fields. Here, we study a set of graph metrics and compare their relative values and behaviors in an effort to survey their potential contributions to the spatial analysis of networks.
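As a minimal illustration of comparing metrics' relative behaviour, degree and closeness centrality can be computed side by side on a toy graph (the adjacency list is invented):

```python
from collections import deque

def degree(adj):
    return {v: len(ns) for v, ns in adj.items()}

def closeness(adj):
    """Closeness = (n - 1) / sum of BFS distances to all reachable nodes."""
    out = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        out[s] = (len(dist) - 1) / sum(dist.values()) if len(dist) > 1 else 0.0
    return out

# Toy star graph: the hub scores highest on both metrics
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
deg, clo = degree(star), closeness(star)
```

On less symmetric graphs the two rankings diverge, which is exactly the kind of behavioural difference the survey above sets out to characterise.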