14 results for quantitative methods
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Society's increasing aversion to technological risks requires the development of inherently safer and environmentally friendlier processes, while assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Though the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements of alternative design options, to allow trade-offs among contradictory aspects and to prevent “risk shift”. In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous for humans and the environment are used or stored. The tools were mainly devoted to application in the “conceptual” and “basic design” stages, when the project is still open to changes (thanks to the large number of degrees of freedom), which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities, throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools makes a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system was based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs were based on impact models (some of them complex), but are easy and swift to apply in practice. Their full evaluation is possible even from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based both on the calculation of the expected consequences of potential accidents and on the evaluation of the hazards related to equipment. The methodology overcomes several problems present in the previous methods proposed for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing “out of control” conditions. In the assessment of layout plans, “ad hoc” tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated by applying them to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, the plant and the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of the isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for the inherent safety assessment of materials.
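As a concrete illustration of how leading KPIs of this kind could be compared and aggregated, the following minimal sketch normalizes each indicator against a site-specific reference burden and combines the results with policy weights. All names, values and weights are purely illustrative assumptions, not the thesis's actual indicators.

```python
# A minimal sketch of aggregating KPIs into a sustainability profile,
# assuming each KPI is normalized against a site-specific reference
# burden and weighted according to a sustainability policy.
# KPI names, reference values and weights are purely illustrative.

kpis = {                      # raw impact scores of a design option
    "environmental": 42.0,
    "economic": 1.8e6,
    "societal": 0.7,
}
reference = {                 # site-specific impact burdens (same units)
    "environmental": 120.0,
    "economic": 5.0e6,
    "societal": 2.0,
}
weights = {                   # policy weights, summing to 1
    "environmental": 0.4,
    "economic": 0.35,
    "societal": 0.25,
}

# Normalize each KPI to its reference burden, then aggregate.
normalized = {k: kpis[k] / reference[k] for k in kpis}
aggregate = sum(weights[k] * normalized[k] for k in kpis)

print(normalized)                            # per-impact fractions of the site burden
print(f"aggregate index: {aggregate:.3f}")   # lower is better
```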
Abstract:
The construction and use of multimedia corpora has long been advocated in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering experience aimed at applying a data-driven methodology to the study of the field of AVT, similarly to what has been done in the last few decades in the macro-field of Translation Studies. The research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna’s Department of Interdisciplinary Studies in Translation, Languages and Culture. Indeed, in order to quantify the strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but all the meaning-making resources deployed in the filmic text. Since one major benefit of Forlixt 1 is the combination of audiovisual and textual data, the corpus allows the user to access primary data for scientific investigation, rather than relying on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities was presented (dubbing, voice-over, interlingual and intralingual subtitling, audio description, etc.) in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories (i.e. Descriptive Translation Studies, Polysystem Theory) to the application of Corpus Linguistics’ methods and tools to the field of Translation Studies. In particular, we discussed how the use of corpora can help reduce the gap between qualitative and quantitative approaches. Subsequently, we reviewed the tools traditionally employed by Corpus Linguistics for the construction of traditional “written language” corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we reviewed existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation. The third chapter reviews Forlixt 1's main development steps, from a technical (IT design principles, data query functions) and methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we described the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective.
The fourth chapter lists the specific compilation criteria adopted, as well as statistics for the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrated on the discussion of the results obtained from the analysis of summary tables reporting the frequency of the categories applied to the French-Italian sub-corpus. The detailed observation of the distribution of the categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of dubbed Italian. This was possible by looking into some of the most problematic aspects, such as the rendering of sociolinguistic variation. The corpus also allowed us to consider hitherto neglected aspects, such as pragmatic, prosodic, kinetic, facial and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macro-translation trends were made for each type of sub-genre considered (cinematic and TV). On the grounds of this first quantitative investigation, the fifth chapter set out to examine the data further by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three qualitative and quantitative methods were designed, each concentrating on a particular translation dimension of the filmic text. The first was the cultural dimension, which focused specifically on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, exploring the occurrence of phrasal verbs in the Italian dubbed corpus and ascertaining how possible semiotic traits, such as gestures and facial expressions, influence the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality graphic and iconic elements were translated into Italian from an original corpus of both German and French films. After reviewing the main translation techniques at work, an exhaustive account of the possible causes of their non-translation was also provided. By way of conclusion, the discussion of the results obtained from the distribution of the annotation categories on the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline the possible advantages and drawbacks of adopting a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument that allows empirically sound contrastive studies to be carried out and usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.
Abstract:
The purpose of this research is to contribute to the literature on organizational demography and new product development by investigating how diverse individual career histories impact team performance. Moreover, we highlighted the importance of also considering the institutional context and the specific labour market arrangements in which a team is embedded, in order to correctly interpret the effect of career-related diversity measures on performance. The empirical setting of the study is the videogame industry, and the teams in charge of the development of new game titles. Videogame development teams are an ideal setting in which to investigate the influence of career histories on team performance, since the development of videogames is performed by multidisciplinary teams composed of specialists with a wide variety of technical and artistic backgrounds, whose work requires a significant amount of creative thinking. We investigate our research question both with quantitative methods and with a case study on the Japanese videogame industry, one of the most innovative in the sector. Our results show that career histories, in terms of occupational diversity, prior functional diversity and prior product diversity, usually have a positive influence on team performance. However, when the moderating effect of the institutional setting is taken into account, career diversity has different or even opposite effects on team performance, according to the specific national context in which a team operates.
Abstract:
In many communities, supplying water to the people is a huge task, and whether this essential service can be provided by the private sector while respecting the right to water is a debated issue. This dissertation investigates the mechanisms through which a 'perceived rights violation' – a specific form of perceived injustice deriving from the violation of absolute moral principles – can promote collective action. Indeed, the literature on morality and collective action suggests that even though many people apparently uphold high moral principles (such as human rights), only a minority decides to act in order to defend them. Taking advantage of the political situation in Italy and the recent mobilization for "public water", we hypothesized that, because of its "sacred value", the perceived violation of the right to water facilitates identification with the social movement and activism. Through five studies adopting qualitative and quantitative methods, we confirmed our hypotheses, demonstrating that the perceived violation of the right to water can sustain activism and can influence voting intentions in the referendum on 'public water'. This path to collective action coexists with other 'classical' predictors of collective action, such as instrumental factors (personal advantages, efficacy beliefs) and anger. The perceived rights violation can derive both from personal values (i.e. universalism) and from external factors (i.e. a mobilization campaign). Furthermore, we demonstrated that it is possible to enhance the perceived violation of the right to water, and anger, through a specifically designed communication campaign. The final chapter summarizes the main findings and discusses the results, suggesting some innovative lines of research for the collective action literature.
Abstract:
Neurorehabilitation is a process through which individuals affected by neurological diseases aim to achieve full recovery or to realize their optimal physical, mental and social well-being. The essential elements of effective rehabilitation are: clinical assessment by a multidisciplinary team, a targeted rehabilitation programme, and the evaluation of outcomes through scientifically and clinically appropriate measures. The main objective of this thesis was to develop quantitative methods and tools for the treatment and motor assessment of neurological patients. Conventional rehabilitation treatments require neurological patients to perform repetitive exercises, reducing their motivation. Virtual reality and feedback can engage patients in the treatment, allowing repeatability and standardization of protocols. A tool based on augmented feedback for trunk control was developed and evaluated. Moreover, virtual reality makes it possible to tailor the treatment to the patient's needs. A virtual application for gait rehabilitation was developed and tested during training with multiple sclerosis patients, assessing its feasibility and acceptance and demonstrating the effectiveness of the treatment. The quantitative assessment of patients' motor abilities is performed using motion capture systems. Since their use in clinical practice is limited, a methodology based on inertial sensors was proposed to assess arm swing in Parkinsonian subjects. These sensors are small, accurate and flexible, but they accumulate errors during long measurements. This problem was addressed, and the results suggest that, if the sensor is placed on the foot and the accelerations are integrated starting from the mid-stance phase, the error and its consequences for the determination of spatial parameters are limited. Finally, a validation of the Kinect sensor for gait tracking in a virtual environment was presented. Preliminary results make it possible to define the sensor's field of use in rehabilitation.
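The drift-limited integration strategy described above (integrating foot accelerations from mid-stance, when foot velocity can be assumed to be zero) can be sketched as follows. This is a minimal illustration under the stated zero-velocity assumption, with a simple linear drift correction; it is not the thesis's actual algorithm, and the example signal is synthetic.

```python
import numpy as np

def stride_length(acc, fs):
    """Estimate stride length by double-integrating the forward
    acceleration of a foot-mounted sensor between two consecutive
    mid-stance instants, where foot velocity is assumed to be zero.

    acc : 1-D array of gravity-compensated forward acceleration (m/s^2),
          spanning exactly one stride (mid-stance to mid-stance).
    fs  : sampling frequency (Hz).
    """
    dt = 1.0 / fs
    # First integration: acceleration -> velocity (zero at mid-stance).
    vel = np.cumsum(acc) * dt
    # Drift correction: the residual velocity at the next mid-stance is
    # attributed to sensor drift and removed linearly over the stride.
    vel -= np.linspace(0.0, vel[-1], len(vel))
    # Second integration: velocity -> displacement; the final value is
    # the stride length.
    return np.cumsum(vel)[-1] * dt

# Example with a synthetic acceleration burst sampled at 100 Hz.
fs = 100
t = np.arange(0.0, 1.0, 1.0 / fs)
acc = np.sin(np.pi * t) - np.sin(np.pi * t).mean()  # zero-mean burst
print(f"estimated stride length: {stride_length(acc, fs):.2f} m")
```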
Abstract:
The clonal distribution of BRAFV600E in papillary thyroid carcinoma (PTC) has recently been debated. No information is currently available about precursor lesions of PTCs. My first aim was to establish whether the BRAFV600E mutation occurs as a subclonal event in PTCs. My second aim was to screen for BRAF mutations in the histologically benign tissue of cases with BRAFV600E or BRAFwt PTCs, in order to identify putative precursor lesions of PTCs. Highly sensitive semi-quantitative methods were used: Allele Specific LNA quantitative PCR (ASLNAqPCR) and 454 Next-Generation Sequencing (NGS). For the first aim, 155 consecutive formalin-fixed and paraffin-embedded (FFPE) specimens of PTC were analyzed. The percentage of mutated cells obtained was normalized to the estimated number of neoplastic cells. Three groups of tumors were identified: the first had a percentage of BRAF mutated neoplastic cells > 80%; the second showed a percentage of BRAF mutated neoplastic cells < 30%; the third had a distribution of BRAFV600E between 30% and 80%. The frequent presence of sub-populations of BRAFV600E mutated neoplastic cells suggests that BRAFV600E may be acquired early during tumorigenesis; therefore, BRAFV600E can be heterogeneously distributed in PTC. For the second aim, two groups were studied: one consisted of 20 cases with BRAFV600E mutated PTCs, the other of 9 BRAFwt PTCs. Seventy-five and 23 histologically benign FFPE thyroid specimens were analyzed from the BRAFV600E mutated and BRAFwt PTC groups, respectively. The screening for BRAF mutations identified BRAFV600E in “atypical” cell foci from both groups of patients. “Unusual” BRAF substitutions were observed in histologically benign thyroid tissue associated with BRAFV600E PTCs. These mutations were very uncommon in the group with BRAFwt PTCs and in BRAFV600E PTCs. Therefore, lesions carrying BRAF mutations may represent “abortive” attempts at cancer development: only BRAFV600E boosts neoplastic transformation to PTC. BRAFV600E mutated “atypical foci” may represent precursor lesions of BRAFV600E mutated PTCs.
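As an illustration of the normalization and grouping described above, the following sketch assumes a diploid locus with a heterozygous mutation (each mutated cell carrying one mutant allele out of two); the function names, thresholds applied as stated in the abstract, and example numbers are hypothetical.

```python
def mutated_fraction(mutant_allele_freq, neoplastic_cell_fraction,
                     heterozygous=True):
    """Estimate the percentage of neoplastic cells carrying a mutation
    from the mutant allele frequency measured in the whole specimen.

    Assumes a diploid locus; with a heterozygous mutation each mutated
    cell contributes one mutant allele out of two.
    """
    cell_freq = mutant_allele_freq * (2 if heterozygous else 1)
    return 100.0 * cell_freq / neoplastic_cell_fraction

def clonality_group(pct):
    """Assign a tumor to one of the three groups described above."""
    if pct > 80:
        return "group 1: > 80% mutated neoplastic cells"
    if pct < 30:
        return "group 2: < 30% mutated neoplastic cells (subclonal)"
    return "group 3: 30-80% mutated neoplastic cells"

# Example: 20% mutant alleles in a specimen that is 60% neoplastic.
pct = mutated_fraction(0.20, 0.60)   # -> ~66.7%
print(clonality_group(pct))
```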
Abstract:
Physico-chemical characterization, structure-pharmacokinetic and metabolism studies of new semi-synthetic analogues of natural bile acids (BAs), as drug candidates, have been performed. Recent studies discovered a role of BAs as agonists of the FXR and TGR5 receptors, thus opening new therapeutic targets for the treatment of liver diseases and metabolic disorders. Up to twenty new semi-synthetic analogues have been synthesized and studied in order to find promising novel drug candidates. In order to define the BAs' structure-activity relationship, their main physico-chemical properties (solubility, detergency, lipophilicity and affinity with serum albumin) have been measured with validated analytical methodologies. Their metabolism and biodistribution have been studied in the “bile fistula rat” model, where each BA is acutely administered through duodenal and femoral infusion and bile is collected at different time intervals, allowing the relationship between structure and intestinal absorption, hepatic uptake, metabolism and systemic spill-over to be defined. One of the studied analogues, 6α-ethyl-3α,7α-dihydroxy-5β-cholanic acid, an analogue of CDCA (INT 747, obeticholic acid, OCA), currently under approval for the treatment of cholestatic liver diseases, requires additional studies to ensure its safety and lack of toxicity when administered to patients with severe liver impairment. For this purpose, a rat model of hepatic decompensation (cirrhosis) induced by CCl4 inhalation has been developed and used to define how OCA biodistribution differs from that in control animals, in order to establish whether peripheral tissues might also be exposed as a result of toxic plasma levels of OCA; the biodistribution of endogenous BAs was also evaluated. An accurate and sensitive HPLC-ES-MS/MS method has been developed to identify and quantify all BAs in biological matrices (bile, plasma, urine, liver, kidney, intestinal content and tissue), for which sample pretreatment has been optimized.
Abstract:
Great strides have been made in the last few years in the pharmacological treatment of neuropsychiatric disorders, with the introduction into therapy of several new and more efficient agents, which have improved the quality of life of many patients. Despite these advances, a large percentage of patients are still considered “non-responders” to therapy, drawing no benefit from it. Moreover, these patients have a peculiar therapeutic profile, due to the very frequent use of polypharmacy in the attempt to obtain satisfactory remission of the multiple aspects of psychiatric syndromes. Therapy is heavily individualised, and switching from one therapeutic agent to another is quite frequent. One of the main problems of this situation is the possibility of unwanted or unexpected pharmacological interactions, which can occur both during polypharmacy and during switching. Simultaneous administration of psychiatric drugs can easily lead to interactions if one of the administered compounds influences the metabolism of the others. Impaired CYP450 function due to inhibition of the enzyme is frequent. Other metabolic pathways, such as glucuronidation, can also be influenced. Therapeutic Drug Monitoring (TDM) of psychotropic drugs is an important tool for treatment personalisation and optimisation. It deals with the determination of the plasma levels of parent drugs and their metabolites, in order to monitor them over time and to compare these findings with clinical data. This makes it possible to establish chemical-clinical correlations (such as those between administered dose and therapeutic and side effects), which are essential for obtaining the maximum therapeutic efficacy while minimising side and toxic effects. The importance of developing sensitive and selective analytical methods for the determination of the administered drugs and their main metabolites is therefore evident, in order to obtain reliable data that can correctly support clinical decisions. During the three years of the Ph.D. program, several analytical methods based on HPLC were developed, validated and successfully applied to the TDM of psychiatric patients undergoing treatment with drugs belonging to the following classes: antipsychotics, antidepressants and anxiolytics-hypnotics. The biological matrices processed were blood, plasma, serum, saliva, urine, hair and rat brain. Among antipsychotics, both atypical and classical agents were considered, such as haloperidol, chlorpromazine, clotiapine, loxapine, risperidone (and 9-hydroxyrisperidone), clozapine (as well as N-desmethylclozapine and clozapine N-oxide) and quetiapine. While the need for accurate TDM of schizophrenic patients is increasingly recognized by psychiatrists, only in the last few years has the same attention been paid to the TDM of depressed patients. This is leading to the acknowledgment that depression pharmacotherapy can greatly benefit from the accurate application of TDM. For this reason, the research activity also focused on first- and second-generation antidepressant agents, such as tricyclic antidepressants, trazodone and m-chlorophenylpiperazine (m-CPP), paroxetine and its three main metabolites, venlafaxine and its active metabolite, and the most recent antidepressant introduced onto the market, duloxetine. Among anxiolytics-hypnotics, benzodiazepines are very often involved in the pharmacotherapy of depression for the relief of anxious components; for this reason, it is useful to monitor these drugs, especially in cases of polypharmacy.
The results obtained during these three years of the Ph.D. program are reliable, and the developed HPLC methods are suitable for the qualitative and quantitative determination of CNS drugs in biological fluids for TDM purposes.
Abstract:
Here I focus on three main topics that best encompass the projects I worked on during my three-year PhD period, spent in different research laboratories addressing computationally and practically important problems, all related to modern molecular genomics. The first topic is the use of a livestock species (the pig) as a model of obesity, a complex human dysfunction. My efforts here concern the detection and annotation of Single Nucleotide Polymorphisms (SNPs). I developed a pipeline for mining human and porcine sequences: starting from a set of human genes related to obesity, the platform returns a list of annotated porcine SNPs extracted from a new set of potential obesity genes. 565 of these SNPs were analyzed on an Illumina chip to test their involvement in obesity in a population of more than 500 pigs. The results are discussed. All the computational analyses and experiments were done in collaboration with the Biocomputing group and Dr. Luca Fontanesi, respectively, under the direction of Prof. Rita Casadio at the University of Bologna, Italy. The second topic concerns the development of a methodology, based on Factor Analysis, to simultaneously mine information from different levels of biological organization. With specific test cases, we developed models of the complexity of the mRNA-miRNA molecular interaction in brain tumors, measured indirectly by microarray and quantitative PCR. This work was done under the supervision of Prof. Christine Nardini at the “CAS-MPG Partner Institute for Computational Biology” in Shanghai, China (jointly founded by the Max Planck Society and the Chinese Academy of Sciences). The third topic concerns the development of a new method to overcome the variety of PCR technologies routinely adopted to characterize the unknown DNA regions flanking a viral integration locus of the human genome after clinical gene therapy. This new method is entirely based on next-generation sequencing, and it reduces the time required to detect insertion sites while decreasing the complexity of the procedure. This work was done in collaboration with the group of Dr. Manfred Schmidt at the Nationales Centrum für Tumorerkrankungen (Heidelberg, Germany), supervised by Dr. Annette Deichmann and Dr. Ali Nowrouzi. Furthermore, I add as an appendix the description of an R package for gene network reconstruction that I helped develop for scientific usage (http://www.bioconductor.org/help/bioc-views/release/bioc/html/BUS.html).
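A minimal sketch of the idea behind the second topic, assuming the common workflow of applying Factor Analysis to matched measurements (here via scikit-learn, which is an assumption, not the thesis's actual toolchain): features from the two levels of biological organization are concatenated so that shared latent factors can load on both mRNAs and miRNAs. The data and dimensions are synthetic.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Toy data standing in for matched measurements on the same samples:
# rows = tumor samples, columns = features (mRNAs or miRNAs).
n_samples = 30
mrna  = rng.normal(size=(n_samples, 50))   # e.g. microarray mRNA levels
mirna = rng.normal(size=(n_samples, 20))   # e.g. qPCR miRNA levels

# Concatenate the two levels of biological organization so that shared
# latent factors can load on both mRNA and miRNA features at once.
joint = np.hstack([mrna, mirna])

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(joint)           # per-sample factor scores

# Loadings indicate which mRNAs and miRNAs co-vary through the same
# factor, a proxy for putative mRNA-miRNA interactions.
loadings = fa.components_                  # shape: (5, 70)
print(scores.shape, loadings.shape)
```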
Abstract:
The present work is concerned with the forgotten elements of the Lebanese economy: agriculture and rural development. It investigates the main problems arising from these forgotten components, in particular the structure of the agricultural sector, production technology, income distribution, poverty, food security, territorial development and local livelihood strategies. It does so using quantitative Computable General Equilibrium (CGE) modeling and a qualitative phenomenological case study analysis, both embedded in a critical review of the historical development of the political economy of Lebanon and a structural analysis of its economy. The research shows that under-development in Lebanese rural areas is not due to a lack of resources, but is rather the consequence of political choices. It further suggests that agriculture – in both its mainstream conventional and its innovative, locally initiated forms of production – still represents important potential for inducing economic growth and development. To realize this potential, Lebanon has to take full advantage of its human and territorial capital by developing a rural development strategy based on two parallel sets of actions: one directed toward the support of local rural development initiatives, and the other directed toward intensive forms of production. In addition to its economic returns, such a strategy would promote social and political stability.
Abstract:
The consumer demand for natural, minimally processed, fresh-like and functional foods has led to an increasing interest in emerging technologies. The aim of this PhD project was to study three innovative food processing technologies currently used in the food sector. Ultrasound-assisted freezing, vacuum impregnation and pulsed electric fields were investigated through laboratory-scale systems and semi-industrial pilot plants. Furthermore, analytical and sensory techniques were developed to evaluate the quality of foods and vegetable matrices obtained by traditional and emerging processes. Ultrasound was found to be a valuable technique for improving the freezing process of potatoes, advancing the onset of nucleation, mainly when applied during the supercooling phase. A study of the effects of pulsed electric fields on the phenol and enzymatic profile of melon juice was carried out, and the statistical treatment of the data was performed through a response surface method. Next, flavour enrichment of apple sticks was carried out applying different techniques, such as atmospheric, vacuum and ultrasound technologies and their combinations. The second section of the thesis deals with the development of analytical methods for the discrimination and quantification of phenol compounds in vegetable matrices, such as chestnut bark extracts and olive mill wastewater. The management of waste disposal in the olive mill sector was addressed with the aim of reducing the amount of waste while recovering valuable by-products to be used in different industrial sectors. Finally, the sensory analysis of boiled potatoes was carried out through the development of a quantitative descriptive procedure for the study of Italian and Mexican potato varieties. An update on flavour development in fresh and cooked potatoes was compiled, and a sensory glossary used in the European project Ecropolis, including general and specific definitions related to organic products, was drafted.
Abstract:
Myocardial perfusion quantification by means of Contrast-Enhanced Cardiac Magnetic Resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique, again followed by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement. Comparisons of the perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
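The final quantification step (mean segment intensity over time, followed by index extraction) can be sketched as follows. The maximum-upslope index and the synthetic data are illustrative assumptions, not the thesis's exact definitions.

```python
import numpy as np

def enhancement_curves(frames, segment_labels):
    """Compute contrast enhancement curves: the mean pixel intensity
    of each myocardial segment over time.

    frames         : array (n_frames, H, W) of registered MR images.
    segment_labels : array (H, W) of int labels, 0 = background,
                     1..n = standard myocardial segments.
    """
    n_segments = segment_labels.max()
    curves = np.empty((n_segments, frames.shape[0]))
    for s in range(1, n_segments + 1):
        mask = segment_labels == s
        curves[s - 1] = frames[:, mask].mean(axis=1)
    return curves

def upslope_index(curve, dt):
    """A simple perfusion index: maximum upslope of the curve."""
    return np.diff(curve).max() / dt

# Example with synthetic data: 40 frames of a 64x64 sequence.
rng = np.random.default_rng(1)
frames = rng.random((40, 64, 64))
labels = np.zeros((64, 64), dtype=int)
labels[20:40, 20:40] = 1                 # one toy "segment"
curves = enhancement_curves(frames, labels)
print(upslope_index(curves[0], dt=1.0))
```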
Abstract:
Foodborne diseases impact human health and economies worldwide in terms of health care costs and productivity losses. Prevention is necessary, and methods to detect, isolate and quantify foodborne pathogens play a fundamental role, evolving continuously to keep pace with microorganisms and with the evolution of food production. Official methods are mainly based on the growth of microorganisms in different media and their isolation on selective agars, followed by the confirmation of presumptive colonies through biochemical and serological tests. A complete identification requires from 7 to 10 days. Over the last decades, new molecular techniques based on antibodies and nucleic acids have allowed more accurate typing and faster detection and quantification. The present thesis aims to apply molecular techniques to improve the performance of official methods for two pathogens: Shiga-like Toxin-producing Escherichia coli (STEC) and Listeria monocytogenes. In 2011, a new STEC strain belonging to serogroup O104 caused a large outbreak; a method to detect and isolate STEC O104 is therefore in demand. The first objective of this work is the detection, isolation and identification of STEC O104 in artificially contaminated sprouts. Multiplex PCR assays and anti-O104 antibodies incorporated in reagents for immunomagnetic separation and latex agglutination were employed. Contamination levels of less than 1 CFU/g were detected. The multiplex PCR assays permitted rapid screening of enriched food samples and identification of isolated colonies. Immunomagnetic separation and latex agglutination allowed high sensitivity and rapid identification of the O104 antigen, respectively. The development of a rapid method to detect and quantify Listeria monocytogenes, a high-risk pathogen, is the second objective. Detection of 1 CFU/ml and quantification of 10–1,000 CFU/ml in raw milk were achieved in about 3 h by means of a sample pretreatment step and quantitative PCR. The growth of L. monocytogenes in raw milk was also evaluated.
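Quantification by quantitative PCR typically relies on a standard curve relating the quantification cycle (Cq) to known concentrations. The following sketch shows that generic calculation with invented calibration numbers; it is not the thesis's actual data or protocol.

```python
import numpy as np

# Hypothetical calibration data: Cq values measured for serial dilutions
# of a L. monocytogenes culture of known concentration (CFU/ml).
log10_conc = np.array([1.0, 2.0, 3.0])        # 10, 100, 1000 CFU/ml
cq         = np.array([33.1, 29.8, 26.4])     # illustrative values

# Fit the standard curve: Cq = slope * log10(N0) + intercept.
slope, intercept = np.polyfit(log10_conc, cq, 1)

# Amplification efficiency from the slope (100% when slope = -3.32).
efficiency = 10 ** (-1.0 / slope) - 1.0

def quantify(cq_unknown):
    """Estimate the concentration (CFU/ml) of an unknown sample
    from its measured Cq via the fitted standard curve."""
    return 10 ** ((cq_unknown - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"unknown sample: {quantify(28.0):.0f} CFU/ml")
```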
Abstract:
So-called cascading events, which lead to high-impact low-frequency scenarios, are raising concern worldwide. A chain of events results in a major industrial accident with dreadful (and often unpredicted) consequences. Cascading events can be the result of the realization of an external threat, such as a terrorist attack or a natural disaster, or of the “domino effect”. During domino events, the escalation of a primary accident is driven by the propagation of the primary event to nearby units, causing an overall increase in accident severity and in the risk associated with an industrial installation. Natural disasters such as intense flooding, hurricanes, earthquakes and lightning have also been found capable of enhancing the risk of an industrial area, triggering losses of containment of hazardous materials and major accidents. The scientific community usually refers to such accidents as “NaTechs”: natural events triggering industrial accidents. In this document, the state of the art of available approaches to the modelling, assessment, prevention and management of domino and NaTech events is described. On the other hand, the relevant work carried out in past studies still needs to be consolidated and completed in order to be applicable in a real industrial framework. New methodologies, developed during my research activity and aimed at the quantitative assessment of domino and NaTech accidents, are presented. The tools and methods provided in this study aim to assist progress toward a consolidated and universal methodology for the assessment and prevention of cascading events, contributing to enhancing the safety and sustainability of the chemical and process industry.