957 results for Diagnostic Method For Fluid Dynamics Experiment
Abstract:
We study the radial expansion of cylindrical tubes in a hot quark-gluon plasma (QGP). These tubes are treated as perturbations in the energy density of the system formed in heavy-ion collisions at RHIC and the LHC. Starting from the equations of relativistic hydrodynamics in two spatial dimensions with cylindrical symmetry, we expand these equations in a small parameter, preserving the nonlinearity of the hydrodynamic formalism. We consider both ideal and viscous fluids, the latter described by a relativistic Navier-Stokes equation, and we use the equation of state of the MIT bag model. In the case of ideal fluids we obtain a breaking-wave equation for the energy density fluctuation, which is then solved numerically. We also show that, under certain assumptions, perturbations in a relativistic viscous fluid are governed by the Burgers equation. We estimate the typical expansion time of the tubes.
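For reference, the Burgers equation named in this abstract has the standard one-dimensional form

\[ \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}, \]

whose inviscid limit \( \nu \to 0 \) is the breaking-wave equation \( \partial_t u + u\,\partial_x u = 0 \) obtained in the ideal-fluid case; the identification of \(u\) with the energy density fluctuation and of \(\nu\) with an effective viscosity follows the abstract, not a derivation reproduced here.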
Abstract:
Models of the filtration phenomenon describe the mass balance in bed filtration in terms of particle removal mechanisms and allow the determination of global particle removal efficiencies. These models are defined in terms of the geometry and characteristic elements of the granular collectors, the particles and the fluid, as well as the balance of forces acting in the particle-collector system. This work analyzes particle collection efficiency by comparing downflow and upflow direct filtration, taking into account the contribution of the gravitational (settling) removal mechanism, with a view to future proposals of initial collection efficiency models for upflow filtration. A qualitative analysis of the proposed collection efficiency models for particle removal in direct downflow and upflow filtration is also made using a Computational Fluid Dynamics (CFD) tool. This analysis showed a strong influence of the gravitational factor on the initial collection efficiency (t = 0) of particles, and explains why its values are smaller for upflow filtration than for downflow filtration.
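For orientation, the classical Yao single-collector model (a standard reference form, not quoted from this work) writes the initial collection efficiency as a sum of diffusion, interception and gravitational sedimentation contributions,

\[ \eta_0 = \eta_D + \eta_I + \eta_G, \qquad \eta_G = \frac{v_s}{U} = \frac{(\rho_p - \rho_f)\, g\, d_p^2}{18\, \mu\, U}, \]

where \(v_s\) is the particle settling velocity and \(U\) the approach velocity; in upflow filtration the settling term acts against the flow, which is consistent with the smaller initial efficiencies reported above.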
Abstract:
The aim of this study was to evaluate the correlation between the morphology of the mandibular dental arch and that of the maxillary central incisor crown. Cast models from 51 Caucasian individuals, older than 15 years, with optimal occlusion, no previous orthodontic treatment, and featuring 4 of the 6 keys to normal occlusion of Andrews (the first being mandatory) were examined. The models were digitized with a 3D scanner, and images of the maxillary central incisor and the mandibular dental arch were obtained. These were printed, placed in an album below preset arch and crown outlines, and distributed to 12 dental surgeons, who were asked to choose the shape most in accordance with the model and crown presented. The Kappa test was used to evaluate agreement among evaluators, while the chi-square test was used to verify the association between dental arch and central incisor morphology, at a 5% significance level. The Kappa test showed moderate agreement among evaluators for both variables, and the chi-square test showed no significant association between tooth shape and mandibular dental arch morphology. It may be concluded that using arch morphology as a diagnostic method to determine the shape of the maxillary central incisor is not appropriate. Further research with a stricter scientific basis is necessary to assess tooth shape.
Abstract:
Computational fluid dynamics (CFD) is becoming an essential tool in the prediction of the hydrodynamic loads and flow characteristics of underwater vehicles for manoeuvring studies. However, when applied to the manoeuvrability of autonomous underwater vehicles (AUVs), most studies have focused on the determination of static coefficients without considering the effects of the deflection of the vehicle's control surfaces. This paper analyses the hydrodynamic loads generated on an AUV under the combined effects of control surface deflection and angle of attack, using CFD software based on the Reynolds-averaged Navier-Stokes formulation. CFD simulations are also conducted independently for the AUV bare hull and the control surface, both to identify their individual and interference loads and to validate the simulations against experimental results obtained in a towing tank. Several simulations of the bare-hull case were conducted to select the k-ω SST turbulence model with the viscosity approach that best predicts its hydrodynamic loads, and mesh sensitivity analyses were conducted for all simulations. For the flow around the control surfaces, the CFD results were analysed according to two different methodologies, standard and nonlinear; the nonlinear regression methodology predicts stall at the control surface better than the standard methodology does. The flow simulations have shown that the occurrence of control surface stall depends on a linear relationship between the angle of attack and the control surface deflection. This type of information can be used in designing the vehicle's autopilot system.
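The abstract does not specify the form of the nonlinear regression; a minimal sketch of one plausible approach is to fit the computed lift of the control surface with a saturating, stall-like curve of the combined incidence. The model form, function names and sample values below are illustrative assumptions only.

# Hypothetical sketch: fit a stall-like lift curve for a control surface.
# The saturating model and the synthetic "CFD" samples are assumptions,
# not the methodology of the paper.
import numpy as np
from scipy.optimize import curve_fit

def lift_model(alpha_eff_deg, cl_alpha, cl_max):
    # Linear slope cl_alpha at small incidence, smooth saturation at cl_max.
    a = np.radians(alpha_eff_deg)
    return cl_max * np.tanh(cl_alpha * a / cl_max)

# Effective incidence = angle of attack + control surface deflection.
alpha_eff = np.array([-20., -15., -10., -5., 0., 5., 10., 15., 20.])
cl_cfd = np.array([-0.95, -0.88, -0.72, -0.42, 0., 0.42, 0.72, 0.88, 0.95])

popt, _ = curve_fit(lift_model, alpha_eff, cl_cfd, p0=[5.0, 1.0])
print("fitted lift slope (1/rad): %.2f, CL_max: %.2f" % tuple(popt))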
Abstract:
This study evaluated the applicability of kDNA-PCR as a prospective routine diagnostic method for American tegumentary leishmaniasis (ATL) in patients from the Instituto de Infectologia Emílio Ribas (IIER), a reference center for infectious diseases in São Paulo - SP, Brazil. The kDNA-PCR method detected Leishmania DNA in 87.5% (112/128) of the clinically suspected ATL patients, while the traditional methods showed the following positivity rates: 62.8% (49/78) for the Montenegro skin test, 61.8% (47/76) for direct investigation, and 19.3% (22/114) for in vitro culture. The molecular method was able to confirm the disease in samples considered negative or inconclusive by traditional laboratory methods, contributing to the final clinical diagnosis and therapy of ATL in this hospital. We therefore strongly recommend the inclusion of kDNA-PCR amplification as an alternative diagnostic method for ATL, and we suggest a new diagnostic algorithm to support the diagnosis and treatment of ATL at IIER.
Abstract:
In territories where food production is scattered across many small or medium-sized, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process under way. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and to the community as well, so the increase in feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their wide diffusion in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts.

The first part focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (process temperatures, above all) among the considered fuels were related to biomass properties such as elementary composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these assumptions the energy and mass balances involved in the process algorithm were linked together as well. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, mainly based on its chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up through the analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are assumed to govern the main solid conversion steps involved in the gasification process.
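A minimal sketch of the water-gas shift equilibrium closure mentioned above, which links the mass and energy balances in the gasification zone: solve CO + H2O <-> CO2 + H2 for the equilibrium extent of reaction at a given temperature. The equilibrium-constant correlation K(T) = exp(4577.8/T - 4.33) is the commonly used Moe approximation, and the inlet composition below is invented for illustration.

# Sketch: water-gas shift equilibrium CO + H2O <-> CO2 + H2 at T [K].
# K(T) is the Moe approximation; inlet moles are illustrative only.
from math import exp
from scipy.optimize import brentq

def wgs_extent(T, n_co, n_h2o, n_co2=0.0, n_h2=0.0):
    # Equilibrium extent x [mol]; total moles cancel since the mole
    # change of the reaction is zero.
    K = exp(4577.8 / T - 4.33)
    def residual(x):
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)
    eps = 1e-9
    return brentq(residual, -min(n_co2, n_h2) + eps, min(n_co, n_h2o) - eps)

print("extent at 1000 K: %.3f mol" % wgs_extent(1000.0, n_co=1.0, n_h2o=0.8))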
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so that temperature is the main working parameter controlling this step. Solids drying is mainly governed by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any one of these factors according to the particular fuel being gasified and the conditions established inside the gasifier.

The same analysis also led to an estimate of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species. Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set according to the particular material being gasified at the time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow complete solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio.

The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream, and suitable gas cleaning systems have to be designed accordingly. In this work, an overall study on the assessment of gas cleaning lines is carried out.
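A hedged note on the particle-scale scalings invoked in the zoning analysis above (standard forms, with symbols assumed here rather than taken from the thesis): the heat-transfer-limited steps (heating, drying) have characteristic times growing with the square of the particle size, while the kinetically limited char gasification rate follows an Arrhenius law,

\[ t_{\mathrm{heat}} \sim \frac{d_p^2}{\alpha}, \qquad r_{\mathrm{char}} \propto A\, e^{-E_a/(R T)}, \]

with \( \alpha \) the thermal diffusivity of the particle, which is why particle size dominates drying and heating while temperature dominates char conversion.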
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their own consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ("paths"), following technical constraints mainly derived from the performance analysis of the cleaning units and from the likely synergic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.).

One of the main issues in the design of the paths was tar removal from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was identified as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly large air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but must nevertheless be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant (e.g. ceramic) materials for them. Apart from these two solutions, which appear unavoidable in gas cleaning line design, high-temperature gas cleaning lines proved infeasible for the two larger plant sizes. Indeed, since the use of temperature control devices was excluded from the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, among them total pressure drop, total energy losses, number of units and secondary materials consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
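As an illustrative sketch of how candidate cleaning lines can be assembled from alternative devices and compared on pressure drop and on dry vs. wet operation (all device names and numbers below are invented placeholders, not thesis data):

# Sketch: build candidate gas-cleaning "paths" by picking one device per
# contaminant, then compare total pressure drop and dry/wet character.
# Device names and pressure drops [mbar] are illustrative assumptions.
from itertools import product

alternatives = {
    "tars": [("catalytic_tar_cracker", 25.0)],
    "alkali_and_dust": [("ceramic_dry_scrubber_filter", 40.0)],
    "NH3": [("activated_carbon", 15.0), ("water_scrubber", 8.0)],
    "HCl": [("nahcolite_adsorber", 10.0), ("water_scrubber", 8.0)],
}

for choice in product(*alternatives.values()):
    names = [name for name, _ in choice]
    total_dp = sum(dp for _, dp in choice)
    kind = "dry" if "water_scrubber" not in names else "wet"
    print("%s line, %5.1f mbar: %s" % (kind, total_dp, " -> ".join(names)))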
Abstract:
To investigate phase separation in binary polymer blends, two dynamic extensions of self-consistent field theory (SCFT) are developed. The first method uses a temporal evolution of the densities and is called dynamic self-consistent field theory (DSCFT), while the second method exploits the temporal propagation of the effective external fields of SCFT and is referred to as External Potential Dynamics (EPD). For DSCFT, kinetic coefficients are used that reproduce either the local dynamics of point particles or the non-local dynamics of Rouse polymers. With a constant kinetic coefficient, the EPD method reproduces the dynamics of Rouse chains and requires less computing time than DSCFT. These methods are applied to various systems. First, spinodal decomposition in the bulk is investigated, with the focus on the difference between local and non-local dynamics. To check the validity of the results, Monte Carlo simulations are performed. In polymer blends confined by two walls that both prefer the same polymer species, the formation of enrichment layers at the walls is investigated. For thin polymer films between antisymmetric walls, i.e. where each wall prefers a different polymer species, the tension of an interface formed parallel to the walls is analysed and the phase transition from an initially homogeneous mixture to the localized phase is considered. Furthermore, the dynamics of capillary wave modes is investigated.
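For orientation (standard notation assumed here, not quoted from the thesis): with a local kinetic coefficient, the density-based dynamic SCFT reduces to a diffusive "model B" evolution in which the chemical potential is supplied by the SCFT free energy functional \(F[\phi]\),

\[ \frac{\partial \phi(\mathbf{r},t)}{\partial t} = \Lambda\, \nabla^2 \frac{\delta F[\phi]}{\delta \phi(\mathbf{r},t)}, \]

while the non-local (Rouse) variants replace the constant \( \Lambda \) by a non-local Onsager coefficient.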
Abstract:
Chlorinated Aliphatic Hydrocarbons (CAHs) are widespread wastewater and groundwater contaminants and represent a real danger to human health and the environment. This research concerns biodegradation technologies for the treatment of chlorinated hydrocarbons. In particular, this thesis focuses on chloroform cometabolism by a butane-grown aerobic pure culture (Rhodococcus aetherovorans BCP1) in continuous-flow biofilm reactors, which are used for in-situ and on-site treatments. The work was divided into two parts: in the first, an experimental study was conducted in two packed-bed reactors (PBRs) over a period of 370 days; in the second, a fluid dynamics and kinetic model was developed in order to simulate the experimental data from a previous study conducted in a 2-m continuous-flow sand-filled reactor. The goals of the first study were to obtain preliminary information on the feasibility of chloroform biodegradation by BCP1 under attached-cell conditions and to evaluate the applicability of pulsed injection of growth substrate and oxygen to biofilm reactors. Pulsed feeding is a tool to control clogging and to ensure a long bioreactive zone. The operational conditions implemented in the PBRs led to a 4-fold increase in the ratio of chloroform degraded to substrate consumed, compared with the phase of continuous substrate supply. The second study aimed at identifying guidelines for optimizing the oxygen/substrate supply schedule by developing a reliable model of chloroform cometabolism in porous media. The tested model provided a suitable interpretation of the experimental data as long as the ratio of CF degraded to butane consumed was ≤ 0.27 mg_chloroform/mg_butane. A long-term simulation of the best-performing schedule of pulsed oxygen/substrate supply indicated the attainment of a steady-state condition characterized by unsatisfactory bioremediation performance, evidencing the need for further optimization of the pulsed injection technique.
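A typical kinetic closure for this kind of cometabolism (a standard hedged form; the thesis model may differ in detail) treats chloroform (CF) degradation as Michaelis-Menten kinetics competitively inhibited by the growth substrate, butane (B):

\[ r_{\mathrm{CF}} = \frac{k_{\mathrm{CF}}\, X\, C_{\mathrm{CF}}}{K_{\mathrm{CF}}\left(1 + C_{\mathrm{B}}/K_{\mathrm{B}}\right) + C_{\mathrm{CF}}}, \]

where \(X\) is the biomass concentration; the competition term is what makes pulsed (alternating) substrate feeding attractive, since CF degradation proceeds fastest when butane is momentarily absent.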
Abstract:
This doctoral thesis studies blood flow using a finite element code (COMSOL Multiphysics). The artery contains either a Doppler catheter (concentric or off-centre with respect to the axis of symmetry) or stenoses of various shapes and extents. The arteries are rigid, elastic or hyperelastic cylindrical solids, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is laminar, steady or transient, and blood is treated as a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three-dimensional and two-dimensional domains, in the latter case analysing fluid-structure interaction. In the three-dimensional cases the arteries (fluid dynamics simulations) are infinitely rigid: once the pressure field is obtained, a structural analysis follows in order to determine the variations of the cross-section and the persistence of the disturbance on the flow. In the three-dimensional cases with a catheter, the blood flow rate is determined by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries a pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle in order to capture the recirculations. Two appendices are attached to the thesis, which use CFD codes to study heat transfer in microchannels and the evaporation of water droplets in unconfined systems. Fluid dynamics in microchannels is analogous to haemodynamics in capillaries, and the Eulerian-Lagrangian method (evaporation simulations) schematizes the mixed nature of blood. The part on microchannels analyses the transient following the application of a time-varying heat flux, varying the inlet velocity and the microchannel dimensions. The study of droplet evaporation is a 3D parametric analysis that examines the weight of each parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) to identify the one that most influences the phenomenon.
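For reference, the Casson constitutive relation used for blood (standard form; the Gonzales & Moraga modification is not reproduced here) reads, for shear stresses above the yield stress \( \tau_y \),

\[ \sqrt{\tau} = \sqrt{\tau_y} + \sqrt{\mu_c\, \dot{\gamma}}, \]

with \( \mu_c \) the Casson viscosity and \( \dot{\gamma} \) the shear rate; below \( \tau_y \) the fluid does not flow.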
Abstract:
Dermoscopy, a non-invasive, practical and low-cost technique, has established itself in recent years as a valid tool for the diagnosis and follow-up of pigmented and non-pigmented skin lesions. The present research focused on the dermoscopic study of acquired and congenital melanocytic nevi in palmo-plantar locations in paediatric patients: to this end, images of acral melanocytic nevi were analysed in patients seen at the Paediatric Dermatology outpatient clinic of the Policlinico Sant'Orsola-Malpighi from 2004 to 2011, in order to define the main dermoscopic patterns observed and the changes seen during videodermatoscopic follow-up. In our series of paediatric dermoscopic images we noted a relevant change (defined as any modification between the dermoscopic pattern observed at baseline and at subsequent follow-ups) in 88.6% of the patients; in particular, in a high percentage of patients (80%) a true fading of the melanocytic nevus occurred, and in one patient total regression was documented after a period of 36 months. Interestingly, the fading of the melanocytic lesion occurred mostly at sites subject to chronic mechanical stress, such as the sole of the foot and the digits (of hands and feet), leading us to hypothesize a role of chronic trauma in the changes that occur in melanocytic lesions of children at these sites.
Abstract:
The regulation of solid-propellant rocket motors (Solid Rocket Motors) has always been one of the main problems associated with this type of engine. The absence of any direct control over the combustion process of the solid grain means that the prediction of the internal ballistics has always been the main tool used both to define the optimal motor configuration at the design stage and to analyse any anomalies found experimentally. Local variations in the structure of the propellant, internal defects or heterogeneities in the chamber conditions can give rise to alterations of the local burning rate of the propellant, and consequently to experimental pressure and thrust profiles different from those predicted theoretically. Many of the codes currently in use offer a rather simplified approach to the problem, mostly resorting to semi-empirical correction factors (HUMP factors), without reconstructing the heterogeneities of propellant performance in a more realistic way. This thesis therefore proposes a new approach to the numerical prediction of the performance of solid-propellant systems, through the development of a new simulation code named ROBOOST (ROcket BOOst Simulation Tool). Drawing on concepts and techniques from Computer Graphics, this new code is able to reconstruct the surface regression process of the grain point by point, using a moving triangular mesh. Local variations of the burning rate can thus be easily reproduced, and the internal ballistics is computed by coupling a non-stationary 0D model with a quasi-stationary 1D model. The work was carried out in collaboration with the company Avio Space Division, and the new code has been successfully applied to the Zefiro 9 motor.
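For context, the local burning rate driving the surface regression is conventionally modelled with the Saint-Robert/Vieille power law, here written with a local correction factor of the HUMP type (the notation is assumed, not quoted from the thesis):

\[ r_b(x) = k(x)\, a\, p^{\,n}, \]

where \(p\) is the chamber pressure, \(a\) and \(n\) are propellant constants, and \(k(x)\) collects the local heterogeneities that a point-by-point surface reconstruction can resolve.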
Abstract:
Flow features inside centrifugal compressor stages are very complicated to simulate with numerical tools, due to the highly complex geometry and the varying gas conditions across the machine. For this reason, a big effort is currently being made to increase the fidelity of the numerical models during the design and validation phases. Computational Fluid Dynamics (CFD) plays an increasing role in the performance prediction of centrifugal compressor stages. Historically, CFD was considered reliable for performance prediction at a qualitative level, whereas tests were necessary to predict compressor performance on a quantitative basis. In fact, "standard" CFD, with only the flow path and blades included in the computational domain, is known to be weak in capturing efficiency levels and operating range accurately, owing to the under-estimation of losses and the lack of secondary flow modeling. This research project aims to close the accuracy gap between "standard" CFD and test data by including a high-fidelity reproduction of the gas domain and by using advanced numerical models and tools introduced into the author's OEM in-house CFD code. In other words, this thesis describes a methodology by which virtual tests can be conducted on single-stage and multistage centrifugal compressors in a similar fashion to a typical rig test, guaranteeing end users the ability to operate machines with a confidence level not achievable before. Furthermore, the new "high fidelity" approach made it possible to understand flow phenomena not fully captured before, increasing aerodynamicists' capability and confidence in designing high-efficiency, highly reliable centrifugal compressor stages.
Abstract:
The development of atherosclerosis is a complex process, characterized by the deposition of lipids on the vessel wall as well as by immunological and inflammatory processes. In addition to conventional risk factors such as age, sex, smoking, HDL cholesterol, diabetes mellitus and a positive family history, new biomarkers of the inflammatory reaction are being investigated for the assessment of atherosclerotic risk. The aim of this work was the development of a method for diagnosing atherosclerosis risk. A novel chip technology was employed to estimate the risk of a potentially impending atherosclerotic disease, exploiting the fact that molecular changes in genes can trigger particular disease patterns. A molecular-biological test was developed that allows the investigation of genetic variations from genomic DNA. To this end, a multiplex PCR was developed whose product can be analysed with the chip technology. Probes were immobilized on a microarray, with whose help gene-specific mutations can be detected, so that several genes could be tested simultaneously with little effort. The selection of the corresponding markers was based on a literature survey of randomized, controlled clinical trials. The microarray was successfully established for twelve variations in the eight genes prostaglandin synthase-1 (PTGS1), endothelial NO synthase (eNOS), factor V (F5), 5,10-methylenetetrahydrofolate reductase (MTHFR), cholesteryl ester transfer protein (CETP), apolipoprotein E (ApoE), prothrombin (F2) and lipoprotein lipase (LPL). The precision of the biochip was verified by real-time PCR and by sequencing. The innovative microarray enables simple, fast and cost-effective genotyping of important alleles. Many clinically relevant variations for atherosclerosis can now be checked in a single test. Future studies must show whether the method enables a prediction of the onset of the disease and a targeted therapy. This would be a first step towards preventive and personalized medicine for atherosclerosis.
Abstract:
This thesis presents a comparison between two CFD codes, Fluent and OpenFOAM, through simulations underlying a numerical study of the flow around a pantograph for a high-speed train. The ease of use of a commercially licensed package was thus appreciated, as was the difficulty of an open-source package such as OpenFOAM, which however has advantages in terms of adaptability to more specific cases. Two cases were studied: laminar heat transfer around a two-dimensional cylinder, and fully developed turbulent flow in a channel. All the numerical simulations reached convergence and were positively validated by comparison with experimental data. The first case involves a cylinder immersed in a flow at a temperature lower than that of the cylinder surface; for broader evidence, several runs were carried out at different values of the Prandtl number, and for each simulation the corresponding Nusselt number was obtained and then compared with experimental data for validation. Starting from the creation of the computational grid, the phenomenon was studied using a grid refined downstream of the cylinder, with a higher cell density close to the cylinder wall. In addition, by running the tests with both first-order and second-order numerical schemes, the better sensitivity of second-order schemes compared with first-order ones was confirmed. The second type of simulation consists of fully developed turbulent flow inside a channel; simulations were carried out with and without wall functions, and hence with different computational grids for the two types of simulation, already available for both codes. The results show a greater computational effort for the simulations without wall functions, and hence greater practicality for the simulations with wall functions. Moreover, the simulations of this second case were run with different turbulence models: in Fluent the k-ε and RSM models were used, while in OpenFOAM only the k-ε model was used, since the RSM model is not available. The validation of the results relies on comparison with the reference data obtained by Moser et al. through DNS simulations, highlighting the lower accuracy of the RANS equations.
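As a reference for the cylinder validation case (the abstract does not state which experimental correlation was used), the Churchill-Bernstein correlation gives the average Nusselt number of a cylinder in cross-flow over wide ranges of Re and Pr:

\[ \mathrm{Nu} = 0.3 + \frac{0.62\, \mathrm{Re}^{1/2}\, \mathrm{Pr}^{1/3}}{\left[1 + (0.4/\mathrm{Pr})^{2/3}\right]^{1/4}} \left[1 + \left(\frac{\mathrm{Re}}{282\,000}\right)^{5/8}\right]^{4/5}. \]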
Abstract:
In Western industrialized countries, breast cancer is the most frequent malignant tumour in women. Worldwide it accounts for about 21% of all cancers in women, and by now one in nine women is at risk of developing breast cancer during her lifetime. The age-standardized mortality rate currently stands at almost 27%.

Breast cancer has a relatively low growth rate. A diagnostic procedure capable of detecting and removing all breast carcinomas under 10 mm in diameter would practically eliminate death from breast cancer, since the 20-year survival rate for initial carcinomas of 5 to 10 mm in size is very high, at over 95%.

Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a size of 3 mm in diameter. The diagnostic methodology, however, is complex and error-prone, and requires a long training period and thus a great deal of experience on the part of the radiologist.

Computer-aided diagnosis software can increase the quality of such a complex diagnosis, or at least speed up the process. The aim of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system. To my knowledge, no such complete software exists to date.

The software executes a chain of image processing steps modelled on the radiologist's procedure, producing an independent diagnosis for each detected lesion. First, a 3D image registration eliminates motion artefacts as a pre-processing step, improving the image quality for the subsequent processing steps. Each contrast-enhancing object is detected by a rule-based segmentation with adaptive thresholds. Kinetic and morphological features are then computed to describe the contrast uptake and the shape, margin and texture properties of each object. Finally, based on the resulting feature vector, two trained neural networks classify each object as an additional finding or as a benign or malignant lesion.

The performance of the software was tested on image data from 101 female patients containing 141 histologically confirmed lesions. The prediction of the nature of these lesions yielded a sensitivity of 88% at a specificity of 72%. These values are similar to the predictions of expert radiologists reported in the literature. The predictions contained on average 2.5 additional malignant findings per patient, which turned out to be falsely classified artefacts.
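As a hedged, self-contained sketch of the processing chain described above (every function, threshold and the classifier here is an invented placeholder, not the thesis implementation):

# Illustrative skeleton of the diagnosis chain: registration ->
# segmentation -> feature extraction -> classification.
# All thresholds, features and the classifier are placeholder assumptions.
import numpy as np

def register(series):
    # Placeholder motion correction: match each volume's mean to the first.
    ref = series[0]
    return [vol - (vol.mean() - ref.mean()) for vol in series]

def segment(pre, post, rel_enhancement=0.5):
    # Rule-based detection: voxels enhancing above a relative threshold.
    uptake = (post - pre) / (pre + 1e-6)
    return uptake > rel_enhancement

def features(pre, post, mask):
    # Kinetic (mean/spread of uptake) plus a crude morphological (size) feature.
    uptake = (post - pre)[mask]
    return np.array([uptake.mean(), uptake.std(), float(mask.sum())])

def classify(feat, w=np.array([1.0, -0.5, 0.001]), bias=-1.0):
    # Stand-in for the two trained neural networks: one logistic unit.
    score = 1.0 / (1.0 + np.exp(-(feat @ w + bias)))
    return "malignant" if score > 0.5 else "benign"

# Synthetic two-timepoint "DCE-MRI" volumes with one enhancing blob.
rng = np.random.default_rng(0)
pre = rng.uniform(0.5, 1.0, size=(32, 32, 16))
post = pre.copy()
post[10:14, 10:14, 6:9] += 1.5   # simulated enhancing lesion
pre, post = register([pre, post])
mask = segment(pre, post)
if mask.any():
    print(classify(features(pre, post, mask)))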