170 results for RADIATION-DRIVEN WINDS
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to distinguish objects at the surface of our planet automatically yet accurately. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground and acquisition conditions can cause radiometric differences between the images, thereby hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of the pertinence to the new image of the initial training data, which actually belong to a different image.
Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are substantially improved. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically- and statistically-based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data spaces of two images. The projection function bridging the images allows the synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
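The relative-normalization idea (projecting the pixels of both images into a common data space) can be illustrated with a toy kernel PCA on the pooled samples. This is only a minimal sketch of the general principle: the RBF kernel choice and all function names are assumptions, not the specific framework analyzed in the Thesis.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then Gaussian (RBF) kernel values.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca_shared_space(X_src, X_tgt, n_components=2, gamma=1.0):
    """Hypothetical sketch: pool the spectra of a source and a target
    image, apply centered kernel PCA, and return the embeddings of each
    image separately so a classifier can be trained across images."""
    X = np.vstack([X_src, X_tgt])
    K = rbf_kernel(X, X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # double centering
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]        # leading eigenpairs
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    Z = Kc @ alphas                                 # embedded samples
    return Z[: len(X_src)], Z[len(X_src):]
```

In this shared space, radiometric offsets between the two acquisitions are partly absorbed by the common projection, which is the property the Thesis exploits for cross-image classification.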
Abstract:
The development of language proficiency extends late into childhood and includes not only producing or comprehending sounds, words and sentences, but also larger utterances that span sentence borders, such as dialogs. Dialogs consist of information units whose value constantly varies within a verbal exchange. While information is focused when introduced for the first time or corrected in order to alter the knowledge state of communication partners, the same information turns into shared knowledge during the further course of a verbal exchange. In many languages, speakers use prosodic means to highlight the informational value of information foci. Our study investigated the developmental pattern of event-related potentials (ERPs) in three age groups (12, 8 and 5 years) when perceiving two information focus types (news and corrections) embedded in short question-answer dialogs. The information foci contained in the answer sentences were either adequately marked by prosodic means or not. In so doing, we asked to what extent children depend on prosodic means to recognize information foci, or whether contextual means as provided by dialog questions are sufficient to guide focus processing. Only the 12-year-olds yielded prosody-independent ERPs when encountering new and corrective information foci, resembling previous findings in adults. Focus processing in the 8-year-olds relied upon prosodic highlighting, and differing ERP responses as a function of focus type were observed. In the 5-year-olds, merely prosody-driven ERP responses were apparent, but no distinctive ERP indicating information focus recognition. Our findings reveal substantial alterations in information focus perception throughout childhood that are likely related to long-lasting maturational changes during brain development.
Abstract:
Aim: When planning SIRT using 90Y microspheres, the partition model is used to refine the activity calculated by the body surface area (BSA) method, to potentially improve the safety and efficacy of treatment. For partition model dosimetry, accurate determination of the mean tumor-to-normal liver ratio (TNR) is critical, since it directly impacts absorbed dose estimates. This work aimed at developing and assessing a reliable methodology for the calculation of 99mTc-MAA SPECT/CT-derived TNR ratios based on phantom studies. Materials and methods: IQ NEMA (6 hot spheres) and Kyoto liver phantoms with different hot/background activity concentration ratios were imaged on a SPECT/CT (GE Infinia Hawkeye 4). For each reconstruction with the IQ phantom, TNR quantification was assessed in terms of relative recovery coefficients (RC), and image noise was evaluated in terms of the coefficient of variation (COV) in the filled background. RCs were compared using OSEM with Hann, Butterworth and Gaussian filters, as well as FBP reconstruction algorithms. Regarding OSEM, RCs were assessed by varying different parameters independently, such as the number of iterations (i) and subsets (s) and the cut-off frequency of the filter (fc). The influence of the attenuation and scatter corrections was also investigated. Furthermore, both 2D-ROI and 3D-VOI contouring were compared. For this purpose, dedicated Matlab routines were developed in-house for automatic 2D-ROI/3D-VOI determination to reduce intra-user and intra-slice variability. The best reconstruction parameters and RCs obtained with the IQ phantom were used to recover corrected TNR in the case of the Kyoto phantom for arbitrary hot-lesion size. In addition, we computed TNR volume histograms to better assess uptake heterogeneity. Results: The highest RCs were obtained with OSEM (i=2, s=10) coupled with the Butterworth filter (fc=0.8).
Indeed, we observed a global 20% RC improvement over other OSEM settings and a 50% increase as compared to the best FBP reconstruction. In any case, both attenuation and scatter corrections must be applied, thus improving RC while preserving good image noise (COV<10%). Both 2D-ROI and 3D-VOI analyses led to similar results. Nevertheless, we recommend using 3D-VOIs, since tumor uptake regions are intrinsically 3D. RC-corrected TNR values lie within 17% of the true value, substantially improving the evaluation of small-volume (<15 mL) regions. Conclusions: This study reports the multi-parameter optimization of 99mTc-MAA SPECT/CT image reconstruction in planning 90Y dosimetry for SIRT. In phantoms, accurate quantification of TNR was obtained using OSEM coupled with the Butterworth filter and RC correction.
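The RC-based correction of TNR can be sketched as follows. The recovery-coefficient table, the function names, and the numbers are hypothetical placeholders for illustration, not values from the phantom study.

```python
import numpy as np

# Hypothetical recovery-coefficient (RC) table, as would be measured on
# the IQ phantom spheres: RC as a function of lesion volume (mL).
RC_VOLUMES = np.array([1.2, 2.6, 5.6, 11.5, 16.8, 26.5])
RC_VALUES  = np.array([0.30, 0.45, 0.60, 0.72, 0.80, 0.85])

def recovery_coefficient(volume_ml):
    """Interpolate the RC for an arbitrary hot-lesion volume
    (clamped at the table ends by np.interp)."""
    return float(np.interp(volume_ml, RC_VOLUMES, RC_VALUES))

def corrected_tnr(mean_tumor_counts, mean_liver_counts, tumor_volume_ml):
    """Partial-volume-corrected tumor-to-normal-liver ratio (TNR):
    the measured tumor uptake is divided by the RC before forming the
    ratio, compensating the underestimation in small volumes."""
    rc = recovery_coefficient(tumor_volume_ml)
    return (mean_tumor_counts / rc) / mean_liver_counts
```

Because RC < 1 for small lesions, the corrected TNR is always larger than the raw ratio, which is why the correction matters most for small (<15 mL) regions.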
Abstract:
Many mucosal pathogens invade the host by initially infecting the organized mucosa-associated lymphoid tissue (o-MALT) such as Peyer's patches or nasal cavity-associated lymphoid tissue (NALT) before spreading systemically. There is no clear demonstration that serum antibodies can prevent infections in o-MALT. We have tested this possibility by using the mouse mammary tumor virus (MMTV) as a model system. In peripheral lymph nodes or in Peyer's patches or NALT, MMTV initially infects B lymphocytes, which, as a consequence, express superantigen (SAg) activity. The SAg molecule induces the local activation of a subset of T cells within 6 days after MMTV infection. We report that similar levels of anti-SAg antibody (immunoglobulin G) in serum were potent inhibitors of the SAg-induced T-cell response both in peripheral lymph nodes and in Peyer's patches or NALT. This result clearly demonstrates that systemic antibodies can gain access to Peyer's patches or NALT.
Abstract:
OBJECTIVE: This article analyses the influence of treatment duration on survival in patients with invasive carcinoma of the cervix treated by radical radiation therapy. METHOD: Three hundred and sixty patients with FIGO stage IB-IIIB carcinoma of the cervix were treated in Lausanne (Switzerland) with external radiation and brachytherapy as first-line therapy. Median therapy duration was 45 days. Patients were classified according to the duration of therapy, taking 60 days (the 75th percentile) as an arbitrary cut-off. RESULTS: The 5-year survival was 61% (S.E. = 3%) for the group with therapy duration of less than 60 days and 53% (S.E. = 7%) for the group of more than 60 days. In terms of the univariate hazard ratio (HR), the relative difference between the two groups corresponds to a 50% increase in deaths (HR = 1.53, 95% CI = 1.03-2.28) for the longer therapy duration group (P = 0.044). In a multivariate analysis, the magnitude of the estimated relative hazard for the longer therapies was confirmed, though significance was reduced (HR = 1.52, 95% CI = 0.94-2.45, P = 0.084). CONCLUSION: These findings suggest that short treatment duration is a factor associated with longer survival in carcinoma of the cervix.
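As a hedged illustration of how a univariate hazard ratio and its 95% confidence interval relate to event counts, here is a textbook constant-hazard (incidence-rate-ratio) approximation. The study itself used Cox-type models, and the counts in the example are invented.

```python
import math

def hazard_ratio_exponential(deaths_a, persontime_a, deaths_b, persontime_b):
    """Hazard ratio of group A vs. group B under an exponential
    (constant-hazard) model, with a 95% Wald confidence interval.

    Sketch only: this is the standard incidence-rate-ratio formula,
    not the actual analysis performed in the article."""
    hr = (deaths_a / persontime_a) / (deaths_b / persontime_b)
    se = math.sqrt(1.0 / deaths_a + 1.0 / deaths_b)   # SE of log(HR)
    lo = math.exp(math.log(hr) - 1.96 * se)
    hi = math.exp(math.log(hr) + 1.96 * se)
    return hr, (lo, hi)
```

An interval whose lower bound crosses 1.0, as in the multivariate result above, corresponds to a P-value above 0.05.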
Abstract:
Radiotherapy is successfully used to treat cancer. Emerging evidence, however, indicates that recurrences after radiotherapy are associated with increased local invasion, metastatic spreading and poor prognosis. Radiation-induced modifications of the tumor microenvironment have been proposed to contribute to increased aggressive tumor behavior, an effect also referred to as the tumor bed effect, but the putative mechanisms involved have remained largely elusive. We have recently demonstrated that irradiation of the prospective tumor stroma impairs de novo angiogenesis through sustained inhibition of proliferation, migration and sprouting of endothelial cells. Experimental tumors growing within a pre-irradiated field have reduced tumor angiogenesis and tumor growth, and increased hypoxia, necrosis, local invasion and distant metastasis. Mechanisms of progression involve adaptation of tumor cells to local hypoxic conditions as well as selection of cells with invasive and metastatic capacities. The matricellular protein CYR61 and integrin αVβ5 emerged as molecules that cooperate to mediate lung metastasis. Cilengitide, a small-molecule inhibitor of αV integrins, prevented lung metastasis formation. These results represent a conceptual advance in the understanding of the tumor bed effect and indicate that αV integrin inhibition might be a potential therapeutic approach for preventing metastasis in patients at risk for post-radiation recurrences.
Abstract:
BACKGROUND: To assess functional results, complications, and success of larynx preservation in patients with recurrent squamous cell carcinoma after radiotherapy. METHODS: From a database of 40 patients who underwent supracricoid partial laryngectomy (SCPL) with cricohyoidoepiglottopexy (CHEP) from June 2001 to April 2006, eight patients had previously been treated with radiotherapy for squamous cell carcinoma of the glottic region and were treated for recurrence at the site of the primary cancer. RESULTS: SCPL with CHEP was performed in six men and two women with a mean age of 67 years for recurrence and/or persistence at a mean time of 30 months post-radiotherapy (in case #8, after concomitant chemoradiotherapy). Bilateral neck dissection at levels II-V was performed in six patients. Only case #8 presented metastasis in one node. In case #5, the Delphian node was positive. It was possible to preserve both arytenoids in five cases. Definitive surgical margins were negative. Complications were encountered in seven patients. Follow-up was on average 44 months (range: 20-67 months). Organ preservation in this series was 75%, and local control was 87%. Overall 5-year survival was 50%. CONCLUSIONS: In selected patients with persistence and/or recurrence after radiotherapy for cancer of the larynx, SCPL with CHEP seems to be feasible with acceptable local control and toxicity. Complications may occur as in previously non-irradiated patients. These complications must be treated conservatively to avoid altering laryngeal function.
Abstract:
BACKGROUND: Sorafenib (Sb) is a multiple kinase inhibitor targeting both tumour cell proliferation and angiogenesis that may further act as a potent radiosensitizer by arresting cells in the most radiosensitive cell cycle phase. This phase I open-label, noncontrolled dose escalation study was performed to determine the safety and maximum tolerated dose (MTD) of Sb in combination with radiation therapy (RT) and temozolomide (TMZ) in 17 patients with newly diagnosed high-grade glioma. METHODS: Patients were treated with RT (60 Gy in 2 Gy fractions) combined with TMZ 75 mg m(-2) daily, and Sb administered at three dose levels (200 mg daily, 200 mg BID, and 400 mg BID) starting on day 8 of RT. Thirty days after the end of RT, patients received monthly TMZ (150-200 mg m(-2) D1-5/28) and Sb (400 mg BID). Pharmacokinetic (PK) analyses were performed on day 8 (TMZ) and on day 21 (TMZ and Sb) (ClinicalTrials.gov ID: NCT00884416). RESULTS: The MTD of Sb was established at 200 mg BID. Dose-limiting toxicities included thrombocytopenia (two patients), diarrhoea (one patient) and hypercholesterolaemia (one patient). Sb administration did not affect the mean area under the curve (AUC(0-24)) and mean Cmax of TMZ and its metabolite 5-amino-imidazole-4-carboxamide (AIC). The Tmax of both TMZ and AIC was delayed from 0.75 h (TMZ alone) to 1.5 h (combined TMZ/Sb). The median progression-free survival was 7.9 months (95% confidence interval (CI): 5.4-14.55), and the median overall survival was 17.8 months (95% CI: 14.7-25.6). CONCLUSIONS: Although Sb can be combined with RT and TMZ, significant side effects and moderate outcome results do not support further clinical development in malignant gliomas. The robust PK data of the TMZ/Sb combination could be useful in other cancer settings.
Abstract:
BACKGROUND: The purpose of this study was to determine the long-term outcomes of patients undergoing endocavitary contact radiation therapy (ECR) for stage I rectal cancer. METHODS: A database of patients treated with ECR for biopsy-proven rectal adenocarcinoma from July 1986 to June 2006 was reviewed retrospectively. Only patients with primary, non-metastatic, ultrasonographically staged T1 N0 and T2 N0 cancer who had no adjuvant treatment were included. Patients received a median of 90 (range 60-190) Gy contact radiation, delivered transanally by a 50-kV X-ray tube in two to five fractions. RESULTS: Of 149 patients, 77 (40 T1, 37 T2) met the inclusion criteria. Median age was 74 (range 38-104) years, and median follow-up 69 (range 10-219) months. ECR failed in 21 patients (27 per cent) (persistent disease, four; recurrence, 17), of whom ten remained disease free after salvage therapy. The estimated 5-year disease-free survival rate was 74 (95 per cent confidence interval 63 to 83) per cent after ECR alone, and 87 (76 to 93) per cent when survival after salvage therapy for recurrence was included. CONCLUSION: ECR is a minimally invasive treatment option for early-stage rectal cancer. However, similar to other local therapies, ECR has a worse oncological outcome than radical surgery.
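The disease-free survival figures quoted above are Kaplan-Meier estimates. A minimal sketch of the estimator is shown below with invented follow-up data, purely to illustrate how such rates are computed; it is not the study's analysis code.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. `events[i]` is 1 for a failure
    (recurrence/death) at `times[i]`, 0 for censoring (alive at last
    follow-up). Returns the step points (time, S(t)) after each event.

    Sketch only: ties are handled by processing events before
    censorings at the same time, the usual convention."""
    pairs = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    at_risk = len(pairs)
    s = 1.0
    curve = []
    for t, e in pairs:
        if e:                                  # failure: survival drops
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        at_risk -= 1                           # leave the risk set
    return curve
```

With real data one would read off S(t) at t = 60 months to obtain a 5-year rate such as the 74 per cent reported above.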
Abstract:
Objective: Candidate genes for non-alcoholic fatty liver disease (NAFLD) identified by a bioinformatics approach were examined for variant associations to quantitative traits of NAFLD-related phenotypes. Research Design and Methods: By integrating public database text mining, trans-organism protein-protein interaction transferal, and information on liver protein expression, a protein-protein interaction network was constructed, and from this a smaller isolated interactome was identified. Five genes from this interactome were selected for genetic analysis. Twenty-one tag single-nucleotide polymorphisms (SNPs), which captured all common variation in these genes, were genotyped in 10,196 Danes and analyzed for association with NAFLD-related quantitative traits, type 2 diabetes (T2D), central obesity, and WHO-defined metabolic syndrome (MetS). Results: 273 genes were included in the protein-protein interaction analysis, and EHHADH, ECHS1, HADHA, HADHB, and ACADL were selected for further examination. A total of 10 nominally statistically significant associations (P<0.05) with quantitative metabolic traits were identified. The case-control study also showed associations between variation in the five genes and T2D, central obesity, and MetS, respectively. Bonferroni adjustment for multiple testing negated all associations. Conclusions: Using a bioinformatics approach we identified five candidate genes for NAFLD. However, we failed to provide evidence of associations with major effects between SNPs in these five genes and NAFLD-related quantitative traits, T2D, central obesity, and MetS.
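The Bonferroni adjustment mentioned in the Results can be sketched in a few lines; the p-values below are illustrative, not the study's.

```python
def bonferroni_adjust(p_values):
    """Bonferroni correction: multiply each p-value by the number of
    tests, capping at 1.0. With 21 tag SNPs tested against several
    traits, a nominal P < 0.05 must survive this penalty to remain
    significant after correction."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Illustrative only: four nominal hits at P < 0.05 among 21 tests
# are no longer significant after adjustment.
nominal = [0.012, 0.030, 0.004, 0.049] + [0.5] * 17
adjusted = bonferroni_adjust(nominal)
```

This is exactly the pattern reported above: nominally significant associations that do not withstand correction for multiple testing.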
Abstract:
Instabilities generated by density gradients arise in a variety of flows. One example is the geological sequestration of carbon dioxide in porous media. This gas is injected at high pressure into deep saline aquifers. The density difference between the brine saturated with dissolved CO2 and the surrounding brine induces favorable currents that transport it toward the deep geological layers. Density gradients can also cause the undesired transport of toxic substances, which may eventually lead to soil and water pollution. The range of scales involved in this type of phenomenon is very wide. It extends from the pore scale, where the growth of the instabilities takes place, up to the aquifer scale, at which the long-term phenomena occur. A faithful reproduction of the physics by numerical simulation therefore remains a challenge because of the multiscale character, in both space and time, of these phenomena. It thus requires the development of efficient algorithms and the use of modern computational tools. In conjunction with iterative solution methods, multiscale methods make it possible to solve large systems of algebraic equations efficiently. These methods were introduced as upscaling and downscaling methods for the simulation of flows in porous media, in order to handle strong heterogeneities of the permeability field. The principle rests on the parallel use of two grids: the first is chosen according to the resolution of the permeability field (fine grid), while the second (coarse grid) is used to approximate the fine problem at lower cost. The quality of the multiscale solution can be improved iteratively to prevent excessive errors when the permeability field is complex. 
Adaptive methods that restrict the update procedures to regions with strong gradients make it possible to limit the additional computational costs. In the case of instabilities induced by density gradients, the scale of the phenomena varies over time. Consequently, adaptive multiscale methods are required to account for these dynamics. The objective of this thesis is to develop efficient adaptive multiscale algorithms for the simulation of instabilities induced by density gradients. To this end, we build on the Multiscale Finite-Volume (MsFV) method, which offers the advantage of solving transport phenomena while conserving mass exactly. In the first part, we demonstrate that the approximations of the MsFV method generate non-physical fingering phenomena whose suppression requires iterative correction operations. The additional computational costs of these operations can, however, be offset by adaptive methods. We also propose using the MsFV method as a downscaling method: the coarse grid is used in zones where the flow is relatively homogeneous, while the finer grid is used to resolve strong gradients. In the second part, the multiscale method is extended to an arbitrary number of levels. We show that the generalized method remains efficient for solving large systems of algebraic equations. In the last part, we focus our study on the scales that determine the evolution of the instabilities generated by density gradients. Identifying the local as well as the global structure of the flow allows an upscaling of the instabilities at late times, while the small-scale structures are preserved during the onset of the instability. 
The results presented in this work extend the understanding of MsFV methods and offer efficient multiscale formulations for the simulation of density-gradient-driven instabilities. - Density-driven instabilities in porous media are of interest for a wide range of applications, for instance, for geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. Over the last years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns. Advanced numerical methods are required to efficiently solve these problems with today's available computational resources. Besides efficient iterative solvers, multiscale methods are available to solve large numerical systems. Originally, multiscale methods were developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational costs. 
The quality of the multiscale solution can be iteratively improved to avoid large errors in case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, enable limiting the additional computational costs of the iterations. In case of density-driven instabilities, additional spatial scales appear which change with time. Flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations including the solution of transport problems as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers and iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs. We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This enables accounting for the dynamic change in scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We prove that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers. 
The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. Results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities. - Aquifers characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. In these media, a wide variety of flows can be observed. The most common are the transport of contaminants by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale that characterizes these flows is defined by the interaction of geological heterogeneity and physical processes. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients. These can be induced by local changes in temperature or by the dissolution of a chemical compound. Instabilities generated by density gradients are of particular interest since they may eventually compromise water quality. A striking example is the salinization of fresh water in aquifers through the intrusion of denser salt water into the deep regions. In the case of flows governed by density gradients, the characteristic scales of the flow extend from the pore scale, where the growth of instabilities takes place, up to the aquifer scale, over which the long-term phenomena occur. Given that in-situ investigations are practically impossible, numerical models are used to predict and assess the risks associated with instabilities generated by density gradients. 
A correct description of these phenomena rests on describing all the scales of the flow, whose range can span eight to ten orders of magnitude in the case of large aquifers. This results in large numerical problems that are very costly to solve. Sophisticated numerical schemes are therefore necessary to carry out accurate simulations of hydrodynamic instabilities at large scale. In this work, we present different numerical methods that make it possible to simulate instabilities due to density gradients efficiently and accurately. These new methods are based on multiscale finite volumes. The idea is to project the original problem onto a larger scale, where it is less expensive to solve, and then to map the coarse solution back to the original scale. This technique is particularly well suited for solving problems in which a wide range of scales is involved and evolves in space and time. This makes it possible to reduce computational costs by limiting the detailed description of the problem to the regions that contain a moving concentration front. The achievements are illustrated by the simulation of phenomena such as salt-water intrusion and carbon dioxide sequestration.
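The two-grid principle described in this abstract (approximate the fine problem on a coarse grid, then map the coarse solution back to the fine scale) can be sketched on a 1D Poisson problem. This is a deliberately simplified illustration with assumed grid sizes; a real MsFV method constructs local basis functions rather than using plain interpolation.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 1D finite-difference Laplacian with Dirichlet BCs."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid_solution(f, n_fine, ratio=4):
    """Upscaling-downscaling sketch: solve -u'' = f on a coarse grid,
    then prolongate (linearly interpolate) the coarse solution onto the
    fine grid. Homogeneous Dirichlet boundaries u(0) = u(1) = 0."""
    h_f = 1.0 / (n_fine + 1)
    x_f = np.linspace(h_f, 1.0 - h_f, n_fine)
    n_c = n_fine // ratio                           # upscaling step
    h_c = 1.0 / (n_c + 1)
    x_c = np.linspace(h_c, 1.0 - h_c, n_c)
    u_c = np.linalg.solve(poisson_matrix(n_c, h_c), f(x_c))
    # downscaling step: prolongate to the fine grid (zero at boundaries)
    u_f = np.interp(x_f, np.concatenate(([0.0], x_c, [1.0])),
                    np.concatenate(([0.0], u_c, [0.0])))
    return x_f, u_f
```

The coarse solve costs a fraction of the fine one, which is the economy the multiscale formulation exploits; the thesis then improves the reconstruction iteratively and only where fronts develop.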
Abstract:
PURPOSE: The objective of this experiment is to establish a continuous postmortem circulation in the vascular system of porcine lungs and to evaluate the pulmonary distribution of the perfusate. This research is performed within the broader scope of a revascularization project of Thiel-embalmed specimens. This technique enables teaching anatomy, practicing surgical procedures and doing research under lifelike circumstances. METHODS: After cannulation of the pulmonary trunk and the left atrium, the vascular system was flushed with paraffinum perliquidum (PP) through a heart-lung machine. A continuous circulation was then established using red PP, during which perfusion parameters were measured. The distribution of contrast-containing PP in the pulmonary circulation was visualized on computed tomography. Finally, the amount of leakage from the vascular system was calculated. RESULTS: Reperfusion of the vascular system was sustained for 37 min. The flow rate ranged between 80 and 130 ml/min throughout the experiment, with acceptable perfusion pressures (range: 37-78 mm Hg). Computed tomography imaging and 3D reconstruction revealed a diffuse vascular distribution of PP and a decreasing vascularization ratio in the cranial direction. A self-limiting leak (i.e. 66.8% of the circulating volume) towards the tracheobronchial tree due to vessel rupture was also measured. CONCLUSIONS: PP enables circulation in an isolated porcine lung model with an acceptable pressure-flow relationship, resulting in an excellent recruitment of the vascular system. Despite these promising results, rupture of vessel walls may cause leaks. Further exploration of the perfusion capacities of PP in other organs is necessary. Eventually, this could lead to the development of reperfused Thiel-embalmed human bodies, which have several applications.
Abstract:
Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique, which is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, the accuracy of the MsFV method deteriorates in case of unstable flow and an iterative scheme is required to control the localization error. To avoid large computational overhead due to the iterative scheme, we suggest several adaptive strategies both for flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV, which can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later. 
This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework to construct hybrid methods.
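The adaptive criterion described above (restrict the accurate, iteratively improved solution to a front region identified by the concentration gradient) can be sketched as follows. The threshold value and array shapes are assumptions chosen for illustration.

```python
import numpy as np

def front_region(conc, threshold=0.1):
    """Flag cells belonging to the unstable front, where the fine-scale
    solution is needed; elsewhere the coarse-scale solution suffices.

    Minimal sketch of the adaptive criterion: a cell is in the front
    region if the magnitude of its local concentration gradient
    exceeds a threshold (a hypothetical value, not from the paper)."""
    gy, gx = np.gradient(conc)          # gradients along rows and columns
    return np.hypot(gx, gy) > threshold
```

The returned boolean mask would then drive where the iterative MsFV corrections are applied, leaving the rest of the domain on the coarse grid.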
Abstract:
Abstract: This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems. Summary: This research work concerns the development and application of so-called unsupervised learning methods. The applications targeted by these methods are the analysis of forensic data and the classification of hyperspectral images in remote sensing. First, an unsupervised classification methodology based on the symbolic optimization of an inter-sample distance measure is proposed. 
This measure is obtained by optimizing a cost function related to the preservation of the neighborhood structure of a point between the space of the initial variables and the space of the principal components. This method is applied to the analysis of forensic data and compared to a range of existing methods. Second, a method based on a joint optimization of the feature selection and classification tasks is implemented in a neural network and applied to various databases, including two hyperspectral images. The neural network is trained with a stochastic gradient algorithm, which makes this technique applicable to very high resolution images. The results of applying the latter show that such a technique makes it possible to classify very large databases without difficulty and yields results favorably comparable to existing methods.
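The large-scale clustering idea above (a functional model trained by stochastic gradient descent, which naturally handles out-of-sample points) can be sketched with online k-means standing in for the thesis's neural model; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

def sgd_kmeans(stream, k, dim, lr=0.05, seed=0):
    """Online k-means trained by stochastic gradient descent: each
    incoming sample nudges its nearest centroid toward itself. A toy
    stand-in for a neural clustering model; it shares the key property
    of scaling to huge databases in a single pass over the data."""
    rng = np.random.default_rng(seed)
    centroids = rng.normal(0.0, 1.0, (k, dim))
    for x in stream:
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))  # nearest centroid
        centroids[j] += lr * (x - centroids[j])            # gradient step
    return centroids

def assign(centroids, X):
    """Out-of-sample assignment: label new samples (e.g., unseen pixels
    of a hyperspectral image) by their nearest learned centroid."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

Because assignment only needs the learned centroids, new pixels never seen during training can be clustered directly, which is the out-of-sample property mentioned in the abstract.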