973 results for Iterative Closest Point (ICP) Algorithm
Abstract:
Numerous studies have already focused on Woody Allen's so-called "serious cinema" and, among them, worth mentioning is the one that Pau Gilabert Barberà, the author of this article, wrote (2006) on what is, in his opinion, the sophistic legacy underlying the script of Crimes and Misdemeanors. On this occasion, his aim is to analyse the American director's fluctuating trajectory in relation to Greek tragedy, in the conviction that only in this way is it possible to reveal his empathy with the tragic spirit of the Greeks and to understand his need to present that literary genre as a paradigm from which to understand the greatness and miseries of the contemporary world.
Abstract:
Objective: To implement a carotid-sparing protocol using helical tomotherapy (HT) in T1N0 squamous-cell laryngeal carcinoma.
Materials/Methods: Between July and August 2010, 7 men with stage T1N0 laryngeal carcinoma were included in this study. Age ranged from 47 to 74 years. Staging included endoscopic examination, CT scan and MRI when indicated. The planned irradiation dose was 70 Gy in 35 fractions over 7 weeks. A simple treatment planning algorithm for carotid sparing was used: maximum point dose of 35 Gy to the carotids, 30 Gy to the spinal cord, and 100% of the PTV volume to be covered with 95% of the prescribed dose. The carotid volume of interest extended to 1 cm above and below the PTV. Doses to the carotid arteries, critical organs, and planned target volume (PTV) were compared with our standard laryngeal irradiation protocol. Daily megavoltage scans were obtained before each fraction. When necessary, the Planned Adaptive software (TomoTherapy Inc., Madison, WI) was used to evaluate the need for re-planning, which was never indicated. Dose data were extracted using the VelocityAI software (Atlanta, GA), and data normalization and dose-volume histogram (DVH) interpolation were performed using the Igor Pro software (Portland, OR).
Results: A significant (p < 0.05) carotid dose sparing compared to our standard protocol was achieved, with an average maximum point dose of 38.3 Gy (standard deviation [SD] 4.05 Gy) and an average mean dose of 18.59 Gy (SD 0.83 Gy). In all patients, 95% of the carotid volume received less than 28.4 Gy (SD 0.98 Gy). The average maximum point dose to the spinal cord was 25.8 Gy (SD 3.24 Gy). The PTV was fully covered with more than 95% of the prescribed dose for all patients, with an average maximum point dose of 74.1 Gy and an absolute maximum dose in a single patient of 75.2 Gy. To date, the clinical outcomes have been excellent. Three patients (42%) developed stage 1 mucositis that was conservatively managed, and all patients presented mild to moderate dysphonia. All adverse effects resolved spontaneously in the month following the end of treatment. The early local control rate is 100% at 4-5 months of post-treatment follow-up.
Conclusions: HT allows a clinically significant decrease of the carotid irradiation dose compared to standard irradiation protocols, with an acceptable spinal cord dose tradeoff. Moreover, this technique allows the PTV to be homogeneously covered with a curative irradiation dose. Daily control imaging brings added security margins, especially when working with high dose gradients. Further investigations and follow-up are underway to better evaluate the late clinical outcomes, especially the local control rate, late laryngeal and vascular toxicity, and the expected potential impact on cerebrovascular events.
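As a minimal illustration (not the authors' planning software), the stated planning constraints could be checked against sampled dose arrays as follows; the function and array names are assumptions, and a real system would operate on DVH data exported from the planning system.

    import numpy as np

    PRESCRIPTION = 70.0  # Gy, as stated in the abstract

    def check_carotid_sparing_plan(carotid_dose, cord_dose, ptv_dose):
        """Check the quoted planning constraints on sampled dose arrays (Gy)."""
        carotid_ok = carotid_dose.max() <= 35.0            # carotid max point dose
        cord_ok = cord_dose.max() <= 30.0                  # spinal cord max point dose
        ptv_ok = (ptv_dose >= 0.95 * PRESCRIPTION).all()   # 100% of PTV gets >= 95% dose
        return carotid_ok and cord_ok and ptv_ok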
Abstract:
We herein present a preliminary practical algorithm for evaluating complementary and alternative medicine (CAM) for children which relies on basic bioethical principles and considers the influence of CAM on global child healthcare. CAM is currently involved in almost all sectors of pediatric care and frequently represents a challenge to the pediatrician. The aim of this article is to provide a decision-making tool to assist the physician, especially as it remains difficult to keep up-to-date with the latest developments in the field. The reasonable application of our algorithm together with common sense should enable the pediatrician to decide whether pediatric (P)-CAM represents potential harm to the patient, and allow ethically sound counseling. In conclusion, we propose a pragmatic algorithm designed to evaluate P-CAM, briefly explain the underlying rationale and give a concrete clinical example.
Abstract:
Sustainable use of soil, maintaining or improving its quality, is one of the goals of diversification in farmlands. From this point of view, bioindicators associated with C, N and P cycling can be used in assessments of land-use effects on soil quality. The aim of this study was to investigate chemical, microbiological and biochemical properties of soil associated with C, N and P under different land uses on a farm property with diversified activity in northern Paraná, Brazil. Seven areas under different land uses were assessed: a fragment of native Atlantic Forest; a peach-palm (Bactris gasipaes) plantation; sugarcane (Saccharum officinarum) ratoon, recently harvested and under renewal; coffee (Coffea arabica) intercropped with tree species; recent (1-year-old) reforestation with native tree species on land previously under annual crops; annual crops under no-tillage, with rye (Secale cereale); and secondary forest regenerated after abandonment (for 20 years) of an avocado (Persea americana) orchard. The soil under coffee, recent reforestation and secondary forest showed higher concentrations of organic carbon, but microbial biomass and enzyme activities were higher in soils under native forest and secondary forest, which also showed the lowest metabolic coefficient, followed by the peach-palm area. The lowest content of water-dispersible clay was found in the soil under native forest, differing from the soils under sugarcane and secondary forest. Soil cover and soil use affected total organic C contents and soil enzyme and microbial activities, such that more intensive agricultural uses had deeper impacts on the indicators assessed. Calculation of the mean soil quality index showed that the secondary forest was closest to the fragment of native forest, followed by the peach-palm area, the coffee-growing area, the annual crop area, the area of recent reforestation and the sugarcane ratoon area.
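Since the abstract reports a mean soil quality index per land use, a heavily simplified sketch of such an index is given below; normalizing each bioindicator against the native-forest reference is an assumption made for illustration, not the authors' formula.

    import numpy as np

    def soil_quality_index(indicators, reference, lower_is_better=None):
        """Average bioindicators (e.g., organic C, microbial biomass, enzyme
        activities) normalized against native-forest reference values."""
        ind = np.asarray(indicators, dtype=float)
        ref = np.asarray(reference, dtype=float)
        scores = ind / ref
        if lower_is_better is not None:  # e.g., the metabolic coefficient
            scores[lower_is_better] = ref[lower_is_better] / ind[lower_is_better]
        return scores.mean()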
Abstract:
Background In patients presenting with acute cardiac symptoms, abnormal ECG and raised troponin, myocarditis may be suspected after normal angiography. Aims To analyse cardiac magnetic resonance (CMR) findings in patients with a provisional diagnosis of acute coronary syndrome (ACS) in whom acute myocarditis was subsequently considered more likely. Methods and results 79 patients referred for CMR following an admission with presumed ACS and raised serum troponin in whom no culprit lesion was detected were studied. 13% had unrecognised myocardial infarction and 6% takotsubo cardiomyopathy. The remainder (81%) were diagnosed with myocarditis. Mean age was 45±15 years and 70% were male. Left ventricular ejection fraction (EF) was 58±10%; myocardial oedema was detected in 58%. A myocarditic pattern of late gadolinium enhancement (LGE) was detected in 92%. Abnormalities were detected more frequently in scans performed within 2 weeks of symptom onset: oedema in 81% vs 11% (p<0.0005), and LGE in 100% vs 76% (p<0.005). In 20 patients with both an acute (<2 weeks) and a convalescent scan (>3 weeks), oedema decreased from 84% to 39% (p<0.01) and LGE from 5.6 to 3.0 segments (p=0.005). Three patients presented with sustained ventricular tachycardia, another died suddenly 4 days after admission and one was resuscitated 7 weeks following presentation. All 5 patients had preserved EF. Conclusions Our study emphasises the importance of access to CMR for heart attack centres. If myocarditis is suspected, CMR scanning should be performed within 14 days. Myocarditis should not be regarded as benign, even when EF is preserved.
Abstract:
We present a numerical method for spectroscopic ellipsometry of thick transparent films. An analytical expression for the dispersion of the refractive index, containing several unknown coefficients, is assumed; the procedure then fits these coefficients at a fixed thickness, and the thickness is varied within a range around its approximate value. The sample thickness is taken to be the one that gives the best fit, and the refractive index is defined by the coefficients obtained for that thickness.
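A compact sketch of this thickness-scanning procedure, assuming a two-coefficient Cauchy dispersion n(lam) = A + B/lam^2; film_model is a stand-in for the ellipsometric forward model of the film, which the abstract does not specify.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_at_thickness(d, lam, measured, film_model):
        """Fit the dispersion coefficients with the thickness d held fixed."""
        def residuals(coef):
            n = coef[0] + coef[1] / lam**2           # Cauchy dispersion model
            return film_model(n, d, lam) - measured  # model vs. experiment
        fit = least_squares(residuals, x0=[1.5, 0.01])
        return fit.cost, fit.x

    def best_thickness(thickness_range, lam, measured, film_model):
        """Scan the thickness range; keep the thickness giving the best fit."""
        fits = [(fit_at_thickness(d, lam, measured, film_model), d)
                for d in thickness_range]
        (cost, coef), d = min(fits, key=lambda f: f[0][0])
        return d, coef  # final thickness and dispersion coefficients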
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
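A schematic of the described error-control loop, with msfv_step and residual as stand-ins for the i-MSFV smoothing iteration and the pressure-equation residual; the names are assumptions, not the authors' code.

    import numpy as np

    def solve_pressure(p_prev, tol, max_iter, msfv_step, residual):
        """Iterate the MSFV pressure solution only while the residual is large."""
        p = p_prev.copy()  # reuse the last timestep to improve localization
        for _ in range(max_iter):
            if np.linalg.norm(residual(p)) <= tol:
                break          # residual below threshold: no extra iterations
            p = msfv_step(p)   # one i-MSFV smoothing/correction step
        return p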
Abstract:
Density-driven instabilities in porous media are of interest for a wide range of applications, for instance, for geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. Over the last years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns. Advanced numerical methods are required to efficiently solve these problems with today's available computational resources. Besides efficient iterative solvers, multiscale methods are available to solve large numerical systems. Originally, multiscale methods were developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational cost. The quality of the multiscale solution can be iteratively improved to avoid large errors in case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, limit the additional computational cost of the iterations. In the case of density-driven instabilities, additional spatial scales appear which change with time. Flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations including the solution of transport problems, as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers, and iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs. We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This enables accounting for the dynamic change in scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We show that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers. The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. Results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities. 
- Aquifers characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. In these media, a wide variety of flows can be observed; the most common are the transport of contaminants by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale that characterizes these flows is defined by the interaction between geological heterogeneity and physical processes. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients, which may be induced by local temperature changes or by the dissolution of a chemical compound. Density-driven instabilities are of particular interest because they can compromise water quality; a striking example is the salinization of fresh water in aquifers through the penetration of denser salt water into deep regions. In density-driven flows, the characteristic scales range from the pore scale, at which instability growth takes place, up to the aquifer scale, at which long-term phenomena occur. Since in-situ investigations are practically impossible, numerical models are used to predict and assess the risks associated with density-driven instabilities. A correct description of these phenomena relies on resolving all scales of the flow, which can span eight to ten orders of magnitude in the case of large aquifers. This results in large numerical problems that are very expensive to solve, and sophisticated numerical schemes are therefore needed to perform accurate large-scale simulations of hydrodynamic instabilities. In this work, we present numerical methods that efficiently and accurately simulate density-driven instabilities. These new methods are based on multiscale finite volumes: the idea is to project the original problem onto a coarser scale, where it is cheaper to solve, and then to map the coarse solution back to the original scale. This technique is particularly suited to problems in which a wide range of scales evolves in space and time, as it reduces computational cost by limiting the detailed description of the problem to the regions containing a moving concentration front. The results are illustrated by simulations of phenomena such as salt-water intrusion and carbon dioxide sequestration.
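To make the adaptivity concrete, here is a minimal sketch of how a fine-grid region could be selected from concentration gradients (where fingers form); the threshold and field names are illustrative assumptions, not the thesis implementation.

    import numpy as np

    def refinement_mask(concentration, dx, threshold):
        """Mark cells with steep concentration gradients for fine-scale solves."""
        d0, d1 = np.gradient(concentration, dx)  # gradients along each grid axis
        grad_mag = np.hypot(d0, d1)
        return grad_mag > threshold  # True where the fine grid is used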
Abstract:
POCT (point-of-care tests) have great potential in ambulatory infectious disease medicine thanks to their rapid turnaround and their impact on antibiotic administration and on the diagnosis and prevention of communicable diseases. Some tests have been in use for several years (detection of Streptococcus pyogenes in pharyngitis, anti-HIV antibodies, urinary S. pneumoniae antigen, Plasmodium falciparum antigen). New indications concern community-acquired lower respiratory tract infections, infectious diarrhoea in children (rotavirus, enterohaemorrhagic E. coli) and sexually transmitted infections. Easy to use, these antigen-antibody-based tests allow a diagnosis in less than one hour. A new generation of POCT relying on nucleic acid detection has just been introduced into practice (detection of group B streptococcus in pregnant women before delivery and of methicillin-resistant Staphylococcus aureus carriage) and will be extended to many pathogens.
Abstract:
A stochastic nonlinear partial differential equation is constructed for two different models exhibiting self-organized criticality: the Bak-Tang-Wiesenfeld (BTW) sandpile model [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] and the Zhang model [Phys. Rev. Lett. 63, 470 (1989)]. The dynamic renormalization group (DRG) enables one to compute the critical exponents. However, the nontrivial stable fixed point of the DRG transformation is unreachable for the original parameters of the models. We introduce an alternative regularization of the step function involved in the threshold condition, which breaks the symmetry of the BTW model. Although the symmetry properties of the two models are different, it is shown that they both belong to the same universality class. In this case the DRG procedure leads to a symmetric behavior for both models, restoring the broken symmetry, and makes accessible the nontrivial fixed point. This technique could also be applied to other problems with threshold dynamics.
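The abstract does not give the explicit smooth form; one standard (illustrative) regularization replaces the threshold step function \theta(z) with a sigmoid that recovers the sharp threshold in the limit:

    \theta_\varepsilon(z) = \frac{1}{1 + e^{-z/\varepsilon}},
    \qquad
    \lim_{\varepsilon \to 0^+} \theta_\varepsilon(z) = \theta(z),

which makes the toppling rule differentiable and hence amenable to the DRG transformation; the particular regularization adopted by the authors (the one that breaks the BTW symmetry) is not detailed in the abstract.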
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). Curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as a part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
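Conceptually, the lGI at a point reduces to a surface-area ratio; a minimal sketch (not the FreeSurfer implementation) is given below, with the matched ROI areas assumed to be precomputed by the mesh-matching step described above.

    def local_gyrification_index(pial_area_in_roi, outer_area_in_roi):
        """Ratio of cortical (pial) surface, including cortex buried in sulci,
        to the visible outer-hull surface, for a matched pair of circular ROIs
        (areas in, e.g., mm^2). Values above 1 indicate buried cortex."""
        return pial_area_in_roi / outer_area_in_roi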
Abstract:
The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.
Abstract:
The liquid-liquid critical point scenario of water hypothesizes the existence of two metastable liquid phases, low-density liquid (LDL) and high-density liquid (HDL), deep within the supercooled region. The hypothesis originates from computer simulations of the ST2 water model, but the stability of the LDL phase with respect to the crystal is still being debated. We simulate supercooled ST2 water at constant pressure, constant temperature, and constant number of molecules N for N ≤ 729 and times up to 1 μs. We observe clear differences between the two liquids, both structural and dynamical. Using several methods, including finite-size scaling, we confirm the presence of a liquid-liquid phase transition ending in a critical point. We find that the LDL is stable with respect to the crystal in 98% of our runs (we perform 372 runs for LDL or LDL-like states), and in 100% of our runs for the two largest system sizes (N = 512 and 729, for which we perform 136 runs for LDL or LDL-like states). In all these runs, tiny crystallites grow and then melt within 1 μs. Only for N ≤ 343 do we observe six events (over 236 runs for LDL or LDL-like states) of spontaneous crystallization after crystallites reach an estimated critical size of about 70 ± 10 molecules.
Abstract:
This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
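As a much-simplified stand-in for the SGD-trained clustering idea (the functional model here is plain centroids rather than a neural network), a minibatch scheme like the following scales to large databases and assigns unseen samples directly:

    import numpy as np

    rng = np.random.default_rng(0)

    def sgd_kmeans(X, k, lr=0.1, steps=500, batch=256):
        """Minibatch SGD on a k-means-style clustering objective."""
        centers = X[rng.choice(len(X), k, replace=False)].copy()
        for _ in range(steps):
            xb = X[rng.integers(0, len(X), batch)]  # random minibatch
            nearest = ((xb[:, None] - centers) ** 2).sum(-1).argmin(1)
            for j in range(k):
                mask = nearest == j
                if mask.any():  # gradient step toward the assigned samples
                    centers[j] += lr * (xb[mask].mean(0) - centers[j])
        return centers

    def assign(X_new, centers):
        """Out-of-sample assignment: cluster new points without re-training."""
        return ((X_new[:, None] - centers) ** 2).sum(-1).argmin(1)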
Abstract:
Children under 10 passively smoking 14 cigarettes! From April 2010 to April 2011, the exposure of 148 children (81 boys and 67 girls) was tested: 10 children under one year, 25 aged 1 to 5, 19 aged 5 to 10, 30 aged 10 to 15, and 64 aged 15 to 18. Ten of them are smokers, and the youngest, aged 14, smokes 10 cigarettes per day. Their parents, or sometimes the young people themselves, voluntarily ordered a free MoNIC badge via the websites of CIPRET Valais, Vaud and Geneva. The results concerning these children's exposure are striking and deserve attention. For all the children, the average nicotine concentration in their indoor environment measured via the MoNIC devices was 0.5 mg/m3, with maximums of up to 21 mg/m3. For the group of children under 10 (26 boys and 28 girls, all non-smokers), the nicotine concentration was not negligible (mean 0.069 mg/m3, min 0, max 0.583 mg/m3). Converting this result into the equivalent of passively inhaled cigarettes, we obtain figures ranging from 0 to 14 cigarettes per day* with an average of 1.6 cig/day. Even more surprisingly, children under one year of age (4 boys and 6 girls) passively inhale, within the family setting, an average of 1 cigarette (min 0, max 2.2). For the two other groups, 10-15 and 15-18 years, the maximum values approach 22 cigarettes; note, however, that unlike for the younger children, this result is influenced by the fact that some of these adolescents are also active smokers. *When the exposure duration exceeded one day (8 hours), the number of hours was always divided by 8; the result gives the equivalent of cigarettes passively smoked over eight hours. It is therefore an average, meaning that during this period the children may have been exposed irregularly to values above or below it. [Authors]
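The conversion factor implicit in these figures can be reconstructed from the abstract itself: 0.583 mg/m3 corresponds to 14 cigarettes per day, i.e., roughly 0.042 mg/m3 per passively smoked cigarette over 8 hours. A quick check under that assumption:

    # Inferred from the reported figures (0.583 mg/m3 <-> 14 cig/day);
    # this factor is reconstructed from the abstract, not taken from
    # the MoNIC badge documentation.
    MG_PER_M3_PER_CIGARETTE = 0.583 / 14  # ~0.042 mg/m3

    def cigarette_equivalents(mean_concentration_mg_m3):
        """Convert an 8-hour mean nicotine concentration to cigarettes/day."""
        return mean_concentration_mg_m3 / MG_PER_M3_PER_CIGARETTE

    print(round(cigarette_equivalents(0.069), 1))  # ~1.7, close to the reported 1.6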