865 results for Multi-Point Method
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
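The direct sampling step described above scans a training image for a data event that matches the conditioning pattern within a tolerance, and pastes the value that follows the first acceptable match. A minimal 1D sketch of the idea, with a toy facies sequence standing in for the training image (this is an illustration, not the authors' implementation):

```python
import random

def direct_sample(training, pattern, threshold=0.0):
    """Direct-sampling sketch (1D): visit training-image positions in
    random order and return the value following the first window whose
    mismatch fraction with the conditioning pattern is within the
    threshold; fall back to the best window seen."""
    n, m = len(training), len(pattern)
    positions = list(range(n - m))
    random.shuffle(positions)
    best_pos, best_d = positions[0], float("inf")
    for p in positions:
        d = sum(a != b for a, b in zip(training[p:p + m], pattern)) / m
        if d <= threshold:
            return training[p + m]
        if d < best_d:
            best_pos, best_d = p, d
    return training[best_pos + m]

# Toy repeating facies sequence: every exact [0, 0] window is followed by 1.
ti = [0, 0, 1, 0, 0, 1, 0, 0, 1]
print(direct_sample(ti, [0, 0]))  # 1
```

In the paper's setting the training image is the large set of facies realizations with their synthetic tomograms, and the tolerance accounts for the spatially varying tomogram resolution.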
Abstract:
The aim of this thesis was to describe and introduce a method for calculating batch-level sawing profitability at a sawmill, and to build a calculation model to support the method. After the basic concepts of sawing, the thesis presents the sawmill's production process, described on the basis of literature and expert interviews. Next, the expected benefits and effects of the calculation method were surveyed. Cost accounting theory was reviewed from literature sources with this particular calculation method in mind. In addition, the calculation and information systems used at the Uimaharju sawmill and relevant to the calculation were presented. At present the sawmill has no method for calculating the result of an individual sawing batch. With small changes to the sawmill's information systems and process machinery, a sawing batch can be carried through the process so that production data can be attributed to it at every stage. Using the data collected at the different stages, the products produced by the batch and the production resources consumed can be determined accurately. Production and cost data are fed into the calculation model, which returns the financial result of the sawing batch. As a follow-up measure, further research into the automatic collection of production data is proposed, in order to eliminate manual work and errors. With relatively small investments, production data could be collected fully automatically for every sawing batch. In addition, the calculation model developed here should be replaced by an application that makes better use of the existing information systems and removes the manual step in the calculation.
Abstract:
Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumens, in vessels such as the aorta, the coronary and the carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft-tissue contrast without the need for intravenous contrast media, while modification of the MR pulse sequences allows further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood-flow-suppressed images for the early investigation of the vessel wall and the characterization of the atherosclerotic plaques. However, despite the great technical progress of the past two decades, MRI is intrinsically a low-sensitivity technique and some limitations still exist in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. To this end, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images that are obtained from the same raw data composing the final image.
Combination of all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) consists of a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated an improvement in motion detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated on in vivo free-breathing acquisitions. However, the motion of the heart induced by respiration has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) consists of a method for 3D affine motion correction rather than 2D only. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, as this can be adversely affected by the static tissue that surrounds the heart. The proposed method was shown to improve conspicuity and visualization of coronary arteries in healthy and cardiovascular disease patient cohorts in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to the respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data. This process exploits the consistency of the sorted data to reduce aliasing artifacts such that the sub-image corresponding to the end-expiratory phase can directly be used to visualize the coronaries. In a healthy volunteer cohort, this strategy improved conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method.
For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scanning times. In response, and as a fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scanning time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges for MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods were shown to significantly improve image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation is now warranted in patient populations to ascertain their diagnostic performance. Eventually, these methods may bring atherosclerosis MRI closer to clinical practice.
Abstract:
Background: TILLING (Targeting Induced Local Lesions IN Genomes) is a reverse-genetic method that combines chemical mutagenesis with high-throughput genome-wide screening for point mutation detection in genes of interest. However, this mutation discovery approach faces a particular problem: how to obtain a mutant population with a sufficiently high mutation density. Furthermore, plant mutagenesis protocols require two successive generations (M1, M2) for mutation fixation to occur before analysis of the genotype can begin. Results: Here, we describe a new TILLING approach for rice based on ethyl methanesulfonate (EMS) mutagenesis of mature seed-derived calli and direct screening of in vitro regenerated plants. A high mutation rate was obtained (i.e. one mutation in every 451 kb) when plants were screened for two senescence-related genes. Screening was carried out in 2,400 individuals from a mutant population of 6,912. Seven sense-change mutations out of 15 point mutations were identified. Conclusions: This new strategy represents a significant advantage in terms of time savings (i.e. more than eight months), greenhouse space and work during the generation of mutant plant populations. Furthermore, this effective chemical mutagenesis protocol ensures high mutation rates while reducing waste-removal costs and the total amount of mutagen needed, thanks to the reduced mutagenesis volume.
Abstract:
A new method for decision making that uses the ordered weighted averaging (OWA) operator in the aggregation of the information is presented. It builds on a concept known in the literature as the index of maximum and minimum level (IMAM), which is based on distance measures and other techniques useful for decision making. By using the OWA operator in the IMAM, we form a new aggregation operator that we call the ordered weighted averaging index of maximum and minimum level (OWAIMAM) operator. Its main advantage is that it provides a parameterized family of aggregation operators between the minimum and the maximum, along with a wide range of special cases. The decision maker may then make decisions according to their degree of optimism, while also considering ideals in the decision process. A further extension of this approach is presented by using hybrid averages and Choquet integrals. We also develop an application of the new approach to a multi-person decision-making problem regarding the selection of strategies.
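The OWA operator at the core of the approach reorders its arguments before weighting, which is why a single weight vector parameterizes the whole family between minimum and maximum. A minimal sketch with illustrative values (not taken from the paper):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending
    order, then apply the position-based weights (summing to 1)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# The parameterized family between minimum and maximum:
print(owa([3, 7, 5], [1, 0, 0]))   # all weight on the top position -> 7 (maximum)
print(owa([3, 7, 5], [0, 0, 1]))   # all weight on the bottom position -> 3 (minimum)
print(round(owa([3, 7, 5], [1/3, 1/3, 1/3]), 9))  # uniform weights -> 5.0 (mean)
```

Choosing weights concentrated near the top of the ordering expresses an optimistic attitude, near the bottom a pessimistic one, which is the "degree of optimism" the abstract refers to.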
Abstract:
Free induction decay (FID) navigators were found to qualitatively detect rigid-body head movements, yet it is unknown to what extent they can provide quantitative motion estimates. Here, we acquired FID navigators at different sampling rates and simultaneously measured head movements using a highly accurate optical motion tracking system. This strategy allowed us to estimate the accuracy and precision of FID navigators for the quantification of rigid-body head movements. Five subjects were scanned with a 32-channel head coil array on a clinical 3T MR scanner during several resting and guided head movement periods. For each subject we trained a linear regression model based on FID navigator and optical motion tracking signals. FID-based motion model accuracy and precision were evaluated using cross-validation. FID-based prediction of rigid-body head motion achieved mean translational and rotational errors of 0.14±0.21 mm and 0.08±0.13°, respectively. Robust model training with sub-millimeter and sub-degree accuracy could be achieved using 100 data points with motion magnitudes of ±2 mm and ±1° for translation and rotation. The obtained linear models appeared to be subject-specific, as inter-subject application of a "universal" FID-based motion model resulted in poor prediction accuracy. The results show that substantial rigid-body motion information is encoded in FID navigator signal time courses. Although the applied method currently requires the simultaneous acquisition of FID signals and optical tracking data, the findings suggest that multi-channel FID navigators have the potential to complement existing tracking technologies for accurate rigid-body motion detection and correction in MRI.
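The per-subject model described above is a linear regression from navigator signals to optically measured motion, evaluated by cross-validation. A minimal single-predictor sketch with leave-one-out validation; the numbers and the helper names `fit_linear`/`loo_error` are illustrative, not the study's data or code:

```python
def fit_linear(x, y):
    """Least-squares slope and intercept for a single predictor,
    standing in for the per-subject FID-to-motion regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def loo_error(x, y):
    """Leave-one-out cross-validation: mean absolute prediction error."""
    errs = []
    for i in range(len(x)):
        a, b = fit_linear(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
        errs.append(abs(y[i] - (a + b * x[i])))
    return sum(errs) / len(errs)

# A perfectly linear navigator-to-translation relationship
# cross-validates with (numerically) zero error.
x = [0.0, 1.0, 2.0, 3.0]   # toy navigator signal
y = [1.0, 3.0, 5.0, 7.0]   # toy measured translation in mm
print(round(loo_error(x, y), 9))  # 0.0
```

The study's actual models map multi-channel FID signals to six rigid-body parameters, so each motion parameter gets its own multivariate regression rather than the single predictor shown here.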
Abstract:
The aim of this study was to determine whether the expectations of the customer and the service provider can be reconciled when the provider's multichannel model is designed. Multichannel service offers the customer the possibility of transacting independently of time and place. From the customer's perspective this often means that a learned way of doing business changes. The customer perceives the effort of learning a new way of transacting as a drawback and expects the change to bring benefits; the benefits of the multichannel model to the customer are weighed in this transition. The service provider expects the multichannel model to bring cost savings, since channel choices are a means of developing customer relationships and influencing the company's profitability over the long term. Implementing a multichannel model initially requires resources, investments and goal-oriented change management from the provider. The customers studied were selected from Suomen Posti Oyj's "Yritykset ja yritykset" customer segment. The study found no customer-specific differences in expectations of providers' multichannel models, but three customer types were identified with respect to expected benefits: price-oriented customers expecting cost savings, service-oriented customers expecting smoother service, and relationship-oriented customers emphasizing their freedom of choice. The provider must be able to communicate and argue for the change in transaction habits in a way that is meaningful to each customer. On the basis of theory and the empirical data, the expectations of the customer and the provider can be reconciled when the provider's multichannel model is designed, provided that the provider knows its customers well enough to understand each customer's expected benefits.
Abstract:
The main purpose of this study was to examine and compare the possibilities of profit repatriation from the tax-planning point of view of an international corporation, in the case where a Finnish parent company has a subsidiary in Poland. The main research problem was divided into two sub-problems: 1) to examine the concepts and principles of international taxation and tax planning from the point of view of international corporations, and 2) to discuss the main features of the Polish Companies, Accounting and Tax Acts from the point of view of a Finnish parent company. The research method is mainly decision-oriented comparative analysis. The study discusses the possibilities of international profit repatriation in order to support the decision making of the management of a Finnish parent company, and compares the different repatriation alternatives. It was found that a Finnish parent company can repatriate the profit of its Polish subsidiary either directly as dividends or by indirect methods such as interest, royalties, management fees and transfer pricing of goods. The total tax burden of dividends is heavier than that of the indirect methods. It was also concluded that in recent years Polish legislation has been revised to prevent hidden dividend distribution, by implementing new rules on transfer pricing and thin capitalization.
Abstract:
BACKGROUND: Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. METHODS: We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. RESULTS: Compared to children with normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. 
A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5-98.8), 80.8% specificity (72.6-87.1), positive likelihood ratio 4.9 (3.4-7.1), negative likelihood ratio 0.083 (0.022-0.32), and misclassification rate 0.20 (standard error 0.038). CONCLUSIONS: In Tanzanian children with WHO-defined clinical pneumonia, combinations of host biomarkers distinguished between end-point pneumonia, other infiltrates, and normal chest x-ray, whereas clinical variables did not. These findings generate pathophysiological hypotheses and may have potential research and clinical utility.
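The reported likelihood ratios follow directly from the model's sensitivity and specificity, as a quick check shows:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test accuracy:
    LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# The reported CRP + Chitinase 3-like-1 model:
# 93.3% sensitivity, 80.8% specificity.
lr_pos, lr_neg = likelihood_ratios(0.933, 0.808)
print(round(lr_pos, 1), round(lr_neg, 3))  # 4.9 0.083
```

These match the point estimates in the abstract; the confidence intervals additionally require the group sizes.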
Abstract:
UNLABELLED: In vivo transcriptional analyses of microbial pathogens are often hampered by low proportions of pathogen biomass in host organs, hindering the coverage of full pathogen transcriptome. We aimed to address the transcriptome profiles of Candida albicans, the most prevalent fungal pathogen in systemically infected immunocompromised patients, during systemic infection in different hosts. We developed a strategy for high-resolution quantitative analysis of the C. albicans transcriptome directly from early and late stages of systemic infection in two different host models, mouse and the insect Galleria mellonella. Our results show that transcriptome sequencing (RNA-seq) libraries were enriched for fungal transcripts up to 1,600-fold using biotinylated bait probes to capture C. albicans sequences. This enrichment biased the read counts of only ~3% of the genes, which can be identified and removed based on a priori criteria. This allowed an unprecedented resolution of C. albicans transcriptome in vivo, with detection of over 86% of its genes. The transcriptional response of the fungus was surprisingly similar during infection of the two hosts and at the two time points, although some host- and time point-specific genes could be identified. Genes that were highly induced during infection were involved, for instance, in stress response, adhesion, iron acquisition, and biofilm formation. Of the in vivo-regulated genes, 10% are still of unknown function, and their future study will be of great interest. The fungal RNA enrichment procedure used here will help a better characterization of the C. albicans response in infected hosts and may be applied to other microbial pathogens. IMPORTANCE: Understanding the mechanisms utilized by pathogens to infect and cause disease in their hosts is crucial for rational drug development. Transcriptomic studies may help investigations of these mechanisms by determining which genes are expressed specifically during infection. 
This task has been difficult so far, since the proportion of microbial biomass in infected tissues is often extremely low, thus limiting the depth of sequencing and comprehensive transcriptome analysis. Here, we adapted a technology to capture and enrich C. albicans RNA, which was next used for deep RNA sequencing directly from infected tissues from two different host organisms. The high-resolution transcriptome revealed a large number of genes that were so far unknown to participate in infection, which will likely constitute a focus of study in the future. More importantly, this method may be adapted to perform transcript profiling of any other microbes during host infection or colonization.
Abstract:
The analysis of rockfall characteristics and spatial distribution is fundamental to understanding and modelling the main factors that predispose to failure. In our study we analysed LiDAR point clouds aiming to: (1) detect and characterise single rockfalls; (2) investigate their spatial distribution. To this end, different cluster algorithms were applied: 1a) Nearest Neighbour Clutter Removal (NNCR) in combination with Expectation-Maximization (EM) in order to separate feature points from clutter; 1b) a density-based algorithm (DBSCAN) was applied to isolate the single clusters (i.e. the rockfall events); 2) finally we computed Ripley's K-function to investigate the global spatial pattern of the extracted rockfalls. The method allowed proper identification and characterization of more than 600 rockfalls that occurred on a cliff located in Puigcercos (Catalonia, Spain) during a time span of six months. The spatial distribution of these events showed that the rockfalls were clustered at a well-defined distance range. Computations were carried out using the free R software for statistical computing and graphics. Understanding the spatial distribution of precursory rockfalls may shed light on the forecasting of future failures.
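DBSCAN, used in step 1b, groups points that are density-reachable within a radius eps and labels sparse points as noise. A minimal brute-force 2D sketch of the algorithm with toy coordinates (not the Puigcercos data, and not the R implementation the authors used):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN for 2D points: returns one cluster id per point,
    with -1 marking noise. min_pts counts the point itself; the
    neighbour search is brute-force O(n^2), fine for a sketch."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:          # not a core point: noise (for now)
            labels[i] = -1
            continue
        labels[i] = cid                # start a new cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:        # former noise becomes a border point
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb2 = neighbors(j)
            if len(nb2) >= min_pts:    # core point: expand the cluster
                queue.extend(nb2)
        cid += 1
    return labels

# Two dense groups and one isolated point:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 5)]
print(dbscan(pts, eps=2.0, min_pts=3))  # [0, 0, 0, 1, 1, 1, -1]
```

In the rockfall application each cluster of change-detection points corresponds to one rockfall event, and the noise label filters out residual clutter.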
Abstract:
This work studies the multi-label classification of turns in Simple English Wikipedia talk pages into dialog acts. The dataset used was created and multi-labeled by (Ferschke et al., 2012). The first part analyses the dependencies between labels in order to examine the annotation coherence and to determine a classification method. Then, a multi-label classification is computed after transforming the problem into binary relevance. Regarding features, whereas (Ferschke et al., 2012) use features such as uni-, bi- and trigrams, time distance between turns, or the indentation level of the turn, other features are considered here: lemmas, part-of-speech tags and the meaning of verbs (according to WordNet). The dataset authors applied approaches such as Naive Bayes or Support Vector Machines. The present paper proposes, as an alternative, to extend linear discriminant analysis using Schoenberg transformations which, following the example of kernel methods, transform the original Euclidean distances into other Euclidean distances in a space of high dimensionality.
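The binary-relevance transformation mentioned above decomposes the multi-label problem into one independent binary problem per label. A minimal sketch with made-up turns and dialog-act labels (not the Ferschke et al. dataset):

```python
def binary_relevance(samples, label_sets, all_labels):
    """Binary-relevance decomposition: one independent binary dataset
    per label; each sample keeps its features and gets a yes/no target."""
    return {lbl: [(x, lbl in ys) for x, ys in zip(samples, label_sets)]
            for lbl in all_labels}

# Hypothetical talk-page turns with made-up dialog-act labels:
turns = ["thanks for fixing this", "please cite a source"]
acts = [{"politeness"}, {"request", "criticism"}]
problems = binary_relevance(turns, acts, ["politeness", "request", "criticism"])
print(problems["request"])
# [('thanks for fixing this', False), ('please cite a source', True)]
```

Any single-label classifier can then be trained on each binary dataset independently, at the cost of ignoring the label dependencies the first part of the paper analyses.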
Abstract:
Irritant dermatitis is described as a reversible, non-immunological reaction characterized by lesions of highly variable appearance, ranging from simple redness to the formation of blisters or even necrosis, accompanied by pruritus or a burning sensation following the application of a chemical substance. Since the 1940s, the traditional predictive test for skin irritation has been the Draize test: a chemical is applied to shaved rabbit skin for 4 h and the skin is examined at 24 h for clinical signs of irritation. This method, questionable on both ethical and qualitative grounds, nevertheless remains the most widely used. Since the early 2000s, new in vitro methods have been developed, such as the reconstructed human epidermis (RHE) model, a multilayer of well-differentiated keratinocytes obtained from an oocyte-donation culture. However, besides being very costly, this method agrees with the human in vivo test in at best 76% of cases. There is therefore a need for a new in vitro method that better reproduces the anatomical and physiological reality found in vivo. Our objective was to develop such a method. We worked with human skin taken directly after abdominoplasty. After preparation with a dermatome, a knife with an adjustable blade for cutting skin to the desired thickness, the skin is mounted in a flow-through diffusion cell system. The stratum corneum is then optimally exposed to 1 ml of the test chemical for 4 h. The skin sample is then fixed in formaldehyde for the preparation of standard hematoxylin-and-eosin slides. Irritation is assessed according to the histopathological criteria of spongiosis, necrosis and cellular vacuolization. The results of this first battery of tests are more than promising. Compared with the in vivo results, we obtained 100% concordance for the same 4 substances tested, irritant or non-irritant, which is superior to the reconstructed human epidermis model (76%). Moreover, the coefficient of variation between the 3 series is below 0.1, indicating good reproducibility within a single laboratory. In the future this method will have to be tested with a larger number of chemicals and its reproducibility evaluated in different laboratories, but this very encouraging first evaluation opens valuable avenues for the future of irritation testing.
Abstract:
Simultaneous localization and mapping (SLAM) is a central problem in mobile robotics. Many solutions have been proposed during the last two decades, yet few studies have considered the use of multiple sensors simultaneously. The solution here is based on combining several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately, in parallel, and then at the end of each step fuse the results into one solution. The second is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented: the first method is almost four times faster than the second.
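The single-filter approach can be illustrated with a scalar Kalman measurement update applied once per sensor within the same step; the thesis uses a full EKF over the SLAM state, so this is only a toy sketch with made-up numbers:

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: state estimate x with
    variance p, measurement z with noise variance r."""
    k = p / (p + r)                   # Kalman gain
    return x + k * (z - x), (1 - k) * p

# Fusing two measurement sources in one filter step: apply one
# update per sensor; each update shrinks the state variance.
x, p = 0.0, 1.0                       # prior
x, p = kalman_update(x, p, 1.0, 0.5)  # sensor A
x, p = kalman_update(x, p, 1.2, 0.5)  # sensor B
print(round(x, 3), round(p, 3))       # 0.88 0.2
```

The parallel-filters alternative would instead run one such update chain per sensor on separate state copies and merge the estimates afterwards, which is where the reported factor-of-four runtime difference between the two architectures comes from.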
Abstract:
The goal of this thesis is to implement software for creating 3D models from point clouds. Point clouds are acquired with stereo cameras, monocular systems or laser scanners. The created 3D models are triangular models or NURBS (Non-Uniform Rational B-Splines) models. Triangular models are constructed from selected areas of the point clouds, and the resulting triangular models are translated into a set of quads. The quads are further translated into an estimated grid structure used for NURBS surface approximation. Finally, we obtain a set of NURBS surfaces that represent the whole model. The problem was not easy to solve: the selected triangular surface reconstruction algorithm did not deal well with noise in the point clouds. To handle this, a clustering method is introduced for simplifying the model and removing noise. As the smaller point clouds produced by clustering gave better results, we used the points in clusters to better estimate the grids for the NURBS models. The overall results were good when the point cloud did not contain much noise: point clouds with a small amount of error produced solid triangular models, and NURBS surface reconstruction performed well on solid models.