831 results for HOMOGENEOUS SAMPLE
Abstract:
Breast cancer is the most common cancer in women. It remains the leading cause of death among women aged 35 to 55. In Canada, more than 20,000 new cases are diagnosed each year. Scientific studies show that life expectancy is closely tied to early diagnosis. Current diagnostic tools such as mammography, ultrasound, and biopsy have certain limitations. For example, mammography can detect a suspicious mass in the breast but cannot determine its nature (benign or malignant). Complementary imaging techniques such as ultrasound or magnetic resonance imaging (MRI) are then used, but they are limited in diagnostic sensitivity and specificity, particularly in young women (< 50 years) or those with dense breast parenchyma. As a result, many women undergo biopsy even though their lesions are benign. Several research directions have recently been pursued to reduce the uncertainty of ultrasound-based diagnosis. In this context, dynamic elastography is promising. Inspired by the clinical gesture of palpation, this technique is based on determining tissue stiffness, since lesions are generally stiffer than the surrounding healthy tissue. Its principle is to generate shear waves and study their propagation in order to recover the mechanical properties of the medium through a predefined inverse problem. This thesis aims to develop a new dynamic elastography method for the early detection of breast lesions. One of the main problems of radiation-force-based dynamic elastography techniques is the strong attenuation of shear waves.
After a few wavelengths of propagation, displacement amplitudes decrease considerably and tracking them becomes difficult or even impossible. This problem severely affects the characterization of biological tissues. Moreover, these techniques provide only elasticity information, while recent studies show that some benign lesions have the same elasticity as malignant ones, which reduces the specificity of these techniques and motivates the quantification of additional mechanical parameters (e.g., viscosity). The first objective of this thesis is to optimize the acoustic radiation pressure in order to enhance the amplitude of the generated displacements. To this end, an analytical model predicting the optimal frequency for generating the radiation force was developed. Once validated in vitro, this model was used to predict optimal radiation-force frequencies in further in vitro and ex vivo experiments on breast tissue samples obtained after total mastectomy. Building on this work, a prototype ultrasound probe was developed for the generation of a specific type of shear wave called a "torsional wave". The goal is to use the optimized radiation force to generate adaptive shear waves and to demonstrate their usefulness in improving displacement amplitudes. Unlike conventional elastography techniques, this prototype can generate shear waves along adaptive paths (e.g., circular, elliptical, etc.) depending on the shape of the lesion. Optimizing the energy deposition induces a better mechanical response of the tissue and improves the signal-to-noise ratio for better quantification of the viscoelastic parameters.
A further aim is to consolidate previous research with experimental support and to prove that this particular type of torsional wave can drive structures into resonance. This structural resonance further enhances the displacement contrast between suspicious masses and the surrounding medium for better detection. Finally, as part of the quantification of tissue viscoelastic parameters, the last step is to develop an inverse model based on the propagation of adaptive shear waves for estimating the viscoelastic parameters. The estimation is performed by solving an inverse problem embedded in a finite element numerical model. The robustness of this model was studied in order to determine its limits of use. The results obtained with this model are compared with results on the same samples obtained by reference methods (e.g., Rheospectris) in order to estimate the accuracy of the developed method. Quantifying the mechanical parameters of lesions improves the sensitivity and specificity of the diagnosis. Tissue characterization also allows better identification of the lesion type (malignant or benign) and of its evolution. This technique greatly assists clinicians in choosing and planning appropriate patient care.
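The stiffness contrast that elastography exploits can be made concrete with the textbook relation between shear wave speed and shear modulus. This is a minimal sketch only, not the thesis's FEM-based inverse model: in a purely elastic, homogeneous medium the shear modulus is mu = rho * c_s^2, and the density and wave speeds below are illustrative assumptions.

```python
def shear_modulus(rho, c_s):
    """Shear modulus mu = rho * c_s**2 (in Pa) of a purely elastic,
    homogeneous medium, given density rho (kg/m^3) and shear wave
    speed c_s (m/s). Illustrative relation, not the thesis's model."""
    return rho * c_s ** 2

rho = 1000.0                  # assumed soft-tissue density, kg/m^3
c_soft, c_stiff = 2.0, 5.0    # illustrative shear wave speeds, m/s

# A lesion where shear waves travel at 5 m/s is over six times more
# rigid than surrounding tissue at 2 m/s: 25 kPa versus 4 kPa.
print(shear_modulus(rho, c_soft), shear_modulus(rho, c_stiff))
```

In practice the thesis estimates viscoelastic (not just elastic) parameters through a finite element inverse problem, but this relation conveys why tracking shear wave propagation reveals stiffness.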
Abstract:
The photothermal effect refers to the heating of a sample due to the absorption of electromagnetic radiation. Photothermal (PT) heat generation, an example of energy conversion, has in general three kinds of applications: (1) PT material probing, (2) PT material processing, and (3) PT material destruction, with the temperatures involved increasing from (1) to (3). Of these three, PT material probing makes the most significant contribution to science and technology. Photothermal material characterization relies on high-sensitivity detection techniques to monitor the effects caused by PT heating of a sample. The photothermal method is a powerful, high-sensitivity, non-contact tool for non-destructive thermal characterization of materials. This high sensitivity has led to its application in the analysis of low-absorbance samples. Laser calorimetry, photothermal radiometry, the pyroelectric technique, the photoacoustic technique, the photothermal beam deflection technique, etc., come under the broad class of photothermal techniques. The choice of a suitable technique depends on the nature of the sample, the purpose of the measurement, the nature of the light source used, etc. The present investigations are carried out on polymer thin films employing the photothermal beam deflection technique for the determination of their thermal diffusivity. Here the sample is excited by a He-Ne laser (λ = 6328 Å), which acts as the pump beam. Due to the refractive index gradient established at the sample surface and in the adjacent coupling medium, another optical beam, called the probe beam (diode laser, λ = 6500 Å), experiences a deflection when passed through this region. It is detected using a position-sensitive detector whose output is fed to a lock-in amplifier, from which the amplitude and phase of the deflection can be directly obtained.
The amplitude and phase of the signal are suitably analysed to determine the thermal diffusivity. The production of polymer thin-film samples has gained considerable attention over the past few years. Plasma polymerization is an inexpensive tool for fabricating organic thin films. It refers to the formation of polymeric materials under the influence of a plasma, which is generated by some kind of electric discharge. Here a plasma of the monomer vapour is generated by employing radio-frequency (MHz) techniques. Plasma polymerization yields homogeneous, highly adhesive, thermally stable, pinhole-free, dielectric, highly branched and cross-linked polymer films. The likely linkages formed in the polymers are suggested by comparing the FTIR spectra of the monomer and the polymer. Near-IR overtone investigations on some organic molecules using the local mode model are also carried out. Higher vibrational overtones often provide spectral simplification and greater resolution of peaks corresponding to nonequivalent X-H bonds, where X is typically C, N or O. Vibrational overtone spectroscopy of molecules containing X-H oscillators is now a well-established tool for molecular investigations. Conformational and steric differences between bonds and the structural inequivalence of CH bonds (methyl, aryl, acetylenic, etc.) are resolvable in the higher overtone spectra. The local mode model, in which the X-H oscillators are considered to be loosely coupled anharmonic oscillators, has been widely used for the interpretation of overtone spectra. If a single local oscillator is excited from the vibrational ground state to the vibrational state v, the transition energy of the local mode overtone is given by ΔE(0→v) = Av + Bv². A plot of ΔE/v versus v yields A, the local mode frequency, as the intercept and B, the local mode diagonal anharmonicity, as the slope.
Here A - B gives the mechanical frequency X1 of the oscillator and B = X2 is the anharmonicity of the bond. The local mode parameters X1 and X2 vary for non-equivalent X-H bonds and are sensitive to the inter- and intramolecular environment of the X-H oscillator.
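The extraction of the local mode parameters from the relation ΔE(0→v) = Av + Bv² can be sketched numerically: dividing by v gives the straight line ΔE/v = A + Bv, so a linear fit recovers A as the intercept and B as the slope. The parameter values below (A = 3100 cm⁻¹, B = −60 cm⁻¹) are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical local mode parameters for an X-H oscillator (cm^-1).
A_true, B_true = 3100.0, -60.0

v = np.arange(1, 7, dtype=float)      # overtone quantum numbers
dE = A_true * v + B_true * v ** 2     # transition energies ΔE(0→v) = A v + B v^2

# Straight-line fit of ΔE/v versus v: slope = B, intercept = A.
B_fit, A_fit = np.polyfit(v, dE / v, 1)

# A - B gives the mechanical frequency X1; B is the anharmonicity X2.
X1 = A_fit - B_fit
print(A_fit, B_fit, X1)
```

With real overtone spectra the fitted line would of course carry measurement scatter; here the synthetic energies are exact, so the fit returns the assumed parameters.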
Abstract:
Spatial and temporal analyses of the spectra of the laser-induced plasma from a polytetrafluoroethylene (PTFE) target, obtained with the 1.06 μm radiation from a Q-switched Nd:YAG laser, have been carried out. The spatially resolved spectra of the plasma emission show that molecular bands of C2 (Swan bands) and CN are very intense in the outer regions of the plasma, whereas higher ionized states of carbon predominate in the core region of the plasma emission. The vibrational temperature and the population distribution over the different vibrational levels have been studied as a function of laser energy. From the time-resolved studies, it has been observed that there are fairly large time delays for the onset of emission from all the species in the outer region of the plasma. The molecular bands in each region exhibit much larger time delays than the ionic lines in the plasma.
Abstract:
In this paper, a time series complexity analysis of dense array electroencephalogram signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics, for EEG recorded in three conditions: a passive, eyes-closed condition; a mental arithmetic task; and the same mental task carried out after a physically exerting task. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG, and that it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in brain state, with wider scope for applications in EEG-based brain studies.
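A minimal SampEn implementation illustrates the regularity-versus-complexity behaviour the statistic captures. This sketch uses the standard definition (Chebyshev distance, tolerance r times the signal's standard deviation); the conventional parameters m = 2 and r = 0.2, and the test signals, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy: -ln(A/B), where B counts pairs of length-m
    templates and A counts pairs of length-(m+1) templates lying
    within a Chebyshev distance r*std(x); self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(k):
        # N - m overlapping templates of length k, so that the counts
        # for lengths m and m + 1 are comparable
        templ = np.array([x[i:i + k] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templ) - 1):
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(dist < tol))
        return count

    A, B = count_matches(m + 1), count_matches(m)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)                 # irregular signal
sine = np.sin(np.linspace(0, 40 * np.pi, 2000))   # regular signal

# The regular signal yields a much lower entropy than the noise.
print(sample_entropy(sine), sample_entropy(white))
```

Lowered SampEn in a brain region, as reported for the mental task state, corresponds to the EEG there becoming more regular and predictable.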
Abstract:
This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method, the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported by results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure that is a function of the short-time zero-crossing rate and short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of the two, the first is simpler and requires far fewer computations; it is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is treated as side information, which enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC that switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments, respectively, is presented. Simulation results are provided to show the improved performance of the coder.
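The first classifier rests on a well-known contrast: voiced speech frames have high short-time energy and a low zero-crossing rate, while unvoiced (fricative-like) frames show the opposite. The rough sketch below computes both features; the fixed thresholds and synthetic frames are illustrative assumptions, not the thesis's actual distance measure.

```python
import numpy as np

def short_time_features(frame):
    """Short-time zero-crossing rate (fraction of sign changes between
    adjacent samples) and short-time energy of one speech frame."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    energy = np.mean(frame ** 2)
    return zcr, energy

def is_voiced(frame, zcr_thresh=0.25, energy_thresh=1e-3):
    """Illustrative voiced/unvoiced decision with fixed thresholds."""
    zcr, energy = short_time_features(frame)
    return bool(energy > energy_thresh and zcr < zcr_thresh)

t = np.linspace(0, 0.03, 480)                       # 30 ms frame at 16 kHz
voiced_frame = 0.5 * np.sin(2 * np.pi * 120 * t)    # pitch-like tone: few crossings
noise = np.random.default_rng(1).standard_normal(480)
unvoiced_frame = 0.01 * noise                       # weak noise: many crossings

print(is_voiced(voiced_frame), is_voiced(unvoiced_frame))   # True False
```

Combining the two features into a single distance measure, as the thesis does, makes the decision boundary less sensitive than these hard per-feature thresholds.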
Abstract:
Amino acid sequencing of the protein from the green alga Scenedesmus obliquus described as "28 kDa thioredoxin f" has shown that this protein is identical to the photosystem II protein known as OEE protein 1. The previously postulated possibility of a fusion of a thioredoxin with a protein of unknown nature, or of the insertion of a thioredoxin fragment carrying the typical -Trp-Cys-Gly-Pro-Cys- sequence into such a protein, was not confirmed. By applying a preparation method targeted at the 33 kDa OEE protein, it could be shown that the "28 kDa Trx f" is in fact localized in the thylakoid membranes. The protein can thus be isolated in high purity within one day from the thylakoid membrane fragments of a crude algal homogenate; the ability of the OEE protein to stimulate the chloroplast enzyme fructose bisphosphatase (FBPase) is retained. Using the same methods, the green algae Chlorella vulgaris and Chlamydomonas reinhardtii were examined for unusual proteins with Trx-f activity. The heat- and acid-stable protein fraction from Chlorella vulgaris contains a protein with a comparable molecular mass of 26 kDa that, as in Scenedesmus, stimulates chloroplast fructose bisphosphatase. No such activity is observed in the heat- and acid-stable protein extract from Chlamydomonas reinhardtii. A sample of the recombinant, homogeneous OEE protein from spinach was tested for stimulation of chloroplast FBPase and NADPH-dependent malate dehydrogenase (MDH). Spinach OEE protein 1 shows no activity with these enzymes. Since OEE protein 1 in Scenedesmus shows strong FBPase stimulation, whereas the other Scenedesmus thioredoxins with molecular masses of 12 kDa (Trx I and II) show high activity with the cellular ribonucleotide reductase, it is postulated that the OEE protein replaces the function of Trx f in vivo.
Abstract:
In this work, we present an atomistic-continuum model for simulations of ultrafast laser-induced melting processes in semiconductors, using silicon as an example. The kinetics of transient non-equilibrium phase transition mechanisms is addressed with the MD method on the atomic level, whereas laser light absorption, the strong generated electron-phonon nonequilibrium, fast heat conduction, and photo-excited free carrier diffusion are accounted for with a continuum TTM-like model (called nTTM). First, we independently consider the applications of nTTM and MD to the description of silicon, and then construct the combined MD-nTTM model. Its development and thorough testing are followed by a comprehensive computational study of the fast nonequilibrium processes induced in silicon by ultrashort laser irradiation. The new model allowed us to investigate the effect of laser-induced pressure and lattice temperature on the melting kinetics. Two competing melting mechanisms, heterogeneous and homogeneous, were identified in our large-scale simulations. Apart from the classical heterogeneous melting mechanism, nucleation of the liquid phase homogeneously inside the material contributes significantly to the melting process. The simulations showed that, due to the open diamond structure of the crystal, the laser-generated internal compressive stresses reduce the crystal's stability against homogeneous melting. Consequently, the latter can take on a massive character within several picoseconds of laser heating. Due to the large negative volume of melting of silicon, the material contracts upon the phase transition, relaxing the compressive stresses, and the subsequent melting proceeds heterogeneously until the excess thermal energy is consumed.
A series of simulations over a range of absorbed fluences allowed us to find the threshold fluence at which homogeneous liquid nucleation starts contributing to the classical heterogeneous propagation of the solid-liquid interface. A series of simulations over a range of material thicknesses showed that the sample width chosen in our simulations (800 nm) corresponds to a thick sample. Additionally, in order to support the main conclusions, the results were verified with a different interatomic potential. Possible improvements of the model to account for nonthermal effects are discussed, and certain restrictions on suitable interatomic potentials are identified. As a first step towards the inclusion of these effects in MD-nTTM, we performed nanometer-scale MD simulations with a new interatomic potential designed to reproduce ab initio calculations at a laser-induced electronic temperature of 18946 K. The simulations demonstrated that, similarly to thermal melting, the nonthermal phase transition occurs through nucleation. A series of simulations showed that higher (lower) initial pressure reinforces (hinders) the creation and growth of nonthermal liquid nuclei. For the example of Si, the laser melting kinetics of semiconductors was found to be noticeably different from that of metals with a face-centered cubic crystal structure. The results of this study therefore have important implications for the interpretation of experimental data on the melting kinetics of semiconductors.
Abstract:
Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition across the world's economies? We carry out an exploratory statistical analysis based on compositional data transformed by Aitchison logratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. Initial findings include: (1) sectorial components behaved in a correlated way, with building industries on one side and, less clearly, equipment industries on the other; (2) full-sample estimation shows a negative correlation between the durable goods and other buildings components, and between the transportation and building industries components; (3) countries with zeros in some components are mainly low-income countries at the bottom of the income category, and behaved in an extreme way, distorting the main results observed in the full sample; (4) after removing these extreme cases, conclusions seem not very sensitive to the presence of other isolated cases.
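The Aitchison approach maps a composition out of the simplex before standard statistics are applied. A minimal sketch of the centred logratio (clr) transform, one of the Aitchison logratio transformations, is shown below; the capital-composition values are hypothetical, not taken from the paper.

```python
import numpy as np

def clr(x):
    """Centred logratio transform: clr(x) = log(x / g(x)), where g(x)
    is the geometric mean of the parts. The input is first closed
    (rescaled to sum to 1); all parts must be strictly positive."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum(axis=-1, keepdims=True)                   # closure
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

# Hypothetical capital composition of one economy: equipment, buildings,
# transportation, durable goods.
comp = np.array([0.40, 0.35, 0.15, 0.10])
z = clr(comp)

# clr coordinates always sum to zero, and the transform is invariant
# to the overall scale of the raw composition.
print(z.sum())
```

The zero-sum constraint is also why zero components (finding 3 above) are problematic: the logratio is undefined at zero, so those countries need special treatment.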
Abstract:
The precision of released figures is not only an important quality feature of official statistics; it is also essential for a good understanding of the data. In this paper we present a case study of how precision can be conveyed when the multivariate nature of the data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach to the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data, whereby the wage components are broken down by economic activity classes. We then propose a number of ways to assess precision.
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data", and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
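The distinction between the two kinds of metric can be sketched for smoothed curves sampled on a common grid. This is an illustrative sketch under assumed inputs (e.g. already logratio-transformed, spline-smoothed trajectories), not the essay's exact metrics: centring each curve removes its level, leaving only shape differences.

```python
import numpy as np

def traj_distance(f, g, shape_only=False):
    """Approximate L2 dissimilarity between two trajectories sampled
    on a common grid. With shape_only=True, each curve is centred by
    subtracting its mean, so level differences are ignored and only
    shape differences remain."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    if shape_only:
        f, g = f - f.mean(), g - g.mean()
    return float(np.sqrt(np.mean((f - g) ** 2)))

t = np.linspace(0.0, 1.0, 100)
a = np.sin(2 * np.pi * t)
b = np.sin(2 * np.pi * t) + 0.5    # same shape, shifted level

# The shape-and-level metric sees the vertical shift;
# the shape-only metric does not.
print(traj_distance(a, b))                    # ~0.5
print(traj_distance(a, b, shape_only=True))   # ~0.0
```

Feeding one or the other distance matrix to a hierarchical or partitional clustering algorithm then yields classes that are homogeneous in level and shape, or in shape alone.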
Abstract:
Abstract taken from the publication.
Abstract:
Abstract taken from the publication. Abstract also available in English.
Abstract:
Abstract taken from the publication. Abstract also available in English.