956 results for Multimodal Man-Machine Interface


Relevance: 20.00%

Abstract:

Isoprene (ISO), the most abundant non-methane VOC, is the major contributor to secondary organic aerosol (SOA) formation. The mechanisms involved in this transformation, however, are not fully understood. Current mechanisms, which are based on the oxidation of ISO in the gas phase, underestimate SOA yields. The heightened awareness that ISO is only partially processed in the gas phase has turned attention to heterogeneous processes as alternative pathways toward SOA.

During my research project, I investigated the photochemical oxidation of isoprene in bulk water. Below, I will report on the λ > 305 nm photolysis of H2O2 in dilute ISO solutions. This process yields C10H15OH species as primary products, whose formation both requires and is inhibited by O2. Several isomers of C10H15OH were resolved by reverse-phase high-performance liquid chromatography and detected as MH+ (m/z = 153) and MH+-18 (m/z = 135) signals by electrospray ionization mass spectrometry. This finding is consistent with the addition of ·OH to ISO, followed by HO-ISO· reactions with ISO (in competition with O2) leading to second generation HO(ISO)2· radicals that terminate as C10H15OH via β-H abstraction by O2.
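The mechanism proposed above can be summarized schematically (a transcription of the text, with HO2· as the assumed co-product of the β-H abstraction step):

```latex
\begin{aligned}
\cdot\mathrm{OH} + \mathrm{C_5H_8} &\rightarrow \mathrm{HO\text{-}C_5H_8}^{\cdot} \\
\mathrm{HO\text{-}C_5H_8}^{\cdot} + \mathrm{C_5H_8} &\rightarrow \mathrm{HO(C_5H_8)_2}^{\cdot} \quad (\text{in competition with } \mathrm{O_2}) \\
\mathrm{HO(C_5H_8)_2}^{\cdot} + \mathrm{O_2} &\rightarrow \mathrm{C_{10}H_{15}OH} + \mathrm{HO_2}^{\cdot}
\end{aligned}
```

Note that the terminal product C10H16O is consistent with the observed MH+ signal at m/z = 153.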

It is not generally realized that chemistry on the surface of water cannot be deduced, extrapolated, or translated from chemistry in the bulk gas and liquid phases. The water density drops a thousand-fold within a few angstroms across the gas-liquid interfacial region, so hydrophobic VOCs such as ISO will likely remain in these relatively 'dry' interfacial water layers rather than proceed into bulk water. Previous experiments from our laboratory found that gas-phase olefins can be protonated on the surface of pH < 4 water. This phenomenon increases the residence time of gases at the interface, making them increasingly susceptible to interaction with gaseous atmospheric oxidants such as ozone and hydroxyl radicals.

To test this hypothesis, I carried out experiments in which ISO(g) collides with the surface of aqueous microdroplets of various compositions. Herein I report that ISO(g) is oxidized into soluble species via Fenton chemistry on the surface of aqueous Fe(II) chloride solutions simultaneously exposed to H2O2(g). Monomeric and oligomeric species (ISO)1-8H+ were detected via online electrospray ionization mass spectrometry (ESI-MS) on the surface of pH ~ 2 water, and were then oxidized into a suite of products whose combined yields exceed ~ 5% of (ISO)1-8H+. MS/MS analysis revealed that the products mainly consisted of alcohols, ketones, epoxides and acids. Our experiments demonstrate that olefins in ambient air may be oxidized upon impact on the surface of Fe-containing aqueous acidic media, such as those typical of tropospheric aerosols.

Related experiments were also carried out in which the surface oxidation of ISO(g) by ·OH radicals was tested by photolyzing dissolved H2O2 at 266 nm at various pH values. The products were analyzed via online electrospray ionization mass spectrometry. As in our Fenton experiments, we detected (ISO)1-7H+ at pH < 4, along with new m/z+ = 271 and m/z- = 76 products at pH > 5.

Relevance: 20.00%

Abstract:

Although numerous theoretical efforts have been put forth, a systematic, unified and predictive theoretical framework able to capture all the essential physics of the interfacial behavior of ions, such as the Hofmeister series, the Jones-Ray effect and the effect of salt on bubble coalescence, remains an outstanding challenge. The most common approach to treating electrostatic interactions in the presence of salt ions is Poisson-Boltzmann (PB) theory. However, there are many systems for which PB theory fails to offer even a qualitative explanation of the behavior, especially for ions distributed in the vicinity of an interface with dielectric contrast between the two media (such as the water-vapor or water-oil interface). A key factor missing in PB theory is the self energy of the ion.

In this thesis, we develop a self-consistent theory that treats the electrostatic self energy (including both the short-range Born solvation energy and the long-range image charge interactions), the nonelectrostatic contribution to the self energy, ion-ion correlation and the screening effect systematically within a single framework. By assuming a finite charge spread for the ion instead of using the point-charge model, the self energy obtained by our theory is free of divergence problems and is continuous across the interface. This continuity allows ions on the water side and on the vapor/oil side of the interface to be treated in a unified framework. The theory involves a minimal set of ion parameters, such as the valency, radius and polarizability of the ions and the dielectric constants of the media, that are both intrinsic and readily available. The general theory is first applied to study the thermodynamic properties of the bulk electrolyte solution, where it shows good agreement with experimental results for the activity coefficient and the osmotic coefficient.
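For orientation, the short-range Born term referred to above has the standard textbook form (background, not a new result of the thesis): for an ion of valency z and radius a in a medium of dielectric constant ε,

```latex
u_{\mathrm{Born}} = \frac{(ze)^2}{8\pi\varepsilon_0\,\varepsilon\, a},
```

which diverges for a point charge (a → 0); replacing the point charge with a finite charge spread is what removes this divergence and makes the self energy continuous across the interface.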

Next, we address the effect of the local Born solvation energy on the bulk thermodynamics and interfacial properties of electrolyte solution mixtures. We show that the difference in solvation energy between the cations and the anions naturally gives rise to local charge separation near the interface and to a finite Galvani potential between two coexisting solutions. The miscibility of the mixture can either increase or decrease depending on the competition between the solvation energy and the translational entropy of the ions. The interfacial tension shows a non-monotonic dependence on the salt concentration: it increases linearly with the salt concentration at higher concentrations, and decreases approximately as the square root of the salt concentration in dilute solutions, in agreement with the Jones-Ray effect observed in experiments.
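Schematically, the reported concentration dependence of the interfacial tension can be written as (a restatement of the trend described above, not a formula taken from the thesis):

```latex
\gamma(c) - \gamma_0 \;\simeq\; -\,a\sqrt{c} \,+\, b\,c, \qquad a,\, b > 0,
```

so the square-root term dominates at low c (the Jones-Ray decrease) while the linear term dominates at high c.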

Next, we investigate image effects on the double layer structure and interfacial properties near a single charged plate. We show that the image charge repulsion creates a depletion boundary layer that cannot be captured by a regular perturbation approach. The correct weak-coupling theory must include the self energy of the ion due to the image charge interaction. The image force qualitatively alters the double layer structure and properties and gives rise to many non-PB effects, such as a nonmonotonic dependence of the surface energy on concentration and charge inversion. The image charge effect is then studied for electrolyte solutions between two plates. For two neutral plates, we show that depletion of the salt ions by the image charge repulsion results in a short-range attractive and long-range repulsive force. If cations and anions are of different valency, the asymmetric depletion leads to the formation of an induced electrical double layer. For two charged plates, the competition between the surface charge and the image charge effect can give rise to like-charge attraction.
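As textbook background for the image-charge repulsion invoked here: a point charge q at distance z from a planar interface between media of dielectric constants ε1 (containing the charge) and ε2 interacts with its image, giving the self energy

```latex
u_{\mathrm{im}}(z) \;=\; \frac{q^2}{16\pi\varepsilon_0\varepsilon_1 z}\,
\frac{\varepsilon_1-\varepsilon_2}{\varepsilon_1+\varepsilon_2},
```

which is positive (repulsive) for an ion in water (ε1 ≈ 80) near vapor or oil (ε2 ≈ 1-2), driving the ion depletion described above.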

Then, we study the inhomogeneous screening effect near the dielectric interface due to the anisotropic and nonuniform ion distribution. We show that the double layer structure and interfacial properties are drastically affected by inhomogeneous screening when the bulk Debye screening length is comparable to or smaller than the Bjerrum length. The width of the depletion layer is characterized by the Bjerrum length, independent of the salt concentration. We predict that the negative adsorption of ions at the interface increases linearly with the salt concentration, a behavior captured by neither the bulk screening approximation nor the WKB approximation. For asymmetric salts, the inhomogeneous screening enhances the charge separation in the induced double layer and significantly increases the value of the surface potential.
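The two length scales compared in this chapter are standard (background definitions, stated here for a 1:1 electrolyte of bulk number density c):

```latex
\ell_B = \frac{e^2}{4\pi\varepsilon_0\varepsilon k_B T}, \qquad
\kappa^{-1} = \left(8\pi \ell_B c\right)^{-1/2},
```

with ℓB ≈ 0.7 nm for water at room temperature; the inhomogeneous-screening regime discussed above corresponds to κ⁻¹ ≲ ℓB.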

Finally, to account for the ion specificity, we study the self energy of a single ion across the dielectric interface. The ion is considered to be polarizable: its charge distribution can be self-adjusted to the local dielectric environment to minimize the self energy. Using intrinsic parameters of the ions, such as the valency, radius, and polarizability, we predict the specific ion effect on the interfacial affinity of halogen anions at the water/air interface, and the strong adsorption of hydrophobic ions at the water/oil interface, in agreement with experiments and atomistic simulations.

The theory developed in this work represents the most systematic theoretical treatment of weak-coupling electrolytes to date. We expect the theory to be broadly useful for studying a wide range of structural and dynamic properties in physicochemical, colloidal, soft-matter and biophysical systems.

Relevance: 20.00%

Abstract:

This study examines the expression of female sexuality during labor and birth. Sexuality is understood from a broad perspective, as a central aspect of the individual that is present at every moment of life. We discuss female sexuality as that expressed by the woman at the moment of labor and birth, that is, her positive feelings, emotions, desires, sources of pleasure, exchange, communication and affection, expressed and experienced by the woman at that moment. Our objectives were to describe sexuality as seen by women who experienced vaginal birth; to analyze the relationship between sexuality and childbirth from the perspective of women who experienced vaginal birth; and to discuss the relations and expressions of sexuality experienced by women during vaginal birth. This is a qualitative, exploratory study set in two maternity hospitals in Rio de Janeiro. Eleven women in the immediate postpartum period following physiological births took part. Data were collected through semi-structured interviews and examined through Content Analysis. Two categories emerged from the accounts: sexuality as understood by the interviewees, and sexuality and its interface with the moment of birth: a relation grounded in the woman's experience. The most significant results were as follows. In the first category, we found that the women initially had difficulty talking about sexuality, yet understood it through the associations they made, namely sex/sexual intercourse; positive sensations and feelings; and body image. In the second category, we found an affirmation of sexuality as present in childbirth. The association between sexuality and the birthing process was verbalized and expressed by the women on the basis of their personal experiences, which are interwoven with their sociocultural daily lives.
In this way, they indicated that sexuality is present in childbirth, where it is demonstrated in the woman's reproductive sexual role, in which we observed female satisfaction and pleasure in the birth of the child, and in the female power to give birth, in which the women expressed satisfaction and pleasure in their strength and potential during labor.

Relevance: 20.00%

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario in which the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, which depends on the test distribution set by the problem but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the use of weights regarding its effect on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm that determines if, for a given set of weights, the out-of-sample performance will improve or not in a practical setting. This is necessary as the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is their main advantage over previous algorithms proposed in the covariate-shift literature.

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing the means to discriminate groups of animals, for example according to their genetic line.

Relevance: 20.00%

Abstract:

In this work, a new family of methods for the optimization of multimodal problems is proposed. In these techniques, initial solutions are first generated in order to explore the search space. Next, so that more than one optimum can be found, these solutions are grouped into subspaces using a fuzzy clustering algorithm. Finally, local searches are performed with deterministic optimization methods within each subspace generated in the previous phase in order to find the corresponding local optimum. The family comprises six variants, combining three solution-initialization schemes in the first phase with two local search algorithms in the third. To evaluate the new family of methods, its members are compared with other methodologies on problems from the literature, and the results achieved are promising.
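A minimal sketch of the three phases on a toy objective (the function and all parameters are hypothetical; a hard k-means stands in for the fuzzy clustering, and gradient descent with a numerical derivative stands in for the deterministic local search):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D multimodal objective with minima at x = -1 and x = 2.
f = lambda x: ((x + 1.0) * (x - 2.0)) ** 2

# Phase 1: random initial solutions explore the search interval.
xs = rng.uniform(-4.0, 5.0, 300)
xs = xs[f(xs) < np.median(f(xs))]        # keep the more promising half

# Phase 2: group the solutions into subspaces (tiny 1-D k-means, k = 2).
centers = np.array([xs.min(), xs.max()])
for _ in range(50):
    labels = np.argmin(np.abs(xs[:, None] - centers[None, :]), axis=1)
    centers = np.array([xs[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(2)])

# Phase 3: deterministic local search inside each subspace.
def local_search(x, lr=0.01, steps=3000, h=1e-6):
    for _ in range(steps):
        x -= lr * (f(x + h) - f(x - h)) / (2 * h)   # numerical gradient step
    return x

optima = sorted(local_search(c) for c in centers)
print(optima)
```

Each cluster yields one local optimum, which is the point of the clustering phase: a single global search would return only one of the two minima.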

Relevance: 20.00%

Abstract:

The use of self-etching primers and of adhesive-precoated brackets has been presented as an alternative for reducing clinical steps. The purpose of this study was to evaluate the effect of a self-etching primer (Transbond Plus Self-Etching Primer - SEP) on the shear bond strength of adhesive-precoated brackets bonded in vivo. The sample consisted of 92 teeth obtained from 23 patients previously scheduled for extraction of four premolars. The teeth were divided into 4 groups, with the brackets bonded by the same operator, alternating quadrants in each patient: Group 1 (control) - 37% phosphoric acid + primer (Transbond XT Primer) + composite (Transbond XT Adhesive Paste) + conventional bracket; Group 2 - 37% phosphoric acid + primer + adhesive-precoated bracket; Group 3 - SEP + composite + conventional bracket; Group 4 - SEP + adhesive-precoated bracket. After 30 days the premolars were extracted and subjected to shear bond strength testing in a universal testing machine at a crosshead speed of 0.5 mm/min. The data were analyzed by two-way ANOVA (p < 0.05). The mean bond strengths and standard deviations were: Group 1 = 11.35 (2.36) MPa; Group 2 = 9.77 (2.49) MPa; Group 3 = 10.89 (2.60) MPa; and Group 4 = 10.16 (2.75) MPa. No significant difference was observed between the use of SEP and that of conventional etchant and primer (p = 0.948). However, significant differences in bond strength were observed when adhesive-precoated brackets were used (p = 0.032). It can be concluded that the combination of the self-etching primer with the adhesive-precoated bracket produced adequate bond strength values and is promising for clinical use.

Relevance: 20.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there are not many imaged objects. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach them) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
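The weight-control machinery can be illustrated with a deliberately tiny model (not the simulator of this thesis): a photon performing a symmetric one-dimensional walk between depth levels, where absorption is folded into a photon weight and Russian roulette (the statistical inverse of photon splitting) terminates low-weight photons without bias. The albedo, photon counts and thresholds below are all assumed for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

ALBEDO = 0.9   # per-step survival probability of the toy medium (assumed)
N = 20000      # photons per estimate

def walk(weighted):
    """One photon launched to depth 1; returns the weight scored when (and
    if) the photon re-emerges at depth 0."""
    depth, w = 1, 1.0
    while depth > 0:
        depth += 1 if rng.random() < 0.5 else -1
        if weighted:
            w *= ALBEDO                  # fold absorption into the weight
            if w < 1e-3:                 # Russian roulette: unbiased kill
                if rng.random() < 0.1:
                    w /= 0.1             # survivors carry 10x the weight
                else:
                    return 0.0
        elif rng.random() >= ALBEDO:     # analog: terminate on absorption
            return 0.0
    return w if weighted else 1.0

weighted_R = sum(walk(True) for _ in range(N)) / N
analog_R = sum(walk(False) for _ in range(N)) / N
# First-passage generating function of the symmetric walk gives the exact
# escaped fraction: E[ALBEDO**T] = (1 - sqrt(1 - ALBEDO**2)) / ALBEDO.
exact_R = (1.0 - np.sqrt(1.0 - ALBEDO**2)) / ALBEDO
print(weighted_R, analog_R, exact_R)
```

Both estimators target the same quantity; the weighted version keeps every escaping photon contributing a fractional weight instead of discarding most photons outright, which is the kind of variance control that makes large speedups possible.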

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better than that. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train the different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
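The committee-of-experts routing can be sketched on synthetic data (the signals are entirely hypothetical; a learned decision stump stands in for the classification model and per-structure linear least squares for the regression experts):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 32  # pixels per synthetic depth profile (a stand-in for an A-scan)

def make_scan(kind):
    """kind 0: one layer (one boundary p); kind 1: two layers (p1 < p2)."""
    x = np.full(L, 0.1)
    if kind == 0:
        p = rng.integers(8, 25)
        x[:p] = 1.0
        truth = [p, 0]
    else:
        p1, p2 = rng.integers(5, 13), rng.integers(18, 28)
        x[:p1], x[p1:p2] = 1.0, 0.6
        truth = [p1, p2]
    return x + rng.normal(0.0, 0.02, L), np.array(truth, float)

def dataset(n):
    kinds = rng.integers(0, 2, n)
    pairs = [make_scan(k) for k in kinds]
    return np.array([p[0] for p in pairs]), np.array([p[1] for p in pairs]), kinds

X, Y, kinds = dataset(400)

# Classification stage: a decision stump on the fraction of mid-intensity
# pixels, with the threshold learned from the training data.
feat = lambda M: np.mean(np.abs(M - 0.6) < 0.15, axis=1)
thr = 0.5 * (feat(X)[kinds == 0].mean() + feat(X)[kinds == 1].mean())
classify = lambda M: (feat(M) > thr).astype(int)

# Regression stage: one linear model per structure, trained only on the
# scans of that structure (the "expert" for that structure).
def fit(Xc, Yc):
    A = np.hstack([Xc, np.ones((len(Xc), 1))])   # bias column
    return np.linalg.lstsq(A, Yc, rcond=None)[0]

W = {k: fit(X[kinds == k], Y[kinds == k]) for k in (0, 1)}

def predict(M):
    """Route each scan through the classifier to its structure's expert."""
    k = classify(M)
    A = np.hstack([M, np.ones((len(M), 1))])
    out = np.empty((len(M), 2))
    for c in (0, 1):
        out[k == c] = A[k == c] @ W[c]
    return k, out
```

The routing is the essential idea: a regressor trained on a single known structure faces a far easier problem than one that must handle every structure at once.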

It is worth pointing out that solving the inverse problem automatically improves the imaging depth: the lower half of an OCT image (i.e., greater depth) could previously hardly be seen, but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is another case in which artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful reconstruction of OCT images at the pixel level but also the first attempt at one. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 20.00%

Abstract:

The purpose of the present study was to analyze the effect of applying multiple consecutive coats of two two-step etch-and-rinse adhesive systems on resin diffusion and on the distribution pattern of the resin monomer components. Sixteen sound human third molars were treated with the two-step etch-and-rinse adhesive systems either according to the manufacturers' instructions or with multiple consecutive coats. The specimens were sectioned parallel to the dentinal tubules and the surfaces were polished with 600-, 1200-, 1800-, 2000- and 4000-grit abrasive papers. Raman spectra were collected along a line perpendicular to the adhesive-resin interface at intervals of 1 or 2 μm. Adhesive resin diffusion and the distribution of the monomer components were assessed from the Raman peaks at 1113 cm-1, 1609 cm-1 and 1454 cm-1. The demineralization gradient used to determine the hybridized region was assessed from the apatite peak at 960 cm-1. According to the results, the application of multiple coats tended to homogenize the polymeric components, depending on the chemical composition of the adhesive resin.

Relevance: 20.00%

Abstract:

The study of emotions in human-computer interaction is a growing research area. This paper presents an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several machine learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was chosen to check the validity of the proposed approach. The results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best machine learning paradigm for automatic emotion recognition across all feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. To check the soundness of the proposed process, a greedy search approach (FSS-Forward) was also applied, and a comparison between the two is provided. Based on the results achieved, a set of the most relevant non-speaker-dependent features is proposed for both languages, and new perspectives are suggested.
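The combination described above, an instance-based learner scored under evolutionary feature subset selection, can be sketched as follows (synthetic data in place of RekEmozio; a deliberately small genetic algorithm with elitism, and leave-one-out 1-NN accuracy as the fitness function):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the speech-feature data: 60 samples, 10 features,
# of which only the first two carry class information.
n, d = 60, 10
y = np.repeat([0, 1], n // 2)
X = rng.normal(0.0, 1.0, (n, d))
X[:, :2] += np.where(y[:, None] == 1, 1.5, -1.5)

def loo_1nn(mask):
    """Leave-one-out accuracy of a 1-NN classifier on the masked features."""
    if not mask.any():
        return 0.0
    Z = X[:, mask.astype(bool)]
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)
    np.fill_diagonal(D, np.inf)            # exclude the sample itself
    return np.mean(y[D.argmin(axis=1)] == y)

# Simple generational GA with elitism over binary feature masks.
pop = rng.integers(0, 2, (20, d))
pop[0] = 1                                 # seed the all-features baseline
for gen in range(30):
    fit = np.array([loo_1nn(m) for m in pop])
    order = np.argsort(fit)[::-1]
    elite = pop[order[:4]]                 # keep the 4 best unchanged
    children = []
    while len(children) < len(pop) - 4:
        a, b = pop[rng.choice(order[:10], 2)]     # parents from top half
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(d) < 0.1         # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite] + children)

fit = np.array([loo_1nn(m) for m in pop])
best = pop[fit.argmax()]
print(best, fit.max())
```

Because the all-features mask is seeded into the initial population and elitism never discards the best individual, the selected subset can only match or beat the all-features baseline, mirroring the comparison made in the paper.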