Resumo:
Secondary organic aerosol (SOA) is produced in the atmosphere by oxidation of volatile organic compounds. Laboratory chambers are used to study the formation mechanisms and evolution of SOA under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.
During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well-understood, but vapor wall losses have received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
Particles reside in the atmosphere for a week or more and can evolve chemically during that time, a process termed aging. Simulating aging in laboratory chambers has proven to be challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and the fraction of organic mass at m/z 44, increased continuously starting after 5 h of photooxidation.
The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely matched that of SOA from the second parent hydrocarbon, suggesting that the mixed SOA particles resemble a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.
Resumo:
Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches to achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digitized signal. The most commonly used example of this conversion is digital PCR: by counting the number of reacted compartments (triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
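As a concrete illustration of that Poisson step, here is a minimal sketch (illustrative variable names and numbers, not code from the dissertation): the fraction of negative compartments estimates e^(-λ), where λ is the mean number of target copies per compartment.

```python
import math

def estimate_copies(n_positive, n_total, volume_per_partition_ul):
    """Estimate target amount from a digital PCR readout.
    With targets distributed randomly over partitions, the fraction of
    negative partitions is P(0) = exp(-lam), where lam is the mean
    number of copies per partition."""
    if n_positive >= n_total:
        raise ValueError("all partitions positive: above the dynamic range")
    lam = -math.log(1.0 - n_positive / n_total)   # copies per partition
    total_copies = lam * n_total                  # copies in the loaded sample
    conc = lam / volume_per_partition_ul          # copies per microliter
    return total_copies, conc

# e.g. 312 positive wells out of 765, 6 nL partitions (illustrative numbers)
print(estimate_copies(312, 765, volume_per_partition_ul=0.006))
```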
However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy has been examined from two angles: efficiency and robustness. The former is a critical factor for ensuring the accuracy of absolute quantification methods, and the latter is the premise for such technology to be practically implemented in diagnosis beyond the laboratory. The two angles are further framed into a “fate” and “rate” determination scheme, in which the influence of each parameter is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms have been used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.
This dissertation also contributes towards developing point-of-care (POC) tests in limited-resource settings. On one hand, it makes the tests more accessible by incorporating mass-producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and result feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, and HCV genotyping.
Resumo:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
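To make the EC2 idea concrete, the following is a highly simplified, noise-free sketch of EC2-style greedy test selection: hypotheses (parameterized theories) are grouped into equivalence classes (the theories), edges connect hypotheses in different classes with weight equal to the product of their priors, and the next test is the one expected to cut the most edge weight. The actual BROAD implementation additionally handles noisy responses, continuous parameters, and an accelerated (lazy) greedy rule, none of which is reproduced here.

```python
import numpy as np

def remaining_edge_weight(priors, classes, alive):
    """Weight of edges between surviving hypotheses in different classes,
    where each edge weight is the product of the two prior masses."""
    w = priors * alive
    total = w.sum() ** 2
    within = sum(w[classes == c].sum() ** 2 for c in np.unique(classes))
    return 0.5 * (total - within)

def choose_test(priors, classes, alive, predictions):
    """Greedy EC2-style choice: pick the test minimizing the expected
    remaining edge weight.  predictions[t, h] is hypothesis h's predicted
    (discrete) outcome of test t."""
    best_t, best_val = None, np.inf
    alive_mass = max((priors * alive).sum(), 1e-12)
    for t in range(predictions.shape[0]):
        expected = 0.0
        for o in np.unique(predictions[t]):
            consistent = alive & (predictions[t] == o)
            p_o = (priors * consistent).sum() / alive_mass
            expected += p_o * remaining_edge_weight(priors, classes, consistent)
        if expected < best_val:
            best_t, best_val = t, expected
    return best_t

# Toy run: 3 theories x 4 parameterizations each, 30 candidate binary-choice tests.
rng = np.random.default_rng(0)
priors = np.full(12, 1 / 12)
classes = np.repeat(np.arange(3), 4)        # which theory each hypothesis belongs to
predictions = rng.integers(0, 2, size=(30, 12))
alive, truth = np.ones(12, dtype=bool), 5   # hypothesis 5 generates the responses
while len(np.unique(classes[alive])) > 1:
    t = choose_test(priors, classes, alive, predictions)
    alive &= predictions[t] == predictions[t, truth]
print("surviving theory:", np.unique(classes[alive]))
```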
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
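For reference, the standard textbook forms of these discount functions for a delay t are shown below; the parameter labels are conventional ones and need not match the thesis's (α, β) notation exactly.

```latex
D_{\text{exponential}}(t) = \delta^{t}, \qquad
D_{\text{hyperbolic}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{quasi-hyperbolic}}(t) =
  \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}, \qquad
D_{\text{generalized hyperbolic}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
```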
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Resumo:
This dissertation addresses access to high-complexity services, particularly diagnostic and complementary examinations, studied among users of private health plans who seek specialized care and diagnosis. Since the 1980s, users of the public health system have been turning to supplementary (private) health care. However, whether access is actually guaranteed in the private domain through the purchase of health plans remains uncertain, and this uncertainty inspired the present research, which is justified by the relevance of actions that can improve the regulatory quality of health plans through the social control exercised by their users. The general objective is to analyze perceptions of access to high-complexity examinations in private health services among health plan users. The specific objectives are to describe health plan users' perceptions of access to high-complexity examinations; to analyze the motivations of private health plan users for undergoing high-complexity examinations through the private care network; and to analyze health plan users' level of satisfaction with access to high-complexity examinations. The methodology is qualitative-descriptive, with a sample of thirty health plan users over 18 years of age, selected in the field of study in 2010. The study setting was a private diagnostic medicine laboratory in Rio de Janeiro. The data collection techniques were a form and structured individual interviews. The form was analyzed using descriptive statistics, and the interviews using thematic-categorical content analysis. Health plan users stated that access to high-complexity examinations is easily guaranteed. Their main motivations for undergoing these examinations in the private care network were the speed of service; the flexibility and ease of scheduling via the internet, by telephone, or in person at the laboratory studied; prompt delivery of results; the difficulty and slowness of care in the SUS (the Brazilian public health system); the location of the accredited provider close to residential neighborhoods or the workplace; excellent diagnostic imaging resolution; and the user's ability to choose between open and closed modalities of magnetic resonance imaging and computed tomography, as well as bone densitometry, all of which were easily accessible to all research subjects. The level of satisfaction reflected the speed with which elective and urgent examinations were performed, which, according to the users, were nearly equivalent on the time scale. However, although users rated their health plans highly, some difficulties were raised, such as: expiration periods for pre-dated medical orders; authorization codes required by the plan operator; bureaucracy in scheduling procedures; difficulties of access to treatments such as implants, physiotherapy, RPG (postural reeducation), Pilates, home care, and check-up consultations; denial of reimbursements; restrictions on surgical materials, especially prostheses and orthoses; and specific degree restrictions for myopia surgery.
It is concluded that the rapid performance of high-cost imaging examinations in this sample was described as satisfactory, although the perception of speed may vary with the type of private health plan product purchased, and that regulatory improvement is needed in some specific aspects of supplementary health care.
Resumo:
A single-cell diagnostic technique for epithelial cancers is developed by utilizing laser trapping and Raman spectroscopy to differentiate cancerous and normal epithelial cells. Single-cell suspensions were prepared from surgically removed human colorectal tissues following standard primary culture protocols and examined in a near-infrared laser-trapping Raman spectroscopy system, where living epithelial cells were investigated one by one. A diagnostic model was built on the spectral data obtained from 8 patients and validated by the data from 2 new patients. Our technique has potential applications from epithelial cancer diagnosis to the study of cell dynamics of carcinogenesis. (c) 2006 Optical Society of America.
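The abstract does not state which algorithm the diagnostic model uses, so purely as a hypothetical illustration, a common pipeline for classifying single-cell Raman spectra with patient-level validation looks like the following (all names and parameters here are assumptions, not the authors' method):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# X: one baseline-corrected, normalized Raman spectrum per trapped cell
# y: 1 = cancerous, 0 = normal; patient_id: which patient each cell came from
def train_and_validate(X, y, patient_id, held_out_patients):
    """Fit on cells from the training patients, then validate on cells from
    patients never seen during training (per-patient hold-out)."""
    test = np.isin(patient_id, held_out_patients)
    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    model.fit(X[~test], y[~test])
    return model.score(X[test], y[test])   # accuracy on held-out patients
```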
Resumo:
The rhythm of division of 9 species belonging to different groups of algae was analysed in situ and in the laboratory. The research, carried out under different environmental conditions, attempted to establish the capacity of the algae under study for multiplication and chlorophyll assimilation, with a view to placing them in culture. The results obtained showed that the green multicellular algae (e.g. Ulothrix) and the blue-green algae (e.g. Lyngbya, Oscillatoria) are able to produce an appreciable quantity of dry matter, just as the unicellular algae do. At the same time, it emerges that among the numerous environmental factors, temperature plays one of the most important roles in the process of multiplication.
Resumo:
Our understanding of the processes and mechanisms by which secondary organic aerosol (SOA) is formed is derived from laboratory chamber studies. In the atmosphere, SOA formation is primarily driven by progressive photooxidation of SOA precursors, coupled with their gas-particle partitioning. In the chamber environment, SOA-forming vapors undergo multiple chemical and physical processes that involve production and removal via gas-phase reactions; partitioning onto suspended particles vs. particles deposited on the chamber wall; and direct deposition on the chamber wall. The main focus of this dissertation is to characterize the interactions of organic vapors with suspended particles and the chamber wall and explore how these intertwined processes in laboratory chambers govern SOA formation and evolution.
A Functional Group Oxidation Model (FGOM) is developed that represents SOA formation and evolution in terms of the competition between functionalization and fragmentation, the extent of oxygen atom addition, and the change of volatility. The FGOM contains a set of parameters that are determined by fitting the model to laboratory chamber data. The sensitivity of the model prediction to variation of the adjustable parameters allows one to assess the relative importance of various pathways involved in SOA formation.
A critical aspect of the environmental chamber is the presence of the wall, which can induce deposition of SOA-forming vapors and promote heterogeneous reactions. An experimental protocol and model framework are first developed to constrain the vapor-wall interactions. By optimally fitting the model predictions to the observed wall-induced decay profiles of 25 oxidized organic compounds, the dominant parameter governing the extent of wall deposition of a compound is identified: the wall accommodation coefficient. By correlating this parameter with the molecular properties of a compound via its volatility, the wall-induced deposition rate of an organic compound can be predicted from the numbers of carbon and oxygen atoms in its molecule.
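As a rough illustration of how an accommodation coefficient translates into a first-order wall loss rate, the sketch below uses the simple gas-kinetic, accommodation-limited expression k_w = (A/V)·α_w·c̄/4. The full treatment in the dissertation also accounts for gas-phase mixing and diffusion to the wall, and the chamber dimensions shown here are illustrative only.

```python
import math

R = 8.314  # J mol^-1 K^-1

def mean_thermal_speed(molar_mass_kg, temp_k=298.0):
    """Mean molecular speed c_bar = sqrt(8RT / (pi M))."""
    return math.sqrt(8.0 * R * temp_k / (math.pi * molar_mass_kg))

def wall_loss_rate(alpha_w, molar_mass_kg, area_m2, volume_m3, temp_k=298.0):
    """First-order wall deposition rate constant (s^-1) in the
    accommodation-limited regime: k_w = (A/V) * alpha_w * c_bar / 4."""
    c_bar = mean_thermal_speed(molar_mass_kg, temp_k)
    return (area_m2 / volume_m3) * alpha_w * c_bar / 4.0

# Example: a 200 g/mol oxidized vapor with alpha_w = 1e-6 in a 24 m^3 chamber
# with ~50 m^2 of wall area (illustrative numbers, not from the thesis).
k_w = wall_loss_rate(alpha_w=1e-6, molar_mass_kg=0.200, area_m2=50.0, volume_m3=24.0)
print(f"k_w = {k_w:.2e} s^-1")
```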
Heterogeneous transformation of δ-hydroxycarbonyl, a major first-generation product from long-chain alkane photochemistry, is observed on the surface of particles and walls. The uniqueness of this reaction scheme is the production of substituted dihydrofuran, which is highly reactive towards ozone, OH, and NO3, thereby opening a reaction pathway that is not usually accessible to alkanes. A spectrum of highly-oxygenated products with carboxylic acid, ester, and ether functional groups is produced from the substituted dihydrofuran chemistry, thereby affecting the average oxidation state of the alkane-derived SOA.
The vapor wall loss correction is applied to several chamber-derived SOA systems generated from both anthropogenic and biogenic sources. Experimental and modeling approaches are employed to constrain the partitioning behavior of SOA-forming vapors onto suspended particles vs. chamber walls. It is demonstrated that deposition of SOA-forming vapors to the chamber wall during photooxidation experiments can lead to substantial and systematic underestimation of SOA. Therefore, it is likely that a lack of proper accounting for vapor wall losses, which suppress chamber-derived SOA yields, contributes substantially to the underprediction of ambient SOA concentrations in atmospheric models.
Resumo:
The seminal bag, or seminal receptacle, is a characteristic organ of cyclopids, serving to retain the sperm discharged from the spermatophores. The structure of the seminal receptacle, more precisely its form, is fairly widely used in diagnosis and can undoubtedly be more widely applied in the systematics of the group. Within the family Cyclopidae it is possible to distinguish crustaceans with three basic types of seminal bag, the differences consisting in the position this organ occupies in the genital segment. Within one species, we carried out a series of observations on the formation of the receptacle during ontogenesis and during the life of the adult stage. As material for observation, the study used laboratory cultures of three species: Acanthocyclops americanus (Marsh) from the plankton of the Moscow River, and Cyclops vicinus Uljan. and Mesocyclops leuckarti (Claus) from the plankton of the channel section of the upper part of the Gorkovsk reservoir. The author concluded that the irreversibility of the changes in the seminal receptacle makes it possible to use this structure as one of the indicators of the growth of the individual.
Resumo:
Coronal mass ejections (CMEs) are dramatic eruptions of large plasma structures from the Sun. These eruptions are important because they can harm astronauts, damage electrical infrastructure, and cause auroras. A mysterious feature of these eruptions is that plasma-filled solar flux tubes first evolve slowly and then suddenly erupt. One model, the torus instability, predicts an explosive transition from slow expansion to fast acceleration if the spatial decay of the ambient magnetic field exceeds a threshold.
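The threshold is conventionally expressed through the decay index of the external (strapping) field; a commonly quoted critical value for a thin toroidal flux rope is about 1.5, although the precise number depends on geometry and the abstract itself does not quote one.

```latex
n(R) = -\frac{\partial \ln B_{\mathrm{ext}}}{\partial \ln R},
\qquad \text{torus-unstable for } n(R) \gtrsim n_{\mathrm{crit}} \approx 1.5 .
```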
We create arched, plasma-filled magnetic flux ropes similar to CMEs. Small, independently powered auxiliary coils placed inside the vacuum chamber produce strapping magnetic fields whose spatial decay exceeds the instability threshold and which are strong enough to act on the plasma. When the strapping field is neither too strong nor too weak, expansion forces build up while the flux rope is in the strapping field region. When the flux rope moves to a critical height, the plasma accelerates quickly, corresponding to the slow rise followed by fast acceleration observed in most solar eruptions. This behavior is in agreement with the predictions of torus instability.
Historically, eruptions have been separated into gradual CMEs and impulsive CMEs, depending on the acceleration profile. Recent numerical studies question this separation. One study varies the strapping field profile to produce gradual eruptions and impulsive eruptions, while another study varies the temporal profile of the voltage applied to the flux tube footpoints to produce the two eruption types. Our experiment reproduced these different eruptions by changing the strapping field magnitude and the temporal profile of the current trace. This suggests that the same physics underlies both types of CME and that the separation between impulsive and gradual classes of eruption is artificial.
Resumo:
This thesis is the culmination of field and laboratory studies aimed at assessing processes that affect the composition and distribution of atmospheric organic aerosol. An emphasis is placed on measurements conducted using compact and high-resolution Aerodyne Aerosol Mass Spectrometers (AMS). The first three chapters summarize results from aircraft campaigns designed to evaluate anthropogenic and biogenic impacts on marine aerosol and clouds off the coast of California. Subsequent chapters describe laboratory studies intended to evaluate gas and particle-phase mechanisms of organic aerosol oxidation.
The 2013 Nucleation in California Experiment (NiCE) was a campaign designed to study environments impacted by nucleated and/or freshly formed aerosol particles. Terrestrial biogenic aerosol with > 85% organic mass was observed to reside in the free troposphere above marine stratocumulus. This biogenic organic aerosol (BOA) originated from the Northwestern United States and was transported to the marine atmosphere during periodic cloud-clearing events. Spectra recorded by a cloud condensation nuclei counter demonstrated that BOA is CCN active. BOA enhancements at latitudes north of San Francisco, CA coincided with enhanced cloud water concentrations of organic species such as acetate and formate.
Airborne measurements conducted during the 2011 Eastern Pacific Emitted Aerosol Cloud Experiment (E-PEACE) were aimed at evaluating the contribution of ship emissions to the properties of marine aerosol and clouds off the coast of central California. In one study, analysis of organic aerosol mass spectra during periods of enhanced shipping activity yielded unique tracers indicative of cloud-processed ship emissions (m/z 42 and 99). The variation of their organic fraction (f42 and f99) was found to coincide with periods of heavy (f42 > 0.15; f99 > 0.04), moderate (0.05 < f42 < 0.15; 0.01 < f99 < 0.04), and negligible (f42 < 0.05; f99 < 0.01) ship influence. Application of these conditions to all measurements conducted during E-PEACE demonstrated that a large fraction of cloud droplet (72%) and dry aerosol mass (12%) sampled in the California coastal study region was heavily or moderately influenced by ship emissions. Another study investigated the chemical and physical evolution of a controlled organic plume emitted from the R/V Point Sur. Under sunny conditions, nucleated particles composed of oxidized organic compounds contributed nearly an order of magnitude more cloud condensation nuclei (CCN) than less oxidized particles formed under cloudy conditions. The processing time necessary for particles to become CCN active was short ( < 1 hr) compared to the time needed for particles to become hygroscopic at sub-saturated humidity ( > 4 hr).
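The quoted thresholds translate directly into a simple classifier; a minimal sketch follows (the handling of mixed cases that fall outside the quoted ranges is an assumption, since the abstract does not specify it):

```python
def ship_influence(f42, f99):
    """Classify ship influence from the organic mass fractions at m/z 42
    and m/z 99, using the threshold values quoted in the E-PEACE analysis."""
    if f42 > 0.15 and f99 > 0.04:
        return "heavy"
    if 0.05 < f42 < 0.15 and 0.01 < f99 < 0.04:
        return "moderate"
    if f42 < 0.05 and f99 < 0.01:
        return "negligible"
    return "unclassified"  # mixed signals fall outside the quoted ranges

print(ship_influence(0.18, 0.05))  # -> "heavy"
```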
Laboratory chamber experiments were also conducted to evaluate particle-phase processes influencing aerosol phase and composition. In one study, ammonium sulfate seed was coated with a layer of secondary organic aerosol (SOA) from toluene oxidation followed by a layer of SOA from α-pinene oxidation. The system exhibited different evaporative properties than ammonium sulfate seed initially coated with α-pinene SOA followed by a layer of toluene SOA. This behavior is consistent with a shell-and-core model and suggests limited mixing among different SOA types. Another study investigated the reactive uptake of isoprene epoxy diols (IEPOX) onto non-acidified aerosol. It was demonstrated that particle acidity has limited influence on organic aerosol formation onto ammonium sulfate seed, and that the chemical system is limited by the availability of nucleophiles such as sulfate.
Flow tube experiments were conducted to examine the role of iron in the reactive uptake and chemical oxidation of glycolaldehyde. Aerosol particles doped with iron and hydrogen peroxide were mixed with gas-phase glycolaldehyde and photochemically aged in a custom-built flow reactor. Compared to particles free of iron, iron-doped aerosols exhibited a significantly enhanced oxygen-to-carbon (O/C) ratio of the accumulated organic mass. The primary oxidation mechanism is suggested to be a combination of Fenton and photo-Fenton reactions, which enhance particle-phase OH radical concentrations.
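For reference, the textbook forms of the reactions invoked here (the abstract does not write out the exact aqueous-phase mechanism) are:

```latex
\mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + OH^{-} + {}^{\bullet}OH}
\quad \text{(Fenton)}
\\
\mathrm{Fe(OH)^{2+} \xrightarrow{\;h\nu\;} Fe^{2+} + {}^{\bullet}OH}
\quad \text{(photo-Fenton; photolysis regenerates the ferrous ion)}
```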
Resumo:
This study concerns the longitudinal dispersion of fluid particles which are initially distributed uniformly over one cross section of a uniform, steady, turbulent open channel flow. The primary focus is on developing a method to predict the rate of dispersion in a natural stream.
Taylor's method of determining a dispersion coefficient, previously applied to flow in pipes and two-dimensional open channels, is extended to a class of three-dimensional flows which have large width-to-depth ratios, and in which the velocity varies continuously with lateral cross-sectional position. Most natural streams are included. The dispersion coefficient for a natural stream may be predicted from measurements of the channel cross-sectional geometry, the cross-sectional distribution of velocity, and the overall channel shear velocity. Tracer experiments are not required.
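A minimal numerical sketch of this kind of prediction is given below: it evaluates a triple integral over the measured cross section, with the transverse mixing coefficient tied to the local depth and the shear velocity. The specific form and the proportionality constant c_t are common literature choices standing in for whatever the thesis actually uses, and the channel profile in the example is made up.

```python
import numpy as np

def dispersion_coefficient(y, h, u, u_star, c_t=0.23):
    """Estimate the longitudinal dispersion coefficient K for a stream from
    transect data: lateral positions y (m), local depths h(y) (m), and
    depth-averaged velocities u(y) (m/s).  Transverse mixing is taken as
    eps_t ~ c_t * h * u_star; both the triple-integral form and c_t are
    common literature choices, not necessarily those of the thesis."""
    A = np.trapz(h, y)                    # cross-sectional area
    u_mean = np.trapz(u * h, y) / A       # discharge-weighted mean velocity
    q_prime = h * (u - u_mean)            # deviation of unit discharge
    eps_t = c_t * h * u_star              # transverse mixing coefficient

    inner = np.array([np.trapz(q_prime[:i + 1], y[:i + 1]) for i in range(len(y))])
    middle_integrand = inner / np.maximum(eps_t * h, 1e-12)
    middle = np.array([np.trapz(middle_integrand[:i + 1], y[:i + 1]) for i in range(len(y))])
    return -np.trapz(q_prime * middle, y) / A

# Illustrative parabolic channel (made-up numbers, not thesis data)
y = np.linspace(0.0, 20.0, 201)
h = 1.5 * (1.0 - ((y - 10.0) / 10.0) ** 2) + 0.05
u = 0.8 * (h / h.max()) ** 0.5
print(f"K ~ {dispersion_coefficient(y, h, u, u_star=0.05):.1f} m^2/s")
```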
Large values of the dimensionless dispersion coefficient D/rU* are explained by lateral variations in downstream velocity. In effect, the characteristic length of the cross section is shown to be proportional to the width, rather than the hydraulic radius. The dimensionless dispersion coefficient depends approximately on the square of the width to depth ratio.
A numerical program is given which is capable of generating the entire dispersion pattern downstream from an instantaneous point or plane source of pollutant. The program is verified by the theory for two-dimensional flow, and gives results in good agreement with laboratory and field experiments.
Both laboratory and field experiments are described. Twenty-one laboratory experiments were conducted: thirteen in two-dimensional flows, over both smooth and roughened bottoms; and eight in three-dimensional flows, formed by adding extreme side roughness to produce lateral velocity variations. Four field experiments were conducted in the Green-Duwamish River, Washington.
Both laboratory and field experiments show that in three-dimensional flow the dominant mechanism for dispersion is lateral velocity variation. For instance, in one laboratory experiment the dimensionless dispersion coefficient D/rU* (where r is the hydraulic radius and U* the shear velocity) was increased by a factor of ten by roughening the channel banks. In three-dimensional laboratory flow, D/rU* varied from 190 to 640, a typical range for natural streams. For each experiment, the measured dispersion coefficient agreed with that predicted by the extension of Taylor's analysis to within a maximum error of 15%. For the Green-Duwamish River, the average experimentally measured dispersion coefficient was within 5% of the prediction.