968 results for Reason
Abstract:
The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that produce variation; genetic information is inherited through asexual reproduction; and strong selective pressures such as therapeutic intervention can drive the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework for more effective control.
A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection with broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have shown that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution in the HIV envelope protein.
The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?
To address the first question, we propose a mathematical framework to reason about evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes -- expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright to a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.
To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies and allows quantifiable exploration of tradeoffs in treatment design, such as limiting the number of candidate therapies in the combination, dosage constraints and robustness to error. Our algorithm applies recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step towards integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and offers a predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations even when these contain resistant mutants.
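As a rough illustration of the kind of optimization described above (with invented antibody names, IC50 values, and a simple multiplicative Hill pharmacodynamics model, none of which are taken from the study itself), one can search dose combinations for the one that minimizes the escape of the best-adapted resistant variant:

```python
import itertools

# Hypothetical IC50 values (µg/mL) for two antibodies against three
# viral variants; variant "V3" is resistant to antibody "A".
ic50 = {
    "V1": {"A": 0.1, "B": 0.5},
    "V2": {"A": 0.3, "B": 0.2},
    "V3": {"A": 50.0, "B": 0.4},
}

def unneutralized(conc, variant):
    # Single-antibody Hill pharmacodynamics (slope 1), combined
    # multiplicatively under an independent-action assumption.
    frac = 1.0
    for ab, c in conc.items():
        frac *= 1.0 / (1.0 + c / ic50[variant][ab])
    return frac

def worst_case(conc):
    # Fitness proxy: the variant that best escapes the combination.
    return max(unneutralized(conc, v) for v in ic50)

# Grid search over dosages subject to a total-dose budget.
grid = [0.0, 0.25, 0.5, 1.0, 2.0]
budget = 2.0
best = min(
    ({"A": a, "B": b} for a, b in itertools.product(grid, grid)
     if a + b <= budget),
    key=worst_case,
)
print(best, worst_case(best))
```

Even in this toy setting the search allocates the whole budget to antibody "B", the only one that the resistant variant cannot escape, which is the qualitative behavior a principled combination design should exhibit.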
Abstract:
Epigenetic mechanisms, among which covalent histone modification is involved, are essential for the stable maintenance of gene activity in cells. These mechanisms are also implicated in the onset of diseases such as colorectal cancer (CRC), with liver metastasis being one of its most aggressive forms, since it drastically reduces the patient's life expectancy. Histone modifications, recently termed the histone code, affect chromatin structure and play an important role in the development of tumorigenesis. However, little is known about the cells that acquire the capacity to metastasize, which is why the present work studies the epigenetic differences between primary tumor cells and metastatic tumor cells with respect to the trimethylation pattern of histone H3 at three different lysine residues: lysine 4 (H3K4me3), lysine 9 (H3K9me3) and lysine 27 (H3K27me3).
Abstract:
HIV (Human Immunodeficiency Virus) causes one of the most important diseases in the world, with approximately 35 million people infected. In this study we set out to describe the main characteristics of the virus, explaining the infection and the illness associated with HIV. We also explain antiretroviral treatments, which are the most important weapon against HIV; however, none of these treatments eliminates HIV from the human body. For this reason, we review the new treatments and lines of research developed in recent years, including vaccines and genetic resistance. In addition, we describe the situation of SIV (Simian Immunodeficiency Virus) in Africa, because it is the origin of the disease. The prevalence of the virus in primate populations has been studied in recent years because it could pose a new threat to the human population. Finally, we propose the research lines that seem most effective and those that could, in the future, eliminate the virus from the human body.
Abstract:
Part I
Potassium bis-(tricyanovinyl)amine, K+N[C(CN)=C(CN)2]2-, crystallizes in the monoclinic system with space group Cc and lattice constants a = 13.346 ± 0.003 Å, c = 8.992 ± 0.003 Å, β = 114.42 ± 0.02°, and Z = 4. Three-dimensional intensity data were collected in layers perpendicular to the b* and c* axes. The crystal structure was refined by the least-squares method with anisotropic temperature factors to an R value of 0.064.
The average carbon-carbon and carbon-nitrogen bond distances in –C–C≡N are 1.441 ± 0.016 Å and 1.146 ± 0.014 Å, respectively. The bis-(tricyanovinyl)amine anion is approximately planar. The coordination number of the potassium ion is eight, with bond distances from 2.890 Å to 3.408 Å. The C-N-C bond angle at the amine nitrogen is 132.4 ± 1.9°. Of the six cyano groups in the molecule, two are bent by what appear to be significant amounts (5.0° and 7.2°); the remaining four are linear within experimental error. The bending can probably be explained by molecular packing forces in the crystal.
Part II
The nuclear magnetic resonance of 81Br and 127I in aqueous solutions was studied. Cation-halide ion interactions were probed through the effect of Li+, Na+, K+, Mg++, and Cs+ on the line width of the halide ions; solvent-halide ion interactions were probed through the effects of methanol, acetonitrile, and acetone on the line widths of 81Br and 127I in aqueous solution. Viscosity was found to play a very important role in the halide ion line width. There is no specific cation-halide ion interaction for ions such as Mg++, Li+, Na+, and K+, whereas the Cs+-halide ion interaction is strong. The effect of organic solvents on the halide ion line width in aqueous solution is in the order acetone > acetonitrile > methanol. It is suggested that halide ions form a stable complex with the solvent molecules and that Cs+ can replace one of the ligands in the solvent-halide ion complex.
Part III
An unusually large isotope effect on the bridge-hydrogen chemical shift of the enol form of 2,4-pentanedione (acetylacetone) and 3-methyl-2,4-pentanedione has been observed, and an attempt has been made to interpret it. The deuterium isotope effect studies, the temperature dependence of the bridge-hydrogen chemical shift, IR studies in the OH, OD, and C=O stretch regions, and HMO calculations suggest that there may be two structures for the enol form of acetylacetone, differing mainly in the electronic structure of the π-system. The relative populations of these two structures were calculated at various temperatures for normal acetylacetone and at room temperature for the deuterated compound.
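A small worked example of the population calculation mentioned above, assuming the observed bridge-hydrogen shift is a population-weighted average of the shifts of the two postulated enol structures; all shift values here are invented for illustration, not the thesis data:

```python
# If delta_obs = p*d1 + (1 - p)*d2 for two rapidly interconverting
# structures with limiting shifts d1 and d2, the population p of the
# first structure follows directly from the observed shift.
d1, d2 = 16.5, 14.8      # assumed limiting shifts (ppm)
delta_obs = 15.6         # assumed observed shift at some temperature

p = (delta_obs - d2) / (d1 - d2)
print(round(p, 4), round(1 - p, 4))
```

Repeating this at several temperatures gives the temperature dependence of the relative populations that the abstract describes.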
Abstract:
In partial fulfillment of the requirements for the Professional Degree in Geophysical Engineering at the California Institute of Technology, the spontaneous polarization method of electrical exploration was chosen as the subject of this thesis. It is also known as "self-potential electrical prospecting" and the "natural currents method."
The object of this thesis is to present a spontaneous polarization exploration work done by the writer, and to apply analytical interpretation methods to these field results.
The writer was confronted with the difficulty of finding the necessary information collected in a single complete paper about this method. The available papers are all too short, repeat the usual information, and give the same examples. The decision was therefore made to write a comprehensive paper first, including the writer's own experience, and then to present the main object of the thesis.
The following paper comprises three major parts:
1 - A comprehensive treatment of the spontaneous polarization method.
2 - Report of the field work.
3 - Analytical interpretation of the field work results.
The main reason for choosing this subject is that this method is the most reliable, the easiest, and the one requiring the least equipment in prospecting for sulphide orebodies in unexplored, rough terrain.
The intention of the writer in compiling the theoretical and analytical information has been mainly to prepare a reference paper about this method.
The writer wishes to express his appreciation to Dr. G. W. Potapenko, Associate Professor of Physics at California Institute of Technology, for his generous help.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground-truth of the simulated images at the same time, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
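The variance-reduction idea mentioned above (photon splitting) can be sketched in a toy one-dimensional random walk; the coefficients, split depth, and termination threshold below are arbitrary illustrative values, not the simulator's actual parameters:

```python
import random, math

random.seed(0)
MU_S = 10.0         # scattering coefficient (1/mm), assumed value
ALBEDO = 0.9        # survival fraction of the weight per event
SPLIT_DEPTH = 0.5   # depth (mm) past which a photon is split once
SPLIT_N = 4         # number of copies per split

def trace(depth=0.0, weight=1.0, split_done=False, out=None):
    # 1-D random walk in depth; record the weight that returns to
    # the surface (a crude stand-in for the detected OCT signal).
    if out is None:
        out = []
    while True:
        step = -math.log(random.random()) / MU_S
        # the photon moves up or down with equal probability
        depth += step if random.random() < 0.5 else -step
        if depth <= 0.0:                 # escaped back to the detector
            out.append(weight)
            return out
        weight *= ALBEDO                 # partial absorption per event
        if weight < 1e-4:                # terminate very weak photons
            return out
        if depth > SPLIT_DEPTH and not split_done:
            # Photon splitting: replace one deep photon by SPLIT_N
            # copies of weight/SPLIT_N. The expectation is unchanged,
            # but the variance of the deep-signal estimate drops.
            for _ in range(SPLIT_N):
                trace(depth, weight / SPLIT_N, True, out)
            return out

signal = []
for _ in range(2000):
    trace(out=signal)
print(sum(signal) / 2000)   # mean detected weight per launched photon
```

The real simulator works in 3-D with voxelized tissue and angularly biased (importance-sampled) scattering, but the bookkeeping of weights under splitting is the same.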
Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. At prediction time, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); then the image is handed to a regression model trained specifically for that structure to predict the length of the different layers and thereby reconstruct the ground-truth of the image. We also demonstrate that ideas from Deep Learning can be used to further improve the performance.
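The classify-then-regress routing described above can be sketched with a deliberately tiny stand-in model; the features, class names, and per-class "regressors" below are invented placeholders, not the thesis's actual OCT pipeline:

```python
# Toy committee-of-experts: a classifier routes each image to a
# regressor trained for that structure class.

def train_mean_regressor(pairs):
    # "Regressor" = per-class mean layer thickness, standing in for
    # the structure-specific regression models of the thesis.
    return sum(y for _, y in pairs) / len(pairs)

# Training data: (scalar feature, layer thickness), grouped by class.
data = {
    "two_layer":   [(0.1, 1.0), (0.2, 1.2), (0.15, 1.1)],
    "three_layer": [(0.8, 3.0), (0.9, 3.3), (0.85, 3.1)],
}
experts = {cls: train_mean_regressor(pairs) for cls, pairs in data.items()}
centroids = {cls: sum(x for x, _ in pairs) / len(pairs)
             for cls, pairs in data.items()}

def predict(feature):
    # Stage 1: nearest-centroid classifier chooses the structure class.
    cls = min(centroids, key=lambda c: abs(centroids[c] - feature))
    # Stage 2: the expert trained for that class makes the prediction.
    return cls, experts[cls]

print(predict(0.82))   # routed to the "three_layer" expert
```

The point of the design is that each expert only ever sees inputs of one structure type, so its regression task is much easier than a single global model's would be.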
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model fed with them recovers precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands); this is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
In the sinusoidal phase-modulating interferometer technique, a high-speed CCD is necessary to detect the interference signals. The reason for the low frame rate of an ordinary CCD was analyzed, and a novel high-speed image sensing technique with an adjustable frame rate, based on an ordinary CCD, was proposed; the principle of the image sensor was analyzed. With the maximum pixel frequency and channel bandwidth held constant, a custom high-speed sensor was designed using the ordinary CCD under the control of a special driving circuit. The frame rate of the ordinary CCD was enhanced by controlling the number of pixels in every frame; the ordinary CCD can therefore be used as a high-frame-rate image sensor with a small number of pixels. Multi-output high-speed image sensors suffer from low accuracy and high cost, whereas the small-pixel-count high-speed sensor obtained with this technique overcomes these faults. The light intensity varying with time was measured with the image sensor. The frame rate was up to 1600 frames per second (f/s), and both the size of each frame and the frame rate were adjustable. The correlation coefficient between the measurement results and the standard values was higher than 0.98026, and the relative error was lower than 0.53%. The experimental results show that this sensor is suitable for measurements with the sinusoidal phase-modulating interferometer technique. (c) 2007 Elsevier GmbH. All rights reserved.
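The trade-off exploited here (a fixed maximum pixel readout rate, so frame rate scales inversely with the number of pixels read per frame) can be checked with illustrative numbers; the pixel clock and frame sizes below are assumptions, not the paper's actual design values:

```python
# With the pixel clock held constant, frame rate = clock / pixels.
pixel_clock_hz = 10_000_000          # assumed maximum readout rate

def frame_rate(pixels_per_frame):
    return pixel_clock_hz / pixels_per_frame

full_frame = 512 * 512               # full-resolution frame
small_frame = 79 * 79                # a reduced readout window

print(round(frame_rate(full_frame)), round(frame_rate(small_frame)))
```

With these assumed numbers, shrinking the frame from 512x512 to roughly 79x79 pixels raises the frame rate from tens of frames per second into the 1600 f/s range reported in the abstract, which is exactly the mechanism the paper describes.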
Abstract:
A theoretical method to analyze three-layer large flattened mode (LFM) fibers is presented. The modal fields, including the fundamental and higher-order modes, and the bending loss of the fiber are analyzed. The reason for the formation of the different modal fields is explained, and the feasibility of filtering out the higher-order modes via bending to realize a high-power, high-beam-quality fiber laser is shown. Comparisons are made with the standard step-index fiber. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
The present work aims to contribute to the advancement of research in multimodality, more specifically as applied to the context of foreign language teaching. We analyze a sample of multimodal texts in a textbook produced and used in Brazil as a tool for teaching English as a foreign language to adult beginners in a private language course. Given the concern, stated in the teaching material itself, with meeting the needs and expectations of these students, this investigation aims to: verify how the verbal and the visual interact in the selected textbook; verify how this interaction contributes to achieving the pedagogical objectives proposed by the material; and, finally, contribute in some way to the multimodal literacy of foreign language students. These objectives determine the hybrid nature of this research, which, in addition to its analytical-descriptive dimension, also has a pedagogical dimension aimed at presenting proposals for multimodal work with some of the activities selected for analysis. The selection of multimodal texts for the corpus of this research was based on the observed recurrence of images featuring certain characters throughout the book. This recurrence raised questions that could only be answered by analyzing these characters as represented in situations of (inter)action, which led to the selection of the narrative representations that include them. The characters in question are drawings created for the pedagogical purposes of the material and are represented in very limited social situations: most of these representations seem to form a narrative sequence whose interaction takes place at a party; the other representations, which do not use this party as context, include interactions at the office, in a restaurant, in a park, and on the telephone.
An analysis of the visual representation of these social actors revealed that, despite the inclusion of a black woman among the characters, and the supposedly multicultural vision conveyed by this inclusion, the participants represent a homogeneous group, belonging to the same social segment, who interact only among themselves in limited social situations, and are therefore not representative of the ethnic, social, and cultural diversity of Brazil or of the countries where English is spoken. After the analysis of the representation of the social actors, and with a view to achieving the objectives of this work, the patterns of representation and interaction in the selected multimodal texts are analyzed according to categories from van Leeuwen's (1996) multimodality framework. These analyses show that the verbal and the visual do not always have a direct relationship and that, when they do, this relationship is not exploited by the material, making the visual a merely decorative element that, in most cases, contributes nothing to the development of the units. For this reason, and because this research is centered on the pedagogical context, activities exploring some of the analyzed multimodal texts are proposed at the end of the analyses, aimed at the multimodal education of the foreign language student.
Abstract:
The displacement output of piezoelectric actuators is limited, so a compliant (flexure) mechanism is usually used to amplify the displacement. The performance of commonly used compliant amplification mechanisms is analyzed. A compliant eight-bar amplification mechanism is proposed and studied by finite element analysis and theoretical calculation. To increase the amplification ratio, a two-stage serial mechanism is proposed. The overall mechanism has the advantages of a compact structure and relatively high amplification efficiency.
Abstract:
Clear and stable self-mode-locked pulse trains were observed in a diode-pumped, single-clad Yb-doped Q-switched fiber laser. The pulse envelope has the shape of a Q-switched pulse, and the amplitude of each mode-locked pulse is determined by its position within the Q-switched envelope. Analysis indicates that self-phase modulation is the main cause of mode locking in the Q-switched fiber laser: self-phase modulation broadens the spectrum of the optical pulse, and when this broadening is comparable to the mode spacing of the cavity, the cavity modes can interact until a fixed phase relationship is established between them, i.e., mode locking is formed. On this basis, the acousto-optic crystal was removed and two gratings were used as cavity mirrors, realizing an all-fiber Fabry-Perot (F-P) mode-locked fiber laser. Changing the cavity structure, a grating and a fiber loop mirror were used respectively as
Abstract:
The light field distribution behind a lens array is analyzed from its transmittance. The lens array is assembled from multiple array elements and is used to improve the uniformity of the energy distribution near the focus of the main lens. While improving irradiation uniformity, the lens array has an adverse effect on energy measurement: adjacent sub-beams passing through the array elements interfere, and the laser energy density and power density at the interference fringes increase correspondingly, rising to four times their original value at the center of the interference region. The higher energy and power densities place more severe demands on the energy meter; if it is used without appropriate protective measures, the energy meter can be damaged.
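The four-fold increase quoted above is the standard two-beam interference result: two equal-amplitude coherent beamlets give |E + E·e^(iφ)|² = 4|E|² at a constructive fringe. A quick check:

```python
import math, cmath

# Two equal-amplitude coherent beamlets from adjacent lens-array
# elements; the intensity at phase difference phi is |E + E*exp(i*phi)|^2.
E0 = 1.0
I_single = abs(E0) ** 2

def intensity(phi):
    return abs(E0 + E0 * cmath.exp(1j * phi)) ** 2

print(intensity(0.0) / I_single)      # constructive fringe: 4x one beam
print(intensity(math.pi) / I_single)  # destructive fringe: ~0
```

Averaged over the fringe pattern the energy is conserved (mean factor 2 for two beams), but the peak power density at the bright fringes is what endangers the energy meter.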
Abstract:
The gain medium of a high-average-power solid-state laser is prone to distortion under thermal load; in a commonly used material such as YAG, wavefront distortion and depolarization occur simultaneously, and the thermal effects of solid-state laser media under high thermal load have become a serious obstacle to further increases in laser output power. A method for calculating the thermally induced refractive index of a heat-capacity slab laser is given. By transforming the fourth-rank piezo-optic tensor of the YAG crystal from the crystallographic coordinate system to the laboratory coordinate system, the transformed tensor can be used to analyze the thermally induced birefringence caused by an arbitrary stress distribution in a YAG laser. Further calculation shows that, in a zigzag slab laser, the stress birefringence depends on the angle at which the slab is cut from the crystal boule. A two-dimensional theoretical overview of the thermal effects and stress characteristics of heat-capacity slab lasers is also given.
Abstract:
Petroleum is a complex mixture consisting of a very large number of hydrocarbons. A complete description of all the hydrocarbons in these mixtures is experimentally infeasible or consumes excessive time in computer simulations. For this reason, a full molecular approach to calculating the properties of these mixtures is replaced by a pseudo-component approach or by correlations between macroscopic properties. Some of these properties are used in accordance with fuel sales regulations, e.g., for gasoline. Depending on the refining scheme and the crude oil used to produce a fuel, a wide range of values is found for the properties of the process streams that make up the final fuel. In order to plan the blending of these streams with adequate precision, models must be available for the accurate calculation of the necessary properties. In this work, eight series of Brazilian fuels and two series of foreign fuels were analyzed: gasoline, kerosene, gas oil, and diesel fractions. The properties analyzed for the fractions were: octane number, aromatics content, sulfur content, refractive index, density, flash point, pour point, freezing point, cloud point, aniline point, Reid vapor pressure, and cetane number. Several correlations were evaluated and the best performers were highlighted, allowing an accurate estimation of the properties of the fuel under evaluation. A parameter re-estimation procedure was applied and new models were fitted against the experimental data. This strategy allowed a more exact estimation of the analyzed properties, verified by a considerable increase in the statistical performance of the models. In addition, the best model for each property and each series was presented.
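The parameter re-estimation step described above amounts to refitting each correlation against measured data; a minimal sketch for a linear correlation y = a + b·x by ordinary least squares, with invented data points (the actual correlations in the work are property-specific):

```python
# Refit a correlation of the form y = a + b*x by ordinary least
# squares; the data points below are invented for illustration.
xs = [0.70, 0.75, 0.80, 0.85]       # e.g. density of the fraction
ys = [40.0, 45.2, 49.9, 55.1]       # e.g. measured cetane number

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# The residual sum of squares before and after re-estimation is one
# way to quantify the improvement in statistical performance.
pred = [a + b * x for x in xs]
rss = sum((y - p) ** 2 for y, p in zip(ys, pred))
print(round(a, 2), round(b, 2), round(rss, 4))
```

The same refitting procedure applies to the nonlinear correlations in the work, with the closed-form solution replaced by an iterative least-squares solver.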
Abstract:
This research seeks to bring to the field of education the discussion of science communication, already quite advanced in other areas of knowledge. The proposal is to approach the topic through an analysis of the 'uses' of articles written by teachers/researchers and published in the Jornal Eletrônico Educação & Imagem, a publication linked to the Laboratório Educação e Imagem (Graduate Program in Education/Faculty of Education/Universidade do Estado do Rio de Janeiro), whose purpose is to share what has been produced in research and curricular practices developed around the relationship between images and education. This work is related to studies in/of/with everyday life, which have allowed us to understand the multiple educational networks in the relations of knowledge and meaning woven by the many practitioners of these networks. In the articles sent by teachers to the journal, we observe that, even when following the guidance of the curricular materials indicated by the education departments, teachers and students are in a context of everyday curricular experience, and the uses they make of these materials according to their own practices, lived 'inside-outside' (dentrofora) schools, allow them to weave the curricula permanently. In other words, within these 'spacetimes' (espaçostempos) many curricula are being created. Thus, in dialogue with the works of Certeau, Martín-Barbero, Boaventura de Sousa Santos, Néstor Canclini, Pierre Lévy, and Carlos Vogt, this research reflects on the tactics of the users of an electronic journal, from teachers to researchers, in the creation of new knowledge through the dialogue mediated by this cultural artifact.
With the research developed here, I wish to show how relationships between users/teachers/researchers have been woven daily through the Jornal Eletrônico Educação & Imagem, thereby going beyond the idea underlying the expression 'science popularization' (divulgação científica), which suggests unilaterality and/or, at the very least, a segregation between scientists and everyone else (CERTEAU, 1994). By making diverse and unpredictable 'uses' of this medium, these users/teachers/researchers put the knowledge produced into circulation, enabling appropriations, resignifications, and the creation of other knowledge in/within networks. For this reason, we consider the term 'scientific circulation' more applicable to the field. With this, we wish to indicate that the development of research with everyday life requires constant contacts, of various kinds, between universities and schools in order to understand the multiple curricula existing in the practices of the many schools of the various educational systems.