964 results for Observation (Scientific method)
Abstract:
Applications that operate on meshes are very popular in High Performance Computing (HPC) environments. In the past, many techniques have been developed to optimize the memory accesses for these datasets. Different loop transformations and domain decompositions are commonly used for structured meshes. However, unstructured grids are more challenging: the memory accesses, driven by the mesh connectivity, do not map well to the usual linear memory model. This work presents a method to improve memory performance that is suitable for HPC codes operating on meshes. We develop a method to adjust the sequence in which the data are used inside the algorithm by traversing and sorting the mesh. The sorted mesh can be transferred sequentially to the lower memory levels and minimizes the data-transfer requirements. The method also reduces the lower-level memory requirements dramatically: up to 63% of the L1 cache misses are removed in a traditional cache system. We have obtained speedups of up to 2.58 on memory operations as measured on a general-purpose CPU. An improvement is also observed with sequential-access memories, where we have measured reductions of up to 99% in the required low-level memory size.
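The abstract does not spell out the traversal used for the sorting; as a rough illustration of the underlying idea, renumbering mesh nodes so that connectivity neighbors sit close together in memory, here is a minimal breadth-first reordering sketch in Python. The mesh, seed choice, and data layout are assumptions, not the paper's method.

```python
from collections import deque

def bfs_reorder(num_nodes, edges, seed=0):
    """Return a permutation that renumbers mesh nodes in breadth-first
    order from `seed`, so mesh neighbors end up close in memory."""
    adj = [[] for _ in range(num_nodes)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    order, seen, queue = [], {seed}, deque([seed])
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    # Append any nodes in disconnected components unchanged.
    order.extend(n for n in range(num_nodes) if n not in seen)
    return order

def permute(data, order):
    """Apply the permutation to a per-node data array so that a sweep
    over the reordered mesh touches memory sequentially."""
    return [data[i] for i in order]

order = bfs_reorder(5, [(0, 3), (3, 1), (1, 4), (4, 2)])
print(order)  # [0, 3, 1, 4, 2]
```

Reverse Cuthill-McKee and space-filling-curve orderings are common refinements of the same locality idea.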
Abstract:
We present a quasi-monotone semi-Lagrangian particle level set (QMSL-PLS) method for moving interfaces. The QMSL method is a blend of first-order monotone and second-order semi-Lagrangian methods. The QMSL-PLS method is easy to implement, efficient, and well adapted to unstructured meshes, either simplicial or hexahedral. We prove that it is unconditionally stable in the maximum discrete norm ‖·‖_{h,∞}, and the error analysis shows that when the level set solution u(t) is in the Sobolev space W^{r+1,∞}(D), r ≥ 0, the convergence in the maximum norm is of the form (KT/Δt)·min(1, Δt‖v‖_{h,∞}/h)·((1−α)h^p + h^q), with p = min(2, r+1) and q = min(3, r+1), where v is the velocity. This means that at high CFL numbers, that is, when Δt > h, the error is O(((1−α)h^p + h^q)/Δt), whereas at CFL numbers less than 1, the error is O((1−α)h^{p−1} + h^{q−1}). We have tested our method with satisfactory results on benchmark problems such as Zalesak's slotted disk, the single vortex flow, and the rising bubble.
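Written out, the two regimes follow directly from the min term in the bound (a restatement under the abstract's own notation, with K the stability constant):

\[
\|u(t_n) - u_h^n\|_{h,\infty}
  \le \frac{KT}{\Delta t}\,
      \min\!\Bigl(1,\ \frac{\Delta t\,\|v\|_{h,\infty}}{h}\Bigr)
      \bigl((1-\alpha)h^{p} + h^{q}\bigr).
\]

For Δt > h the min equals 1, leaving O(((1−α)h^p + h^q)/Δt); for Δt ≤ h the min contributes Δt‖v‖_{h,∞}/h, the Δt cancels, and one power of h divides out, leaving O((1−α)h^{p−1} + h^{q−1}).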
Abstract:
In recent years, many analyses from acoustic signal processing have been applied in different domains. In most cases, these sensor systems are based on determining the times of flight of the signals reaching every transducer. This paper presents a generalization of a flat-plate method to impact detection and location on structures built from linear links or bars. Three piezoelectric sensors suffice to recover the impact position and time, while additional sensors cover a larger detection area and help discard erroneous time-difference measurements. An experimental setup and some experimental results are briefly presented.
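A minimal sketch of the time-of-flight localization idea (assuming a single known, constant propagation speed, which is a simplification of real plate waves, and not the paper's setup): with three sensors, the impact position and time follow from a small nonlinear least-squares fit.

```python
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                                # wave speed, m/s (hypothetical)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # sensor (x, y) positions, m

# Synthetic arrival times generated from a known impact, for the demo.
true_xy, true_t0 = np.array([0.3, 0.4]), 0.0
t_arrival = true_t0 + np.hypot(*(sensors - true_xy).T) / c

def residuals(p):
    x, y, t0 = p                                          # unknown impact point and time
    d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return c * (t_arrival - t0) - d                       # zero when model matches data

sol = least_squares(residuals, x0=[0.5, 0.5, 0.0])
print("impact at (%.3f, %.3f) m, t0 = %.2e s" % tuple(sol.x))
```

With more than three sensors the same fit becomes overdetermined, which is what allows outlier arrival times to be rejected.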
Abstract:
As is well known, the boundary element method (B.E.M.) is obtained as a mixture of the integral representation formula of classical elasticity and the discretization philosophy of the finite element method (F.E.M.). This paper presents the application of the B.E.M. to elastodynamic problems. Both transient and steady-state solutions are presented, as well as some techniques to simplify problems with a stress-free boundary.
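For reference, the integral representation formula referred to above takes, for an interior point ξ and with body forces omitted, the Somigliana-type form (a generic sketch, not the paper's exact notation; in elastodynamics the kernels become the time- or frequency-dependent fundamental solutions):

\[
u_k(\xi) \;=\; \int_{\Gamma} U^{*}_{ik}(x,\xi)\, t_i(x)\, d\Gamma(x)
          \;-\; \int_{\Gamma} T^{*}_{ik}(x,\xi)\, u_i(x)\, d\Gamma(x),
\]

where u_i and t_i are the boundary displacements and tractions and U*_{ik}, T*_{ik} the displacement and traction fundamental solutions. On a stress-free boundary t_i = 0, so the first integral vanishes, which is what makes such problems simpler.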
Abstract:
This thesis presents a task-oriented approach to telemanipulation for maintenance in large scientific facilities, with specific focus on the particle accelerator facilities at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland and the GSI Helmholtz Centre for Heavy Ion Research (GSI) in Darmstadt, Germany. It examines how telemanipulation can be used in these facilities and reviews how this differs from the representation of telemanipulation tasks within the literature. It provides methods to assess and compare telemanipulation procedures, as well as a test suite to compare telemanipulators themselves from a dexterity perspective. It presents a formalisation of telemanipulation procedures into a hierarchical model which can then be used as a basis to aid maintenance engineers in assessing tasks for telemanipulation, and as the basis for future research. The model introduces a new concept of Elemental Actions as the building block of telemanipulation movements and incorporates the dependent factors for procedures at a higher level of abstraction. In order to gain insight into realistic tasks performed by telemanipulation systems within both industrial and research environments, a survey of teleoperation experts is presented. Analysis of the responses concludes that there is a need within the robotics community for physical benchmarking tests geared towards evaluating and comparing the dexterity of telemanipulators. A three-stage test suite is presented which is designed to allow maintenance engineers to assess different telemanipulators for their dexterity. This incorporates general characteristics of the system, a method to compare the kinematic reachability of multiple telemanipulators, and physical test setups to assess dexterity both from a qualitative perspective and measurably using performance metrics. Finally, experimental results are provided for the application of the proposed test suite to two telemanipulation systems, one from a research setting and the other within CERN. The procedure performed is described and comparisons between the two systems are discussed, together with input from the expert operator of the CERN system.
Abstract:
Earth observation is today a highly useful tool for studying the phenomena that occur on its surface. Observation can be carried out at different scales and by different methods depending on the purpose. This Bachelor's Thesis presents the observation of territory by means of remote sensing techniques and their application to hydrocarbon exploration. Since the Second World War, capturing aerial images of regions of the Earth was restricted to cartographic uses in the strict sense. Since then, a series of scientific advances has made it possible to infer intrinsic characteristics of the Earth through complex mechanisms, imperceptible to the naked eye, configured by specific geometric and electronic parameters that allow time series of the physical phenomena occurring on the Earth to be generated. Today the exploitation of the electromagnetic spectrum can be said to be at a peak: analysis has moved from the visible region of the spectrum to the spectrum in its entirety. This entails the development of new algorithms, techniques, and processes to extract as much information as possible about the interaction of matter with electromagnetic radiation. The information generated by the acquisition systems serves for the direct and indirect application of hydrocarbon prospecting methods. Remote sensing techniques, applied in geophysical campaigns, are used to minimize costs and maximize results in field investigations. Predicting anomalies in the study area depends on the analyst, who designs, computes, and evaluates the variations in the electromagnetic energy reflected or emitted by the Earth's surface. For this prediction, different space programs are reviewed, and the quality of registration and spectral separability is assessed using different classifications (supervised and unsupervised). Because of its direct influence on the observations, atmospheric correction is studied: several atmospheric correction models for multispectral images are programmed, and atmospheric correction methods for hyperspectral data are evaluated. The temperature of the area of interest is obtained using the TM-4, ASTER, and OLI sensors, together with a Digital Terrain Model generated from the stereoscopic pair captured by the ASTER sensor. Once these procedures have been applied, direct and indirect methods are used to locate zones probably affected by the influence of hydrocarbons and to locate hydrocarbons directly by means of hyperspectral remote sensing. For the indirect method, images captured by the ETM+ and ASTER sensors are used; for the direct method, images captured by the Hyperion sensor.
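The thesis mentions deriving temperature from the TM-4, ASTER, and OLI thermal bands; a minimal sketch of the customary first step, converting thermal-band radiance to at-sensor brightness temperature with the inverted Planck function, is shown below. The constants K1 and K2 are sensor-specific; the pair used here is a placeholder, not a value taken from the thesis.

```python
import numpy as np

def brightness_temperature(radiance, k1, k2):
    """At-sensor brightness temperature (K) from spectral radiance via
    the inverted Planck function T = K2 / ln(K1 / L + 1)."""
    return k2 / np.log(k1 / radiance + 1.0)

# Placeholder calibration constants; real values come from each sensor's
# metadata and differ between TM, ASTER, and OLI/TIRS thermal bands.
K1, K2 = 607.76, 1260.56        # W m^-2 sr^-1 um^-1 and K (hypothetical pair)
L = np.array([7.5, 8.2, 9.1])   # example thermal-band radiances
print(brightness_temperature(L, K1, K2))
```

Land-surface temperature would then require an emissivity and atmospheric correction on top of this brightness temperature.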
Abstract:
The Actively Heated Fiber Optic (AHFO) method is shown to be capable of measuring soil water content several times per hour at 0.25 m spacing along cables of multiple kilometers in length. AHFO is based on distributed temperature sensing (DTS) observation of the heating and cooling of a buried fiber-optic cable resulting from an electrical impulse of energy delivered from the steel cable jacket. The results presented were collected from 750 m of cable buried in three 240 m co-located transects at 30, 60, and 90 cm depths in an agricultural field under center pivot irrigation. The calibration curve relating soil water content to the thermal response of the soil to a heat pulse of 10 W m⁻¹ for 1 min duration was developed in the lab. This calibration was found applicable to the 30 and 60 cm depth cables, while the 90 cm depth cable illustrated the challenges presented by soil heterogeneity for this technique. This method was used to map with high resolution the variability of soil water content and fluxes induced by the nonuniformity of water application at the surface.
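A rough sketch of how such a heat-pulse measurement is commonly reduced: integrate the DTS temperature rise over the pulse to obtain a cumulative thermal response, then invert a lab calibration curve. The exponential calibration form and coefficients below are hypothetical placeholders, not the paper's fitted curve.

```python
import numpy as np

def cumulative_response(times_s, temp_rise_K, t_end_s=60.0):
    """Integrate the temperature rise over the heat pulse (1 min here,
    matching the 10 W/m pulse duration) to get a response in K*s."""
    mask = times_s <= t_end_s
    return np.trapz(temp_rise_K[mask], times_s[mask])

def water_content(T_cum, a=0.55, b=0.012):
    """Invert a hypothetical calibration theta = a * exp(-b * T_cum).
    Wetter soil conducts heat away faster, so T_cum falls as theta rises."""
    return a * np.exp(-b * T_cum)

t = np.linspace(0.0, 60.0, 61)    # s
dT = 2.0 * np.log1p(t / 5.0)      # synthetic heating curve, K
print(water_content(cumulative_response(t, dT)))
```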
Abstract:
Scientific workflows have been adopted in the last decade to represent the computational methods used in in silico scientific experiments and their associated research products. Scientific workflows have proven useful for sharing and reproducing scientific experiments, allowing scientists to visualize, debug, and save time when re-executing previous work. However, scientific workflows may be difficult to understand and reuse: the large number of available workflows in repositories, together with their heterogeneity and general lack of documentation and usage examples, can become an obstacle for a scientist aiming to reuse the work of others. Furthermore, given that it is often possible to implement a method using different algorithms or techniques, seemingly disparate workflows may be related at a higher level of abstraction based on their common functionality. In this thesis we address the issue of reusability and abstraction by exploring how workflows relate to one another in a workflow repository, mining abstractions that may be helpful for workflow reuse. To do so, we propose a simple model for representing and relating workflows and their executions, we analyze the typical common abstractions that can be found in workflow repositories, we explore the current practices of users regarding workflow reuse, and we describe a method for discovering useful abstractions for workflows based on existing graph-mining techniques. Our results expose the common abstractions and practices of users in terms of workflow reuse, and show how the automatically extracted abstractions have the potential to be reused by users designing new workflows.
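As a toy illustration of the graph-mining flavor described, and not the thesis's actual algorithm: count recurring labeled step-to-step edges across a repository of workflows and keep those appearing in several of them as candidate abstractions.

```python
from collections import Counter

# Each workflow is a set of directed edges between step types (labels).
workflows = [
    {("FetchData", "Normalize"), ("Normalize", "Cluster")},
    {("FetchData", "Normalize"), ("Normalize", "Classify")},
    {("FetchData", "Normalize"), ("Normalize", "Cluster"), ("Cluster", "Plot")},
]

def frequent_fragments(workflows, min_support=2):
    """Return labeled edges occurring in at least `min_support` workflows;
    a stand-in for real frequent-subgraph mining over larger fragments."""
    counts = Counter(edge for wf in workflows for edge in wf)
    return {e: c for e, c in counts.items() if c >= min_support}

print(frequent_fragments(workflows))
# ('FetchData', 'Normalize') appears in 3 workflows, ('Normalize', 'Cluster') in 2.
```

Real frequent-subgraph miners (e.g., gSpan-style algorithms) generalize this from single edges to multi-step fragments.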
Abstract:
Perched beaches, supported by a submerged sill, are an attractive beach nourishment design alternative, especially when either the site conditions or the characteristics of the native and borrow sand lead to non-intersecting nourishment profiles. The observation and suggestion of this type of coastal defence scheme dates back to the 1960s, as does international experience in the construction of such beaches. However, in spite of their use and the field and laboratory studies performed to date, no engineering design guidance is available to support their design. This dissertation consists of an experimental analysis of the profile of perched beaches, leading to the proposal of general design guidance that allows the location and geometric characteristics of the submerged sill to be estimated for a given wave climate and beach material. The two-dimensional experiment performed on a movable-bed physical model is described, in which five wave conditions are combined with three configurations of the submerged sill ("No structure", low structure or "Structure 1", and high structure or "Structure 2"); the results are presented, and a detailed discussion of their hydrodynamic implications is carried out using dimensionless parameters. A detailed state-of-the-art review of perched beaches has been performed, presenting the concept and case studies from different countries, together with a careful review of the published literature on experimental studies of perched beaches, theoretical models, and other topics needed to formulate the methodology of this work. The study is structured in two phases. In the first phase, experiments were carried out on a movable-bed physical model built at the facilities of the Centro de Estudios de Puertos y Costas (CEPYC) of the Centro de Estudios y Experimentación de Obras Públicas (CEDEX): a wave flume 36 m long, 3 m wide, and 1.5 m high, equipped with a piston-type wave generator. The test plan consisted of 15 tests, obtained by subjecting three different perched-beach configurations to five wave conditions. During the tests the beach profile was surveyed at different intervals until equilibrium was reached, and the shoreline retreat and the volume of sediment lost were determined from these measurements. The total effective test time amounts to nearly 650 hours, and the total number of beach evolution profiles obtained is 229. The second phase addresses the analysis of the results with the aim of understanding the phenomenon, identifying the governing variables, and proposing engineering design guidelines. The effects of wave height, wave period, dimensionless freeboard, and the Dean parameter have been studied, highlighting the difficulty of understanding how these works perform, since they can be beneficial, harmful, or neutral depending on the case. The beach profile response has also been studied as a function of other dimensionless parameters, such as the Reynolds and Froude numbers. In this analysis the "plunger" parameter was selected as the most significant, and relationships were found between it and the wave steepness, the dimensionless crest width, the dimensionless crest height, and the Dean parameter. Finally, a four-step design method is proposed that allows a preliminary functional design of the perched beach for a given wave climate. The most significant scientific contributions are: the acquisition of a consistent set of experimental results; the characterization of the behavior of perched beaches; the proposed relationships between the plunger parameter and the selected explanatory variables, which allow the behavior of the structure to be predicted; and the proposed four-step design method for this type of coastal defence scheme.
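For orientation, the Dean parameter mentioned above is the dimensionless fall velocity Ω = H/(w_s·T); a one-line computation with illustrative lab-scale values (not values from the test campaign):

```python
def dean_parameter(H, w_s, T):
    """Dimensionless fall velocity Omega = H / (w_s * T): wave height H (m),
    sediment fall velocity w_s (m/s), wave period T (s)."""
    return H / (w_s * T)

print(dean_parameter(H=0.15, w_s=0.02, T=2.0))  # 3.75 with these illustrative values
```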
Abstract:
In this paper, a rapid method for spacecraft sizing is presented. This method is useful in both the conceptual and preliminary design phases of scientific and communication satellites. The aim is to provide a sizing procedure similar to the ones used in aircraft design, namely by determining the mass of all the spacecraft subsystems. In the Introduction, the importance of an accurate initial mass budget in satellite design is emphasized. The literature on this topic is not very extensive, and most of the existing methods are recapitulated; the methodology followed in the proposed procedure for spacecraft mass sizing is based on them. Data from 26 existing satellites have been used to obtain correlations between each subsystem mass and the mass of the whole spacecraft.
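The correlation step can be pictured as fitting, for each subsystem, a scaling law m_sub = a·M^b in log-log space against the satellite database; the data points and resulting coefficients below are invented for illustration and are not the paper's 26-satellite correlations.

```python
import numpy as np

# Invented (total mass, subsystem mass) pairs standing in for the database.
M_total = np.array([450.0, 900.0, 1500.0, 2600.0, 4100.0])  # kg
m_power = np.array([55.0, 110.0, 170.0, 300.0, 450.0])      # kg, e.g. power subsystem

# Fit m = a * M^b  <=>  log m = log a + b log M (ordinary least squares).
b, log_a = np.polyfit(np.log(M_total), np.log(m_power), 1)
a = np.exp(log_a)
print(f"m_sub ~ {a:.3f} * M^{b:.3f}")
print("predicted subsystem mass at M = 2000 kg:", a * 2000.0 ** b)
```

Summing such per-subsystem predictions and iterating until the total converges gives the initial mass budget the abstract refers to.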
Abstract:
This study aims to identify the science communication initiatives undertaken by the Universidade Federal de Mato Grosso (UFMT) and the Universidade do Estado de Mato Grosso (Unemat), with a view to updating and improving institutional communication, increasing interaction with stakeholders, and strengthening the image of the state as a producer of science, technology, and innovation (CT&I). Bibliographic and documentary research was carried out, covering priority areas of funding and scientific dissemination; interviews; an image audit in the state media; and a diagnosis of the main science-journalism products developed by UFMT and Unemat, as well as joint initiatives (the journal Fapemat Ciência and the Rede de Divulgação Científica). The investigative method adopted can be characterized as Participant Research, conceived in close association with problem solving, awareness raising, or the production of new knowledge (THIOLLENT, 1996, 1997). This strategy brings together different social-research techniques, chosen according to each phase of the investigation. From the analysis of the scientific content published in the state newspapers, it was possible to verify that these public higher-education institutions still do not occupy a prominent place in such outlets, which may be explained by inadequate language or relationship channels, as well as by the need for a more effective communication policy. The mapping of the institutional portals and social media channels showed that the use of these outlets could still be much better leveraged. Finally, the conclusions indicate that cultural and institutional differences between the two institutions make the adoption of an integrated science communication policy, common to UFMT and Unemat, unfeasible. What can be considered is the development of actions to energize the dissemination efforts of these institutions within the scope of the State CT&I System.
Abstract:
We introduce a computational method to optimize the in vitro evolution of proteins. Simulating evolution with a simple model that statistically describes the fitness landscape, we find that beneficial mutations tend to occur at amino acid positions that are tolerant to substitutions, in the limit of small libraries and low mutation rates. We transform this observation into a design strategy by applying mean-field theory to a structure-based computational model to calculate each residue's structural tolerance. Thermostabilizing and activity-increasing mutations accumulated during the experimental directed evolution of subtilisin E and T4 lysozyme are strongly directed to sites identified by using this computational approach. This method can be used to predict positions where mutations are likely to lead to improvement of specific protein properties.
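A toy rendering of the final design step (the mean-field, structure-based tolerance calculation itself is not reproduced here): given per-residue structural-tolerance scores, bias the mutation library toward tolerant positions by sampling sites with probability proportional to their score.

```python
import random

# Hypothetical per-residue structural tolerance scores (higher = more
# tolerant to substitution); real scores would come from the mean-field,
# structure-based calculation described in the abstract.
tolerance = {42: 0.9, 65: 0.7, 71: 0.2, 103: 0.6, 150: 0.05}

def pick_mutation_sites(tolerance, n_sites=2, seed=0):
    """Sample positions to mutate, weighted by structural tolerance
    (with replacement, for simplicity)."""
    rng = random.Random(seed)
    positions = list(tolerance)
    weights = [tolerance[p] for p in positions]
    return [rng.choices(positions, weights=weights)[0] for _ in range(n_sites)]

print(pick_mutation_sites(tolerance))
```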
Abstract:
We show, from recent data obtained at representative North Pacific stations, that the fossil fuel CO2 signal is strongly present in the upper 400 m, and that areal extrapolations from geochemical surveys may be considered to determine the magnitude of oceanic fossil fuel CO2 uptake. The debate surrounding this topic is illustrated by contrasting reports which suggest, based upon atmospheric observations and models, that the oceanic CO2 sink is small at these latitudes, or, based upon oceanic data and models, that it is large. The difference between these two estimates is at least a factor of two. Contradictions arise between estimates based on surface partial pressures of CO2 alone, where the signal sought is small compared with regional and seasonal variability, and estimates of the accumulated subsurface burden, which correlates well with other oceanic tracers. Ocean surface waters today contain about 45 μmol·kg⁻¹ excess CO2 compared with those of the preindustrial era, and the signal is rising rapidly. What limits should we place on such calculations? The answer lies in the scientific questions to be asked. Recovery of the fossil fuel CO2 contamination signal from analysis of ocean water masses is robust enough to permit reasonable budget estimates. However, because we do not have sufficient data from the preindustrial ocean, the estimation of the required Redfield oxidation ratio in the upper several hundred meters is already blurred by the very fossil fuel CO2 signal we seek to resolve.
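The budget estimates mentioned rest on depth-integrating the excess-CO2 signal over the water column; a minimal sketch of that inventory calculation (the profile below is invented for illustration, with only the ~45 μmol·kg⁻¹ surface value echoing the abstract):

```python
import numpy as np

depth = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0])  # m
xco2 = np.array([45.0, 40.0, 30.0, 15.0, 5.0, 0.0])        # excess CO2, umol/kg (hypothetical)
rho = 1025.0                                               # mean seawater density, kg/m^3

# Column inventory in mol of excess carbon per m^2 of sea surface.
inventory = np.trapz(xco2 * 1e-6 * rho, depth)
print(f"{inventory:.1f} mol C m^-2")                       # ~7.6 mol/m^2 for this profile
```

Areal extrapolation then multiplies such inventories by the area of the surveyed region.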
Abstract:
The use of molecular genetics to introduce both a metal ion binding site and a nitroxide spin label into the same protein opens the use of paramagnetic metal-nitroxyl interactions to estimate intramolecular distances in a wide variety of proteins. In this report, a His-Xaa3-His metal ion binding motif was introduced at the N terminus of the long interdomain helix of T4 lysozyme (Lys-65 → His/Gln-69 → His) in three mutants, each containing a single nitroxide-labeled cysteine residue at position 71, 76, or 80. The results show that Cu(II)-induced relaxation effects on the nitroxide can be quantitatively analyzed in terms of interspin distance in the range of 10–25 Å using Redfield theory, as first suggested by Leigh [Leigh, J.S. (1970) J. Chem. Phys. 52, 2608-2612]. Of particular interest is the observation that distances can be determined both under rigid-lattice conditions in frozen solution and in the presence of motion of the spins at room temperature under physiological conditions. The method should be particularly attractive for investigating structure in membrane proteins that are difficult to crystallize. In the accompanying paper, the technique is applied to a polytopic membrane protein, lactose permease.
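The distance sensitivity behind this analysis comes from the r⁻⁶ dependence of the dipolar relaxation enhancement; schematically (a sketch with k lumping the spectroscopic constants and τ the relevant correlation time, not the paper's exact expression):

\[
C \;\propto\; \frac{\tau}{r^{6}}
\qquad\Longrightarrow\qquad
r \;=\; \Bigl(\frac{k\,\tau}{C}\Bigr)^{1/6},
\]

so even a factor-of-two uncertainty in the measured relaxation parameter C shifts the inferred distance by only 2^{1/6} ≈ 12%, which is why such estimates are comparatively robust.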
Abstract:
The objectives of this research dissertation were to develop and present novel analytical methods for the quantification of surface binding interactions between aqueous nanoparticles and water-soluble organic solutes. Nanoparticle surface interactions are quantified in this work as association constants describing solute binding at the nanoparticle surface. By understanding these nanoparticle-solute interactions, in part through association constants, the scientific community will better understand how organic drugs and nanomaterials interact in the environment, as well as their eventual environmental fate. The biological community and the pharmaceutical and consumer-product industries also have vested interests in nanoparticle-drug interactions, both for nanoparticle toxicity research and for using nanomaterials as drug delivery vehicles. The novel analytical methods presented here, applied to nanoparticle surface association chemistry, may prove useful in helping the scientific community understand the risks, benefits, and opportunities of nanoparticles. The method development uses a model nanoparticle, Laponite-RD (LRD), chosen because of its size, 25 nm in diameter. The solutes used as models, caffeine, oxytetracycline (OTC), and quinine, were selected because of their environmental importance and because their chemical properties can be exploited in the system. All of these chemicals are found in the environment; thus, how they interact with nanoparticles and are transported through the environment is important. The analytical methods developed use wide-bore hydrodynamic chromatography to induce a partial hydrodynamic separation between nanoparticles and dissolved solutes. Deconvolution techniques then yield two separate elution profiles, one for the nanoparticle and one for the organic solute, and a mass-balance approach gives association constants between LRD, the model nanoparticle, and the organic solutes. These findings are the first of their kind for LRD and for nanoclays in dilute dispersions.
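One way to picture the described workflow (a sketch under simplifying assumptions, not the dissertation's exact procedure): fit the overlapping detector trace as a sum of two Gaussian elution peaks, one per species, then apply a mass balance to obtain an association constant; all concentrations below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, s1, a2, t2, s2):
    """Sum of two Gaussian elution peaks (nanoparticle + free solute)."""
    return (a1 * np.exp(-((t - t1) / s1) ** 2 / 2)
            + a2 * np.exp(-((t - t2) / s2) ** 2 / 2))

# Synthetic chromatogram standing in for real detector data.
t = np.linspace(0, 20, 400)
trace = two_gaussians(t, 1.0, 8.0, 1.0, 0.6, 11.0, 1.5)
trace += np.random.default_rng(0).normal(0, 0.01, t.size)

p0 = [1.0, 7.5, 1.0, 0.5, 11.5, 1.5]          # initial peak guesses
popt, _ = curve_fit(two_gaussians, t, trace, p0=p0)

# Toy mass balance: with bound and free solute concentrations inferred
# from the deconvolved peak areas, K_assoc = [bound] / ([free] * [sites]).
bound, free, sites = 2.0e-6, 8.0e-6, 1.0e-4   # mol/L, hypothetical
print("K_assoc =", bound / (free * sites), "L/mol")
```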