894 results for 2D NMR
Abstract:
Self-assembly is a phenomenon that occurs frequently throughout the universe. In this work, two self-assembling systems were studied: the formation of reverse micelles in isooctane and in supercritical CO2 (scCO2), and the formation of gels in organic solvents. The goal was the physicochemical study of these systems and the development of an NMR methodology to study them. AOT was used as a model molecule both to comprehensively study the widely researched water/AOT/isooctane system at different water concentrations and to assess its aggregation in supercritical carbon dioxide at different pressures. To do so, an NMR methodology was devised with which it was possible to accurately determine the hydrodynamic radius of the micelle (in agreement with DLS measurements) using diffusion ordered spectroscopy (DOSY), as well as the micellar stability and dynamics. These were mostly assessed by 1H NMR relaxation studies, which allowed the determination of correlation times and of the amount of correlating water, in agreement with the size of the shell that interacts with the micellar layer. The encapsulation of differently sized carbohydrates was also studied, providing insight into the dynamics and stability of the aggregates under such conditions. A W/CO2 microemulsion was prepared using AOT and water in scCO2, with ethanol as cosurfactant. The behaviour of the components of the system at different pressures was assessed, and it is likely that reverse microemulsions were achieved above 130 bar. The homogeneity of the system was also determined by NMR. The formation of the gel network by two small-molecule organogelators in toluene-d8 was studied by DOSY. A methodology using One-shot DOSY to acquire the spectra was designed and applied successfully, yielding an understanding of the role of the solvent and gelator in the aggregation process, as well as an estimate of the gelation time.
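The DOSY route to micelle size mentioned above typically converts the measured self-diffusion coefficient into a hydrodynamic radius through the Stokes-Einstein relation. A minimal sketch, assuming a spherical aggregate and using an illustrative (not measured) diffusion coefficient together with the approximate viscosity of isooctane at 25 °C:

```python
import math

def hydrodynamic_radius(D, T=298.15, eta=4.8e-4):
    """Stokes-Einstein relation: r_H = kB * T / (6 * pi * eta * D).

    D   : diffusion coefficient in m^2/s (e.g. from a DOSY experiment)
    T   : temperature in K
    eta : solvent viscosity in Pa*s (isooctane is roughly 4.8e-4 at 25 C)
    """
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (6 * math.pi * eta * D)

# Hypothetical DOSY-derived diffusion coefficient, 1.0e-10 m^2/s
r = hydrodynamic_radius(1.0e-10)
print(f"r_H = {r * 1e9:.2f} nm")
```

For non-spherical or strongly solvated aggregates the spherical assumption only yields an apparent radius, which is one reason independent sizing (e.g. DLS, as in this work) is useful for validation.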
Abstract:
Neurological disorders are a major concern in modern societies, with increasing prevalence mainly related to higher life expectancy. Most currently available therapeutic options can only control and ameliorate patients' symptoms, often becoming refractory over time. Therapeutic breakthroughs and advances have been hampered by the lack of accurate central nervous system (CNS) models. The development of these models allows the study of disease onset/progression mechanisms and the preclinical evaluation of novel therapeutics. This has traditionally relied on genetically engineered animal models, which often diverge considerably from the human phenotype (developmentally, anatomically and physiologically), and on 2D in vitro cell models, which fail to recapitulate the characteristics of the target tissue (cell-cell and cell-matrix interactions, cell polarity). The in vitro recapitulation of CNS phenotypic and functional features requires the implementation of advanced culture strategies that mimic the in vivo structural and molecular complexity. Models based on the differentiation of human neural stem cells (hNSC) in 3D cultures have great potential as complementary tools in preclinical research, bridging the gap between human clinical studies and animal models. This thesis aimed at the development of novel human 3D in vitro CNS models by integrating agitation-based culture systems and a wide array of characterization tools. Neural differentiation of hNSC as 3D neurospheres was explored in Chapter 2. Here, it was demonstrated that human midbrain-derived neural progenitor cells of fetal origin (hmNPC) can generate complex tissue-like structures containing functional dopaminergic neurons, as well as astrocytes and oligodendrocytes.
Chapter 3 focused on the development of cellular characterization assays for cell aggregates based on light-sheet fluorescence imaging systems, which resulted in increased spatial resolution for both fixed samples and live imaging. The applicability of the developed human 3D cell model for preclinical research was explored in Chapter 4, evaluating the potential of a viral vector candidate for gene therapy. The efficacy and safety of helper-dependent CAV-2 (hd-CAV-2) for gene delivery to human neurons was evaluated, demonstrating increased neuronal tropism, efficient transgene expression and minimal toxicity. The potential of human 3D in vitro CNS models to mimic brain functions was further addressed in Chapter 5. Exploring the use of 13C-labeled substrates and Nuclear Magnetic Resonance (NMR) spectroscopy tools, neural metabolic signatures were evaluated, showing lineage-specific metabolic specialization and the establishment of neuron-astrocytic shuttles upon differentiation. Chapter 6 focused on transferring the knowledge and strategies described in the previous chapters to the implementation of a scalable and robust process for the 3D differentiation of hNSC derived from human induced pluripotent stem cells (hiPSC). Here, software-controlled perfusion stirred-tank bioreactors were used as the technological system to sustain cell aggregation and differentiation. The work developed in this thesis provides practical and versatile new in vitro approaches to model the human brain. Furthermore, the culture strategies described herein can be extended to other sources of neural phenotypes, including patient-derived hiPSC. The combination of this 3D culture strategy with the implemented characterization methods represents a powerful complementary tool applicable in drug discovery, toxicology and disease modeling.
Abstract:
The aim of this work was to study the self-assembly process of C3-symmetric molecules. To accomplish this objective, 1,3,5-benzenetricarboxamide (BTA) derivatives were obtained. Five C3-symmetric molecules were synthesized in moderate to good yields (39-72%) using azobenzene, aniline, benzylamine, tryptophan and tyrosine. The aggregation behavior of the BTA derivatives was probed with 1H-NMR spectroscopy, 1H-1H 2D Nuclear Overhauser Effect Spectroscopy (NOESY) and Diffusion Ordered Spectroscopy (DOSY). These experiments allowed the study of the influence of H-bonding groups, aromatic rings, unsaturated bonds and the overall geometry on the molecular self-assembly associated with the different structural patterns present in these molecules. Stacking and large-molecule behavior were observed for BTA 1 (the aniline derivative), BTA 4 (the tyrosine derivative) and BTA 5 (the tryptophan derivative), which carry several of the discussed functional groups, such as unsaturated bonds and H-bonding groups. BTA 5 was used in preliminary interaction studies with glucose and ammonium chloride, showing interaction with the ammonium ion.
Abstract:
The chemical composition of propolis is affected by environmental factors and harvest season, making it difficult to standardize its extracts for medicinal use. By detecting a typical chemical profile associated with propolis from a specific production region or season, certain types of propolis may be used to obtain a specific pharmacological activity. In this study, propolis from three agroecological regions (plain, plateau, and highlands) of southern Brazil, collected over the four seasons of 2010, was investigated through a novel NMR-based metabolomics data analysis workflow. Chemometrics and machine learning algorithms (PLS-DA and RF), including methods to estimate variable importance in classification, were used. The machine learning and feature selection methods permitted the construction of models for propolis sample classification with high accuracy (>75%, reaching 90% in the best case), discriminating samples better by collection season than by harvest region. PLS-DA and RF allowed the identification of biomarkers for sample discrimination, expanding the set of discriminating features and adding relevant information for the identification of the class-determining metabolites. The NMR-based metabolomics analytical platform, coupled with bioinformatic tools, allowed the characterization and classification of Brazilian propolis samples in terms of the metabolite signature of important compounds (i.e., chemical fingerprint), harvest season, and production region.
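The classification-with-feature-importance step described above can be sketched with a random forest on a synthetic spectral bucket table. Everything below (data, labels, parameters) is invented for illustration and is not the propolis dataset; it only shows how an importance ranking surfaces candidate discriminating features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an NMR bucket table: rows = samples,
# columns = spectral bins; labels = hypothetical harvest season (0-3).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))
y = rng.integers(0, 4, size=80)
X[y == 0, 3] += 2.0  # plant a season-specific marker in bin 3

# Cross-validated accuracy of the classifier
clf = RandomForestClassifier(n_estimators=300, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()

# Importance ranking highlights candidate biomarker bins
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print(f"CV accuracy: {acc:.2f}, top bins: {top}")
```

The same pattern (cross-validated accuracy plus a variable-importance ranking) mirrors the workflow's two outputs: a classification performance estimate and a shortlist of class-determining features for metabolite identification.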
Abstract:
The volume of data from genomics- and proteomics-based experiments is large and complex in structure. Only through efficient bioinformatic/biostatistical analysis is it possible to identify and characterize expression profiles of genes and proteins that are differentially expressed under different experimental conditions (EC). The main objective is to extend the computational and analytical capabilities of the software available for analyzing this type of data, especially that applicable to two-dimensional difference gel electrophoresis (2D-DIGE) data. In DIGE, the most widely used statistical method is Student's t-test, whose application presupposes a single source of variation and compliance with certain distributional assumptions about the data (such as independence and homogeneity of variances), which are not always met in practice and can lead to errors in the estimation and inference of the effects of interest. Generalized linear mixed models (GLMM) not only incorporate the effects assumed to drive variation in the response, but also model covariance and correlation structures closer to those found in reality, freeing the analysis from the assumptions of independence and normality. These models, though more complex in essence, will simplify the analysis by modeling the raw data directly, without applying transformations to achieve more symmetric distributions. They will also produce statistically more efficient estimates of the effects present, and therefore more accurate detection of the genes/proteins involved in biological processes of interest. The relevant characteristic of this technology is that the proteins present are not known a priori; they are identified by other, more expensive techniques once a set of differential spots has been detected on the 2DE gels.
Reducing false positives is therefore fundamental in the identification of such spots, since they lead to erroneous results and fictitious biological associations. This will be achieved not only by developing normalization techniques that explicitly incorporate the ECs, but also by developing methods that move beyond the Gaussian assumption and evaluate other distributional assumptions better suited to this type of data. Machine learning techniques will also be developed that, by optimizing specific cost functions, allow the identification of the subset of proteins with the greatest diagnostic potential. This project has a strong statistical/bioinformatic component, but we believe that it is the field of application, namely genomics and proteomics, that will benefit most from the expected results. To that end, several databases from different experiments, provided by various national and international research centers, will be used.
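The contrast drawn above between a plain t-test and a mixed model can be illustrated by fitting a linear mixed model to simulated DIGE-like spot intensities, where samples from the same gel share a random gel effect that a t-test would ignore. This is a sketch under assumed data structure using statsmodels, not the project's actual pipeline:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical DIGE-like data: intensities measured on 6 gels, 8 spots each,
# two experimental conditions balanced within every gel.
rng = np.random.default_rng(1)
gels = np.repeat(np.arange(6), 8)
condition = np.tile([0, 1], 24)
gel_effect = rng.normal(0, 0.5, 6)[gels]          # shared within-gel variation
intensity = 10 + 0.8 * condition + gel_effect + rng.normal(0, 0.3, 48)

df = pd.DataFrame({"intensity": intensity, "condition": condition, "gel": gels})

# Linear mixed model: fixed condition effect, random intercept per gel.
# The gel-level variance is modeled instead of inflating the error term.
model = smf.mixedlm("intensity ~ condition", df, groups=df["gel"]).fit()
print(model.params["condition"])  # estimated condition effect (true value: 0.8)
```

Because the condition is balanced within gels, the random intercept absorbs between-gel variation and the fixed-effect estimate is more precise than a pooled two-sample comparison would be.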
Abstract:
Surgeons may use a number of cutting instruments, such as osteotomes and chisels, to cut bone during an operative procedure. The initial loading of cortical bone during the cutting process results in the formation of microcracks in the vicinity of the cutting zone, with main crack propagation to failure occurring with continued loading. When a material cracks, energy is emitted in the form of Acoustic Emission (AE) signals that spread in all directions; AE transducers can therefore be used to monitor the occurrence and development of microcracking and crack propagation in cortical bone. In this research, the number of AE signals (hits) and related parameters, including amplitude, duration and absolute energy (abs-energy), were recorded during the indentation cutting of cortical bone specimens by a wedge blade. The cutting force was also measured to correlate load-displacement curves with the output from the AE sensor. The experimental results show that AE activity increases substantially during loading just prior to fracture, between 90% and 100% of the maximum fracture load. Furthermore, an amplitude threshold of 64 dB (with an approximate abs-energy of 1500 aJ) was established to separate AE signals associated with microcracking (41-64 dB) from fracture-related signals (65-98 dB). The results also demonstrated that the complete fracture event, which had the highest duration value, can be distinguished from other growing macrocracks that did not lead to catastrophic fracture. It was observed that main crack initiation may be detected by capturing a high-amplitude signal at a mean load of 87% of the maximum load, and that unsteady crack propagation may occur just prior to the final fracture event at a mean load of 96% of the maximum load. The author concludes that the AE method is useful in understanding crack initiation and fracture during the indentation cutting process.
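The amplitude threshold reported above amounts to a simple rule for labeling AE hits. A minimal sketch using the study's reported bands; the example amplitudes are hypothetical:

```python
def classify_ae_hit(amplitude_db):
    """Label an acoustic-emission hit by the amplitude bands reported
    in the study: 41-64 dB -> microcracking, 65-98 dB -> fracture-related."""
    if 41 <= amplitude_db <= 64:
        return "microcrack"
    if 65 <= amplitude_db <= 98:
        return "fracture"
    return "outside range"

# Hypothetical AE hit amplitudes (dB) from an indentation test
hits = [45, 58, 64, 65, 72, 98]
print([classify_ae_hit(a) for a in hits])
```

In practice such a threshold would be combined with the other recorded parameters (duration, abs-energy) to distinguish the single complete-fracture event from non-catastrophic macrocrack growth, as the abstract describes.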
Abstract:
BACKGROUND: Echocardiography is a very useful method for patient selection and for assessing response to cardiac resynchronization therapy (CRT). 3D echocardiography already has an established role in the assessment of ventricular volumes and left ventricular ejection fraction (LVEF), with excellent correlation of results when compared with MRI. OBJECTIVE: To compare the assessment of ventricular volumes (LVEDV, LVESV), LVEF and LV mass before and after CRT by two-dimensional (2D echo) and three-dimensional (3D echo) echocardiography. METHODS: Twenty-four patients with NYHA class III or IV heart failure, in sinus rhythm, with QRS > 150 ms, on optimized heart failure therapy and undergoing CRT were evaluated. Electrocardiogram (ECG), clinical assessment, and 2D and 3D echo were performed before and three and six months after CRT. The techniques were compared using Pearson's correlation (r). RESULTS: At baseline, the correlation between the methods was 0.96 for LVEDV, 0.95 for LVESV, 0.87 for LVEF, and 0.72 for LV mass. Three months after CRT, the correlation between the methods was 0.96 for LVEDV, 0.95 for LVESV, 0.95 for LVEF, and 0.77 for LV mass. Six months after CRT, the correlation between 2D and 3D echo was 0.98 for LVEDV, 0.91 for LVESV, 0.96 for LVEF, and 0.85 for LV mass. CONCLUSION: This study found a reduction in LVEDV and LVESV and an improvement in LVEF after CRT. There was excellent correlation between 2D and 3D echo for the assessment of ventricular volumes and LVEF, and good correlation between the methods for the assessment of left ventricular mass before and after CRT.
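The method comparison above rests on Pearson's correlation coefficient, which can be computed directly from paired measurements. The paired volume values below are hypothetical and only illustrate the calculation:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical paired end-diastolic volumes (mL) by 2D and 3D echo
edv_2d = [180, 210, 165, 240, 200, 155]
edv_3d = [175, 215, 160, 245, 205, 150]
print(f"r = {pearson_r(edv_2d, edv_3d):.3f}")
```

Note that a high r only shows the two methods rank patients consistently; agreement in absolute values (systematic bias between 2D and 3D echo) would need a complementary analysis such as Bland-Altman.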
Abstract:
feature extraction, feature tracking, vector field visualization
Abstract:
The assessment of left atrial (LA) function is used in various cardiovascular diseases. The LA plays a complementary role in cardiac performance by modulating left ventricular (LV) function. Transthoracic two-dimensional (2D) phasic volumes and Doppler echocardiography can measure LA function non-invasively. However, evaluation of LA deformation derived from 2D speckle tracking echocardiography (STE) is a new, feasible and promising approach for the assessment of LA mechanics. These parameters are able to detect subclinical LA dysfunction in different pathological conditions. Normal ranges for LA deformation and cut-off values to diagnose LA dysfunction in different diseases have been reported, but the data are still conflicting, probably because of methodological and technical issues. This review highlights the importance of a single standardized technique to assess LA phasic function by STE, and discusses recent studies on the most important clinical applications of this technique.