Abstract:
Identifying and characterizing the genes responsible for inherited human diseases will ultimately lead to a more holistic understanding of disease pathogenesis, catalyze new diagnostic and treatment modalities, and provide insights into basic biological processes. This dissertation presents research aimed at delineating the genetic and molecular basis of human diseases through epigenetic and functional studies, and it can be divided into two independent areas of research.

The first area describes the development of two high-throughput, melting-curve-based methods to assay DNA methylation, referred to as McMSP and McCOBRA. The goal of this project was to develop methods that can rapidly determine the DNA methylation status at a specific locus in a large number of samples. McMSP and McCOBRA provide several advantages over existing methods: they are simple, accurate, robust, and high-throughput, making them applicable to large-scale DNA methylation studies. McMSP and McCOBRA were then used in an epigenetic study of the complex disease ankylosing spondylitis (AS). Specifically, I tested the hypothesis that aberrant patterns of DNA methylation in five AS candidate genes contribute to disease susceptibility. While no statistically significant methylation differences were observed between cases and controls, this is the first study to investigate the hypothesis that epigenetic variation contributes to AS susceptibility, and it therefore provides a conceptual framework for future studies.

In the second area of research, I performed experiments to better delimit the function of aryl hydrocarbon receptor-interacting protein-like 1 (AIPL1), which when mutated causes various forms of inherited blindness such as Leber congenital amaurosis. A yeast two-hybrid screen was performed to identify putative AIPL1-interacting proteins. After screening 2 × 10^6 bovine retinal cDNA library clones, six unique putative AIPL1-interacting proteins were identified. While these six AIPL1 protein-protein interactions must be confirmed, their identification is an important step in understanding the functional role of AIPL1 within the retina and will provide insight into the molecular mechanisms underlying inherited blindness.
Abstract:
The Ca2+-binding protein calmodulin (CaM) is a key transducer of Ca2+ oscillations by virtue of its ability to bind Ca2+ selectively and then interact specifically with a large number of downstream enzymes and proteins. It remains unclear whether Ca2+-dependent signaling alone can activate the full range of Ca2+/CaM-regulated processes or whether other regulatory schemes in the cell allow specific targeting of CaM to subsets of Ca2+/CaM binding sites or regions of the cell. Here we investigate the possibility that alterations in the availability of CaM may serve as a cellular mechanism for regulating the activation of CaM-dependent targets. By utilizing sensitive optical techniques with high spatial and temporal resolution, we examine the intracellular dynamics of CaM signaling at a resolution previously unattainable. After optimizing and characterizing both the optical methods and the fluorescently labeled probes for intracellular measurements, the diffusion of CaM in the cytoplasm of HEK293 cells was analyzed. We discovered that the diffusion characteristics of CaM are similar to those of a comparably sized inert molecule. Independent manipulation of experimental parameters, including increases in the total concentration of CaM and in intracellular Ca2+ levels, did not change the diffusion of CaM in the cytoplasm. However, changes in diffusion were seen when the concentration of Ca2+/CaM-binding targets was increased in conjunction with elevated Ca2+. This indicates that CaM is not normally limiting for the activation of Ca2+/CaM-dependent enzymes in HEK293 cells, but it reveals that the ratio of CaM to CaM-dependent targets is a potential mechanism for changing CaM availability. Next we considered whether cellular compartmentalization may act to regulate concentrations of available Ca2+/CaM in hippocampal neurons. We discovered changes in the diffusion parameters of CaM under elevated Ca2+ conditions in the soma, neurites and nucleus, which suggest that the composition of the cytoplasm differs among these compartments and/or that they contain distinct families of CaM-binding proteins. Finally, we return to the HEK293 cell and, for the first time, directly show the intracellular binding of CaM and CaMKII, an important CaM target critical for neuronal function and plasticity. Furthermore, we analyzed the complex binding stoichiometry of this molecular interaction in the basal, activated and autophosphorylated states of CaMKII and determined the impact of this binding on CaM availability in the cell. Overall, these results demonstrate that regulation of CaM availability is a viable cellular mechanism for regulating the output of CaM-dependent processes and that this process is tuned to the specific functional needs of a particular cell type and subcellular compartment.
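A rough order-of-magnitude check on the "comparably sized inert molecule" comparison can be made with the Stokes-Einstein relation. The hydrodynamic radius and cytoplasmic viscosity below are illustrative assumptions for a ~17 kDa globular protein like CaM, not values measured in the study:

```python
# Sketch: Stokes-Einstein estimate of the aqueous diffusion coefficient for a
# globular protein roughly the size of calmodulin. The radius and viscosity
# values are assumptions for illustration only.
import math

def stokes_einstein_D(radius_m, viscosity_pa_s, temp_k=298.0):
    """D = kT / (6 * pi * eta * r), returned in m^2/s."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# Assumed hydrodynamic radius for a ~17 kDa globular protein: ~2.1 nm
r_cam = 2.1e-9
# Water at 25 C; cytoplasm is often taken as a few-fold more viscous
D_water = stokes_einstein_D(r_cam, viscosity_pa_s=0.89e-3)
D_cyto = stokes_einstein_D(r_cam, viscosity_pa_s=3.5 * 0.89e-3)

print("D in water:     %.1f um^2/s" % (D_water * 1e12))
print("D in cytoplasm: %.1f um^2/s" % (D_cyto * 1e12))
```

Such back-of-the-envelope values are what make a slowed diffusion signal (e.g. from target binding) distinguishable from free diffusion.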
Abstract:
In the rabbit retina, there are two kinds of horizontal cells (HCs). The A-type HC is a large, axonless cell that contacts cones exclusively. The B-type HC is an axon-bearing cell. While the somatic dendrites of B-type HCs also contact cones, the axon expands into an elaborately branched structure, the axon terminal (AT), which contacts a large number of rods. It is difficult to label the different HCs selectively by immunochemical methods, so we developed dye injection methods to label each type of HC. This made it possible (1) to describe the detailed structure of the AT, (2) to identify the glutamate receptors mediating cone input to A- and B-type HCs and rod input to ATs, and (3) to test the hypothesis that the B-type HCs are coupled via Cx57 gap junctions.

To obtain well-filled examples of single HCs, it was necessary to block gap junction coupling to stop the spread of Neurobiotin through the network. We used dye coupling in A-type HCs to screen a series of potential gap junction antagonists. One of these compounds, meclofenamic acid (MFA), was potent, water soluble and easily reversible. This compound may be a useful tool to manipulate gap junction coupling.

In the presence of MFA, Neurobiotin passed down the axon of B-type HCs to reveal the detailed structure of the AT. We observed that only one AT ending entered each rod spherule invagination, an observation confirmed by calculation and by double dye injections.

Glutamate is the neurotransmitter used by both rods and cones. AMPA receptors were colocalized with the dendrites of A- and B-type HCs at each cone pedicle. In addition, AMPA receptors were located on the AT ending at each rod spherule. Thus both rod and cone input to HCs is mediated by AMPA receptors.

A-type and B-type HCs may express different connexins because they have different dye-coupling properties. Recently, we found that connexin50 (Cx50) is expressed by A-type HCs. B-type HCs and B-type ATs are also independently coupled. Cx57 was expressed in the OPL, and double-label studies showed that Cx57 was colocalized with the AT matrix but not with the somatic dendrites of B-type HCs.

In summary, we have identified a useful gap junction antagonist, MFA. There is one AT ending at each rod spherule, rod input to ATs is mediated by AMPA receptors, and coupling in the AT matrix is mediated by Cx57. This confirms that HCs with different properties use distinct connexins. The connections and properties reported here suggest that ATs function as rod HCs and provide a negative feedback signal to rods.
Abstract:
Two studies among college students were conducted to evaluate appropriate measurement methods for etiological research on computing-related upper extremity musculoskeletal disorders (UEMSDs).

A cross-sectional study among 100 graduate students evaluated the utility of symptom surveys (a VAS scale and a 5-point Likert scale) compared with two UEMSD clinical classification systems (the Gerr and Moore protocols). The two symptom measures were highly concordant (Lin's rho = 0.54; Spearman's r = 0.72); the two clinical protocols were moderately concordant (Cohen's kappa = 0.50). Sensitivity and specificity, summarized by Youden's J statistic, did not reveal much agreement between the symptom surveys and the clinical examinations. It cannot be concluded that self-report symptom surveys can be used as surrogates for clinical examinations.

A pilot repeated-measures study conducted among 30 undergraduate students evaluated computing exposure measurement methods. Key findings were temporal variations in symptoms: the odds of experiencing symptoms increased with every hour of computer use (adjOR = 1.1, p < .10) and with every stretch break taken (adjOR = 1.3, p < .10). When posture was measured using the Computer Use Checklist, a positive association with symptoms was observed (adjOR = 1.3, p < .10), while measuring posture using a modified Rapid Upper Limb Assessment produced unexpected and inconsistent associations. The findings were inconclusive in identifying an appropriate posture assessment or a superior conceptualization of computer use exposure.

A cross-sectional study of 166 graduate students evaluated the comparability of graduate students' responses to the College Computing & Health surveys administered to undergraduate students. Fifty-five percent reported computing-related pain and functional limitations. In logistic regression analyses, years of computer use in graduate school and the number of years in school with weekly computer use of at least 10 hours were associated with pain within an hour of computing. The findings are consistent with the current literature on both undergraduate and graduate students.
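An adjusted odds ratio such as adjOR = 1.1 per hour of use comes from exponentiating a logistic regression coefficient. The sketch below simulates data with a known per-hour effect and recovers it with a plain Newton-Raphson fit; the variable names and effect sizes are illustrative, not the study's data:

```python
# Sketch: obtaining adjusted odds ratios from a logistic model fit by
# Newton-Raphson. Data are simulated with true ORs of 1.1 per hour of use
# and 1.3 per stretch break, mimicking the kind of estimates reported.
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Newton-Raphson for logistic regression; returns [intercept, betas]."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                        # IRLS weights
        H = X.T @ (X * W[:, None])             # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(0)
n = 2000
hours = rng.uniform(0, 10, n)    # daily hours of computer use (simulated)
breaks = rng.poisson(2, n)       # stretch breaks taken (simulated)
logit = -1.0 + np.log(1.1) * hours + np.log(1.3) * breaks
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

beta = fit_logistic(np.column_stack([hours, breaks]), y)
print("adjusted OR per hour of use: %.2f" % np.exp(beta[1]))
print("adjusted OR per break taken: %.2f" % np.exp(beta[2]))
```

The exponentiated coefficients are "adjusted" in the sense that each is estimated while holding the other covariate fixed in the same model.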
Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions in the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited to detecting the association of common variants but are less suitable for rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases.

This dissertation used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus approach to the collective analysis of genome regions.

In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the association of the entire spectrum of genetic variation within a segment of the genome or a gene, regardless of whether the variants are common or rare.

Classical quantitative genetics suffers from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with a scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their application, the functional linear models were applied to five quantitative traits in the Framingham Heart Study.

This project also proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, which led to the discovery of networks significantly associated with psoriasis.
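The core idea of region-based FPC testing can be sketched in a few lines: summarize all variants in a region (common and rare) by a few principal component scores per individual, then test those scores jointly against the trait. The toy below omits the smoothing basis over genomic position, uses simulated genotypes, and ties the trait to the region's leading component so the test has power by construction; it is not the dissertation's estimator:

```python
# Sketch: region-level association via principal component scores of the
# genotype matrix, tested jointly with an F statistic. Simulated data;
# a simplified stand-in for the functional PC approach described above.
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 40                        # individuals, variant sites in a region
maf = rng.uniform(0.005, 0.3, m)      # a mix of rare and common variants
G = rng.binomial(2, maf, size=(n, m)).astype(float)

# Centre the genotype matrix and extract leading component scores
Gc = G - G.mean(axis=0)
U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]             # per-individual scores

# Simulated quantitative trait associated with the region
y = scores[:, 0] + rng.normal(size=n)

# Joint F-test of the k scores against the trait
design = np.column_stack([np.ones(n), scores])
_, res, _, _ = np.linalg.lstsq(design, y, rcond=None)
rss1 = float(res[0])
rss0 = float(np.sum((y - y.mean()) ** 2))
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
print("F statistic for the region: %.1f" % F)
```

Testing a handful of scores jointly is what lets such methods retain power when individual rare variants are far too sparse to test one at a time.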
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capability and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for analyzing data with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to related areas. It comprises three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending two Bayesian statistical methods developed for gene-environment interaction studies to related problems such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, a natural hierarchical structure exists between the main effects and interactions in a linear model. We propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both or at least one, respectively, of the main effects of interacting factors must be present for an interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and provide a powerful approach to identifying predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the situations considered. The proposed models are applied to real data on gene-environment interactions from lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models can incorporate useful prior information into the modeling process, and the Bayesian mixture model outperforms the multivariate logistic model in parameter estimation and variable selection in most cases. Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions.

The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting non-null effects, with higher marginal posterior probabilities.

We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in how they balance statistical efficiency and bias in a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
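The strong/weak/independent distinction is a constraint on which models are admissible at all. The sketch below enumerates the model space for two genes and one exposure to make the constraint concrete; the factor names are illustrative and the enumeration is a combinatorial illustration, not the dissertation's posterior computation:

```python
# Sketch: 'strong' vs 'weak' hierarchy rules for interaction selection.
# A model (a set of main effects plus interaction terms) is admissible under
# strong hierarchy only if BOTH parents of every included interaction are in
# the model, and under weak hierarchy if AT LEAST ONE parent is.
from itertools import chain, combinations

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def admissible(mains, interactions, rule):
    mains = set(mains)
    if rule == "strong":
        return all(a in mains and b in mains for a, b in interactions)
    if rule == "weak":
        return all(a in mains or b in mains for a, b in interactions)
    return True  # 'independent': no constraint

factors = ["G1", "G2", "E"]             # two genes, one exposure (illustrative)
pairs = list(combinations(factors, 2))  # candidate pairwise interactions

def count_models(rule):
    return sum(
        1
        for mains in powerset(factors)
        for inters in powerset(pairs)
        if admissible(mains, inters, rule)
    )

for rule in ("independent", "weak", "strong"):
    print(rule, count_models(rule))  # 64, 45, 18 admissible models
```

Shrinking the admissible model space (64 → 45 → 18 here) is exactly why the hierarchical priors remove spurious interactions more efficiently: mass is never spent on interactions whose parent main effects are absent.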
Abstract:
Background: Octopods have successfully colonised the world's oceans from the tropics to the poles. Successful persistence in these habitats has required adaptation of their advanced physiological apparatus to compensate for impaired oxygen supply. Their oxygen transporter haemocyanin plays a major role in cold tolerance and accordingly has undergone functional modifications to sustain oxygen release at sub-zero temperatures. However, it remains unknown how molecular properties evolved to explain the observed functional adaptations. We thus aimed to assess whether natural selection affected molecular and structural properties of haemocyanin in a way that explains temperature adaptation in octopods.

Results: Analysis of 239 partial sequences of the haemocyanin functional units (FU) f and g from 28 octopod species of polar, temperate, subtropical and tropical origin revealed that natural selection acted primarily on the charge properties of surface residues. Polar octopods contained haemocyanins with a higher net surface charge due to decreased glutamic acid content and higher numbers of basic amino acids. Within the analysed partial sequences, positive selection was present at site 2545, positioned between the active copper-binding centre and the FU g surface. At this site, methionine was the dominant amino acid in polar octopods and leucine was dominant in tropical octopods. Sites directly involved in oxygen binding or quaternary interactions were highly conserved within the analysed sequence.

Conclusions: This study provides a first insight into the molecular and structural mechanisms that have enabled octopods to sustain oxygen supply from polar to tropical conditions. Our findings imply modulation of oxygen binding via charge-charge interactions at the protein surface, which stabilize quaternary interactions among functional units and reduce the detrimental effects of high pH on venous oxygen release. Within the observed partial haemocyanin sequence, residue 2545 forms a close link between the FU g surface and the active centre, suggesting a role as an allosteric binding site. The prevalence of methionine at this site in polar octopods implies regulation of oxygen affinity via increased sensitivity to allosteric metal binding. The high sequence conservation of sites directly involved in oxygen binding indicates that functional modifications of octopod haemocyanin occur instead via more subtle mechanisms, as observed in this study.
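The net-surface-charge comparison (fewer glutamates, more basic residues in polar species) amounts to counting charged residues in a sequence. A minimal sketch, using made-up fragments rather than actual octopod haemocyanin FU g sequences:

```python
# Sketch: crude net charge of a protein fragment at ~pH 7, counting acidic
# (D, E) and basic (K, R, and partially protonated H) residues. The two
# fragments below are invented for illustration only.
def approx_net_charge(seq):
    """+1 per K/R, +0.1 per H (mostly deprotonated at pH 7), -1 per D/E."""
    return (seq.count("K") + seq.count("R")
            + 0.1 * seq.count("H")
            - seq.count("D") - seq.count("E"))

polar_like = "MKTARHLKGVNKSTRLMH"      # fewer glutamates, more basic residues
tropical_like = "MLTAEDLEGVNESTELMH"   # glutamate-rich

print("polar-like fragment:   %+.1f" % approx_net_charge(polar_like))
print("tropical-like fragment: %+.1f" % approx_net_charge(tropical_like))
```

A real analysis would use pKa-based charge models and map the counts onto surface-exposed residues only, but the direction of the comparison is the same.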
Abstract:
Coccolithophores are calcifying marine phytoplankton of the class Prymnesiophyceae. They are considered to play an important role in the global carbon cycle through the production and export of organic carbon and calcite. We have compiled observations of global coccolithophore abundance from several existing databases as well as individual contributions of published and unpublished datasets. We estimate carbon biomass using standardised conversion methods and provide estimates of the uncertainty associated with these values. The database contains 58 384 individual observations at various taxonomic levels. This corresponds to 12 391 observations of total coccolithophore abundance and biomass. The data span the period 1929-2008, with observations from all ocean basins and all seasons, and at depths ranging from the surface to 500 m. The highest biomass values are reported in the North Atlantic, with a maximum of 501.7 µg C l⁻¹. Lower values are reported for the Pacific (maximum of 79.4 µg C l⁻¹) and the Indian Ocean (up to 178.3 µg C l⁻¹). Coccolithophores are reported across all latitudes in the Northern Hemisphere, from the Equator to 89° N, although biomass values fall below 3 µg C l⁻¹ north of 70° N. In the Southern Hemisphere, biomass values fall rapidly south of 50° S, with only a single non-zero observation south of 60° S. Biomass values show a clear seasonal cycle in the Northern Hemisphere, reaching a maximum in the summer months (June-July). In the Southern Hemisphere the seasonal cycle is less evident, possibly due to a greater proportion of low-latitude data.
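The seasonal-cycle summary above comes from binning observations by month. A minimal sketch of that aggregation step, using synthetic records that peak in June-July rather than actual database entries:

```python
# Sketch: summarizing a Northern Hemisphere seasonal cycle by binning biomass
# observations by month. Records are synthetic stand-ins for database rows.
import math
from collections import defaultdict

# (month, biomass in ug C per litre) - synthetic, peaking near June-July
records = [(m, 5 + 20 * math.exp(-((m - 6.5) ** 2) / 4) + 0.1 * i)
           for i in range(5) for m in range(1, 13)]

monthly = defaultdict(list)
for month, biomass in records:
    monthly[month].append(biomass)

means = {m: sum(v) / len(v) for m, v in sorted(monthly.items())}
peak_month = max(means, key=means.get)
print("peak month:", peak_month)
```

The same grouping, applied per basin and hemisphere, is what reveals the June-July maximum in the north and the flatter cycle in the south.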
Abstract:
We present a remote sensing observational method for the measurement of the spatio-temporal dynamics of ocean waves. Variational techniques are used to recover a coherent space-time reconstruction of oceanic sea states from stereo video imagery. The stereoscopic reconstruction problem is expressed in a variational optimization framework, in which we design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations with spatial and temporal regularizers. A nested iterative scheme is devised to numerically solve, via 3-D multigrid methods, the system of partial differential equations resulting from the optimality condition of the energy functional. The output of our method is the coherent, simultaneous estimate of wave surface height and radiance at multiple snapshots. We demonstrate the algorithm on real data collected off-shore, perform statistical and spectral analyses, and compare the results with an existing sequential method.
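The structure of the variational step can be illustrated in one dimension: an energy combining a data term with a smoothness regularizer, minimized iteratively. Plain gradient descent stands in here for the 3-D multigrid solver, and the synthetic "observation" replaces the photometric term, so this is a toy analogue of the scheme rather than the paper's method:

```python
# Sketch: minimize E(h) = sum_i (h_i - obs_i)^2 + lam * sum_i (h_{i+1}-h_i)^2
# by gradient descent, smoothing a noisy 1-D "height" profile.
import math

n, lam, step = 64, 5.0, 0.02   # step < 2/(2 + 8*lam) keeps descent stable
obs = [math.sin(2 * math.pi * i / n) + 0.3 * math.sin(17.0 * i)
       for i in range(n)]       # smooth wave plus high-frequency noise
h = list(obs)                   # initialize at the observation

for _ in range(500):
    grad = [0.0] * n
    for i in range(n):
        grad[i] = 2 * (h[i] - obs[i])               # data term
        if i > 0:
            grad[i] += 2 * lam * (h[i] - h[i - 1])  # smoothness term
        if i < n - 1:
            grad[i] += 2 * lam * (h[i] - h[i + 1])
    h = [h[i] - step * grad[i] for i in range(n)]

rough = lambda x: sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1))
print("roughness before: %.2f  after: %.2f" % (rough(obs), rough(h)))
```

In the full problem the data term is photometric (stereo image consistency), the regularizers act in space and time, and the optimality condition yields the PDE system solved by multigrid, but the energy-minimization structure is the same.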
Abstract:
Pre-operative planning has become an essential task in complex surgeries and therapies, especially those affecting soft tissue. One example where soft-tissue preoperative planning is of high interest is liver surgery. It involves the accurate detection and identification of individual liver lesions and vessels as well as proper functional liver segmentation and volume estimation. This process is very important because it determines both whether the patient is a suitable candidate for surgical therapy and the type of procedure. Soft-tissue radiation therapy is a second example where planning is required, for both conventional external and intraoperative radiotherapy. It involves the segmentation of the tumor target and vulnerable organs and the estimation of the planned dose. Functional liver segmentations and volume estimations for surgery planning are commonly estimated from computed tomography (CT) images. Similarly, in radiation therapy planning, the targets to be irradiated and the healthy, vulnerable tissues to be protected from irradiation are commonly delineated on CT scans.
However, developments in magnetic resonance imaging (MRI) technology are progressively offering additional advantages. For instance, the hepatic metastasis detection rate has been found to be significantly higher in Gd–EOB–DTPA-enhanced MRI than in CT. Therefore, recent studies highlight the importance of combining the information from CT and MRI to achieve the highest level of accuracy in radiotherapy and to facilitate accurate liver lesion description. To improve these two soft-tissue pre-operative planning scenarios, an accurate nonrigid image registration algorithm is clearly required. However, the vast majority of commercial systems provide only rigid registration. Voxel intensity measures have been shown to be robust measures of image similarity, and among them, Mutual Information (MI) is always the first candidate in multimodal registration. However, one of the main drawbacks of Mutual Information is the absence of spatial information and the assumption that the statistical relationship between images is the same over the whole image domain. The hypothesis of the present thesis is that incorporating spatial organ information into the registration process may improve registration robustness and quality, taking advantage of the availability of clinical segmentations. In this work, a multimodal nonrigid 3D registration framework using a new Organ-Focused Mutual Information metric (OF-MI) is proposed, validated and compared with the classical formulation of Mutual Information (MI). It improves registration results in problematic areas by adding regional information to the similarity criterion, taking advantage of the segmentations actually available in standard clinical protocols, and allowing the statistical dependence between the two modalities to differ among organs or regions. The proposed method is applied to the registration of CT and T1-weighted delayed Gd–EOB–DTPA-enhanced MRI, as well as to the registration of CT and MRI images in rectal intraoperative radiotherapy planning. Additionally, a supporting 3D segmentation algorithm based on Level-Sets has been developed for incorporating organ information into the registration. The segmentation algorithm was specifically designed for healthy functional liver volume estimation and demonstrated good performance on a set of abdominal CT studies. Results show a statistically significant improvement in registration quality measures with OF-MI compared with MI, both with simulated data (p<0.001) and with real data, in liver applications registering CT and Gd–EOB–DTPA-enhanced MRI and in registration for rectal radiotherapy planning using multi-organ OF-MI (p<0.05). Additionally, OF-MI presents more stable results with smaller dispersion than MI, and more robust behavior with respect to SNR changes and parameter variation. The proposed OF-MI always presents equal or better accuracy than classical MI and can consequently be a very convenient alternative in applications where the robustness of the method and the ease of parameter choice are particularly important.
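Mutual information is computed from the joint intensity histogram of the two images, and the organ-focused idea is to evaluate the statistic per region so each organ can keep its own intensity relationship between modalities. The toy below uses synthetic images and a two-region "segmentation"; it illustrates the metric's core idea, not the thesis implementation:

```python
# Sketch: MI from a joint intensity histogram, globally and per region.
# Synthetic images are built so the two regions have opposite intensity
# relationships, which a single global joint histogram blurs together.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI (in nats) of two intensity arrays via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
fixed = rng.normal(size=(64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[:, :32] = True   # region 1 (e.g. an organ) vs region 2 (background)
# "Moving" image: a different intensity mapping in each region
moving = np.where(mask,
                  2 * fixed + 0.1 * rng.normal(size=fixed.shape),
                  -fixed + 0.1 * rng.normal(size=fixed.shape))

global_mi = mutual_information(fixed, moving)
# Organ-focused idea: evaluate MI region by region and combine
region_mi = (mutual_information(fixed[mask], moving[mask])
             + mutual_information(fixed[~mask], moving[~mask]))
print("global MI: %.2f   region-summed MI: %.2f" % (global_mi, region_mi))
```

Because the global histogram mixes two contradictory intensity mappings, the region-wise evaluation recovers more of the statistical dependence, which is the intuition behind letting the dependence differ among organs.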
Resumo:
Objective: To present the results of a device that generates automated olfactory stimuli suitable for functional magnetic resonance imaging (fMRI) experiments. Material and methods: Ten normal volunteers, 5 women and 5 men, were studied. The system allows the programming of several sequences and can synchronise the onset of odour presentation with image acquisition via a trigger signal from the MRI scanner. The olfactometer allows selection of the odour, the event paradigm, the timing of the stimuli and the odour concentration. The paradigm used during fMRI scanning consisted of 15-s activation and rest blocks; each odorant event lasted 2 s, using butanol, mint or coffee, over a series of 9 cycles. Results: We observed olfactory activity (BOLD contrast) in the olfactory bulb, entorhinal cortex (4%), amygdala (2.5%) and temporo-parietal cortex, especially in the areas related to emotional integration. Conclusions: The device has demonstrated its effectiveness in stimulating olfactory areas and its capacity to adapt to fMRI equipment.
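The event-related paradigm described (15-s activation and rest blocks, 2-s odorant events, 9 cycles) can be sketched as a simple timing schedule. The function and field names below are illustrative, not the olfactometer's actual programming interface:

```python
def build_paradigm(odours=("butanol", "mint", "coffee"),
                   block_s=15.0, event_s=2.0, cycles=9):
    """Return (onset_s, duration_s, odour) triples: one 2-s odorant event
    at the start of each 15-s activation block, each followed by a 15-s rest."""
    events = []
    t = 0.0
    for c in range(cycles):
        events.append((t, event_s, odours[c % len(odours)]))
        t += 2 * block_s  # activation block + rest block
    return events
```

A schedule like this is what the scanner's trigger signal would synchronise against, so that each odour onset coincides with a known acquisition volume.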
Resumo:
The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have been added to the ‘traditional’ set of linear methods, which includes cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches for assessing functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and use these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.) designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe this toolbox will be very helpful to researchers working in the emerging field of brain connectivity analysis.
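The simplest of the ‘traditional’ linear measures mentioned above, lagged cross-correlation, can be sketched as follows. This is a generic illustration of the measure, not HERMES code (HERMES itself is a Matlab® toolbox):

```python
import numpy as np

def max_cross_correlation(x, y, max_lag=20):
    """Largest absolute normalised cross-correlation between x[t] and
    y[t + k] over the lag window |k| <= max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    best = 0.0
    for k in range(-max_lag, max_lag + 1):
        xs = x[max(0, -k):n - max(0, k)]   # x[t]
        ys = y[max(0, k):n - max(0, -k)]   # y[t + k]
        best = max(best, abs(np.dot(xs, ys)) / n)
    return best
```

Values near 1 indicate a strong linear (possibly lagged) coupling between the two channels; values near 0 suggest no linear relationship at any tested lag, although nonlinear dependence may still be present, which is precisely why the toolbox also implements synchronization and information-theoretic measures.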
Resumo:
The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology adapts the grid by refining it appropriately to improve the accuracy of a functional output of interest; the functional is usually a scalar quantity of engineering interest obtained by post-processing the solution, such as the aerodynamic drag or lift. The adjoint methodology is based on a posteriori estimation of the functional error using the Dual Weighted Residual (DWR) method, in which the error in a functional output is directly related to the local residual errors of the primal solution through the adjoint variables. These variables are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves an auxiliary embedded grid obtained by uniform refinement of the initial grid. Storing this mesh demands substantial computational resources; in 3D cases the memory required can exceed that of the initial flow problem by an order of magnitude. In this thesis, an alternative methodology is proposed: the DWR functional error estimation is re-formulated on a coarser auxiliary mesh, and a truncation error estimation technique, known as τ-estimation, is used to approximate the residuals involved in the DWR method. Using this error estimate, an output-based adaptive algorithm is designed that retains the basic ingredients of standard adjoint adaptation but at a significantly lower computational cost.
The standard and the newly proposed adjoint-based adaptive methodologies have been incorporated into a flow solver commonly used in the European aeronautical industry. The influence of the different numerical parameters involved in the algorithm has been investigated. Finally, the proposed method has been compared against other grid adaptation approaches, and its computational efficiency has been demonstrated on a series of representative test cases of aeronautical interest.
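The dual-weighted residual estimate at the core of both methodologies can be written compactly. The notation below is the conventional DWR formulation, not necessarily the symbols used in the thesis:

```latex
% Error in the output functional J, estimated by weighting the primal
% residual R(u_H) with the adjoint solution \psi:
J(u) - J(u_H) \;\approx\; -\,\psi^{T} R(u_H),
\qquad
\left(\frac{\partial R}{\partial u}\right)^{T} \psi
  = \left(\frac{\partial J}{\partial u}\right)^{T}.
```

Here $u$ is the exact (or fine-space) solution and $u_H$ the solution on the current grid; cells where the adjoint-weighted residual contribution is large are the ones flagged for refinement.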
Resumo:
Versatile and accurate motion capture systems, with the properties required for integration in both clinical and home environments, would represent a significant advance in following patients' progress, as well as in enabling the incorporation of new data exploitation and analysis methods to enhance functional neurorehabilitation therapy. In addition, such systems would permit the later development of new applications focused on automating therapeutic tasks in order to increase the therapist-to-patient ratio and thereby decrease costs [1]. However, current motion capture systems are not yet ready to work in uncontrolled environments.