886 results for real option analysis
Abstract:
Several techniques have been proposed to exploit GNSS-derived kinematic orbit information for the determination of long-wavelength gravity field features. These methods include the (i) celestial mechanics approach, (ii) short-arc approach, (iii) point-wise acceleration approach, (iv) averaged acceleration approach, and (v) energy balance approach. Although there is a general consensus that—except for energy balance—these methods theoretically provide equivalent results, real data gravity field solutions from kinematic orbit analysis have never been evaluated against each other within a consistent data processing environment. This contribution strives to close this gap. Target consistency criteria for our study are the input data sets, period of investigation, spherical harmonic resolution, a priori gravity field information, etc. We compare GOCE gravity field estimates based on the aforementioned approaches as computed at the Graz University of Technology, the University of Bern, the University of Stuttgart/Austrian Academy of Sciences, and by RHEA Systems for the European Space Agency. The involved research groups complied with most of the consistency criteria; deviations occur only where technical infeasibility exists. Performance measures include formal errors, differences with respect to a state-of-the-art GRACE gravity field, (cumulative) geoid height differences, and SLR residuals from precise orbit determination of geodetic satellites. We found that for the approaches (i) to (iv), the cumulative geoid height differences at spherical harmonic degree 100 differ by only ≈10%; in the absence of the polar data gap, SLR residuals agree by ≈96%. From our investigations, we conclude that real data analysis results are in agreement with the theoretical considerations concerning the (relative) performance of the different approaches.
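The abstract does not state how the degree-wise and cumulative geoid height differences are computed; the conventional definition, assuming fully normalized spherical-harmonic coefficient differences ΔC̄, ΔS̄ and a mean Earth radius R, is:

```latex
\sigma_n = R \sqrt{\sum_{m=0}^{n} \left( \Delta\bar{C}_{nm}^2 + \Delta\bar{S}_{nm}^2 \right)},
\qquad
\sigma_{\mathrm{cum}}(N) = R \sqrt{\sum_{n=2}^{N} \sum_{m=0}^{n} \left( \Delta\bar{C}_{nm}^2 + \Delta\bar{S}_{nm}^2 \right)},
```

with N = 100 for the comparison quoted above.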
Abstract:
My dissertation focuses on developing methods for gene-gene/environment interactions and imprinting effect detection for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model coding technique to gene-gene (GxG) interactions and to reduced models; (2) developing a novel statistical approach that allows for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a hot topic in the search for the complex networks of multiple genes or environmental exposures contributing to the outcome. Epistasis, also known as gene-gene interaction, is the departure from additivity of the genetic effects of several genes on a trait, meaning that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed and the usual functional model by both simulation study and real data analysis. Both simulation and real data analysis revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified significant regions for melanoma risk at 15q13.1, 16q24.3 and 9p21.3. We also identified potential interactions with these significant regions that contribute to melanoma risk.
Based on the NOIA model, we developed a novel statistical approach that allows us to model effects from a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that estimates of the parameters from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968 and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population. Our approach identified potential interactions of SNP rs2256543 in 6p21 with smoking contributing to lung cancer risk. Genetic imprinting is the most well-known cause of parent-of-origin effects (POEs), whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity, and this phenomenon has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon. In this study, we propose a NOIA framework for single-locus association studies that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models with different levels of POEs.
Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of variance components if Hardy-Weinberg Equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
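The orthogonality property that distinguishes the statistical (NOIA-style) coding from the functional coding can be illustrated numerically. The sketch below is ours, not the dissertation's implementation: it centers the allele count for the additive covariate and residualizes a heterozygote indicator against it (a Gram-Schmidt step), so that the genotype-frequency-weighted inner products between the intercept, additive, and dominance covariates all vanish — the property the Stat-POE model is built around. The allele frequency is a hypothetical value.

```python
# Genotype frequencies under HWE for a hypothetical allele frequency p
p = 0.3
q = 1 - p
freqs = [q * q, 2 * p * q, p * p]   # genotypes aa, Aa, AA
counts = [0.0, 1.0, 2.0]            # copies of allele A

# Additive coding: allele count centered at its population mean (= 2p)
mean_count = sum(f * c for f, c in zip(freqs, counts))
x_a = [c - mean_count for c in counts]

# Dominance coding: heterozygote indicator, centered, then residualized
# against the additive covariate so the two are exactly orthogonal
het = [0.0, 1.0, 0.0]
mean_het = sum(f * h for f, h in zip(freqs, het))
cov_ad = sum(f * xa * (h - mean_het) for f, xa, h in zip(freqs, x_a, het))
var_a = sum(f * xa * xa for f, xa in zip(freqs, x_a))
x_d = [(h - mean_het) - (cov_ad / var_a) * xa for h, xa in zip(het, x_a)]
```

With this construction the additive and dominance effect estimates are uncorrelated by design, which is why dropping one term from the model does not perturb the estimate of the other.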
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research, demonstrating superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, developed for gene-environment interaction studies, to other related types of studies such as adaptively borrowing historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene by gene interactions (epistasis) and gene by environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both or at least one of the main effects of the interacting factors, respectively, must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach for identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose this hierarchical constraint, and observe the hierarchical models' superior performance in most of the considered situations. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information in the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models enforce hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while still identifying the reported associations. This is practically appealing for studies investigating causal factors from a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of the existence of Hardy-Weinberg Equilibrium (HWE) within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using the model for detecting the true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and with the usual functional model. The proposed Bayesian NOIA model demonstrates more power for detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in how they balance statistical efficiency and bias within a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
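The strong and weak hierarchy rules described above reduce to a simple inclusion test: an interaction term may enter the model only if both (strong) or at least one (weak) of its parent main effects is already included, while the 'independent' model imposes no constraint. A minimal sketch of that rule (the function name and representation are ours, not the dissertation's):

```python
def interaction_allowed(main_in_model, pair, rule="strong"):
    """Check whether interaction `pair` = (j, k) respects the hierarchy.

    main_in_model : set of indices of main effects currently in the model
    rule          : "strong" (both parents required), "weak" (at least one),
                    or "independent" (no constraint)
    """
    j, k = pair
    if rule == "strong":
        return j in main_in_model and k in main_in_model
    if rule == "weak":
        return j in main_in_model or k in main_in_model
    return True  # independent model: interactions enter freely
```

In a Bayesian variable selection sampler, this test would gate the proposal of each interaction indicator, which is how irrelevant interactions get pruned more aggressively under the hierarchical priors.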
Abstract:
This article analyzes the thought of Slavoj Zizek in relation to the concepts of the imaginary, the symbolic, and the real in Lacanian psychoanalysis. Two lines of thought are identified within Zizek's proposal: the first tied to the concept of the symbolic, and the second tied to the Lacanian imaginary-real. This serves to establish the contradictions inherent in Zizekian thought, which hinder or preclude adopting his theories as part of a current of dissent.
Abstract:
In recent years laser technology has become an indispensable tool in the fabrication of photovoltaic devices, helping to achieve two key goals for this energy option to become a viable alternative: reducing fabrication costs and increasing device efficiency. Among photovoltaic technologies, those based on crystalline silicon (c-Si) remain dominant in the market, and current scientific efforts in this field are aimed primarily at producing higher-efficiency cells at lower cost, with a large part of the solutions expected, as noted above, to come from greater use of laser technology in their fabrication. In this context, this Thesis carries out a complete study and develops, through to their application in a final device, three specific laser processes for the optimization of high-efficiency silicon-based photovoltaic devices. These processes aim to improve the front and back contacts of c-Si photovoltaic cells in order to improve their electrical efficiency and reduce their production cost. Specifically, for the front contact, innovative solutions have been developed based on the use of laser technology for metallization and for the fabrication of point selective emitters using laser doping techniques, while for the back contact, work has focused on developing laser point-contact processes to improve the passivation of the device. Achieving these objectives has entailed reaching a series of milestones, summarized below: - Understanding the impact of the laser's interaction with the different materials used in the device and its influence on device performance, identifying harmful effects and mitigating them as far as possible.
- Developing laser processes compatible with devices that tolerate little thermal stress during fabrication (low-temperature processes), such as heterojunction devices. - Developing specific, fully parameterized processes for laser-defined selective doping, laser point contacts, and metallization by laser-induced forward transfer techniques. - Defining these processes so that they reduce the complexity of device fabrication and can be easily integrated into a production line. - Improving the characterization techniques used to verify the quality of the processes, which required specifically adapting characterization techniques of considerable complexity. - Demonstrating their viability in a final device. As detailed in this work, reaching these milestones within the framework of this Thesis has contributed to the fabrication of the first photovoltaic devices in Spain incorporating these advanced concepts and, in the case of laser doping technology, has enabled advances that are entirely novel worldwide. Likewise, the proposed laser metallization concepts open completely original avenues for improving the devices considered. Finally, this work was made possible by a very close collaboration between the Laser Center of the UPM, where the author carries out her work, and the Micro and Nanotechnology Research Group of the Universidad Politécnica de Cataluña, responsible for preparing and fine-tuning the samples and for developing some laser processes for comparison. The contribution of the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, in preparing specific experiments of great importance to this work should also be highlighted.
These collaborations took place within the framework of several projects, such as the strategic singular project PSE-MICROSIL08 (PSE-120000-2006-6) and the project INNDISOL (IPT-420000-2010-6), both funded by the European Regional Development Fund FEDER (EU) "Una manera de hacer Europa" and the MICINN, and the National Plan project AMIC (ENE2010-21384-C04-02), whose funding largely made it possible to complete this work. ABSTRACT. In recent years lasers have become a fundamental tool in the photovoltaic (PV) industry, helping this technology to achieve two major goals: cost reduction and efficiency improvement. Among present PV technologies, crystalline silicon (c-Si) maintains a clear market supremacy and, in this particular field, technological efforts focus on improving device efficiency through different approaches (for instance, reducing the electrical or optical losses in the device) and on reducing device fabrication costs (using less silicon in the final device or implementing more cost-effective production steps). In both approaches lasers appear as ideally suited tools to achieve the desired success. In this context, this work makes a comprehensive study and develops, through to their implementation in a final device, three specific laser processes designed for the optimization of high-efficiency PV devices based on c-Si. Those processes are intended to improve the front and back contacts of the considered solar cells in order to reduce production costs and improve device efficiency. In particular, to improve the front contact, this work has developed innovative solutions using lasers as fundamental processing tools: to metallize, using laser-induced forward transfer techniques, and to create local selective emitters by means of laser doping techniques.
On the other side, for the back contact, an approach based on the optimization of standard laser-fired contact formation has been pursued. To achieve these fundamental goals, a number of milestones have been reached in the development of this work, namely: - To understand the basics of laser-matter interaction physics in the considered processes, in order to preserve the functionality of the irradiated materials. - To develop laser processes fully compatible with low-temperature device concepts (as is the case for heterojunction solar cells). - In particular, to fully parameterize processes for laser doping, laser-fired contacts, and metallization via laser transfer of material. - To define these processes in such a way that their final industrial implementation is a real option. - To improve widely used characterization techniques so that they can be applied to the study of these particular processes. - To prove their viability in a final PV device. The achievement of these milestones has resulted in the fabrication of the first devices in Spain incorporating these concepts. In particular, the advances achieved in laser doping are relevant not only for Spanish science but also in the international context, with the introduction of genuinely innovative concepts such as local selective emitters. Finally, the progress made in the laser metallization approach presented in this work opens the door to future, fully innovative developments in the field of industrial PV metallization techniques. This work was made possible by a very close collaboration between the Laser Center of the UPM, where the author carries out her work, and the Micro and Nanotechnology Research Group of the Universidad Politécnica de Cataluña, in charge of the preparation and development of samples and of some laser processes used for comparison.
It is also important to acknowledge the collaboration of the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, in the preparation of specific experiments of great importance to the development of this work. These collaborations took place within the framework of various projects, such as PSE-MICROSIL08 (PSE-120000-2006-6) and INNDISOL (IPT-420000-2010-6), both funded by the Fondo Europeo de Desarrollo Regional FEDER (UE) "Una manera de hacer Europa" and the MICINN, and the project AMIC (ENE2010-21384-C04-02), whose funding largely made it possible to complete this work.
Abstract:
Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. The use of such images is having a major impact in the field of medicine, most notably in the detection of different types of cancer. Within this field, one of the main current problems is analyzing these images in real time, since the large data volume of these images demands very high computational power. One of the main research lines aimed at reducing this processing time is based on the idea of splitting the analysis across several cores working in parallel. Along this research line, the present work develops a library for the RVC-CAL language (a language especially designed for multimedia applications that allows parallelization to be expressed intuitively), which gathers the functions needed to implement two of the four stages of the spectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by that of Raquel Lazcano in her Degree Final Project, where the functions needed to complete the other two stages of the unmixing chain are developed. Specifically, this work is divided into several parts. The first presents the motivation for undertaking this Degree Final Project and the objectives it aims to achieve. After that, a broad study of the current state of the art is made, explaining hyperspectral images as well as the tools and platforms that will be used to split the work across cores and to identify the problems that may arise when doing so.
Once the theoretical basis has been presented, we focus on explaining the method followed to compose the unmixing chain and generate the library; an important point in this section is the use of specialized libraries for complex matrix operations, implemented in C++. After explaining the method used, the results obtained are presented, first stage by stage and then for the complete processing chain implemented on one or several cores. Finally, a series of conclusions drawn from analyzing the different algorithms in terms of quality of results, processing times, and resource consumption is provided, and possible future lines of work related to these results are proposed. ABSTRACT. Hyperspectral imaging allows us to collect high-resolution spectral information: hundreds of bands covering from the infrared to the ultraviolet spectrum. These images have had a strong impact in the medical field; in particular, we must highlight their use in cancer detection. In this field, the main problem we have to deal with is real-time analysis, because these images have a great data volume and require high computational power. One of the main research lines that deals with this problem is related to the analysis of these images using several cores working at the same time. Following this research line, this document describes the development of an RVC-CAL library (this language has been widely used for multimedia applications and allows an optimized system parallelization), which joins all the functions needed to implement two of the four stages of the hyperspectral image processing chain: dimensionality reduction and endmember extraction. This research is complemented by the research conducted by Raquel Lazcano in her Diploma Project, where she studies the other two stages of the processing chain. The document is divided into several chapters.
The first of them introduces the motivation for the Diploma Project and the main objectives to achieve. After that, we study the state of the art of some technologies related to this work, such as hyperspectral images and the software and hardware that we will use to parallelize the system and to analyze its performance. Once we have set out the theoretical basis, we explain the methodology followed to compose the processing chain and to generate the library; one of the most important issues in this chapter is the use of some C++ libraries specialized in complex matrix operations. We then present the results obtained in the individual stage analysis and, subsequently, the results of the full processing chain implemented on one or several cores. Finally, we draw some conclusions regarding algorithm behavior, processing time, and system performance. Likewise, we propose some future research lines based on the results obtained in this document.
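The dimensionality-reduction stage of an unmixing chain is commonly PCA-based. The library described above is written in RVC-CAL; the following Python/NumPy sketch only illustrates the underlying operation under that assumption: the cube of shape (rows, cols, bands) is reshaped into a pixel-by-band matrix, centered, and projected onto its leading principal directions via SVD.

```python
import numpy as np

def reduce_dimensionality(cube, n_components):
    """PCA-style reduction of a hyperspectral cube (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)  # one spectrum per pixel
    X -= X.mean(axis=0)                        # center each band
    # Right singular vectors of the centered matrix are the principal axes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:n_components].T).reshape(rows, cols, n_components)

# Toy cube: 4x4 pixels, 8 spectral bands
rng = np.random.default_rng(0)
cube = rng.normal(size=(4, 4, 8))
reduced = reduce_dimensionality(cube, 3)
```

Reducing the band count this way is what makes the later endmember-extraction stage tractable on a real-time budget.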
Abstract:
Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. The use of such images is having a major impact in the field of medicine, most notably in the detection of different types of cancer. Within this field, one of the main current problems is analyzing these images in real time, since the large data volume of these images demands very high computational power. One of the main research lines aimed at reducing this processing time is based on the idea of splitting the analysis across several cores working in parallel. Along this research line, the present work develops a library for the RVC-CAL language (a language especially designed for multimedia applications that allows parallelization to be expressed intuitively), which gathers the functions needed to implement the classifier known as the Support Vector Machine (SVM). This work complements that carried out in [1] and [2], where the functions needed to implement a processing chain that uses the unmixing method to process the hyperspectral image were developed. Specifically, this work is divided into several parts. The first presents the motivation for undertaking this Research Project and the objectives it aims to achieve. After that, a broad study of the current state of the art is made, explaining hyperspectral images and their processing methods, and in particular detailing the method that uses the SVM classifier.
Once the theoretical basis has been presented, we focus on explaining the method followed to port a Matlab version of the SVM classifier optimized for analyzing hyperspectral images; an important point in this section is that the sequential version of the algorithm is developed and the groundwork is laid for a future parallelization of the classifier. After explaining the method used, the results obtained are presented, first comparing both versions and then analyzing, stage by stage, the version adapted to the RVC-CAL language. Finally, a series of conclusions drawn from analyzing the two versions of the SVM classifier in terms of quality of results and processing times is provided, and possible future lines of work related to these results are proposed. ABSTRACT. Hyperspectral imaging allows us to collect high-resolution spectral information: hundreds of bands covering from the infrared to the ultraviolet spectrum. These images have had a strong impact in the medical field; in particular, we must highlight their use in cancer detection. In this field, the main problem we have to deal with is real-time analysis, because these images have a great data volume and require high computational power. One of the main research lines that deals with this problem is related to the analysis of these images using several cores working at the same time. Following this research line, this document describes the development of an RVC-CAL library (this language has been widely used for multimedia applications and allows an optimized system parallelization), which joins all the functions needed to implement the Support Vector Machine (SVM) classifier. This research complements the research conducted in [1] and [2], where the functions necessary to implement the unmixing method to analyze hyperspectral images were developed. The document is divided into several chapters.
The first of them introduces the motivation for the Master's Thesis and the main objectives to achieve. After that, we study the state of the art of some technologies related to this work, such as hyperspectral images, their processing methods, and, specifically, the SVM classifier. Once we have set out the theoretical basis, we explain the methodology followed to translate a Matlab version of the SVM classifier, optimized to process a hyperspectral image, into the RVC-CAL language; one of the most important issues in this chapter is that a sequential implementation is developed and the groundwork for a future parallelization of the SVM classifier is laid. We then present the results obtained in the comparison between versions and, subsequently, the results of the different steps that compose the SVM in its RVC-CAL version. Finally, we draw some conclusions regarding algorithm behavior and processing time. Likewise, we propose some future research lines based on the results obtained in this document.
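Applying a trained SVM to a hyperspectral image amounts to evaluating the decision rule once per pixel spectrum. The work above does this in RVC-CAL; this Python/NumPy sketch only illustrates the per-pixel step for the linear case, sign(w·x + b), with hypothetical pretrained weights (the real classifier, trained in Matlab, may use a nonlinear kernel).

```python
import numpy as np

def classify_pixels(cube, w, b):
    """Apply a pretrained linear SVM decision rule to every pixel spectrum."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)       # one spectrum per row
    scores = X @ w + b                # signed distance to the hyperplane
    return (scores > 0).astype(int).reshape(rows, cols)

# Hypothetical pretrained parameters for a 5-band sensor
w = np.array([0.5, -0.2, 0.1, 0.0, 0.3])
b = -0.1
cube = np.zeros((2, 2, 5))
cube[0, 0] = [1, 0, 0, 0, 1]          # only this pixel scores above 0
labels = classify_pixels(cube, w, b)  # 2x2 map of 0/1 class labels
```

Because each pixel is scored independently, this stage parallelizes naturally across cores, which is the motivation for the RVC-CAL port.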
Abstract:
Neuronal migration is a critical phase of brain development, where defects can lead to severe ataxia, mental retardation, and seizures. In the developing cerebellum, granule neurons turn on the gene for tissue plasminogen activator (tPA) as they begin their migration into the cerebellar molecular layer. Granule neurons both secrete tPA, an extracellular serine protease that converts the proenzyme plasminogen into the active protease plasmin, and bind tPA to their cell surface. In the nervous system, tPA activity is correlated with neurite outgrowth, neuronal migration, learning, and excitotoxic death. Here we show that compared with their normal counterparts, mice lacking the tPA gene (tPA−/−) have greater than 2-fold more migrating granule neurons in the cerebellar molecular layer during the most active phase of granule cell migration. A real-time analysis of granule cell migration in cerebellar slices of tPA−/− mice shows that granule neurons are migrating 51% as fast as granule neurons in slices from wild-type mice. These findings establish a direct role for tPA in facilitating neuronal migration, and they raise the possibility that late arriving neurons may have altered synaptic interactions.
Resumo:
A method was developed to perform real-time analysis of the cytosolic pH of arbuscular mycorrhizal fungi in culture using a fluorescent dye and ratiometric measurements (490/450 nm excitation). The study was mainly performed using photometric analysis, although some data were confirmed using image analysis. The use of nigericin allowed an in vivo calibration. Experimental parameters such as loading time and dye concentration were determined so that pH measurements could be made on viable cells during a steady-state period. A characteristic pH profile was observed along hyphae. For Gigaspora margarita, the pH of the tip (0–2 μm) was typically 6.7, increased sharply to 7.0 behind this region (9.5 μm), and decreased over the next 250 μm to a constant value of 6.6. A similar pattern was obtained for Glomus intraradices. The pH profile of G. margarita germ tubes was higher when they were cultured in the presence of (nonmycorrhizal) carrot (Daucus carota) hairy roots. Similarly, extraradical hyphae of G. intraradices had a higher apical pH than the germ tubes. The use of a paper layer to keep the mycorrhizal roots out of direct contact with the medium selected hyphae with an even higher cytosolic pH. These results suggest that the method could be useful as a bioassay for studying signal perception and/or H+ cotransport of nutrients by arbuscular mycorrhizal hyphae.
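The ratio-to-pH step this method relies on can be sketched as a simple calibration mapping. The calibration points below are invented (the real values would come from the nigericin in vivo calibration described above); the three sample ratios are chosen only so that the output reproduces the tip/subapical/distal values reported for G. margarita.

```python
import numpy as np

# Hypothetical in vivo calibration: nigericin-clamped pH vs. 490/450 ratio.
# These numbers are illustrative, not the paper's data.
cal_pH    = np.array([6.0, 6.5, 7.0, 7.5])
cal_ratio = np.array([1.10, 1.45, 1.80, 2.15])

# Fit ratio -> pH; ratiometric dyes are near-linear over a narrow pH range.
slope, intercept = np.polyfit(cal_ratio, cal_pH, 1)

def ratio_to_pH(r490_over_450):
    """Map a measured 490/450 nm excitation ratio to cytosolic pH."""
    return slope * np.asarray(r490_over_450, dtype=float) + intercept

# Ratios sampled along a hypha (tip -> subapical peak -> distal plateau),
# chosen to land on the 6.7 / 7.0 / 6.6 profile reported in the abstract.
profile = ratio_to_pH([1.59, 1.80, 1.52])
```

The ratiometric form is what makes the measurement robust: dividing the two excitation intensities cancels dye loading and path-length effects, leaving a quantity that depends (mainly) on pH.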
Resumo:
Background: The identification and characterization of genes that influence the risk of common, complex multifactorial disease primarily through interactions with other genes and environmental factors remains a statistical and computational challenge in genetic epidemiology. We have previously introduced a genetic programming optimized neural network (GPNN) as a method for optimizing the architecture of a neural network to improve the identification of gene combinations associated with disease risk. The goal of this study was to evaluate the power of GPNN for identifying high-order gene-gene interactions. We were also interested in applying GPNN to a real data analysis in Parkinson's disease. Results: We show that GPNN has high power to detect even relatively small genetic effects (2–3% heritability) in simulated data models involving two and three locus interactions. The limits of detection were reached under conditions with very small heritability (
Resumo:
A multi-chromosome GA (Multi-GA) was developed, based upon concepts from the natural world, allowing improved flexibility in a number of areas, including representation, genetic operators and their parameter rates, and real-world multi-dimensional applications. A series of experiments compared the performance of the Multi-GA with that of a traditional GA on a number of recognised and increasingly complex test optimisation surfaces, with promising results. Further experiments demonstrated the Multi-GA's flexibility through the use of non-binary chromosome representations and its applicability to dynamic parameterisation. A number of alternative and new methods of dynamic parameterisation were investigated, in addition to a new non-binary 'Quotient crossover' mechanism. Finally, the Multi-GA was applied to two real-world problems, demonstrating its ability to handle mixed-type chromosomes within an individual, the limited use of a chromosome-level fitness function, the introduction of new genetic operators for structural self-adaptation, and its viability as a serious real-world analysis tool. The first problem involved the optimum placement of computers within a building, allowing the Multi-GA to use multiple chromosomes with different type representations and different operators in a single individual. The second problem, commonly associated with Geographical Information Systems (GIS), required a spatial analysis to locate the optimum number and distribution of retail sites over two different population grids. In applying the Multi-GA, two new genetic operators (addition and deletion) were developed and explored, resulting in the definition of a mechanism for self-modification of genetic material within the Multi-GA structure and a study of this behaviour.
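As a hedged sketch of the multi-chromosome idea, the following Python toy keeps two chromosomes of different types (binary and real-valued) in one individual, each with its own mutation operator and rate. The fitness surface, rates and population sizes are invented, and the Multi-GA's crossover, addition/deletion and structural self-adaptation operators are omitted.

```python
import random

random.seed(1)

def mutate_binary(genes, rate=0.05):
    # Per-gene bit flip, the operator attached to the binary chromosome.
    return [g ^ 1 if random.random() < rate else g for g in genes]

def mutate_real(genes, rate=0.2, sigma=0.1):
    # Gaussian perturbation, the operator attached to the real chromosome.
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in genes]

def new_individual():
    # One individual, two chromosomes with different representations.
    return {"bits": [random.randint(0, 1) for _ in range(8)],
            "reals": [random.uniform(-1.0, 1.0) for _ in range(4)]}

def fitness(ind):
    # Toy mixed-type surface: maximise set bits, drive real genes to zero.
    return sum(ind["bits"]) - sum(x * x for x in ind["reals"])

def mutate(ind):
    # Each chromosome is mutated by its own operator and rate.
    return {"bits": mutate_binary(ind["bits"]),
            "reals": mutate_real(ind["reals"])}

# Simple elitist (mu + lambda) loop over a small population.
pop = [new_individual() for _ in range(20)]
for _ in range(200):
    children = [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop + children, key=fitness, reverse=True)[:20]

best = max(pop, key=fitness)
```

The point of the structure is that operators and rates live with their chromosome, so mixed-type individuals (as in the computer-placement problem above) need no encoding tricks.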
Resumo:
PURPOSE. The purpose of this study was to evaluate the potential of the portable Grand Seiko FR-5000 autorefractor to allow objective, continuous, open-field measurement of accommodation and pupil size for the investigation of the visual response to real-world environments and changes in the optical components of the eye. METHODS. The FR-5000 projects a pair of infrared horizontal and vertical lines on either side of fixation, analyzing the separation of the bars in the reflected image. The measurement bars were turned on permanently, and the video output of the FR-5000 was fed into a PC for real-time analysis. The calibration between infrared bar separation and refractive error was assessed over a range of 10.0 D with a model eye. Tolerance to longitudinal instrument head shift was investigated over a ±15 mm range, and tolerance to eye alignment away from the visual axis over eccentricities up to 25.0°. The minimum pupil size for measurement was determined with a model eye. RESULTS. The separation of the measurement bars changed linearly (r = 0.99), allowing continuous online analysis of the refractive state at a 60 Hz temporal resolution and approximately 0.01 D system resolution with pupils >2 mm. The pupil edge could be analyzed on the diagonal axes at the same rate with a system resolution of approximately 0.05 mm. Measurements of accommodation and pupil size were affected by eccentricity of viewing and by instrument focusing inaccuracies. CONCLUSIONS. The small size of the instrument, together with its resolution, its temporal properties, and its ability to measure through a 2 mm pupil, makes it useful for the measurement of dynamic accommodation and pupil responses in confined environments, although good eye alignment is important. Copyright © 2006 American Academy of Optometry.
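The calibration step in METHODS amounts to a straight-line mapping from measured bar separation to refractive error, applied to every video frame. The sketch below illustrates that mapping; all numbers are invented for illustration, since the abstract reports only that the relation is linear (r = 0.99).

```python
import numpy as np

# Hypothetical model-eye calibration: known refractive errors (D) vs. the
# bar separations measured at each setting (arbitrary pixel units).
cal_diopters   = np.array([-5.0, -2.5, 0.0, 2.5, 5.0])
cal_separation = np.array([132.0, 121.0, 110.0, 99.0, 88.0])

# Linear fit, as justified by the reported r = 0.99 linearity.
slope, intercept = np.polyfit(cal_separation, cal_diopters, 1)

def separation_to_diopters(sep):
    """Convert a per-frame bar separation to refractive error (D)."""
    return slope * np.asarray(sep, dtype=float) + intercept

# One second of 60 Hz frames: small oscillation around the 0 D separation,
# standing in for a dynamic accommodation trace.
frames = 110.0 + 0.2 * np.sin(np.linspace(0.0, 2.0 * np.pi, 60))
refraction = separation_to_diopters(frames)
```

Applying the fitted line frame by frame is what gives the continuous 60 Hz refraction trace; the approximately 0.01 D system resolution then follows from how finely the bar separation itself can be resolved in the video signal.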