890 results for real option analysis
Abstract:
Hyperspectral imaging provides information with very high spectral resolution: hundreds of bands covering the spectrum from the infrared to the ultraviolet. These images are having a strong impact in the medical field, most notably in the detection of different types of cancer. One of the main open problems in this field is real-time analysis: because these images contain a very large volume of data, the computational power they require is considerable. One of the main research lines aimed at reducing this processing time is based on distributing the analysis across several cores working in parallel. Along this line, this work develops a library for the RVC-CAL language (a language designed specifically for multimedia applications that makes parallelization intuitive) which gathers the functions needed to implement two of the four stages of the hyperspectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by the Diploma Project of Raquel Lazcano, which develops the functions needed to complete the other two stages of the unmixing chain. The document is divided into several parts. The first presents the motivation behind this Diploma Project and the objectives it aims to achieve. After that, a broad study of the current state of the art covers hyperspectral images as well as the software and hardware platforms that will be used to split the processing across cores and to identify the problems that such a division may raise.
Once the theoretical basis has been laid out, we explain the methodology followed to compose the unmixing chain and generate the library; an important point in this section is the use of C++ libraries specialized in complex matrix operations. We then present the results obtained, first stage by stage and afterwards for the complete processing chain implemented on one or several cores. Finally, we draw a series of conclusions from the analysis of the different algorithms in terms of quality of results, processing times and resource consumption, and we propose possible lines of future work related to these results.
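As an aside, the dimensionality-reduction stage mentioned above is commonly realized with PCA. The code below is only an illustrative Python sketch on invented data, not the RVC-CAL library the abstract describes; it extracts the first principal component of a synthetic pixel set by power iteration.

```python
# Illustrative sketch of the dimensionality-reduction stage (PCA via power
# iteration). The 5-band synthetic "pixels" are invented for the example.
import random

random.seed(0)

def mean_center(pixels):
    n, bands = len(pixels), len(pixels[0])
    mu = [sum(p[b] for p in pixels) / n for b in range(bands)]
    return [[p[b] - mu[b] for b in range(bands)] for p in pixels]

def covariance(centered):
    n, bands = len(centered), len(centered[0])
    cov = [[0.0] * bands for _ in range(bands)]
    for p in centered:
        for i in range(bands):
            for j in range(bands):
                cov[i][j] += p[i] * p[j] / (n - 1)
    return cov

def top_component(cov, iters=200):
    # Power iteration converges to the eigenvector of the largest eigenvalue.
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Synthetic hyperspectral pixels: 5 bands, variance concentrated on band 0.
pixels = [[random.gauss(0, 5)] + [random.gauss(0, 0.1) for _ in range(4)]
          for _ in range(200)]
centered = mean_center(pixels)
pc1 = top_component(covariance(centered))
scores = [sum(p[b] * pc1[b] for b in range(5)) for p in centered]  # 1-D projection
```

Projecting every pixel onto the few leading components is what shrinks the data volume before endmember extraction.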
Abstract:
Hyperspectral imaging provides information with very high spectral resolution: hundreds of bands covering the spectrum from the infrared to the ultraviolet. These images are having a strong impact in the medical field, most notably in the detection of different types of cancer. One of the main open problems in this field is real-time analysis: because these images contain a very large volume of data, the computational power they require is considerable. One of the main research lines aimed at reducing this processing time is based on distributing the analysis across several cores working in parallel. Along this line, this work develops a library for the RVC-CAL language (a language designed specifically for multimedia applications that makes parallelization intuitive) which gathers the functions needed to implement the classifier known as the Support Vector Machine (SVM). This work complements the research conducted in [1] and [2], where the functions needed to implement a processing chain that analyzes the hyperspectral image with the unmixing method were developed. The document is divided into several parts. The first presents the motivation behind this Master's Thesis and the objectives it aims to achieve. After that, a broad study of the current state of the art covers hyperspectral images and their processing methods, with particular attention to the method based on the SVM classifier.
Once the theoretical basis has been laid out, we explain the methodology followed to port to RVC-CAL a Matlab version of the SVM classifier optimized for analyzing hyperspectral images; an important point in this section is that a sequential version of the algorithm is developed and the groundwork is laid for a future parallelization of the classifier. We then present the results obtained, first comparing both versions and then analyzing, stage by stage, the version adapted to RVC-CAL. Finally, we draw a series of conclusions from the analysis of the two versions of the SVM classifier in terms of quality of results and processing times, and we propose possible lines of future work related to these results.
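As an aside, the decision rule at the heart of such a classifier can be sketched compactly. The code below is not the thesis's Matlab or RVC-CAL implementation; it is a minimal linear SVM trained by hinge-loss subgradient descent (Pegasos-style) on invented two-band data.

```python
# Minimal linear SVM sketch (Pegasos-style subgradient descent on hinge loss).
# Toy two-band "pixel" data; everything here is invented for illustration.
import random

random.seed(1)

def train_linear_svm(X, y, lam=0.01, epochs=200):
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            # Subgradient of lam/2*||w||^2 + hinge loss
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Two linearly separable classes in a two-band feature space.
X = [[random.gauss(2, 0.3), random.gauss(2, 0.3)] for _ in range(50)] + \
    [[random.gauss(-2, 0.3), random.gauss(-2, 0.3)] for _ in range(50)]
y = [1] * 50 + [-1] * 50
w, b = train_linear_svm(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The inner loop is embarrassingly sequential per sample, which is why the abstract's sequential-first, parallelize-later plan is a natural route.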
Abstract:
Neuronal migration is a critical phase of brain development, where defects can lead to severe ataxia, mental retardation, and seizures. In the developing cerebellum, granule neurons turn on the gene for tissue plasminogen activator (tPA) as they begin their migration into the cerebellar molecular layer. Granule neurons both secrete tPA, an extracellular serine protease that converts the proenzyme plasminogen into the active protease plasmin, and bind tPA to their cell surface. In the nervous system, tPA activity is correlated with neurite outgrowth, neuronal migration, learning, and excitotoxic death. Here we show that compared with their normal counterparts, mice lacking the tPA gene (tPA−/−) have greater than 2-fold more migrating granule neurons in the cerebellar molecular layer during the most active phase of granule cell migration. A real-time analysis of granule cell migration in cerebellar slices of tPA−/− mice shows that granule neurons are migrating 51% as fast as granule neurons in slices from wild-type mice. These findings establish a direct role for tPA in facilitating neuronal migration, and they raise the possibility that late arriving neurons may have altered synaptic interactions.
Abstract:
A method was developed to perform real-time analysis of cytosolic pH of arbuscular mycorrhizal fungi in culture using dye and ratiometric measurements (490/450 nm excitations). The study was mainly performed using photometric analysis, although some data were confirmed using image analysis. The use of nigericin allowed an in vivo calibration. Experimental parameters such as loading time and concentration of the dye were determined so that pH measurements could be made for a steady-state period on viable cells. A characteristic pH profile was observed along hyphae. For Gigaspora margarita, the pH of the tip (0–2 μm) was typically 6.7, increased sharply to 7.0 behind this region (9.5 μm), and decreased over the next 250 μm to a constant value of 6.6. A similar pattern was obtained for Glomus intraradices. The pH profile of G. margarita germ tubes was higher when cultured in the presence of carrot (Daucus carota) hairy roots (nonmycorrhizal). Similarly, extraradical hyphae of G. intraradices had a higher apical pH than the germ tubes. The use of a paper layer to prevent the mycorrhizal roots from being in direct contact with the medium selected hyphae with an even higher cytosolic pH. Results suggest that this method could be useful as a bioassay for studying signal perception and/or H+ cotransport of nutrients by arbuscular mycorrhizal hyphae.
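The ratiometric principle behind these measurements can be sketched in a few lines. The calibration constants below (pKa, Rmin, Rmax) are invented for illustration; in the study they would come from the in vivo nigericin calibration the abstract describes, and the scaling factor of the full ratiometric equation is omitted for clarity.

```python
# Simplified ratiometric pH sketch: the 490/450 nm excitation ratio R maps
# to cytosolic pH through a calibration curve. Constants are hypothetical.
import math

PKA, RMIN, RMAX = 6.98, 0.4, 3.2  # hypothetical calibration constants

def ratio_to_pH(R):
    # Simplified ratiometric equation (dye scaling factor omitted).
    return PKA + math.log10((R - RMIN) / (RMAX - R))

def pH_to_ratio(pH):
    # Inverse mapping, useful for checking the calibration round-trips.
    x = 10 ** (pH - PKA)
    return (RMIN + RMAX * x) / (1 + x)

# A hypothetical hyphal profile: ratios measured from tip toward the base.
for distance_um, R in [(1.0, 1.15), (9.5, 1.45), (250.0, 1.05)]:
    print(f"{distance_um:6.1f} um  R={R:.2f}  pH={ratio_to_pH(R):.2f}")
```

With a calibration like this, a pH profile along a hypha reduces to reading the ratio at each position and applying `ratio_to_pH`.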
Abstract:
Background: The identification and characterization of genes that influence the risk of common, complex multifactorial disease primarily through interactions with other genes and environmental factors remains a statistical and computational challenge in genetic epidemiology. We have previously introduced a genetic programming optimized neural network (GPNN) as a method for optimizing the architecture of a neural network to improve the identification of gene combinations associated with disease risk. The goal of this study was to evaluate the power of GPNN for identifying high-order gene-gene interactions. We were also interested in applying GPNN to a real data analysis in Parkinson's disease. Results: We show that GPNN has high power to detect even relatively small genetic effects (2-3% heritability) in simulated data models involving two and three locus interactions. The limits of detection were reached under conditions with very small heritability (
Abstract:
A multi-chromosome GA (Multi-GA) was developed, based upon concepts from the natural world, allowing improved flexibility in a number of areas including representation, genetic operators, their parameter rates and real-world multi-dimensional applications. A series of experiments was conducted, comparing the performance of the Multi-GA to a traditional GA on a number of recognised and increasingly complex test optimisation surfaces, with promising results. Further experiments demonstrated the Multi-GA's flexibility through the use of non-binary chromosome representations and its applicability to dynamic parameterisation. A number of alternative and new methods of dynamic parameterisation were investigated, in addition to a new non-binary 'Quotient crossover' mechanism. Finally, the Multi-GA was applied to two real-world problems, demonstrating its ability to handle mixed-type chromosomes within an individual, the limited use of a chromosome-level fitness function, the introduction of new genetic operators for structural self-adaptation and its viability as a serious real-world analysis tool. The first problem involved optimum placement of computers within a building, allowing the Multi-GA to use multiple chromosomes with different type representations and different operators in a single individual. The second problem, commonly associated with Geographical Information Systems (GIS), required a spatial analysis to locate the optimum number and distribution of retail sites over two different population grids. In applying the Multi-GA, two new genetic operators (addition and deletion) were developed and explored, resulting in the definition of a mechanism for self-modification of genetic material within the Multi-GA structure and a study of this behaviour.
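A toy sketch of the multi-chromosome idea (not the Multi-GA itself): each individual carries a binary and a real-valued chromosome, each with its own mutation operator, echoing the mixed-type individuals the abstract describes. The fitness function, rates and sizes below are invented.

```python
# Toy multi-chromosome GA sketch: one binary and one real-valued chromosome
# per individual, each mutated by a type-appropriate operator.
import random

random.seed(2)

def fitness(ind):
    bits, reals = ind
    # Invented objective: reward ones in the binary chromosome and values
    # near 0.5 in the real-valued one.
    return sum(bits) - sum((r - 0.5) ** 2 for r in reals)

def mutate(ind):
    bits, reals = ind
    bits = [b ^ 1 if random.random() < 0.05 else b for b in bits]      # bit flip
    reals = [r + random.gauss(0, 0.1) if random.random() < 0.2 else r  # creep
             for r in reals]
    return (bits, reals)

def evolve(pop, generations=30):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]  # truncation selection keeps the best
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

pop = [([random.randint(0, 1) for _ in range(10)],
        [random.random() for _ in range(4)]) for _ in range(20)]
initial_best = max(fitness(ind) for ind in pop)
best = evolve(pop)
```

Because the top half of each generation survives unchanged, the best fitness never decreases; per-type operators are what let the two chromosome representations coexist in one individual.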
Abstract:
PURPOSE. The purpose of this study was to evaluate the potential of the portable Grand Seiko FR-5000 autorefractor to allow objective, continuous, open-field measurement of accommodation and pupil size for the investigation of the visual response to real-world environments and changes in the optical components of the eye. METHODS. The FR-5000 projects a pair of infrared horizontal and vertical lines on either side of fixation, analyzing the separation of the bars in the reflected image. The measurement bars were turned on permanently and the video output of the FR-5000 was fed into a PC for real-time analysis. The calibration between infrared bar separation and refractive error was assessed over a range of 10.0 D with a model eye. Tolerance to longitudinal instrument head shift was investigated over a ±15 mm range, and tolerance to eye alignment away from the visual axis was investigated at eccentricities up to 25.0°. The minimum pupil size for measurement was determined with a model eye. RESULTS. The separation of the measurement bars changed linearly (r = 0.99), allowing continuous online analysis of the refractive state at 60 Hz temporal resolution and approximately 0.01 D system resolution with pupils >2 mm. The pupil edge could be analyzed on the diagonal axes at the same rate with a system resolution of approximately 0.05 mm. The measurements of accommodation and pupil size were affected by eccentricity of viewing and instrument focusing inaccuracies. CONCLUSIONS. The small size of the instrument, together with its resolution, temporal properties and ability to measure through a 2 mm pupil, makes it useful for the measurement of dynamic accommodation and pupil responses in confined environments, although good eye alignment is important. Copyright © 2006 American Academy of Optometry.
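The calibration step in the METHODS can be sketched as a least-squares line from bar separation to refractive error; the linear relationship is the only thing taken from the abstract, and every number below is invented.

```python
# Sketch of a linear calibration: infrared bar separation (pixels) varies
# linearly with refractive error (dioptres), so a fitted line converts each
# video frame's separation to a refraction reading. Toy data throughout.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Model-eye calibration: known refractions (D) vs measured separations (px).
refractions = [-5.0, -2.5, 0.0, 2.5, 5.0]
separations = [80.0, 95.0, 110.0, 125.0, 140.0]  # perfectly linear toy data
slope, intercept = fit_line(separations, refractions)

def separation_to_diopters(sep_px):
    return slope * sep_px + intercept
```

Applied frame by frame to the digitized video, a mapping like this is what makes the continuous 60 Hz refraction trace possible.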
Abstract:
Using the case of a low-cost airline company's website, we analyze some special research questions of information technology valuation. The distinctive characteristics of this research are the ex post valuation perspective; the parallel and comparative use of accounting and business valuation approaches; and the integrated application of discounted cash flow and real option valuation. As the examined international company is a strategic user of e-technology and wants to manage and account for intangible IT assets explicitly, these specific valuation perspectives are gaining practical significance.
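The integrated DCF plus real option approach mentioned above can be illustrated with a toy calculation. All cash flows and parameters below are invented, and a European-style expansion option valued with Black-Scholes stands in for whichever option model the study actually used.

```python
# Toy illustration: value an IT investment by plain DCF, then add the value
# of a follow-up expansion treated as a European call (Black-Scholes).
# Every figure here is hypothetical.
from statistics import NormalDist
import math

N = NormalDist().cdf  # standard normal CDF

def npv(rate, cashflows):
    # cashflows[0] is the time-0 outlay (negative), later entries are inflows.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Static DCF view of the hypothetical website project ...
project_npv = npv(0.10, [-1000, 300, 350, 400, 450])
# ... plus the embedded expansion option: S = PV of expansion cash flows,
# K = expansion cost, sigma = volatility of that PV, T = decision horizon.
option_value = bs_call(S=900, K=1000, r=0.05, sigma=0.4, T=2.0)
expanded_npv = project_npv + option_value
```

The point of the expanded figure is that flexibility the static DCF ignores (the right, not the obligation, to expand) carries measurable value.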
Abstract:
The Liénard-type differential equation x'' + f(x)·x' + g(x) = 0 plays a central role in the Káldor-Kalecki [3,4] and Goodwin [2] models of business cycles, and also in a newer model [1] describing cyclical changes in unemployment and entrepreneurial incentives. The same type of nonlinear equation also covers the theory of forced pendulums and electric oscillator circuits [5]. The related literature mostly examines the existence of limit cycles (e.g. [5]), even though the fundamental stability questions can be handled in a far more transparent way, and the results obtained in turn delimit the conditions for the existence of limit cycles much more sharply. In this paper, using the effective language of single-variable real analysis, we obtain simply stated results that can broaden the framework of business and other economic cycle models, and we also gain new illustrative special cases for, e.g., the model in [1].
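In standard notation, the equation above and its equivalent phase-plane form (the Liénard plane) read:

```latex
x'' + f(x)\,x' + g(x) = 0,
\qquad F(x) = \int_0^x f(s)\,ds,
\qquad
\begin{cases}
x' = y - F(x),\\[2pt]
y' = -g(x).
\end{cases}
```

Under the hypotheses of the classical Liénard theorem (f even and g odd with x·g(x) > 0 for x ≠ 0, and F negative then eventually positive and increasing to infinity on x > 0), this system has a unique stable limit cycle, which is why stability analysis of F and g also constrains when cyclical solutions can exist.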
Abstract:
This study examines the specific aspects, in the context of venture capital financing, of the principal-agent relationship that develops between entrepreneurs and investors as a result of asymmetric information. The authors conclude that, to handle the agency problem intensified by imperfect information, the principal-agent relationship and the special character of the deals, the participants in venture capital financing apply dedicated risk management techniques: high required returns, strict selection criteria, special investment and syndication contracts, post-investment monitoring, multi-stage financing, and the organization of portfolio companies into networks. As a consequence of this special treatment of risk, the investments are also permeated by a real option perspective, which makes venture capitalists capable of handling high uncertainty under imperfect information.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
This thesis examines the spatial and temporal variation in nitrogen dioxide (NO2) levels in Guernsey and the impacts on pre-existing asthmatics. Whilst air quality in Guernsey is generally good, NO2 levels exceed UK standards in several locations. The evidence indicates that people suffering from asthma experience exacerbation of their symptoms if exposed to elevated levels of air pollutants including NO2, although this research had never been carried out in Guernsey before. In addition, exposure assessment of individuals is rarely carried out, and research in this area is limited due to the complexity of undertaking such a study, which must combine exposures in the home, the workplace and ambient exposures, all varying with each individual's daily experience. For the first time in Guernsey, this research has examined NO2 levels in correlation with asthma patient admissions to hospital and has assessed NO2 exposures in typical homes and typical workplaces in Guernsey. The data showed a temporal correlation between NO2 levels and the number of hospital admissions, and the trend from 2008-2012 was upwards. Statistical analysis of the data did not show a significant linear correlation, owing to the small size of the data sets. Exposure assessment of individuals showed a spatial variation in exposures in Guernsey, and assessment in indoor environments showed that real-time analysis of NO2 levels needs to be undertaken if indoor microenvironments for NO2 are to be assessed adequately. There was temporal and spatial variation in NO2 concentrations measured using diffusion tubes, which provide a monthly mean value, and using analysers measuring NO2 concentrations in real time. The research shows that building layout and design are important factors for good air flow and ventilation and the dispersion of NO2 indoors.
Environmental Health Officers have statutory responsibilities for ambient air quality, hygiene of buildings and workplace environments and this role needs to be co-ordinated with healthcare professionals to improve health outcomes for asthmatics. The outcome of the thesis was the development of a risk management framework for pre-existing asthmatics at work for use by regulators of workplaces and an information leaflet to assist in improving health outcomes for asthmatics in Guernsey.
Abstract:
The protein lysate array is an emerging technology for quantifying the protein concentration ratios in multiple biological samples. It is gaining popularity, and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for the Sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method that has been used in current practice. In Chapter 3, we introduce a new model to account for the dependence structure of the errors by a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using the simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
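The quantification setting described above can be sketched with a four-parameter logistic curve: each sample's unknown concentration level enters as a horizontal shift of a common sigmoid in log-concentration, and quantification inverts the curve. This is only an illustration of the model family, not the thesis's estimator, and all parameter values are invented.

```python
# Sketch of sigmoidal quantification on a lysate array: observed intensity
# follows a 4-parameter logistic in log-concentration; the sample's level
# is the horizontal shift. All parameters are hypothetical.
import math

A, D, B = 0.1, 2.0, 1.3  # lower asymptote, upper asymptote, slope

def sigmoid(log_conc, shift=0.0):
    # 4-parameter logistic; `shift` plays the role of the sample's level.
    return A + (D - A) / (1 + math.exp(-B * (log_conc - shift)))

def invert(y, shift=0.0):
    # Quantification step: recover log-concentration from an intensity.
    return shift + math.log((y - A) / (D - y)) / B

# Round trip: a reading generated at log-concentration 0.7 maps back to 0.7.
reading = sigmoid(0.7)
recovered = invert(reading)
```

In practice the curve parameters and each sample's shift are estimated jointly from the dilution-series readings, which is where the multi-step procedure of Chapter 2 comes in.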