884 results for Piecewise linear systems


Relevance: 80.00%

Abstract:

This thesis presents a methodology for measuring thermal properties in situ, with a special focus on obtaining properties of layered stack-ups commonly used in armored vehicle components. The technique involves attaching a thermal source to the surface of a component, measuring the heat flux transferred between the source and the component, and measuring the surface temperature response. The material properties of the component can subsequently be determined from measurement of the transient heat flux and temperature response at the surface alone. Experiments involving multilayered specimens show that the surface temperature response to a sinusoidal heat flux forcing function is also sinusoidal. A frequency domain analysis shows that sinusoidal thermal excitation produces a gain and phase shift behavior typical of linear systems. Additionally, this analysis shows that the material properties of sub-surface layers affect the frequency response function at the surface of a particular stack-up. The methodology involves coupling a thermal simulation tool with an optimization algorithm to determine the material properties from temperature and heat flux measurement data. Use of a sinusoidal forcing function not only provides a mechanism to perform the frequency domain analysis described above, but sinusoids also have the practical benefit of reducing the need for instrumentation of the backside of the component. Heat losses can be minimized by alternately injecting and extracting heat on the front surface, as long as sufficiently high frequencies are used.
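The gain-and-phase behaviour described above can be estimated directly from the sampled signals. Below is a minimal sketch assuming sinusoidal steady state, tested on synthetic data with an assumed first-order lag; `frequency_response` and all parameter values are illustrative, not the thesis's tooling.

```python
import numpy as np

def frequency_response(t, q, T, w):
    """Estimate the complex frequency response H(jw) = T(jw)/Q(jw) from a
    sampled heat-flux input q(t) and surface-temperature response T(t)
    driven at angular frequency w (sinusoidal steady state assumed)."""
    e = np.exp(-1j * w * t)
    Q = 2.0 * np.mean(q * e)    # complex Fourier coefficient of the input
    Tc = 2.0 * np.mean(T * e)   # complex Fourier coefficient of the response
    H = Tc / Q
    return abs(H), np.angle(H)  # gain and phase shift

# Synthetic check with an assumed gain of 0.8 and phase lag of 0.6 rad,
# sampled over an integer number of excitation periods.
w = 2 * np.pi * 0.05                     # 0.05 Hz forcing frequency
t = np.linspace(0.0, 200.0, 4000, endpoint=False)
q = np.sin(w * t)                        # sinusoidal heat-flux forcing
T = 0.8 * np.sin(w * t - 0.6)            # modelled temperature response
print(frequency_response(t, q, T, w))    # -> (~0.8, ~-0.6)
```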

Relevance: 80.00%

Abstract:

In this article we propose an exact and efficient simulation algorithm for the generalized von Mises circular distribution of order two. It is an acceptance-rejection algorithm with a piecewise linear envelope based on the local extrema and the inflexion points of the generalized von Mises density of order two. We show that these points can be obtained from the roots of polynomials of degrees four and eight, which can be easily found by the methods of Ferrari and Weierstrass. A comparative study with the von Neumann acceptance-rejection, ratio-of-uniforms and Markov chain Monte Carlo algorithms shows that this new method is generally the most efficient.
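The following is a generic sketch of an acceptance-rejection sampler with a piecewise linear envelope, assuming the envelope dominates the target between consecutive knots (in the paper this is arranged exactly, from the extrema and inflexion points of the GvM2 density); `ar_piecewise_linear`, the crude 1.2 safety factor and the example density are illustrative, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar_piecewise_linear(f, knots, env, n):
    """Sample from the unnormalized density f via acceptance-rejection,
    with an envelope that is linear between knots[i] and takes the
    values env[i] there. env must dominate f on every segment."""
    x = np.asarray(knots, dtype=float)
    y = np.asarray(env, dtype=float)
    h = np.diff(x)
    areas = 0.5 * (y[:-1] + y[1:]) * h      # mass under each linear piece
    p = areas / areas.sum()
    out = []
    while len(out) < n:
        i = rng.choice(len(h), p=p)         # choose a piece
        v = rng.random()
        dy = y[i + 1] - y[i]
        if abs(dy) < 1e-12:                 # flat piece: uniform draw
            s = v * h[i]
        else:                               # inverse CDF of the trapezoid
            s = h[i] * (np.sqrt(y[i]**2 + v * (y[i+1]**2 - y[i]**2)) - y[i]) / dy
        u = x[i] + s
        g = y[i] + dy * s / h[i]            # envelope value at the proposal
        if rng.random() * g <= f(u):        # accept with probability f/g
            out.append(u)
    return np.array(out)

# Example: a von Mises-like density on [-pi, pi]. The 1.2 factor is a
# crude way to ensure dominance near the mode; the paper instead builds
# an exact dominating envelope from the extrema and inflexion points.
dens = lambda x: np.exp(2.0 * np.cos(x))
knots = np.linspace(-np.pi, np.pi, 17)
print(ar_piecewise_linear(dens, knots, 1.2 * dens(knots), 5))
```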

Relevance: 80.00%

Abstract:

BACKGROUND: Even among HIV-infected patients who fully suppress plasma HIV RNA replication on antiretroviral therapy, genetic (e.g. CCL3L1 copy number), viral (e.g. tropism) and environmental (e.g. chronic exposure to microbial antigens) factors influence CD4 recovery. These factors differ markedly around the world, and therefore the expected CD4 recovery during HIV RNA suppression may differ globally.
METHODS: We evaluated HIV-infected adults from North America, West Africa, East Africa, Southern Africa and Asia starting non-nucleoside reverse transcriptase inhibitor-based regimens containing efavirenz or nevirapine, who achieved at least one HIV RNA level <500 copies/ml in the first year of therapy, and observed CD4 changes during HIV RNA suppression. We used piecewise linear regression to estimate the influence of region of residence on CD4 recovery, adjusting for socio-demographic and clinical characteristics. We observed 28 217 patients from 105 cohorts over 37 825 person-years.
RESULTS: After adjustment, patients from East Africa showed diminished CD4 recovery compared with other regions. Three years after antiretroviral therapy initiation, the mean CD4 count for a prototypical patient with a pre-therapy CD4 count of 150 cells/μl was 529 cells/μl [95% confidence interval (CI): 517–541] in North America, 494 cells/μl (95% CI: 429–559) in West Africa, 515 cells/μl (95% CI: 508–522) in Southern Africa, 503 cells/μl (95% CI: 478–528) in Asia and 437 cells/μl (95% CI: 425–449) in East Africa.
CONCLUSIONS: CD4 recovery during HIV RNA suppression is diminished in East Africa compared with other regions of the world, and the observed differences are large enough to potentially influence clinical outcomes. Epidemiological analyses on a global scale can identify macroscopic effects unobservable at the clinical, national or individual regional level.
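A toy sketch of the piecewise linear (linear spline) modelling idea used here: CD4 rises steeply early on therapy and more slowly later, which hinge terms at fixed knots can capture. Knots, coefficients and noise below are invented for illustration; the actual analysis also adjusted for socio-demographic and clinical covariates.

```python
import numpy as np

# Continuous piecewise linear model: intercept, baseline slope, and a
# slope change at each knot (months on therapy), fitted by least squares.
def design(t, knots=(3.0, 12.0)):
    return np.column_stack([np.ones_like(t), t] +
                           [np.maximum(t - k, 0.0) for k in knots])

rng = np.random.default_rng(0)
t = rng.uniform(0, 36, 1000)                 # months since ART initiation
cd4 = (150 + 40 * t - 35 * np.maximum(t - 3, 0)
       - 4 * np.maximum(t - 12, 0) + rng.normal(0, 30, t.size))

beta, *_ = np.linalg.lstsq(design(t), cd4, rcond=None)
print(beta)   # intercept, initial slope, and slope changes at the knots
```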

Relevance: 80.00%

Abstract:

This thesis was carried out within the Cajal Blue Brain project, a European initiative dedicated to the study of the brain. One of the main goals of this project is the development of new methods and technologies that simplify data analysis in neuroscience. The thesis focused on developing tools that combine information from distinct sensory channels in order to accelerate both the interaction with neuroscience images and their analysis; in concrete terms, it studies the possibility of combining visual information with haptic information. Dendritic spines are thin protrusions that cover the dendritic surface of numerous neurons in the brain and are thought to play a key role in neural circuits. The interest of the neuroscience community in these structures has kept increasing as acquisition methods improved, eventually to the point where the produced datasets enabled their analysis. Quite often, neuroscientists use light microscopy techniques to produce the datasets that allow them to analyse neuronal structures such as neurons, dendrites and dendritic spines. While offering some advantages over their electron counterparts, light microscopy techniques achieve lower resolutions. In particular, small structures such as dendritic spines may suffer from a very low level of fluorescence in the final dataset, preventing further analysis. This thesis introduces a new technique for editing volumetric datasets in order to recreate dendritic spine necks using a haptic device. To fulfil this objective, we first present an algorithm that provides haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness derives from a continuous collision detection step coupled with established proxy-based rendering methods over the extracted isosurface. The proposed marching tetrahedra approach guarantees that the extracted isosurface matches the topology of an equivalent isosurface computed using trilinear interpolation. The algorithm also improves the coherence between haptic and visual cues by computing a second proxy on the isosurface displayed on screen. Three experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and efficiency of the complete algorithm. We then introduce a four-step procedure for the complete reconstruction of dendritic spines. Based on our haptic rendering algorithm, this procedure is intended to work as an image processing stage before the automatic segmentation step that gives the final representation of the dendritic spines. The procedure is designed so that both the navigation and the volume image editing are carried out using a haptic device. We evaluated the procedure through two experiments: the first concerns the benefits of force feedback, and the second checks the suitability of a haptic device as the input device. In both cases, the results show that the procedure improves editing accuracy. We also report two concrete cases where the procedure was employed in the neuroscience field, the first concerning dendritic spines in the human cortex, the second an ongoing experiment studying dendritic spines along dendrites of mouse cortical pyramidal neurons. Finally, we present Neuro Haptic Editor, the software program built around the different algorithms developed during this thesis and used by neuroscientists to apply the procedure.
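A minimal sketch of the per-tetrahedron extraction step named above: each edge whose endpoint values straddle the isovalue contributes one linearly interpolated crossing point. Volume traversal, the topology-consistent subdivision of each data cube and the haptic proxy update are omitted, and all names are illustrative.

```python
import numpy as np

def tetra_isosurface_patch(verts, vals, iso):
    """Piecewise linear isosurface patch inside one tetrahedron:
    3 crossing points form a triangle, 4 form a quad."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for a, b in edges:
        fa, fb = vals[a] - iso, vals[b] - iso
        if fa * fb < 0.0:                    # edge crosses the isosurface
            t = fa / (fa - fb)               # linear interpolation weight
            pts.append(verts[a] + t * (verts[b] - verts[a]))
    return np.array(pts)

# Example: unit tetrahedron with the isosurface cutting off vertex 0.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(tetra_isosurface_patch(verts, np.array([1.0, 0.0, 0.0, 0.0]), 0.5))
```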

Relevance: 80.00%

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so as to run on multiple processors, and computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors, and show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras with colour and depth information, in contrast to the narrow-baseline, parallel cameras usual in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is severely detrimental and should be avoided. Second, we propose a moving-object detection system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use due to their large computational and memory cost. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective updating of the background model, updating of the positions of the foreground model's reference samples using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
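A sketch of the approximation scheme described above: sample the function at a set of knots and evaluate by linear interpolation, which is what GPU texture filtering units compute in hardware. Here `np.interp` stands in for the texture fetch; uniform knots are used, whereas the thesis derives a quasi-optimal partition.

```python
import numpy as np

# Continuous piecewise linear approximation of f via a lookup table.
# For uniform spacing h, the classical interpolation-error bound is
# |error| <= h**2 * max|f''| / 8, which motivates the error analysis
# mentioned in the abstract.
f = lambda x: np.exp(-x) * np.sin(8.0 * x)

knots = np.linspace(0.0, 1.0, 33)       # 33 samples -> h = 1/32
table = f(knots)                        # precomputed 1D "texture"

x = np.linspace(0.0, 1.0, 100_000)
approx = np.interp(x, knots, table)     # emulated filtered texture lookup
print(np.abs(approx - f(x)).max())      # observed max approximation error
```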

Relevance: 80.00%

Abstract:

The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to consider the properties of these neurons by attempting to make formal descriptions of the transformations they execute on the visual image. Most such models have their roots in linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1) and one designed to account for the responses of pattern direction selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and they instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties.
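A minimal sketch in the spirit of such models, assuming the common linear-nonlinear form: an oriented Gabor filter, half-squaring, and divisive normalization over a pool of orientations. All parameters are illustrative rather than the published model's.

```python
import numpy as np

def gabor(size=32, sf=0.15, theta=0.0, sigma=6.0):
    """Oriented linear receptive field (Gabor patch)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * sf * xr)

def simple_cell_response(image, theta, sigma_n=0.1):
    lin = float(np.sum(gabor(theta=theta) * image))     # linear stage
    rect = max(lin, 0.0) ** 2                           # half-squaring
    pool = sum(max(float(np.sum(gabor(theta=t) * image)), 0.0) ** 2
               for t in np.linspace(0.0, np.pi, 8, endpoint=False))
    return rect / (sigma_n**2 + pool)                   # normalization

# Example: response to an optimally oriented grating.
y, x = np.mgrid[-16:16, -16:16]
grating = np.cos(2.0 * np.pi * 0.15 * x)
print(simple_cell_response(grating, theta=0.0))
```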

Relevance: 80.00%

Abstract:

The need to solve large linear systems resulting from the discretization of partial differential equations that model different physical phenomena leads to the search for scalable numerical techniques, and multigrid methods are classified as scalable algorithms. An error estimator must be associated with the numerical solution of the discrete problem in order to properly assess the solution obtained by the approximation process. In this context, this thesis proposes reusing the hierarchical matrix structures of the transfer and restriction operators of algebraic multigrid methods to accelerate the solution of the linear systems associated with the contaminant transport equation in saturated porous media. Additionally, it implements residual estimates for problems involving constant or non-constant data and small- or large-advection regimes, and proposes using the residual estimates associated with the source term and the initial condition to build adaptive procedures for the problem data. The finite element code, the residual estimator and the adaptive procedures were built on the FEniCS project, using the Python programming language and developed on the Eclipse platform. The implementation of algebraic multigrid methods with reuse relies on the PyAMG library. Based on the reuse of the hierarchical structures, multigrid methods with fixed-parameter and automatic reuse are proposed, and these concepts are extended to non-stationary iterative methods such as GMRES and BiCGSTAB. The numerical results show that the residual estimator captures the behaviour of the true error of the numerical solution and provides data-adaptive algorithms whose returned mesh yields a numerical solution similar to that of a uniform mesh with more elements. Additionally, the methods with reuse are faster than the methods that do not reuse structures. The efficiency of reuse is also observed in the solution of the auxiliary problem, which is needed to obtain the residual estimates in the large-advection regime. These results cover both SA-type algebraic multigrid methods and methods preconditioned by SA algebraic multigrid, and involve contaminant transport in small- and large-advection regimes, structured and unstructured meshes, two- and three-dimensional problems, and domains with different scales.
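A minimal sketch of the hierarchy-reuse idea using PyAMG's public API: build the smoothed-aggregation hierarchy once and reuse it across right-hand sides, optionally as a preconditioner for a non-stationary Krylov method. The Poisson matrix is a stand-in for the FEniCS transport discretization.

```python
import numpy as np
import pyamg
from pyamg.gallery import poisson

# Stand-in for the transport-equation system matrix.
A = poisson((200, 200), format='csr')

# Build the (expensive) smoothed-aggregation hierarchy once.
ml = pyamg.smoothed_aggregation_solver(A)

# Reuse the same hierarchy for a sequence of right-hand sides, e.g.
# successive time steps, instead of rebuilding it at each step. The
# multigrid cycle also serves as a preconditioner for GMRES here.
rng = np.random.default_rng(0)
for step in range(5):
    b = rng.standard_normal(A.shape[0])
    x = ml.solve(b, tol=1e-8, accel='gmres')
```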

Relevance: 80.00%

Abstract:

Includes bibliographical references (p. 58-59)

Relevance: 80.00%

Abstract:

Includes index.

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 80.00%

Abstract:

In this paper we explore the practical use of neural networks for controlling complex non-linear systems. The system used to demonstrate this approach is a simulation of a gas turbine engine typical of those used to power commercial aircraft. The novelty of the work lies in the requirement for multiple controllers which are used to maintain system variables in safe operating regions as well as governing the engine thrust.
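A minimal sketch of how multiple controllers can be combined, assuming a low-select arrangement in which limit-protection loops can override the thrust governor; the proportional laws, variable names and limits are invented stand-ins for the trained neural network controllers described in the paper.

```python
# Each controller proposes a fuel-flow command; the low-select passes
# the most conservative one, keeping all variables in their safe
# operating regions while the thrust demand is tracked when possible.
def thrust_ctrl(demand, thrust):
    return 0.8 * (demand - thrust)        # tracks the thrust demand

def speed_limit_ctrl(n_max, n):
    return 0.5 * (n_max - n)              # guards shaft speed

def temp_limit_ctrl(t_max, t):
    return 0.4 * (t_max - t)              # guards turbine temperature

def fuel_command(demand, thrust, n, t, n_max=1.0, t_max=1.0):
    return min(thrust_ctrl(demand, thrust),
               speed_limit_ctrl(n_max, n),
               temp_limit_ctrl(t_max, t))

print(fuel_command(demand=0.9, thrust=0.5, n=0.95, t=0.7))
```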

Relevance: 80.00%

Abstract:

A very fast heuristic iterative method of projection on simplicial cones is presented. It consists of solving two linear systems at each step of the iteration. Extensive experiments indicate that the method furnishes the exact solution in more than 99.7 percent of the cases. The average number of steps is 5.67 (we have not found any example that required more than 13 steps), and the relative number of steps with respect to the dimension decreases dramatically. Roughly speaking, for high enough dimensions the absolute number of steps is independent of the dimension.
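The few-solves-per-step structure lends itself to an active-set style implementation. The following is a generic reconstruction under that reading, not necessarily the authors' exact update rule: one normal-equations solve plus a dual feasibility check per step stand in for the paper's two linear systems, and generators are re-selected by the sign of the current coefficients.

```python
import numpy as np

def project_simplicial_cone(A, y, max_steps=50):
    """Heuristic projection of y onto the simplicial cone {A x : x >= 0}
    (A nonsingular, generators as columns), checked against the KKT
    conditions of the projection problem."""
    n = A.shape[1]
    active = np.ones(n, dtype=bool)             # generators currently used
    p = np.zeros_like(y)
    for _ in range(max_steps):
        x = np.zeros(n)
        B = A[:, active]
        x[active] = np.linalg.solve(B.T @ B, B.T @ y)   # primal solve
        p = A @ np.maximum(x, 0.0)
        lam = A.T @ (y - p)                     # dual feasibility residual
        if np.all(x[active] >= -1e-12) and np.all(lam <= 1e-12):
            return p                            # KKT conditions met
        active = x > 0                          # re-select by sign
        if not active.any():
            return np.zeros_like(y)             # y lies in the polar cone
    return p                                    # heuristic: may not converge

A = np.array([[1.0, 1.0], [0.0, 1.0]])          # cone generators as columns
print(project_simplicial_cone(A, np.array([-1.0, 2.0])))  # -> [0.5, 0.5]
```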

Relevance: 80.00%

Abstract:

This thesis demonstrates that the use of finite elements need not be confined to space alone: they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an appendix.
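As a minimal illustration of finite elements in time, the sketch below applies continuous piecewise linear trial functions with piecewise constant test functions (the cG(1) method) to the scalar test problem u' = lam*u + f(t); integrating the weighted residual over each element reproduces a trapezoidal-rule recurrence. This is a toy first-order model, not the thesis's structural (second-order) formulations.

```python
import numpy as np

def cg1_time_fem(lam, f, u0, h, steps):
    """Temporal FE solve of u' = lam*u + f(t), u(0) = u0, with linear
    elements of size h. The element equation
        u_{n+1} - u_n = h*lam*(u_n + u_{n+1})/2 + h*(f_n + f_{n+1})/2
    follows from integrating the residual against a constant test
    function over each element [t_n, t_n + h]."""
    u = np.empty(steps + 1)
    u[0] = u0
    for n in range(steps):
        tn = n * h
        rhs = u[n] * (1 + 0.5 * h * lam) + 0.5 * h * (f(tn) + f(tn + h))
        u[n + 1] = rhs / (1 - 0.5 * h * lam)
    return u

# Damped system with sinusoidal forcing (stand-in for an applied load).
u = cg1_time_fem(lam=-2.0, f=lambda t: np.sin(t), u0=1.0, h=0.01, steps=1000)
print(u[-1])
```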

Relevance: 80.00%

Abstract:

Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remains unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this.
Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Ratings Scale (UPDRS). By analyzing this data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years.
Methods: Online self-reported data was validated against the gold standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. By using a simple piecewise linear trend estimation algorithm, the PLM data was smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled by using a gamma generalized linear model.
Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. This fluctuation exceeds the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression.
Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding about the disease progression, currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials into new treatments and for the choice of treatment decision timescales.
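A compact sketch of two steps of the pipeline described above: a continuous piecewise linear trend fitted by least squares, and a two-sample Kolmogorov-Smirnov comparison. The paper's trend-estimation algorithm chooses segments adaptively; fixed uniform knots and the synthetic stand-in data below are simplifications for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def pwl_trend(t, y, n_knots=4):
    """Continuous piecewise linear trend with uniformly placed knots."""
    knots = np.linspace(t.min(), t.max(), n_knots + 2)[1:-1]
    X = np.column_stack([np.ones_like(t), t] +
                        [np.maximum(t - k, 0.0) for k in knots])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 5, 120))             # years since diagnosis
y = 20 + 3 * t + rng.normal(0, 2.6, t.size)     # synthetic UPDRS scores
fluct = y - pwl_trend(t, y)                     # de-trended fluctuations
print(fluct.std())

# Distributional comparison of two score samples (synthetic stand-ins
# for the PLM and PD-DOC data).
print(ks_2samp(y, y + rng.normal(0, 0.5, y.size)))
```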

Relevance: 80.00%

Abstract:

Since Shannon derived the seminal formula for the capacity of the additive linear white Gaussian noise channel, it has commonly been interpreted as the ultimate limit of error-free information transmission rate. However, capacity above the corresponding linear channel limit can be achieved when noise is suppressed using nonlinear elements, that is, using the regenerative function not available in linear systems. Regeneration is a fundamental concept that extends from biology to optical communications, and all-optical regeneration of coherent signals has attracted particular attention. Surprisingly, the quantitative impact of regeneration on the Shannon capacity has remained unstudied. Here we propose a new method of designing regenerative transmission systems with capacity higher than that of the corresponding linear channel, and illustrate it by proposing the application of the Fourier transform to the efficient regeneration of multilevel multidimensional signals. The regenerative Shannon limit, the upper bound of regeneration efficiency, is derived.
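For reference, the linear-channel benchmark referred to above is the Shannon-Hartley capacity of the additive white Gaussian noise channel; the paper's regenerative limit bounds how far nonlinear regeneration can exceed it, and its exact form is not reproduced here.

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

Here B is the channel bandwidth and S/N the signal-to-noise power ratio.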