967 results for Convex piecewise-linear costs
Abstract:
Background: Body mass index (BMI) is a risk factor for endometrial cancer. We quantified the risk and investigated whether the association differed by use of hormone replacement therapy (HRT), menopausal status, and histologic type. Methods: We searched MEDLINE and EMBASE (1966 to December 2009) to identify prospective studies of BMI and incident endometrial cancer. We did random-effects meta-analyses, meta-regressions, and generalized least square regressions for trend estimations assuming linear, and piecewise linear, relationships. Results: Twenty-four studies (17,710 cases) were analyzed; 9 studies contributed to analyses by HRT, menopausal status, or histologic type, all published since 2003. In the linear model, the overall risk ratio (RR) per 5 kg/m2 increase in BMI was 1.60 (95% CI, 1.52–1.68), P < 0.0001. In the piecewise model, RRs compared with a normal BMI were 1.22 (1.19–1.24), 2.09 (1.94–2.26), 4.36 (3.75–5.10), and 9.11 (7.26–11.51) for BMIs of 27, 32, 37, and 42 kg/m2, respectively. The association was stronger in never HRT users than in ever users: RRs were 1.90 (1.57–2.31) and 1.18 (95% CI, 1.06–1.31) with P for interaction = 0.003. In the piecewise model, the RR in never users was 20.70 (8.28–51.84) at BMI 42 kg/m2, compared with never users at normal BMI. The association was not affected by menopausal status (P = 0.34) or histologic type (P = 0.26). Conclusions: HRT use modifies the BMI-endometrial cancer risk association. Impact: These findings support the hypothesis that hyperestrogenia is an important mechanism underlying the BMI-endometrial cancer association, whilst the presence of residual risk in HRT users points to the role of additional systems. Cancer Epidemiol Biomarkers Prev; 19(12); 3119–30.
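As a worked illustration of how such a piecewise linear dose-response model is evaluated, the sketch below computes relative risks at selected BMI values from per-segment slopes on the log scale; the reference BMI, knots, and slopes are hypothetical placeholders, not the paper's fitted coefficients.

```python
import numpy as np

def log_rr_piecewise(bmi, ref=23.0, knots=(25.0, 30.0, 35.0, 40.0), slopes=None):
    """Piecewise linear log relative risk, equal to zero at the reference BMI.

    slopes[i] is the assumed change in log-RR per kg/m^2 on segment i
    (below knots[0], between consecutive knots, above knots[-1]).
    """
    if slopes is None:
        slopes = (0.02, 0.05, 0.10, 0.14, 0.17)   # hypothetical values
    edges = (-np.inf,) + tuple(knots) + (np.inf,)
    total = 0.0
    for lo, hi, b in zip(edges[:-1], edges[1:], slopes):
        # exposure accumulated on this segment between the reference and bmi
        total += b * (min(max(bmi, lo), hi) - min(max(ref, lo), hi))
    return total

for x in (27, 32, 37, 42):
    print(f"BMI {x}: RR ~ {np.exp(log_rr_piecewise(x)):.2f} relative to BMI 23")
```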
Abstract:
In this article we propose an exact and efficient simulation algorithm for the generalized von Mises circular distribution of order two. It is an acceptance-rejection algorithm with a piecewise linear envelope based on the local extrema and the inflexion points of the generalized von Mises density of order two. We show that these points can be obtained from the roots of polynomials of degrees four and eight, which can be easily found by the methods of Ferrari and Weierstrass. A comparative study with the von Neumann acceptance-rejection, the ratio-of-uniforms and a Markov chain Monte Carlo algorithm shows that this new method is generally the most efficient.
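A generic version of this sampling strategy can be sketched as follows: build a piecewise linear envelope that dominates the (unnormalized) target density, draw from the envelope by inverse transform on its trapezoidal segments, and accept or reject. The envelope here is built from a simple grid heuristic rather than from the exact extrema and inflexion points used in the paper, and the GvM2 parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def gvm2_unnorm(theta, mu1=0.0, mu2=1.0, kappa1=1.5, kappa2=0.8):
    # Unnormalized generalized von Mises density of order two (arbitrary parameters).
    return np.exp(kappa1 * np.cos(theta - mu1) + kappa2 * np.cos(2.0 * (theta - mu2)))

def build_envelope(f, n_cells=32, sub=64, margin=1.01):
    # Piecewise linear envelope g >= f on [0, 2*pi): each node gets the larger of the
    # maxima of f over its two neighbouring cells (maxima estimated on a fine subgrid).
    x = np.linspace(0.0, 2.0 * np.pi, n_cells + 1)
    cell_max = np.array([f(np.linspace(x[i], x[i + 1], sub)).max() for i in range(n_cells)])
    g = np.empty(n_cells + 1)
    g[1:-1] = np.maximum(cell_max[:-1], cell_max[1:])
    g[0] = g[-1] = max(cell_max[0], cell_max[-1])
    return x, margin * g

def sample_envelope(x, g, size):
    # Inverse-transform sampling from the normalized envelope: pick a trapezoidal
    # segment proportionally to its area, then invert the linear CDF inside it.
    w = np.diff(x)
    area = 0.5 * (g[:-1] + g[1:]) * w
    seg = rng.choice(len(w), size=size, p=area / area.sum())
    u = rng.random(size)
    a, ya, yb, L = x[seg], g[seg], g[seg + 1], w[seg]
    s = (yb - ya) / L
    flat = np.abs(s) < 1e-12
    t = np.where(flat, u * area[seg] / ya,
                 (np.sqrt(ya ** 2 + 2.0 * s * u * area[seg]) - ya) / np.where(flat, 1.0, s))
    return a + t, np.interp(a + t, x, g)

def rejection_sample(f, n):
    x, g = build_envelope(f)
    out = []
    while len(out) < n:
        theta, env = sample_envelope(x, g, 2 * n)
        accept = rng.random(theta.size) * env <= f(theta)
        out.extend(theta[accept].tolist())
    return np.array(out[:n])

samples = rejection_sample(gvm2_unnorm, 10_000)
print("circular mean of samples:", np.angle(np.exp(1j * samples).mean()))
```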
Abstract:
BACKGROUND Even among HIV-infected patients who fully suppress plasma HIV RNA replication on antiretroviral therapy, genetic (e.g. CCL3L1 copy number), viral (e.g. tropism) and environmental (e.g. chronic exposure to microbial antigens) factors influence CD4 recovery. These factors differ markedly around the world and therefore the expected CD4 recovery during HIV RNA suppression may differ globally. METHODS We evaluated HIV-infected adults from North America, West Africa, East Africa, Southern Africa and Asia starting non-nucleoside reverse transcriptase inhibitor-based regimens containing efavirenz or nevirapine, who achieved at least one HIV RNA level <500 copies/ml in the first year of therapy, and observed CD4 changes during HIV RNA suppression. We used a piecewise linear regression to estimate the influence of region of residence on CD4 recovery, adjusting for socio-demographic and clinical characteristics. We observed 28 217 patients from 105 cohorts over 37 825 person-years. RESULTS After adjustment, patients from East Africa showed diminished CD4 recovery as compared with other regions. Three years after antiretroviral therapy initiation, the mean CD4 count for a prototypical patient with a pre-therapy CD4 count of 150 cells/µl was 529 cells/µl [95% CI: 517–541] in North America, 494 cells/µl (95% CI: 429–559) in West Africa, 515 cells/µl (95% CI: 508–522) in Southern Africa, 503 cells/µl (95% CI: 478–528) in Asia and 437 cells/µl (95% CI: 425–449) in East Africa. CONCLUSIONS CD4 recovery during HIV RNA suppression is diminished in East Africa as compared with other regions of the world, and the observed differences are large enough to potentially influence clinical outcomes. Epidemiological analyses on a global scale can identify macroscopic effects unobservable at the clinical, national or individual regional level.
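The following sketch shows the kind of piecewise linear (segmented) regression used to describe CD4 recovery over time, fitted here by ordinary least squares on synthetic data with a single assumed change point; the knot location, slopes and noise level are illustrative and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic CD4 trajectories: fast rise in the first months of suppression,
# slower gain afterwards (illustrative numbers only, not the study's data).
months = rng.uniform(0, 36, 400)
knot = 6.0                                    # assumed change point, in months
true_cd4 = 150 + 25 * np.minimum(months, knot) + 4 * np.maximum(months - knot, 0)
cd4 = true_cd4 + rng.normal(0, 40, months.size)

# Piecewise linear regression: intercept, slope before the knot, and the
# change in slope after the knot, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(months), months, np.maximum(months - knot, 0.0)])
beta, *_ = np.linalg.lstsq(X, cd4, rcond=None)
b0, b1, b2 = beta
print(f"baseline CD4 ~ {b0:.0f}, early slope {b1:.1f}/month, late slope {b1 + b2:.1f}/month")
```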
Abstract:
The extraordinary growth of new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony, and future cloud computing and storage have provided great benefits in all areas of society. Besides these, there are new challenges for the protection of information, such as the loss of confidentiality and integrity of electronic documents. Cryptography plays a key role by providing the necessary tools to ensure the safety of these new media. It is imperative to intensify the research in this area to meet the growing demand for new secure cryptographic techniques. The theory of chaotic nonlinear dynamical systems and the theory of cryptography give rise to chaotic cryptography, which is the field of study of this thesis. The link between cryptography and chaotic systems is still the subject of intense study. The combination of apparently stochastic behavior, the properties of sensitivity to initial conditions and parameters, ergodicity, mixing, and the fact that periodic points are dense suggests that chaotic orbits resemble random sequences. This fact, and the ability to synchronize multiple chaotic systems, initially described by Pecora and Carroll, has generated an avalanche of research papers that relate cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first paradigm, chaotic cryptosystems are designed using continuous time, mainly based on chaotic synchronization techniques; they are implemented with analog circuits or by computer simulation. In the second paradigm, chaotic cryptosystems are constructed using discrete time and generally do not depend on chaos synchronization techniques. The contributions in this thesis involve three aspects of chaotic cryptography. The first one is a theoretical analysis of the geometric properties of some of the chaotic attractors most commonly employed in the design of chaotic cryptosystems. The second one is the cryptanalysis of continuous chaotic cryptosystems, and the third one consists of three new designs of cryptographically secure chaotic pseudorandom generators. The main accomplishments contained in this thesis are: Development of a method for determining the parameters of some double scroll chaotic systems, including the Lorenz system and Chua’s circuit. First, some geometrical characteristics of the chaotic system have been used to reduce the search space of parameters. Next, a scheme based on the synchronization of chaotic systems was built. The geometric properties have been employed as a matching criterion, to determine the values of the parameters with the desired accuracy. The method is not affected by a moderate amount of noise in the waveform. The proposed method has been applied to find security flaws in continuous chaotic encryption systems. Based on previous results, the chaotic ciphers proposed by Wang and Bu and those proposed by Xu and Li are cryptanalyzed. We propose some solutions to improve the cryptosystems, although very limited because these systems are not suitable for use in cryptography. Development of a method for determining the parameters of the Lorenz system, when it is used in the design of a two-channel cryptosystem. The method uses the geometric properties of the Lorenz system. The search space of parameters has been reduced. Next, the parameters have been accurately determined from the ciphertext. The method has been applied to the cryptanalysis of an encryption scheme proposed by Jiang. In 2005, Gunay et al.
proposed a chaotic encryption system based on a cellular neural network implementation of Chua’s circuit. This scheme has been cryptanalyzed. Some gaps in the security design have been identified. Based on theoretical results on digital chaotic systems and the cryptanalysis of several recently proposed chaotic ciphers, a family of pseudorandom generators has been designed using finite precision. The design is based on the coupling of several piecewise linear chaotic maps. Based on the above results, a new family of chaotic pseudorandom generators named Trident has been designed. These generators have been specially designed to meet the needs of real-time encryption for mobile technology. According to the above results, this thesis proposes another family of pseudorandom generators called Trifork. These generators are based on a combination of perturbed Lagged Fibonacci generators. This family of generators is cryptographically secure and suitable for use in real-time encryption. Detailed analysis shows that the proposed pseudorandom generator can provide fast encryption speed and a high level of security at the same time. The extraordinary rise of the new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony and the future cloud computing and storage have provided great benefits in all areas of society. Alongside these, new challenges arise for the protection of information, such as identity theft and the loss of confidentiality and integrity of electronic documents. Cryptography plays a fundamental role by providing the tools needed to guarantee the security of these new media, but it is imperative to intensify research in this field in order to meet the growing demand for new secure cryptographic techniques. The theory of nonlinear dynamical systems together with cryptography gives rise to chaotic cryptography, which is the field of study of this thesis. The link between cryptography and chaotic systems remains the subject of intense study. The combination of apparently stochastic behaviour, sensitivity to initial conditions and parameters, ergodicity, mixing, and the density of periodic points makes chaotic orbits resemble random sequences, which suggests their potential use for masking messages. This fact, together with the possibility of synchronizing several chaotic systems, first described in the work of Pecora and Carroll, has generated an avalanche of research proposing many ideas on how to build secure communication systems, thus relating cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first, chaotic cryptosystems are designed using analog circuits, mainly based on chaotic synchronization techniques; in the second, chaotic cryptosystems are built on discrete circuits or computers and generally do not depend on chaos synchronization techniques. Our contribution in this thesis involves three aspects of chaotic encryption.
First, a theoretical analysis is carried out of the geometric properties of some of the chaotic systems most widely used in the design of continuous chaotic cryptosystems; second, the cryptanalysis of continuous chaotic ciphers is performed, based on the previous analysis; and, finally, three new designs of cryptographically secure and fast pseudorandom sequence generators are proposed. The first part of this dissertation presents a critical analysis of the security of chaotic cryptosystems, reaching the conclusion that the great majority of continuous chaotic encryption algorithms, whether physically implemented or numerically programmed, have serious drawbacks for protecting the confidentiality of information, since they are insecure and inefficient. Likewise, a large part of the proposed discrete chaotic cryptosystems are considered insecure, and others have not been attacked, so further cryptanalysis work is considered necessary. This part concludes by pointing out the main weaknesses found in the analysed cryptosystems and some recommendations for their improvement. In the second part, a cryptanalysis method is designed that allows the identification of the parameters, which in general form part of the key, of encryption algorithms based on the Lorenz system and similar systems that use drive-response synchronization schemes. This method is based on some geometric characteristics of the Lorenz attractor. The designed method has been used to efficiently cryptanalyse three encryption algorithms. Finally, the cryptanalysis of two other recently proposed encryption schemes is carried out. The third part of the thesis covers the design of cryptographically secure pseudorandom sequence generators based on chaotic maps, carrying out the statistical tests that corroborate their randomness properties. These generators can be used in the development of stream ciphers and to cover the needs of real-time encryption. An important issue in the design of discrete chaotic encryption systems is the dynamical degradation due to finite precision; however, most designers of discrete chaotic encryption systems have not seriously considered this aspect. This thesis emphasizes the importance of this issue and contributes to its clarification with some initial considerations. Since the theoretical questions about the dynamical degradation of digital chaotic systems have not been fully resolved, in this work we use some practical solutions to avoid this theoretical difficulty. Among the possible techniques, several solutions are proposed and evaluated, such as bit rotation and bit shift operations, which, combined with dynamic parameter variation and cross perturbation, provide an excellent remedy for the problem of dynamical degradation. Besides the security problems related to dynamical degradation, many cryptosystems are broken because of their careless design, not because of essential defects of digital chaotic systems. This fact has been taken into account in this thesis, and the design of cryptographically secure chaotic pseudorandom generators has been achieved.
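To illustrate the general construction of a pseudorandom generator from coupled piecewise linear chaotic maps in finite precision, here is a minimal toy sketch built from two cross-perturbed skew tent maps. It is not the Trident or Trifork design from the thesis and, unlike them, it makes no claim of cryptographic security.

```python
PREC = 1 << 32                              # 32-bit fixed-point state

def skew_tent(x, p):
    """Integer skew tent map on [0, PREC-1] with breakpoint p (0 < p < PREC-1)."""
    if x < p:
        return x * (PREC - 1) // p
    return (PREC - 1 - x) * (PREC - 1) // (PREC - 1 - p)

def keystream(seed1, seed2, p1, p2, nbytes):
    x, y = seed1 % PREC, seed2 % PREC
    out = bytearray()
    for _ in range(nbytes):
        x, y = skew_tent(x, p1), skew_tent(y, p2)
        # cross perturbation: inject a few state bits of each map into the other
        x ^= (y >> 17) & 0x7FFF
        y ^= (x >> 13) & 0x7FFF
        out.append((x ^ (y >> 8)) & 0xFF)
    return bytes(out)

print(keystream(0x1234ABCD, 0xCAFEBABE, 0x6789F000, 0x9ABC1000, 16).hex())
```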
Abstract:
The analysis of complex nonlinear systems is often carried out using simpler piecewise linear representations of them. A principled and practical technique is proposed to linearize and evaluate arbitrary continuous nonlinear functions using polygonal (continuous piecewise linear) models under the L1 norm. A thorough error analysis is developed to guide an optimal design of two kinds of polygonal approximations in the asymptotic case of a large budget of evaluation subintervals N. The method allows the user to obtain the level of linearization (N) for a target approximation error and vice versa. It is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), allowing real-time performance of computationally demanding applications. The quality and efficiency of the technique have been measured in detail on two nonlinear functions that are widely used in many areas of scientific computing and are expensive to evaluate.
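A minimal sketch of the idea (not the paper's construction) is to compare a uniform polygonal interpolant with one whose breakpoints are placed by the standard equidistribution heuristic for the L1 norm, in which the knot density follows |f''|^(1/3); the test function and budget N below are arbitrary.

```python
import numpy as np

def polygonal(f, knots):
    fk = f(knots)
    return lambda x: np.interp(x, knots, fk)   # continuous piecewise linear model

def l1_error(f, g, a, b, m=200_000):
    x = np.linspace(a, b, m)
    return np.mean(np.abs(f(x) - g(x))) * (b - a)

f = lambda x: np.exp(-x) * np.sin(4.0 * x)     # example nonlinear function
a, b, N = 0.0, 3.0, 32                         # interval and budget of subintervals

uni = np.linspace(a, b, N + 1)                 # uniform breakpoints

# Curvature-adapted breakpoints: equidistribute |f''|^(1/3) over [a, b].
xx = np.linspace(a, b, 20_000)
d2 = np.gradient(np.gradient(f(xx), xx), xx)
w = np.cumsum(np.abs(d2) ** (1.0 / 3.0) + 1e-9)
w = (w - w[0]) / (w[-1] - w[0])
ada = np.interp(np.linspace(0.0, 1.0, N + 1), w, xx)

print("uniform  L1 error:", l1_error(f, polygonal(f, uni), a, b))
print("adaptive L1 error:", l1_error(f, polygonal(f, ada), a, b))
```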
Abstract:
This thesis took place within the Cajal Blue Brain project, a European initiative dedicated to the study of the brain. One of the main goals of this project is the development of new methods and technologies simplifying data analysis in neuroscience. This thesis focused on the development of tools combining information originating from distinct sensory channels with the aim of accelerating both the interaction with neuroscience images and their analysis. In concrete terms, the objective is to study the possibility of combining visual information with haptic information. Dendritic spines are thin protrusions that cover the dendritic surface of numerous neurons in the brain and whose function seems to play a key role in neural circuits. The interest of the neuroscience community toward those structures kept increasing as acquisition methods improved, eventually to the point that the produced datasets enabled their analysis. Quite often, neuroscientists use light microscopy techniques to produce the dataset that will allow them to analyse neuronal structures such as neurons, dendrites and dendritic spines. While offering some advantages compared to their electronic counterpart, light microscopy techniques achieve lower resolutions. Particularly, small structures such as dendritic spines might suffer from a very low level of fluorescence in the final dataset, preventing further analysis. This thesis introduces a new technique enabling the editing of volumetric datasets in order to recreate dendritic spine necks using a haptic device. In order to fulfil this objective, we first presented an algorithm to provide haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure a robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness is derived using a Continuous Collision Detection step coupled with acknowledged proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface will match the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm improves the coherence between haptic and visual cues by computing a second proxy on the isosurface displayed on screen.
Three experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and the efficiency of the complete algorithm. We then introduce our four-step procedure for the complete reconstruction of dendritic spines. Based on our haptic rendering algorithm, this procedure is intended to work as an image processing stage before the automatic segmentation step giving the final representation of the dendritic spines. The procedure is designed to allow both the navigation and the volume image editing to be carried out using a haptic device. We evaluated our procedure through two experiments. The first experiment concerns the benefits of the force feedback and the second checks the suitability of the use of a haptic device as input. In both cases, the results show that the procedure improves the editing accuracy. We also report two concrete cases where our procedure was employed in the neuroscience field, the first one concerning dendritic spines in the human cortex, the second one referring to an ongoing experiment studying dendritic spines along dendrites of mouse cortical pyramidal neurons. Finally, we present the software program, Neuro Haptic Editor, which was built alongside the different algorithms implemented during this thesis and is used by neuroscientists to apply our procedure.
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications with a high computational complexity that precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goals of running at interactive frame rates. In addition, we propose a technique for fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image–based rendering techniques to the unusual configuration of two convergent, wide baseline cameras, in contrast to the narrow-baseline, parallel-camera configuration usually used in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have not been so popular due to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging their texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function to minimize approximation error.
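The function-approximation idea mentioned at the end can be sketched independently of the GPU: store uniform samples of the function in a table and let each evaluation become one linear interpolation between two neighbouring samples, which is the operation a texture unit with linear filtering performs in hardware. The table size and test function below are arbitrary.

```python
import numpy as np

def make_lut(f, a, b, n):
    """Uniformly sample f so that, like a 1D texture with linear filtering,
    a lookup reduces to one linear blend between two stored samples."""
    grid = np.linspace(a, b, n)
    return grid, f(grid)

def lut_eval(x, grid, vals):
    # CPU emulation of a linearly filtered texture fetch.
    return np.interp(x, grid, vals)

f = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)   # example: Gaussian pdf
grid, vals = make_lut(f, -4.0, 4.0, 257)
x = np.random.default_rng(2).uniform(-4, 4, 1_000_000)
err = np.abs(lut_eval(x, grid, vals) - f(x))
print("max abs error:", err.max(), " mean abs error:", err.mean())
```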
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remains unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Rating Scale (UPDRS). By analyzing this data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data was validated against the gold standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. By using a simple piecewise linear trend estimation algorithm, the PLM data was smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled by using a gamma generalized linear model. Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. This fluctuation exceeds the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding about the disease progression, currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials into new treatments and for the choice of treatment decision timescales.
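As a sketch of the trend-plus-fluctuation decomposition described above, the following fits a generic piecewise linear trend with evenly spaced knots to synthetic scores and reports the residual fluctuation magnitude; it is not the PLM data or the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic self-reported symptom scores: slow drift plus random fluctuations
# (illustrative only; the PLM data are not reproduced here).
t = np.arange(0, 6 * 52, 2, dtype=float)                 # biweekly reports over 6 years
trend_true = 20 + 0.05 * t + 3 * np.sin(t / 120.0)
score = trend_true + rng.normal(0, 2.5, t.size)

# Piecewise linear trend with evenly spaced knots, fitted by least squares on a
# truncated-line ("hinge") basis; subtracting it leaves the short-term fluctuations.
knots = np.linspace(t[0], t[-1], 8)[1:-1]
X = np.column_stack([np.ones_like(t), t] + [np.maximum(t - k, 0.0) for k in knots])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
trend = X @ beta
fluct = score - trend

print("mean |fluctuation|:", np.abs(fluct).mean(), "points")
```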
Abstract:
* This work has been supported by the Office of Naval Research Contract Nr. N0014-91-J1343, the Army Research Office Contract Nr. DAAD 19-02-1-0028, the National Science Foundation grants DMS-0221642 and DMS-0200665, the Deutsche Forschungsgemeinschaft grant SFB 401, the IHP Network “Breaking Complexity” funded by the European Commission and the Alexander von Humboldt Foundation.
Abstract:
In the present paper, the problem of optimal control of systems with constraints imposed on the control is considered. The optimality conditions are given in the form of Pontryagin’s maximum principle. The obtained piecewise linear function is approximated by using a feedforward neural network. A numerical example is given.
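Since a feedforward network with ReLU units is itself a continuous piecewise linear map, the approximation step can be illustrated constructively. The sketch below represents a saturated control law of the form u = clip(2s, -1, 1) exactly with two hidden units; the paper instead fits such a network numerically, and the specific law used here is only an assumed example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Saturated feedback law of the kind produced by a maximum-principle argument
# with |u| <= 1: piecewise linear in the switching variable s (assumed example).
def u_opt(s):
    return np.clip(2.0 * s, -1.0, 1.0)

# One-hidden-layer feedforward network with two ReLU units representing the same
# piecewise linear law exactly:  u = -1 + relu(2s + 1) - relu(2s - 1).
W1 = np.array([[2.0], [2.0]])        # hidden weights
b1 = np.array([1.0, -1.0])           # hidden biases
W2 = np.array([1.0, -1.0])           # output weights
b2 = -1.0

def u_net(s):
    h = relu(np.atleast_2d(s).T @ W1.T + b1)
    return h @ W2 + b2

s = np.linspace(-2, 2, 9)
print(np.max(np.abs(u_net(s) - u_opt(s))))   # 0: the network matches exactly
```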
Abstract:
We present a general model to find the best allocation of a limited amount of supplements (extra minutes added to a timetable in order to reduce delays) on a set of interfering railway lines. By the best allocation, we mean the solution under which the weighted sum of expected delays is minimal. Our aim is to finely adjust an already existing and well-functioning timetable. We model this inherently stochastic optimization problem by using two-stage recourse models from stochastic programming, building upon earlier research from the literature. We present an improved formulation, allowing for an efficient solution using a standard algorithm for recourse models. We show that our model may be solved using any of the following theoretical frameworks: linear programming, stochastic programming and convex non-linear programming, and present a comparison of these approaches based on a real-life case study. Finally, we introduce stochastic dependency into the model, and present a statistical technique to estimate the model parameters from empirical data.
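A toy deterministic-equivalent version of such a two-stage recourse model can be written down directly as a linear program: first-stage supplement minutes per segment under a budget, and second-stage scenario delays linked by the usual linearized delay-propagation constraints. The instance below (segment count, budget, exponential primary delays) is invented for illustration and is far simpler than the interfering-lines model of the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

J, budget, n_scen = 6, 8.0, 50                   # segments, supplement minutes, scenarios
delta = rng.exponential(1.5, size=(n_scen, J))   # primary delays per segment (toy data)
prob = np.full(n_scen, 1.0 / n_scen)

# Decision vector: [s_1..s_J | d_{w=1,j=1..J} | d_{w=2,...} | ...]
n = J + J * n_scen
c = np.zeros(n)
for w in range(n_scen):
    c[J + w * J: J + (w + 1) * J] = prob[w]      # minimize expected total delay

rows, rhs = [], []
r = np.zeros(n)
r[:J] = 1.0                                      # sum of supplements <= budget
rows.append(r)
rhs.append(budget)
for w in range(n_scen):
    for j in range(J):                           # d_j >= d_{j-1} + delta_j - s_j  (d_0 = 0)
        r = np.zeros(n)
        r[J + w * J + j] = -1.0
        if j > 0:
            r[J + w * J + j - 1] = 1.0
        r[j] = -1.0
        rows.append(r)
        rhs.append(-delta[w, j])

res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0, None)] * n, method="highs")
print("supplement minutes per segment:", np.round(res.x[:J], 2))
print("expected total delay:", round(res.fun, 2))
```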
Abstract:
The great amount of data generated as the result of automation and process supervision in industry leads to two problems: a large demand for disk storage and the difficulty of streaming this data over a telecommunications link. Lossy data compression algorithms appeared in the 1990s with the goal of solving these problems and, as a consequence, industries started to use them in industrial supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, which becomes impractical in systems that monitor thousands of variables. In that context, this paper proposes the Adaptive Swinging Door Trending algorithm, an adaptation of Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing signal trends in real time. A comparative performance analysis of lossy data compression algorithms applied to process-variable time series and dynamometer cards is also presented. The algorithms used for comparison were the piecewise linear methods and the transforms.
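For reference, here is a minimal sketch of the classic (non-adaptive) Swinging Door Trending algorithm that the proposed method builds on; the adaptive variant described above would additionally retune the deviation threshold from the signal's behaviour in real time, which is not shown here.

```python
import math

def sdt_compress(times, values, dev):
    # Basic Swinging Door Trending: keep only the samples needed so that the
    # piecewise linear reconstruction stays within +/- dev of the discarded samples.
    kept = [(times[0], values[0])]
    t0, v0 = times[0], values[0]
    up, low = float("inf"), float("-inf")        # current opening of the "door"
    prev = (times[0], values[0])
    for t, v in zip(times[1:], values[1:]):
        up = min(up, (v + dev - v0) / (t - t0))
        low = max(low, (v - dev - v0) / (t - t0))
        if low > up:                             # door closed: archive the previous point
            t0, v0 = prev
            kept.append(prev)
            up = (v + dev - v0) / (t - t0)
            low = (v - dev - v0) / (t - t0)
        prev = (t, v)
    kept.append(prev)                            # always archive the final sample
    return kept

ts = list(range(200))
vs = [math.sin(t / 15.0) + 0.01 * t for t in ts]
kept = sdt_compress(ts, vs, dev=0.05)
print(f"kept {len(kept)} of {len(ts)} samples")
```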
Abstract:
The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, beyond linear costs associated with each edge, quadratic costs associated with each pair of edges must be considered. The quadratic costs are due to interaction costs between the edges. When interactions occur between adjacent edges only, the problem is named Adjacent Only Quadratic Minimum Spanning Tree (AQMST). Both QMST and AQMST are NP-hard and model a number of real-world applications involving infrastructure network design. Linear and quadratic costs are summed in the mono-objective versions of the problems. However, real-world applications often deal with conflicting objectives. In those cases, considering linear and quadratic costs separately is more appropriate, and multi-objective optimization provides a more realistic modelling. Exact and heuristic algorithms are investigated in this work for the Bi-objective Adjacent Only Quadratic Spanning Tree Problem. The following techniques are proposed: backtracking, branch-and-bound, Pareto Local Search, Greedy Randomized Adaptive Search Procedure, Simulated Annealing, NSGA-II, Transgenetic Algorithm, Particle Swarm Optimization and a hybridization of the Transgenetic Algorithm with the MOEA-D technique. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances proposed in the literature.
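A small sketch of the bi-objective evaluation underlying the problem: given a spanning tree, the first objective sums the linear edge costs and the second sums the quadratic interaction costs over pairs of tree edges that share a vertex (the adjacent-only case). The tiny instance below is invented, not one of the benchmark instances.

```python
from itertools import combinations

def objectives(tree_edges, lin_cost, quad_cost):
    # (f1, f2) = (total linear cost, total adjacent-pair interaction cost).
    f1 = sum(lin_cost[e] for e in tree_edges)
    f2 = 0.0
    for e, g in combinations(tree_edges, 2):
        if set(e) & set(g):                      # adjacent edges only (AQMST)
            f2 += quad_cost.get((e, g), quad_cost.get((g, e), 0.0))
    return f1, f2

def dominates(a, b):
    """Pareto dominance for minimization of both objectives."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

lin_cost = {(0, 1): 3.0, (1, 2): 2.0, (0, 2): 4.0, (2, 3): 1.0, (1, 3): 5.0}
quad_cost = {((0, 1), (1, 2)): 6.0, ((1, 2), (2, 3)): 1.5, ((0, 2), (2, 3)): 2.0,
             ((0, 1), (1, 3)): 0.5}

tree_a = [(0, 1), (1, 2), (2, 3)]
tree_b = [(0, 1), (0, 2), (2, 3)]
fa, fb = objectives(tree_a, lin_cost, quad_cost), objectives(tree_b, lin_cost, quad_cost)
print(tree_a, "->", fa)
print(tree_b, "->", fb)
print("a dominates b:", dominates(fa, fb), "| b dominates a:", dominates(fb, fa))
```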