917 results for Computational lambda-calculus
Abstract:
The early onset of mental disorders can lead to serious cognitive damage, and timely interventions are needed to prevent it. Among patients of low socioeconomic status, as is common in Latin America, it can be hard to identify children at risk. Here, we briefly introduce the problem by reviewing the scarce epidemiological data from Latin America regarding the onset of mental disorders, and by discussing the difficulties associated with early diagnosis. We then present computational psychiatry, a new field to which we and other Latin American researchers have contributed methods particularly relevant for the quantitative investigation of psychopathologies manifested during childhood. We focus on new technologies that help to identify mental disease and provide prodromal evaluation, so as to promote early differential diagnosis and intervention. To conclude, we discuss the application of these methods to clinical and educational practice. A comprehensive and quantitative characterization of verbal behavior in children, from hospitals and laboratories to homes and schools, may lead to more effective pedagogical and medical intervention.
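The abstract does not name a specific analysis method; as one hedged illustration of what a quantitative characterization of verbal behavior can look like, the sketch below builds a word-adjacency speech graph from an invented transcript with networkx and reports simple connectivity measures. The transcript and the choice of measures are assumptions for illustration only.

```python
# Hedged illustration (not a method stated in the abstract): a word-adjacency
# speech graph, where each word is a node and a directed edge links consecutive
# words; simple connectivity measures then serve as quantitative markers.
import networkx as nx

transcript = "the dog ran and the dog jumped and then the cat ran".split()  # invented

G = nx.DiGraph()
for w1, w2 in zip(transcript, transcript[1:]):
    G.add_edge(w1, w2)              # edge between consecutive words

n_nodes, n_edges = G.number_of_nodes(), G.number_of_edges()
largest_scc = max(nx.strongly_connected_components(G), key=len)
print(n_nodes, n_edges, len(largest_scc))
```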
Abstract:
This thesis investigated the risk of accidental release of hydrocarbons during transportation and storage. Transportation of hydrocarbons from an offshore platform to processing units through subsea pipelines involves a risk of release due to pipeline leakage resulting from corrosion, plastic deformation caused by seabed shakedown, or damage from contact with a drifting iceberg. The environmental impacts of hydrocarbon dispersion can be severe, and the overall safety and economic consequences of pipeline leakage in a subsea environment are immense. A large leak can be detected with conventional technology such as radar, intelligent pigging, or chemical tracers, but in a remote location such as a subsea or arctic setting, a small chronic leak may go undetected for a long period of time. In the case of storage, an accidental release of hydrocarbon from a storage tank could lead to a pool fire, which could further escalate into domino effects; such a chain of accidents may have extremely severe consequences. Analysis of past accident scenarios shows that more than half of industrial domino accidents involved fire as the primary event, with other factors, for instance wind speed and direction, fuel type, and engulfment of the compound, also playing a role. In this thesis, a computational fluid dynamics (CFD) approach is taken to model the subsea pipeline leak and the pool fire from a storage tank. The commercial software package ANSYS FLUENT Workbench 15 is used to model the subsea pipeline leakage. The CFD simulation results for four different fluids show that the static pressure and pressure gradient along the axial length of the pipeline have a sharp signature variation near the leak orifice at steady state. A transient simulation is performed to obtain the acoustic signature of the pipe near the leak orifice. The power spectral density (PSD) of the acoustic signal is strong near the leak orifice and dissipates as the distance and orientation from the leak orifice increase; high-pressure fluid flow generates more noise than low-pressure fluid flow. To model the pool fire from the storage tank, ANSYS CFX Workbench 14 is used. The CFD results show that wind speed has a significant effect on the behavior of the pool fire and its domino effects. Radiation contours are also obtained from CFD post-processing, which can be applied to risk analysis. The outcome of this study will be helpful for a better understanding of the domino effects of pool fires in the complex geometrical settings of process industries. Approaches to reducing and preventing these risks are discussed based on the results of the numerical simulations.
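The thesis performs its leak modeling in ANSYS FLUENT; purely as a hedged illustration of the signal analysis mentioned above, the sketch below estimates the power spectral density of a synthetic pressure trace with SciPy's Welch estimator. The sampling rate, tone frequency, and noise level are invented placeholders, not values from the simulations.

```python
# Minimal sketch: estimating the power spectral density (PSD) of a simulated
# acoustic pressure signal, as one might do with pressure traces exported from
# a transient CFD run. All signal parameters here are illustrative only.
import numpy as np
from scipy.signal import welch

fs = 10_000.0                      # sampling rate of the exported pressure trace [Hz]
t = np.arange(0, 2.0, 1.0 / fs)    # 2 s of data

# Stand-in for a pressure fluctuation near the leak orifice: broadband noise
# plus a tonal component (purely synthetic).
rng = np.random.default_rng(0)
pressure = 0.5 * np.sin(2 * np.pi * 750 * t) + rng.normal(scale=0.2, size=t.size)

# Welch's method averages periodograms over overlapping segments, trading
# frequency resolution for a lower-variance PSD estimate.
freqs, psd = welch(pressure, fs=fs, nperseg=4096)

peak = freqs[np.argmax(psd)]
print(f"dominant frequency ~ {peak:.0f} Hz")
```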
Abstract:
We theoretically study the resonance fluorescence spectrum of a three-level quantum emitter coupled to a spherical metallic nanoparticle. We consider the case in which the quantum emitter is driven by a single laser field along one of the optical transitions. We show that the form of the spectrum depends on the orientation of the dipole moments of the optical transitions relative to the metal nanoparticle. In addition, we demonstrate that the location and width of the peaks in the spectrum are strongly modified by the exciton-plasmon coupling and the laser detuning, allowing a controlled, strongly subnatural spectral line to be achieved. A strong antibunching of the fluorescent photons along the undriven transition is also obtained. Our results may be used to create a tunable source of photons, which could serve in a probabilistic entanglement scheme in the field of quantum information processing.
Abstract:
This thesis deals with approximations of compact metric spaces. The approximation and reconstruction of topological spaces by simpler ones is an old topic in geometric topology. The idea is to construct a very simple space that is as similar as possible to the original space. Since it is very difficult (or even meaningless) to try to obtain a homeomorphic copy, the goal is to find a space that preserves some topological properties (algebraic or not), such as compactness, connectedness, separation axioms, homotopy type, homotopy and homology groups, etc. The first candidates for simple spaces with properties of the original space are polyhedra. See the article [45] for the main results. At the germ of this idea we highlight the work of Alexandroff in the 1920s, relating the dimension of a compact metric space to the dimension of certain polyhedra through maps with controlled images or preimages (in terms of distances). In a more modern context, the idea of approximation can be realized by constructing a simplicial complex based on the original space, such as the Vietoris-Rips complex or the Čech complex, and comparing its realization with the space. In this direction we have the classical nerve lemma [12, 21], which states that for a "sufficiently good" open cover of the space (that is, a cover whose members and intersections are contractible or empty), the nerve of the cover has the homotopy type of the original space. The problem is to find such covers (if they exist). For Riemannian manifolds there are some results in this direction using Vietoris-Rips complexes. Hausmann proved [35] that the realization of the Vietoris-Rips complex of the manifold, for sufficiently small values of the parameter, has the homotopy type of that manifold. In [40], Latschev proved a conjecture stated by Hausmann: the homotopy type of the manifold can be recovered using a (sufficiently dense) finite set of points for the Vietoris-Rips complex. The results of Petersen [58], comparing the Gromov-Hausdorff distance of compact metric spaces with their homotopy type, are also interesting. Here, polyhedra appear in the proofs, not in the results...
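As a minimal sketch of the Vietoris-Rips construction mentioned above (not code from the thesis), the following builds the complex of a finite point sample at a chosen scale: a simplex is included whenever all pairwise distances among its vertices are at most the scale parameter. The point set (a sampled circle) and the scale value are illustrative assumptions.

```python
# Minimal sketch: the Vietoris-Rips complex of a finite point sample at scale eps.
import itertools
import numpy as np

def vietoris_rips(points, eps, max_dim=2):
    """Return the simplices (as vertex-index tuples) of VR(points, eps)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]
    for dim in range(1, max_dim + 1):
        for combo in itertools.combinations(range(n), dim + 1):
            pairs = itertools.combinations(combo, 2)
            if all(dist[i, j] <= eps for i, j in pairs):
                simplices.append(combo)
    return simplices

# Sample points on a circle; for a suitable eps the complex is a 12-cycle and
# therefore has the homotopy type of the circle.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
complex_ = vietoris_rips(circle, eps=0.6)
print(len([s for s in complex_ if len(s) == 2]), "edges")
```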
Abstract:
The research reported in this article is based on the Ph.D. project of Dr. RK, which was funded by the Scottish Informatics and Computer Science Alliance (SICSA). KvD acknowledges support from the EPSRC under the RefNet grant (EP/J019615/1).
Abstract:
Acknowledgments: We thank Sally Rowland for helpful comments on the manuscript. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Abstract:
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.
This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications.
Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
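As a hedged, schematic illustration of this kind of coded-snapshot acquisition (not the actual CACTI hardware or reconstruction code), the sketch below modulates each frame of a synthetic video with a translated binary mask and integrates the result into a single measurement. The mask pattern, shift schedule, and dimensions are assumptions for illustration.

```python
# Schematic coded-snapshot forward model: a shifting binary mask modulates each
# temporal frame, and the modulated frames sum into a single 2-D measurement.
import numpy as np

rng = np.random.default_rng(1)
H, W, T = 64, 64, 8                                # spatial size and frame count (illustrative)
video = rng.random((H, W, T))                      # stand-in for the (x, y, t) video volume
mask = (rng.random((H, W)) > 0.5).astype(float)    # coded aperture pattern

# Translate the mask by one pixel per frame to emulate the physical motion of
# the coded aperture during a single exposure.
coded = np.stack([np.roll(mask, shift=t, axis=0) for t in range(T)], axis=-1)

# Single snapshot: the detector integrates all coded frames during the exposure.
snapshot = (coded * video).sum(axis=-1)
print(snapshot.shape)                              # (64, 64): T frames compressed into one image
```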
Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions.
Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
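As a hedged sketch of the focal-sweep idea (not the implementation used in this work), the following integrates a family of defocus blur kernels along two invented sweep trajectories to show how the trajectory shapes the effective point spread function. The Gaussian defocus model and trajectory values are illustrative assumptions.

```python
# Sweeping the image plane through focus during one exposure integrates a family
# of defocus blur kernels; the sweep trajectory shapes the effective PSF.
import numpy as np

def defocus_psf(sigma, size=33):
    """Isotropic Gaussian stand-in for the defocus blur at one plane position."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Two illustrative sweep trajectories: a linear sweep through focus, and a
# trajectory that dwells near best focus before sweeping out.
linear_sweep = np.linspace(0.5, 4.0, 16)
dwelling_sweep = np.concatenate([np.full(12, 0.5), np.linspace(0.5, 4.0, 4)])

effective_linear = sum(defocus_psf(s) for s in linear_sweep) / len(linear_sweep)
effective_dwell = sum(defocus_psf(s) for s in dwelling_sweep) / len(dwelling_sweep)
print(effective_linear.max(), effective_dwell.max())   # dwelling yields a sharper core
```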
Abstract:
In the last two decades, the field of homogeneous gold catalysis has been extremely active, growing at a rapid pace. Another rapidly-growing field—that of computational chemistry—has often been applied to the investigation of various gold-catalyzed reaction mechanisms. Unfortunately, a number of recent mechanistic studies have utilized computational methods that have been shown to be inappropriate and inaccurate in their description of gold chemistry. This work presents an overview of available computational methods with a focus on the approximations and limitations inherent in each, and offers a review of experimentally-characterized gold(I) complexes and proposed mechanisms as compared with their computationally-modeled counterparts. No attempt is made to identify a “recommended” computational method for investigations of gold catalysis; rather, discrepancies between experimentally and computationally obtained values are highlighted, and the systematic errors between different computational methods are discussed.
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests due to considerations of cost and time involved in interpreting the results of these tests by an expert cardiologist. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges and issues we address include missing ECG waves in one or more leads and the dimensionality of a large feature-set. We address these by proposing imputation and feature-selection methods. We develop heartbeat-classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
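As a hedged sketch of the two-stage scheme described above (synthetic data, illustrative features and threshold, not the thesis code), the following trains a heartbeat-level Random Forest with scikit-learn and then labels a recording HCM when the proportion of HCM-classified beats exceeds a threshold.

```python
# Minimal sketch: classify individual heartbeats, then label a recording HCM if
# the fraction of HCM-classified beats exceeds a threshold. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in heartbeat feature matrix: rows are beats, columns are per-lead
# morphological features (synthetic here, for illustration only).
X_train = rng.normal(size=(1000, 24))
y_train = rng.integers(0, 2, size=1000)          # 1 = HCM beat, 0 = non-HCM beat

beat_clf = RandomForestClassifier(n_estimators=200, random_state=0)
beat_clf.fit(X_train, y_train)

def classify_recording(beats, threshold=0.5):
    """Assign an HCM / non-HCM label from the proportion of HCM-classified beats."""
    beat_labels = beat_clf.predict(beats)
    return "HCM" if beat_labels.mean() > threshold else "non-HCM"

print(classify_recording(rng.normal(size=(120, 24))))
```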
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM, pursuing two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction for the two longest individual KINARM tasks. Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. All in all, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
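As a hedged sketch of the first objective (synthetic data and invented feature names, not the thesis pipeline), the following fits a ridge regression from robot-derived task parameters to a clinical score and reports cross-validated accuracy.

```python
# Minimal sketch: predicting a clinical score related to activities of daily
# living from robot-derived task parameters. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_features = 80, 12
robot_params = rng.normal(size=(n_subjects, n_features))   # e.g. reaction time,
                                                           # movement smoothness, error counts
clinical_score = robot_params @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_subjects)

model = Ridge(alpha=1.0)
r2 = cross_val_score(model, robot_params, clinical_score, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```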
Abstract:
In this work I study the optical properties of helical particles and chiral sculptured thin films, using computational modeling (discrete dipole approximation, Berreman calculus) and experimental techniques (glancing angle deposition, ellipsometry, scatterometry, and non-linear optical measurements). The first part of this work focuses on linear optics, namely light scattering from helical microparticles. I study the influence of structural parameters and orientation on the optical properties of the particles: circular dichroism (CD) and optical rotation (OR). I show that, as a consequence of random orientation, CD and OR can have the opposite sign compared to that of the oriented particle, potentially resulting in ambiguity when interpreting measurements. Additionally, particles in random orientation scatter light with circular and elliptical polarization states, which implies that the polarization state of light cannot be disregarded when studying multiple scattering from randomly oriented chiral particles. To perform experiments and attempt to produce such particles, a newly constructed multi-stage thin-film coating chamber is calibrated. It enables the simultaneous fabrication of multiple sculptured thin-film coatings, each with a different structure. With it I successfully produce helical thin-film coatings of Ti and TiO₂. The second part of this work focuses on non-linear optics, with special emphasis on second-harmonic generation. The scientific literature contains extensive experimental and theoretical work on second-harmonic generation from chiral thin films. Such films are expected to always show this non-linear effect, owing to their lack of inversion symmetry. However, no experimental studies report the non-linear response of chiral sculptured thin films. In this work I grow films suitable for a second-harmonic generation experiment and report the first measurements of their non-linear response.
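As a purely hypothetical numerical illustration of the orientation-averaging effect described above (not the discrete dipole approximation model used in this work), the sketch below averages an invented orientation-dependent differential-extinction signal over random orientations and shows that the average can take the opposite sign from the aligned-particle value.

```python
# Hypothetical illustration: if the differential extinction between left- and
# right-circularly polarized light depends on particle orientation, its
# orientation average can differ in sign from the aligned-particle value.
import numpy as np

def delta_ext(theta):
    """Invented orientation-dependent CD signal (arbitrary units)."""
    return np.cos(2 * theta) - 0.3

theta = np.linspace(0, np.pi, 2001)      # polar angle of the helix axis
weights = np.sin(theta)                  # solid-angle weighting for random orientation

aligned = delta_ext(0.0)
averaged = np.sum(delta_ext(theta) * weights) / np.sum(weights)
print(aligned, averaged)                 # aligned > 0, orientation average < 0
```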
Abstract:
This talk explores how the runtime system and operating system can leverage metrics that express the significance and resilience of application components in order to reduce the energy footprint of parallel applications. We will explore in particular how software can tolerate and indeed exploit higher error rates in future processors and memory technologies that may operate outside their safe margins.