867 results for Measurement-based quantum computing
Abstract:
The asymptotic safety scenario allows a consistent theory of quantized gravity to be defined within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows renormalization conditions to be formulated that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as the primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory.

As its main result, this thesis develops an algebraic algorithm which allows the renormalization group flow of gauge theories as well as gravity to be constructed systematically in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product, several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained.

The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking into account the effect of a running ghost field renormalization on the gravitational coupling constants. A detailed numerical analysis reveals a further stabilization of the non-Gaussian fixed point.

Finally, the proposed algorithm is applied to higher derivative gravity including all curvature squared interactions. This improves on existing computations by taking the independent running of the Euler topological term into account. Known perturbative results are reproduced from the renormalization group equation in this case; nevertheless, a unique non-Gaussian fixed point is identified.
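For reference, the exact functional renormalization group equation underlying these investigations is usually quoted in the Wetterich form (standard notation, not specific to this thesis's conventions):

```latex
\partial_t \Gamma_k
  = \frac{1}{2}\,\mathrm{STr}\!\left[
      \left(\Gamma_k^{(2)} + \mathcal{R}_k\right)^{-1}
      \partial_t \mathcal{R}_k
    \right],
\qquad t = \ln k ,
```

where $\Gamma_k$ is the effective average action, $\Gamma_k^{(2)}$ its second functional derivative, and $\mathcal{R}_k$ the infrared regulator whose choice implements the Wilsonian mode suppression.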
Abstract:
In this paper a new 22 GHz water vapor spectroradiometer, MIAWARA-C, which has been specifically designed for profile measurement campaigns of the middle atmosphere, is presented. The instrument is of a compact design and has a simple set-up procedure. It can be operated as a standalone instrument, as it maintains its own weather station and a calibration scheme that does not rely on other instruments or on the use of liquid nitrogen. The optical system of MIAWARA-C combines a choked Gaussian horn antenna with a parabolic mirror, which reduces the size of the instrument in comparison with currently existing radiometers. For data acquisition a correlation receiver is used together with a digital cross-correlating spectrometer. The complete backend section, including the computer, is located in the same housing as the instrument. The receiver section is temperature stabilized to minimize gain fluctuations. Calibration of the instrument is achieved through a balancing scheme with the sky used as the cold load, and the tropospheric properties are determined by performing regular tipping curves. Since MIAWARA-C is used in measurement campaigns, it is important to be able to determine the elevation pointing in a simple manner, as this is a crucial parameter in the calibration process. Here we present two different methods: scanning the sky and scanning the Sun. Finally, we report on the first spectra and retrieved water vapor profiles acquired during the Lapbiat campaign at the Finnish Meteorological Institute Arctic Research Centre in Sodankylä, Finland. The performance of MIAWARA-C is validated by comparison of the presented profiles against the equivalent profiles from the Microwave Limb Sounder on the EOS/Aura satellite.
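The tipping-curve idea mentioned above can be illustrated with a minimal sketch (synthetic numbers, not the instrument's actual processing): assuming a plane-parallel atmosphere with mean temperature `t_atm`, the zenith opacity follows from the slope of the log-attenuation versus airmass.

```python
import math

def fit_opacity(elevations_deg, t_sky, t_atm):
    """Estimate zenith opacity tau from sky brightness temperatures at
    several elevations, assuming T_sky(A) = T_atm * (1 - exp(-tau * A)),
    where A = 1/sin(elevation) is the airmass of a plane-parallel layer."""
    xs, ys = [], []
    for elev, tsky in zip(elevations_deg, t_sky):
        airmass = 1.0 / math.sin(math.radians(elev))
        xs.append(airmass)
        ys.append(-math.log(1.0 - tsky / t_atm))  # equals tau * airmass
    # least-squares slope through the origin: tau = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# synthetic tipping curve generated with tau = 0.1 and T_atm = 270 K
tau_true, t_atm = 0.1, 270.0
elevs = [20, 30, 45, 60, 90]
tsky = [t_atm * (1 - math.exp(-tau_true / math.sin(math.radians(e)))) for e in elevs]
tau_hat = fit_opacity(elevs, tsky, t_atm)
```

On exact synthetic data the fit recovers the opacity used to generate it; real data additionally require the tropospheric mean temperature to be estimated, e.g. from the instrument's weather station.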
Abstract:
Despite the increased use of intracranial neuromonitoring during experimental subarachnoid hemorrhage (SAH), coordinates for probe placement in rabbits are lacking. This study evaluates the safety and reliability of using outer skull landmarks to identify locations for placement of cerebral blood flow (CBF) and intraparenchymal intracranial pressure (ICP) probes. Experimental SAH was performed in 17 rabbits using an extracranial-intracranial shunt model. ICP probes were placed in the frontal lobe and compared to measurements recorded from the olfactory bulb. CBF probes were placed in various locations in the frontal cortex anterior to the coronal suture. Insertion depth, relation to the ventricular system, and ideal placement location were determined by post-mortem examination. ICP recordings at the time of SAH from the frontal lobe did not differ significantly from those obtained from the right olfactory bulb. Ideal coordinates for intraparenchymal CBF probes in the left and right frontal lobe were found to be located 4.6±0.9mm and 4.5±1.2mm anterior to the bregma, 4.7±0.7mm and 4.7±0.5mm parasagittal, and at depths of 4±0.5mm and 3.9±0.5mm, respectively. The results demonstrate that the presented coordinates based on skull landmarks allow reliable placement of intraparenchymal ICP and CBF probes in rabbit brains without the use of a stereotactic frame.
Abstract:
The measurement of fluid volumes in cases of pericardial effusion is a necessary procedure during autopsy. With the increased use of virtual autopsy methods in forensics, the need for a quick volume measurement method on computed tomography (CT) data arises, especially since methods such as CT angiography can potentially alter the fluid content in the pericardium. We retrospectively selected 15 cases with hemopericardium, which underwent post-mortem imaging and autopsy. Based on CT data, the pericardial blood volume was estimated using segmentation techniques and downsampling of CT datasets. Additionally, a variety of measures (distances, areas and 3D approximations of the effusion) were examined to find a quick and easy way of estimating the effusion volume. Segmentation of CT images as shown in the present study is a feasible method to measure the pericardial fluid amount accurately. Downsampling of a dataset significantly increases the speed of segmentation without losing too much accuracy. Some of the other methods examined might be used to quickly estimate the severity of the effusion volumes.
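The volume-from-segmentation step described above reduces to counting segmented voxels and scaling by the voxel size, and downsampling trades resolution for speed. A minimal sketch (synthetic spherical "effusion" mask and illustrative voxel spacing, not the study's CT data):

```python
def make_sphere_mask(shape, center, radius):
    """Binary 3D mask (nested lists, indexed [z][y][x]) of a sphere."""
    zc, yc, xc = center
    return [[[1 if (z - zc) ** 2 + (y - yc) ** 2 + (x - xc) ** 2 <= radius ** 2 else 0
              for x in range(shape[2])]
             for y in range(shape[1])]
            for z in range(shape[0])]

def volume_ml(mask, spacing_mm):
    """Volume = (foreground voxel count) * (voxel volume); 1 ml = 1000 mm^3."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    count = sum(v for plane in mask for row in plane for v in row)
    return count * voxel_mm3 / 1000.0

def downsample2(mask):
    """Keep every second voxel along each axis (8x fewer voxels)."""
    return [[row[::2] for row in plane[::2]] for plane in mask[::2]]

# sphere of radius 12 mm -> analytic volume ~7.24 ml
full = make_sphere_mask((40, 40, 40), (20, 20, 20), 12)
v_full = volume_ml(full, (1.0, 1.0, 1.0))
small = downsample2(full)
v_small = volume_ml(small, (2.0, 2.0, 2.0))  # doubled spacing after downsampling
```

The downsampled estimate stays within a few percent of the full-resolution one while touching an eighth of the voxels, which is the speed/accuracy trade-off the study exploits.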
Abstract:
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available for the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.
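The reduction to the ordinary Cox fit under a degenerate error distribution can be illustrated on a toy version of the partial likelihood score (single covariate, no ties, naive replicate-mean imputation; this is a sketch, not the paper's spline-based estimator):

```python
import math

def cox_score(beta, times, events, x):
    """Cox partial-likelihood score for one covariate, no tied event times:
    U(beta) = sum over events i of [x_i - weighted mean of x over risk set R(i)],
    with weights exp(beta * x_j)."""
    u = 0.0
    for i in range(len(times)):
        if not events[i]:
            continue
        risk = [j for j in range(len(times)) if times[j] >= times[i]]
        w = [math.exp(beta * x[j]) for j in risk]
        u += x[i] - sum(wj * x[j] for wj, j in zip(w, risk)) / sum(w)
    return u

def solve_beta(times, events, x, lo=-5.0, hi=5.0):
    """Bisection on the score, which is monotone decreasing in beta."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cox_score(mid, times, events, x) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Degenerate ("no error") replicates equal the true covariate, so the
# replicate-mean imputation reproduces the ordinary Cox estimate exactly.
times  = [2.0, 3.0, 5.0, 7.0, 11.0, 13.0]
events = [1, 1, 0, 1, 1, 0]
x_true = [0.5, -1.0, 0.2, 1.5, -0.3, 0.8]
replicates = [[v, v] for v in x_true]
x_imp = [sum(r) / len(r) for r in replicates]
beta_imp = solve_beta(times, events, x_imp)
beta_true = solve_beta(times, events, x_true)
```

With genuinely noisy replicates, averaging more of them shrinks the imputation error, which is the intuition behind the bias reduction the abstract reports.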
Abstract:
We propose integrated optical structures that can be used as isolators and polarization splitters based on engineered photonic lattices. Starting from optical waveguide arrays that mimic the Fock space (quantum states with a well-defined particle number) representation of a non-interacting two-site Bose-Hubbard Hamiltonian, we show that introducing magneto-optic nonreciprocity to these structures leads to superior optical isolation performance. In the forward propagation direction, an input TM-polarized beam experiences a perfect state transfer between the input and output waveguide channels, while surface Bloch oscillations block the backward transmission between the same ports. Our analysis indicates a large isolation ratio of 75 dB after a propagation distance of 8 mm inside seven coupled waveguides. Moreover, we demonstrate that a judicious choice of the nonreciprocity in the same geometry can lead to perfect polarization splitting.
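The isolation figure quoted above is the forward-to-backward transmitted power ratio expressed in decibels; a quick numerical check (illustrative powers, not the paper's simulation data):

```python
import math

def isolation_db(p_forward, p_backward):
    """Isolation ratio in dB between forward and backward transmitted power."""
    return 10.0 * math.log10(p_forward / p_backward)

# 75 dB isolation means the backward power is suppressed by a factor
# of 10**7.5 (about thirty million) relative to the forward power.
iso = isolation_db(1.0, 10.0 ** -7.5)
```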
Abstract:
The current article presents a novel physiological control algorithm for ventricular assist devices (VADs), which is inspired by the preload recruitable stroke work. This controller adapts the hydraulic power output of the VAD to the end-diastolic volume of the left ventricle. We tested this controller on a hybrid mock circulation where the left ventricular volume (LVV) is known, i.e., the problem of measuring the LVV is not addressed in the current article. Experiments were conducted to compare the response of the controller with the physiological and with the pathological circulation, with and without VAD support. A sensitivity analysis was performed to analyze the influence of the controller parameters and the influence of the quality of the LVV signal on the performance of the control algorithm. The results show that the controller induces a response similar to the physiological circulation and effectively prevents over- and underpumping, i.e., ventricular suction and backflow from the aorta to the left ventricle, respectively. The same results are obtained in the case of a disturbed LVV signal. The results presented in the current article motivate the development of a robust, long-term stable sensor to measure the LVV.
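The control idea, adapting pump power to the end-diastolic volume in analogy to the preload recruitable stroke work, can be caricatured as a saturated proportional law (gain, volume offset, and power limits are invented for illustration, not the article's tuned values):

```python
def vad_power_setpoint(edv_ml, gain=0.02, v0_ml=60.0, p_min=0.2, p_max=2.5):
    """Map left-ventricular end-diastolic volume (ml) to a hydraulic power
    setpoint (W), clipped to the pump's operating range. Higher preload
    requests more support; the lower clip guards against over-pumping the
    ventricle dry (suction), the upper clip bounds the pump's output.
    All numbers are illustrative."""
    p = gain * (edv_ml - v0_ml)
    return max(p_min, min(p_max, p))

# Low preload -> minimum power; mid-range preload -> proportional power;
# very high preload -> saturates at the maximum power.
low, mid, high = (vad_power_setpoint(v) for v in (50.0, 120.0, 300.0))
```

The hybrid mock-circulation experiments in the article play the role of the plant here; the open problem flagged in the abstract is obtaining the `edv_ml` input from a robust long-term sensor.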
Abstract:
In this paper we present BitWorker, a platform for community distributed computing based on BitTorrent. Any splittable task can be easily specified by a user in a meta-information task file, such that it can be downloaded and performed by other volunteers. Peers find each other using Distributed Hash Tables, download existing results, and compute missing ones. Unlike existing distributed computing schemes that rely on centralized coordination points, our scheme is fully distributed and therefore highly robust. We evaluate the performance of BitWorker using mathematical models and real tests, showing processing and robustness gains. BitWorker is available for download and use by the community.
Abstract:
Stratospheric ozone is of major interest as it absorbs most harmful UV radiation from the sun, allowing life on Earth. Ground-based microwave remote sensing is the only method that allows for the measurement of ozone profiles up to the mesopause, over 24 hours and under different weather conditions with high time resolution. In this paper a novel ground-based microwave radiometer is presented. It is called GROMOS-C (GRound based Ozone MOnitoring System for Campaigns), and it has been designed to measure the vertical profile of ozone distribution in the middle atmosphere by observing ozone emission spectra at a frequency of 110.836 GHz. The instrument is designed in a compact way, which makes it transportable and suitable for outdoor use in campaigns, an advantageous feature that is lacking in present-day ozone radiometers. It is operated by remote control. GROMOS-C is a total power radiometer which uses a pre-amplified heterodyne receiver and a digital fast Fourier transform spectrometer for the spectral analysis. Among its main new features, the incorporation of different calibration loads stands out; this includes a noise diode and a new type of blackbody target specifically designed for this instrument, based on Peltier elements. The calibration scheme does not depend on the use of liquid nitrogen; therefore GROMOS-C can be operated at remote places with no maintenance requirements. In addition, the instrument can be switched in frequency to observe the CO line at 115 GHz. A description of the main characteristics of GROMOS-C is included in this paper, as well as the results of a first campaign at the High Altitude Research Station at Jungfraujoch (HFSJ), Switzerland.
The validation is performed by comparison of the retrieved profiles against equivalent profiles from MLS (Microwave Limb Sounder) satellite data and ECMWF (European Centre for Medium-Range Weather Forecasts) model data, as well as against measurements from our nearby NDACC (Network for the Detection of Atmospheric Composition Change) ozone radiometer in Bern.
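Total power radiometers of this kind are conventionally characterized against two loads of known temperature. A minimal Y-factor sketch (illustrative numbers; GROMOS-C's actual scheme uses a noise diode and a Peltier-based blackbody target rather than liquid nitrogen):

```python
def y_factor_calibration(p_hot, p_cold, t_hot, t_cold):
    """Receiver noise temperature from a two-load measurement:
    Y = P_hot / P_cold,  T_rx = (T_hot - Y * T_cold) / (Y - 1)."""
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

def brightness_temperature(p_sky, p_hot, p_cold, t_hot, t_cold):
    """Linear radiometer equation: interpolate the sky power between
    the two calibration points of known brightness temperature."""
    return t_cold + (p_sky - p_cold) * (t_hot - t_cold) / (p_hot - p_cold)

# synthetic detector powers for a linear receiver with gain g, T_rx = 150 K
g, t_rx = 2.0, 150.0
t_hot, t_cold = 300.0, 80.0
p_hot, p_cold = g * (t_hot + t_rx), g * (t_cold + t_rx)
p_sky = g * (120.0 + t_rx)   # sky scene at 120 K brightness temperature
t_rx_est = y_factor_calibration(p_hot, p_cold, t_hot, t_cold)
t_sky_est = brightness_temperature(p_sky, p_hot, p_cold, t_hot, t_cold)
```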
Abstract:
Two studies among college students were conducted to evaluate appropriate measurement methods for etiological research on computing-related upper extremity musculoskeletal disorders (UEMSDs).

A cross-sectional study among 100 graduate students evaluated the utility of symptom surveys (a VAS scale and a 5-point Likert scale) compared with two UEMSD clinical classification systems (the Gerr and Moore protocols). The two symptom measures were highly concordant (Lin's rho = 0.54; Spearman's r = 0.72); the two clinical protocols were moderately concordant (Cohen's kappa = 0.50). Sensitivity and specificity, summarized by Youden's J statistic, did not reveal much agreement between the symptom surveys and the clinical examinations. It cannot be concluded that self-report symptom surveys can be used as surrogates for clinical examinations.

A pilot repeated-measures study conducted among 30 undergraduate students evaluated computing exposure measurement methods. Key findings were temporal variations in symptoms, and that the odds of experiencing symptoms increased with every hour of computer use (adjOR = 1.1, p < .10) and with every stretch break taken (adjOR = 1.3, p < .10). When posture was measured using the Computer Use Checklist, a positive association with symptoms was observed (adjOR = 1.3, p < .10), while measuring posture using a modified Rapid Upper Limb Assessment produced unexpected and inconsistent associations. The findings were inconclusive in identifying an appropriate posture assessment or a superior conceptualization of computer use exposure.

A cross-sectional study of 166 graduate students evaluated the comparability of graduate students' responses to the College Computing & Health surveys administered to undergraduate students. Fifty-five percent reported computing-related pain and functional limitations. Years of computer use in graduate school, and the number of years in school with weekly computer use of ≥ 10 hours, were associated with pain within an hour of computing in logistic regression analyses. The findings are consistent with the current literature on both undergraduate and graduate students.
Abstract:
We present GrafLab (GRAvity Field LABoratory), a novel graphical-user-interface program for spherical harmonic synthesis (SHS) created in MATLAB®. The program can conveniently compute 38 different functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we apply the lumped-coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
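The standard forward column method of approach (i) can be sketched as follows (fully normalized ALFs in the geodetic 4π convention; the identity Σ_m P̄nm² = 2n+1 at any latitude serves as a consistency check):

```python
import math

def fnalf(nmax, lat_rad):
    """Fully normalized associated Legendre functions P[n][m] at one
    latitude, via the standard forward column recursion (geodetic 4-pi
    normalization). Stable in double precision up to degree ~1800,
    matching the limit quoted in the abstract."""
    t, u = math.sin(lat_rad), math.cos(lat_rad)
    p = [[0.0] * (n + 1) for n in range(nmax + 1)]
    p[0][0] = 1.0
    if nmax == 0:
        return p
    p[1][1] = math.sqrt(3.0) * u
    # sectorial seeds P[m][m]
    for m in range(2, nmax + 1):
        p[m][m] = u * math.sqrt((2.0 * m + 1.0) / (2.0 * m)) * p[m - 1][m - 1]
    # forward recursion in degree n along each column (fixed order m)
    for m in range(0, nmax):
        for n in range(m + 1, nmax + 1):
            a = math.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0)
                          / ((n - m) * (n + m)))
            if n == m + 1:
                p[n][m] = a * t * p[n - 1][m]
            else:
                b = math.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                              / ((n - m) * (n + m) * (2.0 * n - 3.0)))
                p[n][m] = a * t * p[n - 1][m] - b * p[n - 2][m]
    return p

p = fnalf(60, 0.6)  # all degrees/orders up to 60 at latitude 0.6 rad
```

Beyond degree ~1800 the sectorial seeds underflow near the poles, which is exactly why GrafLab switches to Horner's scheme and then to extended-range arithmetic.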
Abstract:
Quantum dot infrared photodetectors (QDIPs) are very attractive for many applications such as infrared imaging, remote sensing and gas sensing, thanks to promising features such as high-temperature operation, normal-incidence response and low dark current [1]. However, the key issue is to obtain a high-quality active region, which requires an optimization of the nanostructure. By using a GaAsSb capping layer, InAs QDs have improved their optical emission in the range between 1.15 and 1.3 μm (at an Sb composition of 14%), due to a reduction of the compressive strain in the QDs and an increase of the QD height [2]. In this work, we have demonstrated strong and narrow intraband photoresponses at ~5 μm from GaAsSb-capped InAs/GaAs QDIPs under normal light incidence.
Abstract:
Multi-user videoconferencing systems offer communication between more than two users, who are able to interact through their webcams, microphones and other components. The use of these systems has increased recently due, on the one hand, to improvements in Internet access for companies, universities and homes, where the available bandwidth has increased while the delay in sending and receiving packets has decreased. On the other hand, the advent of Rich Internet Applications (RIA) means that a large part of web application logic and control has started to be implemented in web browsers. This has allowed developers to create web applications with a level of complexity comparable to traditional desktop applications running on top of operating systems. More recently, the use of Cloud Computing systems has improved application scalability and reduced the price of backend systems. This offers the possibility of implementing web services on the Internet without large upfront expenditure on infrastructure and resources, both hardware and software. Nevertheless, there are few initiatives that aim to implement videoconferencing systems taking advantage of Cloud systems. This dissertation proposes a set of techniques, interfaces and algorithms for the implementation of videoconferencing systems on public and private Cloud Computing infrastructures. The mechanisms proposed here are based on the implementation of a basic videoconferencing system that runs in the web browser without any prior installation requirements. To this end, the development of this thesis starts from an RIA application using current technologies that allow users to access their webcams and microphones from the browser and to send the captured data through their Internet connections. Furthermore, interfaces have been implemented to allow end users to participate in videoconferencing rooms that are managed on different Cloud providers' servers.
To do so, this dissertation starts from the results obtained with the previous techniques, and backend resources were implemented in the Cloud. A traditional videoconferencing service which had been implemented in the department was modified to meet typical Cloud Computing infrastructure requirements. This allowed us to validate whether public Cloud Computing infrastructures are suitable for the traffic generated by this kind of system. This analysis focused on the network level and on the processing capacity and stability of the Cloud Computing systems. To strengthen this validation, several more general considerations were taken into account in order to cover further cases, such as multimedia data processing in the Cloud, as research activity has increased in this area in recent years. The last stage of this dissertation is the design of a new methodology to implement these kinds of applications in hybrid clouds, reducing the cost of videoconferencing systems. Finally, this dissertation opens up a discussion about the conclusions obtained throughout this study, resulting in useful information from the different stages of the implementation of videoconferencing systems on Cloud Computing systems.
Abstract:
The main objective of this thesis is to extend the use of soft computing for the control of unmanned vehicles using vision. This work goes beyond the typical control systems used in highly controlled environments, demonstrating the power and versatility of fuzzy logic for controlling aerial and ground vehicles in a range of different applications. For this thesis, a large number of real-world tests were carried out in which fuzzy controllers operated a pan-and-tilt visual platform, a helicopter, a commercial car, and two types of quadrotor. The Cross-Entropy optimization method was used to improve the behavior of some of the fuzzy controllers. All fuzzy controllers presented in this thesis were implemented using software developed by the candidate for this purpose, called MOFS (Miguel Olivares' Fuzzy Software). Different vision algorithms were used to acquire visual information about the environment, among them CamShift, homography decomposition, and detection of augmented-reality markers. This visual information was used as input to the fuzzy controllers to command the vehicles in the different autonomous applications. The steering wheel of a commercial car was controlled to perform autonomous driving tests in traffic conditions similar to those of a city. The system successfully completed tests of more than 6 km without any human interaction, by following a line painted on the ground. The limited field of view of the system was no obstacle to reaching speeds of up to 48 km/h and being guided autonomously through curves of small radius. Static and moving objects were tracked from an unmanned helicopter by controlling a pan-and-tilt visual platform.

The same helicopter was fully controlled for autonomous landing by controlling its lateral (roll), horizontal (pitch), and altitude motion. Tracking of flying objects was solved by controlling the pitch and heading of a quadrotor. For obstacle-avoidance tasks, a fuzzy controller was implemented to manage the heading of a quadrotor. In the field of controller optimization, this thesis contributes to the state of the art an extension of the use of the Cross-Entropy method: a novel implementation that optimizes the gains, the position and width of the membership-function sets, and the rule weights in order to improve the behavior of a fuzzy controller. These optimization processes were carried out using ROS and MATLAB Simulink to obtain better results for collision avoidance with unmanned aerial vehicles. This thesis demonstrates that controllers implemented with fuzzy logic are highly capable of controlling systems, without taking the model of the controlled vehicle into account, in highly disturbed environments, using a low-cost sensor such as a camera. The noise caused by illumination changes during image acquisition and the high uncertainty of visual detection were handled satisfactorily by this soft-computing technique in various applications with both aerial and ground vehicles.
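A fuzzy heading controller of the kind described can be sketched minimally as follows (triangular membership functions, crisp consequents, and rules are all invented for illustration; this is not MOFS, whose design the thesis describes but does not reproduce here):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_heading_rate(error_deg):
    """Map a heading error (deg) to a commanded yaw rate (deg/s) with
    three rules: negative error -> negative yaw rate, near-zero error
    -> hold, positive error -> positive yaw rate. Defuzzified as the
    weighted mean of crisp rule outputs (zero-order Sugeno style)."""
    mu_neg  = tri(error_deg, -90.0, -45.0, 0.0)
    mu_zero = tri(error_deg, -45.0, 0.0, 45.0)
    mu_pos  = tri(error_deg, 0.0, 45.0, 90.0)
    weights = (mu_neg, mu_zero, mu_pos)
    outputs = (-20.0, 0.0, 20.0)          # crisp consequents in deg/s
    total = sum(weights)
    if total == 0.0:                       # saturate outside the universe
        return -20.0 if error_deg < 0 else 20.0
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

Note that no vehicle model appears anywhere: the controller is defined purely by the membership shapes, rules, and weights, which is exactly the set of parameters the thesis tunes with the Cross-Entropy method.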