867 results for Measurement-based quantum computing
Abstract:
The increasing importance of vertical specialisation (VS) trade has been a notable feature of rapid economic globalisation and regional integration. In an attempt to understand countries' depth of participation in global production chains, many Input-Output based VS indicators have been developed. However, most of them focus on showing the overall magnitude of a country's VS trade rather than explaining the roles that specific sectors or products play in VS trade, or which factors drive changes in VS over time. Changes in vertical specialisation indicators are, in fact, determined by mixed and complex factors such as import substitution ratios, the types of goods exported and domestic production networks. In this paper, decomposition techniques are applied to VS measurement based on the OECD Input-Output database. The decomposition results not only help us understand the structure of VS at detailed sector and product levels, but also show the contributions of trade dependency, the industrial structure of foreign trade and the domestic production system to a country's vertical specialisation trade.
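For readers who want the arithmetic behind such indicators, here is a minimal numerical sketch of the standard Hummels-Ishii-Yi VS measure that Input-Output based indicators of this kind build on; the two-sector matrices are illustrative toy data, not OECD figures.

```python
import numpy as np

# Hedged sketch of the Hummels-Ishii-Yi vertical specialisation (VS) measure
# from an Input-Output table; matrix values below are illustrative only.
def vs_share(A_dom, A_imp, exports):
    """VS share of exports: u' A^M (I - A^D)^-1 e / sum(e).

    A_dom   : (n, n) domestic input coefficient matrix A^D
    A_imp   : (n, n) imported input coefficient matrix A^M
    exports : (n,) gross exports by sector
    """
    n = A_dom.shape[0]
    leontief_inv = np.linalg.inv(np.eye(n) - A_dom)      # (I - A^D)^-1
    # imported content embodied, directly and indirectly, in exports
    vs_by_sector = A_imp @ leontief_inv @ exports
    return vs_by_sector.sum() / exports.sum()

# toy 2-sector example
A_dom = np.array([[0.20, 0.10], [0.30, 0.25]])
A_imp = np.array([[0.05, 0.02], [0.04, 0.10]])
exports = np.array([100.0, 80.0])
print(f"VS share of exports: {vs_share(A_dom, A_imp, exports):.3f}")
```

Decomposing changes in this share over time into the contributions of A_imp (trade dependency), the export mix and the domestic Leontief inverse is exactly the kind of exercise the paper carries out at sector level.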
Abstract:
This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work is focused on the analysis of dual offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared with conventional dual offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of that methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool through the manufacture and testing of a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas built from two flat reflectarray structures. The classic reflectarray analysis, based on the Method of Moments (MoM) under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the angle of incidence on each reflectarray element. The incident field on the main reflectarray is computed from the field radiated by all the elements of the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, while the other does not, but in many cases offers improved accuracy. The approximation consists of computing the reflected field on each main-reflectarray element only once for all the fields radiated by the sub-reflectarray elements, on the assumption that the response is the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the elements of the main reflectarray show relatively little sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas whose main reflectarray is printed on a parabolic or, in general, curved surface. In many dual-reflectarray configurations the reflectarray elements lie in the near field of the feed horn. To account for the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better compute the actual incident field on the sub-reflectarray elements. This technique improves the accuracy of the predicted co- and cross-polar patterns and antenna gain compared with ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands, including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector with reflectarray sub-reflector: radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is replaced by a flat reflectarray.
The results show that the gain is reduced by only a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in applications requiring multiple beams, beam scanning or shaped beams. Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. In the proposed antenna configuration the feed, the sub-reflectarray and the main reflectarray are each in the near field of the others, so that conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used to benchmark the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve its accuracy are also discussed. These improvements include a novel method to reduce the inherent cross-polarization introduced mainly by grounded patch arrays. It has been verified that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray so as to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the phase distribution required to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described, which is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarization requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also offer the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are described, based on a dual planar reflectarray configuration that provides electronic beam scanning over a limited angular range. In the first architecture, beam scanning is achieved by introducing phase control in the elements of the sub-reflectarray while the main reflectarray is passive. In the second alternative, beam scanning is produced by 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bidirectional satellite links for emergency communications. In both proposed architectures, the objective is a compact optical arrangement that is simple to fold and deploy.
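As a concrete illustration of the kind of phase synthesis such designs rely on, below is a minimal sketch of the textbook single-surface reflectarray phase rule for collimating a pencil beam. It is not the thesis's dual-reflectarray MoM tool; the frequency, feed position, element grid and beam direction are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: required phase shift at each reflectarray element so that
# the field from a feed at `feed` is collimated towards (theta_b, phi_b):
#   phi_i = k0 * (d_i - sin(theta_b) * (x_i cos(phi_b) + y_i sin(phi_b)))
# All parameters are illustrative assumptions.
k0 = 2 * np.pi / 0.025                     # free-space wavenumber at ~12 GHz (Ku-band)
theta_b, phi_b = np.radians(20), 0.0       # desired beam direction
feed = np.array([0.0, -0.15, 0.30])        # feed phase-centre position (m)

# 20 x 20 element grid, half-wavelength spacing, in the z = 0 plane
xs = (np.arange(20) - 9.5) * 0.0125
X, Y = np.meshgrid(xs, xs)
d_feed = np.sqrt((X - feed[0])**2 + (Y - feed[1])**2 + feed[2]**2)

phase = k0 * (d_feed - np.sin(theta_b) * (X * np.cos(phi_b) + Y * np.sin(phi_b)))
phase = np.mod(phase, 2 * np.pi)           # wrap to [0, 2*pi): what the patch sizes encode
print(phase.shape, phase.min(), phase.max())
```

In the dual-reflectarray case the same idea is applied on two surfaces at once, which is the extra freedom the thesis exploits for beam scanning and contoured beams.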
Abstract:
A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major open challenges in the multimedia environment. Video quality has a very strong impact on the end user's (consumer's) perception of services based on the delivery of multimedia content, and is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system. The most prominent are those employing psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting and comparing intrinsic image features. Despite the advances in this field in recent years, research on video quality metrics, whether full-reference (the reference is available), reduced-reference (only part of it is) or no-reference (none of it is), still has a long way to go and many goals to reach. Among these, the measurement of high-definition signals, especially the very high-quality signals used in the early stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models currently exist for it. This doctoral thesis presents a full-reference quality measurement model, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics set. PARMENIA is novel in its use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. The formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical models based on the perceptibility of the error produced by compression-related signal degradation. PARMENIA shows very high correlation with the MOS scores from the subjective user tests carried out for its validation. The working corpus was selected from internationally validated sets of sequences, so that the reported results are of the highest possible quality and rigour.
The working methodology consisted of generating a set of test sequences of different qualities by encoding with different quantisation steps; obtaining subjective ratings of these sequences through subjective quality tests (based on International Telecommunication Union Recommendation BT.500); and validating the metric by computing the correlation of PARMENIA with these subjective scores, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimised and their high correlation with perception confirmed, a second evaluation was carried out on sequences from HDTV test dataset 1 of the Video Quality Experts Group (VQEG), and the results showed clear advantages. Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions and higher quality requirements (e.g. high definition and better image quality) calls for a redefinition of quality measurement models. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality, to sum up the state of the art. The thesis then describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information, which provides a normalised metric, PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), that correlates highly with MOS scores. The PARMENIA metric is based on the pooling of quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and Haralick's contrast and homogeneity texture features. The metric's performance is excellent, improving on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at the very high bit rates whose quality is currently transparent to existing metrics. PARMENIA introduces a degree of novelty over other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points, which is closer to typical error-sensitivity-based approaches. To our knowledge, PARMENIA is the only metric built upon full-reference ratios that uses mathematical morphology and texture features (typically applied in segmentation) for quality assessment. On the other hand, its wide dynamic range allows measuring the quality of high-definition sequences from bit rates of hundreds of Mbps down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). A direct correlation between PARMENIA and MOS scores is thus easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high-quality HD material.
All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
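To make the ingredients concrete, the sketch below computes the two feature families the metric pools, the Beucher (morphological) gradient and Haralick's contrast/homogeneity statistics, on a reference and a degraded frame, and combines them into full-reference ratios. The ratio definitions, equal weights and random test frames are illustrative assumptions, not PARMENIA's actual formulation.

```python
import numpy as np
from scipy.ndimage import morphological_gradient
from skimage.feature import graycomatrix, graycoprops

# Hedged sketch of PARMENIA-style ingredients; feature-to-ratio mapping
# and pooling weights here are illustrative, not the thesis's definitions.
def frame_features(gray_uint8):
    grad = morphological_gradient(gray_uint8.astype(float), size=(3, 3))
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {
        "beucher_energy": float(np.mean(grad)),                        # fidelity-type cue
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),        # sharpness cue
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),  # complexity cue
    }

def ratio(ref_val, deg_val, eps=1e-9):
    """Full-reference ratio of a feature on degraded vs. reference frames."""
    return deg_val / (ref_val + eps)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
deg = np.clip(ref.astype(int) + rng.integers(-8, 9, size=ref.shape), 0, 255).astype(np.uint8)
f_ref, f_deg = frame_features(ref), frame_features(deg)
ratios = {k: ratio(f_ref[k], f_deg[k]) for k in f_ref}
score = sum(ratios.values()) / len(ratios)   # illustrative equal-weight pooling
print(ratios, score)
```

Validation then reduces to computing the Pearson correlation between such frame-level scores (pooled per sequence) and the MOS values from the BT.500 tests, e.g. via scipy.stats.pearsonr.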
Abstract:
Quantitative descriptive analysis (QDA) is used to describe the nature and intensity of sensory properties from a single evaluation of a product, whereas temporal dominance of sensations (TDS) is primarily used to identify dominant sensory properties over time. Previous studies with TDS have focused on model systems; this is the first study to use a sequential approach, i.e. QDA then TDS, in measuring the sensory properties of a commercial product category, using the same set of trained assessors (n = 11). The main objectives of this study were: (1) to investigate the benefits of using a sequential approach of QDA and TDS and (2) to explore the impact of sample composition on taste and flavour perceptions in blackcurrant squashes. The present study proposes an alternative way of determining the choice of attributes for TDS measurement, based on data obtained from previous QDA studies where available. Both methods indicated that the flavour profile was primarily influenced by the level of dilution and the complexity of the sample composition combined with the blackcurrant juice content. In addition, artificial sweeteners were found to modify the quality of sweetness and could also contribute to bitter notes. Using QDA and TDS in tandem was shown to be more beneficial than using either alone, enabling a more complete sensory profile of the products.
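A minimal sketch of the core TDS computation follows: the dominance rate of each attribute at each time point is the share of evaluations in which it was selected as dominant. The attribute names and randomly generated selections are stand-ins, not the study's data.

```python
import numpy as np

# Hedged sketch of TDS dominance curves; data are random placeholders.
rng = np.random.default_rng(0)
attributes = ["sweet", "bitter", "blackcurrant"]    # illustrative attribute list
n_eval, n_times = 22, 50                            # e.g. 11 assessors x 2 replicates
# selections[e, t] = index of the attribute dominant for evaluation e at time t
selections = rng.integers(0, len(attributes), size=(n_eval, n_times))

dominance = np.stack([(selections == a).mean(axis=0)
                      for a in range(len(attributes))])   # (n_attr, n_times)
chance = 1.0 / len(attributes)      # chance level drawn on standard TDS plots
print(dominance.shape, dominance[:, 0], chance)
```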
Abstract:
Despite the gargantuan stakes that mergers and acquisitions represent, global companies' success rate at integrating organizations has been dismal, incurring billions of dollars in lost shareholder value. International human resources' handling of the cultural integration process is the principal differentiator between success and failure. This Capstone project proposes a process for developing cultural integration mechanisms, known as glue technology, and provides a step-by-step process map for execution through four phases. During planning, the need for glue technology is defined. Through analysis, rewards systems are assessed and a strategy is chosen. In implementation planning, unanimous executive commitment must be secured. Last comes measurement, based on the integration plan's objectives. Enabling mechanisms such as removing negative influencers, speed, and communication are also discussed.
Abstract:
Detection of a single nuclear spin constitutes an outstanding problem in different fields of physics such as quantum computing or magnetic imaging. Here we show that the energy levels of a single nuclear spin can be measured by means of inelastic electron tunneling spectroscopy (IETS). We consider two different systems, a magnetic adatom probed with scanning tunneling microscopy and a single Bi dopant in a silicon nanotransistor. We find that the hyperfine coupling opens new transport channels which can be resolved at experimentally accessible temperatures. Our simulations evince that IETS yields information about the occupations of the nuclear spin states, paving the way towards transport-detected single nuclear spin resonance.
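The level structure that makes this detection possible can be reproduced with a few lines of linear algebra. Below is a hedged sketch diagonalising the isotropic hyperfine Hamiltonian H = A I·S for the Si:Bi case (electron S = 1/2, nuclear I = 9/2, with A ≈ 1.4754 GHz taken from the literature); it shows the F = 4 and F = 5 manifolds split by 5A, not the paper's full transport simulation.

```python
import numpy as np

def spin_ops(j):
    """Return (jx, jy, jz) matrices for spin quantum number j."""
    d = int(round(2 * j + 1))
    m = j - np.arange(d)                              # m = j, j-1, ..., -j
    jz = np.diag(m).astype(complex)
    jp = np.zeros((d, d), dtype=complex)
    for k in range(d - 1):                            # <m[k]| J+ |m[k+1]>
        jp[k, k + 1] = np.sqrt(j * (j + 1) - m[k + 1] * (m[k + 1] + 1))
    jm = jp.conj().T
    return (jp + jm) / 2, (jp - jm) / 2j, jz

A = 1.4754                 # Si:Bi hyperfine constant in GHz (literature value)
Sx, Sy, Sz = spin_ops(0.5)                            # electron spin 1/2
Ix, Iy, Iz = spin_ops(4.5)                            # Bi nuclear spin 9/2
H = A * sum(np.kron(Io, So) for Io, So in [(Ix, Sx), (Iy, Sy), (Iz, Sz)])
levels = np.unique(np.round(np.linalg.eigvalsh(H), 6))
print(levels)              # two manifolds, F = 4 and F = 5, split by 5A ~ 7.4 GHz
```

Adding Zeeman terms to H and tracking which transitions open new inelastic channels is, in spirit, what the IETS analysis resolves.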
Abstract:
The n-tuple pattern recognition method has been tested using a selection of 11 large data sets from the European Community StatLog project, so that the results could be compared with those reported for the 23 other algorithms the project tested. The results indicate that this ultra-fast memory-based method is a viable competitor with the others, which include optimisation-based neural network algorithms, even though the theory of memory-based neural computing is less highly developed in terms of statistical theory.
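For readers unfamiliar with the method, here is a minimal sketch of a WISARD-style n-tuple classifier: each class owns a set of RAM look-up tables indexed by n randomly chosen input bits, training simply sets the addressed entries, and classification counts matches. The data, parameters and toy labelling rule are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the classic n-tuple (WISARD-style) memory-based classifier.
class NTupleClassifier:
    def __init__(self, n_bits, n_tuples, tuple_size, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # random bit positions for each tuple, fixed at construction
        # (sampled with replacement for simplicity)
        self.tuples = rng.choice(n_bits, size=(n_tuples, tuple_size), replace=True)
        self.rams = np.zeros((n_classes, n_tuples, 2 ** tuple_size), dtype=bool)
        self.weights = 2 ** np.arange(tuple_size)

    def _addresses(self, x):                 # x: binary input vector
        return (x[self.tuples] * self.weights).sum(axis=1)

    def fit(self, X, y):                     # one-shot training: set addressed cells
        for x, c in zip(X, y):
            self.rams[c, np.arange(len(self.tuples)), self._addresses(x)] = True

    def predict(self, X):                    # class with the most RAM hits wins
        scores = [[self.rams[c, np.arange(len(self.tuples)), self._addresses(x)].sum()
                   for c in range(self.rams.shape[0])] for x in X]
        return np.argmax(scores, axis=1)

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 64))
y = (X[:, :32].sum(axis=1) > X[:, 32:].sum(axis=1)).astype(int)   # toy labels
clf = NTupleClassifier(n_bits=64, n_tuples=50, tuple_size=8, n_classes=2)
clf.fit(X, y)
print((clf.predict(X) == y).mean())          # training accuracy on toy data
```

The single-pass training loop is what makes the method "ultra-fast" compared with optimisation-based neural networks.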
Abstract:
Miniature slow-light Surface Nanoscale Axial Photonics (SNAP) devices are reviewed. The fabrication precision of these devices is two orders of magnitude higher, and the transmission losses two orders of magnitude smaller, than for any previously reported technology for the fabrication of miniature photonic circuits. In the first part of the report, a SNAP bottle resonator with a radius variation of only a few nanometres is demonstrated as a record-small, slow-light, low-loss 2.6 ns dispersionless delay line for 100 ps pulses. Next, a record-small SNAP bottle resonator exhibiting 20 ns/nm dispersion compensation of 100 ps pulses is demonstrated. In the second part of the report, the prospects of SNAP technology for applications in telecommunications, optical signal processing, quantum computing and microfluidics are discussed. © 2014 IEEE.
Abstract:
Incumbent telecommunication lasers emitting at 1.5 µm are fabricated on InP substrates and consist of multiple strained quantum well layers of the ternary alloy InGaAs, with barriers of InGaAsP or InGaAlAs. These lasers exhibit very strong temperature dependence of the threshold current, which leads to a situation where external cooling equipment is required to stabilise their optical output power. This results in a significant increase in the energy bill associated with telecommunications, as well as a large increase in equipment budgets. If the exponential growth of end-user bandwidth demand associated with the internet continues, these inefficient lasers could see the telecommunications industry become the dominant consumer of world energy. For this reason there is strong interest in developing new, much more efficient telecommunication lasers. One avenue being investigated is the development of quantum dot lasers on InP. The confinement experienced in these low-dimensional structures strongly perturbs the density of states at the band edge and has been predicted to reduce the temperature dependence of the threshold current in these devices. The growth of these structures is difficult due to the large lattice mismatch between InP and InAs; however, quantum dots elongated in one dimension, known as quantum dashes, have recently been demonstrated. Chapter 4 of this thesis provides an experimental analysis of one of these quantum dash lasers emitting at 1.5 µm, along with a numerical investigation of the threshold dynamics present in this device. Another avenue being explored to increase the efficiency of telecommunication lasers is bandstructure engineering of GaAs-based materials to emit at 1.5 µm. The cause of the strong temperature sensitivity in InP-based quantum well structures has been shown to be CHSH Auger recombination. Calculations have shown, and experiments have verified, that the addition of bismuth to GaAs strongly reduces the bandgap and increases the spin-orbit splitting energy of the alloy GaAs(1-x)Bi(x). This leads to a bandstructure condition at x = 10% where not only is 1.5 µm emission achieved on GaAs-based material, but the bandstructure can also naturally suppress the costly CHSH Auger recombination which plagues InP-based quantum-well material. It has been predicted that telecommunication lasers based on this material system should operate without external cooling equipment and offer electrical and optical benefits over the incumbent lasers. Chapters 5, 6, and 7 provide a first analysis of several aspects of this material system relevant to the development of high-bismuth-content telecommunication lasers.
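The temperature sensitivity discussed here is conventionally summarised by the characteristic temperature T0 in the empirical relation Ith(T) ≈ I0 exp(T/T0); a larger T0 means weaker sensitivity. The sketch below fits T0 from threshold-current data; the numbers are illustrative, not measurements from the thesis.

```python
import numpy as np

# Hedged sketch: extract the characteristic temperature T0 from the empirical
# law Ith(T) ~ I0 * exp(T / T0). Data below are synthetic placeholders.
def characteristic_temperature(T, Ith):
    """Least-squares T0 from threshold currents Ith measured at temperatures T (K)."""
    slope, _ = np.polyfit(T, np.log(Ith), 1)     # ln Ith = T / T0 + const
    return 1.0 / slope

T = np.array([280.0, 300.0, 320.0, 340.0])
Ith_sensitive = 10.0 * np.exp(T / 50.0)    # strong sensitivity, T0 = 50 K (illustrative)
Ith_improved = 10.0 * np.exp(T / 150.0)    # reduced sensitivity, T0 = 150 K (illustrative)
print(characteristic_temperature(T, Ith_sensitive))   # ~50 K
print(characteristic_temperature(T, Ith_improved))    # ~150 K
```

Suppressing CHSH Auger recombination, whether via dot/dash confinement or bismide bandstructure engineering, is expected to show up experimentally as exactly this kind of increase in T0.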
Abstract:
Gate-tunable two-dimensional (2D) materials-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and the Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of “transparency” for graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is electron. Our findings reveal a general picture of operation modes and design rules for the 2D-materials-based QCs.
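A toy version of this capacitance-divider picture can be written down directly: graphene's zero-temperature quantum capacitance is C_Q = 2e²|E_F|/(π(ħv_F)²), and the share of the displacement field reaching the semiconductor falls as C_Q grows. The sketch below uses this simple series model with an assumed semiconductor capacitance; it is not the paper's multiscale first-principles/Poisson-Boltzmann treatment.

```python
import numpy as np

# Hedged sketch of a series capacitance-divider picture of field-effect
# "transparency" through graphene; all numbers are illustrative.
e, hbar, vF = 1.602e-19, 1.0546e-34, 1.0e6           # SI units; vF for graphene

def cq_graphene(E_F_eV):
    """Graphene quantum capacitance per unit area (F/m^2) at T = 0:
    C_Q = 2 e^2 |E_F| / (pi * (hbar * vF)^2)."""
    E_F = abs(E_F_eV) * e                             # eV -> J
    return 2 * e**2 * E_F / (np.pi * (hbar * vF)**2)

def field_transparency(E_F_eV, C_sc):
    """Toy divider: fraction of displacement field reaching the semiconductor.
    Larger C_Q (heavier doping) means stronger screening, i.e. more opaque."""
    C_Q = cq_graphene(E_F_eV)
    return C_sc / (C_sc + C_Q)

C_sc = 5e-3                          # assumed semiconductor capacitance, F/m^2
for ef in (0.05, 0.1, 0.3):          # Fermi level in eV
    print(ef, cq_graphene(ef), field_transparency(ef, C_sc))
```

At E_F = 0.1 eV this gives C_Q of roughly 2.3 µF/cm², the order of magnitude usually quoted for graphene, which is why a one-molecule-thick sheet can be substantially "transparent" to gating.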
Abstract:
Organisations today face strong pressure to adapt to a competitive world with declining profits and constant uncertainty in their cash flow. These circumstances force organisations into continuous improvement, seeking new ways to manage their processes and resources. For service organisations in the telecommunications sector, one of the most important competitive advantages to obtain is productivity, since their earnings depend directly on the number of activities each employee can perform. The challenge is to do more with less and with better quality. Achieving this requires effective human resource management, and this is where compensation systems play an important role. The objective of this work is to design and apply a variable compensation model for a professional services company in the telecommunications sector, thereby contributing to the study of performance management and human talent management in Colombia. The work documents the design and application of the variable compensation model in a telecommunications project in Colombia. The design drew on current trends in compensation programmes and performance management theories to produce an integral model that enables sustained long-term growth and motivates the organisation's most important resource, its human talent. The application also allowed the documentation of problems and successes in the implementation of such models.
Abstract:
Asymmetric organocatalysed reactions are among the most fascinating synthetic strategies that can be adopted to induce a desired chirality in a reaction product. Of all the possible practical applications of small organic molecules in catalytic reactions, amine-based catalysis has attracted a great deal of attention during the past two decades. The high interest in asymmetric aminocatalytic pathways is due to the huge variety of carbonyl compounds that can be functionalised through many different reactions of their corresponding chiral enamine or iminium ion, acting as activated nucleophile and electrophile, respectively. Starting from the employment of L-Proline, many useful substrates have been proposed to further enhance the catalytic performance of these reactions in terms of enantiomeric excess, yield, substrate conversion and turnover number. In particular, in the last decade the use of chiral and quasi-enantiomeric primary amine species has attracted considerable attention in the field. At the same time, many studies have been carried out to elucidate the mechanism through which these substrates induce chirality in the desired products. In this scenario, computational chemistry has played a crucial role thanks to the possibility of simulating any kind of reaction and the transition-state structures involved. In the present work, the transition-state geometries of the primary-amine-catalysed Michael addition of cyclohexanone to trans-β-nitrostyrene with different organic acid cocatalysts have been studied through different computational techniques, such as density functional theory quantum mechanical calculations and force-field-directed molecular simulations.
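The standard bridge from such computed transition-state energies to observable selectivity is Boltzmann statistics over the two competing diastereomeric transition states, giving ee = tanh(ΔΔG‡/(2RT)). A minimal sketch, with illustrative energy gaps rather than the thesis's computed barriers:

```python
import numpy as np

# Hedged sketch: enantiomeric excess from the free-energy gap between the
# two competing transition states, ee = tanh(ddG / (2 R T)).
R = 8.314462618e-3          # gas constant in kJ/(mol*K)

def ee_from_ddg(ddg_kj_mol, T=298.15):
    """ee implied by a ddG (kJ/mol) between diastereomeric TSs at temperature T."""
    return np.tanh(ddg_kj_mol / (2 * R * T))

for ddg in (2.0, 5.0, 10.0):                      # illustrative gaps, kJ/mol
    print(f"ddG = {ddg:>4} kJ/mol -> ee = {100 * ee_from_ddg(ddg):.1f}%")
```

This is why errors of only a few kJ/mol in computed barriers translate into large errors in predicted ee, and why the choice of functional and of the cocatalyst model matters so much in this kind of study.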
Abstract:
Quantum clock models are statistical mechanical spin models which may be regarded as a bridge between the one-dimensional quantum Ising model and the one-dimensional quantum XY model. This thesis aims to provide an exhaustive review of these models using both analytical and numerical techniques. We present some important duality transformations which allow us to recast clock models in different forms, involving for example parafermions and lattice gauge theories. The notion of topological order thus enters the game, opening new scenarios for possible applications such as topological quantum computing. The second part of this thesis is devoted to the numerical analysis of clock models. We explore their phase diagram under different setups, with and without chirality, starting with a transverse field and then adding a longitudinal field as well. The most important observables we take into account for diagnosing criticality are the energy gap, the magnetisation, the entanglement entropy and the correlation functions.
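As a concrete companion to the numerical part, here is a hedged sketch of exact diagonalisation for the simplest achiral Z_3 clock chain in a transverse field, reading off the low-lying gaps used to diagnose criticality; the couplings, chain length and open boundary conditions are illustrative choices, not the thesis's setups.

```python
import numpy as np
from functools import reduce

# Hedged sketch: exact diagonalization of the achiral Z_3 quantum clock chain
#   H = -J sum_i sigma_i^dag sigma_{i+1} - f sum_i tau_i + h.c.
# on a short open chain; parameters are illustrative.
n, L = 3, 6                                     # clock states, chain length
omega = np.exp(2j * np.pi / n)
sigma = np.diag(omega ** np.arange(n))          # clock operator
tau = np.roll(np.eye(n), 1, axis=0)             # shift operator, tau|k> = |k+1 mod n>

def site_op(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    ops = [np.eye(n)] * L
    ops[i] = op
    return reduce(np.kron, ops)

def hamiltonian(J=1.0, f=1.0):
    H = np.zeros((n ** L, n ** L), dtype=complex)
    for i in range(L - 1):                      # ferromagnetic clock coupling
        H -= J * site_op(sigma.conj().T, i) @ site_op(sigma, i + 1)
    for i in range(L):                          # transverse-field term
        H -= f * site_op(tau, i)
    return H + H.conj().T                       # add the hermitian-conjugate terms

# For f < J the lowest gaps are near-degeneracies of the three symmetry-broken
# ground states; a true gap opens in the disordered phase f > J.
for f in (0.5, 1.0, 1.5):
    E = np.linalg.eigvalsh(hamiltonian(J=1.0, f=f))
    print(f"f = {f}: lowest gaps {np.round(E[1:4] - E[0], 4)}")
```

The duality transformations reviewed in the thesis map exactly this Hamiltonian onto its disordered counterpart, which is why the self-dual point f = J is the natural candidate for criticality.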
Abstract:
Power consumption is still an issue today in wearable computing applications. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios: Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices, each representative of its form factor. The power consumption is measured using PowerTutor, an Android energy-profiler application with a logging option; its unknown parameters are adjusted against a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies across the wearable devices, meaning that other components, parameters or processes may impact power consumption; further study is needed to explain these variations. The paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. Finally, it gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
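A minimal sketch of a scenario-based energy model in the spirit described, with E = Σ over components of P·t per scenario, follows; the component power figures are assumed placeholders, not the paper's PowerTutor measurements.

```python
# Hedged sketch of a scenario-based energy model: total energy per scenario is
# the sum over components of average power times active time. Power figures
# below are illustrative placeholders, not measured values.
POWER_MW = {                      # average component power draw, milliwatts (assumed)
    "display": 400.0,
    "speaker": 150.0,
    "camera": 600.0,
    "wifi_transfer": 700.0,
    "cpu_idle": 30.0,
}

def scenario_energy_j(active_seconds):
    """Energy in joules for one scenario: {component: seconds active}."""
    return sum(POWER_MW[c] / 1000.0 * t for c, t in active_seconds.items())

pedometer = {"cpu_idle": 3600.0}                       # 1 h background step counting
outdoor = {"display": 300.0, "wifi_transfer": 60.0, "cpu_idle": 3600.0}
for name, scenario in [("pedometer", pedometer), ("outdoor", outdoor)]:
    print(f"{name}: {scenario_energy_j(scenario):.0f} J per hour of use")
```

Calibrating the per-component power table per device (e.g. against a USB power meter, as the paper does for PowerTutor) is what turns this template into a usable design tool.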
Abstract:
We show how the measurement-induced model of quantum computation proposed by Raussendorf and Briegel (2001, Phys. Rev. Lett., 86, 5188) can be adapted to a nonlinear optical interaction. This optical implementation requires a Kerr nonlinearity, a single-photon source, a single-photon detector and fast feed-forward. Although nondeterministic optical quantum information proposals such as that suggested by KLM (2001, Nature, 409, 46) do not require a Kerr nonlinearity, they do require complex reconfigurable optical networks. The proposal in this paper has the benefit of a single static optical layout with fixed device parameters, where the algorithm is defined by the final measurement procedure.
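The elementary measurement-based step underlying such schemes can be checked with plain linear algebra: entangle the input qubit with a |+> ancilla via CZ and measure the input in a rotated basis, leaving the ancilla in X^s H Rz(-alpha)|psi> for outcome s. The sketch below verifies this identity numerically; it models the abstract cluster-state primitive, not the Kerr-nonlinearity hardware of the paper.

```python
import numpy as np

# Hedged sketch of one cluster-state (MBQC) teleportation step: measure the
# input qubit in the basis |s_alpha> = (|0> + (-1)^s e^{i alpha}|1>)/sqrt(2)
# after a CZ with a |+> ancilla; the ancilla ends in X^s H Rz(-alpha)|psi>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def mbqc_step(psi, alpha, outcome):
    """One measurement-based teleportation step; returns the ancilla state."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state = CZ @ np.kron(psi, plus)                      # entangle with ancilla
    sign = 1 if outcome == 0 else -1
    ket = np.array([1.0, sign * np.exp(1j * alpha)]) / np.sqrt(2)   # |s_alpha>
    out = np.kron(ket.conj(), np.eye(2)) @ state         # project qubit 1
    return out / np.linalg.norm(out)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
alpha, s = 0.7, 1
out = mbqc_step(psi, alpha, outcome=s)
expected = np.linalg.matrix_power(X, s) @ H @ Rz(-alpha) @ psi
print(abs(np.vdot(expected, out)))                       # ~1.0 up to global phase
```

Chaining such steps with feed-forward of the outcomes s is what replaces circuit gates in the measurement-based model; in the paper's proposal, the Kerr nonlinearity supplies the entangling interaction while the final measurement procedure defines the algorithm.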