57 results for Measurement-based quantum computing
Abstract:
We propose a criterion for the validity of semiclassical gravity (SCG) which is based on the stability of the solutions of SCG with respect to quantum metric fluctuations. We pay special attention to the two-point quantum correlation functions for the metric perturbations, which contain both intrinsic and induced fluctuations. These fluctuations can be described by the Einstein-Langevin equation obtained in the framework of stochastic gravity. Specifically, the Einstein-Langevin equation yields stochastic correlation functions for the metric perturbations which agree, to leading order in the large N limit, with the quantum correlation functions of the theory of gravity interacting with N matter fields. The homogeneous solutions of the Einstein-Langevin equation are equivalent to the solutions of the perturbed semiclassical equation, which describe the evolution of the expectation value of the quantum metric perturbations. The information on the intrinsic fluctuations, which are connected to the initial fluctuations of the metric perturbations, can also be retrieved entirely from the homogeneous solutions. However, the induced metric fluctuations proportional to the noise kernel can only be obtained from the Einstein-Langevin equation (the inhomogeneous term). These equations exhibit runaway solutions with exponential instabilities. A detailed discussion about different methods to deal with these instabilities is given. We illustrate our criterion by showing explicitly that flat space is stable and a description based on SCG is a valid approximation in that case.
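For orientation, a schematic form of the Einstein-Langevin equation referred to above, following the standard stochastic-gravity literature; conventions and normalizations may differ from those used in the paper:

```latex
% Schematic Einstein-Langevin equation for metric perturbations h_{ab}
% around a background g_{ab} (conventions vary between references):
G_{ab}[g+h] = 8\pi G \left( \langle \hat{T}_{ab}[g+h] \rangle + \xi_{ab} \right),
% where \xi_{ab} is a Gaussian stochastic source with zero mean whose
% correlator is the noise kernel:
\langle \xi_{ab}(x) \rangle_s = 0, \qquad
\langle \xi_{ab}(x)\,\xi_{cd}(y) \rangle_s = N_{abcd}(x,y).
```

In this schematic form, the homogeneous solutions carry the intrinsic fluctuations mentioned in the abstract, while the inhomogeneous (noise) term sources the induced fluctuations.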
Abstract:
Photon migration in a turbid medium has been modeled in many different ways. The motivation for such modeling is based on technology that can be used to probe potentially diagnostic optical properties of biological tissue. Surprisingly, one of the more effective models is also one of the simplest: it is based on the statistical properties of a nearest-neighbor lattice random walk. Here we develop a theory allowing one to calculate the number of visits by a photon to a given depth, if it is eventually detected at an absorbing surface. This mimics continuous-wave (cw) measurements made on biological tissue and is directed towards characterizing the depth reached by photons injected at the surface. Our development of the theory uses a formalism based on the theory of the continuous-time random walk (CTRW). Formally exact results are given in the Fourier-Laplace domain, which, in turn, are used to generate approximations for parameters of physical interest.
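A minimal Monte Carlo sketch of the lattice picture, not the authors' analytic CTRW formalism; visits are counted here as time steps spent at the target depth, and the paper's exact counting convention may differ:

```python
import random

def visits_to_depth(target_depth, max_steps=100_000):
    """Simulate one nearest-neighbour walk on a cubic lattice injected at the
    surface; return the number of steps spent at `target_depth` if the walker
    is re-absorbed at the surface (i.e. detected), else None."""
    z = 1                          # photon injected just below the absorbing surface z = 0
    visits = 0
    for _ in range(max_steps):
        axis = random.randrange(3)          # move along x, y or z with equal probability
        step = random.choice((-1, 1))
        if axis == 2:                       # only the depth coordinate matters here
            z += step
        if z == 0:                          # absorbed at the surface: "detected"
            return visits
        if z == target_depth:
            visits += 1
    return None                             # not detected within max_steps

# Average visits to depth 5, over detected walks only
results = [visits_to_depth(5) for _ in range(2000)]
detected = [v for v in results if v is not None]
print(sum(detected) / len(detected))
```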
Abstract:
We present a comprehensive study of the low-temperature magnetic relaxation in random magnets. The first part of the paper contains a theoretical analysis of the expected features of the relaxation, based upon current theories of quantum tunneling of magnetization. Models of tunneling, dissipation, the crossover from the thermal to the quantum regime, and the effect of the barrier distribution on the relaxation rate are discussed. It is argued that relaxation-type experiments are ideally suited for the observation of magnetic tunneling, since they automatically provide the condition of very low barriers. The second part of the paper contains experimental results on transition-metal–rare-earth amorphous magnets. Structural and magnetic characterization of the materials is presented. The temperature and field dependence of the magnetic relaxation is studied. Our key observation is the nonthermal character of the relaxation below a few kelvin. The observed features are in agreement with theoretical suggestions on quantum tunneling of magnetization.
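For context, the textbook magnetic-viscosity description underlying the "nonthermal relaxation" observation; a generic sketch, not the authors' specific model:

```latex
% Logarithmic relaxation produced by a broad barrier distribution:
M(t) \simeq M_0 \left[ 1 - S(T)\, \ln(t/t_0) \right],
% with magnetic viscosity S(T) \propto T in the thermal (Arrhenius) regime,
% while S(T) \to \text{const} \neq 0 as T \to 0 if quantum tunnelling of
% magnetization dominates below the crossover temperature.
```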
Abstract:
Magnetic-relaxation measurements of a Tl-based high-Tc superconductor show temperature-independent flux creep below 6 K. The effect is analyzed in terms of the overdamped quantum diffusion of two-dimensional vortices. Good agreement between theory and experiment is found.
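Schematically, the signature being invoked is the standard crossover from thermally activated to quantum creep; a generic sketch, not the paper's detailed theory of two-dimensional vortex diffusion:

```latex
% Thermal vs. quantum escape rates (schematic):
\Gamma_{\mathrm{th}} \sim \Gamma_0\, e^{-U/k_B T}, \qquad
\Gamma_{\mathrm{q}} \sim \Gamma_0\, e^{-S_E/\hbar},
% so below a crossover temperature the escape rate, and hence the flux
% creep, becomes independent of T, consistent with the plateau below 6 K.
```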
Abstract:
The structural saturation and stability, the energy gap, and the density of states of a series of small, silicon-based clusters have been studied by means of PM3 and several ab initio (HF/6-31G* and 6-311++G**, CIS/6-31G*, and MP2/6-31G*) calculations. It is shown that, in order to maintain a stable nanometric and tetrahedral silicon crystallite and remove the gap states, a saturating atom or species such as H, F, Cl, OH, O, or N is necessary, and that both the cluster size and the surface species affect the energetic distribution of the density of states. This research suggests that the visible luminescence in silicon-based nanostructured materials arises essentially from the nanometric, crystalline silicon domains but is affected and protected by the surface species; this links most of the proposed mechanisms of luminescence for porous silicon, e.g., the quantum confinement effect due to the cluster size and the effect of Si-based surface complexes.
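As a hedged illustration of this kind of ab initio gap calculation, using PySCF rather than the codes used in the paper; the SiH4 geometry below uses a generic literature Si-H bond length (~1.48 Å), not a value taken from the paper:

```python
from pyscf import gto, scf

# Smallest hydrogen-saturated silicon cluster: tetrahedral SiH4.
# Coordinates place H at (d, d, d) etc. with d = 1.48/sqrt(3) Angstrom.
mol = gto.M(
    atom="""Si  0.0000  0.0000  0.0000
            H   0.8544  0.8544  0.8544
            H  -0.8544 -0.8544  0.8544
            H  -0.8544  0.8544 -0.8544
            H   0.8544 -0.8544 -0.8544""",
    basis="6-31g*",
)
mf = scf.RHF(mol).run()

# HOMO-LUMO gap from the orbital energies (hartree)
nocc = mol.nelectron // 2
gap = mf.mo_energy[nocc] - mf.mo_energy[nocc - 1]
print(f"HF/6-31G* HOMO-LUMO gap: {gap:.3f} Eh")
```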
Abstract:
Magnetization versus temperature in the temperature interval 2–200 K was measured for amorphous alloys of three different compositions: Fe81.5B14.5Si4, Fe40Ni38Mo4B18, and Co70Fe5Ni2Mo3B5Si15. The measurements were performed by means of a SQUID (superconducting quantum interference device) magnetometer. The aim was to extract information about the different mechanisms contributing to thermal demagnetization. A powerful data analysis technique based on successive minimization procedures has demonstrated that Stoner excitations of the strong ferromagnetic type play a significant role in the Fe-Ni alloy studied. The Fe-rich and Co-rich alloys do not show a measurable contribution from single-particle excitations.
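A minimal sketch of the kind of minimization-based fit described, assuming the standard Bloch T^(3/2) spin-wave term plus a gapped, Stoner-like single-particle term; the functional form and synthetic data are illustrative, not the authors' exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def m_of_t(T, M0, B, A, delta):
    """Spin-wave (Bloch T^{3/2}) term plus a gapped single-particle
    (Stoner-like) term; illustrative functional form only."""
    return M0 * (1 - B * T**1.5 - A * T**1.5 * np.exp(-delta / T))

# T in kelvin, M in arbitrary units (synthetic stand-in for SQUID data)
T = np.linspace(2, 200, 50)
M = m_of_t(T, 1.0, 3e-5, 5e-5, 150) + np.random.normal(0, 1e-4, T.size)

popt, pcov = curve_fit(m_of_t, T, M, p0=[1.0, 1e-5, 1e-5, 100])
print(dict(zip(["M0", "B", "A", "delta"], popt)))
```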
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the case q = 1/2. We show that, when the residual principle is taken as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution devised in this way contains a component that corresponds to the well-known regularized solution of Tikhonov (1977).
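For reference, the standard definitions involved, shown only schematically; the q = 1/2 correspondence itself is the paper's result and is not derived here:

```latex
% Tsallis (non-extensive) entropy with index q:
S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
% and the Tikhonov-regularized solution of an ill-conditioned linear
% problem A x = b with regularization parameter \lambda:
x_\lambda = \left( A^{\top} A + \lambda I \right)^{-1} A^{\top} b .
```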
Abstract:
This paper presents research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold: first, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones; second, to propose a data model and a mechanism to publish evaluation results, making them available to the educational community, who may use them as a quality measure for selecting learning material. Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, making a correct conversion difficult, if not almost impossible; and the formulae (whether images or text) have been written without taking into account the standards of mathematical writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be rather useful in other accessibility assessment scenarios.
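A purely hypothetical sketch of what a minimal evaluation-report record of the kind proposed might look like; the field names and example values are ours, not the paper's data model:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MathAccessibilityReport:
    """Hypothetical evaluation record for one math web page."""
    url: str
    uses_mathml: bool          # already accessible (MathML) or not
    formula_format: str        # "image", "text" or "mixed"
    ocr_convertible: bool      # did the OCR tool recover the formulae?
    notes: str = ""

report = MathAccessibilityReport(
    url="https://example.org/calculus/lesson1.html",
    uses_mathml=False,
    formula_format="image",
    ocr_convertible=False,
    notes="Interactive applet; OCR failed on nonstandard notation.",
)
print(json.dumps(asdict(report), indent=2))
```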
Abstract:
A Fundamentals of Computing Theory course involves different topics that are core to the Computer Science curricula and whose level of abstraction makes them difficult both to teach and to learn. Such difficulty stems from the complexity of the abstract notions involved and the required mathematical background. Surveys conducted among our students showed that many of them were applying some theoretical concepts mechanically rather than developing significant learning. This paper shows a number of didactic strategies that we introduced in the Fundamentals of Computing Theory curricula to cope with the above problem. The proposed strategies were based on a stronger use of technology and a constructivist approach. The final goal was to promote more significant learning of the course topics.
Abstract:
In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis that a linear relationship exists between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions on an ornamental tree and, subsequently, in a pear orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis, and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.
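A minimal sketch of the hypothesized linear relation being tested, with synthetic numbers standing in for real scan and leaf-area data:

```python
import numpy as np

# Synthetic stand-ins: LIDAR impact counts per tree and measured leaf area (m^2)
impacts = np.array([1200, 2300, 3100, 4050, 5200, 6100], dtype=float)
leaf_area = np.array([1.1, 2.0, 2.9, 3.7, 4.9, 5.6])

# Least-squares fit of the hypothesized linear model: leaf_area = a*impacts + b
a, b = np.polyfit(impacts, leaf_area, 1)
r = np.corrcoef(impacts, leaf_area)[0, 1]
print(f"slope={a:.2e} m^2/impact, intercept={b:.2f} m^2, r^2={r**2:.3f}")
```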
Abstract:
Purpose: Several well-known managerial accounting performance measurement models rely on causal assumptions. Whilst users of the models express satisfaction and link them with improved organizational performance, academic research on real-world applications shows few reliable statistical associations. This paper provides a discussion of the "problematic" of causality in a performance measurement setting. Design/methodology/approach: This is a conceptual study based on an analysis and synthesis of the literature from managerial accounting, organizational theory, strategic management and social scientific causal modelling. Findings: The analysis indicates that dynamic, complex and uncertain environments may challenge any reliance upon valid causal models. Due to cognitive limitations and judgmental biases, managers may fail to arrive at a correct cause-and-effect understanding of the value creation in their organizations. However, even lacking this validity, causal models can support strategic learning and act as organizational guides if they are able to mobilize managerial action. Research limitations/implications: Future research should highlight the characteristics necessary for the elaboration of convincing and appealing causal models, and the social process of their construction. Practical implications: Managers of organizations using causal models should be clear about the purposes of their particular models and their limitations. In particular, difficulties are observed in specifying detailed cause-and-effect relations and in their potential for communicating and directing attention. Managers should therefore construct their models to suit the particular purpose envisaged. Originality/value: This paper provides an interdisciplinary and holistic view of the issue of causality in managerial accounting models.
Abstract:
Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results: The approach is illustrated in a real case example in which the agreement between two instruments, a hand-held mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice by each device. A simulation study is implemented to evaluate and compare the accuracy of the approach with two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions: The proposed method is straightforward, since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, thus making it simple for any practitioner to implement our proposal to assess agreement.
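A compact sketch of the normal-theory idea: for paired differences D ~ N(mu, sigma^2), (D/sigma)^2 follows a noncentral chi-square distribution with 1 degree of freedom, so a TDI at proportion p can be computed directly. The paper's tolerance-interval inference is not reproduced here, and the data below are synthetic:

```python
import numpy as np
from scipy.stats import ncx2

def tdi_normal(d, p=0.90):
    """Total deviation index for paired differences under normality:
    the p-quantile of |D|, with D ~ N(mu, sigma^2), obtained from the
    noncentral chi-square law of (D/sigma)^2 (1 df, noncentrality (mu/sigma)^2)."""
    mu, sigma = np.mean(d), np.std(d, ddof=1)
    return sigma * np.sqrt(ncx2.ppf(p, df=1, nc=(mu / sigma) ** 2))

# Synthetic paired systolic blood pressure differences (mmHg)
rng = np.random.default_rng(1)
d = rng.normal(loc=1.0, scale=6.0, size=384)
print(f"TDI(0.90) = {tdi_normal(d):.1f} mmHg")
```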
Abstract:
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation, and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high resolution of some co-propagating setups, we show that it can be used to perform different single-molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles) down to the limit where the beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need to measure the force versus molecular extension curve. This method is of general interest for dual-trap optical tweezers setups and can be extended to setups which do not directly measure forces.
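The single-trap version of the fluctuation argument is a one-line equipartition identity; a minimal sketch only, since the paper's dual-trap case combines trap and tether stiffnesses:

```latex
% For a bead in a harmonic trap of stiffness k, equipartition gives
\langle \delta x^2 \rangle = \frac{k_B T}{k},
% and since the measured force is f = -k\,x, the equilibrium force
% fluctuations directly encode the stiffness:
\langle \delta f^2 \rangle = k^2 \langle \delta x^2 \rangle = k\, k_B T
\quad\Longrightarrow\quad
k = \frac{\langle \delta f^2 \rangle}{k_B T}.
```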
Abstract:
Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of continuous-flow microwave heating are equivalent to those of traditional heat transfer methods. In our study, the effects of heating liquid foods by conventional and continuous-flow microwave heating were examined. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous-flow microwave heating and conventional heating methods have the same effects on liquid foods and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count; for orange juice, the vitamin C content, in addition to assessing the taste of the product by sensory analysis. The majority of the results indicate that the circulating-coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results from the analysis of the milk samples show clear differences between the heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave-treated samples differed not only from the untreated control but also from the traditionally heat-treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that, besides thermal effects, microwave-based food treatment can alter product properties in other ways as well.
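The "visually undetectable but analytically evident" contrast can be quantified with the standard CIELAB colour difference, where a CIE76 dE*ab below roughly 2-3 is commonly taken as imperceptible; the sample triples below are made up, not measured values from the paper:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

control   = (78.2, -1.4, 6.8)   # untreated sample (made-up L*, a*, b* values)
microwave = (77.5, -1.1, 7.9)   # microwave-treated sample (made-up values)

de = delta_e_ab(control, microwave)
print(f"dE*ab = {de:.2f} -> {'likely imperceptible' if de < 2.3 else 'perceptible'}")
```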