995 results for Reduced-basis approximation
Abstract:
We show that optimizing a quantum gate for an open quantum system requires the time evolution of only three states, irrespective of the dimension of the Hilbert space. This represents a significant reduction in computational resources compared to the complete basis of Liouville space that is commonly believed necessary for this task. The reduction is based on two observations: the target is not a general dynamical map but a unitary operation, and the time evolution of two properly chosen states is sufficient to distinguish any two unitaries. We illustrate gate optimization employing a reduced set of states for a controlled phasegate with trapped atoms as qubit carriers and an iSWAP gate with superconducting qubits.
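To illustrate the second observation numerically, here is a minimal NumPy sketch; the particular pair of states (a density matrix with a non-degenerate spectrum plus the projector onto the equal superposition of all basis states) is an assumption made here for illustration, and all names are our own, not the paper's code:

```python
import numpy as np

def reduced_states(d):
    """Two states that together distinguish any two unitaries on a
    d-dimensional Hilbert space: a density matrix with a non-degenerate
    spectrum plus the projector onto the equal superposition of all
    basis states (construction assumed here for illustration)."""
    p = np.arange(d, 0, -1, dtype=float)
    rho1 = np.diag(p / p.sum())                    # non-degenerate spectrum
    v = np.ones((d, 1)) / np.sqrt(d)
    rho2 = v @ v.conj().T                          # "totally rotated" projector
    return rho1, rho2

def distance(U, V, states):
    """Largest Hilbert-Schmidt distance between the actions of U and V
    on the reduced set of states."""
    return max(np.linalg.norm(U @ r @ U.conj().T - V @ r @ V.conj().T)
               for r in states)

d = 4
rng = np.random.default_rng(0)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)                 # a random unitary
V = np.exp(1j * 0.3) * U               # the same gate up to a global phase
W, _ = np.linalg.qr(A + 0.1)           # a genuinely different unitary

states = reduced_states(d)
print(distance(U, V, states))          # ~0: U and V are physically identical
print(distance(U, W, states))          # > 0: the two states detect the difference
```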
Abstract:
Laser-induced plasma spectroscopy (LIPS) is a spectrochemical elemental analysis technique for determining the atomic composition of an arbitrary sample. No special sample preparation is required, and the analysis can be performed under atmospheric conditions on samples in any state of aggregation. Femtosecond laser pulses offer the advantages of precise ablation with little thermal damage and high reproducibility. This makes fs-LIPS a promising tool for the microanalysis of technical samples, in particular for investigating their fatigue behavior, where it is of interest how initiated microcracks propagate within the material-specific structure. The aim of the present work was therefore to develop a fast and easy-to-use 3D raster-mapping method for studying crack propagation in TiAl, a new class of alloys. To this end, fs-LIPS (30 fs, 785 nm) was combined with a modified microscope setup (objective: 50x/NA 0.5) that enables precise, automated sample positioning. Spectrochemical sensitivity and spatial resolution were investigated in energy-dependent single- and multi-pulse experiments. In TiAl, 10 laser pulses per position with a pulse energy of 100 nJ each gave the best compromise between a high S/N ratio of 10:1 and small crater structures with inner diameters of 1.4 µm. The lateral resolution decisive for the method, i.e. the minimum crater spacing at constant LIPS signal, is 2 µm with the above parameters, the highest resolution reported to date for a far-field fs-LIPS-based micro/mapping analysis. Fs-LIPS scans of test structures and of microcracks in TiAl demonstrate a spectrochemical sensitivity of 3%. Depth scans with the same parameters achieve an axial resolution of 1 µm.
To increase the spectrochemical sensitivity of fs-LIPS and to gain a better understanding of the physical processes during laser ablation, pump-probe experiments were carried out to determine to what extent fs double pulses influence the laser-induced ablation and the plasma emission. For this purpose, pulse separations from 100 fs to 2 ns were realized in a Mach-Zehnder interferometer, the total energy and intensity ratio of the two pulses were varied, and the influence of the material parameters was examined. Both the LIPS signal and the crater structures depend on the delay time. This dependence was divided into four regimes and assigned to the physical processes during laser ablation: thermalization of the electron system for pulse separations below 1 ps, melting processes between 1 and 10 ps, the onset of ablation after several tens of ps, and the expansion of the plasma plume after more than 100 ps. The LIPS signal is efficiently enhanced and reaches its maximum at 800 ps. The crater diameters change little as a function of pulse separation compared to the depth. The total ablation rate varies by at most 50%, while the LIPS signal increases severalfold: typically threefold for Ti and TiAl and tenfold for Al. The measured transients show high reproducibility, but hardly any energy- or material-specific dependence.
With these results, a targeted optimization of the DP-LIPS parameters was carried out on Al: at a pulse separation of 800 ps and a total energy of 65 nJ (four times the ablation threshold), a 40-fold signal enhancement with lower noise was achieved. The crater diameters increased by 44% to (650±150) nm and the crater depth doubled to (100±15) nm. It was thus possible to increase the spectrochemical sensitivity of fs-LIPS while maintaining the high spatial resolution.
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as “qubits” allows entanglement and quantum superposition to be used to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence – the loss of quantum properties – due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phasegate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing the optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension of Liouville space scales quadratically with that of Hilbert space. However, it can be shown that for a unitary target the optimization requires the propagation of at most three states, instead of a full basis of Liouville space. For both trapped Rydberg atoms and superconducting qubits, the successful optimization of quantum gates is demonstrated at a numerical cost significantly lower than previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
Abstract:
Non-resonant light interacting with diatomics via the polarizability anisotropy couples different rotational states and may lead to strong hybridization of the motion. The modification of shape resonances and low-energy scattering states due to this interaction can be fully captured by an asymptotic model based on the long-range properties of the scattering (Crubellier et al 2015 New J. Phys. 17 045020). Remarkably, the properties of the field-dressed shape resonances in this asymptotic multi-channel description are found to be approximately linear in the field intensity up to fairly large intensities. This suggests that a perturbative single-channel approach should be sufficient to study the control of such resonances by the non-resonant field. The multi-channel results furthermore indicate that the dependence on field intensity exhibits, at least approximately, universal characteristics. Here we combine the nodal-line technique for solving the asymptotic Schrödinger equation with perturbation theory. Comparing our single-channel results to those obtained with the full interaction potential, we find that nodal lines depending only on the field-free scattering length of the diatom yield an approximate but universal description of the field-dressed molecule, confirming the universal behavior.
Abstract:
Freehand sketching is both a natural and crucial part of design, yet it is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects using feature point detection and approximation. We demonstrate how multiple sources of information can be combined for feature detection in strokes and apply this technique using two approaches to signal processing, one using simple average-based thresholding and a second using scale space.
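As a toy illustration of average-based thresholding for feature point detection, the following sketch (our own simplification, not the authors' implementation) flags stroke samples whose turning angle exceeds a multiple of the mean turning angle:

```python
import numpy as np

def feature_points(stroke, k=1.0):
    """Mark stroke samples whose turning angle exceeds k times the mean
    turning angle as candidate feature points (corners).
    `stroke` is an (n, 2) array of digitized pen positions."""
    d = np.diff(stroke, axis=0)
    heading = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    turn = np.abs(np.diff(heading))            # turning angle at interior samples
    threshold = k * turn.mean()                # simple average-based threshold
    return np.where(turn > threshold)[0] + 1   # indices into `stroke`

# Example: an L-shaped stroke with a corner near index 50.
stroke = np.vstack([np.column_stack([np.linspace(0, 1, 50), np.zeros(50)]),
                    np.column_stack([np.ones(50), np.linspace(0, 1, 50)])])
print(feature_points(stroke))
```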
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines the centers, weights, and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
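A rough modern re-creation of this comparison, using scikit-learn's small digits set as a stand-in for the US Postal Service data and a logistic output layer as a surrogate for error backpropagation on the RBF weights (all parameter values are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data for the US Postal Service digits.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
gamma = 1.0 / (Xtr.shape[1] * Xtr.var())       # width of the Gaussian kernel

# Classical RBF network: centers from k-means, output weights from a linear
# fit (used here as a stand-in for error backpropagation on the output layer).
centers = KMeans(n_clusters=50, n_init=10, random_state=0).fit(Xtr).cluster_centers_
rbf_net = LogisticRegression(max_iter=5000).fit(rbf_kernel(Xtr, centers, gamma=gamma), ytr)
print("k-means RBF net :", rbf_net.score(rbf_kernel(Xte, centers, gamma=gamma), yte))

# SV machine with Gaussian kernel: centers (the support vectors), weights and
# threshold are all determined by the SV algorithm itself.
svm = SVC(kernel="rbf", gamma=gamma).fit(Xtr, ytr)
print("SVM (Gaussian)  :", svm.score(Xte, yte))
```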
Abstract:
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models and, hence, the segmentation of the data into classes consisting of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for the determination of the optimal number of components.
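A minimal sketch of the alternating scheme this suggests, with straight lines as the locally smooth models and a hard (rather than normalized) assignment of points to models; the function and parameter names are our own, not the paper's:

```python
import numpy as np

def piecewise_fit(x, y, n_models=2, iters=20, seed=0):
    """Alternate between (a) fitting one locally smooth model (here: a line)
    per class and (b) re-segmenting the data into the classes best
    approximated by each model."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_models, size=len(x))
    coefs = [np.array([0.0, y.mean()])] * n_models
    for _ in range(iters):
        for m in range(n_models):
            mask = labels == m
            if mask.sum() >= 2:                    # keep old fit if a class empties
                coefs[m] = np.polyfit(x[mask], y[mask], 1)
        resid = np.stack([np.abs(np.polyval(c, x) - y) for c in coefs])
        labels = resid.argmin(axis=0)              # hard segmentation step
    return coefs, labels

# Toy piecewise-linear data with a breakpoint at x = 0.5.
x = np.linspace(0, 1, 200)
y = np.where(x < 0.5, 2 * x, 3 - 4 * x) + 0.05 * np.random.default_rng(1).normal(size=x.size)
coefs, labels = piecewise_fit(x, y)
```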
Abstract:
To recognize a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. Developments in computer vision suggest that it may be possible to counter the influence of these factors, by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Daily life situations, however, typically require categorization, rather than recognition, of objects. Due to the open-ended character both of natural kinds and of artificial categories, categorization cannot rely on interpolation between stored examples. Nonetheless, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes appears to be computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
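A minimal sketch of a prototype-based categorizer in this spirit; Euclidean distance stands in for whatever similarity measure biological vision actually uses, and the interface is hypothetical:

```python
import numpy as np

def build_prototypes(X, y):
    """One prototype per category: the mean of its representative members."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def categorize(x, classes, prototypes):
    """Assign a new instance to the category whose prototype is most similar
    (similarity here is negative Euclidean distance)."""
    return classes[np.linalg.norm(prototypes - x, axis=1).argmin()]

# Example with two toy categories in a 2-D "shape space".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (20, 2)), rng.normal([3, 3], 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
classes, protos = build_prototypes(X, y)
print(categorize(np.array([2.5, 2.8]), classes, protos))   # -> 1
```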
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines technique (Vapnik, 1982), and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.
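The following toy sketch (our own construction, not the paper's code) illustrates the two representations on a synthetic signal class: a dense regularization-network fit using the empirical correlation function as kernel, and its sparsification via epsilon-insensitive SV regression with the same kernel:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)                               # sample locations
ensemble = np.sin(2 * np.pi * np.outer(rng.uniform(1, 3, 200), t))

# Empirical correlation function of the signal class, used as kernel K(t_i, t_j).
K = ensemble.T @ ensemble / len(ensemble)

# One noisy signal from the class, observed on the same grid.
y = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)

# Dense representation: regularization network with the correlation kernel.
dense = KernelRidge(alpha=1e-2, kernel="precomputed").fit(K, y)

# Sparse representation: epsilon-insensitive SV regression with the same kernel;
# only the support vectors carry nonzero coefficients.
sparse = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(K, y)

print("dense fit error :", np.linalg.norm(dense.predict(K) - y))
print("sparse fit error:", np.linalg.norm(sparse.predict(K) - y),
      "| support vectors:", len(sparse.support_), "of", len(t))
```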
Abstract:
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare it to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account higher-order structure in images.
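A small sketch contrasting the two kinds of representation on synthetic one-dimensional "signals"; Lasso is used here as a convenient stand-in for Basis Pursuit De-Noising, and all data and parameters are made up for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 128)
signals = np.array([np.exp(-((t - c) ** 2) / (2 * w ** 2))
                    for c, w in zip(rng.uniform(0.2, 0.8, 300),
                                    rng.uniform(0.02, 0.2, 300))])
target = signals[0] + 0.05 * rng.normal(size=t.size)

# PCA reconstruction from a few principal components of the class.
pca = PCA(n_components=10).fit(signals)
recon_pca = pca.inverse_transform(pca.transform(target[None]))[0]

# Sparse multiscale approximation: L1-penalized fit over a dictionary of
# Gaussians at several scales (Lasso as a stand-in for Basis Pursuit De-Noising).
dictionary = np.array([np.exp(-((t - c) ** 2) / (2 * w ** 2))
                       for w in (0.01, 0.05, 0.2) for c in t]).T
lasso = Lasso(alpha=1e-3, max_iter=50000).fit(dictionary, target)
recon_bpdn = dictionary @ lasso.coef_ + lasso.intercept_

print("PCA error   :", np.linalg.norm(target - recon_pca))
print("sparse error:", np.linalg.norm(target - recon_bpdn),
      "| nonzero atoms:", np.count_nonzero(lasso.coef_), "of", dictionary.shape[1])
```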
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
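For reference, the quadratic program in question is the standard soft-margin SVM dual (written here in our own notation); the density of the kernel matrix $K(x_i,x_j)$ is what makes the memory requirement grow with the square of the number of data points $\ell$:

$$\max_{\alpha}\;\sum_{i=1}^{\ell}\alpha_i-\frac{1}{2}\sum_{i=1}^{\ell}\sum_{j=1}^{\ell}\alpha_i\alpha_j\,y_i y_j\,K(x_i,x_j)\qquad\text{s.t.}\quad 0\le\alpha_i\le C,\;\;\sum_{i=1}^{\ell}y_i\alpha_i=0.$$

In a decomposition scheme of the kind described, only the variables in a small working set are updated at each iteration while the remaining $\alpha_i$ are held fixed, so that only the corresponding columns of the kernel matrix need to be evaluated and kept in memory.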
Abstract:
In this paper we consider the problem of approximating a function belonging to some function space Φ by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991), we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O(1/n) in any number of dimensions. The apparent avoidance of the "curse of dimensionality" is due to the fact that these function spaces are more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L_2 norm and in the L_∞ norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate.
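A rough numerical sketch of such an iterative (relaxed greedy) procedure, with Gaussian translates as the functions G and a crude line search over the mixing weight; this is an illustration under our own simplifying assumptions, not the construction from the paper:

```python
import numpy as np

def greedy_gaussian_fit(x, f, centers, width=0.1, n_terms=20):
    """At each step, add one translate G(x - c) of a fixed Gaussian and take a
    convex combination with the current approximant, choosing the center and
    the mixing weight that most reduce the L2 error."""
    G = lambda c: np.exp(-((x - c) ** 2) / (2 * width ** 2))
    fn = np.zeros_like(f)
    for _ in range(n_terms):
        best_err, best_fn = np.inf, fn
        for c in centers:
            g = G(c)
            for lam in np.linspace(0.0, 1.0, 51):      # crude line search
                cand = (1 - lam) * fn + lam * g
                err = np.mean((f - cand) ** 2)
                if err < best_err:
                    best_err, best_fn = err, cand
        fn = best_fn
    return fn

x = np.linspace(-1, 1, 400)
f = np.clip(1 - 2 * np.abs(x), 0, None)                # a "tent" target
approx = greedy_gaussian_fit(x, f, centers=np.linspace(-1, 1, 41))
print("L2 error:", np.sqrt(np.mean((f - approx) ** 2)))
```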
Abstract:
The text attempts a first approach to the contemporary debate between realists and anti-realists about the empirical world, focusing on the positions of Putnam and Nagel. Its main aim is to understand the motivations behind these positions and the current structure of the debate, and to establish the characteristics that any satisfactory position should have.
Abstract:
In this article, the results of a modified SERVQUAL questionnaire (Parasuraman et al., 1991) are reported. The modifications consisted in substituting questionnaire items particularly suited to a specific service (banking) and context (county of Girona, Spain) for the original rather general and abstract items. These modifications led to more interpretable factors which accounted for a higher percentage of item variance. The data were submitted to various structural equation models, which made it possible to conclude that the questionnaire contains items of high measurement quality with respect to five identified dimensions of service quality, which differ from those specified by Parasuraman et al. and are specific to the banking service. The two dimensions relating to the behaviour of employees have the greatest predictive power on overall quality and satisfaction ratings, which enables managers to use a low-cost reduced version of the questionnaire to monitor quality on a regular basis. It was also found that satisfaction and overall quality were perfectly correlated, thus showing that customers do not perceive these concepts as distinct.
Abstract:
This paper describes the basis of citation auctions as a new approach to selecting scientific papers for publication. Our main idea is to use an auction for selecting papers for publication through bids that, differently from the state of the art, consist of the number of citations that a scientist expects to receive if the paper is published. Hence, the citation auction is the selection process itself, and no reviewers are involved. The benefits of the proposed approach are two-fold. First, the cost of refereeing will be either totally eliminated or significantly reduced, because the citation-auction process does not require prior understanding of the paper's content to judge the quality of its contribution. Additionally, the method will not prejudge the content of the paper, so it will increase the openness of publications to new ideas. Second, scientists will be much more committed to the quality of their papers, paying close attention to distributing and explaining their papers in detail to maximize the number of citations they receive. Sample analyses of the number of citations collected by papers published between 1999 and 2004 in one journal, and between 2003 and 2005 in a series of conferences (in a totally different discipline), obtained via Google Scholar, are provided. Finally, a simple simulation of an auction is given to outline the behaviour of the citation auction approach.