996 results for Optimal Partitioning
Abstract:
We show that optimizing a quantum gate for an open quantum system requires the time evolution of only three states, irrespective of the dimension of the Hilbert space. This represents a significant reduction in computational resources compared to the complete basis of Liouville space that is commonly believed necessary for this task. The reduction is based on two observations: the target is not a general dynamical map but a unitary operation, and the time evolution of two properly chosen states is sufficient to distinguish any two unitaries. We illustrate gate optimization employing a reduced set of states for a controlled phase gate with trapped atoms as qubit carriers and an iSWAP gate with superconducting qubits.
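As a rough numerical illustration of the second observation, the sketch below evolves two fixed states under two unitaries and compares the outputs; the particular states and distance measure are illustrative choices, not necessarily those of the paper. If both output pairs coincide, the unitaries agree up to a global phase.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

def random_unitary(n):
    # QR of a complex Ginibre matrix gives a Haar-random unitary
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    ph = np.diag(r) / np.abs(np.diag(r))
    return q * ph  # rescale columns to fix the phase convention

# state 1: density matrix with a non-degenerate spectrum
p = np.arange(1.0, d + 1)
rho1 = np.diag(p / p.sum())
# state 2: projector onto the uniform ("totally rotated") superposition
s = np.ones(d) / np.sqrt(d)
rho2 = np.outer(s, s)

def dist(U, V, rho):
    # Frobenius distance between the state evolved under U and under V
    return np.linalg.norm(U @ rho @ U.conj().T - V @ rho @ V.conj().T)

U, V = random_unitary(d), random_unitary(d)
print(dist(U, V, rho1), dist(U, V, rho2))            # generically nonzero
print(dist(U, 1j * U, rho1), dist(U, 1j * U, rho2))  # ~0: same gate up to phase
```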
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal for reliably and accurately controlling complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis, an algebraic framework is presented that determines the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states to a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits of certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices whose basic information carrier is the qubit; it also extends to systems whose fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings lead to novel optimisation functionals that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases, specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping; for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
Abstract:
We investigate the properties of feedforward neural networks trained with Hebbian learning algorithms. A new unsupervised algorithm is proposed which produces statistically uncorrelated outputs. The algorithm causes the weights of the network to converge to the eigenvectors of the input correlation matrix with the largest eigenvalues. The algorithm is closely related to the technique of Self-supervised Backpropagation, as well as to other algorithms for unsupervised learning. Applications of the algorithm to texture processing, image coding, and stereo depth edge detection are given. We show that the algorithm can lead to the development of filters qualitatively similar to those found in primate visual cortex.
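For intuition, here is a minimal sketch of a Sanger-style generalized Hebbian update with exactly this convergence property; the paper's own algorithm may differ in detail, and all names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, eta, steps = 8, 3, 0.01, 20000

# synthetic zero-mean inputs with a known correlation structure
A = rng.normal(size=(d, d))
cov = A @ A.T / d
L = np.linalg.cholesky(cov)

W = rng.normal(scale=0.1, size=(k, d))   # k linear output units
for _ in range(steps):
    x = L @ rng.normal(size=d)           # sample an input
    y = W @ x                            # feedforward outputs
    # Hebbian term minus a Gram-Schmidt-like decorrelation term
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# rows of W should align (up to sign) with the top-k eigenvectors of cov
eigvals, eigvecs = np.linalg.eigh(cov)
print(np.round(np.abs(W @ eigvecs[:, ::-1][:, :k]), 2))  # ~ identity matrix
```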
Abstract:
Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, using approaches from machine learning, physics and graph theory, methods for the identification and analysis of such subsystems were developed. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and this approach was shown to score significantly higher than conventional methods when benchmarked against existing databases. Moreover, a mathematical treatment was developed for simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined to analyze their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a nonintuitive fashion, and the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
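For context, NMF factorizations like the one mentioned here are commonly computed with Lee-Seung multiplicative updates; the minimal sketch below (assuming a nonnegative gene-by-condition matrix V and the standard Frobenius objective, not the thesis's exact pipeline) shows the idea.

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor V (m x n, nonnegative) as W @ H with W (m x k), H (k x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

# toy usage: rows ~ genes, columns ~ conditions; columns of W ~ candidate subsystems
V = np.abs(np.random.default_rng(1).normal(size=(100, 20)))
W, H = nmf(V, k=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```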
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
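As a concrete, hedged instance of such an adaptive strategy for case 2, the sketch below repeatedly samples the midpoint of the interval with the largest worst-case uncertainty under a derivative bound; the uncertainty formula and test function are illustrative, not the paper's exact scheme.

```python
import math

def active_sample(f, a, b, lip, n):
    """Adaptively sample f on [a, b] assuming |f'| <= lip."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n - 2):
        # worst-case midpoint uncertainty of each interval: (L*h - |dy|) / 2
        scores = [(lip * (xs[i + 1] - xs[i]) - abs(ys[i + 1] - ys[i])) / 2
                  for i in range(len(xs) - 1)]
        i = max(range(len(scores)), key=scores.__getitem__)
        mid = (xs[i] + xs[i + 1]) / 2
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return xs, ys

# toy usage: samples concentrate where the Lipschitz envelope is loosest
xs, ys = active_sample(lambda x: math.tanh(5 * x), -1.0, 1.0, lip=5.0, n=12)
print([round(x, 3) for x in xs])
```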
Optimal Methodology for Synchronized Scheduling of Parallel Station Assembly with Air Transportation
Abstract:
We present an optimal methodology for the synchronized scheduling of production assembly with air transportation to achieve accurate delivery at minimized cost in a consumer electronics supply chain (CESC). The problem was motivated by a major PC manufacturer in the consumer electronics industry, which must schedule deliveries to meet customer needs in different parts of South East Asia. The overall problem is decomposed into two sub-problems: an air transportation allocation problem and an assembly scheduling problem. The air transportation allocation problem is formulated as a linear programming problem with earliness and tardiness penalties for job orders. In the assembly scheduling problem, the job orders must be sequenced on the assembly stations so as to minimize their waiting times before they are shipped by flights to their destinations. Hence the second sub-problem is modelled as a scheduling problem with earliness penalties, where the earliness penalties are assumed to be independent of the job orders.
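To make the first sub-problem concrete, here is a hedged toy formulation as a transportation-style LP with per-unit earliness/tardiness costs; the data, the penalty weights alpha/beta, and the use of scipy are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

q = [10.0, 6.0]            # job-order quantities
due = [12.0, 15.0]         # due dates at destination
arr = [10.0, 14.0, 18.0]   # flight arrival times
cap = [8.0, 8.0, 8.0]      # flight capacities
alpha, beta = 1.0, 3.0     # per-unit earliness / tardiness penalties
J, F = len(q), len(arr)

# cost of shipping one unit of order j on flight f
c = [alpha * max(due[j] - arr[f], 0) + beta * max(arr[f] - due[j], 0)
     for j in range(J) for f in range(F)]

A_eq = np.zeros((J, J * F))          # each order fully allocated
for j in range(J):
    A_eq[j, j * F:(j + 1) * F] = 1
A_ub = np.zeros((F, J * F))          # flight capacities respected
for f in range(F):
    A_ub[f, f::F] = 1

res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=q,
              bounds=(0, None), method="highs")
print(res.x.reshape(J, F))           # units of each order per flight
```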
Abstract:
Recent developments in optical communications have allowed simpler optical devices to improve network resource utilization. Accordingly, we propose adding a lambda-monitoring device to a wavelength-routing switch (WRS), allowing better performance when traffic is routed and groomed. This device may allow a WRS to aggregate traffic over optical routes without incurring optical-electrical-optical (OEO) conversion for the existing traffic. In other words, optical routes can be used partially to route demands, creating a sort of "lighttours". In this paper, we compare the number of OEO conversions needed to route a complete given traffic matrix using either lighttours or lightpaths.
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e., the number of labels that can be used) as a means of simplifying the management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS, the Merging Problem, cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that without the tree-branch selection problem, more labels can be saved.
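To illustrate why merging shrinks the label space (a toy count only, not the letter's Full Label Merging algorithm): without merging, each LSP consumes its own outgoing label at every transit router, while with MP2P merging all LSPs toward the same egress can share one. The paths below are hypothetical.

```python
from collections import defaultdict

# toy LSPs given as (path, egress)
lsps = [(["A", "B", "D"], "D"),
        (["C", "B", "D"], "D"),
        (["A", "B", "E"], "E"),
        (["C", "B", "E"], "E")]

per_lsp = defaultdict(int)    # one outgoing label per LSP per router
merged = defaultdict(set)     # one label per egress per router (MP2P)
for path, egress in lsps:
    for router in path[:-1]:
        per_lsp[router] += 1
        merged[router].add(egress)

print({r: per_lsp[r] for r in sorted(per_lsp)})        # {'A': 2, 'B': 4, 'C': 2}
print({r: len(s) for r, s in sorted(merged.items())})  # {'A': 2, 'B': 2, 'C': 2}
```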
Abstract:
Obesity is a global health problem, and bariatric surgery is the treatment with the best demonstrated results. Gastric bypass (BGYR) is the most widely used method, combining restriction and malabsorption; however, restrictive procedures have recently become popular. Gastro-gastroplasty (GG) produces reversible gastric restriction by means of a gastric pouch with a gastro-gastric anastomosis, and we proposed its evaluation. Methods: A retrospective, non-randomized study that reviewed the records of patients who underwent laparoscopic GG and BGYR between February 2008 and April 2011. Results: 289 patients were identified (180 GG and 109 BGYR), of whom 138 met the inclusion criteria: 77 (55.8%) GG and 61 (44.2%) BGYR; 18 (13%) men and 120 (87%) women. For GG, the median initial weight was 97.15 (± 17.3) kg, initial BMI 39.35 (± 3.38) kg/m2, and excess weight 37.1 (± 11.9). The median BMI at 1, 6 and 12 months was 34.8 (± 3.58), 30.81 (± 3.81) and 29.58 (± 4.25) kg/m2, respectively. The median percentage of excess weight lost (%PEP) at 1, 6 and 12 months was 30.9 (± 14.2)%, 61.88 (± 18.27)% and 68.4 (± 19.64)%, respectively. For BGYR, the median initial weight was 108.1 (± 25.4) kg, initial BMI 44.4 (± 8.1) kg/m2, and excess weight 48.4 (± 15.2)%. The median BMI at 1, 6 and 12 months was 39 (± 7.5), 33.31 (± 4.9) and 30.9 (± 4.8) kg/m2, respectively. The median %PEP at 1, 6 and 12 months was 25.9 (± 12.9)%, 61.87 (± 18.62)% and 71.41 (± 21.09)%, respectively. Follow-up was one year. Conclusions: Gastro-gastroplasty is proposed as a reversible restrictive technique with optimal weight-loss results and a surgical alternative for patients with obesity. Longer-term studies are needed to demonstrate that these changes are maintained over time.
Abstract:
In this paper I consider the role of education policies in the redistribution of income when individuals differ in two aspects: ability and inherited wealth. I discuss the extent to which the rules that emerge in unidimensional settings apply also in the bidimensional setting considered in this paper. The main conclusion is that, subject to some qualifications, the same type of rules that determine optimal education policies when only ability heterogeneity is considered apply to the case where both parameters of heterogeneity are considered. The qualifications pertain to the implementation of the optimal allocation of resources to education and not to the way the optimal first- and second-best allocations differ.
Abstract:
Abstract in Spanish; based on the publication's own abstract.
Abstract:
The total energy of a molecule, expressed in terms of "fuzzy atoms" as a sum of one- and two-atomic energy components, is described. The division of three-dimensional physical space into atomic regions exhibits a continuous transition from one region to another. With proper definitions, the energy components are on a chemical energy scale. Becke's integration scheme and weight function determine the realization of the method, which permits effective numerical integration.
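In standard fuzzy-atom formulations (a general summary for orientation; the notation is generic, not quoted from this abstract), each atom A carries a weight function and the total energy splits as follows:

```latex
% fuzzy-atom weights: nonnegative and summing to one everywhere in space
w_A(\mathbf{r}) \ge 0, \qquad \sum_A w_A(\mathbf{r}) = 1 \quad \forall\, \mathbf{r}
% total energy as a sum of one- and two-atomic components
E = \sum_A E_A + \sum_{A<B} E_{AB}
% each expectation-value integral is partitioned by inserting the weights, e.g.
\int \rho(\mathbf{r})\, v(\mathbf{r})\, \mathrm{d}\mathbf{r}
  = \sum_A \int w_A(\mathbf{r})\, \rho(\mathbf{r})\, v(\mathbf{r})\, \mathrm{d}\mathbf{r}
```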
Abstract:
A conceptually new approach is introduced for decomposing the molecular energy calculated at the density functional level of theory into a sum of one- and two-atomic energy components, and it is realized in the "fuzzy atoms" framework. (Fuzzy atoms means that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components and yields values unexpectedly close to those calculated with the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
Abstract:
Dynamic optimization methods have become increasingly important in economics in recent years. Among the dynamic optimization techniques employed, optimal control has emerged as the most powerful tool for theoretical economic analysis. However, there is a need to go further and take into account that many dynamic economic processes also depend on some parameter other than time. One can think of relaxing the assumption of a representative (homogeneous) agent in macro- and micro-economic applications, allowing for heterogeneity among the agents. For instance, the optimal adaptation and diffusion of a new technology over time may depend on the age of the person who adopted it. Economic models must therefore account for heterogeneity within the dynamic framework. This thesis pursues two goals. The first is to analyze and revise existing environmental policies that focus on defining the optimal management of natural resources over time, taking into account the heterogeneity of environmental conditions. The thesis thus makes a policy-oriented contribution to the field of environmental policy by defining the changes necessary to transform an environmental policy based on the assumption of homogeneity into one that takes account of heterogeneity. The newly defined environmental policy will be more efficient and likely also politically more acceptable, since it is tailored more specifically to the heterogeneous environmental conditions. In addition to its policy-oriented contribution, this thesis aims to make a methodological contribution by applying a new optimization technique for solving problems where the control variables depend on two or more arguments (the so-called two-stage solution approach), and by applying a numerical method, the Escalator Boxcar Train Method, for solving distributed optimal control problems, i.e., problems where the state variables, in addition to the control variables, depend on two or more arguments. Chapter 2 presents a theoretical framework to determine the optimal resource allocation over time for the production of a good by heterogeneous producers who generate a stock externality, and derives government policies that modify the behavior of competitive producers in order to achieve optimality. Chapter 3 illustrates the method in a more specific context, integrating the aspects of quality and time, and presents a theoretical model that determines the socially optimal outcome over time and space for the problem of waterlogging in irrigated agricultural production. Chapter 4 concentrates on forestry resources and analyses the optimal selective-logging regime of a size-distributed forest.