12 results for Two-Level Optimization
in Archivo Digital para la Docencia y la Investigación - Institutional Repository of the Universidad del País Vasco
Abstract:
19 p.
Abstract:
Reaching the strong coupling regime of light-matter interaction has led to an impressive development in fundamental quantum physics and applications to quantum information processing. The latest advances in different quantum technologies, such as superconducting circuits or semiconductor quantum wells, show that the ultrastrong coupling regime (USC) can also be achieved, where novel physical phenomena and potential computational benefits have been predicted. Nevertheless, the lack of an effective decoupling mechanism in this regime has so far hindered control and measurement processes. Here, we propose a method based on parity symmetry conservation that allows for the generation and reconstruction of arbitrary states in the ultrastrong coupling regime of light-matter interactions. Our protocol requires minimal external resources by making use of the coupling between the USC system and an ancillary two-level quantum system.
Abstract:
In this paper we introduce four scenario Cluster-based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian-based procedures, the traditional aim consists of obtaining the solution value of the corresponding Lagrangian dual by solving scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting-variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in an experimental C++ code. A broad computational experience is reported on a testbed of randomly generated instances, using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter within the open-source engine COIN-OR. We also give computational evidence of the model-tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that the plain use of both solvers does not provide the optimal solution of the instances included in the testbed with which we have experimented, except for two toy instances, in affordable elapsed time. On the other hand, the proposed procedures provide strong lower bounds (or the same solution value) in considerably shorter elapsed time than the quasi-optimal solution obtained by other means for the original stochastic problem.
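The Subgradient Method named above as one of the four multiplier-updating procedures can be illustrated on a toy version of the dualized nonanticipativity constraint. The sketch below is not the authors' C++ implementation: it is a minimal Python stand-in with a made-up two-cluster problem (one bounded integer variable per cluster, linear costs), where dualizing x1 = x2 with multiplier lam splits the model into independent cluster submodels and the ascent step closes the duality gap.

```python
def solve_cluster(cost, lam_coef, upper):
    """Cluster submodel: minimize (cost + lam_coef) * x over integer x in [0, upper]."""
    return upper if cost + lam_coef < 0 else 0

def subgradient_dual(c1, c2, upper, steps=50, step0=1.0):
    """Maximize the Lagrangian dual L(lam) = min (c1+lam)x1 + min (c2-lam)x2
    obtained by dualizing the nonanticipativity constraint x1 - x2 = 0."""
    lam, best = 0.0, float("-inf")
    for k in range(1, steps + 1):
        x1 = solve_cluster(c1, lam, upper)    # cluster 1 submodel
        x2 = solve_cluster(c2, -lam, upper)   # cluster 2 submodel
        dual = (c1 + lam) * x1 + (c2 - lam) * x2
        best = max(best, dual)                # best lower bound so far
        g = x1 - x2                           # subgradient of the dual at lam
        lam += (step0 / k) * g                # ascent step with diminishing step size
    return best
```

With c1 = 2, c2 = -1 and x in {0, ..., 3}, the primal optimum is 0 (take x1 = x2 = 0), and the subgradient iterations drive the dual bound up to that value.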
Abstract:
This project focuses on two price indicators, the GDP deflator and the consumer price index (CPI), and analyzes the differences and similarities between them. These price indexes were chosen for their great representativeness and importance at the economic and social level, and for their direct relationship to the overall functioning of the economy and, in particular, to inflation. It should also be mentioned that this study was conducted for the cases of the euro area and the United States, as the impact of these economies on the international economic and social situation is very significant.
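The computational difference between the two indicators is small but conceptually important: the deflator divides nominal by real GDP over all domestic output, while the CPI reprices a fixed consumer basket. The toy Python sketch below makes this concrete (all numbers are illustrative assumptions, not data from the project):

```python
def gdp_deflator(nominal_gdp, real_gdp):
    """Paasche-type index: covers all domestically produced goods and services."""
    return 100.0 * nominal_gdp / real_gdp

def cpi(basket_qty, prices_now, prices_base):
    """Laspeyres-type index: reprices a fixed consumer basket against a base year."""
    now  = sum(q * p for q, p in zip(basket_qty, prices_now))
    base = sum(q * p for q, p in zip(basket_qty, prices_base))
    return 100.0 * now / base
```

Because the baskets and coverage differ, the two indexes can signal different inflation rates over the same period, which is the kind of divergence the project analyzes.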
Abstract:
It is known that most problems applied in real life present uncertainty. In the first part of the dissertation, basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty, are introduced to the reader. Moreover, since stochastic programs are complex to compute, we present some other models, such as wait-and-see, expected value, and the expected result of using the expected value solution. Two measures, the expected value of perfect information and the value of the stochastic solution, quantify how worthwhile Stochastic Programming is with respect to the other models. In the second part, an application that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost, has been designed and implemented with the modeller GAMS and the optimizer CPLEX. It has been developed within the Hazia project, managed by the Sortarazi association and associated with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
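The two measures named in the abstract have compact definitions for a minimization problem: EVPI = RP - WS and VSS = EEV - RP, where RP is the here-and-now (recourse) value, WS the wait-and-see value, and EEV the expected cost of the expected-value solution. The Python sketch below computes both on a made-up newsvendor-style example; the cost function and scenario data are illustrative assumptions, not the dissertation's GAMS model.

```python
def cost(x, d, h=1.0, s=3.0):
    """Toy second-stage cost: holding cost h per unit above demand d,
    shortage cost s per unit below it."""
    return h * max(x - d, 0) + s * max(d - x, 0)

def stochastic_measures(decisions, scenarios):
    """decisions: iterable of first-stage choices; scenarios: (probability, demand) pairs."""
    decisions = list(decisions)
    expect = lambda f: sum(p * f(d) for p, d in scenarios)
    RP = min(expect(lambda d, x=x: cost(x, d)) for x in decisions)  # here-and-now
    WS = expect(lambda d: min(cost(x, d) for x in decisions))       # wait-and-see
    d_bar = sum(p * d for p, d in scenarios)                        # expected scenario
    x_bar = min(decisions, key=lambda x: cost(x, d_bar))            # expected-value solution
    EEV = expect(lambda d: cost(x_bar, d))
    return {"RP": RP, "WS": WS, "EEV": EEV, "EVPI": RP - WS, "VSS": EEV - RP}
```

With decisions {0, 1, 2} and equiprobable demands 0 and 2, the recourse value is 1, perfect information would save 1 unit of cost (EVPI), and ignoring uncertainty costs an extra 1 unit (VSS).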
Abstract:
This paper provides microeconomic evidence on the variation over time of the firm-specific wage premium in Spain from 1995 to 2002, and its impact on wage inequality. We make use of two waves of a detailed linked employer-employee data set. In addition, a new data set with financial information on firms is used for 2002 to control as flexibly as possible for differences in the performance of firms (aggregated at industry level). To our knowledge, there is no microeconomic evidence on the dynamics of the firm-specific wage premium for Spain or for any other country with a similar institutional setting. Our results suggest that there is a clear tendency towards centralization in the collective bargaining process in Spain over this seven-year period, that the firm-level contract wage premium undergoes a substantial decrease, particularly for women, and finally that the "centralization" observed in the collective bargaining process has resulted in a slight decrease in wage inequality.
Abstract:
In this article we describe the methodology developed for the semiautomatic annotation of EPEC-RolSem, a Basque corpus labeled at the predicate level following the PropBank-VerbNet model. The methodology presented is the product of a detailed theoretical study of the semantic nature of verbs in Basque and of their similarities and differences with verbs in other languages. As part of the proposed methodology, we are creating a Basque lexicon on the PropBank-VerbNet model that we have named the Basque Verb Index (BVI). Our work thus dovetails with the general trend toward building lexicons from tagged corpora that is clear in work conducted for other languages. EPEC-RolSem and BVI are two important resources for the computational semantic processing of Basque; as far as the authors are aware, they are also the first resources of their kind developed for Basque. In addition, each entry in BVI is linked to the corresponding verb entry in well-known resources like PropBank, VerbNet, WordNet, Levin's Classification and FrameNet. We have also implemented several automatic processes to aid in creating and annotating the BVI, including processes designed to facilitate the task of manual annotation.
Abstract:
227 p.
Abstract:
179 p.
Abstract:
221 p.
Abstract:
We have grown an atom-thin, ordered, two-dimensional multi-phase film in situ through germanium molecular beam epitaxy using a gold (111) surface as a substrate. Its growth is similar to the formation of silicene layers on silver (111) templates. One of the phases, forming large domains, as observed in scanning tunneling microscopy, shows a clear, nearly flat, honeycomb structure. Thanks to thorough synchrotron radiation core-level spectroscopy measurements and advanced density functional theory calculations we can identify it as a √3 × √3 R(30°) germanene layer in conjunction with a √7 × √7 R(19.1°) Au(111) supercell, presenting compelling evidence of the synthesis of the germanium-based cousin of graphene on gold.
Abstract:
The main contribution of this work is to analyze and describe the state-of-the-art performance of answer scoring systems from the SemEval-2013 task, as well as to continue the development of an answer scoring system (EHU-ALM) developed at the University of the Basque Country. Overall, this master's thesis focuses on finding any configuration that improves the results on the SemEval dataset, using attribute engineering techniques to find optimal feature subsets, along with trying different hierarchical configurations in order to analyze their performance against the traditional one-versus-all approach. Altogether, throughout the work we propose two alternative strategies: on the one hand, to improve the EHU-ALM system without changing the architecture, and, on the other hand, to improve the system by adapting it to a hierarchical configuration. To build such new models we describe and use distinct attribute engineering, data preprocessing, and machine learning techniques.
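The one-versus-all baseline that the thesis compares against can be sketched compactly: one scorer is trained per label, and prediction returns the best-scoring label. The snippet below is a deliberately minimal stand-in (a nearest-centroid scorer per class in plain Python), not the EHU-ALM feature set or its actual learners:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train_one_vs_all(X, y):
    """One per-class model: here each class is summarized by its centroid."""
    return {c: centroid([x for x, lab in zip(X, y) if lab == c])
            for c in sorted(set(y))}

def predict(model, x):
    """Score x against every class model and return the best (closest) label."""
    sqdist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda c: sqdist(model[c], x))
```

A hierarchical configuration would instead route an example through a tree of such decisions (e.g., coarse label first, fine label second), which is the alternative architecture the thesis evaluates.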