910 results for schooling, productivity effects, upper bound


Relevância:

100.00%

Publicador:

Resumo:

General Summary

Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. The thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, comprising the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the magnitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration: the computed spillover effects between different sectors indicate how cluster formation might be productivity-enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows our results to be visualized in Google Earth. A short summary of each of the five chapters is provided below.

The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank, and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries (29 Southern and 19 Northern) and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Therefore, popular fears about the trade effects of differences in environmental regulations might be exaggerated.

The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First, we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labour that vary across countries, periods and manufacturing sectors. We then use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition). We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. In a first experiment, we construct a hypothetical world in which no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (ignoring price effects) to compute a static first-order trade effect. This effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, the effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that actual emissions are 90% below the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings of this chapter go together with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past.

Turning to the economic geography part of the thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables.

The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity using regional data (245 NUTS-2 regions over the period 1980-2003). Dynamic panel techniques allow us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it appears that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects.

The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of the center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia.

To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
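
The center-of-gravity computation of the fifth chapter lends itself to a compact illustration. Below is a minimal Python sketch of the physical center-of-mass idea applied to city data: coordinates are mapped to 3D Cartesian vectors, averaged with economic weights, and the weighted mean is projected back to latitude/longitude. The function names and the three-city sample are our own illustrative assumptions, not the thesis's data or code.

    import math

    def to_cartesian(lat, lon):
        """Latitude/longitude in degrees -> unit vector in 3D."""
        phi, lam = math.radians(lat), math.radians(lon)
        return (math.cos(phi) * math.cos(lam),
                math.cos(phi) * math.sin(lam),
                math.sin(phi))

    def center_of_gravity(cities):
        """Weighted center of mass of (lat, lon, weight) triples,
        projected back onto the sphere as (lat, lon) in degrees."""
        total = sum(w for _, _, w in cities)
        vs = [(to_cartesian(lat, lon), w) for lat, lon, w in cities]
        x, y, z = (sum(v[i] * w for v, w in vs) / total for i in range(3))
        return (math.degrees(math.atan2(z, math.hypot(x, y))),
                math.degrees(math.atan2(y, x)))

    # Hypothetical sample: (latitude, longitude, GDP weight)
    cities = [(40.7, -74.0, 1.5), (51.5, -0.1, 1.2), (35.7, 139.7, 1.4)]
    print(center_of_gravity(cities))

The weighted average lies inside the Earth; projecting it back to the surface, as above, is one common convention for reporting such a center.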

Relevância:

100.00%

Publicador:

Resumo:

In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and the providers' decisions on the formation of queues. In a first setting, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance: a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or strongly risk-averse agents. We then incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that the customers' behaviour and the providers' decisions exhibit strong path dependence. Moreover, we show that the providers' decisions drive the weighted average waiting time towards the market's reference waiting time. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect performance and the subjects' decisions. In particular, a provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustments it has already decided upon but not yet implemented.

Queueing is a fact of life that we witness daily. We have all had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queueing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this traditional work has been useful for improving the efficiency of many queueing systems, and for designing new processes in social and physical systems, it has provided only a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field.

In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and the new information. We label "conservative" those customers who give more weight to their memory than to the new information; when they give more weight to the new information, we call them "reactive".

In Chapter 2, we consider customers with different degrees of risk aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and of the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among the facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that no customer wishes to switch facility; in this case, the system has reached the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk aversion and system performance: customers with an intermediate degree of risk aversion typically experience higher sojourn times and, in particular, rarely reach the Nash equilibrium, while risk-neutral customers have the highest probability of reaching it.

Chapter 3 considers a service system similar to the previous one, but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and we use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherently they account for their previous decisions still awaiting implementation when taking their next decision. We find that the managers' decisions exhibit a strong path dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we find that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position.

In Chapter 4 we consider a single-server queueing system with non-captive customers. We carry out a laboratory experiment aimed at analysing how human subjects, taking on the role of the manager, make decisions regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who do not currently patronise it but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns; these groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we show that these decisions are positively correlated with the decisions taken one period earlier. We then formulate a heuristic to model the decision rule used by the subjects in the laboratory. We find that this decision rule fits the behaviour of the gradual investors very well, but it does not capture the behaviour of the subjects in the other two groups.

In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by the customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way we differ from the typical queueing literature, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages, and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information for their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profiles of customers and managers.
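
A minimal Python sketch of the adaptive-expectations mechanism used in Chapters 2 and 3: each customer blends memory with new information using a weight lambda ("conservative" if lambda favours memory, "reactive" otherwise), adds a risk premium proportional to the perceived variability, and joins the facility with the lowest estimated upper bound on sojourn time. The parameter names, numbers and two-facility setup are our illustrative assumptions, not the thesis's exact specification.

    LAMBDA = 0.6   # weight on memory: "conservative" if > 0.5, "reactive" otherwise
    K = 1.0        # degree of risk aversion (0 = risk-neutral)

    def update_expectation(memory, new_info, lam=LAMBDA):
        """Adaptive expectations: blend the old expectation with a new observation."""
        return lam * memory + (1 - lam) * new_info

    def choose_facility(mean_est, var_est, k=K):
        """Join the facility with the lowest estimated upper bound on sojourn time."""
        upper_bounds = [m + k * v ** 0.5 for m, v in zip(mean_est, var_est)]
        return min(range(len(upper_bounds)), key=upper_bounds.__getitem__)

    # Illustrative run: perceived means and variances at two facilities.
    mean_est = [4.0, 5.0]
    var_est = [9.0, 0.5]
    print(choose_facility(mean_est, var_est))  # risk-averse customer picks facility 1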

Relevância:

100.00%

Publicador:

Resumo:

BACKGROUND: The objective of this study was to describe the educational achievements of childhood cancer survivors in Switzerland compared with the general population. In particular, the authors investigated educational problems during childhood, final educational achievement in adulthood, and its predictors. METHODS: Childhood cancer survivors who were aged <16 years at diagnosis (1976-2003), had survived for ≥5 years, and were currently ages 20 to 40 years received a postal questionnaire during 2007 to 2009. Controls were respondents of the Swiss Health Survey ages 20 to 40 years. Educational achievement included compulsory schooling, vocational training, upper secondary schooling, and university degree. The analysis was weighted to optimize comparability of the two populations. The authors analyzed the association between demographic and clinical predictors and educational achievement using multivariable logistic regression. Subgroup analyses focused on survivors aged ≥27 years. RESULTS: One-third of survivors encountered educational problems during schooling (30% repeated 1 year, and 35% received supportive tutoring). In the total sample, more survivors than controls achieved compulsory schooling only (8.7% vs 5.2%), and fewer acquired a university degree (7.3% vs 11%), but more survivors than controls achieved an upper secondary education (36.1% vs 24.1%). In those aged ≥27 years, the differences in compulsory schooling and university education largely disappeared. In both survivors and controls, sex, nationality, language region, and migration background were strong predictors of achievement. Survivors of central nervous system tumors and those who had experienced a relapse had poorer outcomes (P < .05). CONCLUSIONS: Childhood cancer survivors encountered problems during schooling and completed professional education with some delay. However, with the exception of patients who had central nervous system tumors and those who experienced a relapse, the final educational achievement of childhood cancer survivors was comparable to that of the general population.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

A new coding technique to be used in steganography is evaluated. The performance of this new technique is computed, and comparisons with the well-known theoretical upper bound, the Hamming upper bound and basic LSB are established.
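
For context, the basic LSB baseline mentioned above simply overwrites the least significant bit of each cover sample with one message bit. A minimal Python sketch follows, with our own function names; the new coding technique under evaluation is not reproduced here.

    def lsb_embed(cover, bits):
        """Embed one message bit into the LSB of each cover sample (e.g. a pixel)."""
        assert len(bits) <= len(cover), "message longer than cover"
        stego = list(cover)
        for i, b in enumerate(bits):
            stego[i] = (stego[i] & ~1) | b
        return stego

    def lsb_extract(stego, n_bits):
        """Recover the first n_bits message bits from the stego samples."""
        return [s & 1 for s in stego[:n_bits]]

    cover = [200, 13, 77, 54]          # illustrative 8-bit samples
    message = [1, 0, 1]
    stego = lsb_embed(cover, message)
    assert lsb_extract(stego, 3) == message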

Relevância:

100.00%

Publicador:

Resumo:

This master's thesis examines typical faults of electrical machines and some of the measurements and analyses used to detect them; the amount of data produced by these measurements is estimated. A few fieldbuses and communication protocols used in industrial plants are introduced, and the bus data rate required to transfer the condition monitoring measurements over these buses is analysed. An upper bound on the number of electrical machines that can be monitored is estimated for each bus/protocol. The thesis presents solutions for implementing the data transfer needed for remote diagnostics of electrical machines, and an upper bound on the number of monitored machines is estimated in each case. In addition, a measurement data acquisition unit to be used with condition monitoring sensors is designed and implemented. A condition monitoring and remote diagnostics data acquisition and transfer system containing this unit is installed at two pilot sites: a pulp mill and a small hydropower plant.
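
The per-bus upper bound described above comes down to simple arithmetic: the number of machines one bus can serve is bounded by the usable bus data rate divided by the per-machine measurement traffic. A Python sketch with made-up figures; the thesis's actual buses, protocols and data volumes are not reproduced here.

    def max_machines(bus_rate_bps, per_machine_bps, overhead=0.3):
        """Upper bound on machines per bus: usable bus rate over per-machine traffic.
        `overhead` is the fraction of bus capacity lost to protocol framing."""
        usable = bus_rate_bps * (1 - overhead)
        return int(usable // per_machine_bps)

    # Illustrative figures only: a 1 Mbit/s fieldbus, 10 kbit/s of condition
    # monitoring traffic per machine, 30% protocol overhead.
    print(max_machines(1_000_000, 10_000))  # -> 70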

Relevância:

100.00%

Publicador:

Resumo:

By modifying a domain first suggested by Ruth Goodman in 1935 and by exploiting Fedorov's explicit solution of the Pólya-Chebotarev problem in the case of four symmetrically placed points, an improved upper bound for the univalent Bloch-Landau constant is obtained. The domain that leads to this improved bound takes the form of a disk from which some arcs are removed in such a way that the resulting simply connected domain is harmonically symmetric in each arc with respect to the origin. The existence of domains of this type is established using techniques from conformal welding, and some general properties of harmonically symmetric arcs in this setting are established.

Relevância:

100.00%

Publicador:

Resumo:

By an exponential sum of the Fourier coefficients of a holomorphic cusp form we mean the sum formed by first taking the Fourier series of the said form, then cutting away the beginning and the tail, and considering the remaining sum on the real axis. For simplicity's sake, the coefficients are typically normalized; this is not essential, however, as the normalization can be introduced and removed simply by partial summation. We improve the approximate functional equation for the exponential sums of the Fourier coefficients of holomorphic cusp forms by giving an explicit upper bound for the error term appearing in the equation. The approximate functional equation is originally due to Jutila [9] and is a crucial tool for transforming sums into shorter sums. This transformation changes the point of the real axis at which the sum is considered. We also improve the known upper bounds for the size of these exponential sums. For very short sums we do not obtain anything better than the easy estimate obtained by multiplying the upper bound for a single Fourier coefficient (the coefficients are bounded by the divisor function, as Deligne [2] showed) by the number of coefficients. This estimate is extremely rough, as no possible cancellation is taken into account. However, for small sums it is unclear whether any appreciable cancellation actually occurs.
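
In standard notation (our rendering; the thesis's exact normalization may differ), the object in question is the sum

    \[ \sum_{M \le n \le M + \Delta} a(n)\, e(n\alpha), \qquad e(x) = e^{2\pi i x}, \]

where the a(n) are the (typically normalized) Fourier coefficients of the cusp form, the range [M, M + Δ] is what remains after the beginning and the tail of the series are cut away, and α is the point on the real axis. The approximate functional equation relates such a sum to a shorter sum of the same shape taken at a transformed point.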

Relevância:

100.00%

Publicador:

Resumo:

Many European states apply score systems to evaluate the disability severity of non-fatal motor accident victims under third-party liability law. The score is a non-negative integer, bounded above by 100, that increases with severity. It may be automatically converted into financial terms and thus also reflects the compensation cost of disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: the interpretation of the data-generating process and the level of statistical fit. The results have implications for traffic safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
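
A minimal sketch of the kind of model comparison described above, using the Poisson and zero-inflated Poisson count models from statsmodels on synthetic data. The paper's covariates, its exact zero-altered specification and the Spanish data are not reproduced; this only illustrates the standard-versus-zero-altered comparison by statistical fit.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    X = sm.add_constant(x)

    # Synthetic severity scores with excess zeros
    lam = np.exp(0.5 + 0.8 * x)
    y = rng.poisson(lam)
    y[rng.random(n) < 0.3] = 0  # inflate the zeros

    poisson_fit = sm.Poisson(y, X).fit(disp=0)
    zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)

    # Compare fit: the zero-inflated model should win on these data
    print("Poisson AIC:", poisson_fit.aic)
    print("ZIP AIC:    ", zip_fit.aic)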

Relevância:

100.00%

Publicador:

Resumo:

Automobile bodily injury (BI) claims remain unsettled for a long time after the accident. The estimation of an accurate reserve for Reported But Not Settled (RBNS) claims is therefore vital for insurers. In accordance with the recommendation included in the Solvency II project (CEIOPS, 2007), a statistical model is implemented here for RBNS reserve estimation. Lognormality is observed in the empirical compensation cost data at the different levels of BI severity. The individual claim provision is estimated by allocating the expected mean compensation for the predicted severity of the victim's injury, for which an upper bound is also computed. The BI severity is predicted by means of a heteroscedastic multiple choice model, because empirical evidence shows that the variability in the latent severity of injured individuals travelling by car is not constant. It is shown that this methodology can improve the accuracy of RBNS reserve estimation at all stages, compared to the subjective assessment traditionally made by practitioners.
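
A Python sketch of the reserving arithmetic implied above, assuming lognormal compensation costs within each predicted severity level. The parameter values, severity labels and the 95% quantile chosen for the upper bound are our illustrative assumptions; the paper's heteroscedastic multiple choice model for predicting severity is not reproduced.

    import math
    from scipy.stats import lognorm

    # Hypothetical lognormal parameters (mu, sigma) per predicted severity level
    params = {"minor": (7.0, 0.8), "moderate": (8.5, 0.9), "severe": (10.0, 1.1)}

    def expected_cost(mu, sigma):
        """Mean of a lognormal: exp(mu + sigma^2 / 2)."""
        return math.exp(mu + sigma ** 2 / 2)

    def upper_bound(mu, sigma, q=0.95):
        """q-quantile of the lognormal as an upper bound for a single claim."""
        return lognorm.ppf(q, s=sigma, scale=math.exp(mu))

    # RBNS reserve: sum expected costs over open claims by predicted severity
    open_claims = {"minor": 120, "moderate": 35, "severe": 4}
    reserve = sum(n * expected_cost(*params[sev]) for sev, n in open_claims.items())
    print(f"point reserve: {reserve:,.0f}")
    print(f"95% bound, one severe claim: {upper_bound(*params['severe']):,.0f}")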

Relevância:

100.00%

Publicador:

Resumo:

Steganography is an information hiding application which aims to hide secret data imperceptibly into a cover object. In this paper, we describe a novel coding method based on Z2Z4-additive codes in which data is embedded by distorting each cover symbol by at most one unit (±1-steganography). This method is optimal and solves the problem encountered by the most efficient methods known today concerning the treatment of boundary values. The performance of this new technique is compared with that of the aforementioned methods and with the well-known rate-distortion upper bound, to conclude that a higher payload can be obtained for a given distortion by using the proposed method.
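
The rate-distortion bound mentioned above has a compact standard form for ±1 embedding: at change rate ρ, with each change equally likely to be +1 or -1, the payload is at most H(ρ) + ρ bits per symbol, H being the binary entropy function. A small Python sketch of this bound (ours, not the paper's code):

    import math

    def binary_entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def pm1_rate_bound(rho):
        """Rate-distortion upper bound for ±1-steganography:
        at change rate rho, payload <= H(rho) + rho bits per symbol."""
        return binary_entropy(rho) + rho

    # Evaluate the bound at a hypothetical change rate
    rho = 0.25
    print(f"bound at rho={rho}: {pm1_rate_bound(rho):.3f} bits/symbol")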

Relevância:

100.00%

Publicador:

Resumo:

The three main topics of this work are independent systems and chains of word equations, parametric solutions of word equations on three unknowns, and unique decipherability in the monoid of regular languages. The most important result about independent systems is a new method giving an upper bound for their sizes in the case of three unknowns. The bound depends on the length of the shortest equation. This result has generalizations for decreasing chains and for more than three unknowns. The method also leads to shorter proofs and generalizations of some old results. Hmelevskii's theorem states that every word equation on three unknowns has a parametric solution. We give a significantly simplified proof of this theorem. As a new result we estimate the lengths of parametric solutions, obtaining a bound for the length of the minimal nontrivial solution and for the complexity of deciding whether such a solution exists. The unique decipherability problem asks whether given elements of some monoid form a code, that is, whether they satisfy no nontrivial equation. We give characterizations of when a collection of unary regular languages is a code. We also prove that it is undecidable whether a collection of binary regular languages is a code.
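
For finite sets of words (as opposed to the regular languages treated in this work), unique decipherability is decidable by the classical Sardinas-Patterson algorithm. A compact Python sketch, included only to make "forming a code" concrete; it does not reproduce the thesis's characterizations for unary or binary regular languages.

    def residuals(words, suffixes):
        """One Sardinas-Patterson round: dangling suffixes w^-1 s and s^-1 w."""
        out = set()
        for s in suffixes:
            for w in words:
                if s != w and s.startswith(w):
                    out.add(s[len(w):])
                if s != w and w.startswith(s):
                    out.add(w[len(s):])
        return out

    def is_code(words):
        """True iff `words` is uniquely decipherable (Sardinas-Patterson)."""
        words = set(words)
        seen = set()
        current = residuals(words, words)  # the set U_1
        while current:
            if current & words:  # a codeword dangles -> double factorization
                return False
            if current <= seen:  # cycle with no failure -> code
                return True
            seen |= current
            current = residuals(words, current)
        return True  # no dangling suffixes at all (e.g. a prefix code)

    assert is_code({"0", "10", "110"})      # a prefix code is always a code
    assert not is_code({"a", "ab", "ba"})   # "aba" factors as a|ba and ab|a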

Relevância:

100.00%

Publicador:

Resumo:

Tea has been considered a medicine and a healthy beverage since ancient times, but recently it has received a great deal of attention because of its antioxidant properties. Green tea polyphenols have been demonstrated to be effective chemopreventive agents. Recently, investigators have found that EGCG, one of the green tea catechins, could have anti-HIV effects by binding to the CD4 receptor. Many factors can exert an important influence on the composition of tea, such as species, season, age of the leaf, climate, and horticultural practices (soil, water, minerals, fertilizers). This paper presents the development of an HPLC analytical method, using an RP-18 column and a mobile phase composed of water, acetonitrile, methanol, ethyl acetate and glacial acetic acid (89:6:1:3:1 v/v/v/v/v), for the simultaneous determination and quantification of caffeine (CAF), catechin (C), epicatechin (EC) and epigallocatechin gallate (EGCG) in samples of Camellia sinensis (green tea) grown in Brazil and harvested in spring, summer and autumn, in comparison with Brazilian black tea, with samples of Japanese and Chinese green tea, and with two standardized dry extracts of green tea. The method has been statistically evaluated and has proved adequate for the qualitative and quantitative determination of these compounds in the samples.

Relevância:

100.00%

Publicador:

Resumo:

This thesis will introduce a new strongly typed programming language utilizing Self types, named Win--*Foy, along with a suitable user interface designed specifically to highlight language features. The need for such a programming language is based on deficiencies found in programming languages that support both Self types and subtyping. Subtyping is a concept that is taken for granted by most software engineers programming in object-oriented languages. Subtyping supports subsumption, but it does not support the inheritance of binary methods. Binary methods contain an argument of type Self, the same type as the object itself, in a contravariant position, i.e. as a parameter. There are several arguments in favour of introducing Self types into a programming language [1]. This rationale led to the development of a relation that has become known as matching [4, 5]. The matching relation does not support subsumption; however, it does support the inheritance of binary methods. Two forms of matching have been proposed [1]. Specifically, these relations are known as higher-order matching and F-bound matching. Previous research on these relations indicates that the higher-order matching relation is both reflexive and transitive, whereas F-bound matching is reflexive but not transitive [7]. The higher-order matching relation provides significant flexibility regarding the inheritance of methods that utilize or return values of the same type. This flexibility, in certain situations, can restrict the programmer from defining specific classes and methods which are based on constant values [21]. For this reason, the type This is used as a second reference to the type of the object which, contrary to Self, cannot be specialized in subclasses. F-bound matching allows a programmer to define a function that will work for all types A' that match an upper bound of type A, with the result type depending on A'. The use of parametric polymorphism in F-bound matching provides a connection to subtyping in object-oriented languages. This thesis will contain two main sections. Firstly, significant details concerning the deficiencies of the subtype relation and the need to introduce the higher-order and F-bound matching relations into programming languages will be explored. Secondly, a new programming language, named the Win--*Foy Functional Object-Oriented Programming Language, has been created, along with a suitable user interface, in order to facilitate experimentation by programmers regarding the matching relation. The construction of the programming language and the user interface will be explained in detail.
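
The binary-method problem can be made concrete in modern Python, whose typing.Self (PEP 673, Python 3.11+) plays the role of the Self type: a method with a Self-typed parameter is a binary method, and a subclass inheriting it silently strengthens that parameter type, which breaks subsumption. This is exactly the situation the matching relations address. A sketch in Python, ours rather than Win--*Foy code:

    from typing import Self  # Python 3.11+

    class Point:
        def __init__(self, x: float, y: float) -> None:
            self.x, self.y = x, y

        # Binary method: Self appears in contravariant (parameter) position.
        def equals(self, other: Self) -> bool:
            return self.x == other.x and self.y == other.y

    class ColorPoint(Point):
        def __init__(self, x: float, y: float, color: str) -> None:
            super().__init__(x, y)
            self.color = color

        # Here Self means ColorPoint: the inherited contract now demands a
        # ColorPoint argument, so ColorPoint is no longer a safe subtype of Point.
        def equals(self, other: Self) -> bool:
            return super().equals(other) and self.color == other.color

    p: Point = ColorPoint(0.0, 1.0, "red")  # subsumption; strict checkers complain
    try:
        print(p.equals(Point(0.0, 1.0)))    # a plain Point has no .color
    except AttributeError as exc:
        print("binary method broke subsumption:", exc)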

Relevância:

100.00%

Publicador:

Resumo:

The capacitated location-routing problem (CLRP) is a key problem in the design of distribution networks. It generalizes both the capacitated facility location problem, by adding routing decisions, and the multi-depot vehicle routing problem, by adding depot location decisions. In this thesis we develop tools for solving the CLRP using mathematical programming. In Chapter 3, we introduce three new models for the CLRP based on vehicle flows and commodity flows, and we show that they dominate the original two-index formulation [19] in terms of the quality of the lower bound. New valid inequalities are developed and added to the models, together with known ones. New separation algorithms are also developed, which in most cases generalize those found in the literature. The computational results show that these flow models are useful for solving small and medium-size instances. In Chapter 4, we present a new column generation method based on a set-partitioning formulation. The subproblem is a capacitated shortest path problem. In particular, we use a relaxation of this problem in which routes may contain cycles of length three or more. This is complemented with new cuts that further reduce the integrality gap while discouraging the appearance of cycles in the routes. The results suggest that this method provides the best exact method for the CLRP. In Chapter 5, we introduce a new heuristic method for the CLRP. First, a randomized GRASP-type method is run to find an initial set of good-quality solutions. The solutions in this set are then combined so as to improve them. Finally, a destroy-and-repair method is run, based on the solution of a new location-reallocation model that generalizes the reallocation problem [48].
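
As a rough guide to what the set-partitioning formulation of Chapter 4 looks like (our generic rendering of the standard CLRP master problem, not the thesis's exact model): with J the depots, I the customers, R_j the feasible routes from depot j, f_j the depot opening costs, c_r the route costs, a_{ir} = 1 if route r visits customer i, d_r the demand served by route r, and C_j the depot capacities,

    \[
    \begin{aligned}
    \min\ & \sum_{j \in J} f_j\, y_j + \sum_{j \in J}\sum_{r \in R_j} c_r\, x_r \\
    \text{s.t. } & \sum_{j \in J}\sum_{r \in R_j} a_{ir}\, x_r = 1 && \forall i \in I, \\
    & \sum_{r \in R_j} d_r\, x_r \le C_j\, y_j && \forall j \in J, \\
    & x_r,\, y_j \in \{0, 1\}.
    \end{aligned}
    \]

Column generation prices out the route variables x_r; the subproblem that generates negative-reduced-cost columns is the capacitated shortest path problem mentioned above.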

Relevância:

100.00%

Publicador:

Resumo:

This thesis studies the amplitude amplification algorithm and its applications in the field of property testing. We use amplitude amplification to propose the most efficient quantum algorithm to date for testing the linearity of Boolean functions, and we generalize this new algorithm to test whether a function between two finite abelian groups is a homomorphism. The best known quantum algorithm for testing the symmetry of Boolean functions is also improved, and we use this new algorithm to test the quasi-symmetry of Boolean functions. We then deepen the study of the number of black-box queries made by the amplitude amplification algorithm when the initial amplitude is unknown. A rigorous description of the random variable representing this number is presented, followed by the previously known result on the upper bound of its expectation. New results on the variance of this variable follow. In particular, we show that in the general case the variance is infinite, but that for an appropriate choice of parameters it becomes bounded.
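
For context (our summary of the classical result, not of the thesis's new variance bounds): when the initial amplitude is unknown and the success probability is a, the standard exponential-search version of amplitude amplification (Boyer, Brassard, Høyer and Tapp) finds a marked outcome using an expected number of iterations

    \[ \mathbb{E}[T] = O\!\left(1/\sqrt{a}\right), \]

which is the upper bound on the expectation referred to above. The thesis's contribution concerns the distribution of T beyond its mean, in particular its variance.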