803 results for Ellsberg paradox
Abstract:
Energy saving has been a stated policy objective of the EU since the 1970s. At present, the 2020 target is a 20% reduction of EU energy consumption compared with projections for 2020. This is one of the headline targets of the European Energy Strategy 2020, but efforts to achieve it remain slow and insufficient. The aim of this paper is to understand why this is happening. Firstly, the paper examines why public measures promoting energy efficiency are needed and what form these measures should optimally take (§ 1). Fortunately, over the last 20 years much research has been done into the famous ‘energy efficiency gap’ (or ‘energy efficiency paradox’), even if more remains to be done. Multiple explanations have been given: market failures, modelling flaws and behavioural obstacles, each of which encompasses many complex aspects. Several types of instrument can be adopted to encourage energy efficiency: measures guaranteeing the correct pricing of energy are preferred, followed by taxes or tradable white certificates, which in turn are preferred to standards or subsidies. Information programmes are also necessary. Secondly, the paper analyzes the evolution of the different programmes from 2000 onwards (§ 2). This reveals the extreme complexity of the subject, which covers quite diverse topics: buildings, appliances, the public sector, industry and transport. The market for energy efficiency is as diffuse as energy consumption patterns themselves: it is composed of the many actors who demand more efficient provision of energy services and of the suppliers of the goods and know-how needed to deliver that greater efficiency. Consumers in this market include individuals, businesses and governments, and market activities cover all energy-consuming sectors of the economy. Additionally, energy efficiency is a perfect example of a competence shared between the EU and the Member States. Lastly, the legal framework has steadily grown in complexity, and despite the successive energy efficiency programmes used to build it, the gap between the target and the results remains. The paper then examines whether Directive 2012/27/EU, adopted to improve the situation, could bring better results, and briefly describes the content of this framework Directive, which accompanies and implements the latest energy efficiency programme (§ 3). Although the Directive is technically complex and maintains non-binding energy efficiency targets, it certainly represents an improvement in several respects. However, it is also saddled with a multiplicity of exemption clauses and interpretative documents (with no binding value) which weaken its provisions; moreover, on its own it will deliver only about 17.7% of final energy savings by 2020. The implementation process, which is essential, also remains fairly weak. The paper also gives a glimpse of the various EU instruments for financing energy efficiency projects (§ 4); though useful, they do not indicate a strong priority. Fourthly, the paper analyzes the EU’s limited progress so far and gathers a few suggestions for improvement. One thing seems to remain useful: targets, which can be defined in various ways (§ 5). Basically, all this indicates that the EU energy efficiency strategy has so far failed to reach its targets, lacks coherence and remains ambiguous.
In the new Commission’s proposals of 22 January 2014 – intended to define a new climate/energy package for the period from 2020 to 2030 – the approach to energy efficiency remains unclear. This is regrettable: energy efficiency is the only instrument which allows the EU to reach its three targets of sustainability, competitiveness and security simultaneously. The final conclusion thus appears paradoxical. On the one hand, all existing studies indicate that the decarbonization of the EU economy will be absolutely impossible without some very serious improvements in energy efficiency. On the other hand, in reality energy efficiency has always been treated as a second-tier priority. It is imperative to eliminate this contradiction.
Abstract:
There is a puzzling, little-remarked contradiction in scholarly views of the European Commission. On the one hand, the Commission is seen as the maestro of European integration, gently but persistently guiding both governments and firms toward Brussels. On the other hand, the Commission is portrayed as a headless bunch of bickering fiefdoms who can hardly be bothered by anything but their own internecine turf wars. The reason these very different views of the same institution have so seldom come into conflict is quite apparent: EU studies has a set of relatively autonomous and poorly integrated subfields that work at different levels of analysis. Those scholars holding the "heroic" view of the Commission are generally focused on the contest between national and supranational levels that characterized the 1992 program and subsequent major steps toward European integration. By contrast, those scholars with the "bureaucratic politics" view are generally authors of case studies or legislative histories of individual EU directives or decisions. However, the fact that these two images of the Commission are often two ships passing in the night hardly implies that there is no dispute. Clearly both views cannot be right; but then, how can we explain the significant support each enjoys from the empirical record? The Commission, perhaps the single most important supranational body in the world, certainly deserves better than the schizophrenic interpretation the EU studies community has given it. In this paper, I aim to make a contribution toward the unraveling of this paradox. In brief, the argument I make is as follows: the European Commission can be effective in pursuit of its broad integration goals in spite of, and even because of, its internal divisions. The folk wisdom that too many chefs spoil the broth may often be true, but it need not always be so. The paper is organized as follows. I begin with an elaboration of the theoretical position briefly outlined above. I then turn to a case study from the major Commission efforts to restructure the computer industry in the context of its 1992 program. The computer sector does not merely provide interesting, random illustrations of the hypothesis I have advanced. Rather, as Wayne Sandholtz and John Zysman have stressed, the Commission's efforts on informatics formed one of the most crucial parts of the entire 1992 program, and so the Commission's success in "Europeanizing" these issues had significant ripple effects across the entire European political economy. I conclude with some thoughts on the following question: now that the Commission has succeeded in bringing the world to its doorstep, does its bureaucratic division still serve a useful purpose?
Abstract:
Research in Quebec (Garon, 2009), France (Donnat, 2011) and the United States (Kolb, 2001) confirms a general state of affairs: the aging of the classical music audience. If the audience for this repertoire is known for its high level of education, why are today's university students not more present in concert halls? This study explores the problem first through historical research and interviews with some of the classical music organizations in Montreal, in order to understand their audience-development strategies from 2004 to 2014, and then through a survey of 555 university students in the city, to draw a portrait of their current relationship with music. Our analysis, supported by a bibliography in sociomusicology and the sociology of cultural practices, confirms trends such as "cultural omnivorism" and the musical eclecticism of young university students. It also shows a positive reception of classical works, albeit one incompatible with the aesthetic criteria of their favourite musical genres. Starting from this paradox, we examine the strength of the extramusical motivations that bring them to concerts, their preferred formats, the impact of music education, and the influence of parents, the internet and new technologies. Finally, we note the small number of initiatives by music organizations in Montreal's university milieu, which nevertheless represents a pool of great potential for the renewal of classical music audiences.
Abstract:
The quantity, type, and maturity of the organic matter of Quaternary and Tertiary sediments from the Japan Trench (DSDP Leg 56, Sites 434 and 436; and Leg 57, Sites 438, 439 and 440) were determined. The hydrocarbons in lipid extracts were analyzed by capillary-column gas chromatography and combined gas chromatography/mass spectrometry. Kerogen concentrates were investigated by microscopy, and vitrinite-reflectance values were determined. Measured organic-carbon values were in the range of 0.13 to 1.00 per cent. Extract yields, however, were extremely low. Normalized to organic carbon, total extracts ranged from 4.1 to 15.7 mg/g Corg. Gas chromatography of non-aromatic hydrocarbons showed that all sediments, except one Oligocene sample, contained very immature, mainly terrigenous organic material. This was indicated by n-alkane maxima at C29 and C31 and high odd-carbon-number predominances. Unsaturated steroid hydrocarbons were found to be major cyclic compounds in lower- and middle-Miocene samples from the upper inner trench slope (Sites 438 and 439). Perylene was the dominant aromatic hydrocarbon in all but the Oligocene sample. Microscopy showed kerogens rich in terrigenous organic particles, with a major portion of recycled vitrinite. Nevertheless, almost all the liptinite particles appeared to be primary. This is a paradox, as the bulk of the samples were composed of hemipelagic mineral matter with a major siliceous biogenic (planktonic) component. A trend of reduced size and increased roundness can be seen for the vitrinite/inertinite particles from west to east (from upper inner trench slope to outer trench slope). All sediments but one are relatively immature, with mean huminite-reflectance values (Ro) in the range of 0.30 to 0.45 per cent. The oldest and deepest sediment investigated, an Oligocene sandstone from Site 439, yielded a mean vitrinite-reflectance value of 0.74 per cent and a mature n-alkane distribution. This sample may indicate a geothermal event in late Oligocene time. It failed to affect the overlying lower Miocene and may have been caused by an intrusion. Boulders of acidic igneous rocks in the Oligocene can be interpreted as witnesses of nearby volcanic activity accompanied by intrusions.
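The "odd-carbon-number predominance" referred to above is usually quantified with a carbon preference index (CPI); the abstract does not say which index the authors used, so the following Bray–Evans-style formulation is given only for orientation:

\[ \mathrm{CPI} = \frac{1}{2}\left( \frac{C_{25}+C_{27}+C_{29}+C_{31}+C_{33}}{C_{24}+C_{26}+C_{28}+C_{30}+C_{32}} + \frac{C_{25}+C_{27}+C_{29}+C_{31}+C_{33}}{C_{26}+C_{28}+C_{30}+C_{32}+C_{34}} \right) \]

where C_n denotes the abundance of the n-alkane with n carbon atoms. Values well above 1 indicate the strong odd predominance typical of immature, land-plant-derived organic matter, while values near 1 accompany increasing maturity, as in the Oligocene sandstone mentioned above.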
Abstract:
Blue whiting (Micromesistius poutassou, http://www.marinespecies.org/aphia.php?p=taxdetails&id=126439) is a small mesopelagic planktivorous gadoid found throughout the North-East Atlantic. This dataset contains the results of a model-based analysis of larvae captured by the Continuous Plankton Recorder (CPR) during the period 1951-2005. The observations are analysed using Generalised Additive Models (GAMs) of the spatial, seasonal and interannual variation in the occurrence of larvae. The best-fitting model is chosen using the Akaike Information Criterion (AIC). The probability of occurrence in the Continuous Plankton Recorder is then normalised and converted to a probability distribution function in space (UTM projection Zone 28) and season (day of year). The best-fitting model splits the distribution into two separate spawning grounds north and south of a dividing line at 53 N. The probability distribution is therefore normalised within these two regions (i.e. the space-time integral over each of the two regions is 1). The modelled outputs are on a UTM Zone 28 grid; however, for convenience, the latitude ("lat") and longitude ("lon") of each of these grid points are also included as variables in the NetCDF file. The assignment of each grid point to either the Northern or Southern component (defined here as north/south of 53 N) is also included as a further variable ("component"). Finally, the day of year ("doy") is stored as the number of days elapsed from and including January 1 (i.e. doy = 1 on January 1); the year is divided into 180 grid points.
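A minimal sketch (in Python, using xarray) of how the NetCDF output described above might be inspected; the filename and the name of the probability variable ("pdf") are assumptions, since only "lat", "lon", "component" and "doy" are named in the description:

import numpy as np
import xarray as xr

# Open the model output described above; the filename is hypothetical.
ds = xr.open_dataset("blue_whiting_larvae_cpr.nc")
print(ds)  # list variables, dimensions and attributes

pdf = ds["pdf"]              # assumed name of the normalised probability field
component = ds["component"]  # grid-point assignment: north or south of 53 N

# Rough check that the probability mass within each spawning component is ~1.
# (A true space-time integral would weight each value by grid-cell area and the
# day-of-year step; plain summation is only an approximation of that integral.)
for comp in np.unique(component.values):
    mass = pdf.where(component == comp).sum().item()
    print(f"component {comp}: summed probability ~ {mass:.3f}")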
Abstract:
Nuevo fanatismo -- Petrarca y su Laura -- Electra -- Moderno disparatorio -- Sienkiewicz y el diario de Plozowski -- El último drama de Ibsen -- El gallo de Sócrates -- Pinturas literarias -- Los tópicos -- Agua turbia -- Sacrificios -- Las metamórfosis -- Cuentos aragoneses -- La poesía de Santos Chocano -- Ideas sobre la técnica y la crítica literarias -- Vicios de lenguaje -- Menudencias lexicográficas -- Casandra -- El arte de escribir -- Paradox rey -- La lengua internacional -- Biología de la sátira -- Opiniones.
Abstract:
Thesis (Master's)--University of Washington, 2016-07
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We investigate multipartite entanglement in relation to the process of quantum state exchange. In particular, we consider such entanglement for a certain pure state involving two groups of N trapped atoms. The state, which can be produced via quantum state exchange, is analogous to the steady-state intracavity state of the subthreshold optical nondegenerate parametric amplifier. We show that, first, it possesses some 2N-way entanglement. Second, we place a lower bound on the amount of such entanglement in the state using a measure called the entanglement of minimum bipartite entropy.
Abstract:
Stabilizing selection has been predicted to change genetic variances and covariances so that the orientation of the genetic variance-covariance matrix (G) becomes aligned with the orientation of the fitness surface, but it is less clear how directional selection may change G. Here we develop statistical approaches to the comparison of G with vectors of linear and nonlinear selection. We apply these approaches to a set of male sexually selected cuticular hydrocarbons (CHCs) of Drosophila serrata. Even though male CHCs displayed substantial additive genetic variance, more than 99% of the genetic variance was orientated 74.9 degrees away from the vector of linear sexual selection, suggesting that open-ended female preferences may greatly reduce genetic variation in male display traits. Although the orientation of G and the fitness surface were found to differ significantly, the similarity present in eigenstructure was a consequence of traits under weak linear selection and strong nonlinear (convex) selection. Associating the eigenstructure of G with vectors of linear and nonlinear selection may provide a way of determining what long-term changes in G may be generated by the processes of natural and sexual selection.
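The angle reported between the genetic variance and the vector of linear sexual selection can be computed from the leading eigenvector of G (often written g_max) and the vector of linear selection gradients. A minimal sketch in Python; the 3-trait G matrix and selection gradient below are illustrative placeholders, not the paper's CHC estimates:

import numpy as np

# Illustrative additive genetic (co)variance matrix G and linear selection
# gradient beta; these numbers are made up purely for the example.
G = np.array([[0.40, 0.15, 0.05],
              [0.15, 0.30, 0.10],
              [0.05, 0.10, 0.20]])
beta = np.array([0.2, -0.1, 0.3])

eigvals, eigvecs = np.linalg.eigh(G)      # G is symmetric, so use eigh
g_max = eigvecs[:, np.argmax(eigvals)]    # axis holding most genetic variance

# Angle between g_max and beta; abs() removes the sign ambiguity of eigenvectors.
cos_theta = abs(g_max @ beta) / (np.linalg.norm(g_max) * np.linalg.norm(beta))
print(f"angle between g_max and beta: {np.degrees(np.arccos(cos_theta)):.1f} degrees")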
Abstract:
Four variations on the Two Envelope Paradox are stated and compared. The variations are employed to provide a diagnosis and an explanation of what has gone awry in the paradoxical modeling of the decision problem that the paradox poses. The canonical formulation of the paradox underdescribes the ways in which one envelope can have twice the amount that is in the other. Some ways one envelope can have twice the amount that is in the other make it rational to prefer the envelope that was originally rejected. Some do not, and it is a mistake to treat them alike. The nature of the mistake is diagnosed by the different roles that rigid designators and definite descriptions play in unproblematic and in untoward formulations of decision tables that are employed in setting out the decision problem that gives rise to the paradox. The decision maker’s knowledge or ignorance of how one envelope came to have twice the amount that is in the other determines which of the different ways of modeling his decision problem is correct. Under this diagnosis, the paradoxical modeling of the Two Envelope problem is incoherent.
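For readers unfamiliar with the puzzle, the problematic step is the naive expected-value argument: writing x for the amount in the chosen envelope and treating the other envelope as equally likely to hold 2x or x/2 gives

\[ E[\text{other}] = \tfrac{1}{2}(2x) + \tfrac{1}{2}\left(\tfrac{x}{2}\right) = \tfrac{5}{4}x > x, \]

apparently recommending a switch whichever envelope was chosen. The abstract's diagnosis turns on whether a table of this form is a legitimate description of the decision problem, which depends on how the amounts were fixed and on what the decision maker knows about that process.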
Abstract:
Following on from previous work [J.-A. Larsson, Phys. Rev. A 67, 022108 (2003)], Bell inequalities based on correlations between binary digits are considered for a particular entangled state involving 2N trapped ions. These inequalities involve applying displacement operations to half of the ions and then measuring correlations between pairs of corresponding bits in the binary representations of the number of center-of-mass phonons of N particular ions. It is shown that the state violates the inequalities and thus displays nonclassical correlations. It is also demonstrated that it violates a Bell inequality when the displacements are replaced by squeezing operations.
Abstract:
Cox's theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of particular interest is the thesis that the only coherent means of representing uncertainty is via the probability calculus. In this paper I examine the logical assumptions of Cox's theorem and I show how these impinge on the philosophical conclusions thought to be supported by the theorem. I show that the more controversial thesis is not supported by Cox's theorem. (C) 2003 Elsevier Inc. All rights reserved.
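In outline, the assumptions at issue are functional-equation requirements on a real-valued plausibility function. One common statement (presentations differ, and this sketch is not necessarily the exact set the paper examines) is

\[ (A \wedge B \mid C) = F\big((A \mid B \wedge C),\, (B \mid C)\big), \qquad (\neg A \mid C) = S\big((A \mid C)\big), \]

together with regularity conditions such as continuity or differentiability of F and S and a sufficiently rich domain of propositions; it is typically such auxiliary conditions that determine how strong a philosophical conclusion the theorem can support.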
Abstract:
We present an experimental analysis of quadrature entanglement produced from a pair of amplitude squeezed beams. The correlation matrix of the state is characterized within a set of reasonable assumptions, and the strength of the entanglement is gauged using measures of the degree of inseparability and the degree of Einstein-Podolsky-Rosen (EPR) paradox. We introduce controlled decoherence in the form of optical loss to the entangled state, and demonstrate qualitative differences in the response of the degrees of inseparability and EPR paradox to this loss. The entanglement is represented on a photon number diagram that provides an intuitive and physically relevant description of the state. We calculate efficacy contours for several quantum information protocols on this diagram, and use them to predict the effectiveness of our entanglement in those protocols.
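The two gauges named in the abstract are standard continuous-variable criteria. With quadratures normalised so that [X_j, P_j] = i and the vacuum variance of each quadrature is 1/2 (the paper's own normalisation may differ), the Duan-Simon inseparability criterion and the Reid EPR criterion read

\[ \Delta^2(\hat X_A - \hat X_B) + \Delta^2(\hat P_A + \hat P_B) < 2, \qquad \Delta^2_{\mathrm{inf}}(\hat X_A \mid \hat X_B)\,\Delta^2_{\mathrm{inf}}(\hat P_A \mid \hat P_B) < \tfrac{1}{4}, \]

where the inference variances \Delta^2_{\mathrm{inf}} measure how well one party's quadratures can be predicted from the other's measurements. The first inequality witnesses inseparability; the second, stronger condition witnesses the apparent EPR paradox, consistent with the qualitatively different responses to loss noted above.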
Abstract:
We present a fully quantum mechanical treatment of the nondegenerate optical parametric oscillator both below and near threshold. This is a nonequilibrium quantum system with a critical point phase transition, that is also known to exhibit strong yet easily observed squeezing and quantum entanglement. Our treatment makes use of the positive P representation and goes beyond the usual linearized theory. We compare our analytical results with numerical simulations and find excellent agreement. We also carry out a detailed comparison of our results with those obtained from stochastic electrodynamics, a theory obtained by truncating the equation of motion for the Wigner function, with a view to locating regions of agreement and disagreement between the two. We calculate commonly used measures of quantum behavior including entanglement, squeezing, and Einstein-Podolsky-Rosen (EPR) correlations as well as higher order tripartite correlations, and show how these are modified as the critical point is approached. These results are compared with those obtained using two degenerate parametric oscillators, and we find that in the near-critical region the nondegenerate oscillator has stronger EPR correlations. In general, the critical fluctuations represent an ultimate limit to the possible entanglement that can be achieved in a nondegenerate parametric oscillator.