833 results for 090608 Renewable Power and Energy Systems Engineering (excl. Solar Cells)
Abstract:
Abelian and non-Abelian gauge theories are of central importance in many areas of physics. In condensed matter physics, Abelian U(1) lattice gauge theories arise in the description of certain quantum spin liquids. In quantum information theory, Kitaev’s toric code is a Z(2) lattice gauge theory. In particle physics, Quantum Chromodynamics (QCD), the non-Abelian SU(3) gauge theory of the strong interactions between quarks and gluons, is nonperturbatively regularized on a lattice. Quantum link models extend the concept of lattice gauge theories beyond the Wilson formulation, and are well suited for both digital and analog quantum simulation using ultracold atomic gases in optical lattices. Since quantum simulators do not suffer from the notorious sign problem, they open the door to studies of the real-time evolution of strongly coupled quantum systems, which are impossible with classical simulation methods. A plethora of interesting lattice gauge theories suggests itself for quantum simulation, which should allow us to address very challenging problems, ranging from confinement and deconfinement, or chiral symmetry breaking and its restoration at finite baryon density, to color superconductivity and the real-time evolution of heavy-ion collisions, first in simpler model gauge theories and ultimately in QCD.
Abstract:
Cocoa-based small-scale agriculture is the most important source of income for most farming families in the region of Alto Beni in the sub-humid foothills of the Andes. Cocoa is grown in cultivation systems of varying ecological complexity. The plantations are highly susceptible to climate change impacts. Local cocoa producers mention heat waves, droughts, floods and plant diseases as the main impacts affecting plants and working conditions, and they associate these impacts with global climate change. From a sustainable regional development point of view, cocoa farms need to become more resilient in order to cope with the climate change related effects that are putting cocoa-based livelihoods at risk. This study assesses agroecosystem resilience under three different cocoa cultivation systems (successional agroforestry, simple agroforestry and common practice monocultures). As a first step, farmers’ perceptions of climate change impacts were assessed and eight indicators of agroecological resilience were derived in a transdisciplinary process (focus groups and workshop) based on farmers’ and scientists’ knowledge. These indicators (soil organic matter, depth of Ah horizon, soil bulk density, tree species diversity, crop variety diversity, ant species diversity, cocoa yields and infestation of cocoa trees with Moniliophthora perniciosa) were then surveyed on 15 cocoa farms and compared for the three different cultivation systems. Some of the socio-economic aspects of resilience were covered by evaluating the role of cocoa cooperatives and organic certification in transitioning to more resilient cocoa farms (interviews with 15 cocoa farmers combined with five expert interviews). Agroecosystem resilience was higher under the two agroforestry systems than under common practice monoculture, especially under successional agroforestry.
Both agroforestry systems achieved higher cocoa yields than common practice monoculture due to agroforestry farmers’ enhanced knowledge regarding cocoa cultivation. Knowledge sharing was promoted by local organizations facilitating organic certification. These organizations were thus found to enhance the social process of farmers’ integration into cooperatives and their reorientation toward organic principles and diversified agroforestry.
Abstract:
Ore-forming and geoenvironmental systems commonly involve coupled fluid flow and chemical reaction processes. Advanced numerical methods and computational modeling have become indispensable tools for simulating such processes in recent years, enabling many hitherto unsolvable geoscience problems to be addressed. For example, computational modeling has been successfully used to solve ore-forming and mine-site contamination/remediation problems, in which fluid flow and geochemical processes play important roles in the controlling dynamic mechanisms. The main purpose of this paper is to present a generalized overview of: (1) the various classes and models associated with fluid flow/chemically reacting systems, in order to highlight possible opportunities and developments for the future; (2) some more general issues that need attention in the development of computational models and codes for simulating ore-forming and geoenvironmental systems; (3) the progress achieved in geochemical modeling over the past 50 years or so; (4) the general methodology for modeling ore-forming and geoenvironmental systems; and (5) future development directions associated with modeling of ore-forming and geoenvironmental systems.
Abstract:
Excavations of Neolithic (4000–3500 BC) and Late Bronze Age (1200–800 BC) wetland sites on the northern Alpine periphery have produced astonishing and detailed information about the life and human environment of prehistoric societies. It is even possible to reconstruct settlement histories and settlement dynamics, which suggest a high degree of mobility during the Neolithic. Archaeological finds, such as pottery, show local typological developments in addition to foreign influences. Furthermore, exogenous lithic forms indicate far-reaching interaction. Many hundreds of bronze artefacts are recorded from the Late Bronze Age settlements, demonstrating that some wetland sites were centres of bronzework production. Exogenous forms of bronzework are relatively rare in the wetland settlements during the Late Bronze Age. However, the products produced in the lake-settlements can be found widely across central Europe, indicating their continued involvement in interregional exchange partnerships. Potential motivations and dynamics of the relationships between sites and other regions of Europe will be detailed using case studies focussing on the settlements Seedorf Lobsigensee (BE), Concise (VD), and Sutz-Lattrigen Hauptstation innen (BE), and an initial assessment of intra-site connectivity through Network Analysis of sites within the region of Lake Neuchâtel, Lake Biel, and Lake Murten.
Abstract:
Voting power is commonly measured using a probability. But what kind of probability is this? Is it a degree of belief or an objective chance or some other sort of probability? The aim of this paper is to answer this question. The answer depends on the use to which a measure of voting power is put. Some objectivist interpretations of probabilities are appropriate when we employ such a measure for descriptive purposes. By contrast, when voting power is used to normatively assess voting rules, the probabilities are best understood as classical probabilities, which count possibilities. This is so because, from a normative stance, voting power is most plausibly taken to concern rights and thus possibilities. The classical interpretation also underwrites the use of the Bernoulli model upon which the Penrose/Banzhaf measure is based.
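The Bernoulli model mentioned above treats each voter as voting yes or no independently with probability 1/2, so a voter's Penrose/Banzhaf power is the probability of being pivotal, i.e. of turning a losing coalition into a winning one. A minimal sketch for a weighted voting game (the weights and quota below are illustrative, not from the paper):

```python
from itertools import product

def banzhaf(weights, quota):
    """Non-normalized Banzhaf measure by exhaustive enumeration.

    Under the Bernoulli model each voter votes yes/no independently with
    probability 1/2, so voter i's power equals the number of coalitions in
    which i is pivotal, divided by the 2^(n-1) equally likely coalitions
    of the other voters."""
    n = len(weights)
    swings = [0] * n
    for coalition in product([0, 1], repeat=n):
        total = sum(w for w, c in zip(weights, coalition) if c)
        for i in range(n):
            # Voter i is pivotal if the coalition wins with i but loses without i.
            if coalition[i] and total >= quota and total - weights[i] < quota:
                swings[i] += 1
    return [s / 2 ** (n - 1) for s in swings]
```

For example, with weights (3, 2, 1) and quota 4, the large voter is pivotal in three of the four coalitions of the others, while each small voter is pivotal in one, giving powers 3/4, 1/4 and 1/4.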
Abstract:
Both obesity and asthma are highly prevalent, complex diseases modified by multiple factors. Genetic, developmental, lung mechanical, immunological and behavioural factors have all been suggested as playing a causal role between the two entities; however, their complex mechanistic interactions are still poorly understood and evidence of causality in children remains scant. Equally lacking is evidence of effective treatment strategies, despite the fact that imbalances at vulnerable phases in childhood can impact long-term health. This review is targeted at both clinicians frequently faced with the dilemma of how to investigate and treat the obese asthmatic child and researchers interested in the topic. Highlighting the breadth of the spectrum of factors involved, this review collates evidence regarding the investigation and treatment of asthma in obese children, particularly in comparison with current approaches in 'difficult-to-treat' childhood asthma. Finally, the authors propose hypotheses for future research from a systems-based perspective.
Abstract:
Introduction to a special issue
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories’ efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one’s bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker’s quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker’s type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller.
Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers and rates of trade are high, in line with the model’s qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
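The partition-function representation can be made concrete with a small sketch: a coalition's worth is allowed to depend on the entire coalition structure, which is exactly how externalities enter the model. The helper below enumerates all coalition structures of a player set and picks the one with the highest total worth; the function names and the example worth function are illustrative, not taken from the thesis.

```python
def partitions(items):
    """Enumerate all partitions (coalition structures) of a list of players."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # Put `first` into each existing coalition, or into a new singleton.
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def efficient_structure(players, worth):
    """Return the coalition structure maximizing total worth.

    `worth(coalition, structure)` receives the whole structure, so a
    coalition's value may depend on how outsiders are organized
    (externalities), as in a partition function."""
    return max(partitions(list(players)),
               key=lambda s: sum(worth(c, s) for c in s))
```

With a superadditive worth function such as `len(c) ** 2`, the grand coalition maximizes total worth; a worth function that rewards being in a small coalition when others merge would instead capture free-riding incentives of the kind discussed above.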
Abstract:
Indoor positioning has attracted considerable attention for decades due to the increasing demand for location based services. In the past years, although numerous methods have been proposed for indoor positioning, it is still challenging to find a convincing solution that combines high positioning accuracy and ease of deployment. Radio-based indoor positioning has emerged as a dominant method due to its ubiquity, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated in the area of indoor positioning for decades. However, it is prone to multipath propagation and hence fingerprinting has become the most commonly used method for indoor positioning using RSSI. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map prior to experiments, which makes the deployment of the positioning system very time consuming. Using time information for radio-based indoor positioning is instead challenged by the need for time synchronization among anchor nodes and by limited timestamp accuracy. Besides radio-based positioning methods, intensive research has been conducted to make use of inertial sensors for indoor tracking due to the fast development of smartphones. However, these methods are normally prone to accumulative errors and might not be available for some applications, such as passive positioning. This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we exploit information about radio signals obtained from physical-layer processing, such as timestamps and channel information. The contributions in this thesis can be divided into two parts: time-based positioning and channel information based positioning.
First, for time-based indoor positioning (especially for narrow-band signals), we address the challenges of compensating synchronization offsets among anchor nodes, designing high-resolution timestamps, and developing accurate positioning methods. Second, we work on range-based positioning methods with channel information to passively locate and track WiFi targets; range-based methods require much less calibration effort than fingerprinting and thus ease deployment. By designing some novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy solely relying on radio signals, and our proposed enhanced particle filter significantly outperforms the other commonly used range-based positioning algorithms, e.g., a traditional particle filter, an extended Kalman filter and trilateration algorithms. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning to fuse inertial sensor and channel information to track indoor targets, which achieves higher tracking accuracy than tracking methods solely relying on either radio signals or inertial sensors.
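The trilateration step for stationary targets can be sketched as a linear least-squares problem: squaring each range equation |x - a_i|^2 = d_i^2 and subtracting one reference equation eliminates the quadratic term in x. The sketch below assumes known 2-D anchor positions and noisy range estimates; the function and variable names are illustrative, not the thesis's implementation.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position estimate from anchor coordinates and ranges.

    Subtracting the last anchor's squared-range equation from each of the
    others linearizes the system:
        2 (a_n - a_i)^T x = d_i^2 - d_n^2 - |a_i|^2 + |a_n|^2
    which is then solved in a least-squares sense (robust to range noise
    when more than three anchors are available)."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2 * (anchors[-1] - anchors[:-1])
    b = (dists[:-1] ** 2 - dists[-1] ** 2
         - np.sum(anchors[:-1] ** 2, axis=1)
         + np.sum(anchors[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact ranges from three non-collinear anchors this recovers the target position exactly; with noisy ranges and redundant anchors it returns the least-squares fit, which is the typical input to a tracking filter such as the particle filters described above.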