962 results for DC power transmission
Abstract:
Wireless Sensor Networks (WSNs) offer a new solution for distributed monitoring, processing and communication. A first challenge is the stringent energy constraints to which sensing nodes are typically subjected. WSNs are often battery powered and placed where it is not possible to recharge or replace batteries. Energy can be harvested from the external environment, but it is a limited resource that must be used efficiently. Energy efficiency is therefore a key requirement for a credible WSN design. From the power source's perspective, aggressive energy management techniques remain the most effective way to prolong the lifetime of a WSN. A new adaptive algorithm will be presented, which minimizes the consumption of wireless sensor nodes in sleep mode when the power source has to be regulated using DC-DC converters. Another important aspect addressed is time synchronisation in WSNs. WSNs are used for real-world applications where physical time plays an important role. An innovative low-overhead synchronisation approach will be presented, based on a Temperature Compensation Algorithm (TCA). The last aspect addressed relates to self-powered WSNs with Energy Harvesting (EH) solutions. Wireless sensor nodes with EH require some form of energy storage, which enables systems to continue operating during periods of insufficient environmental energy. However, the size of the energy storage strongly restricts the use of WSNs with EH in real-world applications. A new approach will be presented, which enables computation to be sustained during intermittent power supply. The discussed approaches will be used for real-world WSN applications. The first scenario presented draws on the experience gathered during a European project (the 3ENCULT Project), regarding the design and implementation of an innovative network for monitoring heritage buildings. 
The second scenario is related to the experience with Telecom Italia, regarding the design of smart energy meters for monitoring the usage of household appliances.
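The temperature-dependent clock drift that motivates the TCA mentioned above can be sketched as follows. This is an illustrative model, not the thesis's algorithm: a 32.768 kHz tuning-fork crystal slows roughly quadratically away from a turnover temperature, so a node that knows its temperature can correct the number of timer ticks it programs. The coefficient k and turnover point T0 below are typical datasheet values, assumed here.

```python
# Illustrative sketch (not the thesis's TCA): software compensation of a
# 32.768 kHz tuning-fork crystal, whose frequency varies roughly
# quadratically with temperature: f(T) = f0 * (1 + k*(T - T0)^2),
# with k ~ -0.034 ppm/degC^2 and turnover point T0 ~ 25 degC (assumed).

F_NOMINAL = 32768.0      # Hz
K_PPM = -0.034e-6        # quadratic temperature coefficient (1/degC^2)
T0 = 25.0                # turnover temperature (degC)

def actual_frequency(temp_c):
    """Crystal frequency at a given temperature (Hz)."""
    return F_NOMINAL * (1.0 + K_PPM * (temp_c - T0) ** 2)

def compensated_ticks(interval_s, temp_c):
    """Ticks to program so that `interval_s` of real time elapses,
    given the temperature-dependent crystal frequency."""
    return interval_s * actual_frequency(temp_c)

# Without compensation, a node waking every 60 s at -10 degC would drift:
drift_s = 60.0 * (actual_frequency(-10.0) - F_NOMINAL) / F_NOMINAL
```

A few milliseconds per minute seems small, but it accumulates to seconds per day, which is why low-overhead compensation matters for synchronised duty cycling.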
Abstract:
The energy harvesting research field has grown considerably in the last decade due to increasing interest in energy-autonomous sensing systems, which require smart and efficient interfaces for extracting power from energy sources, together with power management (PM) circuits. This thesis investigates the design trade-offs for minimizing the intrinsic power of PM circuits, in order to allow operation with very weak energy sources. For validation purposes, three different integrated power converter and PM circuits for energy harvesting applications are presented. They have been designed for nano-power operation, and the single-source converters can operate with input power lower than 1 μW. The first IC is a buck-boost converter for piezoelectric transducers (PZ) implementing Synchronous Electrical Charge Extraction (SECE), a non-linear energy extraction technique. Moreover, a Residual Charge Inversion technique is exploited for extracting energy from PZ with weak and irregular excitations (i.e. lower voltage), and the implemented PM policy, named Two-Way Energy Storage, considerably reduces the start-up time of the converter, improving the overall conversion efficiency. The second proposed IC is a general-purpose buck-boost converter for low-voltage DC energy sources, up to 2.5 V. An ultra-low-power MPPT circuit has been designed in order to track variations of source power. Furthermore, a capacitive boost circuit has been included, allowing the converter to start up from a source voltage VDC0 = 223 mV. A nano-power programmable linear regulator is also included in order to provide a stable voltage to the load. The third IC implements a heterogeneous multi-source buck-boost converter. It provides up to 9 independent input channels, of which 5 are specific to PZ (with SECE) and 4 to DC energy sources with MPPT. 
The inductor is shared among channels, and an arbiter, designed with asynchronous logic to reduce energy consumption, avoids simultaneous access to the buck-boost core with a dynamic schedule based on source priority.
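The maximum-power-point tracking mentioned for the second IC can be illustrated with a generic perturb-and-observe loop (the IC's actual ultra-low-power MPPT circuit is analog hardware whose internals are not given here; the Thevenin source values below are assumptions). For a Thevenin source, maximum power transfer occurs at half the open-circuit voltage, which the loop should find.

```python
# Hedged sketch of a generic perturb-and-observe MPPT loop; the source is
# modeled as a Thevenin equivalent (VOC, RS assumed), for which maximum
# power transfer occurs at V = VOC / 2.

VOC = 0.5      # open-circuit voltage of the harvester (V), assumed
RS = 100.0     # source resistance (ohm), assumed

def source_power(v):
    """Power delivered at operating voltage v for a Thevenin source."""
    i = (VOC - v) / RS
    return v * i

def perturb_and_observe(v=0.1, step=0.005, iters=200):
    """Perturb the operating voltage; reverse direction when power drops."""
    p_prev = source_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = source_power(v)
        if p < p_prev:        # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

In steady state the operating point oscillates within one step of the true maximum power point, which is the usual trade-off between tracking speed and ripple in perturb-and-observe schemes.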
Abstract:
The aim of this thesis is to develop an in-depth analysis of inductive power transfer (or wireless power transfer, WPT) along a metamaterial composed of cells arranged in a planar configuration, in order to deliver power to a receiver sliding over them. In this way, the problem of efficiency being strongly affected by the weak coupling between emitter and receiver can be obviated, and the transmission distance can be significantly increased. This study uses a circuital approach and the magnetoinductive wave (MIW) theory, in order to explain simply the behavior of the transmission coefficient and efficiency from the circuital and experimental points of view. Moreover, flat spiral resonators are used as metamaterial cells, as particularly indicated in the literature for WPT metamaterials operating at MHz frequencies (5-30 MHz). Finally, this thesis presents a complete electrical characterization of multilayer and multiturn flat spiral resonators and, in particular, proposes a new approach to resistance calculation through finite element simulations, in order to account for all the high-frequency parasitic effects. Multilayer and multiturn flat spiral resonators are studied in order to decrease the operating frequency down to the kHz range, maintaining small external dimensions and allowing the metamaterials to be supplied by electronic power converters (resonant inverters).
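The MIW behaviour referred to above follows from the standard lossless dispersion relation for a chain of magnetically coupled RLC cells with nearest-neighbour coupling, ω₀²/ω² = 1 + κ·cos(ka), κ = 2M/L. A small numeric sketch (component values are illustrative, not the thesis's resonators):

```python
import math

# Lossless magnetoinductive-wave dispersion for a line of magnetically
# coupled RLC resonators (nearest-neighbour coupling only):
#   w0^2 / w^2 = 1 + kappa * cos(k*a),  kappa = 2*M/L.
# Component values below are illustrative, not taken from the thesis.

L_CELL = 10e-6        # cell inductance (H), assumed
C_CELL = 10e-9        # cell capacitance (F), assumed
M = 1e-6              # mutual inductance between adjacent cells (H), assumed

w0 = 1.0 / math.sqrt(L_CELL * C_CELL)   # cell resonant frequency (rad/s)
kappa = 2.0 * M / L_CELL                 # coupling coefficient

def miw_frequency(ka):
    """Frequency (rad/s) of the MIW with normalized wavenumber ka."""
    return w0 / math.sqrt(1.0 + kappa * math.cos(ka))

# Propagation is only possible inside the passband between the band edges:
w_low = miw_frequency(0.0)          # ka = 0
w_high = miw_frequency(math.pi)     # ka = pi
```

The passband width scales with the coupling κ, which is why the cell geometry (and hence M/L) controls how far power can propagate along the metamaterial.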
Abstract:
Rising energy costs and increased CO2 emissions have brought research on thermoelectric (TE) materials into focus. The suitability of a material for use in a TE module is expressed by the figure of merit ZT = α²σTκ⁻¹ (Seebeck coefficient α, electrical conductivity σ, temperature T and thermal conductivity κ). The aim is to raise ZT by lowering the thermal conductivity through nanostructuring, without changing the power factor α²σ. To date, the TE properties of the bulk half-Heusler materials TiNiSn and Zr0.5Hf0.5NiSn have been studied extensively. Using dc magnetron sputter deposition, semiconducting TiNiSn and Zr0.5Hf0.5NiSn films have now been produced for the first time. Strongly textured polycrystalline films were deposited on MgO (100) substrates at a substrate temperature of 450 °C. Perpendicular to the surface, grain sizes of 55 nm were observed, with rocking-curve widths at half maximum below 1°. Structural analyses were carried out by X-ray diffraction (XRD). With growth rates of 1 nm s⁻¹, film thicknesses of more than one µm could be produced in a very short time. TiNiSn showed the highest power factor, 0.4 mW K⁻² m⁻¹ (at 550 K). In addition, a thermal conductivity of 2.8 W m⁻¹ K⁻¹ was determined at room temperature using the differential 3ω method. It is known that thermal conductivity decreases with mass variation. Since interface scattering of phonons is also expected to reduce it, superlattices were fabricated by depositing TiNiSn and Zr0.5Hf0.5NiSn in alternation. The very high crystal quality of the superlattices, with their sharp interfaces, was confirmed by satellite peaks and scanning transmission electron microscopy (STEM). 
For a superlattice with a periodicity of 21 nm (10.5 nm each of TiNiSn and Zr0.5Hf0.5NiSn), a power factor of 0.77 mW K⁻² m⁻¹ was demonstrated at 550 K (α = 80 µV K⁻¹; ρ = 8.2 µΩ m). A superlattice with a periodicity of 8 nm exhibited a thermal conductivity of 1 W m⁻¹ K⁻¹ perpendicular to the interfaces, confirming the reduction of thermal conductivity by the half-Heusler superlattices. Since titanium, zirconium and hafnium are isoelectronic, the electronic band structure, and with it the power factor perpendicular to the interfaces, is expected to be only weakly affected.
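A quick consistency check of the quoted superlattice numbers (treating the quoted 8.2 µΩ m as an electrical resistivity, as the unit indicates; note the thermal conductivity was measured at room temperature on a different-period superlattice, so the resulting ZT is only indicative):

```python
# The power factor PF = alpha^2 / rho should reproduce the quoted
# 0.77 mW K^-2 m^-1, and ZT = PF * T / kappa gives a rough figure of
# merit. Combining a 550 K power factor with a room-temperature kappa
# from a different-period superlattice is only an order-of-magnitude
# illustration, not a measured ZT.

alpha = 80e-6        # Seebeck coefficient (V/K)
rho = 8.2e-6         # electrical resistivity (ohm m)
T = 550.0            # temperature (K)
kappa = 1.0          # thermal conductivity (W m^-1 K^-1)

pf = alpha ** 2 / rho        # power factor (W K^-2 m^-1)
zt = pf * T / kappa          # dimensionless figure of merit
```

The computed power factor of about 0.78 mW K⁻² m⁻¹ matches the quoted 0.77 mW K⁻² m⁻¹, confirming the internal consistency of the reported α and ρ.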
Abstract:
The electric utility business is an inherently dangerous area to work in, with employees exposed to many potential hazards daily. One such hazard is an arc flash: a rapid release of energy, referred to as incident energy, caused by an electric arc. Due to the random nature and occurrence of an arc flash, one can only prepare for and minimize the extent of harm to oneself and other employees, and damage to equipment, due to such a violent event. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that an arc-flash assessment be performed by companies whose employees work on or near energized equipment, to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) current short-circuit and relay-coordination software package, ASPEN OneLiner™, one of the first software packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. At the same time, the package is benchmarked against the equations provided in IEEE Std 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards, the analysis methods (both software-based and empirically derived equations), issues of concern with the calculation methods, and the work conducted at MP. This work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
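The flavour of such incident-energy calculations can be sketched with the Lee method, one of the methods discussed alongside IEEE Std 1584-2002. The coefficient below is the commonly cited form of the Lee equation; verify against the standard before any real assessment, and note the example system values are assumptions.

```python
# Sketch of the Lee method for arc-flash incident energy (coefficient as
# commonly cited; verify against IEEE Std 1584 before real use):
#   E = 2.142e6 * V * Ibf * t / D^2
# E in J/cm^2, V in kV, Ibf (bolted fault current) in kA,
# t (arc duration) in s, D (working distance) in mm.

def lee_incident_energy(v_kv, ibf_ka, t_s, d_mm):
    """Incident energy in J/cm^2 (Lee method)."""
    return 2.142e6 * v_kv * ibf_ka * t_s / d_mm ** 2

# Hypothetical example: 13.8 kV system, 10 kA bolted fault,
# 0.5 s clearing time, 910 mm working distance.
e_jcm2 = lee_incident_energy(13.8, 10.0, 0.5, 910.0)
e_calcm2 = e_jcm2 / 4.184    # convert to cal/cm^2
```

The inverse-square dependence on working distance and the linear dependence on clearing time are what make faster relaying and greater working distance the two main levers for reducing exposure.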
Abstract:
Electrical Power Assisted Steering (EPAS) will likely be used in future automotive power steering systems. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Motor characteristic variations, which can be indicated by variations of motor parameters such as the coil resistance and the torque constant, directly impart inaccuracies into a control scheme based on the nominal parameter values, and thus the performance of the whole system suffers. The motor controller must address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for BLDC motors are explored. The first algorithm engages a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors with the q-axis current (iq) feedback from several past sampling steps. The controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm uses both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. Study of the first two adaptive algorithms indicates that parameter estimation performance can be improved by using more historical data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validated the superior estimation performance of the EKF. However, its computational complexity and stability may be barriers to practical implementation. 
The fourth algorithm is a model reference adaptive control (MRAC) that utilizes the desired motor characteristics as a reference model. Its stability is guaranteed by Lyapunov's direct method. Simulation shows superior performance in terms of convergence speed and current tracking. These algorithms are compared in closed-loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because of its combination of superior performance and low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions, such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as initialization and rotor angle transducer resolution. This model can also be beneficial during first-stage development in automotive BLDC motor applications.
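As a toy illustration of motor-parameter estimation (not one of the four thesis algorithms), the winding resistance and back-EMF constant can be recovered from steady-state samples by least squares on the simplified q-axis voltage equation v_q ≈ R·i_q + Ke·ω, with the inductance term neglected at steady state. All numeric values are assumed for the sketch.

```python
import numpy as np

# Illustrative batch least-squares estimation of BLDC parameters R and Ke
# from noisy steady-state samples of v_q, i_q and speed w (all values
# assumed; the thesis's recursive/adaptive algorithms are not reproduced).

rng = np.random.default_rng(0)
R_TRUE, KE_TRUE = 0.5, 0.05          # "true" motor parameters, assumed

i_q = rng.uniform(1.0, 10.0, 100)    # sampled q-axis currents (A)
w = rng.uniform(50.0, 300.0, 100)    # sampled speeds (rad/s)
v_q = R_TRUE * i_q + KE_TRUE * w + rng.normal(0.0, 0.01, 100)  # noisy v_q

# Solve v_q = [i_q  w] @ [R, Ke] in the least-squares sense.
A = np.column_stack([i_q, w])
(r_est, ke_est), *_ = np.linalg.lstsq(A, v_q, rcond=None)
```

A recursive variant of this same regression (updating the estimate sample by sample) is the conceptual bridge to the EKF and MRAC approaches compared in the dissertation.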
Abstract:
Wind-power-based generation has been growing rapidly world-wide during the recent past. In order to transmit large amounts of wind power over long distances, system planners may often add series compensation to existing transmission lines, owing to several benefits such as improved steady-state power transfer limit, improved transient stability, and efficient utilization of transmission infrastructure. The application of series capacitors has, however, raised concerns about resonant interactions, such as subsynchronous resonance (SSR) with conventional turbine-generators. Wind turbine-generators may also be susceptible to such resonant interactions, but not much information is available in the literature, and even engineering standards are yet to address these issues. The motivating problem for this research is an actual system switching event that resulted in undamped oscillations in a 345-kV series-compensated, typical ring-bus power system configuration. Based on time-domain ATP (Alternative Transients Program) modeling, simulations and analysis of system event records, the occurrence of subsynchronous interactions within the existing 345-kV series-compensated power system has been investigated. The effects of various small-signal and large-signal power system disturbances, with both identical and non-identical wind turbine parameters (such as with a statistical spread), have been evaluated. The effect of parameter variations on subsynchronous oscillations has been quantified using 3D-DFT plots, and the oscillations have been identified as due to electrical self-excitation effects rather than torsional interaction. Further, the generator no-load reactance and the rotor-side converter inner-loop controller gains have been identified as having the greatest sensitivity in either damping or exacerbating the self-excited oscillations. 
A higher-order spectral analysis method based on modified Prony estimation has been successfully applied to the field records, identifying dominant 9.79 Hz subsynchronous oscillations. Recommendations have been made for exploring countermeasures.
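The simpler end of this mode-identification task can be sketched as follows: a plain FFT peak search on a synthetic, lightly damped 9.79 Hz oscillation (the thesis itself uses a modified Prony estimation, which additionally recovers the damping of each mode; the record below is synthetic, not a field record).

```python
import numpy as np

# Identify the dominant subsynchronous frequency in a synthetic record by
# FFT peak search. Sampling rate and damping are assumed, illustrative values.

fs = 960.0                           # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)   # 10 s record -> 0.1 Hz resolution
signal = np.exp(-0.2 * t) * np.cos(2 * np.pi * 9.79 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
f_dominant = freqs[np.argmax(spectrum)]
```

An FFT locates the mode only to within the bin resolution and says nothing directly about damping, which is precisely why parametric methods such as Prony estimation are preferred for sparse, decaying event records.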
Abstract:
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option that does not require communication between microgrid components. Eliminating this single point of potential failure is especially important in the remote, islanded microgrids considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface in a higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage and the wind speed at the current time. An approach for optimizing this droop control surface in order to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high-dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high-dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example containing an energy storage device as well as multiple sources and loads. Finally, the optimal high-dimension droop control method is applied with a solar resource, using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
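The two-variable droop relationship described above can be sketched as follows; the surface shape and all numeric values are placeholders, not the optimized surface from the dissertation.

```python
# Sketch of a "higher-dimension" droop relationship: the wind source's
# power set-point depends on both the dc bus voltage and the current wind
# speed. All constants and the wind-power model are assumed placeholders.

V_NOM = 380.0        # nominal dc bus voltage (V), assumed
K_DROOP = 50.0       # droop gain (W per V of bus deviation), assumed

def wind_power_available(v_wind):
    """Crude cubic wind-power model, capped at an assumed 1 kW rating."""
    return min(1000.0, 0.8 * v_wind ** 3)

def droop_setpoint(v_bus, v_wind):
    """Power command (W): deliver available wind power, curtailed as the
    bus voltage rises above nominal (indicating excess generation)."""
    p = wind_power_available(v_wind) - K_DROOP * (v_bus - V_NOM)
    return max(0.0, min(p, wind_power_available(v_wind)))
```

Because the set-point tracks the wind-speed axis of the surface, the source can deliver all available wind power at nominal bus voltage, which is exactly what a one-dimensional voltage-only droop cannot do.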
Abstract:
Nanoscale research in energy storage has recently focused on investigating the properties of nanostructures in order to increase energy density, power rate, and capacity. To better understand the intrinsic properties of nanomaterials, a new and advanced in situ system was designed that allows atomic-scale observation of materials under external fields. A special holder equipped with a scanning tunneling microscopy (STM) probe inside a transmission electron microscopy (TEM) system was used to perform the in situ studies on the mechanical, electrical, and electrochemical properties of nanomaterials. The nanostructures of titanium dioxide (TiO2) nanotubes were characterized by electron imaging, diffraction, and chemical analysis techniques inside the TEM. TiO2 nanotubes are candidate anode materials for lithium-ion batteries, so it is necessary to study their morphological, mechanical, electrical, and electrochemical properties at the atomic level. The synthesis of TiO2 nanotubes showed that the aspect ratio of TiO2 could be controlled by processing parameters such as anodization time and voltage. Ammonium hydroxide (NH4OH) treated TiO2 nanotubes showed unexpected instability: observation revealed that the nanotubes disintegrated into nanoparticles and the tubular morphology vanished after annealing. The nitrogen compounds incorporated in surface defects weaken the nanotubes and result in their collapse into nanoparticles during phase transformation. Next, the electrical and mechanical properties of TiO2 nanotubes were studied with the in situ TEM system. The phase transformation of anatase TiO2 nanotubes into rutile nanoparticles was studied by in situ Joule heating. The results showed that single anatase TiO2 nanotubes broke into ultrafine anatase nanoparticles. On further increasing the bias, the nanoclusters of anatase particles underwent a solid-state reaction and grew into stable large rutile nanoparticles. 
The relationship between the mechanical and electrical properties of TiO2 nanotubes was also investigated. Initially, both anatase and amorphous TiO2 nanotubes were characterized using I-V tests to demonstrate their semiconducting properties. Observation of mechanical bending of TiO2 nanotubes revealed that the conductivity increased when bending deformation occurred: the defects created by deformation assist electron transport and thereby increase the conductivity. Lastly, the electrochemical properties of amorphous TiO2 nanotubes were characterized with the in situ TEM system, providing direct chemical and imaging evidence of lithium-induced atomic ordering in amorphous TiO2 nanotubes. The results indicated that lithiation started with the valence reduction of Ti4+ to Ti3+, leading to a LixTiO2 intercalation compound. The continued intercalation of Li ions in the TiO2 nanotubes triggered an amorphous-to-crystalline phase transformation. The crystals formed as nano-islands and were identified as Li2Ti2O4 with a cubic structure (a = 8.375 Å). This phase transformation is associated with local inhomogeneities in the Li distribution. Based on these observations, a new reaction mechanism is proposed to explain the first-cycle lithiation behavior of amorphous TiO2 nanotubes.
Abstract:
For a microgrid with a high penetration level of renewable energy, energy storage use becomes more integral to the system performance due to the stochastic nature of most renewable energy sources. This thesis examines the use of droop control of an energy storage source in dc microgrids in order to optimize a global cost function. The approach involves using a multidimensional surface to determine the optimal droop parameters based on load and state of charge. The optimal surface is determined using knowledge of the system architecture and can be implemented with fully decentralized source controllers. The optimal surface control of the system is presented. Derivations of a cost function along with the implementation of the optimal control are included. Results were verified using a hardware-in-the-loop system.
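The idea of selecting droop parameters off-line by minimizing a global cost can be sketched with a toy model; the thesis's actual cost function and system model are not reproduced, and everything below is an assumed stand-in. For each (load, state-of-charge) operating point, a grid search picks the storage droop gain minimizing a cost that penalizes both bus-voltage deviation and battery throughput at low state of charge.

```python
# Toy off-line optimization of a storage droop gain over a cost function
# (all models and constants assumed; the optimal values over a grid of
# (load, SOC) points would form the decentralized droop surface).

V_NOM = 48.0   # nominal bus voltage (V), assumed

def bus_voltage(load_w, k):
    """Toy steady-state model: a stiff source covers up to 200 W; storage
    covers the deficit through droop, so a larger gain k means a smaller
    bus-voltage deviation."""
    deficit = max(0.0, load_w - 200.0)
    return V_NOM - deficit / (k + 10.0)

def cost(load_w, soc, k):
    """Penalize voltage deviation plus battery throughput, the latter
    weighted more heavily when the state of charge is low."""
    v = bus_voltage(load_w, k)
    storage_power = k * (V_NOM - v)
    return (V_NOM - v) ** 2 + (1.0 - soc) * storage_power

def optimal_gain(load_w, soc):
    """Grid-search the droop gain minimizing cost at this operating point."""
    gains = [float(g) for g in range(1, 101)]
    return min(gains, key=lambda k: cost(load_w, soc, k))
```

The resulting behaviour is the intended one: at low state of charge the optimizer backs off the droop gain (sparing the battery at the cost of more voltage deviation), while at high state of charge it regulates the bus tightly.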
Abstract:
As microgrid power systems gain prevalence and renewable energy comprises greater and greater portions of distributed generation, energy storage becomes important to offset the higher variance of renewable energy sources and maximize their usefulness. One of the emerging techniques is to utilize a combination of lead-acid batteries and ultracapacitors to provide both short and long-term stabilization to microgrid systems. The different energy and power characteristics of batteries and ultracapacitors imply that they ought to be utilized in different ways. Traditional linear controls can use these energy storage systems to stabilize a power grid, but cannot effect more complex interactions. This research explores a fuzzy logic approach to microgrid stabilization. The ability of a fuzzy logic controller to regulate a dc bus in the presence of source and load fluctuations, in a manner comparable to traditional linear control systems, is explored and demonstrated. Furthermore, the expanded capabilities (such as storage balancing, self-protection, and battery optimization) of a fuzzy logic system over a traditional linear control system are shown. System simulation results are presented and validated through hardware-based experiments. These experiments confirm the capabilities of the fuzzy logic control system to regulate bus voltage, balance storage elements, optimize battery usage, and effect self-protection.
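A minimal sketch of such a fuzzy controller, with three triangular membership sets on the bus-voltage error and weighted-average defuzzification (illustrative only; the rule base, membership shapes and numeric ranges of the thesis are not reproduced):

```python
# Minimal Mamdani-style fuzzy controller for dc bus regulation: fuzzify
# the error V_ref - V_bus into three triangular sets, map each to a
# storage-current consequent, and defuzzify by weighted average.
# All membership breakpoints and output levels are assumed.

def tri(x, a, b, c):
    """Triangular membership with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_storage_current(error_v):
    """Storage current command (A): inject when the bus sags, absorb when
    it rises. Positive error means the bus is below its reference."""
    error_v = max(-7.5, min(7.5, error_v))   # saturate the input universe
    neg = tri(error_v, -10.0, -5.0, 0.0)     # bus too high -> absorb
    zero = tri(error_v, -5.0, 0.0, 5.0)      # bus near reference
    pos = tri(error_v, 0.0, 5.0, 10.0)       # bus too low -> inject
    weights = [neg, zero, pos]
    outputs = [-20.0, 0.0, 20.0]             # consequent singletons (A)
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

Unlike a linear droop, extra rules can be layered onto the same structure (e.g. conditioning the output on state of charge), which is the kind of expanded capability the thesis attributes to the fuzzy approach.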
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to be able to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency were found to increase the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased the power to detect disequilibrium. Discordant sampling was used for several of the tests; it was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one. 
For the simulation methods used here, when there were more than two trait alleles there was a probability, equal to 1 minus the heterozygosity of the marker locus, that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The Transmission Disequilibrium Test (TDT) based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
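The TDT referenced above reduces to a simple McNemar-type count over heterozygous parents; a minimal sketch with hypothetical counts:

```python
# Classical Transmission Disequilibrium Test (TDT): among heterozygous
# parents, count transmissions (b) and non-transmissions (c) of the
# candidate allele to affected offspring. Under no linkage or no
# disequilibrium, (b - c)^2 / (b + c) is asymptotically chi-square
# with 1 degree of freedom.

def tdt_chi_square(b, c):
    """McNemar-type TDT statistic from transmission counts."""
    if b + c == 0:
        return 0.0
    return (b - c) ** 2 / (b + c)

# Hypothetical example: allele transmitted 60 times, not transmitted 40.
chi2 = tdt_chi_square(60, 40)   # exceeds the 3.84 critical value at 5%
```

Because the test conditions on parental genotypes, it is robust to population admixture, which is exactly why the abstract reports no error-rate inflation for the TDT-based tests.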
Abstract:
In this paper, we describe dynamic unicast to increase communication efficiency in opportunistic Information-Centric Networks (ICN). The approach is based on broadcast requests to quickly find content, dynamically creating unicast links to content sources without the need for neighbor discovery. The links are kept temporarily as long as they deliver content and are quickly removed otherwise. Evaluations in mobile networks show that this approach maintains ICN flexibility to support seamless mobile communication and achieves up to 56.6% shorter transmission times compared to broadcast in the case of multiple concurrent requesters. Apart from that, dynamic unicast unburdens listener nodes from processing unwanted content, resulting in lower processing overhead and power consumption at these nodes. The approach can be easily included in existing ICN architectures using only available data structures.
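The link bookkeeping described above might be sketched as follows; the class, method names and timeout value are hypothetical stand-ins, not the paper's actual ICN data structures:

```python
import time

# Sketch of dynamic-unicast bookkeeping: after a broadcast request finds a
# content source, a unicast link is recorded, refreshed on every content
# delivery, and purged once it stops delivering. Names and the timeout
# are illustrative assumptions.

LINK_TIMEOUT = 2.0   # seconds without content before a link is removed

class UnicastLinks:
    def __init__(self):
        self._last_seen = {}          # source id -> time of last delivery

    def on_content_received(self, source, now=None):
        """Create or refresh the unicast link to `source`."""
        self._last_seen[source] = time.monotonic() if now is None else now

    def purge(self, now=None):
        """Drop links that have stopped delivering content."""
        now = time.monotonic() if now is None else now
        stale = [s for s, t in self._last_seen.items()
                 if now - t > LINK_TIMEOUT]
        for s in stale:
            del self._last_seen[s]

    def active(self):
        return set(self._last_seen)
```

Because links are created reactively from observed deliveries and expire on their own, no neighbor-discovery protocol is needed, matching the paper's design goal.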
Abstract:
This paper introduces an area- and power-efficient approach for compressive recording of cortical signals used in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system in terms of area usage can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that, using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
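The compressive-sensing principle underlying the paper can be demonstrated on a synthetic sparse signal: random sub-Nyquist measurements followed by greedy recovery with Orthogonal Matching Pursuit. This is a generic textbook illustration, not the paper's hardware architecture or its reconstruction pipeline; all dimensions and values are assumed.

```python
import numpy as np

# Toy compressive sensing: take m < n random projections of a k-sparse
# signal, then recover it with Orthogonal Matching Pursuit (OMP).

rng = np.random.default_rng(1)
n, m, k = 100, 60, 3                   # signal length, measurements, sparsity

x = np.zeros(n)
x[[7, 42, 83]] = [2.0, -1.5, 1.0]      # synthetic k-sparse signal

phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random measurement matrix
y = phi @ x                                        # compressed samples

def omp(phi, y, k):
    """Greedy sparse recovery: pick the column most correlated with the
    residual, re-fit the selected columns by least squares, repeat."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(phi, y, k)
```

The measurement side (y = Φx) is the only part that must live in the implant; the expensive reconstruction runs off-line, which is what makes the scheme attractive for power-constrained recording.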
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. 
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. 
Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. 
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
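A partition function of the kind used in Chapter 3 can be made concrete for three players; the worths below are hypothetical and chosen to exhibit the free-riding externality discussed above (an outsider earns more when the other two players merge), while the grand coalition remains the efficient structure:

```python
# Toy three-player partition function: worth is assigned per embedded
# coalition (coalition, coalition structure). All values are hypothetical.

S1, S2, S3 = frozenset({1}), frozenset({2}), frozenset({3})
S12, S13, S23 = S1 | S2, S1 | S3, S2 | S3
GRAND = S1 | S2 | S3

partition_function = {
    # finest structure: everyone alone
    (S1, (S1, S2, S3)): 1.0, (S2, (S1, S2, S3)): 1.0, (S3, (S1, S2, S3)): 1.0,
    # pair + outsider: the outsider free-rides on the pair's cooperation
    (S12, (S12, S3)): 3.0, (S3, (S12, S3)): 2.0,
    (S13, (S13, S2)): 3.0, (S2, (S13, S2)): 2.0,
    (S23, (S23, S1)): 3.0, (S1, (S23, S1)): 2.0,
    # grand coalition
    (GRAND, (GRAND,)): 6.0,
}

def total_worth(structure):
    """Aggregate welfare generated by a given coalition structure."""
    return sum(partition_function[(s, structure)] for s in structure)

structures = [(S1, S2, S3), (S12, S3), (S13, S2), (S23, S1), (GRAND,)]
best = max(structures, key=total_worth)   # the efficient structure
```

Even though the grand coalition maximizes total worth here, each player prefers to stay out while the other two merge (2.0 as an outsider versus a share of 6.0 three ways), which is precisely the free-riding incentive the chapter links to inefficiency.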