902 results for DARK ENERGY MODELS


Relevance: 30.00%

Abstract:

The need for efficient, sustainable, and planned utilization of resources is ever more critical. In the U.S. alone, buildings consume 34.8 quadrillion (10^15) BTU of energy annually at a cost of $1.4 trillion. Of this energy, 58% is used for heating and air conditioning.

Several building energy analysis tools have been developed to assess energy demands and lifecycle energy costs in buildings. Such analyses are also essential for an efficient HVAC design that avoids the pitfalls of an under- or over-designed system. DOE-2 is among the most widely known full-building energy analysis models; it also serves as the simulation engine of other prominent software such as eQUEST, EnergyPro, and PowerDOE. It is therefore essential that DOE-2 energy simulations be highly accurate.

Infiltration is an uncontrolled process through which outside air leaks into a building. Studies have estimated that infiltration accounts for up to 50% of a building's energy demand. Considered alongside the annual cost of building energy consumption, this reveals the cost of air infiltration and stresses the need for prominent building energy simulation engines to account accurately for its impact.

In this research, the relative accuracy of current air infiltration calculation methods is evaluated against a detailed multiphysics hygrothermal CFD analysis of the building envelope. The full-scale CFD analysis is based on a meticulous representation of cracking in building envelopes and on real-life conditions. The research found that even the most advanced current infiltration methods, including those in DOE-2, exhibit up to 96.13% relative error versus the CFD analysis.

An Enhanced Model for Combined Heat and Air Infiltration Simulation was developed. The model yielded a 91.6% improvement in relative accuracy over current models, reducing the error versus the CFD analysis to less than 4.5% while requiring less than 1% of the time needed for the full hygrothermal analysis. The algorithm used in the model was demonstrated to be easy to integrate into DOE-2 and other engines as a standalone method for evaluating infiltration heat loads. This will vastly increase the accuracy of such simulation engines while preserving the speed and ease of use that make them so widely adopted in building design.
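The accuracy figures quoted in this abstract reduce to simple relative-error arithmetic. A minimal sketch of how such relative-error and improvement percentages are typically computed, using hypothetical infiltration heat loads rather than the dissertation's data:

```python
def relative_error(predicted, reference):
    """Relative error of a model prediction against a reference value."""
    return abs(predicted - reference) / abs(reference)

def improvement(old_error, new_error):
    """Fractional reduction in error achieved by a new model."""
    return (old_error - new_error) / old_error

# Hypothetical infiltration heat loads (kW): CFD reference vs. two models
cfd = 10.0
legacy_model = 19.6      # a poor legacy estimate (96% relative error)
enhanced_model = 10.4    # an improved estimate (4% relative error)

e_old = relative_error(legacy_model, cfd)
e_new = relative_error(enhanced_model, cfd)
print(round(improvement(e_old, e_new), 3))
```

With these invented numbers the improvement comes out near 0.96, i.e., roughly the scale of the 91.6% figure the abstract reports for its own (different) data.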

Relevance: 30.00%

Abstract:

Mangrove forests are ecosystems susceptible to changing water levels and temperatures due to climate change, as well as to perturbations from tropical storms. Numerical models can be used to project mangrove forest responses to regional and global environmental changes, and the reliability of these models depends on surface energy balance closure. However, for tidal ecosystems the surface energy balance is complex because the energy transport associated with tidal activity remains poorly understood. This study aimed to quantify the impacts of tidal flows on energy dynamics within a mangrove ecosystem. To address this objective, an intensive 10-day study was conducted in a mangrove forest along the Shark River in Everglades National Park, FL, USA. Forest–atmosphere turbulent exchanges of energy were quantified with an eddy covariance system installed on a 30-m-tall flux tower. Energy transport associated with tidal activity was calculated using a coupled mass and energy balance approach. The mass balance included tidal flows and the accumulation of water on the forest floor; the energy balance included temporal changes in enthalpy resulting from tidal flows and temperature changes in the water column. By serving as a net sink or source of available energy, flood waters reduced the impact of high radiational loads on the mangrove forest. Moreover, when the total enthalpy change in the water column was included in the surface energy balance, the regression slope of available energy versus sink terms increased from 0.730 to 0.754 for 30-min periods and from 0.798 to 0.857 for daily daytime sums. Results indicated that tidal inundation provides an important mechanism for heat removal and that tidal exchange should be considered in the surface energy budgets of coastal ecosystems.
Results also demonstrated the importance of including tidal energy advection in mangrove biophysical models that are used for predicting ecosystem response to changing climate and regional freshwater management practices.
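The closure slopes quoted above come from an ordinary least-squares regression of the turbulent sink terms on available energy. A minimal sketch with illustrative half-hourly numbers (not the study's data):

```python
import numpy as np

# Illustrative half-hourly values (W m^-2), invented for demonstration
available_energy = np.array([100., 250., 400., 550., 700.])  # Rn - G (- storage)
turbulent_flux = np.array([80., 190., 300., 420., 540.])     # H + LE

# Closure slope: least-squares line of sink terms vs. available energy;
# a slope below 1 indicates the energy balance does not fully close.
slope, intercept = np.polyfit(available_energy, turbulent_flux, 1)
print(round(slope, 3))
```

Including an extra storage term (such as the enthalpy change of the tidal water column) in `available_energy` is what moves this slope toward 1 in the study.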

Relevance: 30.00%

Abstract:

Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD treats models as first-class artifacts, extending engineers' ability to use concepts from the problem domain of discourse to specify appropriate solutions. A key component of MDSD is domain-specific modeling languages (DSMLs): languages with focused expressiveness that target a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL) such as Java or C++, then execute the resulting code.

Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), in which models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process.

The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources.

At the onset of this research, only one i-DSML had been created, for the user-centric communication domain, using the aforementioned approach. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise.

This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK via swappable framework extensions.

This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
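The decoupling described here, a generic model of execution with swappable domain-specific knowledge, resembles a strategy/plug-in pattern. A hypothetical Python sketch of that shape (class names and command strings are invented for illustration; this is not the CVM implementation):

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable domain-specific knowledge (DSK) extension."""
    @abstractmethod
    def synthesize(self, model_change: str) -> str:
        """Map a model change to a domain-level script command."""

class CommunicationDSK(DomainKnowledge):
    """Hypothetical DSK for the user-centric communication domain."""
    def synthesize(self, model_change: str) -> str:
        return f"CVM_EXEC {model_change}"

class MicrogridDSK(DomainKnowledge):
    """Hypothetical DSK for demand-side microgrid energy management."""
    def synthesize(self, model_change: str) -> str:
        return f"MGRID_DISPATCH {model_change}"

class SynthesisEngine:
    """Generic model of execution (GMoE): walks model changes and
    delegates all domain semantics to the injected DSK."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def run(self, model_changes):
        return [self.dsk.synthesize(c) for c in model_changes]

engine = SynthesisEngine(MicrogridDSK())
print(engine.run(["addLoad", "shedLoad"]))
```

The point of the pattern is that `SynthesisEngine` never changes across domains; only the injected `DomainKnowledge` subclass does.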

Relevance: 30.00%

Abstract:

This research focuses on developing optimal controllers for active suspensions in two half-car models, one linear and one non-linear. A detailed comparison between quarter-car and half-car active suspension approaches is provided for improving two important scenarios in vehicle dynamics: ride quality and road holding. Using a half-car vehicle model, heave and pitch motions are analyzed for those scenarios, with cargo mass as a variable. The governing equations of the system are analyzed in the multi-energy-domain package 20-Sim and are expressed in the bond-graph language to facilitate calculation of energy usage. The results show that the optimum set of gains for both the ride quality and road holding scenarios is the set derived when the maximum allowable cargo mass is considered for the vehicle. The energy implications of substituting passive suspension units with active ones are studied by considering not only the energy used by the actuator, but also the reduction in energy lost through the passive damper. Energy analysis showed that less energy was dissipated in the shock absorbers when either quarter-car or half-car controllers were used instead of passive suspension, and that more energy could be saved with half-car active controllers than with quarter-car ones. The results also indicate that using active suspension units, whether quarter-car or half-car based, under these realistic limitations is energy-efficient and recommended.
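The energy accounting in this abstract hinges on the energy dissipated in a linear damper, E = ∫ c·v(t)² dt. A minimal numeric sketch with an invented damping coefficient and velocity trace (illustrative values, not the thesis's vehicle parameters):

```python
import numpy as np

def damper_energy(c, velocity, dt):
    """Energy dissipated in a linear damper, E = integral of c * v(t)^2 dt,
    approximated with the trapezoidal rule on uniformly sampled velocity."""
    power = c * velocity ** 2                     # instantaneous dissipation (W)
    return 0.5 * dt * np.sum(power[:-1] + power[1:])

# Illustrative relative-velocity trace across the damper (m/s), 1 kHz sampling
t = np.linspace(0.0, 1.0, 1001)
v = 0.2 * np.sin(2 * np.pi * 1.5 * t)             # hypothetical road-induced motion
E = damper_energy(c=1500.0, velocity=v, dt=t[1] - t[0])
print(round(E, 2))
```

For this sinusoidal trace the closed-form answer is c·A²/2 per unit time (30 J here), so the numerical result can be checked directly; an active controller that reduces the relative velocity reduces this dissipation.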

Relevance: 30.00%

Abstract:

The direct-drive point absorber is a robust and efficient system for wave energy harvesting, in which the linear generator is the most complex part of the system; its design and optimization are therefore crucial tasks. The tubular shape of a linear generator's magnetic circuit offers better encapsulation of the permanent magnet flux and, due to its symmetry, a reduction in radial forces on the translator. A double-stator topology can improve the power density of a tubular linear machine. Common designs employ a set of aligned stators on each side of a translator with radially magnetized permanent magnets; such designs require doubling the amount of permanent magnet material and lead to an increase in the cogging force. The design presented in this thesis utilizes a translator with buried, axially magnetized magnets and axially shifted positioning of the two stators, such that no additional magnet material is required compared to a single-sided machine. In addition to the conservation of magnet material, a significant improvement in the cogging force occurs in the two-phase topology, while the double-sided three-phase system produces more power at the cost of a small increase in the cogging force. The analytical and FEM models of the generator are described and their results compared to the experimental results. In general, the experimental results compare favourably with the theoretical predictions. However, the experimentally observed permanent magnet flux leakage in the double-sided machine is larger than predicted theoretically, which can be attributed to limitations in the prototype fabrication and the resulting deviations from the theoretical analysis.

Relevance: 30.00%

Abstract:

Scalar particles coupled to the Standard Model fields through a disformal coupling arise in different theories, such as massive gravity or braneworld models. We review the main phenomenology associated with such particles. Distinctive disformal signatures could be measured at colliders and with astrophysical observations. The phenomenological relevance of the disformal coupling demands the introduction of a set of symmetries, which may ensure the stability of these new degrees of freedom. In that case, they constitute natural dark matter candidates, since they are generally massive and weakly coupled. We illustrate these ideas by paying particular attention to the branon case, since these questions arise naturally in braneworld models with low tension, where they were first discussed.

Relevance: 30.00%

Abstract:

Peer reviewed

Relevance: 30.00%

Abstract:

The authors would like to thank the College of Life Sciences of Aberdeen University and Marine Scotland Science which funded CP's PhD project. Skate tagging experiments were undertaken as part of Scottish Government project SP004. We thank Ian Burrett for help in catching the fish and the other fishermen and anglers who returned tags. We thank José Manuel Gonzalez-Irusta for extracting and making available the environmental layers used as environmental covariates in the environmental suitability modelling procedure. We also thank Jason Matthiopoulos for insightful suggestions on habitat utilization metrics as well as Stephen C.F. Palmer, and three anonymous reviewers for useful suggestions to improve the clarity and quality of the manuscript.

Relevance: 30.00%

Abstract:

We present a study of the Galactic Center region as a possible source of both secondary gamma-ray and neutrino fluxes from annihilating dark matter. We have studied the gamma-ray flux observed by the High Energy Stereoscopic System (HESS) from the J1745-290 Galactic Center source. The data are well fitted as annihilating dark matter in combination with an astrophysical background. The analysis was performed by means of simulated gamma spectra produced by Monte Carlo event generator packages, and we analyze the differences in the spectra obtained with the various Monte Carlo codes developed so far in particle physics. We show that, within some uncertainty, the HESS data can be fitted as a signal from a heavy dark matter density distribution peaked at the Galactic Center, with a power-law background whose spectral index is compatible with the Fermi Large Area Telescope (Fermi-LAT) data from the same region. If this kind of dark matter distribution generates the gamma-ray flux observed by HESS, we also expect to observe a neutrino flux. We show prospective results for the observation of secondary neutrinos with the Astronomy with a Neutrino Telescope and Abyss environmental RESearch project (ANTARES), the IceCube Neutrino Observatory (IceCube), and the Cubic Kilometre Neutrino Telescope (KM3NeT). The prospects depend solely on the device's angular resolution once its effective area and minimum energy threshold are fixed.

Relevance: 30.00%

Abstract:

For an erbium-doped mode-locked fibre laser, we demonstrate experimentally a new type of vector rogue waves (RWs), the emergence of which is caused by the coherent coupling of orthogonal states of polarisation (SOPs). Unlike the weak interaction between neighbouring dissipative solitons in the soliton rain, this coupling creates a new type of energy landscape in which the interaction of the orthogonal SOPs leads to polarisation trapping, or to escape from trapping triggered by polarisation instabilities, resulting in pulse dynamics that satisfy the criteria of 'dark' and 'bright' RWs. Beyond their fundamental interest, the obtained results can provide a basis for developing rogue wave mitigation techniques for applications in photonics and beyond.

Relevance: 30.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy, have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights reflecting OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e., ports and entrance locations to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes that trade off cost to cetacean conservation against cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
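Least-cost routing over a resistance surface of the kind described here can be sketched with Dijkstra's algorithm on a grid graph. A toy example with a hypothetical 3×3 cost raster (the dissertation's surfaces are density-derived and geographic; this is only the algorithmic skeleton):

```python
import heapq

def least_cost_path(cost, start, end):
    """Dijkstra over a 2D cost raster (4-connected); the cost of
    entering each cell is its raster value (a resistance surface)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the end to recover the route
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[end]

# Hypothetical resistance surface: the high value marks a whale hotspot
surface = [[1, 1, 1],
           [1, 9, 1],
           [1, 1, 1]]
route, total = least_cost_path(surface, (0, 0), (2, 2))
print(route, total)
```

Scaling the hotspot values up or down plays the role of the multiplier in the chapter, shifting the route between shorter-but-riskier and longer-but-safer alternatives.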

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per U.S. Marine Mammal Protection Act requirements, the necessary parameters, especially the distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
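Defining the cutoff that balances false positive and false negative rates from an ROC curve is commonly done by maximizing Youden's J statistic (TPR − FPR); whether chapter 2 uses exactly this criterion is an assumption here. A minimal sketch with made-up model scores:

```python
import numpy as np

def optimal_threshold(scores, labels):
    """Pick the probability cutoff maximizing Youden's J = TPR - FPR,
    one common 'optimal' operating point on the ROC curve."""
    best_t, best_j = 0.5, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tpr = np.mean(pred[labels == 1])  # true positive rate at cutoff t
        fpr = np.mean(pred[labels == 0])  # false positive rate at cutoff t
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

# Hypothetical occurrence probabilities from a fitted model, with truth labels
scores = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0,   0,   0,    1,   0,   1,   1,   1])
print(optimal_threshold(scores, labels))
```

Scores at or above the returned cutoff are mapped to "presence", below it to "absence", yielding the presence/absence maps described above.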

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework enabling interchangeable map and tradeoff plot views. These products make complex processes transparent for gaming conservation, industry, and stakeholder interests towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.

Relevance: 30.00%

Abstract:

In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.

In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.

The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
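Extracting the phase differences between adjacent Fourier components, the quantity whose distribution characterizes temporal coherence here, can be sketched as follows (a stationary noise signal stands in for measured wind data):

```python
import numpy as np

def phase_differences(x):
    """Phase differences between adjacent Fourier components of a signal,
    wrapped to [-pi, pi); their distribution quantifies temporal coherence."""
    phases = np.angle(np.fft.rfft(x))            # phase of each component
    dphi = np.diff(phases)                       # adjacent-component differences
    return (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)

# Stationary Gaussian noise: the phase differences should look uniform;
# coherent (nonstationary) events would concentrate them near zero
rng = np.random.default_rng(0)
dphi = phase_differences(rng.standard_normal(4096))
print(dphi.shape)
```

Fitting a distribution to `dphi` from measured wind, rather than noise, is what yields the empirical temporal coherence parameters described above.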

This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.

The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.

The prevalence of temporal coherence and its relationship to other standard wind parameters was modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.

EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.

Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.

The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.

This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the procedure recommended in the IEC wind turbine design standard.

The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.

Relevance: 30.00%

Abstract:

Purpose: Computed tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as magnetic resonance imaging (MRI), CT is a fast-acquisition modality with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of independent values in Hounsfield units (HU); higher HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefact. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, were designed using the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) was calculated.

Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm, a difference of 1.7-4.2% per fraction between plans with and without the algorithm. (2) Differences of 0.1-3.2% were observed for the maximum dose values, 1.5-10.4% for the minimum doses, and 1.4-1.7% for the mean doses. Homogeneity indexes (HI) of 0.065-0.068 for the dual-energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.
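The thesis abstract does not spell out its HI formula; one common definition, (Dmax − Dmin)/Dmean, where smaller values indicate a more homogeneous dose, can be sketched as follows with hypothetical PTV voxel doses:

```python
import numpy as np

def homogeneity_index(doses):
    """One common PTV homogeneity index: (Dmax - Dmin) / Dmean.
    (An assumed formula; the thesis does not state which HI it uses.)
    Smaller values indicate a more homogeneous dose distribution."""
    d = np.asarray(doses, dtype=float)
    return (d.max() - d.min()) / d.mean()

# Hypothetical PTV voxel doses (Gy) with and without artefact correction
corrected = [2.00, 2.02, 1.98, 2.01, 1.99]
uncorrected = [2.00, 2.20, 1.75, 2.05, 1.90]
print(round(homogeneity_index(corrected), 3),
      round(homogeneity_index(uncorrected), 3))
```

With these invented doses the corrected image yields a much smaller HI, mirroring the direction of the dual-energy results reported above.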

Conclusion: (1) The percent error without the GSI-based MAR algorithm may deviate by as much as 5.7%. This error undermines the goal of radiation therapy to provide precise treatment; the GSI-based MAR algorithm was therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method nearly achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than with or without the GE MAR algorithm.

Relevance: 30.00%

Abstract:

This dissertation studies capacity investments in energy sources, with a focus on renewable technologies, such as solar and wind energy. We develop analytical models to provide insights for policymakers and use real data from the state of Texas to corroborate our findings.

We first take a strategic perspective and focus on electricity pricing policies. Specifically, we investigate the capacity investments of a utility firm in renewable and conventional energy sources under flat and peak pricing policies. We consider the generation patterns and intermittency of solar and wind energy in relation to electricity demand throughout the day. We find that flat pricing leads to a higher investment level for solar energy, and that it can still lead to more investment in wind energy if a considerable amount of wind energy is generated throughout the day.

In the second essay, we complement the first by focusing on the problem of matching supply with demand in every operating period (e.g., every five minutes) from the perspective of a utility firm. We study the interaction between renewable and conventional sources with different levels of operational flexibility, i.e., the possibility of quickly ramping energy output up or down. We show that operational flexibility determines these interactions: renewable and inflexible sources (e.g., nuclear energy) are substitutes, whereas renewable and flexible sources (e.g., natural gas) are complements.

In the final essay, rather than the capacity investments of the utility firms, we focus on the capacity investments of households in rooftop solar panels. We investigate whether or not these investments may cause a utility death spiral effect, which is a vicious circle of increased solar adoption and higher electricity prices. We observe that the current rate-of-return regulation may lead to a death spiral for utility firms. We show that one way to reverse the spiral effect is to allow the utility firms to maximize their profits by determining electricity prices.

Relevance: 30.00%

Abstract:

Nature is challenged to move charge efficiently over many length scales. From sub-nm to μm distances, electron-transfer proteins orchestrate energy conversion, storage, and release both inside and outside the cell. Uncovering the detailed mechanisms of biological electron-transfer reactions, which are often coupled to bond-breaking and bond-making events, is essential to designing durable, artificial energy conversion systems that mimic the specificity and efficiency of their natural counterparts. Here, we use theoretical modeling of long-distance charge hopping (Chapter 3), synthetic donor-bridge-acceptor molecules (Chapters 4, 5, and 6), and de novo protein design (Chapters 5 and 6) to investigate general principles that govern light-driven and electrochemically driven electron-transfer reactions in biology. We show that fast, μm-distance charge hopping along bacterial nanowires requires closely packed charge carriers with low reorganization energies (Chapter 3); singlet excited-state electronic polarization of supermolecular electron donors can attenuate intersystem crossing yields to lower-energy, oppositely polarized, donor triplet states (Chapter 4); the effective static dielectric constant of a small (~100 residue) de novo designed 4-helical protein bundle can change upon phototriggering an electron transfer event in the protein interior, providing a means to slow the charge-recombination reaction (Chapter 5); and a tightly-packed de novo designed 4-helix protein bundle can drastically alter charge-transfer driving forces of photo-induced amino acid radical formation in the bundle interior, effectively turning off a light-driven oxidation reaction that occurs in organic solvent (Chapter 6). 
This work leverages unique insights gleaned from proteins designed from scratch that bind synthetic donor-bridge-acceptor molecules that can also be studied in organic solvents, opening new avenues of exploration into the factors critical for protein control of charge flow in biology.