9 results for Thermodynamic Optimization
at Duke University
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
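Note (added for context): the Crooks fluctuation theorem referenced above relates the forward and reverse nonequilibrium work distributions of a switching process. The following is the standard statement together with the path-additivity of free energy, given only as background and not as the pCrooks derivation developed in this work:

    \frac{P_F(W)}{P_R(-W)} = e^{\beta (W - \Delta F)}, \qquad \beta = \frac{1}{k_B T}

    \Delta F = \sum_{k=0}^{K-1} \Delta F_{\lambda_k \to \lambda_{k+1}} \quad \text{for a discretized path } \lambda_0 \to \lambda_1 \to \cdots \to \lambda_K

The additivity relation is what makes a per-segment (pairwise) accounting of the estimator variance possible in principle.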
Abstract:
The quantification of protein-ligand interactions is essential for systems biology, drug discovery, and bioengineering. Ligand-induced changes in protein thermal stability provide a general, quantifiable signature of binding and may be monitored with dyes such as Sypro Orange (SO), which increase their fluorescence emission intensities upon interaction with the unfolded protein. This method is an experimentally straightforward, economical, and high-throughput approach for observing thermal melts using commonly available real-time polymerase chain reaction instrumentation. However, quantitative analysis requires careful consideration of the dye-mediated reporting mechanism and the underlying thermodynamic model. We determine affinity constants by analysis of ligand-mediated shifts in melting-temperature midpoint values. Ligand affinity is determined in a ligand titration series from shifts in free energies of stability at a common reference temperature. Thermodynamic parameters are obtained by fitting the inverse first derivative of the experimental signal reporting on thermal denaturation with equations that incorporate linear or nonlinear baseline models. We apply these methods to fit protein melts monitored with SO that exhibit prominent nonlinear post-transition baselines. SO can perturb the equilibria on which it is reporting. We analyze cases in which the ligand binds to both the native and denatured states or to the native state only, and cases in which protein:ligand stoichiometry needs to be treated explicitly.
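Note (added for context): a commonly used two-state description of such experiments, given here only as a generic sketch rather than the exact model fitted in this work, combines a Gibbs-Helmholtz expression for the unfolding free energy with a linkage term for ligand binding to the native state:

    \Delta G_u(T) = \Delta H_m \left(1 - \frac{T}{T_m}\right) + \Delta C_p \left[ T - T_m - T \ln\frac{T}{T_m} \right]

    \Delta\Delta G_u(T_{\mathrm{ref}}) = R\, T_{\mathrm{ref}} \ln\!\left(1 + \frac{[L]}{K_d}\right)

The second relation assumes 1:1 binding to the native state only; binding to the denatured state or non-1:1 stoichiometry requires the more explicit treatments described above.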
Abstract:
Molecular chaperones are a highly diverse group of proteins that recognize and bind unfolded proteins to facilitate protein folding and prevent nonspecific protein aggregation. The mechanisms by which chaperones bind their protein substrates have been studied for decades. However, there are few reports about the affinity of molecular chaperones for their unfolded protein substrates. Thus, little is known about the relative binding affinities of different chaperones and about the relative binding affinities of chaperones for different unfolded protein substrates. Here we describe the application of SUPREX (stability of unpurified proteins from rates of H-D exchange), an H-D exchange and MALDI-based technique, in studying the binding interaction between the molecular chaperone Hsp33 and four different unfolded protein substrates, including citrate synthase, lactate dehydrogenase, malate dehydrogenase, and aldolase. The results of our studies suggest that the cooperativity of the Hsp33 folding-unfolding reaction increases upon binding with denatured protein substrates. This is consistent with the burial of significant hydrophobic surface area in Hsp33 when it interacts with its substrate proteins. The SUPREX-derived Kd values for Hsp33 complexes with four different substrates were all found to be within the range of 3-300 nM.
Abstract:
In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
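Note (added for context): a schematic contrast between the two notions, in notation introduced here for orientation rather than taken from the paper, is

    \text{standard robustness:} \quad \inf_{Q \in \mathcal{Q}} \mathbb{E}_Q[f(x,\xi)] \ge 0

    \text{soft robustness:} \quad \inf_{Q \in \mathcal{Q}(\varepsilon)} \mathbb{E}_Q[f(x,\xi)] \ge -\varepsilon \quad \text{for all } \varepsilon \ge 0

where \{\mathcal{Q}(\varepsilon)\} is a nested family of distribution sets that grows with \varepsilon, so larger shortfalls are tolerated only against larger uncertainty sets; functionals of this form are what connect the approach to convex risk measures.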
Abstract:
BACKGROUND AND PURPOSE: Previous studies have demonstrated that treatment strategy plays a critical role in ensuring maximum stone fragmentation during shockwave lithotripsy (SWL). We aimed to develop an optimal treatment strategy in SWL to produce maximum stone fragmentation. MATERIALS AND METHODS: Four treatment strategies were evaluated using an in-vitro experimental setup that mimics stone fragmentation in the renal pelvis. Spherical stone phantoms were exposed to 2100 shocks using the Siemens Modularis (electromagnetic) lithotripter. The treatment strategies included increasing output voltage, with 100 shocks at 12.3 kV, 400 shocks at 14.8 kV, and 1600 shocks at 15.8 kV, and decreasing output voltage, with 1600 shocks at 15.8 kV, 400 shocks at 14.8 kV, and 100 shocks at 12.3 kV. Both the increasing- and decreasing-voltage strategies were run at pulse repetition frequencies (PRF) of 1 and 2 Hz. Fragmentation efficiency was determined using a sequential sieving method to isolate fragments less than 2 mm. A fiberoptic probe hydrophone was used to characterize the pressure waveforms at different output voltage and frequency settings. In addition, a high-speed camera was used to assess cavitation activity in the lithotripter field produced by the different treatment strategies. RESULTS: The increasing output voltage strategy at 1 Hz PRF produced the best stone fragmentation efficiency. This result was significantly better than that of the decreasing-voltage strategy at 1 Hz PRF (85.8% vs 80.8%, P=0.017) and that of the same strategy at 2 Hz PRF (85.8% vs 79.59%, P=0.0078). CONCLUSIONS: A pretreatment dose of 100 low-voltage output shockwaves (SWs) at 60 SWs/min before increasing to a higher voltage output produces the best overall stone fragmentation in vitro. These findings could lead to increased fragmentation efficiency in vivo and higher success rates clinically.
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
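Note (added for illustration): the following is a deliberately generic genetic-algorithm sketch for ordering jobs in a dispatch sequence. It is not the incremental GA (IGA) developed in this thesis; it only shows the basic ingredients named above (permutation encoding of the dispatch sequence, crossover, mutation, and a simple load-balance fitness), and the processing times are synthetic.

    import random

    def makespan(sequence, proc_times, n_machines):
        """Greedy list scheduling: assign each job in `sequence` to the least-loaded machine."""
        loads = [0.0] * n_machines
        for job in sequence:
            i = loads.index(min(loads))
            loads[i] += proc_times[job]
        return max(loads)

    def order_crossover(p1, p2):
        """Order crossover (OX) for permutation-encoded dispatch sequences."""
        n = len(p1)
        a, b = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[a:b] = p1[a:b]
        fill = [job for job in p2 if job not in child[a:b]]
        j = 0
        for i in range(n):
            if child[i] is None:
                child[i] = fill[j]
                j += 1
        return child

    def mutate(seq, rate=0.1):
        """Swap two positions with probability `rate`."""
        seq = list(seq)
        if random.random() < rate:
            i, j = random.sample(range(len(seq)), 2)
            seq[i], seq[j] = seq[j], seq[i]
        return seq

    def evolve(proc_times, n_machines=4, pop_size=50, generations=200):
        """Evolve a dispatch sequence that minimizes the makespan proxy."""
        n_jobs = len(proc_times)
        pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda s: makespan(s, proc_times, n_machines))
            elite = pop[: pop_size // 5]            # keep the best 20%
            children = []
            while len(elite) + len(children) < pop_size:
                p1, p2 = random.sample(elite, 2)
                children.append(mutate(order_crossover(p1, p2)))
            pop = elite + children
        return min(pop, key=lambda s: makespan(s, proc_times, n_machines))

    if __name__ == "__main__":
        times = [random.uniform(1, 10) for _ in range(30)]   # synthetic per-order processing times
        best = evolve(times)
        print("best makespan:", round(makespan(best, times, 4), 2))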
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimate of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
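Note (added for illustration): a much-simplified sketch of the "decompose, predict each component, aggregate" structure described above, using a naive additive decomposition and naive per-component forecasts; it is not the modeling pipeline developed in the thesis.

    import numpy as np

    def decompose(y, period):
        """Naive additive decomposition: linear trend + mean seasonal pattern + residual."""
        t = np.arange(len(y))
        coef = np.polyfit(t, y, deg=1)                  # trend: ordinary least-squares line
        trend = np.polyval(coef, t)
        detrended = y - trend
        seasonal = np.array([detrended[p::period].mean() for p in range(period)])
        residual = y - trend - seasonal[t % period]
        return coef, seasonal, residual

    def forecast(y, period, horizon):
        """Forecast each component separately, then aggregate the component forecasts."""
        coef, seasonal, residual = decompose(y, period)
        t_future = np.arange(len(y), len(y) + horizon)
        trend_fc = np.polyval(coef, t_future)           # extrapolate the fitted line
        seasonal_fc = seasonal[t_future % period]       # repeat the seasonal pattern
        resid_fc = np.full(horizon, residual.mean())    # roughly zero by construction
        return trend_fc + seasonal_fc + resid_fc

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(240)
        y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, len(t))
        print(forecast(y, period=12, horizon=6).round(2))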
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools that allow an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions that help enterprises increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
© 2014, Springer-Verlag Berlin Heidelberg. The frequency and severity of extreme events are tightly associated with the variance of precipitation. As climate warms, the acceleration of the hydrological cycle is likely to enhance the variance of precipitation across the globe. However, due to the lack of an effective analysis method, the mechanisms responsible for changes in precipitation variance are poorly understood, especially on regional scales. Our study fills this gap by formulating a variance partition algorithm, which explicitly quantifies the contributions of atmospheric thermodynamics (specific humidity) and dynamics (wind) to the changes in regional-scale precipitation variance. Taking Southeastern (SE) United States (US) summer precipitation as an example, the algorithm is applied to simulations of current and future climate by phase 5 of the Coupled Model Intercomparison Project (CMIP5) models. The analysis suggests that, compared to observations, most CMIP5 models (~60%) tend to underestimate the summer precipitation variance over the SE US during 1950–1999, primarily due to errors in the modeled dynamic processes (i.e., large-scale circulation). Among the 18 CMIP5 models analyzed in this study, six reasonably simulate SE US summer precipitation variance in the twentieth century and the underlying physical processes; these models are thus used for a mechanistic study of future changes in SE US summer precipitation variance. In the future, the six models collectively project an intensification of SE US summer precipitation variance, resulting from the combined effects of atmospheric thermodynamics and dynamics, with the latter playing the more important role. Specifically, thermodynamics results in more frequent and intensified wet summers, but does not contribute to the projected increase in the frequency and intensity of dry summers. In contrast, atmospheric dynamics explains the projected enhancement in both wet and dry summers, indicating its importance in understanding future climate change over the SE US. The results suggest that the intensified SE US summer precipitation variance is not a purely thermodynamic response to greenhouse gas forcing and cannot be explained without the contribution of atmospheric dynamics. Our analysis provides important insights into the mechanisms of SE US summer precipitation variance change. The algorithm formulated in this study can easily be applied to other regions and seasons to systematically explore the mechanisms responsible for changes in precipitation extremes in a warming climate.
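Note (added for context): one generic form such a variance partition can take, shown for orientation only since the paper's specific algorithm may differ, writes the precipitation anomaly as a thermodynamic part (humidity varying, circulation fixed) plus a dynamic part (circulation varying, humidity fixed), so that

    \mathrm{Var}(P') \approx \mathrm{Var}(P'_{\mathrm{thermo}}) + \mathrm{Var}(P'_{\mathrm{dyn}}) + 2\,\mathrm{Cov}(P'_{\mathrm{thermo}},\, P'_{\mathrm{dyn}})

which is what allows the separate contributions of thermodynamics, dynamics, and their covariance to be quantified.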
Abstract:
CONCLUSION: Radiation dose reduction, while preserving image quality, could be easily implemented with this approach. Furthermore, the availability of a dosimetric data archive provides immediate feedback on the implemented optimization strategies. BACKGROUND: JCI standards and European legislation (EURATOM 59/2013) require the implementation of patient radiation protection programs in diagnostic radiology. The aim of this study is to demonstrate that patients' radiation exposure can be reduced without decreasing image quality, through a multidisciplinary team (MT) that analyzes the dosimetric data of diagnostic examinations. EVALUATION: Data from CT examinations performed with two different scanners (Siemens Definition™ and GE LightSpeed Ultra™) between November and December 2013 are considered. The CT scanners are configured to automatically send images to DoseWatch© software, which stores output parameters (e.g., kVp, mAs, pitch) and exposure data (e.g., CTDIvol, DLP, SSDE). Data are analyzed and discussed by an MT composed of medical physicists and radiologists to identify protocols that show critical dosimetric values and to suggest possible improvement actions. Furthermore, the large amount of available data makes it possible to monitor the diagnostic protocols currently in use and to identify distinct statistical populations within each of them. DISCUSSION: We identified critical average CTDIvol values for head and facial bones examinations (61.8 mGy over 151 scans and 61.6 mGy over 72 scans, respectively) performed with the GE LightSpeed CT™. Statistical analysis revealed two distinct populations for head scans, one of which comprised only 10% of the total number of scans and corresponded to lower exposure values; the MT adopted this protocol as the standard. Moreover, constant monitoring of the output parameters allowed us to identify unusual values in facial bones exams, caused by changes made during maintenance service, which the team promptly suggested correcting. This resulted in a substantial dose saving, with average CTDIvol values reduced by approximately 15% and 50% for head and facial bones exams, respectively. Diagnostic image quality was deemed suitable for clinical use by the radiologists.
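Note (added for illustration): a minimal sketch of the kind of protocol-level dose monitoring described above, assuming the exposure records have been exported to a tabular archive. The column names ("scanner", "protocol", "CTDIvol_mGy") and the reference levels are hypothetical placeholders, not DoseWatch field names.

    import pandas as pd

    # Illustrative reference levels per protocol (mGy); in practice these would
    # come from local or national diagnostic reference levels.
    REFERENCE_CTDIVOL = {"head": 60.0, "facial_bones": 35.0}

    def flag_protocols(exposures: pd.DataFrame) -> pd.DataFrame:
        """Per-protocol CTDIvol statistics with a flag for averages above reference."""
        stats = (
            exposures.groupby(["scanner", "protocol"])["CTDIvol_mGy"]
            .agg(n_scans="count", mean_ctdivol="mean", std_ctdivol="std")
            .reset_index()
        )
        stats["reference_mGy"] = stats["protocol"].map(REFERENCE_CTDIVOL)
        stats["above_reference"] = stats["mean_ctdivol"] > stats["reference_mGy"]
        return stats

    if __name__ == "__main__":
        demo = pd.DataFrame({
            "scanner": ["GE LightSpeed"] * 4,
            "protocol": ["head", "head", "facial_bones", "facial_bones"],
            "CTDIvol_mGy": [62.1, 58.7, 61.0, 60.5],
        })
        print(flag_protocols(demo))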