936 results for Probabilistic Optimal Power Flow
Abstract:
The dissipation of high heat flux from integrated circuit chips and the maintenance of acceptable junction temperatures in high-powered electronics require advanced cooling technologies. One such technology is two-phase cooling in microchannels under confined flow boiling conditions. In macroscale flow boiling, bubbles nucleate on the channel walls, grow, and depart from the surface. In microscale flow boiling, bubbles can fill the channel diameter before the liquid drag force has a chance to sweep them off the channel wall. As a confined bubble elongates in a microchannel, it traps thin liquid films between the heated wall and the vapor core that are subject to large temperature gradients. The thin films evaporate rapidly, sometimes faster than the incoming mass flux can replenish bulk fluid in the microchannel. When the local vapor pressure spike exceeds the inlet pressure, it forces the upstream interface to travel back into the inlet plenum and creates flow boiling instabilities. Flow boiling instabilities reduce the temperature at which critical heat flux occurs and create channel dryout. Dryout causes high surface temperatures that can destroy the electronic circuits that use two-phase micro heat exchangers for cooling. Flow boiling instability is characterized by periodic oscillation of flow regimes, which induces oscillations in fluid temperature, wall temperature, pressure drop, and mass flux. When nanofluids are used in flow boiling, the nanoparticles deposit on the heated surface and change its thermal conductivity, roughness, capillarity, wettability, and nucleation site density. Deposition also affects heat transfer by changing the bubble departure diameter, bubble departure frequency, and the evaporation of the micro- and macrolayer beneath the growing bubbles.
Flow boiling was investigated in this study using degassed, deionized water and 0.001 vol% aluminum oxide nanofluids in a single rectangular brass microchannel with a hydraulic diameter of 229 µm, for one inlet fluid temperature of 63°C and two constant flow rates of 0.41 ml/min and 0.82 ml/min. The power input was adjusted for two average surface temperatures of 103°C and 119°C at each flow rate. High-speed images were taken periodically for water and nanofluid flow boiling after durations of 25, 75, and 125 minutes from the start of flow. The change in regime timing revealed the effect of nanoparticle suspension and deposition on the Onset of Nucleate Boiling (ONB) and the Onset of Bubble Elongation (OBE). Cycle durations and bubble frequencies are reported for different nanofluid flow boiling durations. The addition of nanoparticles was found to stabilize bubble nucleation and growth and to limit the recession rate of the upstream and downstream interfaces, mitigating the spreading of dry spots and elongating the thin film regions to increase thin film evaporation.
DESIGN AND IMPLEMENTATION OF DYNAMIC PROGRAMMING BASED DISCRETE POWER LEVEL SMART HOME SCHEDULING USING FPGA
Abstract:
With the development and capabilities of the Smart Home system, people today are entering an era in which household appliances are no longer just controlled by people, but also operated by a Smart System. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, which means that there is a Micro-Controller Unit (MCU) to control all the household appliances and schedule their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours according to time-varying hourly prices. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, using the board as the MCU. The algorithm schedules discrete power level tasks using dynamic programming and finds a solution close to the optimal one. We chose an FPGA as our system's controller because FPGAs offer low complexity, parallel processing capability, a large number of I/O interfaces for further development, and programmability in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board, and the solution obtained is good enough for consumers.
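The scheduling idea in this abstract can be sketched with a small dynamic program. The function name, task data, and prices below are hypothetical illustrations, not the paper's multi-user FPGA implementation: a single task must accumulate a required amount of energy over the horizon, choosing a discrete power level each hour so as to minimize cost under time-varying hourly prices.

```python
# Minimal sketch of discrete-power-level task scheduling by dynamic
# programming (hypothetical data; not the paper's FPGA implementation).
# One task must accumulate `energy_needed` units over the horizon; each
# hour it may run at one of several discrete power levels (0 = idle), and
# the DP minimizes total cost under time-varying hourly prices.

def schedule_task(prices, levels, energy_needed):
    """Return (min_cost, per-hour power levels) or None if infeasible."""
    horizon = len(prices)
    INF = float("inf")
    # cost[e] = cheapest cost so far to have accumulated e units of energy
    cost = [0.0] + [INF] * energy_needed
    choice = {}  # (hour, energy_after) -> (energy_before, level chosen)
    for t, price in enumerate(prices):
        new_cost = [INF] * (energy_needed + 1)
        for e in range(energy_needed + 1):
            if cost[e] == INF:
                continue
            for lvl in levels:
                e2 = min(energy_needed, e + lvl)
                c = cost[e] + price * lvl
                if c < new_cost[e2]:
                    new_cost[e2] = c
                    choice[(t, e2)] = (e, lvl)
        cost = new_cost
    if cost[energy_needed] == INF:
        return None
    # Recover the schedule by walking the recorded choices backwards.
    plan, e = [0] * horizon, energy_needed
    for t in range(horizon - 1, -1, -1):
        e_prev, lvl = choice[(t, e)]
        plan[t], e = lvl, e_prev
    return cost[energy_needed], plan
```

For example, with hourly prices `[5, 1, 3]`, levels `{0, 1, 2}`, and 3 units of energy required, the DP places 2 units in the cheapest hour and 1 unit in the next cheapest, giving the plan `[0, 2, 1]` at cost 5.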
Abstract:
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option without requiring communication between microgrid components. Eliminating this single point of potential failure, the communication system, is especially important in the remote, islanded microgrids considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy that implements a droop surface in a higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage and the wind speed at the current time. An approach for optimizing this droop control surface to meet a given objective, for example, utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example containing an energy storage device as well as multiple sources and loads. Finally, the optimal high dimension droop control method is applied with a solar resource, using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
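A two-variable droop surface of the kind described can be sketched as a lookup table with bilinear interpolation. The grid axes, surface values, and function name below are hypothetical placeholders, not an optimized surface from the dissertation:

```python
# Sketch of a two-variable ("high dimension") droop lookup: the power
# command depends on both the dc bus voltage and the wind speed.  The
# surface values below are hypothetical placeholders.
from bisect import bisect_right

def droop_power(surface, v_axis, w_axis, v, w):
    """Bilinearly interpolate the droop surface at (bus voltage v, wind speed w)."""
    def bracket(axis, x):
        # Find the grid cell containing x and the fractional position inside it.
        i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
        t = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, min(max(t, 0.0), 1.0)   # clamp to the grid edges
    i, tv = bracket(v_axis, v)
    j, tw = bracket(w_axis, w)
    p00, p01 = surface[i][j], surface[i][j + 1]
    p10, p11 = surface[i + 1][j], surface[i + 1][j + 1]
    return ((1 - tv) * (1 - tw) * p00 + (1 - tv) * tw * p01
            + tv * (1 - tw) * p10 + tv * tw * p11)
```

A real implementation would populate `surface` from the offline optimization described in the dissertation; the decentralized source controller then needs only local voltage and wind-speed measurements, with no communication link.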
Abstract:
Planning in realistic domains typically involves reasoning under uncertainty, operating under time and resource constraints, and finding the optimal subset of goals to work on. Creating optimal plans that consider all of these features is a computationally complex, challenging problem. This dissertation develops an AO* search-based planner named CPOAO* (Concurrent, Probabilistic, Over-subscription AO*), which incorporates durative actions, time and resource constraints, concurrent execution, over-subscribed goals, and probabilistic actions. To handle concurrent actions, action combinations rather than individual actions are taken as plan steps. Plan optimization is explored by adding two novel aspects to plans. First, parallel steps that serve the same goal are used to increase the plan’s probability of success. Traditionally, only parallel steps that serve different goals are used to reduce plan execution time. Second, actions that are executing but are no longer useful can be terminated to save resources and time. Conventional planners assume that all actions that were started will be carried out to completion. To reduce the size of the search space, several domain-independent heuristic functions and pruning techniques were developed. The key ideas are to exploit dominance relations for candidate action sets and to develop relaxed planning graphs to estimate the expected rewards of states. This thesis contributes (1) an AO*-based planner to generate parallel plans, (2) domain-independent heuristics to increase planner efficiency, and (3) the ability to execute redundant actions and to terminate useless actions to increase plan efficiency.
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. The individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, there are a few physical limitations that can prevent sensors from performing at their maximum potential. Individual sensors have a limited power supply, and the wireless band can become cluttered when multiple sensors try to transmit at the same time. Furthermore, individual sensors have limited communication range, so the network may not have a 1-hop communication topology and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings.
Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks, and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars are calibrated to detect a falling human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the location of elderly residents.
Abstract:
For a microgrid with a high penetration level of renewable energy, energy storage use becomes more integral to the system performance due to the stochastic nature of most renewable energy sources. This thesis examines the use of droop control of an energy storage source in dc microgrids in order to optimize a global cost function. The approach involves using a multidimensional surface to determine the optimal droop parameters based on load and state of charge. The optimal surface is determined using knowledge of the system architecture and can be implemented with fully decentralized source controllers. The optimal surface control of the system is presented. Derivations of a cost function along with the implementation of the optimal control are included. Results were verified using a hardware-in-the-loop system.
Abstract:
In power-electronics-based microgrids, the computational requirements needed to implement an optimized online control strategy can be prohibitive. The work presented in this dissertation proposes a generalized method of deriving geometric manifolds in a dc microgrid, based on the a priori computation of the optimal reactions and trajectories for classes of events in the microgrid. The proposed states are the stored energies in all the energy storage elements of the dc microgrid and the power flowing into them. It is anticipated that calculating a large enough set of dissimilar transient scenarios will also span many scenarios not specifically used to develop the surface. These geometric manifolds will then be used as reference surfaces in any type of controller, such as a sliding mode hysteretic controller. The presence of switched power converters in microgrids requires different control actions for different system events. The control of the switch states of the converters is essential for steady state and transient operation. A digital memory look-up based controller that uses a hysteretic sliding mode control strategy is an effective technique to generate the proper switch states for the converters. An example dc microgrid with three dc-dc boost converters and resistive loads is considered for this work. The geometric manifolds are successfully generated for transient events, such as step changes in the loads and the sources. The surfaces corresponding to a specific case of step change in the loads are then used as reference surfaces in an EEPROM for experimentally validating the control strategy. The required switch states corresponding to this specific transient scenario are programmed into the EEPROM as a memory table. This controls the switching of the dc-dc boost converters and drives the system states to the reference manifold.
In this work, it is shown that this strategy effectively controls the system under transient conditions, such as step changes in the loads, for the example case.
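The memory look-up idea can be sketched as a table keyed by a quantized state, with an ordinary dict standing in for the EEPROM. The bin ranges and table contents below are illustrative placeholders, not a computed manifold:

```python
# Sketch of the memory look-up controller idea: the (stored energy,
# inflow power) state is quantized to a bin address, and a precomputed
# table -- here a plain dict standing in for the EEPROM -- returns the
# switch states for the three boost converters.  Bin ranges and table
# contents are illustrative, not a computed manifold.

def quantize(x, lo, hi, bins):
    """Map x in [lo, hi] to an integer bin address 0..bins-1."""
    if x <= lo:
        return 0
    if x >= hi:
        return bins - 1
    return int((x - lo) / (hi - lo) * bins)

def switch_states(table, energy, power, default=(0, 0, 0)):
    """Look up the converter switch states for the quantized state."""
    addr = (quantize(energy, 0.0, 10.0, 16), quantize(power, 0.0, 5.0, 16))
    return table.get(addr, default)
```

In hardware, the two bin indices would be concatenated into an EEPROM address, and the returned tuple would drive the three boost-converter gate signals each switching period.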
Abstract:
PURPOSE: To evaluate a widely used nontunneled triple-lumen central venous catheter in order to determine whether the largest of the three lumina (16 gauge) can tolerate high flow rates, such as those required for computed tomographic angiography. MATERIALS AND METHODS: Forty-two catheters were tested in vitro, including 10 new and 32 used catheters (median indwelling time, 5 days). Injection pressures were continuously monitored at the site of the 16-gauge central venous catheter hub. Catheters were injected with 300 and 370 mg of iodine per milliliter of iopamidol by using a mechanical injector at increasing flow rates until the catheter failed. The infusion rate, hub pressure, and location were documented for each failure event. The catheter pressures generated during hand injection by five operators were also analyzed. Mean flow rates and pressures at failure were compared by means of two-tailed Student t test, with differences considered significant at P < .05. RESULTS: Injections of iopamidol with 370 mg of iodine per milliliter generate more pressure than injections of iopamidol with 300 mg of iodine per milliliter at the same injection rate. All catheters failed in the tubing external to the patient. The lowest flow rate at which catheter failure occurred was 9 mL/sec. The lowest hub pressure at failure was 262 pounds per square inch gauge (psig) for new and 213 psig for used catheters. Hand injection of iopamidol with 300 mg of iodine per milliliter generated peak hub pressures ranging from 35 to 72 psig, corresponding to flow rates ranging from 2.5 to 5.0 mL/sec. CONCLUSION: Indwelling use has an effect on catheter material property, but even for used catheters there is a substantial safety margin for power injection with the particular triple-lumen central venous catheter tested in this study, as the manufacturer's recommendation for maximum pressure is 15 psig.
Abstract:
In this paper, we propose an intelligent method, named the Novelty Detection Power Meter (NodePM), to detect novelties in electronic equipment monitored by a smart grid. Considering the entropy of each monitored device, which is calculated based on a Markov chain model, the proposed method identifies novelties through a machine learning algorithm. To this end, the NodePM is integrated into a platform for the remote monitoring of energy consumption, which consists of a wireless sensor network (WSN). It should be stressed that, unlike many related works that are evaluated in simulated environments, our experiments were conducted in real environments. The results show that the NodePM reduces the power consumption of the monitored equipment by 13.7%. In addition, the NodePM detects novelties more efficiently than an approach from the literature, surpassing it in different scenarios across all evaluations that were carried out.
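The per-device entropy described above can be sketched as the entropy rate of an empirically estimated Markov chain over discretized power states. This is a generic illustration of the formula H = -Σᵢ πᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ (with empirical state frequencies as the weights πᵢ), not the NodePM code:

```python
# Entropy rate of a Markov chain estimated from a sequence of discretized
# power states: H = -sum_i pi_i * sum_j P_ij * log2(P_ij), using the
# empirical frequency of each state as the stationary weight pi_i.
from collections import Counter
from math import log2

def entropy_rate(states):
    # Count observed transitions and how often each state is left.
    transitions = Counter(zip(states, states[1:]))
    outgoing = Counter(s for s, _ in transitions.elements())
    n = len(states) - 1          # total number of transitions
    h = 0.0
    for (s, _), c in transitions.items():
        p_trans = c / outgoing[s]            # estimated P_ij
        h -= (outgoing[s] / n) * p_trans * log2(p_trans)
    return h
```

A perfectly periodic appliance yields an entropy rate of zero, so a rise in a device's entropy rate over time is the kind of signal a novelty detector can flag.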
Abstract:
Flow represents an optimal psychological state that is intrinsically rewarding. However, to date only a few studies have investigated the conditions for flow in sports. The present research aims to expand our understanding of the psychological factors that promote the flow experience in sports, focusing on the person-goal fit, or more precisely on the athletes’ situational and dispositional goal orientations. We hypothesize that a fit between an athlete’s situational and dispositional approach versus avoidance goal orientation should promote flow, whereas a non-fit will hinder flow during sports. In addition to the flow experience, we hypothesize that an athlete’s affective well-being is also affected by the person-goal fit. Here our assumptions are theoretically rooted in research on person-environment fit. An experimental study in an ecologically valid sport setting was conducted in order to draw causal conclusions and derive useful strategies for the practice of sports. Specifically, we investigated 67 male soccer players from a regional amateur league during a regular training session. They were randomly assigned to an approach or avoidance goal group and asked to take five penalty shots. Immediately afterwards, their flow experience and affective well-being during the penalty shootout were measured. As predicted, soccer players with a strong dispositional approach goal orientation experienced more flow and reported higher affective well-being when they were assigned to the approach goal. In contrast, soccer players with a strong dispositional avoidance goal orientation benefited from being assigned an avoidance goal in terms of their flow experience and affective well-being. The results are discussed critically with respect to their theoretical and practical implications.
Abstract:
OBJECTIVE Standard stroke CT protocols start with non-enhanced CT, followed by perfusion CT (PCT), and end with CTA. We aimed to evaluate the influence of the sequence of PCT and CTA on quantitative perfusion parameters, venous contrast enhancement, and examination time, in order to save critical time in the therapeutic window in stroke patients. METHODS AND MATERIALS Stroke CT data sets of 85 patients, 47 with CTA before PCT (group A) and 38 with CTA after PCT (group B), were retrospectively analyzed by two experienced neuroradiologists. Parameter maps of cerebral blood flow, cerebral blood volume, time to peak, and mean transit time, as well as arterial and venous contrast enhancement, were compared. RESULTS Both readers rated the contrast of brain-supplying arteries as equal in both groups (p=0.55 intracranial, p=0.73 extracranial). Quantitative perfusion parameters did not significantly differ between the groups (all p>0.18), while the extent of venous superimposition of the ICA was rated higher in group B (p=0.04). The time to complete the diagnostic CT examination was significantly shorter for group A (p<0.01). CONCLUSION Performing CTA directly after NECT has no significant effect on PCT parameters, avoids venous preloading in CTA, and yields significantly shorter examination times.
Abstract:
BACKGROUND Surgical site infections are the most common hospital-acquired infections among surgical patients. The administration of surgical antimicrobial prophylaxis reduces the risk of surgical site infections. The optimal timing of this administration is still a matter of debate. While most studies suggest that it should be given as close to the incision time as possible, others conclude that this may be too late for optimal prevention of surgical site infections. A large observational study suggests that surgical antimicrobial prophylaxis should be administered 74 to 30 minutes before surgery. The aim of this article is to report the design and protocol of a randomized controlled trial investigating the optimal timing of surgical antimicrobial prophylaxis. METHODS/DESIGN In this bi-center randomized controlled trial conducted at two tertiary referral centers in Switzerland, we plan to include 5,000 patients undergoing general, oncologic, vascular, and orthopedic trauma procedures. Patients are randomized in a 1:1 ratio into two groups: one receiving surgical antimicrobial prophylaxis in the anesthesia room (75 to 30 minutes before incision) and the other receiving it in the operating room (less than 30 minutes before incision). We expect a significantly lower rate of surgical site infections with surgical antimicrobial prophylaxis administered more than 30 minutes before the scheduled incision. The primary outcome is the occurrence of surgical site infections during a 30-day follow-up period (one year with an implant in place). Assuming a 5% surgical site infection risk with administration of surgical antimicrobial prophylaxis in the operating room, the planned sample size has 80% power to detect a relative risk reduction of 33% for surgical site infections when administering surgical antimicrobial prophylaxis in the anesthesia room (with a two-sided type I error of 5%). We expect the study to be completed within three years.
DISCUSSION The results of this randomized controlled trial will have an important impact on current international guidelines for infection control strategies in the hospital. Moreover, the results are of significant interest for patient safety and healthcare economics. TRIAL REGISTRATION This trial is registered on ClinicalTrials.gov under the identifier NCT01790529.
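The stated sample size can be checked with a standard two-proportion normal approximation. This is a back-of-the-envelope re-derivation under the abstract's stated assumptions, not the trial's actual statistical analysis plan:

```python
# Sample-size check for two proportions (normal approximation):
# baseline SSI risk 5%, relative risk reduction 33%, two-sided alpha 5%
# (z = 1.96), power 80% (z = 0.84).
from math import sqrt

def n_per_arm(p1, rrr, z_alpha=1.96, z_beta=0.84):
    """Patients needed per arm to detect a relative risk reduction rrr."""
    p2 = p1 * (1 - rrr)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

print(round(2 * n_per_arm(0.05, 0.33)))   # → 4606 patients in total
```

Roughly 4,600 evaluable patients across both arms, consistent with the planned enrollment of 5,000 once dropouts are allowed for.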
Abstract:
BACKGROUND Unilateral ischemic stroke disrupts the well balanced interactions within bilateral cortical networks. Restitution of interhemispheric balance is thought to contribute to post-stroke recovery. Longitudinal measurements of cerebral blood flow (CBF) changes might act as surrogate marker for this process. OBJECTIVE To quantify longitudinal CBF changes using arterial spin labeling MRI (ASL) and interhemispheric balance within the cortical sensorimotor network and to assess their relationship with motor hand function recovery. METHODS Longitudinal CBF data were acquired in 23 patients at 3 and 9 months after cortical sensorimotor stroke and in 20 healthy controls using pulsed ASL. Recovery of grip force and manual dexterity was assessed with tasks requiring power and precision grips. Voxel-based analysis was performed to identify areas of significant CBF change. Region-of-interest analyses were used to quantify the interhemispheric balance across nodes of the cortical sensorimotor network. RESULTS Dexterity was more affected, and recovered at a slower pace than grip force. In patients with successful recovery of dexterous hand function, CBF decreased over time in the contralesional supplementary motor area, paralimbic anterior cingulate cortex and superior precuneus, and interhemispheric balance returned to healthy control levels. In contrast, patients with poor recovery presented with sustained hypoperfusion in the sensorimotor cortices encompassing the ischemic tissue, and CBF remained lateralized to the contralesional hemisphere. CONCLUSIONS Sustained perfusion imbalance within the cortical sensorimotor network, as measured with task-unrelated ASL, is associated with poor recovery of dexterous hand function after stroke. CBF at rest might be used to monitor recovery and gain prognostic information.
Abstract:
We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
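The probabilistic bisection policy mentioned here can be sketched on a discretized interval: query the posterior median, then perform a Bayesian update assuming the oracle answers correctly with probability 1 − eps. This is a generic illustration of Horstein-style bisection, not the paper's code:

```python
# One round of probabilistic bisection (Horstein-style) on a grid of
# candidate target locations.  The oracle's answer is corrupted with
# crossover probability eps.

def bisection_step(posterior, answer_is_yes, eps):
    """Ask "is the target at index <= median?" and update the posterior.

    posterior      -- list of probabilities over grid points (sums to 1)
    answer_is_yes  -- the (possibly corrupted) oracle reply
    eps            -- probability the oracle's answer is flipped
    """
    # Find the posterior median index.
    acc, m = 0.0, 0
    for i, p in enumerate(posterior):
        acc += p
        if acc >= 0.5:
            m = i
            break
    # Likelihood of the observed answer under each hypothesis, then normalize.
    new = [p * ((1 - eps) if (i <= m) == answer_is_yes else eps)
           for i, p in enumerate(posterior)]
    z = sum(new)
    return m, [p / z for p in new]
```

Repeating this step concentrates the posterior around the target and, as the paper notes for the one-step entropy-greedy policy, each query maximally reduces the expected posterior entropy.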
Abstract:
OBJECTIVES This study compared clinical outcomes and revascularization strategies among patients presenting with low ejection fraction, low-gradient (LEF-LG) severe aortic stenosis (AS) according to the assigned treatment modality. BACKGROUND The optimal treatment modality for patients with LEF-LG severe AS and concomitant coronary artery disease (CAD) requiring revascularization is unknown. METHODS Of 1,551 patients, 204 with LEF-LG severe AS (aortic valve area <1.0 cm(2), ejection fraction <50%, and mean gradient <40 mm Hg) were allocated to medical therapy (MT) (n = 44), surgical aortic valve replacement (SAVR) (n = 52), or transcatheter aortic valve replacement (TAVR) (n = 108). CAD complexity was assessed using the SYNTAX score (SS) in 187 of 204 patients (92%). The primary endpoint was mortality at 1 year. RESULTS LEF-LG severe AS patients undergoing SAVR were more likely to undergo complete revascularization (17 of 52, 35%) compared with TAVR (8 of 108, 8%) and MT (0 of 44, 0%) patients (p < 0.001). Compared with MT, both SAVR (adjusted hazard ratio [adj HR]: 0.16; 95% confidence interval [CI]: 0.07 to 0.38; p < 0.001) and TAVR (adj HR: 0.30; 95% CI: 0.18 to 0.52; p < 0.001) improved survival at 1 year. In TAVR and SAVR patients, CAD severity was associated with higher rates of cardiovascular death (no CAD: 12.2% vs. low SS [0 to 22], 15.3% vs. high SS [>22], 31.5%; p = 0.037) at 1 year. Compared with no CAD/complete revascularization, TAVR and SAVR patients undergoing incomplete revascularization had significantly higher 1-year cardiovascular death rates (adj HR: 2.80; 95% CI: 1.07 to 7.36; p = 0.037). CONCLUSIONS Among LEF-LG severe AS patients, SAVR and TAVR improved survival compared with MT. CAD severity was associated with worse outcomes and incomplete revascularization predicted 1-year cardiovascular mortality among TAVR and SAVR patients.