926 results for "Maximum available power"
Abstract:
The increasing share of electricity from renewable sources requires a dynamic concept to compensate for peak-load periods and supply gaps in wind and solar power. Owing to their high energetic availability and the storability of biogas, biogas plants can provide flexible energy supply and, via a power-to-gas process, can also prevent overloading of the power grid during short-term electricity surpluses. Demand-driven operation of biogas plants, however, places high demands on the microbiology in the reactor, which must adapt to frequently changing process conditions such as the organic loading rate. Real-time monitoring of the fermentation process is therefore indispensable for detecting disturbances in the microbial fermentation pathways early and counteracting them adequately. Previous microbial population analyses have been limited to laborious molecular-biological investigations of the fermentation substrate, whose results are therefore available to the operator only with a delay. In this work, a laser absorption spectrometer for continuous measurement of the carbon isotope ratios of methane was tested for the first time at a research biogas plant. Isotope ratios varying with the organic loading rate and process conditions could be measured. Using isolates from the investigated reactor, it was first shown that each methanogenesis pathway (hydrogenotrophic, acetoclastic and methylotrophic) produces a characteristic natural isotope signature in the biogas, so that the currently dominant methanogenic reactions can be identified from the isotope ratios in the biogas.
By using 13C- and 2H-isotope-labelled substrates in pure and mixed cultures and batch reactors, together with HPLC and GC analyses of the metabolic products, several previously unknown carbon fluxes in bioreactors were identified, which in turn can affect the measured isotope ratios in the biogas. The formation of methanol and its microbial degradation products up to the final CH4 formation was reconstructed for the first time in an agricultural biogas plant using five isolates, demonstrating the occurrence of methylotrophic methanogenesis pathways. Molecular-biological methods furthermore detected methane-oxidizing bacteria of numerous unknown species in the reactor, whose presence had not been expected because of the low O2 content in biogas plants. By constructing a synthetic DNA strand containing the binding sequences for eleven specific primer pairs, a new method was established with which a large number of microbial target organisms can be quantified by real-time PCR using a single uniform copy standard. A weekly qPCR analysis of fermenter samples over 70 days showed that the isotope ratios in the biogas are significantly influenced by the composition of the reactor microbiota. Besides the currently dominant methanogenesis pathways, it was also possible to identify several bacterial reactions such as syntrophic acetate oxidation, acetogenesis and sulfate reduction from the δ13C(CH4) values, demonstrating the high potential of continuous isotope measurement for process analytics in biogas plants.
Abstract:
In recent years, European countries have paid increasing attention to renewable sources and greenhouse gas emissions, and the Council of the European Union and the European Parliament have established ambitious targets for the coming years. In this scenario, biomass plays a prominent role since its life cycle produces zero net carbon dioxide emissions. Additionally, biomass can ensure continuity of plant operation thanks to its availability and storability. Several conventional systems running on biomass are currently available, but most perform well only at large scale or in the small power range. The absence of an efficient system at the small-to-middle scale inspired this thesis project. Its subject is an innovative plant based on a wet indirectly fired gas turbine (WIFGT) integrated with an organic Rankine cycle (ORC) unit for combined heat and power production. The WIFGT performs well in the small-to-middle power range, while the ORC is capable of exploiting low-temperature heat sources. Their integration is investigated in this thesis with the aim of carrying out a preliminary design of the components. The targeted plant output is around 200 kW, so as not to need a wide cultivation area and to avoid biomass shipping. Existing in-house simulation tools are used and adapted for this purpose. First, the WIFGT + ORC model is built from zero-dimensional models of the heat exchangers, compressor, turbines, furnace, dryer and pump. Different working fluids are evaluated; toluene and benzene turn out to be the most suitable. In the indirectly fired gas turbine, a pressure ratio around 4 leads to the highest efficiency. The thermodynamic analysis shows an electric efficiency of 38%, outdoing other conventional plants in the same power range. The combined plant is designed to recover thermal energy: water is used as coolant in the condenser and is heated from 60°C to 90°C, enabling space heating.
One-dimensional models are used to design the heat exchange equipment, with different types of heat exchangers chosen depending on the working temperature. A finned-plate heat exchanger is selected for the WIFGT heat transfer equipment because of the high-temperature, oxidizing and corrosive environment. A once-through boiler with finned tubes is chosen to vaporize the organic fluid in the ORC, and a plate heat exchanger is chosen for the condenser and recuperator. A quasi-one-dimensional model of a single-stage axial turbine is implemented to design both the WIFGT and the ORC turbines. The system simulation after the component design shows an electric efficiency of around 34%, a decrease of about 10% relative to the zero-dimensional analysis. The work demonstrates the system's potential relative to existing plants from both a technical and an economic point of view.
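As a rough illustration of why such an integration pays off, the electric efficiency of a topping cycle combined with a bottoming unit can be estimated from the two cycle efficiencies and the fraction of rejected heat that the bottoming unit recovers. The figures below are assumed example values, not the thesis model:

```python
def combined_efficiency(eta_top, eta_bottom, heat_recovery_fraction=1.0):
    """Electric efficiency of a topping cycle whose rejected heat
    partly feeds a bottoming cycle (simple energy-balance estimate)."""
    return eta_top + (1.0 - eta_top) * heat_recovery_fraction * eta_bottom

# Assumed figures: a 25% topping turbine with a bottoming ORC converting
# 20% of the 80% of rejected heat it recovers gives ~37% overall.
eta = combined_efficiency(0.25, 0.20, heat_recovery_fraction=0.80)
```

This simple balance explains why pairing a modest gas turbine with an ORC can outperform either machine alone in this power range.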
Abstract:
Introduction The survival of patients admitted to an emergency department is determined by the severity of the acute illness and the quality of care provided. The high number of admitted patients and the wide spectrum of illness severity make an immediate assessment of all patients unrealistic. The aim of this study was to evaluate a scoring system, based on physiological parameters readily available immediately after admission to an emergency department (ED), for the identification of at-risk patients. Methods This prospective observational cohort study included 4,388 consecutive adult patients admitted via the ED of a 960-bed tertiary referral hospital over a period of six months. The occurrence of each of seven potential vital sign abnormalities (threat to airway; abnormal respiratory rate, oxygen saturation, systolic blood pressure or heart rate; low Glasgow Coma Scale; and seizures) was recorded and summed to generate the vital sign score (VSS). VSSinitial was defined as the VSS in the first 15 minutes after admission, and VSSmax as the maximum VSS throughout the stay in the ED. The occurrence of single vital sign abnormalities in the first 15 minutes, VSSinitial and VSSmax were evaluated as potential predictors of hospital mortality. Results Logistic regression analysis identified all evaluated single vital sign abnormalities except seizures and abnormal respiratory rate as independent predictors of hospital mortality. Increasing VSSinitial and VSSmax were significantly correlated with hospital mortality (odds ratio (OR) 2.80, 95% confidence interval (CI) 2.50 to 3.14, P < 0.0001 for VSSinitial; OR 2.36, 95% CI 2.15 to 2.60, P < 0.0001 for VSSmax). The predictive power of the VSS was highest when collected in the first 15 minutes after ED admission (log-rank chi-square 468.1, P < 0.0001 for VSSinitial; log-rank chi-square 361.5, P < 0.0001 for VSSmax).
Conclusions Vital sign abnormalities and VSS collected in the first minutes after ED admission can identify patients at risk of an unfavourable outcome.
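The VSS described above is simply the count of observed abnormalities; a minimal sketch of the scoring rule (illustrative only, not the study's software) is:

```python
def vital_sign_score(abnormalities):
    """Vital sign score: number of the seven vital-sign abnormalities
    present, in the order: threat to airway, abnormal respiratory rate,
    abnormal oxygen saturation, abnormal systolic blood pressure,
    abnormal heart rate, low Glasgow Coma Scale, seizures."""
    assert len(abnormalities) == 7, "expects one flag per abnormality"
    return sum(1 for present in abnormalities if present)

# A patient with low oxygen saturation and a low GCS scores VSS = 2.
vss = vital_sign_score([False, False, True, False, False, True, False])
```

Each unit increase in this 0-7 score was associated with the odds ratios for hospital mortality reported in the abstract.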
Abstract:
Solar energy is the most abundant persistent energy resource. It is also an intermittent one available for only a fraction of each day while the demand for electric power never ceases. To produce a significant amount of power at the utility scale, electricity generated from solar energy must be dispatchable and able to be supplied in response to variations in demand. This requires energy storage that serves to decouple the intermittent solar resource from the load and enables around-the-clock power production from solar energy. Practically, solar energy storage technologies must be efficient as any energy loss results in an increase in the amount of required collection hardware, the largest cost in a solar electric power system. Storing solar energy as heat has been shown to be an efficient, scalable, and relatively low-cost approach to providing dispatchable solar electricity. Concentrating solar power systems that include thermal energy storage (TES) use mirrors to focus sunlight onto a heat exchanger where it is converted to thermal energy that is carried away by a heat transfer fluid and used to drive a conventional thermal power cycle (e.g., steam power plant), or stored for later use. Several approaches to TES have been developed and can generally be categorized as either thermophysical (wherein energy is stored in a hot fluid or solid medium or by causing a phase change that can later be reversed to release heat) or thermochemical (in which energy is stored in chemical bonds requiring two or more reversible chemical reactions).
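For the thermophysical (sensible-heat) category above, the stored energy follows directly from the medium's mass, heat capacity and temperature swing. The sketch below uses assumed molten-salt properties, not figures from the text:

```python
def sensible_heat_stored(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    """Energy stored (J) by heating a storage medium through delta_t_k."""
    return mass_kg * specific_heat_j_per_kg_k * delta_t_k

# Assumed example: 1000 kg of molten salt (c ~ 1500 J/(kg*K)) heated by
# 250 K stores 3.75e8 J, i.e. about 104 kWh of thermal energy.
q_j = sensible_heat_stored(1000.0, 1500.0, 250.0)
kwh = q_j / 3.6e6
```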
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous efforts have been devoted over the last fifty years to the development of accurate models, especially of polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison with experimental data. Thus, even today, process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude them by trial and error is costly and time consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the extruder channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; that work had only limited success because of the capabilities of the computers and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth of the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
Abstract:
The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating and testing the models. Large amounts of data are necessary for describing the hydrostratigraphy in areas with complex geology. Increasingly, states are making available spatial data that can be used as input to groundwater flow models, but the appropriateness of these data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data, (2) development and calibration of a regional flow model for the site, (3) conversion of the regional model to a more highly resolved local model, (4) simulation of the dioxane plume, and (5) evaluation of the model's ability to simulate field data and estimation of the possible dioxane sources and their subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can represent recharge, pumping, and surface-water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach successfully simulated the groundwater flows by calibrating recharge and hydraulic conductivities.
The plume transport was adequately simulated using literature dispersivity and sorption coefficients, although the plume geometries were not well constrained.
Abstract:
The seasonal appearance of a deep chlorophyll maximum (DCM) in Lake Superior is a striking and widely observed phenomenon; however, its mechanisms of formation and maintenance are not well understood. As this phenomenon may be the reflection of an ecological driver, or a driver itself, a lack of understanding of its driving forces limits the ability to accurately predict and manage changes in this ecosystem. Key mechanisms generally associated with DCM dynamics (i.e. ecological, physiological and physical phenomena) are examined individually and in concert to establish their roles. First, the prevailing paradigm, "the DCM is a great place to live", is analyzed through an integration of the results of laboratory experiments and field measurements. The analysis indicates that growth at this depth is severely restricted and thus unable to explain the full magnitude of the phenomenon. Additional contributing mechanisms, such as photoadaptation, settling and grazing, are examined with a one-dimensional mathematical model of chlorophyll and particulate organic carbon. Settling has the strongest impact on the formation and maintenance of the DCM, transporting biomass to the metalimnion and resulting in the accumulation of algae, i.e. a peak in the particulate organic carbon profile. Subsequently, shade adaptation becomes manifest as a chlorophyll maximum deeper in the water column, where light conditions particularly favor the process. Shade adaptation mediates the magnitude, shape and vertical position of the chlorophyll peak. Growth at the DCM depth makes only a marginal contribution, while grazing has an adverse effect on the extent of the DCM. The observed separation of the carbon biomass and chlorophyll maxima should caution scientists against equating the DCM with a large nutrient pool available to higher trophic levels. The ecological significance of the DCM should not be separated from the underlying carbon dynamics.
When evaluated in its entirety, the DCM becomes the projected image of a structure that remains elusive to measure but represents the foundation of all higher trophic levels. These results also offer guidance in examining ecosystem perturbations such as climate change. For example, warming would be expected to prolong the period of thermal stratification, extending the late-summer period of suboptimal (phosphorus-limited) growth and the attendant transport of phytoplankton to the metalimnion. This reduction in epilimnetic algal production would decrease the supply of algae to the metalimnion, possibly reducing the supply of prey to the grazer community. This work demonstrates the value of modeling in challenging and advancing our understanding of ecosystem dynamics, steps vital to the reliable testing of management alternatives.
Abstract:
Space-based solar power satellites use solar arrays to generate clean, green, and renewable electricity in space and transmit it to Earth via microwave, radio-wave or laser beams to corresponding receivers (ground stations). These are traditionally large structures orbiting Earth at geosynchronous altitude. This thesis introduces a new architecture for a space-based solar power satellite constellation that reduces the high cost involved in constructing the satellites and in the multiple launches to geosynchronous altitude. The proposed concept is a constellation of Low Earth Orbit satellites that are smaller than the conventional system. For this application a Repeated Sun-Synchronous Track Circular Orbit (RSSTO) is considered. In these orbits, the spacecraft revisits the same locations on Earth periodically every desired number of days, with the line of nodes of the spacecraft's orbit fixed relative to the Sun. A wide range of solutions is studied, and in this thesis a two-orbit constellation design is chosen and simulated. The number of satellites is chosen based on the electric power demands of a given set of global cities. The orbits of the satellites are designed such that their ground tracks visit a maximum number of ground stations during the revisit period. In the simulation, the ground stations are located close to big cities in the USA and worldwide, so that the constellation beams power directly to locations of high electric power demand. The J2 perturbations are included in the mathematical model used in orbit design. The coverage time of each spacecraft over a ground site and the gap time between two consecutive spacecraft visiting a ground site are simulated in order to evaluate the coverage continuity of the proposed solar power constellation.
Simulations show that there are always periods in which a spacecraft does not communicate with any ground station. For this reason, it is suggested that each satellite in the constellation be equipped with power storage components so that it can store power for later transmission. This thesis presents a method for designing the solar power constellation orbits such that the number of ground stations visited during the given revisit period is maximized, which in turn maximizes the power transmitted to the ground stations.
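The starting point of such a repeat-ground-track design can be sketched with two-body mechanics: choose the orbital period so that an integer number of revolutions fits the desired revisit interval. This is a simplified illustration; the thesis additionally models the J2 nodal drift, which this sketch omits:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # s

def repeat_track_semi_major_axis(revs, days):
    """Semi-major axis (m) of a circular orbit completing `revs`
    revolutions in `days` sidereal days (two-body approximation,
    no J2 correction)."""
    period = days * SIDEREAL_DAY / revs
    return (MU_EARTH * (period / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)
```

For 15 revolutions per sidereal day this gives a semi-major axis near 6,930 km, i.e. a LEO altitude of roughly 550 km, which is the regime the proposed constellation targets.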
Abstract:
Power transformers are key components of the power grid and are among the components most exposed to power system transients. The failure of a large transformer can cause severe monetary losses to a utility, so adequate protection schemes are of great importance to avoid transformer damage and maximize continuity of service. Computer modeling can be used as an efficient tool to improve the reliability of transformer protective relay applications. Unfortunately, transformer models presently available in commercial software lack completeness in the representation of several aspects, such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies higher than the power frequency for a more accurate simulation of switching transients, since these are a well-known cause of unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be built from either of two sources of information: 1) test report data or 2) design data. When only test-report data are available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a finite element model (FEM) is created to calculate the leakage parameters for the higher-order model. An analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model valid for simulations at frequencies above the power frequency was developed by further dividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations.
Finally, a stability analysis of the higher-order model was made by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was also added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means to determine the leakage parameters for the ATP model. The higher-order model was found to reproduce the short-circuit impedance acceptably up to about 10 kHz and the behavior at the first anti-resonant frequency was better matched with the measurements.
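The oscillation-damping idea can be illustrated on a scalar test equation: the pure trapezoidal rule rings at discontinuities for stiff steps, while blending toward backward Euler suppresses the ringing. This is a generic theta-method sketch, not ATP's actual damping implementation:

```python
def damped_trapezoidal_decay(y0, k, dt, steps, alpha=0.0):
    """Integrate dy/dt = -k*y with a theta-method: alpha=0 is the
    trapezoidal rule, alpha=1 is backward Euler; intermediate alpha
    adds numerical damping that suppresses trapezoidal oscillations
    at discontinuities."""
    w = 0.5 * (1.0 + alpha)
    y = y0
    for _ in range(steps):
        # Implicit update solved in closed form for the linear test problem.
        y = y * (1.0 - (1.0 - w) * k * dt) / (1.0 + w * k * dt)
    return y
```

With a stiff step (k*dt = 10) the trapezoidal solution overshoots to a negative value (spurious oscillation), while the damped scheme stays positive; for small steps both converge to the exact exponential decay.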
Abstract:
Photovoltaic power has become one of the most popular research areas in the new-energy field. This report presents the case of a household solar power system. The simulation is built in the Matlab environment using Simulink and SimPowerSystems. A household solar system consists of four parts: solar cells, an MPPT system, a battery, and the power consumer. The solar cell and the MPPT system are studied and analyzed individually; the system with MPPT generates 30% more energy than the system without it. Simulation of the household system shows that it generates 40.392 kWh per sunny day. Combining the energy generated by the system with the price of electric power, 8.42 years are needed for the system to break even when weather conditions are taken into account.
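The break-even figure follows from a simple undiscounted payback calculation. The sketch below uses hypothetical cost and tariff values, since the report's actual inputs are not given here:

```python
def payback_years(daily_energy_kwh, price_per_kwh, system_cost,
                  sunny_day_fraction=1.0):
    """Simple (undiscounted) payback period of a PV system, in years."""
    yearly_revenue = daily_energy_kwh * 365 * sunny_day_fraction * price_per_kwh
    return system_cost / yearly_revenue

# With the reported 40.392 kWh/sunny day, an assumed tariff of 0.1 per
# kWh and an assumed system cost of 10,000, payback is about 6.8 years.
years = payback_years(40.392, 0.1, 10000.0)
```

The report's 8.42 years reflects its own electricity price and weather statistics (a sunny-day fraction below 1 lengthens the payback proportionally).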
Abstract:
We evaluated the muscular strength, endurance, and power responses of 12 college students, ranging in age from 19 to 40 years, who participated in a 6-wk high-intensity training program commonly used to improve muscular endurance. Muscular strength was measured by a one-repetition-maximum (1RM) bench press test and a 1RM Hammer bench press test; muscular endurance was measured by administering a 70-percent 1RM test to failure on the Hammer bench press; and upper-body power was measured by administering a medicine ball throw test. We observed a 4.8-percent improvement of 2.7 kg on the bench press, a 14.6-percent improvement of 10.5 kg on the Hammer bench press, a 45.5-percent improvement with an average increase of five repetitions on the submaximal test to failure, and an average improvement of ~20 percent (60 cm) on the medicine ball throw. For our subjects, a commonly used high-intensity muscular endurance training program resulted in improved performance on tests measuring muscular strength, endurance, and power, with zero reported injuries during training or assessment procedures.
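The reported gains pair an absolute and a relative change, and the consistency of such pairs is easy to check. The ~56 kg bench-press baseline below is our back-calculation from the reported numbers, not a figure from the study:

```python
def percent_improvement(before, after):
    """Relative gain of `after` over `before`, in percent."""
    return (after - before) / before * 100.0

# A 2.7 kg gain amounting to 4.8% implies a baseline of about
# 2.7 / 0.048 = 56.25 kg (back-calculated, not reported).
gain = percent_improvement(56.25, 56.25 + 2.7)
```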
Abstract:
PURPOSE: To evaluate a widely used nontunneled triple-lumen central venous catheter in order to determine whether the largest of the three lumina (16 gauge) can tolerate high flow rates, such as those required for computed tomographic angiography. MATERIALS AND METHODS: Forty-two catheters were tested in vitro, including 10 new and 32 used catheters (median indwelling time, 5 days). Injection pressures were continuously monitored at the site of the 16-gauge central venous catheter hub. Catheters were injected with 300 and 370 mg of iodine per milliliter of iopamidol by using a mechanical injector at increasing flow rates until the catheter failed. The infusion rate, hub pressure, and location were documented for each failure event. The catheter pressures generated during hand injection by five operators were also analyzed. Mean flow rates and pressures at failure were compared by means of two-tailed Student t test, with differences considered significant at P < .05. RESULTS: Injections of iopamidol with 370 mg of iodine per milliliter generate more pressure than injections of iopamidol with 300 mg of iodine per milliliter at the same injection rate. All catheters failed in the tubing external to the patient. The lowest flow rate at which catheter failure occurred was 9 mL/sec. The lowest hub pressure at failure was 262 pounds per square inch gauge (psig) for new and 213 psig for used catheters. Hand injection of iopamidol with 300 mg of iodine per milliliter generated peak hub pressures ranging from 35 to 72 psig, corresponding to flow rates ranging from 2.5 to 5.0 mL/sec. CONCLUSION: Indwelling use has an effect on catheter material property, but even for used catheters there is a substantial safety margin for power injection with the particular triple-lumen central venous catheter tested in this study, as the manufacturer's recommendation for maximum pressure is 15 psig.
Abstract:
Bone-anchored hearing implants (BAHI) are routinely used to alleviate the effects of the acoustic head shadow in single-sided sensorineural deafness (SSD). In this study, the influence of the directional microphone setting and the maximum power output of the BAHI sound processor on speech understanding in noise was investigated in a laboratory setting. Eight adult BAHI users with SSD participated in this pilot study. Speech understanding in noise was measured using a new Slovak speech-in-noise test in two different spatial settings, either with speech presented from the side of the BAHI and noise from the front (S90N0) or vice versa (S0N90). In both spatial settings, speech understanding was measured without a BAHI, with a Baha BP100 in omnidirectional mode, with a BP100 in directional mode, with a BP110 power in omnidirectional mode and with a BP110 power in directional mode. In spatial setting S90N0, speech understanding in noise with either sound processor and in either directional mode improved by 2.2-2.8 dB (p = 0.004-0.016). In spatial setting S0N90, speech understanding in noise was reduced by either BAHI, but was significantly better by 1.0-1.8 dB if the directional microphone system was activated (p = 0.046), compared to the omnidirectional setting. With the limited number of subjects in this study, no statistically significant differences were found between the two sound processors.
Abstract:
The motion of lung tumors during respiration makes the accurate delivery of radiation therapy to the thorax difficult because it increases the uncertainty of the target position. The adoption of four-dimensional computed tomography (4D-CT) has allowed us to determine how a tumor moves with respiration for each individual patient. Using information acquired during a 4D-CT scan, we can define the target, visualize motion, and calculate dose during the planning phase of the radiotherapy process. One image data set that can be created from the 4D-CT acquisition is the maximum-intensity projection (MIP). The MIP can be used as a starting point to define the volume that encompasses the motion envelope of the moving gross target volume (GTV). Because of the close relationship between the MIP and the final target volume, we investigated four MIP data sets created with different methodologies (three using various 4D-CT sorting implementations, and one using all available cine CT images) to compare target delineation. Changing the 4D-CT sorting method leads to the selection of a different collection of images; however, the clinical implications of changing the constituent images of the resultant MIP data set are not clear, and no comprehensive study has compared target delineation based on different 4D-CT sorting methodologies in a patient population. We selected a collection of patients who had previously undergone thoracic 4D-CT scans at our institution and whose lung tumors moved at least 1 cm. We then generated the four MIP data sets and automatically contoured the target volumes. In doing so, we identified cases in which the MIP generated from a 4D-CT sorting process under-represented the motion envelope of the target volume by more than 10% relative to the MIP generated from all of the cine CT images. The 4D-CT sorting methods suffered from duplicate image selection and might not select the images of maximum tumor extent.
Based on our results, we suggest utilization of a MIP generated from the full cine CT data set to ensure a representative inclusive tumor extent, and to avoid geometric miss.
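The MIP itself is a voxel-wise maximum across the respiratory phases. A minimal pure-Python sketch of that operation follows (illustrative; clinical MIPs are computed by the scanner or treatment-planning software on full DICOM volumes):

```python
def maximum_intensity_projection(phases):
    """Voxel-wise maximum across a list of equally shaped 3-D volumes
    indexed as volume[z][y][x] (e.g. the phases of a 4D-CT)."""
    nz = len(phases[0])
    ny = len(phases[0][0])
    nx = len(phases[0][0][0])
    return [[[max(phase[z][y][x] for phase in phases)
              for x in range(nx)]
             for y in range(ny)]
            for z in range(nz)]
```

Because the result depends only on which images enter the maximum, a sorting method that drops the extreme-position phases directly shrinks the apparent motion envelope, which is the under-representation effect reported above.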