12 results for ENERGY-LEVEL

in Digital Commons at Florida International University


Relevance:

30.00%

Publisher:

Abstract:

Hazardous radioactive liquid waste is the legacy of more than 50 years of plutonium production associated with the United States' nuclear weapons program. It is estimated that more than 245,000 tons of nitrate wastes are stored at facilities such as the single-shell tanks (SST) at the Hanford Site in the state of Washington and the Melton Valley storage tanks at Oak Ridge National Laboratory (ORNL) in Tennessee. To develop an innovative new technology for the destruction and immobilization of nitrate-based radioactive liquid waste, the United States Department of Energy (DOE) initiated the research project that resulted in the technology known as the Nitrate to Ammonia and Ceramic (NAC) process. Because the nitrate anion is highly mobile and difficult to immobilize, especially in the relatively porous cement-based grout used to date to immobilize liquid waste, it presents a major obstacle to environmental clean-up initiatives. Thus, in an effort to contribute to the existing body of knowledge and enhance the efficacy of the NAC process, this research involved the experimental measurement of the rheological and heat transfer behaviors of the NAC product slurry and the determination of the optimal operating parameters for the continuous NAC chemical reaction process. Test results indicate that the NAC product slurry exhibits typical non-Newtonian flow behavior. Correlation equations for the slurry's rheological properties and heat transfer rate in pipe flow have been developed; these should prove valuable in the design of a full-scale NAC processing plant. The 20-percent slurry exhibited typical dilatant (shear-thickening) behavior and was in the turbulent flow regime due to its lower viscosity. The 40-percent slurry exhibited typical pseudoplastic (shear-thinning) behavior and remained in the laminar flow regime throughout its experimental range. The reactions were found to be more efficient in the lower temperature range investigated. With respect to leachability, the experimental final NAC ceramic waste form is comparable to the final product of vitrification, the technology chosen by DOE to treat these wastes. As the NAC process has the potential of reducing the volume of nitrate-based radioactive liquid waste by as much as 70 percent, it not only promises to enhance environmental remediation efforts but also to effect substantial cost savings.
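The rheological characterization above rests on fitting a non-Newtonian constitutive model to viscometer data. As a minimal sketch of that idea, the Python snippet below fits the Ostwald-de Waele (power-law) model and classifies the flow behavior index; the shear data, density, pipe diameter, and velocity are illustrative placeholders, not the dissertation's measurements or correlation equations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ostwald-de Waele (power-law) model: tau = K * gamma_dot**n
# n > 1 -> dilatant (shear thickening), n < 1 -> pseudoplastic (shear thinning)
def power_law(gamma_dot, K, n):
    return K * gamma_dot ** n

# Illustrative viscometer readings (shear rate in 1/s, shear stress in Pa);
# these numbers are placeholders, not data from the NAC experiments.
gamma_dot = np.array([10, 20, 50, 100, 200, 500], dtype=float)
tau = np.array([0.8, 1.9, 6.1, 14.5, 34.0, 105.0])

(K, n), _ = curve_fit(power_law, gamma_dot, tau, p0=(0.1, 1.0))
behavior = "dilatant" if n > 1 else "pseudoplastic" if n < 1 else "Newtonian"
print(f"K = {K:.4f} Pa*s^n, n = {n:.3f} -> {behavior}")

# A generalized (Metzner-Reed) Reynolds number for power-law pipe flow then
# indicates laminar vs. turbulent regime, assuming illustrative pipe data:
rho, D, v = 1100.0, 0.05, 1.2  # density kg/m^3, pipe diameter m, mean velocity m/s
Re_gen = rho * v ** (2 - n) * D ** n / (K * ((3 * n + 1) / (4 * n)) ** n * 8 ** (n - 1))
print(f"Generalized Reynolds number: {Re_gen:.0f}")
```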

Relevance:

30.00%

Publisher:

Abstract:

Eddy covariance (EC) estimates of carbon dioxide (CO2) fluxes and energy balance are examined to investigate the functional responses of a mature mangrove forest to a disturbance generated by Hurricane Wilma on October 24, 2005 in the Florida Everglades. At the EC site, high winds from the hurricane caused nearly 100% defoliation in the upper canopy and widespread tree mortality. Soil temperatures down to -50 cm increased, and air temperature lapse rates within the forest canopy switched from statically stable to statically unstable conditions following the disturbance. Unstable conditions allowed more efficient transport of water vapor and CO2 from the surface up to the upper canopy layer. Significant increases in latent heat fluxes (LE) and nighttime net ecosystem exchange (NEE) were also observed, and sensible heat fluxes (H) as a proportion of net radiation decreased significantly in response to the disturbance. Many of these impacts persisted through much of the study period through 2009. However, local albedo and MODIS (Moderate Resolution Imaging Spectroradiometer) data (the Enhanced Vegetation Index) indicated that a substantial proportion of active leaf area recovered before the EC measurements began 1 year after the storm. Observed changes in the vertical distribution and the degree of clumping in newly emerged leaves may have affected the energy balance. Direct comparisons of daytime NEE values from before the storm and after our measurements resumed did not show substantial or consistent differences that could be attributed to the disturbance. Regression analyses on seasonal time scales were required to differentiate the storm's impact on monthly average daytime NEE from the changes caused by interannual variability in other environmental drivers. The effects of the storm were apparent on annual time scales, and CO2 uptake remained approximately 250 g C m−2 yr−1 lower in 2009 compared to the average annual values measured in 2004-2005. Dry season CO2 uptake was relatively more affected by the disturbance than wet season values. Complex leaf regeneration dynamics on damaged trees during ecosystem recovery are hypothesized to lead to the variable dry versus wet season impacts on daytime NEE. In contrast, nighttime CO2 release (i.e., nighttime respiration) was consistently and significantly greater, possibly as a result of the enhanced decomposition of litter and coarse woody debris generated by the storm, and this effect was most apparent in the wet seasons compared to the dry seasons. The largest pre- and post-storm differences in NEE coincided roughly with the delayed peak in cumulative mortality of stems in 2007-2008. Across the hurricane-impacted region, cumulative tree mortality rates were also closely correlated with declines in peat surface elevation. Mangrove forest-atmosphere interactions are interpreted with respect to the damage and recovery of stand dynamics and soil accretion processes following the hurricane.
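The flux estimates above come from the eddy covariance technique, whose core is the covariance of vertical wind speed and scalar concentration fluctuations over an averaging period. Below is a simplified Python sketch of that calculation on synthetic 10 Hz data; it omits the coordinate-rotation, despiking, and density (WPL) corrections applied in practice, and all numbers are illustrative.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Minimal eddy-covariance flux estimate: the covariance of vertical wind
    speed (w, m/s) and CO2 density fluctuations (c, mg CO2 / m^3) over an
    averaging period, after removing the means (Reynolds decomposition)."""
    w_prime = w - np.mean(w)
    c_prime = c - np.mean(c)
    return np.mean(w_prime * c_prime)  # mg CO2 m^-2 s^-1

# Illustrative 10 Hz synthetic series for one 30-minute averaging period.
rng = np.random.default_rng(0)
n = 10 * 1800
w = rng.normal(0.0, 0.3, n)                  # vertical wind fluctuations
c = 700 + rng.normal(0.0, 5.0, n) - 2.0 * w  # CO2 density anti-correlated with updrafts (uptake)

flux = eddy_covariance_flux(w, c)
print(f"Half-hour CO2 flux: {flux:.2f} mg CO2 m^-2 s^-1 (negative = uptake)")
```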

Relevance:

30.00%

Publisher:

Abstract:

The relative importance of algal and detrital energy pathways remains a central question in wetlands ecology. We used bulk stable isotope analysis and fatty acid composition to investigate the relative contributions of periphyton (algae) and floc (detritus) in a freshwater wetland with the goal of determining the inputs of these resource pools to lower trophic-level consumers. All animal samples revealed fatty acid markers indicative of both microbial (detrital) and algal origins, though the relative contributions varied among species. Vascular plant markers were in low abundance in most consumers. Detritivory is important for chironomids and amphipods, as demonstrated by the enhanced bacterial fatty acids present in both consumers, while algal resources, in the form of periphyton, likely support ephemeropteran larvae. Invertebrates such as amphipods and grass shrimp appear to be important resources for small omnivorous fish, while Poecilia latipinna appear to strongly use periphyton and Ephemeroptera larvae as food sources. Both P. latipinna and Lepomis spp. assimilated small amounts of vascular plant debris, possibly due to unintentional ingestion of floc while foraging for invertebrates and insect larvae. Physid snails, Haitia spp., were characterized by considerably different fatty acid compositions than other taxa examined, and likely play a unique role in Everglades’ food webs.

Relevance:

30.00%

Publisher:

Abstract:

High efficiency of power converters placed between renewable energy sources and the utility grid is required to maximize the utilization of these sources. Power quality is another aspect that requires large passive elements (inductors, capacitors) to be placed between these sources and the grid. The main objective is to develop a higher-level, high-frequency-based power converter system (HFPCS) that optimizes the use of hybrid renewable power injected into the power grid. The HFPCS provides high efficiency, reduced size of passive components, higher levels of power density, lower harmonic distortion, higher reliability, and lower cost. Dynamic models for each part of this system are developed, simulated, and tested. The steady-state performance of the grid-connected hybrid power system with battery storage is analyzed. Various types of simulations were performed, and a number of algorithms were developed and tested to verify the effectiveness of the power conversion topologies. A modified hysteresis-control strategy for the rectifier and the battery charging/discharging system was developed and implemented. A voltage-oriented control (VOC) scheme was developed to control the energy injected into the grid. The developed HFPCS was compared experimentally with other currently available power converters. The developed HFPCS was employed inside a microgrid system infrastructure, connecting it to the power grid to verify its power transfer capabilities and grid connectivity. Grid connectivity tests verified the power transfer capabilities of the developed converter in addition to its ability to serve the load in a shared manner. In order to investigate the performance of the developed system, an experimental setup for the HF-based hybrid generation system was constructed. We designed a board containing a digital signal processor chip on which the developed control system was embedded. The board was fabricated and experimentally tested. The system's high-precision requirements were verified. Each component of the system was built and tested separately, and then the whole system was connected and tested. The simulation and experimental results confirm the effectiveness of the developed converter system for grid-connected hybrid renewable energy systems as well as for hybrid electric vehicles and other industrial applications.
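The abstract mentions a modified hysteresis-control strategy for the rectifier and battery system. As context, here is a minimal Python sketch of a conventional hysteresis band current controller driving a simple R-L load; the DC link voltage, load parameters, and band width are illustrative assumptions, not the dissertation's design, and the modified strategy itself is not reproduced here.

```python
import numpy as np

def hysteresis_current_control(i_ref, i_meas, band, state):
    """Conventional hysteresis band current control for one converter leg:
    turn the switch on when the current falls below (ref - band) and off
    when it rises above (ref + band). Returns the new switch state."""
    error = i_ref - i_meas
    if error > band:
        return 1      # current too low -> apply positive voltage
    if error < -band:
        return 0      # current too high -> apply negative voltage
    return state      # inside the band -> keep the previous state

# Illustrative simulation of an R-L load tracking a 60 Hz sinusoidal reference.
# Parameters (Vdc, R, L, band) are placeholders, not the dissertation's design values.
dt, t_end = 1e-6, 0.02
Vdc, R, L, band = 200.0, 1.0, 5e-3, 0.2
i, state = 0.0, 0
for k in range(int(t_end / dt)):
    t = k * dt
    i_ref = 10.0 * np.sin(2 * np.pi * 60 * t)
    state = hysteresis_current_control(i_ref, i, band, state)
    v = Vdc if state else -Vdc
    i += dt * (v - R * i) / L  # Euler step of L di/dt = v - R*i
print(f"Final current: {i:.2f} A, reference: {10.0 * np.sin(2 * np.pi * 60 * t_end):.2f} A")
```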

Relevance:

30.00%

Publisher:

Abstract:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers become more energy efficient with various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves the switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of the VM are bursty and demonstrate a stochastic nature, to satisfy both deterministic and stochastic demands in VM placement, we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, while DCNs are typically provisioned with full bisection bandwidth, DCN traffic demonstrates fluctuating patterns; we therefore propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem to a routing problem and employs depth-first and best-fit search to find efficient paths for flows.
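The description of M3SBP suggests two steps: converting stochastic resource demands into equivalent deterministic values and then placing VMs so that server utilization stays balanced. The Python sketch below illustrates that general idea under a Gaussian demand assumption with a greedy max-min placement heuristic; it is a rough stand-in, not the dissertation's exact algorithm, and all capacities and demands are placeholders.

```python
from statistics import NormalDist

def effective_demand(mean, std, overflow_prob=0.05):
    """Equivalent deterministic value for a stochastic (assumed Gaussian) demand:
    the level exceeded only with probability overflow_prob."""
    z = NormalDist().inv_cdf(1.0 - overflow_prob)
    return mean + z * std

def place_vm(vm, servers, capacity):
    """Greedy max-min placement: choose the feasible server that keeps the
    smallest per-dimension residual headroom as large as possible."""
    best, best_headroom = None, -1.0
    for name, load in servers.items():
        new_load = [load[d] + vm[d] for d in range(len(vm))]
        if any(new_load[d] > capacity[d] for d in range(len(vm))):
            continue  # infeasible on this server
        headroom = min(1.0 - new_load[d] / capacity[d] for d in range(len(vm)))
        if headroom > best_headroom:
            best, best_headroom = name, headroom
    if best is not None:
        servers[best] = [servers[best][d] + vm[d] for d in range(len(vm))]
    return best

# Illustrative use: two dimensions (CPU units, bandwidth); bandwidth demand is stochastic.
capacity = [100.0, 100.0]
servers = {"s1": [0.0, 0.0], "s2": [0.0, 0.0]}
vms = [[30.0, effective_demand(20.0, 5.0)],
       [50.0, effective_demand(40.0, 10.0)],
       [25.0, effective_demand(15.0, 3.0)]]
for vm in vms:
    chosen = place_vm(vm, servers, capacity)
    print(f"VM {vm} -> {chosen}, loads now {servers}")
```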

Relevance:

30.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware. Thus, the process of improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved within constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is then converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed. The trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system can reach a 2.8X performance improvement and save 31.84% of energy consumption by applying the Bus-IP design, while the co-processor design can reach 7.9X performance and save 75.85% of energy consumption.
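The first contribution hinges on profiling to find the hotspot function worth accelerating. The Python sketch below shows that workflow step with cProfile on a placeholder transform kernel; the dissertation profiles an H.264 CODEC core with attributes such as cycles per loop, so this is only an illustration of hotspot identification, not the actual toolchain.

```python
import cProfile
import pstats

# Placeholder compute kernel standing in for a CODEC hotspot (e.g., a DCT-like loop).
def transform_block(block):
    n = len(block)
    return [sum(block[j] * ((i * j) % 7 - 3) for j in range(n)) for i in range(n)]

def encode_frame(width=64, height=64):
    frame = [[(x * y) % 255 for x in range(8)] for y in range(width * height // 8)]
    return [transform_block(row) for row in frame]

# Profile the workload and rank functions by cumulative time: the top entries
# are the hotspot candidates to move into an FPGA accelerator.
profiler = cProfile.Profile()
profiler.enable()
encode_frame()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```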

Relevance:

30.00%

Publisher:

Abstract:

Physical activity is recommended to facilitate weight management. However, some individuals may be unable to successfully manage their weight due to certain psychological and cognitive factors that trigger them to compensate for calories expended in exercise. The primary purpose of this study was to evaluate the effect of moderate-intensity exercise on lunch and 12-hour post-exercise energy intake (PE-EI) in normal-weight and overweight sedentary males. Perceived hunger, mood, carbohydrate intake from beverages, and accuracy in estimating energy intake (EI) and energy expenditure (EE) were also assessed. The study consisted of two conditions, exercise (treadmill walking) and rest (sitting), with each participant completing each condition in a counterbalanced crossover design on two days. Eighty males, mean age 30 years (SD=8), were categorized into five groups according to weight (normal-/overweight), dietary restraint level (high/low), and dieting status (yes/no). Results of the repeated-measures 5x2 ANOVA indicated that the main effects of condition and group, and their interaction, were not significant for lunch or 12-hour PE-EI. Among overweight participants, dieters consumed significantly (p<0.05) fewer calories than non-dieters at lunch (M=822 vs. M=1149) and over 12 hours (M=1858 vs. M=2497). Overall, participants' estimated exercise EE was significantly (p<0.01) higher than actual exercise EE, and estimated resting EE was significantly (p<0.001) lower than actual resting EE. Participants significantly (p<0.001) underestimated EI at lunch on both experimental days. Perceived hunger was significantly (p<0.05) lower after exercise (M=49 mm, SEM=3) than after rest (M=57 mm, SEM=3). Mood scores and carbohydrate intake from beverages were not influenced by weight, dietary restraint, or dieting status. In conclusion, a single bout of moderate-intensity exercise did not influence PE-EI in sedentary males with respect to weight, dietary restraint, or dieting status, suggesting that this population may not be at risk for overeating in response to exercise. Therefore, exercise can be prescribed and used as an effective tool for weight management. Results also indicated an inability to accurately estimate EI (ad libitum lunch meal) and EE (60 minutes of moderate-intensity exercise). Inaccuracies in the estimation of calories for EI and EE could have the potential to unfavorably impact weight management.

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the impact of an acute bout of physical activity on post-exercise energy intake (PE-EI) in overweight females who were dieting with high restraint (D-HR) or non-dieting with either high restraint (ND-HR) or low restraint (ND-LR). PE-EI at lunch and over the following 12 hours was compared on an exercise (E) day and a non-exercise (NE) day. There was a significant interaction (F(2,33) = 4.12, p = 0.025) of dieting/restraint status and condition (E vs. NE day) on the 12-hour EI. The D-HR group ate 519 ± 596 kcal more on the E day than on the NE day, while the ND-HR group ate 177 ± 392 kcal less on the E day than on the NE day. The results of this study demonstrate that the impact of exercise on PE-EI is determined by both a physiological and a psychological response. Dieting status, dietary restraint, level of disinhibition, and cognitive factors may influence PE-EI and weight.

Relevance:

30.00%

Publisher:

Abstract:

The future power grid will effectively utilize renewable energy resources and distributed generation to respond to energy demand while incorporating information technology and communication infrastructure for optimum operation. This dissertation contributes to the development of real-time techniques for wide-area monitoring and secure real-time control and operation of hybrid power systems. To handle the increased level of real-time data exchange, this dissertation develops a supervisory control and data acquisition (SCADA) system that is equipped with a state estimation scheme driven by the real-time data. This system is verified on a specially developed laboratory-based test bed facility, serving as a hardware and software platform, to emulate the actual scenarios of a real hybrid power system with the highest level of similarity and capability to practical utility systems. It includes phasor measurements at hundreds of measurement points on the system. These measurements were obtained from an especially developed, laboratory-based Phasor Measurement Unit (PMU) that was utilized in addition to existing commercial PMUs; the developed PMU was used in conjunction with the interconnected system along with the commercial PMUs. The studies tested included a new technique for detecting partially islanded microgrids, in addition to several real-time techniques for synchronization and parameter identification of hybrid systems. Moreover, due to the increasing integration of renewable energy resources through DC microgrids, this dissertation presents several practical case studies for improving the interoperability of such systems. In addition, the increasing number of small, dispersed generating stations and their need to connect quickly and properly to the AC grid motivated this work to explore the challenges that arise in synchronizing generators to the grid and to introduce a Dynamic Brake system that improves the process of connecting distributed generators to the power grid. Real-time operation and control require data communication security. A research effort in this dissertation therefore developed a Trusted Sensing Base (TSB) process for data communication security. The innovative TSB approach improves the security of the power grid as a cyber-physical system. It is based on available GPS synchronization technology and provides protection against confidentiality attacks in critical power system infrastructures.
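The SCADA system described above couples real-time measurements with a state estimation scheme. A common formulation is weighted least squares estimation; the Python sketch below applies it to a linearized (DC) three-bus example, where the measurement matrix, susceptances, weights, and measured values are illustrative assumptions rather than the dissertation's test bed data.

```python
import numpy as np

def wls_state_estimate(H, z, weights):
    """Weighted least squares state estimation for a linearized measurement
    model z = H x + e: solve the normal equations (H^T W H) x = H^T W z."""
    W = np.diag(weights)
    G = H.T @ W @ H  # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Illustrative 3-bus DC model: the states are voltage angles at buses 2 and 3
# (bus 1 is the angle reference). Measurements: two line flows and the bus-1
# injection, with flow(i->j) = b_ij * (theta_i - theta_j). All values are placeholders.
H = np.array([
    [-10.0,   0.0],   # flow 1->2, b12 = 10 p.u., theta1 = 0
    [  0.0, -12.0],   # flow 1->3, b13 = 12 p.u.
    [-10.0, -12.0],   # injection at bus 1 = sum of the two flows
])
z = np.array([0.52, 0.61, 1.12])          # noisy per-unit measurements
weights = np.array([100.0, 100.0, 50.0])  # inverse measurement variances

theta_hat = wls_state_estimate(H, z, weights)
print("Estimated angles at buses 2 and 3 (rad):", theta_hat)
```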

Relevance:

30.00%

Publisher:

Abstract:

A review of the literature reveals that little research has attempted to demonstrate whether a relationship exists between the type of teacher training a science teacher has received and the perceived attitudes of his/her students. Considering that a great deal of time and energy has been devoted by university colleges, school districts, and educators towards refining the teacher education process, it would be more efficient for all parties involved if research were available that could discern whether certain pathways to achieving that education promote certain teacher behaviors in the classroom while other pathways lead towards different behaviors. Some of the teacher preparation factors examined in this study include the college major chosen by the science teacher, the highest degree earned, the number of years of teaching experience, the type of science course taught, and the grade level taught by the teacher. This study examined how these factors could influence the behaviors characteristic of the teacher and how these behaviors could be reflected in the classroom environment experienced by the students. The instrument used in the study was the Classroom Environment Scale (CES), Real Form. The measured classroom environment was broken down into three separate dimensions, with three components within each dimension in the CES. Multiple regression analyses examined how components of the teachers' education influenced the students' perceived dimensions of the classroom environment. The study occurred in Miami-Dade County, Florida, with a predominantly urban high school student population. There were 40 secondary science teachers involved, each with an average of 30 students; the total number of students sampled in the study was 1200. The teachers who participated in the study taught the entire range of secondary science courses offered at this large school district. All teachers were selected by the researcher so that the sample would be balanced between teachers who were education majors and those who were science majors. Additionally, the researcher selected teachers so that a balance occurred with regard to the different levels of college degrees earned among those involved in the study. Several research questions sought to determine whether there was a significant difference between the type of educational background obtained by secondary science teachers and the students' perception of the classroom environment. Other research questions sought to determine whether there were significant differences in the students' perceptions of the classroom environment for secondary science teachers who taught biological-content or non-biological-content sciences. An additional research question sought to evaluate whether the grade level taught would affect the students' perception of the classroom environment. Multiple regression analyses were run for each of four scores from the CES, Real Form. For Score 1, involvement of students, the results showed that teachers with the highest number of years of experience, with master's or master's-plus degrees, who were education majors, and who taught twelfth-grade students had greater amounts of students being attentive and interested in class activities, participating in discussions, and doing additional work on their own, as compared with teachers who had less experience, a bachelor's degree, were science majors, and taught a grade lower than twelfth.
For Score 2, task orientation, which emphasized completing the required activities and staying on task, the results showed that teachers with the highest and intermediate levels of experience, a science major, and the highest college degree had higher scores compared with teachers with less experience, an education major, and a bachelor's degree. For Score 3, competition, which indicated how difficult it was to achieve high grades in the class, the results showed that teachers who taught non-biology content subjects had the greatest effect on the regression; teachers with a master's degree, low levels of experience, and who taught twelfth-grade students were also factored into the regression equation. For Score 4, innovation, which indicated the extent to which the teachers used new and innovative techniques to encourage diverse and creative thinking, the regression included teachers with an education major as the first entry into the equation. Teachers with the least experience (0 to 3 years) and teachers who taught twelfth- and eleventh-grade students were also included in the regression equation.
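For readers unfamiliar with the analysis, the following Python sketch shows the form of multiple regression used here: an ordinary least squares fit of a CES score on teacher-background predictors. The coding of predictors, the synthetic data, and the variable names are illustrative assumptions, not the study's dataset.

```python
import numpy as np

# Illustrative teacher-background predictors (coded numerically) and a CES
# "involvement" score for each class; all values are synthetic placeholders.
# Columns: years of experience, degree level (1=bachelor's, 2=master's, 3=master's+),
# major (1=education, 0=science), grade taught (9-12).
X = np.array([
    [15, 3, 1, 12],
    [ 4, 1, 0, 10],
    [22, 2, 1, 12],
    [ 8, 1, 0,  9],
    [12, 2, 1, 11],
    [ 2, 1, 0, 10],
], dtype=float)
y = np.array([7.8, 5.1, 8.2, 4.9, 6.7, 4.5])  # mean class involvement score

# Ordinary least squares with an intercept: beta = argmin ||X_design beta - y||^2.
X_design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
labels = ["intercept", "experience", "degree", "education major", "grade"]
for name, b in zip(labels, beta):
    print(f"{name:>16}: {b:+.3f}")
```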
