884 results for Energy-distribution
Abstract:
This thesis focuses on Smart Grid applications in medium voltage distribution networks. For the development of new applications, it is useful to have simulation tools able to model the dynamic behavior of both the power system and the communication network. Such a co-simulation environment allows assessing the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and determining the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented that has been built by linking the Electromagnetic Transients Program Simulator (EMTP v3.0) with a Telecommunication Network Simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyze a coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in distribution networks. The thesis focuses on a control structure based on the use of phasor measurement units (PMUs). In order to limit the required reinforcements of the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study concentrates on leader-less multi-agent system (MAS) schemes that do not assign special coordinating roles to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent. Moreover, leader-less MAS are expected to be less affected by the limitations and constraints of individual communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks. Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects on the medium voltage distribution network caused by the concurrent connection of electric vehicles.
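A leader-less MAS scheme of the kind described above can be pictured as a consensus-style exchange in which each DER agent averages its local PMU voltage error with the errors received from its neighbors and then adjusts its own reactive-power set-point. The sketch below is only an illustration of that idea under assumed agent names, gains, and measurements; it is not the control law developed in the thesis.

```python
# Illustrative sketch (not the thesis control scheme): a leader-less,
# consensus-style volt/var round. Agent names, gains and measurements are hypothetical.

NEIGHBOURS = {                      # communication graph between DER agents
    "der1": ["der2"],
    "der2": ["der1", "der3"],
    "der3": ["der2"],
}
V_NOMINAL = 1.0                     # p.u.
GAIN = 0.5                          # proportional gain on the agreed voltage error
Q_LIMIT = 0.3                       # p.u. reactive-power capability of each DER

def vvc_round(voltages, q_setpoints):
    """One leader-less round: local errors, neighbour averaging, set-point update."""
    errors = {a: v - V_NOMINAL for a, v in voltages.items()}
    new_q = {}
    for agent, nbrs in NEIGHBOURS.items():
        received = [errors[n] for n in nbrs]          # messages may be delayed or lost
        agreed = (errors[agent] + sum(received)) / (1 + len(received))
        q = q_setpoints[agent] - GAIN * agreed        # inject vars when voltage is low
        new_q[agent] = max(-Q_LIMIT, min(Q_LIMIT, q)) # respect DER capability
    return new_q

# hypothetical PMU measurements (p.u.) and current set-points
print(vvc_round({"der1": 0.97, "der2": 0.96, "der3": 0.99},
                {"der1": 0.0, "der2": 0.0, "der3": 0.0}))
```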
Abstract:
Solid-state shear pulverization (SSSP) is a unique processing technique for mechanochemical modification of polymers, compatibilization of polymer blends, and exfoliation and dispersion of fillers in polymer nanocomposites. A systematic parametric study of the SSSP technique is conducted to elucidate the detailed mechanism of the process and establish the basis for a range of current and future operation scenarios. Using neat, single-component polypropylene (PP) as the model material, we varied machine type, screw design, and feed rate to achieve a range of shear and compression applied to the material, which can be quantified through the specific energy input (Ep). As a universal processing variable, Ep reflects the level of chain scission occurring in the material, which correlates well with the extent of the physical property changes of the processed PP. Additionally, we compared the operating cost estimates of SSSP and conventional twin-screw extrusion to determine the practical viability of SSSP.
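Specific energy input of this kind is commonly computed by normalizing the mechanical power delivered by the drive by the polymer feed rate. The short sketch below illustrates that bookkeeping with hypothetical numbers; it assumes Ep is defined as shaft power over mass feed rate and does not reproduce data from the study.

```python
# Illustrative calculation of specific energy input Ep (kJ/kg), assuming
# Ep = shaft power / mass feed rate. All numbers are hypothetical.

def specific_energy_input(motor_power_kw, motor_load_fraction, feed_rate_kg_per_h):
    shaft_power_kw = motor_power_kw * motor_load_fraction    # power actually delivered
    feed_rate_kg_per_s = feed_rate_kg_per_h / 3600.0
    return shaft_power_kw / feed_rate_kg_per_s               # kW / (kg/s) = kJ/kg

# e.g. a 10 kW drive running at 60 % load while PP is fed at 1 kg/h
print(specific_energy_input(10.0, 0.60, 1.0))  # -> 21600.0 kJ/kg
```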
Abstract:
A transmission electron microscope (TEM) accessory, the energy filter, enables a method for elemental microanalysis: electron energy-loss spectroscopy (EELS). In conventional TEM, unscattered, elastically scattered, and inelastically scattered electrons all contribute to image information. Energy-filtering TEM (EFTEM) allows elemental analysis at the ultrastructural level by using selected inelastically scattered electrons. EELS is an excellent method for elemental microanalysis and nanoanalysis with good sensitivity and accuracy. However, it is a complex method whose potential is seldom fully exploited, especially for biological specimens. In addition to spectral analysis (parallel-EELS), we present two imaging techniques in this chapter, namely electron spectroscopic imaging (ESI) and image-EELS, and introduce them with the elemental microanalysis of titanium. Ultrafine, 22-nm titanium dioxide particles are used in an inhalation study in rats to investigate the distribution of nanoparticles in lung tissue.
Abstract:
BACKGROUND: Translocation of nanoparticles (NP) from the pulmonary airways into other pulmonary compartments or the systemic circulation is controversially discussed in the literature. In a previous study it was shown that titanium dioxide (TiO2) NP were "distributed in four lung compartments (air-filled spaces, epithelium/endothelium, connective tissue, capillary lumen) in correlation with compartment size". It was concluded that particles can move freely between these tissue compartments. To analyze whether the distribution of TiO2 NP in the lungs is truly random or shows preferential targeting, we applied a newly developed method for comparing NP distributions. METHODS: Rat lungs exposed to an aerosol containing TiO2 NP were prepared for light and electron microscopy at 1 h and at 24 h after exposure. The number of TiO2 NP associated with each compartment was counted using energy-filtering transmission electron microscopy. Compartment size was estimated by unbiased stereology from systematically sampled light micrographs. Particle numbers were related to compartment size using a relative deposition index and chi-squared analysis. RESULTS: The nanoparticle distribution within the four compartments was not random at 1 h or at 24 h after exposure. At 1 h the connective tissue was the preferential target of the particles. At 24 h the NP were preferentially located in the capillary lumen. CONCLUSION: We conclude that TiO2 NP do not move freely between pulmonary tissue compartments, although they can pass from one compartment to another with relative ease. The residence time of NP in each tissue compartment of the respiratory system depends on the compartment and the time after exposure. It is suggested that a small fraction of TiO2 NP are rapidly transported from the airway lumen to the connective tissue and subsequently released into the systemic circulation.
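The relative deposition index used in this kind of stereological analysis is the ratio of observed to expected particle counts, with the expected count proportional to compartment size; partial chi-squared values then indicate which compartments are preferential targets. The sketch below illustrates that calculation with made-up counts and volume fractions, not the study's data.

```python
# Minimal sketch of a relative deposition index (RDI) analysis:
# RDI = observed / expected particles per compartment, expected counts being
# proportional to compartment size. Counts and volume fractions are made up.

compartments = {
    # name: (observed particle count, compartment volume fraction)
    "air-filled spaces":      (12, 0.55),
    "epithelium/endothelium": (18, 0.15),
    "connective tissue":      (40, 0.10),
    "capillary lumen":        (30, 0.20),
}

total_particles = sum(obs for obs, _ in compartments.values())
chi_squared = 0.0
for name, (observed, volume_fraction) in compartments.items():
    expected = total_particles * volume_fraction
    rdi = observed / expected
    partial = (observed - expected) ** 2 / expected
    chi_squared += partial
    print(f"{name}: RDI = {rdi:.2f}, partial chi2 = {partial:.2f}")

print(f"total chi-squared = {chi_squared:.2f}")  # compare with the critical value, df = 3
```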
Abstract:
In distribution system operations, dispatchers at the control center closely monitor system operating limits to ensure system reliability and adequacy. This reliability is partly due to the provision of remote-controllable tie and sectionalizing switches. While the stochastic nature of wind generation can impact the level of wind energy penetration in the network, an estimate of the hourly output from wind can be extremely useful. Under any operating conditions, the switching actions require human intervention and can be an extremely stressful task. To date, no approach has handled a set of switching combinations with the uncertainty of distributed wind generation as part of the decision variables. This thesis proposes a three-fold online management framework: (1) prediction of wind speed, (2) estimation of wind generation capacity, and (3) enumeration of feasible switching combinations. The proposed methodology is evaluated on a 29-node test system with 8 remote-controllable switches and two wind farms of 18 MW and 9 MW nameplate capacity, respectively, for generating the sequence of system reconfiguration states during normal and emergency conditions.
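Step (2) of such a framework, turning a wind-speed forecast into an estimate of wind generation, is often done with a piecewise turbine power curve between cut-in, rated, and cut-out speeds. The sketch below shows that idea with hypothetical turbine parameters; it is not the estimation method used in the thesis.

```python
# Hedged sketch of estimating wind-farm output from a forecast wind speed with a
# simple piecewise power curve (cut-in / rated / cut-out). Parameters are hypothetical.

def wind_farm_output_mw(wind_speed_ms, nameplate_mw,
                        cut_in=3.0, rated=12.0, cut_out=25.0):
    if wind_speed_ms < cut_in or wind_speed_ms >= cut_out:
        return 0.0
    if wind_speed_ms >= rated:
        return nameplate_mw
    # cubic rise between cut-in and rated speed
    frac = (wind_speed_ms ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)
    return nameplate_mw * frac

# hourly forecasts applied to the two farms in the test system (18 MW and 9 MW)
for v in (5.0, 9.0, 14.0):
    print(v, wind_farm_output_mw(v, 18.0), wind_farm_output_mw(v, 9.0))
```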
Abstract:
Fatal falls from great height are a frequently encountered setting in forensic pathology. By virtue of a calculable energy transmission to the body, they present an ideal model for assessing the effects of blunt trauma on a human body. As multislice computed tomography (MSCT) has proven not only to be invaluable in clinical examinations but also to be a viable tool in post-mortem imaging, especially in the field of osseous injuries, we performed an MSCT scan on 20 victims of falls from great height. We detected fractures and compared their distributions with the impact energy. Our study suggests a marked increase in extensive damage to different body regions at about 20 kJ and more. The thorax was most often affected, regardless of the amount of impacting energy and the primary impact site. Cranial fracture frequency displayed a biphasic distribution with regard to the impacting energy: fractures were more frequent at energies of less than 10 kJ and more than 20 kJ, but rarer in the intermediate group of 10-20 kJ.
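The impact energy referred to above follows directly from the height of the fall when air resistance is neglected, so the roughly 20 kJ threshold corresponds, for example, to an average-mass adult falling from about 27 m. The figures below are illustrative, not case data.

```latex
% Illustrative calculation (not case data): impact energy of a fall from height h,
% neglecting air resistance, for a body of mass m.
E = m\,g\,h
  \approx 75\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \times 27\,\mathrm{m}
  \approx 19.9\,\mathrm{kJ}
```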
Abstract:
As continued global funding and coordination are allocated toward improving access to safe sources of drinking water, alternative solutions may be necessary to expand implementation to remote communities. This report evaluates two technologies used in a small water distribution system in a mountainous region of Panama: solar-powered pumping and flow-reducing discs. The two parts of the system function independently, but were both chosen for their ability to mitigate specific issues in the community. The design program NeatWork and flow-reducing discs were evaluated because they are tools taught to Peace Corps Volunteers in Panama. Even when ample water is available, mountainous terrain affects the pressure available throughout a water distribution system. Since the static head in the system only varies with the height of water in the tank, frictional losses from pipes and fittings must be exploited to balance out the inequalities caused by the uneven terrain. Reducing the maximum allowable flow to connections through the installation of flow-reducing discs can help retain enough residual pressure in the main distribution lines to provide reliable service to all connections. NeatWork was calibrated to measured flow rates by changing the orifice coefficient (θ), resulting in a value of 0.68, which is 10-15% higher than typical values for manufactured flow-reducing discs. NeatWork was used to model various system configurations to determine whether a single-sized flow-reducing disc could provide equitable flow rates throughout an entire system. There is a strong correlation between the optimum single-sized flow-reducing disc and the average elevation change throughout a water distribution system: the larger the elevation change across the system, the smaller the recommended uniform orifice size. Renewable energy can bridge the infrastructure gap and provide basic services at a fraction of the cost and time required to install transmission lines. Methods for assessing solar-powered pumping systems as a means of rural water supply are presented and evaluated. It was determined that manufacturer-provided product specifications can be used to appropriately design a solar pumping system, but care must be taken to ensure that sufficient water can be provided to the system despite variations in solar intensity.
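The role of the orifice coefficient θ can be seen from the standard orifice relation, in which the flow through a disc scales with θ, the orifice area, and the square root of the available head. The sketch below evaluates that textbook relation with hypothetical disc size and head; it is not the calibration procedure used in the report, only an illustration of why a smaller orifice limits flow.

```python
# Sketch of the textbook orifice relation Q = theta * A * sqrt(2 * g * h), which is
# how a coefficient such as the calibrated theta = 0.68 limits flow through a
# flow-reducing disc. Disc size and head are hypothetical.
import math

def disc_flow_lps(theta, orifice_diameter_mm, head_m, g=9.81):
    area_m2 = math.pi * (orifice_diameter_mm / 1000.0) ** 2 / 4.0
    flow_m3s = theta * area_m2 * math.sqrt(2.0 * g * head_m)
    return flow_m3s * 1000.0          # litres per second

# e.g. a 4 mm disc under 30 m of head with the calibrated coefficient
print(disc_flow_lps(0.68, 4.0, 30.0))   # ~0.21 L/s
```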
Abstract:
In recent years, advanced metering infrastructure (AMI) has been a main research focus because the traditional power grid has been unable to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across distribution secondary networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be reduced by using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that uses online exchange of power consumption information with a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extracting energy data from images replaces the manual reading process. In a case study, the digital imaging approach is compared with averages determined by visual readings over a one-month period.
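Once the needle angle of each dial has been recovered from an image, converting the set of angles into a register reading is simple arithmetic. The sketch below shows only that last step, for a register whose dials alternate rotation direction as is typical of electromechanical meters; the dial count, directions, and angles are hypothetical and this is not the approach implemented in the thesis.

```python
# Hypothetical sketch: turning per-dial needle angles (already extracted from a
# meter image) into a register reading. The five-dial layout, alternating rotation
# directions, and angles below are made up for illustration.

def dial_digit(angle_deg, clockwise=True):
    """Map a needle angle (0 deg = pointing at '0') to the digit it has passed."""
    if not clockwise:
        angle_deg = (360.0 - angle_deg) % 360.0
    return int(angle_deg // 36.0)            # 36 degrees per digit

def register_reading(angles_deg):
    digits = []
    for i, angle in enumerate(angles_deg):    # leftmost dial first
        clockwise = (i % 2 == 0)              # dials alternate direction
        digits.append(dial_digit(angle, clockwise))
    return int("".join(str(d) for d in digits))

# e.g. five dials whose needles sit at these angles
print(register_reading([80.0, 300.0, 45.0, 190.0, 10.0]))   # -> 21140
```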
Abstract:
To assess the effect of age and disease on mineral distribution at the distal third of the tibia, bone mineral content (BMC) and bone mineral density (BMD) were measured at the lumbar spine (spine), femoral neck (neck), and the diaphysis (Dia) and distal epiphysis (Epi) of the tibia in 89 healthy control women of different age groups (20-29, n = 12; 30-39, n = 11; 40-44, n = 12; 45-49, n = 12; 50-54, n = 12; 55-59, n = 10; 60-69, n = 11; 70-79, n = 9), in 25 women with untreated vertebral osteoporosis (VOP), and in 19 women with primary hyperparathyroidism (PHPT) using dual-energy x-ray absorptiometry (DXA; Hologic QDR 1000 and standard spine software). A soft tissue simulator was used to compensate for heterogeneity of soft tissue thickness around the leg. The tibia was scanned over a length of 130 mm from the ankle joint, the fibula being excluded from analysis. For BMC and BMD, 10 sections of 13 mm each were analyzed separately and then pooled to define the epiphysis (Epi, 13-52 mm) and diaphysis (Dia, 91-130 mm) regions. Precision after repositioning was 1.9 and 2.1% for Epi and Dia, respectively. In the control group, at no site was there a significant difference between age groups 20-29 and 30-39, which were thus pooled to define the peak bone mass (PBM). (ABSTRACT TRUNCATED AT 250 WORDS)
Abstract:
The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens, and governments for applications such as healthcare, environmental monitoring, and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing, and memory. Additionally, low-power radios are very sensitive to noise, interference, and multipath distortions. In this context, this article proposes a routing protocol based on Routing by Energy and Link quality (REL) for IoT applications. To increase reliability and energy efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy, and hop count. Furthermore, REL proposes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases network lifetime and service availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate compared with well-known protocols.
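A route metric of the kind REL describes, combining an end-to-end link-quality estimate, residual energy, and hop count, can be sketched as a weighted ranking of candidate routes. The weights, score form, and route data below are illustrative assumptions, not REL's actual metric.

```python
# Illustrative sketch (not REL's actual metric): rank candidate routes by a weighted
# combination of end-to-end link quality, the weakest residual energy along the
# route, and hop count. Weights and route data are assumptions.

def route_score(link_quality, residual_energies, hop_count,
                w_quality=0.5, w_energy=0.4, w_hops=0.1, max_hops=10):
    bottleneck_energy = min(residual_energies)           # avoid draining weak nodes
    hop_penalty = 1.0 - min(hop_count, max_hops) / max_hops
    return (w_quality * link_quality
            + w_energy * bottleneck_energy
            + w_hops * hop_penalty)

candidates = {
    # route id: (end-to-end link quality [0..1], residual energy per node [0..1], hops)
    "A": (0.9, [0.8, 0.3, 0.9], 3),
    "B": (0.7, [0.7, 0.7, 0.6, 0.7], 4),
}
best = max(candidates, key=lambda r: route_score(*candidates[r]))
print(best)   # route B wins despite more hops: its weakest node is less depleted
```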
Abstract:
Various applications for event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such heterogeneous wireless sensor networks during their lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes have to be applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such traffic patterns. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close this gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient, and energy-efficient dissemination of data from one sender node to multiple receivers. In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC supports end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple and easy to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. SNOMC integrates three different caching strategies for efficient handling of necessary retransmissions, namely caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks.
A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and offloads functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, because the sensor nodes are organized into small sub-networks, each managed by a mesh node. Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration, and code updating of sensor nodes. Integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to combine a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports reliable, time-efficient, and energy-efficient operation of a heterogeneous wireless sensor network.
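The NACK-based reliability mechanism described above can be pictured as a receiver that tracks which fragments of a bulk object have arrived and requests only the missing ones once the sender signals the end of the object. The sketch below is a simplified illustration under assumed message formats; it is not the SNOMC implementation.

```python
# Simplified illustration of NACK-based reliability for bulk dissemination: the
# receiver records arrived fragments and, at end-of-object, NACKs only the missing
# fragment ids. Message formats are assumed, not taken from SNOMC.

class FragmentReceiver:
    def __init__(self, total_fragments):
        self.total = total_fragments
        self.received = set()

    def on_fragment(self, fragment_id, payload):
        self.received.add(fragment_id)

    def missing(self):
        return sorted(set(range(self.total)) - self.received)

    def on_end_of_object(self, send):
        gaps = self.missing()
        if gaps:
            send({"type": "NACK", "fragments": gaps})   # request retransmissions
        else:
            send({"type": "ACK"})                       # whole object received

# usage: 8 fragments, fragment 5 lost on the way
rx = FragmentReceiver(8)
for fid in [0, 1, 2, 3, 4, 6, 7]:
    rx.on_fragment(fid, b"...")
rx.on_end_of_object(print)   # -> {'type': 'NACK', 'fragments': [5]}
```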
Abstract:
Neutral hydrogen atoms that travel into the heliosphere from the local interstellar medium (LISM) experience strong effects due to charge exchange and radiation pressure from resonant absorption and re-emission of Lyα. The radiation pressure roughly compensates for solar gravity. As a result, interstellar hydrogen atoms move along trajectories that are quite different from those of heavier interstellar species such as helium and oxygen, which experience relatively weak radiation pressure. Charge exchange leads to the loss of primary neutrals from the LISM and the addition of new secondary neutrals from the heliosheath. IBEX observations show clear effects of radiation pressure in a large longitudinal shift of the peak of interstellar hydrogen compared with that of interstellar helium. Here, we compare results from the Lee et al. interstellar neutral model with IBEX-Lo hydrogen observations to describe the distribution of hydrogen near 1 AU and provide new estimates of the solar radiation pressure. We find, over the period analyzed from 2009 to 2011, that the radiation pressure divided by the gravitational force (μ) increased slightly from μ = 0.94 ± 0.04 in 2009 to μ = 1.01 ± 0.05 in 2011. We have also derived the speed, temperature, source longitude, and latitude of the neutral H atoms and find that these parameters are roughly consistent with those of interstellar He, particularly when considering the filtration effects that act on H in the outer heliosheath. Thus, our analysis shows that over the period from 2009 to 2011 we observe signatures of neutral H consistent with the primary distribution of atoms from the LISM and a radiation pressure that increases during the early rise of solar activity.
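Because the Lyα radiation pressure follows the same inverse-square law as gravity, its strength is conveniently expressed through the ratio μ, and an interstellar hydrogen atom effectively moves under a reduced solar gravity. The relation below is the standard form of that statement, with G the gravitational constant, M_sun the solar mass, m the atom's mass, and r its heliocentric distance.

```latex
% Effective radial force on an interstellar H atom when radiation pressure is
% expressed as a fraction \mu of solar gravity (standard parameterization):
F_\mathrm{eff}(r) = -(1-\mu)\,\frac{G M_\odot m}{r^{2}},
\qquad \mu \simeq 1 \;\Rightarrow\; F_\mathrm{eff} \simeq 0 .
```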
Abstract:
Proton radiation therapy is gaining popularity because of the unique characteristics of its dose distribution, e.g., the high dose gradient at the distal end of the percentage-depth-dose curve (known as the Bragg peak). The high dose gradient offers the possibility of delivering high dose to the target while still sparing critical organs distal to the target. However, the high dose gradient is a double-edged sword: a small shift of the highly conformal high-dose area can cause the target to be substantially under-dosed or the critical organs to be substantially over-dosed. Because of that, large margins are required in treatment planning to ensure adequate dose coverage of the target, which prevents us from realizing the full potential of proton beams. Therefore, it is critical to reduce uncertainties in proton radiation therapy. One major uncertainty in a proton treatment is the range uncertainty related to the estimation of the proton stopping power ratio (SPR) distribution inside a patient. The SPR distribution inside a patient is required to account for tissue heterogeneities when calculating the dose distribution inside the patient. In current clinical practice, the SPR distribution inside a patient is estimated from the patient's treatment planning computed tomography (CT) images based on the CT number-to-SPR calibration curve. The SPR derived from a single CT number carries large uncertainties in the presence of human tissue composition variations, which is the major drawback of the current SPR estimation method. We propose to solve this problem by using dual energy CT (DECT) and hypothesize that the range uncertainty can be reduced by a factor of two from the currently used value of 3.5%. A MATLAB program was developed to calculate the electron density ratio (EDR) and effective atomic number (EAN) from two CT measurements of the same object. An empirical relationship was discovered between the mean excitation energies and EANs of human body tissues. With the MATLAB program and the empirical relationship, a DECT-based method was successfully developed to derive SPRs for human body tissues (the DECT method). The DECT method is more robust against uncertainties in human tissue compositions than the current single-CT-based method, because it incorporates both density and elemental composition information in the SPR estimation. Furthermore, we studied practical limitations of the DECT method. We found that the accuracy of the DECT method using a conventional kV-kV x-ray pair is susceptible to CT number variations, which compromises the theoretical advantage of the DECT method. Our solution to this problem is to use a different x-ray pair for the DECT. The accuracy of the DECT method using different combinations of x-ray energies, i.e., the kV-kV, kV-MV, and MV-MV pairs, was compared using the measured imaging uncertainties for each case. The kV-MV DECT was found to be the most robust against CT number variations. In addition, we studied how uncertainties propagate through the DECT calculation, and found general principles for selecting x-ray pairs for the DECT method to minimize its sensitivity to CT number variations. The uncertainties in SPRs estimated using the kV-MV DECT were analyzed further and compared to those of the stoichiometric method.
The uncertainties in SPR estimation can be divided into five categories according to their origins: the inherent uncertainty, the DECT modeling uncertainty, the CT imaging uncertainty, the uncertainty in the mean excitation energy, and the SPR variation with proton energy. Additionally, human body tissues were divided into three tissue groups: low-density (lung) tissues, soft tissues, and bone tissues. The uncertainties were estimated separately for each group because they differ under each condition. An estimate of the composite range uncertainty (2σ) was determined for three tumor sites (prostate, lung, and head-and-neck) by combining the uncertainty estimates of all three tissue groups, weighted by their proportions along a typical beam path for each treatment site. In conclusion, the DECT method holds theoretical advantages over the current single-CT-based method in estimating SPRs for human tissues. Using existing imaging techniques, the kV-MV DECT approach was capable of reducing the range uncertainty from the currently used value of 3.5% to 1.9%-2.3%, but it falls short of our original goal of reducing the range uncertainty by a factor of two. The dominant source of uncertainty in the kV-MV DECT was the uncertainty in CT imaging, especially in MV CT imaging. Further reduction of the beam hardening effect, the impact of scatter, out-of-field objects, etc., would reduce the Hounsfield unit variations in CT imaging. The kV-MV DECT thus still has the potential to reduce the range uncertainty further.
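The way an electron density ratio and a mean excitation energy enter a DECT-based SPR estimate can be seen from the Bethe formula taken relative to water. The sketch below evaluates that textbook relation for an assumed tissue; it is not the MATLAB tool or the empirical EAN-to-I relationship developed in the work.

```python
# Sketch of the textbook Bethe-formula relation used to turn an electron density
# ratio (EDR) and a mean excitation energy I into a proton stopping power ratio
# relative to water. Tissue values are assumed for illustration only.
import math

M_E_C2_EV = 0.511e6          # electron rest energy (eV)
I_WATER_EV = 75.0            # mean excitation energy of water (eV), a common choice

def stopping_power_ratio(edr, i_tissue_ev, proton_energy_mev=175.0):
    proton_rest_mev = 938.272
    gamma = 1.0 + proton_energy_mev / proton_rest_mev
    beta2 = 1.0 - 1.0 / gamma ** 2        # relativistic beta^2 of the proton
    def bethe_log(i_ev):
        return math.log(2.0 * M_E_C2_EV * beta2 / ((1.0 - beta2) * i_ev)) - beta2
    return edr * bethe_log(i_tissue_ev) / bethe_log(I_WATER_EV)

# e.g. an assumed soft tissue with EDR = 1.05 and I = 72 eV at 175 MeV
print(stopping_power_ratio(1.05, 72.0))   # close to 1.05
```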