873 results for Energy Efficient Algorithms
Abstract:
In this article we propose an exact and efficient simulation algorithm for the generalized von Mises circular distribution of order two. It is an acceptance-rejection algorithm with a piecewise linear envelope based on the local extrema and the inflexion points of the generalized von Mises density of order two. We show that these points can be obtained from the roots of polynomials of degrees four and eight, which can be easily computed by the methods of Ferrari and Weierstrass. A comparative study with the von Neumann acceptance-rejection algorithm, with the ratio-of-uniforms method, and with a Markov chain Monte Carlo algorithm shows that this new method is generally the most efficient.
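For illustration, here is a minimal acceptance-rejection sketch in Python. It assumes the generalized von Mises density of order two is proportional to exp(κ1·cos(θ − μ1) + κ2·cos 2(θ − μ2)) with κ1, κ2 ≥ 0, and it uses a plain constant envelope rather than the tighter piecewise linear envelope constructed in the paper; all function and parameter names are assumptions for this sketch.

    import numpy as np

    def gvm2_unnormalized(theta, mu1, mu2, kappa1, kappa2):
        # Unnormalized generalized von Mises density of order two (assumed form).
        return np.exp(kappa1 * np.cos(theta - mu1) + kappa2 * np.cos(2.0 * (theta - mu2)))

    def sample_gvm2(n, mu1, mu2, kappa1, kappa2, rng=None):
        # Plain von Neumann acceptance-rejection with the constant envelope
        # exp(kappa1 + kappa2), valid for kappa1, kappa2 >= 0.
        rng = np.random.default_rng() if rng is None else rng
        bound = np.exp(kappa1 + kappa2)
        out = []
        while len(out) < n:
            theta = rng.uniform(0.0, 2.0 * np.pi)   # uniform proposal on the circle
            u = rng.uniform(0.0, bound)             # vertical coordinate under the envelope
            if u <= gvm2_unnormalized(theta, mu1, mu2, kappa1, kappa2):
                out.append(theta)
        return np.array(out)

A tighter envelope, such as the piecewise linear one described in the abstract, raises the acceptance rate and hence the efficiency of the sampler.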
Abstract:
There is still discussion regarding whether liquid biofuels can contribute to rural energy security in the global South. We argue that transitioning to a village energy supply based on jatropha hedges around smallholder plots is possible, but requires collective effort for the acquisition and maintenance of processing equipment and for the running of village generators. The use of jatropha oil for lighting in rural households is affordable and technically possible, but not ideal if more efficient electric solutions exist. Cooking with jatropha oil or press cake is also possible, but quantities produced in hedges can only substitute a small part of the firewood used by rural households.
Abstract:
As the complexity of active medical implants increases, the task of embedding a life-long power supply at the time of implantation becomes more challenging. A periodic renewal of the energy source is often required. Human energy harvesting is, therefore, seen as a possible remedy. In this paper, we present a novel idea to harvest energy from the pressure-driven deformation of an artery by the principle of magneto-hydrodynamics. The generator relies on a highly electrically conductive fluid accelerated perpendicularly to a magnetic field by means of an efficient lever arm mechanism. An artery with 10 mm inner diameter is chosen as a potential implantation site and its ability to drive the generator is established. Three analytical models are proposed to investigate the relevant design parameters and to determine the existence of an optimal configuration. The predicted output power reaches 65 μW according to the first two models and 135 μW according to the third model. It is found that the generator, designed as a circular structure encompassing the artery, should not exceed a total volume of 3 cm³.
Abstract:
Hot Jupiters, due to the proximity to their parent stars, are subjected to a strong irradiating flux that governs their radiative and dynamical properties. We compute a suite of three-dimensional circulation models with dual-band radiative transfer, exploring a relevant range of irradiation temperatures, both with and without temperature inversions. We find that, for irradiation temperatures T_irr ≲ 2000 K, heat redistribution is very efficient, producing comparable dayside and nightside fluxes. For T_irr ≈ 2200–2400 K, the redistribution starts to break down, resulting in a high day-night flux contrast. Our simulations indicate that the efficiency of redistribution is primarily governed by the ratio of advective to radiative timescales. Models with temperature inversions display a higher day-night contrast due to the deposition of starlight at higher altitudes, but we find this opacity-driven effect to be secondary compared to the effects of irradiation. The hotspot offset from the substellar point is large when insolation is weak and redistribution is efficient, and decreases as redistribution breaks down. The atmospheric flow can potentially be subjected to the Kelvin-Helmholtz instability (as indicated by the Richardson number) only in the uppermost layers, with a depth that penetrates down to pressures of a few millibars at most. Shocks penetrate deeper, down to several bars in the hottest model. Ohmic dissipation generally occurs down to deeper levels than shock dissipation (to tens of bars), but the penetration depth varies with the atmospheric opacity. The total dissipated Ohmic power increases steeply with the strength of the irradiating flux and the dissipation depth recedes into the atmosphere, favoring radius inflation in the most irradiated objects. A survey of the existing data, as well as the inferences made from them, reveals that our results are broadly consistent with the observational trends.
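For reference, standard textbook forms of the diagnostics mentioned above (not formulas quoted from the paper itself); the notation is assumed, with R_p the planetary radius, U a characteristic wind speed, c_P the specific heat, P the pressure, g the gravity, σ the Stefan-Boltzmann constant, T the temperature, and N the Brunt-Väisälä frequency:

    \tau_{\mathrm{adv}} \sim \frac{R_p}{U}, \qquad
    \tau_{\mathrm{rad}} \sim \frac{c_P\, P}{4\, g\, \sigma\, T^3}, \qquad
    \mathrm{Ri} = \frac{N^2}{(\mathrm{d}U/\mathrm{d}z)^2} < \frac{1}{4}
    \quad \text{(necessary condition for shear instability)}.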
Abstract:
The efficient collection of solar energy relies on the design and construction of well-organized light-harvesting systems. Herein we report that supramolecular phenanthrene polymers doped with pyrene are effective collectors of light energy. The linear polymers are formed through the assembly of short amphiphilic oligomers in water. Absorption of light by phenanthrene residues is followed by electronic energy transfer along the polymer over long distances (>100 nm) to the accepting pyrene molecules. The high efficiency of the energy transfer, which is documented by large fluorescence quantum yields, suggests a quantum coherent process.
Abstract:
The session aims at analyzing efforts to up-scale cleaner and more efficient energy solutions for poor people in developing countries by addressing the following questions: Which factors along the whole value chain and in the institutional, social, and environmental space enable the up-scaling of improved pro-poor technologies? Are there differences between energy carriers or across different contexts? What are the most promising entry points for up-scaling?
Abstract:
Until recently, measurements of energy expenditure (EE; herein defined as heat production) in respiration chambers did not account for the extra energy requirements of grazing dairy cows on pasture. As energy is first limiting in most pasture-based milk production systems, its efficient use is important. Therefore, the aim of the present study was to compare EE, which can be affected by differences in body weight (BW), body composition, grazing behavior, physical activity, and milk production level, in 2 Holstein cow strains. Twelve Swiss Holstein-Friesian (HCH; 616 kg of BW) and 12 New Zealand Holstein-Friesian (HNZ; 570 kg of BW) cows in the third stage of lactation were paired according to their stage of lactation and kept in a rotational, full-time grazing system without concentrate supplementation. After adaptation, the daily milk yield, grass intake using the alkane double-indicator technique, nutrient digestibility, physical activity, and grazing behavior recorded by an automatic jaw movement recorder were investigated over 7 d. Using the ¹³C bicarbonate dilution technique in combination with an automatic blood sampling system, EE based on measured carbon dioxide production was determined in 1 cow pair per day between 0800 and 1400 h. The HCH were heavier and had a lower body condition score compared with the HNZ, but the difference in BW was smaller compared with former studies. Milk production, grass intake, and nutrient digestibility did not differ between the 2 cow strains, but the HCH grazed for a longer time during the 6-h measurement period and performed more grazing mastications compared with the HNZ. No difference was found between the 2 cow strains with regard to EE (291 ± 15.6 kJ) per kilogram of metabolic BW, mainly due to a high between-animal variation in EE. As efficiency and energy use are important in sustainable, pasture-based, organic milk production systems, the determining factors for EE, such as methodology, genetics, physical activity, grazing behavior, and pasture quality, should be investigated and quantified in more detail in future studies.
Abstract:
OBJECTIVE The purpose of this study was to investigate the feasibility of microdose CT, using a dose comparable to that of conventional chest radiographs in two planes including dual-energy subtraction, for lung nodule assessment. MATERIALS AND METHODS We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. RESULTS A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1% ± 2.2% versus 85.6% ± 5.6% (p = 0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7% ± 8.1% (with bone suppression, 46.1% ± 8%; p = 0.94); for microdose CT, nodule sensitivity was 83.6% ± 9% without MIP (with additional MIP, 92.5% ± 6%; p < 10⁻³). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p < 10⁻⁶). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. CONCLUSION Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection.
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, solution strategies that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
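To make the structure of such a two-term energy concrete, a plausible instance is written below; the exact functional and notation used in the paper may differ, and here f denotes the blurry image, u the sharp image, k the blur kernel, δ > 0 the lower bound of the logarithm, and λ a prior weight:

    E(u, k) = \tfrac{1}{2}\,\lVert k \ast u - f \rVert_2^2
              + \lambda \sum_{i} \log\!\bigl(\delta + \lvert \nabla u_i \rvert\bigr).

Because the logarithm is concave, one standard majorization-minimization strategy replaces it at each iteration by a linear upper bound in |∇u_i|, so that each subproblem becomes a convex, reweighted total-variation problem of the kind that iterative solvers handle well.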
Abstract:
This paper presents the electron and photon energy calibration achieved with the ATLAS detector using about 25 fb⁻¹ of LHC proton–proton collision data taken at centre-of-mass energies of √s = 7 and 8 TeV. The reconstruction of electron and photon energies is optimised using multivariate algorithms. The response of the calorimeter layers is equalised in data and simulation, and the longitudinal profile of the electromagnetic showers is exploited to estimate the passive material in front of the calorimeter and reoptimise the detector simulation. After all corrections, the Z resonance is used to set the absolute energy scale. For electrons from Z decays, the achieved calibration is typically accurate to 0.05% in most of the detector acceptance, rising to 0.2% in regions with large amounts of passive material. The remaining inaccuracy is less than 0.2–1% for electrons with a transverse energy of 10 GeV, and is on average 0.3% for photons. The detector resolution is determined with a relative inaccuracy of less than 10% for electrons and photons up to 60 GeV transverse energy, rising to 40% for transverse energies above 500 GeV.
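A common way to parametrise such in-situ energy-scale corrections from Z → ee events is shown below as an illustration rather than a formula taken from the paper; α_i denotes the residual energy-scale factor in detector region i and m_ij the dielectron invariant mass:

    E_i^{\mathrm{corr}} = \frac{E_i^{\mathrm{data}}}{1 + \alpha_i}, \qquad
    m_{ij}^{\mathrm{data}} \simeq m_{ij}^{\mathrm{MC}}\Bigl(1 + \frac{\alpha_i + \alpha_j}{2}\Bigr).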
Abstract:
Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite and infinite impulse response, moving mean, and moving median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed with autoregressive-modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low SBR situations. Using a signal segment of 64 megasamples that is filtered as an entire unit, wavelet filtering of a 7-day high-resolution ECG is computed within less than 3 seconds. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, and the computational burden can thereby be strongly reduced.
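As a simple illustration of one of the compared methods, the following Python sketch removes baseline wander with a moving median on the CPU. The window length and the use of scipy are assumptions for this sketch, and the GPU parallelisation and the optimized parameters reported in the paper are not reproduced here.

    import numpy as np
    from scipy.signal import medfilt

    def remove_baseline_moving_median(ecg, fs, window_s=0.6):
        # ecg: 1D NumPy array of samples, fs: sampling rate in Hz.
        # Estimate the baseline with a moving median and subtract it;
        # window_s (seconds) is an assumed value, not the paper's optimum.
        kernel = int(window_s * fs)
        if kernel % 2 == 0:
            kernel += 1                      # medfilt requires an odd kernel length
        baseline = medfilt(ecg, kernel_size=kernel)
        return ecg - baseline, baseline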
Abstract:
In the fermion loop formulation the contributions to the partition function naturally separate into topological equivalence classes with a definite sign. This separation forms the basis for an efficient fermion simulation algorithm using a fluctuating open fermion string. It guarantees sufficient tunnelling between the topological sectors, and hence provides a solution to the fermion sign problem affecting systems with broken supersymmetry. Moreover, the algorithm shows no critical slowing down even in the massless limit and can hence handle the massless Goldstino mode emerging in the supersymmetry broken phase. In this paper – the third in a series of three – we present the details of the simulation algorithm and demonstrate its efficiency by means of a few examples.
Abstract:
The optical and luminescence properties of CaI₂ and NaCl doped with divalent thulium are reported for solar energy applications. These halides strongly absorb solar light from the UV up to 900 nm due to the intense Tm²⁺ 4f¹³ → 4f¹²5d¹ electronic transitions. Absorption is followed by emission of 1140 nm light from the ²F₅/₂ → ²F₇/₂ transition of the 4f¹³ configuration; this emission can be efficiently converted to electric power by thin-film CuInSe₂ (CIS) solar cells. Because of a negligible spectral overlap between the absorption and emission spectra, a luminescent solar concentrator (LSC) based on these black luminescent materials would not suffer from self-absorption losses. The Tm²⁺-doped halides may therefore lead to efficient semi-transparent power-generating windows that absorb solar light over the whole visible spectrum. It will be shown that the power efficiency of the Tm²⁺-based LSCs can be up to four times higher compared to LSCs based on organic dyes or quantum dots.
Abstract:
Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), which is a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks. In a first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement to initiate information-centric communication is the knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios for different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides the content. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information of disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible since, compared to unicast communication, broadcast transmissions result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in overheard Data. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions. If requesters and content sources are not within one-hop distance of each other, requests need to be forwarded via multi-hop routing. Therefore, in a second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, the performance may degrade in mobile networks if preferred forwarders regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU for multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path even in case of multiple concurrent requesters.
To perform quick retransmission in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, which is a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, can retrieve content and return it to requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content if free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
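The popularity-based persistent caching idea can be sketched as follows; this is a minimal Python illustration under assumed semantics, and the thesis' actual repository design, naming scheme, and publisher-signature verification are not reproduced here.

    class PersistentCache:
        # Keep popular content; evict the least popular entry when space runs out.
        def __init__(self, capacity):
            self.capacity = capacity      # maximum number of content objects
            self.store = {}               # content name -> content object
            self.hits = {}                # content name -> request count (popularity)

        def insert(self, name, content):
            if name not in self.store and len(self.store) >= self.capacity:
                # Evict the least popular entry to make room for new content.
                victim = min(self.store, key=lambda n: self.hits.get(n, 0))
                del self.store[victim]
                self.hits.pop(victim, None)
            self.store[name] = content
            self.hits.setdefault(name, 0)

        def get(self, name):
            if name in self.store:
                self.hits[name] += 1      # every satisfied request counts toward popularity
                return self.store[name]
            return None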
Abstract:
The effectiveness of the Anisotropic Analytical Algorithm (AAA) implemented in the Eclipse treatment planning system (TPS) was evaluated with the Radiological Physics Center anthropomorphic lung phantom using both flattened and flattening-filter-free high-energy beams. Radiation treatment plans were developed following the Radiation Therapy Oncology Group and the Radiological Physics Center guidelines for lung treatment using Stereotactic Body Radiation Therapy (SBRT). The tumor was covered such that at least 95% of the Planning Target Volume (PTV) received 100% of the prescribed dose while ensuring that normal tissue constraints were followed as well. Calculated doses were exported from the Eclipse TPS and compared with the experimental data as measured using thermoluminescence detectors (TLD) and radiochromic films that were placed inside the phantom. The results demonstrate that the AAA superposition-convolution algorithm is able to calculate SBRT treatment plans with all clinically used photon beams in the range from 6 MV to 18 MV. The measured dose distribution showed good agreement with the calculated distribution using clinically acceptable criteria of ±5% dose or 3 mm distance to agreement. These results show that in a heterogeneous environment a 3D pencil-beam superposition-convolution algorithm with Monte Carlo pre-calculated scatter kernels, such as AAA, is able to reliably calculate dose, accounting for increased lateral scattering due to the loss of electronic equilibrium in low-density medium. The data for high-energy plans (15 MV and 18 MV) showed very good tumor coverage, in contrast to findings by other investigators for less sophisticated dose calculation algorithms, which demonstrated lower than expected tumor doses and generally worse tumor coverage for high-energy plans compared with 6 MV plans. This demonstrates that the modern superposition-convolution AAA algorithm is a significant improvement over previous algorithms and is able to calculate doses accurately for SBRT treatment plans in the highly heterogeneous environment of the thorax for both lower (≤12 MV) and higher (greater than 12 MV) beam energies.
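To make the acceptance criterion concrete, here is a simplified 1D Python sketch of a combined dose-difference / distance-to-agreement check. Real evaluations operate on 2D or 3D dose grids, and the function and parameter names here are assumptions for illustration only.

    import numpy as np

    def dose_or_dta_pass(measured, calculated, positions_mm, dose_tol=0.05, dta_mm=3.0):
        # measured, calculated, positions_mm: 1D arrays of doses and positions (mm).
        # A point passes if the calculated dose agrees within +/- dose_tol at the same
        # position, or if the calculated profile reaches the measured dose somewhere
        # within dta_mm of that position (a crude distance-to-agreement test).
        measured = np.asarray(measured, dtype=float)
        calculated = np.asarray(calculated, dtype=float)
        positions_mm = np.asarray(positions_mm, dtype=float)
        passed = np.zeros(len(measured), dtype=bool)
        for i, (d_meas, x) in enumerate(zip(measured, positions_mm)):
            if abs(calculated[i] - d_meas) <= dose_tol * abs(d_meas):
                passed[i] = True
                continue
            window = np.abs(positions_mm - x) <= dta_mm
            if window.any() and calculated[window].min() <= d_meas <= calculated[window].max():
                passed[i] = True
        return passed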