992 results for Algorithm transfer
Abstract:
Across Europe, elevated phosphorus (P) concentrations in lowland rivers have made them particularly susceptible to eutrophication. This is compounded in the southern and central UK by increasing pressures on water resources, which may be further enhanced by the potential effects of climate change. The EU Water Framework Directive requires an integrated approach to water resources management at the catchment scale and highlights the need for modelling tools that can distinguish relative contributions from multiple nutrient sources and are consistent with the information content of the available data. Two such models are introduced and evaluated within a stochastic framework using daily flow and total phosphorus concentrations recorded in a clay catchment typical of many areas of the lowland UK. Both models disaggregate empirical annual load estimates, derived from land use data, as a function of surface/near-surface runoff, generated using a simple conceptual rainfall-runoff model. Estimates of the daily load from agricultural land, together with those from baseflow and point sources, feed into an in-stream routing algorithm. The first model assumes constant concentrations in runoff via surface/near-surface pathways and incorporates an additional P store in the river-bed sediments, depleted above a critical discharge, to explicitly simulate resuspension. The second model, which is simpler, simulates P concentrations as a function of surface/near-surface runoff, thus emphasising the influence of non-point source loads during flow peaks and the mixing of baseflow and point sources during low flows. The temporal consistency of the parameter estimates, and thus the suitability of each approach, is assessed dynamically following a new approach based on Monte Carlo analysis. (c) 2004 Elsevier B.V. All rights reserved.
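A minimal Python sketch of the second (simpler) model's mixing idea, assuming hypothetical parameter names and values (c_baseflow, k_diffuse) that are not taken from the paper: stream total-P concentration is a flow-weighted mix of baseflow and point-source contributions plus a diffuse component that scales with the surface/near-surface runoff fraction.

```python
def total_p_concentration(q_surface, q_baseflow, load_point,
                          c_baseflow=0.05, k_diffuse=0.8):
    """Daily total-P concentration (mg/l) from a simple source-mixing model.

    q_surface  : surface/near-surface runoff (m3/s)
    q_baseflow : baseflow discharge (m3/s)
    load_point : point-source P load (g/s)
    c_baseflow, k_diffuse : assumed illustrative constants, not calibrated values
    """
    q_total = q_surface + q_baseflow
    # Diffuse (non-point) concentration rises with the runoff fraction,
    # so agricultural loads dominate during flow peaks.
    c_diffuse = k_diffuse * (q_surface / q_total)
    # Baseflow carries a constant concentration; point sources are diluted by total flow.
    load_base = c_baseflow * q_baseflow          # mg/l * m3/s = g/s
    return (load_base + load_point) / q_total + c_diffuse

print(total_p_concentration(q_surface=4.0, q_baseflow=1.0, load_point=0.2))  # storm day
print(total_p_concentration(q_surface=0.1, q_baseflow=1.0, load_point=0.2))  # dry-weather day
```

On the storm day the diffuse term dominates, while in dry weather the concentration is controlled by the baseflow/point-source mix, which is the behaviour the abstract attributes to the second model.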
Abstract:
This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data. It effectively widens the active–passive retrieved cross-section (RXS) of cloud properties, thereby enabling computation of radiative fluxes and radiances that can be compared with measured values in an attempt to perform radiative closure experiments that aim to assess the RXS. For this introductory study, A-train data were used to verify the scene-construction algorithm and only 1D radiative transfer calculations were performed. The construction algorithm fills off-RXS recipient pixels by computing sums of squared differences (a cost function F) between their spectral radiances and those of potential donor pixels/columns on the RXS. Of the RXS pixels with F lower than a certain value, the one with the smallest Euclidean distance to the recipient pixel is designated as the donor, and its retrieved cloud properties and other attributes such as 1D radiative heating rates are consigned to the recipient. It is shown that both the RXS itself and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery can be reconstructed extremely well using just visible and thermal infrared channels. Suitable donors usually lie within 10 km of the recipient. RXSs and their associated radiative heating profiles are reconstructed best for extensive planar clouds and less reliably for broken convective clouds. Domain-average 1D broadband radiative fluxes at the top of the atmosphere (TOA) for (21 km)² domains constructed from MODIS, CloudSat and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data agree well with coincident values derived from Clouds and the Earth's Radiant Energy System (CERES) radiances: differences between modelled and measured reflected shortwave fluxes are within ±10 W m−2 for ∼35% of the several hundred domains constructed for eight orbits. Correspondingly, for outgoing longwave radiation ∼65% are within ±10 W m−2.
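A minimal Python sketch of the donor-selection step described above, with hypothetical array shapes and an arbitrary F threshold (the operational algorithm and its threshold are not reproduced here): for each off-RXS recipient pixel, compute the sum of squared spectral-radiance differences F against every RXS pixel and, among those below the threshold, pick the geometrically nearest one as donor.

```python
import numpy as np

def assign_donors(recipient_radiances, recipient_xy, rxs_radiances, rxs_xy,
                  f_threshold=1.0):
    """Return, for each recipient pixel, the index of its donor RXS pixel (-1 if none).

    recipient_radiances : (Nr, Nchan) spectral radiances of off-RXS pixels
    recipient_xy        : (Nr, 2) pixel coordinates (km)
    rxs_radiances       : (Nd, Nchan) radiances of RXS (donor) pixels
    rxs_xy              : (Nd, 2) coordinates of RXS pixels
    """
    donors = np.full(len(recipient_radiances), -1, dtype=int)
    for i, (rad, xy) in enumerate(zip(recipient_radiances, recipient_xy)):
        # Cost function F: sum of squared spectral-radiance differences.
        f = np.sum((rxs_radiances - rad) ** 2, axis=1)
        candidates = np.where(f < f_threshold)[0]
        if candidates.size:
            # Among acceptable matches, take the closest RXS pixel in space.
            dist = np.linalg.norm(rxs_xy[candidates] - xy, axis=1)
            donors[i] = candidates[np.argmin(dist)]
    return donors
```

In the full algorithm the donor's retrieved cloud profile and associated 1D heating rates would then be copied to the recipient column.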
Abstract:
Models for water transfer in the crop-soil system are key components of agro-hydrological models for irrigation, fertilizer and pesticide practices. Many of the hydrological models for water transfer in the crop-soil system either rely on oversimplified algorithms and are therefore too approximate, or employ complex numerical schemes. In this paper we developed a simple and sufficiently accurate algorithm which can be easily adopted in agro-hydrological models for the simulation of water dynamics. We used a dual crop coefficient approach proposed by the FAO for estimating potential evaporation and transpiration, and a dynamic model for calculating relative root length distribution on a daily basis. In a small time step of 0.001 d, we implemented algorithms separately for actual evaporation, root water uptake and soil water content redistribution by decoupling these processes. The Richards equation describing soil water movement was solved using an integration strategy over the soil layers instead of complex numerical schemes. This drastically simplified the procedure of modeling soil water and led to much shorter computer code. The validity of the proposed model was tested against data from field experiments on two contrasting soils cropped with wheat. Good agreement was achieved between measured and simulated soil water content at various depths, sampled at intervals during crop growth. This indicates that the model is satisfactory in simulating water transfer in the crop-soil system, and can therefore reliably be adopted in agro-hydrological models. Finally, we demonstrated how the developed model could be used to study the effect of changes in the environment, such as a lowering of the groundwater table caused by the construction of a motorway, on crop transpiration. (c) 2009 Elsevier B.V. All rights reserved.
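A minimal Python sketch of the decoupling idea, assuming a hypothetical layer thickness, residual water content and function names that are not from the paper: within each 0.001 d sub-step, actual evaporation, root water uptake and vertical redistribution are applied to the layered soil-water store one after another instead of being solved simultaneously.

```python
DT = 0.001          # sub-step length in days, as stated in the abstract
LAYER_MM = 100.0    # assumed layer thickness (mm)
THETA_RES = 0.05    # assumed residual water content (m3/m3)

def step_water_balance(theta, e_pot, t_pot, root_fraction, redistribute):
    """Advance layered soil water content `theta` (m3/m3 per layer) by one sub-step.

    e_pot, t_pot  : potential evaporation / transpiration (mm/d), e.g. from the
                    FAO dual crop coefficient approach
    root_fraction : relative root length per layer (sums to 1)
    redistribute  : callable performing the vertical redistribution step
                    (the layer-integrated Richards solution in the paper)
    """
    # 1) Actual evaporation from the top layer, limited by extractable water.
    theta[0] -= min(e_pot * DT / LAYER_MM, max(theta[0] - THETA_RES, 0.0))
    # 2) Root water uptake, distributed over layers by relative root length.
    for i, fr in enumerate(root_fraction):
        theta[i] -= min(t_pot * fr * DT / LAYER_MM, max(theta[i] - THETA_RES, 0.0))
    # 3) Redistribution between layers, decoupled from the sink terms above.
    return redistribute(theta)
```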
H-infinity control design for time-delay linear systems: a rational transfer function based approach
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work proposes a methodology for the optimized allocation of switches for automatic load transfer in distribution systems, aiming to improve reliability indices through system restoration; the systems considered have voltage classes of 23 to 35 kV and radial topology. The automatic switches must be allocated in the system so that load can be transferred remotely among the sources at the substations. The switch allocation problem is formulated as a nonlinear constrained mixed-integer programming model subject to a set of economic and physical constraints. A dedicated Tabu Search (TS) algorithm is proposed to solve this model. The proposed methodology is tested on a large real-life distribution system. © 2011 IEEE.
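A minimal Python sketch of a tabu-search loop for switch placement, with a toy neighbourhood and objective standing in for the paper's reliability-plus-cost model and its economic and physical constraints:

```python
def tabu_search(initial, neighbours, cost, iterations=200, tabu_len=20):
    """Generic tabu search: `neighbours(x)` yields candidate switch placements,
    `cost(x)` returns the objective value to minimize."""
    best = current = initial
    tabu = []
    for _ in range(iterations):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible (non-tabu) move
        tabu.append(current)
        tabu = tabu[-tabu_len:]               # bounded tabu list
        if cost(current) < cost(best):
            best = current
    return best

# Toy example: place 3 automatic switches on 10 candidate branches.
def neighbours(x):
    for i in range(len(x)):
        for j in range(10):
            if j not in x:
                yield tuple(sorted(x[:i] + (j,) + x[i + 1:]))

print(tabu_search((0, 1, 2), neighbours, cost=lambda x: sum((i - 7) ** 2 for i in x)))
```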
Abstract:
This thesis work was carried out at the Medical Physics service of the Policlinico Sant'Orsola-Malpighi in Bologna. The study focused on the comparison between standard reconstruction techniques (Filtered Back Projection, FBP) and iterative techniques in Computed Tomography. The work was divided into two parts: in the first, the quality of images acquired with a multislice CT scanner (iCT 128, Philips system) was analysed using both the FBP algorithm and the iterative one (in our case iDose4). To evaluate image quality the following parameters were analysed: the Noise Power Spectrum (NPS), the Modulation Transfer Function (MTF) and the contrast-to-noise ratio (CNR). The first two quantities were studied through measurements on a phantom supplied by the manufacturer, which simulated the body and head sections with two cylinders of 32 and 20 cm respectively. The measurements confirm the noise reduction, but to a different extent for the different convolution filters used. The MTF study instead revealed that the use of standard or iterative techniques does not change the spatial resolution; the curves obtained are in fact perfectly identical (apart from the intrinsic differences between the convolution filters), contrary to what is claimed by the manufacturer. For the CNR analysis two phantoms were used: the first, the Catphan 600, is the phantom used to characterize CT systems; the second, the Cirs 061, contains inserts that simulate lesions with densities typical of the abdominal region. The study showed that, for both phantoms, the contrast-to-noise ratio increases when the iterative reconstruction technique is used. The second part of the thesis work was to assess the dose reduction by considering several protocols used in clinical practice: a large number of examinations were analysed and the mean CTDI and DLP values were calculated on a sample of examinations reconstructed with FBP and with iDose4. The results show that the values obtained with the iterative algorithm are below the national reference levels (DRL) and below those of examinations that do not use iterative systems.
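A minimal Python sketch of the contrast-to-noise ratio (CNR) used above to compare the FBP and iDose4 reconstructions, with synthetic regions of interest standing in for the phantom measurements:

```python
import numpy as np

def cnr(roi_insert, roi_background):
    """CNR = |mean(insert) - mean(background)| / std(background), in HU."""
    contrast = abs(np.mean(roi_insert) - np.mean(roi_background))
    return contrast / np.std(roi_background)

# Synthetic example: a low-contrast insert over a noisy background.
rng = np.random.default_rng(0)
background = rng.normal(40.0, 12.0, size=(50, 50))   # assumed 40 HU mean, 12 HU noise
insert = rng.normal(70.0, 12.0, size=(20, 20))       # assumed 70 HU insert
print(f"CNR = {cnr(insert, background):.2f}")
```

Lowering the background noise, as an iterative reconstruction does, raises the CNR for the same contrast, which is the effect reported for both phantoms.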
Abstract:
INTRODUCTION: Guidelines for the treatment of patients in severe hypothermia, and mainly in hypothermic cardiac arrest, recommend rewarming using extracorporeal circulation (ECC). However, guidelines for the further in-hospital diagnostic and therapeutic approach of these patients, who often suffer from additional injuries, especially in avalanche casualties, are lacking. The lack of such algorithms may relevantly delay treatment and put patients at further risk. Together with a multidisciplinary team, the Emergency Department at the University Hospital in Bern, a level I trauma centre, created an algorithm for the in-hospital treatment of patients with hypothermic cardiac arrest. This algorithm primarily focuses on the decision-making process for the administration of ECC. THE BERNESE HYPOTHERMIA ALGORITHM: The major difference between the traditional approach, where all hypothermic patients are primarily admitted to the emergency centre, and our new algorithm is that hypothermic cardiac arrest patients without obvious signs of severe trauma are taken to the operating theatre without delay. Subsequently, the interdisciplinary team decides whether to rewarm the patient using ECC based on a standard clinical trauma assessment, serum potassium levels, core body temperature, sonographic examinations of the abdomen, pleural space and pericardium, as well as a pelvic X-ray, if needed. During ECC, sonography is repeated and haemodynamic function as well as haemoglobin levels are regularly monitored. Standard radiological investigations according to the local multiple trauma protocol are performed only after ECC. Transfer to the intensive care unit, where mild therapeutic hypothermia is maintained for another 12 h, should not be delayed by additional X-rays for minor injuries. DISCUSSION: The presented algorithm is intended to facilitate in-hospital decision-making and shorten the door-to-reperfusion time for patients with hypothermic cardiac arrest. It was the result of intensive collaboration between different specialties and highlights the importance of high-quality teamwork for rare cases of severe accidental hypothermia. Information derived from the new International Hypothermia Registry will help to answer open questions and further optimize the algorithm.
Abstract:
Multi-parametric and quantitative magnetic resonance imaging (MRI) techniques have come into the focus of interest, both as research and diagnostic modalities, for the evaluation of patients suffering from mild cognitive decline and overt dementia. In this study we address the question of whether disease-related quantitative magnetization transfer (qMT) effects within the intra- and extracellular matrices of the hippocampus may aid in the differentiation between clinically diagnosed patients with Alzheimer's disease (AD), patients with mild cognitive impairment (MCI) and healthy controls. We evaluated 22 patients with AD (n=12) and MCI (n=10) and 22 healthy elderly (n=12) and younger (n=10) controls with multi-parametric MRI. Neuropsychological testing was performed in patients and elderly controls (n=34). In order to quantify the qMT effects, the absorption spectrum was sampled at relevant off-resonance frequencies. The qMT parameters were calculated according to a two-pool spin-bath model including the T1 and T2 relaxation parameters of the free pool, determined in separate experiments. Histograms (fixed bin size) of the normalized qMT parameter values (z-scores) within the anterior and posterior hippocampus (hippocampal head and body) were subjected to a fuzzy-c-means classification algorithm with downstream PCA projection. The within-cluster sums of point-to-centroid distances were used to examine the effects of the qMT and diffusion anisotropy parameters on the discrimination of healthy volunteers, patients with Alzheimer's disease and patients with MCI. The qMT parameters T2(r) (T2 of the restricted pool) and F (fractional pool size) differentiated between the three groups (controls, MCI and AD) in the anterior hippocampus. In our cohort, the MT ratio, as proposed in previous reports, did not differentiate between MCI and AD or between healthy controls and MCI, but it did differentiate between healthy controls and AD.
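A minimal from-scratch Python sketch of the fuzzy-c-means step, with a synthetic feature matrix standing in for the fixed-bin z-score histograms of the qMT parameters (the study's PCA projection and distance-based evaluation are omitted):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means. X: (n_samples, n_features). Returns (centroids, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        w = u ** m
        centroids = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centroids, u

# Synthetic example: three groups of subjects with shifted histogram features.
X = np.vstack([np.random.default_rng(i).normal(loc, 0.3, size=(10, 4))
               for i, loc in enumerate((-1.0, 0.0, 1.0))])
centroids, u = fuzzy_c_means(X)
print(u.argmax(axis=1))   # hard cluster labels for the 30 synthetic subjects
```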
Abstract:
The artificial pancreas is at the forefront of research towards automatic insulin infusion for patients with type 1 diabetes. Due to the high inter- and intra-patient variability of the diabetic population, the need for personalized approaches has been raised. This study presents an adaptive, patient-specific control strategy for glucose regulation based on reinforcement learning, and more specifically on the Actor-Critic (AC) learning approach. The control algorithm provides daily updates of the basal rate and insulin-to-carbohydrate (IC) ratio in order to optimize glucose regulation. A method for the automatic and personalized initialization of the control algorithm is designed based on the estimation of the transfer entropy (TE) between insulin and glucose signals. The algorithm has been evaluated in silico in adults, adolescents and children for 10 days. Three initialization scenarios, i) zero values, ii) random values and iii) TE-based values, have been comparatively assessed. The results show that when the TE-based initialization is used, the algorithm achieves faster learning, with 98%, 90% and 73% in the A+B zones of the Control Variability Grid Analysis for adults, adolescents and children respectively after five days, compared to 95%, 78% and 41% for random initialization and 93%, 88% and 41% for zero initial values. Furthermore, in the case of children, the daily Low Blood Glucose Index reduces much faster when the TE-based tuning is applied. The results imply that automatic and personalized tuning based on TE reduces the learning period and improves the overall performance of the AC algorithm.
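A minimal histogram-based Python sketch of the transfer entropy TE(insulin → glucose) used to initialize the controller, with coarse binning and a single time lag; this is a generic estimator, not the authors' implementation:

```python
import numpy as np

def transfer_entropy(x, y, bins=6):
    """Estimate TE(x -> y) in bits with lag 1 from two discretized time series."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for yn in np.unique(y_next):
        for yc in np.unique(y_now):
            for xc in np.unique(x_now):
                mask = (y_next == yn) & (y_now == yc) & (x_now == xc)
                p_joint = mask.mean()
                if p_joint == 0.0:
                    continue
                # p(y_next | y_now, x_now) and p(y_next | y_now)
                p_cond_xy = mask.sum() / ((y_now == yc) & (x_now == xc)).sum()
                p_cond_y = ((y_next == yn) & (y_now == yc)).sum() / (y_now == yc).sum()
                te += p_joint * np.log2(p_cond_xy / p_cond_y)
    return te

# Synthetic example: glucose loosely driven by insulin one step earlier.
rng = np.random.default_rng(1)
insulin = rng.normal(size=500)
glucose = 0.8 * np.roll(insulin, 1) + rng.normal(scale=0.5, size=500)
print(f"TE(insulin -> glucose) = {transfer_entropy(insulin, glucose):.3f} bits")
```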
Abstract:
In astrophysical regimes where the collisional excitation of hydrogen atoms is relevant, the cross-sections for the interactions of hydrogen atoms with electrons and protons are necessary for calculating line profiles and intensities. In particular, at relative velocities exceeding ∼1000 km s−1, collisional excitation by protons dominates over that by electrons. Surprisingly, the H–H+ cross-sections at these velocities do not exist for atomic levels of n ≥ 4, forcing researchers to utilize extrapolation via inaccurate scaling laws. In this study, we present a faster and improved algorithm for computing cross-sections for the H–H+ collisional system, including excitation and charge transfer to the n ≥ 2 levels of the hydrogen atom. We develop a code named BDSCX which directly solves the Schrödinger equation with variable (but non-adaptive) resolution and utilizes a hybrid spatial-Fourier grid. Our novel hybrid grid reduces the number of grid points needed from ∼4000 n⁶ (for a 'brute force' Cartesian grid) to ∼2000 n⁴ and speeds up the computation by a factor of ∼50 for calculations going up to n = 4. We present (l, m)-resolved results for charge transfer and excitation final states for n = 2–4 and for projectile energies of 5–80 keV, as well as fitting functions for the cross-sections. The ability to accurately compute H–H+ cross-sections to n = 4 allows us to calculate the Balmer decrement, the ratio of Hα to Hβ line intensities. We find that the Balmer decrement starts to increase beyond its largely constant value of 2–3 below 10 keV, reaching values of 4–5 at 5 keV, thus complicating its use as a diagnostic of dust extinction when fast (∼1000 km s−1) shocks are impinging upon the ambient interstellar medium.
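A small worked check, using only the scaling constants quoted in the abstract, of the grid-size saving from the hybrid spatial-Fourier grid:

```python
def grid_points(n):
    brute = 4000 * n ** 6    # brute-force Cartesian grid (abstract's scaling)
    hybrid = 2000 * n ** 4   # BDSCX hybrid spatial-Fourier grid
    return brute, hybrid

for n in (2, 3, 4):
    brute, hybrid = grid_points(n)
    print(f"n={n}: brute {brute:.2e} points, hybrid {hybrid:.2e} points, x{brute / hybrid:.0f} fewer")
```

At n = 4 this gives roughly 32 times fewer grid points; the quoted ∼50-fold speed-up refers to the computation time, not the point count alone.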
Abstract:
A sequential design method is presented for the design of thermally coupled distillation sequences. The algorithm starts by selecting a set of sequences in the space of basic configurations in which the internal structure of condensers and reboilers is explicitly taken into account, extended with the possibility of including divided wall columns (DWC). This first stage is based on separation tasks (except for the DWCs) and therefore it does not provide an actual sequence of columns. In the second stage the best arrangement into N-1 actual columns is determined, taking into account operability and mechanical constraints. Finally, for a set of candidate sequences the algorithm tries to reduce the total number of columns by considering Kaibel columns, the elimination of transfer blocks, or columns with vertical partitions. An example illustrates the different steps of the sequential algorithm.
Abstract:
In this paper, a new differential evolution (DE) based optimal available transfer capability (ATC) assessment for power systems is presented. Power system total transfer capability (TTC) is traditionally solved by the repeated power flow (RPF) method and the continuation power flow (CPF) method. These methods are based on the assumption that the outputs of the source-area generators are increased in identical proportion to balance the load increment in the sink area. A new approach based on the DE algorithm to generate an optimal dispatch of both the source-area generators and the sink-area loads is proposed in this paper. This new method can compute the ATC between two areas with a significant improvement in accuracy compared with the traditional RPF and CPF based methods. A case study using a 30-bus system is given to verify the efficiency and effectiveness of this new DE based ATC optimization approach.
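A minimal Python sketch of a DE/rand/1/bin loop of the kind used above, with a toy objective standing in for the ATC power-flow model and its constraints:

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, f=0.8, cr=0.9, gens=200, seed=0):
    """DE/rand/1/bin minimizer. bounds: (D, 2) array of [low, high] per variable."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: perturb a random base vector with a scaled difference vector.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + f * (b - c), low, high)
            # Binomial crossover between target and mutant, then greedy selection.
            trial = np.where(rng.random(len(bounds)) < cr, mutant, pop[i])
            trial_cost = cost(trial)
            if trial_cost < fitness[i]:
                pop[i], fitness[i] = trial, trial_cost
    return pop[np.argmin(fitness)], fitness.min()

# Toy dispatch example: four adjustable injections targeting a total transfer of 5 p.u.
bounds = np.array([[0.0, 2.0]] * 4)
best, val = differential_evolution(lambda x: (x.sum() - 5.0) ** 2 + 0.1 * np.var(x), bounds)
print(best, val)
```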
Abstract:
A 10 cm diameter four-stage Scheibel column with dispersed-phase wetted packing sections has been constructed to study the hydrodynamics and mass transfer using the system toluene-acetone-water. The literature pertaining to the above extractor has been examined and the important phenomena, such as droplet break-up and coalescence, mass transfer and backmixing, have been reviewed. A critical analysis of the backmixing (axial mixing) models and the corresponding techniques for parameter estimation was carried out, and an optimization technique based on Marquardt's algorithm was implemented. A single-phase sampling technique was developed to estimate the acetone concentration profile in both phases along the column. Column flooding characteristics were investigated under various operating conditions and it was found that, when the impellers were located at about DI/5 cm from the upper surface of the pads, the limiting flow rates increased with impeller speed. This unusual behaviour was explained in terms of the pumping effect created by the turbine impellers. Correlations were developed to predict Sauter mean drop diameters. A five-cell model with backflow was used to estimate the column performance (stage efficiency) and the non-ideality of the phases (backflow parameters). Overall mass transfer coefficients were computed using the above model and compared with those calculated using correlations based on the single-drop mechanism.
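A minimal Python sketch of a Marquardt-type least-squares fit of the kind described above, assuming a hypothetical exponential axial concentration profile rather than the thesis's backflow model equations:

```python
import numpy as np
from scipy.optimize import curve_fit   # Levenberg-Marquardt for unconstrained fits

def profile_model(z, c_in, k):
    """Assumed axial acetone concentration profile: exponential decay along the column."""
    return c_in * np.exp(-k * z)

# Hypothetical sampled concentrations along the normalized column height z.
z = np.linspace(0.0, 1.0, 10)
c_meas = 0.12 * np.exp(-2.5 * z) + np.random.default_rng(2).normal(0.0, 0.002, z.size)

(c_in_fit, k_fit), _ = curve_fit(profile_model, z, c_meas, p0=[0.1, 1.0])
print(f"fitted inlet concentration {c_in_fit:.3f}, decay constant {k_fit:.2f}")
```

In the thesis the fitted quantities are the backflow (axial mixing) parameters of the five-cell model rather than a simple decay constant.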
Abstract:
ACM Computing Classification System (1998): I.2.8, G.1.6.
Abstract:
This thesis presents the study of a two-degree-of-freedom (2 DOF) nonlinear system consisting of two grounded linear oscillators coupled to two separate lightweight nonlinear energy sinks with essentially nonlinear stiffness. The concepts of Targeted Energy Transfer (TET) and the nonlinear energy sink (NES) are introduced, and previous studies of energy pumping and NESs are reviewed. The characteristics of nonlinear energy pumping are presented at the start of the thesis, and, because the aim is to design a tremor reduction assessment device, background knowledge on tremor reduction is also covered. The research comprises two main parts: a theoretical study of nonlinear energy pumping and experiments on a nonlinear vibration reduction model. The NES is used as the core attachment throughout the work. A new theoretical vibration reduction configuration, in which two NESs are attached to a primary system, has been designed and tested using targeted energy transfer; systems with series and parallel connection structures were designed for the tests. A genetic algorithm was used, and is presented in the thesis, to search for suitable component values, and a further experiment was run with the final components. The results were compared to identify the most efficient structure and components for the theoretical model. A tremor reduction experiment was also designed and is presented in the thesis; its purpose is to develop an application for reducing human body tremor. Using the theoretical method described earlier, the experiment was designed and tested with a tremor reduction model and includes several tests: a system with a single attached NES and systems with two attached NESs in different structures. The results of the theoretical and experimental models were compared and discussed. Finally, further work towards designing the tremor reduction device is outlined.
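A minimal Python sketch of a real-coded genetic algorithm of the kind used for the component search, with a toy objective standing in for the measured vibration-reduction performance (the parameter names and target values are illustrative only):

```python
import random

def genetic_search(fitness, bounds, pop_size=40, gens=100, mut_rate=0.2, seed=0):
    """Simple real-coded GA minimizing `fitness`. bounds: list of (low, high) per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            if rng.random() < mut_rate:                  # uniform mutation of one gene
                i = rng.randrange(len(bounds))
                child[i] = rng.uniform(*bounds[i])
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy stand-in: residual primary-mass vibration as a function of NES cubic stiffness
# and damping, with hypothetical optimum values.
best = genetic_search(lambda p: (p[0] - 1.2e5) ** 2 / 1e8 + (p[1] - 15.0) ** 2,
                      bounds=[(1e4, 5e5), (1.0, 50.0)])
print(best)
```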