950 results for ITERATIVE WATER-FILLING ALGORITHM
Abstract:
This paper discusses the fixed-rate-constraint power allocation problem in multi-carrier code division multiple access (MC-CDMA) networks, which has been solved from a game-theoretic perspective through an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal the approach's shortcomings, so a new method combining swarm intelligence and IWFA is proposed to make game-theoretic approaches practicable in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) it provides a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; (ii) it proposes a hybrid power allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage probability analysis in realistic interference scenarios and a complexity comparison with the classical IWFA are presented. (C) 2011 Elsevier B.V. All rights reserved.
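The single-user water-filling step at the heart of IWFA can be sketched in a few lines. This is a generic illustration (not the paper's game-theoretic formulation), with hypothetical variable names, that finds the water level mu by bisection:

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-10):
    """Allocate power p_i = max(0, mu - 1/g_i) across channels with
    gains g_i so that sum(p_i) equals total_power; the water level mu
    is found here by bisection."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu  # budget exceeded: lower the water level
        else:
            lo = mu  # budget not exhausted: raise the water level
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)
```

Channels with poor gain (large 1/g_i) fall above the water level and receive zero power, which is exactly the behaviour the game-theoretic analysis has to account for.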
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or of required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems, among those described above, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account distances with lower computational cost than the Euclidean one, which is the de facto standard for the algorithm's implementations in the literature.
In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine the one that offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
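For reference, the basic point-to-point ICP loop that such work builds on can be sketched as follows. This is a minimal NumPy illustration (function names are ours), using a brute-force nearest-neighbour search (the quadratic-cost phase mentioned above) and a Kabsch/SVD step for the best rigid transform:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    for known correspondences (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [float(d)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP with brute-force closest-point search."""
    cur = src.copy()
    for _ in range(iters):
        # quadratic-cost step: Euclidean distance from every point to every point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

The `d2` computation makes clear why cheaper point-to-point metrics pay off: every iteration evaluates the metric for all point pairs.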
Abstract:
Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed that allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It is shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit swap procedure is then introduced to increase the noise robustness when applying DSM.
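The level-1 iterative water-filling loop can be sketched generically as follows. This is an illustrative NumPy version (not the Alcatel test-bed implementation; array names are ours): each user in turn water-fills its per-tone power against the noise-plus-crosstalk produced by the other users' current allocations, and the process is repeated until the powers settle.

```python
import numpy as np

def water_fill(gains, budget, tol=1e-10):
    """Single-user water-filling over effective channel gains."""
    inv = 1.0 / gains
    lo, hi = 0.0, inv.max() + budget
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

def iwfa(H, noise, budgets, iters=100):
    """Iterative water-filling.
    H[u, v, k]: gain from transmitter v into receiver u on tone k.
    noise[u, k]: background noise at receiver u on tone k."""
    U, _, K = H.shape
    P = np.zeros((U, K))
    for _ in range(iters):
        for u in range(U):
            # crosstalk from all other users at their current powers
            interf = noise[u] + sum(H[u, v] * P[v] for v in range(U) if v != u)
            P[u] = water_fill(H[u, u] / interf, budgets[u])
    return P
```

Each user is selfish here, which is why convergence and solution quality depend on the crosstalk environment, as the test-bed evaluation explores.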
Abstract:
Motivation: The immunogenicity of peptides depends on their ability to bind to MHC molecules. MHC binding affinity prediction methods can save significant amounts of experimental work. The class II MHC binding site is open at both ends, making epitope prediction difficult because long peptides can bind in multiple ways. Results: An iterative self-consistent partial least squares (PLS)-based additive method was applied to a set of 66 peptides no longer than 16 amino acids binding to DRB1*0401. A regression equation containing the quantitative contributions of the amino acids at each of the nine positions was generated. Its predictive ability was tested using two external test sets, which gave r_pred = 0.593 and r_pred = 0.655, respectively. Furthermore, it was benchmarked using 25 known T-cell epitopes restricted by DRB1*0401, and we compared our results with four other online predictive methods. The additive method showed the best result, identifying 24 of the 25 T-cell epitopes. Availability: Peptides used in the study are available from http://www.jenner.ac.uk/JenPep. The PLS method is available commercially in the SYBYL molecular modelling software package. The final model for affinity prediction of peptides binding to the DRB1*0401 molecule is available at http://www.jenner.ac.uk/MHCPred. Models developed for DRB1*0101 and DRB1*0701 are also available in MHCPred.
Abstract:
The characterization of air-water two-phase vertical flow in a 12 m flow loop with a 1.5 m vertical section is studied using electrical resistance tomography (ERT). By applying fast data collection to a dual-plane ERT sensor together with an iterative image reconstruction algorithm, relevant information is gathered for characterizing the flow, particularly for flow regime recognition. A cross-correlation method is also used to interpret the velocity distribution of the gas phase over the cross section. The paper demonstrates that ERT can now be deployed routinely for velocity measurements, and this capability will increase as faster measurement systems evolve.
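The cross-correlation idea behind dual-plane velocity measurement can be illustrated simply: the transit time of the gas phase between the two sensor planes is estimated as the lag that maximizes the cross-correlation of the two planes' signals, and axial velocity follows as plane spacing divided by that delay. A generic sketch (not the paper's implementation; names are ours):

```python
import numpy as np

def transit_delay(upstream, downstream, dt):
    """Transit time (seconds): the lag at which the downstream signal
    best matches the upstream one, via the cross-correlation peak."""
    u = upstream - upstream.mean()
    d = downstream - downstream.mean()
    corr = np.correlate(d, u, mode="full")
    lag = int(corr.argmax()) - (len(u) - 1)  # samples of delay
    return lag * dt

def axial_velocity(upstream, downstream, dt, plane_spacing):
    # velocity = sensor-plane spacing / transit time between planes
    return plane_spacing / transit_delay(upstream, downstream, dt)
```

In practice this is applied per pixel (or per region) of the reconstructed images, yielding a velocity distribution over the cross section.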
Abstract:
To investigate whether an adaptive statistical iterative reconstruction (ASIR) algorithm improves image quality in low-tube-voltage (80 kVp), high-tube-current (675 mA) multidetector abdominal computed tomography (CT) during the late hepatic arterial phase.
Abstract:
We propose power allocation algorithms for increasing the sum rate of two- and three-user interference channels. The channels experience fast fading, and there is an average power constraint on each transmitter. Our achievable strategies for two- and three-user interference channels are based on classifying the interference as very strong, strong or weak. We present numerical results of the power allocation algorithm for the two-user Gaussian interference channel with Rician fading, with mean total power gain of the fade Omega = 3 and Rician factor kappa = 0.5, and compare the sum rate with that obtained from ergodic interference alignment with water-filling. We show that our power allocation algorithm increases the sum rate with a gain of 1.66 dB at an average transmit SNR of 5 dB. For the three-user Gaussian interference channel with Rayleigh fading with distribution CN(0, 0.5), we show that our power allocation algorithm improves the sum rate with a gain of 1.5 dB at an average transmit SNR of 5 dB.
Abstract:
The light emission spectrum from a scanning tunnelling microscope (LESTM) is investigated as a function of relative humidity and shown to provide a novel and sensitive means for probing the growth and properties of a water meniscus on the nanometre scale. An empirical model of the light emission process is formulated and applied successfully to replicate the decay in light intensity and spectral changes observed with increasing relative humidity. The modelling indicates a progressive water filling of the tip-sample junction with increasing humidity or, more pertinently, of the volume of the localized surface plasmons responsible for light emission; it also accounts for the effect of asymmetry in structuring of the water molecules with respect to the polarity of the applied bias. This is juxtaposed with the case of a non-polar liquid in the tip-sample nanocavity where no polarity dependence of the light emission is observed. In contrast to the discrete detection of the presence/absence of a water bridge in other scanning probe experiments through measurement of the feedback parameter for instrument control, LESTM offers a means of continuously monitoring the development of the water bridge with sub-nanometre sensitivity. The results are relevant to applications such as dip-pen nanolithography and electrochemical scanning probe microscopy.
Abstract:
In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
Abstract:
OBJECTIVE The aim of the present study was to evaluate dose reduction in contrast-enhanced chest computed tomography (CT) by comparing the three latest generations of Siemens CT scanners used in clinical practice. We analyzed the amount of radiation required with filtered back projection (FBP) and with an iterative reconstruction (IR) algorithm to yield the same image quality. Furthermore, the influence on radiation dose of the most recent integrated circuit detector (ICD; Stellar detector, Siemens Healthcare, Erlangen, Germany) was investigated. MATERIALS AND METHODS 136 patients were included. Scan parameters were set to a routine thorax protocol: SOMATOM Sensation 64 (FBP), SOMATOM Definition Flash (IR), and SOMATOM Definition Edge (ICD and IR). Tube current was set constantly to the reference level of 100 mA, with automated tube current modulation using reference milliamperes. CARE kV was used on the Flash and Edge scanners, while tube potential was selected individually between 100 and 140 kVp by the medical technologists on the SOMATOM Sensation. Quality assessment was performed on soft-tissue kernel reconstructions. Dose was represented by the dose-length product (DLP). RESULTS The DLP with FBP for the average chest CT was 308 mGy*cm ± 99.6. In contrast, the DLP for chest CT with the IR algorithm was 196.8 mGy*cm ± 68.8 (P = 0.0001). A further decline in dose was noted with IR and the ICD: DLP 166.4 mGy*cm ± 54.5 (P = 0.033). The dose reduction compared to FBP was 36.1% with IR and 45.6% with IR/ICD. The signal-to-noise ratio (SNR) was favorable in the aorta, bone, and soft tissue for the IR/ICD combination compared to FBP (P values ranged from 0.003 to 0.048). Overall, the contrast-to-noise ratio (CNR) improved with declining DLP. CONCLUSION The most recent technical developments, namely IR in combination with integrated circuit detectors, can significantly lower radiation dose in chest CT examinations.
Abstract:
This volume contains the Proceedings of the Twenty-Sixth Annual Biochemical Engineering Symposium held at Kansas State University on September 21, 1996. The program included 10 oral presentations and 14 posters. Some of the papers describe the progress of ongoing projects, and others contain the results of completed projects. Only brief summaries are given of some of the papers; many of the papers will be published in full elsewhere. A listing of those who attended is given below.
Contents:
Foreign Protein Production from SV40 Early Promoter in Continuous Cultures of Recombinant CHO Cells - Gautam Banik, Paul Todd, and Dhinakar Kampala
Enhanced Cell Recruitment Due to Cell-Cell Interactions - Brad Farlow and Matthias Nollert
The Recirculation of Hybridoma Suspension Cultures: Effects on Cell Death, Metabolism and Mab Productivity - Peng Jin and Carole A. Heath
The Importance of Enzyme Inactivation and Self-Recovery in Cometabolic Biodegradation of Chlorinated Solvents - Xi-Hui Zhang, Shanka Banerji, and Rakesh Bajpai
Phytoremediation of VOC Contaminated Groundwater Using Poplar Trees - Melissa Miller, Jason Dana, L.C. Davis, Murlidharan Narayanan, and L.E. Erickson
Biological Treatment of Off-Gases from Aluminum Can Production: Experimental Results and Mathematical Modeling - Adeyma Y. Arroyo, Julio Zimbron, and Kenneth F. Reardon
Inertial Migration Based Separation of Chlorella Microalgae in Branched Tubes - N.M. Poflee, A.L. Rakow, D.S. Dandy, M.L. Chappell, and M.N. Pons
Contribution of Electrochemical Charge to Protein Partitioning in Aqueous Two-Phase Systems - Weiyu Fan and Charles C. Glatz
Biodegradation of Some Commercial Surfactants Used in Bioremediation - Jun Gu, G.W. Preckshot, S.K. Banerji, and Rakesh Bajpai
Modeling the Role of Biomass in Heavy Metal Transport in Vadose Zone - K.V. Nedunuri, L.E. Erickson, and R.S. Govindaraju
Multivariable Statistical Methods for Monitoring Process Quality: Application to Bioinsecticide Production by Bacillus Thuringiensis - C. Puente and M.N. Karim
The Use of Polymeric Flocculants in Bacterial Lysate Streams - H. Graham, A.S. Cibulskas and E.H. Dunlop
Effect of Water Content on Transport of Trichloroethylene in a Chamber with Alfalfa Plants - Muralidharan Narayanan, Jiang Hu, Lawrence C. Davis, and Larry E. Erickson
Detection of Specific Microorganisms Using the Arbitrary Primed PCR in the Bacterial Community of Vegetated Soil - X. Wu and L.C. Davis
Flux Enhancement Using Backpulsing - V.T. Kuberkar and R.H. Davis
Chromatographic Purification of Oligonucleotides: Comparison with Electrophoresis - Stephen P. Cape, Ching-Yuan Lee, Kevin Petrini, Sean Foree, Micheal G. Sportiello and Paul Todd
Determining Singular Arc Control Policies for Bioreactor Systems Using a Modified Iterative Dynamic Programming Algorithm - Arun Tholudur and W. Fred Ramirez
Pressure Effect on Subtilisins Measured via FTIR, EPR and Activity Assays, and Its Impact on Crystallizations - J.N. Webb, R.Y. Waghmare, M.G. Bindewald, T.W. Randolph, J.F. Carpenter, C.E. Glatz
Intercellular Calcium Changes in Endothelial Cells Exposed to Flow - Laura Worthen and Matthias Nollert
Application of Liquid-Liquid Extraction in Propionic Acid Fermentation - Zhong Gu, Bonita A. Glatz, and Charles E. Glatz
Purification of Recombinant T4 Lysozyme from E. coli: Ion-Exchange Chromatography - Weiyu Fan, Matt L. Thatcher, and Charles E. Glatz
Recovery and Purification of Recombinant Beta-Glucuronidase from Transgenic Corn - Ann R. Kusnadi, Roque Evangelista, Zivko L. Nikolov, and John Howard
Effects of Auxins and Cytokinins on Formation of Catharanthus roseus G. Don Multiple Shoots - Ying-Jin Yuan, Yu-Min Yang, Tsung-Ting Hu, and Jiang Hu
Fate and Effect of Trichloroethylene as Nonaqueous Phase Liquid in Chambers with Alfalfa - Qizhi Zhang, Brent Goplen, Sara Vanderhoof, Lawrence C. Davis, and Larry E. Erickson
Oxygen Transport and Mixing Considerations for Microcarrier Culture of Mammalian Cells in an Airlift Reactor - Sridhar Sunderam, Frederick R. Souder, and Marylee Southard
Effects of Cyclic Shear Stress on Mammalian Cells under Laminar Flow Conditions: Apparatus and Methods - M.L. Rigney, M.H. Liew, and M.Z. Southard
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
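The core update behind ordered-subsets reconstruction can be sketched generically. Below is an illustrative dense-matrix MLEM (not the paper's Monte Carlo-modelled, symmetry-exploiting implementation; `A` stands in for the precalculated system matrix and `y` for the measured data), where OSEM simply applies the same multiplicative update cyclically over ordered subsets of the rows of `A`:

```python
import numpy as np

def mlem(A, y, iters=50):
    """Basic MLEM update: x <- x * A^T(y / Ax) / A^T 1.
    A: system matrix (bins x voxels), y: measured counts per bin."""
    x = np.ones(A.shape[1])          # uniform initial image
    sens = A.sum(axis=0)             # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x                 # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Since the update only ever touches `A` through products with vectors, storing `A` sparsely (as the paper does for the non-zero elements) directly accelerates every iteration.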
Abstract:
Introduction: Diffusion weighted imaging (DWI) techniques are able to measure, in vivo and non-invasively, the diffusivity of water molecules inside the human brain. DWI has been applied to cerebral ischemia, brain maturation, epilepsy, multiple sclerosis, etc. [1]. Nowadays these images are widely available. DWI allows the identification of brain tissues, so their accurate segmentation is a common initial step for the referred applications. Materials and Methods: We present a validation study on automated segmentation of DWI based on Gaussian mixture and hidden Markov random field models. This formulation is commonly solved with the iterated conditional modes algorithm, but some studies suggest [2] that graph-cut (GC) algorithms improve the results when the initialization is not close to the final solution. We implemented a segmentation tool integrating ITK with a GC algorithm [3], and validation software using fuzzy overlap measures [4]. Results: The segmentation accuracy of each tool is tested against a gold-standard segmentation obtained from a T1 MPRAGE magnetic resonance image of the same subject, registered to the DWI space. The proposed software shows meaningful improvements from using the GC energy minimization approach on DTI and DSI (Diffusion Spectrum Imaging) data. Conclusions: Brain tissue segmentation on DWI is a fundamental step in many applications. Accuracy and robustness improvements are achieved with the proposed software, with high impact on the applications' final results.
Abstract:
In this contribution, a novel iterative bit- and power-allocation (IBPA) approach is developed for transmitting a given bit/s/Hz data rate over a correlated, frequency non-selective (4 × 4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource allocation algorithm developed in this investigation aims at achieving the minimum bit-error rate (BER) in a correlated MIMO communication system. To achieve this goal, the available bits are iteratively allocated to the active MIMO layers presenting the minimum transmit power requirement per time slot.
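The greedy idea behind such iterative bit allocation can be illustrated in a few lines (a generic Hughes-Hartogs-style sketch with hypothetical per-layer SNR gains, not the paper's exact rule): each bit is assigned to whichever layer needs the least additional transmit power to carry one more bit.

```python
import numpy as np

def greedy_bit_loading(snr_gains, total_bits):
    """Assign total_bits one at a time to the layer whose next bit is
    cheapest, using the illustrative cost model P(b) = (2^b - 1) / g."""
    n_layers = len(snr_gains)
    bits = np.zeros(n_layers, dtype=int)
    power = np.zeros(n_layers)
    for _ in range(total_bits):
        # power each layer would need if it carried one more bit
        cand = (2.0 ** (bits + 1) - 1.0) / snr_gains
        j = int((cand - power).argmin())  # cheapest incremental bit
        bits[j] += 1
        power[j] = cand[j]
    return bits, power
```

Strong layers naturally absorb most of the bits, which mirrors the paper's observation that the minimum-BER allocation concentrates bits on the layers with the lowest transmit power requirement.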
Abstract:
An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method uses a simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable is equal to a linear functional of the solution. The algorithm uses a so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm has also been realized, using the ATHAPASCAN package as the environment for parallelization. The computational results demonstrate the high parallel efficiency of the presented algorithm and show that it yields a good solution when the almost optimal density function is used as the transition density.