853 results for heterogeneous regressions algorithms


Relevance: 30.00%

Publisher:

Abstract:

Global connectivity is on the verge of becoming a reality, providing high-speed, high-quality, and reliable communication channels for mobile devices at any time, anywhere in the world. In a heterogeneous wireless environment, one of the key ingredients for providing efficient and ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms. Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient and intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and blocking probabilities. This research presents a novel Multi Attribute Decision Making (MADM) model based on an integrated fuzzy approach for target network selection.
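As a rough illustration of the multi-attribute idea only (not the paper's integrated fuzzy MADM model), the sketch below ranks hypothetical candidate networks with Simple Additive Weighting; every attribute name, weight, and value is an assumption.

```python
# Illustrative Simple Additive Weighting (SAW) ranking of candidate networks, a basic
# MADM technique. Attribute names, weights, and values are hypothetical.

def normalize(values, benefit=True):
    """Min-max normalize a list of attribute values (benefit: higher is better)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) if benefit else (hi - v) / (hi - lo) for v in values]

def rank_networks(networks, weights, benefit_flags):
    """Score each candidate network as a weighted sum of normalized attributes."""
    n_attrs = len(weights)
    columns = [[net["attrs"][j] for net in networks] for j in range(n_attrs)]
    norm_cols = [normalize(col, benefit_flags[j]) for j, col in enumerate(columns)]
    scores = []
    for i, net in enumerate(networks):
        score = sum(weights[j] * norm_cols[j][i] for j in range(n_attrs))
        scores.append((score, net["name"]))
    return sorted(scores, reverse=True)

# Hypothetical candidates with attributes [RSS (dBm), bandwidth (Mb/s), cost, delay (ms)]
candidates = [
    {"name": "WLAN", "attrs": [-65, 54.0, 1.0, 40.0]},
    {"name": "LTE",  "attrs": [-80, 30.0, 3.0, 25.0]},
    {"name": "UMTS", "attrs": [-90, 2.0,  2.0, 60.0]},
]
weights = [0.3, 0.3, 0.2, 0.2]               # relative importance of each attribute
benefit_flags = [True, True, False, False]   # cost and delay: lower is better

print(rank_networks(candidates, weights, benefit_flags))
```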

Relevance: 30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Publisher:

Abstract:

Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms.

In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and picos.

In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated to any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization.

In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference of submodular functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference of convex functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance with variation in different network topology parameters.
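The toy sketch below only illustrates the alternating structure described for the time-domain solution: it alternates a greedy user association with a heuristic update of the activation fractions and tracks an alpha-fair utility. It is not the dissertation's auxiliary-function method or its combinatorial association algorithm; the rates, the fraction update rule, and all parameters are hypothetical.

```python
# Toy alternating scheme: (1) re-associate users greedily, (2) update activation
# fractions heuristically, then evaluate an alpha-fair utility. Hypothetical setup only.

import math
import random

random.seed(0)

ALPHA = 1.0  # alpha = 1 corresponds to proportional fairness (log utility)

def alpha_utility(rate, alpha=ALPHA):
    rate = max(rate, 1e-9)
    return math.log(rate) if alpha == 1.0 else rate ** (1.0 - alpha) / (1.0 - alpha)

def throughput(user, tp, assoc, x, peak):
    """Equal time-sharing of TP 'tp' among its associated users while it is active."""
    load = sum(1 for u, t in assoc.items() if t == tp)
    return x[tp] * peak[user][tp] / max(load, 1)

n_users, n_tps = 8, 3
peak = [[random.uniform(1.0, 10.0) for _ in range(n_tps)] for _ in range(n_users)]

# Start from a uniform activation and a peak-rate association.
x = [1.0 / n_tps] * n_tps
assoc = {u: max(range(n_tps), key=lambda t: peak[u][t]) for u in range(n_users)}

for it in range(20):
    # Step 1: re-associate each user greedily, holding activation fractions fixed.
    for u in range(n_users):
        assoc[u] = max(range(n_tps), key=lambda t: x[t] * peak[u][t])
    # Step 2: set activation fractions proportional to each TP's user load (toy rule).
    loads = [sum(1 for u in assoc if assoc[u] == t) for t in range(n_tps)]
    total = sum(loads) or 1
    x = [max(l, 1) / (total + n_tps) for l in loads]  # keep every TP slightly active
    utility = sum(alpha_utility(throughput(u, assoc[u], assoc, x, peak))
                  for u in range(n_users))
    print(f"iteration {it}: alpha-fair utility = {utility:.3f}")
```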

Relevance: 30.00%

Publisher:

Abstract:

Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used for estimating the radiation dose of a therapy, evaluating the progression of a disease, and as prognostic indicators for predicting outcome. PET images have low resolution and high noise and are affected by the partial volume effect (PVE). Manually segmenting each tumor is very cumbersome and very hard to reproduce. To solve this problem I developed an algorithm, called the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF), and it does not require optimization for a specific camera. My algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal to background ratios (SBR) and different acquisition times (1-5 min). The algorithm was applied to ten clinical datasets and the results were compared with manual segmentation and with the fixed percentage thresholding methods T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average errors in FV and mAC calculation were 30% and -35% for the 0.5 ml tumor, and about 5% for the 16 ml tumor. The overall FV error was about 10% for heterogeneous tumors in the physical and simulated phantom data. The FV and mAC errors for clinical images compared to manual segmentation were around -17% and 15%, respectively. In summary, my algorithm has the potential to be applied to data acquired from different cameras as it does not depend on knowing the camera's PSF. The algorithm can also improve dose estimation and treatment planning.
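For orientation only, the sketch below shows how a threshold-based segmentation yields FV and mAC on a synthetic PET-like volume. It is not the IDTS algorithm: the deconvolution step and the PVE correction are omitted, and the threshold fraction, background level, and voxel size are assumptions.

```python
# Iterative threshold-based segmentation of a synthetic 3D volume, reporting functional
# volume (FV) and mean activity concentration (mAC). Not the IDTS algorithm: no
# deconvolution, no PVE correction; the 42% fraction and voxel size are hypothetical.

import numpy as np

def segment_fv_mac(image, voxel_volume_ml, frac=0.42, background=0.0, max_iter=50):
    """Iteratively threshold at background + frac * (mean in mask - background)."""
    threshold = background + frac * (image.max() - background)
    mask = image >= threshold
    for _ in range(max_iter):
        mean_in_mask = image[mask].mean()
        new_threshold = background + frac * (mean_in_mask - background)
        new_mask = image >= new_threshold
        if np.array_equal(new_mask, mask):
            break
        mask, threshold = new_mask, new_threshold
    fv_ml = mask.sum() * voxel_volume_ml   # functional volume in ml
    mac = image[mask].mean()               # mean activity concentration inside the mask
    return fv_ml, mac, mask

# Synthetic example: a hot sphere on a uniform noisy background.
rng = np.random.default_rng(1)
img = np.full((40, 40, 40), 1.0) + 0.05 * rng.standard_normal((40, 40, 40))
zz, yy, xx = np.mgrid[:40, :40, :40]
img[(zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 8 ** 2] += 4.0

fv, mac, _ = segment_fv_mac(img, voxel_volume_ml=0.008, background=1.0)
print(f"FV ~= {fv:.2f} ml, mAC ~= {mac:.2f}")
```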

Relevance: 30.00%

Publisher:

Abstract:

Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as the Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear system solver algorithm can significantly decrease the time spent computing its solution. One of the most widely used algorithms deployed for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is in the Scalable Linear Algebra PACKage (ScaLAPACK) library. However, another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK by profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, which are integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
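The following sketch illustrates one way the energy consumed by a code region can be read around a linear solve, using the Linux powercap exposure of the RAPL counters rather than PAPI; the sysfs path and the NumPy solve used as a stand-in workload are assumptions, not the thesis setup.

```python
# Read RAPL package energy via the Linux powercap sysfs files around a workload.
# Assumes a Linux machine with the intel_rapl driver loaded and readable counters;
# the dense NumPy solve is only a stand-in for the ScaLAPACK / Inhibition Method runs.

import numpy as np

RAPL_DOMAIN = "/sys/class/powercap/intel-rapl:0"   # package 0; adjust per machine

def read_energy_uj(domain=RAPL_DOMAIN):
    with open(f"{domain}/energy_uj") as f:
        return int(f.read().strip())

def max_energy_uj(domain=RAPL_DOMAIN):
    with open(f"{domain}/max_energy_range_uj") as f:
        return int(f.read().strip())

def measure_energy(workload, *args):
    """Return (result, energy in joules), handling a single counter wraparound."""
    before = read_energy_uj()
    result = workload(*args)
    after = read_energy_uj()
    delta = after - before if after >= before else after + max_energy_uj() - before
    return result, delta / 1e6

def solve_dense_system(n):
    a = np.random.rand(n, n) + n * np.eye(n)   # well-conditioned random system
    b = np.random.rand(n)
    return np.linalg.solve(a, b)               # LAPACK-based Gaussian elimination

if __name__ == "__main__":
    _, joules = measure_energy(solve_dense_system, 2000)
    print(f"energy for the solve: {joules:.3f} J")
```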

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
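The sketch below illustrates only the decoder idea at the core of a biased random-key genetic algorithm for winner determination: random keys order the bids and a greedy pass accepts non-conflicting ones. The bid data and the single chromosome are hypothetical; none of the paper's six variants or the LP-relaxation warm start is reproduced.

```python
# BRKGA-style decoder: a chromosome of keys in [0,1) defines a priority order over
# bids; the decoder greedily accepts bids whose item sets do not overlap. Toy data only.

import random

# Each bid: (price offered, frozenset of items requested) -- hypothetical instance
bids = [
    (10.0, frozenset({"a", "b"})),
    (7.0,  frozenset({"b", "c"})),
    (6.0,  frozenset({"c"})),
    (5.5,  frozenset({"a"})),
]

def decode(keys, bids):
    """Map a vector of random keys to a feasible selection of non-conflicting bids."""
    order = sorted(range(len(bids)), key=lambda i: keys[i])  # lower key = higher priority
    taken_items, chosen, profit = set(), [], 0.0
    for i in order:
        price, items = bids[i]
        if taken_items.isdisjoint(items):
            taken_items |= items
            chosen.append(i)
            profit += price
    return profit, chosen

random.seed(0)
chromosome = [random.random() for _ in bids]
print(decode(chromosome, bids))
```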

Relevance: 20.00%

Publisher:

Abstract:

In this work an iterative strategy is developed to tackle the problem of coupling dimensionally-heterogeneous models in the context of fluid mechanics. The procedure proposed here reinterprets the original problem as a nonlinear interface problem to which classical nonlinear solvers can be applied. Strong coupling of the partitions is achieved while using different codes for each partition, each in black-box mode. The main application for which this procedure is envisaged arises when modeling hydraulic networks in which complex and simple subsystems are treated using detailed and simplified models, respectively. The potential and performance of the strategy are assessed through several examples involving transient flows and complex network configurations.
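A minimal sketch of the interface reinterpretation follows, assuming two trivial algebraic stand-ins for the black-box partition codes: a relaxed fixed-point iteration (one possible classical nonlinear solver) drives the interface residual to zero.

```python
# Two "black-box" partition solvers exchange interface data and a relaxed fixed-point
# iteration solves the resulting interface problem. Both solver functions below are
# hypothetical algebraic stand-ins, not real detailed/simplified hydraulic codes.

def detailed_model(p_interface):
    """Stand-in for the detailed partition: flow produced for a given interface pressure."""
    return 2.0 - 0.5 * p_interface

def simplified_model(q_interface):
    """Stand-in for the simplified partition: interface pressure needed for a given flow."""
    return 0.8 * q_interface + 0.3

def solve_interface(p0=0.0, relaxation=0.5, tol=1e-10, max_iter=100):
    """Fixed-point iteration on the interface pressure with under-relaxation."""
    p = p0
    for k in range(max_iter):
        q = detailed_model(p)            # call partition 1 as a black box
        p_new = simplified_model(q)      # call partition 2 as a black box
        residual = p_new - p
        if abs(residual) < tol:
            return p, k
        p += relaxation * residual       # relaxed update of the interface unknown
    raise RuntimeError("interface iteration did not converge")

p_star, iters = solve_interface()
print(f"coupled interface pressure ~= {p_star:.6f} after {iters} iterations")
```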

Relevance: 20.00%

Publisher:

Abstract:

We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the already known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performance.
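As a reminder of the evaluation measure only, the snippet below computes the Kullback-Leibler divergence between a true and a learned discrete distribution (for example, rows of HMM transition or emission matrices); the two distributions are hypothetical and the online update rules themselves are not shown.

```python
# Kullback-Leibler divergence between two discrete distributions, the generalization
# measure mentioned above. The example distributions are hypothetical.

import math

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) for two discrete distributions given as lists of probabilities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

true_emission    = [0.70, 0.20, 0.10]   # hypothetical true emission row
learned_emission = [0.60, 0.25, 0.15]   # hypothetical estimate after online learning

print(f"D(true || learned) = {kl_divergence(true_emission, learned_emission):.4f}")
```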

Relevance: 20.00%

Publisher:

Abstract:

In the present work, cellulose obtained from sisal, a source of rapid growth, was used. Cellulose acetates were produced in heterogeneous medium, using acetic anhydride as esterifying agent and iodine as catalyst, to check whether the procedure described in the literature for commercial cellulose is also adequate for sisal cellulose. The results indicated that iodine is an excellent catalyst for obtaining sisal cellulose acetates, but the reaction is only as fast as described in the literature when lower average molar mass (microcrystalline) cellulose is used instead of sisal. The crystallinity index (Ic) of the sisal cellulose acetates diminished compared to sisal cellulose, but there was no direct correlation between their degree of substitution (DS) and Ic. Acetyl groups were probably introduced more homogeneously along the short chains of microcrystalline cellulose than along those of sisal cellulose, so that for microcrystalline cellulose acetates the Ic decreases as the DS increases. Using the linear correlation found between the DS and the reaction time, it is possible to control the DS of sisal cellulose acetates over a large interval of degrees of substitution (0.3-2.8).

Relevance: 20.00%

Publisher:

Abstract:

Background: Although the Clock Drawing Test (CDT) is the second most used test in the world for the screening of dementia, there is still debate over its sensitivity, specificity, application and interpretation in dementia diagnosis. This study has three main aims: to evaluate the sensitivity and specificity of the CDT in a sample composed of older adults with Alzheimer's disease (AD) and normal controls; to compare CDT accuracy to that of the Mini-mental State Examination (MMSE) and the Cambridge Cognitive Examination (CAMCOG); and to test whether the association of the MMSE with the CDT leads to accuracy higher than or comparable to that reported for the CAMCOG. Methods: Cross-sectional assessment was carried out for 121 AD patients and 99 elderly controls with heterogeneous educational levels from a geriatric outpatient clinic who completed the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX). The CDT was evaluated according to the Shulman, Mendez and Sunderland scales. Results: The CDT showed high sensitivity and specificity. There were significant correlations between the CDT and the MMSE (0.700-0.730; p < 0.001) and between the CDT and the CAMCOG (0.753-0.779; p < 0.001). The combination of the CDT with the MMSE improved sensitivity and specificity (SE = 89.2-90%; SP = 71.7-79.8%). Subgroup analysis indicated that for elderly people with lower education, sensitivity and specificity were both adequate and high. Conclusions: The CDT is a robust screening test when compared with the MMSE or the CAMCOG, independent of the scale used for its interpretation. The combination with the MMSE improves its performance significantly, becoming equivalent to the CAMCOG.
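For readers unfamiliar with the accuracy figures quoted above, the small snippet below shows how sensitivity and specificity are computed from a screening test's confusion counts; the counts themselves are hypothetical and do not reproduce the study's data.

```python
# Sensitivity and specificity from confusion counts. Hypothetical counts only.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # proportion of patients correctly flagged
    specificity = tn / (tn + fp)   # proportion of controls correctly cleared
    return sensitivity, specificity

# Hypothetical screening outcome for a sample of patients and controls
se, sp = sensitivity_specificity(tp=108, fn=13, tn=78, fp=21)
print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}")
```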

Relevance: 20.00%

Publisher:

Abstract:

Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids. There are distortions in these waveforms that can be represented as a combination of the fundamental frequency, harmonics and high frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, an intelligent algorithm specially designed for optimization problems, was successfully implemented and tested. Two kinds of chromosome representation are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier Transform, especially when the real representation of the chromosomes is considered.
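The toy sketch below shows a generic real-coded genetic algorithm fitting amplitudes and phases of a few assumed harmonic orders to a synthetic distorted waveform. It is not the GOOAL algorithm; the harmonic orders, GA parameters, and waveform are all assumptions.

```python
# Generic real-coded GA fitting (amplitude, phase) pairs for assumed harmonic orders
# by minimizing the reconstruction error. Hypothetical setup; not the GOOAL algorithm.

import math
import random

random.seed(42)

F0, FS, N = 60.0, 3840.0, 256                   # fundamental, sampling rate, samples
ORDERS = [1, 3, 5]                              # harmonic orders assumed present
T = [n / FS for n in range(N)]
TRUE = [(1.0, 0.2), (0.25, 1.0), (0.1, -0.5)]   # (amplitude, phase) per order

def waveform(params, t):
    return sum(a * math.sin(2 * math.pi * h * F0 * t + ph)
               for (a, ph), h in zip(params, ORDERS))

SIGNAL = [waveform(TRUE, t) + random.gauss(0, 0.01) for t in T]

def mse(params):
    return sum((waveform(params, t) - s) ** 2 for t, s in zip(T, SIGNAL)) / N

def random_individual():
    return [(random.uniform(0, 1.5), random.uniform(-math.pi, math.pi)) for _ in ORDERS]

def mutate(ind, scale=0.05):
    return [(abs(a + random.gauss(0, scale)), ph + random.gauss(0, scale)) for a, ph in ind]

def crossover(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

population = [random_individual() for _ in range(60)]
for generation in range(120):
    population.sort(key=mse)
    elite = population[:10]                     # keep the best individuals
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(len(population) - len(elite))]
    population = elite + children

best = min(population, key=mse)
for h, (a, ph) in zip(ORDERS, best):
    print(f"harmonic {h}: amplitude ~= {a:.3f}, phase ~= {ph:.3f} rad")
```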

Relevance: 20.00%

Publisher:

Abstract:

The objective of this study was to estimate the first-order intrinsic kinetic constant (k1) and the liquid-phase mass transfer coefficient (kc) in a bench-scale anaerobic sequencing batch biofilm reactor (ASBBR) fed with glucose. A dynamic heterogeneous mathematical model, considering two phases (liquid and solid), was developed through mass balances in the liquid and solid phases. The model was adjusted to experimental data obtained from the ASBBR applied to the treatment of glucose-based synthetic wastewater with approximately 500 mg L-1 of glucose, operating in 8 h batch cycles, at 30 degrees C and 300 rpm. The parameter values obtained were 0.8911 min(-1) for k1 and 0.7644 cm min(-1) for kc. The model was validated, using the estimated parameters, against data obtained from the ASBBR operating in 3 h batch cycles, with a good representation of the experimental behavior. The solid-phase mass transfer flux was found to be the limiting step of the overall glucose conversion rate.
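A simplified, lumped sketch of a two-phase batch model of this kind is given below: substrate moves from the liquid to the solid phase in proportion to the concentration difference and is consumed there by first-order kinetics. The specific interfacial area is a hypothetical lumping parameter; only k1 and kc take the values reported above, and the paper's full model is not reproduced.

```python
# Simplified two-phase (liquid/solid) batch model: liquid-to-solid mass transfer plus
# first-order consumption in the solid phase. The specific area "A" is hypothetical;
# k1 and kc are the values reported in the abstract.

import numpy as np
from scipy.integrate import solve_ivp

K1 = 0.8911          # first-order intrinsic kinetic constant, 1/min
KC = 0.7644          # liquid-phase mass transfer coefficient, cm/min
A  = 1.0             # hypothetical specific interfacial area, 1/cm (lumps geometry)

def two_phase_batch(t, y):
    cl, cs = y                               # liquid- and solid-phase concentrations, mg/L
    transfer = KC * A * (cl - cs)            # liquid -> solid mass transfer term
    return [-transfer, transfer - K1 * cs]   # dCl/dt, dCs/dt

t_span = (0.0, 480.0)                        # one 8 h batch cycle, in minutes
sol = solve_ivp(two_phase_batch, t_span, [500.0, 0.0],
                t_eval=np.linspace(*t_span, 9))
for t, cl in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.0f} min  liquid-phase glucose ~= {cl:7.2f} mg/L")
```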

Relevance: 20.00%

Publisher:

Abstract:

A modeling study was completed to develop a methodology that combines the sequencing and finite difference methods for the simulation of a heterogeneous model of a tubular reactor applied to the treatment of wastewater. The system included a liquid phase (convection-diffusion transport) and a solid phase (diffusion-reaction), obtained by completing mass balances in the reactor and in the particle, respectively. The model was solved for a pilot-scale horizontal-flow anaerobic immobilized biomass (HAIB) reactor treating domestic sewage, and the concentration results were compared with the experimental data. A comparison of the behavior of the liquid-phase concentration profile with the experimental results indicated that both numerical methods offer a good description of the behavior of the concentration along the reactor. The advantage of the sequencing method over the finite difference method is that it is easier to apply and requires less computational time for the dynamic simulation of the outlet response of the HAIB reactor.
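The sketch below shows only the finite difference ingredient, on a simplified liquid-phase model: an explicit upwind/central discretization of 1D convection-diffusion with a first-order removal term standing in for the solid-phase coupling. All parameters are hypothetical.

```python
# Explicit finite difference scheme for 1D convection-diffusion with first-order removal,
# a simplified liquid-phase stand-in for the full two-phase HAIB model. Hypothetical data.

import numpy as np

L, NX = 1.0, 101                 # reactor length (m) and grid points
U, D, K = 1e-3, 1e-5, 1e-3       # velocity (m/s), dispersion (m2/s), removal rate (1/s)
C_IN = 300.0                     # inlet concentration (mg/L)

dx = L / (NX - 1)
dt = 0.4 * min(dx / U, dx * dx / (2 * D))    # stable step for the explicit scheme
c = np.zeros(NX)

def step(c):
    cn = c.copy()
    cn[0] = C_IN                                             # Dirichlet inlet
    # interior nodes: upwind convection + central diffusion + first-order removal
    cn[1:-1] = (c[1:-1]
                - U * dt / dx * (c[1:-1] - c[:-2])
                + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                - K * dt * c[1:-1])
    cn[-1] = cn[-2]                                          # zero-gradient outlet
    return cn

t, t_end = 0.0, 2000.0
while t < t_end:
    c = step(c)
    t += dt

print(f"outlet concentration after {t_end:.0f} s ~= {c[-1]:.1f} mg/L")
```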

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a strategy for solving the WDM optical network planning problem, specifically the problem of Routing and Wavelength Allocation (RWA) with the objective of minimizing the number of wavelengths used, a variant known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality with high performance. The key point is to allow some degradation of the maximum load on the virtual links in favor of minimizing the number of wavelengths used; the objective is to find a good compromise between the metric of the virtual topology (load in Gb/s) and that of the physical topology (number of wavelengths). The simulations suggest good results when compared to some approaches existing in the literature.
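As an illustration of one of the two meta-heuristics, the toy sketch below applies simulated annealing to the wavelength-assignment half of the problem on a hypothetical conflict graph (lightpaths sharing a fiber link must receive different wavelengths), minimizing the number of distinct wavelengths. The cooling schedule and penalty weight are assumptions; the paper's Tabu Search variant and its load/wavelength trade-off are not reproduced.

```python
# Simulated annealing over a hypothetical lightpath conflict graph: the cost penalizes
# wavelength clashes and counts distinct wavelengths used. Toy instance only.

import math
import random

random.seed(7)

N_LIGHTPATHS = 12
# Hypothetical conflict graph: pairs of lightpaths that traverse a common fiber link.
CONFLICTS = [(i, (i + 1) % N_LIGHTPATHS) for i in range(N_LIGHTPATHS)] + [(0, 6), (3, 9)]

def cost(assignment, penalty=100):
    clashes = sum(1 for a, b in CONFLICTS if assignment[a] == assignment[b])
    return penalty * clashes + len(set(assignment))

def neighbor(assignment):
    new = assignment[:]
    new[random.randrange(N_LIGHTPATHS)] = random.randrange(N_LIGHTPATHS)
    return new

state = [random.randrange(N_LIGHTPATHS) for _ in range(N_LIGHTPATHS)]
best, temperature = state[:], 10.0
while temperature > 0.01:
    for _ in range(200):
        candidate = neighbor(state)
        delta = cost(candidate) - cost(state)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            state = candidate
            if cost(state) < cost(best):
                best = state[:]
    temperature *= 0.95                      # geometric cooling

print(f"wavelengths used: {len(set(best))}, remaining conflicts: {cost(best) // 100}")
```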

Relevance: 20.00%

Publisher:

Abstract:

This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
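For context, the snippet below recalls the standard single-mode information filter measurement update (information matrix Y = P^-1, information vector y = Y x); the Markovian-jump and array-algorithm machinery of the note is not reproduced, and the numerical values are hypothetical.

```python
# Standard information-form measurement update (single-mode Kalman/information filter).
# Hypothetical 2-state example; not the note's Markovian jump or array algorithms.

import numpy as np

def information_update(Y, y, H, R, z):
    """Fuse measurement z = H x + v, v ~ N(0, R), in information form."""
    R_inv = np.linalg.inv(R)
    Y_new = Y + H.T @ R_inv @ H          # information matrix update
    y_new = y + H.T @ R_inv @ z          # information vector update
    return Y_new, y_new

# Hypothetical prior with covariance P, plus a scalar position measurement.
P = np.diag([4.0, 1.0])
x_prior = np.array([0.0, 1.0])
Y, y = np.linalg.inv(P), np.linalg.inv(P) @ x_prior
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
z = np.array([0.6])

Y, y = information_update(Y, y, H, R, z)
x_post = np.linalg.solve(Y, y)           # recover the state estimate x = Y^-1 y
print("posterior estimate:", x_post)
```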