977 results for Robust Stochastic Optimization


Relevance: 90.00%

Abstract:

Many methodologies dealing with the prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies, such as mass–spring networks or the finite element method (FEM). On the other hand, methodologies working directly on the image space normally do not take into account the mechanical behavior of tissues and tend to lack the physical foundations that drive soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics, and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
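
As a rough illustration of MRF-based optimization of this kind, the following sketch runs iterated conditional modes (ICM) on a small pairwise MRF whose data term stands in for image-derived material properties; the chapter's hierarchical formulation and GPU implementation are not reproduced, and all quantities are hypothetical placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    H, W, L = 32, 32, 5                  # grid size, number of candidate labels
    unary = rng.random((H, W, L))        # data term: cost of label l at pixel (i, j)
    beta = 0.5                           # smoothness weight between 4-neighbors

    labels = unary.argmin(axis=2)        # initialize with the best data term
    for _ in range(10):                  # ICM sweeps
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts smoothness: penalize disagreeing neighbors
                        cost += beta * (np.arange(L) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    print("final data-term energy:",
          unary[np.arange(H)[:, None], np.arange(W), labels].sum())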

Relevance: 90.00%

Abstract:

An effective aperture approach is used as a tool for the analysis and parameter optimization of the most widely used ultrasound imaging systems - phased array systems, compounding systems and synthetic aperture imaging systems. Both characteristics of an imaging system, the effective aperture function and the corresponding two-way radiation pattern, provide information about two of the most important parameters of images produced by an ultrasound system - lateral resolution and contrast. Therefore, during design, optimization of the effective aperture function leads to an optimal choice of the system parameters that influence the lateral resolution and contrast of the resulting images. It is shown that the effective aperture approach can be used for optimization of a sparse synthetic transmit aperture (STA) imaging system. A new two-stage algorithm is proposed for optimizing both the positions of the transmit elements and the weights of the receive elements. The proposed system employs a 64-element array with only four active elements used during transmit. The numerical results show that Hamming apodization gives the best compromise between image contrast and lateral resolution.
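
The effective-aperture relationship described above can be illustrated in a few lines: the two-way pattern follows from the convolution of the transmit and receive aperture functions. The four transmit-element positions and the 4096-point pattern grid below are illustrative assumptions, not the paper's optimized design.

    import numpy as np

    N = 64
    tx = np.zeros(N)
    tx[[0, 21, 42, 63]] = 1.0            # four active transmit elements (assumed positions)
    rx = np.hamming(N)                   # Hamming apodization on receive
    eff = np.convolve(tx, rx)            # effective aperture (length 2N - 1)
    pattern = np.fft.fftshift(np.fft.fft(eff, 4096))
    pattern_db = 20 * np.log10(np.abs(pattern) / np.abs(pattern).max() + 1e-12)
    print("effective aperture length:", eff.size)
    print("two-way pattern computed over", pattern_db.size, "angular bins")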

Relevance: 90.00%

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
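
A minimal sketch of the quantities involved: principal angles between two subspaces computed via the SVD of the product of their orthonormal bases, together with the two error proxies mentioned above. The subspaces here are random placeholders.

    import numpy as np

    rng = np.random.default_rng(1)
    d, k = 20, 3
    Q1, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal basis of subspace 1
    Q2, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal basis of subspace 2
    s = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
    theta = np.arccos(s)                                # principal angles
    print("product of sines (small-mismatch regime):", np.prod(np.sin(theta)))
    print("sum of squared sines (larger mismatch):  ", np.sum(np.sin(theta) ** 2))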

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine, so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix; a sketch of this idea follows. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
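
A minimal sketch of that second approach, with random placeholder data and features: build a k-nearest-neighbor adjacency matrix and measure how much feature energy lies in the span of its leading eigenvectors.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((100, 10))                 # data points (placeholder)
    F = rng.standard_normal((100, 4))                  # learned features (placeholder)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    k = 5
    A = np.zeros((100, 100))
    for i in range(100):
        nn = np.argsort(d2[i])[1:k + 1]                # k nearest neighbors (skip self)
        A[i, nn] = 1.0
    A = np.maximum(A, A.T)                             # symmetrize the adjacency
    w, V = np.linalg.eigh(A)
    U = V[:, -4:]                                      # principal components of adjacency
    proj = U @ (U.T @ F)                               # feature energy captured by span(U)
    print("alignment:", np.linalg.norm(proj) ** 2 / np.linalg.norm(F) ** 2)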

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
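
A generic sketch of the tracking idea on synthetic data: a basis follows a slowly rotating subspace via stochastic gradient-style updates, and the per-sample residual serves as the deviation statistic. The multiscale tree organization is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    d, k, eta = 20, 3, 0.05
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))   # tracked basis
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))   # true (slowly varying) basis
    for t in range(2000):
        G = 1e-3 * rng.standard_normal((d, d))
        W, _ = np.linalg.qr(W + (G - G.T) @ W)         # slowly rotate the true subspace
        x = W @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
        r = x - U @ (U.T @ x)                          # residual: deviation from the model
        U, _ = np.linalg.qr(U + eta * np.outer(r, U.T @ x))  # gradient step, re-orthonormalize
    # residual energy would feed the anomaly-detection statistics
    print("final residual fraction:", np.linalg.norm(r) ** 2 / np.linalg.norm(x) ** 2)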

Relevance: 90.00%

Abstract:

Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models in the literature are based on minimizing costs or maximizing cover, i.e., covering as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is considered constant and known in advance. In a real-world situation, input data such as demand and travelling costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. A way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility whose performance stays within predefined limits, as well as possible, under all foreseeable circumstances. The deviation robustness concept is used as a basis to develop a new competitive deviation robustness model. Competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share despite variations in demand. A test case is developed with which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which Simulated Annealing turns out to be the most appropriate. The test case is slightly modified for a competitive market situation. With the Simulated Annealing algorithm, the developed competitive deviation model is solved for three considered norms of deviation. Finally, a grid search is performed to illustrate the landscape of the objective function of the competitive deviation model. The objective appears to be multimodal and poses a challenge for further research.
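
A minimal sketch of the core loop, with illustrative demand points, competitors, and demand scenarios: simulated annealing maximizes the worst-case Huff market share over the scenarios. The deviation norms and the actual test case are not reproduced.

    import math, random

    random.seed(0)
    demand = [(random.random(), random.random(), 1.0) for _ in range(50)]  # (x, y, weight)
    competitors = [(0.2, 0.8), (0.7, 0.3)]
    # demand-weight scenarios representing variation in demand
    scenarios = [[w * random.uniform(0.7, 1.3) for _, _, w in demand] for _ in range(20)]

    def share(loc, weights):
        # Huff model: attraction inversely proportional to squared distance
        total = 0.0
        for (x, y, _), w in zip(demand, weights):
            a_new = 1.0 / (1e-9 + (loc[0] - x) ** 2 + (loc[1] - y) ** 2)
            a_all = a_new + sum(1.0 / (1e-9 + (cx - x) ** 2 + (cy - y) ** 2)
                                for cx, cy in competitors)
            total += w * a_new / a_all
        return total

    def worst_case(loc):
        # robustness: market share under the worst demand scenario
        return min(share(loc, s) for s in scenarios)

    loc, T = (0.5, 0.5), 1.0
    cur = worst_case(loc)
    for _ in range(500):
        cand = (loc[0] + random.gauss(0, 0.05), loc[1] + random.gauss(0, 0.05))
        c = worst_case(cand)
        if c > cur or random.random() < math.exp((c - cur) / T):
            loc, cur = cand, c
        T *= 0.995
    print("robust location:", tuple(round(v, 3) for v in loc),
          "worst-case share:", round(cur, 3))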

Relevance: 80.00%

Abstract:

This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear, non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
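
A simplified sketch of the offline/online split on synthetic data: an Isomap embedding plus a Gaussian mixture fitted offline, then log-likelihoods instantiated for features observed online. The paper's joint model over visual states is reduced here to a plain mixture over embedded features.

    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    train = rng.standard_normal((500, 10))             # training feature vectors (placeholder)
    emb = Isomap(n_components=3, n_neighbors=10).fit(train)
    gmm = GaussianMixture(n_components=5, random_state=0).fit(emb.transform(train))

    online = rng.standard_normal((5, 10))              # features observed at run time
    loglik = gmm.score_samples(emb.transform(online))  # likelihoods handed to the filter
    print("instantiated log-likelihoods:", np.round(loglik, 2))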

Relevance: 80.00%

Abstract:

Rapid development of plug-in hybrid electric vehicles (PHEVs) brings new challenges and opportunities to the power industry. A large number of idle PHEVs can potentially be employed to form a distributed energy storage system for supporting renewable generation. To reduce the negative effects of unsteady renewable generation outputs, a stochastic optimization-based dispatch model capable of handling uncertain outputs of PHEVs and renewable generation is formulated in this paper. The mathematical expectations, second-order raw moments, and variances of wind and photovoltaic (PV) generation outputs are derived analytically. Incorporating all the derived uncertainties, a novel generation-shifting objective is proposed. The cross-entropy (CE) method is employed to solve this optimal dispatch model. Multiple patterns of renewable generation, depending on seasons and renewable market shares, are investigated. The feasibility and efficiency of the developed optimal dispatch model, as well as the CE method, are demonstrated on a 33-node distribution system.
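
The cross-entropy method named above is easy to sketch in its generic form; the quadratic objective below is a stand-in for the dispatch model, and all dimensions are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    dim, n, elite = 10, 200, 20
    target = rng.uniform(0, 1, dim)                    # placeholder "ideal" dispatch

    def cost(X):                                       # stand-in dispatch objective
        return ((X - target) ** 2).sum(axis=1)

    mu, sigma = np.full(dim, 0.5), np.full(dim, 1.0)
    for _ in range(50):
        X = rng.normal(mu, sigma, (n, dim))            # sample candidate dispatch vectors
        idx = np.argsort(cost(X))[:elite]              # keep the elite fraction
        mu, sigma = X[idx].mean(axis=0), X[idx].std(axis=0) + 1e-6  # refit the sampler
    print("CE solution error:", np.abs(mu - target).max())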

Relevance: 80.00%

Abstract:

Wireless ad hoc networks transmit information from a source to a destination via multiple hops in order to save energy and, thus, increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model the above optimization problem for randomly fading channels as a decentralized control problem - the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes), and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy. Hence, we need to jointly optimize over both; we cast this as a large stochastic optimization problem. We then jointly optimize over a set of CSI functions of the local channel states and a corresponding factored class of control policies.
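
The slow-fading result reduces policy computation to a shortest-path problem; the sketch below illustrates this on a random geometric network with hypothetical fading-dependent edge costs (in the actual scheme, each node would compute its incident costs from local CSI).

    import heapq, random

    random.seed(1)
    n = 30
    nodes = [(random.random(), random.random()) for _ in range(n)]
    # edge cost: transmit energy ~ squared distance / fading power gain
    cost = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = (nodes[i][0] - nodes[j][0]) ** 2 + (nodes[i][1] - nodes[j][1]) ** 2
                cost[i][j] = d2 / (random.expovariate(1.0) + 1e-3)

    def dijkstra(src, dst):
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v in range(n):
                if v != u and d + cost[u][v] < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + cost[u][v], u
                    heapq.heappush(heap, (dist[v], v))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1], dist[dst]

    route, energy = dijkstra(0, n - 1)
    print("min-energy route:", route, "total energy:", round(energy, 3))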

Relevance: 80.00%

Abstract:

Learning automata are adaptive decision-making devices that have been found useful in a variety of machine learning and pattern recognition applications. Although most learning automata methods deal with the case of finitely many actions for the automaton, there are also models of continuous-action-set learning automata (CALA). A team of such CALA can be useful in stochastic optimization problems where one has access only to noise-corrupted values of the objective function. In this paper, we present a novel formulation for noise-tolerant learning of linear classifiers using a CALA team. We consider the general case of nonuniform noise, where the probability that the class label of an example is wrong may be a function of the feature vector of the example. The objective is to learn the underlying separating hyperplane given only such noisy examples. We present an algorithm employing a team of CALA and prove, under some conditions on the class conditional densities, that the algorithm achieves noise-tolerant learning as long as the probability of a wrong label for any example is less than 0.5. We also present some empirical results to illustrate the effectiveness of the algorithm.
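
A heavily simplified sketch in the spirit of a CALA team, not the paper's exact update rule: each coordinate of the hyperplane is treated as a Gaussian action distribution whose mean is reinforced when the sampled hyperplane classifies a noisy example correctly; the nonuniform label noise is kept below 0.5.

    import numpy as np

    rng = np.random.default_rng(6)
    d, n = 2, 2000
    w_true = np.array([1.0, -1.0])
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_true)
    # nonuniform noise: flip probability depends on the feature vector, < 0.5
    flip = rng.random(n) < 0.2 / (1 + np.abs(X @ w_true))
    y_noisy = np.where(flip, -y, y)

    mu, sigma, lr = np.zeros(d), 1.0, 0.05
    for t in range(n):
        a = rng.normal(mu, sigma)                      # each automaton samples an action
        reward = 1.0 if np.sign(X[t] @ a) == y_noisy[t] else 0.0
        mu += lr * reward * (a - mu)                   # reinforce rewarded actions
        sigma = max(0.05, sigma * 0.999)               # anneal exploration
    print("learned direction:", mu / np.linalg.norm(mu))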


Relevance: 80.00%

Abstract:

Wireless networks transmit information from a source to a destination via multiple hops in order to save energy and, thus, increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model the above optimization problem for randomly fading channels as a decentralized control problem - the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes), and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy. Hence, we need to jointly optimize over both; we cast this as a large stochastic optimization problem. We then jointly optimize over a set of CSI functions of the local channel states and a corresponding factored class of control policies corresponding to local cooperation between nodes with a local outage constraint. The resulting optimal scheme in this class can again be computed efficiently in a decentralized manner. We demonstrate significant energy savings for both slow and fast fading channels through numerical simulations of randomly distributed networks.

Relevance: 80.00%

Abstract:

In this paper we address the problem of the separation and recovery of convolutively mixed autoregressive processes in a Bayesian framework. Solving this problem requires the ability to solve integration and/or optimization problems over complicated posterior distributions. We thus propose efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) methods. We present three algorithms. The first is a classical Gibbs sampler that generates samples from the posterior distribution. The other two are stochastic optimization algorithms that allow one to optimize either the marginal distribution of the sources, or the marginal distribution of the parameters of the sources and mixing filters, conditional upon the observations. Simulation results are presented.
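
A toy Gibbs sampler illustrating the Bayesian treatment of an autoregressive process (a single AR(1) observed directly, rather than the paper's convolutive mixture), using standard conjugate conditionals under a flat prior.

    import numpy as np

    rng = np.random.default_rng(7)
    T, a_true, s2_true = 500, 0.8, 0.5
    x = np.zeros(T)
    for t in range(1, T):                              # simulate the AR(1) process
        x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(s2_true))

    a, s2, samples = 0.0, 1.0, []
    xp, xc = x[:-1], x[1:]                             # predecessors and currents
    for it in range(2000):
        # a | s2, x  ~  Normal(sum(xp*xc)/sum(xp^2), s2/sum(xp^2))
        a = rng.normal((xp @ xc) / (xp @ xp), np.sqrt(s2 / (xp @ xp)))
        # s2 | a, x  ~  InvGamma((T-1)/2, sum((xc - a*xp)^2)/2), flat prior
        resid = xc - a * xp
        s2 = (resid @ resid / 2) / rng.gamma((T - 1) / 2)
        if it >= 500:                                  # discard burn-in
            samples.append((a, s2))
    print("posterior means (a, s2):", np.array(samples).mean(axis=0).round(3))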

Relevance: 80.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional spaces for optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively; a generic sketch of the overall loop follows.
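
The sketch promised above: a plain continuous genetic algorithm maximizing a multiplicatively combined preference measure for a toy two-parameter design. The preference functions and criteria are illustrative; hGA and vGA themselves are not reproduced.

    import random

    random.seed(2)

    def preference(value, best, worst):
        # degree of satisfaction in [0, 1], linear between worst and best
        return max(0.0, min(1.0, (value - worst) / (best - worst)))

    def evaluate(design):
        cost = design[0] + design[1]                   # toy cost criterion
        deflection = 10.0 / (design[0] * design[1])    # toy performance criterion
        # multiplicative preference combination: both criteria must be satisfied
        return preference(cost, 1.0, 6.0) * preference(deflection, 1.0, 20.0)

    pop = [[random.uniform(0.5, 3.0) for _ in range(2)] for _ in range(40)]
    for gen in range(100):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[:20]                             # selection
        children = []
        for _ in range(20):
            p1, p2 = random.sample(parents, 2)         # crossover + mutation
            child = [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(p1, p2)]
            children.append([max(0.5, min(3.0, v)) for v in child])
        pop = parents + children
    best = max(pop, key=evaluate)
    print("best design:", [round(v, 2) for v in best],
          "measure:", round(evaluate(best), 3))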

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 80.00%

Abstract:

This thesis presents methods for incrementally constructing controllers in the presence of uncertainty and nonlinear dynamics. The basic setting is motion planning subject to temporal logic specifications. Broadly, two categories of problems are treated. The first is reactive formal synthesis when so-called discrete abstractions are available. The fragment of linear-time temporal logic (LTL) known as GR(1) is used to express assumptions about an adversarial environment and requirements of the controller. Two problems of changes to a specification are posed that concern the two major aspects of GR(1): safety and liveness. Algorithms providing incremental updates to strategies are presented as solutions. In support of these, an annotation of strategies is developed that facilitates repeated modifications. A variety of properties are proven about it, including necessity of existence and sufficiency for a strategy to be winning. The second category of problems considered is non-reactive (open-loop) synthesis in the absence of a discrete abstraction. Instead, the presented stochastic optimization methods directly construct a control input sequence that achieves low cost and satisfies an LTL formula. Several relaxations are considered as heuristics to address the rarity of sampling trajectories that satisfy an LTL formula and are demonstrated to improve convergence rates for a Dubins car and single integrators subject to a recurrence task.
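
A sketch of the non-reactive idea under strong simplifications: open-loop steering sequences for a Dubins car are searched with a cross-entropy-style method, scoring each rollout by its closest approach to a region that a reachability-style specification would require visiting. The single "visit" task stands in for the thesis's LTL relaxations.

    import numpy as np

    rng = np.random.default_rng(8)
    H, dt, v, goal = 40, 0.1, 1.0, np.array([2.0, 2.0])

    def rollout(u):
        # Dubins car at constant speed; u is the steering-rate sequence
        x, y, th, best = 0.0, 0.0, 0.0, np.inf
        for w in u:
            th += w * dt
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            best = min(best, np.hypot(x - goal[0], y - goal[1]))
        return best                                    # relaxed cost: closest approach

    mu, sigma = np.zeros(H), np.ones(H)
    for _ in range(60):
        U = rng.normal(mu, sigma, (100, H))            # sample input sequences
        costs = np.array([rollout(u) for u in U])
        elite = U[np.argsort(costs)[:10]]              # keep the best rollouts
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    print("closest approach to goal:", round(rollout(mu), 3))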

Relevance: 80.00%

Abstract:

This paper presents a method to generate new melodies based on conserving the semiotic structure of a template piece. A pattern discovery algorithm is applied to the template piece to extract significant segments: those that are repeated and those that are transposed in the piece. Two strategies are combined to describe the semiotic coherence structure of the template piece: inter-segment coherence and intra-segment coherence. Once the structure is described, it is used as a template for new musical content that is generated using a statistical model created from a corpus of bertso melodies and iteratively improved using a stochastic optimization method. Results show that the method presented here effectively describes the coherence structure of a piece by discovering repetition and transposition relations between segments, and also by representing the relations among notes within the segments. For bertso generation, the method correctly conserves all intra- and inter-segment coherence of the template, and the optimization method produces coherent generated melodies.
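
A toy sketch of the generation step: pitches are scored by a stand-in bigram model and improved by simulated annealing, while the template constrains the piece to repeat a segment. The corpus statistics and the structure description are placeholders, not the paper's bertso model.

    import math, random

    random.seed(3)
    pitches = list(range(60, 72))                      # one octave, MIDI note numbers

    def bigram_logp(a, b):
        return -abs(a - b) / 2.0                       # toy corpus model: favor small steps

    seg_len = 8
    seg = [random.choice(pitches) for _ in range(seg_len)]

    def score(seg):
        melody = seg + seg                             # template: A A (inter-segment repeat)
        return sum(bigram_logp(a, b) for a, b in zip(melody, melody[1:]))

    T, cur = 2.0, None
    cur = score(seg)
    for _ in range(3000):                              # simulated annealing over pitches
        cand = seg.copy()
        cand[random.randrange(seg_len)] = random.choice(pitches)
        c = score(cand)
        if c > cur or random.random() < math.exp((c - cur) / T):
            seg, cur = cand, c
        T *= 0.999
    print("optimized segment:", seg, "log-score:", round(cur, 2))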

Relevance: 80.00%

Abstract:

Molecular markers have been demonstrated to be useful for the estimation of stock mixture proportions where the origin of individuals is determined from baseline samples. Bayesian statistical methods are widely recognized as providing a preferable strategy for such analyses. In general, Bayesian estimation is based on standard latent class models using data augmentation through Markov chain Monte Carlo techniques. In this study, we introduce a novel approach based on recent developments in the estimation of genetic population structure. Our strategy combines analytical integration with stochastic optimization to identify stock mixtures. An important enhancement over previous methods is the possibility of appropriately handling data where only partial baseline sample information is available. We address the potential use of nonmolecular, auxiliary biological information in our Bayesian model.
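
For orientation, a plain EM sketch of the underlying mixture-proportion problem on simulated genotypes; the paper's approach differs in combining analytical integration with stochastic optimization and in handling partial baseline information.

    import numpy as np

    rng = np.random.default_rng(9)
    K, N = 3, 400
    true_pi = np.array([0.5, 0.3, 0.2])                # true stock proportions
    freq = rng.uniform(0.1, 0.9, (K, 8))               # baseline allele freqs, 8 loci
    origin = rng.choice(K, N, p=true_pi)
    geno = (rng.random((N, 8)) < freq[origin]).astype(float)  # one allele per locus

    # per-stock log-likelihood of each individual's genotype
    loglik = geno @ np.log(freq.T) + (1 - geno) @ np.log(1 - freq.T)

    pi = np.full(K, 1.0 / K)
    for _ in range(200):
        post = np.exp(loglik) * pi                     # E-step: origin posteriors
        post /= post.sum(axis=1, keepdims=True)
        pi = post.mean(axis=0)                         # M-step: update proportions
    print("true:", true_pi, "estimated:", pi.round(3))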