904 results for Lipschitzian bounds
Abstract:
Groundwater management involves conflicting objectives, as maximization of discharge contradicts the criteria of minimum pumping cost and minimum piping cost. In addition, the available data contain uncertainties such as market fluctuations, variations in water levels of wells, and variations in groundwater policies. A fuzzy model is to be evolved to tackle the uncertainties, and a multiobjective optimization is to be conducted to simultaneously satisfy the contradicting objectives. Towards this end, a multiobjective fuzzy optimization model is evolved. To obtain the upper and lower bounds of the individual objectives, particle swarm optimization (PSO) is adopted. The analytic element method (AEM) is employed to obtain the operating potentiometric head. In this study, a multiobjective fuzzy optimization model considering three conflicting objectives is developed using the PSO and AEM methods to obtain a sustainable groundwater management policy. The developed model is applied to a case study, and it is demonstrated that the compromise solution satisfies all the objectives with adequate levels of satisfaction. Sensitivity analysis is carried out by varying the parameters, and it is shown that the effect of any such variation is quite significant. Copyright (c) 2015 John Wiley & Sons, Ltd.
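The role PSO plays in such models (bracketing each objective between its best and worst attainable values so that fuzzy membership functions can be built) can be illustrated with a minimal sketch. The objective function, search interval, and PSO parameters below are illustrative placeholders, not the paper's groundwater model:

```python
# Minimal PSO sketch: bracketing one objective's lower and upper bounds.
# The objective f and its interval are stand-ins, not the paper's model.
import random

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Return the best (minimum) value of f found on [lo, hi]."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                          # personal best positions
    pbest_val = [f(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest_val

f = lambda q: (q - 2.0) ** 2 + 1.0       # stand-in pumping-cost objective
f_min = pso(f, 0.0, 5.0)                 # lower bound of the objective
f_max = -pso(lambda q: -f(q), 0.0, 5.0)  # upper bound via negation
```

The pair (f_min, f_max) then anchors a linear fuzzy membership function for that objective, which is the standard construction in fuzzy multiobjective optimization.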
Abstract:
Let P be a set of points in the plane. A geometric graph G on P is said to be locally Gabriel if, for every edge (u, v) in G, the Euclidean disk with the segment joining u and v as diameter does not contain any points of P that are neighbors of u or v in G. A locally Gabriel graph (LGG) is a generalization of the Gabriel graph and is motivated by applications in wireless networks. Unlike the Gabriel graph, there is no unique LGG on a given point set, since no edge in an LGG is necessarily included or excluded. Thus the edge set of the graph can be customized to optimize certain network parameters depending on the application. The unit distance graph (UDG), introduced by Erdős, is also an LGG. In this paper, we show the following combinatorial bounds on the edge complexity and independent sets of LGGs: (i) For any , there exists an LGG with edges. This improves upon the previous best bound of . (ii) For various subclasses of convex point sets, we show tight linear bounds on the maximum edge complexity of an LGG. (iii) For any LGG on any point set, there exists an independent set of size .
Abstract:
Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block matrix multiplication and block LU decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for high performance custom computations common to NLA kernels is provided through custom function units (CFUs) in the CEs. We present results to justify the merits of such CFUs.
Abstract:
The ATLAS and CMS collaborations at the LHC have performed analyses on the existing data sets, studying the case of one vector-like fermion or multiplet coupling to the standard model Yukawa sector. In the near future, with more data available, these experimental collaborations will start to investigate more realistic cases. The presence of more than one extra vector-like multiplet is indeed a common situation in many extensions of the standard model. The interplay of these vector-like multiplets between precision electroweak bounds, flavour and collider phenomenology is an important question in view of establishing bounds or discovering physics beyond the standard model. In this work we study the phenomenological consequences of the presence of two vector-like multiplets. We analyse the constraints on such scenarios from tree-level data and oblique corrections for the case of mixing with each of the SM generations. In the present work, we limit ourselves to scenarios with two top-like partners and no mixing in the down-sector.
Abstract:
Measurement of device current during switching characterisation of an insulated gate bipolar transistor (IGBT) requires a current sensor with low insertion impedance and high bandwidth. This study presents an experimental procedure for evaluating the performance of a coaxial current transformer (CCT), designed for the above purpose. A prototype CCT, which can be mounted directly on a power terminal of a 1200 V/50 A half-bridge IGBT module, is characterised experimentally. The measured characteristics include insertion impedance, gain and phase of the CCT at different frequencies. The bounds of linearity within which the CCT can operate without saturation are determined theoretically, and are also verified experimentally. The experimental study on linearity of the CCT requires a high-amplitude current source. A proportional-resonant (PR) controller-based current-controlled half-bridge inverter is developed for this purpose. A systematic procedure for selection of PR controller parameters is also reported in this study. This set-up is helpful to determine the limit of linearity and also to measure the frequency response of the CCT at realistic amplitudes of current in the low-frequency range.
Abstract:
This paper considers decentralized spectrum sensing, i.e., detection of occupancy of the primary users' spectrum by a set of Cognitive Radio (CR) nodes, under a Bayesian set-up. The nodes use energy detection to make their individual decisions, which are combined at a Fusion Center (FC) using the K-out-of-N fusion rule. The channel from the primary transmitter to the CR nodes is assumed to undergo fading, while that from the nodes to the FC is assumed to be error-free. In this scenario, a novel concept termed the Error Exponent with a Confidence Level (EECL) is introduced to evaluate and compare the performance of different detection schemes. Expressions for the EECL under general fading conditions are derived. As a special case, it is shown that the conventional error exponent, both at the individual sensors and at the FC, is zero. Further, closed-form lower bounds on the EECL are derived under Rayleigh fading and lognormal shadowing. As an example application, the EECL framework answers the question of whether to use pilot-signal based narrowband sensing, where the signal undergoes Rayleigh fading, or to sense over the entire bandwidth of a wideband signal, where the signal undergoes lognormal shadowing. Theoretical results are validated using Monte Carlo simulations. (C) 2015 Elsevier B.V. All rights reserved.
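The decision chain described above (per-node energy detection over a faded channel, followed by K-out-of-N fusion at the FC) can be sketched as a small Monte Carlo simulation. All parameters (SNR, detection threshold, sample and node counts) are illustrative choices, not those of the paper:

```python
# Sketch of decentralized sensing: each CR node runs an energy detector
# over an independent Rayleigh-faded channel; the FC applies K-out-of-N.
# SNR, threshold, and counts are illustrative, not the paper's values.
import math
import random

def node_decision(signal_present, snr=5.0, n_samples=20, threshold=1.5):
    """One node's energy-detection decision (unit-power noise)."""
    h2 = random.expovariate(1.0)         # |h|^2 under Rayleigh fading
    energy = 0.0
    for _ in range(n_samples):
        s = math.sqrt(snr * h2) if signal_present else 0.0
        x = s + random.gauss(0.0, 1.0)   # received sample
        energy += x * x
    return energy / n_samples > threshold

def fc_decision(signal_present, n_nodes=10, k=5):
    """Fusion center declares 'occupied' if at least k of n nodes do."""
    votes = sum(node_decision(signal_present) for _ in range(n_nodes))
    return votes >= k

# Monte Carlo estimates of detection and false-alarm probabilities
trials = 2000
p_d = sum(fc_decision(True) for _ in range(trials)) / trials
p_fa = sum(fc_decision(False) for _ in range(trials)) / trials
```

Sweeping k from 1 (OR rule) to N (AND rule) traces the usual trade-off between p_d and p_fa that the fusion rule controls.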
Abstract:
A network cascade model that captures many real-life correlated node failures in large networks via load redistribution is studied. The model is well suited to networks in which physical quantities are transmitted, e.g., large-scale outages in electrical power grids, gridlocks in road networks, and connectivity breakdowns in communication networks. For this model, a phase transition is established, i.e., the existence of a critical threshold separating the regime in which a small number of node failures leads to a global cascade of network failures from the regime in which it does not. Theoretical bounds are obtained on the critical capacity parameter that determines this threshold, and they are shown to closely follow numerical simulation results. (C) 2015 Elsevier B.V. All rights reserved.
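A toy version of the load-redistribution mechanism illustrates the threshold behaviour: identical nodes shed their load equally to survivors on failure, and a capacity parameter decides whether the cascade dies out or engulfs the network. The numbers below are illustrative, not the paper's model:

```python
# Toy load-redistribution cascade with identical nodes. A capacity
# margin alpha sets the failure threshold; varying alpha exposes the
# phase transition. Parameters are illustrative, not the paper's.
def cascade_size(n=1000, load=1.0, alpha=0.3, seed_failures=10):
    """Return the total number of failed nodes after redistribution."""
    capacity = load * (1.0 + alpha)   # uniform node capacity
    alive = n - seed_failures
    shed = seed_failures * load       # load awaiting redistribution
    node_load = load
    while alive > 0 and shed > 0:
        node_load += shed / alive     # equal split among survivors
        shed = 0.0
        if node_load > capacity:      # identical survivors fail together
            shed = alive * node_load
            alive = 0
    return n - alive

small = cascade_size(alpha=0.3)      # ample margin: cascade dies out
total = cascade_size(alpha=0.005)    # scarce margin: global cascade
```

With a generous capacity margin only the seed nodes fail, while a slightly undersized margin lets the same seed trigger a network-wide collapse; real heterogeneous networks smooth this all-or-nothing behaviour but keep the sharp transition.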
Abstract:
Generalized spatial modulation (GSM) uses N antenna elements but a smaller number R of radio frequency (RF) chains at the transmitter. In GSM, apart from conveying information bits through R modulation symbols, information bits are also conveyed through the indices of the R active transmit antennas. In this letter, we derive lower and upper bounds on the capacity of an (N, M, R)-GSM MIMO system, where M is the number of receive antennas. Further, we propose a computationally efficient GSM encoding method and a message-passing-based low-complexity detection algorithm suited for large-scale GSM-MIMO systems.
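The rate bookkeeping behind GSM is easy to make concrete: choosing which R of the N antennas are active conveys floor(log2 C(N, R)) bits on top of the bits carried by the R modulation symbols. The sketch below counts bits per channel use for a hypothetical configuration; it is standard GSM accounting, not the capacity bounds derived in the letter:

```python
# Bits per channel use in GSM: antenna-index bits plus modulation bits.
# N, R, and the constellation size Q below are hypothetical examples.
from math import comb, floor, log2

def gsm_bits_per_channel_use(n_tx, n_rf, q_mod):
    """Count GSM information bits: index bits + R symbols from Q-QAM."""
    index_bits = floor(log2(comb(n_tx, n_rf)))  # antenna-subset choice
    symbol_bits = n_rf * int(log2(q_mod))       # R constellation symbols
    return index_bits + symbol_bits

# e.g. 8 transmit antennas, 2 RF chains, 4-QAM:
# C(8, 2) = 28 -> floor(log2 28) = 4 index bits, plus 2 * 2 symbol bits
rate = gsm_bits_per_channel_use(8, 2, 4)  # -> 8 bits per channel use
```

The floor makes some antenna combinations unusable, which is one reason efficient encoding maps (as proposed in the letter) are needed to select which subsets carry index bits.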
Abstract:
It is shown how suitably scaled, order-m moments, D_m^±, of the Elsasser vorticity fields in three-dimensional magnetohydrodynamics (MHD) can be used to identify three possible regimes for solutions of the MHD equations with magnetic Prandtl number P_M = 1. These vorticity fields are defined by ω^± = curl z^± = ω ± j, where z^± are the Elsasser variables, and where ω and j are, respectively, the fluid vorticity and the current density. This study follows recent developments in the study of three-dimensional Navier-Stokes fluid turbulence [Gibbon et al., Nonlinearity 27, 2605 (2014)]. Our mathematical results are then compared with those from a variety of direct numerical simulations, which demonstrate that all solutions that have been investigated remain in only one of these regimes, which has depleted nonlinearity. The exponents q^± that characterize the inertial-range power-law dependencies of the z^± energy spectra, ε^±(k), are then examined, and bounds are obtained. Comments are also made on (a) the generalization of our results to the case P_M ≠ 1 and (b) the relation between D_m^± and the order-m moments of gradients of magnetohydrodynamic fields, which are used to characterize intermittency in turbulent flows.
Abstract:
A three-phase confocal elliptical cylinder model is proposed for fiber-reinforced composites, in terms of which a generalized self-consistent method is developed for fiber-reinforced composites accounting for variations in fiber section shapes and randomness in fiber section orientation. The reasonableness of the fiber distribution function in the present model is shown. The dilute, self-consistent, differential and Mori-Tanaka methods are also extended to consider randomness in fiber section orientation in a statistical sense. A full comparison is made between the various micromechanics methods and with the Hashin-Shtrikman bounds. The present method provides convergent and reasonable results for a full range of variations in fiber section shapes (from circular fibers to ribbons), for the complete spectrum of fiber volume fraction (from 0 to 1, where the latter limit shows the correct asymptotic behavior in the fully packed case) and for extreme types of inclusion phases (from voids to rigid inclusions). A very different dependence of the five effective moduli on fiber section shapes is theoretically predicted, providing a reasonable explanation of the poor correlation between previous theory and experiment in the case of the longitudinal shear modulus.
Abstract:
A finite element study has been performed on the effects of holes and rigid inclusions on the elastic modulus and yield strength of regular honeycombs under biaxial loading. The focus is on honeycombs that have already been weakened by a small degree of geometrical imperfection, such as a random distribution of fractured cell walls, as these imperfect honeycombs resemble commercially available metallic foams. Hashin-Shtrikman lower and upper bounds and self-consistent estimates of elastic moduli are derived to provide reference solutions to the finite element calculations. It is found that the strength of an imperfect honeycomb is relatively insensitive to the presence of holes and inclusions, consistent with recent experimental observations on commercial aluminum alloy foams.
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear, non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. L_p bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these L_p bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
Abstract:
Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, which is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), an SMC implementation of the PHD filter that employs auxiliary variables to enhance its efficiency was proposed by Whiteley et al. Numerical examples were presented for two scenarios, including a challenging nonlinear observation model, to support the claim. This paper studies the theoretical properties of this auxiliary particle implementation. $\mathbb{L}_p$ error bounds are established, from which almost sure convergence follows.
Abstract:
The concept of a "Saturation Impulse" for rigid, perfectly plastic structures undergoing finite deflections under dynamic loading was put forward by Zhao, Yu and Fang (1994a). This paper extends the concept of the saturation impulse to the analysis of structures such as simply supported circular plates, simply supported and fully clamped square plates, and cylindrical shells subjected to rectangular pressure pulses in the medium load range. Both upper and lower bounds on the nondimensional saturation impulses are presented.