153 results for monotonicity


Relevance:

10.00%

Publisher:

Abstract:

A reduced 3D continuum model of dynamic piezoelectricity in a thin film surface-bonded to a substrate/host is presented in this article. With large-area flexible piezoelectric thin films in view for novel device/diagnostic applications, the feasibility of the proposed model in sensing surface and/or sub-surface defects is demonstrated through simulations involving metallic beams with cracks and composite beams with delaminations of various sizes. We introduce a set of electrical measures to capture the severity of damage in existing structures. The characteristics of these measures, in terms of the potential difference and its spatial gradients, are illustrated in the time domain. Sensitivity studies of the proposed measures with respect to the defect area and its location relative to the sensing film are reported. The simulated electrical measures for damaged hosts/substrates are compared with those for undamaged hosts/substrates, and show monotonicity with a high degree of sensitivity to variations in the damage parameters.
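As a toy illustration of such electrical measures (our construction, not the paper's), the sketch below extracts a peak potential-difference measure and its peak spatial gradient from a sampled surface-potential field; the arrays `phi` and `phi_ref` and the grid spacing `dx` are hypothetical placeholders.

```python
import numpy as np

def damage_measures(phi, phi_ref, dx):
    """Illustrative electrical damage measures for a surface-bonded film.

    phi, phi_ref : 1D arrays of surface potential along the film for the
    damaged and undamaged host; dx : sensor grid spacing. The specific
    measures (peak difference, peak gradient) are placeholders.
    """
    dphi = phi - phi_ref          # potential difference w.r.t. pristine host
    grad = np.gradient(dphi, dx)  # spatial gradient of that difference
    return np.max(np.abs(dphi)), np.max(np.abs(grad))
```

A damage index built this way would be expected, per the abstract, to grow monotonically with defect size.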

Relevance:

10.00%

Publisher:

Abstract:

A relationship between 2-monotonicity and 2-asummability is established, and a fast method for testing the 2-asummability of switching functions is thereby derived. The approach rests on the fact that only a particular type of 2-sum need be examined when testing 2-monotonic switching functions for 2-asummability, namely those 2-sums containing more than five 1's. 2-asummability testing for these 2-sums can be done easily using the authors' technique.
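For concreteness, 2-asummability can be checked directly from its definition: a switching function is 2-summable if two true vectors and two false vectors (repetitions allowed) have the same componentwise integer sum, and 2-asummable otherwise. The brute-force sketch below encodes only the definition, not the authors' fast test.

```python
from itertools import product, combinations_with_replacement

def is_2_asummable(f, n):
    """Brute-force 2-asummability test for a switching function f on {0,1}^n.

    Exponential in n; intended only to make the definition concrete.
    """
    points = list(product((0, 1), repeat=n))
    trues = [p for p in points if f(p)]
    falses = [p for p in points if not f(p)]
    vec_sum = lambda u, v: tuple(a + b for a, b in zip(u, v))
    true_sums = {vec_sum(u, v) for u, v in combinations_with_replacement(trues, 2)}
    false_sums = {vec_sum(u, v) for u, v in combinations_with_replacement(falses, 2)}
    return not (true_sums & false_sums)  # no common 2-sum => 2-asummable

# Example: 3-input majority is a threshold function, hence 2-asummable.
assert is_2_asummable(lambda p: sum(p) >= 2, 3)
```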

Relevance:

10.00%

Publisher:

Abstract:

The existence of an optimal feedback law is established for the risk-sensitive optimal control problem with denumerable state space. The main assumptions imposed are irreducibility and a near-monotonicity condition on the one-step cost function. A solution can be found constructively using either value iteration or policy iteration under suitable conditions on the initial feedback law.
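A minimal sketch of value iteration in the risk-sensitive (multiplicative) setting, on a finite truncation of the state space, might look as follows; the model arrays `P` and `c` and the normalization scheme are our assumptions, not the paper's construction.

```python
import numpy as np

def risk_sensitive_vi(P, c, iters=1000, tol=1e-10):
    """Normalized value iteration for risk-sensitive (exponential) cost.

    P[a] : transition matrix under action a; c[a] : one-step cost vector.
    Iterates the multiplicative Bellman operator
        (T V)(x) = min_a exp(c[a][x]) * sum_y P[a][x, y] * V(y),
    renormalizing each sweep; the log of the growth factor estimates the
    optimal risk-sensitive cost, and the argmin gives a stationary
    feedback law.
    """
    V = np.ones(P[0].shape[0])
    for _ in range(iters):
        Q = np.stack([np.exp(c[a]) * (P[a] @ V) for a in range(len(P))])
        TV = Q.min(axis=0)
        rho, V_new = TV.max(), TV / TV.max()
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return np.log(rho), V, Q.argmin(axis=0)
```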

Relevance:

10.00%

Publisher:

Abstract:

Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
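As a sketch of the sampling-plus-inference idea for the weakest optimizer API (optimal plan only), the following approximates a 2D plan diagram by probing the optimizer at a small random subset of grid points and filling in the rest with a nearest-neighbor classifier. The callback `get_optimal_plan` is a hypothetical stand-in for the optimizer interface; this is not the Picasso implementation itself.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def approximate_plan_diagram(get_optimal_plan, resolution=100, sample_frac=0.1, seed=0):
    """Approximate a 2D plan diagram via random sampling + 1-NN inference.

    get_optimal_plan(x, y) -> plan label, where x, y in [0, 1] are the two
    selectivity parameters of the query template (hypothetical API).
    Issues only sample_frac of the optimizer calls a brute-force diagram needs.
    """
    rng = np.random.default_rng(seed)
    grid = np.linspace(0, 1, resolution)
    points = np.array([(x, y) for x in grid for y in grid])
    idx = rng.choice(len(points), size=int(sample_frac * len(points)), replace=False)
    labels = [get_optimal_plan(x, y) for x, y in points[idx]]   # optimizer calls
    knn = KNeighborsClassifier(n_neighbors=1).fit(points[idx], labels)
    return knn.predict(points).reshape(resolution, resolution)
```

The richer APIs (sub-optimal plan costing, rank-ordered plan lists) allow this kind of inference to be backed by guarantees rather than heuristics, per the zero-error result mentioned above.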

Relevance:

10.00%

Publisher:

Abstract:

A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations for breakup and aggregation of particles within the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125]. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it as easy to implement as a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing constant-mass lines at the boundaries. The results show that predictions of the particle size distribution on a fixed X-grid agree better with the analytical solution than those obtained with earlier techniques. Simulations in which the computational domain expands and/or contracts as the population evolves show that the proposed strategy reduces the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort.
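For reference, the aggregation part of the bivariate population balance equation being discretized has the standard form below (our notation, for a two-component number density n(x, y, t) and aggregation kernel beta):

```latex
\frac{\partial n(x,y,t)}{\partial t} =
  \frac{1}{2}\int_0^x\!\int_0^y \beta(x-x',y-y';x',y')\,
      n(x-x',y-y',t)\,n(x',y',t)\,\mathrm{d}y'\,\mathrm{d}x'
  \;-\; n(x,y,t)\int_0^\infty\!\int_0^\infty \beta(x,y;x',y')\,
      n(x',y',t)\,\mathrm{d}y'\,\mathrm{d}x'
```

The constant-mass lines of slope -1 mentioned above are the lines x + y = const in this (x, y) space, along which the total mass of an aggregating pair is conserved.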

Relevance:

10.00%

Publisher:

Abstract:

Frequent episode discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of frequencies of episodes have been proposed along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequencies. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm such that all current algorithms are special cases of it. This unified view allows one to gain insights into different frequencies, and we present quantitative relationships among different frequencies. Our unified view also helps in obtaining correctness proofs for various counting algorithms as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps to consider generalization of the algorithm to count episodes with general partial orders.
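To make the role of anti-monotonicity concrete: for any frequency measure satisfying it, every subepisode of a frequent serial episode is itself frequent, which is what licenses apriori-style candidate generation. A schematic of that step, with a serial episode simplified to a tuple of event types (our rendering, not the paper's pseudocode):

```python
from itertools import combinations  # stdlib; only tuples are used below

def generate_candidates(frequent, k):
    """Apriori-style candidate generation for serial episodes of size k + 1.

    `frequent` is the set of frequent serial episodes of size k, each a
    tuple of event types. An episode is kept as a candidate only if every
    size-k subepisode is frequent, i.e. the anti-monotonicity pruning step.
    """
    candidates = set()
    for a in frequent:
        for b in frequent:
            if a[1:] == b[:-1]:              # overlap join: extend a by b's last event
                cand = a + (b[-1],)
                subepisodes = (cand[:i] + cand[i + 1:] for i in range(k + 1))
                if all(s in frequent for s in subepisodes):
                    candidates.add(cand)
    return candidates
```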

Relevance:

10.00%

Publisher:

Abstract:

In the pay-per-click sponsored search auctions currently used extensively by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their advertisements (ads for short). A sponsored search auction for a keyword is typically conducted over a number of rounds (say T). There are click probabilities mu(ij) associated with each agent-slot pair (agent i and slot j). The search engine would like to maximize the social welfare of the advertisers, that is, the sum of the values of the advertisers for the keyword. However, the search engine knows neither the true values advertisers have for a click to their respective advertisements nor the click probabilities. A key problem for the search engine is therefore to learn these click probabilities during the initial rounds of the auction while also ensuring that the auction mechanism is truthful. Mechanisms addressing such learning and incentive issues have recently been introduced and, owing to their connection to the multi-armed bandit problem, are aptly referred to as multi-armed bandit (MAB) mechanisms. When m = 1, exact characterizations of truthful MAB mechanisms are available in the literature. Recent work has focused on the more realistic but non-trivial general case m > 1, and a few promising results have started appearing. In this article, we consider this general case and prove several interesting results. Our contributions include: (1) when the mu(ij) are unconstrained, we prove that any truthful mechanism must satisfy strong pointwise monotonicity and show that the regret will be Theta(T^(2/3)) for such mechanisms; (2) when the clicks on the ads follow a certain click-precedence property, we show that weak pointwise monotonicity is necessary for MAB mechanisms to be truthful; (3) if the search engine has a certain coarse pre-estimate of the mu(ij) values and wishes to update them during the course of the T rounds, we show that weak pointwise monotonicity and type-I separatedness are necessary, while weak pointwise monotonicity and type-II separatedness are sufficient, for MAB mechanisms to be truthful; (4) if the click probabilities are separable into agent-specific and slot-specific terms, we provide a characterization of MAB mechanisms that are truthful in expectation.
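Schematically, with v_i the private per-click value of advertiser i and mu(ij) the click probability of agent-slot pair (i, j), the per-round expected social welfare the search engine would like to maximize can be written as an assignment problem (our notation):

```latex
\max_{\sigma}\; \sum_{j=1}^{m} \mu_{\sigma(j)\,j}\, v_{\sigma(j)},
\qquad \sigma:\{1,\dots,m\}\hookrightarrow\{1,\dots,k\}
```

where sigma assigns a distinct advertiser to each slot; the difficulty is that both the v_i (strategically reported) and the mu(ij) (to be learned) are unknown to the mechanism.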

Relevance:

10.00%

Publisher:

Abstract:

Multivariate neural data provide the basis for assessing interactions in brain networks. Among myriad connectivity measures, Granger causality (GC) has proven to be statistically intuitive, easy to implement, and capable of generating meaningful results. Although its application to functional MRI (fMRI) data is increasing, several factors have been identified that appear to hinder its neural interpretability: (a) latency differences in the hemodynamic response function (HRF) across brain regions, (b) low sampling rates, and (c) noise. Recognizing that in basic and clinical neuroscience it is often the change of a dependent variable (e.g., GC) between experimental conditions, or between normal and pathological states, that is of interest, we address the question of whether there exist systematic relationships between GC at the fMRI level and GC at the neural level. Simulated neural signals were convolved with a canonical HRF, down-sampled, and corrupted with noise to generate simulated fMRI data. As the coupling parameters in the model were varied, fMRI GC and neural GC were calculated and their relationship examined. Three main results were found: (1) GC following HRF convolution is a monotonically increasing function of neural GC; (2) this monotonicity can be reliably detected as a positive correlation when realistic fMRI temporal resolution and noise levels are used; and (3) although the detectability of monotonicity declines in the presence of HRF latency differences, substantial detectability is recovered after correcting for those differences. These results suggest that Granger causality is a viable technique for analyzing fMRI data when the questions are appropriately formulated.
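A toy version of this simulation pipeline (coupled AR pair, smoothing kernel, down-sampling, noise, then pairwise GC) is sketched below. The exponential kernel is a crude stand-in for the canonical HRF, and the lag order, coupling values, and noise level are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def lags(z, p):
    """Columns z[t-1], ..., z[t-p], aligned with targets z[p:]."""
    n = len(z)
    return np.column_stack([z[p - k: n - k] for k in range(1, p + 1)])

def gc(x, y, p=4):
    """Granger causality x -> y: log ratio of restricted to full residual variance."""
    target = y[p:]
    def resid_var(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.var(target - X @ beta)
    return np.log(resid_var(lags(y, p))
                  / resid_var(np.column_stack([lags(y, p), lags(x, p)])))

def simulate(c, n=20000):
    """AR(1) pair at 'neural' resolution; x drives y with strength c."""
    x, y = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        x[t] = 0.6 * x[t - 1] + rng.standard_normal()
        y[t] = 0.6 * y[t - 1] + c * x[t - 1] + rng.standard_normal()
    return x, y

kernel = np.exp(-np.arange(200) / 30.0)      # crude HRF stand-in
for c in (0.1, 0.3, 0.5):
    x, y = simulate(c)
    # Convolve, down-sample 100x (TR >> neural dt), add measurement noise.
    xb = np.convolve(x, kernel)[:len(x)][::100] + 0.1 * rng.standard_normal(len(x) // 100)
    yb = np.convolve(y, kernel)[:len(y)][::100] + 0.1 * rng.standard_normal(len(y) // 100)
    print(f"c={c}: neural GC={gc(x, y):.3f}, 'fMRI' GC={gc(xb, yb):.3f}")
```

If the monotonicity result holds in this toy setting, the down-sampled GC should increase with the coupling strength c even though its absolute value is strongly attenuated.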

Relevance:

10.00%

Publisher:

Abstract:

Single fluid schemes that rely on an interface function for phase identification in multicomponent compressible flows are widely used to study hydrodynamic flow phenomena in several diverse applications. Simulations based on standard numerical implementations of these schemes suffer from an artificial increase in the width of the interface function owing to the numerical dissipation introduced by an upwind discretization of the governing equations. In addition, monotonicity requirements which ensure that the sharp interface function remains bounded at all times necessitate the use of low-order accurate discretization strategies. This results in a significant reduction in accuracy along with a loss of intricate flow features. In this paper we develop a nonlinear-transformation-based interface capturing method which achieves superior accuracy without compromising the simplicity, computational efficiency and robustness of the original flow solver. A nonlinear map from the signed distance function to the sigmoid-type interface function is used to effectively couple a standard single fluid shock and interface capturing scheme with a high-order accurate constrained level set reinitialization method in a way that allows for oscillation-free transport of the sharp material interface. Imposition of a maximum principle, which ensures that the multidimensional preconditioned interface capturing method does not produce new maxima or minima even in the extreme events of interface merger or breakup, allows for an explicit determination of the interface thickness in terms of the grid spacing. A narrow band method is formulated in order to localize computations pertinent to the preconditioned interface capturing method. Numerical tests in one dimension reveal a significant improvement in accuracy and convergence; in stark contrast to the conventional scheme, the proposed method retains its accuracy and convergence characteristics in a shifted reference frame. Results from the test cases in two dimensions show that the nonlinear-transformation-based interface capturing method outperforms both the conventional method and an interface capturing method without nonlinear transformation in resolving intricate flow features such as sheet jetting in shock-induced cavity collapse. The ability of the proposed method to account for gravitational and surface tension forces, in addition to compressibility, is demonstrated through a model fully three-dimensional problem concerning droplet splash and the formation of a crown-like feature.
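One common sigmoid-type map of the kind described (our illustration; the paper's exact transformation may differ) couples the signed distance function phi to a bounded interface function psi whose thickness is controlled by a parameter epsilon tied to the grid spacing:

```latex
\psi(\mathbf{x}) = \frac{1}{1 + e^{-\phi(\mathbf{x})/\varepsilon}},
\qquad
\phi(\mathbf{x}) = \varepsilon\,\ln\frac{\psi(\mathbf{x})}{1-\psi(\mathbf{x})},
\qquad \varepsilon \propto \Delta x
```

Because such a map keeps psi within (0, 1) by construction, the interface thickness is fixed explicitly in terms of epsilon, and hence of the grid spacing.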

Relevance:

10.00%

Publisher:

Abstract:

The transformation of flowing liquids into rigid glasses is thought to involve increasingly cooperative relaxation dynamics as the temperature approaches that of the glass transition. However, the precise nature of this motion is unclear, and a complete understanding of vitrification thus remains elusive. Of the numerous theoretical perspectives [1-4] devised to explain the process, random first-order theory (RFOT; refs [2,5]) is a well-developed thermodynamic approach, which predicts a change in the shape of relaxing regions as the temperature is lowered. However, the existence of an underlying 'ideal' glass transition predicted by RFOT remains debatable, largely because the key microscopic predictions concerning the growth of amorphous order and the nature of dynamic correlations lack experimental verification. Here, using holographic optical tweezers, we freeze a wall of particles in a two-dimensional colloidal glass-forming liquid and provide direct evidence for growing amorphous order in the form of a static point-to-set length. We uncover the non-monotonic dependence of dynamic correlations on area fraction and show that this non-monotonicity follows directly from the change in morphology and internal structure of cooperatively rearranging regions [6,7]. Our findings support RFOT and thereby constitute a crucial step in distinguishing between competing theories of glass formation.

Relevance:

10.00%

Publisher:

Abstract:

We characterize a monotonic core concept defined on the class of veto balanced games. We also discuss what restricted versions of monotonicity are possible when selecting core allocations. We introduce a family of monotonic core concepts for veto balanced games and we show that, in general, the nucleolus per capita is not monotonic.

Relevance:

10.00%

Publisher:

Abstract:

We prove that the SD-prenucleolus satisfies monotonicity in the class of convex games. The SD-prenucleolus is thus the only known continuous core concept that satisfies monotonicity for convex games. We also prove that for convex games the SD-prenucleolus and the SD-prekernel coincide.
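For reference, convexity of a TU game (N, v) is the standard supermodularity condition:

```latex
v(S \cup T) + v(S \cap T) \;\ge\; v(S) + v(T)
\qquad \text{for all } S, T \subseteq N
```

Convex games have a nonempty core, which is why core selections such as the SD-prenucleolus are well defined on this class.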

Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems motivated by power systems are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, which describe marginal and conditional dependencies between brain regions, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit from a limited number of samples. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
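A minimal sketch of the estimation step described above, using scikit-learn's GraphicalLasso to obtain a sparse precision (inverse covariance) matrix from nodal signals; the edge threshold is a placeholder, and the dissertation's modification for ill-conditioned covariance matrices is not shown.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def estimate_topology(signals, alpha=0.05, edge_tol=1e-4):
    """Infer a circuit/brain graph from nodal signals via sparse precision.

    signals : (n_samples, n_nodes) array of measured node voltages/signals.
    Returns a boolean adjacency matrix with an edge wherever the estimated
    precision matrix has a non-negligible off-diagonal entry.
    """
    precision = GraphicalLasso(alpha=alpha).fit(signals).precision_
    adjacency = np.abs(precision) > edge_tol
    np.fill_diagonal(adjacency, False)
    return adjacency
```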

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable under the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
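For context, a standard fluid model of the kind these results refine is the primal congestion-control algorithm of Kelly et al., in which each source s adjusts its rate x_s against the aggregate price of the links on its route (our rendering; the buffer-aware model derived in the dissertation modifies what rate each link actually observes):

```latex
\dot{x}_s(t) = \kappa_s\!\left( w_s - x_s(t)\sum_{l \in s} p_l\!\Big(\sum_{r \ni l} x_r(t)\Big) \right)
```

Here w_s is the source's willingness to pay and p_l is the price function of link l, evaluated at the total flow through that link; every link on the route is assumed to see the source rate x_r directly, which is precisely the assumption the buffering model drops.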

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
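Schematically, writing the net injection at bus i as generation minus demand, the "power over-delivery" relaxation replaces each nodal balance equality with an inequality that permits the network to deliver more than is demanded (our notation; a simplification of the actual constraint set):

```latex
p^{g}_i - p^{d}_i \;=\; \sum_{j \in \mathcal{N}(i)} P_{ij}
\quad\longrightarrow\quad
p^{g}_i - p^{d}_i \;\ge\; \sum_{j \in \mathcal{N}(i)} P_{ij}
```

where P_{ij} denotes the power sent from bus i toward neighboring bus j.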

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
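In schematic form (our notation), GNF couples nodal injections p_i to directed line flows whose two ends are related by a nonlinear function, with box constraints throughout:

```latex
\min_{p}\;\sum_i f_i(p_i)
\quad\text{s.t.}\quad
p_i = \sum_{j\in\mathcal{N}(i)} p_{ij},\qquad
p_{ji} = \gamma_{ij}(p_{ij}),\qquad
p_i \in [\,\underline{p}_i,\overline{p}_i\,],\;\;
p_{ij} \in [\,\underline{p}_{ij},\overline{p}_{ij}\,]
```

with the cost functions f_i convex and the coupling functions gamma_{ij} monotone, per the stated assumptions; the convex relaxation then recovers the optimal injections p_i exactly.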

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

10.00%

Publisher:

Abstract:

The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
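The canonical instance of such a default, in the usual (prerequisite : justification) / consequent notation, is the flying-birds rule:

```latex
\frac{\mathrm{Bird}(x) \;:\; \mathrm{Flies}(x)}{\mathrm{Flies}(x)}
```

read as: if x is a bird and it is consistent to assume that x flies, conclude that x flies. A later observation such as Penguin(x) can defeat the conclusion, which is exactly the non-monotonicity discussed above.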