116 results for monotonicity
Abstract:
The one-dimensional propagation of a combustion wave through a premixed solid fuel with two-stage kinetics is studied. We re-examine the analysis of a single-reaction travelling wave and extend it to the case of two-stage reactions. We derive an expression for the travelling wave speed in the limit of large activation energy for both reactions. The analysis shows that when both reactions are exothermic, the wave structure is similar to that of the single-reaction case. However, when the second reaction is endothermic, the wave structure can differ significantly from the single-reaction case; in particular, as might be expected, a travelling wave does not necessarily exist. In the limit of large activation energy, we establish conditions for the non-existence of a travelling wave and for the monotonicity of the temperature profile.
Semiparametric estimates of the supply and demand effects of disability on labor force participation
Abstract:
This paper modifies and uses the semiparametric methods of Ichimura and Lee (1991) on standard cross-section data to decompose the effect of disability on labor force participation into a demand effect and a supply effect. It shows that a straightforward application of Ichimura and Lee's methods leads to meaningless results, while imposing monotonicity on the unknown function leads to substantial results. The paper finds that the supply effects dominate the demand effects of disability.
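As a minimal illustration of the idea behind the monotonicity restriction (not the paper's Ichimura and Lee (1991) based estimator, whose details are not given in the abstract), the sketch below fits a participation probability on a single index with an isotonic regression, which is non-decreasing by construction where an unconstrained nonparametric fit need not be. The variable names and data-generating process are purely illustrative assumptions.

```python
# Minimal sketch (not the paper's estimator): imposing monotonicity on an
# unknown function via isotonic regression. The single index x, the outcome y,
# and the data-generating process are illustrative assumptions.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=500)            # e.g., a single index built from covariates
p_true = 1 / (1 + np.exp(-x))               # true (monotone) participation probability
y = rng.binomial(1, p_true)                 # observed labor force participation (0/1)

# The isotonic fit enforces a non-decreasing estimate of E[y | x], whereas an
# unconstrained nonparametric fit can wiggle non-monotonically in small samples.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True, out_of_bounds="clip")
iso.fit(x, y)

grid = np.linspace(-2, 2, 9)
print(np.round(iso.predict(grid), 3))       # non-decreasing by construction
```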
Abstract:
A cooperative game played in a sequential manner by a pair of learning automata is investigated in this paper. The automata operate in an unknown random environment which gives a common pay-off to the automata. Necessary and sufficient conditions on the functions in the reinforcement scheme are given for absolute monotonicity, which ensures that the expected pay-off is monotonically increasing in any arbitrary environment. As each participating automaton operates with no information regarding the other partner, the results of the paper are relevant to decentralized control.
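For concreteness, the sketch below sets up this scenario with a standard linear reward-inaction (L_R-I) scheme: two automata repeatedly play a common-payoff game, each updating only its own action probabilities and knowing nothing about its partner. The payoff matrix, the learning rate, and the choice of L_R-I itself are illustrative assumptions; the paper's conditions concern a more general class of reinforcement schemes.

```python
# Two learning automata in a sequential common-payoff game, each using a
# linear reward-inaction (L_R-I) update. The payoff matrix D and the learning
# rate are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
D = np.array([[0.7, 0.2],     # D[i, j] = probability of a unit common pay-off when
              [0.3, 0.8]])    # automaton A plays action i and automaton B plays action j
lam = 0.01                    # learning rate

pA = np.full(2, 0.5)          # action probabilities of automaton A
pB = np.full(2, 0.5)          # action probabilities of automaton B

for _ in range(20000):
    i = rng.choice(2, p=pA)   # each automaton acts with no information about its partner
    j = rng.choice(2, p=pB)
    if rng.random() < D[i, j]:
        # reward-inaction: update only on a favourable pay-off;
        # the chosen action gets p + lam*(1 - p), the others get (1 - lam)*p
        pA = (1 - lam) * pA
        pA[i] += lam
        pB = (1 - lam) * pB
        pB[j] += lam

print(np.round(pA, 3), np.round(pB, 3))   # typically concentrates near the best joint action (here D[1, 1])
```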
Abstract:
A reduced 3D continuum model of dynamic piezoelectricity in a thin film surface-bonded to the substrate/host is presented in this article. While employing large-area flexible thin piezoelectric films for novel applications in devices/diagnostics, the feasibility of the proposed model in sensing surface and/or sub-surface defects is demonstrated through simulations involving metallic beams with cracks and a composite beam with delaminations of various sizes. We introduce a set of electrical measures to capture the severity of the damage in existing structures. Characteristics of these electrical measures, in terms of the potential difference and its spatial gradients, are illustrated in the time domain. Sensitivity studies of the proposed measures, in terms of the defect areas and their region of occurrence relative to the sensing film, are reported. The simulation results for the electrical measures for damaged hosts/substrates are compared with those for undamaged hosts/substrates, and they show monotonicity with a high degree of sensitivity to variations in the damage parameters.
Abstract:
A relationship between 2-monotonicity and 2-asummability is established, and from it a fast method for testing the 2-asummability of switching functions is derived. The approach is based on the fact that only a particular type of 2-sum needs to be examined when testing 2-monotonic switching functions for 2-asummability, namely those 2-sums that contain more than five 1's. 2-asummability testing for these 2-sums can be carried out easily using the authors' technique.
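For readers unfamiliar with the terminology, the brute-force check below illustrates the underlying notion (it is deliberately not the fast test described in the abstract): a switching function is 2-summable if some pair of true vectors and some pair of false vectors, not necessarily distinct, have the same component-wise sum, and it is 2-asummable otherwise. The example functions are arbitrary.

```python
# Brute-force illustration of 2-(a)summability, not the authors' fast test.
from itertools import product, combinations_with_replacement

def is_2_summable(f, n):
    true_pts = [v for v in product((0, 1), repeat=n) if f(v)]
    false_pts = [v for v in product((0, 1), repeat=n) if not f(v)]
    # 2-sums: component-wise sums of (not necessarily distinct) pairs of vectors
    true_sums = {tuple(a + b for a, b in zip(u, v))
                 for u, v in combinations_with_replacement(true_pts, 2)}
    false_sums = {tuple(a + b for a, b in zip(u, v))
                  for u, v in combinations_with_replacement(false_pts, 2)}
    return bool(true_sums & false_sums)   # a shared 2-sum means the function is 2-summable

# A threshold function (x1 + x2 + x3 >= 2) is asummable, hence 2-asummable.
print(is_2_summable(lambda v: sum(v) >= 2, 3))      # False -> 2-asummable
# Exclusive-OR is the classic 2-summable example.
print(is_2_summable(lambda v: v[0] != v[1], 2))     # True  -> 2-summable
```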
Abstract:
The existence of an optimal feedback law is established for the risk-sensitive optimal control problem with denumerable state space. The main assumptions imposed are irreducibility and a near-monotonicity condition on the one-step cost function. A solution can be found constructively using either value iteration or policy iteration under suitable conditions on the initial feedback law.
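The abstract only names value iteration and policy iteration as constructive routes, so the following is a hedged finite-state sketch (the paper itself treats a denumerable state space under irreducibility and near-monotonicity) of how risk-sensitive value iteration is commonly set up: iterate the multiplicative Bellman operator (T W)(x) = min_a exp(theta*c(x,a)) * sum_y P(y|x,a) W(y) with normalization, and read the optimal risk-sensitive average cost off the growth factor. The transition kernel, cost function, and risk factor below are made-up illustrations.

```python
# Finite-state illustration of risk-sensitive value iteration (the paper's
# setting is a denumerable state space; the MDP data and theta are made up).
import numpy as np

theta = 0.5                                   # risk-sensitivity parameter
P = np.array([                                # P[a, x, y]: 2 actions, 3 states
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.4, 0.5]],
    [[0.3, 0.3, 0.4], [0.4, 0.4, 0.2], [0.2, 0.2, 0.6]],
])
c = np.array([[1.0, 0.5, 2.0],                # c[a, x]: one-step cost
              [1.5, 1.0, 0.2]])

W = np.ones(3)                                # W ~ exp(theta * relative value)
for _ in range(500):
    TW = (np.exp(theta * c) * (P @ W)).min(axis=0)   # multiplicative Bellman operator
    growth = TW[0] / W[0]                            # growth factor at a reference state
    W = TW / TW[0]                                   # normalise to keep the iterates bounded

lam = np.log(growth) / theta                  # optimal risk-sensitive average cost
policy = (np.exp(theta * c) * (P @ W)).argmin(axis=0)  # greedy feedback law
print(round(float(lam), 4), policy)
```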
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
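As one concrete illustration of the inference idea mentioned in the abstract (nearest-neighbour classification over a sparse sample of optimizer calls), the sketch below fills in a 2-D plan diagram from a 10% random sample of the selectivity grid. The mock_optimizer stand-in, the grid resolution, and the sampling rate are illustrative assumptions; they are not Picasso's actual interfaces, nor the TPC-H/TPC-DS templates used in the paper.

```python
# Approximate a plan diagram from a sparse sample of optimizer calls using a
# nearest-neighbour classifier. `mock_optimizer` is a hypothetical stand-in
# for a real optimizer invocation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mock_optimizer(sel):
    # Returns a plan label for a 2-D (selectivity_1, selectivity_2) point.
    return 0 if sel[0] + sel[1] < 0.7 else (1 if sel[0] < 0.5 else 2)

res = 100                                              # 100 x 100 diagram resolution
grid = np.array([(i / res, j / res) for i in range(res) for j in range(res)])

rng = np.random.default_rng(0)
sampled = rng.choice(len(grid), size=len(grid) // 10, replace=False)
labels = np.array([mock_optimizer(p) for p in grid[sampled]])   # ~10% of the optimizer calls

knn = KNeighborsClassifier(n_neighbors=1).fit(grid[sampled], labels)
approx = knn.predict(grid)                             # approximate plan diagram

exact = np.array([mock_optimizer(p) for p in grid])    # exhaustive diagram, for accuracy only
print("plan-identity accuracy:", (approx == exact).mean())
```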
Abstract:
A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing the constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on the fixed X-grid are in better agreement with the analytical solution than those obtained with the earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process reduces the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Frequent episode discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of frequencies of episodes have been proposed along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequencies. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm such that all current algorithms are special cases of it. This unified view allows one to gain insights into different frequencies, and we present quantitative relationships among different frequencies. Our unified view also helps in obtaining correctness proofs for various counting algorithms as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps to consider generalization of the algorithm to count episodes with general partial orders.
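As a small illustration of the anti-monotonicity property the abstract refers to, and which the candidate generation step of any apriori-based method exploits, the sketch below builds k-node serial-episode candidates from frequent (k-1)-node episodes and prunes every candidate that has an infrequent subepisode. The suffix-prefix join, the toy event types, and the frequent sets are illustrative assumptions; the frequency counting itself, which is the subject of the paper, is omitted.

```python
# Apriori-style candidate generation for serial episodes, using the
# anti-monotonicity of frequency for pruning. Toy data; not the paper's
# counting algorithms.
from itertools import product

def generate_candidates(frequent_k_minus_1):
    """Join frequent (k-1)-node serial episodes and prune by anti-monotonicity."""
    freq = set(frequent_k_minus_1)
    candidates = set()
    for alpha, beta in product(freq, repeat=2):
        if alpha[1:] == beta[:-1]:                    # suffix of alpha matches prefix of beta
            cand = alpha + (beta[-1],)
            subs = (cand[:i] + cand[i + 1:] for i in range(len(cand)))
            if all(sub in freq for sub in subs):      # every (k-1)-subepisode must be frequent
                candidates.add(cand)
    return candidates

# With ('A', 'C') frequent, the join ('A','B') + ('B','C') -> ('A','B','C') survives:
print(sorted(generate_candidates([("A", "B"), ("B", "C"), ("A", "C")])))   # [('A', 'B', 'C')]
# Without it, anti-monotonicity prunes that same candidate:
print(sorted(generate_candidates([("A", "B"), ("B", "C")])))               # []
```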
Abstract:
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for the available slots (say m) to display their advertisements (ads for short). A sponsored search auction for a keyword is typically conducted over a number of rounds (say T). There are click probabilities μ_ij associated with each agent-slot pair (agent i and slot j). The search engine would like to maximize the social welfare of the advertisers, that is, the sum of the values of the advertisers for the keyword. However, the search engine knows neither the true values the advertisers have for a click to their respective advertisements nor the click probabilities. A key problem for the search engine, therefore, is to learn these click probabilities during the initial rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentive issues have recently been introduced. These mechanisms, due to their connection to the multi-armed bandit problem, are aptly referred to as multi-armed bandit (MAB) mechanisms. When m = 1, exact characterizations of truthful MAB mechanisms are available in the literature. Recent work has focused on the more realistic but non-trivial general case m > 1, and a few promising results have started appearing. In this article, we consider this general case and prove several interesting results. Our contributions include: (1) When the μ_ij are unconstrained, we prove that any truthful mechanism must satisfy strong pointwise monotonicity and show that the regret will be Θ(T) for such mechanisms. (2) When the clicks on the ads follow a certain click precedence property, we show that weak pointwise monotonicity is necessary for MAB mechanisms to be truthful. (3) If the search engine has a certain coarse pre-estimate of the μ_ij values and wishes to update them during the course of the T rounds, we show that weak pointwise monotonicity and type-I separatedness are necessary while weak pointwise monotonicity and type-II separatedness are sufficient conditions for the MAB mechanisms to be truthful. (4) If the click probabilities are separable into agent-specific and slot-specific terms, we provide a characterization of MAB mechanisms that are truthful in expectation.
Abstract:
Multivariate neural data provide the basis for assessing interactions in brain networks. Among myriad connectivity measures, Granger causality (GC) has proven to be statistically intuitive, easy to implement, and able to generate meaningful results. Although its application to functional MRI (fMRI) data is increasing, several factors have been identified that appear to hinder its neural interpretability: (a) latency differences in the hemodynamic response function (HRF) across different brain regions, (b) low sampling rates, and (c) noise. Recognizing that in basic and clinical neuroscience it is often the change of a dependent variable (e.g., GC) between experimental conditions, and between normal and pathological states, that is of interest, we address the question of whether there exist systematic relationships between GC at the fMRI level and that at the neural level. Simulated neural signals were convolved with a canonical HRF, down-sampled, and corrupted with noise to generate simulated fMRI data. As the coupling parameters in the model were varied, fMRI GC and neural GC were calculated and their relationship examined. Three main results were found: (1) GC following HRF convolution is a monotonically increasing function of neural GC; (2) this monotonicity can be reliably detected as a positive correlation when a realistic fMRI temporal resolution and noise level were used; and (3) although the detectability of monotonicity declined in the presence of HRF latency differences, substantial recovery of detectability occurred after correcting for the latency differences. These results suggest that Granger causality is a viable technique for analyzing fMRI data when the questions are appropriately formulated.
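A compact sketch of the simulation pipeline described above is given below: coupled autoregressive "neural" signals, convolution with a canonical double-gamma HRF, down-sampling to a typical fMRI repetition time, additive noise, and pairwise GC computed as the log ratio of restricted to full residual variances. The AR coefficients, HRF parameters, repetition time, and noise level are illustrative assumptions, not the values used in the paper.

```python
# Simulated neural signals -> HRF convolution -> down-sampling -> noise -> GC.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def _resid_var(target, predictors):
    """Residual variance of an OLS regression of target on predictors (+ intercept)."""
    X = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

def granger_causality(x, y, p=2):
    """GC from x to y with model order p: log(restricted variance / full variance)."""
    lags_y = [y[p - k:-k] for k in range(1, p + 1)]
    lags_x = [x[p - k:-k] for k in range(1, p + 1)]
    target = y[p:]
    return np.log(_resid_var(target, lags_y) / _resid_var(target, lags_y + lags_x))

rng = np.random.default_rng(0)
dt, duration = 0.01, 600.0                    # 10 ms neural resolution, 10 minutes
n = int(duration / dt)
x, y = np.zeros(n), np.zeros(n)
for t in range(2, n):                         # bivariate AR(2): x drives y, not vice versa
    x[t] = 0.55 * x[t-1] - 0.35 * x[t-2] + rng.standard_normal()
    y[t] = 0.55 * y[t-1] - 0.35 * y[t-2] + 0.25 * x[t-1] + rng.standard_normal()

t_hrf = np.arange(0.0, 30.0, dt)              # canonical double-gamma HRF
hrf = gamma.pdf(t_hrf, 6) - gamma.pdf(t_hrf, 16) / 6.0
bold_x, bold_y = np.convolve(x, hrf)[:n], np.convolve(y, hrf)[:n]

TR = 2.0                                      # down-sample to a typical repetition time
step = int(TR / dt)
fx = bold_x[::step] + 0.5 * rng.standard_normal(n // step)
fy = bold_y[::step] + 0.5 * rng.standard_normal(n // step)

print("neural GC x->y:", round(granger_causality(x, y), 4))
print("fMRI   GC x->y:", round(granger_causality(fx, fy), 4))
```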
Abstract:
Single fluid schemes that rely on an interface function for phase identification in multicomponent compressible flows are widely used to study hydrodynamic flow phenomena in several diverse applications. Simulations based on standard numerical implementation of these schemes suffer from an artificial increase in the width of the interface function owing to the numerical dissipation introduced by an upwind discretization of the governing equations. In addition, monotonicity requirements which ensure that the sharp interface function remains bounded at all times necessitate use of low-order accurate discretization strategies. This results in a significant reduction in accuracy along with a loss of intricate flow features. In this paper we develop a nonlinear transformation based interface capturing method which achieves superior accuracy without compromising the simplicity, computational efficiency and robustness of the original flow solver. A nonlinear map from the signed distance function to the sigmoid type interface function is used to effectively couple a standard single fluid shock and interface capturing scheme with a high-order accurate constrained level set reinitialization method in a way that allows for oscillation-free transport of the sharp material interface. Imposition of a maximum principle, which ensures that the multidimensional preconditioned interface capturing method does not produce new maxima or minima even in the extreme events of interface merger or breakup, allows for an explicit determination of the interface thickness in terms of the grid spacing. A narrow band method is formulated in order to localize computations pertinent to the preconditioned interface capturing method. Numerical tests in one dimension reveal a significant improvement in accuracy and convergence; in stark contrast to the conventional scheme, the proposed method retains its accuracy and convergence characteristics in a shifted reference frame. Results from the test cases in two dimensions show that the nonlinear transformation based interface capturing method outperforms both the conventional method and an interface capturing method without nonlinear transformation in resolving intricate flow features such as sheet jetting in the shock-induced cavity collapse. The ability of the proposed method in accounting for the gravitational and surface tension forces besides compressibility is demonstrated through a model fully three-dimensional problem concerning droplet splash and formation of a crownlike feature. (C) 2014 Elsevier Inc. All rights reserved.
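A one-dimensional illustration of the nonlinear map at the heart of the method follows: the transported and reinitialized field is a signed distance function, and a sigmoid-type transform produces the bounded interface function whose thickness is tied explicitly to the grid spacing. The particular tanh form and the thickness factor used here are illustrative assumptions, not necessarily the paper's exact choices.

```python
# Sigmoid-type map between a signed distance function and a bounded interface
# function in 1D. The tanh form and eps = 0.75*dx are illustrative assumptions.
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
eps = 0.75 * dx                                   # interface half-thickness ~ grid spacing

phi = x - 0.5                                     # signed distance to an interface at x = 0.5

def to_interface(phi, eps):
    """Sigmoid-type interface function, bounded in (0, 1) by construction."""
    return 0.5 * (1.0 + np.tanh(phi / (2.0 * eps)))

def to_distance(psi, eps):
    """Inverse map back to a signed-distance-like field (clipped for safety)."""
    psi = np.clip(psi, 1e-12, 1.0 - 1e-12)
    return 2.0 * eps * np.arctanh(2.0 * psi - 1.0)

psi = to_interface(phi, eps)
band = np.abs(phi) < 4.0 * eps                    # narrow band around the interface
print(psi.min() >= 0.0 and psi.max() <= 1.0)      # boundedness (a 1D maximum principle)
print(np.abs(to_distance(psi, eps) - phi)[band].max())   # small round-trip error in the band
```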
Abstract:
The transformation of flowing liquids into rigid glasses is thought to involve increasingly cooperative relaxation dynamics as the temperature approaches that of the glass transition. However, the precise nature of this motion is unclear, and a complete understanding of vitrification thus remains elusive. Of the numerous theoretical perspectives (refs 1-4) devised to explain the process, random first-order theory (RFOT; refs 2,5) is a well-developed thermodynamic approach, which predicts a change in the shape of relaxing regions as the temperature is lowered. However, the existence of an underlying 'ideal' glass transition predicted by RFOT remains debatable, largely because the key microscopic predictions concerning the growth of amorphous order and the nature of dynamic correlations lack experimental verification. Here, using holographic optical tweezers, we freeze a wall of particles in a two-dimensional colloidal glass-forming liquid and provide direct evidence for growing amorphous order in the form of a static point-to-set length. We uncover the non-monotonic dependence of dynamic correlations on area fraction and show that this non-monotonicity follows directly from the change in morphology and internal structure of cooperatively rearranging regions (refs 6,7). Our findings support RFOT and thereby constitute a crucial step in distinguishing between competing theories of glass formation.
Abstract:
We characterize a monotonic core concept defined on the class of veto balanced games. We also discuss which restricted versions of monotonicity are possible when selecting core allocations. We introduce a family of monotonic core concepts for veto balanced games and show that, in general, the per capita nucleolus is not monotonic.