873 results for constrained clustering
Abstract:
In this paper, we first derive a necessary and sufficient condition for a stationary strategy to be a Nash equilibrium of a discounted constrained stochastic game under certain assumptions. In the process, we also develop a nonlinear (non-convex) optimization problem for a discounted constrained stochastic game. We use the linear best-response functions of every player and the complementary slackness theorem for linear programs to derive both the optimization problem and the equivalent condition. We then extend this result to average-reward constrained stochastic games. Finally, we present a heuristic algorithm, motivated by our necessary and sufficient conditions, for a discounted-cost constrained stochastic game. We numerically observe the convergence of this algorithm to a Nash equilibrium.
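As context for the best-response machinery this abstract relies on, here is a minimal sketch, not the paper's algorithm: with the other players' stationary strategies held fixed, a player faces a discounted constrained MDP, and a best response can be computed as a linear program over discounted occupation measures. The function name, the scipy solver, and the input layout are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): LP best response in a discounted
# constrained MDP, the building block of a best-response-style heuristic.
import numpy as np
from scipy.optimize import linprog

def best_response(P, r, c, gamma, mu0, c_bound):
    """P[s, a, s']: transitions; r[s, a]: rewards; c[s, a]: costs;
    mu0: initial distribution; c_bound: discounted-cost budget.
    Maximizes sum(r * x) over occupation measures x[s, a] >= 0 satisfying
    sum_a x[s, a] - gamma * sum_{s', a'} P[s', a', s] x[s', a'] = mu0[s]
    and sum(c * x) <= c_bound, then normalizes x into a stationary strategy."""
    S, A = r.shape
    A_eq = np.zeros((S, S * A))
    for s in range(S):
        for sp in range(S):
            for a in range(A):
                A_eq[s, sp * A + a] = float(sp == s) - gamma * P[sp, a, s]
    res = linprog(-r.reshape(-1),                         # maximize reward
                  A_ub=c.reshape(1, -1), b_ub=[c_bound],  # cost budget
                  A_eq=A_eq, b_eq=mu0, bounds=(0, None))
    x = res.x.reshape(S, A)
    return x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
```

A heuristic in the spirit of the abstract would alternate such best responses across players and stop once the strategies stop changing.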
Abstract:
We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by h_max, and, among all such feasible networks, the cost of the selected network is minimum. The hop-count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light-traffic model. We observe that the problem is NP-hard, and hard even to approximate within a constant factor. For this problem, we propose a polynomial-time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst-case approximation guarantee for this algorithm. We have also proposed a polynomial-time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good-quality solutions using very little computation time in various randomly generated network scenarios.
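The weighted set cover ingredient has a classical greedy form, sketched below with illustrative names; the paper's BS/relay feasibility logic (the hop bound h_max, relay placement) is not reproduced, only the generic subroutine that SmartSelect modifies.

```python
# Minimal sketch of greedy weighted set cover: repeatedly pick the set with
# the lowest cost per newly covered element. Assumes a feasible instance.
def greedy_weighted_set_cover(universe, sets, costs):
    """universe: iterable of elements; sets: dict name -> set; costs: dict name -> float."""
    uncovered, chosen = set(universe), []
    while uncovered:
        name = min((n for n in sets if sets[n] & uncovered),
                   key=lambda n: costs[n] / len(sets[n] & uncovered))
        chosen.append(name)
        uncovered -= sets[name]
    return chosen

# Toy usage: BS candidates covering sensors reachable within the hop bound.
print(greedy_weighted_set_cover(
    universe={1, 2, 3, 4},
    sets={"bs_a": {1, 2}, "bs_b": {2, 3, 4}, "bs_c": {4}},
    costs={"bs_a": 3.0, "bs_b": 4.0, "bs_c": 1.0}))  # ['bs_c', 'bs_a', 'bs_b']
```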
Abstract:
Image inpainting is the process of filling in an unwanted region of an image marked by the user. It is used for restoring old paintings and photographs, removing red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm that takes care of false edge propagation. We use the classical exemplar-based technique to compute the priority term for each patch. To ensure that the nearest-neighbor patch, found by minimizing the L2 distance between patches, has appropriate edge content, we impose an additional constraint that the entropies of the patches be similar. The entropy of a patch acts as a good measure of its edge content. Additionally, we fill the image by considering overlapping patches to ensure smoothness in the output. We use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. The results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects as well as thin scratches or text written on an image. It is also shown that the proposed approach is robust to the shape of the manually selected target. Our results compare favorably to those obtained by existing techniques.
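The entropy-constrained matching step lends itself to a short sketch: among candidate source patches, minimize the L2 distance to the target, restricted to candidates with similar entropy. Patch priority, overlap blending, and the full inpainting loop are omitted; the entropy tolerance and names are illustrative assumptions.

```python
# Minimal sketch: pick the L2-nearest candidate patch whose intensity entropy
# is close to the target's, as a proxy for similar edge content.
import numpy as np

def patch_entropy(patch, bins=32):
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_patch(target, candidates, entropy_tol=0.5):
    h_t = patch_entropy(target)
    admissible = [c for c in candidates
                  if abs(patch_entropy(c) - h_t) <= entropy_tol] or candidates
    return min(admissible, key=lambda c: float(((c - target) ** 2).sum()))
```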
Abstract:
Homogeneous temperature regions are necessary for use in hydrometeorological studies. Such regions are often delineated by analysing statistics derived from time series of maximum, minimum, or mean temperature, rather than the attributes influencing temperature. This practice cannot yield meaningful regions in data-sparse areas. Further, independent validation of the delineated regions for homogeneity in temperature is not possible, as the temperature records themselves form the basis for arriving at the regions. To address these issues, a two-stage clustering approach is proposed in this study to delineate homogeneous temperature regions. The first stage involves (1) determining the correlation structure between observed temperature over the study area and possible predictors (large-scale atmospheric variables) influencing the temperature, and (2) using this correlation structure as the basis for delineating sites in the study area into clusters. The second stage involves analysis of each cluster to (1) identify potential predictors (large-scale atmospheric variables) influencing temperature at sites in the cluster, and (2) partition the cluster into homogeneous fuzzy temperature regions using the identified potential predictors. Application of the proposed approach to India yielded 28 homogeneous regions, which were demonstrated to be effective when compared with an alternative set of 6 regions previously delineated over the study area. Inter-site cross-correlations of monthly maximum and minimum temperatures in the existing regions were found to be weak and negative for several months, which is undesirable. This problem was not found in the regions delineated using the proposed approach. The utility of the proposed regions in arriving at estimates of potential evapotranspiration for ungauged locations in the study area is also demonstrated.
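The two-stage structure can be sketched compactly, under the assumption of site-wise temperature series and predictor series; the paper's predictor selection and homogeneity testing are not reproduced. Stage 1 clusters sites on their correlations with the predictors (here with scikit-learn's KMeans); stage 2 uses a generic fuzzy c-means update to produce fuzzy regions.

```python
# Minimal sketch of the two-stage approach; all names and sizes illustrative.
import numpy as np
from sklearn.cluster import KMeans

def stage1_clusters(temps, predictors, k):
    """temps: (sites, time); predictors: (vars, time). Feature vector per site
    = correlations of its temperature series with each predictor."""
    T = temps - temps.mean(axis=1, keepdims=True)
    P = predictors - predictors.mean(axis=1, keepdims=True)
    corr = (T @ P.T) / (np.linalg.norm(T, axis=1)[:, None]
                        * np.linalg.norm(P, axis=1)[None, :])
    return corr, KMeans(n_clusters=k, n_init=10).fit_predict(corr)

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Generic fuzzy c-means for stage 2, applied per cluster to its selected predictors."""
    U = np.random.dirichlet(np.ones(c), size=len(X))        # soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, centers
```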
Abstract:
Purpose: A prior-image-based temporally constrained reconstruction (PITCR) algorithm was developed for obtaining accurate temperature maps with better volume coverage and spatial and temporal resolution than other algorithms for highly undersampled data in magnetic resonance (MR) thermometry. Methods: The proposed PITCR approach is an algorithm that gives weight to the prior image and performs accurate reconstruction in a dynamic imaging environment. The PITCR method is compared with the temporally constrained reconstruction (TCR) algorithm using pork muscle data. Results: The PITCR method provides superior performance compared to the TCR approach with highly undersampled data. The proposed approach is computationally more expensive than the TCR approach, but this cost is offset by the advantage of reconstructing from fewer measurements. When reconstructing temperature maps from 16% of the fully sampled data, the PITCR approach was 1.57x slower than the TCR approach, while the root mean square error using PITCR was 0.784, compared to 2.815 with the TCR scheme. Conclusions: The PITCR approach is able to perform more accurate reconstructions of temperature maps than the TCR approach with highly undersampled data in MR-guided high-intensity focused ultrasound.
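The core idea of weighting a prior image in a constrained reconstruction can be sketched as a regularized least-squares problem solved by gradient descent; the actual PITCR cost function, including its temporal constraint term, is in the paper, and the operator, weights, and step rule below are illustrative assumptions.

```python
# Minimal sketch: reconstruct x from undersampled y = A @ x by minimizing
# 0.5*||A x - y||^2 + 0.5*alpha*||x - x_prior||^2 with gradient descent.
import numpy as np

def reconstruct(y, A, x_prior, alpha=0.1, iters=200):
    L = np.linalg.norm(A, 2) ** 2 + alpha   # Lipschitz constant -> safe step size
    x = x_prior.copy()                      # warm start at the prior image
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + alpha * (x - x_prior)
        x -= grad / L
    return x
```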
Abstract:
Clock synchronization is highly desirable in distributed systems, including many applications in the Internet of Things and Humans (IoTH). It improves the efficiency, modularity, and scalability of the system, and optimizes the use of event triggers. For IoTH, Bluetooth Low Energy (BLE) - a subset of the recent Bluetooth v4.0 stack - provides a low-power and loosely coupled mechanism for sensor data collection with ubiquitous units (e.g., smartphones and tablets) carried by humans. This fundamental design paradigm of BLE is enabled by a range of broadcast advertising modes. While its operational benefits are numerous, the lack of a common time reference in the broadcast mode of BLE has been a fundamental limitation. This article presents and describes CheepSync, a time synchronization service for BLE advertisers, especially tailored for applications requiring high time precision on resource-constrained BLE platforms. Designed on top of the existing Bluetooth v4.0 standard, the CheepSync framework utilizes low-level timestamping and comprehensive error compensation mechanisms to overcome uncertainties in message transmission, clock drift, and other system-specific constraints. CheepSync was implemented on custom-designed nRF24Cheep beacon platforms (as broadcasters) and commercial off-the-shelf Android smartphones (as passive listeners). We demonstrate the efficacy of CheepSync through numerous empirical evaluations in a variety of experimental setups, and show that its average (single-hop) time synchronization accuracy is in the 10 µs range.
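At its simplest, the timestamp processing behind such a service amounts to estimating relative clock drift and offset from pairs of low-level transmit/receive timestamps; CheepSync's actual error-compensation pipeline is considerably more involved, and the numbers below are illustrative.

```python
# Minimal sketch: least-squares fit rx ~ drift * tx + offset from timestamp pairs.
import numpy as np

def estimate_clock(tx_ts, rx_ts):
    drift, offset = np.polyfit(tx_ts, rx_ts, 1)   # slope = relative drift
    return drift, offset

tx = np.array([0.0, 1.0, 2.0, 3.0])                         # sender clock (s)
rx = tx * (1 + 40e-6) + 5.0 + np.random.normal(0, 2e-6, 4)  # 40 ppm drift, 5 s offset
drift, offset = estimate_clock(tx, rx)
print(drift * 4.0 + offset)   # predicted receiver time when the sender clock reads 4 s
```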
Abstract:
Cooperative relaying combined with selection exploits spatial diversity to significantly improve the performance of interference-constrained secondary users in an underlay cognitive radio network. We present a novel, optimal relay selection (RS) rule that minimizes the symbol error probability (SEP) of an average-interference-constrained underlay secondary system that uses amplify-and-forward relays. A key point the rule highlights for the first time is that, under the average interference constraint, the signal-to-interference-plus-noise ratio (SINR) of the direct source-to-destination (SD) link affects the choice of the optimal relay. Furthermore, as this SINR increases, the odds that no relay transmits increase. We also propose a simpler, more practical, and near-optimal variant of the optimal rule that requires just one bit of feedback about the state of the SD link to the relays. Compared to the SD-unaware ad hoc RS rules proposed in the literature, the proposed rules markedly reduce the SEP, by up to two orders of magnitude.
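For intuition only, here is a generic skeleton of SINR-based amplify-and-forward relay selection with a no-relay option; the paper's optimal rule additionally accounts for the average interference constraint and the precise way the SD link's SINR enters, which this sketch does not reproduce.

```python
# Minimal sketch: pick the relay with the best end-to-end AF SINR, or fall
# back to the direct link when it already does better. Values illustrative.
def af_e2e_sinr(sinr_sr, sinr_rd):
    # Standard end-to-end SINR of a two-hop amplify-and-forward link.
    return (sinr_sr * sinr_rd) / (sinr_sr + sinr_rd + 1.0)

def select_relay(sinr_sd, relays):
    """relays: dict name -> (source-relay SINR, relay-destination SINR).
    Returns None when the direct SD link is preferred."""
    best = max(relays, key=lambda k: af_e2e_sinr(*relays[k]))
    return best if af_e2e_sinr(*relays[best]) > sinr_sd else None

print(select_relay(2.0, {"r1": (8.0, 6.0), "r2": (20.0, 3.0)}))  # 'r1'
```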
Abstract:
Previous investigations have unveiled size effects in the strength of metallic foams under simple shear - the shear strength increases with diminishing specimen size, a phenomenon similar to that shown by Fleck et al. (Acta Metall. Mater., 1994, Vol. 42, p. 475) in torsion tests of copper wires of various radii. In this study, an experimental investigation of the constrained deformation of a foam layer sandwiched between two steel plates has been conducted. The sandwiched plates are subjected to combined shear and normal loading. It is found that the measured yield loci of metallic foams in normal-shear stress space corresponding to various foam layer thicknesses are self-similar in shape, but their size increases as the foam layer thickness decreases. Moreover, the strain profiles across the foam layer thickness are parabolic instead of uniform: their values increase from the interfaces between the foam layer and the steel plates and reach a maximum in the middle of the foam layer, yielding boundary layers adjacent to the steel plates. To further explore the origin of the observed size effects, micromechanics models have been developed, with the foam layer represented by regular and irregular honeycombs. Although the regular honeycomb model is seen to underestimate the size effects, the irregular honeycomb model faithfully captures the observed features of the constrained deformation of metallic foams.
Abstract:
The constrained deformation of an aluminium alloy foam sandwiched between steel substrates has been investigated. The sandwich plates are subjected to through-thickness shear and normal loading, and it is found that the face sheets constrain the foam against plastic deformation and give rise to a size effect: the yield strength increases with diminishing thickness of the foam layer. The strain distribution across the foam core has been measured by a visual strain-mapping technique, and a boundary layer of reduced straining was observed adjacent to the face sheets. The deformation response of the aluminium foam layer was modelled by elastic-plastic finite element analysis of regular and irregular two-dimensional honeycombs bonded to rigid face sheets; in the simulations, the rotation of the boundary nodes of the cell-wall beam elements was set to zero to simulate full constraint from the rigid face sheets. It is found that the regular honeycomb underestimates the size effect, whereas the irregular honeycomb provides a faithful representation of both the observed size effect and the observed strain profile through the foam layer. Additionally, a compressible version of the Fleck-Hutchinson strain gradient theory was used to predict the size effect; by identifying the cell edge length as the relevant microstructural length scale, the strain gradient model is able to reproduce the observed strain profiles across the layer and the thickness dependence of strength.
Abstract:
A hybrid method of continuum and particle dynamics is developed for micro- and nano-fluidics, in which the fluid is described by molecular dynamics (MD) in one domain and by the Navier-Stokes (NS) equations in another. To ensure continuity of the momentum flux, the continuum and molecular dynamics in the overlap domain are coupled through a constrained particle dynamics. The constrained particle dynamics is constructed with a virtual damping force and a virtual added-mass force. Sudden-start Couette flows with either no-slip or slip boundary conditions are used to test the hybrid method. The results obtained are shown to agree quantitatively with the analytical solutions under no-slip boundary conditions and with full MD simulations under slip boundary conditions.
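A toy version of the coupling idea in the overlap region is sketched below: the mean particle velocity is relaxed toward the continuum velocity by a damping-like constraint force each step. The paper's scheme also involves a virtual added-mass force and momentum-flux bookkeeping, which are omitted; all parameters here are illustrative.

```python
# Minimal sketch: constrain the mean velocity of MD particles in an overlap
# cell toward the Navier-Stokes velocity via a damping-like force.
import numpy as np

def constrained_step(v, u_continuum, dt, xi=5.0):
    """v: (N,) particle velocities; u_continuum: target mean velocity."""
    f = -xi * (v.mean() - u_continuum)   # identical corrective force on all particles
    return v + dt * f                    # the usual MD forces would be added here

v = np.random.normal(1.0, 0.3, 100)      # particles drifting at ~1.0
for _ in range(200):
    v = constrained_step(v, u_continuum=0.5, dt=0.01)
print(v.mean())                          # ~0.5 after relaxation
```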