54 results for Reciprocal Value
Abstract:
In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a measure appropriate for the network. This problem is relevant in many settings, such as the analysis of co-authorship networks, diffusion of information, and viral marketing. In most situations, however, this problem turns out to be NP-hard. Existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value for efficiently computing an approximate solution to this problem. The proposed algorithm does not rely on sub-modularity of the underlying objective function and is hence a general approach. We demonstrate the efficacy of the algorithm on a co-authorship data set from e-print arXiv (www.arxiv.org) containing 8361 authors.
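The core idea of Shapley-value-based node ranking can be sketched via Monte Carlo sampling over random permutations. The sketch below is a minimal illustration, not the paper's actual algorithm: the graph format and the characteristic function (neighborhood coverage, standing in for whatever influence measure the paper uses) are assumptions.

```python
import random
from collections import defaultdict

def shapley_topk(graph, k, num_perms=200, seed=0):
    """Approximate Shapley values of nodes for a coverage game; return top-k.

    graph: dict mapping node -> set of neighbours (hypothetical input format).
    The characteristic function v(S) counts the nodes covered by S (S plus
    its neighbours) -- a stand-in for an influence measure.
    """
    rng = random.Random(seed)
    nodes = list(graph)
    shapley = defaultdict(float)
    for _ in range(num_perms):
        rng.shuffle(nodes)
        covered = set()
        for node in nodes:
            # Marginal contribution of this node given its predecessors.
            gain = len(({node} | graph[node]) - covered)
            shapley[node] += gain
            covered |= {node} | graph[node]
    for node in shapley:
        shapley[node] /= num_perms
    return sorted(shapley, key=shapley.get, reverse=True)[:k]
```

Because each permutation costs one pass over the nodes, this avoids the exponential exact Shapley computation while requiring nothing (such as sub-modularity) of the characteristic function.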
Abstract:
In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. In this paper, we view the problem of procurement network formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
Abstract:
An exact classical theory of the motion of a point dipole in a meson field is given which takes into account the effects of the reaction of the emitted meson field. The meson field is characterized by a constant $\chi = \mu/\hbar$ of the dimensions of a reciprocal length, $\mu$ being the meson mass, and as $\chi \rightarrow 0$ the theory of this paper goes over continuously into the theory of the preceding paper for the motion of a spinning particle in a Maxwell field. The mass of the particle and the spin angular momentum are arbitrary mechanical constants. The field contributes a small finite addition to the mass, and a negative moment of inertia about an axis perpendicular to the spin axis. A cross-section (formula (88 a)) is given for the scattering of transversely polarized neutral mesons by the rotation of the spin of the neutron or proton, which should be valid up to energies of $10^{9}$ eV. For low energies $E$ it agrees completely with the old quantum cross-section, having a dependence on energy proportional to $p^{4}/E^{2}$ ($p$ being the meson momentum). At higher energies it deviates completely from the quantum cross-section, which it supersedes by taking into account the effects of radiation reaction on the rotation of the spin. The cross-section is a maximum at $E \sim 3.5\mu$, its value at this point being $3 \times 10^{-26}$ cm$^{2}$, after which it decreases rapidly, becoming proportional to $E^{-2}$ at high energies. Thus the quantum theory of the interaction of neutrons with mesons goes wrong for $E \gtrsim 3\mu$. The scattering of longitudinally polarized mesons is due to the translational but not the rotational motion of the dipole and is at least twenty thousand times smaller.
With the assumption previously made by the present author that the heavy particles may exist in states of any integral charge, and in particular that protons of charge $2e$ and $-e$ may occur in nature, the above results can be applied to charged mesons. Thus transversely polarized mesons should undergo a very big scattering and consequent absorption at energies near $3.5\mu$. Hence the energy spectrum of transversely polarized mesons should fall off rapidly for energies below about $3\mu$. Scattering plays a relatively unimportant part in the absorption of longitudinally polarized mesons, and they are therefore much more penetrating. The theory does not lead to Heisenberg explosions and multiple processes.
Abstract:
Notched three-point bend (TPB) specimens made with plain concrete and cement mortar were tested under crack mouth opening displacement (CMOD) control at a rate of 0.0004 mm/s, and the acoustic emissions (AE) released were recorded simultaneously during the experiments. Amplitude distribution analysis of the AE released during concrete fracture was carried out to study the development of the fracture process in concrete and mortar specimens. The slope of the log-linear frequency-amplitude distribution of AE is known as the AE-based b-value. The AE-based b-value was computed in terms of the physical process of the time-varying applied load using the cumulative frequency distribution (Gutenberg-Richter relationship) and the discrete frequency distribution (Aki's method) of AE released during concrete fracture. AE characteristics of plain concrete and cement mortar were studied and discussed, and it was observed that AE-based b-value analysis serves as a tool to identify damage in concrete structural members. (C) 2012 Elsevier Ltd. All rights reserved.
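Aki's method, mentioned above, is a maximum-likelihood estimator of the b-value from event magnitudes. A minimal sketch follows, assuming AE hit amplitudes are supplied in decibels and converted to magnitude-like values as $A_{dB}/20$, a common convention in AE b-value studies (the input format and threshold handling here are assumptions, not the paper's exact procedure):

```python
import math

def aki_b_value(amplitudes_db, threshold_db):
    """Maximum-likelihood (Aki) estimate of the AE-based b-value.

    amplitudes_db: AE hit amplitudes in decibels (hypothetical input).
    threshold_db: detection threshold; hits below it are discarded.
    b = log10(e) / (mean magnitude above threshold - threshold magnitude).
    """
    mags = [a / 20.0 for a in amplitudes_db if a >= threshold_db]
    m_min = threshold_db / 20.0
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)
```

Unlike a least-squares fit to the Gutenberg-Richter line, this estimate uses every event individually and needs no amplitude binning, which is why the two routes (cumulative fit vs. Aki's method) can give slightly different b-values on the same AE record.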
Abstract:
In this article, we address stochastic differential games of mixed type with both control and stopping times. Under standard assumptions, we show that the value of the game can be characterized as the unique viscosity solution of corresponding Hamilton-Jacobi-Isaacs (HJI) variational inequalities.
Abstract:
We develop a quadratic C degrees interior penalty method for linear fourth order boundary value problems with essential and natural boundary conditions of the Cahn-Hilliard type. Both a priori and a posteriori error estimates are derived. The performance of the method is illustrated by numerical experiments.
Abstract:
We investigate the problem of influence limitation in the presence of competing campaigns in a social network. Given a negative campaign which starts propagating from a specified source, and a positive/counter campaign that is initiated, after a certain time delay, to limit the influence or spread of misinformation by the negative campaign, we are interested in finding the top-k influential nodes at which the positive campaign may be triggered. This problem has numerous applications in situations such as limiting the propagation of rumors, arresting the spread of a virus through inoculation, initiating a counter-campaign against malicious propaganda, etc. The influence function for the generic influence limitation problem is non-submodular. Restricted versions of the influence limitation problem reported in the literature assume submodularity of the influence function and do not capture the problem in a realistic setting. In this paper, we propose a novel computational approach for the influence limitation problem based on the Shapley value, a solution concept in cooperative game theory. Our approach works equally effectively for both submodular and non-submodular influence functions. Experiments on standard real-world social network datasets reveal that the proposed approach outperforms existing heuristics in the literature. As a non-trivial extension, we also address the problem of influence limitation in the presence of multiple competing campaigns.
Abstract:
This paper proposes an algorithm for joint data detection and tracking of the dominant singular mode of a time-varying channel at the transmitter and receiver of a time division duplex multiple input multiple output beamforming system. The proposed method is a modified expectation maximization algorithm which utilizes an initial estimate to blindly track the dominant modes of the channel at the transmitter and the receiver, and simultaneously detects the unknown data. Furthermore, the estimates are constrained to lie within a confidence interval of the previous estimate in order to improve tracking performance and mitigate the effect of error propagation. Monte Carlo simulation results of the symbol error rate and the mean square inner product between the estimated and the true singular vector are plotted to show the performance benefits offered by the proposed method compared to existing techniques.
Abstract:
Subsurface lithology and seismic site classification of the Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were done by carrying out 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and drilling 23 boreholes up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole records (depth >150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification. Deeper boreholes in this paper refer to those with depth over 150 m. These records show the presence of clay mixed with sand and Kankar at some locations to a depth of 150 m, followed by layers of sand, clay, and Kankar up to 400 m. Based on the available details, shallow and deep cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing. Measured SWV and N-SPT values for the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N-30) and SWV (V-s(30)) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the entire study area falls into site classes C and D based on V-s(30), and site classes D and E based on N-30. The issue of larger amplification during future seismic events is highlighted for the major part of the study area that comes under site classes D and E. Also, the mismatch of site classes based on N-30 and V-s(30) raises the question of the suitability of the NEHRP classification system for the study region.
Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification and at correlating N-SPT and SWV in the Indo-Gangetic Basin.
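The 30 m average shear wave velocity underlying the V-s(30) classification is the standard depth-weighted (travel-time) average. A minimal sketch, assuming a layered profile given as (thickness, velocity) pairs and the published NEHRP V-s(30) class boundaries:

```python
def vs30(layers):
    """Depth-weighted average shear wave velocity over the top 30 m.

    layers: list of (thickness_m, vs_m_per_s) tuples from the surface down
    (hypothetical input format). Vs30 = 30 / sum(d_i / v_i), summing layer
    travel times over the top 30 m.
    """
    remaining = 30.0
    travel_time = 0.0
    for thickness, vs in layers:
        d = min(thickness, remaining)  # clip the last layer at 30 m depth
        travel_time += d / vs
        remaining -= d
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

def nehrp_site_class(vs30_value):
    """NEHRP site class from Vs30 (boundaries 180, 360, 760, 1500 m/s)."""
    if vs30_value > 1500:
        return "A"
    if vs30_value > 760:
        return "B"
    if vs30_value > 360:
        return "C"
    if vs30_value > 180:
        return "D"
    return "E"
```

Because the average is harmonic rather than arithmetic, a single soft near-surface layer pulls Vs30 down sharply, which is one reason N-30 and V-s(30) classifications of the same site can disagree, as the abstract notes.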
Abstract:
Fast and efficient channel estimation is key to achieving high data rate performance in mobile and vehicular communication systems, where the channel is fast time-varying. To this end, this work proposes and optimizes channel-dependent training schemes for reciprocal Multiple-Input Multiple-Output (MIMO) channels with beamforming (BF) at the transmitter and receiver. First, assuming that Channel State Information (CSI) is available at the receiver, a channel-dependent Reverse Channel Training (RCT) signal is proposed that enables efficient estimation of the BF vector at the transmitter with a minimum training duration of only one symbol. In contrast, conventional orthogonal training requires a minimum training duration equal to the number of receive antennas. A tight approximation to the capacity lower bound on the system is derived, which is used as a performance metric to optimize the parameters of the RCT. Next, assuming that CSI is available at the transmitter, a channel-dependent forward-link training signal is proposed and its power and duration are optimized with respect to an approximate capacity lower bound. Monte Carlo simulations illustrate the significant performance improvement offered by the proposed channel-dependent training schemes over the existing channel-agnostic orthogonal training schemes.
Abstract:
A necessary step for the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms exist in the literature for binarizing a scanned document. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time consuming. To automate the selection of the best segmented document, we either need the ground truth of the document or an evaluation metric. If ground truth is available, then precision and recall can be used to choose the best binarized document. But what if ground truth is not available? Can we come up with a metric that evaluates these binarized documents? We therefore propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the binarized document that is closest to the ground truth of the document.
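The ground-truth-based evaluation route mentioned above (precision and recall over binarized pixels) can be sketched directly; note this is the baseline comparison, not the paper's eigenvalue-decomposition metric, and the 0/1 image format is an assumption:

```python
def precision_recall(pred, truth):
    """Pixel-wise precision and recall for a binarized document image.

    pred, truth: 2-D lists of 0/1 values (1 = foreground/ink). Precision is
    the fraction of predicted ink pixels that are truly ink; recall is the
    fraction of true ink pixels that were predicted.
    """
    tp = fp = fn = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            if p and t:
                tp += 1          # correctly detected ink
            elif p and not t:
                fp += 1          # spurious ink (noise kept by binarizer)
            elif t and not p:
                fn += 1          # missed ink (stroke erased by binarizer)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Ranking the candidate binarizations by, for example, the F-measure of these two quantities selects the output closest to the ground truth; the point of the abstract is to achieve a comparable ranking when no ground truth exists.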