279 results for "Computationally efficient"
Abstract:
The clutter-rejection properties of compact f.s.k. bursts with amplitude modulation are investigated. A procedure for computer-aided design of such signals is given. The loss of clutter performance on constraining the individual pulse amplitudes to be equal is evaluated.
Abstract:
Tuberculosis continues to be a major health challenge, warranting newer strategies for therapeutic intervention and newer approaches to discover them. Here, we report the identification of efficient metabolism disruption strategies by analysis of a reactome network. Protein-protein dependencies at a genome scale are derived from the curated metabolic network, from which insights into the nature and extent of inter-protein and inter-pathway dependencies have been obtained. A functional distance matrix, and a nearness index subsequently derived from it, help in understanding how the influence of a given protein pervades the metabolic network. Thus, the nearness index can be viewed as a metabolic disruptability index, which suggests possible strategies for achieving maximal metabolic disruption by inhibiting the least number of proteins. A greedy approach has been used to identify the most influential singleton and to combine it with the other most pervasive proteins to obtain highly influential pairs, triplets and quadruplets. The effect of deleting these combinations on cellular metabolism has been studied by flux balance analysis. An obvious outcome of this study is the rational identification of drug targets that efficiently bring down mycobacterial metabolism.
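The greedy selection described in this abstract can be illustrated with a small sketch. The code below is a hypothetical, minimal illustration (the protein names and the `influence` map are invented; the paper's nearness index and flux balance validation are not reproduced): it repeatedly picks the protein whose influence set covers the most of the network not yet covered, yielding candidate singletons, pairs, and so on.

```python
# Minimal sketch of a greedy search for maximally "pervasive" protein sets.
# The influence map below is hypothetical; in the paper this role is played
# by a nearness index derived from protein-protein dependencies.
influence = {
    "P1": {"P1", "P2", "P3", "P7"},
    "P2": {"P2", "P4"},
    "P3": {"P3", "P5", "P6"},
    "P4": {"P4", "P7", "P8"},
}

def greedy_disruptors(influence, k):
    """Pick up to k proteins that together influence the largest set."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(influence, key=lambda p: len(influence[p] - covered))
        if not influence[best] - covered:
            break  # no further gain possible
        chosen.append(best)
        covered |= influence[best]
    return chosen, covered

print(greedy_disruptors(influence, 2))   # a candidate pair of targets
```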
Abstract:
Aromatic aldehydes and aryl isocyanates do not react at room temperature. However, we have shown for the first time that, in the presence of catalytic amounts of a group (IV) n-butoxide, they undergo metathesis at room temperature to produce imines with the extrusion of carbon dioxide. The mechanism of action has been investigated by a study of stoichiometric reactions. The insertion of aryl isocyanates into the metal n-butoxide occurs very rapidly. Reaction of the insertion product with the aldehyde is responsible for the metathesis. Among the n-butoxides of the group (IV) metals, Ti(O-n-Bu)4 (8aTi) was found to be more efficient than Zr(O-n-Bu)4 (8aZr) and Hf(O-n-Bu)4 (8aHf) in carrying out the metathesis. The surprisingly large difference in the metathetic activity of these alkoxides has been probed computationally using the model complexes Ti(OMe)4 (8bTi), Zr(OMe)4 (8bZr) and Hf(OMe)4 (8bHf) at the B3LYP/LANL2DZ level of theory. These studies indicate that the insertion products formed by Zr and Hf are far more stable than that formed by Ti, which makes the subsequent reaction of the Zr and Hf complexes unfavorable.
Abstract:
Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A piconet is basically a collection of slaves controlled by a master. A scatternet, on the other hand, is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a scheduling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in Bluetooth piconets and scatternets. We propose a novel algorithm for scheduling slots to slaves for both piconets and scatternets using multi-layered parameterized policies. Our scheduling scheme works with real data and obtains an optimal feedback policy within prescribed parameterized classes of policies by using an efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. We show the convergence of our algorithm to an optimal multi-layered policy. We also propose novel polling schemes for intra- and inter-piconet scheduling that are seen to perform well. We present an extensive set of simulation results and performance comparisons with existing scheduling algorithms. Our results indicate that the proposed scheduling algorithm performs better overall than the existing algorithms over a wide range of experiments, for both piconets (Das et al. in INFOCOM, pp. 591–600, 2001; Lapeyrie and Turletti in INFOCOM conference proceedings, San Francisco, US, 2003; Shreedhar and Varghese in SIGCOMM, pp. 231–242, 1995) and scatternets (Har-Shai et al. in OPNETWORK, 2002; Saha and Matsumot in AICT/ICIW, 2006; Tan and Guttag in The 27th annual IEEE conference on local computer networks (LCN), Tampa, 2002). Our studies also confirm that the proposed scheme achieves high throughput and low packet delays with reasonable fairness among all the connections.
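The optimization step named here is simultaneous perturbation stochastic approximation. As a rough illustration only (a plain one-timescale SPSA, not the authors' two-timescale, multi-layered scheme), the sketch below updates a parameter vector from two noisy evaluations per iteration; `evaluate` stands in for any simulated delay/throughput cost and is an assumption of this sketch.

```python
import numpy as np

def spsa_minimize(evaluate, theta, iters=200, a=0.1, c=0.1, seed=0):
    """Plain one-timescale SPSA: two noisy evaluations per gradient estimate."""
    rng = np.random.default_rng(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101          # standard gain schedules
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1 perturbation
        g_hat = (evaluate(theta + ck * delta) -
                 evaluate(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Toy usage: minimize a noisy quadratic standing in for a scheduling cost.
cost = lambda th: np.sum((th - 3.0) ** 2) + np.random.normal(scale=0.01)
print(spsa_minimize(cost, np.zeros(4)))
```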
Abstract:
Bluetooth is an emerging standard for short-range, low-cost and low-power wireless networks. The Bluetooth MAC is a generic polling-based protocol, in which a central Bluetooth unit (the master) determines channel access for all other nodes (slaves) in the network (piconet). An important problem in Bluetooth is the design of efficient scheduling protocols. This paper proposes a polling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in a Bluetooth piconet. We present an extensive set of simulation results and performance comparisons with two important existing algorithms. Our results indicate that the proposed scheduling algorithm outperforms the Round Robin scheduling algorithm by more than 40% in all cases tried. Our study also confirms that the proposed policy achieves higher throughput and lower packet delays with reasonable fairness among all the connections.
Abstract:
We derive expressions for the convolution multiplication properties of the discrete cosine transform II (DCT II), starting from equivalent discrete Fourier transform (DFT) representations. Using these expressions, a method for implementing linear filtering through block convolution in the DCT II domain is presented. For a nonsymmetric impulse response, an additional discrete sine transform II (DST II) is required to implement the filter in the DCT II domain, whereas for a symmetric impulse response no additional transform is required. Comparison with a recently proposed circular convolution technique in the DCT II domain shows that the proposed method is computationally more efficient.
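The starting point, the equivalence between the DCT II and a DFT of a symmetrically extended sequence, can be checked numerically. The snippet below is only a sanity check of that representation (it verifies the identity against SciPy's unnormalized DCT II; it does not reproduce the paper's block-convolution filtering scheme).

```python
import numpy as np
from scipy.fft import dct

# DCT II of x expressed through the DFT of the symmetric extension [x, reversed x].
x = np.random.rand(8)
N = len(x)
Y = np.fft.fft(np.concatenate([x, x[::-1]]))[:N]          # DFT of the length-2N extension
dct2_via_dft = np.real(np.exp(-1j * np.pi * np.arange(N) / (2 * N)) * Y)

print(np.allclose(dct2_via_dft, dct(x, type=2)))           # True: both give the DCT II
```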
Abstract:
We explore an isoparametric interpolation of total quaternion for geometrically consistent, strain-objective and path-independent finite element solutions of the geometrically exact beam. This interpolation is a variant of the broader class known as slerp. The equivalence between the proposed interpolation and that of relative rotation is shown without any recourse to local bijection between quaternions and rotations. We show that, for a two-noded beam element, the use of relative rotation is not mandatory for attaining consistency cum objectivity and an appropriate interpolation of total rotation variables is sufficient. The interpolation of total quaternion, which is computationally more efficient than the one based on local rotations, converts nodal rotation vectors to quaternions and interpolates them in a manner consistent with the character of the rotation manifold. This interpolation, unlike the additive interpolation of total rotation, corresponds to a geodesic on the rotation manifold. For beam elements with more than two nodes, however, a consistent extension of the proposed quaternion interpolation is difficult. Alternatively, a quaternion-based procedure involving interpolation of relative rotations is proposed for such higher order elements. We also briefly discuss a strategy for the removal of a possible singularity in the interpolation of quaternions, proposed in [I. Romero, The interpolation of rotations and its application to finite element models of geometrically exact rods, Comput. Mech. 34 (2004) 121–133]. The strain-objectivity and path-independence of the solutions are justified theoretically and then demonstrated through numerical experiments. This study, being focused only on the interpolation of rotations, uses a standard finite element discretization, as adopted by Simo and Vu-Quoc [J.C. Simo, L. Vu-Quoc, A three-dimensional finite rod model part II: computational aspects, Comput. Methods Appl. Mech. Engrg. 58 (1986) 79–116]. The rotation update is achieved via quaternion multiplication followed by the extraction of the rotation vector. Nodal rotations are stored as rotation vectors and no secondary storage is required.
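For readers unfamiliar with slerp, the sketch below shows the textbook spherical linear interpolation of two unit quaternions, with a normalized-lerp fallback for nearly parallel inputs. It assumes a scalar-first quaternion layout and is only the generic formula, not the isoparametric total-quaternion interpolation developed in the paper.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation of unit quaternions q0, q1 at t in [0, 1]."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    d = np.dot(q0, q1)
    if d < 0.0:              # take the shorter arc on the unit 3-sphere
        q1, d = -q1, -d
    if d > 0.9995:           # nearly parallel: fall back to normalized lerp
        q = (1.0 - t) * q0 + t * q1
        return q / np.linalg.norm(q)
    omega = np.arccos(d)
    return (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

# Identity to a 90-degree rotation about z (scalar-first convention), halfway:
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(slerp(qa, qb, 0.5))    # ~45-degree rotation about z
```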
Abstract:
The paper presents a novel slicing based method for computation of volume fractions in multi-material solids given as a B-rep whose faces are triangulated and shared by either one or two materials. Such objects occur naturally in geoscience applications and the said computation is necessary for property estimation problems and iterative forward modeling. Each facet in the model is cut by the planes delineating the given grid structure or grid cells. The method, instead of classifying the points or cells with respect to the solid, exploits the convexity of triangles and the simple axis-oriented disposition of the cutting surfaces to construct a novel intermediate space enumeration representation called slice-representation, from which both the cell containment test and the volume-fraction computation are done easily. Cartesian and cylindrical grids with uniform and non-uniform spacings have been dealt with in this paper. After slicing, each triangle contributes polygonal facets, with potential elliptical edges, to the grid cells through which it passes. The volume fractions of different materials in a grid cell that is in interaction with the material interfaces are obtained by accumulating the volume contributions computed from each facet in the grid cell. The method is fast, accurate, robust and memory efficient. Examples illustrating the method and performance are included in the paper.
Abstract:
We propose a simple and energy-efficient distributed change detection scheme for sensor networks based on Page's parametric CUSUM algorithm. The sensor observations are IID over time and across the sensors, conditioned on the change variable. Each sensor runs CUSUM and transmits only when its CUSUM statistic is above some threshold. The transmissions from the sensors are fused at the physical layer. The channel is modeled as a multiple access channel (MAC) corrupted with IID noise. The fusion center, which is the global decision maker, performs another CUSUM to detect the change. We provide analysis and simulation results for our scheme and compare its performance with an existing scheme that ensures energy efficiency via optimal power selection.
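As a concrete illustration of the per-sensor step, the sketch below runs Page's CUSUM for a mean shift in Gaussian noise and flags the samples at which a sensor would transmit. The densities and threshold are illustrative assumptions; the MAC fusion and the fusion-center CUSUM of the proposed scheme are not modeled here.

```python
import numpy as np

def cusum_alarms(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Page's CUSUM for a Gaussian mean shift mu0 -> mu1; returns (statistic, alarm flags)."""
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)   # log-likelihood ratio per sample
    W, alarms = np.zeros(len(x)), np.zeros(len(x), dtype=bool)
    w = 0.0
    for k, l in enumerate(llr):
        w = max(0.0, w + l)                   # CUSUM recursion
        W[k], alarms[k] = w, w > threshold    # transmit only above the threshold
    return W, alarms

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])  # change at sample 200
W, alarms = cusum_alarms(x)
print("first transmission at sample", int(np.argmax(alarms)))
```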
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. Scheduling sensor activity with the optimum schedules obtained by the proposed algorithm is shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime.
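The activation-scheduling step can be illustrated as a small linear program: given a family of covers, choose how long to activate each cover so that total sensing lifetime is maximized while no sensor exceeds its energy budget. The cover sets and battery values below are hypothetical, and the paper's actual formulation over information covers may differ.

```python
import numpy as np
from scipy.optimize import linprog

covers = [{0, 1}, {1, 2}, {0, 2}]      # hypothetical full-area covers (sensor indices)
battery = np.array([1.0, 1.0, 1.0])    # per-sensor energy budget, in active-time units

# Variables t_c >= 0: activation time of each cover.
# Maximize sum(t_c)  <=>  minimize -sum(t_c),
# subject to: for each sensor s, the total time of covers containing s <= battery[s].
A = np.array([[1.0 if s in c else 0.0 for c in covers] for s in range(len(battery))])
res = linprog(c=-np.ones(len(covers)), A_ub=A, b_ub=battery, bounds=(0, None))

print("activation times:", res.x, " total lifetime:", -res.fun)
```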
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors which collectively can sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves longer network life compared to the set of covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
Abstract:
In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates the prediction-error image, to which the wavelet/sub-band coding algorithm can be applied to obtain efficient compression. In the quantization phase, we use a modified SPIHT algorithm to achieve efficiency in memory requirements. The memory constraint plays a vital role in wireless and bandwidth-limited applications. A single reusable list is used instead of the three continuously growing linked lists used in SPIHT. The method is error resilient. The performance is measured in terms of PSNR and memory requirements. The algorithm shows good compression performance and significant savings in memory.
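The prediction step can be sketched in a few lines: each interior pixel is predicted from the average of its four neighbors and only the prediction error is kept for sub-band coding. This is a simplified illustration of four-neighbor prediction in general; the paper's hierarchical tree structure and the modified SPIHT list management are not shown.

```python
import numpy as np

def four_neighbour_error(img):
    """Predict each interior pixel from its 4-neighbour mean; return the error image."""
    img = img.astype(np.int32)
    pred = img.copy()                       # borders are left unpredicted
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return img - pred                       # error image fed to the wavelet coder

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
err = four_neighbour_error(img)
print(err.max(), err.min())                 # residuals are small on smooth images
```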
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit and is then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, with an appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in most of the remaining cases "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
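For orientation, the exhaustive DP that this discussion starts from can be sketched over a handful of relations: enumerate subsets bottom-up and keep the cheapest plan for each, under a deliberately toy cost model. The relation names, cardinalities, and cost formula are assumptions of this sketch; it ignores join graphs, physical operators, and interesting orders, and it is not the IDP variant.

```python
from itertools import combinations

# Toy inputs: base-relation cardinalities (hypothetical).
card = {"A": 1000, "B": 100, "C": 10, "D": 50}

def best_plans(relations):
    """Classic bottom-up DP over subsets: best (cost, plan) for every subset."""
    best = {frozenset([r]): (0.0, r) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            out = 1.0
            for r in subset:
                out *= card[r]                 # toy output-cardinality estimate
            for k in range(1, size):
                for left in map(frozenset, combinations(subset, k)):
                    right = subset - left
                    lc, lp = best[left]
                    rc, rp = best[right]
                    cost = lc + rc + out / 10.0   # children's cost + scaled output size
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, f"({lp} JOIN {rp})")
    return best

plans = best_plans(list(card))
print(plans[frozenset(card)])   # cheapest plan (under the toy cost) for all four relations
```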
Abstract:
Presented here is the two-phase thermodynamic (2PT) model for the calculation of energy and entropy of molecular fluids from the trajectory of molecular dynamics (MD) simulations. In this method, the density of states (DoS) functions (including the normal modes of translation, rotation, and intramolecular vibrational motions) are determined from the Fourier transform of the corresponding velocity autocorrelation functions. A fluidicity parameter (f), extracted from the thermodynamic state of the system derived from the same MD, is used to partition the translation and rotation modes into a diffusive, gas-like component (with 3Nf degrees of freedom) and a nondiffusive, solid-like component. The thermodynamic properties, including the absolute value of entropy, are then obtained by applying quantum statistics to the solid component and hard sphere/rigid rotor thermodynamics to the gas component. The 2PT method produces exact thermodynamic properties of the system in two limiting states: the nondiffusive solid state (where the fluidicity is zero) and the ideal gas state (where the fluidicity becomes unity). We examine the 2PT entropy for various water models (F3C, SPC, SPC/E, TIP3P, and TIP4P-Ew) at ambient conditions and find good agreement with literature results obtained using other simulation techniques. We also validate the entropy of water in the liquid and vapor phases along the vapor-liquid equilibrium curve from the triple point to the critical point. We show that this method produces converged liquid phase entropy in tens of picoseconds, making it an efficient means for extracting thermodynamic properties from MD simulations.
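The first step of the method, obtaining a density of states from the velocity autocorrelation, can be sketched in a few lines. The code below is schematic: it assumes a single species of unit mass and a velocity array of shape (frames, atoms, 3), and it omits the 2PT normalization constants and the two-phase partition itself.

```python
import numpy as np

def density_of_states(vel, dt):
    """Schematic DoS: Fourier transform of the velocity autocorrelation function.

    vel: velocities with shape (n_frames, n_atoms, 3); dt: timestep.
    Returns (frequencies, spectrum), up to constant prefactors.
    """
    n = vel.shape[0]
    # velocity autocorrelation via FFT (Wiener-Khinchin), summed over atoms and components
    V = np.fft.rfft(vel, n=2 * n, axis=0)
    acf = np.fft.irfft(V * np.conj(V), axis=0)[:n].sum(axis=(1, 2))
    acf /= np.arange(n, 0, -1)                # unbiased normalization per lag
    spectrum = np.abs(np.fft.rfft(acf)) * dt  # DoS up to constant prefactors
    freqs = np.fft.rfftfreq(len(acf), d=dt)
    return freqs, spectrum

# Toy usage: random velocities standing in for an MD trajectory
vel = np.random.normal(size=(1024, 10, 3))
f, S = density_of_states(vel, dt=1e-3)
print(f.shape, S.shape)
```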
Abstract:
In many applications of wireless ad hoc networks, wireless nodes are owned by rational and intelligent users. In this paper, we call nodes selfish if they are owned by independent users and their only objective is to maximize their individual goals. In such situations, it may not be possible to use the existing protocols for wireless ad hoc networks, as these protocols assume that nodes follow the prescribed protocol without deviation. Stimulating cooperation among these nodes is an interesting and challenging problem. Providing incentives and pricing the transactions are well-known approaches to stimulate cooperation. In this paper, we present a game-theoretic framework for a truthful broadcast protocol and a strategy-proof pricing mechanism called the Immediate Predecessor Node Pricing Mechanism (IPNPM). Strategy-proof here means that truthful revelation of cost is a weakly dominant strategy (in game-theoretic terms) for each node. In order to steer our mechanism-design approach towards practical implementation, we compute the payments to nodes using a distributed algorithm. We also propose a new protocol for broadcast in wireless ad hoc networks with selfish nodes based on IPNPM. The features of the proposed broadcast protocol are reliability and a significantly reduced number of packet forwards compared to the number of network nodes, which in turn leads to less system-wide power consumption to broadcast a single packet. Our simulation results show the efficacy of the proposed broadcast protocol.