105 results for Anchoring heuristic
Abstract:
The possibility or impossibility of separating the particle and electrode interactions is discussed in the wider context of the effects of any two interaction potentials on the equation of state. The involved nature of the dependence of pressure on two individually definable forces is illustrated through the Percus-Yevick results for adhesive hard spheres. An alternative form of the adsorption isotherm is given to bring home the intimate relationship between the actual equation of state and the free energy of adsorption. The thermodynamic consequences of congruence with respect to E (or q), as reflected in the linear plots of q (or E) vs. θ, are well known, and the mathematical consequences of simultaneous congruence have been pointed out recently. In this paper, the physical nature of the congruence hypothesis is revealed; in passing, "pseudo-congruence" is also discussed. It is emphasised that the problem is no less ambiguous with regard to modelling the particle/particle interaction. The ad hoc nature of our dependence on the available equations of state is emphasised through a discussion of the HFL theory. Finally, a heuristic method for modelling ΔG mathematically, incorporating its behaviour at saturation coverages, is advanced. The more interesting aspects of this approach, which generalises almost all isotherms known hitherto, are sketched.
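As a minimal worked example of the congruence idea (the Frumkin isotherm is a standard textbook illustration and an assumption here, not the generalised ΔG model advanced in the paper), congruence holds when the free energy of adsorption separates into a potential-dependent part and a coverage-dependent part:

$$
\Delta G_{\mathrm{ads}}(\theta,E) \;=\; \Delta G^{0}(E) + g\,\theta
\qquad\Longrightarrow\qquad
\beta(E)\,c \;=\; \frac{\theta}{1-\theta}\,e^{\,g\theta/RT},
$$

so that changing E merely shifts the isotherm along the ln c axis, consistent with the linear q (or E) vs. θ plots mentioned above.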
Abstract:
In an earlier paper [1], it was shown that the velocity ratio, defined with reference to the analogous circuit, is a basic parameter in the complete analysis of a linear one-dimensional dynamical system. In this paper it is shown that the terms constituting the velocity ratio can be readily determined by means of an algebraic algorithm developed from a heuristic study of the process of transfer matrix multiplication. The algorithm permits the set of most significant terms at a particular frequency of interest to be identified from a knowledge of the relative magnitudes of the impedances of the constituent elements of a proposed configuration. This feature makes the algorithm a potential tool in a first approach to the rational design of a complex dynamical filter. The algorithm is particularly suited to the desk analysis of a medium-sized system with lumped as well as distributed elements.
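A rough sketch of the bookkeeping the algorithm rests on (this is generic transfer-matrix multiplication for a lumped chain, not the authors' term-selection algorithm; the element values and the displacement-force state convention are assumptions):

```python
import numpy as np

def spring(k):
    # Massless spring: state vector is [displacement, force]
    return np.array([[1.0, 1.0 / k],
                     [0.0, 1.0]])

def mass(m, w):
    # Point mass at angular frequency w (rad/s)
    return np.array([[1.0, 0.0],
                     [-m * w**2, 1.0]])

def chain_matrix(elements, w):
    """Overall transfer matrix of a chain: ordered product of element matrices."""
    T = np.eye(2)
    for kind, value in elements:
        T = (spring(value) if kind == "spring" else mass(value, w)) @ T
    return T

# Example: spring-mass-spring-mass chain evaluated at 50 rad/s
chain = [("spring", 1e4), ("mass", 2.0), ("spring", 5e3), ("mass", 1.0)]
print(chain_matrix(chain, w=50.0))
```

Expanding the product symbolically, every entry of the overall matrix is a sum of products of element impedances; the paper's algorithm picks out the terms that dominate at the frequency of interest by comparing those impedance magnitudes.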
Abstract:
We consider a modification of the three-dimensional Navier-Stokes equations and other hydrodynamical evolution equations with space-periodic initial conditions, in which the usual Laplacian dissipation operator is replaced by an operator whose Fourier symbol grows exponentially as e^(|k|/k_d) at high wavenumbers |k|. Using estimates in suitable classes of analytic functions, we show that solutions with initially finite energy become immediately entire in the space variables and that the Fourier coefficients decay faster than e^(−C (|k|/k_d) ln(|k|/k_d)) for any C < 1/(2 ln 2). The same result holds for the one-dimensional Burgers equation with exponential dissipation, but can be improved: heuristic arguments and very precise simulations, analyzed by the method of asymptotic extrapolation of van der Hoeven, indicate that the leading-order asymptotics is precisely of the above form with C = C* = 1/ln 2. The same behavior with a universal constant C* is conjectured for the Navier-Stokes equations with exponential dissipation in any space dimension. This universality prevents the strong growth of intermittency in the far dissipation range that is obtained for ordinary Navier-Stokes turbulence. Possible applications to improved spectral simulations are briefly discussed.
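A minimal pseudospectral sketch of the one-dimensional Burgers case (the grid size, ν, k_d and the explicit-Euler stepping are arbitrary choices for illustration; the paper's high-precision asymptotic extrapolation is not reproduced here):

```python
import numpy as np

N, nu, kd, dt, steps = 256, 1e-3, 8.0, 2e-4, 2500   # integrates to t = 0.5
x = 2 * np.pi * np.arange(N) / N
k = np.fft.rfftfreq(N, d=1.0 / N)                   # integer wavenumbers 0..N/2
u = np.sin(x)

# Exponential dissipation: Fourier symbol nu * exp(|k|/kd) replaces nu * k^2;
# it is applied exactly each step via an integrating factor.
decay = np.exp(-nu * np.exp(np.abs(k) / kd) * dt)

for _ in range(steps):
    u_x = np.fft.irfft(1j * k * np.fft.rfft(u), n=N)
    uh = (np.fft.rfft(u) + dt * np.fft.rfft(-u * u_x)) * decay  # Euler + decay
    u = np.fft.irfft(uh, n=N)

print(np.abs(np.fft.rfft(u))[:12])  # amplitudes fall off faster than exponentially
```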
Abstract:
We study the performance of greedy scheduling in multihop wireless networks where the objective is aggregate utility maximization. Following standard approaches, we consider the dual of the original optimization problem. Optimal scheduling requires selecting independent sets of maximum aggregate price, but this problem is known to be NP-hard. We therefore propose and evaluate a simple greedy heuristic. Analytical bounds on its performance are provided, and simulations indicate that the greedy heuristic performs well in practice.
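A sketch of the kind of greedy heuristic being evaluated (the conflict graph and link prices below are illustrative assumptions): scan links in decreasing order of dual price and keep any link that does not interfere with those already chosen.

```python
def greedy_schedule(prices, conflicts):
    """Pick a high-aggregate-price set of mutually non-interfering links.

    prices:    {link: dual price} (e.g., queue-length based)
    conflicts: {link: set of links it interferes with}
    """
    chosen = set()
    for link in sorted(prices, key=prices.get, reverse=True):
        if conflicts.get(link, set()).isdisjoint(chosen):
            chosen.add(link)
    return chosen

prices = {"a": 5.0, "b": 4.0, "c": 3.0}
conflicts = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(greedy_schedule(prices, conflicts))  # {'a', 'c'}
```

The exact optimum is the maximum-aggregate-price independent set (NP-hard); the greedy pass costs only a sort plus a linear scan.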
Abstract:
We present a generic theory for the dynamics of a stiff filament under tension, in an active medium with orientational correlations, such as a microtubule in contractile actin. In sharp contrast to the case of a passive medium, we find the filament can stiffen, and possibly oscillate or buckle, depending on both the contractile or tensile nature of the activity and the filament-medium anchoring interaction. We also demonstrate a strong violation of the fluctuation-dissipation (FD) relation in the effective dynamics of the filament, including a negative FD ratio. Our approach is also of relevance to the dynamics of axons, and our model equations bear a remarkable formal similarity to those in recent work [Martin P, Hudspeth AJ, Juelicher F (2001) Proc Natl Acad Sci USA 98: 14380-14385] on auditory hair cells. Detailed tests of our predictions can be made by using a single filament in actomyosin extracts or bacterial suspensions.
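Schematically (the coefficients and the active coupling are generic placeholders, not the paper's detailed constitutive model), the transverse displacement h(x, t) of a filament under tension in such a medium obeys a Langevin equation of the form

$$
\gamma\,\partial_t h \;=\; -\,\kappa\,\partial_x^{4} h \;+\; \sigma_{\mathrm{eff}}\,\partial_x^{2} h \;+\; f_{\mathrm{active}}[h] \;+\; \xi(x,t),
$$

where activity renormalises the effective tension σ_eff (stiffening the filament or driving buckling depending on the sign of the active contribution) and the active coupling can yield oscillatory modes; the noise ξ need not obey the fluctuation-dissipation relation, which is the origin of the anomalous FD ratio.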
Abstract:
Bacterial persistent infections are responsible for a significant share of human morbidity and mortality. Unlike acute bacterial infections, persistent bacterial infections (e.g. tuberculosis) are very difficult to treat. Knowing where pathogenic bacteria reside during persistent infection would help in treating such conditions, by guiding the design of novel drugs that can reach those locations. In this study, the events of bacterial persistent infection were analyzed using game theory. A game was defined in which the pathogen and the host are two players with a conflict of interest, and the criteria for the establishment of a Nash equilibrium in this game were derived. The resulting model, though simple and heuristic, predicts that during persistent infection pathogenic bacteria occupy both the intracellular and the extracellular compartments of the host. This implies that a bacterium must be able to survive in both compartments in order to cause a persistent infection, which explains why persistent infections are more often caused by intracellular pathogens such as Mycobacterium and Salmonella. Moreover, this prediction is consistent with the results of previous experimental studies.
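A toy numerical version of such a game (the payoff entries are invented purely for illustration; the paper derives its equilibrium criteria symbolically): the pathogen picks a compartment, the host picks which compartment to target, and an interior mixed equilibrium means the pathogen maintains a presence in both compartments.

```python
import numpy as np

# Rows: pathogen plays {intracellular, extracellular}
# Cols: host targets   {intracellular, extracellular}
A = np.array([[1.0, 4.0],    # pathogen payoffs (illustrative numbers)
              [3.0, 2.0]])
B = np.array([[3.0, 1.0],    # host payoffs (illustrative numbers)
              [2.0, 4.0]])

def mixed_nash_2x2(A, B):
    # Interior equilibrium of a 2x2 bimatrix game: the host mixes so the
    # pathogen is indifferent between its two locations, and vice versa.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    return p, q  # p = P(pathogen intracellular), q = P(host targets intracellular)

p, q = mixed_nash_2x2(A, B)
print(f"pathogen intracellular with prob {p:.2f}: both compartments occupied")
```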
Abstract:
We extend the modeling heuristic of (Harsha et al. 2006, IEEE IWQoS 06, pp. 178-187) to evaluate the performance of an IEEE 802.11e infrastructure network carrying packet telephone calls, streaming video sessions and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We identify the time boundaries of activities on the channel (channel slot boundaries) and derive a Markov Renewal Process (MRP) of the contending nodes on these epochs. This is achieved by using the attempt probabilities of the contending nodes obtained from the saturation fixed-point analysis of (Ramaiyan et al. 2005, Proceedings of ACM Sigmetrics '05; journal version accepted for publication in IEEE TON). Regenerative analysis of this MRP yields the desired steady-state performance measures. We then use the MRP model to develop an effective-bandwidth approach for bounding the buffer size required at the video queue of the AP such that the streaming-video packet loss probability is kept below 1%. The results match well with simulations using the network simulator ns-2. We find that, with the default IEEE 802.11e EDCA parameters for access categories AC 1, AC 2 and AC 3, the voice call capacity decreases if even one streaming video session and one TCP file download are initiated by some wireless station. Subsequently, each voice call given up increases the downlink video throughput by 0.38 Mbps and the file download capacity by 0.14 Mbps (for the 11 Mbps PHY). We find that a buffer of 75 KB suffices to keep the video packet loss probability at the QAP below 1%.
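A back-of-the-envelope version of the effective-bandwidth buffer-sizing step (the i.i.d. per-slot arrival model and all numbers are assumptions; the paper instead works from the Markov Renewal Process at channel slot boundaries): with the large-deviations approximation P(queue > B) ≈ e^(−θ*B), choosing B ≥ ln(1/ε)/θ* targets a loss probability of ε.

```python
import numpy as np

def buffer_bound(rates, probs, service_rate, eps=0.01):
    """Smallest buffer B (same units as the rates) with P(loss) <~ eps,
    for i.i.d. per-slot arrivals A taking value rates[i] w.p. probs[i]."""
    thetas = np.linspace(1e-4, 50.0, 100_000)
    # Effective bandwidth alpha(theta) = log E[exp(theta * A)] / theta
    alpha = np.log((probs * np.exp(np.outer(thetas, rates))).sum(axis=1)) / thetas
    theta_star = thetas[alpha <= service_rate].max()  # decay rate of P(Q > B)
    return np.log(1.0 / eps) / theta_star

# Illustrative video source: 0.1 or 0.9 Mbit per channel slot, equally likely,
# drained at 0.6 Mbit per slot; target loss probability 1%
print(buffer_bound(np.array([0.1, 0.9]), np.array([0.5, 0.5]), 0.6))
```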
Abstract:
A direct borohydride-hydrogen peroxide fuel cell employing carbon-supported Prussian Blue (PB) as a mediated electron-transfer cathode catalyst is reported. Operating at 30 °C, the cell with the carbon-supported PB cathode shows superior performance, with a maximum output power density of 68 mW cm−2 at an operating voltage of 1.1 V, compared with 47 mW cm−2 at 0.7 V for a cell employing the conventional gold-based cathode. X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray analysis (EDAX) suggest that anchoring cetyl-trimethyl ammonium bromide (CTAB) as a surfactant moiety on carbon-supported PB affects the catalyst morphology. Polarization studies show that the cell with the carbon-supported, CTAB-anchored PB cathode performs better, with a maximum output power density of 50 mW cm−2 at an operating voltage of 1 V, than the cell with carbon-supported PB without CTAB, which delivers 29 mW cm−2 at the same voltage.
Abstract:
Experimental characterization of high-dimensional dynamic systems sometimes uses the proper orthogonal decomposition (POD). If there are many measurement locations but relatively few sensors, steady-state behavior can still be studied by sequentially taking several sets of simultaneous measurements. The number of such measurement sets required can be minimized by solving a combinatorial optimization problem. We aim to bring this problem to the attention of engineering audiences, summarize some known mathematical results about it, and present a heuristic (suboptimal) calculation that gives reasonable, if not stellar, results.
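One way to read the combinatorial problem (an assumption about its structure, and simpler than the authors' heuristic): every pair of locations must appear together in at least one set of simultaneous measurements, so that all the cross-correlations the POD needs can be estimated; this is a covering-design problem, and a greedy sketch is:

```python
from itertools import combinations

def greedy_measurement_sets(n_locations, n_sensors):
    """Greedily build sensor placements (sets of size n_sensors) so that every
    pair of locations is measured simultaneously at least once."""
    uncovered = set(combinations(range(n_locations), 2))
    chosen = []
    while uncovered:
        # Seed a new set with a still-uncovered pair, then grow it by adding
        # the location that closes the most uncovered pairs with the set so far.
        cand = set(next(iter(uncovered)))
        while len(cand) < n_sensors:
            def gain(v):
                return sum((min(v, u), max(v, u)) in uncovered for u in cand)
            cand.add(max((v for v in range(n_locations) if v not in cand), key=gain))
        chosen.append(sorted(cand))
        uncovered -= {p for p in uncovered if p[0] in cand and p[1] in cand}
    return chosen

sets_needed = greedy_measurement_sets(n_locations=12, n_sensors=4)
print(len(sets_needed), sets_needed[0])
```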
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors enables accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the sensors, i.e., the schedule that maximizes the sensing lifetime. Scheduling sensor activity using the optimum schedules obtained by the proposed algorithm is shown to achieve significantly longer sensing lifetimes than scheduling based on physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., accepting 95% rather than 100% coverage of the area) further enhances the lifetime.
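Given a collection of full area information covers, the lifetime-maximizing activation schedule can be written as a small linear program (the cover lists and battery budgets below are made up for illustration; whether this matches the paper's exact formulation is an assumption):

```python
from scipy.optimize import linprog

# Covers: each is a set of sensor ids that jointly (information-)cover the area
covers = [{0, 1}, {1, 2}, {0, 2}, {0, 1, 2}]
battery = [1.0, 1.0, 1.0]   # per-sensor energy budget (hours of sensing)

# Maximize the sum of cover activation times t_c subject to: for each sensor s,
# the total active time of covers containing s does not exceed its battery.
c = [-1.0] * len(covers)    # linprog minimizes, so negate the objective
A_ub = [[1.0 if s in cov else 0.0 for cov in covers] for s in range(len(battery))]
res = linprog(c, A_ub=A_ub, b_ub=battery, bounds=[(0, None)] * len(covers))
print("lifetime:", -res.fun, "schedule:", res.x)
```

Here the optimum activates the three two-sensor covers for half their budget each, for a lifetime of 1.5 rather than the 1.0 any single cover would give.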
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors that can collectively sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Sets of Information Covers (DSIC), which achieves a longer network lifetime than the covers obtained using the Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch equals the longest processing time among the jobs in it. Because of the importance of on-time delivery in semiconductor manufacturing, the objective is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic, simulated annealing (SA). A series of computational experiments is conducted to evaluate the proposed heuristics against exact solutions on various small problem instances and against estimated optimal solutions on various real-life, large problem instances. The results show that the SA algorithm, initialized with the solution from our own greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
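A compact sketch of the SA component under the stated batching rules (the job data, capacity, cooling schedule and move set are illustrative assumptions, not the paper's tuned algorithm):

```python
import math, random

random.seed(1)
# Job tuples: (release, due, processing, size, weight) — illustrative data
jobs = [(0, 10, 4, 2, 1), (1, 8, 3, 3, 2), (2, 12, 5, 2, 1),
        (0, 9, 2, 4, 3), (3, 15, 6, 1, 1), (4, 14, 3, 3, 2)]
CAP = 6  # batch-machine capacity (total job size per batch)

def twt(batches):
    """Total weighted tardiness of an ordered batch sequence."""
    t, total = 0.0, 0.0
    for b in batches:
        # Batch starts after the previous batch and after the latest release;
        # its processing time is the longest among its jobs.
        t = max(t, max(jobs[j][0] for j in b)) + max(jobs[j][2] for j in b)
        total += sum(jobs[j][4] * max(0.0, t - jobs[j][1]) for j in b)
    return total

def neighbour(batches):
    """Move one random job into another (or a fresh) batch, respecting capacity."""
    new = [list(b) for b in batches] + [[]]
    src = random.choice([i for i, b in enumerate(new) if b])
    j = random.choice(new[src]); new[src].remove(j)
    fits = [i for i, b in enumerate(new)
            if i != src and sum(jobs[x][3] for x in b) + jobs[j][3] <= CAP]
    new[random.choice(fits)].append(j)
    return [b for b in new if b]

# Greedy seed: singleton batches in due-date order
state = [[j] for j in sorted(range(len(jobs)), key=lambda j: jobs[j][1])]
cost, T = twt(state), 10.0
best, best_cost = state, cost
for _ in range(5000):
    cand = neighbour(state)
    c = twt(cand)
    if c < cost or random.random() < math.exp((cost - c) / T):  # Metropolis
        state, cost = cand, c
        if c < best_cost:
            best, best_cost = cand, c
    T *= 0.999  # geometric cooling
print(best_cost, best)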
Abstract:
Integrating low dielectric permittivity (low-k) polymers with metals is an exacting fundamental challenge because poor bonding between low-polarizability moieties and metals precludes good interfacial adhesion. Conventional adhesion-enhancing methods, such as intermediary layers, are unsuitable for engineering polymer/metal interfaces in many applications because of the collateral increase in dielectric permittivity. Here, we demonstrate a completely new approach, without surface treatments or intermediary layers, that obtains an excellent interfacial fracture toughness of > 13 J/m^2 in a model system comprising copper and a cross-linked polycarbosilane with k ≈ 2.7, obtained by curing a cyclolinear polycarbosilane in air. Our results suggest that interfacial-oxygen-catalyzed molecular ring-opening, together with anchoring of the opened ring moieties of the polymer to copper, is the main toughening mechanism. This approach of realizing adherent low-k polymer/metal structures without intermediary layers, by activating metal-anchoring polymer moieties at the interface, could be adapted for applications such as device wiring and packaging, and laminates and composites.
Abstract:
We view the associations between concepts as a complex network and present a heuristic for clustering concepts that takes into account the underlying network structure of their associations. Clusters generated by our approach are qualitatively better than clusters generated by the conventional spectral clustering mechanism used for graph partitioning.
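The abstract does not spell out the heuristic itself; as a stand-in illustrating clustering that exploits network structure, greedy modularity maximization over a toy concept-association network (all data invented) can be contrasted with a spectral-partitioning baseline:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy concept-association network; edge weights are association strengths
G = nx.Graph()
G.add_weighted_edges_from([
    ("dog", "cat", 0.9), ("dog", "bone", 0.8), ("cat", "milk", 0.7),
    ("car", "road", 0.9), ("car", "fuel", 0.8), ("road", "map", 0.6),
    ("map", "milk", 0.1),   # weak cross-cluster association
])

# Clusters that respect the network structure (modularity-based heuristic)
for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```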
Abstract:
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells, each of which is assumed to possess an uncertainty value. The UAVs have to search these cells cooperatively, taking limited endurance, sensor and communication range constraints into account. Owing to their limited endurance, the UAVs need to return to a base station for refuelling, and must select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes the endurance time constraints into account and uses game-theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent can return to one of the available bases. A set of paths is formed from these cells, and the game-theoretic strategies select the path that yields the maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations show the superiority of the game-theoretic strategies over a greedy strategy for paths of different look-ahead step lengths. Among the game-theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in the ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs between the UAVs. We also propose a heuristic, based on partitioning the search space into sectors, that reduces the computational overhead without degrading performance.
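A schematic of the security (maximin) strategy for path selection (the uncertainty-reduction payoffs are fabricated; in the paper they come from the hexagonal-cell uncertainty map): each UAV picks the path whose worst-case joint uncertainty reduction, over the other UAV's possible paths, is largest.

```python
# uncertainty_reduction[i][j]: joint reduction if UAV1 takes path i, UAV2 path j
uncertainty_reduction = [
    [5.0, 2.0, 4.0],
    [3.0, 3.0, 3.0],
    [6.0, 1.0, 2.0],
]

def security_strategy(payoffs):
    """Maximin choice: the path whose worst case (over the other UAV's
    choices) is best — robust when the UAVs' perceived maps differ."""
    return max(range(len(payoffs)), key=lambda i: min(payoffs[i]))

def greedy_strategy(payoffs):
    # Greedy: bank on the best case instead of guarding the worst
    return max(range(len(payoffs)), key=lambda i: max(payoffs[i]))

print("security picks path", security_strategy(uncertainty_reduction))  # path 1
print("greedy picks path", greedy_strategy(uncertainty_reduction))      # path 2
```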