110 results for counter-terrorism
Abstract:
Droplet collision occurs frequently in regions where the droplet number density is high. Even for Lean Premixed and Pre-vaporized (LPP) liquid sprays, collisions can strongly affect the droplet size distribution, which in turn affects the droplet vaporization process. Hence, in conjunction with vaporization modeling, collision modeling for such spray systems is also essential. The standard O'Rourke collision model, usually implemented in CFD codes, tends to generate unphysical numerical artifacts when simulations are performed on a Cartesian grid, and its results are not grid independent. Thus, a new collision modeling approach based on the no-time-counter (NTC) method proposed by Schmidt and Rutland is implemented to replace O'Rourke's collision algorithm for a spray injection problem in a cylindrical coflow premixer. The so-called "four-leaf clover" numerical artifacts are eliminated by the new collision algorithm, and results from a diesel spray show very good grid independence. Next, the dispersion and vaporization processes for liquid fuel sprays are simulated in a coflow premixer. The two liquid fuels under investigation are Jet-A and Rapeseed Methyl Ester (RME). Results show very good grid independence in terms of SMD distribution, droplet number distribution and fuel vapor mass flow rate. A baseline test is first established with a spray cone angle of 90 degrees and an injection velocity of 3 m/s; Jet-A achieves much better vaporization performance than RME due to its higher vapor pressure. To improve the vaporization performance of both fuels, a series of simulations is carried out at several combinations of spray cone angle and injection velocity. At relatively low spray cone angles and injection velocities, the collision effect on the average droplet size and on the vaporization performance is strong, owing to the relatively high coalescence rate induced by droplet collisions. At higher spray cone angles and injection velocities, the results show the expected improvement in fuel vaporization performance, since smaller droplets have higher vaporization rates. The vaporization performance and the level of homogeneity of the fuel-air mixture can be significantly improved when the dispersion level is high, which can be achieved by increasing the spray cone angle and injection velocity.
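As a rough illustration of the no-time-counter idea referred to above, the following Python sketch samples candidate collision pairs within a single grid cell instead of checking all pairs; the parcel attributes and the collide_fn callback are assumptions made for illustration and do not reproduce the authors' CFD implementation.

```python
import math
import random

def ntc_collisions(parcels, cell_volume, dt, collide_fn, rng=random):
    """Minimal sketch of no-time-counter (NTC) collision sampling within one
    grid cell, in the spirit of Schmidt and Rutland. Each parcel is assumed
    to expose .radius, .velocity (3-vector) and .num (droplets per parcel);
    these names are illustrative, not the authors' actual data structures."""
    n = len(parcels)
    if n < 2:
        return
    # Conservative upper bound on the collision kernel pi*(r_i+r_j)^2*|v_rel|*q
    r_max = max(p.radius for p in parcels)
    v_max = 2.0 * max(_speed(p.velocity) for p in parcels)  # crude |v_rel| bound
    q_max = max(p.num for p in parcels)
    k_max = math.pi * (2.0 * r_max) ** 2 * v_max * q_max
    # Number of candidate pairs to sample -- this replaces O'Rourke's O(n^2)
    # all-pairs check within the cell.
    n_cand = int(round(0.5 * n * (n - 1) * k_max * dt / cell_volume))
    for _ in range(n_cand):
        p, q = rng.sample(parcels, 2)
        v_rel = _speed([a - b for a, b in zip(p.velocity, q.velocity)])
        kernel = math.pi * (p.radius + q.radius) ** 2 * v_rel * max(p.num, q.num)
        # Accept each candidate with probability kernel / k_max
        if rng.random() < kernel / k_max:
            collide_fn(p, q)  # coalescence or grazing outcome decided elsewhere

def _speed(v):
    return math.sqrt(sum(c * c for c in v))
```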
Abstract:
A recent modelling study has shown that precipitation and runoff over land would increase when the reflectivity of marine clouds is increased to counter global warming. This suggests, conversely, that large-scale albedo enhancement over land could lead to a decrease in runoff over land. In this study, we perform simulations using NCAR CAM3.1 that have implications for Solar Radiation Management geoengineering schemes that increase the albedo over land. We find that an increase in reflectivity over land that offsets the global mean warming from a doubling of CO2 leads to a large residual warming in the southern hemisphere and cooling in the northern hemisphere, since most of the land is located in the northern hemisphere. Precipitation and runoff over land decrease by 13.4% and 22.3%, respectively, because of a large residual sinking motion over land triggered by the albedo enhancement. Soil water content also declines when the albedo over land is enhanced. The simulated magnitude of the hydrological changes over land is much larger than that of the changes over the oceans in the recent marine cloud albedo enhancement study, since the radiative forcing needed over land (-8.2 W m^-2) to counter the global mean radiative forcing from a doubling of CO2 (3.3 W m^-2) is approximately twice the forcing needed over the oceans (-4.2 W m^-2). Our results imply that albedo enhancement over the oceans produces climates closer to the unperturbed climate state than do albedo changes on land when the consequences for land hydrology are considered. Our study also has important implications for any intentional or unintentional large-scale change in land surface albedo, such as deforestation/afforestation/reforestation, air pollution, and desert and urban albedo modification.
Abstract:
We report a simple, reliable, one-step method of synthesizing ZnO porous structures at room temperature by anodization of a zinc (Zn) sheet with water as the electrolyte and graphite as the counter electrode. We observed that the de-ionized (DI) water used in the experiment is slightly acidic (pH = 5.8), owing to the dissolution of atmospheric carbon dioxide forming carbonic acid. The porous ZnO is characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy and photoluminescence (PL) studies. The current-transient measurement is carried out using a Gamry Instruments Reference 3000, and the thickness of the deposited films is measured using a Dektak surface profilometer. PL, Raman and X-ray photoelectron spectroscopy are used to confirm the presence of the ZnO phase. We have demonstrated that hybrid structures of ZnO and poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) exhibit good rectifying characteristics. The evaluated barrier height and ideality factor are 0.45 eV and 3.6, respectively.
Abstract:
Various logical formalisms with the freeze quantifier have recently been considered for modelling computer systems, even though this is a powerful mechanism that often leads to undecidability. In this article, we study a linear-time temporal logic with past-time operators in which the freeze operator is only used to express that some value from an infinite set is repeated in the future or in the past. Such a restriction is inspired by recent work on spatio-temporal logics that suggests this restricted use of the freeze operator. We show decidability of finitary and infinitary satisfiability by proposing a symbolic representation of models and reducing the problem to the verification of temporal properties in Petri nets. This is quite a surprising result in view of the expressive power of the logic, since the logic is closed under negation, contains future-time and past-time temporal operators, and can express the nonce property and its negation. These ingredients are known to lead to undecidability with a more liberal use of the freeze quantifier. The article also develops the relationships between temporal logics with the freeze operator and counter automata, as well as reductions into first-order logics over data words.
Abstract:
Among all methods of metal alloy slurry preparation, the cooling slope method is the simplest in terms of design and process control. The method involves pouring the melt from the top down an oblique, channel-shaped plate cooled from below by counter-flowing water. The melt, while flowing down, partially solidifies and forms columnar dendrites on the plate wall. These dendrites are broken into equiaxed grains and washed away with the melt. The melt, together with the equiaxed grains, forms a semisolid slurry that is collected at the slope exit and cast into billets having a non-dendritic microstructure. The final microstructure depends on several process parameters, such as slope angle, slope length, pouring superheat, and cooling rate. The present work involves a scaling analysis of the conservation equations of momentum, energy and species for the melt flow down a cooling slope. The main purpose of the scaling analysis is to obtain physical insight into the role and relative importance of each parameter in influencing the final microstructure. To assess the scaling analysis, the trends it predicts are compared against corresponding numerical results from an enthalpy-based solidification model that incorporates solid-phase movement.
Abstract:
A linkage of rigid bodies under gravity loads can be statically counterbalanced by adding compensating gravity loads. Similarly, gravity loads or spring loads can be counterbalanced by adding springs. In the current literature, among the techniques that add springs, some achieve perfect static balance while others achieve only approximate balance; further, all of them add auxiliary bodies to the linkage in addition to springs. We present a perfect static balancing technique that adds only springs but no auxiliary bodies, in contrast to the existing techniques. This technique can counterbalance both gravity loads and spring loads. It requires that every joint connecting two bodies in the linkage be either a revolute joint or a spherical joint; apart from this, the linkage can have any number of bodies connected in any manner. In order to achieve perfect balance, the technique requires that all the spring loads have the zero-free-length feature, as is the case with the existing techniques. This requirement is neither impractical nor restrictive, since the feature can be practically incorporated into any normal spring either by modifying the spring or by adding another spring in parallel. DOI: 10.1115/1.4006521
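To make the zero-free-length requirement concrete, here is a minimal numeric check (a hypothetical one-link example, not the paper's general method): a zero-free-length spring stores energy proportional to the square of the distance between its end points, which is what allows the gravity term to be cancelled exactly.

```python
import math

# A mass m sits at distance l from a revolute joint at the origin; a
# zero-free-length spring of stiffness k connects the mass to a fixed anchor
# a height h above the joint.  Choosing k*h = m*g makes the total potential
# energy independent of the joint angle, i.e. the link is balanced in every
# configuration.  All numbers below are arbitrary illustration values.

m, g, l, h = 2.0, 9.81, 0.5, 0.25
k = m * g / h          # balancing condition for this one-link example

def total_pe(theta):
    # Gravity PE: height of the mass above the joint
    pe_gravity = m * g * l * math.cos(theta)
    # Zero-free-length spring PE: (1/2)*k*(distance between end points)^2
    dist_sq = l**2 + h**2 - 2.0 * h * l * math.cos(theta)
    pe_spring = 0.5 * k * dist_sq
    return pe_gravity + pe_spring

# The total PE is (numerically) constant over the full range of motion
values = [total_pe(t * math.pi / 18) for t in range(37)]
assert max(values) - min(values) < 1e-9
print("total potential energy is constant:", values[0])
```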
Abstract:
We investigate the problem of influence limitation in the presence of competing campaigns in a social network. Given a negative campaign which starts propagating from a specified source, and a positive/counter campaign that is initiated, after a certain time delay, to limit the influence or spread of misinformation by the negative campaign, we are interested in finding the top k influential nodes at which the positive campaign may be triggered. This problem has numerous applications, such as limiting the propagation of rumors, arresting the spread of a virus through inoculation, and initiating a counter-campaign against malicious propaganda. The influence function for the generic influence limitation problem is non-submodular. Restricted versions of the influence limitation problem reported in the literature assume submodularity of the influence function and do not capture the problem in a realistic setting. In this paper, we propose a novel computational approach to the influence limitation problem based on the Shapley value, a solution concept in cooperative game theory. Our approach works equally effectively for both submodular and non-submodular influence functions. Experiments on standard real-world social network datasets reveal that the proposed approach outperforms existing heuristics in the literature. As a non-trivial extension, we also address the problem of influence limitation in the presence of multiple competing campaigns.
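As a sketch of how a Shapley-value-based selection can be computed without assuming submodularity, the following permutation-sampling estimator ranks candidate seed nodes; the saved() influence-limitation function and the parameter names are placeholders, not the paper's exact formulation.

```python
import random

def shapley_top_k(candidates, saved, k, num_samples=1000, rng=random):
    """Monte Carlo (permutation-sampling) estimate of Shapley values for an
    influence-limitation game.  `saved(S)` is assumed to return the expected
    number of nodes saved from the negative campaign when the positive
    campaign is seeded at the set S (e.g. estimated by simulation).  The
    function may be non-submodular; the sampling below does not rely on
    submodularity."""
    phi = {v: 0.0 for v in candidates}
    for _ in range(num_samples):
        order = list(candidates)
        rng.shuffle(order)
        prefix, prev_value = set(), saved(set())
        for v in order:
            prefix.add(v)
            value = saved(prefix)
            phi[v] += value - prev_value   # marginal contribution of v
            prev_value = value
    for v in phi:
        phi[v] /= num_samples
    # Seed the positive campaign at the k nodes with the largest Shapley value
    return sorted(candidates, key=lambda v: phi[v], reverse=True)[:k]
```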
Abstract:
Network Intrusion Detection Systems (NIDS) intercept the traffic at an organization's network periphery to thwart intrusion attempts. A signature-based NIDS compares the intercepted packets against its database of known vulnerability and malware signatures to detect such cyber attacks. These signatures are represented using Regular Expressions (REs) and strings. Regular Expressions, because of their higher expressive power, are preferred over simple strings for writing these signatures. We present a Cascaded Automata Architecture to perform memory-efficient Regular Expression pattern matching using existing string matching solutions. The proposed architecture performs two-stage Regular Expression pattern matching: we replace the substring and character-class components of the Regular Expression with new symbols, and we address the challenges involved in this approach. We augment the word-based automata obtained from the rewritten Regular Expressions with counter-based states and length-bounded transitions to perform Regular Expression pattern matching. We evaluated our architecture on Regular Expressions taken from Snort rulesets and were able to reduce the number of automaton states by 50% to 85%. Additionally, we could reduce the number of transitions by a factor of 3, leading to a further reduction in memory requirements.
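The two-stage idea of matching literal substrings first and enforcing length bounds with counters can be illustrated with a toy example; the sketch below is not the proposed Cascaded Automata Architecture, only a hint of why counters avoid expanding bounded gaps into many automaton states.

```python
import re

def two_stage_match(data, first, second, max_gap):
    """Illustrative sketch of the two-stage idea behind counter-augmented
    word-based matching: a pattern such as  first.{0,max_gap}second  is
    handled by (1) locating the literal substrings with an ordinary string
    matcher and (2) enforcing the length bound with a counter, instead of
    expanding the {0,n} range into automaton states."""
    # Stage 1: substring matching (stands in for an Aho-Corasick style engine)
    first_hits = [m.end() for m in re.finditer(re.escape(first), data)]
    second_hits = [m.start() for m in re.finditer(re.escape(second), data)]
    # Stage 2: counter / length-bounded transition between the two symbols
    for end_of_first in first_hits:
        for start_of_second in second_hits:
            gap = start_of_second - end_of_first
            if 0 <= gap <= max_gap:
                return True
    return False

# Example: a Snort-style pattern equivalent to  "abc.{0,5}def"
assert two_stage_match("xxabc12def", "abc", "def", max_gap=5)
assert not two_stage_match("abc123456789def", "abc", "def", max_gap=5)
```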
Abstract:
Adaptive Gaussian Mixture Models (GMM) have been one of the most popular and successful approaches to foreground segmentation in multimodal background scenes. However, the good accuracy of the GMM algorithm comes at a high computational cost. An improved GMM technique was proposed by Zivkovic to reduce the computational cost by adaptively minimizing the number of modes. In this paper, we propose a modification to his adaptive GMM algorithm that further reduces execution time by replacing expensive floating-point computations with low-cost integer operations. To maintain accuracy, we derive a heuristic that computes periodic floating-point updates of the GMM weight parameter using the value of an integer counter. Experiments show speedups in the range of 1.33-1.44 on standard video datasets in which a large fraction of pixels are multimodal.
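A minimal sketch of the counter-based idea follows, assuming a simple periodic-refresh heuristic; the paper's exact heuristic may differ.

```python
# Replace the per-frame floating-point weight update of an adaptive GMM with
# an integer match counter plus a periodic floating-point update.

ALPHA = 0.01          # learning rate of the standard update  w += ALPHA*(o - w)
PERIOD = 16           # frames between floating-point refreshes

class ModeWeight:
    def __init__(self, weight=0.05):
        self.weight = weight   # float, touched only every PERIOD frames
        self.matches = 0       # cheap integer counter, updated every frame
        self.frames = 0

    def observe(self, matched):
        # Per-frame work: integer increments only
        self.matches += 1 if matched else 0
        self.frames += 1
        if self.frames == PERIOD:
            self._refresh()

    def _refresh(self):
        # Periodic floating-point update: fold PERIOD frames of observations
        # into the weight at once, approximating PERIOD exact updates by using
        # the match counter as the average observation and a compounded rate.
        observed = self.matches / PERIOD
        decay = (1.0 - ALPHA) ** PERIOD
        self.weight = decay * self.weight + (1.0 - decay) * observed
        self.matches = 0
        self.frames = 0
```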
Abstract:
Effective network overload alleviation is essential for maintaining the security and integrity of deregulated power systems from an operational viewpoint. This paper develops a methodology to reschedule active power generation from the sources in order to manage network congestion under normal/contingency conditions. An effective method is proposed using a fuzzy rule-based inference system. The virtual flows concept, which provides the partial contributions/counter flows in the network elements, is used as the basis of the proposed method to manage network congestion to the extent possible. The proposed method is illustrated on a sample 6-bus test system and on a modified IEEE 39-bus system.
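A very small, purely illustrative fuzzy-inference sketch is given below; the membership functions, rules and output scaling are assumptions and are not taken from the paper, which bases its rules on the virtual flows concept.

```python
def tri(x, a, b, c):
    """Triangular membership function (illustrative)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reschedule_signal(line_loading_pct, contribution_pct):
    """Tiny Mamdani-style sketch: the higher the overload on a line and the
    larger a generator's (virtual-flow) contribution to that line, the larger
    the suggested reduction of its output.  All shapes and scales below are
    assumed values for illustration only."""
    overload_high = tri(line_loading_pct, 100.0, 130.0, 160.0)
    contrib_high = tri(contribution_pct, 20.0, 60.0, 100.0)
    contrib_low = tri(contribution_pct, 0.0, 0.0, 40.0)
    # Rule 1: IF overload is high AND contribution is high THEN reduce strongly
    reduce_strong = min(overload_high, contrib_high)
    # Rule 2: IF overload is high AND contribution is low THEN reduce slightly
    reduce_slight = min(overload_high, contrib_low)
    # Weighted-average defuzzification to a per-unit change in generation
    total = reduce_strong + reduce_slight
    if total == 0.0:
        return 0.0
    return -(0.3 * reduce_strong + 0.05 * reduce_slight) / total
```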
Abstract:
The notion of the 1-D analytic signal is well understood and has found many applications. At the heart of the analytic signal concept is the Hilbert transform. The problem in extending the concept of the analytic signal to higher dimensions is that there is no unique multidimensional definition of the Hilbert transform; moreover, the notion of analyticity is not so well understood in higher dimensions. Of the several 2-D extensions of the Hilbert transform, the spiral-phase quadrature transform, or Riesz transform, seems to be the natural extension and has attracted a lot of attention, mainly due to its isotropic properties. From the Riesz transform, Larkin et al. constructed a vortex operator, which approximates the quadratures based on asymptotic stationary-phase analysis. In this paper, we give an alternative proof of the quadrature approximation property by invoking the quasi-eigenfunction property of linear, shift-invariant systems, and show that the vortex operator arises as a natural consequence of applying this property. We also characterize the quadrature approximation error in terms of its energy as well as the peak spatial-domain error. Such results are available for 1-D signals, but their counterparts for 2-D signals have not been provided. We also provide simulation results to supplement the analytical calculations.
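For readers who want to experiment with the operator, a short numpy sketch of the spiral-phase (vortex) operator in the frequency domain follows; the sign and normalization conventions are assumptions, since they vary across the literature.

```python
import numpy as np

def vortex_operator(image):
    """Apply the spiral-phase (Riesz / vortex) operator in the frequency
    domain.  The frequency response used here is (wx + 1j*wy)/|w| with the DC
    term set to zero; conventions differ between papers, so this is a sketch
    rather than a definitive implementation."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]   # vertical frequencies
    fx = np.fft.fftfreq(image.shape[1])[None, :]   # horizontal frequencies
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                             # avoid division by zero at DC
    spiral = (fx + 1j * fy) / radius
    spiral[0, 0] = 0.0
    return np.fft.ifft2(spiral * np.fft.fft2(image))

# For a locally narrowband fringe pattern f = a*cos(phi), |vortex_operator(f)|
# approximates |a*sin(phi)|, so sqrt(f**2 + |Vf|**2) recovers the envelope a.
x = np.linspace(0, 8 * np.pi, 256)
fringes = np.cos(x)[None, :] * np.hanning(256)[:, None]
quadrature_magnitude = np.abs(vortex_operator(fringes))
```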
Abstract:
In the present investigation, various kinds of textures were produced on steel surfaces. The roughness of the textures was varied using different grits of emery papers or polishing powders. Pins made of pure Al, Al-4Mg alloy and pure Mg were then slid against the prepared steel plate surfaces for various numbers of cycles using an inclined pin-on-plate sliding tester. Tests were conducted at a sliding velocity of 2 mm s^-1 in ambient conditions under both dry and lubricated conditions. Normal loads were increased up to 110 N during the tests. The morphologies of the worn surfaces of the pins and the formation of a transfer layer on the counter surfaces were observed using a scanning electron microscope. Surface roughness parameters of the plate were measured using an optical profilometer. In the experiments, it was observed that the coefficient of friction and the formation of a transfer layer (under dry and lubricated conditions) depended on surface texture only during the first few sliding cycles. The steady-state variation in the coefficient of friction under both dry and lubricated conditions was attributed to the self-organisation of the texture of the surfaces at the interface during sliding.
Abstract:
We propose a Physical layer Network Coding (PNC) scheme for the K-user wireless Multiple Access Relay Channel, in which K source nodes want to transmit messages to a destination node D with the help of a relay node R. The proposed scheme involves (i) Phase 1 during which the source nodes alone transmit and (ii) Phase 2 during which the source nodes and the relay node transmit. At the end of Phase 1, the relay node decodes the messages of the source nodes and during Phase 2 transmits a many-to-one function of the decoded messages. To counter the error propagation from the relay node, we propose a novel decoder which takes into account the possibility of error events at R. It is shown that if certain parameters are chosen properly and if the network coding map used at R forms a Latin Hypercube, the proposed decoder offers the maximum diversity order of two. Also, it is shown that for a proper choice of the parameters, the proposed decoder admits fast decoding, with the same decoding complexity order as that of the reference scheme based on Complex Field Network Coding (CFNC). Simulation results indicate that the proposed PNC scheme offers a large gain over the CFNC scheme.
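The Latin-square condition on the relay map can be illustrated for the simplest case of K = 2 sources; the modulo-q map below is an assumed example, not the construction proposed in the paper.

```python
# For K = 2 sources over a q-ary alphabet, a network-coding map f(x1, x2)
# used at the relay R forms a Latin square when fixing either argument gives
# a bijection on the alphabet.  The modulo-q sum is the simplest such map;
# the paper's Latin hypercubes generalize this exclusion property to K
# dimensions.

q = 4
relay_map = [[(x1 + x2) % q for x2 in range(q)] for x1 in range(q)]

def is_latin_square(table):
    n = len(table)
    rows_ok = all(sorted(row) == list(range(n)) for row in table)
    cols_ok = all(sorted(col) == list(range(n)) for col in zip(*table))
    return rows_ok and cols_ok

assert is_latin_square(relay_map)
# The destination D decodes (x1, x2) jointly from the Phase 1 transmissions
# and the Phase 2 symbol f(x1, x2); the Latin-square property is what lets a
# correct relay symbol resolve either source given the other.
```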
Abstract:
We experimentally study the effect of having hinged leaflets at the jet exit on the formation of a two-dimensional counter-rotating vortex pair. A piston-cylinder mechanism is used to generate a starting jet from a high-aspect-ratio channel into a quiescent medium. For a rigid exit, with no leaflets at the channel exit, the measurements at a central plane show that the trailing jet in the present case is never detached from the vortex pair, and keeps feeding into the latter, unlike in the axisymmetric case. Passive flexibility is introduced in the form of rigid leaflets or flaps that are hinged at the exit of the channel, with the flaps initially parallel to the channel walls. The experimental arrangement closely approximates the limiting case of a free-to-rotate rigid flap with negligible structural stiffness, damping and flap inertia, as these limiting structural properties permit the largest flap openings. Using this arrangement, we start the flow and measure the flap kinematics and the vorticity fields for different flap lengths and piston velocity programs. The typical motion of the flaps involves a rapid opening and a subsequent more gradual return to its initial position, both of which occur when the piston is still moving. The initial opening of the flaps can be attributed to an excess pressure that develops in the channel when the flow starts, due to the acceleration that has to be imparted to the fluid slug between the flaps. In the case with flaps, two additional pairs of vortices are formed because of the motion of the flaps, leading to the ejection of a total of up to three vortex pairs from the hinged exit. The flaps' length (L_f) is found to significantly affect flap motions when plotted using the conventional time scale L/d, where L is the piston stroke and d is the channel width. However, with a newly defined time scale based on the flap length (L/L_f), we find a good collapse of all the measured flap motions irrespective of flap length and piston velocity for an impulsively started piston motion. The maximum opening angle in all these impulsive velocity program cases, irrespective of the flap length, is found to be close to 15 degrees. Even though the flap kinematics collapses well with L/L_f, there are differences in the distribution of the ejected vorticity even for the same L/L_f. Such a redistribution of vorticity can lead to important changes in the overall properties of the flow, and it gives us a better understanding of the importance of exit flexibility in such flows.
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
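As a sketch of how Chebyshev's inequality yields a distribution-free CPI bound, the following Python fragment computes the bound for a chosen probability p; the per-phase estimator and the aggregation into a WCET figure are illustrative assumptions, not the analyzer's exact computation.

```python
import math
import statistics

def chebyshev_cpi_bound(cpi_samples, p):
    """Bound the CPI of a phase with Chebyshev's inequality, which holds for
    any distribution of the samples.  With mean mu and standard deviation
    sigma, P(|X - mu| >= k*sigma) <= 1/k^2, so choosing k = 1/sqrt(1 - p)
    gives an upper bound mu + k*sigma that holds with probability at least p."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * sigma

def wcet_estimate(phases, p):
    """phases: list of (instruction_count, cpi_samples) pairs per (sub-)phase;
    clock-period scaling is omitted.  High-variance phases inflate the bound,
    which is why refining them into sub-phases by PC signature helps."""
    return sum(n_insts * chebyshev_cpi_bound(samples, p)
               for n_insts, samples in phases)

# Example: a phase with low CPI variance yields a much tighter bound than one
# with a similar mean but high variance, for p = 0.99.
tight = chebyshev_cpi_bound([1.0, 1.1, 0.9, 1.0], 0.99)
loose = chebyshev_cpi_bound([0.2, 2.0, 0.5, 1.3], 0.99)
```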