21 results for hijacking the event

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and subsequently the physical mechanisms leading to its initiation and sustenance are explored. As a case study, the heavy precipitation event that led to 65 cm of rainfall accumulation in a span of around 6 h (1430-2030 LT) over Santacruz (Mumbai, India) on 26 July 2005 is selected. Three sets of numerical experiments were conducted. The first set (EXP1) consisted of a four-member ensemble and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble, with a four-level one-way nested integration and grid spacings of 54, 18, 6 and 1 km. The model was able to simulate a realistic spatial structure with the 54, 18, and 6 km grids; however, with the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set (EXP3) consisted of a five-member ensemble, with four-level one-way nesting and grid spacings of 54, 18, 6, and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to observations than in EXP2. Specifically, the timing of maximum rainfall, the abrupt increase in rainfall intensities (a major feature of this event), and the rainfall intensities simulated in EXP3 (at 6 km resolution) were the closest to observations among all experiments. Analysis of the physical mechanisms causing the initiation and sustenance of the event reveals some interesting aspects. Deep convection was found to be initiated by mid-tropospheric convergence that extended to lower levels during the later stage. In addition, there was a strong negative vertical gradient of equivalent potential temperature, suggesting strong atmospheric instability prior to and during the occurrence of the event. Finally, the presence of a conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
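
SLAF builds each ensemble perturbation from the difference between an older forecast valid at the analysis time and the analysis itself, scaled down with forecast age. A minimal sketch of that construction, assuming a simple 1/age scaling and paired positive/negative members (the abstract does not give the exact scaling used in the paper):

```python
import numpy as np

def slaf_members(analysis, lagged_forecasts, scale=1.0):
    """Scaled Lagged Average Forecasting: perturb the analysis with
    scaled (lagged forecast - analysis) differences. Older forecasts
    carry larger errors, so their perturbations are scaled down more
    (the 1/age factor here is an assumed choice)."""
    members = []
    for age, forecast in enumerate(lagged_forecasts, start=1):
        perturbation = (scale / age) * (forecast - analysis)
        members.append(analysis + perturbation)  # positive member
        members.append(analysis - perturbation)  # paired negative member
    return members
```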

Relevance:

90.00%

Publisher:

Abstract:

The ~2500 km long Himalayan arc has experienced three large to great earthquakes of M_w 7.8 to 8.4 during the past century, but none produced surface rupture. Paleoseismic studies have been conducted during the last decade to begin understanding the timing, size, rupture extent, return period, and mechanics of the faulting associated with the occurrence of large surface-rupturing earthquakes along the Himalayan Frontal Thrust (HFT) system of India and Nepal. Previous studies have been limited to about nine sites along the western two-thirds of the HFT, extending through northwest India and along the southern border of Nepal. We present here the results of paleoseismic investigations at three additional sites further to the northeast along the HFT, within the Indian states of West Bengal and Assam. The three sites lie between the meizoseismal areas of the 1934 Bihar-Nepal and 1950 Assam earthquakes. The two westernmost sites, near the village of Chalsa and near the Nameri Tiger Preserve, show that offsets during the last surface rupture event were at minimum about 14 m and 12 m, respectively. Limits on the ages of surface rupture at Chalsa (site A) and Nameri (site B), though broad, allow the possibility that the two sites record the same great historical rupture reported in Nepal around A.D. 1100. The correlation between the two sites is supported by the observation that displacements as large as those recorded at Chalsa and Nameri would most likely be associated with rupture lengths of hundreds of kilometers or more, on the same order as reported for a surface-rupturing earthquake in Nepal around A.D. 1100. Assuming the offsets observed at Chalsa and Nameri occurred synchronously with the reported offsets in Nepal, the rupture length of the event would approach 700 to 800 km. The easternmost site is located within Harmutty Tea Estate (site C), at the edge of the 1950 Assam earthquake meizoseismal area. Here the most recent event offset is much smaller (<2.5 m), and radiocarbon dating shows it to have occurred after A.D. 1100 (after about A.D. 1270). The location of the site near the edge of the meizoseismal region of the 1950 Assam earthquake and the relatively small offset allow speculation that the displacement records the 1950 M_w 8.4 Assam earthquake. Scatter in radiocarbon ages on detrital charcoal has not resulted in a firm bracket on the timing of the events observed in the trenches. Nonetheless, the observations collected here, taken together, suggest that the largest thrust earthquakes along the Himalayan arc have rupture lengths and displacements of similar scale to the largest that have occurred historically along the world's subduction zones.

Relevance:

90.00%

Publisher:

Abstract:

Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has some important theoretical properties in connecting frequent episode discovery with HMM learning. This paper presents some new algorithms for frequent episode discovery under this non-overlapped occurrences-based frequency definition. The algorithms presented here are better (by a factor of N, where N denotes the size of episodes being discovered) in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
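
For a single serial episode, the non-overlapped frequency can be computed with one greedy left-to-right scan: complete a match, count it, and restart, so no two counted occurrences share an event. A minimal sketch of that counting (illustrative only; the paper's algorithms track many candidate episodes simultaneously):

```python
def count_nonoverlapped(sequence, episode):
    """Count non-overlapped occurrences of a serial episode (an
    ordered tuple of event types) in an event sequence. The greedy
    scan advances through the episode's event types in order; each
    completed match is one occurrence, after which the scan restarts,
    so counted occurrences never share events."""
    count, i = 0, 0
    for event in sequence:
        if event == episode[i]:
            i += 1
            if i == len(episode):
                count += 1
                i = 0
    return count

# e.g. episode ('A', 'B') in 'AABBAB' -> 2 non-overlapped occurrences
print(count_nonoverlapped("AABBAB", "AB"))
```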

Relevance:

90.00%

Publisher:

Abstract:

Most pattern mining methods yield a large number of frequent patterns, and isolating a small relevant subset of patterns is a challenging problem of current interest. In this paper, we address this problem in the context of discovering frequent episodes from symbolic time-series data. Motivated by the Minimum Description Length principle, we formulate the problem of selecting a relevant subset of patterns as one of searching for the subset of patterns that achieves the best data compression. We present algorithms for discovering small sets of relevant non-redundant episodes that achieve good data compression. The algorithms employ a novel encoding scheme and use serial episodes with inter-event constraints as the patterns. We present extensive simulation studies with both synthetic and real data, comparing our method with existing schemes such as GoKrimp and SQS. We also demonstrate the effectiveness of these algorithms on event sequences from a composable conveyor system; this system represents a new application area where the use of frequent patterns for compressing the event sequence is likely to be important for decision support and control.
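
The MDL view makes pattern selection concrete: a candidate episode earns its place only if adding it to the code table shrinks the total encoded length (the data encoded with the patterns plus the cost of describing the patterns themselves). A minimal greedy sketch of that selection loop, assuming a caller-supplied encoded_len function; the paper's actual encoding scheme is more elaborate:

```python
def greedy_mdl_select(candidates, encoded_len):
    """Greedily grow a set of episodes: at each step add the candidate
    that most reduces the total encoded length of the data, stopping
    when no remaining candidate reduces it further. encoded_len is a
    hypothetical callable implementing the encoding scheme."""
    selected = []
    best_total = encoded_len(selected)
    while True:
        best_gain, best_ep = 0.0, None
        for ep in candidates:
            gain = best_total - encoded_len(selected + [ep])
            if gain > best_gain:
                best_gain, best_ep = gain, ep
        if best_ep is None:
            return selected          # no candidate improves compression
        selected.append(best_ep)
        candidates = [ep for ep in candidates if ep != best_ep]
        best_total -= best_gain
```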

Relevance:

80.00%

Publisher:

Abstract:

The structure and dynamics of the two-dimensional linear shear flow of inelastic disks at high area fractions are analyzed. The event-driven simulation technique is used in the hard-particle limit, where the particles interact through instantaneous collisions. The structure (relative arrangement of particles) is analyzed using the bond-orientational order parameter. It is found that the shear flow reduces the order in the system, and the order parameter in a shear flow is lower than that in a collection of elastic hard disks at equilibrium. The distribution of relative velocities between colliding particles is analyzed. The relative velocity distribution undergoes a transition from a Gaussian distribution for nearly elastic particles, to an exponential distribution at low coefficients of restitution. However, the single-particle distribution function is close to a Gaussian in the dense limit, indicating that correlations between colliding particles have a strong influence on the relative velocity distribution. This results in a much lower dissipation rate than that predicted using the molecular chaos assumption, where the velocities of colliding particles are considered to be uncorrelated.
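
In the hard-particle limit the only model input is the collision rule: at contact, the normal component of the relative velocity is reversed and reduced by the coefficient of restitution. A minimal sketch of this standard rule for two equal-mass smooth inelastic disks (variable names are ours, not the paper's):

```python
import numpy as np

def collide(v1, v2, n, e=0.9):
    """Post-collision velocities for two equal-mass smooth inelastic
    disks. n is the unit vector along the line of centres at contact,
    e the coefficient of normal restitution: the normal relative
    velocity changes from g_n to -e * g_n, while the tangential
    component is unchanged (smooth particles)."""
    g_n = np.dot(v1 - v2, n)           # pre-collision normal relative velocity
    dv = 0.5 * (1.0 + e) * g_n * n     # impulse per unit mass on each disk
    return v1 - dv, v2 + dv
```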

Relevance:

80.00%

Publisher:

Abstract:

Seismic microzonation has generally been recognized as the most accepted tool in seismic hazard assessment and risk evaluation. In general, risk reduction can be achieved by reducing the hazard, the vulnerability, or the value at risk. Since the earthquake hazard cannot be reduced, one has to concentrate on vulnerability and value at risk. The vulnerability of an urban area or municipality depends on the vulnerability of its infrastructure and the redundancies within that infrastructure. The earthquake risk is the damage to buildings, together with the number of people killed or injured and the economic losses, during an earthquake with a return period corresponding to the time period considered. The principal approaches one can follow to reduce these losses are to avoid, where possible, high-hazard areas for the siting of buildings and infrastructure, and further to ensure that buildings and infrastructure are designed and constructed to resist expected earthquake loads. This can be done if one can assess the hazard at local scales. Seismic microzonation maps provide the basis for scientifically based decision-making to reduce earthquake risk for government and public agencies, private owners, and the general public. Further, seismic microzonation carried out on an appropriate scale provides a valuable tool for disaster mitigation planning and emergency response planning for urban centers and municipalities. It provides the basis for identifying the areas of a city or municipality that are most likely to experience serious damage in the event of an earthquake.

Relevance:

80.00%

Publisher:

Abstract:

Acoustic emission (AE) energy, instead of amplitude, associated with each event is used to estimate the fracture process zone (FPZ) size. A steep increase in the cumulative AE energy of the events with respect to time is correlated with the formation of the FPZ. Based on the AE energy released during these events and their locations, the FPZ size is obtained. The size-independent fracture energy is computed using the expressions given in the boundary effect model by the least squares method, since an over-determined system of equations is obtained when data from several specimens are used. As an alternative to the least squares method, a different method is suggested in which the transition ligament length, measured from the plot of histograms of AE events over the un-cracked ligament, is used directly to obtain the size-independent fracture energy. The fracture energy thus calculated appears to be size-independent.
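
With measured fracture energies from several specimens, the boundary effect model gives one equation per specimen in the single unknown G_F, hence an over-determined system solved by least squares. A minimal sketch, assuming the model's commonly quoted bilinear local fracture energy form and a known transition ligament length a_t (notation is illustrative, not the paper's):

```python
import numpy as np

def size_independent_Gf(ligaments, g_measured, a_t):
    """Least-squares estimate of the size-independent fracture energy
    G_F from per-specimen measured fracture energies g(a), assuming
    the bilinear boundary effect relation (an assumed form):
        g(a) = G_F * (1 - a_t / (2a))   for a >  a_t
        g(a) = G_F * a / (2 * a_t)      for a <= a_t
    Several specimens give the over-determined system A * G_F = g,
    solved here with numpy's lstsq."""
    a = np.asarray(ligaments, dtype=float)
    g = np.asarray(g_measured, dtype=float)
    A = np.where(a > a_t, 1.0 - a_t / (2.0 * a), a / (2.0 * a_t))
    GF, *_ = np.linalg.lstsq(A[:, None], g, rcond=None)
    return GF[0]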

Relevance:

80.00%

Publisher:

Abstract:

We study the dynamical properties of the homogeneous shear flow of inelastic dumbbells in two dimensions as a first step towards examining the effect of shape on the properties of flowing granular materials. The dumbbells are modelled as smooth fused disks characterized by the ratio of the distance between centres (L) and the disk diameter (D), with an aspect ratio (L/D) varying between 0 and 1 in our simulations. Area fractions studied are in the range 0.1-0.7, while coefficients of normal restitution (e_n) from 0.99 to 0.7 are considered. The simulations use a modified form of the event-driven methodology for circular disks. The average orientation is characterized by an order parameter S, which varies between 0 (for a perfectly disordered fluid) and 1 (for a fluid with the axes of all dumbbells in the same direction). We investigate power-law fits of S as a function of (L/D) and (1 - e_n^2). There is a gradual increase in ordering as the area fraction is increased, as the aspect ratio is increased, or as the coefficient of restitution is decreased. The order parameter has a maximum value of about 0.5 for the highest area fraction and lowest coefficient of restitution considered here. The mean energy of the velocity fluctuations in the flow direction is higher than that in the gradient direction and the rotational energy, though the difference decreases as the area fraction increases, due to the efficient collisional transfer of energy between the three directions. The distributions of the translational and rotational velocities are Gaussian to a very good approximation. The pressure is found to be remarkably independent of the coefficient of restitution. The pressure and dissipation rate show relatively little variation when scaled by the collision frequency for all the area fractions studied here, indicating that the collision frequency determines the momentum transport and energy dissipation, even at the lowest area fractions studied here. The mean angular velocity of the particles is equal to half the vorticity at low area fractions, but its magnitude systematically decreases to less than half the vorticity as the area fraction is increased, even though the stress tensor is symmetric.
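
For axes with head-tail symmetry (a dumbbell pointing at angle t is the same as at t + pi), a standard 2D order parameter uses the doubled angle; the abstract does not spell out the precise definition used, so this is an assumed standard form:

```python
import numpy as np

def order_parameter(angles):
    """2D orientational order parameter for axes with head-tail
    symmetry: S = sqrt(<cos 2t>^2 + <sin 2t>^2), which is 0 for
    perfectly disordered axes and 1 when all axes are aligned
    (a standard definition, assumed here)."""
    a = np.asarray(angles, dtype=float)
    return np.hypot(np.cos(2.0 * a).mean(), np.sin(2.0 * a).mean())
```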

Relevance:

80.00%

Publisher:

Abstract:

Removal of impurity elements from hot metal is essential in basic oxygen steelmaking. Oxidation of phosphorus from hot metal has been studied by several authors since the early days of steelmaking. The influence of different parameters on the distribution of phosphorus, seen during the recent work of the authors, differs somewhat from that reported earlier. On the other hand, removal of sulphur during steelmaking has drawn much less attention. This may be because the magnitude of desulphurisation in oxygen steelmaking is relatively low, and because desulphurisation during hot metal pre-treatment or in the ladle furnace offers better commercial viability. Further, it is normally accepted that sulphur is removed to steelmaking slag in the form of sulphide only. However, recent investigations have indicated that a significant amount of the sulphur removed during basic oxygen steelmaking can exist in the form of sulphate in the slag under oxidising conditions. The distribution of sulphur during steelmaking becomes more important in the event of carry-over of sulphur-rich blast-furnace slag, which increases the sulphur load in the BOF. The chemical nature of sulphur in this slag undergoes a gradual transition from sulphide to sulphate as the oxidative refining progresses.

Relevance:

80.00%

Publisher:

Abstract:

A new fault-tolerant multi-transputer architecture capable of tolerating the failure of any one component in the system is described. In the proposed architecture, the processing nodes are automatically reconfigured in the event of a fault, and the computations continue from the stage where the fault occurred. The process of reconfiguration is transparent to the user, and the identity of the failed component is communicated to the user along with the results of the computations. A parallel solution of a typical engineering problem, the solution of Laplace's equation by the boundary element method, has been implemented. The performance of the architecture in the event of faults has been investigated.
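
The abstract describes the behaviour (automatic reconfiguration, resumption from the faulting stage) but not the mechanism; below is a minimal checkpoint-and-reconfigure loop conveying the idea, with all names hypothetical:

```python
def run_with_failover(stages, primary, spare):
    """Run a staged computation with single-fault tolerance: on a node
    fault, reconfigure onto the spare node and retry the stage where
    the fault occurred, using the last completed stage's state."""
    state, node = None, primary
    stage = 0
    while stage < len(stages):
        try:
            state = stages[stage](state, node)  # run one stage on the node
            stage += 1                          # implicit checkpoint: stage done
        except RuntimeError:                    # fault detected on this node
            node = spare                        # transparent reconfiguration
            # state is unchanged, so the loop retries the same stage
    return state, node                          # results plus node identity
```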

Relevance:

80.00%

Publisher:

Abstract:

We have made a detailed study of the signals expected at CERN LEP 2 from charged scalar bosons whose dominant decay channels are into four fermions. The event rates as well as the kinematics of the final states are discussed when such scalars are either pair-produced or generated through a tree-level interaction involving a charged scalar, the W, and the Z. The backgrounds in both cases are discussed. We also suggest the possibility of reconstructing the mass of such a scalar at LEP 2.

Relevance:

80.00%

Publisher:

Abstract:

A Wireless Sensor Network (WSN) powered using harvested energy is limited in its operation by the instantaneous power available. Since energy availability can differ across nodes in the network, network setup and collaboration is a non-trivial task. At the same time, in the event of excess energy, exciting node-collaboration possibilities exist that are often not feasible with battery-driven sensor networks. Operations such as sensing, computation, storage, and communication are required to achieve the common goal of any sensor network. In this paper, we design and implement a smart application that uses a Decision Engine and morphs itself into an energy-matched application. The results are based on measurements using IRIS motes running on solar energy. We have done away with batteries, instead using low-leakage supercapacitors to store harvested energy. The Decision Engine utilizes two pieces of data to provide its recommendations. First, a history-based energy prediction model assists the engine with information about incoming energy. The second input is the energy cost database for operations. The energy-driven Decision Engine calculates the energy budgets and recommends the best possible set of operations. Under excess-energy conditions, the Decision Engine promiscuously sniffs the neighborhood, looking for all possible data from neighbors. This data includes neighbors' energy levels and sensor data. Equipped with this data, nodes establish detailed data correlation and thus enhance collaboration, for example by filling in data gaps on behalf of nodes hibernating under low-energy conditions. The results are encouraging: node and network lifetimes of the sensor nodes running the smart application are found to be significantly higher than for the base application.
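
A minimal sketch of the two inputs and the budgeting step (structure inferred from the abstract; the prediction model's actual form and the operation names are assumptions):

```python
def predict_energy(samples, alpha=0.5):
    """History-based incoming-energy prediction, here a simple
    exponentially weighted moving average over past harvest samples
    (an assumed stand-in for the paper's model)."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

def recommend(predicted_energy, cost_db, ops_by_priority):
    """Pick the highest-priority operations that fit within the
    predicted energy budget; skipped operations wait for the next
    energy cycle."""
    budget, plan = predicted_energy, []
    for op in ops_by_priority:      # e.g. ["sense", "compute", "store", "tx"]
        if cost_db[op] <= budget:
            plan.append(op)
            budget -= cost_db[op]
    return plan
```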

Relevance:

80.00%

Publisher:

Abstract:

A multilevel inverter topology for seven-level space vector generation is proposed in this paper. In this topology, the seven-level structure is realized using two conventional two-level inverters and six capacitor-fed H-bridge cells. It needs only two isolated dc-voltage sources of voltage rating V_dc/2, where V_dc is the dc voltage magnitude required by the conventional neutral point clamped (NPC) seven-level topology. The proposed topology is capable of maintaining the H-bridge capacitor voltages at the required level of V_dc/6 under all operating conditions, covering the entire linear modulation and overmodulation regions, by making use of the switching state redundancies. In the event of any switch failure in the H-bridges, this inverter can operate in three-level mode, a feature that enhances the reliability of the drive system. The two-level inverters, which operate at the higher voltage level of V_dc/2, switch less often than the H-bridges, which operate at the lower voltage level of V_dc/6, resulting in reduced switching loss. The experimental verification of the proposed topology is carried out over the entire modulation range, under steady-state as well as transient conditions.
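
A hypothetical enumeration of how seven pole-voltage levels can arise from such a cascade, in units of V_dc/6: assume the two-level stages contribute {0, V_dc/2, V_dc} and each H-bridge cell adds or subtracts V_dc/6. The exact switching structure is in the paper, not the abstract, so this only illustrates the seven-level span:

```python
# Levels in units of V_dc/6 (assumed structure, for illustration only).
base = {0, 3, 6}        # from the two two-level inverters (V_dc/2 each)
hbridge = {-1, 0, +1}   # from a capacitor-fed H-bridge cell at V_dc/6
levels = sorted({b + h for b in base for h in hbridge if 0 <= b + h <= 6})
print(levels)           # [0, 1, 2, 3, 4, 5, 6] -> seven levels of V_dc/6
```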

Relevance:

80.00%

Publisher:

Abstract:

The unending quest for performance improvement, coupled with advancements in integrated circuit technology, has led to the development of new architectural paradigms. The speculative multithreaded (SpMT) architectural philosophy relies on aggressive speculative execution for improved performance. However, aggressive speculative execution is a mixed blessing: it improves performance when speculation succeeds, but adversely affects energy consumption (and performance) through useless computation in the event of mis-speculation. Dynamic instruction criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for SpMT architecture that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques utilizing the criticality information, namely delaying non-critical loads and criticality-based thread prediction, for reducing useless computation and energy consumption. Experimental results showing the break-up of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of a multiscalar processor that implements SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for criticality-based thread prediction and the criticality-based delayed load scheme respectively, while the improvements in dynamic energy-delay product are 13.9% and 5.5%, respectively.
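
Criticality analyses of this kind are commonly framed as a longest-path computation over the dynamic dependence graph: an instruction is critical if it lies on the path that determines total execution time. A minimal sketch under that framing (the paper's micro-execution model is richer than the abstract states; deps and latency here are hypothetical inputs):

```python
import functools

def critical_instructions(deps, latency):
    """Mark instructions on the longest (critical) path of a dynamic
    dependence graph. deps[i] lists the instructions i depends on;
    latency[i] is i's execution latency."""

    @functools.lru_cache(maxsize=None)
    def finish(i):  # longest-path completion time of instruction i
        return latency[i] + max((finish(p) for p in deps[i]), default=0)

    total = max(finish(i) for i in deps)
    critical = {i for i in deps if finish(i) == total}  # path endpoints
    frontier = list(critical)
    while frontier:  # walk "tight" dependence edges backwards
        i = frontier.pop()
        for p in deps[i]:
            if p not in critical and finish(p) + latency[i] == finish(i):
                critical.add(p)
                frontier.append(p)
    return critical
```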

Relevance:

80.00%

Publisher:

Abstract:

The El Niño/Southern Oscillation phenomenon, characterized by anomalous sea surface temperatures and winds in the tropical Pacific, affects climate across the globe [1]. El Niños occur every 2-7 years, whereas the El Niño/Southern Oscillation itself varies on decadal timescales in frequency and amplitude, with a different spatial pattern of surface anomalies [2] each time the tropical Pacific undergoes a regime shift. Recent work has shown that Bjerknes feedback [3,4] (coupling of the atmosphere and the ocean through changes in equatorial winds driven by changes in sea surface temperature owing to suppression of equatorial upwelling in the east Pacific) is not necessary [5] for the development of an El Niño. Thus it is unclear what remains constant through regimes and is crucial for producing the anomalies recognized as El Niño. Here we show that the subsurface process of discharging warm waters always begins in the boreal summer/autumn of the year before the event (up to 18 months before the peak), independent of regimes, identifying the discharge process as fundamental to the El Niño onset. It is therefore imperative that models capture this process accurately, to further our theoretical understanding, improve forecasts, and predict how the El Niño/Southern Oscillation may respond to climate change.