23 results for event log
Abstract:
The standard curve-fitting methods for determining the coefficient of consolidation, Casagrande's log t method and Taylor's root t method, use the later part of the consolidation curve and are influenced by secondary compression effects. The literature shows that secondary compression is concurrent with primary consolidation and that its effect is to decrease the value of the coefficient of consolidation. If the early part of the time-compression data is used, the values obtained are less influenced by secondary compression effects. A method that uses the early part of the log t plot is proposed in this technical note. As the influence of secondary compression is reduced, the value obtained by this method is greater than that yielded by both standard methods. The permeability values computed from c(v) obtained by the proposed method are in better agreement with the measured values than those from the standard methods, showing that the effects of secondary compression are minimized. Time-compression data of shorter duration are sufficient for the determination of c(v) if the coefficient of secondary compression is not required.
Abstract:
An event sequence recorder is a specialized piece of equipment that accepts inputs from switches and contactors and prints the sequence in which they operate. This paper describes an event sequence recorder based on an Intel 8085 microprocessor. It scans the inputs every millisecond and prints, in a compact form, the channel number, type of event (normal or abnormal), and time of occurrence. It also communicates these events over an RS232C link to a remote computer. A real-time calendar/clock is included. The system described has been designed for continuous operation in process plants, power stations, etc. The system has been tested and found to work satisfactorily.
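To make the scan-and-report cycle concrete, here is a minimal Python sketch of the loop the abstract describes: poll the inputs every millisecond, detect state changes, and report the channel number, event type, and time of occurrence. The original system is 8085 firmware; the channel count, the simulated read_inputs/report functions, and the normal/abnormal mapping below are illustrative assumptions, not the paper's design.

```python
import random
import time

NUM_CHANNELS = 16      # assumed channel count (illustrative only)
SCAN_PERIOD = 0.001    # scan the inputs every millisecond

def read_inputs():
    """Stand-in for reading the switch/contactor input ports; here it just simulates them."""
    return tuple(random.randint(0, 1) for _ in range(NUM_CHANNELS))

def report(channel, event_type, timestamp):
    """Stand-in for the printer / RS232C link: channel number, event type, time of occurrence."""
    print(f"ch={channel:02d} {event_type:8s} t={timestamp:.3f}")

def run_recorder(num_scans=1000):
    previous = read_inputs()
    for _ in range(num_scans):
        time.sleep(SCAN_PERIOD)
        current = read_inputs()
        for channel, (old, new) in enumerate(zip(previous, current)):
            if old != new:
                # A state change is an event; whether it is "normal" or "abnormal"
                # depends on the configured rest state of the channel (assumed here).
                event_type = "abnormal" if new else "normal"
                report(channel, event_type, time.time())
        previous = current

if __name__ == "__main__":
    run_recorder(num_scans=100)
```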
Abstract:
Garnet-kyanite-staurolite and garnet-biotite-staurolite gneisses were collected from a locality within the Lukung area, which belongs to the Pangong metamorphic complex in the Shyok valley, Ladakh Himalaya. The kyanite-free samples have garnet and staurolite in equilibrium, where the garnets show euhedral texture and have flat compositional profiles. On the other hand, the kyanite-bearing sample shows an equilibrium assemblage of garnet-kyanite-staurolite along with muscovite and biotite. In this case, garnet has an inclusion-rich core with a distinct grain boundary, which was later overgrown by inclusion-free euhedral garnet. Garnet cores are rich in Mn and Ca, while the rims are poor in Mn and rich in Fe and Mg, suggesting two distinct generations of growth. However, the compositional profiles and textural signatures of the garnets suggest the same stage of P-T evolution for the formation of the inclusion-free euhedral garnets in the kyanite-free gneisses and the inclusion-free euhedral garnet rims in the kyanite-bearing gneiss. Muscovites from the four samples have consistent K-Ar ages, suggesting a cooling age of ∼10 Ma for the gneisses. These ages place a constraint on the timing of the youngest post-collision metamorphic event, which may be closely related to activation of the Karakoram fault in the Pangong metamorphic complex.
Abstract:
The keyword-based search technique suffers from the problem of synonymic and polysemic queries. Current approaches address only the problem of synonymic queries, in which different queries have the same information requirement. The problem of polysemic queries, i.e., the same query having different intentions, remains unaddressed. In this paper, we propose the notion of intent clusters, the members of which have the same intention. We develop a clustering algorithm that uses the user session information in query logs, in addition to query-URL entries, to identify clusters of queries having the same intention. The proposed approach has been studied through case examples from actual AOL log data, and the clustering algorithm is shown to be successful in discerning user intentions.
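The abstract does not spell out the clustering algorithm, so the following is only a hedged Python sketch of the general idea: score query pairs by their overlap in clicked URLs and in user sessions, then group queries whose score exceeds a threshold. The similarity weights, the threshold, the greedy single-link grouping, and the toy data are illustrative assumptions, not the paper's actual method.

```python
from itertools import combinations

def query_similarity(q1, q2, urls, sessions, w_url=0.5, w_session=0.5):
    """Similarity between two queries from shared clicked URLs and shared user sessions
    (Jaccard overlap on each, combined with assumed weights)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    return w_url * jaccard(urls[q1], urls[q2]) + w_session * jaccard(sessions[q1], sessions[q2])

def intent_clusters(queries, urls, sessions, threshold=0.3):
    """Greedy single-link grouping: queries whose similarity exceeds the threshold
    end up in the same intent cluster (illustrative only)."""
    parent = {q: q for q in queries}
    def find(q):
        while parent[q] != q:
            parent[q] = parent[parent[q]]
            q = parent[q]
        return q
    for q1, q2 in combinations(queries, 2):
        if query_similarity(q1, q2, urls, sessions) >= threshold:
            parent[find(q1)] = find(q2)
    clusters = {}
    for q in queries:
        clusters.setdefault(find(q), []).append(q)
    return list(clusters.values())

# Toy example: two queries share clicked URLs/sessions, the third does not,
# so two intent clusters emerge.
queries = ["jaguar price", "jaguar cost", "jaguar habitat"]
urls = {"jaguar price": {"cars.example/jaguar"}, "jaguar cost": {"cars.example/jaguar"},
        "jaguar habitat": {"wildlife.example/jaguar"}}
sessions = {"jaguar price": {1, 2}, "jaguar cost": {2}, "jaguar habitat": {3}}
print(intent_clusters(queries, urls, sessions))
```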
Abstract:
The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and subsequently the physical mechanisms leading to its initiation and sustenance are explored. As a case study, the heavy precipitation event that led to 65 cm of rainfall accumulation in a span of around 6 h (1430 LT-2030 LT) over Santacruz (Mumbai, India) on 26 July 2005 is selected. Three sets of numerical experiments have been conducted. The first set of experiments (EXP1) consisted of a four-member ensemble and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble, with a four-level one-way nested integration and grid spacings of 54, 18, 6, and 1 km. The model was able to simulate a realistic spatial structure with the 54, 18, and 6 km grids; however, with the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set of experiments (EXP3) consisted of a five-member ensemble, with a four-level one-way nesting and grid spacings of 54, 18, 6, and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to observations than in EXP2. Specifically, among all experiments, the timing of maximum rainfall, the abrupt increase in rainfall intensities (a major feature of this event), and the rainfall intensities simulated in EXP3 (at 6 km resolution) were closest to observations. Analysis of the physical mechanisms causing the initiation and sustenance of the event reveals some interesting aspects. Deep convection was found to be initiated by mid-tropospheric convergence that extended to lower levels during the later stage. In addition, there was a strong negative vertical gradient of equivalent potential temperature, suggesting strong atmospheric instability prior to and during the occurrence of the event. Finally, the presence of conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
Abstract:
The behaviour of saturated soils undergoing consolidation is very complex. It may not follow Terzaghi's theory over the entire consolidation process. Different soils may behave in such a way as to fit into Terzaghi's theory over some specific stages of the consolidation process (percentage of consolidation). This may be one of the reasons for the difficulties faced by the existing curve-fitting procedures in obtaining the coefficient of consolidation, c(v). It has been shown that the slope of the initial linear portion of the theoretical log U-log T curve is constant over a wider range of the degree of consolidation, U, when compared with the other methods in use. This initial well-defined straight line in the log U-log T plot intersects the U = 100% line at T = π/4, which corresponds to U = 88.3%. The proposed log δ-log t method is based on this observation and gives the value of c(v) through a simple graphical construction. In the proposed method, which is more versatile, identification of the characteristic straight lines is very clear, the intersection of these lines is more precise, and the method does not depend upon the initial compression for the determination of c(v).
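For context, the value of c(v) follows from the intersection point via the standard Terzaghi definition of the time factor; this worked relation is implied by, but not spelled out in, the abstract, with H denoting the drainage path length and t(88.3) the time at which the fitted construction indicates 88.3% consolidation:

\[
T = \frac{c_v\, t}{H^2}, \qquad T_{88.3} = \frac{\pi}{4}
\;\;\Longrightarrow\;\;
c_v = \frac{\pi H^2}{4\, t_{88.3}} .
\]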
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms that require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle a Cartesian product of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m^2 log n^2) iterations; in each iteration one solves an MKL problem involving m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
Abstract:
Molecular complexes of melamine with hydroxy- and dihydroxybenzoic acids have been analyzed to assess the collective role of the hydroxyl (OH) and carboxyl (COOH) functionalities in the recognition process. In most cases, solvents of crystallization play a major role in self-assembly and structure stabilization. Hydrated compounds generate linear chains of melamine molecules with pendant acid molecules, resulting in a zipper architecture. However, anhydrous and solvated compounds generate tetrameric units consisting of melamine dimers together with acid molecules. These tetramers in turn interweave to form a Lincoln log arrangement in the crystal. The salt/co-crystal formation in these complexes cannot be predicted a priori on the basis of ΔpK(a) values, as there exists a salt-to-co-crystal continuum.
Abstract:
Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has important theoretical properties connecting frequent episode discovery with HMM learning. This paper presents some new algorithms for frequent episode discovery under this non-overlapped-occurrences-based frequency definition. The algorithms presented here are better (by a factor of N, where N denotes the size of the episodes being discovered) in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
Abstract:
Because of its essential nature, each step of transcription, viz., initiation, elongation, and termination, is subjected to elaborate regulation. A number of transcription factors modulate the rates of transcription at these different steps, and several inhibitors shut down the process. Many modulators, including small molecules and proteinaceous inhibitors, bind the RNA polymerase (RNAP) secondary channel to control transcription. We describe here the first small protein inhibitor of transcription in Mycobacterium tuberculosis. Rv3788 is a homolog of the Gre factors that binds near the secondary channel of RNAP to inhibit transcription. The factor also affected the action of guanosine pentaphosphate (pppGpp) on transcription and abrogated Gre action, indicating its function in the modulation of the catalytic center of RNAP. Although it has a Gre factor-like domain organization with the conserved acidic residues in the N terminus and retains interaction with RNAP, the factor did not show any transcript cleavage stimulatory activity. Unlike Rv3788, another Gre homolog from Mycobacterium smegmatis, MSMEG_6292 did not exhibit transcription-inhibitory activities, hinting at the importance of the former in influencing the lifestyle of M. tuberculosis.
Abstract:
In this paper we consider the process of discovering frequent episodes in event sequences. The most computationally intensive part of this process is counting the frequencies of a set of candidate episodes. We present two new frequency counting algorithms for speeding up this part. These, referred to as non-overlapping and non-interleaved frequency counts, are based on directly counting suitable subsets of the occurrences of an episode. Hence they are different from the frequency counts of Mannila et al. [1], who count the number of windows in which the episode occurs. Our new frequency counts offer a speed-up factor of 7 or more on real and synthetic datasets. We also show how the new frequency counts can be used when the events in episodes have time durations as well.
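To illustrate the occurrence-based style of counting (as opposed to window-based counting), here is a minimal Python sketch that counts non-overlapped occurrences of a single serial episode with one left-to-right automaton: each time the automaton completes, one occurrence is counted and it resets, so counted occurrences never share events. The function name and toy data are invented for illustration; the papers above handle whole sets of candidate episodes simultaneously.

```python
def count_non_overlapped(event_sequence, episode):
    """Count non-overlapped occurrences of a serial episode.

    event_sequence: iterable of (event_type, time) pairs, ordered by time.
    episode: tuple of event types, e.g. ('A', 'B', 'C') for A -> B -> C.
    """
    count = 0
    next_idx = 0  # position of the next event type the automaton is waiting for
    for event_type, _time in event_sequence:
        if event_type == episode[next_idx]:
            next_idx += 1
            if next_idx == len(episode):  # episode completed
                count += 1
                next_idx = 0              # reset, so occurrences do not overlap
    return count

# Example: two non-overlapped occurrences of A -> B in the sequence below.
seq = [('A', 1), ('B', 2), ('A', 3), ('A', 4), ('B', 5)]
print(count_non_overlapped(seq, ('A', 'B')))  # -> 2
```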
Abstract:
Discovering patterns in temporal data is an important task in data mining. A successful method for this was proposed by Mannila et al. [1] in 1997. In their framework, mining for temporal patterns in a database of sequences of events is done by discovering the so-called frequent episodes. These episodes characterize interesting collections of events occurring relatively close to each other in some partial order. However, in this framework (and in many others for finding patterns in event sequences), the ordering of events in an event sequence is the only allowed temporal information. But there are many applications where the events are not instantaneous; they have time durations. Interesting episodes that we want to discover may need to contain information regarding event durations, etc. In this paper we extend Mannila et al.'s framework to tackle such issues. In our generalized formulation, episodes are defined so that much more temporal information about events can be incorporated into the structure of an episode. This significantly enhances the expressive capability of the rules that can be discovered in the frequent episode framework. We also present algorithms for discovering such generalized frequent episodes.
Abstract:
We consider a small-extent sensor network for event detection, in which nodes periodically take samples and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center for processing the measurements. The Bayesian setting is assumed; that is, the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm at the fusion center is network-aware and processes measurements as they arrive, but in a time-causal order. In this case, the decision statistic depends on the network delays, whereas in the network-oblivious case it does not. This yields a Bayesian change-detection problem with a trade-off between the random network delay and the decision delay; that is, a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change-detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analyzed as a partially observable Markov decision process, in which the states of the queues and the delays in the network need to be maintained. A sufficient decision statistic is the network state together with the posterior probability of change having occurred, given the measurements received and the state of the network. The optimal regimes are studied using simulation.
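For reference, in the classical Bayesian (Shiryaev) setting without network delays, with a geometric change-time prior of parameter ρ and pre-/post-change measurement densities f_0 and f_1, the posterior probability of change π_k is updated recursively as below; the network-aware statistic described above augments this with the state of the network. This is the standard textbook recursion, not a formula taken from the paper:

\[
\tilde{\pi}_k = \pi_{k-1} + (1 - \pi_{k-1})\,\rho ,
\qquad
\pi_k = \frac{\tilde{\pi}_k\, f_1(x_k)}{\tilde{\pi}_k\, f_1(x_k) + (1 - \tilde{\pi}_k)\, f_0(x_k)} ,
\]

where x_k is the measurement received at sampling instant k and π_0 is the prior probability that the change has already occurred.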
Abstract:
Mountain waves in the stratosphere have been observed over elevated topography using both nadir-looking and limb-viewing satellites. However, the characteristics of mountain waves generated over the Himalayan mountain range and the adjacent Tibetan Plateau are relatively less explored. The present study reports on the three-dimensional (3-D) properties of a mountain wave event that occurred over the western Himalayan region on 9 December 2008. Observations made by the Atmospheric Infrared Sounder on board the Aqua satellite and the Microwave Limb Sounder on board the Aura satellite are used to delineate the wave properties. The observed horizontal (lambda(x), lambda(y)) and vertical (lambda(z)) wavelengths are 276 km (zonal), 289 km (meridional), and 25 km, respectively. A good agreement is found between the observed and modeled/analyzed vertical wavelength for a stationary gravity wave determined using the Modern Era Retrospective Analysis for Research and Applications (MERRA) reanalysis winds. The analysis of both the National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis and MERRA winds shows that the waves are primarily forced by strong flow across the topography. Using the 3-D properties of the waves and the corrected temperature amplitudes, we estimated wave momentum fluxes of the order of ∼0.05 Pa, which is in agreement with large-amplitude mountain wave events reported elsewhere. In this regard, the present study is informative for the gravity wave drag schemes employed in current general circulation models for this region.