858 results for Operation based method
Abstract:
The frequent episode discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of the frequency of an episode have been proposed, along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequency. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm of which all current algorithms are special cases. This unified view allows one to gain insights into the different frequencies, and we present quantitative relationships among them. Our unified view also helps in obtaining correctness proofs for the various counting algorithms, as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps in generalizing the algorithm to count episodes with general partial orders.
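As a concrete illustration of one such frequency notion, the sketch below counts non-overlapped occurrences of a serial episode with a single left-to-right automaton scan. This is a minimal toy under my own assumptions, not the paper's generic counting algorithm; all names are illustrative.

```python
# Hypothetical sketch: counting non-overlapped occurrences of a serial
# episode (one of the frequency notions the paper unifies) by scanning
# the event sequence with a simple automaton.

def count_non_overlapped(events, episode):
    """events: list of (time, event_type); episode: tuple of event types
    that must occur in order. Returns the non-overlapped frequency."""
    count, state = 0, 0
    for _, etype in events:
        if etype == episode[state]:
            state += 1
            if state == len(episode):   # full occurrence completed
                count += 1
                state = 0               # restart: occurrences may not overlap
    return count

seq = [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'A'), (5, 'B'), (6, 'C')]
print(count_non_overlapped(seq, ('A', 'B', 'C')))  # -> 2
```

The greedy leftmost match used here is what makes the count well defined: it yields a maximal set of non-overlapped occurrences for serial episodes.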
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.106015]
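For orientation, the sketch below shows the generalized cross-validation baseline that the MRM-based method is compared against: Tikhonov filtering in the SVD basis with the GCV function minimized over a grid of parameters. The toy Jacobian and noise level are my assumptions; the authors' MRM procedure is not reproduced here.

```python
# Hedged sketch of GCV-based Tikhonov parameter selection for y = J x.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 30))          # stand-in for the DOT Jacobian
y = J @ rng.standard_normal(30) + 0.01 * rng.standard_normal(50)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
beta = U.T @ y
resid_perp = np.sum((y - U @ beta) ** 2)   # part of y outside range(J)

def gcv(lam):
    f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
    resid = np.sum(((1 - f) * beta) ** 2) + resid_perp
    return resid / (len(y) - np.sum(f)) ** 2

lams = np.logspace(-6, 1, 200)
lam_opt = lams[np.argmin([gcv(l) for l in lams])]
x_rec = Vt.T @ ((s / (s**2 + lam_opt**2)) * beta)   # regularized reconstruction
print("GCV-selected lambda: %.2e" % lam_opt)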
Abstract:
Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method, which uses Lanczos bidiagonalization, is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method. Conclusions: The LSQR-type method overcomes the inherent computational expense of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
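A minimal sketch of the pairing described, under my own assumptions: SciPy's lsqr performs the Lanczos-bidiagonalization solve with Tikhonov damping, and Nelder-Mead (a simplex method) searches over the damping parameter. The discrepancy-style objective is my choice for illustration, not necessarily the paper's criterion.

```python
# Hedged sketch: damped LSQR solve + simplex search over the parameter.
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(1)
J = rng.standard_normal((60, 40))          # stand-in Jacobian
y = J @ rng.standard_normal(40) + 0.01 * rng.standard_normal(60)
noise_level = 0.01 * np.sqrt(60)           # expected residual norm

def objective(log_lam):
    lam = np.exp(log_lam[0])
    x = lsqr(J, y, damp=lam)[0]            # Lanczos-bidiagonalization solve
    # discrepancy-style criterion: residual should match the noise level
    return (np.linalg.norm(J @ x - y) - noise_level) ** 2

res = minimize(objective, x0=[np.log(1e-2)], method='Nelder-Mead')
print("selected lambda: %.2e" % np.exp(res.x[0]))
```

Each simplex evaluation costs one LSQR run, which is the source of the efficiency the abstract claims over repeated full MRM computations.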
Abstract:
In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The detection algorithm alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient of the detection algorithm, responsible for achieving near-optimal performance at low complexity, is the joint use of this mixed Gibbs sampling (MGS) strategy and a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
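The core mixed-sampling idea can be sketched on a small 4-QAM toy as below: each coordinate update is a Gibbs draw with probability (1 - q) and a uniform random symbol with probability q, which is what helps escape the high-SNR stalling. The dimensions, mixing ratio, and temperature choice are illustrative assumptions, and the paper's multiple-restart criterion is omitted.

```python
# Hedged toy sketch of mixed Gibbs sampling (MGS) for MIMO detection.
import numpy as np

rng = np.random.default_rng(2)
K, N = 8, 8                                  # users, BS antennas (toy sizes)
symbols = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)   # 4-QAM
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
x_true = rng.choice(symbols, K)
sigma = 0.1
y = H @ x_true + sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

q = 1.0 / K                                  # mixing probability (a common choice)
x = rng.choice(symbols, K)                   # random initialization
for _ in range(200):                         # sampling sweeps
    for k in range(K):
        if rng.random() < q:                 # random uniform coordinate update
            x[k] = rng.choice(symbols)
            continue
        costs = np.empty(len(symbols))
        for i, s in enumerate(symbols):      # Gibbs update for coordinate k
            x[k] = s
            costs[i] = np.linalg.norm(y - H @ x) ** 2
        p = np.exp(-(costs - costs.min()) / sigma**2)
        x[k] = symbols[rng.choice(len(symbols), p=p / p.sum())]
print(np.sum(x != x_true), "symbol errors")
```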
Abstract:
Research has been undertaken to ascertain the predictability of non-stationary time series using wavelet- and Empirical Mode Decomposition (EMD)-based time series models. Methods have been developed in the past to decompose a time series into components; forecasting these components, combined with the random component, can yield predictions. Following this approach, wavelet and EMD analyses have been incorporated separately; each decomposes a time series into independent orthogonal components with both time and frequency localization. The component series are fitted with specific auto-regressive models to obtain forecasts, which are then combined to obtain the actual predictions. Four non-stationary streamflow sites (USGS data resources) with monthly total volumes and two non-stationary gridded rainfall sites (IMD) with monthly total rainfall are considered for the study. Predictability is checked for six- and twelve-month-ahead forecasts across both methodologies. Based on performance measures, it is observed that the wavelet-based method has better prediction capability than the EMD-based method, despite some of the limitations of time series methods and the manner in which the decomposition takes place. Finally, the study concludes that the wavelet-based time series algorithm can be used to model events such as droughts with reasonable accuracy. Some modifications that could extend the scope of applicability to other areas of hydrology are also discussed. (C) 2013 Elsevier B.V. All rights reserved.
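The decompose-forecast-recombine pipeline can be sketched as follows, assuming PyWavelets and statsmodels as the libraries (my choice, not the paper's): decompose the series into additive scale components, fit an autoregressive model per component, and sum the component forecasts.

```python
# Hedged sketch of a wavelet + AR forecasting pipeline on a toy series.
import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(3)
t = np.arange(512)
flow = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(512)  # toy monthly data

coeffs = pywt.wavedec(flow, 'db4', level=3)
components = []
for i in range(len(coeffs)):               # one additive component per scale
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, 'db4')[:len(flow)])

horizon = 6                                 # six-months-ahead forecast
forecast = np.zeros(horizon)
for comp in components:                     # AR model fitted per component
    fit = AutoReg(comp, lags=12).fit()
    forecast += fit.predict(start=len(comp), end=len(comp) + horizon - 1)
print(forecast)
```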
Abstract:
The paper discusses a wave-propagation-based method for identifying damage, such as delamination and skin-stiffener debonding, in built-up aircraft structural components. First, a spectral finite element model (SFEM) is developed for modeling wave propagation in general built-up structures by assembling 2D spectral plate elements. The developed numerical model is validated against conventional 2D FEM. Studies are performed to capture the mode coupling, that is, the flexural-axial coupling present in the wave responses. Lastly, damage in these built-up structures is identified using the developed SFEM model and the measured responses, via the Damage Force Indicator (DFI) technique.
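The DFI idea admits a very small numerical sketch: apply the healthy-structure stiffness to the measured (damaged) response; the residual force vector is nonzero only at degrees of freedom connected to the damage. The matrices below are random stand-ins, not an SFEM model, and the static form is a simplification of the dynamic-stiffness version.

```python
# Minimal numpy sketch of the Damage Force Indicator (DFI) concept.
import numpy as np

rng = np.random.default_rng(4)
n = 12                                      # toy number of DOFs
K_healthy = rng.standard_normal((n, n))
K_healthy = K_healthy @ K_healthy.T         # symmetric positive definite
K_damaged = K_healthy.copy()
K_damaged[5:7, 5:7] *= 0.8                  # local stiffness loss (the "damage")

F = np.zeros(n); F[0] = 1.0                 # applied load
u_measured = np.linalg.solve(K_damaged, F)  # response of the damaged structure

dfi = np.abs(K_healthy @ u_measured - F)    # damage force indicator
print("suspected DOFs:", np.nonzero(dfi > 0.01 * dfi.max())[0])
```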
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and thus fail to account for the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a given acceleration value receives contributions from different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard, followed by the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from the two methods is presented. In an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole records obtained in the study area. The CPT results match well with those obtained from a similar analysis using SPT data.
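The performance-based calculation the abstract outlines can be sketched as a double sum: the annual rate of liquefaction is the probability of liquefaction conditioned on each (acceleration, magnitude) pair, weighted by that pair's annual rate from the PSHA deaggregation. All numbers and the toy fragility model below are invented for illustration; they are not the Bangalore study's values.

```python
# Hedged sketch of a performance-based liquefaction return period.
import numpy as np
from scipy.stats import norm

pga = np.array([0.05, 0.10, 0.20, 0.40])          # g
mags = np.array([5.5, 6.5, 7.5])
rate = np.array([[1e-2, 3e-3, 5e-4],              # annual rate of each
                 [3e-3, 1e-3, 2e-4],              # (pga, magnitude) bin,
                 [5e-4, 3e-4, 8e-5],              # as from a PSHA deaggregation
                 [8e-5, 6e-5, 3e-5]])

def p_liquefaction(a, m, crr75=0.15):
    """Toy fragility: P(FS < 1) for a layer with CRR(M=7.5) = crr75."""
    msf = (m / 7.5) ** -2.56                       # magnitude scaling factor
    csr = 0.65 * a * 1.1                           # simplified cyclic stress ratio
    fs = crr75 * msf / csr
    return norm.cdf(-np.log(fs) / 0.5)             # lognormal model uncertainty

lam = sum(p_liquefaction(a, m) * rate[i, j]
          for i, a in enumerate(pga) for j, m in enumerate(mags))
print("return period of liquefaction: about %.0f years" % (1 / lam))
```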
Abstract:
Multilevel inverters with dodecagonal (12-sided polygon) voltage space vector (SV) structures have advantages such as extension of the linear modulation range, elimination of the fifth and seventh harmonics in phase voltages and currents over the full modulation range including extreme 12-step operation, reduced device voltage ratings, lower dv/dt stresses on devices and motor phase windings (resulting in fewer EMI/EMC problems), and lower switching frequency, making them well suited to high-power drive applications. This paper proposes a simple method to obtain pulsewidth modulation (PWM) timings for a dodecagonal voltage SV structure using only sampled reference voltages. In addition, a carrier-based method for obtaining the PWM timings for a general N-level dodecagonal structure is proposed here for the first time. The algorithm outputs the triangle information and the PWM timing values, which can be set as the compare values of any carrier-based hardware PWM module to obtain SV-PWM-like switching sequences. The proposed method eliminates the need for angle estimation, computation of modulation indices, and iterative search algorithms that are typical of multilevel dodecagonal SV systems. The proposed PWM scheme was implemented on a five-level dodecagonal SV structure. Exhaustive simulation and experimental results for steady-state and transient conditions are presented to validate the proposed method.
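For contrast only, the sketch below is the conventional angle-based dwell-time computation for a single-layer dodecagonal SV structure: identify the 30° sector, then solve the volt-second balance for the two adjacent polygon vertices. The paper's contribution is precisely avoiding the angle estimation and inversion used in this naive baseline, so this is not the proposed algorithm.

```python
# Conventional baseline (not the paper's method): dodecagonal SV dwell times.
import numpy as np

def dodecagon_dwell_times(v_alpha, v_beta, Ts=1.0, radius=1.0):
    theta = np.arctan2(v_beta, v_alpha) % (2 * np.pi)
    k = int(theta // (np.pi / 6))                  # sector 0..11 (30 deg each)
    ang = np.array([k, k + 1]) * np.pi / 6
    V = radius * np.array([np.cos(ang), np.sin(ang)])   # adjacent vertices as columns
    t1, t2 = np.linalg.solve(V, np.array([v_alpha, v_beta])) * Ts  # volt-second balance
    return k, t1, t2, Ts - t1 - t2                 # sector, dwell times, zero time

print(dodecagon_dwell_times(0.5, 0.3))
```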
Abstract:
The sensor scheduling problem can be formulated as a controlled hidden Markov model, and this paper solves the problem when the state, observation, and action spaces are continuous. This general case is important, as it is the natural framework for many applications. The aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel simulation-based method that uses a stochastic gradient algorithm to find optimal actions. © 2007 Elsevier Ltd. All rights reserved.
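The flavour of a simulation-based stochastic gradient scheme can be conveyed on an invented scalar toy: estimate the gradient of the simulated error variance with respect to a policy parameter from two-sided perturbed rollouts (an SPSA-style estimator, my choice) and descend. The paper's actual gradient estimator and HMM setup are not reproduced.

```python
# Hedged sketch: simulation-based stochastic gradient over an action parameter.
import numpy as np

rng = np.random.default_rng(5)

def simulate_cost(theta, n=200):
    """Mean squared filtering error for a toy scalar system where the
    action theta controls the observation noise level."""
    x, est, err2 = 0.0, 0.0, 0.0
    for _ in range(n):
        x = 0.9 * x + rng.standard_normal()
        r = 0.5 + (theta - 1.0) ** 2               # action-dependent obs noise
        y = x + np.sqrt(r) * rng.standard_normal()
        est = 0.9 * est + 0.5 * (y - 0.9 * est)    # fixed-gain filter
        err2 += (x - est) ** 2
    return err2 / n

theta, step, delta = 0.0, 0.05, 0.1
for _ in range(100):                               # SPSA-style descent
    d = rng.choice([-1.0, 1.0])
    g = (simulate_cost(theta + delta * d) - simulate_cost(theta - delta * d)) / (2 * delta * d)
    theta -= step * g
print("learned action parameter:", theta)          # should drift toward 1.0
```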
Abstract:
This paper considers a class of dynamic spatial point processes (PPs) that evolve over time in a Markovian fashion. This Markov-in-time PP is hidden and observed indirectly through another PP via thinning, displacement, and noise. This statistical model is important for multi-object tracking applications, and we present an approximate likelihood-based method for estimating the model parameters. The work is supported by an extensive numerical study.
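The observation model described (thinning, displacement, noise) is easy to simulate, which is the forward half of any likelihood-based estimation. A minimal sketch, with parameter names of my own choosing:

```python
# Hedged sketch of the hidden-PP observation model: thinning + displacement + clutter.
import numpy as np

rng = np.random.default_rng(6)

def observe(hidden_points, p_detect=0.9, disp_sd=0.05, clutter_rate=5, window=1.0):
    kept = hidden_points[rng.random(len(hidden_points)) < p_detect]   # thinning
    displaced = kept + disp_sd * rng.standard_normal(kept.shape)      # displacement
    n_clutter = rng.poisson(clutter_rate)                             # noise points
    clutter = window * rng.random((n_clutter, 2))
    return np.vstack([displaced, clutter])

hidden = rng.random((20, 2))            # hidden spatial PP in the unit square
print(observe(hidden).shape)
```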
Abstract:
Quality of cardiopulmonary resuscitation (CPR) improves through the use of CPR feedback devices. Most feedback devices integrate the acceleration twice to estimate compression depth. However, they use additional sensors or processing techniques to compensate for the large displacement drifts caused by integration. This study introduces an accelerometer-based method that avoids integration by applying spectral techniques to short-duration acceleration intervals. We used a manikin placed on a hard surface, a sternal triaxial accelerometer, and a photoelectric distance sensor (gold standard). Twenty volunteers provided 60 s of continuous compressions to test various rates (80-140 min⁻¹), depths (3-5 cm), and accelerometer misalignment conditions. A total of 320 records with 35312 compressions were analysed. The global root-mean-square errors in rate and depth were below 1.5 min⁻¹ and 2 mm for analysis intervals between 2 and 5 s. For 3 s analysis intervals, the 95% levels of agreement between the method and the gold standard were within −1.64 to 1.67 min⁻¹ and −1.69 to 1.72 mm, respectively. Accurate feedback on chest compression rate and depth is feasible by applying spectral techniques to the acceleration. The method avoids additional techniques to compensate for the integration displacement drift, improving accuracy and simplifying current accelerometer-based devices.
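The spectral idea can be sketched directly: within a short analysis window, the dominant frequency of the acceleration gives the compression rate, and depth follows from the amplitude at that frequency, since for a sinusoidal compression of depth D at frequency f the acceleration amplitude is D(2πf)². The sampling rate and synthetic signal below are my assumptions, not the study's data.

```python
# Hedged sketch: rate and depth from acceleration spectra, no integration.
import numpy as np

fs = 250.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 3.0, 1 / fs)                # 3 s analysis interval
f_true, depth_true = 100 / 60.0, 0.045       # 100 min^-1, 45 mm compressions
acc = depth_true * (2 * np.pi * f_true) ** 2 * np.sin(2 * np.pi * f_true * t)

win = np.hanning(len(acc))
spec = np.fft.rfft(acc * win)
freqs = np.fft.rfftfreq(len(acc), 1 / fs)
band = (freqs > 1.0) & (freqs < 3.0)         # plausible CPR rates: 60-180 min^-1
k = np.argmax(np.abs(spec) * band)
amp = 2 * np.abs(spec[k]) / np.sum(win)      # window-corrected amplitude
depth_est = amp / (2 * np.pi * freqs[k]) ** 2

print("rate  %.1f min^-1" % (60 * freqs[k]))
print("depth %.1f mm" % (1000 * depth_est))
```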
Abstract:
A technique is presented for measuring the exhaust gas recirculation (EGR) and residual gas fraction (RGF) using a fast UEGO-based O2 measurement of the manifold or in-cylinder gases, and of the exhaust gases. The technique has some advantages over the more common CO2-based method. In the case of an RGF measurement, fuel interference must be eliminated and special fuelling arrangements are required. It is shown that a UEGO-based measurement, though sensitive to reactive species in the exhaust (such as H2), reports EGR/RGF rates faithfully as a system. Preliminary tests showed that EGR and RGF measurements using the O2 approach agreed well with CO2-based measurements. © 2011 SAE International.
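The dilution principle behind such measurements can be illustrated in a few lines: the EGR fraction follows from how far the intake O2 concentration is pulled down from ambient toward the exhaust concentration. The numbers are made up, and the paper's UEGO signal processing is not reproduced here.

```python
# Hedged illustration of the O2-dilution relation behind EGR measurement.
def egr_fraction(o2_ambient, o2_intake, o2_exhaust):
    return (o2_ambient - o2_intake) / (o2_ambient - o2_exhaust)

# e.g. 20.9% O2 ambient, 18.0% measured in the manifold, 1.5% in the exhaust
print("EGR = %.1f%%" % (100 * egr_fraction(20.9, 18.0, 1.5)))
```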
Abstract:
The conventional technology for generating ultrashort pulses relies on mode-locking based on soliton-like operation. In this regime, the pulse duration is limited by nonlinear optical effects [1]. One method to mitigate these effects is to alternate segments of normal and anomalous group velocity dispersion (GVD) fiber [1]. This configuration is known as the dispersion-managed soliton design; it decreases the nonlinear optical effects and reduces the pulse duration [1]. © 2011 IEEE.
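The arithmetic behind a dispersion map is simple enough to show: alternating fiber segments are chosen so the net cavity dispersion is small while the local dispersion in each segment stays large. The segment values below are invented for illustration.

```python
# Toy dispersion-map calculation; beta2 values and lengths are invented.
segments = [                  # (beta2 in ps^2/km, length in m)
    (+20.0, 1.2),             # normal-GVD fiber segment
    (-23.0, 1.0),             # anomalous-GVD fiber segment
]
net = sum(b2 * (length / 1000.0) for b2, length in segments)   # ps^2
print("net cavity GVD: %+.4f ps^2" % net)   # small residual dispersion
```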
Abstract:
This paper presents an agenda-based user simulator that has been extended to be trainable on real data, with the aim of more closely modelling the complex rational behaviour exhibited by real users. The trainable part is formed by a set of random decision points that may be encountered during the process of receiving a system act and responding with a user act. A sample-based method is presented for using real user data to estimate the parameters that control these decisions. Evaluation results are given both in terms of the statistics of generated user behaviour and the quality of policies trained with different simulators. Compared to a handcrafted simulator, the trained system provides a much better fit to corpus data, and evaluations suggest that this better fit should result in improved dialogue performance. © 2010 Association for Computational Linguistics.
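One way to picture the trainable random decision points is as per-point categorical distributions whose parameters are estimated from corpus counts and then sampled when generating a user act. The decision-point names and corpus below are illustrative, not the paper's annotation scheme.

```python
# Hedged sketch: estimating and sampling random decision points from data.
import random
from collections import Counter

corpus = [                                   # (decision point, observed choice)
    ("after_system_confirm", "affirm"), ("after_system_confirm", "affirm"),
    ("after_system_confirm", "negate"), ("after_system_request", "inform"),
    ("after_system_request", "inform"), ("after_system_request", "silence"),
]

params = {}                                  # one distribution per decision point
for point in {p for p, _ in corpus}:
    counts = Counter(c for p, c in corpus if p == point)
    total = sum(counts.values())
    params[point] = {c: n / total for c, n in counts.items()}

def sample_decision(point):                  # used while generating a user act
    choices, probs = zip(*params[point].items())
    return random.choices(choices, weights=probs)[0]

print(sample_decision("after_system_confirm"))
```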