62 results for distinguishability metrics
Abstract:
Detection of the QRS complex serves as a first step in many automated ECG analysis techniques. Motivated by the strong similarities between the signal structure of an ECG signal and that of the integrated linear prediction residual (ILPR) of voiced speech, an algorithm proposed earlier for epoch detection from the ILPR is extended to the problem of QRS detection. The ECG signal is pre-processed by high-pass filtering to remove baseline wander and by half-wave rectification to reduce ambiguities. Initial estimates of the QRS locations are obtained iteratively using a non-linear temporal feature, the dynamic plosion index, which is suited to detecting transients in a signal. These estimates are further refined to obtain higher temporal accuracy. Unlike most high-performance algorithms, this technique does not make use of any threshold or differencing operation. The proposed algorithm is validated on the MIT-BIH database using the standard metrics, and its performance is found to be comparable to that of state-of-the-art algorithms, despite its threshold independence and simple decision logic.
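As an illustration of the pre-processing stage described above, the following sketch high-pass filters an ECG trace to suppress baseline wander and then half-wave rectifies it; the cutoff frequency, filter order and synthetic test signal are illustrative assumptions, not values from the paper, and the dynamic plosion index step is not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_ecg(ecg, fs, hp_cutoff=0.5, order=4):
    """High-pass filter to remove baseline wander, then half-wave rectify.

    hp_cutoff (Hz) and the filter order are illustrative assumptions,
    not values taken from the paper.
    """
    b, a = butter(order, hp_cutoff / (fs / 2), btype="highpass")
    filtered = filtfilt(b, a, ecg)       # zero-phase high-pass filtering
    return np.maximum(filtered, 0.0)     # half-wave rectification

# Example: a synthetic drifting, noisy trace sampled at 360 Hz (MIT-BIH rate)
fs = 360
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.random.randn(t.size)
clean = preprocess_ecg(ecg, fs)
```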
Abstract:
Land use (LU) and land cover (LC) information at a temporal scale illustrates the physical coverage of the Earth's terrestrial surface according to its use and provides the intricate information needed for effective planning and management activities. LULC changes are local and location specific, but collectively they act as drivers of global environmental change. Understanding and predicting the impact of LULC change processes requires long-term historical reconstructions and projections of land cover change into the future at regional to global scales. The present study aims at quantifying spatio-temporal landscape dynamics along the gradient of varying terrains present in the landscape through a multi-data approach (MDA). The MDA incorporates multi-temporal satellite imagery with demographic data and other relevant data sets. The gradient covers three types of topographic features, namely plains, hilly terrain and a coastal region, to account for the significant role of elevation in land cover change. Seasonality is another aspect to be considered in vegetation-dominated landscapes; these variations are accounted for using multi-seasonal data. Spatial patterns of the various patches are identified and analysed using landscape metrics to understand forest fragmentation. Likely changes in 2020 are predicted through scenario analysis, considering the present growth rates and the proposed developmental projects. This work summarizes recent estimates of changes in cropland, agricultural intensification, deforestation, pasture expansion, and urbanization as the causal factors of LULC change.
Abstract:
In this paper, a fractional-order proportional-integral controller is developed for a miniature air vehicle for rectilinear path following and trajectory tracking. The controller is implemented by constructing a vector field surrounding the path to be followed, which is then used to generate course commands for the miniature air vehicle. The fractional-order proportional-integral controller is simulated using the fundamentals of fractional calculus, and the results for this controller are compared with those obtained for a proportional controller and a proportional-integral controller. In order to analyze the performance of the controllers, four performance metrics, namely (maximum) overshoot, control effort, settling time and the integral of time-weighted absolute error (ITAE), have been selected. A comparison of the nominal as well as the robust performances of these controllers indicates that the fractional-order proportional-integral controller exhibits the best performance in terms of ITAE while showing comparable performance in all other aspects.
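As a reminder of how two of these metrics are computed, the sketch below evaluates ITAE, defined as the integral of t multiplied by the absolute tracking error, and the percentage overshoot from a sampled step response; the response data are hypothetical and not results from the paper.

```python
import numpy as np

def itae(t, y, ref):
    """Integral of time-weighted absolute error: ITAE = integral of t*|ref - y| dt."""
    return np.trapz(t * np.abs(ref - y), t)

def overshoot(y, ref_final):
    """Maximum overshoot as a percentage of the final reference value."""
    return 100.0 * (np.max(y) - ref_final) / ref_final

# Hypothetical closed-loop step response (not taken from the paper)
t = np.linspace(0, 10, 1001)
y = 1 - np.exp(-t) * np.cos(3 * t)
print(itae(t, y, ref=1.0), overshoot(y, ref_final=1.0))
```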
Abstract:
We consider the Riemannian functional $\mathcal{R}_p(g) = \int_M |R(g)|^p \, dv(g)$ defined on the space of Riemannian metrics with unit volume on a closed smooth manifold $M$, where $R(g)$ and $dv(g)$ denote the corresponding Riemannian curvature tensor and volume form, and $p \in (0, \infty)$. First, we prove that Riemannian metrics with non-zero constant sectional curvature are strictly stable for $\mathcal{R}_p$ for certain values of $p$. Then we conclude that they are strict local minimizers of $\mathcal{R}_p$ for those values of $p$. Finally, generalizing this result, we prove that products of space forms of the same type and dimension are strict local minimizers of $\mathcal{R}_p$ for certain values of $p$.
Abstract:
Central to network tomography is the problem of identifiability: the ability to identify internal network characteristics uniquely from end-to-end measurements. This problem is often underconstrained even when internal network characteristics such as link delays are modeled as additive constants. While it is known that the network topology can play a role in determining the extent of identifiability, there is a lack of fundamental understanding of how to quantify it for a given network. In this paper, we consider the problem of identifying additive link metrics in an arbitrary undirected network using measurement nodes and establishing paths/cycles between them. For a given placement of measurement nodes, we define and derive the ``link rank'' of the network: the maximum number of linearly independent cycles/paths that may be established between the measurement nodes. We achieve this in linear time. The link rank helps quantify the exact extent of identifiability in a network. We also develop a quadratic-time algorithm to compute a set of cycles/paths that achieves the maximum rank.
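The paper's linear-time link-rank algorithm is not reproduced here; purely as a minimal illustration of the underlying idea, the sketch below represents each measured path or cycle as an indicator vector over links and computes the rank of the resulting measurement matrix with a generic routine. The toy triangle network is hypothetical.

```python
import numpy as np

# Triangle network with links l0=(u,v), l1=(v,w), l2=(u,w) and measurement
# nodes at u and w.  Each row records which links a measurement traverses.
paths = np.array([
    [0, 0, 1],   # direct path u-w
    [1, 1, 0],   # path u-v-w
    [1, 1, 1],   # cycle u-v-w-u: the sum of the two paths above
])

link_rank = np.linalg.matrix_rank(paths)
print(link_rank)  # 2: with 3 links, individual link metrics are not identifiable
```

The rank tells us how many independent linear equations in the link metrics the measurements provide; when it is smaller than the number of links, as in this toy example, the additive link metrics cannot all be recovered uniquely.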
Abstract:
The performance of prediction models is often based on ``abstract metrics'' that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging ``big data'' domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures in offering deeper insight into the models' behavior and their impact on real applications, benefiting both data mining researchers and practitioners.
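The abstract does not spell out the individual measures, so purely as a generic illustration of scale-independent error measures of the kind such a suite might contain, the sketch below computes CVRMSE and MAPE for an energy-consumption forecast; these are standard measures, not necessarily the ones proposed in the paper, and the readings are hypothetical.

```python
import numpy as np

def cvrmse(observed, predicted):
    """Coefficient of variation of RMSE: RMSE normalised by the mean observation,
    making errors comparable across consumers of very different scales."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / np.mean(observed)

def mape(observed, predicted):
    """Mean absolute percentage error (undefined where observed == 0)."""
    return np.mean(np.abs((observed - predicted) / observed))

obs = np.array([10.0, 12.0, 9.5, 11.0])    # hypothetical daily kWh readings
pred = np.array([9.0, 12.5, 10.0, 10.5])
print(cvrmse(obs, pred), mape(obs, pred))
```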
Abstract:
Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies to maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data. The use of machine learning techniques such as support-vector regression and neural network models is gaining increasing popularity. In this paper we compare the performance of these techniques by applying them to a long-term time series of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. In this study, flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to the KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with an epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using performance metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
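The evaluation metrics named above have standard definitions; a minimal sketch follows, with the caveat that the normalisation used for NRMSE (by the observed range) is an assumption, since the abstract does not specify it.

```python
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nrmse(obs, sim):
    # One common normalisation divides RMSE by the range of observations;
    # the paper's exact normalisation is not stated in the abstract.
    return rmse(obs, sim) / (np.max(obs) - np.min(obs))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean flow."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
```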
Abstract:
Rapid and invasive urbanization has been associated with the depletion of natural resources (vegetation and water resources), which in turn deteriorates the landscape structure and conditions of the local environment. A rapid increase in population due to migration from rural areas is one of the critical issues of urban growth. Urbanisation in India is drastically changing the land cover and often results in sprawl. The sprawl regions often lack basic amenities such as treated water supply, sanitation, etc. This necessitates regular monitoring and understanding of the rate of urban development in order to ensure the sustenance of natural resources. Urban sprawl is the extent of urbanization which leads to the development of urban forms with the destruction of ecology and natural landforms. The rate of change of land use and the extent of urban sprawl can be efficiently visualized and modelled with the help of geo-informatics. Knowledge of the urban area, especially its growth magnitude, shape geometry, and spatial pattern, is essential to understand the growth and characteristics of the urbanization process. Urban pattern, shape and growth can be quantified using spatial metrics. This communication quantifies urbanisation and the associated growth pattern in Delhi. Spatial data of four decades were analysed to understand land cover and land use dynamics. Further, the region was divided into 4 zones and into concentric circles of 1 km incrementing radius to understand and quantify the local spatial changes. Results of the landscape metrics indicate that the urban center was highly aggregated, while the outskirts and the buffer regions were on the verge of aggregating into urban patches. Shannon's entropy index clearly depicted the outgrowth of sprawl areas in different zones of Delhi. (C) 2014 Elsevier Ltd. All rights reserved.
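As a sketch of how Shannon's entropy is commonly applied to quantify sprawl over zones or concentric circles, the snippet below computes the entropy of built-up proportions and compares it with its theoretical maximum, log(n); the zone values are hypothetical and the exact formulation used in the study may differ.

```python
import numpy as np

def shannon_entropy(builtup):
    """Shannon's entropy over n spatial zones.

    builtup: built-up area (or density) in each zone/circle.
    Values close to log(n) indicate dispersed growth (sprawl);
    values close to 0 indicate compact, aggregated growth.
    """
    p = np.asarray(builtup, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # avoid log(0)
    return -np.sum(p * np.log(p))

# Hypothetical built-up areas in four zones of a city
zones = [120.0, 80.0, 15.0, 5.0]
print(shannon_entropy(zones), np.log(len(zones)))  # entropy vs. its upper bound
```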
Abstract:
Structural information over the entire course of binding interactions, based on analyses of energy landscapes, is described; this provides a framework to understand the events involved during biomolecular recognition. The conformational dynamics underlying malectin's exquisite selectivity for diglucosylated N-glycan (Dig-N-glycan), a highly flexible oligosaccharide comprising numerous dihedral torsion angles, are described as an example. For this purpose, a novel approach based on hierarchical sampling is proposed for acquiring metastable molecular conformations that constitute low-energy minima, in order to understand the structural features involved in such recognition. To this end, four variants of principal component analysis were employed recursively in both Cartesian space and dihedral-angle space, characterized by free energy landscapes, to select the most stable conformational substates. Subsequently, the k-means clustering algorithm was implemented for geometric separation of the major native state to acquire a final ensemble of metastable conformers. A comparison of malectin complexes was then performed to characterize their conformational properties. Analyses of stereochemical metrics and other concerted binding events revealed surface complementarity, cooperative and bidentate hydrogen bonds, water-mediated hydrogen bonds, and carbohydrate-aromatic interactions, including CH-pi and stacking interactions, involved in this recognition. Additionally, a striking structural transition from loop to beta-strands in the malectin CRD upon specific binding to Dig-N-glycan is observed. The interplay of the above-mentioned binding events between malectin and Dig-N-glycan supports an extended conformational selection model as the underlying binding mechanism.
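The hierarchical, recursive sampling scheme of the paper is not reproduced here; as a minimal sketch of one PCA-plus-k-means round of the kind described, the snippet below projects conformational snapshots onto a few principal components and clusters them with scikit-learn. The array shapes, component count and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical input: each row is one simulation snapshot, columns are
# flattened Cartesian coordinates (or dihedral-angle features).
rng = np.random.default_rng(0)
conformations = rng.normal(size=(5000, 300))

# Project onto a few principal components capturing most of the variance.
pca = PCA(n_components=5)
projected = pca.fit_transform(conformations)

# Group the projected snapshots into candidate conformational substates.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(projected)
labels = kmeans.labels_            # substate assignment for every snapshot
centers = kmeans.cluster_centers_  # representative points in PC space
```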
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by the bootstrap, which is also used to draw inferences about the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real as well as a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that, in this model, application of the bootstrap results in a significant improvement over the simple maximum likelihood estimates.
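The EM-plus-bootstrap machinery for series systems under accelerated stress is beyond an abstract-level sketch; as a simplified single-component illustration of bootstrapping a log-normal reliability metric, the snippet below fits the log-mean and log-standard deviation by maximum likelihood and builds a parametric-bootstrap percentile interval for R(t). This is a toy under stated assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.stats import norm

def lognormal_reliability(t, mu, sigma):
    """R(t) = P(T > t) for a log-normal lifetime with log-mean mu, log-sd sigma."""
    return 1.0 - norm.cdf((np.log(t) - mu) / sigma)

def bootstrap_reliability(lifetimes, t, n_boot=2000, seed=0):
    """Point estimate and parametric-bootstrap 95% percentile interval for R(t)."""
    rng = np.random.default_rng(seed)
    logs = np.log(lifetimes)
    mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
    reps = []
    for _ in range(n_boot):
        sample = rng.normal(mu_hat, sigma_hat, size=logs.size)  # resample log-lifetimes
        reps.append(lognormal_reliability(t, sample.mean(), sample.std(ddof=1)))
    return lognormal_reliability(t, mu_hat, sigma_hat), np.percentile(reps, [2.5, 97.5])
```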
Abstract:
Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources like CPU, memory, network interfaces, disk controllers, etc. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differential performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to consider I/O performance metrics of applications, such as acceptable latency, and convert them to an appropriate priority value for disk access based on the current system state. This framework aims to provide differentiated I/O service to various applications and ensures predictable performance for critical applications in multi-tenant Cloud environments. We demonstrate through experimental validation on real-world I/O traces that this framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud storage.
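PriDyn's actual priority-assignment logic is not given in the abstract; the toy sketch below merely illustrates the general idea of converting an application's acceptable latency into a disk-access priority by ranking applications by their latency slack. All application names and numbers are hypothetical.

```python
def assign_priorities(apps):
    """Toy illustration (not PriDyn's actual algorithm): rank I/O applications
    by how close their observed latency is to their acceptable latency, so the
    most deadline-constrained application gets the highest priority.

    apps: dict mapping app name -> (acceptable_latency_ms, observed_latency_ms)
    Returns: dict mapping app name -> integer priority (0 = highest).
    """
    slack = {name: acceptable - observed
             for name, (acceptable, observed) in apps.items()}
    ordered = sorted(slack, key=slack.get)          # smallest slack first
    return {name: rank for rank, name in enumerate(ordered)}

# Hypothetical workloads sharing one local disk
apps = {"db": (10.0, 9.0), "backup": (200.0, 50.0), "web": (20.0, 18.0)}
print(assign_priorities(apps))  # {'db': 0, 'web': 1, 'backup': 2}
```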
Abstract:
We present an analysis of the rate of sign changes in the discrete Fourier spectrum of a sequence. The sign changes of either the real or imaginary parts of the spectrum are considered, and the rate of sign changes is termed the spectral zero-crossing rate (SZCR). We show that the SZCR carries information pertaining to the locations of transients within the temporal observation window. We show duality with temporal zero-crossing rate analysis by expressing the spectrum of a signal as a sum of sinusoids with random phases. This extension leads to spectral-domain iterative filtering approaches to stabilize the spectral zero-crossing rate and to improve upon the location estimates. The localization properties are compared with group-delay-based localization metrics in a stylized signal setting well known in the speech processing literature. We show applications to epoch estimation in voiced speech signals using the SZCR on the integrated linear prediction residue. The performance of the SZCR-based epoch localization technique is competitive with the state-of-the-art epoch estimation techniques that are based on the average pitch period.
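A minimal sketch of the SZCR computation as described above, counting sign changes of the real part of the DFT of an analysis frame; the frame length and the impulse position are illustrative assumptions.

```python
import numpy as np

def szcr(frame):
    """Spectral zero-crossing rate: fraction of sign changes in the real part
    of the DFT of a frame (the imaginary part can be used instead)."""
    spectrum = np.fft.fft(frame)
    signs = np.sign(np.real(spectrum))
    signs = signs[signs != 0]                      # ignore exact zeros
    crossings = np.count_nonzero(np.diff(signs))   # sign changes between bins
    return crossings / max(len(signs) - 1, 1)

# An impulse at sample n0 makes the real part of the spectrum cos(2*pi*k*n0/N),
# which completes n0 cycles across the N bins, so the SZCR grows with the
# transient's distance from the window origin (for n0 up to N/2).
frame = np.zeros(512)
frame[200] = 1.0
print(szcr(frame))
```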
Abstract:
Hippocampal pyramidal neurons exhibit gamma-phase preference in their spikes, selectively route inputs through gamma frequency multiplexing and are considered part of gamma-bound cell assemblies. How do these neurons exhibit gamma-frequency coincidence detection capabilities, a feature that is essential for the expression of these physiological observations, despite their slow membrane time constant? In this conductance-based modelling study, we developed quantitative metrics for the temporal window of integration/coincidence detection based on the spike-triggered average (STA) of the neuronal compartment. We employed these metrics in conjunction with quantitative measures for spike initiation dynamics to assess the emergence and dependence of coincidence detection and STA spectral selectivity on various ion channel combinations. We found that the presence of resonating conductances (hyperpolarization-activated cyclic nucleotide-gated or T-type calcium), either independently or synergistically when expressed together, led to the emergence of spectral selectivity in the spike initiation dynamics and a significant reduction in the coincidence detection window (CDW). The presence of A-type potassium channels, along with resonating conductances, reduced the STA characteristic frequency and broadened the CDW, but persistent sodium channels sharpened the CDW by strengthening the spectral selectivity in the STA. Finally, in a morphologically precise model endowed with experimentally constrained channel gradients, we found that somatodendritic compartments expressed functional maps of strong theta-frequency selectivity in spike initiation dynamics and gamma-range CDW. Our results reveal the heavy expression of resonating and spike-generating conductances as the mechanism underlying the robust emergence of stratified gamma-range coincidence detection in the dendrites of hippocampal and cortical pyramidal neurons.
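The paper's coincidence-detection-window and spectral-selectivity metrics are derived from the compartment's spike-triggered average; the sketch below only shows the standard STA computation itself, averaging the stimulus history preceding each spike, with hypothetical white-noise input and spike times.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_indices, window):
    """Average the `window` stimulus samples preceding each spike.

    stimulus      : 1-D array of the injected (noise) current
    spike_indices : sample indices at which spikes occurred
    window        : number of samples of stimulus history to average
    """
    segments = [stimulus[i - window:i] for i in spike_indices if i >= window]
    return np.mean(segments, axis=0)

# Hypothetical example: white-noise current and random spike times
rng = np.random.default_rng(1)
current = rng.normal(size=100_000)
spikes = np.sort(rng.choice(np.arange(1000, 100_000), size=500, replace=False))
sta = spike_triggered_average(current, spikes, window=300)
```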
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Toward Evaluation of the Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for the seasonal rainfall over 6 years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) radiometer and the precipitation radar (PR), namely, 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified based on performance metrics derived from the contingency table. For the seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques performed to disentangle systematic and random errors verify that the multiplicative error model representing rainfall from the 2A12 algorithm successfully estimated a greater percentage of systematic error than the 2A25 or 2B31 algorithms. The results verify that, although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India testifies that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
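As a sketch of the contingency-table metrics referred to above, the snippet below computes the probability of detection, false alarm ratio and frequency bias from co-located satellite and reference rainfall fields; the rain/no-rain threshold is an assumed value.

```python
import numpy as np

def contingency_metrics(satellite, gauge, threshold=0.1):
    """POD, FAR and frequency bias from a 2x2 rain/no-rain contingency table.

    satellite, gauge : daily rainfall arrays on matching grid points
    threshold        : rain/no-rain threshold in mm/day (an assumed value)
    """
    sat_rain, obs_rain = satellite >= threshold, gauge >= threshold
    hits = np.sum(sat_rain & obs_rain)
    misses = np.sum(~sat_rain & obs_rain)
    false_alarms = np.sum(sat_rain & ~obs_rain)
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return pod, far, bias
```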
Abstract:
This study presents a comprehensive evaluation of five widely used multisatellite precipitation estimates (MPEs) against a 1° x 1° gridded rain gauge data set as ground truth over India. One decade of observations is used to assess the performance of the various MPEs (the Climate Prediction Center (CPC)-South Asia data set, the CPC Morphing Technique (CMORPH), Precipitation Estimation From Remotely Sensed Information Using Artificial Neural Networks, the Tropical Rainfall Measuring Mission's Multisatellite Precipitation Analysis (TMPA-3B42), and the Global Precipitation Climatology Project). All MPEs have high rain detection skills, with a large probability of detection (POD) and small ``missing'' values. However, the detection sensitivity differs from one product (and also one region) to another. While CMORPH has the lowest sensitivity in detecting rain, CPC shows the highest sensitivity and often overdetects rain, as evidenced by a large POD and false alarm ratio and small missing values. All MPEs show higher rain sensitivity over eastern India than over western India. These differential sensitivities are found to alter the biases in rain amount differently. All MPEs show similar spatial patterns of seasonal rain bias and root-mean-square error, but their spatial variability across India is complex and pronounced. The MPEs overestimate the rainfall over the dry regions (northwest and southeast India) and severely underestimate it over mountainous regions (the west coast and northeast India), whereas the bias is relatively small over the core monsoon zone. A higher occurrence of virga rain due to subcloud evaporation and the possible missing of small-scale convective events by gauges over the dry regions are the main reasons for the observed overestimation of rain by the MPEs. The decomposed components of the total bias show that the major part of the overestimation is due to false precipitation. The severe underestimation of rain along the west coast is attributed to the predominant occurrence of shallow rain and the underestimation of moderate to heavy rain by the MPEs. The decomposed components suggest that missed precipitation and hit bias are the leading error sources for the total bias along the west coast. All evaluation metrics are found to be nearly equal in the two contrasting monsoon seasons (southwest and northeast), indicating that the performance of the MPEs does not change with the season, at least over southeast India. Among the various MPEs, the performance of TMPA is found to be better than that of the others, as it reproduced most of the spatial variability exhibited by the reference.
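To make the bias decomposition mentioned above concrete, the sketch below splits the total satellite-minus-gauge bias into hit bias, missed precipitation and false precipitation, following the commonly used decomposition; the threshold and the exact bookkeeping conventions are assumptions, not necessarily those used in the study.

```python
import numpy as np

def decompose_bias(satellite, gauge, threshold=0.1):
    """Split the total bias (satellite minus gauge rainfall) into hit bias,
    missed precipitation and false precipitation; the threshold is assumed."""
    sat_rain, obs_rain = satellite >= threshold, gauge >= threshold
    hit = np.sum((satellite - gauge)[sat_rain & obs_rain])   # both detect rain
    missed = -np.sum(gauge[~sat_rain & obs_rain])            # satellite misses rain
    false = np.sum(satellite[sat_rain & ~obs_rain])          # satellite rains, gauge dry
    return hit, missed, false   # their sum approximates the total bias
```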