89 results for 319.380275
Abstract:
Query suggestion has become an important search-engine feature with the explosive and diverse growth of web content. Different kinds of suggestions, such as queries, images, movies, music, and books, are used every day, and various types of data sources feed them. If the data are modeled as various kinds of graphs, a general method can be built for any kind of suggestion. In this paper, we propose a general method for query suggestion by combining two graphs: (1) a query click graph, which captures the relationship between queries frequently clicked on common URLs, and (2) a query text-similarity graph, which measures the similarity between two queries using the Jaccard similarity. The proposed method provides queries that are both lexically and semantically relevant to users' needs. Simulation results show that the proposed algorithm outperforms the heat-diffusion method by providing a larger number of relevant queries. It can also be used for recommendation tasks such as query, image, and product suggestion.
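As a concrete illustration of the text-similarity side, here is a minimal sketch of the Jaccard similarity between two queries over word tokens. The tokenization, the sample queries, and the edge threshold are illustrative assumptions, not details taken from the paper:

```python
def jaccard(q1: str, q2: str) -> float:
    """Jaccard similarity between the word-token sets of two queries."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Build a toy query text-similarity graph: connect query pairs whose
# similarity exceeds a threshold (queries and threshold are illustrative).
queries = ["cheap flights london", "flights to london", "python tutorial"]
edges = [(i, j, jaccard(queries[i], queries[j]))
         for i in range(len(queries))
         for j in range(i + 1, len(queries))
         if jaccard(queries[i], queries[j]) > 0.2]
```

In a full system these edges would be merged with the click-graph edges before ranking candidate suggestions.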
Abstract:
Executing authenticated computation on outsourced data is currently an area of major interest in cryptology. Large databases are being outsourced to untrusted servers without appreciable verification mechanisms. As an adversarial server could produce erroneous output, clients should not trust the server's response blindly. Primitive set operations such as union, set difference, and intersection can be invoked on outsourced data in different concrete settings and should be verifiable by the client. One such interesting adaptation is authenticating email search results, where the untrusted mail server has to provide a proof along with the search result. Recently, Ohrimenko et al. proposed a scheme for authenticating email search. We suggest significant improvements over their proposal in terms of client computation and communication resources by properly recasting it in a two-party setting. In contrast to Ohrimenko et al., we are able to make the number of bilinear pairing evaluations, the costliest operation in the verification procedure, independent of the result-set cardinality for the union operation. We also provide an analytical comparison of our scheme with their proposal, which is further corroborated through experiments.
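The scheme itself relies on bilinear pairings, which are not reproduced here. As a much simpler illustration of the same general pattern, where the server returns a proof that the client checks against a small trusted digest, here is a Merkle-tree membership proof sketch; the tree layout and all names are illustrative, not the paper's construction:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Client-side trusted digest over the outsourced records."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, idx):
    """Server-side proof: sibling hashes from leaf idx up to the root."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[idx ^ 1], idx % 2))   # (sibling, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, leaf, path):
    """Client-side check: recompute the root from the leaf and the proof."""
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

The client stores only the constant-size root, while the server stores the data and produces logarithmic-size proofs per returned record.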
Abstract:
This paper presents speaker normalization approaches for the audio search task. The conventional state-of-the-art feature set, viz., Mel Frequency Cepstral Coefficients (MFCC), is known to implicitly contain both speaker-specific and linguistic information. This can create problems for a speaker-independent audio search task. In this paper, a universal warping-based approach is used for vocal tract length normalization in audio search. In particular, features such as the scale transform and warped linear prediction are used to compensate for speaker variability in audio matching. The advantage of these features over the conventional feature set is that they apply a universal frequency warping to both of the templates matched during audio search. The performance of Scale Transform Cepstral Coefficients (STCC) and Warped Linear Prediction Cepstral Coefficients (WLPCC) is about 3% higher than that of the state-of-the-art MFCC feature set on the TIMIT database.
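Template matching in audio search is commonly done by dynamic time warping (DTW) over feature sequences. The sketch below shows a generic DTW with Euclidean local cost, assuming the STCC/WLPCC (or MFCC) feature vectors have already been extracted; it illustrates the matching step, not the paper's specific pipeline:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences,
    each a list of feature vectors, with Euclidean local cost."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # D[i][j]: cost of best alignment of seq_a[:i] with seq_b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A query template is then matched against each candidate segment and the smallest DTW distance wins.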
Abstract:
In this article, we survey several kinds of trace formulas that one encounters in the theory of single and multi-variable operators. We give some sketches of the proofs, often based on the principle of finite-dimensional approximations to the objects at hand in the formulas.
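A prototypical instance in the single-variable case, of the kind such surveys cover, is Krein's trace formula, stated here for orientation (the article itself should be consulted for the precise hypotheses):

```latex
% Krein's trace formula: A self-adjoint, V self-adjoint and trace class.
% There exists a spectral shift function \xi \in L^1(\mathbb{R}) such that
\operatorname{Tr}\bigl( f(A+V) - f(A) \bigr)
  = \int_{\mathbb{R}} f'(\lambda)\, \xi(\lambda)\, \mathrm{d}\lambda
% for sufficiently smooth f. The function \xi can be recovered as a limit
% of eigenvalue-counting differences of finite-dimensional truncations,
% in line with the approximation principle mentioned above.
```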
Abstract:
A low molecular weight sulfated chitosan (SP-LMWSC) was isolated from the cuttlebone of Sepia pharaonis. Elemental analysis established the presence of C, H and N. The sulfation of SP-LMWSC was confirmed by the presence of characteristic peaks in the FT-IR and FT-Raman spectra. The thermal properties of SP-LMWSC were studied by thermogravimetric analysis and differential scanning calorimetry. The electrolytic conductivity of SP-LMWSC was measured by cyclic voltammetry, and the molecular weight was determined by MALDI-TOF/MS. The molecular structure and sulfation sites of SP-LMWSC were unambiguously confirmed using H-1, C-13, 2D COSY and 2D HSQC NMR spectroscopy. SP-LMWSC exhibited increased anticoagulant activity in avian blood by delaying coagulation parameters and displayed cytostatic activity by inhibiting the migration of avian leucocytes. SP-LMWSC demonstrated avian antiviral activity by binding to Newcastle disease virus receptors at a low titer value of 1/64. These findings suggest that SP-LMWSC, isolated from an industrial discard, holds immense potential as a carbohydrate-based pharmaceutical in the future. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Eleven Coupled Model Intercomparison Project 3 (CMIP3) global climate models are evaluated for precipitation rate in a case study of the Upper Malaprabha catchment, India. The correlation coefficient, normalised root mean square deviation, and skill score are used as performance indicators for the evaluation in a fuzzy environment and are assumed to have equal impact on the global climate models. The fuzzy Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is used to rank the global climate models. The top three positions are occupied by MIROC3, GFDL2.1, and GISS, with relative closeness values of 0.7867, 0.7070, and 0.7068, respectively. IPSL-CM4 and NCAR-PCM1 occupied the tenth and eleventh positions, with relative closeness values of 0.4959 and 0.4562.
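The crisp core of the ranking step, relative closeness to an ideal solution, can be sketched as follows; the fuzzy extension used in the study is omitted, and the toy decision matrix in the usage test is illustrative:

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger values of criterion j are better."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # ideal and anti-ideal points per criterion
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    anti = [min(V[i][j] for i in range(m)) if benefit[j]
            else max(V[i][j] for i in range(m)) for j in range(n)]
    # Euclidean distances to both points
    d_pos = [sum((V[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
             for i in range(m)]
    d_neg = [sum((V[i][j] - anti[j]) ** 2 for j in range(n)) ** 0.5
             for i in range(m)]
    # relative closeness: 1 = at the ideal, 0 = at the anti-ideal
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]
```

Models are then sorted by descending relative closeness to produce the ranking.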
Abstract:
Computer Assisted Assessment (CAA) has existed for several years now. While some forms of CAA do not require sophisticated text understanding (e.g., multiple-choice questions), there are also student answers that consist of free text and require analysis of the answer text. Research on the latter has to date concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, grammatical correctness, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulations, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain, which we hope will be helpful for researchers.
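A classic lexical baseline for the short free-text answer sub-task, of the kind many surveyed systems build on, compares a student answer against a reference answer. A minimal bag-of-words cosine similarity sketch, purely illustrative and not a method from any specific surveyed paper:

```python
import math
from collections import Counter

def cosine_score(reference: str, answer: str) -> float:
    """Cosine similarity between bag-of-words vectors of a reference
    answer and a student answer (0 = no overlap, 1 = identical bags)."""
    ref = Counter(reference.lower().split())
    ans = Counter(answer.lower().split())
    dot = sum(ref[w] * ans[w] for w in ref)
    norm = (math.sqrt(sum(c * c for c in ref.values()))
            * math.sqrt(sum(c * c for c in ans.values())))
    return dot / norm if norm else 0.0
```

Real systems layer stemming, synonym handling, and semantic similarity on top of such lexical scores.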
Abstract:
This paper proposes a design methodology to stabilize the collective circular motion of a group of N identical agents moving at unit speed around individual circles of different radii and centers. The collective circular motion studied in this paper is characterized by the clockwise rotation of all agents around a common circle of desired radius and fixed center. Our interest is in achieving collective circular motions in which the phases of the agents are arranged in synchronized, balanced, or splay formation. In a synchronized formation, the agents and their centroid move in a common direction, while in a balanced formation, the movement of the agents keeps the centroid at a fixed location. The splay state is a special case of balanced formation in which the phases are separated by multiples of 2 pi/N. We derive the feedback controls and prove the asymptotic stability of the desired collective circular motion using Lyapunov theory and LaSalle's invariance principle.
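The synchronized/balanced dichotomy can be illustrated with an all-to-all phase-potential gradient (a Kuramoto-type toy model, not the paper's feedback law, which also steers the circular motion itself): a positive gain drives the phases toward synchronization, a negative gain toward a balanced arrangement.

```python
import cmath
import math

def phase_order(thetas):
    """|centroid| of the unit phasors: 1 = synchronized, 0 = balanced."""
    return abs(sum(cmath.exp(1j * t) for t in thetas)) / len(thetas)

def step(thetas, K, dt=0.01):
    """One Euler step of the all-to-all phase-potential gradient.
    K > 0 drives synchronization; K < 0 drives balancing."""
    N = len(thetas)
    return [t + dt * (K / N) * sum(math.sin(u - t) for u in thetas)
            for t in thetas]

# illustrative run: three agents, arbitrary initial phases
thetas = [0.1, 2.0, 4.0]
for _ in range(4000):
    thetas = step(thetas, K=1.0)   # synchronizing gain
```

For N = 3 a balanced arrangement reached from generic initial phases is the splay state, with phases 2 pi/3 apart.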
Abstract:
In this paper, soft lunar landing with minimum fuel expenditure is formulated as a nonlinear optimal guidance problem. Pinpoint soft landing with terminal velocity and position constraints is achieved using Model Predictive Static Programming (MPSP). High accuracy of the terminal conditions is ensured because the MPSP formulation inherently poses the final conditions as a set of hard constraints. Its computational efficiency and fast convergence make MPSP preferable as a fixed-final-time onboard optimal guidance algorithm. It has also been observed that the minimum fuel requirement depends strongly on the choice of the final time (a critical point not given due importance in much of the literature). Hence, to select the final time optimally, a neural network is used to learn the mapping between various initial conditions in the domain of interest and the corresponding optimal flight time. To generate the training data set, the optimal final time is computed offline using a gradient-based optimization technique. The effectiveness of the proposed method is demonstrated with rigorous simulation results.
Abstract:
In this text we present the design of a wearable health-monitoring device capable of remotely monitoring the health parameters of neonates for the first few weeks after birth. The device is primarily aimed at continuously tracking skin temperature to indicate the onset of hypothermia in newborns. A medical-grade thermistor is responsible for temperature measurement and is directly interfaced to a microcontroller with an integrated Bluetooth Low Energy radio. An inertial sensor is also present in the device to facilitate breathing-rate measurement, which is discussed briefly. Sensed data is transferred securely over the Bluetooth Low Energy radio to a nearby gateway, which relays the information to a central database for real-time monitoring. Low-power optimizations at both the circuit and software levels ensure prolonged battery life. The device is packaged in a baby-friendly, waterproof housing and is easily sterilizable and reusable.
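Converting a thermistor reading into a temperature typically uses the Beta equation on a voltage-divider measurement. A sketch with illustrative component values (the device's actual thermistor parameters and divider topology are not given in the text, so every constant below is an assumption):

```python
import math

def thermistor_temp_c(adc_raw, adc_max=4095, v_ref=3.3,
                      r_fixed=10_000.0, r0=10_000.0,
                      beta=3435.0, t0=298.15):
    """Convert a raw ADC reading from an NTC thermistor divider to
    degrees Celsius via the Beta equation. All component values
    (10k NTC, Beta = 3435 K, 12-bit ADC) are illustrative."""
    v = adc_raw / adc_max * v_ref
    # thermistor on the low side of the divider:
    # v = v_ref * R_t / (R_fixed + R_t)  =>  R_t = R_fixed * v / (v_ref - v)
    r_t = r_fixed * v / (v_ref - v)
    # Beta equation: 1/T = 1/T0 + (1/Beta) * ln(R_t / R0)
    inv_t = 1.0 / t0 + (1.0 / beta) * math.log(r_t / r0)
    return 1.0 / inv_t - 273.15
```

At R_t = R0 the divider sits at mid-scale, so a half-range ADC reading maps to the 25 degC reference point; for an NTC on the low side, lower readings mean higher temperatures.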
Abstract:
Imaging flow cytometry is an emerging technology that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy. It allows high-throughput imaging of cells with good spatial resolution while they are in flow. This paper proposes a general framework for the processing/classification of cells imaged using an imaging flow cytometer. Each cell is localized by finding an accurate cell contour. Then, features reflecting cell size, circularity, and complexity are extracted for classification using an SVM. Unlike conventional iterative, semi-automatic segmentation algorithms such as active contours, we propose a non-iterative, fully automatic graph-based cell localization. To evaluate the performance of the proposed framework, we have successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured using a custom-fabricated, cost-effective microfluidics-based imaging flow cytometer. The proposed system is a significant step towards a cost-effective cell-analysis platform that would facilitate affordable mass screening camps that examine cellular morphology for disease diagnosis. Lay description: In this article, we propose a novel framework for processing the raw data generated by microfluidics-based imaging flow cytometers. Microfluidics microscopy, or microfluidics-based imaging flow cytometry (mIFC), is a recent microscopy paradigm that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy, allowing cells to be imaged while they are in flow. In comparison to conventional slide-based imaging systems, mIFC is a nascent technology enabling high-throughput imaging of cells and has yet to take the form of a clinical diagnostic tool. The proposed framework processes the raw data generated by mIFC systems.
The framework incorporates several steps, beginning with pre-processing of the raw video frames to enhance the contents of the cell, localizing the cell by a novel, fully automatic, non-iterative graph-based algorithm, extracting different quantitative morphological parameters, and finally classifying the cells. To evaluate the performance of the proposed framework, we have successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured using a cost-effective microfluidics-based imaging flow cytometer. The HL60, K562 and MOLT cell lines were obtained from the ATCC (American Type Culture Collection) and cultured separately in the lab; each culture therefore contains cells from its own category alone and thereby provides the ground truth. Each cell is localized by finding a closed cell contour: a directed, weighted graph is defined from the Canny edge images of the cell such that the closed contour lies along the shortest weighted path surrounding the centroid of the cell, from a starting point on a good curve segment to an immediate endpoint. Once the cell is localized, morphological features reflecting the size, shape, and complexity of the cells are extracted and used to develop a support vector machine based classification system. We could classify the cell lines with good accuracy, and the results were quite consistent across different cross-validation experiments. We hope that imaging flow cytometers equipped with the proposed image-processing framework would enable cost-effective, automated, and reliable disease screening in overloaded facilities that cannot afford to hire skilled personnel in large numbers. Such platforms would potentially facilitate screening camps in low-income countries, thereby transforming current health-care paradigms by enabling rapid, automated diagnosis of diseases like cancer.
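The shortest-weighted-path step at the heart of the localization can be sketched with a standard Dijkstra search. Building the directed, weighted graph from the Canny edge maps is the paper's contribution and is not reproduced here; the toy graph in the test is illustrative:

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest weighted path from src to dst in a directed graph.
    adj: {node: [(neighbour, weight), ...]}. Returns (path, cost)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # walk predecessors back from dst to recover the path
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]
```

In the contour setting, nodes would be edge pixels, weights would penalize weak edges, and the search would run from a point on a good curve segment around the cell centroid and back.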
Abstract:
Since streaming data keeps arriving continuously as an ordered sequence, massive amounts of data are created. A big challenge in handling data streams is the limitation of time and space. Prototype selection on streaming data requires the prototypes to be updated incrementally as new data comes in. We propose an incremental algorithm for prototype selection, which can also be used to handle very large datasets. Results are presented on a number of large datasets, and our method is compared with an existing algorithm for streaming data. Our algorithm saves time, and the prototypes selected give good classification accuracy.
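A minimal flavour of incremental prototype selection is a condensed-nearest-neighbour style rule: keep a streaming sample as a prototype only if the current prototypes would misclassify it. This is a generic sketch under that assumption, not the paper's algorithm:

```python
import math

def nearest(prototypes, x):
    """Closest stored (point, label) prototype to sample x."""
    return min(prototypes, key=lambda p: math.dist(p[0], x))

def update_prototypes(prototypes, x, label):
    """Process one streaming sample in O(|prototypes|) time: retain it
    as a new prototype only if the current prototypes misclassify it."""
    if not prototypes or nearest(prototypes, x)[1] != label:
        prototypes.append((x, label))
    return prototypes
```

Because each sample is touched once and memory holds only the prototypes, the rule respects the time and space limits of the streaming setting.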
Abstract:
In this paper, we present two new stochastic approximation algorithms for the problem of quantile estimation. The algorithms use the characterization of the quantile as the solution of an optimization problem, as provided in [1]. They take the form of a stochastic gradient descent that minimizes this optimization problem. Asymptotic convergence of the algorithms to the true quantile is proven using the ODE method. The theoretical results are also supplemented with empirical evidence. The algorithms are shown to provide significant improvements in terms of memory requirements and accuracy.
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper, we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm, which combines these two ideas.
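To see why exact evaluation is restricted to toy models, the brute-force baseline that AIS, CSL, and RAISE approximate can be written down directly: the free energy gives p(v) proportional to exp(-F(v)), with the partition function summed over all 2^n visible states. A pure-Python sketch (model sizes and the all-zero parameters in the test are illustrative):

```python
import math
from itertools import product

def free_energy(v, W, b, c):
    """Binary RBM free energy:
    F(v) = -b.v - sum_j log(1 + exp(c_j + sum_i v_i * W[i][j]))."""
    total = -sum(bi * vi for bi, vi in zip(b, v))
    for j in range(len(c)):
        pre = c[j] + sum(v[i] * W[i][j] for i in range(len(v)))
        total -= math.log1p(math.exp(pre))
    return total

def exact_avg_log_likelihood(test_set, W, b, c):
    """Average test log-likelihood with the partition function computed
    by brute force over all 2^n visible states. Feasible only for toy
    models, which is why AIS/CSL/RAISE-style estimators are needed."""
    n = len(b)
    log_z = math.log(sum(math.exp(-free_energy(v, W, b, c))
                         for v in product((0, 1), repeat=n)))
    return sum(-free_energy(v, W, b, c) - log_z
               for v in test_set) / len(test_set)
```

The 2^n sum is exactly the term the estimators replace: AIS estimates log Z stochastically, CSL estimates the log-likelihood directly, and RAISE combines both ideas.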