977 results for federated search tool
Abstract:
This paper investigates the use of adaptive group testing to find a spectrum hole of a specified bandwidth in a given wideband of interest. We propose a group testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent subbands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios (CRs). The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent subbands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including noncontiguous spectrum hole search. Furthermore, we provide the analytical means to optimize the group tests with respect to the detection thresholds, number of samples, group size, and number of stages to minimize the detection delay under a given error probability constraint. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared with a conventional bin-by-bin energy detection scheme; the latter is, in fact, a special case of the group test when the group size is set to 1 bin. We validate our analytical results via Monte Carlo simulations.
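As a rough illustration of the group-test idea in this abstract, the sketch below runs an energy detector over groups of B adjacent subbands, with the aliased acquisition emulated by summing per-bin signals so that one test covers a whole group. The threshold scaling, sample count, and SNR are placeholder values, not the paper's optimized settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_energy_test(samples, noise_var, thresh_scale=1.3):
    """Energy detector over the aliased group signal: declare the group
    'occupied' when its average energy exceeds a margin above the noise
    floor. The threshold scaling is a placeholder, not an optimized value."""
    return np.mean(np.abs(samples) ** 2) > thresh_scale * noise_var

def find_hole(occupied, B, noise_var=1.0, n=200, snr=5.0):
    """Scan the wideband in groups of B adjacent subbands; deliberate
    aliasing is emulated by summing the per-bin signals so that a single
    test covers the whole group. B = 1 reduces to bin-by-bin detection."""
    K = len(occupied)
    for start in range(K - B + 1):
        sig = np.zeros(n)
        for bin_occ in occupied[start:start + B]:
            if bin_occ:  # primary user present in this subband
                sig += np.sqrt(snr * noise_var) * rng.standard_normal(n)
        noise = np.sqrt(noise_var) * rng.standard_normal(n)
        if not group_energy_test(sig + noise, noise_var):
            return start  # contiguous hole of width B found here
    return None  # no hole of the requested bandwidth

occupancy = [True, False, False, False, True, False, True, True]
print(find_hole(occupancy, B=3))  # expected: 1 (bins 1-3 are vacant)
```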
Abstract:
Simultaneous measurement of the thickness and temperature profile of the lubricant film at the chip-tool interface during machining is studied in this experimental programme. Conventional techniques such as thermography can only provide temperature measurements under controlled laboratory conditions and without the addition of lubricant. The present study builds on the capabilities of luminescent sensors, in addition to direct image-based observations of the chip-tool interface. A suite of experiments conducted using different types of sensors is reported in this paper; especially noteworthy are the concomitant measurements of thickness and temperature of the lubricant.
Abstract:
Aims. In this work we search for the signatures of low-dimensional chaos in the temporal behavior of the Kepler-field blazar W2R 1946+42. Methods. We use a publicly available, ~160 000-point-long and mostly equally spaced light curve of W2R 1946+42. We apply the correlation integral method to both real datasets and phase-randomized surrogates. Results. We are not able to confirm the presence of low-dimensional chaos in the light curve. This result, however, still leads to some important implications for blazar emission mechanisms, which are discussed.
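A minimal sketch of the correlation integral method with a phase-randomized surrogate comparison, assuming a generic 1-D time series in place of the Kepler light curve; the embedding dimension, lag, and radius grid are illustrative choices only.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(x, m, tau, rs):
    """Delay-embed x in m dimensions with lag tau, then compute the
    Grassberger-Procaccia correlation integral C(r) for each radius in rs."""
    N = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + N] for i in range(m)])
    d = pdist(emb)  # all pairwise distances between embedded points
    return np.array([np.mean(d < r) for r in rs])

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2000))  # stand-in for the light curve
rs = np.logspace(0, 2, 20)
C = correlation_integral(x, m=4, tau=5, rs=rs)
C_surr = correlation_integral(phase_randomized_surrogate(x, rng), m=4, tau=5, rs=rs)
# the correlation dimension is the slope of log C(r) vs log r in the scaling region
mask = C > 0
print(np.polyfit(np.log(rs[mask]), np.log(C[mask]), 1)[0])
```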
Abstract:
DNA sequence and structure play a key role in imparting fragility to different regions of the genome. Recent studies have shown that non-B DNA structures play a key role in causing genomic instability, apart from their physiological roles at telomeres and promoters. Structures such as G-quadruplexes, cruciforms, and triplexes have been implicated in making DNA susceptible to breakage, resulting in genomic rearrangements. Hence, techniques that aid in the easy identification of such non-B DNA motifs will prove to be very useful in determining the factors responsible for genomic instability. In this study, we provide evidence for the use of primer extension as a sensitive and specific tool to detect such altered DNA structures. We have used the G-quadruplex motif recently characterized at the BCL2 major breakpoint region as a proof of principle to demonstrate the advantages of the technique. Our results show that pause sites corresponding to the non-B DNA are specific, since they are absent when the G-quadruplex motif is mutated and their positions change in tandem with those of the primers. The efficiency of the primer extension pause sites varied with the concentration of the monovalent cations tested that support G-quadruplex formation. Overall, our results demonstrate that primer extension is a strong in vitro tool to detect non-B DNA structures such as the G-quadruplex on plasmid DNA, and it can be further adapted to identify non-B DNA structures even at the genomic level.
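The paper's technique is an in vitro assay, but a common computational complement is a sequence scan for putative G-quadruplex motifs. The sketch below uses the widely cited four-G-tract regular expression; the pattern parameters and the toy sequence are illustrative and are not taken from the BCL2 breakpoint region studied here.

```python
import re

# Canonical putative G-quadruplex motif: four runs of >=3 guanines
# separated by loops of 1-7 bases (the widely used "quadparser"-style pattern).
G4_PATTERN = re.compile(r"(?:G{3,}[ACGT]{1,7}){3}G{3,}")

def find_g4_motifs(seq):
    """Return (start, end, motif) for each putative G-quadruplex in seq."""
    return [(m.start(), m.end(), m.group()) for m in G4_PATTERN.finditer(seq.upper())]

# toy sequence with an embedded G4-forming tract (hypothetical example)
seq = "ATCGGGAGGGTAGGGATGGGTTAC"
print(find_g4_motifs(seq))
```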
Abstract:
Background: Understanding channel structures that lead to active sites or traverse the molecule is important in the study of molecular functions such as ion, ligand, and small molecule transport. Efficient methods for extracting, storing, and analyzing protein channels are required to support such studies. Further, there is a need for an integrated framework that supports computation of the channels, interactive exploration of their structure, and detailed visual analysis of their properties. Results: We describe a method for molecular channel extraction based on the alpha complex representation. The method computes geometrically feasible channels, stores both the volume occupied by the channel and its centerline in a unified representation, and reports significant channels. The representation also supports efficient computation of channel profiles that help in understanding channel properties. We describe methods for effective visualization of the channels and their profiles. These methods and the visual analysis framework are implemented in a software tool, CHEXVIS. We apply the method to a number of known channel-containing proteins to extract pore features. Results from these experiments on several proteins show that CHEXVIS performance is comparable to, and in some cases better than, existing channel extraction techniques. Using several case studies, we demonstrate how CHEXVIS can be used to study channels, extract their properties, and gain insights into molecular function. Conclusion: CHEXVIS supports the visual exploration of multiple channels together with their geometric and physico-chemical properties, thereby enabling an understanding of the basic biology of transport through protein channels. The CHEXVIS web-server is freely available at http://vgl.serc.iisc.ernet.in/chexvis/. The web-server is supported on all modern browsers with the latest Java plug-in.
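As a toy stand-in for the channel-profile computation described above (which CHEXVIS performs on an alpha complex representation), the sketch below estimates a clearance profile along a given centerline from atom centers and van der Waals radii; the atom data and centerline are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def channel_profile(centerline, atom_centers, atom_radii):
    """Radius profile along a channel centerline: at each centerline point,
    the clearance to the nearest atom surface (distance to an atom center
    minus that atom's van der Waals radius). A simple geometric stand-in
    for the alpha-complex-based profile computed by CHEXVIS."""
    tree = cKDTree(atom_centers)
    # query several nearest atoms, since the closest *surface* may not
    # belong to the closest *center* when radii differ
    dists, idx = tree.query(centerline, k=8)
    clearance = dists - atom_radii[idx]
    return clearance.min(axis=1)

rng = np.random.default_rng(2)
atoms = rng.uniform(-10, 10, size=(500, 3))    # synthetic atom centers
radii = rng.uniform(1.2, 2.0, size=500)        # synthetic vdW radii
line = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(-10, 10, 50)])
profile = channel_profile(line, atoms, radii)
print(profile.min(), profile.argmin())  # bottleneck radius and its position
```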
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial: the workload varies widely, and the large number of system parameters rules out a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving the above problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter), a critical requirement in proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable to adaptive labor staffing in real-life service systems.
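A minimal sketch of one SPSA primal-descent/dual-ascent step of the kind described above, on a toy cost and constraint; the gain schedules are the standard SPSA defaults, and the smooth projection onto the discrete staffing grid is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

def spsa_constrained_step(theta, lam, cost, constraint, k,
                          a0=0.1, c0=0.5, lam_step=0.01):
    """One SPSA primal-descent / dual-ascent step on the Lagrangian
    L(theta) = cost(theta) + lam * constraint(theta). The Lagrangian is
    evaluated only twice per step, regardless of the dimension of theta,
    which is the key computational advantage of SPSA."""
    a_k = a0 / (k + 1) ** 0.602          # standard SPSA gain schedules
    c_k = c0 / (k + 1) ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    L = lambda t: cost(t) + lam * constraint(t)
    g_hat = (L(theta + c_k * delta) - L(theta - c_k * delta)) / (2 * c_k * delta)
    theta = theta - a_k * g_hat                          # primal descent
    lam = max(0.0, lam + lam_step * constraint(theta))   # dual ascent
    return theta, lam

# toy stand-in for the staffing problem: continuous worker levels are tuned,
# which would then be projected onto the discrete staffing grid
cost = lambda t: np.sum((t - 5.0) ** 2)
constraint = lambda t: 8.0 - np.sum(t)   # e.g. SLA proxy: total staff >= 8
theta, lam = np.full(4, 1.0), 0.0
for k in range(500):
    theta, lam = spsa_constrained_step(theta, lam, cost, constraint, k)
print(np.round(theta, 2), round(lam, 3))  # theta approaches 5 per component
```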
Abstract:
This paper describes a university-based system for doctoral students who face problems with themselves, their peers, and their research supervisors. Doctoral students have various challenges to resolve, and these challenges contribute to delays in thesis submission. This tool aims at helping them think through their problem in a pre-counseling stage. The tool uses narratives and hypothetical stories to walk a doctoral student through the options of responses he or she can make given the situation in the narrative. Narratives were developed after a preliminary survey (n=57) of doctoral students. The survey indicated that the problems they experienced were busy supervisors, negative competition from peers, and their own laziness. The narrative scenarios in the tool prompt self-reflection and provide options to choose from, each leading to the next scenario that ensues. The different stages of the stimulus-response cycles are designed based on the Thomas-Kilmann conflict resolution techniques (collaboration and avoidance). Each stimulus-response cycle has a score attached that reflects the student's ability to judge a collaborative approach. At the end of all the stages, a scorecard is generated indicating either a progressive or regressive outcome for thesis submission.
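A minimal sketch of the stimulus-response and scorecard mechanics this abstract describes; the scenario texts, options, and score weights are invented placeholders rather than the tool's actual narratives or Thomas-Kilmann weightings.

```python
# Each scenario offers options mapping to a conflict-resolution style,
# a score contribution, and the next scenario in the cycle.
SCENARIOS = {
    "busy_supervisor": {
        "prompt": "Your supervisor has not replied to your draft in 3 weeks.",
        "options": {
            "request_meeting": ("collaboration", 2, "peer_competition"),
            "wait_silently":   ("avoidance",     0, "peer_competition"),
        },
    },
    "peer_competition": {
        "prompt": "A labmate withholds a protocol you need.",
        "options": {
            "propose_coauthorship": ("collaboration", 2, None),
            "work_around_them":     ("avoidance",     1, None),
        },
    },
}

def run_session(choices, start="busy_supervisor"):
    """Walk the stimulus-response cycles, accumulating a collaboration score;
    a higher total suggests a 'progressive' thesis-submission outcome."""
    score, node = 0, start
    for choice in choices:
        style, points, nxt = SCENARIOS[node]["options"][choice]
        score += points
        if nxt is None:
            break
        node = nxt
    return score

print(run_session(["request_meeting", "propose_coauthorship"]))  # -> 4
```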
Abstract:
The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema, and instead to develop keyword search-based data integration, where the system lazily discovers associations enabling it to join together matches to keywords, and returns ranked results. The user is expected to understand the data domain and provide feedback about answers' quality. The system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that under this model, the user only sees and offers feedback on a few "top" results: this result set must be carefully selected to include answers of high relevance and answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance, by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback on a given result is. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
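A minimal sketch of the selection problem described above: choosing which few results to surface for feedback by trading off predicted relevance against informativeness. Score uncertainty is used here as a simple informativeness proxy in an upper-confidence-style acquisition; the paper's actual estimators are richer.

```python
import numpy as np

def select_for_feedback(scores, uncertainties, k=5, beta=1.0):
    """Pick the k results to show the user, balancing predicted relevance
    against how informative feedback on each result would be (proxied here
    by the predicted uncertainty of the result's score)."""
    acquisition = scores + beta * uncertainties
    return np.argsort(-acquisition)[:k]

rng = np.random.default_rng(4)
scores = rng.uniform(0, 1, 50)     # composed matcher scores per candidate answer
uncert = rng.uniform(0, 0.5, 50)   # predicted uncertainty of each score
top = select_for_feedback(scores, uncert, k=5)
print(top, np.round(scores[top], 2))
```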
Abstract:
Executing authenticated computation on outsourced data is currently an area of major interest in cryptology. Large databases are being outsourced to untrusted servers without appreciable verification mechanisms. As an adversarial server could produce erroneous output, clients should not trust the server's response blindly. Primitive set operations like union, set difference, and intersection can be invoked on outsourced data in different concrete settings and should be verifiable by the client. One such interesting adaptation is to authenticate email search results, where the untrusted mail server has to provide a proof along with the search result. Recently, Ohrimenko et al. proposed a scheme for authenticating email search. We suggest significant improvements over their proposal in terms of client computation and communication resources by properly recasting it in a two-party setting. In contrast to Ohrimenko et al., we are able to make the number of bilinear pairing evaluations, the costliest operation in the verification procedure, independent of the result set cardinality for the union operation. We also provide an analytical comparison of our scheme with their proposal, which is further corroborated through experiments.
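To give a flavor of why pairing counts matter, the sketch below verifies set membership with a textbook bilinear accumulator using py_ecc, where the client performs a constant number of pairings regardless of the set size. This is a generic construction, not Ohrimenko et al.'s scheme or the authors' improvement, and for brevity the toy reuses the trapdoor on the server side, which a real scheme would avoid.

```python
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order as r

def prod_mod(vals):
    out = 1
    for v in vals:
        out = (out * v) % r
    return out

# --- setup (data owner; s is the secret trapdoor) ---
s = 123456789          # toy trapdoor; never revealed in a real deployment
X = [11, 22, 33]       # outsourced set (e.g. ids of matching emails)
acc = multiply(G1, prod_mod(s + x for x in X))   # accumulator g1^{prod(s+x)}
g2_s = multiply(G2, s)                           # published verification key

# --- server: membership witness for element x (toy: uses s directly;
# real schemes derive witnesses from public update information) ---
x = 22
wit = multiply(G1, prod_mod(s + y for y in X if y != x))

# --- client: a constant number of pairings, independent of |X| ---
lhs = pairing(add(g2_s, multiply(G2, x)), wit)   # e(g2^{s+x}, g1^{prod_{y!=x}(s+y)})
rhs = pairing(G2, acc)                           # e(g2, acc)
print(lhs == rhs)  # True iff x is in the accumulated set
```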
Abstract:
This paper presents speaker normalization approaches for the audio search task. The conventional state-of-the-art feature set, viz., Mel Frequency Cepstral Coefficients (MFCC), is known to contain speaker-specific and linguistic information implicitly. This might create problems for a speaker-independent audio search task. In this paper, a universal warping-based approach is used for vocal tract length normalization in audio search. In particular, features such as the scale transform and warped linear prediction are used to compensate for speaker variability in audio matching. The advantage of these features over the conventional feature set is that they apply universal frequency warping to both templates to be matched during audio search. The performance of Scale Transform Cepstral Coefficients (STCC) and Warped Linear Prediction Cepstral Coefficients (WLPCC) is about 3% higher than that of the state-of-the-art MFCC feature set on the TIMIT database.
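A minimal sketch of template matching for audio search using DTW over cepstral features, with librosa's MFCC extractor standing in as the conventional baseline; swapping in STCC or WLPCC extractors (not shown here) would be the normalization the paper proposes. The audio arrays are random stand-ins for real utterances.

```python
import numpy as np
import librosa

def match_cost(query, template, sr=16000, n_mfcc=13):
    """DTW alignment cost between two utterances over MFCC features;
    a lower cost means a better match between query and template."""
    Q = librosa.feature.mfcc(y=query, sr=sr, n_mfcc=n_mfcc)
    T = librosa.feature.mfcc(y=template, sr=sr, n_mfcc=n_mfcc)
    D, wp = librosa.sequence.dtw(X=Q, Y=T, metric="euclidean")
    return D[-1, -1] / len(wp)   # path-length-normalized matching cost

rng = np.random.default_rng(5)
a = rng.standard_normal(16000).astype(np.float32)  # 1 s of stand-in audio
b = rng.standard_normal(16000).astype(np.float32)
print(match_cost(a, b))
```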
Abstract:
River water composition (major ions and the ⁸⁷Sr/⁸⁶Sr ratio) was monitored on a monthly basis over a period of three years in a mountainous river (the Nethravati River) of southwestern India. The total dissolved solids (TDS) concentration is relatively low (46 mg L⁻¹), with silica being the dominant contributor. The basin is characterised by a low dissolved Sr concentration (avg. 150 nmol L⁻¹) with radiogenic ⁸⁷Sr/⁸⁶Sr isotopic ratios (avg. 0.72041 at the outlet). The composition of Sr and ⁸⁷Sr/⁸⁶Sr and their correlation with silicate-derived cations in the river basin reveal that their dominant source is radiogenic silicate rock minerals. Their composition in the stream is controlled by a combination of physical and chemical weathering occurring in the basin. The molar SiO₂/Ca ratio and the ⁸⁷Sr/⁸⁶Sr isotopic ratio show strong seasonal variation in the river water, i.e., a low SiO₂/Ca ratio with radiogenic isotopes during the non-monsoon season and a higher SiO₂/Ca ratio with less radiogenic isotopes during the monsoon season. In contrast, the seasonal variation of the Rb/Sr ratio in the stream water is not significant, suggesting that a change in the mineral phase involved in the weathering reaction is an unlikely explanation for the observed molar SiO₂/Ca and ⁸⁷Sr/⁸⁶Sr variation in the river water. Therefore, the shift in the stream water chemical composition can be attributed to the contribution of groundwater in contact with the bedrock (the weathering front) during the non-monsoon season and to weathering of secondary soil minerals in the regolith layer during the monsoon. The weathering of secondary soil minerals leads to limited silicate cation fluxes and enhanced silica fluxes in the Nethravati river basin.
Abstract:
In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that the low M_A (≲ 350 GeV) and high tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and the flavor data. However, regions with Higgs mixing angle α ≈ 0.1-0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. It has been found that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while the regions with tan β > 20 are excluded by the direct search bounds from the LHC-8 data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, giving special attention to the H → hh, H/A → tt̄, and H/A → τ⁺τ⁻ decay modes.
Abstract:
Forty-six lectin domains, which have homologues among well-established eukaryotic and bacterial lectins of known three-dimensional structure, have been identified through a search of 165 archaeal genomes using a multipronged approach involving domain recognition, sequence search, and analysis of binding sites. Twenty-one of them have the 7-bladed β-propeller lectin fold, while 16 have the β-trefoil fold and 7 the legume lectin fold. The remainder assume the C-type lectin, β-prism I, and tachylectin folds. Acceptable models of almost all of them could be generated using the appropriate lectins of known three-dimensional structure as templates, with binding sites at one or more expected locations. The work represents the first comprehensive bioinformatic study of archaeal lectins. The presence of lectins with the same fold in all domains of life indicates their ancient origin, well before the divergence of the three branches. Further work is necessary to identify archaeal lectins which have no homologues among eukaryotic and bacterial species.
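The domain-recognition leg of such a multipronged search is commonly run with HMMER against Pfam profiles. The sketch below assumes hmmscan and a pressed Pfam-A.hmm are available locally; the output filename, E-value cutoff, and hit filtering are placeholders.

```python
import subprocess

def scan_domains(fasta, hmmdb="Pfam-A.hmm", evalue=1e-5):
    """Run HMMER's hmmscan against a profile database and keep significant
    hits; lectin-family candidates would then be filtered by Pfam family."""
    subprocess.run(["hmmscan", "--tblout", "hits.tbl", hmmdb, fasta], check=True)
    hits = []
    with open("hits.tbl") as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            f = line.split()
            # tblout columns: target, target acc, query, query acc, E-value, ...
            if float(f[4]) < evalue:
                hits.append((f[2], f[0], float(f[4])))  # (protein, domain, E-value)
    return hits

# e.g. scan_domains("archaeal_proteome.fasta")
```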
Abstract:
We study an s-channel resonance R as a viable candidate to fit the diboson excess reported by ATLAS. We compute the contribution of the ~2 TeV resonance R to semileptonic and leptonic final states at the 13 TeV LHC. To explain the absence of an excess in the semileptonic channel, we explore the possibility that the particle R decays to pairs of additional light scalars X, X or X, Y. A modified analysis strategy is proposed to study the three-particle final state of the resonance decay and to identify the decay channels of X. Associated production of R with gauge bosons is studied in detail to identify the production mechanism of R. We construct comprehensive categories for vector and scalar beyond-standard-model particles which may play the role of the particles R, X, and Y, and find alternative channels to fix the new couplings and search for these particles.