882 results for query rewriting
Abstract:
Microwave modulation has been achieved by using thin-film amorphous-semiconductor switches made of ternary chalcogenides. X-band microwaves were modulated by a threshold switch at frequencies varying from 100 Hz to 1 MHz, with modulation efficiencies comparable to silicon p-i-n diodes. The insertion loss was 0.5 to 0.6 dB and the isolation was 18 dB at a 100 mA operating current. Possible applications of this method are discussed.
Abstract:
For many, particularly in the Anglophone world and Western Europe, it may be obvious that Google has a monopoly over online search and advertising and that this is an undesirable state of affairs, due to Google's ability to mediate information flows online. The baffling question may be why governments and regulators are doing little to nothing about this situation, given the increasingly pivotal importance of the internet and free-flowing communications in our lives. However, the law concerning monopolies, namely antitrust or competition law, works in what may be seen as a less intuitive way by the general public. Monopolies themselves are not illegal. Conduct that is unlawful, i.e. abuses of that market power, is defined by a complex set of rules and revolves principally around economic harm suffered due to anticompetitive behavior. However, the effect of information monopolies over search, such as Google's, is more than just economic, yet competition law does not address this. Furthermore, Google's collection and analysis of user data and its portfolio of related services make it difficult for others to compete. Such a situation may also explain why Google's established search rivals, Bing and Yahoo, have not managed to provide services that are as effective or popular as Google's own (on this issue see also the texts by Dirk Lewandowski and Astrid Mager in this reader). Users, however, are not entirely powerless. Google's business model rests, at least partially, on them, and especially on the data collected about them. If they stop using Google, then Google is nothing.
Abstract:
The departures of operational amplifiers (OAs) from ideal performance and their effect on VCVs in the inverting and noninverting modes are discussed. It is found that for the same ideal gain, the bandwidths of the inverting and noninverting modes differ, the former being smaller. Complete equivalent circuits describing the frequency dependence of the input and output impedances for both modes are given. In particular, the output impedance is shown to be inductive at the frequencies of interest, which is also confirmed by experimental results.
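The bandwidth asymmetry reported here follows, in a first-order model, from the difference in noise gain between the two configurations: a noninverting stage with ideal gain G has noise gain G, while an inverting stage with ideal gain -G has noise gain 1 + G. A minimal sketch under a single-pole op-amp model (the 1 MHz gain-bandwidth product is an assumed illustrative value, not from the paper):

```python
def closed_loop_bandwidth(ideal_gain, gbw=1e6, inverting=False):
    # single-pole op-amp model: f_3dB = GBW / noise gain
    # noninverting gain G -> noise gain G; inverting gain -G -> noise gain 1 + G
    noise_gain = 1 + abs(ideal_gain) if inverting else abs(ideal_gain)
    return gbw / noise_gain

for g in (1, 10, 100):
    f_ni = closed_loop_bandwidth(g)
    f_inv = closed_loop_bandwidth(g, inverting=True)
    print(f"|gain| = {g:3d}: noninverting {f_ni/1e3:7.1f} kHz, inverting {f_inv/1e3:7.1f} kHz")
```

The inverting mode is always the narrower of the two, most noticeably at low gains: at unity gain the noninverting follower keeps the full gain-bandwidth product while the inverting stage gets only half of it.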
Abstract:
A quasi-geometric stability criterion for feedback systems with a linear time-invariant forward block and a periodically time-varying nonlinear gain in the feedback loop is developed.
Abstract:
Antitubercular treatment is directed against actively replicating organisms. There is an urgent need to develop drugs targeting persistent subpopulations of Mycobacterium tuberculosis. The DevR response regulator is believed to play a key role in bacterial dormancy adaptation during hypoxia. We developed a homology-based model of DevR and used it for the rational design of inhibitors. A phenylcoumarin derivative (compound 10), identified by in silico pharmacophore-based screening of 2.5 million compounds employing protocols with some novel features, including a water-based pharmacophore query, was characterized further. Compound 10 inhibited DevR binding to target DNA, down-regulated dormancy gene transcription, and drastically reduced the survival of hypoxic, but not nutrient-starved, dormant bacteria or actively growing organisms. Our findings suggest that compound 10 "locks" DevR in an inactive conformation that is unable to bind cognate DNA and induce the dormancy regulon. These results provide proof-of-concept for DevR as a novel target for developing molecules with sterilizing activity against tubercle bacilli.
Abstract:
We present a low-power gas sensor system on a CMOS platform consisting of a micromachined polysilicon microheater, a temperature controller circuit, a resistance readout circuit and an SnO2 transducer film. The design criteria for the different building blocks of the system are elaborated. The microheaters are optimized for temperature uniformity as well as static and dynamic response. The electrical equivalent model of the microheater is derived by extracting the thermal and mechanical poles through extensive laser Doppler vibrometer measurements. The temperature controller and readout circuit are realized in 130 nm CMOS technology. The temperature controller re-uses the heater as a temperature sensor and controls the duty cycle of the waveform driving the gate of the power MOSFET that supplies the heater current. The readout circuit, with subthreshold operation of the MOSFETs, is based on resistance-to-time-period conversion followed by a frequency-to-digital converter. Subthreshold operation of the MOSFETs, coupled with a sub-ranging technique, achieves ultra-low power consumption over more than five orders of magnitude of dynamic range. The RF-sputtered SnO2 film is optimized for its microstructure to achieve high sensitivity for sensing LPG.
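To make the readout chain concrete, here is a minimal behavioral sketch of resistance-to-time-period conversion followed by period counting against a reference clock. The oscillator constant, capacitor value, and reference frequency are illustrative assumptions, not values from the paper:

```python
def resistance_to_period(r_sense, c=1e-9, k=2.2):
    # relaxation-oscillator behavior: period grows linearly with R * C
    return k * r_sense * c

def period_to_digital(t_period, f_ref=1e6):
    # frequency/period-to-digital conversion: count reference-clock
    # cycles elapsed during one oscillation period
    return int(round(t_period * f_ref))

# sweep the sensor film resistance over five orders of magnitude
for r in (1e3, 1e4, 1e5, 1e6, 1e7):
    t = resistance_to_period(r)
    print(f"R = {r:10.0f} ohm -> T = {t:.3e} s -> code = {period_to_digital(t):8d}")
```

Mapping resistance to a time period rather than a voltage is what makes the wide dynamic range tractable: the counter output scales linearly with R, and a sub-ranging scheme can switch reference frequencies to keep the count within register width.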
Abstract:
We consider the transmission of correlated Gaussian sources over orthogonal Gaussian channels. It is shown that the amplify-and-forward (AF) scheme, which simplifies the design of the encoders and the decoder, performs close to the optimal scheme even at high SNR. It also outperforms a recently proposed scalar-quantizer scheme in both performance and complexity. We also study AF when side information is available at the encoders and decoder.
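As a quick illustration of this setup, the sketch below simulates two correlated unit-variance Gaussian sources, scales each to a per-channel power constraint (the AF "encoding"), sends them over independent AWGN channels, and recovers them with a joint linear MMSE decoder that exploits the source correlation. The correlation, power, and noise values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, P, N0 = 100_000, 0.9, 1.0, 0.1   # samples, correlation, power, noise

# two correlated, unit-variance Gaussian sources
cov = np.array([[1.0, rho], [rho, 1.0]])
S = rng.multivariate_normal([0.0, 0.0], cov, size=n).T        # shape (2, n)

a = np.sqrt(P)                                                # AF gain meeting the power constraint
Y = a * S + np.sqrt(N0) * rng.standard_normal((2, n))         # orthogonal AWGN channels

# joint linear MMSE decoder: S_hat = C_sy @ C_yy^{-1} @ Y
C_sy = a * cov
C_yy = a * a * cov + N0 * np.eye(2)
S_hat = C_sy @ np.linalg.solve(C_yy, Y)

mse = np.mean((S - S_hat) ** 2)
print(f"per-source MSE at {10 * np.log10(P / N0):.1f} dB SNR: {mse:.4f}")
```

Note that the encoders do nothing but scale; all exploitation of the correlation happens in the decoder's cross-covariance terms, which is what makes AF attractive despite its simplicity.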
Abstract:
We propose a quantity called information ambiguity that plays the same role in worst-case information-theoretic analyses as the well-known notion of information entropy plays in the corresponding average-case analyses. We prove various properties of information ambiguity and illustrate its usefulness in performing the worst-case analysis of a variant of the distributed source coding problem.
Abstract:
The delivery of products and services for construction-based businesses is increasingly becoming knowledge-driven and information-intensive. The proliferation of building information modelling (BIM) has increased business opportunities as well as introduced new challenges for the architectural, engineering and construction and facilities management (AEC/FM) industry. As such, the effective use, sharing and exchange of building life cycle information and knowledge management in building design, construction, maintenance and operation assumes a position of paramount importance. This paper identifies a subset of construction management (CM) relevant knowledge for different design conditions of building components through a critical, comprehensive review of synthesized literature and other information gathering and knowledge acquisition techniques. It then explores how such domain knowledge can be formalized as ontologies and, subsequently, a query vocabulary in order to equip BIM users with the capacity to query digital models of a building for the retrieval of useful and relevant domain-specific information. The formalized construction knowledge is validated through interviews with domain experts in relation to four case study projects. Additionally, retrospective analyses of several design conditions are used to demonstrate the soundness (realism), completeness, and appeal of the knowledge base and query-based reasoning approach in relation to the state-of-the-art tools, Solibri Model Checker and Navisworks. The knowledge engineering process and the methods applied in this research for information representation and retrieval could provide useful mechanisms to leverage BIM in support of a number of knowledge intensive CM/FM tasks and functions.
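To give a flavor of how such formalization can work in practice, here is a minimal, hypothetical sketch using Python's rdflib: a tiny fragment of CM knowledge is expressed as RDF triples under an invented namespace and then retrieved with a SPARQL query. The ontology terms (DesignCondition, requiresKnowledge) and the namespace are illustrative assumptions, not the paper's actual vocabulary:

```python
from rdflib import Graph, Literal, Namespace, RDF

CM = Namespace("http://example.org/cm#")   # hypothetical namespace
g = Graph()

# formalize a small piece of CM knowledge: the design condition
# "wall meeting slab" requires two items of construction knowledge
g.add((CM.WallToSlab, RDF.type, CM.DesignCondition))
g.add((CM.WallToSlab, CM.requiresKnowledge, Literal("formwork sequencing")))
g.add((CM.WallToSlab, CM.requiresKnowledge, Literal("rebar congestion check")))

# query vocabulary: retrieve all CM knowledge attached to any design condition
q = """
SELECT ?cond ?k WHERE {
  ?cond a cm:DesignCondition ;
        cm:requiresKnowledge ?k .
}
"""
for row in g.query(q, initNs={"cm": CM}):
    print(row.cond, "->", row.k)
```

In a BIM setting, the design conditions would be extracted from the digital building model itself, so a query of this shape lets a user ask "what construction knowledge applies to this component?" directly against the model.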
Abstract:
We propose a novel, language-neutral approach to searching online handwritten text using the Fréchet distance. Online handwritten data, which is available as a time series (x, y, t), is treated as a parameterized curve in two dimensions, and the problem of searching online handwritten text is posed as one of matching two curves in a two-dimensional Euclidean space. The Fréchet distance is a natural measure for matching curves. The main contribution of this paper is the formulation of a variant of the Fréchet distance that can be used to retrieve words even when only a prefix of the word is given as the query. Extensive experiments on the UNIPEN dataset, consisting of over 16,000 words written by 7 users, show that our method outperforms the state-of-the-art DTW method. Experiments were also conducted on a multilingual dataset, generated on a PDA, with encouraging results. Our approach can be used to implement useful features such as auto-completion of handwriting on PDAs.
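The paper's prefix-matching variant is not reproduced here, but the standard discrete Fréchet distance it builds on has a compact dynamic-programming form (Eiter and Mannila's coupling distance). A minimal sketch:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet (coupling) distance between two 2-D curves,
    given as arrays of shape (n, 2) and (m, 2)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    n, m = d.shape
    ca = np.empty((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):                     # first column: forced path
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):                     # first row: forced path
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

# toy usage: a straight stroke vs. a slightly perturbed copy
P = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
Q = np.array([[0.0, 0.1], [1.0, -0.1], [2.0, 0.1]])
print(discrete_frechet(P, Q))   # 0.1
```

Informally, a prefix variant would relax the requirement that the coupling consume the entire stored curve, so a query stroke can match just the beginning of a word; making that relaxation precise is the paper's contribution.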
Abstract:
Mobile applications are being deployed on a massive scale in various mobile sensor grid database systems. Given the limited resources of mobile devices, how to process the huge number of queries from mobile users against distributed sensor grid databases becomes a critical problem for such mobile systems. While the fundamental semantic cache technique has been investigated for query optimization in sensor grid database systems, the problem remains difficult because more realistic multi-dimensional constraints have not been considered in existing methods. To address this, a new semantic cache scheme is presented in this paper for location-dependent data queries in distributed sensor grid database systems. It considers multi-dimensional constraints or factors in a unified cost model architecture, determines the parameters of the cost model using the concept of Nash equilibrium from game theory, and makes semantic cache decisions from the established cost model. Scenarios involving the three factors of semantics, time, and location are investigated as special cases, improving on existing methods. Experiments are conducted to demonstrate the presented semantic cache scheme for distributed sensor grid database systems.
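The abstract does not spell out the cost model, so the following is only a plausible sketch of a unified multi-factor cache cost: each factor (semantic overlap, staleness, location displacement) is normalized and weighted, with the weights standing in for the Nash-equilibrium-derived parameters. All names, weights, and scales are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    semantic_overlap: float  # fraction of the query answerable from this entry, in [0, 1]
    age_s: float             # seconds since the entry was cached
    distance_m: float        # distance between cached location and query location

def cost(entry, w_sem=0.5, w_time=0.3, w_loc=0.2, t_scale=60.0, d_scale=500.0):
    # unified multi-factor cost (lower is better); the fixed weights here are
    # stand-ins for parameters the paper derives via Nash equilibrium
    miss = 1.0 - entry.semantic_overlap
    staleness = min(entry.age_s / t_scale, 1.0)
    displacement = min(entry.distance_m / d_scale, 1.0)
    return w_sem * miss + w_time * staleness + w_loc * displacement

candidates = [CacheEntry(0.9, 30.0, 100.0), CacheEntry(0.6, 5.0, 20.0)]
best = min(candidates, key=cost)
print(best, cost(best))
```

The single-factor schemes the paper improves on fall out as special cases by zeroing two of the three weights.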
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, and in the few remaining cases, mostly "acceptable" (within an order of magnitude of the optimal) plans, and rarely, a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
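For readers unfamiliar with the baseline that IDP truncates, here is a compact sketch of exhaustive Selinger-style DP over join orders, with a toy cost model (cost = sum of intermediate result cardinalities) and a constant join selectivity. Everything here is illustrative; real optimizers additionally track interesting orders, join predicates, and physical operators:

```python
from itertools import combinations

def dp_join_order(rels, card, sel):
    # best[S] = (est. cardinality, total cost, plan) for joining relation set S
    best = {frozenset([r]): (card[r], 0.0, r) for r in rels}
    for size in range(2, len(rels) + 1):
        for subset in map(frozenset, combinations(rels, size)):
            for k in range(1, size):
                for left in map(frozenset, combinations(sorted(subset), k)):
                    right = subset - left
                    lc, lcost, lplan = best[left]
                    rc, rcost, rplan = best[right]
                    out = lc * rc * sel                 # output cardinality estimate
                    cost = lcost + rcost + out          # toy cost: sum of intermediates
                    if subset not in best or cost < best[subset][1]:
                        best[subset] = (out, cost, f"({lplan} JOIN {rplan})")
    return best[frozenset(rels)]

# toy usage: three relations, uniform join selectivity 0.001
card = {"A": 1000, "B": 100, "C": 10}
print(dp_join_order(list(card), card, 0.001))
```

The triple loop makes the exponential blow-up visible: an entry is materialized for every subset of relations, which is precisely why plain DP stops being feasible past a dozen or so relations and why IDP restarts from a pruned set of partial plans instead.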
Abstract:
A large fraction of an XML document typically consists of text data. The XPath query language allows text search via the equality, contains, and starts-with predicates. Such predicates can be efficiently implemented using a compressed self-index of the document's text nodes. Most queries, however, contain some parts querying the text of the document plus some parts querying the tree structure. It is therefore a challenge to choose an evaluation order for a given query that optimally leverages the execution speeds of the text and tree indexes. Here the SXSI system is introduced. It stores the tree structure of an XML document using a bit array of opening and closing brackets plus a sequence of labels, and stores the text nodes of the document using a global compressed self-index. On top of these indexes sits an XPath query engine based on tree automata. The engine uses fast counting queries on the text index to dynamically determine whether to evaluate top-down or bottom-up with respect to the tree structure. The resulting system has several advantages over existing systems: (1) on pure tree queries (without text search), such as the XPathMark queries, SXSI performs on par with or better than the fastest known systems, MonetDB and Qizx; (2) on queries that use text search, SXSI outperforms existing systems by 1-3 orders of magnitude (depending on the size of the result set); and (3) with respect to memory consumption, SXSI outperforms all other systems for counting-only queries.
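The tree representation mentioned here, a bit array of opening and closing brackets plus a preorder label sequence, can be sketched as follows. Real systems like SXSI answer such navigation queries in constant or near-constant time using rank/select structures; the linear scan below is only for illustration:

```python
def encode(node, bits, labels):
    # preorder: emit '(' (1), record the label, recurse, then emit ')' (0)
    bits.append(1)
    labels.append(node["tag"])
    for child in node.get("children", []):
        encode(child, bits, labels)
    bits.append(0)

def find_close(bits, i):
    # position of the ')' matching the '(' at position i
    # (O(n) scan here; succinct indexes do this in O(1))
    depth = 0
    for j in range(i, len(bits)):
        depth += 1 if bits[j] else -1
        if depth == 0:
            return j
    raise ValueError("unbalanced sequence")

doc = {"tag": "book",
       "children": [{"tag": "title"}, {"tag": "author"}]}
bits, labels = [], []
encode(doc, bits, labels)
print(bits)                 # [1, 1, 0, 1, 0, 0]
print(labels)               # ['book', 'title', 'author']
print(find_close(bits, 0))  # 5: the root's subtree spans bit positions 0..5
```

Two bits per node plus the label sequence is far smaller than a pointer-based DOM, which is what lets the whole structure, together with the compressed text self-index, stay in memory even for large documents.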