997 results for Query Development


Relevance:

100.00%

Publisher:

Abstract:

This paper examines the effects of information request ambiguity and construct incongruence on end users' ability to develop SQL queries with an interactive relational database query language. In this experiment, ambiguity in information requests adversely affected accuracy and efficiency. Incongruities among the information request, the query syntax, and the data representation adversely affected accuracy, efficiency, and confidence. The results for ambiguity suggest that organizations might elicit better query development if end users were sensitized to the kinds of ambiguities that can arise in their business contexts: end users could translate natural language queries into pseudo-SQL that could be examined for precision before the queries were developed. The results for incongruence suggest that better query development might ensue if semantic distances could be reduced by giving users data representations and database views that maximize construct congruence for the kinds of queries typical in their domains. (C) 2001 Elsevier Science B.V. All rights reserved.
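The disambiguate-before-querying step the authors suggest can be pictured with a small sketch. Everything here (the ambiguity lexicon, the hints, the helper names) is an invented illustration, not the paper's instrument:

```python
# Hypothetical sketch: flag ambiguous terms in an information request
# before it is translated into pseudo-SQL. The lexicon is illustrative.
AMBIGUOUS_TERMS = {
    "recent": "specify an explicit date range",
    "top": "specify a sort key and a row limit",
    "best": "specify the metric that defines 'best'",
}

def flag_ambiguities(request: str) -> dict:
    """Return each ambiguous term found with a clarifying prompt."""
    words = request.lower().split()
    return {t: hint for t, hint in AMBIGUOUS_TERMS.items() if t in words}

def to_pseudo_sql(request: str, table: str, columns: str) -> str:
    """Emit pseudo-SQL only when the request carries no flagged ambiguity."""
    issues = flag_ambiguities(request)
    if issues:
        raise ValueError(f"clarify before querying: {issues}")
    return f"SELECT {columns} FROM {table}"
```

A request such as "show recent orders" would be bounced back with a prompt to pin down the date range, mirroring the paper's suggestion of examining the pseudo-SQL for precision before the real query is developed.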

Relevance:

60.00%

Publisher:

Abstract:

Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, simplifying query development for the recursively structured branching relationship. The use of biological terminology in an interactive query builder helps make the system biologist-friendly. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
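The compiler-style generation of derived attributes can be illustrated with a minimal sketch. The schema, the `depth` attribute, and the sample modules are assumptions for illustration, not the paper's actual system:

```python
import sqlite3

# Store branching plant modules in a relational table and precompute a
# 'depth' attribute (like a compiler-synthesized attribute) so queries over
# the recursive branching relationship need no recursion at query time.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE module (
    id INTEGER PRIMARY KEY, parent_id INTEGER, symbol TEXT, depth INTEGER)""")

def insert_module(parent_id, symbol):
    # depth is synthesized on insertion: parent depth + 1 (0 at the root)
    if parent_id is None:
        depth = 0
    else:
        depth = conn.execute("SELECT depth FROM module WHERE id=?",
                             (parent_id,)).fetchone()[0] + 1
    cur = conn.execute(
        "INSERT INTO module (parent_id, symbol, depth) VALUES (?,?,?)",
        (parent_id, symbol, depth))
    return cur.lastrowid

root = insert_module(None, "A")   # main axis
b1 = insert_module(root, "B")     # first-order branch
b2 = insert_module(b1, "B")       # second-order branch

# Thanks to the precomputed attribute, "all second-order branches" is flat SQL:
second_order = conn.execute("SELECT symbol FROM module WHERE depth=2").fetchall()
```

The design choice mirrors the paper's point: materialising a derived field at load time trades a little storage for much simpler queries over a recursive structure.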

Relevance:

40.00%

Publisher:

Abstract:

During the last semester of the Master's Degree in Artificial Intelligence, I carried out my internship working for TXT e-Solution on the ADMITTED project. This paper describes the work done in those months. The thesis is divided into two parts, representing the two tasks I was assigned during the course of my experience. The first part introduces the project and the work done on the admittedly library: maintaining the code base and writing the test suites. This work is closer to the software engineering role, involving developing features, fixing bugs, and testing. The second part describes the experiments done on the anomaly detection task using a deep learning technique called an autoencoder; this task is, on the other hand, closer to the data science role. The two tasks were not done simultaneously but were dealt with one after the other, which is why I preferred to divide them into two separate parts of this paper.
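The autoencoder detection rule used in the second part can be sketched as follows: a sample is anomalous when its reconstruction error exceeds a threshold fitted on normal data. As a dependency-light stand-in for a trained autoencoder, projection onto the first principal component of the "normal" data plays the encoder/decoder role; the synthetic data and the 99th-percentile threshold are illustrative assumptions, not the thesis's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "normal" data living near a 1-D manifold in 2-D, plus small noise.
normal = rng.normal(0, 1, size=(200, 1)) @ np.array([[1.0, 2.0]])
normal += rng.normal(0, 0.05, size=(200, 2))

# Stand-in "autoencoder": encode/decode = project onto the leading
# principal direction of the normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean)
pc = vt[0]

def reconstruct(x):
    return mean + ((x - mean) @ pc)[:, None] * pc

def reconstruction_error(x):
    return np.linalg.norm(x - reconstruct(x), axis=1)

# Threshold fitted on normal data only, as in autoencoder anomaly detection.
threshold = np.percentile(reconstruction_error(normal), 99)

def is_anomaly(x):
    return reconstruction_error(x) > threshold

anomalous_point = np.array([[3.0, -3.0]])  # far off the learned manifold
normal_point = np.array([[1.0, 2.0]])      # on it
```

A real autoencoder replaces the linear projection with learned nonlinear encode/decode networks, but the detection logic, reconstruct, measure error, compare to a threshold, is the same.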

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

Development of a system capable of processing natural language queries entered by the user via the keyboard. The system can answer queries in Spanish related to an application domain represented by a relational database.

Relevance:

30.00%

Publisher:

Abstract:

The goal of this work was to develop a query processing system using software agents. The Open Agent Architecture framework is used for system development. The system supports queries in both Hindi and Malayalam, two prominent regional languages of India. Natural language processing techniques are used to extract meaning from the plain query, and information from the database is returned to the user in his or her native language. The system architecture is designed in a structured way so that it can be adapted to other regional languages of India. The system can be used effectively in application areas such as e-governance, agriculture, rural health, education, national resource planning, disaster management, and information kiosks, where people from all walks of life are involved.

Relevance:

30.00%

Publisher:

Abstract:

This is a named-entity-based question answering system for the Malayalam language. Although a vast amount of information is available today in digital form, no effective information access mechanism exists to provide humans with convenient access to it. Information Retrieval and Question Answering systems are the two mechanisms now available for information access. Information retrieval systems typically return a long list of documents in response to a user's query, which the user must skim to determine whether they contain an answer. A Question Answering system, by contrast, allows the user to state an information need as a natural language question and returns the most appropriate answer as a word, a sentence, or a paragraph. This system is based on named entity tagging and question classification. Document tagging extracts useful information from the documents that will be used in finding the answer to the question. Question classification extracts useful information from the question to determine its type and the way in which it is to be answered. Various machine learning methods are used to tag the documents, and a rule-based approach is used for question classification. Malayalam belongs to the Dravidian family of languages and is one of the four major languages of this family. It is one of the 22 Scheduled Languages of India, with official language status in the state of Kerala, and is spoken by 40 million people. Malayalam is a morphologically rich, agglutinative language with relatively free word order. It also has a productive morphology that allows the creation of complex words, which are often highly ambiguous. Document tagging tools such as a part-of-speech tagger, phrase chunker, named entity tagger, and compound word splitter were developed as part of this research work; no such tools were previously available for the Malayalam language.
Finite state transducers, high-order conditional random fields, artificial immune system principles, and support vector machines are the techniques used in the design of these document preprocessing tools. This research work describes how named entities are used to represent the documents. Single-sentence questions are used to test the system; the overall precision and recall obtained are 88.5% and 85.9% respectively. This work can be extended in several directions: the coverage of non-factoid questions can be increased, and the system can be extended to open-domain applications. Reference resolution and word sense disambiguation techniques are suggested as future enhancements.

Relevance:

30.00%

Publisher:

Abstract:

The Wnt family of secreted signalling molecules controls a wide range of developmental processes in all metazoans. In this investigation we concentrate on the role that members of this family play during the development of (1) the somites and (2) the neural crest. (3) We also isolate a novel component of the Wnt signalling pathway called Naked cuticle and investigate the role that this protein may play in both of the previously mentioned developmental processes. (1) In higher vertebrates the paraxial mesoderm undergoes a mesenchymal-to-epithelial transformation to form segmentally organised structures called somites. Experiments have shown that signals originating from the ectoderm overlying the somites or from midline structures are required for the formation of the somites, but their identity has yet to be determined. Wnt6 is a good candidate as a somite epithelialisation factor from the ectoderm since it is expressed in this tissue. In this study we show that injection of Wnt6-producing cells beneath the ectoderm at the level of the segmental plate or lateral to the segmental plate leads to the formation of numerous small epithelial somites. We show that Wnts are indeed responsible for the epithelialisation of somites by applying Wnt antagonists which result in the segmental plate being unable to form somites. These results show that Wnt6, the only member of this family to be localised to the chick paraxial ectoderm, is able to regulate the development of epithelial somites and that cellular organisation is pivotal in the execution of the differentiation programmes. (2) The neural crest is a population of multipotent progenitor cells that arise from the neural ectoderm in all vertebrate embryos and form a multitude of derivatives including the peripheral sensory neurons, the enteric nervous system, Schwann cells, pigment cells and parts of the craniofacial skeleton. 
The induction of the neural crest relies on an ectodermally derived signal, but the identity of the molecule performing this role in amniotes is not known. Here we show that Wnt6, a protein expressed in the ectoderm, induces neural crest production. (3) The intracellular response to Wnt signalling depends on the choice of signalling cascade activated in the responding cell. Cells can activate either the canonical pathway, which modulates gene expression to control cellular differentiation and proliferation, or the non-canonical pathway, which controls cell polarity and movement (Pandur et al. 2002b). Recent work has identified the protein Naked cuticle as an intracellular switch promoting the non-canonical pathway at the expense of the canonical pathway. We have cloned chick Naked cuticle-1 (cNkd1) and demonstrate that it is expressed in a dynamic manner during early embryogenesis. We show that it is expressed in the somites, in particular in regions where cells are undergoing movement. Lastly, our study shows that the expression of cNkd1 is regulated by Wnt expression originating from the neural tube. This study provides evidence that non-canonical Wnt signalling plays a part in somite development.

Relevance:

30.00%

Publisher:

Abstract:

Establishing the mechanisms by which microbes interact with their environment, including eukaryotic hosts, is a major challenge that is essential for the economic utilisation of microbes and their products. Techniques for determining global gene expression profiles of microbes, such as microarray analyses, are often hampered by methodological constraints, particularly the recovery of bacterial transcripts (RNA) from complex mixtures and the rapid degradation of RNA. A pioneering technology that avoids this problem is In Vivo Expression Technology (IVET). IVET is a 'promoter-trapping' methodology that can be used to capture nearly all bacterial promoters (genes) upregulated during a microbe-environment interaction. IVET is especially useful because there is virtually no limit to the type of environment used (examples to date include soil, an oomycete, and a host plant or animal) to select for active microbial promoters. Furthermore, IVET provides a powerful method to identify genes that are often overlooked during genomic annotation, and has proven to be a flexible technology that can provide even more information than gene expression profiles alone. A derivative of IVET, termed resolvase-IVET (RIVET), can be used to provide spatio-temporal information about environment-specific gene expression. More recently, niche-specific genes captured during an IVET screen have been exploited to identify the regulatory mechanisms controlling their expression. Overall, IVET and its various spin-offs have proven to be a valuable and robust set of tools for analysing microbial gene expression in complex environments and providing new targets for biotechnological development.

Relevance:

30.00%

Publisher:

Abstract:

The development of new technologies that use peer-to-peer networks grows every day, with the aim of meeting the need to share information, resources, and database services around the world. Among them are peer-to-peer databases, which take advantage of peer-to-peer networks to manage distributed knowledge bases, allowing the sharing of information that is semantically related but syntactically heterogeneous. However, given the structural characteristics of these networks, it is a challenge to ensure efficient search for information without compromising the autonomy of each node or the flexibility of the network. On the other hand, some studies propose the use of ontology semantics to assign a standardized categorization to information. The main original contribution of this work is to approach this problem with a proposal for query optimization supported by the Ant Colony algorithm and classification through ontologies. The results show that this strategy provides semantic support for searches in peer-to-peer databases, aiming to expand the results without compromising network performance. © 2011 IEEE.
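The ant-colony idea applied to query routing can be sketched roughly as follows: a pheromone trail per (concept, neighbour) pair biases which peer a query visits next, successful answers deposit pheromone, and all trails slowly evaporate. The constants, peer names, and concept labels are invented for illustration and are not the paper's actual parameters:

```python
import random

random.seed(42)
pheromone = {}      # (concept, neighbour) -> trail strength
EVAPORATION = 0.9   # illustrative decay factor
DEPOSIT = 1.0       # illustrative reward for a successful answer

def choose_neighbour(concept, neighbours):
    """Pick the next peer with probability proportional to pheromone."""
    weights = [pheromone.get((concept, n), 1.0) for n in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

def reinforce(concept, neighbour):
    """Evaporate all trails, then reward the peer that answered."""
    for key in list(pheromone):
        pheromone[key] *= EVAPORATION
    key = (concept, neighbour)
    pheromone[key] = pheromone.get(key, 1.0) + DEPOSIT

# A peer that keeps answering queries about one concept accumulates pheromone,
# so future queries on that concept are routed to it far more often:
for _ in range(10):
    reinforce("genetics", "peerB")

counts = {"peerA": 0, "peerB": 0}
for _ in range(1000):
    counts[choose_neighbour("genetics", ["peerA", "peerB"])] += 1
```

In the paper's setting the concepts would come from the ontology-based classification, which is what lets semantically related but syntactically different queries reinforce the same trails.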

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: To investigate to what extent bone mass accrual is determined by physical activity and changes in lean, fat, and total body mass during growth. METHODS: Twenty-six physically active boys and 16 age-matched control boys were followed up for three years. All subjects were prepubertal at the start of the survey (mean (SEM) age 9.4 (0.3) years). The weekly physical activity of the active boys included compulsory physical education sessions (80-90 minutes a week), three hours a week of extracurricular sports participation, and occasional sports competitions at weekends. The physical activity of the control group was limited to the compulsory physical education curriculum. Bone mineral content (BMC) and areal density (BMD), lean mass, and fat mass were measured by dual-energy x-ray absorptiometry. RESULTS: The effect of sports participation on femoral bone mass accrual was remarkable: femoral BMC and BMD increased twice as much in the active group as in the controls over the three-year period (p < 0.05). The greatest correlation was found between the increment in femoral bone mass and the increment in lean mass (BMC r = 0.67 and BMD r = 0.69, both p < 0.001). Multiple regression analysis revealed enhancement in lean mass as the best predictor of the increment in femoral BMC (R = 0.65) and BMD (R = 0.69). CONCLUSIONS: Long-term sports participation during early adolescence results in greater accrual of bone mass. Enhancement of lean mass seems to be the best predictor of this bone mass accumulation. However, for a given muscle mass, a greater level of physical activity is associated with greater bone mass and density in peripubertal boys.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, locating people and objects and communicating with them in real time has become a common part of everyday life. At present, the state of the art in location systems for indoor environments has no dominant technology, unlike location systems for outdoor environments, where GPS dominates. In fact, each indoor location technology presents a set of features that prevents its use across all application scenarios; however, owing to those same characteristics, it can coexist with similar technologies without any one being dominant or more widely adopted than the other indoor location systems. In this context, the European project SELECT studies the opportunity of collecting all these different features in an innovative system that can be used in a large number of application scenarios. The goal of the project is to realize a wireless system in which a network of fixed readers queries one or more tags attached to the objects to be located. The SELECT consortium is composed of European institutions and companies, including Datalogic S.p.A. and CNIT, which handle the software and firmware development of the baseband receiving section of the readers, whose function is to acquire and process the information received from generic tagged objects. Since the SELECT project has highly innovative content, one of the key stages of the system design is the debug phase. This work aims to study and develop tools and techniques for carrying out the debug phase of the firmware of the baseband receiving section of the readers.

Relevance:

30.00%

Publisher:

Abstract:

The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an information need. The information need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept that has changed over time from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and his or her social context; from this need an interdisciplinary field called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which has been overtaken by the 'berry picking' model: the search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and recent advances in IR.
The search method developed exploits what is relevant for the user by radically changing the way in which an information need is expressed: it is now expressed through the generation of the query together with its own context. The method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing any IR system led to developing the method as a middleware of interaction between the user and the IR system. The system therefore has just two possible actions: rewriting the query and reordering the results. These actions are equivalent to those described for PS, which generally exploits information derived from analysis of user behavior, whereas the proposed approach exploits knowledge provided by the user. The thesis goes further to propose a novel assessment procedure, following the 'Cranfield paradigm', for evaluating this type of IR system. The results achieved are interesting considering both the effectiveness obtained and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
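The two middleware actions named above, rewriting the query from a local knowledge base and reordering the results, can be pictured with a toy sketch. The knowledge base, the expansion rule, and the scoring function are invented placeholders, not the thesis's actual method:

```python
# Toy local knowledge base: this user's context terms for an ambiguous term.
KNOWLEDGE_BASE = {
    "jaguar": ["animal", "wildlife"],
}

def rewrite_query(query: str) -> str:
    """Action 1: expand the query with context terms from the knowledge base."""
    terms = query.split()
    expansion = [c for t in terms for c in KNOWLEDGE_BASE.get(t, [])]
    return " ".join(terms + expansion)

def reorder(results, context_terms):
    """Action 2: move results mentioning more context terms to the front."""
    def score(doc):
        return sum(term in doc for term in context_terms)
    return sorted(results, key=score, reverse=True)
```

Because both actions wrap around an unmodified search engine, the sketch also shows why the approach works as middleware: the underlying IR system never needs to change.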

Relevance:

30.00%

Publisher:

Abstract:

The thesis presents an account of an attempt to utilize expert systems within the domain of production planning and control. The use of expert systems was proposed due to the problematic nature of a particular function within British Steel Strip Products' Operations Department: the function of Order Allocation, which allocates customer orders to a production week and site. Approaches to tackling problems within production planning and control are reviewed, as are the general capabilities of expert systems. The conclusions drawn are that the domain of production planning and control contains both 'soft' and 'hard' problems, and that while expert systems appear to be a useful technology for this domain, this usefulness has by no means yet been demonstrated. It is also argued that the mainstream methodology for developing expert systems is unsuited to the domain. A problem-driven approach is developed and used to tackle the Order Allocation function. The resulting system, UAAMS, contained two expert components. One of these, the scheduling procedure, was not fully implemented due to inadequate software. The second expert component, the product routing procedure, was untroubled by such difficulties, though it was unusable on its own; thus a second system was developed. This system, MICRO-X10, duplicated the function of X10, a complex database query routine used daily by Order Allocation. A prototype version of MICRO-X10 proved too slow to be useful but allowed implementation and maintenance issues to be analysed. In conclusion, the usefulness of the problem-driven approach to expert systems development within production planning and control is demonstrated, but restrictions imposed by current expert system software are highlighted: the limited ability of such software to cope with 'hard' scheduling constructs, and its slow processing speed, can restrict the current usefulness of expert systems within production planning and control.

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing can be defined as a distributed computational model through which resources (hardware, storage, development platforms, and communication) are shared as paid services accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of multiple providers (e.g., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediary layer that stands between consumers of cloud services and the provider; (ii) use of standardized interfaces to access the cloud; or (iii) use of models with open specifications. This paper outlines an approach to evaluate these strategies. The evaluation was carried out, and it was found that despite the advances made by these strategies, none of them actually solves the problem of cloud lock-in. In this light, this work proposes the use of the Semantic Web to avoid cloud lock-in: RDF models are used to specify the features of a cloud, and these models are managed by SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the problem of cloud lock-in; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes using RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions against CQM with respect to response time and effectiveness in resolving cloud lock-in.
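The RDF-plus-SPARQL idea behind CQM can be pictured with a dependency-free sketch: cloud features kept as subject-predicate-object triples, selected by a query pattern. A real deployment would use an RDF store and actual SPARQL; here a plain triple list and a one-pattern matcher stand in, and all resource names are invented:

```python
# Invented triples describing two hypothetical clouds' features.
triples = [
    ("cloudA", "offers", "object-storage"),
    ("cloudA", "region", "eu-west"),
    ("cloudB", "offers", "object-storage"),
    ("cloudB", "region", "us-east"),
]

def match(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None plays the role of a
    SPARQL variable."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Analogue of: SELECT ?cloud WHERE { ?cloud :offers :object-storage }
providers = [s for s, _, _ in match(predicate="offers", obj="object-storage")]
```

The point of the design is that the application queries the abstract feature model rather than any provider's API, which is what breaks the application-to-platform dependency the paper calls cloud lock-in.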