15 results for Charged System Search
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
CODEX SEARCH is an information retrieval engine specialized in immigration law that is based on linguistic tools and knowledge. An information retrieval (IR) engine, or system, is software capable of locating information in large document collections (a non-trivial setting) in electronic format. A preliminary study showed that immigration law is a discursive domain in which it is difficult to express an information need in terms of the formal query expected by current retrieval systems. Therefore, developing an efficient IR system for this domain requires more than a traditional IR model, i.e., comparing the terms of the query with those of the answer, essentially because they do not express implications and because there need not be a one-to-one relation between them. The proposed linguistic solution is to incorporate the specialist's knowledge by integrating a case library into the system. Cases are examples of procedures applied by experts to solve problems that have occurred in practice and that have ended in success or failure. The results obtained in this first phase are very encouraging, but further research in this field is needed to improve the performance of the prototype, which can be accessed at http://161.116.36.139/~codex/.
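The abstract does not give implementation details; purely as an illustration of the kind of case-library lookup described above, here is a minimal sketch assuming cases are stored as plain text and retrieved by simple lexical overlap with the user's query (the case texts, field names, and Jaccard similarity measure are hypothetical, not the actual CODEX SEARCH implementation):

```python
# Hypothetical sketch of case-library retrieval by lexical overlap.
# Everything here is an illustrative assumption, not the CODEX SEARCH system.

def tokenize(text: str) -> set[str]:
    """Lowercase and split a text into a set of word tokens."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(query: str, case_library: list[dict], top_k: int = 3) -> list[dict]:
    """Rank stored cases (expert problem/outcome examples) against a query."""
    q = tokenize(query)
    ranked = sorted(case_library,
                    key=lambda case: jaccard(q, tokenize(case["problem"])),
                    reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    library = [
        {"problem": "renewal of residence permit after expiry", "outcome": "success"},
        {"problem": "appeal against denied work authorization", "outcome": "failure"},
    ]
    for case in retrieve("how to renew an expired residence permit", library):
        print(case["problem"], "->", case["outcome"])
```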
Abstract:
One of the unresolved questions of modern physics is the nature of Dark Matter. Strong experimental evidence suggests that the contribution of this elusive component to the energy budget of the Universe is quite significant, without, however, providing conclusive information about its nature. The most plausible scenario is that of weakly interacting massive particles (WIMPs), which includes a large class of non-baryonic Dark Matter candidates with a mass typically between a few tens of GeV and a few TeV, and a cross section of the order of that of the weak interactions. The search for Dark Matter particles with very-high-energy gamma-ray Cherenkov telescopes is based on the model that WIMPs can self-annihilate, leading to the production of detectable species, such as photons. These photons are very energetic and, since they are not deflected by the Universe's magnetic fields, they can be traced straight back to the source of their creation. The downside of the approach is the great amount of background radiation from conventional astrophysical objects, which usually hides clear signals of Dark Matter particle interactions. That is why a good choice of observational candidates is a crucial factor in the search for Dark Matter. With MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov Telescopes), a two-telescope ground-based system located on La Palma, Canary Islands, we choose objects such as dwarf spheroidal satellite galaxies of the Milky Way and galaxy clusters for our search. Our idea is to increase the chances of WIMP detection by pointing at objects that are relatively close, contain a large amount of Dark Matter, and have as little stellar pollution as possible. At the moment, several observation projects are ongoing and analyses are being performed.
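For reference, and not quoted from the abstract itself, the expected gamma-ray flux from WIMP self-annihilation in such a target is conventionally factorized into a particle-physics term and an astrophysical term (the so-called J-factor):

\[
\frac{d\Phi_\gamma}{dE} \;=\; \frac{\langle \sigma_{\mathrm{ann}} v \rangle}{8\pi\, m_{\chi}^{2}}\,\frac{dN_\gamma}{dE} \;\times\; \int_{\Delta\Omega}\!\int_{\mathrm{l.o.s.}} \rho_{\chi}^{2}(l,\Omega)\,dl\,d\Omega ,
\]

where \(m_{\chi}\) is the WIMP mass, \(\langle \sigma_{\mathrm{ann}} v \rangle\) the velocity-averaged annihilation cross section, \(dN_\gamma/dE\) the photon yield per annihilation, and \(\rho_{\chi}\) the Dark Matter density integrated along the line of sight. Targets such as dwarf spheroidals are favoured precisely because they combine a large J-factor with little astrophysical foreground.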
Abstract:
Autonomous underwater vehicles (AUVs) represent a challenging control problem with complex, noisy dynamics. Nowadays, not only the continuous scientific advances in underwater robotics but also the increasing number and complexity of subsea missions call for the automation of submarine processes. This paper proposes a high-level control system for solving the action selection problem of an autonomous robot. The system is characterized by the use of reinforcement learning direct policy search methods (RLDPS) for learning the internal state/action mapping of some behaviors. We demonstrate its feasibility with simulated experiments using the model of our underwater robot URIS in a target-following task.
Abstract:
This paper proposes a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot. Although the dominant approach when using RL has been to apply value-function-based algorithms, the system detailed here is characterized by the use of direct policy search methods. Rather than approximating a value function, these methodologies approximate a policy using an independent function approximator with its own parameters, trying to maximize the future expected reward. The policy-based algorithm presented in this paper is used for learning the internal state/action mapping of a behavior. In this preliminary work, we demonstrate its feasibility with simulated experiments using the underwater robot GARBI in a target-reaching task.
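Neither of the two preceding abstracts specifies the exact algorithm; as an illustration of the direct policy search family they refer to, here is a minimal REINFORCE-style sketch for a one-dimensional target-reaching surrogate (the linear-Gaussian policy, state features, toy dynamics, and hyperparameters are all hypothetical placeholders, not the URIS/GARBI controllers):

```python
# Minimal REINFORCE-style direct policy search sketch (illustrative only; not the
# algorithm or robot model from the papers). A linear-Gaussian policy maps state
# features to a continuous action, and its parameters follow the gradient of the
# expected return estimated from sampled episodes.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)      # policy parameters (linear mean over the state features)
SIGMA = 0.5              # fixed exploration noise of the Gaussian policy
ALPHA = 0.005            # learning rate

def features(pos: float) -> np.ndarray:
    """State features for a 1-D target-reaching surrogate task."""
    return np.array([1.0, pos])

def toy_env_step(pos: float, action: float) -> tuple[float, float]:
    """Hypothetical dynamics: the robot moves by the (bounded) action; the reward
    is the negative distance to a target at the origin."""
    new_pos = pos + float(np.clip(action, -1.0, 1.0))
    return new_pos, -abs(new_pos)

def run_episode(horizon: int = 30) -> list[tuple[np.ndarray, float, float]]:
    """Sample one episode; return (state, action, reward) triples."""
    pos, trajectory = 3.0, []
    for _ in range(horizon):
        s = features(pos)
        a = rng.normal(theta @ s, SIGMA)          # sample from the Gaussian policy
        pos, r = toy_env_step(pos, a)
        trajectory.append((s, a, r))
    return trajectory

def reinforce_update(trajectory) -> None:
    """theta <- theta + alpha * grad log pi(a_t|s_t) * G_t, with normalized returns."""
    global theta
    rewards = np.array([r for _, _, r in trajectory])
    returns = np.cumsum(rewards[::-1])[::-1]
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    for (s, a, _), g in zip(trajectory, returns):
        grad_log_pi = (a - theta @ s) / SIGMA**2 * s
        theta = theta + ALPHA * grad_log_pi * g

if __name__ == "__main__":
    for _ in range(20):                           # short smoke run, no convergence claim
        reinforce_update(run_episode())
    print("policy parameters after 20 episodes:", theta)
```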
Abstract:
In the search for new sensor systems and new methods for underwater vehicle positioning based on visual observation, this paper presents a computer vision system based on coded light projection. 3D information is extracted from the underwater scene and is used to test an obstacle avoidance behaviour. In addition, the main ideas for achieving stabilisation of the vehicle in front of an object are presented.
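The abstract does not detail the reconstruction step; purely as an illustration of how coded light projection yields 3D information, the following sketch triangulates depth from the correspondence between the projector column that emitted a stripe code and the camera column where that code is decoded (the focal length, baseline, and decoded correspondences are hypothetical values for a rectified projector-camera pair):

```python
# Illustrative projector-camera triangulation for a coded (structured) light setup.
# Depth follows from the disparity between the projector column that emitted a
# code and the camera column where that code is observed (rectified geometry).
import numpy as np

FOCAL_PX = 800.0      # hypothetical focal length, in pixels
BASELINE_M = 0.12     # hypothetical projector-camera baseline, in metres

def depth_from_codes(projector_cols: np.ndarray, camera_cols: np.ndarray) -> np.ndarray:
    """Depth Z = f * b / disparity for each decoded correspondence."""
    disparity = projector_cols - camera_cols
    return FOCAL_PX * BASELINE_M / disparity

if __name__ == "__main__":
    # Decoded correspondences for three scene points (hypothetical numbers).
    proj = np.array([420.0, 500.0, 610.0])
    cam = np.array([380.0, 430.0, 505.0])
    print(depth_from_codes(proj, cam))   # depths in metres
```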
Abstract:
Background: Single Nucleotide Polymorphisms, among other types of sequence variants, constitute key elements in genetic epidemiology and pharmacogenomics. While sequence data about genetic variation are found in databases such as dbSNP, clues about the functional and phenotypic consequences of the variations are generally found in the biomedical literature. The identification of the relevant documents and the extraction of the information from them are hampered by the large size of literature databases and the lack of a widely accepted standard notation for biomedical entities. Thus, automatic systems for the identification of citations of allelic variants of genes in biomedical texts are required. Results: Our group has previously reported the development of OSIRIS, a system aimed at the retrieval of literature about allelic variants of genes (http://ibi.imim.es/osirisform.html). Here we describe the development of a new version of OSIRIS (OSIRISv1.2, http://ibi.imim.es/OSIRISv1.2.html), which incorporates a new entity recognition module and is built on top of a local mirror of the MEDLINE collection and HgenetInfoDB, a database that collects data on human gene sequence variations. The new entity recognition module is based on a pattern-based search algorithm for the identification of variation terms in the texts and their mapping to dbSNP identifiers. The performance of OSIRISv1.2 was evaluated on a manually annotated corpus, resulting in 99% precision, 82% recall, and an F-score of 0.89. As an example, the application of the system for collecting literature citations for the allelic variants of genes related to the diseases intracranial aneurysm and breast cancer is presented. Conclusion: OSIRISv1.2 can be used to link literature references to dbSNP database entries with high accuracy, and is therefore suitable for collecting current knowledge on gene sequence variations and supporting the functional annotation of variation databases. The application of OSIRISv1.2 in combination with controlled vocabularies like MeSH provides a way to identify associations of biomedical interest, such as those relating SNPs to diseases.
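For reference, the F-score reported above is the harmonic mean of precision (P) and recall (R); with the rounded figures quoted in the abstract it comes out at roughly 0.9, consistent with the reported 0.89 (the exact value depends on the unrounded evaluation counts):

\[
F \;=\; \frac{2PR}{P + R} \;=\; \frac{2 \times 0.99 \times 0.82}{0.99 + 0.82} \;\approx\; 0.90 .
\]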
Abstract:
Public transportation is gaining importance every year, basically due to population growth, environmental policies, and route and street congestion. To enable efficient management of all the resources related to public transportation, several techniques from different areas are being applied, and several projects on Transportation Planning Systems are being developed in different countries. In this work, we present the GIST Planning Transportation Systems, a Portuguese project involving two universities and six public transportation companies. We describe in detail one of the most relevant modules of this project, the crew-scheduling module. The crew-scheduling module is based on the application of metaheuristics, in particular GRASP, tabu search and genetic algorithms, to solve the bus-driver scheduling problem. The metaheuristics have been successfully incorporated into the GIST Planning Transportation Systems and are in use by several companies.
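The abstract names the metaheuristics but not their implementation; as an illustration only, here is a minimal tabu-search skeleton for a toy set-covering view of driver scheduling, where each candidate duty covers some trips and the goal is to cover all trips with few duties (the duty data, cost function, and neighbourhood are hypothetical, not the GIST module):

```python
# Illustrative tabu search for a toy bus-driver scheduling (set covering) instance.
# A solution is a set of duties; a move toggles one duty in or out; recently
# toggled duties stay tabu for a fixed number of iterations.
import random

TRIPS = set(range(6))
# Hypothetical duties: each covers a subset of trips.
DUTIES = [{0, 1}, {1, 2, 3}, {3, 4}, {4, 5}, {0, 5}, {2, 4}]

def cost(solution: set[int]) -> float:
    """Number of duties plus a heavy penalty for every uncovered trip."""
    covered = set().union(*(DUTIES[i] for i in solution)) if solution else set()
    return len(solution) + 10 * len(TRIPS - covered)

def tabu_search(iterations: int = 200, tenure: int = 5) -> set[int]:
    random.seed(0)
    current = set(range(len(DUTIES)))          # start from "all duties selected"
    best, best_cost = set(current), cost(current)
    tabu: dict[int, int] = {}                  # duty index -> last iteration it is tabu
    for it in range(iterations):
        # Evaluate toggle moves; aspiration: allow a tabu move if it beats the best.
        candidates = []
        for i in range(len(DUTIES)):
            neighbour = current ^ {i}          # toggle duty i
            c = cost(neighbour)
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, i, neighbour))
        c, i, current = min(candidates)
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = set(current), c
    return best

if __name__ == "__main__":
    solution = tabu_search()
    print("duties selected:", sorted(solution), "cost:", cost(solution))
```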
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent need to fix some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least there is no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove that the said model is the cause of the performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights were managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, and the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
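The design details are not given in the abstract; as a toy illustration only of the adjacent-week assignment it describes, the following sketch rotates a set of test flights through competing algorithms week by week while a separate control group stays with the incumbent process (the flight numbers, arm names, and rotation rule are made up, not Iberia's actual experiment):

```python
# Toy illustration of adjacent-week assignment of flights to experimental arms.
# Test flights rotate through candidate RM algorithms from week to week, while
# control flights stay with the incumbent (manual + algorithmic) process.
from datetime import date, timedelta

ARMS = ["algorithm_A", "algorithm_B"]          # hypothetical competing RM algorithms
TEST_FLIGHTS = ["IB3166", "IB3402", "IB2604"]  # hypothetical flight numbers
CONTROL_FLIGHTS = ["IB3170", "IB3410"]

def assignment(departure: date) -> dict[str, str]:
    """Map each flight to the arm controlling it for the given departure week."""
    week = departure.isocalendar()[1]
    plan = {flight: ARMS[(week + i) % len(ARMS)]
            for i, flight in enumerate(TEST_FLIGHTS)}
    plan.update({flight: "incumbent" for flight in CONTROL_FLIGHTS})
    return plan

if __name__ == "__main__":
    for offset in range(0, 21, 7):             # three consecutive weeks
        day = date(2010, 3, 1) + timedelta(days=offset)
        print(day, assignment(day))
```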
Abstract:
This paper presents a case study that explores the advantages that can be derived from the use of a design support system during the design of wastewater treatment plants (WWTP). With this objective in mind, a simplified but plausible WWTP design case study has been generated with KBDS, a computer-based support system that maintains a historical record of the design process. The study shows how, by employing such a historical record, it is possible to: (1) rank different design proposals responding to a design problem; (2) study the influence of changing the weight of the arguments used in the selection of the most adequate proposal; (3) take advantage of keywords to assist the designer in the search for specific items within the historical records; (4) automatically evaluate the compliance of alternative design proposals with respect to the design objectives; (5) verify the validity of previous decisions after the modification of the current constraints or specifications; (6) re-use the design records when upgrading an existing WWTP or when designing similar facilities; (7) generate documentation of the decision-making process; and (8) associate a variety of documents as annotations to any component in the design history. The paper also shows one possible future role of design support systems as they outgrow their current reactive role as repositories of historical information and start to proactively support the generation of new knowledge during the design process.
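KBDS's internal scoring is not described in the abstract; purely as an illustrative sketch of points (1) and (2) above, ranking proposals by weighted arguments and then re-weighting an argument to study its influence might look as follows (the proposal names, arguments, and weights are hypothetical):

```python
# Illustrative weighted-argument ranking of design proposals (not the KBDS model).
# Each argument supports (+1) or opposes (-1) a proposal with a given weight;
# changing a weight and re-ranking mimics the sensitivity analysis described above.

ARGUMENTS = {
    # argument: (weight, {proposal: +1 supports / -1 opposes})
    "low energy consumption": (3.0, {"activated sludge": +1, "trickling filter": -1}),
    "small footprint":        (2.0, {"activated sludge": -1, "trickling filter": +1}),
    "effluent quality":       (4.0, {"activated sludge": +1, "trickling filter": +1}),
}

def rank(arguments: dict) -> list[tuple[str, float]]:
    """Return proposals sorted by total weighted support."""
    scores: dict[str, float] = {}
    for weight, stances in arguments.values():
        for proposal, sign in stances.items():
            scores[proposal] = scores.get(proposal, 0.0) + weight * sign
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    print(rank(ARGUMENTS))
    # Re-weight one argument and observe how the ranking changes.
    ARGUMENTS["small footprint"] = (5.0, ARGUMENTS["small footprint"][1])
    print(rank(ARGUMENTS))
```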
Abstract:
We compute the properties of a class of charged black holes in anti-de Sitter space-time, in diverse dimensions. These black holes are solutions of consistent Einstein-Maxwell truncations of gauged supergravities, which are shown to arise from the inclusion of rotation in the transverse space. We uncover rich thermodynamic phase structures for these systems, which display classic critical phenomena, including structures isomorphic to the van der Waals-Maxwell liquid-gas system. In that case, the phases are controlled by the universal cusp and swallowtail shapes familiar from catastrophe theory. All of the thermodynamics is consistent with field theory interpretations via holography, where the dual field theories can sometimes be found on the world volumes of coincident rotating branes.
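For concreteness (a standard example, not quoted from the abstract), the simplest member of this class is the Reissner-Nordström-anti-de Sitter black hole, whose metric function in d spacetime dimensions with AdS radius L is

\[
f(r) \;=\; 1 \;-\; \frac{m}{r^{\,d-3}} \;+\; \frac{q^{2}}{r^{\,2(d-3)}} \;+\; \frac{r^{2}}{L^{2}} ,
\]

with m and q related to the ADM mass and electric charge. The van der Waals-Maxwell-like behaviour mentioned above appears in the fixed-charge ensemble, where the free energy traces out the swallowtail shape characteristic of a first-order phase transition.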
Abstract:
The Lorentz-Dirac equation is not an unavoidable consequence of linear and angular momentum conservation alone for a point charge. It also requires an additional assumption concerning the elementary character of the charge. We here use a less restrictive elementarity assumption for a spinless charge and derive a system of conservation equations that is not properly an equation of motion because, as it contains an extra scalar variable, the future evolution of the charge is not determined. We show that a supplementary constitutive relation can be added so that the motion is determined and free from the troubles that are customary in the Lorentz-Dirac equation, i.e., preacceleration and runaways.
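For reference (a standard form, not taken from this paper), the Lorentz-Dirac equation for a point charge e of mass m, in Gaussian units with metric signature (+,-,-,-), four-velocity \(u^{\mu}\), and dots denoting proper-time derivatives, reads

\[
m\,\dot{u}^{\mu} \;=\; F^{\mu}_{\mathrm{ext}} \;+\; \frac{2e^{2}}{3c^{3}}\left(\ddot{u}^{\mu} \;+\; \frac{\dot{u}^{\nu}\dot{u}_{\nu}}{c^{2}}\,u^{\mu}\right),
\]

whose third-order (Schott) term is what gives rise to the preacceleration and runaway solutions mentioned above.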
Abstract:
The problem of searchability in decentralized complex networks is of great importance in computer science, economy, and sociology. We present a formalism that is able to cope simultaneously with the problem of search and the congestion effects that arise when parallel searches are performed, and we obtain expressions for the average search cost both in the presence and the absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: starlike configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when it is large.
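The abstract's treatment is analytic; purely as an illustrative companion (not the authors' formalism), the following sketch estimates the average cost of a blind local search, in which each node forwards a query to a random neighbour until the target is reached, on a star-like versus a homogeneous topology (the graph sizes and the random-forwarding rule are assumptions):

```python
# Illustrative estimate of average local-search cost (random forwarding) on two
# topologies; a crude numerical companion to the analytic treatment above.
import random
import networkx as nx

def random_walk_cost(G: nx.Graph, source, target, max_steps: int = 10_000) -> int:
    """Number of hops a blind random walk needs to reach the target."""
    node, steps = source, 0
    while node != target and steps < max_steps:
        node = random.choice(list(G.neighbors(node)))
        steps += 1
    return steps

def average_cost(G: nx.Graph, trials: int = 500) -> float:
    nodes = list(G.nodes)
    total = 0
    for _ in range(trials):
        s, t = random.sample(nodes, 2)
        total += random_walk_cost(G, s, t)
    return total / trials

if __name__ == "__main__":
    random.seed(1)
    star = nx.star_graph(19)       # 20 nodes: one hub plus 19 leaves
    ring = nx.cycle_graph(20)      # homogeneous 20-node ring
    print("star :", average_cost(star))
    print("ring :", average_cost(ring))
```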
Abstract:
Semantic Web technology is able to provide the required computational semantics for interoperability of learning resources across different Learning Management Systems (LMS) and Learning Object Repositories (LOR). The EU research project LUISA (Learning Content Management System Using Innovative Semantic Web Services Architecture) addresses the development of a reference semantic architecture for the major challenges in the search, interchange and delivery of learning objects in a service-oriented context. One of the key issues, highlighted in this paper, is Digital Rights Management (DRM) interoperability. A Semantic Web approach to copyright management has been followed, which places a Copyright Ontology as the key component for interoperability among existing DRM systems and other licensing schemes like Creative Commons. Moreover, Semantic Web tools like reasoners, rule engines and semantic queries facilitate the implementation of an interoperable copyright management component in the LUISA architecture.
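As an illustration only of how semantic queries could support the DRM interoperability described above, a minimal rdflib sketch might look like the following; the namespace and property names are made-up stand-ins, not the actual Copyright Ontology or Creative Commons vocabulary:

```python
# Minimal rdflib sketch: expressing a learning object's licence as RDF triples and
# querying it. The ex: vocabulary below is a hypothetical stand-in for the
# Copyright Ontology / Creative Commons terms mentioned in the abstract.
from rdflib import Graph, Namespace, Literal, URIRef

EX = Namespace("http://example.org/copyright#")   # hypothetical vocabulary

g = Graph()
lo = URIRef("http://example.org/learning-object/42")
g.add((lo, EX.licensedUnder, EX.CC_BY_NC))
g.add((EX.CC_BY_NC, EX.permitsCommercialUse, Literal(False)))

# Which learning objects may NOT be used commercially?
query = """
PREFIX ex: <http://example.org/copyright#>
SELECT ?obj WHERE {
  ?obj ex:licensedUnder ?lic .
  ?lic ex:permitsCommercialUse false .
}
"""
for row in g.query(query):
    print(row.obj)
```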
Abstract:
Open educational resources (OER) promise increased access, participation, quality, and relevance, in addition to cost reduction. These seemingly fantastic promises are based on the supposition that educators and learners will discover existing resources, improve them, and share the results, resulting in a virtuous cycle of improvement and re-use. By anecdotal metrics, existing web-scale search is not working for OER. This situation impairs the cycle underlying the promise of OER, endangering long-term growth and sustainability. While the scope of the problem is vast, targeted improvements in the areas of curation, indexing, and data exchange can improve the situation and create opportunities for further scale. I explore the ways in which the current system is inadequate, discuss areas for targeted improvement, and describe a prototype system built to test these ideas. I conclude with suggestions for further exploration and development.