918 results for Information interfaces and presentation
Abstract:
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA), based on a theoretical analysis of the effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site, "1001 Questions," is available at http://teach-computers.org/learner.html.) Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information."
Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I analyze commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged nonsensical in the cumulative-analogy case compare favorably with the baseline, no-similarity case, which relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively, and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
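The question-generation step described in the newspaper example can be sketched as a nearest-neighbour vote over shared assertions. This is a minimal illustration of the idea, not Learner's actual implementation (the real system also accumulates negative evidence against posing a question):

```python
from collections import Counter

def neighbours(kb, topic, k=2):
    """Rank other topics by the number of assertions they share with `topic`.

    kb maps each topic to its set of known assertions.
    """
    props = kb.get(topic, set())
    scores = Counter({t: len(props & p) for t, p in kb.items() if t != topic})
    return [t for t, s in scores.most_common(k) if s > 0]

def candidate_questions(kb, topic, k=2):
    """Sum evidence from nearest neighbours for assertions not yet known
    about `topic`; assertions with more votes are asked first."""
    votes = Counter()
    for n in neighbours(kb, topic, k):
        for prop in kb[n] - kb.get(topic, set()):
            votes[prop] += 1
    return votes.most_common()

# The example from the abstract: "book" and "magazine" are the nearest
# neighbours of "newspaper", so "contains information" gets two votes.
kb = {"book": {"contains information", "has pages"},
      "magazine": {"contains information", "has pages"},
      "newspaper": {"has pages"}}
print(candidate_questions(kb, "newspaper"))  # [('contains information', 2)]
```

Because similarity is recomputed from the growing knowledge base, every affirmative answer added to `kb` refines which neighbours are selected next, which is the bootstrapping behaviour noted above.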
Abstract:
When discussing the traditional and new missions of higher education (1996 Report to UNESCO of the International Commission on Education for the 21st Century), Jacques Delors stated that "excessive attraction to the social sciences has broken the equilibrium of graduates available for the workforce, causing graduates and employers to doubt the quality of the knowledge provided by higher education". Likewise, when discussing the progress of science and technology, the 1998 UNESCO World Conference on Higher Education concluded that "another challenge concerns the latest advancements of Science, the sine qua non of sustainable development", and that "with Information Technology, the unavoidable invasion of virtual reality has increased the distance between industrial and developing countries". Recreational Science has a long tradition throughout the educational world; it aims to show the basic aspects of Science, to entertain, and to induce thinking. Until a few years ago, this field of knowledge consisted of a few books, a few kits, and other classical (yet innovative) ways to popularize knowledge of Nature and the laws governing it. In Spain, interest in recreational science has grown in recent years. First, new recreational books are being published and found in bookstores. Second, the number of science-related museums and exhibits is increasing. And third, new television shows are being produced, and short, superficial science-based sketches appear in variety programs. However, current programs on Spanish television that deal seriously with Science are scarce. Recreational Science, especially that related to physical phenomena such as light or motion, is generally found at Science Museums because special equipment is required. By contrast, science-related mathematics, quizzes, and puzzles tend to be gathered into books, e.g. the extensive collections by Martin Gardner. Lately, however, science podcasts have entered the field of science communication.
Not only are traditional science journals and television channels providing audio and video podcasts, but new websites deal exclusively with science podcasts, in particular on Recreational Science. In this communication we discuss the above-mentioned trends and describe our experience over the last two years participating in Science Fairs and university-sponsored events to attract students to science and technology careers. We show a combination of real examples (e.g., mathemagic), imagination, use of information technology, and use of social networks. We also present an experience in designing a computational, interactive tool to promote chemistry among prospective high-school students using computers ("Dancing with Bionanomolecules"). In line with the concepts related to Web 2.0, it has already been proposed that a new framework for the communication of science is emerging, i.e., Science Communication 2.0, in which people and institutions develop innovative new ways to explain science topics to diverse publics, and in which Recreational Science is likely to play a leading role.
Abstract:
Poster for the School of Electronics and Computer Science, Learning Societies Lab Open Day, 27 February 2008 at the University of Southampton. Profile and presentation of the EdShare resource. The poster illustrates the philosophy of EdShare, how it relates to the Web 2.0 environment and its relationship to the education agenda in a University.
Abstract:
An introduction to high-quality information resources and databases in Physics and related disciplines, covering sources of information and methods for searching them.
Abstract:
Redciencia is a proposal for a new journalistic medium that seeks to disseminate Colombian scientific advances through multimedia and interactive language, with a view to becoming an example of transmedia journalism within Web 2.0.
Abstract:
The following work presents recommendations aimed at Colombian SMEs regarding a process of internationalizing their products in Ireland, Italy, Latvia, Lithuania, and Luxembourg, countries belonging to the European Union, based on an analysis of entry barriers, the Free Trade Agreement between the European Union and Colombia, and the behavior of the trade balance between them. With this analysis, we consolidate information necessary and useful for building a work plan aimed at penetrating European markets, generating greater reach and customer growth, and achieving greater visibility, mainly for the products and the country, by creating a need in the markets entered. To that end, the document moves from basic concepts, through the export route and entry barriers, to identifying opportunities in those European Union markets. Among the basic concepts, we discuss what an SME is, the main focus of our work, highlighting its importance within society and especially in a country's economy, as SMEs contribute to the trade balance once a product-export process begins. This analysis leads to a series of conclusions and recommendations that will be of great use to entrepreneurs with export intentions; it also offers an enriching contribution from the perspective of future professionals who set down in this document the knowledge acquired over five years, as well as skill in selecting and searching for the specific information that supports the presentation of this valuable topic.
Abstract:
We analyze the optimal provision of information in a procurement auction with horizontally differentiated goods. The buyer has private information about her preferred location in the product space and has access to a costless communication device. A seller who pays the entry cost may submit a bid comprising a location and a minimum price. We characterize the optimal information structure and show that the buyer prefers to attract only two bids. Further, additional sellers are inefficient, since they reduce total and consumer surplus gross of entry costs. We show that the buyer will not find it optimal to send public information to all sellers. On the other hand, she may profit from setting a minimum price, and a severe hold-up problem arises if she lacks the commitment to set up the rules of the auction ex ante.
Abstract:
Shape complexity has recently received attention from different fields, such as computer vision and psychology. In this paper, integral geometry and information theory tools are applied to quantify shape complexity from two different perspectives: from the inside of the object, we evaluate its degree of structure or correlation between its surfaces (inner complexity), and from the outside, we compute its degree of interaction with the circumscribing sphere (outer complexity). Our shape complexity measures are based on the following two facts: uniformly distributed global lines crossing an object define a continuous information channel, and the continuous mutual information of this channel is independent of the object discretisation and invariant to translations, rotations, and changes of scale. The measures introduced in this paper can potentially be used as shape descriptors for object recognition, image retrieval, object localisation, tumour analysis, and protein docking, among others.
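For any finite discretisation, the mutual information of such a channel reduces to the familiar discrete form I(X;Y) = Σ p(x,y) log₂ [p(x,y) / (p(x)p(y))]. A minimal sketch of that computation (our own illustration, assuming NumPy; the joint distribution here stands in for the line-crossing channel of the paper):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a channel given as a 2-D array
    of joint probabilities p(x, y) that sums to 1."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    mask = joint > 0                        # skip zero cells (0 log 0 = 0)
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

# Perfectly correlated binary channel: 1 bit of shared information.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0
# Independent uniform channel: no shared information.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

In the paper's setting, invariance of the continuous measure to translations, rotations, and scale is what makes such values usable as shape descriptors.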
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the learning of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is still fiction, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A great deal of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done in the future. This fact allows us to affirm that this topic is one of the most interesting in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many considerations have to be taken into account; for example, points may lack a correspondence due to a surface occlusion or simply because their projection falls outside the camera's field of view.
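The stereo principle, recovering a 3-D point from its two projections once both cameras are calibrated, can be sketched with standard linear (DLT) triangulation. This is a generic illustration, not the thesis's own code; `P1` and `P2` are the 3x4 projection matrices that the calibration step provides, and NumPy is assumed:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 : 3x4 camera projection matrices (from calibration)
    x1, x2 : (u, v) image coordinates of the pair of homologous points
    Returns the 3-D point in inhomogeneous coordinates.
    """
    # Each image point gives two linear constraints on the homogeneous X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Note that this assumes the correspondence problem is already solved for the point pair, which is exactly why the thesis turns to coded structured light.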
The interest of the thesis is focused on structured light, which is considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and an image sensor: the deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: as each token of light is imaged by the camera, we have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has made it possible to present a new coded structured light pattern which solves the correspondence problem uniquely and robustly. Uniquely, as each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, since the pattern has been coded using the position of each token of light with respect to both co-ordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of the 3D measurement of static objects, as well as the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded in a single projection shot, so it can be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering, and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis begins at the next step, usually known as depth perception or 3D measurement.
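One common way of giving every light token a unique codeword, so that a small window of the observed pattern identifies its position, is a De Bruijn sequence, in which every length-n word over k symbols appears exactly once. This is an illustrative coding choice, not the thesis's own scheme (which codes tokens by their position on both coordinate axes):

```python
def de_bruijn(k, n):
    """De Bruijn sequence B(k, n) via the standard FKM (Lyndon word)
    algorithm: every length-n word over k symbols occurs exactly once
    when the sequence is read cyclically."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# B(2, 3): eight symbols, and each of the 8 binary triples occurs once.
print(de_bruijn(2, 3))  # [0, 0, 0, 1, 0, 1, 1, 1]
```

In a structured-light pattern the k symbols would be realised as, say, stripe colours, so decoding any n consecutive stripes in the image yields a unique match against the projected pattern.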
Abstract:
The Artificial Intelligence (AI) research community has carried out a great deal of work on how AI can help people find what they want on the Internet. The idea of recommender systems has been widely accepted by users. The main task of a recommender system is to locate items, information sources, and people related to the interests and preferences of a person or a group of people. This involves building user models and the ability to anticipate and predict user preferences. This thesis focuses on the study of AI techniques that improve the performance of recommender systems. Initially, a detailed analysis of the current state of the art in this field has been carried out. This work has been organized as a taxonomy in which the recommender systems existing on the Internet are classified along 8 general dimensions. This taxonomy provides an indispensable knowledge base for the design of our proposal. Case-based reasoning (CBR) is a paradigm for learning and reasoning from experience that is well suited to recommender systems because of its foundations in human reasoning. This thesis puts forward a new CBR proposal applied to the field of recommendation, together with a forgetting mechanism for case-based profiles that controls the relevance and age of past experiences. Experimental results show that this proposal adapts profiles to users better and solves the utility problem suffered by CBR-based systems. Recommender systems dramatically improve the quality of their results when information about other users is used while recommending to a particular user. This thesis proposes the agentification of recommender systems in order to take advantage of interesting agent properties such as proactivity, encapsulation, and social ability.
Collaboration between agents is carried out through an opinion-based filtering method and a trust-based collaborative filtering method. Both methods rely on a social model of trust that makes agents less vulnerable to others when collaborating. Experimental results show that the proposed collaborative recommender agents improve system performance while preserving the privacy of the user's personal data. Finally, this thesis also proposes a procedure for evaluating recommender systems that allows scientific discussion of the results. This proposal simulates users' behaviour over time based on real user profiles. We hope this evaluation methodology will contribute to the progress of this research area.
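The collaborative step, using information about other users when recommending to one of them, can be sketched as classic user-based collaborative filtering. This is a minimal illustration of the general technique only; the agents in the thesis additionally weight their collaboration by the social trust model described above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' rating dicts,
    computed over their co-rated items only."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Predict `user`'s rating of `item` as the similarity-weighted
    average of the ratings given by other users who rated it."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None
```

Replacing the raw similarity `s` by a trust-adjusted weight is where a social trust model, like the one proposed in the thesis, would plug in.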
Abstract:
Nowadays, companies face great difficulties in managing their business due to constant and unpredictable economic market fluctuations. Recent changes in market trends (such as the constant demand for new products and services, mass customization, and the drastic reduction of delivery times) lead companies to adopt strategies of creating partnerships with other companies as a way to respond effectively to such difficult economic times. The concept of Collaborative Networks was born as a consequence of companies no longer considering the management of their internal business processes sufficient, tending instead to seek a collaborative approach with other partners for their critical processes. Information and communication technologies (ICT) have assumed a major role as "enablers" of these kinds of networks, enhancing information sharing and business process integration. Several new trends in ICT architectures have been created to support the requirements of collaborative networks, but a common platform that reduces the integration effort needed in virtual organizations still does not exist. This study aims to investigate the technological solutions currently available in the market that enhance the management of companies' business processes (in particular, Collaborative Planning). Finally, the research work ends with the presentation of a conceptual model that answers the constraints evaluated.
Abstract:
Enkhnaran will discuss issues for professional education raised by museums and tourism companies, which share similar objectives in the sense that each aims to provide its guests with quality information, entertainment, and a memorable experience. With limited budgets, it is especially important for museums to co-operate with tourist companies in order to attract new and repeat visitors and to generate important revenue.
Abstract:
This study compares the discrimination of successive visual number and successive auditory number using the same stimulus durations and presentation rates for both stimuli. The accuracy of the discrimination of successive number decreased as the presentation rate increased and the number in a series increased.
Abstract:
Planning a project with proper consideration of all necessary factors, and managing it to ensure successful implementation, faces many challenges. The initial stage of planning a project for bidding is costly, time consuming, and usually yields poor accuracy in cost and effort predictions. On the other hand, detailed information about previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolio management has been brought into this field with the aim of improving information sharing and management among different projects. However, the amount of information that can be shared is still limited to generic information. In this paper, we report on a recently developed software system, COBRA (Automated Project Information Sharing and Management System), which automatically generates a project plan with time and cost effort estimates based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method using product-based planning from the PRINCE2 methodology. Keywords: project management, product based planning, best practice, PRINCE2
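Effort estimation from previously completed projects can be sketched as estimation by analogy: find the past projects most similar to the new one in feature space and average their recorded effort. This illustrates the general idea only; COBRA's actual estimation model and the PRINCE2 product-based planning step are not reproduced here, and the feature encoding is hypothetical:

```python
import math

def estimate_effort(past, new, k=3):
    """Analogy-based effort estimate.

    past : list of (features, effort) pairs from completed projects,
           where features is a numeric tuple (e.g. size, team, duration)
    new  : feature tuple of the project being planned
    Returns the mean effort of the k nearest completed projects.
    """
    def dist(a, b):
        # Euclidean distance between feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(past, key=lambda p: dist(p[0], new))[:k]
    return sum(effort for _, effort in nearest) / len(nearest)

# Two small past projects dominate the estimate for a small new project.
past = [((1.0, 1.0), 10.0), ((1.0, 2.0), 12.0),
        ((9.0, 9.0), 90.0), ((10.0, 10.0), 100.0)]
print(estimate_effort(past, (1.0, 1.0), k=2))  # 11.0
```

In practice the features would be normalised so that no single attribute dominates the distance, which is a standard refinement of analogy-based estimation.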