891 results for 080704 Information Retrieval and Web Search
Abstract:
This thesis is entitled "Buyer information and brand choice behaviour in markets with asymmetries". The period of transition set in motion by globalization and liberalization has brought a considerable degree of homogeneity with western societies with respect to the quantity and quality of goods and services. The study aims to find out how buyers adapt to the prevailing complex and dynamic market configuration, taking as an archetypal situation the information gathering and brand-choice decisions for selected household consumer durables. The study was based on a set of 301 sample respondents who were either first-time or repeat purchasers, for household use, of the items under study, drawn from a sample area comprising rural, urban and semi-urban locations. Data were collected using an interview schedule and analysed with standard statistical computer programs. Buyer confidence, as perceived by buyers with respect to information acquisition and brand choice, represents the felt competence to function effectively in the market. In general, lower levels of education, income and occupation were associated with lower levels of search; the oldest respondents were also low searchers, and repeat purchasers of a product searched less than first-time purchasers. The most important source of information was word of mouth (information from others), followed by television advertisements; the least important was billboards, displays and similar forms of advertising. The second factor is characterized by items representing 'social attributes', such as use by many others, use by peers, recommendation by significant others and reputation of the brand. The third factor represents 'susceptibility to incentives and promotions'.
Abstract:
Scholarly communication over the past 10 to 15 years has gained tremendous momentum with the advent of the Internet and the World Wide Web. The web has transformed the ways in which people search for, find, use and communicate information. Innovations in web technology since 2005 have brought an array of new services and facilities and an enhanced version of the web named Web 2.0. Web 2.0 facilitates a collaborative environment in which users can interact with information: it enables them to create, annotate, review, share, re-use and represent information in new ways, thereby optimizing information dissemination.
Abstract:
Because of a lack of information or time, or simply not knowing where to look, we often do not find out about events we would have liked to attend, such as concerts, conferences or sporting activities, or we find out too late. The aim of this project is to exploit the capabilities of social networks to create a website that lets users submit and geolocate events, which can then be reviewed and promoted by other users, so as to fill this gap. The implemented solution must provide the following functionality: event submission (adding the main data of an event and geolocating it on the map); organization of information (categories and metacategories for grouping events, plus a tag system to make searching the site's content easier); exploration of existing events (the details of any event can be viewed via the map); a voting system (giving users the ability to decide which information is most relevant); a personal agenda (for registering events so as to receive notifications of changes, or simply as reminders); communication between users (through comments attached to events and/or an internal chat); web syndication (distributing content using the RSS standard); and a simple API (giving external applications access to certain information). A minimal illustrative sketch of the kind of event record these features imply is given below.
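Purely by way of illustration (none of the class, field or function names below come from the project), a sketch of an event record covering the submission, geolocation, tagging and voting features described above:

```python
# Purely illustrative sketch of an event record for the submission/geolocation
# features described above; all names are invented, not taken from the project.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    title: str
    starts_at: datetime
    latitude: float          # geolocation shown on the map
    longitude: float
    category: str            # grouped under categories/metacategories
    tags: list[str] = field(default_factory=list)  # free-form tags to aid search
    votes: int = 0           # voting system: users promote relevant events

    def upvote(self) -> None:
        self.votes += 1

# Example usage with made-up data
concert = Event("Jazz al parc", datetime(2011, 7, 15, 21, 0),
                41.3874, 2.1686, "concerts", ["jazz", "free"])
concert.upvote()
print(concert)
```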
Abstract:
These slides support students in responding to the challenge: "I've been told not to use Google or Wikipedia to research my essay. What else is there?" The PowerPoint guides students in identifying high-quality, up-to-date and relevant resources on the web that they can reliably draw upon for their academic assignments. The slides were created by Fiona Nichols, the subject liaison librarian who supports the School of Electronics and Computer Science at the University of Southampton.
Abstract:
A first-year-level introduction to finding and evaluating information (mostly online).
Abstract:
This dissertation results from an investigation that carried out a webometric study of the presence of Portuguese universities on the Web, assessing the visibility of these institutions through the calculation of a webometric indicator, the Web Impact Factor. The World Wide Web is currently one of the main means of disseminating information. Metric studies of information aim to quantify and evaluate information production, the object of study of disciplines such as informetrics, scientometrics and bibliometrics. More recently, cybermetrics and webometrics have emerged as new disciplines that study the production and dissemination of information in the context of cyberspace and the World Wide Web, respectively. Universities, as privileged centres of knowledge production and dissemination, are the natural object of study of webometrics, and evaluating their presence on the World Wide Web contributes to the analysis of these institutions' performance. This work adopted the methodology proposed by Noruzi, which calculates three categories of Web Impact Factor: the Total WIF, the Revised WIF and the Selflink WIF. To calculate these categories, quantitative data were collected on inlinks, selflinks, the total number of pages and the number of pages indexed by the search engine. The search engine used was AltaVista, and Boolean expression searches were performed during the first half of 2009. After collection, the data were treated statistically and the WIF categories were calculated. The study concludes that Portuguese public universities have greater visibility, since they obtain better results in two of the Web Impact Factor categories: the Revised WIF and the Selflink WIF.
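For readers unfamiliar with the indicator, the following minimal Python sketch computes the three WIF categories under the commonly cited definitions (links received divided by the number of indexed pages); it is not code from the dissertation, and the function name, variables and example figures are purely illustrative.

```python
# Minimal sketch of the three Web Impact Factor (WIF) categories mentioned above.
# Definitions follow the common reading of Noruzi's proposal; names are illustrative.

def wif_categories(inlinks: int, selflinks: int, indexed_pages: int) -> dict:
    """Return the Total, Revised and Selflink WIF for one university web domain.

    inlinks       -- pages elsewhere on the Web linking into the domain
    selflinks     -- pages within the domain linking to other pages of the same domain
    indexed_pages -- pages of the domain indexed by the search engine (e.g. AltaVista)
    """
    if indexed_pages == 0:
        raise ValueError("domain has no indexed pages; WIF is undefined")
    return {
        "total_wif": (inlinks + selflinks) / indexed_pages,  # all links received
        "revised_wif": inlinks / indexed_pages,              # external links only
        "selflink_wif": selflinks / indexed_pages,           # internal links only
    }

# Example with made-up figures for a hypothetical university domain
print(wif_categories(12_000, 90_000, 45_000))
```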
Abstract:
The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).
Abstract:
The Web's link structure (termed the Web Graph) is a richly connected set of Web pages. Current applications use this graph for indexing and information retrieval purposes. Here, the relationship between the Web Graph and the application is reversed: the structure of the Web Graph is allowed to influence the behaviour of an application. The paper presents a novel Web crawling agent, AlienBot, whose output is orthogonally coupled to the enemy generation strategy of a computer game; the Web Graph guides AlienBot, causing it to generate a stochastic process. The effectiveness of this unorthodox coupling is demonstrated both for the playability of the game and for the heuristics of the Web crawler. In addition, the paper presents results from the sample of Web pages collected by the crawling process. In particular, it shows: how AlienBot was able to identify the power law inherent in the link structure of the Web; that 61.74 per cent of Web pages use some form of scripting technology; that the size of the Web can be estimated at just over 5.2 billion pages; and that less than 7 per cent of Web pages fully comply with some variant of (X)HTML.
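As a rough illustration of the power-law analysis mentioned above, the sketch below fits an exponent to an in-degree sample using the simple continuous maximum-likelihood estimator; it is not the paper's method, and the sample data are invented.

```python
import math

def power_law_exponent(degrees, k_min=1):
    """Continuous maximum-likelihood estimate of a power-law exponent alpha
    for degree counts >= k_min (a simple approximation, after Clauset et al.)."""
    tail = [k for k in degrees if k >= k_min]
    n = len(tail)
    if n == 0:
        raise ValueError("no observations above k_min")
    return 1.0 + n / sum(math.log(k / k_min) for k in tail)

# Illustrative in-degree sample (not data from the paper)
sample = [1, 1, 2, 1, 3, 7, 1, 2, 15, 1, 4, 2, 1, 1, 30, 2, 1, 5, 1, 2]
print(f"estimated exponent: {power_law_exponent(sample):.2f}")
```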
Abstract:
In the emerging digital economy, the management of information in aerospace and construction organisations faces a particular challenge due to the ever-increasing volume of information and the extensive use of information and communication technologies (ICTs). This paper addresses the problems of information overload and the value of information in both industries by providing some cross-disciplinary insights. In particular, it identifies major issues and challenges in current information evaluation practice in these two industries. Interviews were conducted to obtain a spectrum of industrial perspectives (director/strategic, project management and ICT/document management) on these issues, in particular on information storage and retrieval strategies and on the contrasting personalisation and codification approaches to knowledge and information management. Industry feedback was collected through a follow-up workshop to strengthen the findings of the research. An information-handling agenda is outlined for the development of a future Information Evaluation Methodology (IEM) which could facilitate the codification of high-value information in order to support through-life knowledge and information management (K&IM) practice.
Abstract:
The need for consistent assimilation of satellite measurements for numerical weather prediction led operational meteorological centers to assimilate satellite radiances directly using variational data assimilation systems. More recently there has been a renewed interest in assimilating satellite retrievals (e.g., to avoid the use of relatively complicated radiative transfer models as observation operators for data assimilation). The aim of this paper is to provide a rigorous and comprehensive discussion of the conditions for the equivalence between radiance and retrieval assimilation. It is shown that two requirements need to be satisfied for the equivalence: (i) the radiance observation operator needs to be approximately linear in a region of the state space centered at the retrieval and with a radius of the order of the retrieval error; and (ii) any prior information used to constrain the retrieval should not underrepresent the variability of the state, so as to retain the information content of the measurements. Both these requirements can be tested in practice. When these requirements are met, retrievals can be transformed so as to represent only the portion of the state that is well constrained by the original radiance measurements and can be assimilated in a consistent and optimal way, by means of an appropriate observation operator and a unit matrix as error covariance. Finally, specific cases when retrieval assimilation can be more advantageous (e.g., when the estimate sought by the operational assimilation system depends on the first guess) are discussed.
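As a hedged illustration of requirement (i), in generic notation that is not taken from the paper (atmospheric state $x$, radiance observation operator $H$ with Jacobian $\mathbf{H}$, retrieval $\hat{x}$, retrieval error of order $\sigma_r$), the linearity condition can be sketched as:

```latex
% Generic notation, not the paper's: x = atmospheric state, H = radiance observation
% operator, \hat{x} = retrieval, \sigma_r = retrieval error scale.
\[
  H(x) \;\approx\; H(\hat{x}) + \mathbf{H}\,\bigl(x - \hat{x}\bigr)
  \qquad \text{for all } x \text{ with } \lVert x - \hat{x}\rVert \lesssim \sigma_r ,
  \qquad \mathbf{H} = \left.\frac{\partial H}{\partial x}\right|_{x \approx \hat{x}} .
\]
```

Together with requirement (ii), that the retrieval prior does not underrepresent the variability of the state, this is what allows the transformed retrieval to be assimilated with a linear observation operator and a unit error covariance matrix, as stated in the abstract.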
Abstract:
The report examines the development of the Internet and intranets in the world of business and commerce, drawing on previous literature and research. The new technology is explained and key issues are examined, such as the impact of the Internet on the surveyor's role as 'information broker' and its likely effect on clients' property requirements. The research is based on an analysis of 261 postal questionnaire responses and eight case study interviews from a sample of general practice and quantity surveying practices and corporates. For the first time, the property profession is examined in detail, and the key drivers, barriers and benefits of Internet use are identified for a range of different-sized organisations.
Abstract:
Tagging provides support for retrieval and categorization of online content depending on users' tag choices. A number of models of tagging behaviour have been proposed to identify factors that are considered to affect taggers, such as users' tagging history. In this paper, we use Semiotic Analysis and Activity Theory to study the effect the system designer has on tagging behaviour. The framework we use shows the components that make up a tagging system and how they interact to direct tagging behaviour. We analysed two collaborative tagging systems, CiteULike and Delicious, by applying our framework to their components. Using datasets from both systems, we found that 35% of CiteULike users did not provide tags, compared to only 0.1% of Delicious users. This difference was directly linked to the type of tools used by the system designer to support tagging.
Abstract:
This paper presents an approach for assisting low-literacy readers in accessing online Web information. The "Educational FACILITA" tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models with regard to accessibility concerns. In particular, we propose an interaction model and a Web application that exploit the natural language processing tasks of lexical elaboration and named entity labeling to improve Web accessibility. We report the results of a pilot usability study carried out with low-literacy users. The preliminary results show that Educational FACILITA improves the comprehension of text elements, although the assistance mechanisms may also confuse users when word sense ambiguity is introduced, since for a complex word they gather a list of synonyms with multiple meanings. This points to a future solution in which the correct sense of a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also showed that experienced computer users find the tool more useful than novice computer users do.
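The sketch below illustrates the lexical-elaboration idea only; it is not the Educational FACILITA implementation, but uses English WordNet via NLTK (an assumed substitute resource) to show how gathering synonyms across all senses of a complex word reproduces the ambiguity problem noted above.

```python
# Illustrative sketch of "lexical elaboration": suggest synonyms for words judged complex.
# NOT the Educational FACILITA implementation; uses English WordNet via NLTK purely to
# show the idea (and the ambiguity problem: synonyms may come from unrelated senses).
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def elaborate(word: str, max_synonyms: int = 5) -> list[str]:
    """Collect synonyms for `word` across all of its WordNet senses."""
    synonyms = []
    for synset in wordnet.synsets(word):
        for lemma in synset.lemma_names():
            candidate = lemma.replace("_", " ")
            if candidate.lower() != word.lower() and candidate not in synonyms:
                synonyms.append(candidate)
    return synonyms[:max_synonyms]

# "bank" shows the ambiguity discussed above: financial and river senses get mixed.
print(elaborate("bank"))
```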
Abstract:
The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components and requirements, specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure calls (RPC) based on XML (the RPC-XML technology). The integration of the Java language, XML and RPC-XML technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides a simple graphical user interface (GUI). The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks". (C) 2010 Elsevier B.V. All rights reserved.
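As a purely illustrative sketch (the TCABR/IPFN layer described above is written in Java, and the method and signal names below are invented), the following Python example shows the kind of remote procedure call through which a collaborating laboratory might retrieve data:

```python
# Minimal illustration of the remote-procedure-call idea described above.
# NOT the TCABR/IPFN software; it only shows how a client could retrieve shot data
# over XML-RPC. Method and signal names are invented.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def get_signal(shot: int, name: str) -> list:
    """Server-side stub returning a (fake) waveform for a given shot and signal name."""
    return [shot * 0.001 * i for i in range(5)]

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False, allow_none=True)
server.register_function(get_signal, "get_signal")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: any collaborating laboratory could call the same method remotely.
client = ServerProxy("http://localhost:8000")
print(client.get_signal(27000, "plasma_current"))
server.shutdown()
```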
Abstract:
Each year, search engines like Google, Bing and Yahoo complete trillions of search queries online. Students are especially dependent on these search tools because of their popularity, convenience and accessibility. However, what students are unaware of, by choice or naiveté, is the amount of personal information collected during each search session, how that data is used, and who is interested in their online behavior profile. Privacy policies are frequently updated in favor of the search companies, yet they are lengthy and are often skimmed or ignored entirely, with little thought about how personal web habits are being exploited for analytics and marketing. As an Information Literacy instructor and a member of the Electronic Frontier Foundation, I believe in the importance of educating college students, and web users in general, that they have a right to privacy online. Class discussions on the topic of web privacy have yielded an interesting perspective on internet search usage. Students are unaware of how their online behavior is recorded, and they have consistently expressed hesitancy to use tools that disguise or delete their IP address because of the stigma that doing so may imply they have something to hide or are engaging in illegal activity. Additionally, students fear they will have to surrender the convenience of uber-connectivity in their applications to maintain their privacy. The purpose of this lightning presentation is to provide educators with a lesson plan highlighting and simplifying the privacy terms of the three major search engines: Google, Bing and Yahoo. The presentation focuses on what data these search engines collect about users, how that data is used, and alternative search solutions, like DuckDuckGo, for increased privacy. Students will benefit directly from this lesson, because informed internet users can protect their data, feel safer online and become more effective web searchers.