992 results for Web-searching


Relevance:

70.00%

Publisher:

Abstract:

The web is continuously evolving into a collection of ever more data, which has created interest in collecting and merging these data in a meaningful way. Based on such web data, this paper describes the building of an ontology resting on fuzzy clustering techniques. By continually harvesting folksonomies with web agents, an entirely automatic fuzzy grassroots ontology is built. This self-updating ontology can then be used for several practical applications in fields such as web structuring, web searching and web knowledge visualization. A potential application for online reputation analysis, the added value, and possible future studies are discussed in the conclusion.
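
The abstract gives no implementation details, but the core step it describes, fuzzy clustering of tag data harvested from folksonomies, can be illustrated with a minimal sketch. The fuzzy c-means routine and the toy tag co-occurrence matrix below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and a membership matrix U,
    where U[i, k] is the degree to which sample i belongs to cluster k
    (rows sum to 1) -- the graded memberships a fuzzy ontology builds on."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Toy folksonomy: rows = tags, columns = co-occurrence counts with two
# reference tags across harvested bookmarks (invented data for illustration).
tags = ["jaguar", "car", "engine", "cat", "wildlife"]
X = np.array([[5, 4], [9, 1], [8, 2], [1, 9], [2, 8]], dtype=float)

centres, U = fuzzy_cmeans(X, c=2)
for tag, memberships in zip(tags, U):
    print(tag, np.round(memberships, 2))  # graded membership -> fuzzy concepts
```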

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this paper is to explore the implementation of online learning in distance educational delivery at Yellow Fields University (a pseudonym) in Sri Lanka. The implementation of online distance education at the University included the use of blended learning. The policy initiative to introduce online learning for distance education in Sri Lanka was guided by the expectation of cost reduction, and the implementation was financed under the Distance Education Modernization Project. The paper presents one case study from a larger multiple-case-study research project that employed an ethnographic research approach in investigating the impact of ICT on distance education in Sri Lanka. Documents, questionnaires and qualitative interviews were used for data collection. There was a significant positive relationship between ownership of computers and students' ability to use computers for word processing, emailing and Web searching. The lack of access to computers and the Internet, the lack of infrastructure, low levels of computer literacy, the lack of local-language content, and the lack of formal student support services at the University were found to be major barriers to implementing compulsory online activities at the University.

Relevance:

60.00%

Publisher:

Abstract:

I consider the case for genuinely anonymous web searching. Big data seems to have it in for privacy. The story is well known, particularly since the dawn of the web. Vastly more personal information, monumental and quotidian, is gathered than in the pre-digital days. Once gathered it can be aggregated and analyzed to produce rich portraits, which in turn permit unnerving prediction of our future behavior. The new information can then be shared widely, limiting prospects and threatening autonomy. How should we respond? Following Nissenbaum (2011) and Brunton and Nissenbaum (2011 and 2013), I will argue that the proposed solutions—consent, anonymity as conventionally practiced, corporate best practices, and law—fail to protect us against routine surveillance of our online behavior. Brunton and Nissenbaum rightly maintain that, given the power imbalance between data holders and data subjects, obfuscation of one’s online activities is justified. Obfuscation works by generating “misleading, false, or ambiguous data with the intention of confusing an adversary or simply adding to the time or cost of separating good data from bad,” thus decreasing the value of the data collected (Brunton and Nissenbaum, 2011). The phenomenon is as old as the hills. Natural selection evidently blundered upon the tactic long ago. Take a savory butterfly whose markings mimic those of a toxic cousin. From the point of view of a would-be predator the data conveyed by the pattern is ambiguous. Is the bug lunch or potential last meal? In the light of the steep costs of a mistake, the savvy predator goes hungry. Online obfuscation works similarly, attempting for instance to disguise the surfer’s identity (Tor) or the nature of her queries (Howe and Nissenbaum 2009). Yet online obfuscation comes with significant social costs. First, it implies free riding. If I’ve installed an effective obfuscating program, I’m enjoying the benefits of an apparently free internet without paying the costs of surveillance, which are shifted entirely onto non-obfuscators. Second, it permits sketchy actors, from child pornographers to fraudsters, to operate with near impunity. Third, online merchants could plausibly claim that, when we shop online, surveillance is the price we pay for convenience. If we don’t like it, we should take our business to the local brick-and-mortar and pay with cash. Brunton and Nissenbaum have not fully addressed the last two costs. Nevertheless, I think the strict defender of online anonymity can meet these objections. Regarding the third, the future doesn’t bode well for offline shopping. Consider music and books. Intrepid shoppers can still find most of what they want in a book or record store. Soon, though, this will probably not be the case. And then there are those who, for perfectly good reasons, are sensitive about doing some of their shopping in person, perhaps because of their weight or sexual tastes. I argue that consumers should not have to pay the price of surveillance every time they want to buy that catchy new hit, that New York Times bestseller, or a sex toy.
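
The abstract is an argument rather than a technical description, but the obfuscation tactic it builds on (in the spirit of Howe and Nissenbaum 2009) can be made concrete: interleave the real query with plausible decoys so that separating good data from bad becomes costly for the observer. The following is a minimal, purely hypothetical sketch; the decoy list, endpoint and pacing are invented for illustration and real tools are far more sophisticated:

```python
import random
import time
import urllib.parse
import urllib.request

DECOY_QUERIES = [  # harmless filler topics; a real tool would refresh these from external feeds
    "weather tomorrow", "pasta recipes", "bus timetable", "guitar chords",
]

def obfuscated_search(real_query, search_url="https://example.org/search?q=", n_decoys=3):
    """Send the real query mixed with decoys in random order, so a server-side
    log cannot trivially tell which request reflects the user's actual interest."""
    queries = [real_query] + random.sample(DECOY_QUERIES, n_decoys)
    random.shuffle(queries)
    for q in queries:
        url = search_url + urllib.parse.quote(q)
        urllib.request.urlopen(url, timeout=10).read()  # fire the request, ignore the body
        time.sleep(random.uniform(1, 5))                # human-like pacing between requests

# obfuscated_search("symptoms of rare disease")
```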

Relevance:

60.00%

Publisher:

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, share-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of the distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agents, information agents and interface agents. The discussion of the problem domain and the deployment of the computation agent and the information agent are presented with the analysis, design and implementation of experimental systems in high-performance Internet computing and in scalable Web searching. In the computation agent study, high-performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution that copes with the growth of the Web and of the information on the Web. Our research reveals that, with the deployment of distributed software agents in Internet computing, we have a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
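
The dissertation abstract contains no code; the sketch below only illustrates the JAM-style idea of partitioning a brute-force key search across agents. Local multiprocessing workers stand in for Java agents dispatched over the Internet, and the toy single-byte XOR "cipher" is an assumption made for brevity:

```python
from multiprocessing import Pool

CIPHERTEXT = bytes([c ^ 0x5A for c in b"attack at dawn"])  # toy XOR cipher, secret key 0x5A
KNOWN_PLAINTEXT_PREFIX = b"attack"

def try_key_range(key_range):
    """One 'computation agent': exhaustively test its slice of the key space."""
    for key in key_range:
        plain = bytes(c ^ key for c in CIPHERTEXT)
        if plain.startswith(KNOWN_PLAINTEXT_PREFIX):
            return key, plain
    return None

if __name__ == "__main__":
    n_agents = 4
    slices = [range(i, 256, n_agents) for i in range(n_agents)]  # partition the 8-bit key space
    with Pool(n_agents) as pool:
        for result in pool.imap_unordered(try_key_range, slices):
            if result:
                key, plain = result
                print(f"key=0x{key:02X} plaintext={plain!r}")
                break
```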

Relevance:

40.00%

Publisher:

Abstract:

The traditional characteristics of and challenges in organizing and searching information on the World Wide Web are outlined and reviewed. The classification features of two of these methods are analyzed: Google, in the case of automated search engines, and Yahoo! Directory, in the case of subject directories. Recent advances in the Semantic Web, particularly the growing application of ontologies and Linked Data, are also reviewed. Finally, some problems and prospects related to the use of classification and indexing on the World Wide Web are discussed, emphasizing the need to rethink the role of classification in the organization of these resources and outlining the possibilities of applying Ranganathan's facet theories of classification.

Relevance:

40.00%

Publisher:

Abstract:

Many queries sent to search engines refer to specific locations in the world. Location-based queries try to find local services and facilities around the user's environment or in a particular area. This paper reviews the specifications of geospatial queries and discusses the similarities and differences between location-based queries and other queries. We introduce nine patterns for location-based queries containing either a service name alone or a service name accompanied by a location name. Our survey indicates that at least 22% of Web queries have a geospatial dimension and that most of these can be considered location-based queries. We propose that location-based queries should be treated differently from general queries in order to produce more relevant results.
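
The abstract does not reproduce the nine patterns, so the sketch below only roughly illustrates the two ingredients it names, a service name alone versus a service name accompanied by a location name; the service lexicon, gazetteer and matching rules are invented for the example:

```python
import re

SERVICES = {"pizza", "hotel", "dentist", "pharmacy"}   # toy service lexicon
LOCATIONS = {"brisbane", "sydney", "melbourne"}        # toy gazetteer

def classify_query(query):
    """Very rough classifier distinguishing two location-based query patterns:
    'service' alone (implicit user location) vs 'service + explicit location'."""
    tokens = re.findall(r"[a-z]+", query.lower())
    has_service = any(t in SERVICES for t in tokens)
    has_location = any(t in LOCATIONS for t in tokens)
    if has_service and has_location:
        return "service + explicit location"
    if has_service:
        return "service only (resolve against the user's current location)"
    return "not location-based"

print(classify_query("cheap hotel near Brisbane"))  # service + explicit location
print(classify_query("pizza open now"))             # service only
print(classify_query("python regex tutorial"))      # not location-based
```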

Relevance:

40.00%

Publisher:

Abstract:

Media outlets have recently added many new tools to their websites in order to broaden the dialogue with their users, a feature that has come to be called interactivity. The objective of this research is to describe the interactive resources of Chilean media websites. The analysis was conducted on 20 sites using a template of six dimensions that identifies the interactive forms in use today. The findings indicate that Chilean digital media are expanding the possibilities of dialogue with users on social media, especially Twitter and Facebook, but that media-user interaction remains monological, that is to say, from the media to the user, with very little feedback.

Relevance:

30.00%

Publisher:

Abstract:

The Web has brought humanity closer together to a degree never seen before. With this ease also came cybercrime, terrorism and other phenomena characteristic of a technological, fully computerized society in which land borders do little to limit the active agents, harmful or not, of this system. It was recently revealed that major nations closely "watch" their citizens, disregarding any moral or technological limit: they can eavesdrop on telephone conversations, monitor the sending and receiving of e-mails, and track citizens' Web traffic through extremely powerful monitoring and surveillance programs. In other corners of the globe, nations in turmoil or shrouded in censorship persecute their citizens by denying them access to the Web. More mundanely, there are people who coerce acquaintances and family members and invade their privacy, rummaging through every corner of their computers and browsing habits. In this context, after studying the technologies that enable constant surveillance of Web users, solutions that provide some anonymity and security for Web traffic were analyzed. To support the present study, an analysis was made of the platforms that allow anonymous and secure browsing, together with a study of the technologies and programs with potential for privacy violation and computer intrusion used by high-profile nations. The main objective of this work was to analyze computer monitoring and surveillance technologies, identify the technologies available, and seek potential solutions, in order to investigate the possibility of developing and releasing a multimedia tool based on Linux and distributed as a LiveDVD (a Linux operating system that runs from the DVD without requiring installation). Resources were integrated into the prototype to give the user an agile, non-expert way to browse the Web securely and anonymously from a virtualized operating system (OS) pre-configured for the purpose described above. The prototype was tested and evaluated by a group of citizens in order to gauge its potential. The document ends with the conclusions and future work.
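
The thesis describes a LiveDVD environment rather than code, but the basic mechanism such environments package, routing ordinary Web traffic through an anonymizing network such as Tor, can be sketched briefly. The snippet assumes a local Tor client listening on its default SOCKS port 9050 and the requests library installed with SOCKS support; both are assumptions, not details taken from the thesis:

```python
import requests  # requires: pip install "requests[socks]"

TOR_PROXY = "socks5h://127.0.0.1:9050"  # socks5h -> DNS lookups also go through Tor

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# Compare the address seen by a remote service with and without the proxy.
direct_ip = requests.get("https://api.ipify.org", timeout=10).text
tor_ip = session.get("https://api.ipify.org", timeout=10).text
print("direct:", direct_ip)
print("via Tor:", tor_ip)  # should differ if the Tor client is running
```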

Relevance:

30.00%

Publisher:

Abstract:

In the last few years, we have observed an exponential increase in information systems, and parking information is one more example of this. Obtaining reliable and up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied. San Francisco in America and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users will be able to see the current parking status and also future status according to the parking model predictions. The source of the data is ancillary in this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author explains the techniques best suited to this task, analyzes the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of parking status behaviour and, with this knowledge, can predict future status values for a given date. The data used comes from Smart Park Ontinyent; it consists of parking occupancy status together with timestamps and is stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test was done with a boosting ensemble classifier, employed over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work reports measurement errors that indicate how reliable the predictions are. The second test was done using the function-fitting seasonal exponential smoothing TBATS model. Finally, as the last test, a model that combines the previous two was tried, just to see the result of this combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies in the predictions for the three models respectively; for a car park of 47 places, this corresponds to roughly a 10% average error in parking slot predictions. This result could be even better with more data available. In order to make this kind of information visible and reachable by everyone with an Internet-connected device, a web application was built for this purpose. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, were:
- Park distances from the user's location: provides the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: this is neither a service nor a function, just a better visualization and better handling of the information.
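
The thesis does not publish its models; as a rough stand-in for the seasonal forecasting step, the sketch below fits a Holt-Winters exponential smoothing model (not the exact TBATS model or C5.0 boosting used in the thesis) to synthetic occupancy data for a 47-place car park:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic hourly free-slot counts with a daily cycle (a stand-in for the
# Smart Park Ontinyent occupancy data used in the thesis).
rng = np.random.default_rng(1)
hours = pd.date_range("2024-01-01", periods=24 * 28, freq="h")
daily_cycle = 20 + 15 * np.sin(2 * np.pi * hours.hour.to_numpy() / 24)
free_slots = np.clip(daily_cycle + rng.normal(0, 3, len(hours)), 0, 47)
series = pd.Series(free_slots, index=hours)

# Hold out the last day, fit on the rest, and forecast 24 hours ahead.
train, test = series[:-24], series[-24:]
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=24).fit()
forecast = model.forecast(24)

mae = (forecast - test).abs().mean()
print(f"mean absolute error over the last day: {mae:.1f} free slots")
```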

Relevance:

30.00%

Publisher:

Abstract:

The indexing and searching of web pages is based on text analysis. Current technology still cannot process the text contained in the images of web pages efficiently and quickly enough. This poses a significant problem for indexing, but also for accessibility. In order to quantify this problem, we developed a software application that allows us to carry out a study of this situation. We used this software to analyze a set of web pages representative of the current situation on the Internet. The results obtained were analyzed and compared with previous studies.
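
The abstract does not detail the software; one crude way to approximate the measurement it implies, how much page content is locked inside images without an accessible text equivalent, is sketched below with BeautifulSoup. The heuristics and sample page are invented for illustration:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def image_text_report(html):
    """Count images with and without alt text and compare against the amount of
    machine-readable text on the page -- a crude proxy for how much content is
    invisible to indexing and to assistive technology."""
    soup = BeautifulSoup(html, "html.parser")
    images = soup.find_all("img")
    missing_alt = [img for img in images if not img.get("alt", "").strip()]
    text_chars = len(soup.get_text(separator=" ", strip=True))
    return {
        "images": len(images),
        "images_without_alt": len(missing_alt),
        "indexable_text_chars": text_chars,
    }

sample = '<html><body><h1>Menu</h1><img src="menu.png"><img src="logo.png" alt="Cafe logo"></body></html>'
print(image_text_report(sample))
# {'images': 2, 'images_without_alt': 1, 'indexable_text_chars': 4}
```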

Relevance:

30.00%

Publisher:

Abstract:

The Internet is increasingly used as a source of information on health issues and is probably a major source of patient empowerment. This process is, however, limited by the frequently poor quality of web-based health information designed for consumers. Better diffusion of information about the criteria defining the quality of website content, and about useful methods for searching for such information, could be particularly useful to patients and their relatives. A brief, six-item DISCERN version, characterized by a high specificity for detecting websites with good or very good content quality, was recently developed. This tool could facilitate the identification of high-quality information on the web by patients and may improve the empowerment process initiated by the development of the health-related web.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: The Internet is increasingly used as a source of information for mental health issues. The burden of obsessive compulsive disorder (OCD) may lead persons with diagnosed or undiagnosed OCD, and their relatives, to search for good quality information on the Web. This study aimed to evaluate the quality of Web-based information on English-language sites dealing with OCD and to compare the quality of websites found through a general and a medically specialized search engine. METHODS: Keywords related to OCD were entered into Google and OmniMedicalSearch. Websites were assessed on the basis of accountability, interactivity, readability, and content quality. The "Health on the Net" (HON) quality label and the Brief DISCERN scale score were used as possible content quality indicators. Of the 235 links identified, 53 websites were analyzed. RESULTS: The content quality of the OCD websites examined was relatively good. The use of a specialized search engine did not offer an advantage in finding websites with better content quality. A score ≥16 on the Brief DISCERN scale is associated with better content quality. CONCLUSION: This study shows the acceptability of the content quality of OCD websites. There is no advantage in searching for information with a specialized search engine rather than a general one. PRACTICAL IMPLICATIONS: The Internet offers a number of high quality OCD websites. It remains critical, however, to have a provider-patient talk about the information found on the Web.