959 results for Web access characterization


Relevance:

100.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): J.2.

Relevance:

90.00%

Publisher:

Abstract:

The Web has brought humanity closer together to a degree never seen before. With this ease also came cybercrime, terrorism and other phenomena characteristic of a technological, fully computerized society in which land borders do little to limit the active agents, harmful or not, of this system. It was recently discovered that large nations closely "watch" their citizens, disregarding any moral and technological limit: they can listen to telephone conversations and monitor the sending and receiving of e-mails and citizens' Web traffic through extremely powerful monitoring and surveillance programs. In other corners of the globe, nations in turmoil or shrouded in censorship persecute their citizens by denying them access to the Web. More mundanely, there are people who coerce and invade the privacy of acquaintances and family members, combing through every corner of their computers and browsing habits. Accordingly, after studying the technologies that enable constant surveillance of Web users, solutions that provide some anonymity and security for Web traffic were analysed. To support the present study, an analysis was made of platforms that allow anonymous and secure browsing, together with a study of the technologies and programs with the potential for privacy violation and computer intrusion used by prominent nations. The main objective of this work was to analyse computer monitoring and surveillance technologies, identifying the technologies available and seeking potential solutions, in order to investigate the possibility of developing and making available a multimedia tool based on Linux and on a LiveDVD (a Linux operating system that runs from the DVD without requiring installation).
Resources were integrated into the prototype with the aim of giving the user a simple, non-expert way to browse the Web securely and anonymously, from a virtualized operating system (OS) previously tuned for the purpose described above. The prototype was tested and evaluated by a group of citizens in order to assess its potential. The document ends with the conclusions and future work.

Relevance:

90.00%

Publisher:

Abstract:

Web content hosting, in which a Web server stores and provides Web access to documents for different customers, is becoming increasingly common. For example, a web server can host webpages for several different companies and individuals. Traditionally, Web Service Providers (WSPs) provide all customers with the same level of performance (best-effort service). Most service differentiation has been in the pricing structure (individual vs. business rates) or the connectivity type (dial-up access vs. leased line, etc.). This report presents DiffServer, a program that implements two simple, server-side, application-level mechanisms (server-centric and client-centric) to provide different levels of web service. The experiments show that adding this extra layer of abstraction between the client and the Apache web server introduces little overhead under light load conditions. Also, the average waiting time for high-priority requests decreases significantly once requests are assigned priorities, compared to a FIFO approach.
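The abstract does not detail DiffServer's two mechanisms, but the core idea it measures against FIFO is priority scheduling of requests. The following Python sketch is a hedged illustration of that general idea, not the paper's implementation; the class and field names are invented.

```python
import heapq
import itertools

# Illustrative sketch (not DiffServer itself): requests from high-priority
# customers are dequeued before best-effort ones, while requests of equal
# priority keep FIFO order via a monotonic sequence number.
class PriorityRequestQueue:
    HIGH, LOW = 0, 1  # lower number = served first

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break within a class

    def enqueue(self, request, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def dequeue(self):
        priority, _, request = heapq.heappop(self._heap)
        return request

q = PriorityRequestQueue()
q.enqueue("GET /low1", PriorityRequestQueue.LOW)
q.enqueue("GET /high1", PriorityRequestQueue.HIGH)
q.enqueue("GET /low2", PriorityRequestQueue.LOW)
order = [q.dequeue() for _ in range(3)]
# The high-priority request jumps ahead of earlier best-effort ones.
```

Under light load such a queue adds only a heap push/pop per request, which is consistent with the low overhead the experiments report.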

Relevance:

90.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): J.2.

Relevance:

90.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): H.5.2, H.2.8, J.2, H.5.3.

Relevance:

90.00%

Publisher:

Abstract:

Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed a Data Extractor data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval. It works as an intermediary between the applications and the sites. Data Extractor utilizes a twofold "custom wrapper" approach to information retrieval. Wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites. This approach provides accuracy of data retrieval, and power and flexibility in the handling of complex cases.
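The thesis's wrapper scripting language is not shown in the abstract; as a hedged stand-in, the Python sketch below illustrates the "custom wrapper" idea of turning a Web page into a relational-style data set. The sample HTML and field layout are invented for illustration.

```python
import re

# Invented sample page standing in for a fetched Web site response.
SAMPLE_PAGE = """
<table>
  <tr><td>Alice</td><td>42</td></tr>
  <tr><td>Bob</td><td>17</td></tr>
</table>
"""

# A minimal wrapper: one pattern maps each page row to one result tuple.
ROW_PATTERN = re.compile(r"<tr><td>(\w+)</td><td>(\d+)</td></tr>")

def wrap(page):
    """Extract (name, value) tuples so the page can be queried like a table."""
    return [(name, int(value)) for name, value in ROW_PATTERN.findall(page)]

rows = wrap(SAMPLE_PAGE)
```

A real deployment would fetch the page over HTTP and, as the thesis notes, fall back to richer Java-based wrappers when a site's structure is too complex for simple patterns.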

Relevance:

80.00%

Publisher:

Abstract:

Web-based course management and delivery is regarded by many institutions as a key factor in an increasingly competitive education and training world, but the systems currently available are largely unsatisfactory in terms of supporting collaborative work and access to practical science facilities. These limitations are less important in areas where "pen-and-paper" courseware is the mainstream, but become unacceptably restrictive when student assignments require real-time teamwork and access to laboratory equipment. This paper presents a web-accessible workbench for electronics design and test, which was developed in the scope of a European IST project entitled PEARL, with the aim of supporting two main features: full web access and collaborative learning facilities.

Relevance:

80.00%

Publisher:

Abstract:

The fast development of distance learning tools such as Open Educational Resources (OER) and Massive Open Online Courses (MOOCs) is an indicator of a shift in the way in which digital teaching and learning are understood. MOOCs are a new style of online class that allows any person with web access, anywhere, usually free of charge, to participate through video lectures, computer-graded tests and discussion forums. They have been capturing the attention of many higher education institutions around the world. This paper gives an overview of "Introduction to Differential Calculus", a MOOC project created by an engaged volunteer team of Mathematics lecturers from four schools of the Polytechnic Institute of Oporto (IPP). The MOOC theories and their popularity are presented and complemented by a discussion of some MOOC definitions and their inherent advantages and disadvantages. The paper also explores what MOOCs mean for Mathematics education. The project development is described by focusing on the MOOC structure used, as well as the many types of course materials produced. It ends with a short discussion of the problems and challenges met throughout the development of the project. It is also our goal to contribute to a change in the way teaching and learning Mathematics is seen and practiced nowadays, trying to make education accessible to as many people as possible and to increase the recognition of our institution (IPP).

Relevance:

80.00%

Publisher:

Abstract:

This engineering thesis compares consolidation techniques for physical machines and takes a closer look at the VMware ESX Server 2.5 architecture. The consolidation methods covered are: centralization of services, physical consolidation, data integration and application integration. Of the virtualization systems, the most widely used products on the market were compared: VMware's ESX and Server products, and from Microsoft, Virtual Server 2005. The project carried out the installation and configuration of a VMware ESX 2.5 system. A standard for virtual machines was created in the system and a Golden Master disk image was defined. In the actual consolidation phase, physical machines were centralized and virtualized: an NT4 domain, file, print, Exchange Outlook Web Access and SMTP servers, as well as an Exchange 5.5 database, were migrated to the VMware ESX system. The end result of the work was a working virtual infrastructure in which new virtual machines can easily be created as needed.

Relevance:

80.00%

Publisher:

Abstract:

Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this reliance has driven the Internet's rapid development. Nowadays, the Internet is the biggest container of resources. Information databases such as Wikipedia, Dmoz and the open data available on the net represent great informational potential for mankind. Easy and free web access is one of the major features characterizing the Internet culture. Ten years earlier, the web was completely dominated by English. Today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We present the theoretical foundations as well as the ontology itself, named Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aiming at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval.
VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to give some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be thorough.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Knowledge of normal heart weight ranges is important information for pathologists. Comparing the measured heart weight to reference values is one of the key elements used to determine if the heart is pathological, as heart weight increases in many cardiac pathologies. The current reference tables are old and in need of an update. AIMS: The purposes of this study are to establish new reference tables for normal heart weights in the local population and to determine the best predictive factor for normal heart weight. We also aim to provide technical support to calculate the predicted normal heart weight. METHODS: The reference values are based on retrospective analysis of adult Caucasian autopsy cases without any obvious pathology that were collected at the University Centre of Legal Medicine in Lausanne from 2007 to 2011. We selected 288 cases. The mean age was 39.2 years. There were 118 men and 170 women. Regression analyses were performed to assess the relationship of heart weight to body weight, body height, body mass index (BMI) and body surface area (BSA). RESULTS: The heart weight increased along with an increase in all the parameters studied. The mean heart weight was greater in men than in women at a similar body weight. BSA was determined to be the best predictor for normal heart weight. New reference tables for predicted heart weights are presented as a web application that enables the comparison of heart weights observed at autopsy with the reference values. CONCLUSIONS: The reference tables for heart weight and other organs should be systematically updated and adapted for the local population. Web access and smartphone applications for the predicted heart weight represent important investigational tools.
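The abstract names BSA as the best predictor but does not give the fitted regression coefficients, so the slope and intercept in the sketch below are purely illustrative placeholders. The BSA formula used is the classic Du Bois formula (weight in kg, height in cm, BSA in m²), one common choice; the study's actual BSA formula is not stated.

```python
def body_surface_area(weight_kg, height_cm):
    """Du Bois & Du Bois body surface area estimate, in m^2."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def predicted_heart_weight(bsa_m2, slope=170.0, intercept=30.0):
    """Hypothetical linear model: predicted heart weight in grams from BSA.

    The slope and intercept are illustrative only; the study's fitted
    coefficients are not reported in the abstract.
    """
    return slope * bsa_m2 + intercept

bsa = body_surface_area(70.0, 175.0)  # roughly 1.85 m^2 for a 70 kg, 175 cm adult
hw = predicted_heart_weight(bsa)
```

This is the kind of calculation the study's web application would perform: compute BSA from the body measurements taken at autopsy, then look up or compute the predicted normal heart weight for comparison.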

Relevance:

80.00%

Publisher:

Abstract:

Current institutional and official representations of cancer care are: coordination, integration, team approaches, quality management, full informed consent, patient-centered communication and empowerment. Web access, comprehensive care plan summaries, patient-centered healthcare interactions and evidence-based programs are different ways of delivering the comprehensive care and follow-up cancer survivors deserve. The question remains how best to explore and respect, at the same time, more subjective dimensions such as patient beliefs, values, the meaning of the illness, preferences and needs. These aspects are fundamental elements in the construction of a trusting relationship, so as to find common ground, to be open to discussing anxiety and doubts, and to provide information tailored to the patient's level of understanding, in order to reduce vulnerability to the feeling of being "lost in transition".

Relevance:

80.00%

Publisher:

Abstract:

In this thesis I researched different possibilities for retrieving calendar information from a Lotus Domino server over the Internet and how to apply this information in practice. The Sonera Mobile Folder service supports the Microsoft Exchange email server, and Sonera wants to extend the support to the Lotus Domino email server system. The aim of the work is to sign in to the Lotus Domino server with a given user identifier and find that user's calendar and free/busy information. The best solution for retrieving the wanted information is to use Domino Web Access via an Internet browser. This way the wanted information comes in standard XML form and is easy to parse into the wanted shape. Using this kind of Web access also brings extra benefits, because company firewalls often do not block incoming service requests on the HTTP port, which means users do not need to change the settings of the company system. The thesis also introduces the current Mobile Folder and compares it to other similar services.
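The thesis's point is that XML returned over HTTP is easy to parse into the wanted shape. As a hedged illustration, the sketch below parses a free/busy document with Python's standard XML library; the element and attribute names are invented for this example, and the real Domino Web Access schema differs.

```python
import xml.etree.ElementTree as ET

# Invented sample standing in for a free/busy response fetched from
# Domino Web Access over HTTP; the real DWA XML schema is different.
SAMPLE_XML = """
<freebusy user="jdoe">
  <slot start="2024-05-01T09:00" end="2024-05-01T10:00" status="busy"/>
  <slot start="2024-05-01T10:00" end="2024-05-01T12:00" status="free"/>
</freebusy>
"""

def busy_slots(xml_text):
    """Return (start, end) pairs for every slot marked busy."""
    root = ET.fromstring(xml_text)
    return [(s.get("start"), s.get("end"))
            for s in root.iter("slot") if s.get("status") == "busy"]

slots = busy_slots(SAMPLE_XML)
```

Because the data arrives on the standard HTTP port, this kind of client works through typical company firewalls without any changes on the server side, which is the deployment advantage the thesis highlights.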