874 results for Problematic internet use


Relevance:

30.00%

Publisher:

Abstract:

With the increasing demand for document transfer services such as the World Wide Web comes a need for better resource management to reduce the latency of documents in these systems. To address this need, we analyze the potential for document caching at the application level in document transfer services. We have collected traces of actual executions of Mosaic, reflecting over half a million user requests for WWW documents. Using those traces, we study the tradeoffs between caching at three levels in the system, and the potential for use of application-level information in the caching system. Our traces show that while a high hit rate in terms of URLs is achievable, a much lower hit rate is possible in terms of bytes, because most profitably-cached documents are small. We consider the performance of caching when applied at the level of individual user sessions, at the level of individual hosts, and at the level of a collection of hosts on a single LAN. We show that the performance gain achievable by caching at the session level (which is straightforward to implement) is nearly all of that achievable at the LAN level (where caching is more difficult to implement). However, when resource requirements are considered, LAN level caching becomes much more desirable, since it can achieve a given level of caching performance using a much smaller amount of cache space. Finally, we consider the use of organizational boundary information as an example of the potential for use of application-level information in caching. Our results suggest that distinguishing between documents produced locally and those produced remotely can provide useful leverage in designing caching policies, because of differences in the potential for sharing these two document types among multiple users.
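
The paper's trace-driven simulator is not reproduced here, but the URL-versus-byte hit-rate distinction it reports is easy to see in a minimal sketch. The Python below replays a hypothetical trace of (url, size) pairs through an LRU cache and reports both rates; the function and parameter names are illustrative, not taken from the paper.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    """Replay a trace of (url, size_bytes) requests through an LRU cache
    and return (url_hit_rate, byte_hit_rate). When most profitably cached
    documents are small, the first can be high while the second stays low."""
    cache = OrderedDict()                  # url -> size, kept in LRU order
    used = url_hits = byte_hits = reqs = vol = 0
    for url, size in trace:
        reqs += 1
        vol += size
        if url in cache:
            url_hits += 1
            byte_hits += size
            cache.move_to_end(url)         # mark as most recently used
        elif size <= capacity_bytes:       # never cache oversized objects
            cache[url] = size
            used += size
            while used > capacity_bytes:   # evict least recently used
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
    return url_hits / reqs, byte_hits / vol

# Many repeated small objects plus one huge one-off request:
# URL hit rate is high while byte hit rate stays low.
# simulate_lru([("a", 2_000)] * 9 + [("big", 5_000_000)], 1_000_000)
```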

Relevance:

30.00%

Publisher:

Abstract:

As distributed information services like the World Wide Web become increasingly popular on the Internet, problems of scale are clearly evident. A promising technique that addresses many of these problems is service (or document) replication. However, when a service is replicated, clients then need the additional ability to find a "good" provider of that service. In this paper we report on techniques for finding good service providers without a priori knowledge of server location or network topology. We consider the use of two principal metrics for measuring distance in the Internet: hops and round-trip latency. We show that these two metrics yield very different results in practice. Surprisingly, we show data indicating that the number of hops between two hosts in the Internet is not strongly correlated with round-trip latency. Thus, the distance in hops between two hosts is not necessarily a good predictor of the expected latency of a document transfer. Instead of using known or measured distances in hops, we show that the extra cost incurred at runtime by dynamic latency measurement is well justified by the resulting improved performance. In addition, we show that selection based on dynamic latency measurement performs much better in practice than any static selection scheme. Finally, the difference between the distributions of hops and latencies is fundamental enough to suggest differences in algorithms for server replication. We show that conclusions drawn about service replication based on the distribution of hops need to be revised when the distribution of latencies is considered instead.
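
As a rough illustration of dynamic latency measurement versus static selection, the sketch below (not the paper's implementation) probes each candidate replica with a TCP connect at request time and picks the lowest round-trip time; the host names and the use of TCP rather than ICMP are assumptions.

```python
import socket
import time

def rtt_ms(host, port=80, timeout=2.0):
    """One TCP connect used as a crude round-trip probe."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")                # unreachable: worst possible

def pick_replica(hosts):
    """Dynamic selection: measure every candidate at request time and take
    the lowest-latency one, instead of trusting a static hop count."""
    return min(hosts, key=rtt_ms)

# best = pick_replica(["mirror1.example.org", "mirror2.example.org"])
```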

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a novel protocol which uses the Internet Domain Name System (DNS) to partition Web clients into disjoint sets, each of which is associated with a single DNS server. We define an L-DNS cluster to be a grouping of Web clients that use the same local DNS server to resolve Internet host names. We identify such clusters in real time using data obtained from a Web server in conjunction with that server's authoritative DNS, both instrumented with an implementation of our clustering algorithm. Using these clusters, we perform measurements from four distinct Internet locations. Our results show that L-DNS clustering enables a better estimation of the proximity of a Web client to a Web server than previously proposed techniques. Thus, in a Content Distribution Network, a DNS-based scheme that redirects a request from a Web client to one of many servers based on the client's name server coordinates (e.g., hops, latency, or loss rates between the client and servers) would perform better with our algorithm.
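
A minimal sketch of the clustering idea, assuming the correlation of Web-server hits with authoritative-DNS queries has already produced (client IP, local DNS IP) pairs upstream; the paper's real-time instrumentation is not shown.

```python
from collections import defaultdict

def ldns_clusters(resolutions):
    """Group clients into L-DNS clusters keyed by their local resolver,
    given (client_ip, ldns_ip) pairs. A client seen with several resolvers
    would need a tie-break (e.g., most frequent) to keep clusters disjoint."""
    clusters = defaultdict(set)
    for client_ip, ldns_ip in resolutions:
        clusters[ldns_ip].add(client_ip)
    return dict(clusters)

# ldns_clusters([("10.0.0.5", "192.0.2.53"), ("10.0.0.9", "192.0.2.53")])
# -> {"192.0.2.53": {"10.0.0.5", "10.0.0.9"}}
```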

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a tool called Gismo (Generator of Internet Streaming Media Objects and workloads). Gismo enables the specification of a number of streaming media access characteristics, including object popularity, temporal correlation of requests, seasonal access patterns, user session durations, user interactivity times, and variable bit-rate (VBR) self-similarity and marginal distributions. The embodiment of these characteristics in Gismo enables the generation of realistic and scalable request streams for use in the benchmarking and comparative evaluation of Internet streaming media delivery techniques. To demonstrate the usefulness of Gismo, we present a case study that shows the importance of various workload characteristics in determining the effectiveness of proxy caching and server patching techniques in reducing bandwidth requirements.
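
Of the characteristics listed, object popularity is the simplest to illustrate. A hedged sketch of a Zipf-like request-stream generator follows; Gismo's actual interface and parameter names are not reproduced here, and `alpha` is an assumed skew parameter.

```python
import random

def zipf_request_stream(n_objects, n_requests, alpha=0.8, seed=0):
    """Draw object IDs from a Zipf-like popularity law: the request
    probability of the object at popularity rank r is proportional to
    1 / r**alpha. Temporal correlation, sessions and VBR are omitted."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_objects + 1)]
    return rng.choices(range(n_objects), weights=weights, k=n_requests)

# stream = zipf_request_stream(n_objects=1_000, n_requests=100_000)
```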

Relevance:

30.00%

Publisher:

Abstract:

Growing interest in the inference and prediction of network characteristics is justified by their importance to a variety of network-aware applications. One widely adopted strategy for characterizing network conditions relies on active, end-to-end probing of the network. Active end-to-end probing techniques differ in (1) the structural composition of the probes they use (e.g., the number and size of packets, the destinations of various packets, the protocols used), (2) the entity making the measurements (e.g., sender vs. receiver), and (3) the techniques used to combine measurements in order to infer specific metrics of interest. In this paper, we present Periscope: a Linux API that enables the definition of new probing structures and inference techniques from user space through a flexible interface. Periscope requires no support from clients beyond the ability to respond to ICMP ECHO requests, and is designed to minimize user/kernel crossings and to ensure various constraints (e.g., back-to-back packet transmissions, fine-grained timing measurements). We show how to use Periscope for two different probing purposes, namely the measurement of shared packet losses between pairs of endpoints and the measurement of subpath bandwidth. Results from Internet experiments for both of these goals are also presented.
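
Periscope itself is a kernel-assisted API and its interface is not reproduced here; as a loose user-space stand-in, the sketch below shells out to ping(8) for single ICMP echo probes and counts rounds in which two endpoints both fail, a crude proxy for the paired shared-loss probes the paper constructs.

```python
import re
import subprocess

def icmp_rtt_ms(host):
    """One ICMP echo via ping(8); returns RTT in ms, or None on loss.
    (Linux iputils flags: -c probe count, -W timeout in seconds.)"""
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    return float(match.group(1)) if match else None

def shared_loss_fraction(host_a, host_b, rounds=50):
    """Probe both endpoints in quick succession each round and report the
    fraction of rounds in which both probes were lost. A kernel API can
    send the pair truly back-to-back; a subprocess cannot guarantee that."""
    both = sum(
        icmp_rtt_ms(host_a) is None and icmp_rtt_ms(host_b) is None
        for _ in range(rounds)
    )
    return both / rounds
```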

Relevance:

30.00%

Publisher:

Abstract:

NetSketch is a tool that enables the specification of network-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system so as to retain sufficient detail to enable future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis approach based on a strongly-typed Domain-Specific Language (DSL) for specifying network configurations at various levels of sketchiness, along with invariants that need to be enforced thereupon. In this paper, we overview NetSketch, highlight its salient features, and illustrate how it could be used in applications, including the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications). In a companion paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user interface when used in "sketch mode", and prove its soundness relative to appropriately defined notions of validity.
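
To make the strongly-typed compositional analysis idea concrete, here is a toy sketch, not NetSketch's DSL or its actual typing rule: components carry interval "types" for the flow rates they accept and emit, and serial composition type-checks only if the upstream output interval fits the downstream input interval.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowType:
    """A toy 'type' for a network-flow component: intervals bounding the
    aggregate rate it accepts on input and guarantees on output."""
    in_lo: float
    in_hi: float
    out_lo: float
    out_hi: float

def compose(a: FlowType, b: FlowType) -> FlowType:
    """Serial composition type-checks only if everything `a` can emit lies
    within what `b` accepts; the composite keeps a's input face and b's
    output face. Checking interfaces locally, without re-analysing the
    whole system, is the point of the compositional approach."""
    if not (b.in_lo <= a.out_lo and a.out_hi <= b.in_hi):
        raise TypeError("cannot compose: upstream output exceeds downstream input range")
    return FlowType(a.in_lo, a.in_hi, b.out_lo, b.out_hi)

# compose(FlowType(0, 10, 0, 8), FlowType(0, 9, 0, 9))   # well-typed
# compose(FlowType(0, 10, 0, 12), FlowType(0, 9, 0, 9))  # raises TypeError
```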

Relevance:

30.00%

Publisher:

Abstract:

The measurement of users' attitudes towards, and confidence with, using the Internet is an important yet poorly researched topic. Previous research has encountered issues that serve to obfuscate rather than clarify. Such issues include a lack of distinction between the terms 'attitude' and 'self-efficacy', the absence of a theoretical framework for measuring each concept, and failure to follow well-established techniques for the measurement of both attitude and self-efficacy. Thus, the primary aim of this research was to develop two statistically reliable scales which independently measure attitudes towards the Internet and Internet self-efficacy. This research addressed the outlined issues by applying appropriate theoretical frameworks to each of the constructs under investigation. First, the well-known three-component (affect, behaviour, cognition) model of attitudes was applied to previous Internet attitude statements. The scale was distributed to four large samples of participants. Exploratory factor analyses revealed four underlying factors in the scale: Internet Affect, Internet Exhilaration, Social Benefit of the Internet and Internet Detriment. The final scale contains 21 items, demonstrates excellent reliability and achieved excellent model fit in the confirmatory factor analysis. Second, Bandura's (1997) model of self-efficacy was followed to develop a reliable measure of Internet self-efficacy. Data collected as part of this research suggest that there are ten main activities which individuals can carry out on the Internet. Preliminary analyses suggested that self-efficacy is confounded with previous experience; thus, individuals were invited to indicate how frequently they performed the listed Internet tasks in addition to rating their feelings of self-efficacy for each task. The scale was distributed to a sample of 841 participants. Results from the analyses suggest that the more frequently an individual performs an activity on the Internet, the higher their self-efficacy score for that activity. This suggests that frequency of use ought to be taken into account in individuals' self-efficacy scores to obtain a 'true' self-efficacy score for the individual. Thus, a formula was devised to incorporate participants' previous experience of Internet tasks into their Internet self-efficacy scores. This formula was then used to obtain an overall Internet self-efficacy score for participants. Following the development of both scales, gender and age differences in Internet attitude and Internet self-efficacy scores were explored. The analyses indicated no gender differences in Internet attitude or Internet self-efficacy scores. However, age group differences were identified for both attitudes and self-efficacy: individuals aged 25-34 years achieved the highest scores on both the Internet attitude and Internet self-efficacy measures, and scores on both measures tended to decrease with age, with older participants achieving lower scores than younger participants. It was also found that the more exposure individuals had to the Internet, the higher their Internet attitude and Internet self-efficacy scores. Examination of the relationship between attitude and self-efficacy found a significant positive relationship between the two measures, suggesting that the two constructs are related. Implications of these findings and directions for future research are outlined in detail in the Discussion section of this thesis.
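
The thesis's formula is not reproduced in this abstract, so the sketch below is purely illustrative of the idea of frequency-adjusted self-efficacy: a mean of per-task ratings weighted by how often each task is performed. All names and the weighting scheme are assumptions.

```python
def frequency_weighted_self_efficacy(ratings, frequencies):
    """Illustrative only: average per-task self-efficacy ratings weighted
    by how often each task is performed, so rarely performed tasks count
    for less. This is NOT the thesis's formula, which the abstract omits."""
    total_frequency = sum(frequencies)
    assert len(ratings) == len(frequencies) and total_frequency > 0
    return sum(r * f for r, f in zip(ratings, frequencies)) / total_frequency

# Ten tasks would be scored in the scale itself; three shown for brevity:
# frequency_weighted_self_efficacy([8, 3, 9], [7, 1, 5])
```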

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the storage and use of residual newborn screening (NBS) samples have gained attention. To inform ongoing policy discussions, this article provides an update of previous work on new policies, educational materials, and parental options regarding the storage and use of residual NBS samples. A review of state NBS Web sites was conducted in January 2010 for information related to the storage and use of residual NBS samples. In addition, a review was conducted of current statutes and bills introduced between 2005 and 2009 regarding the storage and/or use of residual NBS samples. Fourteen states currently provide information about the storage and/or use of residual NBS samples. Nine states provide parents the option to request destruction of the residual NBS sample after the required storage period or the option to exclude the sample from research uses. In the coming years, it is anticipated that more states will consider policies to address parental concerns about the storage and use of residual NBS samples. Development of new policies regarding the storage and use of residual NBS samples will require careful consideration of the impact on NBS programs, parent and provider educational materials, and respect for parents, among other issues.

Relevance:

30.00%

Publisher:

Abstract:

Introduction and Aims: In recent years, unprecedented levels of Internet access and the widespread growth of emergent communication technologies have resulted in significantly greater population access for substance use researchers. Despite the research potential of such technologies, the use of the Internet to recruit individuals for participation in event-level research has been limited. The purpose of this paper is to provide a brief account of the methods and results from an online daily diary study of alcohol use. Design and Methods: Participants were recruited using Amazon's Mechanical Turk. Eligible participants completed a brief screener assessing demographics and health behaviours, with a subset of individuals subsequently recruited to participate in a 2-week daily diary study of alcohol use. Results: Multilevel models of the daily alcohol data derived from the Mechanical Turk sample (n=369) replicated several findings commonly reported in daily diary studies of alcohol use. Discussion and Conclusions: Results demonstrate that online participant recruitment and survey administration can be a fruitful method for conducting daily diary alcohol research. © 2014 Australasian Professional Society on Alcohol and other Drugs.
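
As an illustration of the kind of multilevel model such daily-diary data invites (not the paper's actual specification), the sketch below fits a random-intercept linear mixed model with statsmodels on a toy frame of participant-days; the column names and predictor are hypothetical, and count outcomes like daily drinks are often modelled with multilevel Poisson models instead.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy participant-day frame; the real data would have 369 participants
# observed over 14 days each.
df = pd.DataFrame({
    "participant_id": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "weekend":        [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
    "drinks":         [1, 2, 4, 5, 0, 1, 2, 3, 2, 2, 5, 6],
})

# Days (level 1) nested within participants (level 2): a random intercept
# per participant absorbs stable between-person differences.
model = smf.mixedlm("drinks ~ weekend", data=df, groups=df["participant_id"])
print(model.fit().summary())
```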

Relevance:

30.00%

Publisher:

Abstract:

The television and film industries are used to working on large projects. These projects use media and documents of various types, ranging from actual film and videotape to items such as PERT charts for project planning. Some items, such as scripts, evolve over time and go through many versions. It is often necessary to attach information to these "objects" in order to manage, track, and retrieve them. On large productions there may be hundreds of personnel who need access to this material and who in turn generate new items which form some part of the final production. The requirements of this industry in terms of an information system may be generalized, and a distributed software architecture built, primarily using the Internet, to serve the needs of these projects. This architecture must enable potentially very large collections of objects to be managed in a secure environment, with distributed responsibilities held by the many people working on the production. Copyright © 2005 by the Society of Motion Picture and Television Engineers, Inc.
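
One way to picture such an architecture's data model, purely as a hedged sketch rather than the paper's schema: each managed object carries attached metadata and an append-only version history.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AssetVersion:
    number: int
    uri: str                         # where the media or document lives
    created: datetime

@dataclass
class ProductionAsset:
    """One managed 'object' (script, videotape, PERT chart, ...) with the
    attached metadata used to manage, track, and retrieve it. Field names
    are illustrative, not the paper's schema."""
    asset_id: str
    kind: str                        # e.g. "script", "videotape"
    metadata: dict = field(default_factory=dict)
    versions: list = field(default_factory=list)

    def add_version(self, uri: str) -> AssetVersion:
        """Append a new version; scripts in particular accumulate many."""
        version = AssetVersion(len(self.versions) + 1, uri,
                               datetime.now(timezone.utc))
        self.versions.append(version)
        return version
```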

Relevance:

30.00%

Publisher:

Abstract:

The essential purpose of this work is to make a first approach to detecting and identifying the information needs and information behaviour of coaches in combat sports. To this end, a questionnaire was administered to instructors of aikido, boxing, fencing, judo, karate, kendo, lima lama, wrestling and taekwondo, selected through non-probabilistic convenience sampling. In general, we found that the main topics of interest among the instructors are training programmes, nutrition and training diets. Coaches are also most likely to draw on their own experience, the internet and courses to obtain information; by contrast, the library and books are little used.

Relevance:

30.00%

Publisher:

Abstract:

The current global environment and the general increase in the spread and use of Information and Communication Technology (ICT) by companies and consumers make the use of these technologies essential for confronting growing competition in the market. Focusing on this sector, in this research we analyze the use of electronic commerce, both through websites and through electronic marketplaces, and the use of social networking tools as enablers of business. To this end, we conducted a comparative analysis between the Andalusian olive oil cooperatives and the other legal forms present in the sector.

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the issue of the digital divide among students of public secondary schools in Chihuahua City, Mexico. It seeks to identify potential inequality of opportunities with regard to subjects' access to information, knowledge and education through ICT (internet, mobile telephony, broadband and television). The study takes three schools as its investigative setting, using a survey as the data collection instrument and identifying patterns of behavior regarding: general knowledge of these technologies, access to computer equipment and the internet, and characterization of their use. Other aspects of the analysis are the identification of the educational level of parents and of the technology resources available for academic and non-academic purposes in various settings (home, school and social environment). The study concludes that the way forward lies in gathering the alternatives suggested by the teachers themselves for incorporating ICT into teaching in a systematic and planned fashion, the greatest effect of which is reflected in better digital literacy indicators.

Relevance:

30.00%

Publisher:

Abstract:

Data identification is a key task for any Internet Service Provider (ISP) or network administrator. As port fluctuation and encryption become more common in P2P traffic wishing to avoid identification, new strategies must be developed to detect and classify such flows. This paper introduces a new method of separating P2P and standard web traffic that can be applied as part of a data mining process, based on the activity of the hosts on the network. Unlike other research, our method is aimed at classifying individual flows rather than just identifying P2P hosts or ports. Heuristics are analysed and a classification system proposed. The accuracy of the system is then tested using real network traffic from a core internet router, showing over 99% accuracy in some cases. We expand on this proposed strategy to investigate its application to real-time, early classification problems. New proposals are made and the results of real-time experiments compared to those obtained in the data mining research. To the best of our knowledge, this is the first research to use host-based flow identification to determine a flow's application within the early stages of the connection.
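
The paper's heuristics are not detailed in this abstract; the sketch below shows the general flavour of host-activity-based flow classification with invented thresholds: hosts that contact many distinct peers over many distinct ports are treated as P2P-like, and their flows are labelled accordingly.

```python
from collections import defaultdict

def classify_flows(flows, peer_threshold=10, port_threshold=10):
    """Label each (src, dst, dst_port) flow by the behaviour of its source
    host. Thresholds here are assumptions, not the paper's values."""
    peers, ports = defaultdict(set), defaultdict(set)
    for src, dst, dst_port in flows:         # first pass: host activity
        peers[src].add(dst)
        ports[src].add(dst_port)

    labels = {}
    for src, dst, dst_port in flows:         # second pass: label flows
        p2p_like = (len(peers[src]) >= peer_threshold
                    and len(ports[src]) >= port_threshold)
        labels[(src, dst, dst_port)] = "p2p" if p2p_like else "web"
    return labels
```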

Relevance:

30.00%

Publisher:

Abstract:

Over the years, researchers from different disciplines have used a wide variety of research methods to assess the views of children. Qualitative methods such as focus groups and small group discussions are particularly common. Much rarer are large-scale quantitative surveys, which are a valuable way of comparing data across different age groups and countries, and over time. To test the feasibility of carrying out large-scale quantitative research with children, the authors undertook a pilot survey in Northern Ireland in June 2008. There were two notable innovations: first, it was a survey of all Primary 7 children (age 10 and 11 years); second, it used the Internet to gather the information, which has not been done on this scale before. This article discusses the methodology used to implement the pilot study and evaluates the use of the Internet for carrying out survey research with children.