Abstract:
Growing interest in the inference and prediction of network characteristics is justified by their importance for a variety of network-aware applications. One widely adopted strategy for characterizing network conditions relies on active, end-to-end probing of the network. Active end-to-end probing techniques differ in (1) the structural composition of the probes they use (e.g., the number and size of packets, the destinations of the various packets, the protocols used), (2) the entity making the measurements (e.g., sender vs. receiver), and (3) the techniques used to combine measurements in order to infer specific metrics of interest. In this paper, we present PeriScope: a Linux API that enables the definition of new probing structures and inference techniques from user space through a flexible interface. PeriScope requires no support from clients beyond the ability to respond to ICMP ECHO REQUESTs, and is designed to minimize user/kernel crossings and to ensure various constraints (e.g., back-to-back packet transmissions, fine-grained timing measurements). We show how to use PeriScope for two different probing purposes: measuring shared packet losses between pairs of endpoints and measuring subpath bandwidth. Results from Internet experiments for both of these goals are also presented.
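As a toy illustration of the first application, shared-loss inference from paired probes can be reduced to comparing joint and marginal loss rates. This is not the PeriScope API; the estimator, the synthetic loss model, and every name below are assumptions made purely for illustration.

```python
import random

def shared_loss_estimate(outcomes):
    """outcomes: list of (lost_a, lost_b) booleans for back-to-back
    probe pairs sent toward endpoints A and B. Returns the excess of
    the joint loss rate over what independence would predict."""
    n = len(outcomes)
    p_a = sum(a for a, _ in outcomes) / n          # marginal loss rate to A
    p_b = sum(b for _, b in outcomes) / n          # marginal loss rate to B
    p_ab = sum(a and b for a, b in outcomes) / n   # joint loss rate
    # If losses were independent, p_ab would be close to p_a * p_b;
    # a clear excess suggests a shared lossy link on the two paths.
    return p_ab - p_a * p_b

# Synthetic scenario: a shared link drops both probes of a pair
# together 5% of the time, plus 2% independent loss on each branch.
random.seed(0)
trials = []
for _ in range(10000):
    shared = random.random() < 0.05
    a = shared or random.random() < 0.02
    b = shared or random.random() < 0.02
    trials.append((a, b))
print(round(shared_loss_estimate(trials), 3))
```

With the shared-loss component present, the excess joint loss rate comes out well above zero (near the 5% shared drop rate), whereas two paths with only independent losses would yield an estimate near zero.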
Abstract:
One relatively unexplored question about the Internet's physical structure concerns the geographical location of its components: routers, links and autonomous systems (ASes). We study this question using two large inventories of Internet routers and links, collected by different methods and about two years apart. We first map each router to its geographical location using two different state-of-the-art tools. We then study the relationship between router location and population density; between geographic distance and link density; and between the size and geographic extent of ASes. Our findings are consistent across the two datasets and both mapping methods. First, as expected, router density per person varies widely over different economic regions; however, in economically homogeneous regions, router density shows a strong superlinear relationship to population density. Second, the probability that two routers are directly connected is strongly dependent on distance; our data is consistent with a model in which a majority (up to 75-95%) of link formation is based on geographical distance (as in the Waxman topology generation method). Finally, we find that ASes show high variability in geographic size, which is correlated with other measures of AS size (degree and number of interfaces). Among small to medium ASes, ASes show wide variability in their geographic dispersal; however, all ASes exceeding a certain threshold in size are maximally dispersed geographically. These findings have many implications for the next generation of topology generators, which we envisage as producing router-level graphs annotated with attributes such as link latencies, AS identifiers and geographical locations.
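The distance-dependent link-formation model referenced above (the Waxman method) can be sketched in a few lines. The parameter names follow the standard Waxman formulation, but the values used here are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def waxman_link_prob(d, L, alpha=0.2, beta=0.4):
    """Waxman link probability: two nodes separated by distance d
    (with L the maximum distance between any two nodes) are directly
    connected with probability beta * exp(-d / (alpha * L))."""
    return beta * math.exp(-d / (alpha * L))

# Nearby router pairs are far more likely to be linked than distant ones.
L = 1000.0
print(round(waxman_link_prob(50, L), 3))   # nearby pair
print(round(waxman_link_prob(900, L), 6))  # distant pair
```

The exponential decay with distance is what makes a topology generated this way consistent with the finding that the majority of observed link formation is distance-based.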
Abstract:
This position paper outlines a new network architecture, i.e., a style of construction that identifies the objects and how they relate. We do not specify particular protocol implementations or specific interfaces and policies. After all, it should be possible to change protocols in an architecture without changing the architecture. Rather we outline the repeating patterns and structures, and how the proposed model would cope with the challenges faced by today's Internet (and that of the future). Our new architecture is based on the following principle: Application processes communicate via a distributed inter-process communication (IPC) facility. The application processes that make up this facility provide a protocol that implements an IPC mechanism, and a protocol for managing distributed IPC (routing, security and other management tasks). Existing implementation strategies, algorithms, and protocols can be cast and used within our proposed new structure.
Abstract:
The TCP/IP architecture was originally designed without taking security measures into consideration. Over the years, it has been subjected to many attacks, which has led to many patches to counter them. Our investigations into the fundamental principles of networking have shown that carefully following an abstract model of Interprocess Communication (IPC) addresses many problems [1]. Guided by this IPC principle, we designed a clean-slate Recursive INternet Architecture (RINA) [2]. In this paper, we show how, without the aid of cryptographic techniques, the bare-bones architecture of RINA can resist most of the security attacks faced by TCP/IP. We also show how hard it is for an intruder to compromise RINA. Then, we show how RINA inherently supports security policies in a more manageable, on-demand basis, in contrast to the rigid, piecemeal approach of TCP/IP.
Abstract:
Recent empirical studies have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of a node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parameterized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for that representativeness. We also find that factors F1 and F2 are the key contributors in our study to the resemblance of our generated topologies to the Internet.
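The kind of power-law check the study performs amounts to fitting y = C * x^α by least squares on log-log axes. The sketch below uses synthetic data with a known exponent; the fitting routine and the numbers are illustrative, not the paper's datasets.

```python
import math

def fit_power_law(xs, ys):
    """Return (alpha, C) for y = C * x**alpha via a least-squares
    straight-line fit on log-log data."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression line is the power-law exponent.
    alpha = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - alpha * mx)
    return alpha, c

# Synthetic degree-vs-rank data following y = 100 * x**-0.8 exactly.
xs = list(range(1, 50))
ys = [100 * x ** -0.8 for x in xs]
alpha, c = fit_power_law(xs, ys)
print(round(alpha, 3), round(c, 1))  # → -0.8 100.0
```

On real topology data the fit is of course noisy, and the recovered α is the quantity the paper proposes as a litmus test for representativeness.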
Abstract:
In this position paper, we review basic control strategies that machines acting as "traffic controllers" could deploy in order to improve the management of Internet services. Such traffic controllers are likely to spur the widespread emergence of advanced applications, which have (so far) been hindered by the inability of the networking infrastructure to deliver on the promise of Quality-of-Service (QoS).
Abstract:
The measurement of users’ attitudes towards and confidence with using the Internet is an important yet poorly researched topic. Previous research has encountered issues that serve to obfuscate rather than clarify. Such issues include a lack of distinction between the terms ‘attitude’ and ‘self-efficacy’, the absence of a theoretical framework to measure each concept, and failure to follow well-established techniques for the measurement of both attitude and self-efficacy. Thus, the primary aim of this research was to develop two statistically reliable scales which independently measure attitudes towards the Internet and Internet self-efficacy. This research addressed the outlined issues by applying appropriate theoretical frameworks to each of the constructs under investigation. First, the well-known three component (affect, behaviour, cognition) model of attitudes was applied to previous Internet attitude statements. The scale was distributed to four large samples of participants. Exploratory factor analyses revealed four underlying factors in the scale: Internet Affect, Internet Exhilaration, Social Benefit of the Internet and Internet Detriment. The final scale contains 21 items, demonstrates excellent reliability and achieved excellent model fit in the confirmatory factor analysis. Second, Bandura’s (1997) model of self-efficacy was followed to develop a reliable measure of Internet self-efficacy. Data collected as part of this research suggests that there are ten main activities which individuals can carry out on the Internet. Preliminary analyses suggested that self-efficacy is confounded with previous experience; thus, individuals were invited to indicate how frequently they performed the listed Internet tasks in addition to rating their feelings of self-efficacy for each task. The scale was distributed to a sample of 841 participants. 
Results from the analyses suggest that the more frequently an individual performs an activity on the Internet, the higher their self-efficacy score for that activity. This suggests that frequency of use ought to be taken into account in individuals’ self-efficacy scores in order to obtain a ‘true’ self-efficacy score. Thus, a formula was devised to incorporate participants’ previous experience of Internet tasks into their Internet self-efficacy scores. This formula was then used to obtain an overall Internet self-efficacy score for each participant. Following the development of both scales, gender and age differences in Internet attitude and Internet self-efficacy scores were explored. The analyses indicated no gender differences in either Internet attitude or Internet self-efficacy scores. However, age group differences were identified for both attitudes and self-efficacy: individuals aged 25-34 years achieved the highest scores on both measures, and scores tended to decrease with age, with older participants achieving lower scores on both measures than younger participants. It was also found that the more exposure individuals had to the Internet, the higher their Internet attitude and Internet self-efficacy scores. Examination of the relationship between attitude and self-efficacy found a significant positive relationship between the two measures, suggesting that the two constructs are related. Implications of these findings and directions for future research are outlined in detail in the Discussion section of this thesis.
Abstract:
OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A(1c) > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), lower age (p = 0.03), and some college (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with sub-optimally controlled diabetes had a level of familiarity with and access to the Internet comparable to an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.
Abstract:
Argentina is considered the most competitive soybean-producing country in the world, having achieved the highest average yield for first-crop soybean and the lowest production cost. For this season a record production of close to 50 million tonnes is expected, for which the country has around 25,000 combine harvesters, an insufficient number for efficient harvesting. The harvest normally takes place in autumn, when bad weather can delay collection; this, added to the simultaneity of crops to be harvested, leads combine operators to increase working speeds, both of the machine and of the threshing cylinder, which increases losses and mechanical damage to the grain, especially in soybean, whose optimal harvest window is very narrow. In this work we evaluated the performance of a conventional combine harvester in a soybean crop at two grain moisture levels and three forward speeds. Header losses, rear (separation and cleaning) losses, and total harvest losses were measured, and the results were compared by ANOVA and Tukey's test (p < 0.05). The results showed significant differences in losses between the speeds and between the moisture levels tested, as well as between rear losses and header losses, but no interaction was found between speed and moisture. We expect these results to give producers additional tools when harvesting soybean, especially once the crop reaches commercial moisture levels or below, with the aim of minimizing machine-caused losses.
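The statistical comparison used in the study (one-way ANOVA across treatment levels, followed by Tukey's test) can be sketched in pure Python for the F statistic alone; Tukey's post-hoc comparisons are omitted here. The loss figures below are invented for illustration and are not the study's data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over k groups:
    between-group mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented harvest-loss measurements (kg/ha) at three forward speeds.
slow = [80, 85, 78, 82]
medium = [95, 99, 94, 97]
fast = [120, 118, 125, 122]
print(round(one_way_anova_f([slow, medium, fast]), 1))
```

A large F relative to the critical value at p < 0.05 is what leads to the study's conclusion of significant differences between speeds.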
Abstract:
In this paper we present some results of a study that provides evidence of third-grade students' ability to develop relational thinking and to understand the meaning of the equals sign while working in a context of numerical equalities.
Abstract:
In this proposal we present a workshop that we consider reliable for use in the classroom and for testing in the school setting, especially in secondary-school mathematics. In it, students discover on their own what happens in a figure and, from regularities and patterns, express what they find, moving from graphical and tabular representations to an algebraic representation and to the meaning and essence of the concept of a sequence. Using spiral figures, the proposal introduces work with sequences: the student is presented with a situation (observing the spiral figures) in which, from what they see, they identify, analyze, and deduce the behavior of what is happening, and express it in verbal and written language with the help of graphical and tabular representations that support the establishment of regularities and give meaning to what happens with the spiral figures.
Abstract:
To think that there are solutions that close the gap between secondary school and university is utopian. However, work on the gap problem does make sense as a way to understand and bring closer the ideals and expectations of the different educational institutions. At the Universidad de los Andes it became evident that such work could be oriented in different directions, with emphasis on the institution, the teachers, or the students. Possible topics included curriculum design, teachers' and students' beliefs and attitudes, teaching methods, conceptions of teaching and learning, and learning difficulties and errors, among others. After several false starts in choosing a research topic, we finally decided to explore learning and to study first-year students, since they are the ones who actually live through the transition from school to university. We also restricted ourselves to precalculus, motivated in part by its higher failure rate. Concretely, the general objective proposed was to describe a mathematics learning profile of the Precalculus student at the moment of entering the university. From this objective the project's main problem followed: defining the conceptual elements with which to articulate the description of that profile. The presentation is divided into four parts: the first presents a conceptual framework with the elements used to describe the profile; the second and third cover, respectively, the research methodology and the results obtained; and the last presents the conclusions of the work.