931 results for Router ottico, Click, Reti ottiche, linux
Abstract:
“Point and click” interactions remain one of the key features of graphical user interfaces (GUIs). People with motion-impairments, however, can often have difficulty with accurate control of standard pointing devices. This paper discusses work that aims to reveal the nature of these difficulties through analyses that consider the cursor’s path of movement. A range of cursor measures was applied, and a number of them were found to be significant in capturing the differences between able-bodied users and motion-impaired users, as well as the differences between a haptic force feedback condition and a control condition. The cursor measures found in the literature, however, do not make up a comprehensive list, but provide a starting point for analysing cursor movements more completely. Six new cursor characteristics for motion-impaired users are introduced to capture aspects of cursor movement different from those already proposed.
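To make the idea of path-based cursor measures concrete, the following Python sketch computes two illustrative measures from a sampled cursor trace: path efficiency and target re-entries. The metric names, formulas, and the sample trace are assumptions for illustration, not necessarily the measures analysed in the paper.

```python
# Illustrative cursor-path measures computed from a sampled trajectory.
# The metric definitions here are assumptions for illustration only.
import math

def path_measures(points, target, target_radius):
    """points: list of (x, y) cursor samples; target: (x, y) centre."""
    straight = math.dist(points[0], points[-1])
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    # Path efficiency: 1.0 means a perfectly straight movement.
    efficiency = straight / path_len if path_len > 0 else 1.0

    # Target re-entries: how many times the cursor enters the target region.
    re_entries, inside = 0, False
    for p in points:
        now_inside = math.dist(p, target) <= target_radius
        if now_inside and not inside:
            re_entries += 1
        inside = now_inside
    return {"path_efficiency": efficiency, "target_re_entries": re_entries}

trace = [(0, 0), (40, 5), (80, 20), (95, 48), (100, 50), (98, 52), (100, 50)]
print(path_measures(trace, target=(100, 50), target_radius=5))
```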
Abstract:
People with motion-impairments can often have difficulty with accurate control of standard pointing devices for computer input. The nature of the difficulties may vary, so to be most effective, methods of assisting cursor control must be suited to each user's needs. The work presented here involves a study of cursor trajectories as a means of assessing the requirements of motion-impaired computer users. A new cursor characteristic is proposed that attempts to capture difficulties with moving the cursor in a smooth trajectory. A study was conducted to see if haptic tunnels could improve performance in "point and click" tasks. Results indicate that the tunnels reduced times to target for those users identified by the new characteristic as having the most difficulty moving in a smooth trajectory. This suggests that cursor characteristics have potential applications in performing assessments of a user's cursor control capabilities which can then be used to determine appropriate methods of assistance.
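As an illustration of what a smoothness-oriented characteristic might look like, the sketch below scores a trajectory by its cumulative change of direction per unit of distance travelled. This proxy is an assumption for illustration, not the specific characteristic proposed in the paper.

```python
# A minimal sketch of one possible "smoothness" proxy: the cumulative
# change in movement direction along the trajectory, per unit distance.
# This is an assumed stand-in, not the paper's proposed characteristic.
import math

def direction_change_per_unit(points):
    headings = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (x0, y0) != (x1, y1):
            headings.append(math.atan2(y1 - y0, x1 - x0))
    # Wrap heading differences into [-pi, pi] before accumulating turns.
    total_turn = sum(abs(math.remainder(b - a, math.tau))
                     for a, b in zip(headings, headings[1:]))
    path_len = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    return total_turn / path_len if path_len else 0.0  # radians per pixel

smooth = [(0, 0), (25, 10), (50, 20), (75, 30), (100, 40)]
jerky = [(0, 0), (20, 30), (35, -5), (60, 40), (100, 40)]
print(direction_change_per_unit(smooth), direction_change_per_unit(jerky))
```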
Abstract:
This paper describes a study of the cursor trajectories of motion-impaired users in "point and click" interactions. A characteristic of cursor movement is proposed that aims to capture the spatial distribution of cursor movement about a target. This characteristic indicates that users often exhibit increased cursor movement in the vicinity of the target, have more difficulty performing the "clicking" part of the interaction as compared to the navigation part, and tend to navigate directly toward the target during the middle portion of the cursor trajectory. The implications of these characteristic behaviours on interface design are discussed.
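One way such a spatial characteristic could be computed is sketched below: the share of total path length travelled within a given radius of the target. The binning rule and radius are assumptions for illustration, not the definition used in the paper.

```python
# A rough sketch of summarising cursor movement spatially: the fraction
# of the total path length travelled close to the target.
import math

def share_of_path_near_target(points, target, radius):
    near, total = 0.0, 0.0
    for p, q in zip(points, points[1:]):
        seg = math.dist(p, q)
        total += seg
        # Attribute each segment to the "near target" bin by its midpoint.
        mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        if math.dist(mid, target) <= radius:
            near += seg
    return near / total if total else 0.0

trace = [(0, 0), (60, 30), (95, 48), (102, 53), (97, 47), (100, 50)]
print(share_of_path_near_target(trace, target=(100, 50), radius=16))
```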
Abstract:
Point and click interactions using a mouse are an integral part of computer use for current desktop systems. Compared with younger users though, older adults experience greater difficulties performing cursor positioning tasks, and this can present limitations to using a computer easily and effectively. Target expansion is a technique for improving pointing performance, where the target dynamically grows as the cursor approaches. This has the advantage that targets conserve screen real estate in their unexpanded state, yet can still provide the benefits of a larger area to click on. This paper presents two studies of target expansion with older and younger participants, involving multidirectional point-select tasks with a computer mouse. Study 1 compares static versus expanding targets, and Study 2 compares static targets with three alternative techniques for expansion. Results show that expansion can improve times by up to 14%, and reduce error rates by up to 50%. Additionally, expanding targets are beneficial even when the expansion happens late in the movement, i.e. after the cursor has reached the expanded target area or even after it has reached the original target area. Participants’ subjective feedback on target expansion is generally favorable, lending further support to the technique.
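A minimal sketch of the dynamic expansion idea follows. The trigger rule, radii, and distances are illustrative assumptions rather than the exact designs compared in the two studies.

```python
# A minimal sketch of dynamic target expansion: the clickable radius grows
# once the cursor comes within a trigger distance of the target. Sizes and
# the trigger rule are assumptions, not the exact designs studied.
import math

BASE_RADIUS = 16        # unexpanded target radius, pixels
EXPANDED_RADIUS = 32    # expanded radius (larger area to click on)
TRIGGER_DISTANCE = 100  # expansion starts when the cursor is this close

def effective_radius(cursor, target):
    if math.dist(cursor, target) <= TRIGGER_DISTANCE:
        return EXPANDED_RADIUS
    return BASE_RADIUS

def click_hits(cursor, target):
    return math.dist(cursor, target) <= effective_radius(cursor, target)

# A click 20 px from the centre misses the static target but hits the
# expanded one, which is where the time and error-rate benefits come from.
print(click_hits((120, 50), (100, 50)))
```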
Abstract:
Older adult computer users often lose track of the mouse cursor and so resort to methods such as shaking the mouse or searching the entire screen to find the cursor again. Hence, this paper describes how a standard optical mouse was modified to include a touch sensor, activated by releasing and touching the mouse, which automatically centers the mouse cursor to the screen, potentially making it easier to find a ‘lost’ cursor. Six older adult computer users and six younger computer users were asked to compare the touch sensitive mouse with cursor centering with two alternative techniques for locating the mouse cursor: manually shaking the mouse and using the Windows sonar facility. The time taken to click on a target after a distractor task was recorded, and results show that centering the mouse was the fastest to use, with a 35% improvement over shaking the mouse. Five out of six older participants ranked the touch sensitive mouse with cursor centering as the easiest to use.
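A minimal sketch of the cursor-centering behaviour is shown below, assuming a Windows host (the study compares against the Windows sonar facility). The on_mouse_touched hook standing in for the modified mouse's touch sensor is hypothetical; the Win32 calls themselves are standard.

```python
# A minimal sketch of cursor centering on a Windows host. The
# on_mouse_touched hook for the modified mouse's touch sensor is
# hypothetical; GetSystemMetrics and SetCursorPos are standard Win32 calls.
import ctypes

user32 = ctypes.windll.user32

def center_cursor():
    width = user32.GetSystemMetrics(0)   # SM_CXSCREEN
    height = user32.GetSystemMetrics(1)  # SM_CYSCREEN
    user32.SetCursorPos(width // 2, height // 2)

def on_mouse_touched():
    # Fired when the user touches the mouse after having released it,
    # so the cursor is in a predictable place before the next movement.
    center_cursor()
```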
Abstract:
The use of three orthogonally tagged phosphine reagents to assist chemical work-up via phase-switch scavenging in conjunction with a modular flow reactor is described. These techniques (acidic, basic and Click chemistry) are used to prepare various amides and tri-substituted guanidines from in situ generated iminophosphoranes.
Abstract:
The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide tailored services and added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to ground the discussion.
Abstract:
A virtual system that emulates an ARM-based processor machine has been created to replace a traditional hardware-based system for teaching assembly language. The proposed virtual system integrates, in a single environment, all the development tools necessary to deliver introductory or advanced courses on modern assembly language programming. The virtual system runs a Linux operating system in either a graphical or console mode on a Windows or Linux host machine. No software licenses or extra hardware are required to use the virtual system, so students are free to carry their own ARM emulator with them on a USB memory stick. Institutions adopting this or a similar virtual system can also benefit by reducing capital investment in hardware-based development kits and by enabling distance learning courses.
Abstract:
A supramolecular polymer based upon two complementary polymer components is formed by sequential deposition from solution in THF, using a piezoelectric drop-on-demand inkjet printer. Highly efficient cycloaddition or ‘click’ chemistry afforded a well-defined poly(ethylene glycol) featuring chain-folding diimide end groups, which possesses greatly enhanced solubility in THF relative to earlier materials featuring random diimide sequences. Blending the new polyimide with a complementary poly(ethylene glycol) system bearing pyrene end groups (which bind to the chain-folding diimide units) overcomes the limited solubility encountered previously with chain-folding polyimides in inkjet printing applications. The solution state properties of the resulting polymer blend were assessed via viscometry to confirm the presence of a supramolecular polymer before depositing the two electronically complementary polymers by inkjet printing techniques. The novel materials so produced offer an insight into ways of controlling the properties of printed materials through tuning the structure of the polymer at the (supra)molecular level.
Abstract:
IPTV is now offered by several operators in Europe, the US and Asia using broadcast video over private IP networks that are isolated from the Internet. IPTV services rely on transmission of live (real-time) video and/or stored video. Video on Demand (VoD) and Time-shifted TV are implemented by IP unicast, while Broadcast TV (BTV) and Near Video on Demand are implemented by IP multicast. IPTV services require QoS guarantees and can tolerate no more than a 10^-6 packet loss probability, 200 ms delay, and 50 ms jitter. Low delay is essential for satisfactory trick-mode performance (pause, resume, fast forward) for VoD, and for fast channel change times for BTV. Internet Traffic Engineering (TE) is defined in RFC 3272 and involves both capacity management and traffic management. Capacity management includes capacity planning, routing control, and resource management. Traffic management includes (1) nodal traffic control functions such as traffic conditioning, queue management, and scheduling, and (2) other functions that regulate traffic flow through the network or arbitrate access to network resources. An IPTV network architecture includes multiple networks (core network, metro network, access network and home network) that connect devices (super head-end, video hub office, video serving office, home gateway, set-top box). Each IP router in the core and metro networks implements some queueing and packet scheduling mechanism at the output link controller. Popular schedulers in IP networks include Priority Queueing (PQ), Class-Based Weighted Fair Queueing (CBWFQ), and Low Latency Queueing (LLQ), which combines PQ and CBWFQ. The thesis analyzes several packet scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for the traffic classes. The FIFO, PQ and GPS queueing methods had previously been implemented in the simulator. This thesis aims to implement the LLQ scheduler inside the simulator and to evaluate the performance of these packet schedulers. The simulator was provided by Ernst Nordström; it was built in the Visual C++ 2008 environment and tested and analyzed in MatLab 7.0 under Windows Vista.
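A simplified sketch of how an LLQ scheduler combines the two mechanisms is given below: a strict-priority queue served ahead of the other classes, with the CBWFQ part approximated here by deficit round robin. The class names, packet sizes and weights are illustrative assumptions, not the simulator's implementation.

```python
# A simplified sketch of Low Latency Queueing (LLQ): one strict-priority
# queue for delay-sensitive traffic plus weighted sharing among the
# remaining classes, approximated here with deficit round robin rather
# than true CBWFQ. Class names and weights are illustrative assumptions.
from collections import deque

class LLQScheduler:
    def __init__(self, weights):
        self.priority = deque()                      # strict-priority (PQ) queue
        self.queues = {c: deque() for c in weights}  # CBWFQ-like classes
        self.weights = weights                       # bytes of credit per visit
        self.deficit = {c: 0 for c in weights}

    def enqueue(self, packet, cls):
        if cls == "llq":
            self.priority.append(packet)
        else:
            self.queues[cls].append(packet)

    def dequeue(self):
        # The low-latency queue is always served first, which keeps its
        # delay and jitter low at the expense of the other classes.
        if self.priority:
            return self.priority.popleft()
        for cls, queue in self.queues.items():
            if queue:
                self.deficit[cls] += self.weights[cls]
                if self.deficit[cls] >= queue[0]["size"]:
                    self.deficit[cls] -= queue[0]["size"]
                    return queue.popleft()
        return None

sched = LLQScheduler(weights={"vod": 1500, "best_effort": 500})
sched.enqueue({"size": 1200, "flow": "btv"}, "llq")
sched.enqueue({"size": 1500, "flow": "web"}, "best_effort")
print(sched.dequeue()["flow"])  # -> "btv", served before the other classes
```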
Abstract:
Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms. Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model. Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
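The alternating structure of the algorithm can be illustrated with a toy numpy sketch that iterates between a weighted least-squares fit of the mean model and a gamma GLM with log link (fitted by IRLS) for the residual variance. This simplified example omits the random genetic effects and is not the ASReml implementation used in the study.

```python
# A toy illustration of the alternating DHGLM scheme: weighted least
# squares for the mean model and a gamma GLM with log link (one IRLS
# step per iteration) for the residual variance model. Fixed effects
# only; not the ASReml implementation described in the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # mean-model design
Z = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # variance-model design
beta_true, gamma_true = np.array([2.0, 1.0]), np.array([0.0, 1.5])
y = X @ beta_true + rng.normal(size=n) * np.exp(Z @ gamma_true / 2)

beta = np.zeros(2)
gamma = np.zeros(2)
for _ in range(20):
    # 1) Mean model: weighted least squares with weights 1 / variance.
    W = 1.0 / np.exp(Z @ gamma)
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
    # 2) Variance model: gamma GLM with log link on the squared residuals
    #    (for the log link the IRLS working weights are constant).
    d = (y - X @ beta) ** 2
    mu = np.exp(Z @ gamma)
    z = Z @ gamma + (d - mu) / mu   # IRLS working response
    gamma, *_ = np.linalg.lstsq(Z, z, rcond=None)

print("beta ", beta.round(2))   # should be close to [2.0, 1.0]
print("gamma", gamma.round(2))  # should be close to [0.0, 1.5]
```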
Abstract:
Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project serves to investigate efficient alternatives to mapping an environment, by first creating a mobile robot, and then applying machine learning to the robot and controlling systems to increase the robustness of the robot system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system. There are learning systems running on both the robot and the more powerful Linux system. The first stage of this project was devoted to designing and building an inexpensive robot. Utilizing my prior experience from independent studies in robotics, I designed a small mobile robot that was well suited for simple navigation and mapping research. When the major components of the robot base were designed, I began to implement my design. This involved physically constructing the base of the robot, as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems. I finally settled on using neural networks as the machine learning system to incorporate into my project. Neural nets can be thought of as a structure of interconnected nodes, through which information filters. The type of neural net that I chose to use is a type that requires a known set of data that serves to train the net to produce the desired output. Neural nets are particularly well suited for use with robotic systems as they can handle cases that lie at the extreme edges of the training set, such as may be produced by "noisy" sensor data. Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable for multiple applications of neural nets.
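For illustration, a minimal feed-forward network of the kind described, trained by backpropagation on a known (toy) data set, is sketched below. The simulated sensor data and network sizes are assumptions; this is not the neural net code modified in the project.

```python
# A minimal supervised feed-forward net trained by backpropagation on a
# toy "sensor" data set. The data and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set: three simulated range readings; label 1 means the
# front sensor reports an obstacle close ahead.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (X[:, 0] < 0.35).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(3, 6))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(6, 1))   # hidden -> output weights

lr = 2.0
for _ in range(5000):
    h = sigmoid(X @ W1)           # hidden activations
    out = sigmoid(h @ W2)         # network prediction
    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```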
Abstract:
This work presents a model of a mechanism for file integrity control in operating systems, capable of blocking access to invalid files so as to contain the damage caused by possible attacks. The model was initially formulated from a detailed study of the integrity control process, which revealed several critical security points within it, and from an evaluation of the mechanisms currently implemented in operating systems that offer, even if indirectly, some kind of guarantee of the integrity of system files. During its development, the security of the model itself was analysed in detail in order to enumerate its critical points and the possible solutions to be employed, resulting in the definition of the minimum security requirements that must be met by this kind of mechanism. Next, aiming at validating the specified model and making the mechanism available for practical use and future studies, a prototype was implemented on the GNU/Linux operating system. To confirm its feasibility, analyses of the impact on system performance were carried out. Finally, the importance and feasibility of employing the presented model as an additional security mechanism in operating systems were confirmed. In addition, the work highlights a field of research that is still little explored and therefore attractive for new studies.
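A user-space Python sketch of the integrity-check step only is given below: files are hashed against a trusted baseline and mismatches are flagged. The paths and baseline location are assumptions, and an actual mechanism like the one described must enforce blocking inside the operating system, which a script of this kind cannot do.

```python
# User-space illustration of the integrity-check step only: hash files
# against a trusted baseline and flag mismatches. The baseline path is a
# hypothetical example; real enforcement happens inside the kernel.
import hashlib
import json
import os

BASELINE = "/var/lib/integrity/baseline.json"  # hypothetical baseline store

def file_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline=BASELINE):
    data = {p: file_digest(p) for p in paths}
    os.makedirs(os.path.dirname(baseline), exist_ok=True)
    with open(baseline, "w") as f:
        json.dump(data, f, indent=2)

def verify(baseline=BASELINE):
    with open(baseline) as f:
        expected = json.load(f)
    # Any file whose current hash differs from the baseline is "invalid".
    return [p for p, digest in expected.items() if file_digest(p) != digest]

# Example: build_baseline(["/bin/ls", "/bin/cat"]); print(verify())
```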
Abstract:
This work explores the application of fault injection techniques, which simulate transient hardware faults, to validate the error detection and recovery mechanism and to measure database unavailability times after the occurrence of a fault that has caused a crash. Additionally, it evaluates and validates the FIDe fault injection tool used in the experiments, through a significant set of fault injection tests in the DBMS environment. The experimental platform consists of an Intel Pentium 550 MHz computer with 128 MB of RAM, running the Conectiva Linux operating system with kernel version 2.2.13. The target of the fault injections is the centralized DBMS InterBase version 4.0. The workload applications were written as SQL scripts and executed within an ISQL session. Three distinct techniques were used for fault injection: 1) the operating system kill command; 2) a general hardware reset; 3) the FIDe fault injection tool, developed by the fault injection group of PPGC at UFRGS. The basic concepts on the subject, which are used throughout the work and are necessary for understanding this study, are first introduced and reinforced. Next, the Xception fault injection tool is presented, and some experiments that use fault injection tools on databases are analysed. After the literature review, the FIDe fault injection tool is presented, together with the adopted fault model, the approach, the hardware and software platform, the methodology and techniques used, the way the experiments were conducted, and the results obtained with each technique. In total, 3,625 fault injection tests were performed: 350 runs with the first technique, 75 runs with the second, and 3,200 runs with the third, across 80 different tests. The fault model proposed for this work covers crash faults based on memory and register corruption, CPU halt, transaction abort, or general reset. The experiments were divided across the three techniques to achieve the widest possible error coverage, and they show quite different results. The experiments with the kill command barely affected the database environment. A small number of fault injections with FIDe significantly affected the dependability of the DBMS, and the experiments with the general reset technique were the ones that most compromised the dependability of the DBMS.
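A rough sketch of the first technique alone (the operating-system kill command) is given below: a DBMS process is crashed with a signal and the unavailability time is measured by probing its service port. The process name and the TCP probe of port 3050 (InterBase's default) are assumptions, and this does not model the FIDe tool or the general reset technique.

```python
# Sketch of the kill-command technique only: crash a DBMS process and
# measure how long the service stays unavailable. The process name and
# port probe are assumptions; FIDe and the reset technique are not modelled.
import socket
import subprocess
import time

def crash_process(name):
    # Equivalent to running `kill -9` on every process matching the name.
    subprocess.run(["pkill", "-9", name], check=False)

def wait_until_available(host="localhost", port=3050, timeout=120):
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=1):
                return time.monotonic() - start   # downtime in seconds
        except OSError:
            time.sleep(0.5)
    return None  # service did not recover within the timeout

crash_process("ibserver")
print("downtime:", wait_until_available())
```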