930 results for Operating Systems (Computers)


Relevance:

80.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, and also to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all available at the University of Reading. A functional virtualization system is multi-layered and is driven by privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run; no modifications are needed in the guest OS or application, i.e. they are not aware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines: these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance; the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
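
As a rough illustration of the measurement approach (not the authors' actual harness), the sketch below times the same compute kernel so that it can be run on bare metal, inside a para-virtualized guest, and again with monitoring and logging enabled; the toy dense solve merely stands in for the Netlib benchmarks used in the paper.

    # Minimal sketch of the measurement idea: run the same compute kernel on
    # bare metal and inside a para-virtualized guest, then compare wall-clock
    # times. Illustrative only; the paper uses the Netlib benchmarks, not this
    # toy kernel.
    import time
    import numpy as np

    def linpack_like(n=2000, repeats=3):
        """Time a dense solve, a rough stand-in for a LINPACK-style workload."""
        a = np.random.rand(n, n)
        b = np.random.rand(n)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            np.linalg.solve(a, b)
            best = min(best, time.perf_counter() - t0)
        return best

    if __name__ == "__main__":
        # Run once per environment (bare metal, Xen/KVM/VMware guest, guest
        # with monitoring and logging enabled) and compare the reported times
        # to estimate the virtualization overhead.
        print(f"best solve time: {linpack_like():.3f} s")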

Relevance:

80.00%

Publisher:

Abstract:

User interaction within a virtual environment may take various forms: a teleconferencing application will require users to speak to each other (Geak, 1993); with computer-supported co-operative working, an engineer may wish to pass an object to another user for examination; in a battlefield simulation (McDonough, 1992), users might exchange fire. In all cases it is necessary for the actions of one user to be presented to the others sufficiently quickly to allow realistic interaction. In this paper we take a fresh look at virtual reality operating systems by tackling the underlying issues of creating real-time multi-user environments.

Relevance:

80.00%

Publisher:

Abstract:

The use of mobile applications has grown radically in recent years, and these applications interact with many systems. This places higher demands on quality and requires applications to be adapted to many different devices, operating systems and platforms, which has made the testing of mobile applications more important and more extensive. This work was carried out as a comparative case study in the area of mobile application testing and testing tools. One aim was to describe how mobile applications are tested today, which was done through literature studies and interviews with IT companies. Another aim was to evaluate four testing tools, their advantages and disadvantages, how they can be used when testing mobile applications, and how they compare with manual testing without tools; this was done by building first-hand experience from using the tools. Throughout the work we used mobile applications provided by Triona, our industry partner. Today there are many different testing tools that can support testing, but few companies have adopted one, since adoption requires both time and expertise and choosing a tool can be difficult. The tools have different strengths and weaknesses, so their suitability depends on the type of project and application. Advantages of using testing tools include the ability to automate, to test on several devices simultaneously, and to access devices via the cloud. The challenges are that a tool can be difficult to install and learn, and that licences can be expensive. It is therefore important to know, before adoption, which tests and applications the tool will be used for and who will use it. From our study we conclude that no testing tool is fully complete, but they provide functions that make parts of mobile application testing more efficient.

Relevance:

80.00%

Publisher:

Abstract:

Although intrusion detection has been heavily researched, it still does not answer some real problems, such as attack levels, network size and complexity, fault tolerance, authentication and privacy, interoperability and standardization. A research effort at the Informatics Institute of UFRGS, specifically in the Security Group (GSEG), aims to develop a distributed, fault-tolerant Intrusion Detection System. This project, named Asgaard, is conceived not merely as yet another intrusion detection tool but as a platform to which new modules and techniques can be added, an advance over other detection systems currently under development. One topic not yet addressed in this project is the detection of sniffers on the network, a way of preventing an attack from proceeding to other hosts or interconnected networks, since an intruder normally installs a sniffer after a successful attack. This work discusses sniffer detection techniques and their scenarios, and evaluates the use of these techniques on a local network. The known techniques are tested in an environment with different operating systems, such as Linux and Windows, mapping the results on their effectiveness under diverse conditions.
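
For illustration, the sketch below implements one widely documented sniffer-detection technique, an ARP probe sent to a bogus destination MAC; the work evaluates several techniques and does not necessarily use this exact code or library, and the target address is a hypothetical example.

    # Sketch of one classic sniffer-detection technique: send an ARP request to
    # a bogus unicast MAC address. A NIC in normal mode drops the frame, but a
    # host whose interface is in promiscuous mode may pass it to the kernel and
    # reply. Requires scapy and root privileges; the target IP is illustrative.
    from scapy.all import Ether, ARP, srp  # pip install scapy

    def probe_promiscuous(target_ip, iface=None, timeout=2):
        """Return True if target_ip answers an ARP request sent to a fake MAC."""
        pkt = Ether(dst="ff:ff:ff:ff:ff:fe") / ARP(pdst=target_ip)
        answered, _ = srp(pkt, iface=iface, timeout=timeout, verbose=False)
        return len(answered) > 0

    if __name__ == "__main__":
        ip = "192.168.0.10"  # hypothetical host under test
        print(f"{ip} looks promiscuous: {probe_promiscuous(ip)}")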

Relevance:

80.00%

Publisher:

Abstract:

The occurrence of multiphase flow in the oil and gas industry is common throughout the fluid path: production, transportation and refining. Multiphase flow is defined as the simultaneous flow of two or more immiscible phases with different properties. An important computational tool for the design, planning and optimization of production systems is the simulation of multiphase flow in pipelines and porous media, usually performed by commercial multiphase flow simulators. The main purpose of these simulators is to predict pressure and temperature at any point in the production system. This work proposes the development of a multiphase flow simulator able to predict the dynamic pressure and temperature gradient in vertical, directional and horizontal wells. The pressure and temperature profiles were predicted by numerical integration using a marching algorithm, with empirical correlations and a mechanistic model for the pressure gradient. The development of this tool involved a set of routines implemented in Embarcadero C++ Builder® 2010, which allowed the creation of an executable file compatible with Microsoft Windows® operating systems. The simulator was validated through computational experiments and comparison of the results with PIPESIM®. In general, the developed simulator achieved excellent results compared with those obtained by PIPESIM and can be used as a tool to assist the development of production systems.
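
The sketch below illustrates the marching idea on a trivial case: the pressure is integrated segment by segment down a vertical well using a constant hydrostatic-plus-friction gradient. The fluid properties and the gradient model are invented placeholders, not the empirical correlations or mechanistic model used in the simulator.

    # Minimal sketch of a marching scheme for the pressure profile along a
    # vertical well. The gradient here is just hydrostatic plus a constant
    # friction term with fixed fluid properties; the simulator described above
    # uses empirical correlations and a mechanistic model instead.
    G = 9.81  # gravitational acceleration, m/s^2

    def pressure_profile(p_wellhead, depth, n_steps=100,
                         rho=800.0, friction_grad=200.0):
        """March from the wellhead down to total depth, returning (depth, Pa) pairs."""
        dz = depth / n_steps
        p = p_wellhead
        profile = [(0.0, p)]
        for i in range(1, n_steps + 1):
            dp_dz = rho * G + friction_grad   # Pa per metre, illustrative only
            p += dp_dz * dz                   # march one segment deeper
            profile.append((i * dz, p))
        return profile

    if __name__ == "__main__":
        for z, p in pressure_profile(p_wellhead=2.0e6, depth=1500.0, n_steps=5):
            print(f"{z:7.1f} m  {p/1e5:8.2f} bar")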

Relevance:

80.00%

Publisher:

Abstract:

In academia, it is common to create didactic processors aimed at practical courses in the area of computer hardware, which can also serve as subjects in courses on software platforms, operating systems and compilers. Often, these processors are described without a standard ISA, which requires the creation of compilers and other basic software to provide the hardware/software interface and hinders their integration with other processors and devices. Using reconfigurable devices described in an HDL allows the creation or modification of any microarchitecture component, from the functional units of the processor's datapath to the state machine that implements the control unit, as new needs arise. In particular, RISP processors enable the modification of machine instructions, allowing instructions to be added or changed, and may even adapt to a new architecture. Taking educational soft-core processors described in VHDL as its object of study, this work proposes a methodology and applies it to two processors of different complexity levels, showing that it is possible to tailor processors to a standard ISA without increasing hardware complexity, i.e. without a significant increase in chip area, while performance in application execution remains unchanged or is improved. The implementations also show that, besides making it possible to replace the architecture of a processor without changing its organization, a RISP processor can switch between different instruction sets, which can be extended to toggling between different ISAs, allowing a single processor to become an adaptive hybrid architecture that can be used in embedded systems and heterogeneous multiprocessor environments.
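
A toy sketch of the switchable-instruction-set idea follows: a single decoder selects between two opcode tables at run time. The opcodes and mnemonics are invented for illustration and do not correspond to the processors studied in the work; in a real RISP the decode logic itself would be reconfigured.

    # Toy sketch of the "switchable instruction set" idea: one decoder selects
    # between two opcode tables at run time. The opcodes and mnemonics below
    # are invented for illustration.
    ISA_A = {0x1: "ADD", 0x2: "SUB", 0x3: "LW", 0x4: "SW"}
    ISA_B = {0x1: "ADDI", 0x2: "BRANCH", 0x3: "LOAD", 0x4: "STORE"}

    class SwitchableDecoder:
        def __init__(self):
            self.tables = {"A": ISA_A, "B": ISA_B}
            self.active = "A"

        def select_isa(self, name):
            # In a RISP, this step would reconfigure the decode logic itself.
            self.active = name

        def decode(self, opcode):
            return self.tables[self.active].get(opcode, "ILLEGAL")

    if __name__ == "__main__":
        dec = SwitchableDecoder()
        print(dec.decode(0x2))     # SUB under ISA A
        dec.select_isa("B")
        print(dec.decode(0x2))     # BRANCH under ISA B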

Relevance:

80.00%

Publisher:

Abstract:

Sanitation companies in Brazil face a great challenge for the 21st century: reducing the physical waste (water, chemicals and electricity) and the financial waste caused by inefficient drinking water supply systems, given that in some cases water resources are already scarce. Supply systems become increasingly complex as they seek to minimize waste while serving a growing number of users with higher quality and efficiency. A major challenge for water supply companies is to provide a good quality service while reducing expenditure on electricity. In this context we developed a method that seeks to control the pressure in distribution systems that have no storage tank, in which water is pumped from the well directly into the distribution network. The pressure control method (intelligent control) uses fuzzy logic to eliminate the waste of electricity and the leaks caused by pumps that inject directly into the distribution system; when household consumption drops, the network becomes saturated and energy is wasted. This study was conducted at the Green Club II condominium, located in the city of Parnamirim, state of Rio Grande do Norte, in order to study the pressure behavior at the output of the pump that injects water directly into the distribution system. The study was motivated by the need to find a solution to leaks in the existing distribution system and in the service lines of the condominium residences, which sparked the interest in carrying out the experiments reported in this research.
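
The sketch below gives a minimal flavour of such a fuzzy pressure controller: the pressure error is fuzzified with triangular sets and defuzzified into a change in pump speed. All membership limits, units and rule outputs are illustrative assumptions, not the values used in the study.

    # Minimal sketch of a fuzzy pressure controller of the kind described
    # above: the pressure error (setpoint minus measured pressure) is fuzzified
    # with triangular sets and mapped to a change in pump speed.
    def tri(x, a, b, c):
        """Triangular membership function with vertices a, b, c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_speed_delta(error_mH2O):
        """Return a pump speed change in Hz from the pressure error (metres of water)."""
        negative = tri(error_mH2O, -20.0, -10.0, 0.0)   # pressure too high
        zero     = tri(error_mH2O, -5.0, 0.0, 5.0)      # pressure near setpoint
        positive = tri(error_mH2O, 0.0, 10.0, 20.0)     # pressure too low
        # Weighted-average defuzzification over singleton outputs (Hz).
        weight = negative + zero + positive
        if weight == 0.0:
            return 0.0
        return (negative * -2.0 + zero * 0.0 + positive * 2.0) / weight

    if __name__ == "__main__":
        setpoint = 25.0                      # desired pressure, mH2O (hypothetical)
        for measured in (32.0, 25.5, 18.0):
            delta = fuzzy_speed_delta(setpoint - measured)
            print(f"measured {measured:5.1f} mH2O -> speed change {delta:+.2f} Hz")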

Relevance:

80.00%

Publisher:

Abstract:

To manage the complexity associated with the management of distributed multimedia systems, a solution must incorporate middleware concepts in order to hide hardware- and operating-system-specific aspects. Applications in these systems can be implemented on different types of platform, and the components of these systems must interact with one another. Because the state of the underlying platforms varies, a flexible approach should allow dynamic substitution of components in order to maintain the QoS level of the running application. In this context, this work presents a middleware-layer approach for supporting dynamic substitution of components in the Cosmos framework, starting with the choice of the target component, then deciding which of the candidate components will be chosen, and concluding with the process defined for the exchange. The approach was defined considering the Cosmos QoS model and how it deals with dynamic reconfiguration.
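
Purely as an illustration of the substitution flow (choose the target, score the candidates, swap in the better one), the sketch below uses invented Component and QoS structures; it is not the Cosmos framework's actual API.

    # Illustrative sketch of the substitution flow described above: pick the
    # target component, score the candidates against a QoS requirement, and
    # swap in the best one. The structures are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        latency_ms: float      # lower is better
        availability: float    # 0..1, higher is better

    def qos_score(c: Component) -> float:
        """Toy QoS score combining latency and availability."""
        return c.availability * 100.0 - c.latency_ms

    def substitute(current: Component, candidates: list) -> Component:
        """Return the candidate that improves on the current component, if any."""
        best = max(candidates, key=qos_score, default=current)
        if qos_score(best) > qos_score(current):
            # Here the middleware would detach 'current' and attach 'best',
            # migrating any connections and state.
            return best
        return current

    if __name__ == "__main__":
        running = Component("decoder_v1", latency_ms=40.0, availability=0.95)
        pool = [Component("decoder_v2", 25.0, 0.97), Component("decoder_lite", 60.0, 0.99)]
        print("now running:", substitute(running, pool).name)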

Relevance:

80.00%

Publisher:

Abstract:

In order to simplify computer management, several system administrators are adopting advanced techniques to manage the software configuration of enterprise computer networks, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. Virtualization is an established technology; however, its use has been focused more on server consolidation and virtual desktop infrastructure than on managing distributed computers over a network. This paper discusses the feasibility of the Distributed Virtual Machine Environment, a new approach for enterprise computer management that combines virtualization and a distributed system architecture as the basis of the management architecture. © 2008 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents an embedded network node based on the IEEE 1451 standard, developed using structured programming to access the transducers in the WTIM. The NCAP was developed using a Nios II processor and uClinux, an embedded operating system targeted at hardware with restricted features. Both hardware and software are configurable and can be tailored to the application's requirements. On this basis, the NCAP was developed with the minimum hardware and software components needed to operate in a remote environment as the central point for data requests. Many NCAP implementations use an object-oriented structure; in this project, by contrast, the NCAP was developed using structured programming. The NCAP was tested using a ZigBee interface between the NCAP and the WTIM, and the system was demonstrated in areas of difficult access over long periods of time, given the need for low power consumption. © 2012 IEEE.
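
The sketch below only hints at what such a structured (non object-oriented) polling loop might look like: a plain loop requesting transducer samples from the WTIM over a serial link. The port name, baud rate and one-byte framing are all invented for illustration; IEEE 1451 defines its own message formats and the actual NCAP runs on a Nios II under uClinux.

    # Sketch of a structured polling loop for transducer data over a serial
    # link (e.g. a ZigBee radio attached to a UART). Framing and port name are
    # hypothetical placeholders.
    import time
    import serial  # pip install pyserial

    PORT = "/dev/ttyUSB0"   # hypothetical ZigBee adapter
    REQUEST_SAMPLE = b"\x01"

    def poll_wtim(port, channel, interval_s=5.0, samples=3):
        link = serial.Serial(port, baudrate=9600, timeout=2)
        try:
            for _ in range(samples):
                link.write(REQUEST_SAMPLE + bytes([channel]))  # ask for one reading
                reply = link.read(2)                           # 16-bit raw value
                if len(reply) == 2:
                    value = int.from_bytes(reply, "big")
                    print(f"channel {channel}: raw value {value}")
                time.sleep(interval_s)                         # low duty cycle
        finally:
            link.close()

    if __name__ == "__main__":
        poll_wtim(PORT, channel=1)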

Relevance:

80.00%

Publisher:

Abstract:

This article presents a new method to detect damage in structures based on the electromechanical impedance principle. The system follows the variations in the output voltage of piezoelectric transducers and does not compute the impedance itself. The proposed system is portable, autonomous and versatile, and could efficiently replace commercial instruments in different structural health monitoring applications. Damage is identified by simply comparing the variations in the root mean square voltage of the response signals of piezoelectric transducers, such as lead zirconate titanate patches bonded to the structure, obtained for different frequencies of the excitation signal. The proposed system is not limited by the sampling rate of analog-to-digital converters, dispenses with Fourier transform algorithms, and does not require a computer for processing, operating autonomously. A low-cost prototype based on a microcontroller and a digital synthesizer was built, experiments were carried out on an aluminum structure, and excellent results were obtained. © The Author(s) 2012.
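
As an illustration of the metric, the sketch below computes the RMS voltage of response signals at several excitation frequencies and compares them with a healthy baseline using an RMSD-style index, a common choice in impedance-based SHM; the exact index and signals used by the authors may differ.

    # Sketch of the damage-metric idea described above: compare the RMS voltage
    # of the PZT response, measured at several excitation frequencies, against
    # a baseline taken on the healthy structure. Signals here are synthetic.
    import numpy as np

    def rms(signal):
        return float(np.sqrt(np.mean(np.square(signal))))

    def damage_index(baseline_rms, current_rms):
        """Root-mean-square deviation between baseline and current RMS curves."""
        b = np.asarray(baseline_rms)
        c = np.asarray(current_rms)
        return float(np.sqrt(np.sum((c - b) ** 2) / np.sum(b ** 2)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        freqs = [10e3, 20e3, 30e3, 40e3]   # excitation frequencies (Hz)
        baseline = [rms(rng.normal(0.0, 1.0, 1000)) for _ in freqs]
        damaged  = [rms(rng.normal(0.0, 1.1, 1000)) for _ in freqs]  # slightly shifted response
        print(f"damage index: {damage_index(baseline, damaged):.3f}")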

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Law - FCHS

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

The primary objective of this study was to verify the existence of internal organization and processes influencing the decisions on the adoption, choice, justification and implementation of innovative practices of scientific and technical activities undertaken in the sugar, ethanol and energy companies of the midwest of the state of Sao Paulo. Using multivariate principal component and cluster analysis, the variables were analyzed and the companies classified. The taxonomy model described by Freeman (1975), adapted to contemporary situations, was the parameter used, with research information collected in personal interviews through semi-structured questionnaires. The investigated activities consisted of: basic research; applied research; experimental development; scientific and technical information; and long-term planning. The variables analyzed were: production processes, operating systems, development of new products, design and implementation of pilot plants for new products, development, knowledge, studies of other business segments, the number of innovations, and the number of employees in the organizational structure dedicated to these purposes. It was concluded that the companies with the most innovative attitudes are those that have incorporated the structures, practices and techniques of scientific activities into their organization.
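
Purely as an illustration of the analysis pipeline (standardization, principal components, clustering), the sketch below uses invented company data and arbitrary choices of two components and three clusters.

    # Illustrative sketch of the analysis pipeline described above: standardize
    # the company-level variables, reduce them with PCA, and group the
    # companies with k-means. Data values and parameter choices are invented.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # rows = companies, columns = innovation-practice variables (hypothetical)
    X = np.array([
        [3, 1, 0, 2, 5],
        [4, 2, 1, 3, 6],
        [0, 0, 0, 1, 1],
        [5, 3, 2, 4, 8],
        [1, 0, 0, 1, 2],
    ], dtype=float)

    X_std = StandardScaler().fit_transform(X)          # put variables on one scale
    scores = PCA(n_components=2).fit_transform(X_std)  # principal component scores
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

    for i, (pc, lab) in enumerate(zip(scores, labels)):
        print(f"company {i}: PC1={pc[0]:+.2f} PC2={pc[1]:+.2f} cluster={lab}")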