15 results for Computer memory systems

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

40.00%

Publisher:

Abstract:

This work presents a survey of the problems associated with the influence of observability and radial visualization on the design of monitoring systems for networks of great magnitude and complexity, and it also proposes solutions to part of these problems. Using Complex Network Theory, two questions are addressed: (i) the location and number of nodes needed to guarantee data acquisition capable of effectively representing the state of the network, and (ii) the design of a model for visualizing network information capable of increasing the ability to infer and understand its properties. The thesis establishes theoretical limits for these questions and presents a study on the complexity of effective, efficient, and scalable network monitoring.
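The abstract leaves the node-selection criteria unspecified; as a hedged illustration of the kind of placement question it studies, the sketch below picks monitoring nodes by degree centrality using networkx. The heuristic and function name are ours, not the thesis's.

```python
# Illustrative sketch only: choosing monitor locations by degree
# centrality, one simple heuristic from Complex Network Theory.
# The thesis's actual placement criteria are not given in the abstract.
import networkx as nx

def pick_monitors(graph: nx.Graph, k: int) -> list:
    """Return the k most central nodes as candidate monitoring points."""
    centrality = nx.degree_centrality(graph)
    return sorted(centrality, key=centrality.get, reverse=True)[:k]

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(1000, 3)  # scale-free test topology
    print(pick_monitors(g, 10))
```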

Relevance:

40.00%

Publisher:

Abstract:

The next generation of computers is expected to rely on architectures with multiple processors and/or multicore processors. This raises challenges related to interconnection, operating frequency, on-chip area, power dissipation, performance, and programmability. Networks-on-chip are considered the ideal interconnection and communication mechanism for this type of architecture, owing to their scalability, reusability, and intrinsic parallelism. Communication in a network-on-chip is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted as in a pipeline between the routers of the network, from the source to the destination of the communication, even allowing simultaneous communications between different source-destination pairs. Building on this fact, it is proposed to transform the entire communication infrastructure of the network-on-chip (its routing, arbitration, and storage mechanisms) into a high-performance parallel processing system. In this proposal, packets are formed by the instructions and data that represent the applications, and these are executed by the routers as the packets are transmitted, exploiting the pipeline and the parallelism of simultaneous transmissions. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea is called IPNoSys (Integrated Processing NoC System), which has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets while preventing deadlock, livelock, and starvation. The architecture provides mechanisms for input and output, interrupts, and operating-system support. As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing various parameters to be configured and several results to be obtained for its evaluation.
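As a rough illustration of the packet-driven execution model described above, the sketch below moves a packet through two routers, each executing one instruction as it forwards the packet. The packet layout, instruction names, and Python form are ours for illustration; IPNoSys's real programming model and routing algorithm are not reproduced here.

```python
# Illustrative sketch (hypothetical layout): a packet whose instructions
# are executed at each router hop, in the spirit of IPNoSys.
from dataclasses import dataclass

@dataclass
class Packet:
    instructions: list   # e.g. ["ADD", "MUL"]
    operands: list       # operand queue consumed by the routers

def router_step(packet: Packet) -> None:
    """A router executes the packet's next instruction before forwarding it."""
    if not packet.instructions:
        return
    op = packet.instructions.pop(0)
    a = packet.operands.pop(0)
    b = packet.operands.pop(0)
    packet.operands.insert(0, a + b if op == "ADD" else a * b)

pkt = Packet(instructions=["ADD", "MUL"], operands=[2, 3, 4])
for _ in range(2):       # two routers on the source-destination path
    router_step(pkt)
print(pkt.operands)      # [20] == (2 + 3) * 4
```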

Relevance:

40.00%

Publisher:

Abstract:

Nowadays several electronic devices support digital video; examples include cell phones, digital cameras, video cameras, and digital televisions. However, raw video, as captured, involves a huge amount of data (millions of bits) for its representation. Storing it in this primary form would require a huge amount of disk space, and transmitting it would require a huge bandwidth. Video compression therefore becomes essential to make the storage and transmission of this information feasible. Motion estimation is a technique used in the video coder that exploits the temporal redundancy present in video sequences to reduce the amount of data needed to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution video according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was developed to provide high data reuse, and the adopted data-reuse scheme reduces the memory bandwidth required to execute motion estimation. Motion estimation is the task responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the final video coder's performance. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
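To make the computation concrete, here is a minimal full-search block-matching sketch with the SAD cost, the kernel that such a motion-estimation architecture accelerates. The block size, search range, and function names are our choices; the thesis's hardware organization and data-reuse scheme are not reproduced.

```python
# Illustrative sketch only: full-search block matching with the sum of
# absolute differences (SAD), the core motion-estimation computation.
import numpy as np

def best_motion_vector(ref, cur, by, bx, block=8, search=8):
    """Find the (dy, dx) minimizing SAD for the block at (by, bx)."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue   # candidate block falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # simulate motion of (2, 3)
print(best_motion_vector(ref, cur, by=16, bx=16))
```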

Relevance:

30.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms used in their formulations were mostly based on the characteristics of transmission systems alone, which were the main focus of engineers and researchers. The physical characteristics of those systems, though, are quite different from those of distribution systems. In transmission systems the voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into consideration. Also in transmission systems, loads have a macro nature, for example cities, neighborhoods, or large industries; these loads are generally practically balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: the voltage levels are low compared with transmission, which practically annuls the capacitive effects of the lines. The loads in this case are transformers whose secondaries supply small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high; the use of three-phase methodologies therefore assumes an important dimension. In addition, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, this work first develops a method for three-phase load flow calculation to simulate the steady-state behavior of distribution systems. To this end, the Power Summation Algorithm was used as the basis for developing the three-phase method; this algorithm has already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases, while the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configuration types according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered; the loads are adjusted during the iterative process so that the current at each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another (in general, voltages, losses, and reactive powers).

After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system; for voltage-profile correction, it is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, in order to give insight into their performance and accuracy.
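As a hedged illustration of the base algorithm, the sketch below runs a single-phase backward/forward sweep in the spirit of the Power Summation Algorithm on a toy radial feeder. The full three-phase modeling with mutual coupling and Carson's reduction developed in the thesis is not reproduced; the node data and impedances are invented.

```python
# Illustrative sketch only: single-phase backward/forward power
# summation sweep for a radial feeder (per-unit quantities).
def power_summation_sweep(parents, z, s_load, v_root=1.0, iters=20):
    """parents[i] = upstream node of i (node 0 is the source);
    z[i] = series impedance of the branch feeding node i;
    s_load[i] = complex load power at node i."""
    n = len(parents)
    v = [complex(v_root)] * n
    for _ in range(iters):
        # backward sweep: accumulate downstream powers plus branch losses
        s_branch = list(s_load)
        for i in range(n - 1, 0, -1):
            loss = z[i] * abs(s_branch[i] / v[i]) ** 2
            s_branch[parents[i]] += s_branch[i] + loss
        # forward sweep: update voltages from the source outward
        for i in range(1, n):
            current = (s_branch[i] / v[i]).conjugate()
            v[i] = v[parents[i]] - z[i] * current
    return v

# toy 3-node radial feeder: 0 -> 1 -> 2
print(power_summation_sweep(parents=[0, 0, 1],
                            z=[0, 0.01 + 0.02j, 0.01 + 0.02j],
                            s_load=[0, 0.1 + 0.05j, 0.05 + 0.02j]))
```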

Relevance:

30.00%

Publisher:

Abstract:

The Java Platform is increasingly being adopted in the development of distributed systems with high user demand. This kind of application is more complex because, beyond meeting the functional requirements, it must fulfill pre-established performance parameters. This work studies the Java Virtual Machine (JVM), covering its internal aspects and exploring the garbage collection strategies existing in the literature and used by the JVM. It also presents a set of tools that help in optimizing applications, and others that help in monitoring applications in the production environment. Given the great number of technologies that aim to solve problems common to the application layer, it becomes difficult to choose the one with the best response time and lowest memory usage. This work presents a brief introduction to each of the candidate technologies and carries out comparative tests through a statistical analysis of the response time and garbage collection activity random variables. The results obtained give engineers and managers a basis for deciding which technologies to use in large applications, through knowledge of how they behave in their environments and the amount of resources they consume. The relation between a technology's productivity and its performance is also considered an important factor in this choice.
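The abstract does not detail the statistical procedure; as one hedged example of how such a comparison is commonly made, the sketch below applies Welch's t-test to two invented response-time samples.

```python
# Illustrative sketch only: comparing the response-time samples of two
# candidate technologies with Welch's t-test. The sample data are made
# up; the thesis's actual statistical analysis is not reproduced.
from scipy import stats

latencies_a = [12.1, 11.8, 13.0, 12.4, 11.9, 12.7]   # ms, technology A
latencies_b = [13.5, 13.1, 14.2, 13.8, 13.3, 13.9]   # ms, technology B

t_stat, p_value = stats.ttest_ind(latencies_a, latencies_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Response-time difference is statistically significant.")
```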

Relevance:

30.00%

Publisher:

Abstract:

A new method to perform TCP/IP fingerprinting is proposed. TCP/IP fingerprinting is the process of identifying a remote machine across a TCP/IP-based computer network. This process has many applications related to network security; both intrusion and defense procedures may use it to achieve their objectives. Many known methods perform this process under favorable conditions, but nowadays there are many adversities that reduce identification performance. This work aims at the creation of a new OS fingerprinting tool that bypasses these current problems. The proposed method is based on the use of attractor reconstruction and neural networks to characterize and classify pseudo-random number generators.
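As an illustration of the attractor-reconstruction step, the sketch below performs a Takens-style time-delay embedding of a scalar sequence, the kind of preprocessing that could be applied to a host's pseudo-random outputs (for example, TCP initial sequence numbers) before classification. The delay, dimension, and data are our assumptions, not the thesis's.

```python
# Illustrative sketch only: time-delay embedding of a 1-D sequence
# into delay vectors, reconstructing an attractor for later
# classification (e.g. by a neural network).
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Embed a 1-D sequence into dim-dimensional delay vectors."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

# e.g. a sequence of TCP initial sequence numbers sampled from a host
isns = np.random.default_rng(0).integers(0, 2**32, size=200)
points = delay_embed(isns, dim=3, tau=2)
print(points.shape)   # (196, 3) delay vectors forming the reconstructed attractor
```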

Relevance:

30.00%

Publisher:

Abstract:

This work presents a proposal to automate the identification of energy theft in metering systems through fuzzy logic and a SCADA-like supervisory system. The solution collects data from the meters at consumer units (voltage, current, power demand, and the angles of the voltage and current phasor diagrams) and feeds these data, together with expert knowledge, into a fuzzy system. The collected parameters are processed by the fuzzy inference algorithm, and the output indicates to the user whether the investigated consumer may be consuming electrical energy without paying for it, each indication carrying its own membership grade. The value of this solution lies in the need to reduce losses that already exceed twenty percent. It is thus an expert system that seeks assertive decision-making, identifying which problems exist on site so as to avoid relationship problems between the utility and the consumer unit. The database of an electric power company was used, its data were processed by the proposed fuzzy algorithm, and the results were confirmed.
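As a hedged sketch of the fuzzy machinery involved, the example below defines a triangular membership function and evaluates one hand-written rule. The thesis's actual rule base, variables, and thresholds are not given in the abstract; the values here are invented.

```python
# Illustrative sketch only: a triangular membership function and one
# toy rule of the kind a fuzzy theft-detection system might use.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with corners (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def theft_suspicion(demand_drop_pct, current_imbalance_pct):
    """Rule: IF demand drop is HIGH AND imbalance is HIGH THEN suspicious."""
    drop_high = triangular(demand_drop_pct, 20, 50, 80)
    imbalance_high = triangular(current_imbalance_pct, 10, 30, 50)
    return min(drop_high, imbalance_high)   # fuzzy AND as minimum

print(theft_suspicion(45, 28))   # membership grade of the 'suspicious' output
```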

Relevance:

30.00%

Publisher:

Abstract:

The development of non-linear controllers gained ground, both in theory and in practical applications, once the advent of digital computers enabled the implementation of these methodologies. Compared with the more widely used linear controllers, non-linear controllers have the advantage of not requiring linearization of the system to determine the control parameters, which permits more efficient control, especially when the system presents a high degree of non-linearity. An additional advantage is cost reduction, since obtaining efficient control with linear controllers requires more refined sensors and actuators than with a non-linear controller. Among the non-linear control theories, sliding mode control stands out as the method offering the greatest robustness against uncertainties. It has already been confirmed that adopting compensation in the residual-error region improves the performance of these controllers. This work therefore describes the development of a non-linear controller that combines a sliding mode control strategy with a fuzzy compensation technique. Through the implementation of several fuzzy compensation strategies, the one providing the greatest efficiency for a system with a high degree of non-linearity and uncertainty was sought. An electrohydraulic actuator was used as the case study, and the results point to two compensation configurations that permit a greater reduction of the residual error.
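For illustration, the sketch below implements a basic sliding mode control law with a boundary layer, plus a stand-in compensation term that acts only in the residual-error band. The gains are arbitrary, and neither the electrohydraulic plant model nor the thesis's actual fuzzy compensator is reproduced.

```python
# Illustrative sketch only: sliding mode control with a boundary layer
# and a toy residual-error compensator standing in for the thesis's
# fuzzy compensation. All gains and widths are invented.
import math

def smc_control(error, d_error, lam=5.0, k=10.0, phi=0.05):
    """u = -k * sat(s / phi), with sliding surface s = d_error + lam * error."""
    s = d_error + lam * error
    sat = max(-1.0, min(1.0, s / phi))   # boundary layer instead of sign(s)
    return -k * sat

def residual_compensation(error, width=0.02, gain=2.0):
    """Toy compensator: active only inside the small residual-error band."""
    activation = max(0.0, 1.0 - abs(error) / width)
    return -gain * activation * math.copysign(1.0, error)

u = smc_control(error=0.01, d_error=-0.1) + residual_compensation(0.01)
print(u)
```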

Relevance:

30.00%

Publisher:

Abstract:

Caffeine is a mild psychostimulant that has positive cognitive effects at low doses, while promoting detrimental effects on these processes at higher doses. Episodic-like memory can be evaluated in rodents through hippocampus-dependent tasks. The dentate gyrus is a hippocampal subregion in which neurogenesis occurs in adults, and it is believed that this process is related to the function of pattern separation, such as the identification of spatial and temporal patterns when discriminating events. Furthermore, neurogenesis is influenced by spatial and contextual learning tasks. Our goal was to evaluate the performance of male Wistar rats in episodic-like tasks after acute or chronic caffeine treatment (15 mg/kg or 30 mg/kg). Moreover, we assessed the effect of the chronic caffeine treatment, as well as the influence of the hippocampus-dependent learning tasks, on the survival of neurons born at the beginning of treatment. For this purpose, we used BrdU to label the new cells generated in the dentate gyrus. Regarding the acute treatment, we found that the saline group presented a tendency toward better spatial and temporal discrimination than the caffeine groups. The chronic 15 mg/kg (low-dose) caffeine group showed the best discrimination of the temporal aspect of episodic-like memory, whereas the chronic 30 mg/kg (high-dose) caffeine group was able to discriminate temporal order only under a condition of greater difficulty. Assessment of neurogenesis, using immunohistochemistry to evaluate the survival of new-born neurons generated in the dentate gyrus, revealed no difference among the chronic treatment groups. Thus, the positive mnemonic effects of the chronic caffeine treatment were not related to neuronal survival; however, another plasticity mechanism could explain them, given that there was no improvement in the acute caffeine groups.

Relevance:

30.00%

Publisher:

Abstract:

As we grow old, many cognitive processes decline in the human brain. One of them is memory, the function that allows retention and later use of knowledge learned during life, understood as the result of multiple highly organized systems distributed over several neural regions. This work aimed to evaluate recognition memory in adults over 45 years old through word and picture recognition tasks, using two encoding (learning) conditions: same distracters and different distracters. Twelve individuals were studied (6 men and 6 women), aged between 45 and 88 years old and with similar demographic characteristics. They performed better on picture tasks than on word tasks. Better results were also verified when the encoding context had different distracters, which was reflected significantly in the long term, principally in the elderly individuals. The results suggest that the encoding context influenced the learning of the picture and word lists, mainly for the elderly when compared to the younger adults, and that these results can be related to the phenomena involved in recognition memory: recollection and familiarity.

Relevance:

30.00%

Publisher:

Abstract:

Layered double hydroxides have become extremely promising materials due to their range of applications, the ease with which they are obtained in the laboratory, and their reusability after calcination, so knowledge of their properties is of the utmost importance. In this study, layered double hydroxides of two systems, Mg-Al and Zn-Al, were synthesized and analyzed by X-ray diffraction; from these data, the volume density, planar atomic density, crystallite size, lattice parameters, interplanar spacing, and available interlayer space were determined. The materials were also subjected to thermogravimetric analysis at heating rates of 5, 10, 20, and 25 °C/min to determine kinetic parameters for the formation of the HTD and HTB metaphases, based on the Ozawa, Flynn-Wall, Starink, and Model Free Kinetics theoretical models. In addition, the layered double hydroxides synthesized in this work were calcined at heating rates of 2.5 °C/min and 20 °C/min and tested for adsorption of the nitrate anion in aqueous solution in a batch system at time intervals of 5 min, 15 min, 30 min, 1 h, 2 h, and 4 h. The calcined materials were also exposed to the atmosphere and, at intervals of 1 week, 2 weeks, and 1 month, analyzed by infrared spectroscopy to study the kinetics of the structural regeneration known as the "memory effect".
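For reference, the isoconversional Flynn-Wall-Ozawa relation that underlies this kind of kinetic analysis, in its standard Doyle-approximation form (a textbook expression, not quoted from the thesis), is:

```latex
% Flynn-Wall-Ozawa: at a fixed conversion \alpha, plotting \log\beta
% against 1/T for several heating rates \beta gives a straight line
% whose slope is proportional to the activation energy E_a.
\log \beta = \log\!\left( \frac{A E_a}{R\, g(\alpha)} \right) - 2.315 - 0.4567\, \frac{E_a}{R T}
```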

Relevance:

30.00%

Publisher:

Abstract:

Most of the solutions proposed as candidates for implementing audio and video distribution services have been designed with particular assumptions about infrastructure conditions, the format of the video streams to be transmitted, or the types of clients the service will serve. Applications that use video distribution services normally need to deal with large oscillations in demand as users join and leave the service; one need only observe the enormous variation in the audience levels of television programs. This behavior imposes an important requirement on this class of distributed systems: the capacity for reconfiguration as a consequence of variations in demand. This dissertation presents a study that used mobile agents to implement the servers of a video distribution service named DynaVideo. One of the main characteristics of this service is its ability to adjust its configuration as a consequence of variations in demand. Since DynaVideo servers can replicate themselves and are implemented as mobile code, their placement can be optimized to meet a given demand and, consequently, the service configuration can be adjusted to minimize the resources needed to distribute video to its users. The main contribution of this dissertation was to prove the viability of the concept of servers implemented as Java mobile agents based on the Aglet software development environment.
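As a rough, language-shifted illustration of the reconfiguration idea (plain Python rather than the Java/Aglets mobile agents the dissertation actually used), the sketch below computes how many server replicas each region needs for the current demand; the capacity model and numbers are invented.

```python
# Illustrative sketch only: a demand-driven replication decision of the
# kind a reconfigurable video-distribution service must make.
def plan_replicas(clients_per_region, capacity_per_server=100):
    """Decide how many server replicas each region needs right now."""
    plan = {}
    for region, clients in clients_per_region.items():
        # ceiling division: one replica per 'capacity_per_server' clients
        plan[region] = -(-clients // capacity_per_server) if clients else 0
    return plan

demand = {"campus-A": 250, "campus-B": 40, "campus-C": 0}
print(plan_replicas(demand))   # {'campus-A': 3, 'campus-B': 1, 'campus-C': 0}
```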

Relevance:

30.00%

Publisher:

Abstract:

It is increasingly common to use a single computing system through different devices (personal computers, cell phones, and others) and software platforms (systems with graphical user interfaces, Web systems, and others). Depending on the technologies involved, different software architectures may be employed; for example, Web systems use a client-server architecture, usually extended to three tiers, while systems with graphical interfaces commonly adopt the MVC style. The use of architectures with different styles hinders the interoperability of systems across multiple platforms. A further complication is that the user interface often has a different structure, appearance, and behavior on each device, which leads to low usability. Finally, building user interfaces specific to each of the devices involved, with their distinct features and technologies, is work that must be done individually and does not scale. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user-interface specification language, MML. This solution is designed to offer greater interoperability between platforms, greater consistency between the user interfaces, and greater flexibility and scalability for the incorporation of new devices.
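As a hedged sketch of the general idea of rendering one abstract specification on several platforms: the dictionary format below is invented for illustration and does not reproduce MML's actual syntax.

```python
# Illustrative sketch only: one abstract UI description rendered to two
# different concrete platforms. The widget schema here is hypothetical.
ABSTRACT_FORM = [
    {"kind": "label", "text": "User name"},
    {"kind": "input", "id": "username"},
    {"kind": "button", "text": "Sign in", "action": "login"},
]

def render_html(widgets):
    """Rendering for a Web platform."""
    parts = {"label": "<span>{text}</span>",
             "input": '<input name="{id}">',
             "button": '<button onclick="{action}()">{text}</button>'}
    return "\n".join(parts[w["kind"]].format(**w) for w in widgets)

def render_text_menu(widgets):
    """A minimal rendering for a constrained (e.g. phone) device."""
    return "\n".join(w["text"] if "text" in w else f'[{w["id"]}]' for w in widgets)

print(render_html(ABSTRACT_FORM))
print(render_text_menu(ABSTRACT_FORM))
```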

Relevance:

30.00%

Publisher:

Abstract:

Smart sensors are devices that differ from common sensors by offering processing capacity over the monitored data. They are typically composed of a power source, transducers (sensors and actuators), memory, a processor, and a transceiver. According to the IEEE 1451 standard, a smart sensor can be divided into TIM and NCAP modules, which must communicate through a standardized interface called TII. The NCAP module is the part of the smart sensor that contains the processor and is therefore responsible for giving the sensor its "smart" character. There are several approaches to developing this module, notably those using low-cost microcontrollers and/or FPGAs. This work addresses the development of a hardware/software architecture for an NCAP module according to the IEEE 1451.1 standard. The hardware infrastructure is composed of an RS-232 interface driver, a 512 kB RAM, a TII interface, the NIOS II embedded processor, and a simulator of the TIM module. The SOPC Builder automatic integration tool is used to integrate the hardware components. The software infrastructure is composed of the IEEE 1451.1 standard and the NCAP-specific application, which simulates the monitoring of pressure and temperature in oil wells with the objective of detecting leaks. The proposed module is embedded in an FPGA, and the Altera DE2 board, which contains the Cyclone II EP2C35F672C6 FPGA, is used for prototyping. The NIOS II embedded processor supports the NCAP software infrastructure, which is developed in the C language and based on the IEEE 1451.1 standard. The behavior of the hardware infrastructure is described in VHDL.
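As a loose illustration of the NCAP application's task (in Python rather than the C used in the thesis), the sketch below flags a leak when pressure falls too fast over a sliding window of samples; the thresholds and window size are invented, not the thesis's actual detection criteria.

```python
# Illustrative sketch only: a pressure-trend leak check of the kind the
# NCAP application performs on data from the (simulated) TIM.
from collections import deque

WINDOW = 10                # samples kept for the trend check
PRESSURE_DROP_LIMIT = 5.0  # bar over the window -> suspected leak

pressure_history = deque(maxlen=WINDOW)

def check_sample(pressure_bar, temperature_c):
    """Return True when the recent pressure trend suggests a leak."""
    pressure_history.append(pressure_bar)
    if len(pressure_history) < WINDOW:
        return False
    drop = pressure_history[0] - pressure_history[-1]
    return drop > PRESSURE_DROP_LIMIT

for t, p in enumerate([100 - 0.8 * i for i in range(15)]):
    if check_sample(p, temperature_c=60.0):
        print(f"leak suspected at sample {t}: pressure {p:.1f} bar")
```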

Relevance:

30.00%

Publisher:

Abstract:

Considering the era of mobile computing, information systems are going through a process of metamorphosis to enable their users to access information in new ways from mobile devices. This is mainly due to the growing popularity of devices such as smartphones and tablets. Driven by this new computing scenario, which is changing old habits and creating new ways for society to access information that until then was reachable only through traditional computers, the demand for corporate mobile applications is growing. This growth is caused by companies' need to offer their customers new forms of interaction with their services. This work therefore presents a study of mobile application development and a process named Metamorphosis, which provides a set of activities organized into three phases (requirements, design, and deployment) to support the development of corporate mobile applications based on existing Web information systems.