901 results for Computer Network Resources
Abstract:
The increasing and intensive integration of distributed energy resources into distribution systems requires adequate methodologies to ensure secure operation according to the smart grid paradigm. In this context, SCADA (Supervisory Control and Data Acquisition) systems are an essential infrastructure. This paper presents the conceptual design of a communication and resource management scheme based on an intelligent SCADA with a decentralized, flexible, and context-aware approach. The methodology is used to support energy resource management considering all the involved costs, power flows, and electricity prices, leading to network reconfiguration. The methodology also addresses the definition of each player's information access permissions to each resource. The paper includes a 33-bus network used in a case study that considers an intensive use of distributed energy resources in five distinct operation contexts.
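As a rough illustration of the context-dependent access permissions mentioned above, the sketch below maps (context, player, resource) triples to permitted actions. All contexts, players, and resources are hypothetical; the paper's actual SCADA data model is not reproduced here.

```python
# Minimal sketch of context-dependent access permissions for an intelligent
# SCADA. All names (contexts, players, resources) are hypothetical
# illustrations, not the paper's actual data model.

# Permission table: for each operation context, which players may perform
# which actions on which resources.
PERMISSIONS = {
    "normal_operation": {
        ("dso", "dg_unit_12"): {"read", "write"},
        ("aggregator_a", "dg_unit_12"): {"read"},
    },
    "network_reconfiguration": {
        ("dso", "dg_unit_12"): {"read", "write"},
        ("aggregator_a", "dg_unit_12"): set(),  # access revoked in this context
    },
}

def allowed(context: str, player: str, resource: str, action: str) -> bool:
    """Check whether a player may perform an action on a resource in a given context."""
    return action in PERMISSIONS.get(context, {}).get((player, resource), set())

if __name__ == "__main__":
    print(allowed("normal_operation", "aggregator_a", "dg_unit_12", "read"))          # True
    print(allowed("network_reconfiguration", "aggregator_a", "dg_unit_12", "read"))   # False
```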
Abstract:
In this paper, three approaches to the assessment of credibility in a web environment are presented, namely the checklists model, the cognitive authority model, and the contextual model. This theoretical framework was used to conduct a study on the assessment of web resource credibility among a sample of 195 students from elementary and secondary schools in a municipality of the Oporto district (Portugal). The practices that young people and children claim to follow regarding the use of criteria for web resource selection are presented. In addition, these results are discussed and compared with the perceptions that these respondents demonstrated regarding the use of criteria to establish or assess authorship, originality, or the structure of information resources. These results are also discussed and compared with the perceptions that these respondents demonstrated for the elements that make up each of these criteria.
Abstract:
Teaching and learning computer programming are both challenging and difficult. Assessing students' work and providing individualised feedback to all of them is time-consuming and error-prone for teachers, and frequently involves a time delay. The existing tools and specifications prove to be insufficient in complex evaluation domains where there is a greater need for practice. At the same time, Massive Open Online Courses (MOOCs) are appearing, revealing a new way of learning that is more dynamic and more accessible. However, this new paradigm raises serious questions regarding the monitoring of student progress and timely feedback. This paper provides a conceptual design model for a computer programming learning environment. This environment uses the portal interface design model, gathering information from a network of services such as repositories and program evaluators. The design model also includes integration with learning management systems, a central piece in the MOOC realm, endowing the model with characteristics such as scalability, collaboration, and interoperability. This model is not limited to the domain of computer programming and can be adapted to any complex area that requires systematic evaluation with immediate feedback.
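To make the role of the program evaluator services concrete, here is a minimal sketch of one automatic evaluation step: run a student's submission against test cases and return immediate feedback. The data structures and the buggy example submission are hypothetical, not part of the paper's specification.

```python
# Minimal sketch of an automatic program evaluator: run a submitted callable
# against test cases and build immediate feedback. Hypothetical structures,
# not the paper's actual service interface.
from dataclasses import dataclass

@dataclass
class TestCase:
    args: tuple
    expected: object

def evaluate(submission, tests):
    """Run a submitted callable against test cases and return (score, feedback)."""
    feedback, passed = [], 0
    for t in tests:
        try:
            result = submission(*t.args)
            ok = result == t.expected
        except Exception as exc:  # report runtime errors as feedback, not crashes
            result, ok = f"raised {exc!r}", False
        passed += ok
        feedback.append(f"{t.args} -> {result} (expected {t.expected}): {'OK' if ok else 'FAIL'}")
    return passed / len(tests), feedback

# Example: a (deliberately buggy) student factorial submission.
def student_factorial(n):
    return 1 if n <= 1 else n * student_factorial(n - 2)  # bug: should be n - 1

score, notes = evaluate(student_factorial, [TestCase((0,), 1), TestCase((4,), 24)])
print(score)
for line in notes:
    print(line)
```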
Abstract:
Smart Grids (SGs) have emerged as the new paradigm for power system operation and management, being designed to include large amounts of distributed energy resources. This new paradigm requires new Energy Resource Management (ERM) methodologies considering different operation strategies and the existence of new management players, such as several types of aggregators. This paper proposes a methodology to facilitate the coalition of distributed generation units into Virtual Power Players (VPPs), considering a game theory approach. The proposed approach consists in analysing the classifications attributed by each VPP to the distributed generation units, as well as the contracts previously established by each player. The proposed classification model is based on fourteen parameters, including technical, economic, and behavioural ones. Depending on each VPP's strategy, size, and goals, each parameter has a different importance. VPPs can also manage other types of energy resources, such as storage units, electric vehicles, demand response programs, or even parts of the MV and LV distribution networks. A case study with twelve VPPs with different characteristics and one hundred and fifty real distributed generation units is included in the paper.
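A minimal sketch of the weighted classification idea: each VPP weights the evaluation parameters according to its strategy and ranks distributed generation units by the resulting score. The three parameters, weights, and unit names below are hypothetical; the paper uses fourteen parameters.

```python
# Minimal sketch of VPP-specific weighted classification of DG units.
# Parameters, weights, and unit names are hypothetical illustrations.

# Parameter values per DG unit, normalized to [0, 1].
dg_units = {
    "wind_07": {"availability": 0.9, "price": 0.6, "contract_history": 0.8},
    "pv_23":   {"availability": 0.7, "price": 0.9, "contract_history": 0.5},
    "chp_04":  {"availability": 0.8, "price": 0.5, "contract_history": 0.9},
}

# Each VPP assigns a different importance to each parameter (weights sum to 1).
vpp_weights = {
    "vpp_cost_driven": {"availability": 0.2, "price": 0.6, "contract_history": 0.2},
    "vpp_reliability": {"availability": 0.6, "price": 0.1, "contract_history": 0.3},
}

def rank_units(weights, units):
    """Rank DG units by weighted score for one VPP."""
    score = lambda params: sum(weights[p] * v for p, v in params.items())
    return sorted(units, key=lambda u: score(units[u]), reverse=True)

for vpp, w in vpp_weights.items():
    print(vpp, rank_units(w, dg_units))
```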
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted to and used in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
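A minimal sketch of the MW-mile phase: once the tracing phase has attributed a share of the flow on each branch to each player, every branch's cost is allocated in proportion to that player's MW-km of use. The branches, costs, and usage figures below are hypothetical.

```python
# Minimal sketch of MW-mile cost allocation, given per-player branch usage
# produced by a tracing step. All numbers are hypothetical illustrations.

branches = {
    # branch: (length_km, annual_cost_eur)
    "b1": (2.0, 10_000.0),
    "b2": (1.5, 6_000.0),
}

# MW attributed to each player on each branch by the tracing phase.
usage_mw = {
    "dg_unit_a": {"b1": 3.0, "b2": 0.0},
    "load_b":    {"b1": 1.0, "b2": 2.0},
    "ev_fleet":  {"b1": 0.0, "b2": 1.0},
}

def mw_mile_allocation(branches, usage_mw):
    """Allocate each branch's cost in proportion to each player's MW-km use."""
    allocation = {player: 0.0 for player in usage_mw}
    for branch, (length, cost) in branches.items():
        total_mw_km = sum(use[branch] * length for use in usage_mw.values())
        if total_mw_km == 0:
            continue  # unused branch: nothing to allocate
        for player, use in usage_mw.items():
            allocation[player] += cost * (use[branch] * length) / total_mw_km
    return allocation

print(mw_mile_allocation(branches, usage_mw))
```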
Abstract:
The growth of information systems and their massive use has created a new reality in the access to remote experiments that are geographically distributed. In recent times, the topic of remote laboratories has appeared in the most diverse fields, such as teaching or industrial control and monitoring systems. Since access to the laboratories takes place over a permissive medium such as the Internet, the information may be at the mercy of any attacker. It is therefore necessary to guarantee the security of the access, so as to prevent tampering with the obtained values as well as unauthorized accesses. The adopted security mechanisms must take into account the need for authentication and authorization, which are critical points with respect to security, since these laboratories may be controlling sensitive and expensive equipment and may even, in certain cases, compromise the control and monitoring of industrial systems. The objective of this work was the analysis of network security, and a study was carried out on the various security concepts and mechanisms needed to guarantee secure communications between remote laboratories. It resulted in the three presented solutions for secure communication between geographically distributed remote laboratories, using the IPSec, OpenVPN, and PPTP technologies. In order to minimize costs, the whole implementation was based on open-source software and on the use of a low-cost computer. Regarding the creation of the VPNs, these were configured so as to obtain the intended results in establishing a secure connection for remote laboratories. pfSense proved to be the right choice, since it natively supports all of the technologies that were studied and implemented, without requiring very expensive physical resources, allowing the use of open-source technologies without compromising the security of the solutions that support secure communications for the remote laboratories.
Abstract:
The Web has brought humanity closer together at a level never seen before. With this ease also came cybercrime, terrorism, and other phenomena characteristic of a technological, fully computerized society in which land borders matter little in limiting the active agents, harmful or not, of this system. It was recently discovered that the great nations closely "watch" their citizens, disregarding any moral or technological limit: they can eavesdrop on telephone conversations, monitor the sending and receiving of e-mails, and monitor citizens' Web traffic through extremely powerful monitoring and surveillance programs. In other corners of the globe, nations in turmoil or wrapped in a mantle of censorship persecute their citizens, denying them access to the Web. More mundanely, there are people who coerce acquaintances and relatives and invade their privacy, combing through every corner of their computers and browsing habits. In this sense, after studying the technologies that allow constant surveillance of Web users, solutions were analysed that grant some anonymity and security to Web traffic. To support the present study, an analysis was made of the platforms that allow anonymous and secure browsing, along with a study of the technologies and programs with the potential for privacy violation and computer intrusion used by highly notorious nations. The main objective of this work was to analyse computer monitoring and surveillance technologies, identifying the available technologies and looking for potential solutions, in order to investigate the possibility of developing and making available a multimedia tool based on Linux and on a LiveDVD (a Linux operating system that runs from the DVD without requiring installation). Resources were integrated into the prototype with the aim of giving the user an agile and non-expert way to browse the Web securely and anonymously, from a virtualized operating system (OS) previously tuned for the purpose described above. The prototype was tested and evaluated by a group of citizens in order to gauge its potential. The document ends with the conclusions and the work to be developed in the future.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Master's project report in Teaching of Informatics.
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that, when used separately, obtain better results, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
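A minimal sketch of the centroid-distance feature mentioned in the abstract: the distance from the contour centroid to each contour point, resampled to a fixed length. The resampling and normalization choices here are assumptions, and the elliptical contour merely stands in for a segmented hand.

```python
# Minimal sketch of a centroid-distance signature for a closed contour.
# The exact resampling/normalization used in the paper may differ.
import numpy as np

def centroid_distance(contour: np.ndarray, n_samples: int = 64) -> np.ndarray:
    """Compute the centroid-distance signature of a closed contour (N x 2 points)."""
    centroid = contour.mean(axis=0)
    distances = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length and normalize by the maximum distance so the
    # feature vector is scale-invariant (a common choice, assumed here).
    idx = np.linspace(0, len(distances) - 1, n_samples).astype(int)
    signature = distances[idx]
    return signature / signature.max()

# Synthetic "contour": an ellipse standing in for a segmented hand silhouette.
t = np.linspace(0, 2 * np.pi, 400)
contour = np.stack([30 * np.cos(t), 50 * np.sin(t)], axis=1)
print(centroid_distance(contour).shape)  # (64,)
```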
Abstract:
Nowadays, many P2P applications proliferate on the Internet. The attractiveness of many of these systems relies on the collaborative approach used to exchange large resources without the dependence on, and the constraints of, centralized approaches in which a single server is responsible for handling all the requests from the clients. As a consequence, some P2P systems are also interesting and cost-effective approaches to be adopted by content providers and other Internet players. However, there are several coexistence problems between P2P applications and Internet Service Providers (ISPs), due to the unforeseeable behavior of P2P traffic aggregates in ISP infrastructures. In this context, this work proposes a collaborative P2P/ISP system able to underpin the development of novel Traffic Engineering (TE) mechanisms, contributing to a better coexistence between P2P applications and ISPs. Using the devised system, two TE methods are described which are able to estimate and control the impact of P2P traffic aggregates on the ISP network links. One of the TE methods allows ISP administrators to foresee the expected impact that a given P2P swarm will have on the underlying network infrastructure. The other TE method enables the definition of ISP-friendly P2P topologies, in which specific network links are protected from P2P traffic. As a result, the proposed system and associated mechanisms will contribute to improved ISP resource management tasks and foster the deployment of innovative ISP-friendly systems.
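A minimal sketch of the first TE idea, estimating the load a P2P swarm places on ISP links given where peers attach and how traffic between their access routers is routed. The topology, routing paths, and swarm below are hypothetical illustrations, not the proposed system itself.

```python
# Minimal sketch: accumulate expected per-link traffic for a P2P swarm's
# overlay connections. Topology and swarm are hypothetical.
from collections import defaultdict
from itertools import combinations

# Peer -> access router, and router pair -> sequence of ISP links on the path.
peer_router = {"p1": "r1", "p2": "r1", "p3": "r2", "p4": "r3"}
paths = {
    ("r1", "r2"): ["l_12"],
    ("r1", "r3"): ["l_12", "l_23"],
    ("r2", "r3"): ["l_23"],
}

def link_load(overlay_edges, unit_rate=1.0):
    """Estimate traffic per ISP link, assuming each overlay connection carries unit_rate."""
    load = defaultdict(float)
    for a, b in overlay_edges:
        ra, rb = sorted((peer_router[a], peer_router[b]))
        if ra == rb:
            continue  # intra-router traffic does not cross core links
        for link in paths[(ra, rb)]:
            load[link] += unit_rate
    return dict(load)

# A fully meshed swarm among the four peers.
swarm = list(combinations(peer_router, 2))
print(link_load(swarm))  # e.g. {'l_12': 4.0, 'l_23': 3.0}
```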
Abstract:
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fmicb.2016.00275
Abstract:
Network protection, distribution networks, decentralised energy resources, communication links, IEC Communication and Substation Control Standards
Abstract:
An appropriate assessment of end-to-end network performance presumes highly efficient time tracking and measurement, with precise time control of the stopping and resuming of program operation. In this paper, a novel approach to solving the problems of highly efficient and precise time measurement on PC platforms and on ARM architectures is proposed. A new unified High Performance Timer and a corresponding software library offer a unified interface to the known time counters and automatically identify the fastest and most reliable time source available in the user space of a computing system. The research is focused on developing an approach to unified time acquisition from the PC hardware, accordingly substituting the common way of getting the time value through Linux system calls. The presented approach provides a much faster means of obtaining time values with nanosecond precision than conventional means. Moreover, it is capable of handling sequential time values, precise sleep functions, and process resuming. This ability reduces the waste of computer resources during the execution of a sleeping process from 100% (busy-wait) to 1-1.5%, while maintaining the benefits of very accurate process resuming times on long waits.
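A minimal sketch, in Python rather than the paper's library, of the hybrid-wait idea behind the reported reduction from 100% (busy-wait) to 1-1.5%: sleep through most of the interval and busy-wait only for a short final tail, keeping the wake-up precise without spinning the whole time. The 200 microsecond tail length is an assumed tuning value.

```python
# Minimal sketch of a hybrid precise sleep: coarse OS sleep plus a short
# busy-wait tail. Not the paper's High Performance Timer implementation.
import time

def precise_sleep(duration_s: float, spin_tail_s: float = 200e-6) -> None:
    """Sleep for duration_s, busy-waiting only during the last spin_tail_s."""
    deadline = time.perf_counter_ns() + int(duration_s * 1e9)
    coarse = duration_s - spin_tail_s
    if coarse > 0:
        time.sleep(coarse)          # cheap: the process yields the CPU
    while time.perf_counter_ns() < deadline:
        pass                        # short busy-wait for a precise wake-up

start = time.perf_counter_ns()
precise_sleep(0.010)                # 10 ms
print((time.perf_counter_ns() - start) / 1e6, "ms elapsed")
```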
Abstract:
Functional connectivity in the human brain can be represented as a network using electroencephalography (EEG) signals. These networks, whose number of nodes can vary from tens to hundreds, are characterized by neurobiologically meaningful graph theory metrics. This study investigates the degree to which various graph metrics depend upon the network size. To this end, EEGs from 32 normal subjects were recorded and functional networks of three different sizes were extracted. A state-space based method was used to calculate cross-correlation matrices between different brain regions. These correlation matrices were used to construct binary adjacency connectomes, which were assessed with regard to a number of graph metrics such as clustering coefficient, modularity, efficiency, economic efficiency, and assortativity. We show that the estimates of these metrics differ significantly depending on the network size. Larger networks had higher efficiency, higher assortativity, and lower modularity compared to smaller networks with the same density. These findings indicate that network size should be considered in any comparison of networks across studies.
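A minimal sketch of the size comparison described above: build binary adjacency networks of different sizes at the same density and compute graph metrics on each. Random data stands in for the EEG-derived cross-correlation matrices, and the thresholding rule is an assumption.

```python
# Minimal sketch: binary networks of different sizes at equal density,
# compared on a few graph metrics. Surrogate random data, not EEG.
import numpy as np
import networkx as nx

def binary_network(corr: np.ndarray, density: float) -> nx.Graph:
    """Keep the strongest |correlations| until the target edge density is reached."""
    n = corr.shape[0]
    iu = np.triu_indices(n, k=1)
    strengths = np.abs(corr[iu])
    k = int(round(density * len(strengths)))      # number of edges to keep
    keep = np.argsort(strengths)[::-1][:k]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(zip(iu[0][keep], iu[1][keep]))
    return g

rng = np.random.default_rng(0)
for n_nodes in (32, 128):                         # two network sizes, same density
    x = rng.standard_normal((n_nodes, 500))       # surrogate "signals"
    g = binary_network(np.corrcoef(x), density=0.2)
    print(n_nodes,
          round(nx.average_clustering(g), 3),
          round(nx.global_efficiency(g), 3),
          round(nx.degree_assortativity_coefficient(g), 3))
```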