Abstract:
The continuous improvement of Ethernet technologies is increasing the interest in extending their use to cover factory-floor distributed real-time applications. Indeed, a considerable amount of research work has been devoted to the timing analysis of Ethernet-based technologies in the past few years. The majority of those works, however, are restricted to the analysis of subsets of the overall computing and communication system, and thus do not address timeliness in a holistic fashion. To this end, we propose an approach, based on simulation, for extracting the temporal properties of commercial off-the-shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is applied to a specific COTS technology, Ethernet/IP. We reason about the modeling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide useful results on timeliness. The approach is part of a wider framework related to the research project INDEPTH (INDustrial-Ethernet ProTocols under Holistic analysis).
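To make the statistical flavour of such a simulation concrete, the following is a minimal sketch (not the INDEPTH framework itself) that draws Monte Carlo samples of end-to-end delay for a cyclic Ethernet/IP-style frame crossing one switch and reports percentiles; the delay model and all parameters are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo estimation of end-to-end delay percentiles for
# cyclic traffic crossing one switch. Delay model and parameters are assumed.
import random
import statistics

CYCLE_US = 1000.0          # assumed producer cycle time (microseconds)
WIRE_US = 12.3             # assumed transmission time of one frame at 100 Mbit/s
N_SAMPLES = 100_000

def sample_delay() -> float:
    """One end-to-end delay sample: wire time + random queueing at the switch."""
    queued_frames = random.randint(0, 4)           # assumed cross-traffic burst
    jitter = random.uniform(0.0, 0.05 * CYCLE_US)  # assumed stack jitter
    return WIRE_US + queued_frames * WIRE_US + jitter

delays = sorted(sample_delay() for _ in range(N_SAMPLES))
print(f"mean     = {statistics.mean(delays):8.2f} us")
print(f"p99.9    = {delays[int(0.999 * N_SAMPLES)]:8.2f} us")
print(f"max seen = {delays[-1]:8.2f} us")
```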
Abstract:
Field communication systems (fieldbuses) are widely used as the communication support for distributed computer-controlled systems (DCCS) in all sorts of process control and manufacturing applications. There are several advantages in the use of fieldbuses as a replacement for the traditional point-to-point links between sensors/actuators and computer-based control systems, of which the most relevant is the decentralisation and distribution of processing power over the field. A widely used fieldbus is WorldFIP, which is normalised as European standard EN 50170. When using WorldFIP to support DCCS, an important issue is "how to guarantee the timing requirements of the real-time traffic?" WorldFIP has very interesting mechanisms to schedule data transfers, since it explicitly distinguishes between periodic and aperiodic traffic. In this paper, we describe how WorldFIP handles these two types of traffic and, more importantly, we provide a comprehensive analysis of how to guarantee the timing requirements of the real-time traffic.
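As a concrete illustration of the periodic-traffic side of such an analysis, the sketch below checks that a set of periodic buffer transfers fits the bus arbitrator's cycle while reserving bandwidth for aperiodic traffic; the variable set and the 30% aperiodic reserve are assumptions, not values mandated by WorldFIP.

```python
# Illustrative feasibility check: periodic buffer transfers must fit the bus
# arbitrator's cycle, leaving slack for aperiodic traffic. Parameters assumed.
from math import gcd
from functools import reduce

# (period_ms, transaction_time_ms) for each periodic produced variable
periodic_vars = [(10, 0.5), (20, 0.8), (40, 1.2)]
APERIODIC_RESERVE = 0.30   # assumed fraction of each cycle kept for aperiodic traffic

microcycle = reduce(gcd, (p for p, _ in periodic_vars))   # elementary cycle length
utilization = sum(c / p for p, c in periodic_vars)

print(f"elementary cycle: {microcycle} ms")
print(f"periodic utilization: {utilization:.2%}")
print("feasible" if utilization <= 1.0 - APERIODIC_RESERVE else "infeasible")
```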
Abstract:
Graphics processors were originally developed for rendering graphics but have recently evolved towards being an architecture for general-purpose computations. They are also expected to become important parts of embedded systems hardware -- not just for graphics. However, this necessitates the development of appropriate timing analysis techniques, because techniques developed for CPU scheduling are not applicable: we are not interested in how long it takes for any given GPU thread to complete, but rather in how long it takes for all of them to complete. We therefore develop a simple method for finding an upper bound on the makespan of a group of GPU threads executing the same program and competing for the resources of a single streaming multiprocessor (whose architecture is based on NVIDIA Fermi, with some simplifying assumptions). We then build upon this method to formulate the derivation of the exact worst-case makespan (and corresponding schedule) as an optimization problem. Addressing the issue of tractability, we also present a technique for efficiently computing a safe estimate of the worst-case makespan with minimal pessimism, which may be used when finding an exact value would take too long.
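For intuition about what such an upper bound looks like, here is a deliberately simple sketch based on the classical Graham-style bound for identical processors; it is not the paper's analysis, and the per-thread costs and core count are assumed.

```python
# A safe (if pessimistic) upper bound on the time for ALL threads to finish
# on `cores` identical cores: total work spread over the cores, plus the
# longest single thread. A classical bound, not the paper's exact method.
def makespan_upper_bound(thread_costs: list[float], cores: int) -> float:
    total_work = sum(thread_costs)
    return total_work / cores + max(thread_costs)

costs = [4.0, 4.0, 6.0, 3.0, 5.0, 4.0, 7.0, 2.0]  # assumed per-thread cycle counts
print(makespan_upper_bound(costs, cores=4))        # safe bound on the makespan
```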
Abstract:
Kinematic redundancy occurs when a manipulator possesses more degrees of freedom than those required to execute a given task. Several kinematic techniques for redundant manipulators control the gripper through the pseudo-inverse of the Jacobian, but lead to a kind of chaotic inner motion with unpredictable arm configurations. Such algorithms are not easy to adapt to optimization schemes; moreover, there are often multiple optimization objectives that can conflict with one another. Unlike single-objective optimization, where one attempts to find the best solution, in multi-objective optimization there is no single solution that is optimal with respect to all indices. Therefore, trajectory planning of redundant robots remains an important area of research, and more efficient optimization algorithms are needed. This paper presents a new technique to solve the inverse kinematics of redundant manipulators, using a multi-objective genetic algorithm. This scheme combines the closed-loop pseudo-inverse method with a multi-objective genetic algorithm to control the joint positions. Simulations are developed for manipulators with three or four rotational joints, considering the optimization of two objectives in workspaces with and without obstacles. The results reveal that it is possible to choose from several solutions on the Pareto-optimal front according to the importance of each individual objective.
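The closed-loop pseudo-inverse step the paper builds on can be sketched in a few lines; the 3-link planar arm with unit link lengths is an assumed example, and the multi-objective genetic layer is not reproduced here.

```python
# Closed-loop pseudo-inverse kinematics for an assumed 3-link planar arm:
# joint increments come from the Jacobian pseudo-inverse of the position error.
import numpy as np

L = np.array([1.0, 1.0, 1.0])                      # assumed link lengths

def fkine(q):
    """Planar forward kinematics: gripper (x, y) for joint angles q."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    # column j: contribution of joint j to gripper velocity
    return np.array([[-np.sum(L[j:] * np.sin(a[j:])) for j in range(3)],
                     [ np.sum(L[j:] * np.cos(a[j:])) for j in range(3)]])

q = np.array([0.3, 0.2, 0.1])                      # assumed starting posture
target = np.array([1.5, 1.5])                      # assumed gripper target
for _ in range(100):                               # closed-loop iteration
    err = target - fkine(q)
    q = q + 0.1 * (np.linalg.pinv(jacobian(q)) @ err)
print(q, fkine(q))                                 # joints and reached position
```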
Abstract:
Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology, where several independent users can submit applications that execute concurrently across the network. Multiple concurrent applications cause significant energy-usage overhead on sensor nodes that cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions together by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
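A toy model of the second idea, delaying packets to harmonized release points so the radio wakes up once per batch, might look as follows; the periods are assumptions and this is not the paper's implementation.

```python
# Batching sketch: packets from independent applications are held until the
# next multiple of a harmonizing base period, so they share one radio wakeup.
import math

app_periods = [30, 45, 70]     # assumed per-application send periods (ms)
base = min(app_periods)        # harmonizing base period

def next_release(t_ready):
    """Delay a packet ready at t_ready to the next harmonized boundary."""
    return math.ceil(t_ready / base) * base

# Packets ready at these times all leave in a single radio wakeup at t = 60 ms
# instead of four separate transmissions:
for t in [31.0, 42.0, 55.0, 58.5]:
    print(f"ready {t:5.1f} ms -> sent {next_release(t):5.1f} ms")
```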
Abstract:
Consider a wireless sensor network (WSN) where a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take individual sensor readings; in many cases, however, it is relevant to compute aggregated quantities of these readings. In fact, the minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior; for example, if the maximum temperature is very high then it may be that a fire has broken out. In this context, we propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property of having a time complexity that does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of sensor readings matter.
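The diameter-and-range dependence can be illustrated by one natural reconstruction (not necessarily the paper's exact protocol): binary search over the value domain, where each yes/no probe is flooded through the network in at most D rounds.

```python
# Find the network-wide max with flooded yes/no probes ("is any reading >= m?").
# Each probe costs `diameter` rounds; the probe count is log2 of the value range,
# so total rounds are independent of the number of nodes.
def max_by_probes(readings, diameter, lo, hi):
    rounds = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        rounds += diameter                    # flood the probe, gather answers
        if any(r >= mid for r in readings):   # network-wide OR of local answers
            lo = mid
        else:
            hi = mid - 1
    return lo, rounds

readings = [17, 23, 19, 31, 8]                # assumed sensor values in [0, 63]
print(max_by_probes(readings, diameter=5, lo=0, hi=63))   # -> (31, 30)
```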
Abstract:
ARINC specification 653-2 describes the interface between application software and the underlying middleware in a distributed real-time avionics system. The real-time workload in this system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads and can communicate with other processes in the system. In this work we develop compositional techniques for automated scheduling of such partitions and processes. At present, system designers schedule partitions manually, based on interactions they have with the partition vendors. This approach is not only time-consuming, but can also result in underutilization of resources. In contrast, the technique proposed in this paper is a principled approach to scheduling ARINC-653 partitions and should therefore facilitate system integration.
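To give a flavour of such compositional reasoning, the sketch below abstracts a partition as a periodic resource with budget Q every period P and checks process demand against the classical linear supply lower bound (Shin and Lee); the task parameters and the deadline-only check points are simplifying assumptions, not necessarily the paper's formulation.

```python
# Compositional check sketch: does the worst-case supply of a periodic
# resource (budget Q every P) cover the demand of the partition's processes?
def supply_lbf(t: float, P: float, Q: float) -> float:
    """Linear lower bound on resource supply in any window of length t."""
    return max(0.0, (Q / P) * (t - 2 * (P - Q)))

def demand(t: float, tasks) -> float:
    """Demand bound of implicit-deadline periodic processes in window t."""
    return sum(int(t // Ti) * Ci for Ci, Ti in tasks)

tasks = [(1.0, 10.0), (2.0, 20.0)]        # assumed (WCET, period) of processes
P, Q = 5.0, 2.0                           # candidate partition period and budget
ok = all(demand(t, tasks) <= supply_lbf(t, P, Q)
         for t in (Ti for _, Ti in tasks))   # simplification: check at deadlines
print("schedulable inside partition" if ok else "needs a larger budget")
```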
Abstract:
Consider a wireless sensor network (WSN) where a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take sensor readings, but individual readings are not very important; it is important, however, to compute aggregated quantities of these readings. The minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior; for example, if the maximum temperature is very high then it may be that a fire has broken out. We propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property of having a time complexity that does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of sensor readings matter.
Abstract:
Penalty and barrier methods are normally used to solve constrained nonlinear optimization problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known, which means that optimization methods based on derivatives cannot be used. A Java-based API containing only derivative-free optimization methods, including penalty and barrier methods, was implemented to solve both constrained and unconstrained problems. In this work a new penalty function, based on fuzzy logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: the penalization is light when the violation of the constraints is low and heavy when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two of the classic penalty/barrier functions are presented. From these results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
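The progressive-penalty idea can be sketched with two fuzzy-style membership functions blending a light and a heavy penalty rule; the breakpoints and rule weights below are assumptions, not the paper's inference engine.

```python
# Progressive penalty sketch: small violations get a light penalty, large ones
# a heavy one, with the blend driven by fuzzy-style membership functions.
def membership_low(v: float) -> float:
    return max(0.0, 1.0 - v)                      # "violation is low" peaks at v = 0

def membership_high(v: float) -> float:
    return min(1.0, max(0.0, (v - 0.5) / 1.5))    # "violation is high" from v = 0.5

def fuzzy_penalty(violation: float) -> float:
    """Weighted blend of a light and a heavy penalty rule (centroid-style)."""
    w_low, w_high = membership_low(violation), membership_high(violation)
    if w_low + w_high == 0.0:                     # defensive; cannot occur here
        return 100.0 * violation
    light, heavy = 1.0 * violation, 100.0 * violation
    return (w_low * light + w_high * heavy) / (w_low + w_high)

def penalized(f_value: float, violations: list[float]) -> float:
    return f_value + sum(fuzzy_penalty(max(0.0, v)) for v in violations)

print(penalized(3.0, [0.05, 0.0, 2.0]))   # mild + zero + severe violations
```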
Abstract:
Wind resource evaluation in two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted in the selected sites, allowing for a comparison between in situ measurements and simulated wind, in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of the benefits and limitations of these methodologies for wind resource estimation. In the first methodology, the mesoscale model acts as a set of "virtual" wind measuring stations: wind data was computed by WRF for both sites and inserted directly as input in WAsP. In the second approach, the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data was extracted at the top of the planetary boundary layer for both sites, aiming to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained with the abovementioned methodologies were compared with those resulting from in situ measurements, in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine in each site. Results showed that the second approach is the one that produces values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments, although the application of mesoscale and microscale coupling in areas with complex topography should be done with extreme caution.
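The production-estimate step can be illustrated by integrating an assumed Weibull wind-speed distribution against a toy power curve; neither the parameters nor the curve come from the paper.

```python
# Annual energy production (AEP) sketch from Weibull parameters (k, A) and a
# toy 2 MW power curve; all numbers are illustrative assumptions.
import math

k, A = 2.1, 7.5                    # assumed Weibull shape and scale (m/s)

def weibull_pdf(v: float) -> float:
    return (k / A) * (v / A) ** (k - 1) * math.exp(-((v / A) ** k))

def power_kw(v: float) -> float:
    """Toy power curve: cut-in 3 m/s, rated 2 MW at 12 m/s, cut-out 25 m/s."""
    if v < 3 or v > 25:
        return 0.0
    return 2000.0 if v >= 12 else 2000.0 * ((v - 3) / 9) ** 3

HOURS_PER_YEAR = 8760
# Numerical integration of pdf(v) * P(v) over wind speed, step 0.1 m/s:
aep_kwh = sum(weibull_pdf(i * 0.1) * power_kw(i * 0.1) * 0.1
              for i in range(1, 300)) * HOURS_PER_YEAR
print(f"estimated AEP: {aep_kwh / 1e6:.2f} GWh")
```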
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
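Why defragmentation matters can be seen with a toy one-dimensional model of the logic space: total free capacity may be ample while the largest contiguous free block is too small for a new module. The layout below is an assumed example; the paper's replication-based relocation is not modelled.

```python
# Fragmentation sketch for a 1-D column model of the FPGA logic space:
# '.' is a free column, letters are placed modules.
layout = list("AA..B..CC...D.")

free_total = layout.count(".")
largest = run = 0
for c in layout:                       # longest run of contiguous free columns
    run = run + 1 if c == "." else 0
    largest = max(largest, run)

print(f"free columns: {free_total}, largest contiguous block: {largest}")
# After compacting all modules to one side, the free space becomes one block:
print(f"after defragmentation, largest block: {free_total}")
```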
Abstract:
For many, the act of teaching was and remains an "art", in which the most effective teachers and great masters are those who have the ability to convey their messages and knowledge in a simple and appealing way, regardless of the field of study. Class-related information is increasingly digital, so it is important for teachers to master technologies for creating, organizing and making content available. This sharing was initially made possible by Web pages and later by LMS (Learning Management System) platforms. Creating a website used to be a complicated task, both in terms of cost and of mastery of Web technology, and it was sometimes necessary to hire professionals for the purpose. CMS (Content Management Systems) then emerged: Open Source technologies that allow content to be managed. In this context, a study was carried out with the aim of assessing teachers' competences in the field of digital content management and sharing. The study made it possible to draw conclusions about the potential and applicability of CMS in education. Its main objective concerned the potential for distributing and sharing Digital Educational Resources, organized from a pedagogical point of view, with students. The role of Cloud Computing in the process of collaborative document sharing was also analysed and studied. As support for this investigation, a model course was designed, implemented in the three main CMS platforms of today, and the potential of each in this context was evaluated. Finally, the conclusions drawn from the study were presented.
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy-optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
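The efficiency metric reported in the evaluation, work per Joule, can be sketched with an assumed linear host power model; the numbers are illustrative, and the point is simply that consolidation raises the ratio.

```python
# Work-per-Joule sketch with an assumed linear host power model.
P_IDLE_W, P_PEAK_W = 100.0, 250.0   # assumed idle and full-load host power

def host_power(util: float) -> float:
    """Assumed linear power model between idle and peak."""
    return P_IDLE_W + (P_PEAK_W - P_IDLE_W) * util

work = 100.0                        # assumed units of job progress per hour
# Two hosts each half loaded versus one host fully loaded, same total work:
spread = 2 * work / (2 * host_power(0.5) * 3600)
packed = 2 * work / (host_power(1.0) * 3600)
print(f"spread: {spread:.2e} work/J   consolidated: {packed:.2e} work/J")
```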
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia — a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site devoted to alternative music, and the results of that experiment are reported in this paper.
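One way to picture the path-based scoring (a hedged reconstruction, not Shakti's actual weighting over DBpedia) is to enumerate short simple paths between two concept nodes and sum a weight that decays with path length:

```python
# Path-based relatedness sketch: score a concept pair by summing 1/2^len over
# all simple paths up to max_len edges. Toy graph and weighting are assumed.
from collections import deque

graph = {                                     # assumed tiny concept graph
    "PunkRock": ["Rock"], "Rock": ["PunkRock", "Music", "IndieRock"],
    "IndieRock": ["Rock", "Music"], "Music": ["Rock", "IndieRock"],
}

def relatedness(a: str, b: str, max_len: int = 4) -> float:
    score, queue = 0.0, deque([(a, [a])])
    while queue:
        node, path = queue.popleft()
        if node == b and len(path) > 1:       # found a path; weight by length
            score += 1.0 / (2 ** (len(path) - 1))
            continue
        if len(path) <= max_len:              # extend simple paths only
            for nxt in graph.get(node, []):
                if nxt not in path:
                    queue.append((nxt, path + [nxt]))
    return score

print(relatedness("PunkRock", "IndieRock"))   # higher for closely linked concepts
```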