988 results for DISTRIBUTED OPTIMIZATION
Abstract:
Distributed real-time systems such as automotive applications are becoming larger and more complex, thus requiring the use of more powerful hardware and software architectures. Furthermore, those distributed applications commonly have stringent real-time constraints. This implies that such applications would gain in flexibility if they were parallelized and distributed over the system. In this paper, we consider the problem of allocating fixed-priority fork-join Parallel/Distributed real-time tasks onto distributed multi-core nodes connected through a Flexible Time Triggered Switched Ethernet network. We analyze the system requirements and present a set of formulations based on a constraint programming approach. Constraint programming allows us to express the relations between variables in the form of constraints. Our approach is guaranteed to find a feasible solution, if one exists, in contrast to other approaches based on heuristics. Furthermore, approaches based on constraint programming have been shown to obtain solutions for this type of formulation in reasonable time.
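As an illustration of a constraint programming formulation of this kind, the following is a minimal sketch using OR-Tools CP-SAT, with a simple per-core utilization bound standing in for the paper's full fixed-priority/fork-join analysis; the task set, utilizations, and two-core platform are illustrative assumptions.

```python
# Minimal task-to-core allocation as a constraint program (OR-Tools CP-SAT).
# Utilizations are scaled to integer percentages because CP-SAT is integer-only.
from ortools.sat.python import cp_model

tasks = {'t1': 30, 't2': 45, 't3': 25, 't4': 60}   # utilization in % (assumed)
cores = ['c0', 'c1']

model = cp_model.CpModel()
# x[t, c] == 1 iff task t is allocated to core c
x = {(t, c): model.NewBoolVar(f'{t}_on_{c}') for t in tasks for c in cores}

for t in tasks:                                    # each task on exactly one core
    model.Add(sum(x[t, c] for c in cores) == 1)
for c in cores:                                    # per-core utilization <= 100%
    model.Add(sum(tasks[t] * x[t, c] for t in tasks) <= 100)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for t in tasks:
        print(t, '->', next(c for c in cores if solver.Value(x[t, c])))
else:
    print('no feasible allocation')                # CP can prove infeasibility
```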
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to prove theoretical results or to test hardware/products under development. Although this is an attractive solution, being a low-cost, easy and fast way to carry out coursework, it has major disadvantages. As everything is currently done with/in a computer, students are losing the "feel" for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithms, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, but real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used by different courses in which computers are part of the teaching/learning process, in order to give a more realistic feeling to students by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light and wind speed, which are either connected to a central server that the students can access over Ethernet, or connected directly to the student's computer/laptop. These sensors use the communication ports available, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor values in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools. This allows sensors that are not available at a given school to be used by getting the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can build new sensor modules in courses related to electronics development and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible in its use of resources, since the same materials serve several courses, bringing real-world data into the students' computer work.
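As a sketch of how a student program might pull a reading from one of the framework's sensors, the following assumes a pyserial connection and a one-reading-per-line text protocol; the device path, baud rate and message format are illustrative assumptions, not the framework's actual interface.

```python
# Read one temperature sample from a serial-attached sensor (assumed protocol).
import serial  # pip install pyserial

with serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=2) as port:
    raw = port.readline()                  # e.g. b'23.5\n' (room temperature)
    if raw:
        temperature = float(raw.decode('ascii').strip())
        print(f'room temperature: {temperature} C')
```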
Abstract:
In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and makes use of the Optimal Priority Assignment (OPA) algorithm, known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases on average the number of schedulable tasks and messages in a distributed system when compared to the use of Deadline Monotonic (DM), usually favoured in other works. Afterwards, we extend these results to the assignment of Parallel/Distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
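For reference, Audsley's OPA algorithm assigns priorities from the lowest level upwards, placing at each level any task that passes the schedulability test when all still-unassigned tasks are assumed to have higher priority. A minimal sketch follows; the `is_schedulable` callback is a hypothetical stand-in for the response-time test DOPA actually uses.

```python
# Audsley's Optimal Priority Assignment (OPA), parameterized by a
# schedulability test is_schedulable(task, higher_priority_tasks).
def audsley_opa(tasks, is_schedulable):
    """Return a list ordered from lowest to highest priority, or None
    if no feasible priority assignment exists."""
    unassigned = list(tasks)
    ordered = []                     # index 0 = lowest priority
    for _ in range(len(tasks)):
        # find some task schedulable at the current (lowest free) level,
        # assuming all remaining tasks run at higher priority
        for t in unassigned:
            higher = [u for u in unassigned if u is not t]
            if is_schedulable(t, higher):
                ordered.append(t)
                unassigned.remove(t)
                break
        else:
            return None              # no task fits this level: unschedulable
    return ordered
```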
Abstract:
XXXIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2015), III Workshop de Comunicação em Sistemas Embarcados Críticos. Vitória, Brazil.
Abstract:
Fractional Calculus (FC) goes back to the beginnings of the theory of differential calculus. Nevertheless, applications of FC emerged only in the last two decades, due to progress in the area of chaos, which revealed subtle relationships with FC concepts. In the field of dynamical systems theory, some work has been carried out, but the proposed models and algorithms are still in a preliminary stage of establishment. With these ideas in mind, the paper discusses an FC perspective in the study of the dynamics and control of some distributed parameter systems.
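For background (not stated in the abstract itself), a common starting point in such FC studies is the Grünwald-Letnikov fractional derivative of order $\alpha$, which reduces to the classical derivative for integer orders and is the form usually truncated in numerical control algorithms:

$$ {}_{a}D_t^{\alpha} f(t) = \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}} \sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k} f(t - kh) $$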
Abstract:
This work proposes a real-time algorithm to generate trajectories for a two-link planar robotic manipulator. The objective is to minimize the space/time ripple and either the energy requirements or the time duration of the robot trajectories. The proposed method uses an off-line genetic algorithm to calculate a trajectory between every pair of cells in the workspace grid. The resulting trajectories are saved in several trees, and any requested trajectory is then constructed in real time from these trees. The article presents the results of several experiments.
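A minimal sketch of the kind of fitness such a genetic algorithm might minimize is shown below, combining a ripple term with an energy proxy; the (N, 2) joint-trajectory encoding and the weights are illustrative assumptions, not the paper's actual objective.

```python
# Fitness for a candidate joint trajectory of the two-link arm (assumed form).
import numpy as np

def fitness(traj, w_ripple=1.0, w_energy=0.1):
    """traj: array of shape (N, 2) with the two joint angles per step."""
    dq = np.diff(traj, axis=0)      # joint displacement per step
    ddq = np.diff(dq, axis=0)       # discrete acceleration (ripple proxy)
    ripple = np.sum(ddq ** 2)
    energy = np.sum(dq ** 2)        # quadratic energy proxy
    return w_ripple * ripple + w_energy * energy
```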
Abstract:
The random amplified polymorphic DNA (RAPD) technique is a simple and reliable method to detect DNA polymorphism. Several factors can affect the amplification profiles, thereby causing false bands and non-reproducibility of the assay. In this study, we analyzed the effect of changing the concentrations of primer, magnesium chloride, template DNA and Taq DNA polymerase, with the objective of determining their optimum concentrations for the standardization of the RAPD technique for genetic studies of Cuban Triatominae. Reproducible amplification patterns were obtained using 5 pmol of primer, 2.5 mM of MgCl2, 25 ng of template DNA and 2 U of Taq DNA polymerase in a 25 µL reaction. A panel of five random primers was used to evaluate the genetic variability of T. flavida. Three of these (OPA-1, OPA-2 and OPA-4) generated reproducible and distinguishable fingerprinting patterns of Triatominae. Numerical analysis of the 52 RAPD bands amplified by all five primers was carried out with the unweighted pair group method with arithmetic mean (UPGMA). Jaccard's similarity coefficient data were used to construct a dendrogram. Two groups could be distinguished by the RAPD data, and these groups coincided with geographic origin, i.e. the populations captured in areas east and west of Guanahacabibes, Pinar del Río. T. flavida presents low interpopulation variability, which could result in greater susceptibility to pesticides in control programs. The RAPD protocol and the selected primers are useful for the molecular characterization of Cuban Triatominae.
Abstract:
Redundant manipulators have some advantages when compared with classical arms because they allow trajectory optimization, both in free space and in the presence of obstacles, and the resolution of singularities. For this type of manipulator, several kinematic algorithms adopt generalized inverse matrices. In this line of thought, the generalized inverse control scheme is tested through several experiments that reveal the difficulties that often arise. Motivated by these problems, this paper presents a new method that optimizes the manipulability through a least squares polynomial approximation to determine the joint positions. Moreover, the article studies the influence on the dynamics when controlling redundant and hyper-redundant manipulators. The experiments confirm the superior performance of the proposed algorithm for redundant and hyper-redundant manipulators, revealing several fundamental properties of the chaotic phenomena, and give a deeper insight towards the future development of superior trajectory control algorithms.
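A minimal sketch of the classical generalized-inverse kinematic scheme the abstract refers to, for a redundant three-link planar arm, is given below; the link lengths and the plain pseudoinverse step are illustrative assumptions, not the authors' improved manipulability-optimizing method.

```python
# One generalized-inverse (pseudoinverse) kinematic control step for a
# redundant 3-link planar arm: dq = J^+ dx (minimum-norm joint update).
import numpy as np

L = np.array([1.0, 0.8, 0.5])   # link lengths (assumed)

def jacobian(q):
    a1, a2, a3 = q[0], q[0] + q[1], q[0] + q[1] + q[2]   # absolute link angles
    s, c = np.sin, np.cos
    return np.array([
        [-L[0]*s(a1) - L[1]*s(a2) - L[2]*s(a3), -L[1]*s(a2) - L[2]*s(a3), -L[2]*s(a3)],
        [ L[0]*c(a1) + L[1]*c(a2) + L[2]*c(a3),  L[1]*c(a2) + L[2]*c(a3),  L[2]*c(a3)],
    ])

def ik_step(q, dx):
    """Map a small task-space displacement dx to a joint update."""
    return q + np.linalg.pinv(jacobian(q)) @ dx
```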
Abstract:
HHV-6 is the etiological agent of exanthem subitum, which is considered the sixth most frequent disease in infancy. In immunocompromised hosts, reactivation of latent HHV-6 infection may cause severe acute disease. We developed a Sybr Green Real Time PCR for HHV-6 and compared the results with nested conventional PCR. A 214 bp PCR-derived fragment was cloned using the pGEM-T Easy system from Promega. Subsequently, serial dilutions were made in a pool of negative leucocytes, from 10⁻⁶ ng/µL (equivalent to 2465.8 molecules/µL) to 10⁻⁹ ng/µL (equivalent to 2.46 molecules/µL). Dilutions of the plasmid were amplified by Sybr Green Real Time PCR, using primers HHV3 (5' TTG TGC GGG TCC GTT CCC ATC ATA 3') and HHV4 (5' TCG GGA TAG AAA AAC CTA ATC CCT 3'), and by conventional nested PCR using primers HHV1 (outer): 5' CAA TGC TTT TCT AGC CGC CTC TTC 3'; HHV2 (outer): 5' ACA TCT ATA ATT TTA GAC GAT CCC 3'; HHV3 (inner); and HHV4 (inner). The detection threshold was determined by the plasmid serial dilutions. The threshold for Sybr Green Real Time PCR was 24.6 molecules/µL, and for the nested PCR it was 2.46 molecules/µL. We chose Real Time PCR with the new Sybr Green chemistry for diagnosing and quantifying HHV-6 DNA from samples due to its sensitivity and lower risk of contamination.
Abstract:
Smart Grids (SGs) have emerged as the new paradigm for power system operation and management, being designed to include large amounts of distributed energy resources. This new paradigm requires new Energy Resource Management (ERM) methodologies considering different operation strategies and the existence of new management players, such as several types of aggregators. This paper proposes a methodology to facilitate the coalition of distributed generation units into Virtual Power Players (VPPs), considering a game theory approach. The proposed approach consists of analysing the classifications attributed by each VPP to the distributed generation units, as well as the contracts previously established by each player. The proposed classification model is based on fourteen parameters, including technical, economic and behavioural ones. Depending on the VPP's strategy, size and goals, each parameter has a different importance. VPPs can also manage other types of energy resources, such as storage units, electric vehicles, demand response programs, or even parts of the MV and LV distribution networks. A case study with twelve VPPs with different characteristics and one hundred and fifty real distributed generation units is included in the paper.
Abstract:
Further improvements in the implementation of demand response programs are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, which requires adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved to include realistic power system simulation. To illustrate the simulator's capabilities, the paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for the adoption of demand response programs by the involved players. The proposed methodology centres on a Virtual Power Player (VPP) that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed through clustering algorithms, and the operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running on 180 distinct operation scenarios.
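As a sketch of the clustering-based aggregation step, the following groups consumers by their load profiles with scikit-learn's KMeans; the synthetic profiles and the choice of three clusters are illustrative assumptions, not the simulator's actual configuration.

```python
# Aggregate consumers into groups by clustering their hourly load profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.random((32, 24))   # 32 consumers x 24 hourly loads (synthetic)

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
for g in range(3):
    members = np.flatnonzero(groups == g)
    print(f'aggregated group {g}: consumers {members.tolist()}')
```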
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted to and used in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with large penetration of DER is used to illustrate the application of the proposed model.
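For reference, the MW-mile method used in the third phase charges each resource in proportion to its use of every branch; a standard formulation (the paper's exact variant may differ) is

$$ C_r = \sum_{k} c_k \, L_k \, \frac{|P_{k,r}|}{\sum_{r'} |P_{k,r'}|} $$

where $P_{k,r}$ is the flow on branch $k$ attributed to resource $r$ by the tracing algorithms of the second phase, $L_k$ is the branch length, and $c_k$ is the cost per MW-mile.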
Abstract:
According to the new KDIGO (Kidney Disease: Improving Global Outcomes) guidelines, the term renal osteodystrophy should be used exclusively in reference to the invasive diagnosis of bone abnormalities. Due to the low sensitivity and specificity of biochemical serum markers of bone remodelling, the performance of bone biopsies is strongly encouraged in dialysis patients and after kidney transplantation. Tartrate-resistant acid phosphatase (TRACP) is an iso-enzyme of the group of acid phosphatases which is highly expressed by activated osteoclasts and macrophages. In osteoclasts, TRACP is located in intracytoplasmic vesicles that transport the products of bone matrix degradation. Being present in activated osteoclasts, the identification of this enzyme by histochemistry in undecalcified bone biopsies is an excellent method to quantify bone resorption. Since it is an enzymatic histochemical method for a thermolabile enzyme, the temperature at which it is performed is particularly relevant. This study aimed to determine the optimal temperature for the identification of TRACP in activated osteoclasts in undecalcified bone biopsies embedded in methylmethacrylate. We selected 10 cases of undecalcified bone biopsies from hemodialysis patients with a diagnosis of secondary hyperparathyroidism. Sections of 5 μm were stained to identify TRACP at different incubation temperatures (37ºC, 45ºC, 60ºC, 70ºC and 80ºC) for 30 minutes. Activated osteoclasts stained red, and trabecular (mineralized) bone was counterstained with toluidine blue. This approach also increased the visibility of the trabecular bone resorption areas (Howship lacunae). Unlike what is suggested in the literature and in several international protocols, we found that the best results were obtained at temperatures between 60ºC and 70ºC. For technical reasons, and according to the results of the present study, we recommend that, for an incubation time of 30 minutes, the reaction be carried out at 60ºC. As active osteoclasts are usually scarce in a bone section, the standardization of the histochemistry method is of great relevance to optimize the identification of these cells and increase the accuracy of the histomorphometric results. Our results, by increasing osteoclast contrast, also support the use of semi-automatic histomorphometric measurements.
Abstract:
Previously, we have presented a model for generating human-like arm and hand movements on a unimanual anthropomorphic robot involved in human-robot collaboration tasks. The present paper extends that model to address the generation of human-like bimanual movement sequences in scenarios cluttered with obstacles. Movement planning involves large-scale nonlinear constrained optimization problems, which are solved using the IPOPT solver. Simulation studies show that the model generates feasible and realistic hand trajectories for action sequences involving the two hands. The computational costs involved in the planning allow for real-time human-robot interaction. A qualitative analysis reveals that the movements of the robot exhibit basic characteristics of human movements.
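As a sketch of the kind of constrained trajectory optimization the model solves, the following minimizes path roughness subject to endpoint and obstacle-clearance constraints; the authors use IPOPT, and SciPy's SLSQP stands in here, with the 2-D waypoint encoding, obstacle and weights all illustrative assumptions.

```python
# Plan a smooth 2-D path around one circular obstacle with SLSQP.
import numpy as np
from scipy.optimize import minimize

N = 20                                     # number of waypoints
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
obstacle, radius = np.array([0.5, 0.3]), 0.2

def smoothness(z):                         # sum of squared step lengths
    p = z.reshape(N, 2)
    return np.sum(np.diff(p, axis=0) ** 2)

constraints = [
    {'type': 'eq', 'fun': lambda z: z.reshape(N, 2)[0] - start},
    {'type': 'eq', 'fun': lambda z: z.reshape(N, 2)[-1] - goal},
    {'type': 'ineq',                       # keep every waypoint outside the obstacle
     'fun': lambda z: np.linalg.norm(z.reshape(N, 2) - obstacle, axis=1) - radius},
]
z0 = np.linspace(start, goal, N).ravel()   # straight-line initial guess
res = minimize(smoothness, z0, method='SLSQP', constraints=constraints)
path = res.x.reshape(N, 2)
```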
Abstract:
Dissertation submitted for the degree of Doctor in Chemical Engineering, specialty in Biochemical Engineering.