988 results for Optimization framework
Abstract:
This work proposes a real-time algorithm to generate trajectories for a two-link planar robotic manipulator. The objective is to minimize the space/time ripple and either the energy requirements or the time duration of the robot trajectories. The proposed method uses an offline genetic algorithm to calculate every possible trajectory between all cells of the workspace grid. The resulting trajectories are saved in several trees, and any requested trajectory is then constructed in real time from these trees. The article presents the results of several experiments.
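The offline-trees idea can be sketched as follows. This is a minimal illustration with assumed names: one tree is stored per goal cell, mapping every workspace cell to its successor on the stored trajectory, so online retrieval is just a pointer walk. A breadth-first search over a 4-connected grid stands in here for the paper's genetic algorithm.

```python
from collections import deque

def build_tree(grid_size, goal):
    """Offline step: build one successor tree rooted at the goal cell.
    Each cell maps to the next cell on its stored trajectory toward the
    goal (the paper evolves these trajectories with a genetic algorithm;
    a plain BFS stands in here)."""
    rows, cols = grid_size
    successor = {goal: None}  # the goal has no successor
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in successor:
                # (r, c) is one step closer to the goal than (nr, nc)
                successor[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return successor

def trajectory(tree, start):
    """Real-time step: recover the trajectory by walking the stored tree."""
    path, cell = [], start
    while cell is not None:
        path.append(cell)
        cell = tree[cell]
    return path
```

Because all trees are computed offline, the online cost per request is linear in the trajectory length, which is what makes real-time operation possible.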
Abstract:
Random amplified polymorphic DNA (RAPD) is a simple and reliable technique to detect DNA polymorphism. Several factors can affect the amplification profiles, causing false bands and non-reproducibility of the assay. In this study, we analyzed the effect of changing the concentrations of primer, magnesium chloride, template DNA and Taq DNA polymerase, with the objective of determining their optimum concentrations for the standardization of the RAPD technique for genetic studies of Cuban Triatominae. Reproducible amplification patterns were obtained using 5 pmol of primer, 2.5 mM of MgCl2, 25 ng of template DNA and 2 U of Taq DNA polymerase in a 25 µL reaction. A panel of five random primers was used to evaluate the genetic variability of T. flavida. Three of these (OPA-1, OPA-2 and OPA-4) generated reproducible and distinguishable fingerprinting patterns of Triatominae. Numerical analysis of the 52 RAPD bands amplified by all five primers was carried out with the unweighted pair group method with arithmetic mean (UPGMA). Jaccard's similarity coefficient data were used to construct a dendrogram. Two groups could be distinguished by the RAPD data, and these groups coincided with geographic origin, i.e. the populations captured in areas east and west of Guanahacabibes, Pinar del Río. T. flavida presents low interpopulation variability, which could result in greater susceptibility to pesticides in control programs. The RAPD protocol and the selected primers are useful for molecular characterization of Cuban Triatominae.
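The similarity measure behind the dendrogram can be sketched as below: each population's RAPD profile is treated as the set of band positions scored present, and Jaccard's coefficient is the fraction of shared bands over all bands seen in either profile. The profiles and names are illustrative; the UPGMA clustering step itself is not reproduced.

```python
def jaccard(bands_a, bands_b):
    """Jaccard similarity between two RAPD banding profiles, each given
    as a set of scored band positions (1.0 = identical profiles)."""
    union = bands_a | bands_b
    if not union:
        return 1.0  # two empty profiles are trivially identical
    return len(bands_a & bands_b) / len(union)
```

The resulting pairwise similarity matrix (or its complement as a distance matrix) is what UPGMA consumes to build the dendrogram.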
Abstract:
Redundant manipulators have some advantages over classical arms because they allow trajectory optimization, both in free space and in the presence of obstacles, as well as the resolution of singularities. For this type of manipulator, several kinematic algorithms adopt generalized inverse matrices. In this line of thought, the generalized inverse control scheme is tested through several experiments that reveal the difficulties that often arise. Motivated by these problems, this paper presents a new method that optimizes the manipulability through a least-squares polynomial approximation to determine the joint positions. Moreover, the article studies the influence on the dynamics when controlling redundant and hyper-redundant manipulators. The experiments confirm the superior performance of the proposed algorithm for redundant and hyper-redundant manipulators, reveal several fundamental properties of the chaotic phenomena, and give a deeper insight towards the future development of superior trajectory control algorithms.
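For reference, the generalized-inverse baseline that the abstract contrasts against, together with the manipulability measure the proposed method optimizes, can be sketched as below. This is a hedged illustration using the standard Moore-Penrose pseudoinverse and Yoshikawa's measure; the paper's own least-squares polynomial approximation is not reproduced here.

```python
import numpy as np

def joint_velocities(J, x_dot):
    """Classical generalized-inverse scheme: the minimum-norm joint
    velocities that realize a desired end-effector velocity x_dot for a
    redundant arm with Jacobian J."""
    return np.linalg.pinv(J) @ x_dot

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T)); it drops
    to zero at singular configurations, which is why schemes like the
    one proposed try to keep it high along the trajectory."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))
```

A configuration with w close to zero signals that some task-space direction is nearly unreachable, the situation that redundancy resolution is meant to avoid.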
Abstract:
Consumer-electronics systems are becoming increasingly complex as the number of integrated applications is growing. Some of these applications have real-time requirements, while other non-real-time applications only require good average performance. For cost-efficient design, contemporary platforms feature an increasing number of cores that share resources, such as memories and interconnects. However, resource sharing causes contention that must be resolved by a resource arbiter, such as Time-Division Multiplexing. A key challenge is to configure this arbiter to satisfy the bandwidth and latency requirements of the real-time applications, while maximizing the slack capacity to improve performance of their non-real-time counterparts. As this configuration problem is NP-hard, a sophisticated automated configuration method is required to avoid negatively impacting design time. The main contributions of this article are: 1) An optimal approach that takes an existing integer linear programming (ILP) model addressing the problem and wraps it in a branch-and-price framework to improve scalability. 2) A faster heuristic algorithm that typically provides near-optimal solutions. 3) An experimental evaluation that quantitatively compares the branch-and-price approach to the previously formulated ILP model and the proposed heuristic. 4) A case study of an HD video and graphics processing system that demonstrates the practical applicability of the approach.
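A toy version of the arbiter-configuration problem can be sketched as follows. Under the assumption (illustrative, not the paper's formulation) that each real-time application needs a given number of slots in a TDM frame, a greedy pass spreads each application's slots as evenly as possible so worst-case gaps between its slots (a latency proxy) stay small, leaving unassigned slots as slack for non-real-time work. The real problem, as the abstract notes, is NP-hard and is attacked with ILP and branch-and-price.

```python
def allocate_tdm(frame_size, demands):
    """Greedy sketch of TDM arbiter configuration: demands maps each
    application to its required slot count. Applications are placed
    largest-first at evenly spaced ideal positions, probing forward on
    collisions; leftover None slots are slack capacity."""
    frame = [None] * frame_size
    for app, slots in sorted(demands.items(), key=lambda kv: -kv[1]):
        step = frame_size / slots  # ideal spacing for this application
        for i in range(slots):
            pos = round(i * step) % frame_size
            while frame[pos] is not None:  # first-fit probe on collision
                pos = (pos + 1) % frame_size
            frame[pos] = app
    return frame
```

Such a heuristic gives a feasible starting point; verifying that every application's bandwidth and latency requirements actually hold still requires the analysis the article formalizes.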
Abstract:
13th International Conference on Autonomous Robot Systems (Robotica), 2013
Abstract:
Presented at the seminar "ACTION TEMPS RÉEL: INFRASTRUCTURES ET SERVICES SYSTÈMES", 10 Apr 2015, Brussels, Belgium.
Abstract:
The world is increasingly a global community. The rapid technological development of communication and information technologies allows the transmission of knowledge in real time. In this context, it is imperative that the most developed countries devise their own strategies to stimulate the industrial sector to stay up to date and remain competitive in a dynamic and volatile global market, so as to maintain their competitive capacity and, by consequence, sustain a peaceful social state that meets the human and social needs of the nation. The path of competitiveness through technological differentiation in industrialization opens a wider and more innovative field of research. We are already facing a new phase of industrial organization and technology that is beginning to change how we relate to industry, society and human interaction in the world of work. This thesis develops an analysis of the Industrie 4.0 framework, its challenges and perspectives. It also analyzes the German approach to this future challenge: the competition Germany expects to win in future global markets and the domestic concerns felt in its industrial fabric, and it proposes recommendations for a more effective implementation of a national strategy. The research method consisted of a comprehensive review and strategic analysis of the existing global literature on the topic, whether directly or indirectly related, in parallel with the analysis of questionnaires and data provided by entities representing the industry at national and global level. The results of this multilevel analysis support the conclusion that this theme is only at the beginning of building the platform to bring the future Internet of Things into the industrial environment of Industrie 4.0.
This dissertation aims to stimulate a more strategic and operational approach within society as a whole, clarifying the existing weaknesses in this area, so that the national strategy can be implemented with effective approaches and planned actions, including a direct education and training plan for the theme.
Abstract:
The recent technological advancements and market trends are causing an interesting phenomenon towards the convergence of High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets needing huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems are increasingly concerned with providing higher performance in real-time, challenging the performance capabilities of current architectures. The advent of next-generation many-core embedded platforms has the chance of intercepting this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures integrating general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this important challenge by merging leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by proposing an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial-off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements. The main sources of indeterminism will be identified, and efficient mapping and scheduling algorithms will be proposed, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.
Abstract:
HHV-6 is the etiological agent of exanthem subitum, which is considered the sixth most frequent disease in infancy. In immunocompromised hosts, reactivation of latent HHV-6 infection may cause severe acute disease. We developed a SYBR Green real-time PCR for HHV-6 and compared the results with nested conventional PCR. A 214 bp PCR-derived fragment was cloned using the pGEM-T Easy system from Promega. Subsequently, serial dilutions were made in a pool of negative leucocytes, from 10⁻⁶ ng/µL (equivalent to 2465.8 molecules/µL) to 10⁻⁹ ng/µL (equivalent to 2.46 molecules/µL). Dilutions of the plasmid were amplified by SYBR Green real-time PCR, using primers HHV3 (5' TTG TGC GGG TCC GTT CCC ATC ATA 3') and HHV4 (5' TCG GGA TAG AAA AAC CTA ATC CCT 3'), and by conventional nested PCR, using primers HHV1 (outer): 5' CAA TGC TTT TCT AGC CGC CTC TTC 3'; HHV2 (outer): 5' ACA TCT ATA ATT TTA GAC GAT CCC 3'; HHV3 (inner); and HHV4 (inner). The detection threshold was determined from the plasmid serial dilutions: 24.6 molecules/µL for SYBR Green real-time PCR and 2.46 molecules/µL for the nested PCR. We chose real-time PCR with the new SYBR Green chemistry for diagnosing and quantifying HHV-6 DNA in samples, due to its sensitivity and lower risk of contamination.
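Copy-number equivalences for plasmid standards like the dilutions above follow from the standard mass-to-molecules conversion, sketched below. The construct size and the average 650 g/mol per base pair are illustrative assumptions, so the output is not a reproduction of the abstract's exact figures.

```python
AVOGADRO = 6.022e23          # molecules per mole
G_PER_MOL_PER_BP = 650.0     # average molar mass of one double-stranded bp

def copies_per_ul(ng_per_ul, construct_bp):
    """DNA copy number per µL from a plasmid standard's mass
    concentration. construct_bp is the total construct length (vector
    plus insert); the value used in a given lab's calculation must match
    the actual clone."""
    grams = ng_per_ul * 1e-9
    molar_mass = construct_bp * G_PER_MOL_PER_BP  # g/mol for the construct
    return grams * AVOGADRO / molar_mass
```

Each tenfold serial dilution then divides the copy number by ten, which is how a standard curve spanning several orders of magnitude is built.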
Abstract:
In the traditional paradigm, large power plants supply the reactive power required at the transmission level, while capacitors and transformer tap changers are used at the distribution level. However, in the near future it will be necessary to schedule both active and reactive power at the distribution level, due to the high number of resources connected there. This paper proposes a new multi-objective methodology for optimal resource scheduling that considers distributed generation, electric vehicles and capacitor banks for joint active and reactive power scheduling. The proposed methodology minimizes the cost of all distributed resources (economic perspective) and the voltage magnitude difference across all buses (technical perspective). The Pareto front is determined, and a fuzzy-based mechanism is applied to select the best compromise solution. The methodology has been tested on the 33-bus distribution network. The case study presents the results of three different scenarios for the economic, technical and multi-objective perspectives, and demonstrates the importance of incorporating reactive scheduling in the distribution network, using the multi-objective perspective to obtain the best compromise between the economic and technical objectives.
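The fuzzy best-compromise step can be sketched as follows, using the common max-min-style membership formulation: each objective value on the Pareto front is mapped to a satisfaction degree in [0, 1], and the solution with the highest normalized total satisfaction is picked. This is a generic illustration, not necessarily the exact membership functions the paper uses.

```python
def best_compromise(pareto):
    """Select the best compromise solution from a Pareto front given as
    a list of objective tuples, all objectives to be minimized. Linear
    fuzzy memberships: 1 at the per-objective minimum, 0 at the maximum."""
    n_obj = len(pareto[0])
    lo = [min(p[k] for p in pareto) for k in range(n_obj)]
    hi = [max(p[k] for p in pareto) for k in range(n_obj)]

    def membership(p):
        # satisfaction summed over objectives; constant objectives score 1
        return sum(
            1.0 if hi[k] == lo[k] else (hi[k] - p[k]) / (hi[k] - lo[k])
            for k in range(n_obj)
        )

    total = sum(membership(p) for p in pareto)
    return max(pareto, key=lambda p: membership(p) / total)
```

The normalization by the front-wide total does not change which solution wins, but it mirrors the usual presentation in which memberships are reported as fractions of the whole front.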
Abstract:
Dissertation presented to obtain the degree of Doctor in Electrical and Computer Engineering, specialization in Collaborative Enterprise Networks.
Abstract:
According to the new KDIGO (Kidney Disease: Improving Global Outcomes) guidelines, the term renal osteodystrophy should be used exclusively in reference to the invasive diagnosis of bone abnormalities. Due to the low sensitivity and specificity of biochemical serum markers of bone remodelling, the performance of bone biopsies is strongly encouraged in dialysis patients and after kidney transplantation. Tartrate-resistant acid phosphatase (TRACP) is an isoenzyme of the group of acid phosphatases that is highly expressed by activated osteoclasts and macrophages. In osteoclasts, TRACP is found in intracytoplasmic vesicles that transport the products of bone matrix degradation. Because it is present in activated osteoclasts, the histochemical identification of this enzyme in undecalcified bone biopsies is an excellent method to quantify bone resorption. Since it is an enzymatic histochemical method for a thermolabile enzyme, the incubation temperature is particularly relevant. This study aimed to determine the optimal temperature for the identification of TRACP in activated osteoclasts in undecalcified bone biopsies embedded in methylmethacrylate. We selected 10 undecalcified bone biopsies from hemodialysis patients with a diagnosis of secondary hyperparathyroidism. Sections of 5 μm were stained to identify TRACP at different incubation temperatures (37 ºC, 45 ºC, 60 ºC, 70 ºC and 80 ºC) for 30 minutes. Activated osteoclasts stained red, and trabecular (mineralized) bone was counterstained with toluidine blue. This approach also increased the visibility of the trabecular bone resorption areas (Howship lacunae). Unlike what is suggested in the literature and in several international protocols, we found that the best results were obtained at temperatures between 60 ºC and 70 ºC. For technical reasons, and according to the results of the present study, we recommend that, for an incubation time of 30 minutes, the reaction be carried out at 60 ºC.
As active osteoclasts are usually scarce in a bone section, standardization of the histochemical method is of great relevance to optimize the identification of these cells and increase the accuracy of the histomorphometric results. By increasing osteoclast contrast, our results also support the use of semi-automatic histomorphometric measurements.
Abstract:
Dissertation submitted to obtain the Master's degree in Electrical and Computer Engineering.
Abstract:
The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as the alternative to cope with this growing complexity. It is a holistic approach in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems where the complexity is high. The number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on these types of applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. The anomalies are detected and pinpointed by means of statistical correlation. The data analysis detects changes in the server response time and determines whether those changes are correlated with the workload or are due to a performance anomaly. In the presence of performance anomalies, the data analysis pinpoints the anomaly, and SHõWA then executes a recovery procedure. We also present a study of the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmarking applications, exercised through dynamic workloads, and different types of anomalies were considered in the study.
The results reveal that (1) SHõWA detects and pinpoints anomalies while the number of affected end users is still low; (2) SHõWA was able to detect anomalies without raising any false alarms; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the response time delay was no more than 2 milliseconds).
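The correlation-based detection idea can be sketched as follows. Under illustrative assumptions (a fixed correlation threshold and a simple two-half window, which are not SHõWA's actual parameters), a rising response time that no longer tracks the workload is flagged as a performance anomaly rather than a legitimate load increase.

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length
    series (assumes neither series is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def performance_anomaly(workload, response_time, corr_threshold=0.8):
    """Flag an anomaly when response time is rising (second half of the
    window higher than the first) yet poorly correlated with workload."""
    half = len(response_time) // 2
    rising = sum(response_time[half:]) > sum(response_time[:half])
    return rising and pearson(workload, response_time) < corr_threshold
```

When response time rises together with workload, the high correlation suppresses the alarm; only load-independent degradation trips the check, which is the distinction the data-analysis module draws.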
Abstract:
Previously we presented a model for generating human-like arm and hand movements on a unimanual anthropomorphic robot involved in human-robot collaboration tasks. The present paper extends that model to the generation of human-like bimanual movement sequences in scenarios cluttered with obstacles. Movement planning involves large-scale nonlinear constrained optimization problems, which are solved using the IPOPT solver. Simulation studies show that the model generates feasible and realistic hand trajectories for action sequences involving the two hands. The computational cost of the planning allows for real-time human-robot interaction. A qualitative analysis reveals that the movements of the robot exhibit basic characteristics of human movements.