973 results for Optimization techniques
Abstract:
New-generation embedded systems demand high performance, efficiency and flexibility. Reconfigurable hardware can provide all these features. However, the costly reconfiguration process and the lack of management support have prevented a broader use of these resources. To solve these issues, we have developed a scheduler that deals with task graphs at run-time, steering their execution on the reconfigurable resources while carrying out both prefetch and replacement techniques that cooperate to hide most of the reconfiguration delays. In our scheduling environment, task graphs are analyzed at design-time to extract useful information. This information is used at run-time to obtain near-optimal schedules, avoiding local-optimum decisions, while carrying out only simple computations. Moreover, we have developed a hardware implementation of the scheduler that applies all the optimization techniques while introducing a delay of only a few clock cycles. In our experiments the scheduler clearly outperforms conventional run-time schedulers based on As-Soon-As-Possible techniques. In addition, our replacement policy, specifically designed for reconfigurable systems, achieves almost optimal results regarding both reuse and performance.
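As a rough illustration of how configuration prefetch hides reconfiguration delays, here is a minimal sketch with made-up task times and a toy two-unit model; it is not the scheduler described above, only the underlying idea.

```python
# Minimal sketch, not the scheduler described above: makespan of a linear task
# chain on reconfigurable hardware, with and without configuration prefetch.
# RECONF and EXEC are illustrative cycle counts.

RECONF = 4                      # cycles needed to load one configuration
EXEC = [6, 5, 7, 3, 8]          # execution time of each task in the chain

def makespan_no_prefetch(exec_times, reconf=RECONF):
    """ASAP-like baseline: each configuration is loaded only when its task starts,
    so every reconfiguration sits on the critical path."""
    t = 0
    for e in exec_times:
        t += reconf + e
    return t

def makespan_with_prefetch(exec_times, reconf=RECONF):
    """Two reconfigurable units used alternately: while task i runs on one unit,
    the configuration of task i+1 is loaded on the other, hiding (part of) the delay."""
    t = reconf + exec_times[0]              # the very first load cannot be hidden
    for prev, e in zip(exec_times, exec_times[1:]):
        hidden = min(reconf, prev)          # overlap is bounded by the previous task's runtime
        t += (reconf - hidden) + e
    return t

if __name__ == "__main__":
    print("no prefetch  :", makespan_no_prefetch(EXEC))    # 49 cycles
    print("with prefetch:", makespan_with_prefetch(EXEC))  # 34 cycles
```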
Abstract:
The optimal capacities and locations of a sequence of landfills are studied, and the interactions between these characteristics are considered. Deciding the capacity of a landfill has some spatial implications since it affects the feasible region for the remaining landfills, and some temporal implications because the capacity determines the lifetime of the landfill and hence the moment of time when the next landfills should be constructed. Some general mathematical properties of the solution are provided and interpreted from an economic point of view. The resulting problem turns out to be non-convex and, therefore, it cannot be solved by conventional optimization techniques. Some global optimization methods are used to solve the problem in a particular case in order to illustrate how the solution depends on the parameter values.
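To make the capacity/location coupling concrete, here is a toy, deliberately simplified version of such a problem, not the model of the paper: two landfills serve one city, the capacity of the first fixes when the second must be built, the two sites must be a minimum distance apart, and the discounted cost is non-convex, so a generic global optimizer (scipy's differential evolution, used here only as an example solver) is applied. All parameter values are invented.

```python
# Toy, illustrative landfill sequencing problem (not the paper's model).
import numpy as np
from scipy.optimize import differential_evolution

w, T, r = 100.0, 30.0, 0.05      # waste rate [t/yr], horizon [yr], discount rate
k, alpha = 50.0, 0.6             # construction cost scale, economies-of-scale exponent
c_t = 0.8                        # transport cost per tonne-km
D, x_max = 5.0, 40.0             # minimum separation and furthest admissible site [km]

def discounted_stream(rate, t0, t1):
    """Present value of a constant cost stream between years t0 and t1."""
    return rate * (np.exp(-r * t0) - np.exp(-r * t1)) / r

def total_cost(z):
    c1, x1, x2 = z
    c2 = w * T - c1              # the second landfill takes the remaining waste
    t1 = c1 / w                  # lifetime of the first landfill (temporal link)
    cost = k * c1**alpha                                  # build landfill 1 at t = 0
    cost += np.exp(-r * t1) * k * c2**alpha               # build landfill 2 at t = t1
    cost += discounted_stream(c_t * x1 * w, 0.0, t1)      # haul waste to site 1
    cost += discounted_stream(c_t * x2 * w, t1, T)        # haul waste to site 2
    if abs(x1 - x2) < D:                                  # spatial link between sites
        cost += 1e6                                       # penalty for infeasible layouts
    return cost

bounds = [(1.0, w * T - 1.0), (1.0, x_max), (1.0, x_max)]
res = differential_evolution(total_cost, bounds, seed=0)
print("capacity of landfill 1:", res.x[0], "locations:", res.x[1:], "cost:", res.fun)
```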
Abstract:
Over 2 million Anterior Cruciate Ligament (ACL) injuries occur annually worldwide, resulting in considerable economic and health burdens (e.g., suffering, surgery, loss of function, risk for re-injury, and osteoarthritis). Current screening methods are effective, but they generally rely on expensive and time-consuming biomechanical movement analysis and are thus impractical solutions. In this dissertation, I report on a series of studies that begins to investigate one potentially efficient alternative to biomechanical screening, namely skilled observational risk assessment (e.g., having experts estimate risk based on observations of athletes' movements). Specifically, in Study 1 I discovered that ACL injury risk can be accurately and reliably estimated with nearly instantaneous visual inspection when observed by skilled and knowledgeable professionals. Modern psychometric optimization techniques were then used to develop a robust and efficient 5-item test of ACL injury risk prediction skill, i.e., the ACL Injury-Risk-Estimation Quiz or ACL-IQ. Study 2 cross-validated the results from Study 1 in a larger representative sample of both skilled (Exercise Science/Sports Medicine) and unskilled (General Population) groups. In accord with research on human expertise, quantitative structural and process modeling of risk estimation indicated that superior performance was largely mediated by specific strategies and skills (e.g., ignoring irrelevant information), independent of domain-general cognitive abilities (e.g., mental rotation, general decision skill). These cognitive models suggest that ACL-IQ is a trainable skill, providing a foundation for future research and applications in training, decision support, and ultimately clinical screening investigations. Overall, I present the first evidence that observational ACL injury risk prediction is possible, including a robust technology for fast, accurate and reliable measurement, i.e., the ACL-IQ. Discussion focuses on applications and outreach, including a web platform that was developed to house the test, provide a repository for further data collection, and increase public and professional awareness and outreach (www.ACL-IQ.org). Future directions and general applications of the skilled movement analysis approach are also discussed.
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is used to speed up the element that incurs performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed; the trade-off between these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) A system verification platform is designed based on the Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
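A back-of-the-envelope relation behind this kind of workflow is Amdahl's law: the overall gain from moving a profiled hotspot into hardware is bounded by the fraction of runtime the hotspot represents. The numbers below are illustrative, not the measurements of the thesis.

```python
# Illustrative Amdahl's-law estimate of the overall speedup obtained when a
# profiled hotspot is moved to an FPGA accelerator (made-up numbers).
def overall_speedup(hotspot_fraction, accel_speedup):
    """Only the hotspot fraction of the runtime is accelerated; the rest is unchanged."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# e.g. a hotspot taking 80% of runtime, accelerated 20x in hardware
print(overall_speedup(0.80, 20.0))   # ~4.2x overall
```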
Abstract:
In this work we analyze an optimal control problem for a system of two hydroelectric power stations in cascade with reversible turbines. The objective is to optimize the profit of power production while respecting the system's restrictions. Some of these restrictions translate into state constraints, and the cost function is nonconvex, which increases the complexity of the optimal control problem. The problem is solved numerically, and two different approaches are adopted: one based on global optimization techniques (the Chen-Burer algorithm) and one based on a projection estimation refinement method (the PER method), which is used as a technique to reduce the dimension of the problem. The results and execution times of the two procedures are compared.
Abstract:
The random amplified polymorphic DNA (RAPD) technique is a simple and reliable method to detect DNA polymorphism. Several factors can affect the amplification profiles, thereby causing false bands and non-reproducibility of the assay. In this study, we analyzed the effect of changing the concentrations of primer, magnesium chloride, template DNA and Taq DNA polymerase, with the objective of determining their optimum concentrations for the standardization of the RAPD technique for genetic studies of Cuban Triatominae. Reproducible amplification patterns were obtained using 5 pmol of primer, 2.5 mM of MgCl2, 25 ng of template DNA and 2 U of Taq DNA polymerase in a 25 µL reaction. A panel of five random primers was used to evaluate the genetic variability of T. flavida. Three of these (OPA-1, OPA-2 and OPA-4) generated reproducible and distinguishable fingerprinting patterns of Triatominae. Numerical analysis of the 52 RAPD bands amplified by all five primers was carried out with the unweighted pair group method with arithmetic mean (UPGMA). Jaccard similarity coefficients were used to construct a dendrogram. Two groups could be distinguished by the RAPD data, and these groups coincided with geographic origin, i.e., the populations captured in areas east and west of Guanahacabibes, Pinar del Río. T. flavida presents low interpopulation variability, which could result in greater susceptibility to pesticides in control programs. The RAPD protocol and the selected primers are useful for the molecular characterization of Cuban Triatominae.
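The numerical step described above (UPGMA clustering of Jaccard similarities computed from presence/absence of RAPD bands) can be sketched as follows; the band matrix is invented for illustration and scipy is used as an assumed tool, not necessarily the software the authors used.

```python
# Minimal sketch of the clustering step: UPGMA (average linkage) on Jaccard
# distances computed from a presence/absence matrix of RAPD bands.
# The matrix below is made up; rows are specimens, columns are bands.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import average, dendrogram

bands = np.array([
    [1, 0, 1, 1, 0, 1, 0, 1],   # specimen A (e.g. a western population)
    [1, 0, 1, 1, 0, 1, 1, 1],   # specimen B
    [0, 1, 1, 0, 1, 0, 0, 1],   # specimen C (e.g. an eastern population)
    [0, 1, 1, 0, 1, 1, 0, 1],   # specimen D
], dtype=bool)

dist = pdist(bands, metric="jaccard")   # 1 - Jaccard similarity coefficient
tree = average(dist)                    # UPGMA = unweighted average linkage
dendrogram(tree, labels=["A", "B", "C", "D"], no_plot=True)  # plot with matplotlib if desired
print(tree)                             # linkage matrix: merge order and distances
```

Cutting the resulting dendrogram at a chosen distance yields the groups that are then compared with geographic origin.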
Abstract:
Computed tomography (CT) is a modality of choice for the study of the musculoskeletal system for various indications including the study of bone, calcifications, internal derangements of joints (with CT arthrography), as well as periprosthetic complications. However, CT remains intrinsically limited by the fact that it exposes patients to ionizing radiation. Scanning protocols need to be optimized to achieve diagnostic image quality at the lowest radiation dose possible. In this optimization process, the radiologist needs to be familiar with the parameters used to quantify radiation dose and image quality. CT imaging of the musculoskeletal system has certain specificities including the focus on high-contrast objects (i.e., in CT of bone or CT arthrography). These characteristics need to be taken into account when defining a strategy to optimize dose and when choosing the best combination of scanning parameters. In the first part of this review, we present the parameters used for the evaluation and quantification of radiation dose and image quality. In the second part, we discuss different strategies to optimize radiation dose and image quality at CT, with a focus on the musculoskeletal system and the use of novel iterative reconstruction techniques.
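For reference, the standard CT dose descriptors the text alludes to are related as follows; these are textbook relations, not results of this review, and the conversion coefficient k depends on the body region scanned.

```latex
% Standard CT dose descriptors (background only): DLP is the dose-length product,
% CTDI_vol the volume CT dose index, L the scan length, and k a region-specific
% conversion coefficient.
\[
  \mathrm{DLP} = \mathrm{CTDI_{vol}} \times L ,
  \qquad
  E \approx k \cdot \mathrm{DLP},
\]
% where E is the estimated effective dose in mSv when DLP is given in mGy.cm.
```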
Abstract:
This thesis considers optimization problems arising in printed circuit board (PCB) assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place-type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB, one-machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or addressed with newly developed heuristic algorithms. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests confirm that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
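One subproblem in this family is sequencing the placement operations so that the gantry head travels a short path. Purely as an illustrative sketch, not one of the thesis's algorithms, a nearest-neighbour greedy heuristic for that subproblem could look like this, with made-up board coordinates and a Chebyshev travel metric as assumptions.

```python
# Illustrative placement-sequencing sketch: order the placement locations so the
# gantry head travels a short path. Chebyshev distance roughly models a gantry
# whose X and Y axes move simultaneously.

def chebyshev(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def greedy_placement_order(points, start=(0.0, 0.0)):
    """Return an ordering of placement points, always visiting the nearest unplaced one."""
    remaining = list(points)
    order, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: chebyshev(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

placements = [(10, 5), (2, 8), (7, 1), (3, 3), (9, 9)]   # made-up board coordinates (mm)
route = greedy_placement_order(placements)
travel = sum(chebyshev(a, b) for a, b in zip([(0.0, 0.0)] + route, route))
print(route, "travel:", travel)
```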
Abstract:
The cellular structure of healthy food products, with added dietary fiber and low in calories, is an important factor that contributes to the assessment of quality, and it can be quantified by image analysis of visual texture. This study compares image analysis techniques (binarization using Otsu's method and the default ImageJ algorithm, a variation of the iterative intermeans method) for quantifying differences in the crumb structure of breads made with different percentages of whole-wheat flour and fat replacer, and discusses the behavior of the parameters number of cells, mean cell area, cell density, and circularity using response surface methodology. Comparative analysis of the results achieved with the Otsu and default ImageJ algorithms showed a significant difference between the studied parameters. The Otsu method represented the crumb structure of the analyzed breads more reliably than the default ImageJ algorithm, and is thus the most suitable in terms of structural representation of the crumb texture.
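A sketch of the kind of measurement pipeline described (Otsu binarization followed by extraction of cell count, mean cell area, cell density and circularity); scikit-image is used here as an assumed stand-in for ImageJ, and "crumb.png" is a placeholder file name.

```python
# Sketch of a crumb-structure measurement: binarize the image with Otsu's
# threshold, then derive cell count, mean cell area, cell density and circularity.
import numpy as np
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

img = imread("crumb.png", as_gray=True)        # placeholder file name
cells = img < threshold_otsu(img)              # dark gas cells against the brighter crumb
labels = label(cells)
props = [p for p in regionprops(labels) if p.area >= 5]   # drop speckle noise

n_cells = len(props)
mean_area = np.mean([p.area for p in props])
cell_density = n_cells / cells.size            # cells per pixel of the analyzed image
circularity = np.mean([4 * np.pi * p.area / p.perimeter**2
                       for p in props if p.perimeter > 0])
print(n_cells, mean_area, cell_density, circularity)
```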
Abstract:
The thesis deals with the preparation and dielectric characterization of polyaniline and its analogues in the 2-4 GHz ISM band, which lies within the microwave region (300 MHz to 300 GHz) of the electromagnetic spectrum, together with an initial dielectric study at high frequency [0.05 MHz-13 MHz]. Polyaniline has been synthesized by an in situ doping reaction at different temperatures and in the presence of inorganic dopants such as HCl, H2SO4, HNO3 and HClO4, and organic dopants such as camphorsulphonic acid [CSA], toluenesulphonic acid [TSA] and naphthalenesulphonic acid [NSA]. The variation in dielectric properties with changes in reaction temperature, dopant and frequency has been studied. The effect of codopants and microemulsions on the dielectric properties has also been studied in the ISM band. The ISM band of frequencies (2-4 GHz) is of great utility in Industrial, Scientific and Medical (ISM) applications. Microwave heating is a very efficient method of heating dielectric materials and is extensively used in industrial as well as household heating applications.
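For context, and as a standard relation rather than a result of the thesis, the microwave power dissipated per unit volume of a dielectric depends directly on the imaginary part of the permittivity, which is why the dielectric loss measured in the ISM band governs heating efficiency.

```latex
% Standard dielectric heating relations (textbook background, not thesis results):
% loss tangent and volumetric power dissipation in an alternating field of
% frequency f and rms field strength E_rms.
\[
  \tan\delta = \frac{\varepsilon''}{\varepsilon'},
  \qquad
  P_v = 2\pi f\,\varepsilon_0\,\varepsilon''\,E_{\mathrm{rms}}^{2},
\]
% so, at a fixed ISM-band frequency, a larger \varepsilon'' means more microwave
% power is absorbed per unit volume of the material.
```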
Abstract:
The research activity described in this thesis is focused mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components and on the study of dynamic simulation techniques applied to integrated building design, in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on second-law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then clarified. For hydrogen, carbon dioxide, and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of the molar exergy and molar flow exergy when the temperature and pressure of the fuel are different from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method, implemented through the software package COMSOL Multiphysics. The simulation method allows the determination of the precise values of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. In addition to the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a thorough process for the plant design of a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to match 70% of domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. This chapter includes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS, for evaluating the energy requirements of the building complex; 2) ground-coupled heat pumps modelled by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchangers by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy loss during the operation of the plants.
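The statement that molar exergy and molar flow exergy coincide at environmental conditions can be read directly from the usual decomposition of flow exergy into a physical and a chemical part; the notation below is standard textbook notation, not necessarily that of the thesis.

```latex
% Standard decomposition of the molar flow exergy of a fuel stream into a
% thermo-mechanical (physical) part and a chemical part. When the fuel enters at
% the environmental state (T = T_0, p = p_0) the physical part vanishes, so the
% molar flow exergy reduces to the molar (chemical) exergy.
\[
  \bar{e}_{f} \;=\; \underbrace{(\bar{h}-\bar{h}_0) - T_0(\bar{s}-\bar{s}_0)}_{\text{physical}} \;+\; \bar{e}^{\,ch},
  \qquad
  \bar{e}_{f}\big|_{T=T_0,\;p=p_0} \;=\; \bar{e}^{\,ch}.
\]
```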
Abstract:
During the last few decades, unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers now face a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis, two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General-Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques, with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.
Abstract:
Nowadays, computing platforms consist of a very large number of components that need to be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes the performance and meets the electrical specifications, plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. In this thesis, to overcome the mentioned difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques in order to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. The implementation of these two steps is detailed in this thesis. The usefulness of the methodology is corroborated by contrasting the results using real problems and experiments designed to test the limits of the algorithms.
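As a minimal illustration of the kind of evolutionary selection step mentioned, a tiny genetic algorithm can pick one converter per load so that a weighted sum of cost and conversion losses is minimized. The converter catalogue, weights and GA settings below are made up for the sketch and are not the methodology of the thesis.

```python
# Minimal sketch (made-up catalogue, not the thesis methodology): a tiny genetic
# algorithm that picks one converter per load so that total cost plus power loss
# is minimized. Each gene is the index of the converter chosen for that load.
import random
random.seed(0)

LOADS_W = [5.0, 12.0, 3.0, 20.0]                  # power drawn by each load [W]
CATALOGUE = [                                      # (cost [$], efficiency) per option
    (0.8, 0.82), (1.5, 0.90), (2.7, 0.95),
]
COST_WEIGHT, LOSS_WEIGHT = 1.0, 0.5

def fitness(genome):
    cost = sum(CATALOGUE[g][0] for g in genome)
    loss = sum(p * (1.0 / CATALOGUE[g][1] - 1.0) for g, p in zip(genome, LOADS_W))
    return COST_WEIGHT * cost + LOSS_WEIGHT * loss   # lower is better

def evolve(pop_size=30, generations=50, mutation=0.1):
    pop = [[random.randrange(len(CATALOGUE)) for _ in LOADS_W] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LOADS_W))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:            # point mutation
                child[random.randrange(len(child))] = random.randrange(len(CATALOGUE))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("chosen converters:", best, "objective:", round(fitness(best), 2))
```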