910 results for Multi-objective Optimization (MOO)


Relevance: 30.00%

Abstract:

Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. Assuming a uniform temperature for a large circuit leads to inaccurate determination of key design parameters, so improving design quality requires precise temperature estimation at a detailed spatial resolution, which is computationally intensive. Consequently, thermal analysis of designs needs to be performed at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and power consumption. With this thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows, and it accounts for temperature non-uniformities and self-heating during analysis. Furthermore, we extend the temperature-variation-aware analysis to 3D MPSoCs with Wide-I/O DRAM. We reduce DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-bank DRAM refresh period accordingly. We develop an advanced virtual platform that models in detail the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAM. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address remapping capability, is built and verified on real hardware.
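
As an illustration of the per-bank refresh adaptation mentioned above, the sketch below (not the thesis's tool) scales the refresh period with bank temperature under the common rule of thumb that DRAM retention time roughly halves for every ~10 °C of temperature increase; the baseline period, reference temperature and bank temperatures are assumed values.

    # Minimal sketch (not the MiMAPT/3D-MPSoC tool itself): per-bank refresh period
    # scaling under the assumption that DRAM retention time roughly halves per ~10 degC.
    BASE_PERIOD_MS = 64.0     # baseline refresh period at the worst-case temperature
    T_WORST_C = 85.0          # temperature the baseline period is specified for (assumed)
    HALVING_STEP_C = 10.0     # assumed retention-halving step (illustrative)

    def refresh_period_ms(bank_temp_c: float) -> float:
        """Longer refresh period for cooler banks, shorter for hotter ones."""
        return BASE_PERIOD_MS * 2.0 ** ((T_WORST_C - bank_temp_c) / HALVING_STEP_C)

    # Example: per-bank temperatures as they might come from a thermal simulation
    bank_temps = {"bank0": 58.0, "bank1": 71.0, "bank2": 83.0, "bank3": 89.0}
    for bank, t in bank_temps.items():
        print(f"{bank}: {t:5.1f} degC -> refresh every {refresh_period_ms(t):6.1f} ms")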

Relevance: 30.00%

Abstract:

In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent a solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit peculiar characteristics, such as a small enthalpy drop, a low speed of sound and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data; the model predicts turbine performance within an accuracy of ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm in which the turbine total-to-static efficiency is the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine is the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles, which suggests that a more accurate analysis could be obtained by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees the greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and thus the cycle performance, and the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
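
The following sketch shows the kind of genetic-algorithm loop such a design procedure couples to the turbine model; the design variables, their bounds and the analytic "efficiency" surrogate are invented stand-ins for the validated mean-line performance model.

    # Illustrative sketch only: a small real-coded genetic algorithm maximizing a
    # synthetic total-to-static efficiency surrogate instead of the actual loss model.
    import random

    BOUNDS = [(0.2, 0.8),   # flow coefficient (assumed design variable)
              (0.8, 1.6),   # loading coefficient (assumed design variable)
              (0.3, 0.7)]   # degree of reaction (assumed design variable)

    def efficiency(x):
        """Placeholder objective; the real tool evaluates the validated turbine model."""
        phi, psi, r = x
        return 0.9 - (phi - 0.5) ** 2 - 0.5 * (psi - 1.1) ** 2 - (r - 0.5) ** 2

    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in BOUNDS]

    def crossover(a, b):
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(x, rate=0.2):
        return [min(hi, max(lo, xi + random.gauss(0, 0.05 * (hi - lo))))
                if random.random() < rate else xi
                for xi, (lo, hi) in zip(x, BOUNDS)]

    pop = [random_individual() for _ in range(40)]
    for _ in range(60):
        pop.sort(key=efficiency, reverse=True)
        parents = pop[:10]                      # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(30)]
    best = max(pop, key=efficiency)
    print("best design:", [round(v, 3) for v in best], "eta_ts ~", round(efficiency(best), 3))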

Relevance: 30.00%

Abstract:

During the last few decades, an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-Chip (SoC), the design space that must be explored to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have long been used to enable hardware-software co-design, but today they must cope with the huge complexity of both the hardware and the software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increasing simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware, and 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and where possible overcoming, some of the challenges introduced by the many-core design paradigm.

Relevance: 30.00%

Abstract:

This thesis builds and discusses applications of mathematical models to energy problems, on both the thermal and the electrical side. The objective is to show how mathematical programming techniques developed within Operational Research can give useful answers in the energy sector, how they can provide tools to support the decision-making processes of companies operating in energy production and distribution, and how they can be successfully used to run simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology by comparing it with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review of mathematical modelling in the energy sector. The second part presents mathematical models for district heating strategic network design and incremental network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, with consideration of battery degradation. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to investigate the combination of battery costs and performance at which such systems become economic. We also investigate how the system design should change when battery degradation is taken into account.
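
A minimal sketch of the kind of linear program described for the off-grid battery problem is given below; the horizon, solar/load profiles, efficiency and the linear degradation cost per kWh discharged are illustrative assumptions, not the thesis's actual model or data.

    # Hedged sketch of an off-grid solar-plus-battery operation LP with a degradation term.
    import numpy as np
    from scipy.optimize import linprog

    solar = np.array([0.0, 3.0, 4.0, 1.0])   # kWh available per hour (assumed)
    load  = np.array([2.0, 2.0, 2.0, 2.0])   # kWh demanded per hour (assumed)
    T = len(load)
    cap, s0, eta = 6.0, 3.0, 0.95            # battery capacity, initial SoC, charge efficiency
    c_deg, c_unserved = 0.05, 1.0            # $/kWh discharged (degradation), $/kWh of unmet load

    # Decision vector x = [charge(T), discharge(T), unserved(T), soc(T)]
    n = 4 * T
    cost = np.concatenate([np.zeros(T), c_deg * np.ones(T), c_unserved * np.ones(T), np.zeros(T)])

    # Energy balance (inequality, surplus solar may be curtailed):
    #   solar + discharge + unserved - charge >= load
    A_ub = np.zeros((T, n)); b_ub = np.zeros(T)
    for t in range(T):
        A_ub[t, t] = 1.0           # +charge
        A_ub[t, T + t] = -1.0      # -discharge
        A_ub[t, 2 * T + t] = -1.0  # -unserved
        b_ub[t] = solar[t] - load[t]

    # Storage dynamics (equality): soc_t - soc_{t-1} - eta*charge_t + discharge_t = 0
    A_eq = np.zeros((T, n)); b_eq = np.zeros(T)
    for t in range(T):
        A_eq[t, 3 * T + t] = 1.0
        if t > 0:
            A_eq[t, 3 * T + t - 1] = -1.0
        else:
            b_eq[t] = s0
        A_eq[t, t] = -eta
        A_eq[t, T + t] = 1.0

    bounds = [(0, None)] * (3 * T) + [(0, cap)] * T
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print("optimal cost:", round(res.fun, 3))
    print("discharge schedule:", np.round(res.x[T:2 * T], 2))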

Relevance: 30.00%

Abstract:

Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other side in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on a standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is a performance overhead, the eFPGA analysis targets small area budgets. The configuration bitstream is generated by a custom CAD flow environment, which has allowed functional verification and performance evaluation through an application-aware analysis.
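
The abstract does not specify the MSSN topology, but as an illustration of why a multi-stage network is attractive for a synthesizable interconnect, the sketch below compares the switch-point count of a classical three-stage Clos network (one well-known multi-stage switching network) with that of a flat crossbar.

    # Illustrative only: switch-point counts of a symmetric 3-stage Clos network C(m, n, r)
    # versus a flat N x N crossbar; the eFPGA's actual MSSN topology is not given above.
    def clos_crosspoints(n: int, r: int, m: int) -> int:
        """r ingress n x m switches, m middle r x r switches, r egress m x n switches."""
        return 2 * r * n * m + m * r * r

    for N, n in [(64, 8), (256, 16), (1024, 32)]:
        r = N // n
        m = 2 * n - 1   # strictly non-blocking condition m >= 2n - 1
        print(f"N={N:5d}: crossbar={N * N:8d}  Clos(m={m})={clos_crosspoints(n, r, m):8d}")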

Relevance: 30.00%

Abstract:

Systems Biology is an innovative way of doing biology that has recently emerged in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interactions and the global properties that emerge at the system level by means of the interactions among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in computer science as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
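
A deliberately minimal sketch of this MAS style of model (not the thesis's virtual laboratory) is shown below: each cell is an autonomous agent situated in a niche of bounded capacity, and population-level behaviour emerges from local divide/differentiate/die rules whose probabilities are arbitrary placeholders.

    # Toy agent-based model of cells in a niche; all rates are illustrative assumptions.
    import random

    class Cell:
        def __init__(self, kind="stem"):
            self.kind = kind

        def step(self, niche):
            """Local rule: divide if the niche has room, otherwise maybe differentiate or die."""
            if len(niche.cells) < niche.capacity and random.random() < 0.3:
                niche.cells.append(Cell(self.kind))
            elif self.kind == "stem" and random.random() < 0.1:
                self.kind = "progenitor"          # differentiation as a state change
            elif random.random() < 0.05:
                niche.cells.remove(self)          # cell death frees niche space

    class Niche:
        def __init__(self, capacity, n_stem):
            self.capacity = capacity
            self.cells = [Cell() for _ in range(n_stem)]

        def step(self):
            for cell in list(self.cells):         # copy: agents may add/remove cells
                cell.step(self)

    niche = Niche(capacity=50, n_stem=5)
    for t in range(30):
        niche.step()
    stem = sum(c.kind == "stem" for c in niche.cells)
    print(f"after 30 steps: {len(niche.cells)} cells, {stem} stem, {len(niche.cells) - stem} progenitor")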

Relevance: 30.00%

Abstract:

This thesis, which is part of a project exploring approaches to multi-platform programming between Java and iOS, aims to continue and extend the study of the RoboVM tool, in particular through the development of the iTuCSoN application, a port of the Command Line Interpreter contained in TuCSoN (http://tucson.apice.unibo.it/).

Relevance: 30.00%

Abstract:

Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
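
The shape metric quoted above, the ratio of the smallest to the largest principal moment of inertia of the tumor material, can be computed from a voxelized tumor mask as in the sketch below; the ellipsoidal mask is synthetic stand-in data.

    # Sketch: I_min / I_max of a binary tumor mask, assuming unit mass per voxel.
    import numpy as np

    def inertia_ratio(mask, spacing=(1.0, 1.0, 1.0)):
        """Return the ratio of smallest to largest principal moment of inertia."""
        coords = np.argwhere(mask) * np.asarray(spacing)   # voxel centres in mm
        r = coords - coords.mean(axis=0)                   # centre-of-mass frame
        x, y, z = r[:, 0], r[:, 1], r[:, 2]
        I = np.array([[np.sum(y**2 + z**2), -np.sum(x*y),         -np.sum(x*z)],
                      [-np.sum(x*y),         np.sum(x**2 + z**2), -np.sum(y*z)],
                      [-np.sum(x*z),        -np.sum(y*z),          np.sum(x**2 + y**2)]])
        eig = np.linalg.eigvalsh(I)                        # principal moments, ascending
        return eig[0] / eig[2]

    # Synthetic example: an ellipsoidal "tumor" with semi-axes 10, 7 and 5 voxels.
    zz, yy, xx = np.mgrid[-12:13, -12:13, -12:13]
    mask = (xx / 10.0)**2 + (yy / 7.0)**2 + (zz / 5.0)**2 <= 1.0
    print("I_min / I_max =", round(inertia_ratio(mask), 3))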

Relevance: 30.00%

Abstract:

The objective of this study was to investigate whether it is possible to pool together diffusion spectrum imaging (DSI) data from four different scanners located at three different sites. Two of the scanners had identical configurations, whereas the other two did not. To measure the variability, we extracted three scalar maps (ADC, FA and GFA) from the DSI and used both a region-based and a tract-based analysis. Additionally, a phantom study was performed to rule out potential factors arising from scanner performance in case a systematic bias occurred in the subject study. This work was split into three experiments: intra-scanner reproducibility, reproducibility with twin-scanner settings and reproducibility with other configurations. Overall, for the intra-scanner and twin-scanner experiments, the coefficient of variation (CV) of the region-based analysis was in the range 1%-4.2%, and below 3% for almost every bundle in the tract-based analysis. The uncinate fasciculus showed the worst reproducibility, especially for the FA and GFA values (CV 3.7-6%). For the GFA and FA maps, an ICC value of 0.7 or above is observed in almost all regions/tracts. The last experiment showed a very high similarity between the outcomes of the two scanners with identical settings; however, this was not the case for the two other imagers. Given that the overall variation in our study is low for the imagers with identical settings, our findings support the feasibility of cross-site pooling of DSI data from identical scanners.
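
For reference, the region-based variability metric used here, the coefficient of variation, is simply the standard deviation divided by the mean across repeated acquisitions; the sketch below computes a percent CV per region from invented placeholder FA values (the ICC, which rests on a two-way ANOVA decomposition, is omitted).

    # CV per region across repeated acquisitions; the FA values below are placeholders.
    import numpy as np

    # rows = repeated acquisitions (sessions/scanners), columns = regions of interest
    fa = np.array([[0.45, 0.52, 0.61, 0.38],
                   [0.46, 0.51, 0.63, 0.40],
                   [0.44, 0.53, 0.60, 0.39]])

    cv = fa.std(axis=0, ddof=1) / fa.mean(axis=0) * 100.0   # percent CV per region
    for i, v in enumerate(cv):
        print(f"region {i}: CV = {v:.1f}%")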

Relevance: 30.00%

Abstract:

The problem of optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems where the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length of design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations, while full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have different sizes from their initial sizes, and the process repeats, increasing the size of the sub-populations of fitter solutions. Both techniques are applied to several MGADSM problems. They have the capability to determine the number of swing-bys, the planets to swing by, the launch and arrival dates, and the number of deep space maneuvers as well as their locations, magnitudes, and directions in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved. The J2 perturbation and zonal coverage are considered to design repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced; the orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically, every desired number of days.
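
The hidden-genes idea can be illustrated on a toy problem in which the number of design variables is itself optimized: every chromosome carries the maximum number of genes and undergoes full-length genetic operations, but only the effective genes enter the cost. The sketch below is conceptual only and does not reproduce the MGADSM cost function.

    # Conceptual hidden-genes GA sketch on a toy cost; gene meanings are invented.
    import random

    MAX_GENES = 6

    def cost(chrom):
        """Only the effective genes are evaluated; hidden genes are ignored."""
        n_active = chrom[0]
        effective = chrom[1:1 + n_active]
        return sum((g - 0.5) ** 2 for g in effective) + 0.1 * n_active

    def random_chrom():
        return [random.randint(1, MAX_GENES)] + [random.random() for _ in range(MAX_GENES)]

    def crossover(a, b):
        cut = random.randint(1, MAX_GENES)
        return a[:cut] + b[cut:]      # full-length recombination, hidden genes included

    def mutate(c):
        c = c[:]
        c[0] = max(1, min(MAX_GENES, c[0] + random.choice([-1, 0, 1])))   # n_active can change
        i = random.randint(1, MAX_GENES)
        c[i] = random.random()
        return c

    pop = [random_chrom() for _ in range(30)]
    for _ in range(80):
        pop.sort(key=cost)
        elite = pop[:10]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(20)]
    best = min(pop, key=cost)
    print("n_active =", best[0], " cost =", round(cost(best), 4))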

Relevance: 30.00%

Abstract:

The novel approach to carbon capture and storage (CCS) described in this dissertation is a significant departure from the conventional approach to CCS. The novel approach uses a sodium carbonate solution to first capture CO2 from post-combustion flue gas streams. The captured CO2 is then reacted with an alkaline industrial waste material, at ambient conditions, to regenerate the carbonate solution and permanently store the CO2 in the form of a value-added carbonate mineral. Conventional CCS makes use of a hazardous amine solution for CO2 capture, a costly thermal regeneration stage, and the underground storage of supercritical CO2. The objective of the present dissertation was to examine each individual stage (capture and storage) of the proposed approach to CCS. Study of the capture stage found that a 2% w/w sodium carbonate solution was optimal for CO2 absorption in the present system: the 2% solution yielded the best tradeoff between the CO2 absorption rate and the CO2 absorption capacity of the solutions tested. Examination of CO2 absorption in the presence of flue gas impurities (NOx and SOx) found that carbonate solutions possess a significant advantage over amine solutions in that they can be used for multi-pollutant capture: all of the NOx and SOx fed to the carbonate solution was captured. Optimization studies found that it was possible to increase the absorption rate of CO2 into the carbonate solution by as much as 14% by adding a surfactant that chemically alters the gas bubble size. Three coal combustion fly ash materials were chosen as the alkaline industrial waste materials with which to study CO2 storage and absorbent regeneration. X-ray diffraction analysis of reacted fly ash samples confirmed that the captured CO2 reacts with the fly ash materials to form a carbonate mineral, specifically calcite. Studies found that, after a five-day reaction time, 75% utilization of the waste material for CO2 storage could be achieved while regenerating the absorbent. The regenerated absorbent exhibited a nearly identical CO2 absorption capacity and absorption rate to a fresh Na2CO3 solution.
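
As a back-of-the-envelope check on the capture chemistry (not data from the dissertation), the sketch below computes the theoretical CO2 uptake of a 2% w/w Na2CO3 solution assuming complete conversion of carbonate to bicarbonate (Na2CO3 + CO2 + H2O -> 2 NaHCO3), and the calcite mass that would form if all of that CO2 were later mineralized by calcium in the fly ash.

    # Stoichiometric estimate only; real uptake depends on kinetics and equilibrium.
    M_NA2CO3, M_CO2, M_CACO3 = 105.99, 44.01, 100.09   # molar masses, g/mol

    solution_kg = 1.0
    na2co3_g = 0.02 * solution_kg * 1000.0             # 2% w/w -> 20 g per kg of solution
    mol_carbonate = na2co3_g / M_NA2CO3                # 1 mol CO2 absorbed per mol Na2CO3

    co2_g = mol_carbonate * M_CO2
    calcite_g = mol_carbonate * M_CACO3                # 1 mol CaCO3 per mol CO2 mineralized
    print(f"theoretical capture: {co2_g:.1f} g CO2 per kg of 2% solution")
    print(f"potential calcite formed on regeneration: {calcite_g:.1f} g")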

Relevance: 30.00%

Abstract:

In contemporary societies there are different ways to perceive the relation between identity and alterity and to describe the difference between “us” and “them”, residents and foreigners. The anthropologist Sandra Wallman argues that in multi-cultural urban spaces the frontiers of diversity are not only burdensome markers of identity; they can also represent new chances to define “identity” and “alterity”. These frontiers, in fact, can work like interfaces through which to build, time after time and in a creative way, a relationship with the other. From this point of view, the concept of boundary can offer many opportunities to creatively define the relation with the other and to open new options for cognitive and physical movement. On the other side, in many cases there are plenty of mechanisms of exclusion that transform a purely empirical distinction between “us” and “them” into an ontological contrast, as when the immigrant is subjected to hostility through discriminatory language. Even though these forms of racism are undoubtedly objectionable from a theoretical point of view, they are nonetheless socially “real”, in the sense that they are perpetually reaffirmed and strengthened in public opinion. They are in fact implicit “truths”, realities that are considered objective, common opinions that are part of day-to-day existence. That is the reason why an anthropological perspective including the study of “common sense” should be adopted in present-day studies on migration, as pointed out by the American anthropologist Michael Herzfeld. My primary goal is to analyze, with such a critical approach, some preconditions of racism and exclusion in contemporary multi-cultural urban spaces. On the other hand, this essay also investigates positive strategies of comparing, interchanging and negotiating alterity in social work. I suggest that this approach can offer positive solutions in coping with “diversity” and in working out policies for recognizing a common identity which, at the same time, do not disregard the relevance of political and economic power.

Relevance: 30.00%

Abstract:

We present in this paper several contributions to collision detection optimization centered on hardware performance. We focus on the broad phase, which is the first step of the collision detection process, and propose three new ways of parallelizing the well-known Sweep and Prune algorithm. We first developed a multi-core model that takes into account the number of available cores. The multi-core architecture enables us to distribute the geometric computations using multi-threading; critical writing sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, like OpenMP, appears to be a good compromise for code portability. We then proposed a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. Our technique is based on a spatial subdivision method used to distribute computations among GPUs. Results show that a significant speed-up can be obtained by scaling from 1 to 4 GPUs in a large-scale environment.
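
For context, the sketch below shows a plain sequential form of the Sweep and Prune broad phase that the paper parallelizes: axis-aligned bounding-box endpoints are sorted along one axis and swept, and every pair whose intervals overlap on that axis is reported as a candidate for the narrow phase.

    # Sequential Sweep and Prune along a single axis (the parallel versions are not shown).
    def sweep_and_prune(aabbs):
        """aabbs: dict id -> (min_x, max_x). Returns candidate overlapping pairs."""
        events = []
        for obj, (lo, hi) in aabbs.items():
            events.append((lo, 0, obj))   # 0 = interval opens (sorts before a close at same x)
            events.append((hi, 1, obj))   # 1 = interval closes
        events.sort()

        active, candidates = set(), []
        for _, kind, obj in events:
            if kind == 0:
                candidates.extend((other, obj) for other in active)   # overlaps every open interval
                active.add(obj)
            else:
                active.discard(obj)
        return candidates

    boxes = {"A": (0.0, 2.0), "B": (1.5, 3.0), "C": (4.0, 5.0), "D": (2.8, 4.5)}
    print(sweep_and_prune(boxes))   # e.g. [('A', 'B'), ('B', 'D'), ('D', 'C')]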

Relevance: 30.00%

Abstract:

A major objective in ecology is to find general patterns and to establish the rules and underlying mechanisms that generate those patterns. Nevertheless, most of our current insights in ecology are based on case studies of a single or few species, whereas multi-species experimental studies remain rare. We underline the power of the multi-species experimental approach for addressing general ecological questions, e.g. on species' environmental responses or on patterns of among- and within-species variation. We present simulations showing that the accuracy of estimates of between-group differences is increased by maximizing the number of species rather than the number of populations or individuals per species; thus, the more species a multi-species experiment includes, the more powerful it is. In addition, we discuss some inevitable methodological challenges of multi-species experiments. While we acknowledge the value of single- or few-species experiments, we strongly advocate the use of multi-species experiments for addressing ecological questions at a more general level.
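
A hedged simulation in the spirit of the one described above (not the authors' code) is sketched below: with species-level and individual-level variation of equal magnitude and a fixed sampling budget per group, the error of the estimated between-group difference shrinks much faster when the budget is spent on more species than on more individuals per species; all variances and sample sizes are arbitrary placeholders.

    # Illustrative hierarchical-sampling simulation; parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    TRUE_DIFF, SD_SPECIES, SD_INDIV = 1.0, 1.0, 1.0
    BUDGET = 240                      # total individuals per group

    def estimate_error(n_species, n_per_species, reps=2000):
        errs = []
        for _ in range(reps):
            # species-level random effects, drawn independently for each group
            sp_a = rng.normal(0.0, SD_SPECIES, n_species)
            sp_b = rng.normal(TRUE_DIFF, SD_SPECIES, n_species)
            mean_a = np.mean(sp_a[:, None] + rng.normal(0, SD_INDIV, (n_species, n_per_species)))
            mean_b = np.mean(sp_b[:, None] + rng.normal(0, SD_INDIV, (n_species, n_per_species)))
            errs.append((mean_b - mean_a) - TRUE_DIFF)
        return np.sqrt(np.mean(np.square(errs)))   # RMSE of the group-difference estimate

    for n_species in (4, 12, 40):
        n_per = BUDGET // n_species
        print(f"{n_species:3d} species x {n_per:3d} individuals -> RMSE ~ "
              f"{estimate_error(n_species, n_per):.3f}")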