973 results for Mixed-integer dynamic optimization
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is computationally very intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial Front-end and Back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
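As a rough illustration of the temperature-aware refresh adaptation mentioned in this abstract, the sketch below maps per-bank temperatures to refresh periods. The thresholds, base period, and bank names are illustrative assumptions (loosely modeled on the common DDR convention of refreshing twice as often above 85 °C), not values taken from the thesis.

```python
# Hypothetical sketch: temperature-aware per-bank DRAM refresh scheduling.
# Threshold/period values are illustrative assumptions, not the thesis's policy.

BASE_REFRESH_MS = 64.0  # assumed retention-safe refresh period at low temperature

def refresh_period_ms(bank_temp_c: float) -> float:
    """Return a refresh period for one DRAM bank given its temperature."""
    if bank_temp_c <= 85.0:
        return BASE_REFRESH_MS          # normal operating range
    elif bank_temp_c <= 95.0:
        return BASE_REFRESH_MS / 2.0    # extended range: refresh twice as often
    else:
        return BASE_REFRESH_MS / 4.0    # very hot bank: refresh four times as often

def per_bank_schedule(bank_temps_c: dict[str, float]) -> dict[str, float]:
    """Map each bank (keyed by name) to its temperature-adapted refresh period."""
    return {bank: refresh_period_ms(t) for bank, t in bank_temps_c.items()}

if __name__ == "__main__":
    # Example: banks stacked above hot cores run hotter than edge banks.
    temps = {"bank0": 72.0, "bank1": 88.0, "bank2": 97.0}
    print(per_bank_schedule(temps))  # {'bank0': 64.0, 'bank1': 32.0, 'bank2': 16.0}
```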
Abstract:
The aim of this research is the development and validation of a comprehensive multibody motorcycle model featuring rigid-ring tires, taking into account both the slope and the roughness of road surfaces. A novel parametrization of the general kinematics of the motorcycle is proposed, using a mixed reference-point and relative-coordinates approach. The resulting description, developed in terms of dependent coordinates, makes it possible to efficiently include rigid-ring kinematics as well as road elevation and slope. The equations of motion for the multibody system are derived symbolically, and the constraint equations arising from the dependent-coordinate formulation are handled using a projection technique. The resulting system of equations can therefore be integrated in the time domain using a standard ODE algorithm. The model is validated against maneuvers measured experimentally on the race track, showing consistent results and excellent computational efficiency. In particular, it is also capable of reproducing the chatter vibration of racing motorcycles. The chatter phenomenon, appearing during high-speed cornering maneuvers, consists of a self-excited vertical oscillation of both the front and rear unsprung masses in the frequency range between 17 and 22 Hz. A critical maneuver is simulated numerically, and a self-excited vibration appears, consistent with the experimentally measured chatter vibration. Finally, the driving mechanism of the self-excitation is highlighted and a physical interpretation is proposed.
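For readers unfamiliar with the dependent-coordinate approach mentioned above, a generic sketch of such a formulation with constraint projection, in standard multibody notation (not necessarily the thesis's exact symbols), is:

```latex
% Generic dependent-coordinate multibody formulation with constraint projection
% (standard textbook notation): q dependent coordinates, Phi constraints,
% v independent velocities, S a null-space matrix of the constraint Jacobian.
\begin{align}
  \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}}
    + \boldsymbol{\Phi}_{\mathbf{q}}^{\mathsf T}\,\boldsymbol{\lambda}
    &= \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t),
    \qquad \boldsymbol{\Phi}(\mathbf{q},t)=\mathbf{0}, \\
  \dot{\mathbf{q}} &= \mathbf{S}(\mathbf{q})\,\mathbf{v},
    \qquad \boldsymbol{\Phi}_{\mathbf{q}}\,\mathbf{S}=\mathbf{0}, \\
  \bigl(\mathbf{S}^{\mathsf T}\mathbf{M}\,\mathbf{S}\bigr)\dot{\mathbf{v}}
    &= \mathbf{S}^{\mathsf T}\bigl(\mathbf{Q}-\mathbf{M}\,\dot{\mathbf{S}}\,\mathbf{v}\bigr).
\end{align}
```

Premultiplying by the null-space matrix eliminates the Lagrange multipliers, so the last equation can be integrated in the time domain with a standard ODE solver, which is the property the abstract relies on.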
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly requested in order to quickly find good solutions to complex strategic problems. Resource optimization is nowadays fundamental groundwork for building successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, it seems that the pace of theoretical development cannot keep up with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms, designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some solution techniques (such as Dynamic Programming), remarkable computational benefits can be obtained, lowering execution times by more than an order of magnitude and allowing us to address instances of previously intractable size. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem, and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claims by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40x.
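As an illustration of the kind of stage-level parallelism the abstract refers to, the sketch below parallelizes the inner loop of a 0/1 knapsack dynamic program: within one item stage every capacity entry depends only on the previous stage, so the entries can be computed concurrently. This toy example (thread pool, chunked capacities) is an assumption for illustration, not the thesis's actual implementation.

```python
# Minimal sketch of stage-parallel dynamic programming (0/1 knapsack):
# within a stage, each capacity entry depends only on the previous stage,
# so capacity chunks can be evaluated concurrently.
from concurrent.futures import ThreadPoolExecutor

def knapsack_parallel(values, weights, capacity, workers=4):
    prev = [0] * (capacity + 1)          # best value using the items seen so far
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for v, w in zip(values, weights):
            def solve_chunk(lo_hi, v=v, w=w, prev=prev):
                lo, hi = lo_hi
                return [max(prev[c], prev[c - w] + v) if c >= w else prev[c]
                        for c in range(lo, hi)]
            step = (capacity + workers) // workers
            chunks = [(lo, min(lo + step, capacity + 1))
                      for lo in range(0, capacity + 1, step)]
            results = pool.map(solve_chunk, chunks)   # chunks of one stage in parallel
            prev = [x for chunk in results for x in chunk]
    return prev[capacity]

if __name__ == "__main__":
    print(knapsack_parallel([60, 100, 120], [10, 20, 30], 50))  # 220
```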
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from the corresponding emissions based on measured air-handling parameters.
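A minimal sketch of such a dynamic constraint model is given below: a commanded air-handling set point (e.g., boost pressure) is passed through a second-order lag so that the "achieved" trajectory is physically plausible. The plain second-order form, the Euler integration, and the natural-frequency/damping values are simplifying assumptions; the work described above identifies a PD-controlled second-order model from measured transient data.

```python
# Hedged sketch of a dynamic constraint model: commanded -> achieved parameter.
# wn and zeta are placeholder values; in the work above they come from data.
def second_order_response(commanded, dt, wn=2.0, zeta=0.9):
    """Euler-integrate  x'' = wn^2 (u - x) - 2*zeta*wn*x'  over the command trace."""
    x, xdot = commanded[0], 0.0
    achieved = []
    for u in commanded:
        xddot = wn**2 * (u - x) - 2.0 * zeta * wn * xdot
        xdot += xddot * dt
        x += xdot * dt
        achieved.append(x)
    return achieved

if __name__ == "__main__":
    dt = 0.05
    cmd = [1.0 if t * dt > 1.0 else 0.0 for t in range(200)]  # step command at t = 1 s
    ach = second_order_response(cmd, dt)
    print(round(ach[-1], 3))  # settles near 1.0, but lags the commanded step
```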
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
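The leverage-based extrapolation check mentioned above can be illustrated with the standard hat-matrix leverage h(x) = xᵀ(XᵀX)⁻¹x. The sketch below flags candidate points whose leverage exceeds the largest leverage of the starting solution; this simple max-based threshold is an assumed simplification of the distribution-based constraint described in the abstract.

```python
# Illustrative leverage-based extrapolation detector (assumed simplification).
import numpy as np

def leverage(X_train, points):
    """Leverage h(x) = x^T (X^T X)^{-1} x for each row of `points`."""
    XtX_inv = np.linalg.inv(X_train.T @ X_train)
    return np.einsum("ij,jk,ik->i", points, XtX_inv, points)

def flags_extrapolation(X_train, start_solution, candidate):
    h_max = leverage(X_train, start_solution).max()   # reference level (max only)
    return leverage(X_train, candidate) > h_max       # True where candidate extrapolates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(50), rng.uniform(-1, 1, (50, 2))])  # intercept + 2 inputs
    start = X[:10]
    far_point = np.array([[1.0, 3.0, -3.0]])   # well outside the training region
    print(flags_extrapolation(X, start, far_point))  # [ True]
```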
Abstract:
During osteoporosis induction in sheep, side effects of the steroids were observed in previous studies. The aim of this study was to improve the induction regimen consisting of ovariectomy, a calcium/vitamin D-restricted diet and methylprednisolone (MP) medication with respect to bone metabolism, and to reduce the adverse side effects. Thirty-six ewes (age 6.5 +/- 0.6 years) were divided into four MP-administration groups (n = 9) with a total dose of 1800 mg MP: group 1: 20 mg/day; group 2: 60 mg every third day; group 3: 3 x 500 mg and 1 x 300 mg at intervals of three weeks; group 4: weekly administration, starting at 70 mg and reduced weekly by 10 mg. After double-labelling with Calcein Green and Xylenol Orange, bone biopsy specimens were taken from the iliac crest (IC) at the beginning and four weeks after the last MP injection, and additionally from the vertebral body (VB) at the end of the experiment. Bone samples were processed into stained and fluorescent sections, and static and dynamic measurements were performed. Initially, there were no significant differences in static parameters between the groups. The bone perimeter and bone area values were significantly higher in the VB than in the IC (Pm: 26%, p < 0.0001; Ar: 11%, p < 0.0166). A significant decrease (20%) of the bone area was observed after corticosteroid-induced osteoporosis (p < 0.0004). For the dynamic parameters, no significant difference between the groups was found. The presence of Calcein Green and Xylenol Orange labels was noted in 50% of the biopsies in the IC and 100% in the VB. Group 3 showed the lowest prevalence of adverse side effects. Bone metabolism changes were observed in all four groups, and the VB bone metabolism was higher compared to the IC. In conclusion, when using equal amounts of steroids, adverse side effects can be reduced by decreasing the number of administrations without reducing the effect regarding corticosteroid-induced osteoporosis. This information is useful to reduce the discomfort of the animals in this sheep model of corticosteroid-induced osteoporosis.
Abstract:
The aim of this work is to investigate to what extent it is possible to use the secondary collimator jaws to reduce the radiation transmitted through the multileaf collimator (MLC) during intensity-modulated radiation therapy (IMRT). A method is developed and introduced in which the jaws dynamically follow the open window of the MLC (dJAW method). With the aid of three academic cases (Closed MLC, Sliding-gap, and Chair) and two clinical cases (prostate and head and neck), the feasibility of the dJAW method and its influence on the applied dose distributions are investigated. For this purpose, the treatment planning system Eclipse and the Research-Toolbox were used, and measurements within a solid water phantom were performed. The radiation transmitted through the closed MLC leads to an inhomogeneous dose distribution. In this case, the measured dose within a plane perpendicular to the central axis differs by up to 40% (relative to the maximum dose within this plane) for 6 and 15 MV. The dose calculated with Eclipse is clearly more homogeneous. For the Sliding-gap case this difference is still up to 9%. Among other things, these differences depend on the depth of the measurement within the solid water phantom and on the application method. In the Chair case, the dose in regions where no dose is desired is locally reduced by up to 50% using the dJAW method instead of the conventional method. The dose inside the chair-shaped region decreased by up to 4% when the same number of monitor units (MU) as for the conventional method was applied. The undesired dose in the volume body minus planning target volume in the clinical prostate and head-and-neck cases decreased by up to 1.8% and 1.5%, while the number of applied MU increased by up to 3.1% and 2.8%, respectively. The new dJAW method has the potential to take the optimization of conventional IMRT a step further.
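A hypothetical reading of the dJAW idea is sketched below: for each control point, the jaws are set to the bounding box of the open MLC aperture plus a small margin, so leakage outside the aperture is blocked by the jaws. The data layout and margin are assumptions for illustration only, not the clinical implementation used with Eclipse and the Research-Toolbox.

```python
# Hypothetical sketch of dynamic jaw tracking of the MLC open window.
def jaw_positions(leaf_pairs, margin_mm=2.0):
    """leaf_pairs: list of (left_mm, right_mm) per leaf pair for one control point.
    Returns (x1, x2) jaw positions tracking the open window, or None if fully closed."""
    open_pairs = [(l, r) for l, r in leaf_pairs if r - l > 0.0]
    if not open_pairs:
        return None                        # aperture fully closed: jaws can close too
    x1 = min(l for l, _ in open_pairs) - margin_mm   # jaw just outside leftmost opening
    x2 = max(r for _, r in open_pairs) + margin_mm   # jaw just outside rightmost opening
    return x1, x2

if __name__ == "__main__":
    control_point = [(0.0, 0.0), (-12.0, 8.0), (-10.0, 15.0), (0.0, 0.0)]
    print(jaw_positions(control_point))    # (-14.0, 17.0)
```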
Abstract:
Linear programs, or LPs, are often used in optimization problems, such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the Simplex Method, which will yield a solution, if one exists, but over the real numbers. From a purely numerical standpoint it will be an optimal solution, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is to present a parallel algorithm for solving ILPs. We discuss a serial algorithm that uses a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel mode, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
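A compact, serial version of the breadth-first branch-and-bound scheme described above might look like the sketch below (the client-server parallelization would distribute the FIFO queue of subproblems to workers). The LP relaxations are solved with scipy.optimize.linprog, and the small problem instance is illustrative.

```python
# Serial breadth-first branch-and-bound sketch for a small maximization ILP.
from collections import deque
import math
import numpy as np
from scipy.optimize import linprog

def solve_ilp_max(c, A, b, tol=1e-6):
    """Maximize c @ x  s.t.  A @ x <= b,  x >= 0 and integer."""
    n = len(c)
    best_val, best_x = -math.inf, None
    queue = deque([[(0.0, None)] * n])                # per-variable (lower, upper) bounds
    while queue:
        bounds = queue.popleft()                      # FIFO -> breadth-first search
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success or -res.fun <= best_val + tol:
            continue                                  # infeasible or pruned by LP bound
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
        if not frac:                                  # integral relaxation: new incumbent
            best_val, best_x = -res.fun, res.x.round()
            continue
        i, v = frac[0], res.x[frac[0]]                # branch on first fractional variable
        lo, hi = bounds[i]
        queue.append(bounds[:i] + [(lo, math.floor(v))] + bounds[i + 1:])
        queue.append(bounds[:i] + [(math.ceil(v), hi)] + bounds[i + 1:])
    return best_val, best_x

if __name__ == "__main__":
    # maximize 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6  ->  optimum 20 at (4, 0)
    print(solve_ilp_max([5, 4], [[6, 4], [1, 2]], [24, 6]))
```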
Abstract:
Ethanol from lignocellulosic feedstocks is not currently competitive with corn-based ethanol in terms of yields and commercial feasibility. Through optimization of the pretreatment and fermentation steps this could change. The overall goal of this study was to evaluate, characterize, and optimize ethanol production from lignocellulosic feedstocks by the yeasts Saccharomyces cerevisiae (strain Ethanol Red, ER) and Pichia stipitis CBS 6054. Through a series of fermentations and growth studies, P. stipitis CBS 6054 and S. cerevisiae (ER) were evaluated on their ability to produce ethanol from both single-substrate (xylose and glucose) and mixed-substrate (five sugars present in hemicellulose) fermentations. The yeasts were also evaluated on their ability to produce ethanol from dilute acid pretreated hydrolysate and enzymatic hydrolysate. Hardwood (aspen), softwood (balsam), and herbaceous (switchgrass) hydrolysates were also tested to determine the effect of the source of the feedstock. P. stipitis produced ethanol at 66-98% of the theoretical yield throughout the fermentation studies completed over the course of this work. S. cerevisiae (ER) was determined not to be ideal for dilute acid pretreated lignocellulose because it was not able to utilize all the sugars found in hemicellulose. S. cerevisiae (ER) was instead used to optimize enzymatically pretreated lignocellulose that contained only glucose monomers. It was able to produce ethanol from enzymatically pretreated hydrolysate, but the sugar level was so low (<3 g/L) that it would not be commercially feasible. Two lignocellulosic degradation products, furfural and acetic acid, were evaluated to determine whether they had an inhibitory effect on biomass production, substrate utilization, and ethanol production by P. stipitis and S. cerevisiae (ER). It was determined that inhibition is directly related to the concentration of the inhibitor and the organism. The final phase of this thesis focused on adapting P. stipitis CBS 6054 to toxic compounds present in dilute acid pretreated hydrolysate through directed evolution. Cultures were transferred to increasing concentrations of dilute acid pretreated hydrolysate in the fermentation media. The adapted strains' fermentation capabilities were tested against the unadapted parent strain at each hydrolysate concentration. The fermentation capabilities of the adapted strain were significantly improved over those of the unadapted parent strain. On media containing 60% hydrolysate the adapted strain yielded 0.30 ± 0.033 g ethanol/g sugar and the unadapted parent strain yielded 0.11 ± 0.028 g/g. The culture has been successfully adapted to growth on media containing 65%, 70%, 75%, and 80% hydrolysate, but with below-optimal ethanol yields (0.14-0.19 g/g). Cell recycle could be a viable option for improving ethanol yields in these cases. A study was conducted to determine the optimal media for production of ethanol from xylose and mixed-substrate fermentations by P. stipitis. Growth, substrate utilization, and ethanol production were the three factors used to evaluate the media. The three media tested were Yeast Peptone (YP), Yeast Nitrogen Base (YNB), and Corn Steep Liquor (CSL). The ethanol yields (g/g) for each medium are as follows: YP: 0.40-0.42, YNB: 0.28-0.30, and CSL: 0.44-0.51. The results show that media containing CSL give slightly higher ethanol yields than the other fermentation media. P. stipitis was successfully adapted to dilute acid pretreated aspen hydrolysate in increasing concentrations in order to produce higher ethanol yields compared to the unadapted parent strain. S. cerevisiae (ER) produced ethanol from enzymatically pretreated cellulose containing low concentrations of glucose (1-3 g/L). These results show that fermentations of lignocellulosic feedstocks can be optimized based on the substrate and organism for increased ethanol yields.
Abstract:
Particulate matter (PM) emissions standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years. The EPA regulation for PM in heavy-duty diesel engines has been reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient in filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it. This PM needs to be removed periodically by oxidation for the efficient functioning of the filter. This oxidation process is also known as regeneration. There are two types of regeneration processes, namely active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs in high-temperature regions, about 500-600 °C, which is much higher than normal diesel exhaust temperatures. Thus, the exhaust temperature has to be raised with the help of external devices such as a Diesel Oxidation Catalyst (DOC) or a fuel burner. The O2 then oxidizes the PM, producing CO2 as the oxidation product. In passive oxidation, one route of regeneration is the use of NO2. NO2 oxidizes the PM, producing NO and CO2 as oxidation products. The passive oxidation process occurs at lower temperatures (200-400 °C) in comparison to the active regeneration temperatures. Generally, DPF substrate walls are washcoated with catalyst material, which is observed to increase the rate of PM oxidation. The goal of this research is to develop a simple mathematical model to simulate PM depletion during the active regeneration process in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB. Experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor. The DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to the data obtained from the active regeneration experiments, and numerical gradient-based optimization techniques were used to estimate the kinetic parameters of the model.
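A hedged sketch of a zero-dimensional PM depletion model of the kind described above is shown below: the retained PM mass is consumed by an Arrhenius-type rate depending on temperature and O2 concentration. The rate form and the parameter values are illustrative assumptions, not the calibrated kinetics obtained from the reactor experiments.

```python
# Illustrative zero-D PM oxidation model for active regeneration.
# Pre-exponential factor and activation energy are assumed, not calibrated values.
import math

R = 8.314          # J/(mol K)
A_PRE = 1.0e7      # 1/s, assumed pre-exponential factor
E_ACT = 150e3      # J/mol, assumed activation energy

def simulate_regeneration(m0_g, temp_k, y_o2, dt=0.1, t_end=600.0):
    """Integrate dm/dt = -A*exp(-Ea/RT)*y_O2*m with explicit Euler steps."""
    m, t, history = m0_g, 0.0, []
    while t < t_end:
        k = A_PRE * math.exp(-E_ACT / (R * temp_k)) * y_o2   # effective rate constant
        m = max(m - k * m * dt, 0.0)                         # PM mass depletion
        t += dt
        history.append((t, m))
    return history

if __name__ == "__main__":
    trace = simulate_regeneration(m0_g=5.0, temp_k=873.15, y_o2=0.10)  # ~600 degC
    print(f"PM remaining after 10 min: {trace[-1][1]:.3f} g")
```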
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types such as wood, forest residues, and agricultural residues has the potential to replace a substantial portion of the total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling methods. As a precursor to the simulation and optimization modeling, the GIS-based methodology was used to preselect potential locations for biofuel production from forest biomass by employing a series of decision factors. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, the railroad transportation network, the state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, population census data, biomass production, and no co-location with co-fired power plants. The resulting candidate sites for biofuel production served as inputs for the simulation and optimization modeling. The simulation and optimization models were built around key supply activities including biomass harvesting/forwarding, transportation, and storage. The onsite storage built at the biorefinery serves the spring breakup period, when road restrictions are in place and truck transportation on certain roads is limited. Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and the inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to remain consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level throughout the year was tracked. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of a potential biofuel facility is set with an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
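An illustrative facility-location formulation in the spirit of the optimization model described above is sketched below, using generic notation (f_j fixed cost of opening candidate site j, g unit capacity cost, c_ij biomass delivery cost, a_i available biomass in supply area i, η conversion yield, D annual biofuel demand); the actual MPL model may differ.

```latex
% Illustrative biofuel facility-location MILP (generic notation, assumed costs).
% y_j opens candidate site j, z_j is its size in MGY, x_{ij} is biomass shipped
% from supply area i to site j.
\begin{align}
  \min \quad & \sum_{j} \bigl(f_j\,y_j + g\,z_j\bigr) + \sum_{i,j} c_{ij}\,x_{ij} \\
  \text{s.t.} \quad
  & \sum_{j} x_{ij} \le a_i && \forall i \quad \text{(biomass availability)} \\
  & \eta \sum_{i} x_{ij} \le z_j && \forall j \quad \text{(production limited by capacity)} \\
  & 30\,y_j \le z_j \le 50\,y_j && \forall j \quad \text{(facility size bounds, MGY)} \\
  & \eta \sum_{i,j} x_{ij} \ge D, \qquad x_{ij} \ge 0, \; y_j \in \{0,1\}
  && \text{(annual demand, nonnegativity, siting)}
\end{align}
```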
Abstract:
The problem of the optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems where the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations. Full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have sizes different from their initial sizes. The process repeats, leading to larger sub-populations of more fit solutions. Both techniques are applied to several MGADSM problems. They have the capability to determine the number of swing-bys, the planets to swing by, the launch and arrival dates, and the number of deep space maneuvers, as well as their locations, magnitudes, and directions, in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved. The J2 perturbation and zonal coverage are considered to design repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced. The orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically, every desired number of days.
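The hidden-genes mechanism can be illustrated with the minimal sketch below: every chromosome has a fixed length, a control gene decides how many of the following genes are effective, and the remaining (hidden) genes are ignored during cost evaluation while still taking part in standard crossover (mutation omitted for brevity). The toy objective and encoding are assumptions for illustration only.

```python
# Minimal sketch of the hidden-genes idea with a fixed-length chromosome.
import random

MAX_GENES = 6   # fixed chromosome length for the variable-size design vector

def decode(chromosome):
    n_active = chromosome[0]                 # control gene: number of effective genes
    return chromosome[1:1 + n_active]        # genes beyond this are hidden (ignored)

def cost(chromosome):
    active = decode(chromosome)              # hidden genes do not affect the cost
    return sum((g - 0.5) ** 2 for g in active) + 0.1 * len(active)  # toy objective

def random_chromosome():
    return [random.randint(1, MAX_GENES)] + [random.random() for _ in range(MAX_GENES)]

def crossover(a, b):
    cut = random.randint(1, MAX_GENES)       # hidden genes cross over like any other
    return a[:cut] + b[cut:]

if __name__ == "__main__":
    random.seed(1)
    pop = sorted((random_chromosome() for _ in range(20)), key=cost)
    child = crossover(pop[0], pop[1])
    print(cost(pop[0]), cost(child))
```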
Abstract:
In general, vascular contributions to the in vivo magnetic resonance (MR) brain spectrum are too small to be relevant. In cerebral uptake studies, however, vascular contributions may constitute a major confounder. MR visibility of vascular Phe was investigated by recording localized spectra from fully oxygenated and well-mixed whole blood. Blood Phe levels determined by MR spectroscopy (MRS) and ion-exchange chromatography showed excellent correlation. In addition, blood flow was shown to have only a small effect on signal amplitude with the MRS methodology used. Hence, blood Phe is almost completely MR visible at 1.5 T, even though it is severely broadened at higher fields. Without appropriate correction, cerebral Phe influx in studies of brain Phe uptake in phenylketonuria patients or healthy subjects would appear to be faster and lead to higher levels. Similar effects are envisaged for studies of ethanol or glucose uptake across the blood-brain barrier.
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize the overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the different sensors. The tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using the Time Delay Estimation method, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we have examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
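The cross-correlation flavor of Time Delay Estimation used for this kind of temporal calibration can be sketched as below; the synthetic signals, common sampling clock, and argmax-of-correlation estimator are simplifications for illustration and not the UBITRACK implementation itself.

```python
# Sketch of time-delay estimation between two tracking sensors by
# cross-correlating 1-D signals sampled on a common clock.
import numpy as np

def estimate_delay(sig_a, sig_b, dt):
    """Return the delay (s) of sig_b relative to sig_a (positive: b lags a)."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(b, a, mode="full")        # c[k] = sum_n b[n+k] * a[n]
    lag = np.argmax(corr) - (len(a) - 1)          # best-aligning lag in samples
    return lag * dt

if __name__ == "__main__":
    dt, n, true_delay = 0.01, 1000, 0.13          # 100 Hz sampling, 130 ms offset
    t = np.arange(n) * dt
    motion = np.sin(2 * np.pi * 0.7 * t) + 0.3 * np.sin(2 * np.pi * 2.3 * t)
    sensor_a = motion
    sensor_b = np.interp(t - true_delay, t, motion)   # delayed copy of the same motion
    print(f"estimated delay: {estimate_delay(sensor_a, sensor_b, dt):.3f} s")
```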
Abstract:
Forests near the Mediterranean coast have been shaped by millennia of human disturbance. Consequently, ecological studies relying on modern observations or historical records may have difficulty assessing natural vegetation dynamics under current and future climate. We combined a sedimentary pollen record from Lago di Massaciuccoli, Tuscany, Italy, with simulations from the LandClim dynamic vegetation model to determine what vegetation preceded intense human disturbance, how past changes in vegetation relate to fire and browsing, and the potential of an extinct vegetation type under the present climate. We simulated vegetation dynamics near Lago di Massaciuccoli for the last 7,000 years using a local chironomid-inferred temperature reconstruction with combinations of three fire regimes (small infrequent, large infrequent, small frequent) and three browsing intensities (no browsing, light browsing, and moderate browsing), and compared the model output to the pollen data. Simulations with low disturbance support pollen-inferred evidence for a mixed forest dominated by Quercus ilex (a Mediterranean species) and Abies alba (a montane species). Whereas the pollen data record the collapse of A. alba after 6000 cal yr BP, simulated populations expanded with declining summer temperatures during the late Holocene. Simulations with increased fire and browsing are consistent with evidence for expansion by deciduous species after A. alba collapsed. According to our combined paleo-environmental and modeling evidence, mixed Q. ilex and A. alba forests remain possible under the current climate with limited disturbance, and provide a viable management objective for ecosystems near the Mediterranean coast and in regions that are expected to experience a Mediterranean-type climate in the future.