906 results for Normalization-based optimization
Abstract:
Vapor sensors have been used for many years. Their applications range from the detection of toxic gases and dangerous chemicals in industrial environments and the detection of landmines and other explosives to the monitoring of atmospheric conditions. Microelectromechanical systems (MEMS) fabrication technologies provide a way to fabricate sensitive devices. One type of MEMS vapor sensor is based on mass-change detection: the sensors carry a functional chemical coating that absorbs the chemical vapor of interest. The operating principle of the resonant mass sensor is that the resonant frequency shifts appreciably when a small mass of vapor is absorbed. This thesis builds analytical micro-cantilever and micro-tilting-plate models that make design optimization more efficient. Several objectives are addressed: (1) build an analytical model of a MEMS resonant mass sensor based on a micro-tilting plate, including the effects of air damping; (2) perform design optimization of a micro-tilting plate with a hole in the center; (3) build an analytical model of a MEMS resonant mass sensor based on a micro-cantilever, including the effects of air damping; (4) perform design optimization of the micro-cantilever in COMSOL. The analytical models of the micro-tilting plate with a central hole are compared with a COMSOL simulation model and show good agreement; they are then used for design optimization that maximizes sensitivity. The micro-cantilever analytical model does not agree well with the COMSOL simulation model. To investigate further, the air-damping pressures at several points on the micro-cantilever are compared between the analytical and COMSOL models. The analytical model is inadequate for two reasons: first, its boundary-condition assumption is not realistic; second, the deflection shape of the cantilever changes with the hole size, which the model does not account for. Design optimization of the micro-cantilever is therefore performed in COMSOL.
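A minimal sketch of the mass-to-frequency relation this kind of resonant mass sensor relies on, using a generic lumped spring-mass approximation (the symbols k, m_eff and Δm are illustrative, not taken from the thesis):

```latex
f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m_{\mathrm{eff}}}}, \qquad
\Delta f \approx -\frac{f_0}{2\,m_{\mathrm{eff}}}\,\Delta m, \qquad
S \equiv \left|\frac{\partial f}{\partial m}\right| = \frac{f_0}{2\,m_{\mathrm{eff}}}
```

Under this approximation, maximizing sensitivity amounts to raising the resonant frequency while keeping the effective mass low, which is why the hole geometry and the air damping both enter the optimization.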
Abstract:
Previous work has shown that high-temperature, short-term spike thermal annealing of hydrogenated amorphous silicon (a-Si:H) photovoltaic thermal (PVT) systems results in higher electrical energy output. The relationship between temperature and performance of a-Si:H PVT is not simple: high annealing temperatures improve the electrical performance immediately following an anneal, but cause a marked drop in electrical performance during the anneal itself. In addition, the power generation of a-Si:H PVT depends on both the environmental conditions and the kinetics of the Staebler-Wronski Effect. In order to further improve the performance of a-Si:H PVT systems, this paper reports on the effect of various dispatch strategies on system electrical performance. Using experimental results from thermal annealing, an annealing simulation model for a-Si:H-based PVT was developed and applied to different cities in the U.S. to investigate potential geographic effects on the dispatch optimization of overall electrical PVT system performance and annual electrical yield. The results showed that spike thermal annealing once per day maximized the improvement in electrical energy generation. Under outdoor operating conditions this ideal behavior deteriorates, and optimization rules must be implemented.
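A hypothetical sketch of the kind of once-per-day dispatch rule such a simulation can encode; the trigger thresholds and function name below are placeholders, not values or code from the paper:

```python
# Hypothetical once-per-day spike-anneal dispatch rule (placeholder thresholds).
def should_anneal(hour_of_day, hours_since_last_anneal, plane_irradiance_w_m2):
    """Trigger at most one short high-temperature anneal per day, late in the
    day, so the temporary power drop during the anneal costs little energy."""
    return (
        hours_since_last_anneal >= 24      # at most one anneal per day
        and hour_of_day >= 16              # late afternoon: low remaining yield
        and plane_irradiance_w_m2 < 300    # avoid annealing at peak production
    )
```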
Abstract:
Recently, the interest of the automotive market in hybrid vehicles has increased, driven by more restrictive pollutant emissions legislation and by the need to decrease fossil fuel consumption, since this solution allows a consistent improvement of overall vehicle efficiency. The term hybridization refers to the energy flow in a vehicle's powertrain: a standard vehicle usually has only one energy source and one energy tank, whereas a hybrid vehicle has at least two energy sources. In most cases, the prime mover is an internal combustion engine (ICE), while the auxiliary energy source can be mechanical, electrical, pneumatic or hydraulic. The control unit of a hybrid vehicle is expected to operate the ICE in high-efficiency working zones and to shut it down when convenient, while using the electric motor-generator (EMG) at partial loads and for fast torque response during transients. However, the battery state of charge may limit such a strategy, which is why, in most cases, energy management strategies are based on state-of-charge (SOC) control. Several studies have been conducted on this topic and many different approaches have been illustrated. The purpose of this dissertation is to develop an online (usable on-board) control strategy in which the operating modes are defined using an instantaneous optimization method that minimizes the equivalent fuel consumption of a hybrid electric vehicle. The equivalent fuel consumption is calculated by taking into account the total energy used by the hybrid powertrain during the propulsion phases. The first chapter presents the characteristics of hybrid vehicles. The second chapter describes the global model, with a particular focus on the energy management strategies usable for the supervisory control of such a powertrain. The third chapter shows the performance of the implemented controller on an NEDC cycle, compared with that obtained with the original control strategy.
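Instantaneous minimization of equivalent fuel consumption is commonly implemented as an Equivalent Consumption Minimization Strategy; the sketch below shows only the generic idea, with placeholder maps, function names and an assumed equivalence factor rather than the dissertation's actual models:

```python
# Illustrative ECMS sketch: equivalent fuel rate = engine fuel rate plus the
# battery power converted to a fuel rate through an equivalence factor s.
def ecms_split(torque_request_nm, candidates, fuel_rate_g_s, battery_power_w,
               s_equiv=2.5, lhv_j_g=42_500.0):
    """Return the ICE/EM torque split with the lowest equivalent fuel rate.

    fuel_rate_g_s(t_ice): engine fuel rate [g/s] at ICE torque t_ice (placeholder map)
    battery_power_w(t_em): battery power [W] at electric-machine torque t_em
    """
    best_split, best_cost = None, float("inf")
    for t_ice in candidates:                      # candidate ICE torques [Nm]
        t_em = torque_request_nm - t_ice          # electric machine covers the rest
        cost = fuel_rate_g_s(t_ice) + s_equiv * battery_power_w(t_em) / lhv_j_g
        if cost < best_cost:
            best_split, best_cost = (t_ice, t_em), cost
    return best_split
```

Negative electric-machine torque (battery charging through the ICE) yields negative battery power and hence a lower equivalent cost, which is how the strategy trades present fuel use against stored electrical energy.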
Abstract:
The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks so that it reproduces the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an “optimal” or “best” product batch schedule for a one-year period. Such a batch schedule can change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation, the “best” batch schedule, is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented; in our model an event-based discrete-time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals/departures of tanker ships, the openings and closures of the loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluate the effect of different tanker-ship delay magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
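A minimal event-driven simulation skeleton of the kind described above, assuming a priority queue ordered by event time; the event types and handler interface are illustrative, not the thesis implementation:

```python
# Minimal event-driven simulation loop: pop events in time order, let each
# handler update the system state and possibly schedule new events
# (ship arrivals, valve openings/closures, train/truck departures, ...).
import heapq

def simulate(initial_events, handlers, horizon_h=8760.0):
    """initial_events: iterable of (time_h, event_type, payload) tuples.
    handlers: dict mapping event_type -> callable(state, time, payload) that
    returns an iterable of new events."""
    queue = list(initial_events)
    heapq.heapify(queue)
    state, clock = {}, 0.0
    while queue and clock <= horizon_h:
        clock, event_type, payload = heapq.heappop(queue)
        for new_event in handlers[event_type](state, clock, payload):
            heapq.heappush(queue, new_event)
    return state
```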
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions, so the performance of SoC systems can be improved by accelerating the elements that incur performance overheads. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform are as follows: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop counts. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed; the trade-off between these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against the system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
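As a back-of-the-envelope check (not from the thesis), Amdahl's law bounds the overall speedup obtainable when only the profiled hotspot is moved to the FPGA; the numbers below are illustrative:

```python
# Amdahl's law: overall speedup when a fraction of runtime is accelerated.
def overall_speedup(hotspot_fraction, accelerator_speedup):
    """hotspot_fraction: share of total runtime spent in the accelerated function."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accelerator_speedup)

# Example: a hotspot taking 90% of runtime, accelerated 20x, gives ~6.9x overall.
print(round(overall_speedup(0.9, 20.0), 1))
```

This is why accurate profiling of the hotspot fraction matters as much as the raw speed of the accelerator itself.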
Abstract:
Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. In the first phase of the study, a proper model including all the details of the components was required; therefore, advances in EMC modeling were reviewed, classifying analytical and numerical models. The selected approach was finite element (FE) modeling coupled with the distributed network method, used to generate models of the converter's components and obtain the frequency behavioral model of the converter. The method is able to reveal the behavior of parasitic elements and higher resonances, which are critical in studying EMI problems. For the EMC and signature studies of machine drives, equivalent source modeling was studied. Considering the details of the multi-machine environment, including actual models, innovations in equivalent source modeling were introduced to decrease the simulation time dramatically. Several models were designed in this study; the voltage-current cube model and the wire model gave the best results. A GA-based PSO method is used for the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model, and all tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. Identification was implemented using an ANN for seventy different fault cases, and the simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of component, as well as the faulty component, by comparing the amplitudes of their stray-field harmonics. Identification using stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
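The GA-based PSO mentioned above hybridizes genetic operators with particle swarm optimization; the sketch below shows only the plain PSO core as an illustration, with placeholder parameters and objective rather than the study's actual equivalent-source fitting problem:

```python
# Minimal particle swarm optimization core (illustrative parameters).
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Usage example on a toy objective: pso(lambda p: np.sum(p**2), [(-5, 5)] * 3)
```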
Abstract:
The globally increasing number of disaster victims poses a complex challenge for disaster management authorities. Moreover, to accomplish a successful transition between preparedness and response, it is important to consider the different features inherent to each type of disaster. Floods are among the most frequent and harmful disasters, hence the need to develop a disaster-preparedness tool for efficient and effective flood management. The purpose of this article is to introduce a method that simultaneously defines the proper location of shelters and distribution centers, along with the allocation of prepositioned goods and the distribution decisions required to satisfy flood victims. The tool combines a raster geographical information system (GIS) with an optimization model. The GIS determines the flood hazard of the city areas in order to assess the flood situation and to discard floodable facilities. Then, the multi-commodity, multimodal optimization model is solved to obtain the Pareto frontier of two criteria: distance and cost. The methodology was applied to a case study of the 2007 flood in Villahermosa, Mexico, and the results were compared with an optimized scenario based on the guidelines followed by Mexican authorities, concluding that the value of the performance measures was improved using the developed method. Furthermore, the results showed that adequate care can be provided to the affected population with fewer facilities than the current approach, and demonstrated the advantages of considering more than one distribution center for relief prepositioning.
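A simple non-dominated filter illustrating how a two-criteria Pareto frontier (distance and cost, both minimized) can be extracted from candidate solutions; the data layout is an assumption, not the article's actual model output:

```python
# Keep only non-dominated (distance, cost) solutions.
def pareto_frontier(solutions):
    """solutions: iterable of (distance, cost, decision) tuples; both criteria minimized."""
    solutions = list(solutions)
    frontier = []
    for s in solutions:
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and (o[0], o[1]) != (s[0], s[1])
            for o in solutions
        )
        if not dominated:
            frontier.append(s)
    return sorted(frontier, key=lambda s: (s[0], s[1]))
```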
Abstract:
Minimization of undesirable temperature gradients in all dimensions of a planar solid oxide fuel cell (SOFC) is central to the thermal management and commercialization of this electrochemical reactor. This article explores the operating variables that most affect the temperature gradient in a multilayer SOFC stack and presents a trade-off optimization. Three promising approaches are numerically tested via a model-based sensitivity analysis. The numerically efficient thermo-chemical model previously developed by the authors for cell-scale investigations (Tang et al. Chem. Eng. J. 2016, 290, 252-262) is integrated and extended in this work to allow further thermal studies at commercial scales. Initially, the most common approach to minimizing a stack's thermal inhomogeneity, i.e., the use of excess air, is critically assessed. Subsequently, the adjustment of inlet gas temperatures is introduced as a complementary methodology to reduce the efficiency loss caused by the application of excess air. As another practical approach, regulation of the oxygen fraction in the cathode coolant stream is examined from both technical and economic viewpoints. Finally, a multiobjective optimization calculation is conducted to find an operating condition in which the stack's efficiency is maximized and its temperature gradient minimized.
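A generic scalarization of the trade-off described above; the decision variables (excess air ratio, inlet temperature, coolant oxygen fraction) follow the three approaches listed in the abstract, but the weights and symbols are illustrative, not taken from the article:

```latex
\min_{\lambda_{\mathrm{air}},\;T_{\mathrm{in}},\;y_{\mathrm{O_2}}}\;
  w_1 \, \max_{x}\left\lVert \nabla T(x) \right\rVert
  \;-\; w_2 \, \eta_{\mathrm{stack}},
\qquad w_1, w_2 \ge 0
```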
Abstract:
Image and video compression play a major role in the world today, allowing the storage and transmission of large volumes of multimedia content. However, processing this information requires substantial computational resources, so improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia contents, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes considerable time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU respectively. In this dissertation, to complement that work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU implementation into OpenCL-CPU. The proposed solutions improve the computational performance of MMP by 3x and 2.7x, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for image and video compression. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic (light field) image/video processing. Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These HEVC-based compression algorithms for holoscopic images implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but the implementation is considerably slower than HEVC. In order to achieve better execution times, we chose the OpenCL API as the GPU-enabling language to increase the module's performance. In its most costly setting, we were able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45x.
Abstract:
The aim of this study was to establish guidelines for the optimization of biologic therapies for health professionals involved in the management of patients with RA, AS and PsA. Recommendations were established by consensus of a panel of experts in rheumatology and hospital pharmacy, based on an analysis of the available scientific evidence obtained from four systematic reviews and on the clinical experience of the panellists. The Delphi method was used to evaluate these recommendations, both among the panellists and among a wider group of rheumatologists. Previous concepts concerning better management of RA, AS and PsA were reviewed and, more specifically, guidelines for the optimization of the biologic therapies used to treat these diseases were formulated. Recommendations were made with the aim of establishing a plan for when and how to taper biologic treatment in patients with these diseases. The recommendations established herein aim not only to provide advice on how to improve the risk:benefit ratio and efficiency of such treatments, but also to reduce variability in daily clinical practice in the use of biologic therapies for rheumatic diseases.
Abstract:
In most agroecosystems, nitrogen (N) is the most important nutrient limiting plant growth. One management strategy that affects N cycling and N use efficiency (NUE) is conservation agriculture (CA), an agricultural system based on a combination of minimum tillage, crop residue retention and crop rotation. Available results on the optimization of NUE in CA are inconsistent, and studies covering all three components of CA are scarce. Presently, CA is promoted in the Yaqui Valley in Northern Mexico, the country's major wheat-producing area, where from 1968 to 1995 fertilizer application rates for irrigated durum wheat (Triticum durum L.) yielding 6 t ha-1 increased from 80 to 250 kg ha-1, demonstrating the high intensification potential of this region. Given major knowledge gaps on N availability in CA, this thesis summarizes the current knowledge of N management in CA and provides insights into the effects of tillage practice, residue management and crop rotation on wheat grain quality and N cycling. The major aims of the study were to identify N fertilizer application strategies that improve N use efficiency and reduce N immobilization in CA, with the ultimate goal of stabilizing cereal yields, maintaining grain quality, minimizing N losses into the environment and reducing farmers' input costs. Soil physical and chemical properties in CA were measured and compared with those in conventional systems and in permanent beds with residue burning, focusing on their relationship to plant N uptake and N cycling in the soil and on how they are affected by tillage and by N fertilizer timing, method and dose. For N fertilizer management, we analyzed how placement, timing and amount of N fertilizer influence yield and quality parameters of durum and bread wheat in CA systems. Overall, grain quality parameters, in particular grain protein concentration, decreased with zero tillage and with increasing amounts of residues left on the field, compared with conventional systems. The second part of the dissertation provides an overview of applied methodologies to measure NUE and its components. We evaluated the methodology of ion-exchange resin cartridges under irrigated, intensive agricultural cropping systems on Vertisols to measure nitrate leaching losses, which, through drainage channels, ultimately end up in the Sea of Cortez, where they lead to algal blooms. A thorough analysis of N inputs and outputs was conducted to calculate N balances in three different tillage-straw systems. As fertilizer inputs are high, N balances were positive in all treatments, indicating a risk of N leaching or volatilization during or in subsequent cropping seasons and during heavy summer rainfall. Contrary to common belief, we did not find negative effects of residue burning on soil nutrient status, yield or N uptake. A labeled-fertilizer experiment with 15N urea was implemented in micro-plots to measure N fertilizer recovery and the effects of residual fertilizer N in the soil from summer maize on the following winter wheat crop. The N fertilizer recovery rates obtained for maize grain were very low for all treatments, averaging 11%.
Abstract:
Stroke is one of the most frequent causes of death, regardless of age or gender. Besides its considerable mortality figures, the disease also causes long-term disabilities with long recovery times and correspondingly high costs. However, stroke and related diseases may also be prevented by considering illness evidence. Therefore, the present work starts with the development of a decision support system to assess stroke risk, centered on a formal framework based on Logic Programming for knowledge representation and reasoning, complemented with a Case Based Reasoning (CBR) approach to computing. Indeed, in order to make the CBR cycle practical, normalization and optimization phases were introduced and clustering methods were used, thereby reducing the search space and enhancing the case-retrieval phase. On the other hand, aiming at an improvement of the CBR theoretical basis, the predicates' attributes were normalized to the interval [0, 1], and the extensions of the predicates that match the universe of discourse were rewritten and set not only in terms of an evaluation of their Quality-of-Information (QoI), but also in terms of an assessment of a Degree-of-Confidence (DoC), a measure of one's confidence that they fit into a given interval, taking into account their domains; i.e., each predicate attribute is given in terms of a pair (QoI, DoC), a simple and elegant way to represent data or knowledge that is incomplete, self-contradictory, or even unknown.
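A sketch of the attribute pre-processing step described above. The min-max normalization is standard; the Degree-of-Confidence formula below follows related Logic Programming work and is an assumption for illustration, not necessarily the exact formula of this paper:

```python
# Min-max normalization to [0, 1] plus an assumed DoC formula:
# DoC = sqrt(1 - interval_length**2), where interval_length is the width of the
# attribute's normalized value interval.
from math import sqrt

def normalize(value, attr_min, attr_max):
    """Map an attribute value onto the interval [0, 1]."""
    return (value - attr_min) / (attr_max - attr_min)

def degree_of_confidence(lower, upper, attr_min, attr_max):
    """DoC of an attribute known only as an interval [lower, upper]."""
    length = normalize(upper, attr_min, attr_max) - normalize(lower, attr_min, attr_max)
    return sqrt(1.0 - length ** 2)

# A precisely known value has interval length 0 and DoC = 1; a completely
# unknown value spans the whole domain (length 1) and has DoC = 0.
```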
Abstract:
In recent decades the automotive sector has undergone a technological revolution, due mainly to more restrictive regulations, newly introduced technologies and, lastly, the limited fossil fuel resources remaining on Earth. Promising solutions for vehicle propulsion are represented by alternative architectures and energy sources, for example fuel cells and pure electric vehicles. The automotive transition to new, green vehicles is passing through the development of hybrid vehicles, which usually combine the positive aspects of each technology. To fully exploit the potential of hybrid vehicles, however, it is important to manage the powertrain's degrees of freedom in the smartest way possible; otherwise hybridization would be worthless. To this aim, this dissertation is focused on the development of energy management strategies and predictive control functions. Such algorithms have the goal of increasing overall powertrain efficiency while also increasing driver safety. These control algorithms have been applied to an axle-split Plug-in Hybrid Electric Vehicle with a complex architecture that allows more than one driving mode, including a pure electric one. Three main energy management strategies are investigated: the vehicle's baseline heuristic controller, referred to in the following as the rule-based controller; a sub-optimal controller that can also include predictive functionalities, referred to as the Equivalent Consumption Minimization Strategy; and a global-optimum control technique, called Dynamic Programming, which also includes the thermal management of the high-voltage battery. During this project, different modelling approaches were applied to the powertrain, including Hardware-in-the-Loop, and several high-level powertrain controllers were developed and implemented, increasing their complexity at each step. The potential of sophisticated powertrain control techniques has been demonstrated, and the achievable fuel-economy benefits are shown to be largely influenced by the chosen energy management strategy, even for the powerful vehicle investigated.
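A schematic backward dynamic programming pass over a discretized battery SOC grid, of the kind commonly used as the global-optimum benchmark referred to above; the cost and SOC-transition models are placeholders, not the project's actual ones:

```python
# Backward DP over a SOC grid: at each time step and SOC node, pick the control
# (e.g. a torque split / driving mode) minimizing stage cost plus cost-to-go.
import numpy as np

def dp_energy_management(soc_grid, n_steps, controls, fuel_cost, next_soc):
    """fuel_cost(k, soc, u): fuel used at step k under control u (placeholder model).
    next_soc(k, soc, u): SOC after applying control u at step k (placeholder model)."""
    n_soc = len(soc_grid)
    cost_to_go = np.zeros(n_soc)                  # terminal cost (charge-sustaining: 0)
    policy = np.zeros((n_steps, n_soc), dtype=int)
    for k in range(n_steps - 1, -1, -1):          # backward in time
        new_cost = np.full(n_soc, np.inf)
        for i, soc in enumerate(soc_grid):
            for j, u in enumerate(controls):
                soc_next = next_soc(k, soc, u)
                if not soc_grid[0] <= soc_next <= soc_grid[-1]:
                    continue                       # SOC limits violated
                c = fuel_cost(k, soc, u) + np.interp(soc_next, soc_grid, cost_to_go)
                if c < new_cost[i]:
                    new_cost[i], policy[k, i] = c, j
        cost_to_go = new_cost
    return policy, cost_to_go
```

Because the whole drive cycle must be known in advance, DP serves as an offline benchmark against which on-line strategies such as the rule-based controller and ECMS are compared.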
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. The fact that the problem is on-line means that solutions must be computed in real time, and the time required to compute them cannot exceed a threshold beyond which normal system functioning is affected. In addition, a job dispatcher must deal with considerable uncertainty in submission times, the number of requested resources, and job durations. Heuristic-based techniques have been broadly used in HPC systems, delivering (sub-)optimal solutions in a short time; however, their scheduling and resource allocation components are separate, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that generally comprise many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to satisfy the challenges of on-line dispatching, such as generating dispatching decisions within a short period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generate on-line dispatching decisions within an appropriate time, and make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
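A toy CP model in the spirit of the dispatchers described above, written with Google OR-Tools CP-SAT as an illustrative stand-in for the CP technology used in the thesis; the jobs, durations, capacity and objective are made-up:

```python
# Toy CP-SAT dispatching model: schedule jobs on a shared core pool under a
# cumulative capacity constraint, with a hard per-call time budget as required
# by on-line dispatching.
from ortools.sat.python import cp_model

jobs = [(3, 4), (1, 2), (2, 8), (1, 1)]   # (predicted_duration, requested_cores)
capacity, horizon = 8, sum(d for d, _ in jobs)

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, (dur, cores) in enumerate(jobs):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    e = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(s, dur, e, f"job_{i}"))
    starts.append(s)
    ends.append(e)

# Cores used by overlapping jobs may never exceed the system capacity.
model.AddCumulative(intervals, [c for _, c in jobs], capacity)
# QoS-oriented objective: minimize total completion time (a proxy for slowdown).
model.Minimize(sum(ends))

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 1.0    # on-line setting: hard time budget
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts])
```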
Abstract:
In recent decades, global food supply chains have had to deal with the increasing awareness of stakeholders and consumers about safety, quality, and sustainability. To address these new challenges for food supply chain systems, an integrated approach to designing, controlling, and optimizing the product life cycle is required. It is therefore essential to introduce new models, methods, and decision-support platforms tailored to perishable products. This thesis aims to provide novel, practice-ready decision-support models and methods to optimize the logistics of food items with an integrated and interdisciplinary approach. It proposes a comprehensive review of the main peculiarities of perishable products and the environmental stresses accelerating their quality decay. It then focuses on top-down strategies to optimize the supply chain system from the strategic to the operational decision level. Based on the criticality of the environmental conditions, the dissertation evaluates the main long-term logistics investment strategies to preserve product quality. Several models and methods are proposed to optimize logistics decisions and enhance the sustainability of the supply chain system while guaranteeing adequate food preservation. The models and methods proposed in this dissertation promote a climate-driven approach, integrating climate conditions and their consequences for the quality decay of products into innovative models supporting logistics decisions. Given the uncertain nature of the environmental stresses affecting the product life cycle, an original stochastic model and solution method are proposed to support practitioners in controlling and optimizing supply chain systems when facing uncertain scenarios. The application of the proposed decision-support methods to real case studies proved their effectiveness in increasing the sustainability of the perishable product life cycle. The dissertation also presents an industrial application to a global food supply chain system, further demonstrating how the proposed models and tools can be integrated to provide significant savings and sustainability improvements.
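An illustrative first-order quality-decay model with Arrhenius temperature dependence, the kind of climate-driven kinetics such logistics models couple to their decisions; all parameter values and function names here are placeholders, not the dissertation's calibrated models:

```python
# First-order quality decay whose rate grows with temperature (Arrhenius law).
from math import exp

R = 8.314  # universal gas constant [J/(mol K)]

def decay_rate(temp_c, k_ref=0.02, temp_ref_c=5.0, activation_energy=60_000.0):
    """Quality-loss rate [1/h] at temp_c, scaled from a reference temperature."""
    t, t_ref = temp_c + 273.15, temp_ref_c + 273.15
    return k_ref * exp(-activation_energy / R * (1.0 / t - 1.0 / t_ref))

def residual_quality(temperature_profile_c, q0=1.0, dt_h=1.0):
    """Integrate exponential decay over an hourly temperature profile."""
    q = q0
    for temp_c in temperature_profile_c:
        q *= exp(-decay_rate(temp_c) * dt_h)
    return q
```

Feeding location- and season-specific temperature profiles into such a kinetic model is what allows the logistics decisions (routing, storage investment, prepositioning) to be evaluated on residual product quality rather than on time alone.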