933 results for Optimal fusion performance
Abstract:
Due to the lack of information on the use of non-protein energy sources in diets for pacu (Piaractus mesopotamicus), a 2 × 2 × 3 factorial experiment was conducted to evaluate the performance and digestibility of 12 diets containing two crude protein levels (CP; approximately 220 and 250 g kg⁻¹), two lipid levels (40 and 80 g kg⁻¹) and three carbohydrate levels (410, 460 and 500 g kg⁻¹). Pacu juveniles fed diets containing 220 g kg⁻¹ CP did not respond (P > 0.05) to increased dietary lipid and carbohydrate levels, but fish fed diets containing 250 g kg⁻¹ CP showed a better feed conversion ratio (FCR). Interactions in weight gain (WG), specific growth rate (SGR), crude protein intake (CPI) and FCR depended on dietary carbohydrate and lipid levels, with positive effects of increasing carbohydrate levels only for fish fed diets containing the 80 g kg⁻¹ lipid level. When the diets contained 40 g kg⁻¹ lipid, however, the best energy productive value (EPV) results were obtained at 460 g kg⁻¹ carbohydrate. A higher lipid level (80 g kg⁻¹) reduced CPI and was detrimental to the apparent digestibility coefficients of protein (ADC_CP) and energy (ADC_GE), but did not affect growth. The ADC_GE improved proportionally as dietary carbohydrate levels increased (P < 0.05), increasing the concentration of digestible energy. In addition, the WG, CPI and ADC_GE results showed the best use of energy from carbohydrates when the dietary protein level was 250 g kg⁻¹ CP. A level of 250 g kg⁻¹ CP in feeds for juvenile pacu is therefore suggested for optimal growth, and the optimum dietary lipid and carbohydrate levels depend on their combination. Pacu uses carbohydrates as effectively as lipids to maximize protein utilization, provided dietary protein is not lower than 250 g kg⁻¹ CP, or approximately 230 g kg⁻¹ digestible protein.
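A 2 × 2 × 3 factorial with interaction effects such as these is commonly analyzed with a linear model and ANOVA; the following is a minimal sketch (hypothetical file and column names, not the study's dataset) using Python's statsmodels:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one row per tank, with factor levels and measured weight gain.
df = pd.read_csv("pacu_trial.csv")  # assumed columns: cp, lipid, cho, weight_gain

# Three-way factorial model with all interactions, mirroring a 2 x 2 x 3 design.
model = ols("weight_gain ~ C(cp) * C(lipid) * C(cho)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)  # Type-II sums of squares
print(anova)  # P-values for main effects and interactions (e.g., lipid x cho)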
Abstract:
Purpose: To evaluate and compare the performance of the Ripplet type-1 transform, the directional discrete cosine transform (DDCT), and their combinations for improved representation of MRI images while preserving fine features such as edges along smooth curves and textures. Methods: In a novel image representation method based on the fusion of the Ripplet type-1 and conventional/directional DCT transforms, source images were enhanced in terms of visual quality using Ripplet, DDCT, and their various combinations. The enhancement achieved was quantified in terms of peak signal-to-noise ratio (PSNR), mean square error (MSE), structural content (SC), average difference (AD), maximum difference (MD), normalized cross-correlation (NCC), and normalized absolute error (NAE). To determine the attributes of both transforms, they were also combined to represent the entire image. All possible combinations were tested, providing a complete study in which the combinations were contrasted against one another. Results: Applying DDCT first and then the Ripplet transform yielded a PSNR of 32.3512, higher than the PSNR values of the other combinations. This technique gives a PSNR approximately equal to those of the parent techniques, while preserving edge information, texture information and various other directional image features. The fusion of DDCT followed by Ripplet reproduced the best images. Conclusion: Transforming images with DDCT followed by Ripplet provides a more efficient method for image representation that preserves fine details such as edges and textures.
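For reference, the PSNR, MSE and NCC figures quoted above are standard quantities; a minimal NumPy sketch, assuming 8-bit grayscale images, is:

import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    # Mean squared error between two same-sized images.
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    # Normalized cross-correlation as used in this family of quality metrics.
    a, b = a.astype(np.float64), b.astype(np.float64)
    return float(np.sum(a * b) / np.sum(a * a))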
Abstract:
A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image, but by using two or more images from different viewing angles it can be recovered and, in turn, used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give position and orientation respectively, a camera system estimates them only after running SfM. This makes the pose obtained from a camera highly sensitive to the captured images and to effects such as low lighting, poor focus or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz. sensor fusion, to achieve more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room; these results serve as ground truth for evaluating the strengths and weaknesses of each sensor and for mapping their coordinate systems. A number of scenarios where SfM fails are then targeted: the pose estimates obtained from SfM are replaced by those obtained from other sensors and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues faced when handling large sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
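As a point of reference for the SfM step described above, a minimal two-view pose-recovery sketch using OpenCV (the ORB feature choice and the intrinsic matrix K are assumptions, not the report's pipeline):

import cv2
import numpy as np

def relative_pose(img1, img2, K):
    # Detect and match ORB features between the two views.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then decomposition into rotation and translation.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t  # t is known only up to scale -- the lost depth noted above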
Abstract:
The Homogeneous Charge Compression Ignition (HCCI) engine is a promising combustion concept for reducing NOx and particulate matter (PM) emissions while providing high thermal efficiency in internal combustion engines. The concept, however, has limitations in combustion control and in achieving stable combustion at high loads. For HCCI to be a viable option for on-road vehicles, further understanding of its combustion phenomena and their control is essential. This thesis therefore focuses both on the experimental setup of an HCCI engine at Michigan Technological University (MTU) and on developing a physical numerical simulation model, the Sequential Model for Residual Affected HCCI (SMRH), to investigate the performance of HCCI engines. The primary focus is on understanding the effects of intake and exhaust valve timings on HCCI combustion. On the experimental side, this thesis contributed to the development of the HCCI setup at MTU, in particular the measurement of valve profiles and of piston-to-valve contact clearance for procuring new pistons for further studies of high-geometric-compression-ratio HCCI engines, as well as the development and testing of a supercharging station and the setup of an electrical air heater to extend the HCCI operating region. The HCCI engine setup is based on a GM 2.0 L LHU Gen 1 engine, a direct-injected engine with variable valve timing (VVT) capabilities. For the simulation studies, a computationally efficient modeling platform was developed and validated against experimental data from a single-cylinder HCCI engine. The in-cylinder pressure trace, combustion phasing (CA10, CA50, BD) and the performance metrics IMEP, thermal efficiency, and CO emissions are found to be in good agreement with experimental data for different operating conditions. The effects of phasing the intake and exhaust valves are analyzed using SMRH. In addition, a novel index, the Fuel Efficiency and Emissions (FEE) index, is defined and used to determine the optimal valve timings for engine operation through FEE contour maps.
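As a concrete example of the metrics above, IMEP and CA50 can be computed directly from an in-cylinder pressure trace; a minimal sketch under stated assumptions (synchronized pressure/volume arrays over one cycle; not the SMRH code):

import numpy as np

def imep(pressure_pa, volume_m3, v_displaced_m3):
    # IMEP = (integral of p dV around the cycle) / displaced volume.
    work_j = float(np.sum(0.5 * (pressure_pa[1:] + pressure_pa[:-1]) * np.diff(volume_m3)))
    return work_j / v_displaced_m3  # result in Pa; divide by 1e5 for bar

def ca50(crank_deg, cum_heat_release_j):
    # Crank angle at which 50% of total heat release has occurred;
    # assumes a monotonically increasing cumulative heat-release curve.
    frac = cum_heat_release_j - cum_heat_release_j.min()
    frac = frac / frac.max()
    return float(np.interp(0.5, frac, crank_deg))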
Design Optimization of Modern Machine-drive Systems for Maximum Fault Tolerance and Optimal Operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, are an indispensable part of high-power-density products such as hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these drive systems, and matching the electric machine to its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that achieves the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design stage, in the form of a finite-element-based optimization process, including a hardware-in-the-loop variant. It was later employed in the design of accurate and highly efficient physics-based customized observers required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was further employed in real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. This process was shown to offer the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, and the physics-based fault diagnosis and sensorless techniques are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed, and several permanent magnet machine topologies were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique works under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
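As background for the sensorless estimation discussed above, a minimal back-EMF-based rotor-angle sketch for a surface-mounted PMSM in the stationary αβ frame (a generic textbook scheme, not the dissertation's physics-based observer):

import numpy as np

def rotor_angle_backemf(v_ab, i_ab, i_ab_prev, R, L, dt):
    # Back-EMF estimate: e = v - R*i - L*di/dt (stationary alpha-beta frame).
    di = (i_ab - i_ab_prev) / dt
    e = v_ab - R * i_ab - L * di
    # For an SPMSM: e_alpha = -w*lambda_m*sin(theta), e_beta = w*lambda_m*cos(theta),
    # so the electrical rotor angle follows from atan2 (sign assumes w > 0).
    return float(np.arctan2(-e[0], e[1]))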
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications render administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that substantially reduce data center management complexity. We specifically addressed two crucial data center operations: first, precisely estimating the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment; second, proposing a systematic process to efficiently allocate physical resources to hosted VMs in a data center. For both objectives, accurately capturing the effects of resource allocations on application performance is vital, and the benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can offer a new charging model based on the VMs' performance instead of their configured sizes. Clients then pay exactly for the performance they actually experience, while administrators can maximize total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications, and we suggested and evaluated modeling optimizations necessary to improve prediction accuracy with these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
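A minimal sketch of the modeling step: fitting an SVM-based regressor that predicts application performance from resource allocations (hypothetical file and feature names; scikit-learn assumed):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical profiling samples: resource caps as features, throughput as target.
df = pd.read_csv("vm_profiles.csv")  # assumed columns: cpu_cap, mem_mb, io_mbps, throughput
X, y = df[["cpu_cap", "mem_mb", "io_mbps"]], df["throughput"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVR with feature scaling, one of the two tools evaluated in the thesis.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
print("R^2 on held-out allocations:", model.score(X_te, y_te))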
Abstract:
In this paper, a real-time optimal control technique for non-linear plants is proposed. The control system makes use of cell-mapping (CM) techniques, widely used for the global analysis of highly non-linear systems. The CM framework is employed to design approximate optimal controllers via a control-variable discretization. Furthermore, CM-based designs can be improved by the use of supervised feedforward artificial neural networks (ANNs), which have proved to be universal and efficient tools for function approximation while also providing very fast responses. The quantized nature of the approximate CM solutions fits very well with the characteristics of ANNs. Here, we propose several control architectures that combine supervised neural networks and CM control algorithms in different ways. On the one hand, different CM control laws computed for various target objectives can be used to train a neural network, with the target information explicitly included in the input vectors. This way, tracking problems, in addition to regulation ones, can be addressed in a fast and unified manner, yielding smooth, averaged and global feedback control laws. On the other hand, adjoining CM and ANNs are also combined into a hybrid architecture to address problems where accuracy and real-time response are critical. Finally, some optimal control problems are solved with the proposed CM, neural and hybrid techniques, illustrating their good performance.
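For intuition, a minimal sketch of CM-style approximate optimal control (a coarse cell grid over a double-integrator state space, a discretized control set, and value iteration over cell centers; an illustrative toy, not the paper's algorithms):

import numpy as np

# Double integrator x'' = u on a coarse cell grid (position, velocity).
xs = np.linspace(-2, 2, 21)
vs = np.linspace(-2, 2, 21)
U = np.array([-1.0, 0.0, 1.0])  # discretized control variable
dt = 0.1

def next_cell(x, v, u):
    # One-step dynamics, mapped back to the nearest cell center (the cell map).
    xn, vn = x + v * dt, v + u * dt
    i = int(np.abs(xs - np.clip(xn, xs[0], xs[-1])).argmin())
    j = int(np.abs(vs - np.clip(vn, vs[0], vs[-1])).argmin())
    return i, j

# Value iteration over cells: quadratic cost drives the state to the origin.
V = np.zeros((xs.size, vs.size))
policy = np.zeros_like(V)
for _ in range(150):
    for i, x in enumerate(xs):
        for j, v in enumerate(vs):
            costs = [x**2 + v**2 + 0.1 * u**2 + 0.95 * V[next_cell(x, v, u)] for u in U]
            k = int(np.argmin(costs))
            V[i, j], policy[i, j] = costs[k], U[k]
# 'policy' is the cell-wise feedback law; an ANN can then be trained on it to
# obtain a smooth, fast-to-evaluate global controller, as the paper proposes.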
Abstract:
The purpose of this study was to establish optimal allometric models to predict the International Ski Federation's ski-ranking points for sprint competitions (FISsprint) among elite female cross-country skiers based on maximal oxygen uptake (V̇O2max) and lean mass (LM). Ten elite female cross-country skiers (age: 24.5±2.8 years [mean ± SD]) completed a treadmill roller-skiing test to determine V̇O2max (i.e., aerobic power) using the diagonal stride technique, whereas LM (i.e., a surrogate indicator of anaerobic capacity) was determined by dual-energy X-ray absorptiometry. The subjects' FISsprint points were used as the competitive performance measure. Power function modeling was used to predict the skiers' FISsprint from V̇O2max, LM, and body mass. The subjects' test and performance data were as follows: V̇O2max, 4.0±0.3 L min⁻¹; LM, 48.9±4.4 kg; body mass, 64.0±5.2 kg; and FISsprint, 116.4±59.6 points. The following power function models were established for the prediction of FISsprint: FISsprint = 3.91 × 10^5 · V̇O2max^(-6.00) and FISsprint = 6.95 × 10^10 · LM^(-5.25); these models explained 66% (P=0.0043) and 52% (P=0.019), respectively, of the variance in FISsprint. Body mass failed to contribute to either model; hence, the models are based on V̇O2max and LM expressed in absolute terms. The results demonstrate that physiological variables reflecting aerobic power and anaerobic capacity are important indicators of competitive sprint performance among elite female skiers, and the presented power function models should be used to accurately indicate performance capability. Skiers whose V̇O2max differs by 1% will differ in FISsprint by 5.8%, whereas the corresponding 1% difference in LM is related to a FISsprint difference of 5.1%, both differences favoring the skier with the higher V̇O2max or LM. It is recommended that coaches use the absolute expression of these variables to monitor skiers' performance-related training adaptations linked to changes in aerobic power and anaerobic capacity.
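Power function models of this form can be fit by ordinary least squares on log-transformed data; a minimal sketch (hypothetical numbers, not the study's data):

import numpy as np

# Hypothetical measurements: VO2max (L/min) and FIS sprint points per skier.
vo2max = np.array([3.6, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4])
fis = np.array([210.0, 160.0, 140.0, 120.0, 100.0, 90.0, 75.0, 65.0])

# Fit FIS = a * VO2max^b  <=>  ln(FIS) = ln(a) + b * ln(VO2max).
b, ln_a = np.polyfit(np.log(vo2max), np.log(fis), 1)
print(f"FIS ~= {np.exp(ln_a):.3g} * VO2max^{b:.2f}")
# With b near -6, a 1% higher VO2max implies about 1.01**b - 1 ~= -5.8%
# FIS points (lower is better), matching the sensitivity quoted above.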
Abstract:
This dissertation consists of three papers. The first paper, "Managing the Workload: an Experiment on Individual Decision Making and Performance", experimentally investigates how decision-making in workload management affects individual performance. I designed a laboratory experiment to exogenously manipulate the schedule of work faced by each subject and to identify its impact on final performance. Through a mouse click-tracking technique, I also collected behavioral measures of organizational skills. I found that a non-negligible share of individuals perform better under externally imposed schedules than in the unconstrained case; however, such constraints are detrimental for those who are good at self-organizing. The second chapter, "On the allocation of effort with multiple tasks and piecewise monotonic hazard function", tests the optimality of a scheduling model, proposed in a separate literature, for the decision problem faced in the experiment. Under specific assumptions, I find that this model identifies what would be the optimal scheduling of the tasks in the Admission Test. The third paper, "The Effects of Scholarships and Tuition Fees Discounts on Students' Performances: Which Monetary Incentives Work Better?", explores how different levels of monetary incentives affect the achievement of students in tertiary education. I used a Regression Discontinuity Design to exploit the assignment of different monetary incentives and to study the effects of such liquidity provision on performance outcomes, ceteris paribus. The results show that a monetary increase in scholarships has no effect on performance, since the achievements of recipients all center near the requirements for not having to return the benefit. Moreover, students who pay some share of the total cost of college attendance surprisingly perform better than those whose cost is completely subsidized: a lower benefit, relative to a higher aid, motivates students to finish early and avoid the extra cost of delayed graduation.
Abstract:
In this thesis, we deal with the design of experiments in the drug development process, focusing on the design of clinical trials for treatment comparisons (Part I) and the design of preclinical laboratory experiments for protein development and manufacturing (Part II). In Part I we propose a multi-purpose design methodology for sequential clinical trials. We derive optimal allocations of patients to treatments for testing the efficacy of several experimental groups while also taking ethical considerations into account. We first consider exponential responses for survival trials and then present a unified framework for heteroscedastic experimental groups that encompasses the general ANOVA set-up. The very good performance of the suggested optimal allocations, in terms of both inferential and ethical characteristics, is illustrated analytically and through several numerical examples, including comparisons with other designs proposed in the literature. Part II concerns the planning of experiments for processes composed of multiple steps in the context of preclinical drug development and manufacturing. Following the Quality by Design paradigm, the objective of the multi-step design strategy is the definition of the manufacturing design space of the whole process; by considering the interactions among subsequent steps, our proposal ensures the quality and safety of the final product while enabling more flexibility and process robustness in manufacturing.
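For intuition about what an optimal allocation trades off, the classical Neyman allocation for heteroscedastic groups assigns patients in proportion to each arm's standard deviation; a minimal sketch of this purely inferential baseline (a textbook reference point, not the thesis's ethics-adjusted designs):

import numpy as np

def neyman_allocation(sds, n_total):
    # Allocate patients proportionally to group standard deviations, which
    # minimizes the variance of the treatment-effect estimators.
    w = np.asarray(sds, dtype=float)
    w = w / w.sum()
    n = np.floor(w * n_total).astype(int)
    n[np.argmax(w)] += n_total - n.sum()  # assign the rounding remainder
    return n

print(neyman_allocation([1.0, 2.0, 3.0], 120))  # -> [20 40 60]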
Abstract:
This PhD project aimed to (i) investigate the effects of three nutritional strategies (supplementation of a synbiotic, a muramidase, or arginine) on the growth performance, gut health, and metabolism of broilers fed without antibiotics under thermoneutral and heat stress conditions and (ii) explore the impacts of heat stress on the hypothalamic regulation of feed intake in three broiler lines from diverse stages of genetic selection and in the red jungle fowl, the ancestor of domestic chickens. The synbiotic improved feed efficiency and footpad health and increased Firmicutes and reduced Bacteroidetes in the ceca of birds kept in thermoneutral conditions, but did not mitigate the impacts of heat stress on growth performance. Under optimal thermal conditions, muramidase increased final body weight and reduced cumulative feed intake and feed conversion ratio in a dose-dependent manner. The highest dose reduced the risk of footpad lesions, cecal alpha diversity, the Firmicutes-to-Bacteroidetes ratio, and butyrate producers; increased Bacteroidaceae and Lactobacillaceae and plasma levels of bioenergetic metabolites; and reduced the levels of pro-oxidant metabolites. The same dose, however, failed to reduce the effects of heat stress on growth performance. Arginine supplementation improved growth rate, final body weight, and feed efficiency; increased plasma levels of arginine and creatine and hepatic levels of creatine and essential amino acids; reduced alpha diversity, Firmicutes, and Proteobacteria (especially Escherichia coli); and increased Bacteroidetes and Lactobacillus salivarius in the ceca of thermoneutral birds. No arginine-mediated attenuation of heat stress was found. Heat stress altered protein metabolism and caused the accumulation of antioxidant and protective molecules in tissues sensitive to oxidative stress; arginine supplementation, however, may have partially counterbalanced the effects of heat stress on energy homeostasis. Stable gene expression of (an)orexigenic neuropeptides was found in the four chicken populations studied, but responses to hypoxia and heat stress appeared to be related to feed intake regulation.
Abstract:
Emissions of CO2 have been growing constantly since the beginning of the industrial era. Halting production in the major emitting sectors (energy and agriculture) is not a viable option, and reducing all emissions through carbon capture and storage (CCS) is neither economically viable nor widely accepted by the public. It therefore becomes fundamental to take actions such as retrofitting already-developed infrastructure to employ cleaner resources, modifying current processes to limit emissions, and removing emissions already present through direct air capture. This thesis discusses these aspects in depth with regard to syngas and hydrogen production, since they play a central role in the energy and chemicals markets. Among the strategies discussed, greater emphasis is given to the application of looping technologies and to direct air capture processes, as these were the main focus of this work. In particular, chemical looping methane reforming to syngas was studied with Aspen Plus thermodynamic simulations, characterization by thermogravimetric analysis (TGA), and testing in a fixed-bed reactor. The process was studied cyclically, exploiting the redox properties of a Ce-based oxide oxygen carrier synthesized with a simple forming procedure. The two steps of the looping cycles were studied isothermally at 900 °C and 950 °C with mixtures of 10% CH4 in N2 and 3% O2 in N2 for carrier reduction and oxidation, respectively. During the stay abroad, in collaboration with ETH Zurich, a CO2 capture process using solid amine sorbents was investigated, studying the difference in performance achievable with contactors of different geometries. The process was studied at two concentrations (382 ppm CO2 in N2 and 5.62% CO2 in N2) and at different flow rates, to understand the dynamics of the adsorption process and to define the mass-transfer limiting step.
Abstract:
Decarbonization of maritime transport requires immediate action. In the short term, ship weather routing can deliver greenhouse gas emission reductions, even for existing ships and without retrofitting them. Weather routing is based on making optimal use of both environmental information and knowledge about vessel seakeeping and performance; combining them at a state-of-the-art level and applying path planning in realistic conditions can be challenging. To address these topics in an open-source framework, this thesis led to the development of a new module called bateau and to its combination with the ship routing model VISIR. bateau includes both hull geometry and propulsion modelling for various vessel types, with two objectives: predicting the sustained speed in a seaway and estimating the CO2 emission rate during the voyage. Various semi-empirical approaches were used in bateau to predict the ship's hydro- and aerodynamic resistance in both head and oblique seas. Assuming that the ship sails at a constant engine load, the involuntary speed loss due to waves was estimated. The thesis also attempted to clarify the role played by the actual representation of the sea state; in particular, the influence of the wave steepness parameter was assessed. For ships with larger superstructures, the added wind resistance was also estimated. Numerical experiments via bateau were conducted for a medium-size and a large-size containership, a bulk carrier, and a tanker. Simulations of optimal routes were carried out for a feeder containership during voyages in the North Indian Ocean and in the South China Sea. Least-CO2 routes were compared to least-distance ones, assessing the relative CO2 savings. Analysis fields from the Copernicus Marine Service were used in the numerical experiments.
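Least-CO2 routing of this kind can be cast as a shortest-path problem on a graph whose edge weights are leg emissions rather than distances; a minimal Dijkstra sketch (a generic illustration, not VISIR's algorithm):

import heapq

def least_co2_route(graph, start, goal):
    # graph: {node: [(neighbor, co2_kg_for_leg), ...]}; leg emissions would come
    # from a vessel model like bateau (waves -> speed loss -> engine load -> CO2).
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]  # least-emission waypoints and total CO2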
Abstract:
Continuum parallel robots (CPRs) are manipulators employing multiple flexible beams arranged in parallel and connected to a rigid end-effector. CPRs promise higher payload and accuracy than serial continuum robots while keeping great flexibility, and since the risk of injury during accidental contact between a human and a CPR is reduced, CPRs may be used in large-scale collaborative tasks or assisted robotic surgery. Various CPR designs exist, but prototype conception is rarely based on performance considerations, and CPR realization is mainly based on intuition or on rigid-link parallel-manipulator architectures. This thesis focuses on the performance analysis of CPRs and on the tools needed for such evaluation, such as workspace computation algorithms. Workspace computation strategies for CPRs are essential for performance assessment, since the CPR workspace may be used as a performance index or may serve in optimal-design tools. Two new workspace computation algorithms are proposed in this manuscript: the former focuses on workspace volume computation and the certification of its numerical results, while the latter aims at computing the workspace boundary only. Due to the elastic nature of CPRs, a key performance indicator for these robots is the stability of their equilibrium configurations. This thesis proposes an experimental validation of the equilibrium stability assessment on a real prototype, demonstrating the limitations of some commonly used assumptions. Additionally, a novel performance index measuring the distance to instability is proposed in this manuscript. Unlike the majority of existing approaches, the clear advantage of the proposed index is its sound physical meaning; accordingly, the index can be used for a more straightforward performance quantification and to derive robot specifications.
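For intuition, workspace volume estimation can be illustrated by Monte-Carlo sampling over the actuation space with a feasibility check per sample; a toy sketch with a placeholder feasibility test (not the thesis's certified algorithms):

import numpy as np

rng = np.random.default_rng(0)

def feasible(q):
    # Placeholder for the expensive part: solving the CPR equilibrium for
    # actuation q and checking convergence and stability. Toy criterion here.
    return np.linalg.norm(q) <= 1.0

# Sample the bounding box uniformly and count feasible configurations.
lo, hi = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
n = 100_000
samples = rng.uniform(lo, hi, size=(n, 3))
hits = sum(feasible(q) for q in samples)
volume = hits / n * np.prod(hi - lo)
print("estimated workspace volume:", volume)  # ~ 4/3 * pi for the toy criterion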
Abstract:
Embedded systems are increasingly integral to daily life, improving and facilitating the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task; ensuring platform security is also important to avoid harm to individuals and assets. This study primarily addresses challenges in contemporary embedded systems, focusing on platform optimization and security enforcement. The initial section delves into the application of machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster so as to minimize energy consumption, using static source code analysis. Results demonstrate not only that automated platform configuration is viable, but also that there is a moderate performance trade-off when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a training framework based on Siamese networks, that enhances the classification performance of DeepLLVM, an advanced approach for task mapping; importantly, the proposed approaches are independent of the specific deep-learning model used. Finally, this research addresses issues concerning the binary exploitation of software running on modern embedded systems. It proposes an architecture to implement Control-Flow Integrity in embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications. The approach enhances the architecture of a modern RISC-V platform for autonomous vehicles by implementing a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
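A minimal sketch of the first part's idea: predicting the energy-optimal core count from static source-code features (hypothetical file, features, and labels; scikit-learn assumed):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: static features per kernel, labeled with the core count
# that minimized measured energy on the parallel RISC-V cluster.
df = pd.read_csv("kernel_features.csv")  # assumed columns: n_loops, n_branches, mem_ops, ilp, best_cores
X, y = df[["n_loops", "n_branches", "mem_ops", "ilp"]], df["best_cores"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # how well static features alone predict
print("mean cross-validated accuracy:", scores.mean())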