998 results for machining regime optimization


Relevance:

30.00%

Publisher:

Abstract:

In this article, we study the thermal performance of phase-change material (PCM)-based heat sinks under cyclic heat load and subjected to melt convection. Plate fin type heat sinks made of aluminum and filled with PCM are considered in this study. The heat sink is heated from the bottom. For a prescribed value of heat flux, design of such a heat sink can be optimized with respect to its geometry, with the objective of minimizing the temperature rise during heating and ensuring complete solidification of PCM at the end of the cooling period for a given cycle. For given length and base plate thickness of a heat sink, a genetic algorithm (GA)-based optimization is carried out with respect to geometrical variables such as fin thickness, fin height, and the number of fins. The thermal performance of the heat sink for a given set of parameters is evaluated using an enthalpy-based heat transfer model, which provides the necessary data for the optimization algorithm. The effect of melt convection is studied by taking two cases, one without melt convection (conduction regime) and the other with convection. The results show that melt convection alters the results of geometrical optimization.
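
The geometry optimization described above can be illustrated with a minimal genetic-algorithm sketch. The bounds, operators, and especially the `peak_temperature_rise` surrogate below are invented stand-ins; in the paper the fitness of each candidate (fin thickness, fin height, number of fins) is supplied by the enthalpy-based heat-transfer model.

```python
import random

# Design variables and illustrative bounds (not taken from the paper):
# fin thickness [mm], fin height [mm], number of fins (integer).
BOUNDS = [(0.5, 3.0), (10.0, 60.0), (4, 20)]

def peak_temperature_rise(thickness, height, n_fins):
    """Placeholder for the enthalpy-based heat-transfer model: a surrogate
    'temperature rise' that rewards added fin area but penalizes crowding.
    In the actual study this value comes from the transient PCM simulation."""
    fin_area = n_fins * height * thickness
    crowding_penalty = (n_fins * thickness) ** 2 / 100.0
    return 100.0 / (1.0 + 0.01 * fin_area) + crowding_penalty

def random_design():
    return (random.uniform(*BOUNDS[0]),
            random.uniform(*BOUNDS[1]),
            random.randint(*BOUNDS[2]))

def crossover(a, b):
    # Uniform crossover: each gene is taken from either parent.
    return tuple(random.choice(genes) for genes in zip(a, b))

def mutate(ind, rate=0.3):
    t, h, n = ind
    if random.random() < rate:
        t = min(max(t + random.gauss(0.0, 0.2), BOUNDS[0][0]), BOUNDS[0][1])
    if random.random() < rate:
        h = min(max(h + random.gauss(0.0, 2.0), BOUNDS[1][0]), BOUNDS[1][1])
    if random.random() < rate:
        n = min(max(n + random.choice([-1, 1]), BOUNDS[2][0]), BOUNDS[2][1])
    return (t, h, n)

def optimize(pop_size=30, generations=60):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: peak_temperature_rise(*ind))
        parents = population[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=lambda ind: peak_temperature_rise(*ind))

print("best (thickness, height, n_fins):", optimize())
```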

Relevance:

30.00%

Publisher:

Abstract:

Monodisperse colloidal gold-indium (AuIn2) intermetallic nanoparticles have been synthesized from Au and In colloids using the digestive ripening process. Formation of the intermetallic proceeds via digestive-ripening-facilitated atomic diffusion of Au and In atoms from the Au and In nanoparticles, accompanied by their growth in solution. Optimization of the reaction temperature was found to be crucial for the formation of the AuIn2 intermetallic from gold and indium nanoparticles. Transmission electron microscopy revealed the presence of nearly monodisperse nanoparticles of Au and AuIn2 with particle size distributions of 3.7 +/- 1.0 nm and 5.0 +/- 1.6 nm, respectively. UV-visible spectral studies revealed the absence of an SPR band in the pure AuIn2 intermetallic nanoparticles. Optical study and electron microscopy, in combination with powder X-ray diffraction, unambiguously established phase-pure AuIn2 intermetallic nanoparticles. The potential of this unprecedented approach has been further exploited in the synthesis of Ag3In intermetallic nanoparticles with dimensions of less than 10 nm. (C) 2014 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Of all laser-based processes, laser machining has received little attention compared with others such as cutting, welding, heat treatment and cleaning. The reasons for this are unclear, although much can be gained from the development of an efficient laser machining process capable of processing difficult materials such as high-performance steels and aerospace alloys. Existing laser machining processes selectively remove material by melt shearing and evaporation. Removing material by melting and evaporation leads to very low wall-plug efficiencies, and the process has difficulty competing with conventional mechanical removal methods. Adopting a laser machining solution for some materials offers the best prospects of efficient manufacturing operations. This paper presents a new laser machining process that relies on melt shear removal provided by a vertical high-speed gas vortex. Experimental and theoretical studies of a simple machining geometry have identified a stable vortex regime that can be used to remove laser-generated melt effectively. The resultant combination of laser and vortex is employed in machining trials on 43A carbon steel. Results have shown that laser slot machining can be performed in a stable regime at speeds up to 150 mm/min with slot depths of 4 mm at an incident CO2 laser power level of 600 W. Slot forming mechanisms and process variables are discussed for the case of steel. Methods of bulk machining through multislot machining strategies are also presented.

Relevance:

30.00%

Publisher:

Abstract:

A method for VVER-1000 fuel rearrangement optimization that takes into account both cladding durability and fuel burnup and which is suitable for any regime of normal reactor operation has been established. The main stages involved in solving the problem of fuel rearrangement optimization are discussed in detail. Using the proposed fuel rearrangement efficiency criterion, a simple example VVER-1000 fuel rearrangement optimization problem is solved under deterministic and uncertain conditions. It is shown that the deterministic and robust (in the face of uncertainty) solutions of the rearrangement optimization problem are similar in principle, but the robust solution is, as might be anticipated, more conservative. © 2013 Elsevier B.V.
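
A generic sketch of the deterministic-versus-robust comparison mentioned above, assuming a toy efficiency criterion and uncertainty model (both invented; the actual criterion combines cladding durability and burnup): the deterministic choice ranks candidate rearrangements at nominal conditions, while the robust choice ranks them by their worst case over sampled uncertainty.

```python
import random

def efficiency(candidate, u):
    """Hypothetical rearrangement-efficiency criterion; u is an uncertain,
    lumped operating-regime parameter (u = 0 is the nominal case)."""
    nominal, sensitivity = candidate
    return nominal - sensitivity * abs(u)

candidates = {                      # (nominal efficiency, sensitivity to u)
    "scheme_A": (1.00, 0.50),       # best nominally, but sensitive to uncertainty
    "scheme_B": (0.95, 0.10),       # slightly worse nominally, more robust
}

# Deterministic choice: evaluate at the nominal parameter value u = 0.
deterministic = max(candidates, key=lambda k: efficiency(candidates[k], 0.0))

# Robust choice: maximize the worst case over sampled uncertain parameters.
samples = [random.uniform(-1.0, 1.0) for _ in range(1000)]
robust = max(candidates,
             key=lambda k: min(efficiency(candidates[k], u) for u in samples))

print("deterministic choice:", deterministic, "| robust choice:", robust)
```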

Relevance:

30.00%

Publisher:

Abstract:

Composite materials are finding increasing use on primary aerostructures to meet demanding performance targets while reducing environmental impact. This paper presents a finite-element-based preliminary optimization methodology for postbuckling stiffened panels, which takes into account the damage mechanisms that lead to delamination and subsequent failure by stiffener debonding. A global-local modeling approach is adopted in which the boundary conditions on the local model are extracted directly from the global model. The optimization procedure is based on a genetic algorithm that maximizes damage resistance within the postbuckling regime. This routine is linked to a finite element package and the iterative procedure is automated. For a given loading condition, the procedure optimized the stacking sequence of several areas of the panel, leading to an evolved panel that displayed superior damage resistance in comparison with nonoptimized designs.
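
For readers unfamiliar with stacking-sequence optimization, the sketch below shows how a ply-angle chromosome can be encoded and evolved; the `damage_resistance` function is only a runnable stand-in for the global-local finite-element evaluation, and the angle set and laminate size are assumptions, not values from the paper.

```python
import random

ANGLES = [0, 45, -45, 90]     # candidate ply orientations (degrees) - assumed set
NUM_PLIES = 8                 # plies in the optimized region - illustrative only

def damage_resistance(stacking):
    """Runnable stand-in for the global-local FE evaluation: it merely rewards
    angle diversity and a ±45° outer ply so the example executes. The real
    fitness is the predicted load at which stiffener debonding initiates."""
    score = len(set(stacking))
    if abs(stacking[0]) == 45:
        score += 2
    return score

def crossover(a, b):
    cut = random.randrange(1, NUM_PLIES)      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(stacking, rate=0.15):
    return [random.choice(ANGLES) if random.random() < rate else angle
            for angle in stacking]

population = [[random.choice(ANGLES) for _ in range(NUM_PLIES)] for _ in range(20)]
for _ in range(40):
    population.sort(key=damage_resistance, reverse=True)
    survivors = population[:10]
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(10)]
    population = survivors + offspring

print("best stacking sequence:", max(population, key=damage_resistance))
```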

Relevance:

30.00%

Publisher:

Abstract:

A parallel kinematic machine (PKM) topology can only give its best performance when its geometrical parameters are optimized. In this paper, dimensional synthesis of a newly developed PKM is presented for the first time. An optimization method is developed with the objective of maximizing both the workspace volume and the global dexterity of the PKM. Results show that the method can effectively identify design parameter changes under different weighted objectives. The PKM with optimized dimensions has a large workspace-to-footprint ratio and a large well-conditioned workspace, justifying its suitability for large-volume machining.
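
The two criteria can be made concrete with a small sketch: the global dexterity index below is the mean inverse condition number of the manipulator Jacobian over sampled poses, and a weighted-sum objective blends it with a workspace-to-footprint ratio. The candidate designs and their randomly generated "Jacobians" are purely illustrative stand-ins for the PKM's kinematic model.

```python
import numpy as np

def global_dexterity(jacobians):
    """Global dexterity index: mean inverse condition number of the Jacobian
    over sampled poses (1 = perfectly isotropic, 0 = singular)."""
    vals = []
    for J in jacobians:
        s = np.linalg.svd(J, compute_uv=False)
        vals.append(s[-1] / s[0])
    return float(np.mean(vals))

def weighted_objective(workspace_volume, dexterity, footprint,
                       w_volume=0.5, w_dexterity=0.5):
    # Weighted sum of the two criteria; the weights express the trade-off
    # between workspace-to-footprint ratio and well-conditioned motion.
    return w_volume * (workspace_volume / footprint) + w_dexterity * dexterity

# Hypothetical candidate designs: (workspace volume [m^3], footprint [m^2],
# Jacobians sampled over the workspace by a kinematic model not shown here).
rng = np.random.default_rng(0)
candidates = {
    "design_A": (2.0, 1.5, rng.normal(size=(50, 6, 6))),
    "design_B": (1.6, 1.0, rng.normal(size=(50, 6, 6))),
}

scores = {name: weighted_objective(vol, global_dexterity(jacs), fp)
          for name, (vol, fp, jacs) in candidates.items()}
print(scores, "-> best:", max(scores, key=scores.get))
```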

Relevance:

30.00%

Publisher:

Abstract:

The Glenn Research Center of NASA, USA (www.grc.nasa.gov/WWW/SiC/, silicon carbide electronics) is in pursuit of realizing bulk manufacturing of silicon carbide (SiC), specifically by mechanical means. Single point diamond turning (SPDT) technology, which employs diamond (the hardest naturally occurring material known to date) as a cutting tool, is a highly productive manufacturing process. However, machining SiC using SPDT is a complex process and, while several experimental and analytical studies presented to date aid in the understanding of several critical processes of machining SiC, the current knowledge on the ductile behaviour of SiC is still sparse. This is due to a number of simultaneously occurring physical phenomena that may take place on multiple length and time scales. For example, dislocation nucleation can take place at small inclusions only a few atoms in size, and, once nucleated, the interaction of these dislocations can manifest stresses on micrometre length scales. Understanding how stresses manifest during fracture in the brittle range, or during dislocation activity and phase transformations in the ductile range, is crucial to understanding the brittle-ductile transition in SiC. Furthermore, there is a need to incorporate an appropriate simulation-based approach in the manufacturing research on SiC, owing primarily to the number of uncertainties in the experimental research, which include wear of the cutting tool, poor controllability of the nano-regime machining scale (effective thickness of cut), and coolant effects (interfacial phenomena between the tool, workpiece/chip and coolant). In this review, these two problems are considered together to provide an improved understanding of the current theoretical knowledge on the SPDT of SiC obtained from molecular dynamics simulation.

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics (MD) simulation has enhanced our understanding of ductile-regime machining of brittle materials such as silicon and germanium. In particular, MD simulation has helped explain the occurrence of the brittle-ductile transition due to the high-pressure phase transformation (HPPT), which induces a Herzfeld-Mott transition. In this paper, relevant MD simulation studies are reviewed in conjunction with experimental studies, with a focus on (i) the importance of machining variables (undeformed chip thickness, feed rate, depth of cut, and geometry of the cutting tool) in influencing the state of the deviatoric stresses that cause HPPT in silicon, (ii) the influence of material properties (the role of fracture toughness and hardness, crystal structure, and anisotropy of the material), and (iii) phenomenological understanding of the wear of diamond cutting tools, all of which are non-trivial for the cost-effective manufacturing of silicon. The ongoing developmental work on potential energy functions is reviewed to identify opportunities for overcoming the current limitations of MD simulations. Potential research areas relating to how MD simulation might help improve existing manufacturing technologies are identified, which may be of particular interest to early-stage researchers.

Relevance:

30.00%

Publisher:

Abstract:

Continuous research endeavors on hard turning (HT), both on machine tools and cutting tools, have made the previously reported daunting limits easily attainable in the modern scenario. This presents an opportunity for a systematic investigation to find the currently attainable limits of hard turning using a CNC turret lathe. Accordingly, this study aims to contribute to the existing literature by providing the latest experimental results of hard turning of AISI 4340 steel (69 HRC) using a CBN cutting tool. An orthogonal array was developed using a set of judiciously chosen cutting parameters, and the longitudinal turning trials were carried out in accordance with a well-designed full-factorial Taguchi matrix. The speculation indeed proved correct, as a mirror-finished, optical-quality machined surface (an average surface roughness of 45 nm) was achieved by the conventional cutting method. Furthermore, signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA), and multiple regression analysis were carried out on the experimental datasets to determine the dominance of each machining variable in dictating the machined surface roughness and to optimize the machining parameters. One of the key findings was that when the feed rate during hard turning becomes very low (about 0.02 mm/rev), it alone can be the most significant (99.16%) parameter influencing the machined surface roughness (Ra). It was, however, also shown that a low feed rate results in high tool wear, so the selection of machining parameters for hard turning must be governed by a trade-off between cost and quality considerations.
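
The Taguchi-style analysis referred to above can be sketched as follows: a smaller-the-better S/N ratio for surface roughness and a sum-of-squares percent contribution for each factor. The orthogonal-array data below are invented for illustration only; just the formulas reflect the standard method.

```python
import numpy as np

# Hypothetical L9-style results: (speed level, feed level, depth level, Ra in um).
# The numbers are invented for illustration, not the paper's measurements.
runs = [
    (1, 1, 1, 0.18), (1, 2, 2, 0.32), (1, 3, 3, 0.55),
    (2, 1, 2, 0.16), (2, 2, 3, 0.35), (2, 3, 1, 0.50),
    (3, 1, 3, 0.15), (3, 2, 1, 0.30), (3, 3, 2, 0.52),
]
ra = np.array([r[3] for r in runs])

def sn_smaller_is_better(values):
    """Taguchi S/N ratio for a smaller-the-better response such as Ra."""
    values = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(values ** 2))

grand_mean = ra.mean()
ss_total = np.sum((ra - grand_mean) ** 2)

for col, factor in enumerate(["cutting speed", "feed rate", "depth of cut"]):
    ss_factor = 0.0
    sn_by_level = []
    for level in (1, 2, 3):
        mask = np.array([r[col] == level for r in runs])
        group = ra[mask]
        # Between-level sum of squares accumulates this factor's contribution.
        ss_factor += len(group) * (group.mean() - grand_mean) ** 2
        sn_by_level.append(round(sn_smaller_is_better(group), 2))
    contribution = 100.0 * ss_factor / ss_total
    print(f"{factor}: S/N by level {sn_by_level}, contribution {contribution:.1f}%")
```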

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this Ph.D. thesis was developed in the context of complex network theory, from a statistical physics standpoint. We examine two distinct problems in this research field, taking a special interest in their respective critical properties. In both cases, the emergence of criticality is driven by a local optimization dynamics. Firstly, we consider a recently introduced class of percolation problems that attracted significant attention from the scientific community and was quickly followed by an abundance of other works. Percolation transitions were believed to be continuous until, recently, an 'explosive' percolation problem was reported to undergo a discontinuous transition in [93]. The system's evolution is driven by a Metropolis-like algorithm, apparently producing a discontinuous jump in the giant component's size at the percolation threshold. This finding was subsequently supported by a number of other experimental studies [96, 97, 98, 99, 100, 101]. However, in [1] we proved that the explosive percolation transition is actually continuous. The discontinuity observed in the evolution of the giant component's relative size is explained by the unusual smallness of the corresponding critical exponent, combined with the finiteness of the systems considered in experiments. Therefore, the size of the jump vanishes as the system's size goes to infinity. Additionally, we provide a complete theoretical description of the critical properties for a generalized version of the explosive percolation model [2], as well as a method [3] for the precise calculation of percolation critical properties from numerical data (useful when exact results are not available). Secondly, we study a network flow optimization model, where the dynamics consists of consecutive mergings and splittings of the currents flowing in the network. The current conservation constraint does not impose any particular criterion for splitting the current among the channels outgoing from a node, allowing us to introduce an asymmetric rule observed in several real systems. We solved analytically the dynamic equations describing this model in the high- and low-current regimes. The solutions found are compared with numerical results for the two regimes, showing excellent agreement. Surprisingly, in the low-current regime, this model exhibits some features usually associated with continuous phase transitions.
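
To make the percolation setting concrete, the sketch below implements a standard Achlioptas-style "product rule" process with a union-find structure and tracks the giant component's relative size. It is not the exact Metropolis-like model of [93], but it reproduces the same qualitatively sharp (yet, as argued above, continuous) growth near the threshold.

```python
import random

class DisjointSet:
    """Union-find with component sizes, used to track the giant component."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return ra

def product_rule_percolation(n=50_000, seed=1):
    random.seed(seed)
    ds = DisjointSet(n)
    largest, history = 1, []
    for _ in range(n):  # add n edges in total
        e1 = (random.randrange(n), random.randrange(n))
        e2 = (random.randrange(n), random.randrange(n))
        # Product rule: keep the candidate edge whose endpoint components
        # have the smaller product of sizes, suppressing large-cluster growth.
        p1 = ds.size[ds.find(e1[0])] * ds.size[ds.find(e1[1])]
        p2 = ds.size[ds.find(e2[0])] * ds.size[ds.find(e2[1])]
        root = ds.union(*(e1 if p1 <= p2 else e2))
        largest = max(largest, ds.size[root])
        history.append(largest / n)
    return history

history = product_rule_percolation()
for frac_edges in (0.6, 0.85, 0.9, 1.0):
    idx = int(frac_edges * len(history)) - 1
    print(f"edges/N = {frac_edges:.2f}: giant component fraction = {history[idx]:.4f}")
```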

Relevance:

30.00%

Publisher:

Abstract:

Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, in which agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the problem of density estimation through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions concern sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to learning any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. Unlike traditional approaches, which treat a given model as a black box, we instead propose to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption had been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by combining a linear solver with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in terms of computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of spike-and-slab restricted Boltzmann machines (ssRBM), which we modify in order to model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This results in increased invariance at the representation level and better classification rates when little labeled data is available.
We conclude the thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of pooling in complementary vector subspaces.
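
The metric-free natural gradient idea can be illustrated generically: solve F d = g for the natural-gradient direction with conjugate gradients, using only Fisher-vector products, so the Fisher matrix is never formed. The sketch below is an assumption-laden illustration (random per-sample score vectors stand in for the model's sufficient statistics), not the thesis implementation.

```python
import numpy as np

def fisher_vector_product(score_samples, v, damping=1e-3):
    """Estimate F v, where F is approximated by the covariance of sampled
    per-example score vectors (shape: n_samples x n_params), plus damping."""
    centered = score_samples - score_samples.mean(axis=0)
    # F v ~ (1/n) * sum_i s_i (s_i . v), computed without forming F explicitly.
    return centered.T @ (centered @ v) / centered.shape[0] + damping * v

def conjugate_gradient(fvp, g, iters=50, tol=1e-8):
    """Solve F d = g using only the matrix-vector product fvp(v) ~ F v."""
    d = np.zeros_like(g)
    r = g.copy()          # residual g - F d (d = 0 initially)
    p = r.copy()
    rs_old = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs_old / (p @ Fp)
        d += alpha * p
        r -= alpha * Fp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d

# Toy usage: random per-sample score vectors stand in for model statistics.
rng = np.random.default_rng(0)
scores = rng.normal(size=(256, 20))      # hypothetical sampled scores
grad = rng.normal(size=20)               # hypothetical loss gradient
direction = conjugate_gradient(lambda v: fisher_vector_product(scores, v), grad)
print("natural-gradient direction (first 5 entries):", direction[:5])
```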

Relevance:

30.00%

Publisher:

Abstract:

The influence of abiotic parameters on the isolation of protoplasts from in vitro seedling cotyledons of white lupin was investigated. The protoplasts were found to be competent in withstanding a wide range of osmotic potentials of the enzyme medium; however, −2.25 MPa (0.5 M mannitol) resulted in the highest yield of protoplasts. The pH of the isolation medium also had a profound effect on protoplast production. Vacuum infiltration of the enzyme solution into the cotyledon tissue resulted in a progressive drop in the yield of protoplasts. The speed and duration of orbital agitation of the cotyledon tissue played a significant role in the release of protoplasts, and a two-step (stationary-gyratory) regime was found to be better than the gyratory-only system.

Relevance:

30.00%

Publisher:

Abstract:

Liquid matrix-assisted laser desorption/ionization (MALDI) allows the generation of predominantly multiply charged ions in atmospheric pressure (AP) MALDI ion sources for mass spectrometry (MS) analysis. The charge state distribution of the generated ions and the efficiency of the ion source in generating such ions crucially depend on the desolvation regime of the MALDI plume after desorption in the AP-to-vacuum inlet. Both high temperature and a flow regime with increased residence time of the desorbed plume in the desolvation region promote the generation of multiply charged ions. Without such measures, the application of an electric ion extraction field significantly increases the ion signal intensity of singly charged species, while the detection of multiply charged species is less dependent on the extraction field. In general, optimization of the high-temperature application facilitates the predominant formation and detection of multiply charged rather than singly charged ion species. In this study, an experimental setup and optimization strategy is described for liquid AP-MALDI MS which improves the ionization efficiency of selected ion species up to 14-fold. In combination with ion mobility separation, the method allows the detection of multiply charged peptide and protein ions at analyte solution concentrations as low as 2 fmol/µL (0.5 µL, i.e. 1 fmol, deposited on the target) with very low sample consumption in the low-nL range.

Relevance:

30.00%

Publisher:

Abstract:

In this work, the effect of milling time on the densification of alumina ceramics with or without 5 wt.% Y2O3 is evaluated, using high-energy ball milling. The milling was performed for 0, 2, 5 or 10 hours. All powders milled for the different times were characterized by X-ray diffraction, which showed a reduction in crystallinity and crystallite size with increasing milling time. The powders were compacted by cold uniaxial pressing and sintered at 1550 °C for 60 min. The green density of the compacts increased with milling time, and the sintered samples showed improved densification as the crystallite size of the milled powders decreased. © (2010) Trans Tech Publications.

Relevance:

30.00%

Publisher:

Abstract:

Goal Programming (GP) is an important analytical approach devised to solve many real-world problems. The first GP model is known as Weighted Goal Programming (WGP). However, Multi-Choice Aspiration Level (MCAL) problems cannot be solved by current GP techniques. In this paper, we propose a Multi-Choice Mixed Integer Goal Programming (MCMI-GP) model for the aggregate production planning of a Brazilian sugar and ethanol milling company. The MCMI-GP model was based on traditional selection and process methods for the design of lots, representing the production system of sugar, alcohol, molasses and derivatives. The research covers decisions on the agricultural and cutting stages, sugarcane loading and transportation by suppliers and, especially, energy cogeneration decisions; that is, the choice of production process, including storage and distribution stages. The MCMI-GP allows decision-makers to set multiple aspiration levels for their problems, in which both "the more/higher, the better" and "the less/lower, the better" aspiration levels are addressed. The proposed model was applied to real problems at a Brazilian sugar and ethanol mill, producing interesting results that are reported and commented upon herein. A comparison between the MCMI-GP and WGP models was also made using these real cases. © 2013 Elsevier Inc.
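
As background for the goal-programming family discussed above, here is a minimal weighted goal programming (WGP) sketch, not the proposed MCMI-GP model: each goal receives under- and over-achievement deviation variables, and a weighted sum of the unwanted deviations is minimized. The two goals, targets, and weights are invented for illustration.

```python
from scipy.optimize import linprog

# Decision vector: [x1, x2, n1, p1, n2, p2], where n_i / p_i are the under- /
# over-achievement deviations for goal i.
# Goal 1 (illustrative): 3*x1 + 2*x2 should reach 12     -> penalize shortfall n1
# Goal 2 (illustrative): 1*x1 + 2*x2 should not exceed 10 -> penalize excess p2
weights = {"n1": 2.0, "p2": 1.0}                      # invented goal priorities
c = [0, 0, weights["n1"], 0, 0, weights["p2"]]        # objective coefficients

A_eq = [
    [3, 2, 1, -1, 0, 0],   # 3*x1 + 2*x2 + n1 - p1 = 12
    [1, 2, 0, 0, 1, -1],   # 1*x1 + 2*x2 + n2 - p2 = 10
]
b_eq = [12, 10]
bounds = [(0, None)] * 6   # all variables non-negative

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x1, x2, n1, p1, n2, p2 = res.x
print(f"x1={x1:.2f}, x2={x2:.2f}, goal-1 shortfall={n1:.2f}, goal-2 excess={p2:.2f}")
```

In a multi-choice extension, each goal's target would itself be selected from a discrete set via binary variables, which is what motivates the mixed-integer formulation described in the abstract.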