933 results for Metals - Formability - Simulation methods


Relevance: 30.00%

Abstract:

A high-frequency cycloconverter acts as a direct ac-to-ac power converter that does not require a diode bridge rectifier. The bridgeless topology removes the forward voltage drop losses present in a diode bridge. In addition, the on-state losses can be reduced to 1.5 times the on-state resistance of the switches in half-bridge operation of the cycloconverter. A high-frequency cycloconverter is reviewed and the charging effect of the dc capacitors in "back-to-back" or synchronous-mode operation is analyzed. In addition, a control method is introduced for regulating the dc voltage of the ac-side capacitors in synchronous operation mode. The controller regulates the dc capacitors and prevents the switches from reaching their overvoltage level. This is accomplished by varying the phase shift between the upper and lower gate signals. By adding a phase shift between the gate signal pairs, the charge stored in the energy storage capacitors can be discharged through the resonant load and, consequently, the output resonant current amplitude can be improved. The above goals are analyzed and illustrated with simulations. The theory is supported with practical measurements in which the proposed control method is implemented in an FPGA device and tested with a high-frequency cycloconverter using super-junction power MOSFETs as switching devices.
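The phase-shift idea can be illustrated with a small sketch (a hypothetical waveform model, not the thesis's FPGA implementation): shifting the lower gate pair relative to the upper pair shortens their simultaneous conduction interval, the window through which the dc capacitors discharge into the resonant load.

```python
def gate_signal(t, period, phase_shift=0.0):
    """Ideal 50% duty-cycle gate signal: True = switch on.

    phase_shift is a fraction of the switching period
    (0.0 = no shift, 0.5 = half a period). Illustrative model only.
    """
    # Wrap time into [0, period) after applying the phase shift.
    tau = (t - phase_shift * period) % period
    return tau < period / 2

def overlap_fraction(period, shift, samples=10_000):
    """Fraction of the period during which an unshifted gate pair and a
    phase-shifted pair conduct simultaneously."""
    hits = 0
    for k in range(samples):
        t = k * period / samples
        if gate_signal(t, period) and gate_signal(t, period, shift):
            hits += 1
    return hits / samples
```

With no shift the two pairs overlap for half the period; a quarter-period shift halves the overlap, illustrating how the phase shift modulates the discharge window.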

Relevance: 30.00%

Abstract:

The aim of this thesis is to examine how to match demand and supply effectively in an industrial, project-oriented business environment. The demand-supply balancing process is examined in three phases: demand planning and forecasting, synchronization of demand and supply, and measurement of the results. The thesis contains a single case study carried out at a company called Outotec. In the case study, demand is planned and forecasted with a qualitative (judgmental) forecasting method. Quantitative forecasting methods are explored further to support the demand forecast and long-term planning. The sales and operations planning process is used in the synchronization of demand and supply. The demand forecast is applied in the management of the supply chain of a critical unit of an elemental analyzer. Different metrics at the operational and strategic levels are proposed for the measurement of performance.
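As one illustration of the kind of quantitative method that can support a judgmental demand forecast, here is a minimal simple-exponential-smoothing sketch (illustrative only; the abstract does not specify which quantitative model was explored):

```python
def exponential_smoothing(demand, alpha=0.3):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    weighted blend of the latest observation and the previous forecast.

    alpha in (0, 1] controls how quickly old demand is forgotten.
    """
    forecast = demand[0]  # initialise with the first observation
    for d in demand[1:]:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast
```

For a flat demand history the forecast reproduces the level; for a varying history it tracks recent observations more heavily as `alpha` grows.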

Relevance: 30.00%

Abstract:

The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense for the human-manipulator interaction and ideal position control for the manipulator-task environment interaction. The proposed method has the characteristics of a universal technique independent of the actual control algorithm, and it can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs, with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built using the Markov chain Monte Carlo (MCMC) method. A particle swarm optimization algorithm combined with the foraging behavior of E. coli bacteria was used as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps ensure that the system has a haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors used in teleoperated systems, and it avoids dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially a harsh one such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
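The directed random search component can be sketched as a plain particle swarm optimizer (shown here without the bacterial-foraging chemotaxis step the thesis adds, and with illustrative parameter values):

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimiser (sketch; no bacterial-foraging step).

    Returns (best position, best objective value) after `iters` sweeps.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the thesis's setting the objective would be the real-time simulator's error measure for a candidate controller parameter set; here a simple sphere function stands in for it.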

Relevance: 30.00%

Abstract:

This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. Multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits real-time solution of the equations of motion. By carefully selecting the multibody formulation method, it is possible to increase the accuracy of the multibody model while still solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates, and constraints are accounted for using an iterative process. A multibody system can be modelled with either rigid or flexible bodies. When flexible bodies are used, the system can be described with a floating frame of reference formulation, in which the required deformation modes can be obtained from a finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing model order reduction methods such as Guyan reduction, the Craig-Bampton method and Krylov subspaces, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modeled using lumped fluid theory, in which the hydraulic circuit is divided into volumes and the pressure wave propagation in the hoses and pipes is neglected. Contact modeling is divided into two stages: contact detection and contact response. Contact detection determines when and where contact occurs, and contact response provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix form, where the sparsity of the matrices is not exploited. Increasing the number of bodies and constraint equations makes the system matrices large and sparse in structure. To increase the computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation demonstrated. To assess computing efficiency, the augmented Lagrangian and semi-recursive methods are implemented with the sparse matrix technique. The results of a numerical example show that the proposed approach is applicable and produces appropriate results within the real-time period.
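One common way to exploit sparsity when solving linear systems such as the linearized equations of motion is to store the matrix in compressed sparse row (CSR) form and use an iterative solver. The dissertation does not specify this exact scheme, so the sketch below is illustrative:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) form:
    only nonzero entries (data) and their column indices are stored."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def conjugate_gradient(data, indices, indptr, b, iters=50, tol=1e-10):
    """Solve A x = b for a symmetric positive definite sparse matrix A,
    touching only the stored nonzeros at each iteration."""
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x (x starts at zero)
    p = r[:]          # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Because each iteration costs only one sparse matrix-vector product, the work scales with the number of nonzeros rather than with the square of the system size, which is what makes sparse techniques attractive as the number of bodies and constraints grows.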

Relevance: 30.00%

Abstract:

Euclidean distance matrix analysis (EDMA) methods are used to determine whether a significant difference exists between conformational samples of antibody complementarity determining region (CDR) loops, namely the isolated L1 loop and L1 in a three-loop assembly (L1, L3 and H3), obtained from Monte Carlo simulation. Once a significant difference is detected, the specific inter-Cα distances that contribute to the difference are identified using EDMA. The estimated and improved mean forms of the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly, CDR loops of the antibody binding site, are described using EDMA and distance geometry (DGEOM). To the best of our knowledge, this is the first time EDMA methods have been used to analyze conformational samples of molecules obtained from Monte Carlo simulations. Therefore, the EDMA methods must be validated with both positive-control and negative-control tests on the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly. The EDMA-I bootstrap null hypothesis tests showed false positive results for the comparison of six samples of the isolated L1 loop, and true positive results for the comparison of conformational samples of the isolated L1 loop and L1 in the three-loop assembly. The bootstrap confidence interval tests yielded true negative results for comparisons of the six samples of the isolated L1 loop, and false negative results for the conformational comparisons between the isolated L1 loop and L1 in the three-loop assembly. Different conformational sample sizes were explored further, both by combining the samples of the isolated L1 loop to increase the sample size and by clustering the samples with a self-organizing map (SOM) to narrow the conformational distribution of the samples being compared. However, neither change improved the bootstrap null hypothesis or confidence interval tests. These results show that more work is required before EDMA methods can be used reliably for the comparison of samples obtained by Monte Carlo simulations.
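The basic EDMA ingredients can be sketched in a few lines: a form (inter-point distance) matrix per conformation, an element-wise mean form per sample, a ratio-based comparison statistic, and a bootstrap null distribution. This is a simplified illustration of the idea, not the full EDMA-I procedure used in the study:

```python
import math
import random

def distance_matrix(points):
    """Pairwise Euclidean distances for one conformation (a list of points)."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def mean_form(sample):
    """Element-wise mean of the distance matrices of a conformational sample."""
    mats = [distance_matrix(conf) for conf in sample]
    n = len(mats[0])
    return [[sum(m[i][j] for m in mats) / len(mats) for j in range(n)]
            for i in range(n)]

def max_ratio(m1, m2):
    """EDMA-style statistic: spread of the pairwise mean-distance ratios
    (1.0 when the two mean forms are identical)."""
    ratios = [m1[i][j] / m2[i][j]
              for i in range(len(m1)) for j in range(len(m1)) if i < j]
    return max(ratios) / min(ratios)

def bootstrap_pvalue(s1, s2, n_boot=200, seed=0):
    """Bootstrap null test: resample from the pooled samples and count how
    often the resampled statistic reaches the observed one."""
    rng = random.Random(seed)
    obs = max_ratio(mean_form(s1), mean_form(s2))
    pooled = s1 + s2
    count = 0
    for _ in range(n_boot):
        b1 = [rng.choice(pooled) for _ in s1]
        b2 = [rng.choice(pooled) for _ in s2]
        if max_ratio(mean_form(b1), mean_form(b2)) >= obs:
            count += 1
    return count / n_boot
```

A large p-value means the two samples' mean forms are indistinguishable under resampling; the validation issue discussed above is precisely whether this kind of test gives the right answer on known positive and negative controls.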

Relevance: 30.00%

Abstract:

Volume(density)-independent pair potentials cannot describe metallic cohesion adequately, as the presence of the free electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model in which the embedding function is taken to be proportional to the square root of the electron density; models of this type are known as Finnis-Sinclair many-body potentials. In this work we study a particular parametrization of the Finnis-Sinclair potential, the "Sutton-Chen" model, and a later version, the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of the thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to the rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour, and show that an exponentially decaying form of the charge density improves the results significantly. Results for the Sutton-Chen model and our improved version are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For comparison we have also considered two other models in which the distance dependence of the charge density is an exponential multiplied by a polynomial. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model. We also present results obtained with pure pair-potential models, in order to identify advantages and disadvantages of the methods used to obtain the parameters of these potentials.
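For reference, the Sutton-Chen total energy has the form E = ε Σ_i [ ½ Σ_{j≠i} (a/r_ij)^n − c √ρ_i ] with the local "electron density" ρ_i = Σ_{j≠i} (a/r_ij)^m. A minimal evaluation for a small open cluster (no periodic boundaries, purely illustrative parameters) looks like this:

```python
import math

def sutton_chen_energy(positions, eps, a, c, n, m):
    """Total Sutton-Chen energy for a small open cluster of atoms
    (no periodic boundary conditions; illustrative sketch)."""
    N = len(positions)
    energy = 0.0
    for i in range(N):
        pair = 0.0  # repulsive pair sum for atom i
        rho = 0.0   # power-law "electron density" at atom i
        for j in range(N):
            if i == j:
                continue
            r = math.dist(positions[i], positions[j])
            pair += (a / r) ** n
            rho += (a / r) ** m
        # Embedding term is proportional to the square root of the density,
        # the defining feature of Finnis-Sinclair-type potentials.
        energy += eps * (0.5 * pair - c * math.sqrt(rho))
    return energy
```

The discussion above concerns exactly the `(a/r)**m` factor: replacing this power-law density with an exponentially decaying form changes the softening of the transverse phonons with increasing lattice parameter.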

Relevance: 30.00%

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are generally based on asymptotic approximations, which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the Monte Carlo test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments focus on: (1) ARCH, GARCH and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
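The Monte Carlo test technique is generic: simulate the test statistic under the null hypothesis and rank the observed value among the replicates, which yields an exact p-value when (n_rep + 1) x α is an integer. A minimal sketch (function names are hypothetical):

```python
import random

def monte_carlo_pvalue(stat, data, simulate_null, n_rep=99, seed=42):
    """Exact Monte Carlo p-value: rank the observed statistic among
    n_rep replicates simulated under the null hypothesis.

    `stat` maps a dataset to a scalar test statistic; `simulate_null`
    draws one dataset under the null from the supplied RNG.
    """
    rng = random.Random(seed)
    s_obs = stat(data)
    # Count replicates at least as extreme as the observed statistic.
    exceed = sum(1 for _ in range(n_rep)
                 if stat(simulate_null(rng)) >= s_obs)
    # The +1 terms include the observed statistic in its own ranking,
    # which is what makes the test exact in finite samples.
    return (exceed + 1) / (n_rep + 1)
```

With 99 replicates the attainable p-values are multiples of 1/100, so a 5% test is exact. For a homoskedasticity test, `stat` would be, e.g., a Goldfeld-Quandt variance ratio and `simulate_null` would draw homoskedastic disturbances.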

Relevance: 30.00%

Abstract:

We propose finite-sample tests and confidence sets for models with unobserved and generated regressors, as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, to test hypotheses about the "structural parameters" of interest and to build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample-split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the proposed tests are examined in simulation experiments; in general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the suggested techniques are applied to a model of Tobin's q and to a model of academic performance.
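The instrument substitution idea can be sketched for the simplest case of one endogenous regressor: under H0: β = β0, regress y − Yβ0 on the instrument matrix Z and form the usual F statistic for the instruments' coefficients. This is a stripped-down Anderson-Rubin construction (no exogenous covariates are included, and the helper names are this sketch's own):

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ar_statistic(y, Y, Z, beta0):
    """Anderson-Rubin-type statistic for H0: beta = beta0 in y = Y*beta + u,
    using the instruments Z directly (rows = observations)."""
    n, k = len(Z), len(Z[0])
    u = [yi - beta0 * Yi for yi, Yi in zip(y, Y)]  # restricted residual
    # OLS of u on Z via the normal equations (Z'Z) a = Z'u.
    ZtZ = [[sum(Z[t][i] * Z[t][j] for t in range(n)) for j in range(k)]
           for i in range(k)]
    Ztu = [sum(Z[t][i] * u[t] for t in range(n)) for i in range(k)]
    a = gauss_solve(ZtZ, Ztu)
    fitted = [sum(Z[t][i] * a[i] for i in range(k)) for t in range(n)]
    ess = sum(f * f for f in fitted)                       # explained SS
    rss = sum((ut - f) ** 2 for ut, f in zip(u, fitted))   # residual SS
    return (ess / k) / (rss / (n - k))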

Relevance: 30.00%

Abstract:

Context. Case-control studies are very frequently used by epidemiologists to assess the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analyzing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the over-sampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of the risk sets for the analysis of case-control data has yet to be elucidated, and to be studied for time-dependent variables. Objective. The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal risk-set definitions (the weighted Cox model and the simple weighted Cox model), in which different weights are assigned to cases and controls in order to reflect the proportions of cases and non-cases in the source population. The properties of the estimators of the exposure effects were studied by simulation. Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analyzed with different versions of the Cox model, including the old and new risk-set definitions, as well as with conventional logistic regression for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of the different smoking variables obtained with the different methods were compared with one another, and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the weighted Cox model, are far less biased than the estimates from the existing Cox models that simply include or exclude the future cases of each risk set. Moreover, the estimates of the weighted Cox model were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and of the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an attractive alternative to the logistic regression model for estimating the effects of time-dependent exposures in case-control studies.
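The weighting principle (rescaling cases and controls so that their weighted shares match the case and non-case proportions in the source population) can be sketched as follows. This is a hypothetical illustration of the idea, not the exact estimator studied in the thesis:

```python
def case_control_weights(n_cases, n_controls, pop_case_rate):
    """Weights that rescale a case-control sample so that cases and
    controls appear in their source-population proportions.

    pop_case_rate is the (assumed known) proportion of cases in the
    source population; the sample over-represents cases by design.
    """
    n = n_cases + n_controls
    # Each group's weight is its population share divided by its
    # (over- or under-represented) sample share.
    w_case = pop_case_rate / (n_cases / n)
    w_control = (1 - pop_case_rate) / (n_controls / n)
    return w_case, w_control
```

In a weighted Cox partial likelihood, such weights would multiply each subject's contribution to the risk set, down-weighting the over-sampled cases.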

Relevance: 30.00%

Abstract:

Financial assets are often modelled by stochastic differential equations (SDEs). These equations can describe the behaviour of the asset, and sometimes also of certain model parameters. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behaviour of the asset and of its variance. The Heston model is very attractive because it admits semi-analytical formulas for certain derivatives, as well as a degree of realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These new algorithms aim to accelerate the well-known algorithm of Broadie and Kaya (2006); to do so, we use, among other things, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method in order to accelerate it: instead of the second-order Newton method and the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm is an improvement of the first: instead of using the true density of the integrated variance, we use the approximation of Smith (2007). This improvement reduces the dimension of the characteristic equation and speeds up the algorithm. Our last algorithm is not based on an MCMC method, but it still aims to accelerate the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance random variable. According to Stewart et al. (2007), a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) when the time step is small) can be approximated by a single gamma random variable.
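The gamma moment-matching step can be sketched directly: choose the shape and scale that reproduce a given mean and variance, and apply the same idea to collapse a convolution of independent gamma variables into a single one. This illustrates the approach, not the thesis's calibrated algorithm:

```python
def gamma_moment_match(mean, var):
    """Shape and scale of a gamma distribution with the given mean and
    variance (mean = shape * scale, var = shape * scale**2)."""
    shape = mean * mean / var
    scale = var / mean
    return shape, scale

def match_convolution(shapes, scales):
    """Approximate a sum of independent Gamma(shape_i, scale_i) variables
    by a single gamma variable with the same first two moments
    (the Stewart et al. style approximation mentioned above)."""
    mean = sum(k * th for k, th in zip(shapes, scales))
    var = sum(k * th * th for k, th in zip(shapes, scales))
    return gamma_moment_match(mean, var)
```

When all the summands share the same scale, the gamma family is closed under convolution (the shapes simply add), so the two-moment match is exact; with unequal scales it is only an approximation, which is the trade-off the third algorithm exploits for speed.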

Relevance: 30.00%

Abstract:

Humic substances are complex polymeric structures. No other polymers with such a wide range of properties are so widely distributed in nature, yet their molecular structures remain unknown. Structural knowledge is essential for determining their reactivity with metals. In the present work, the structural elucidation of humic acids from three different mangrove ecosystems of the Cochin area is carried out using data from functional group analysis and various spectroscopic methods. Solid-state 13C NMR spectra with CP-MAS, together with IR and SEM, are very promising in revealing the complex structures of these polymeric substances. Sorption studies on the sediment and humic acid of the mangrove ecosystem reveal that the major portion of the organic matter is not extractable with sodium hydroxide, and that humic acid constitutes only a small portion of the total organic matter. Humic acid is a good complexing agent and scavenger. Owing to the non-extractable nature of the organic matter remaining with the sediment after alkali extraction, the sediment is an even better scavenger.

Relevance: 30.00%

Abstract:

The thesis embodies the results of a study of the variations in the productivity parameters of two test species, a chlorophycean alga and a diatom. The chlorophycean alga Scenedesmus abundans was isolated from a freshwater pond, whereas the diatom Nitzschia clausii was from the Cochin backwaters. Their growth parameters, and the variations caused by the addition of some heavy metals, have been studied. The growth parameters include biomass, production, respiration, photosynthetic pigments and the end products of photosynthesis. Cell numbers were estimated using a haemocytometer, and production and respiration by the oxygen light-and-dark bottle technique. Spectrophotometric analysis for pigments, the anthrone method for carbohydrate and the heated biuret method for protein were the other methods employed in the present investigation. The study is confined to nickel, cobalt, and trivalent and hexavalent chromium; these metals are discharged by various industries in and around Cochin. The effects of these metals, individually and in combination, are studied. Experiments on the interaction of metals in combination enabled assessment of the antagonistic and synergistic effects of the metals on the test species. The concentration, or accumulation, of metals in the algae was determined by atomic absorption spectrophotometry. The thesis is divided into seven chapters. The introductory chapter explains the relevance of the present investigation. Chapter two presents a review of the literature on toxicity. The third chapter gives a detailed description of the materials and the specialized methods used in the study. The effects of the metals selected for study - nickel, cobalt, and trivalent and hexavalent chromium - on the qualitative and quantitative aspects of productivity form the subject matter of the fourth chapter. The fifth chapter presents the impact of metals in combination on the two species of algae. A general discussion and summary are included in the sixth and seventh chapters.

Relevance: 30.00%

Abstract:

The present scientific investigation of the effects of copper, mercury and cadmium has focussed on two commercially important marine bivalve species, Perna indica (brown mussel) and Donax incarnatus (wedge clam), conspicuous representatives of the tropical intertidal areas. The investigation centred on delineating the cause and effects of heavy metal stress, individually and in combination, on these species under laboratory conditions. A clear understanding of cause and effect can be obtained only if laboratory experiments are conducted at sub-lethal concentrations of the above toxicants. Therefore, during the course of the investigation, sub-lethal concentrations of copper, mercury and cadmium were employed to assess the concentration-dependent effects on survival, ventilation rate, O:N ratio and tissues. The results obtained are compared with the already available information and partitioned into sections to make a meaningful presentation. The thesis is presented in five chapters: INTRODUCTION, ACUTE TOXICITY, VENTILATION RATE, OXYGEN : NITROGEN RATIO and HISTOPATHOLOGY. Each chapter is divided into sections: INTRODUCTION, REVIEW OF LITERATURE, MATERIAL AND METHODS, RESULTS and DISCUSSION.

Relevance: 30.00%

Abstract:

Mangroves are considered to play a significant role in global carbon cycling. Mangrove forests fix CO2 by photosynthesis into mangrove lumber and thus decrease the possibility of a catastrophic series of events: global warming by atmospheric CO2, melting of the polar ice caps, and inundation of the great coastal cities of the world. Leaf litter and roots are the main contributors to mangrove sediments, though algal production and allochthonous detritus can also be trapped (Kristensen et al., 2008). Owing to their high organic matter content and reducing nature, mangroves are excellent metal retainers. Environmental pollution due to metals is of major concern, for the basic reason that metals are not biodegradable or perishable the way most organic pollutants are. While most organic toxicants can be destroyed by combustion and converted into compounds such as CO, CO2, SOx and NOx, metals cannot be destroyed; at most, their valence and physical form may change. The concentrations of metals present naturally in air, water and soil are very low. Metals released into the environment through anthropogenic activities, such as the burning of fossil fuels, the discharge of industrial effluents, mining and the dumping of sewage, lead to higher-than-tolerable, or toxic, levels of metals in the environment, that is, to metal pollution. Of course, a large number of heavy metals, such as Fe, Mn, Cu, Ni, Zn, Co, Cr, Mo and V, are essential to plants and animals, and deficiency of these metals may lead to disease; at higher levels, however, they cause metal toxicity. Almost all industrial processes and urban activities release at least trace quantities of half a dozen metals in different forms. Heavy metal pollution in the environment can remain dormant for a long time and then surface with a vengeance. Once an area becomes toxified with metals, it is almost impossible to detoxify it. The symptoms of metal toxicity are often quite similar to those of other common diseases, such as respiratory problems, digestive disorders, skin diseases, hypertension, diabetes and jaundice, making it all the more difficult to diagnose metal poisoning. For example, Minamata disease, caused by mercury pollution, in addition to affecting the nervous system, can disturb liver function and cause diabetes and hypertension. The damage caused by heavy metals does not end with the affected person: the harmful effects can be transferred to the person's progeny. Ironically, heavy metal pollution is a direct offshoot of our increasing ability to mass-produce metals and use them in all spheres of existence. Along with conventional physico-chemical methods, biosystem approaches are also constantly being used to combat metal pollution.

Relevance: 30.00%

Abstract:

The performance of different correlation functionals has been tested for alkali metals, Li to Cs, interacting with cluster models simulating different active sites of the Si(111) surface. In all cases, the ab initio Hartree-Fock density has been obtained and used as a starting point, and the electronic correlation energy is then introduced as an a posteriori correction to the Hartree-Fock energy using different correlation functionals. By making use of the ionic nature of the interaction and of different dissociation limits, we have been able to prove that all the functionals tested introduce the right correlation energy, although to different extents. Hence, correlation functionals appear to be an effective and easy way to introduce electronic correlation into the ab initio Hartree-Fock description of the chemisorption bond in complex systems where conventional configuration interaction techniques cannot be used. However, the calculated energies may differ by some tenths of an eV. Therefore, these methods can be employed to get a qualitative idea of how important correlation effects are, but they have limitations if accurate binding energies are to be obtained.