984 results for Coating techniques


Relevance: 20.00%

Publisher:

Abstract:

Superfine mineral materials, produced mainly by the pulverization of natural mineral resources, are a class of new materials that can replace traditional materials and currently enjoy the widest application and highest consumption on the market; their market potential is therefore broad and promising. Superfine pulverization is the essential route for the in-depth processing of most traditional materials and one of the major means by which mineral materials find application. China is rich in natural resources such as heavy calcite, kaolin and wollastonite, which have wide application markets in paper making, rubber, plastics, paints and coatings, medicine, environment-friendly recycled paper and fine chemicals. However, because these resources are generally processed at a low level, the economic benefits and scale of their processing have yet to reach full potential, and large gaps in product quality indices, superfine processing equipment and technology remain between China and advanced Western countries. Based on resource assessment and market-potential analysis, this thesis presents an in-depth study of superfine pulverization technology and superfine pulverized mineral materials, covering mineralogical features, selection of processing technologies, analytical methods and applications, and drawing on modern analytical methods from mineralogy, superfine pulverization technology, macromolecular chemistry, materials science and physical chemistry together with computer techniques. The focus is on innovative in-depth processing technology and apparatus for kaolin and heavy calcite, and on the application of the superfine products.
The main contents and major achievements of this study are as follows: 1. Superfine pulverization of mineral materials should be integrated with the study of their crystal structures and chemical composition, and attention should be directed, according to the intended field of application, to post-processing technologies rather than to particle-size indices alone. Both technical and economic feasibility must be considered in studies of superfine pulverization technologies, since these are the premise for the industrialized application of superfine pulverized mineral materials. Following this principle, a chemical pre-treatment method, a synchronized superfine pulverization and classification technology, and an integrated modification-and-depolymerization process and apparatus were employed, yielding superfine products with narrow particle-size distribution, good dispersibility, good application performance, low consumption and high effectiveness. Heavy calcite and kaolin are the two superfine mineral materials with the highest industrial consumption. Heavy calcite is applied mainly in the paper-making, coating and plastics industries; the hard kaolin of northern China is used mainly in macromolecular materials and chemical industries, while the soft kaolin of southern China is used mainly for paper making. In addition, superfine pulverized heavy calcite and kaolin can both serve as functional additives to cement, the most heavily consumed material in the world.
A variety of analytical methods and instruments, including transmission and scanning electron microscopy, X-ray diffraction, infrared analysis and laser particle-size analysis, were applied to elucidate the properties and functional mechanisms of superfine mineral materials used in plastics and high-performance cement. Detection of superfine mineral materials is closely related to their post-processing and application. Traditional detection and analytical methods include optical microscopy, infrared spectral analysis and a series of microbeam techniques such as transmission and scanning electron microscopy and X-ray diffraction. In addition, super-weak luminescent photon detection of high precision, high sensitivity and high signal-to-noise ratio was used by the author for the first time in the study of superfine mineral materials, in an attempt to establish a completely new method and means for their characterization; the experimental results are very encouraging. The innovations of this study are the following: 1. A chemical pre-treatment method, a synchronized superfine pulverization and classification technology, and an integrated modification-and-depolymerization process and apparatus were used in an innovative way, and superfine products with narrow particle-size distribution, good dispersibility, good application performance, low consumption and high effectiveness were achieved in industrialized production. Moreover, a new modification technology, together with directions for producing the associated chemicals, was developed, and the modification technology was awarded a patent. 2.
Super-weak luminescent photon detection of high precision, high sensitivity and high signal-to-noise ratio was used for the first time to study superfine mineral materials; the experimental results compare well with those obtained by scanning electron microscopy and demonstrate unique advantages. Further study may lead to a completely new method and means for the characterization of superfine materials. 3. During the heating of kaolinite and its decomposition into pianlinite, the diffraction peaks disappear gradually: first the basal-plane (001) reflection vanishes, followed by the slow disappearance of the (hkl) diffraction peaks. This was first observed by the author during the experiments and has not previously been reported by other scholars. 4. This study discovered, and discusses in detail, for the first time that superfine mineral materials can act as dispersants in plastics, and that they can serve combined functions as activators, water-reducing agents and aggregates in high-performance cement.
This study was jointly supported by two key grants from Guangdong Province for scientific and technological research during the 10th Five-Year Plan period (1,200,000 yuan for "Preparation technology, apparatus and post-processing research on sub-micron superfine mechanical pulverization" and 300,000 yuan for "Methods and instruments of biological photon technology for the characterization of nanometer materials"), and by two grants from the Guangdong Province 100 Projects for Scientific and Technological Innovation (700,000 yuan for "Pilot experimentation of superfine, modified heavy calcite for the paper-making, rubber and plastics industries" and 400,000 yuan for "Study of superfine, modified wollastonite with large length-to-diameter ratio").

Relevance: 20.00%

Publisher:

Abstract:

Carbonaceous deposits formed during the temperature-programmed surface reaction (TPSR) of methane dehydro-aromatization (MDA) over Mo/HZSM-5 catalysts have been investigated by TPH, TPCO2 and TPO, in combination with thermogravimetric analysis (TG). The TPO profiles of the coked catalyst after TPSR of MDA show two temperature peaks, one at about 776 K and the other at about 865 K. Subsequent TPH experiments diminished only the area of the high-temperature peak and had no effect on the area of the low-temperature peak. In contrast, the TPO profiles of the coked catalyst after subsequent TPCO2 experiments exhibited an obvious reduction in the areas of both the high- and low-temperature peaks, particularly the low-temperature peak. On the basis of the TPSR, TPR and TPCO2 experiments and the corresponding TG analysis, a quantitative analysis of the coke and of the kinetics of its burning-off process has been carried out. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point, so tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on the possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo. The visual inspection of surface defects in metal castings is considered, and two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines, in which grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane.
Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second application is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent in the casting process translate into constraints on the observed intensities; to exploit these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.
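The local solve at the heart of photometric stereo can be sketched as follows. Under a Lambertian model, the intensity at a pixel is the albedo times the dot product of the light direction and the surface normal, so three images under known, non-coplanar lights determine the normal by a linear solve. The light directions and values below are hypothetical, an illustration of the idea rather than the report's implementation:

```python
import numpy as np

# Three hypothetical, non-coplanar illumination directions (unit vectors),
# one per image.
LIGHTS = np.array([
    [0.0, 0.0, 1.0],
    [0.6, 0.0, 0.8],
    [0.0, 0.6, 0.8],
])

def photometric_stereo(intensities, lights=LIGHTS):
    """Recover albedo and unit surface normal at one image point from
    three intensities measured under known lights (Lambertian model:
    I_i = albedo * dot(light_i, normal))."""
    g = np.linalg.solve(lights, intensities)  # g = albedo * normal
    albedo = np.linalg.norm(g)
    return albedo, g / albedo
```

With a single image, the same intensity is consistent with a whole curve of orientations in gradient space; the three-light system is what makes the per-pixel solve well posed.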

Relevance: 20.00%

Publisher:

Abstract:

C.R. Bull, R. Zwiggelaar and R.D. Speller, 'Review of inspection techniques based on the elastic and inelastic scattering of X-rays and their potential in the food and agricultural industry', Journal of Food Engineering 33 (1-2), 167-179 (1997)

Relevance: 20.00%

Publisher:

Abstract:

Accurate knowledge of the traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amount of data that must be collected and the performance impact such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is limited, however, by their reliance on incomplete information and by the heavy computation they incur, which constrains their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an Expectation-Maximization (EM) iterative algorithm. First, we analyze modeling approaches to generating starting points; we call these starting points informed priors, since they are obtained from actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
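The basic iteration can be illustrated with a Vardi-style Poisson EM (multiplicative MLEM) update for estimating origin-destination demands x from link counts y = Ax. This is a simplified stand-in for the paper's EM variant, and the routing matrix, counts and prior below are made-up toy values:

```python
import numpy as np

def em_estimate(A, y, x0, iters=2000):
    """Poisson EM (Vardi-style multiplicative) iteration for estimating
    OD demands x from link counts y = A x, started from an informed
    prior x0. Simplified sketch; not the paper's fast EM variant."""
    x = np.asarray(x0, float).copy()
    col = A.sum(axis=0)              # per-demand routing weight
    for _ in range(iters):
        ratio = y / (A @ x)          # measured vs. predicted link loads
        x *= (A.T @ ratio) / col     # multiplicative EM update
    return x
```

A good (informed) starting point x0 both speeds convergence and, when the linear system is underdetermined, steers the iteration toward a plausible demand matrix.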

Relevance: 20.00%

Publisher:

Abstract:

Training data for supervised learning neural networks can be clustered such that the input/output pairs in each cluster are redundant, and redundant training data can adversely affect training time. In this paper we apply two clustering algorithms, ART2-A and the Generalized Equality Classifier, to identify clusters in the training data and thereby reduce both the training set and the training time. The approach is demonstrated for a high-dimensional nonlinear continuous-time mapping, showing a six-fold decrease in training time with little or no loss of accuracy on evaluation data.
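The redundancy-reduction step can be illustrated with a generic greedy radius-based clustering that keeps one representative input/output pair per cluster. This is a stand-in for the clustering idea only, not an implementation of ART2-A or the Generalized Equality Classifier, and the radius and data are arbitrary:

```python
import numpy as np

def reduce_training_set(X, Y, radius=0.1):
    """Keep one representative (x, y) pair per cluster of near-identical
    inputs: a new pair is kept only if its input is farther than `radius`
    from every representative kept so far."""
    reps_X, reps_Y = [], []
    for x, y in zip(X, Y):
        if all(np.linalg.norm(x - r) > radius for r in reps_X):
            reps_X.append(x)
            reps_Y.append(y)
    return np.array(reps_X), np.array(reps_Y)
```

Training then proceeds on the reduced set; the speed-up comes from presenting each distinct region of the input space once per epoch instead of many times.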

Relevance: 20.00%

Publisher:

Abstract:

A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations controlled from centralised locations. In modern power networks, the trend is for generated power to be produced by a diverse array of energy sources spread over a large geographical area, so controlling these systems from a centralised controller is impractical. Future power networks will therefore be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term for this combination of power systems, artificial intelligence and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, in particular on iterative distributed Model Predictive Control (MPC). A novel convergence and stability proof is derived for iterative distributed MPC based on the Alternating Direction Method of Multipliers (ADMM). The performance of distributed MPC, centralised MPC and an optimised PID controller is then compared on a highly interconnected, nonlinear MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
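The coordination pattern behind ADMM-based iterative distributed control can be sketched on a toy consensus problem: each agent holds a local quadratic cost and the agents agree on a shared variable through repeated local solves, averaging, and dual (price) updates. The costs and parameters are invented for illustration; this is the generic consensus ADMM iteration, not the thesis' controller:

```python
import numpy as np

def admm_consensus(a, rho=1.0, iters=100):
    """Consensus ADMM sketch: N agents with local costs f_i(x) = (x - a_i)^2
    iteratively agree on a shared scalar z, which converges to mean(a)."""
    a = np.asarray(a, float)
    x = np.zeros_like(a)   # local copies, one per agent
    u = np.zeros_like(a)   # scaled dual variables (coordination prices)
    z = 0.0                # consensus variable
    for _ in range(iters):
        # Local minimizations (can run in parallel on each controller):
        # argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        z = np.mean(x + u)     # coordination / averaging step
        u = u + x - z          # dual update
    return z
```

Only x_i + u_i is exchanged each round, which is why the communication overhead per iteration, and hence the tuning trade-off mentioned above, matters in a distributed deployment.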

Relevance: 20.00%

Publisher:

Abstract:

There is much common ground between coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms for the decoding of Reed-Solomon codes and for scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are defined for multivariable polynomial rings and for free modules over polynomial rings. These orders are not, in general, unique, which adds considerably to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step; iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
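Term orders and reduction modulo a Gröbner basis can be illustrated with SymPy's generic routine (a Buchberger-style computation, not the thesis' congruence-based algorithm; the ideal and the choice of lex order are arbitrary examples):

```python
from sympy import groebner, reduced, symbols

x, y = symbols('x y')

# Two generators of an ideal in Q[x, y]. The lex order with x > y is one
# of many admissible term orders; choosing a different order generally
# yields a different basis.
F = [x**2 + y**2 - 1, x - y]
G = groebner(F, x, y, order='lex')

# Ideal membership via reduction: any element of the ideal reduces to
# remainder 0 modulo a Groebner basis, regardless of generator order.
_, r = reduced(x**2 + y**2 - 1, list(G.exprs), x, y, order='lex')
```

The remainder being zero is exactly the property that makes Gröbner bases usable as a normal-form tool for solving key equations and interpolation problems.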

Relevance: 20.00%

Publisher:

Abstract:

Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels; they are used ubiquitously in communication, data storage and elsewhere. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword; in the late 1950s, however, researchers proposed a relaxed error-correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both the algorithmic and the architectural standpoint. The codes considered are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. Implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding; the proposed architecture is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, and from them the dimension and a bound on the minimum distance are computed.
The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) in multihop wireless sensor network (WSN) applications.
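Evaluation-based RS encoding, the idea behind the encoder discussed above, can be sketched over a small prime field. GF(13) and the [n, k] parameters are purely illustrative (prime fields keep the arithmetic to plain modular operations; the thesis targets hardware-friendly codes):

```python
# Reed-Solomon encoding by polynomial evaluation over GF(13).
P = 13  # field size (prime, so field arithmetic is just mod P)

def rs_encode(msg, n):
    """Encode k message symbols (coefficients of a degree-<k polynomial,
    msg[0] = constant term) as its evaluations at the points 0..n-1,
    giving an [n, k] RS codeword."""
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):   # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [poly_eval(msg, x) for x in range(n)]
```

Because any k distinct evaluations determine a degree-<k polynomial by interpolation, the code has minimum distance n - k + 1, which is what decoders (classical or list) exploit.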

Relevance: 20.00%

Publisher:

Abstract:

Modern neuroscience relies heavily on sophisticated tools that allow us to visualize and manipulate cells with precise spatial and temporal control. Transgenic mouse models, for example, can be used to manipulate cellular activity in order to draw conclusions about the molecular events responsible for the development, maintenance and refinement of healthy and/or diseased neuronal circuits. Although it is fairly well established that circuits respond to activity-dependent competition between neurons, we have yet to understand either the mechanisms underlying these events or the higher-order plasticity that synchronizes entire circuits. In this thesis we aimed to develop and characterize transgenic mouse models that can be used to directly address these outstanding biological questions in different ways. We present SLICK-H, a Cre-expressing mouse line that can achieve drug-inducible, widespread, neuron-specific manipulations in vivo. This model is a clear improvement over existing models because of its particularly strong, widespread, and even distribution pattern that can be tightly controlled in the absence of drug induction. We also present SLICK-V::Ptox, a mouse line that, through expression of the tetanus toxin light chain, allows long-term inhibition of neurotransmission in a small subset (<1%) of fluorescently labeled pyramidal cells. This model, which can be used to study how a silenced cell performs in a wildtype environment, greatly facilitates the in vivo study of activity-dependent competition in the mammalian brain. As an initial application we used this model to show that tetanus toxin-expressing CA1 neurons experience a 15% - 19% decrease in apical dendritic spine density. Finally, we also describe the attempt to create additional Cre-driven mouse lines that would allow conditional alteration of neuronal activity either by hyperpolarization or inhibition of neurotransmission. 
Overall, the models characterized in this thesis expand upon the wealth of tools available that aim to dissect neuronal circuitry by genetically manipulating neurons in vivo.