927 results for "Industrial automation techniques"
Abstract:
Draglines are used extensively for overburden stripping in Australian open-cut coal mines. This paper outlines the design of a computer control system to implement an automated swing cycle on a production dragline. Subsystems and sensors have been developed to satisfy the constraints imposed by the task, the harsh operating environment and the mine's production requirements.
Abstract:
Human activities such as fossil fuel combustion, biomass burning, and industrial and agricultural operations emit large amounts of particulates into the atmosphere. As a consequence, the air we inhale contains a significant amount of suspended particles, including organic and inorganic solids and liquids as well as various microorganisms, which are responsible for a number of pulmonary diseases. Developing a numerical model for the transport and deposition of foreign particles in a realistic lung geometry is very challenging due to the complex geometrical structure of the human lung. In this study, we numerically investigate airborne particle transport and deposition on the human lung surface. To obtain appropriate results, we generated a realistic lung geometry from a CT scan obtained from a local hospital. For a more accurate approach, we also created a mucus layer inside the geometry, adjacent to the lung surface, and assigned the appropriate mucus layer properties to the wall surface. The Lagrangian particle tracking technique is employed in the ANSYS FLUENT solver to simulate steady-state inspiratory flow. Various injection techniques are used to release the foreign particles through the inlet of the geometry. To investigate the effect of particle size on deposition, numerical calculations are carried out for particle sizes ranging from 1 to 10 microns. The numerical results show that the particle deposition pattern depends strongly on the particle's initial position and that, in the realistic geometry, most particles deposit on the rough wall surface of the lung rather than in the carinal region.
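The core mechanism described above can be sketched in a few lines: a toy Lagrangian tracker with Stokes drag plus gravity in a straight 2-D channel carrying a parabolic air flow. All values here (channel size, flow speed, particle density) are illustrative assumptions, not the CT-based lung geometry or the ANSYS FLUENT setup used in the study.

```python
import math

RHO_P = 1000.0   # particle density, kg/m^3 (assumed, ~water)
MU = 1.8e-5      # dynamic viscosity of air, Pa*s
H = 1e-3         # channel half-height, m (assumed)
U_MAX = 1.0      # centreline air speed, m/s (assumed)
G = -9.81        # gravity, m/s^2

def track(diameter, y0=0.0, dt=1e-3, t_max=0.5):
    """Track one particle; returns ('deposited'|'escaped', travel distance)."""
    tau = RHO_P * diameter**2 / (18.0 * MU)   # Stokes relaxation time
    v_settle = G * tau                        # terminal settling velocity
    decay = math.exp(-dt / tau)               # exact drag relaxation per step
    x, y, vx, vy = 0.0, y0, 0.0, 0.0
    for _ in range(int(t_max / dt)):
        ux = U_MAX * (1.0 - (y / H) ** 2)     # local parabolic air velocity
        vx = ux + (vx - ux) * decay
        vy = v_settle + (vy - v_settle) * decay
        x += vx * dt
        y += vy * dt
        if abs(y) >= H:                       # wall contact -> deposition
            return "deposited", x
    return "escaped", x

res10 = track(10e-6)   # 10 micron particle
res1 = track(1e-6)     # 1 micron particle
```

In this toy setting the 10-micron particle settles onto the wall inside the channel while the 1-micron particle is carried through, echoing the size dependence of deposition that the study reports.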
Abstract:
A novel and economical experimental technique has been developed to assess industrial aerosol deposition in various idealized porous channel configurations. This examination of aerosol penetration in porous channels will help engineers optimize designs for various engineering applications. Deposition patterns differ with porosity due to the geometric configuration of the channel and the superficial inlet velocity. Interestingly, two configurations of similar porosity are found to exhibit significantly higher deposition fractions than the others. Inertial impaction is pronounced at the leading edge of all obstacles, whereas particle build-up is observed at the trailing edge of the obstructions. A qualitative analysis shows that the numerical results are in good agreement with the experimental results.
Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during machine start-up and shutdown, and the amount is so small that it is difficult to measure accurately. Among the various wear measuring techniques that have been used, out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy, and the method proved to be reliable as well as inexpensive. The paper aims to discuss these issues.
Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The original test durations were very long, yet the wear quantities achieved were very small. To minimise test duration, short tests of about 90 min were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method, and it was further refined by enlarging the out-of-roundness traces on a photocopier. The method proved to be reliable and inexpensive.
Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, are not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings operating in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, the weight loss in bearings was calculated in a repeatable and reliable way.
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. The method can be very useful for reducing test duration and measuring wear with higher precision in situations where wear quantities are very small.
Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used to measure the change in out-of-roundness due to bearing wear shows high potential for use as a wear measuring device as well. Measuring weight loss from the traces is an enhanced capability of this equipment, and this research may lead to a modified version of Talyrond-type equipment for wear measurement in circular machine components.
Originality/value – Wear measurement in hydrodynamic bearings normally requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is a geometrical parameter that changes as wear progresses in circular components, and it is therefore an effective wear measuring parameter that relates directly to the change in geometry. The method of increasing sensitivity by enlarging the out-of-roundness traces is original work through which the area of the worn cross-section can be determined and, for materials of known density, the weight loss derived with higher precision.
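The trace-to-weight-loss idea can be illustrated numerically: given polar radius traces of a bearing bore recorded before and after a wear test (as a Talyrond-type roundness tester produces), integrate the worn cross-sectional area and convert it to mass via the material density. The bore size, wear depth and density below are assumed example values, not data from the paper.

```python
import numpy as np

def trapezoid(theta, f):
    """Trapezoidal integral of f(theta) over theta."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta)))

def worn_mass(theta, r_before, r_after, length, density):
    """Mass loss (kg) = worn polar cross-section area x bearing length x density."""
    dr = np.clip(r_after - r_before, 0.0, None)   # wear can only enlarge the bore
    area = 0.5 * trapezoid(theta, (r_before + dr) ** 2 - r_before ** 2)
    return area * length * density

theta = np.linspace(0.0, 2.0 * np.pi, 3601)
r0 = np.full_like(theta, 0.025)                                 # 25 mm nominal bore
wear = np.where(np.abs(theta - np.pi) < np.pi / 6, 2e-6, 0.0)   # 2 um deep, 60 deg arc
mass_kg = worn_mass(theta, r0, r0 + wear, length=0.03, density=8800.0)
```

For this example patch the computed loss is roughly 14 mg, far below what weighing the whole bearing can resolve, which is exactly the motivation for working from the trace geometry instead.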
Abstract:
Mixed integer programming and parallel-machine job shop scheduling techniques are used to solve the sugarcane rail transport scheduling problem. Constructive heuristics and metaheuristics were developed to produce a more efficient scheduling system and so reduce operating costs. The solutions were tested on both small and large problem instances. New hybrid techniques, which integrate simulated annealing and Tabu search in different ways, produce high-quality solutions with improved CPU times.
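As a flavour of the metaheuristic ingredient, here is a minimal simulated-annealing sketch on a toy single-machine sequencing problem (minimising total weighted completion time). It shows the annealing loop only; the paper's hybrids also integrate Tabu search and address a far richer rail-scheduling model, and all instance data below are invented.

```python
import math
import random

def cost(seq, proc, weight):
    """Total weighted completion time of a job sequence."""
    t, total = 0.0, 0.0
    for job in seq:
        t += proc[job]
        total += weight[job] * t
    return total

def anneal(proc, weight, temp=100.0, cooling=0.995, iters=20000, seed=0):
    """Simulated annealing over the swap neighbourhood of job sequences."""
    rng = random.Random(seed)
    seq = list(range(len(proc)))
    best = cur = cost(seq, proc, weight)
    best_seq = seq[:]
    for _ in range(iters):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]           # propose a swap move
        new = cost(seq, proc, weight)
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new                             # accept (always if improving)
            if new < best:
                best, best_seq = new, seq[:]
        else:
            seq[i], seq[j] = seq[j], seq[i]       # reject: undo the swap
        temp *= cooling                           # geometric cooling schedule
    return best, best_seq

best, best_seq = anneal([3, 1, 4, 1, 5, 9, 2, 6], [2, 7, 1, 8, 2, 3, 6, 5])
```

On this toy instance the optimum (270, reachable by the weighted-shortest-processing-time rule) is found, since any sequence with no improving swap must already be in that order.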
Abstract:
Phosphorus has a number of indispensable biochemical roles, but its limited natural deposition, the low solubility of phosphates and their rapid transformation to insoluble forms commonly make the element the growth-limiting nutrient, particularly in aquatic ecosystems. Phosphorus that reaches water bodies is, famously, the main cause of eutrophication, an undesirable process that can severely affect aquatic biota around the world. Many management practices have been proposed, but long-term monitoring of phosphorus levels is necessary to ensure that eutrophication does not occur. Passive sampling techniques, which have been developed over the last few decades, offer several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, flow-proportional load estimates and representative average concentrations of phosphorus in the environment. Although some types of passive samplers are commercially available, their use is still scarcely reported in the literature, and in Japan there is limited application of passive sampling to phosphorus monitoring, even in agricultural environments. This paper introduces the relatively new P-sampling techniques and their potential for use in environmental monitoring studies.
Abstract:
As there is a myriad of micro organic pollutants that can affect the well-being of humans and other organisms in the environment, the need for an effective monitoring tool is evident. Passive sampling techniques, which have been developed over the last few decades, offer several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, time-integrated load estimates and representative average concentrations of pollutants in the environment. These techniques have been applied to monitor many pollutants arising from agricultural activities, e.g. residues of pesticides and veterinary drugs. Several types of passive samplers are commercially available and their use is widely accepted. However, few applications of these techniques have been reported in Japan, especially in the field of the agricultural environment. This paper aims to introduce the field of passive sampling and then to describe some applications of passive sampling techniques in environmental monitoring studies related to the agriculture industry.
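The core quantity an integrative passive sampler delivers is a time-weighted average (TWA) concentration over the whole deployment, computed from the analyte mass accumulated on the sorbent, the device's sampling rate Rs, and the deployment time. The numbers below are invented for illustration; real Rs values are compound- and device-specific calibration results.

```python
def twa_concentration(mass_ng, sampling_rate_l_per_day, days):
    """C_TWA = M / (Rs * t), valid while uptake stays in its linear phase.

    mass_ng: analyte mass accumulated on the sorbent (ng)
    sampling_rate_l_per_day: calibrated device sampling rate Rs (L/day)
    days: deployment time (days); result is in ng/L.
    """
    return mass_ng / (sampling_rate_l_per_day * days)

# e.g. 42 ng of a pesticide accumulated over a 14-day deployment at Rs = 0.2 L/day
c_twa = twa_concentration(42.0, 0.2, 14.0)   # ng per litre of water, averaged
```

This averaging over the whole deployment is what gives passive samplers the "time-integrated" advantage over grab samples, which only capture the instant of collection.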
Abstract:
This thesis presents the development of a rapid, sensitive and reproducible spectroscopic method for the detection of TNT in forensic and environmental applications. Simple nanosensors prepared by cost-effective methods were utilized as sensitive platforms for the detection of TNT by surface-enhanced Raman spectroscopy. The optimization of the substrate and the careful selection of a suitable recognition molecule contributed significant improvements in sensitivity and selectivity over current detection methods. The work presented in this thesis paves the way for effective detection and monitoring of explosive residues in law enforcement and environmental health applications.
Abstract:
Frogs have received increasing attention due to their effectiveness as indicators of environmental change, so it is important to monitor and assess frog populations. With the development of sensor techniques, large volumes of audio data (including frog calls) have been collected and need to be analysed. After transforming the audio data into a spectrogram representation using the short-time Fourier transform, visual inspection of this representation motivates the use of image processing techniques for analysing audio data. An acoustic event detection (AED) method is first applied to the spectrograms to detect acoustic events, from which ridges are then extracted. Three feature sets, Mel-frequency cepstral coefficients (MFCCs), the AED feature set and the ridge feature set, are then used for frog call classification with a support vector machine classifier. Fifteen frog species widespread in Queensland, Australia, are selected to evaluate the proposed method. The experimental results show that the ridge feature set achieves an average classification accuracy of 74.73%, outperforming the MFCCs (38.99%) and the AED feature set (67.78%).
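The front end of such a pipeline can be sketched briefly: a short-time Fourier transform turns audio into a spectrogram, and a crude "ridge-like" summary tracks the dominant frequency per frame. The study's full pipeline (AED, MFCCs, SVM classification) is not reproduced here, and the two synthetic "calls" below are invented stand-ins for real frog recordings.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram, shape (frames, n_fft // 2 + 1)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def ridge_feature(signal, sr, n_fft=256, hop=128):
    """Mean dominant frequency (Hz) across frames: a 1-D ridge-like summary."""
    peak_bins = spectrogram(signal, n_fft, hop).argmax(axis=1)
    return peak_bins.mean() * sr / n_fft

sr = 8000
t = np.arange(0, 0.5, 1 / sr)
call_a = np.sin(2 * np.pi * 600 * t)    # synthetic low-pitched call
call_b = np.sin(2 * np.pi * 2400 * t)   # synthetic high-pitched call
feat_a = ridge_feature(call_a, sr)
feat_b = ridge_feature(call_b, sr)
```

Even this one-number summary separates the two synthetic calls cleanly; the paper's ridge features follow the same idea but trace frequency contours through detected acoustic events rather than whole frames.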
Abstract:
A pulsewidth modulation (PWM) technique is proposed for minimizing the rms torque ripple in inverter-fed induction motor drives subject to a given average switching frequency of the inverter. The proposed PWM technique is a combination of optimal continuous modulation and discontinuous modulation. The proposed technique is evaluated both theoretically and experimentally, and is compared with well-known PWM techniques. It is shown that the proposed method reduces the rms torque ripple by about 30% at the rated speed of the motor drive, compared with conventional space vector PWM.
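For context, the baseline being compared against can be sketched: conventional space vector PWM is equivalent to sinusoidal phase references plus a common-mode (zero-sequence) injection that centres them between the DC rails. This reproduces only that standard reference generation, not the paper's hybrid ripple-optimised scheme.

```python
import math

def svpwm_refs(m, theta):
    """Per-phase duty references in [0, 1] for modulation index m at angle theta.

    Centring via the min/max zero-sequence term extends the linear range to
    m = 2/sqrt(3) ~ 1.15, the classic advantage of space vector PWM over
    plain sinusoidal PWM.
    """
    a = m * math.cos(theta)
    b = m * math.cos(theta - 2 * math.pi / 3)
    c = m * math.cos(theta + 2 * math.pi / 3)
    zero_seq = -(max(a, b, c) + min(a, b, c)) / 2   # centres the references
    return tuple(0.5 + 0.5 * (x + zero_seq) for x in (a, b, c))
```

Discontinuous PWM variants replace this centring term with one that clamps one phase to a rail over part of the cycle, and the choice between the two families per operating point is the degree of freedom a hybrid scheme can exploit.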
Abstract:
Eleanor Smith [pseudonym], teacher: I was talking to the kids about MacDonalds – I forget exactly what the context was – I said ‘‘ah, the Americans call them French fries, and, you know, MacDonalds is an American chain and they call them French fries because the Americans call them French fries’’, and this little Australian kid in the front row, very Australian child, said to me, ‘‘I call them French fries!’’ . . . Um, a fourth grade boy whom I taught in 1993 at this school, the world basketball championships were on . . . Americans were playing their dream machine and the Boomers were up against them . . . and, ah, this boy was very interested in basketball . . . but it’s not in my blood, not in the way cricket is for example . . . Um, Um, and I said to this fellow, ‘‘um, well’’, I said, ‘‘Australia’s up against Dream Machine tomorrow’’. He [Jason, pseudonym] said, ‘‘Ah, you know, Boomers probably won’t win’’. . . . I said, ‘‘Well that’s sport, mate’’. I said, ‘‘You never know in sport. Australia might win’’. And he looked at me and he said, ‘‘I’m not going for Australia, I’m going for America’’. This is from an Australian boy! And I thought so strong is the hype, so strong is the, is the, power of the media, etc., that this boy is not [pause], I can’t tell you how outraged I was. Here’s me as an Australian and I don’t even support basketball, it’s not even my sport, um, but that he would respond like that because of the power of the American machine that’s converting kids’ minds, the way they think, where they’re putting their loyalties, etc. I was just appalled, but that’s where he was. And when I asked kids for their favourite place, he said Los Angeles.
Abstract:
The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., the queue weight (w_q), the maximum marking probability (max_p), the minimum threshold (min_th) and the maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known; only noisy samples of these are available for different parameter values. Thus, to obtain the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
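The control law whose parameters are being tuned is the standard RED rule: an exponentially weighted moving average (EWMA) of the queue length, and a drop/mark probability that rises linearly between min_th and max_th. This sketch shows that baseline rule only; the parameter values in the example calls are illustrative, not the optimised ones the paper searches for.

```python
def update_avg(avg_q, q, w_q):
    """EWMA of the instantaneous queue length q with queue weight w_q."""
    return (1.0 - w_q) * avg_q + w_q * q

def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Linear RED marking/dropping probability for the current average queue."""
    if avg_q < min_th:
        return 0.0          # below min_th: never drop
    if avg_q >= max_th:
        return 1.0          # above max_th: drop everything
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

The difficulty the paper addresses is visible even here: performance depends jointly and nonlinearly on (w_q, max_p, min_th, max_th), with no closed form linking them to queueing delay or loss, which is why noisy simulation samples and SPSA-based gradient estimates are used instead.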
Abstract:
Remote sensing provides a lucid and effective means for crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of the crops cultivated in a particular area. This information has immense potential in planning further cultivation activities and in the optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Image classification forms the core of the solution to the crop coverage identification problem, yet no single classifier satisfactorily solves all the basic crop cover mapping problems of a cultivated region. In this paper we present experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region, with a detailed comparison between swarm-intelligence algorithms and a conventional statistical method: the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO). High-resolution satellite imagery has been used for the experiments.
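The statistical baseline can be sketched concisely: in remote sensing, MLC denotes a per-class Gaussian maximum likelihood classifier fitted from labelled pixels. The spectral values below are synthetic stand-ins for real image bands, and the swarm-based methods (PSO, ACO) are not reproduced here.

```python
import numpy as np

class GaussianMLC:
    """Gaussian maximum likelihood classifier over pixel feature vectors."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = {}
        for c in self.classes:
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])   # regularised
            self.params[c] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            mean, inv_cov, logdet = self.params[c]
            d = X - mean
            # log-likelihood up to a constant: -0.5 * (logdet + d' Sigma^-1 d)
            scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv_cov, d)))
        return self.classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.2, 0.6], 0.05, size=(60, 2)),    # synthetic "crop A" pixels
               rng.normal([0.7, 0.3], 0.05, size=(90, 2))])   # synthetic "crop B" pixels
y = np.array([0] * 60 + [1] * 90)
pred = GaussianMLC().fit(X, y).predict(X)
```

With well-separated spectral signatures the MLC is hard to beat, which is why it serves as the conventional benchmark against which the swarm-inspired classifiers are compared.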
Abstract:
The purpose of this article is to show the applicability and benefits of design-of-experiments techniques as an optimization tool for discrete simulation models. Simulated systems are computational representations of real-life systems whose state evolves with the occurrence of discrete events over time. In this study, a production system designed under the JIT (Just in Time) business philosophy, which seeks to achieve excellence in organizations through waste reduction in all operational aspects, is used. The most typical tool of JIT systems is KANBAN production control, which seeks to synchronize demand with the flow of materials, minimize work in process, and define production metrics. Using experimental design techniques for stochastic optimization, the impact of the operational factors on the efficiency of the KANBAN/CONWIP simulation model is analyzed. The results show the effectiveness of integrating experimental design techniques and discrete simulation models in the calculation of operational parameters. Furthermore, the reliability of the resulting methodology was improved with an additional statistical consideration.
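A toy version of the approach looks like this: run a 2^2 full factorial design against a small stochastic kanban simulation and estimate each factor's main effect on throughput. The single-stage lost-sales kanban loop and all parameter values are invented stand-ins for the study's KANBAN/CONWIP model.

```python
import heapq
import itertools
import random
import statistics

def kanban_sim(cards, mean_proc, horizon=2000.0, seed=1):
    """Throughput (sales per unit time) of a lost-sales kanban loop."""
    rng = random.Random(seed)
    stock, waiting, served = cards, 0, 0    # finished goods, queued orders, sales
    machine_free = True
    events = [(rng.expovariate(1.0), "demand")]        # unit-rate Poisson demand
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "demand":
            heapq.heappush(events, (t + rng.expovariate(1.0), "demand"))
            if stock > 0:                # a sale frees a card -> replenish order
                stock -= 1
                served += 1
                waiting += 1             # demand with no stock is lost
        else:                            # replenishment finished
            stock += 1
            machine_free = True
        if machine_free and waiting > 0:                # start the next order
            waiting -= 1
            machine_free = False
            heapq.heappush(events, (t + rng.expovariate(1.0 / mean_proc), "done"))
    return served / horizon

def main_effects(design, responses):
    """Main effect per factor: mean response at level +1 minus mean at -1."""
    return [statistics.mean(r for row, r in zip(design, responses) if row[f] == +1)
            - statistics.mean(r for row, r in zip(design, responses) if row[f] == -1)
            for f in range(len(design[0]))]

levels = {"cards": {-1: 1, +1: 5}, "speed": {-1: 1.2, +1: 0.8}}  # +1 = "better"
design = list(itertools.product([-1, +1], repeat=2))
responses = [kanban_sim(levels["cards"][a], levels["speed"][b]) for a, b in design]
effects = main_effects(design, responses)   # [cards effect, machine-speed effect]
```

Reusing the same random seed across design points is a deliberate common-random-numbers choice, a standard variance-reduction device when experimental designs drive stochastic simulations.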
Abstract:
The development of techniques for scaling up classifiers so that they can be applied to problems with large datasets of training examples is one of the objectives of data mining. Recently, AdaBoost has become popular in the machine learning community thanks to its promising results across a variety of applications. However, training AdaBoost on large datasets is a major problem, especially when the dimensionality of the data is very high. This paper discusses the effect of high dimensionality on the training process of AdaBoost. Two preprocessing options for reducing dimensionality, namely principal component analysis and random projection, are briefly examined. Random projection subject to a probabilistic length-preserving transformation is explored further as a computationally light preprocessing step. The experimental results obtained demonstrate the effectiveness of the proposed training process for handling high-dimensional large datasets.
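The preprocessing step alone can be sketched in a few lines: a Gaussian random projection maps d-dimensional points to k dimensions while approximately preserving lengths (the Johnson-Lindenstrauss property the abstract alludes to). The boosting stage is not reproduced; any AdaBoost implementation could then be trained on the projected features. The data sizes below are arbitrary illustrative choices.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X from d to k dimensions, roughly preserving norms."""
    rng = np.random.default_rng(seed)
    # Scaling by 1/sqrt(k) makes E[||x @ R||^2] = ||x||^2 for each row x.
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5000))    # 100 points in 5000 dimensions
Z = random_projection(X, 500)           # reduced to 500 dimensions
ratio = np.linalg.norm(Z, axis=1) / np.linalg.norm(X, axis=1)
```

The appeal over PCA for this setting is computational: the projection matrix is data-independent and costs a single matrix multiply, with no covariance eigendecomposition over the full high-dimensional dataset.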