16 results for Roadway live load model
Abstract:
The present study was carried out on two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electromagnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described in the following. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables in order to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control: changing system parameters while keeping the same gains can cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). Here, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The system states needed by the controller are estimated using the Kalman filter. The proposed controller is implemented and tested in a linear motor test drive and the responses are presented.
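To illustrate the identification step, the following is a minimal sketch of DE-based parameter fitting for a servo model; the second-order model structure, the parameter bounds and the synthetic data are illustrative assumptions, not the actual model of the thesis:

    import numpy as np
    from scipy.optimize import differential_evolution

    # Assumed second-order servo model: wn (natural frequency), zeta (damping), K (gain).
    def simulate(params, t, u):
        wn, zeta, K = params
        dt = t[1] - t[0]
        x = v = 0.0
        out = np.empty_like(t)
        for i, ui in enumerate(u):
            a = K * wn**2 * ui - 2.0 * zeta * wn * v - wn**2 * x
            v += a * dt            # semi-implicit Euler keeps the sketch stable
            x += v * dt
            out[i] = x
        return out

    # Synthetic "measured" step response standing in for real test-rig data.
    t = np.linspace(0.0, 2.0, 400)
    u = np.ones_like(t)
    rng = np.random.default_rng(1)
    y_meas = simulate((12.0, 0.15, 1.0), t, u) + 0.01 * rng.standard_normal(t.size)

    def cost(params):
        # Mean squared error between simulated and measured responses.
        return np.mean((simulate(params, t, u) - y_meas) ** 2)

    bounds = [(1.0, 50.0), (0.01, 2.0), (0.1, 10.0)]  # boundary limits on the variables
    result = differential_evolution(cost, bounds, seed=1)
    print(result.x)  # identified (wn, zeta, K)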
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required for bringing a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its own problems, however. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning of the problem, the distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. The use of this computing power for distributed simulation requires the simulation to adapt to a changing load situation, which means that all or part of the simulation work must be removed from a workstation when the owner wishes to use it again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches to load balancing are discussed.
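The migration decision at the heart of such load balancing can be sketched as a simple threshold policy; the load metric, the threshold and the helper functions below are hypothetical illustrations, not Diworse's actual interface:

    from dataclasses import dataclass

    @dataclass
    class Workstation:
        name: str
        external_load: float       # 0..1 CPU load from the owner's other work
        owner_active: bool = False

    def should_migrate(host: Workstation, load_threshold: float = 0.5) -> bool:
        """Hypothetical policy: leave when the owner returns or the host is busy."""
        return host.owner_active or host.external_load > load_threshold

    def choose_target(workstations):
        """Pick the least-loaded workstation whose owner is away, if any."""
        idle = [w for w in workstations if not w.owner_active]
        return min(idle, key=lambda w: w.external_load) if idle else None

    hosts = [Workstation("ws1", 0.9, owner_active=True), Workstation("ws2", 0.1)]
    if should_migrate(hosts[0]):
        print("migrate partition to", choose_target(hosts).name)  # ws2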
Abstract:
Industry's growing need for higher productivity is placing new demands on mechanisms connected to electric motors, because these can easily lead to vibration problems due to fast dynamics. Furthermore, the nonlinear effects caused by a motor frequently reduce servo stability, which diminishes the controller's ability to predict and maintain speed. Hence, the flexibility of a mechanism and its control has become an important area of research. The basic approach in control system engineering is to assume that the mechanism connected to a motor is rigid, so that vibrations in the tool mechanism, reel, gripper or any other apparatus connected to the motor are not taken into account. This may reduce the ability of the machine system to carry out its assignment and shorten the lifetime of the equipment. Nonetheless, it is usually more important to know how the mechanism, in other words the load on the motor, behaves. A nonlinear load control method for a permanent magnet linear synchronous motor is developed and implemented in this thesis. The purpose of the controller is to track the flexible load to the desired velocity reference as fast as possible and without awkward oscillations. The control method is based on an adaptive backstepping algorithm, with its stability ensured by the Lyapunov stability theorem. As a reference controller for the backstepping method, a hybrid neural controller is introduced, in which the linear motor itself is controlled by a conventional PI velocity controller and the vibration of the associated flexible mechanism is suppressed in an outer control loop using a compensation signal from a multilayer perceptron network. To avoid the local minimum problem inherent in neural networks, the initial weights are searched for offline by means of a differential evolution algorithm. The states of the mechanical system needed by the controllers are estimated using the Kalman filter. The theoretical results obtained from the control design are validated with a lumped mass model of the mechanism. Generalization of the mechanism allows the methods derived here to be widely applied in machine automation. The control algorithms are first designed in a specially introduced nonlinear simulation model and then implemented in the physical linear motor using a DSP (Digital Signal Processor) application. The measurements prove that both controllers are capable of suppressing vibration, but that the backstepping method is superior due to its response accuracy and stability properties.
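The state estimation step can be illustrated with a generic discrete-time Kalman filter for a lumped two-mass model; the model matrices below are discretized with a simple Euler step and use placeholder stiffness, damping and noise values rather than the parameters identified in the thesis:

    import numpy as np

    # Lumped two-mass model states: [x_motor, v_motor, x_load, v_load].
    dt, k, c, m1, m2 = 1e-3, 8000.0, 5.0, 2.0, 1.0   # placeholder parameters
    A = np.eye(4) + dt * np.array([
        [0.0,    1.0,    0.0,   0.0],
        [-k/m1, -c/m1,  k/m1,  c/m1],
        [0.0,    0.0,    0.0,   1.0],
        [k/m2,   c/m2, -k/m2, -c/m2],
    ])
    C = np.array([[1.0, 0.0, 0.0, 0.0]])  # only the motor position is measured
    Q = 1e-6 * np.eye(4)                  # process noise covariance (placeholder)
    R = np.array([[1e-4]])                # measurement noise covariance (placeholder)

    def kalman_step(x, P, z):
        # Predict.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with measurement z.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(4) - K @ C) @ P
        return x, P

    x, P = np.zeros(4), np.eye(4)
    x, P = kalman_step(x, P, np.array([0.001]))  # one filter step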
Abstract:
COD discharges from processes have increased in line with rising brightness demands for mechanical pulp and paper. The share of lignin-like substances in COD discharges is on average 75%. In this thesis, a plant dynamic model was created and validated as a means to predict COD loading and discharges from a mill. The assays were carried out in a paper mill integrate producing mechanical printing papers. The objective of the plant dynamics modeling was to predict the day averages of COD load and discharges from the mill. Online data, such as 1) the levels of the large storage towers for pulp and white water, 2) pulp dosages, 3) production rates and 4) internal white water flows and discharges, were used to create transients in the balances of solids and white water, referred to as "plant dynamics". A conversion coefficient between TOC and COD was verified and used to predict the COD flows to the waste water treatment plant from the measured TOC. The COD load was modeled with an uncertainty similar to that of the reference TOC sampling. The water balance of the waste water treatment was validated against the reference COD concentration. The deviation of the COD predictions from the references was within that of the TOC predictions. The modeled yield losses and retention values of TOC in the pulping and bleaching processes, and the modeled fixing of colloidal TOC to solids between the pulping plant and the aeration basin of the waste water treatment plant, were similar to the references presented in the literature. The valid water balances of the waste water treatment plant and the reduction model for lignin-like substances produced a valid prediction of COD discharges from the mill. A 30% increase in the release of lignin-like substances was observed in the pulping and bleaching processes in connection with production problems, and the same increase was observed in the COD discharges from the waste water treatment. In the prediction of the annual COD discharge, it was noticed that the reduction of lignin varies widely from year to year and from one mill to another. This made it difficult to compare the COD discharge parameters validated in the plant dynamic simulation with those of another mill producing mechanical printing papers. However, the trend in COD discharges when moving from unbleached towards high-brightness TMP was valid.
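The TOC-to-COD conversion amounts to a linear scaling inside the daily mass balance; the following sketch illustrates the arithmetic with a placeholder coefficient, as the verified mill-specific coefficient is not quoted in the abstract:

    # Day-average COD load predicted from measured TOC (placeholder coefficient;
    # the thesis verifies a mill-specific value not quoted in the abstract).
    K_COD_PER_TOC = 3.0  # assumed kg COD per kg TOC, for illustration only

    def cod_load_kg_per_day(flow_m3_per_day: float, toc_g_per_m3: float) -> float:
        toc_load = flow_m3_per_day * toc_g_per_m3 / 1000.0  # kg TOC/d
        return K_COD_PER_TOC * toc_load                     # kg COD/d

    print(cod_load_kg_per_day(20000.0, 250.0))  # 15000.0 kg COD/d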
Abstract:
This thesis concentrates on developing a practical local approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of the stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during the numerical iteration, which greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, the consistency of current local failure criteria for ductile fracture has been assessed: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model, so that material failure is a natural result of the development of macroscopic plastic flow and of the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; instead, the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted by the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
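For reference, the Gurson-Tvergaard yield condition that the integration algorithms must satisfy can be stated in its standard literature form (standard notation, not reproduced verbatim from the thesis):

    \Phi = \frac{q^2}{\sigma_y^2} + 2 q_1 f^* \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right) - \left(1 + q_3 (f^*)^2\right) = 0

where q is the von Mises equivalent stress, \sigma_m the hydrostatic (mean) stress, \sigma_y the matrix yield stress, f^* the effective void volume fraction, and q_1, q_2, q_3 Tvergaard's adjustment parameters (commonly q_1 = 1.5, q_2 = 1.0 and q_3 = q_1^2). The generalized mid-point algorithms enforce this condition at an intermediate state between the beginning and the end of each strain increment.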
Abstract:
Comprehensive understanding of the heat transfer processes that take place during circulating fluidized bed (CFB) combustion is one of the most important issues in CFB technology development, as it enables the prediction, evaluation and proper design of the combustion and heat transfer mechanisms. The aim of this thesis is to develop a model of circulating fluidized bed boiler operation. Empirical correlations are used for determining the heat transfer coefficients in each part of the furnace. The proposed model is used in both design and off-design conditions. In the off-design simulations, the effects of fuel moisture content and boiler load on boiler operation were investigated. In the theoretical part of the thesis, the fuel properties of the most typical classes of biomass are reviewed extensively. Various schemes of biomass utilization are presented, especially those concerning circulating fluidized bed boilers. In addition, possible negative effects of biomass usage in boilers are briefly discussed.
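Empirical CFB furnace-wall correlations are typically power laws in the local suspension density; the sketch below shows this general form with placeholder constants, since the abstract does not state which correlations the model uses:

    # Generic power-law form h = a * rho_susp**n often used for CFB furnace
    # walls; 'a' and 'n' are placeholder constants, not the thesis's values.
    def heat_transfer_coefficient(rho_susp_kg_m3: float,
                                  a: float = 40.0, n: float = 0.5) -> float:
        """Suspension-side heat transfer coefficient [W/m2K] (illustrative)."""
        return a * rho_susp_kg_m3 ** n

    for rho in (5.0, 20.0, 80.0):  # lean upper furnace vs. denser lower furnace
        print(rho, round(heat_transfer_coefficient(rho), 1))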
Abstract:
This Master's thesis investigates the performance of the Olkiluoto 1 and 2 APROS model in the case of fast transients. The thesis includes a general description of the Olkiluoto 1 and 2 nuclear power plants and of the most important safety systems. The theoretical background of the APROS code, as well as the scope and content of the Olkiluoto 1 and 2 APROS model, are also described. The event sequences of the anticipated operational transients considered in the thesis are presented in detail, as they form the basis for the analysis of the APROS calculation results. The calculated fast operational transients comprise loss-of-load cases and two cases related to an inadvertent closure of one main steam isolation valve. As part of the thesis work, the inaccurate initial data values found in the original 1-D reactor core model were corrected, and the input data needed for the creation of a more accurate 3-D core model were defined. The analysis of the APROS calculation results showed that although the main results were in good agreement with the measured plant data, differences were also detected. These differences were found to be caused by deficiencies and uncertainties in the calculation model. According to the results, the reactor core and the feedwater systems cause most of the differences between the calculated and measured values. Based on these findings, it will be possible to develop the APROS model further to make it a reliable and accurate tool for the analysis of operational transients and possible plant modifications.
Abstract:
The repair of segmental defects in load-bearing long bones is a challenging task because of the diversity of the loads affecting the area: axial, bending, shearing and torsional forces all come together to test the stability and integrity of the bone. The natural biomechanical requirements for bone restorative materials include strength to withstand heavy loads and adaptivity to conform to a biological environment without disturbing or damaging it. Fiber-reinforced composite (FRC) materials have shown promise, as metals and ceramics have been too rigid and polymers alone lack the strength needed for restoration. The versatility of fiber-reinforced composites also allows tailoring of the composite to meet the multitude of bone properties in the skeleton. The attachment and incorporation of a bone substitute to bone has been advanced by different surface modification methods. Most often this is achieved by creating a surface texture that allows bone growth onto the substitute, creating a mechanical interlocking. Another method is to alter the chemical properties of the surface to create bonding with the bone, for example with a hydroxyapatite (HA) or bioactive glass (BG) coating. A novel fiber-reinforced composite implant material with a porous surface was developed for bone substitution purposes in load-bearing applications. The material's biomechanical properties were tailored with unidirectional fiber reinforcement to match the strength of cortical bone. To advance bone growth onto the material, an optimal surface porosity was created by a dissolution process, and the addition of bioactive glass to the material was explored. The effects of dissolution and of the orientation of the fiber reinforcement were also evaluated for bone-bonding purposes. The biological response to the implant material was evaluated in a cell culture study to assure the safety of the combined materials. To test the material's properties in a clinical setting, an animal model was used: a critical-size bone defect in a rabbit's tibia tested the material in a load-bearing application, with short- and long-term follow-up and a histological evaluation of the incorporation into the host bone. The biomechanical results of the study showed that the material is durable and that the tailoring of its properties can be reproduced reliably. The biological response ex vivo to the created surface structure favours the attachment and growth of bone cells, with the additional benefit of bioactive glass appearing on the surface. No toxic reactions to possible agents leaching from the material could be detected in the cell culture study when compared with a nontoxic control material. The mechanical interlocking was enhanced, as expected, by the porosity, whereas the reinforcing fibers protruding from the surface of the implant gave additional strength when tested in a bone-bonding model. The animal experiments verified that the material is capable of withstanding load-bearing conditions in prolonged use without breaking or creating stress-shielding effects in the host bone. A histological examination verified the enhanced incorporation into the host bone, with an abundance of bone growth onto and over the material, achieved with minimal tissue reactions to a foreign body.
An FRC implant with surface porosity displays potential in the field of reconstructive surgery, especially regarding large bone defects with high demands on strength and shape retention in load-bearing areas, or flat bones such as facial and cranial bones. The benefits of modifying the strength of the material and adjusting its surface properties with fiber reinforcement and bone-bonding additives to meet the requirements of different bone qualities are still to be fully discovered.
Abstract:
In this thesis, a model called CFB3D is validated for oxygen combustion in a circulating fluidized bed boiler. The first part of the work consists of a literature review in which circulating fluidized bed and oxygen combustion technologies are studied. In addition, the modeling of circulating fluidized bed furnaces is discussed, and the currently available industrial-scale three-dimensional furnace models are presented. The main features of the CFB3D model are presented along with the theories and equations related to the model parameters used in this work. The second part of the work consists of the actual research and modeling work, including measurements, model setup and modeling results. The objective of this thesis is to study how well the CFB3D model works with oxygen combustion compared to air combustion in a circulating fluidized bed boiler, and what model parameters need to be adjusted when changing from air to oxygen combustion. The study is performed by modeling two air combustion cases and two oxygen combustion cases with comparable boiler loads. The cases were measured at the Ciuden 30 MWth Flexi-Burn demonstration plant in April 2012. The modeled furnace temperatures match the measurements equally well in the oxygen and air combustion cases, but the modeled gas concentrations differ from the measurements clearly more in the oxygen combustion cases. However, the same model parameters are optimal for both the air and oxygen combustion cases. When the boiler load is changed, some combustion and heat transfer related model parameters need to be adjusted. To improve the accuracy of the modeling results, a better flow dynamics model should be developed in CFB3D. Additionally, more measurements are needed from the lower furnace to find the best model parameters for each case. The validation work needs to be continued in order to improve the modeling results and the model's predictability.
Abstract:
This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse emissions from fossil fuel power production requires rapid action, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools: techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as the sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory-scale unit and a 1.7 MWth pilot-scale unit, and used to design a conceptual 250 MWth industrial-scale unit. Valuable information was gathered on the behaviour of the small laboratory-scale device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial-sized unit, the selection of particle size, and operability in different load scenarios.
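The sorbent chemistry underlying the loop is the reversible carbonation of lime, stated here in its standard textbook form (the values are literature figures, not taken from the thesis):

    \mathrm{CaO(s)} + \mathrm{CO_2(g)} \rightleftharpoons \mathrm{CaCO_3(s)}, \qquad \Delta H^{\circ}_{298} \approx -178\ \mathrm{kJ/mol}

The exothermic forward reaction captures CO2 in the carbonator at roughly 600-700 °C, while the endothermic reverse reaction releases a concentrated CO2 stream in the calciner at roughly 900 °C; the reactor balances solved by the model follow from these reaction enthalpies and the circulating solid flows.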
Abstract:
Eutrophication caused by anthropogenic nutrient pollution has become one of the most severe threats to water bodies. Nutrients enter water bodies from atmospheric precipitation, industrial and domestic wastewaters and surface runoff from agricultural and forest areas. As point pollution has been significantly reduced in developed countries in recent decades, agricultural non-point sources have been increasingly identified as the largest source of nutrient loading in water bodies. In this study, Lake Säkylän Pyhäjärvi and its catchment are studied as an example of a long-term, voluntary, co-operative model of lake and catchment management. Lake Pyhäjärvi is located in the centre of an intensive agricultural area in southwestern Finland. More than 20 professional fishermen operate in the lake area, and the lake is used as a drinking water source and for various recreational activities. Lake Pyhäjärvi is a good example of a large and shallow lake that suffers from eutrophication and is subject to measures to improve this undesired state under changing conditions. Climate change is one of the most important challenges faced by Lake Pyhäjärvi and other water bodies. The results show that climatic variation affects the amounts of runoff and nutrient loading and their timing during the year. The findings from the study area concerning warm winters and their influence on nutrient loading are in accordance with the IPCC scenarios of future climate change. In addition to nutrient reduction measures, the restoration of food chains (biomanipulation) is a key method in water quality management. The food-web structure in Lake Pyhäjärvi has, however, become disturbed due to mild winters, short ice cover and low fish catch. Ice cover that enables winter seining is extremely important to the water quality and ecosystem of Lake Pyhäjärvi, as the vendace stock is one of the key factors affecting the food web and the state of the lake. New methods for the reduction of nutrient loading and the treatment of runoff waters from agriculture, such as sand filters, were tested in field conditions. The results confirm that the filter technique is an applicable method for nutrient reduction, but further development is needed. The ability of sand filters to absorb nutrients can be improved with nutrient-binding compounds, such as lime. Long-term hydrological, chemical and biological research and monitoring data on Lake Pyhäjärvi and its catchment provide a basis for water protection measures and improve our understanding of the complicated physical, chemical and biological interactions between the terrestrial and aquatic realms. In addition to measurements carried out in field conditions, Lake Pyhäjärvi and its catchment were studied using various modelling methods. In the calibration and validation of the models, long-term and wide-ranging time series data proved to be valuable. Collaboration between researchers, modellers and local water managers further improves the reliability and usefulness of models. Lake Pyhäjärvi and its catchment can also be regarded as a good research laboratory from the point of view of the Baltic Sea. The main problem in both is eutrophication caused by excess nutrients, and nutrient loading has to be reduced, especially from agriculture. Mitigation measures are also similar in both cases.
Abstract:
The ability to use the exact coordinates of the pebbles and fuel particles in Monte Carlo reactor physics calculations is an important development step for pebble bed reactor modelling, as it allows the exact modelling of pebble bed reactors with realistic pebble beds, without placing the pebbles in regular lattices. In this study, the multiplication coefficient of the HTR-10 pebble bed reactor is calculated with the Serpent reactor physics code and, using this multiplication coefficient, the number of pebbles required for the critical load of the reactor is determined. The multiplication coefficient is calculated using pebble beds produced with the discrete element method and three different material libraries in order to compare the results. The results obtained are lower than those measured at the experimental reactor, and somewhat lower than those obtained with other codes in earlier studies.
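Estimating the critical load from such calculations typically reduces to interpolating the computed multiplication coefficient across pebble counts to find where it crosses unity; a minimal sketch, with made-up example values rather than thesis results:

    # Interpolate k_eff(N) to estimate the pebble count where k_eff = 1.
    # The (N, k_eff) pairs below are made-up illustrations, not thesis data.
    def critical_pebble_count(n_lo, k_lo, n_hi, k_hi, k_target=1.0):
        frac = (k_target - k_lo) / (k_hi - k_lo)
        return n_lo + frac * (n_hi - n_lo)

    print(round(critical_pebble_count(15000, 0.98, 18000, 1.02)))  # 16500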
Abstract:
The growing population of cities increases the energy demand and affects the environment through increased carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing energy demand and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy with combined heat and power plants. Forecasting the heat load demand of residential buildings assists in optimizing energy production and consumption in a district heating system. However, the large number of factors involved, such as the weather forecast, district heating operational parameters and user behavioural parameters, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model using a Naive Bayes classifier to forecast the hourly heat load demand of three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from sensors installed in the residential buildings in Skellefteå are used to build the Bayesian network that forecasts the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated using four cases that study the influence of various parameters on the heat load forecast, through trace-driven analysis in Weka and GeNIe. The results show that the current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97% for the three buildings across both seasons for the 1-hour forecast horizon while utilizing only 10% of the training data. The results indicate that even a simple model like the Naive Bayes classifier can forecast the heat load demand from little training data.
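A minimal sketch of such a classifier, using scikit-learn's GaussianNB on synthetic data; the feature set (current load, outdoor temperature forecast, hour of day) mirrors the abstract's most influential parameters, but the data, the load model and the discretization bins are illustrative assumptions:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    n = 2000
    temp_forecast = rng.uniform(-20.0, 10.0, n)   # outdoor temperature [C]
    hour = rng.integers(0, 24, n)                 # hour of day
    current_load = 60.0 - 2.0 * temp_forecast + rng.normal(0, 5, n)  # kW, synthetic

    # Next-hour load, discretized into low/medium/high classes (illustrative bins).
    next_load = current_load + rng.normal(0, 4, n)
    classes = np.digitize(next_load, bins=[60.0, 90.0])  # 0=low, 1=medium, 2=high

    X = np.column_stack([current_load, temp_forecast, hour])
    model = GaussianNB().fit(X[:1500], classes[:1500])
    print("hold-out accuracy:", model.score(X[1500:], classes[1500:]))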
Abstract:
With the new age of the Internet of Things (IoT), everyday objects such as mobile smart devices are starting to be equipped with cheap sensors and low-energy wireless communication capability. Mobile smart devices (phones, tablets) have become ubiquitous, with nearly everyone having access to at least one device. There is an opportunity to build innovative applications and services by exploiting these devices' untapped rechargeable energy, sensing and processing capabilities. In this thesis, we propose, develop, implement and evaluate LoadIoT, a peer-to-peer load-balancing scheme that can distribute tasks among a plethora of mobile smart devices in the IoT world. We develop and demonstrate an Android-based proof-of-concept load-balancing application. We also present a model of the system, which is used to validate the efficiency of the load-balancing approach under varying application scenarios. Load-balancing concepts can be applied to IoT scenarios involving smart devices, reducing the traffic sent to the cloud and the energy consumption of the devices. The data acquired from the experimental outcomes enable us to determine the feasibility and cost-effectiveness of load-balanced P2P smartphone-based applications.
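The core decision in such a scheme is choosing where a task should run; below is a minimal sketch of one plausible peer-selection heuristic (the device attributes and scoring weights are hypothetical, not LoadIoT's actual policy):

    from dataclasses import dataclass

    @dataclass
    class Peer:
        name: str
        battery: float   # 0..1 remaining charge
        cpu_load: float  # 0..1 current utilization

    def pick_peer(peers, min_battery=0.3):
        """Prefer charged, idle peers; return None to fall back to the cloud."""
        candidates = [p for p in peers if p.battery >= min_battery]
        if not candidates:
            return None
        # Hypothetical score: favour high battery and low CPU load equally.
        return max(candidates, key=lambda p: p.battery * (1.0 - p.cpu_load))

    peers = [Peer("phone-a", 0.9, 0.2), Peer("tablet-b", 0.5, 0.1),
             Peer("phone-c", 0.2, 0.0)]
    print(pick_peer(peers).name)  # phone-a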
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", generating data on the order of hundreds of millions of records every day. By analyzing these data and making predictions, we can draw up better development plans. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. First, the thesis introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes the advantages and disadvantages of each. Because resource management is the core role of YARN, the thesis then examines the resource allocation module, covering resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from the resource request to the finished allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler and the Fair Scheduler. The main work done in this thesis is the analysis of YARN's Dominant Resource Fairness (DRF) algorithm and the proposal of a maximum-resource-utilization algorithm based on it. The thesis also provides a suggestion for improving the unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of YARN's DRF algorithm. Because a cluster serves multiple users and offers multiple resource types, each user's resource request spans multiple resources. The DRF algorithm divides a user's requested resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share of the cluster is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource shares of all users to be equal. But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot maximize the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm takes "fairness" into consideration, but not as its main principle: maximizing resource utilization is the main principle and goal of the new algorithm. A comparison of the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
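For reference, the basic DRF allocation loop can be sketched as follows, using the two-resource example from the original DRF paper (9 CPUs and 18 GB of memory, with per-task demands of <1 CPU, 4 GB> and <3 CPU, 1 GB>); the figures come from that paper's example, not from this thesis:

    import numpy as np

    capacity = np.array([9.0, 18.0])                 # total CPUs, total GB
    demands = {"A": np.array([1.0, 4.0]),            # per-task demand of user A
               "B": np.array([3.0, 1.0])}            # per-task demand of user B
    allocated = {u: np.zeros(2) for u in demands}
    consumed = np.zeros(2)

    def dominant_share(user):
        # A user's dominant share is the max share over all resource types.
        return (allocated[user] / capacity).max()

    while True:
        user = min(demands, key=dominant_share)      # lowest dominant share first
        task = demands[user]
        if np.any(consumed + task > capacity):       # no room for another task
            break
        allocated[user] += task
        consumed += task

    for u in demands:
        print(u, allocated[u], round(dominant_share(u), 3))
    # Per the DRF paper's example, A ends with 3 tasks and B with 2 tasks.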