1000 results for Ocean Modeling
Abstract:
This thesis investigated building information modeling (BIM) from a material supplier's point of view. The objective was to gain an understanding of how a building material supplier could benefit from the growing use of BIM in the AEC (architecture, engineering and construction) industry. An increasing number of BIM-related inquiries from customers and other interest groups had awakened the target company's interest in BIM. This thesis acts as a pre-study for the target company on the potential of BIM. First, BIM and its meaning from a material supplier's point of view were defined based on a literature review. To reveal the potential benefits of BIM for a material supplier, a questionnaire survey and a total of 11 interviews were conducted. Based on the literature review and the analyzed results, it became clear that BIM also offers benefits for material suppliers. Product libraries and material databases for BIM tools can act as an important marketing channel for material suppliers. Material suppliers could also utilize the information in BIM models to schedule their deliveries more precisely and potentially even to schedule their own production. All this requires deeper cooperation between material suppliers, contractors and other stakeholders in the AEC industry. Based on the results, the first steps for the target company to take advantage of the growing use of BIM were also defined.
Abstract:
In many engineering applications, compliant piping systems conveying liquids, such as plastic tubes in modern water supply transmission lines and metallic piping in nuclear power plants, are subjected to inelastic deformations due to severe pressure surges. In these cases, the design of such systems may require adequate modeling of the interactions between the fluid dynamics and the inelastic structural pipe motions. The reliability of the prediction of fluid-pipe behavior depends mainly on the adequacy of the constitutive equations employed in the analysis. This paper proposes a systematic and general approach for consistently incorporating different kinds of inelastic behavior of the pipe material into a fluid-structure interaction analysis. The main feature of the constitutive equations considered in this work is that a very simple numerical technique can be used for solving the coupled equations describing the dynamics of the fluid and the pipe wall. Numerical examples concerning the analysis of polyethylene and stainless steel pipe networks are presented to illustrate the versatility of the proposed approach.
Abstract:
This paper gives a detailed presentation of the Substitution-Newton-Raphson method, suitable for large sparse non-linear systems. It combines the Successive Substitution method and the Newton-Raphson method in such a way as to take advantage of both, keeping the convergence features of the Newton-Raphson method while retaining the low memory and time requirements of Successive Substitution schemes. The large system is solved with a small number of effective variables, using as many of the model equations as possible in substitution fashion to fix the remaining variables, while maintaining the convergence characteristics of the Newton-Raphson method. The methodology is exemplified through a simple algebraic system, and applied to a simple thermodynamic, mechanical and heat transfer model of a single-stage vapor compression refrigeration system. Three distinct approaches for reproducing the thermodynamic properties of the refrigerant R-134a are compared: linear interpolation from tabulated data, the use of fitted polynomial curves, and the use of functions derived from the Helmholtz free energy.
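As a rough illustration of the idea, the following sketch applies Newton-Raphson to a small set of "effective" variables while an inner successive-substitution loop resolves the remaining variables; the equations and values are invented for illustration and are not the paper's refrigeration system model.

```python
# Hypothetical sketch of a Substitution-Newton-Raphson scheme: Newton-Raphson acts
# on a few "effective" variables x, while the remaining variables y are resolved by
# successive substitution inside every residual evaluation.
import numpy as np

def solve_secondary(x, y0, tol=1e-10, max_iter=200):
    """Successive substitution for the secondary variables y at fixed effective x."""
    y = np.array(y0, dtype=float)
    for _ in range(max_iter):
        # Example fixed-point form y = g(x, y); in practice these are the model
        # equations rearranged so that each yields one secondary variable.
        y_new = np.array([0.5 * np.cos(x[0] * y[1]),
                          0.5 * np.sin(x[1] + y[0])])
        converged = np.max(np.abs(y_new - y)) < tol
        y = y_new
        if converged:
            break
    return y

def residual(x, y0):
    """Residuals of the few equations kept under Newton-Raphson control."""
    y = solve_secondary(x, y0)
    return np.array([x[0] ** 2 + y[0] - 1.2,
                     x[1] + y[1] ** 2 - 0.8])

def substitution_newton(x0, y0, tol=1e-8, max_iter=50, h=1e-7):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x, y0)
        if np.max(np.abs(f)) < tol:
            break
        # Finite-difference Jacobian with respect to the effective variables only.
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp, y0) - f) / h
        x = x - np.linalg.solve(J, f)
    return x, solve_secondary(x, y0)

x_sol, y_sol = substitution_newton([1.0, 0.5], [0.0, 0.0])
print("effective variables:", x_sol, "secondary variables:", y_sol)
```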
Abstract:
Industrial applications demand that robots operate according to the desired position and orientation of their end effector, which requires solving the inverse kinematics problem. This allows the joint displacements of the manipulator needed to accomplish a given objective to be determined. Complete studies of the dynamic control of robotic joints are also necessary. Initially, this article focuses on the implementation of numerical algorithms for the solution of the inverse kinematics problem and on the modeling and simulation of dynamic systems, using a real-time implementation. The modeling and simulation of dynamic systems are performed with an emphasis on off-line programming. Next, a complete study of the control strategies is carried out through the study of several elements of a robotic joint, such as the DC motor, inertia, and gearbox. Finally, a trajectory generator, used as input for a generic group of joints, is developed, and a proposal for the implementation of the joint controllers using an EPLD development system is presented.
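As an illustration of one common numerical approach to inverse kinematics, the sketch below runs a damped Newton (Jacobian) iteration on a hypothetical planar two-link arm; the link lengths, target pose and damping are assumptions, and this is not the article's manipulator, algorithm set or EPLD implementation.

```python
# Minimal numerical inverse kinematics sketch: damped least-squares (Levenberg-style)
# Jacobian iteration for an assumed planar two-link arm.
import numpy as np

L1, L2 = 0.5, 0.4  # assumed link lengths [m]

def forward(q):
    """End-effector position of the planar two-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytical 2x2 Jacobian of the forward kinematics."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def inverse_kinematics(target, q0, tol=1e-8, max_iter=100, damping=1e-4):
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = target - forward(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        # Damped least-squares step avoids blow-up near singular configurations.
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        q += dq
    return q

q = inverse_kinematics(np.array([0.6, 0.3]), q0=[0.1, 0.1])
print("joint angles:", q, "reached position:", forward(q))
```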
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use, and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey and register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to the design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
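A minimal sketch of the IPCW idea in a survival setting is given below: each subject is weighted by the inverse of its estimated probability of remaining uncensored, and a weighted product-limit (Kaplan-Meier) estimator is formed. The data and the censoring model are synthetic and deliberately simplistic; this is not the FI ECHP analysis.

```python
# IPCW-weighted Kaplan-Meier sketch with synthetic spell data and a simplified
# exponential censoring (attrition) model.
import numpy as np

def weighted_kaplan_meier(time, event, weights):
    """Product-limit estimator with per-subject weights (e.g. IPCW times design weight)."""
    order = np.argsort(time)
    time, event, weights = time[order], event[order], weights[order]
    surv = 1.0
    curve = []
    for t in np.unique(time):
        at_risk = weights[time >= t].sum()
        failures = weights[(time == t) & (event == 1)].sum()
        if at_risk > 0:
            surv *= 1.0 - failures / at_risk
        curve.append((t, surv))
    return np.array(curve)

rng = np.random.default_rng(0)
n = 500
duration = rng.exponential(12.0, n)   # true unemployment spell lengths (synthetic)
censor = rng.exponential(20.0, n)     # censoring / attrition times (synthetic)
time = np.minimum(duration, censor)
event = (duration <= censor).astype(int)

# Simplified IPCW: probability of remaining uncensored by the observed time under
# the exponential censoring model; in practice this would come from a model of
# attrition on covariates (e.g. a logistic or Cox model per wave).
p_uncensored = np.exp(-time / 20.0)
ipcw = 1.0 / np.clip(p_uncensored, 0.05, None)

km = weighted_kaplan_meier(time, event, ipcw)
print(km[:5])
```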
Abstract:
Prediction refers to estimating the future value of an observable quantity. Characteristic of the Bayesian paradigm is that uncertainty about unknown quantities is expressed in the form of probabilities. A Bayesian predictive model is thus a probability distribution over the possible values that an observable, but not yet observed, quantity can take. The articles included in the thesis develop methods that are applied, among other things, to the analysis of chromatographic data in criminal investigations. With the exception of the first article, all the methods are based on Bayesian predictive modeling. The articles mainly consider three different types of problems related to chromatographic data: quantification, pairwise matching and clustering. In the first article, a non-parametric model is developed for the measurement error of chromatographic analyses of blood alcohol concentration. In the second article, a predictive inference method for the comparison of two samples is developed. The method is applied in the third article to the comparison of oil samples, with the aim of identifying the polluting source in connection with oil spills. In the fourth article, a predictive model is derived for clustering data of mixed discrete and continuous type, which is applied, among other things, to the classification of amphetamine samples with respect to production batches.
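To make the notion of a Bayesian predictive model concrete, the sketch below computes the posterior predictive distribution for a new observation under a textbook conjugate normal model with a non-informative prior; it is purely illustrative and not one of the thesis's measurement-error, matching or clustering models.

```python
# Posterior predictive distribution for a normal model with a Jeffreys prior:
# for data x_1..x_n, a new observation follows a Student-t distribution with
# n-1 degrees of freedom, location x_bar and scale s * sqrt(1 + 1/n).
import numpy as np
from scipy import stats

def posterior_predictive(x):
    """Student-t posterior predictive for one future observation."""
    n = len(x)
    mean, s = np.mean(x), np.std(x, ddof=1)
    scale = s * np.sqrt(1.0 + 1.0 / n)
    return stats.t(df=n - 1, loc=mean, scale=scale)

# Example: repeated measurements of the same quantity (made-up numbers), and the
# predictive probability that a new reading exceeds 0.5.
x = np.array([0.48, 0.51, 0.47, 0.50, 0.49])
pred = posterior_predictive(x)
print("predictive mean:", pred.mean())
print("P(new measurement > 0.5):", pred.sf(0.5))
```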
Abstract:
The objective of the present master's thesis is the investigation of a high-pressure pretreatment process for gold leaching. Gold ores and concentrates that cannot be easily treated by the leaching process are called "refractory". These ores or concentrates often have a high content of sulfur and arsenic, which renders the precious metal inaccessible to the leaching agents. Since refractory ores take a considerable share of gold production, the pressure oxidation (autoclave) method is considered one of the possible ways to overcome the related problems. Mathematical modeling is the main approach used in this thesis to investigate the high-pressure oxidation process. For this task, the available information from the literature concerning this phenomenon, including the chemistry, mass transfer and kinetics, reaction conditions, applied apparatus and applications, was collected and studied. The modeling part investigates pyrite oxidation kinetics in order to create a descriptive mathematical model. The following major steps were completed: creation of a process model using the available knowledge; estimation of the unknown parameters and determination of the goodness of fit; and study of the reliability of the model and its parameters.
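A minimal sketch of the parameter-estimation step is shown below, assuming a shrinking-core style rate law fitted to synthetic conversion-time data by least squares; the rate expression, data and fitted value are illustrative assumptions, not the thesis's actual pyrite oxidation model or measurements.

```python
# Least-squares estimation of a kinetic rate constant from conversion-time data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import odeint

def conversion(t, k):
    """Integrate dX/dt = k * (1 - X)^(2/3) (surface-reaction-controlled shrinking core)."""
    def rhs(X, _t):
        return k * np.maximum(1.0 - X, 0.0) ** (2.0 / 3.0)
    return odeint(rhs, 0.0, t)[:, 0]

# Synthetic "measured" conversions with noise, standing in for autoclave data.
t_obs = np.linspace(0.0, 60.0, 13)  # minutes
X_obs = conversion(t_obs, 0.05) + np.random.default_rng(1).normal(0.0, 0.01, t_obs.size)

k_fit, k_cov = curve_fit(conversion, t_obs, X_obs, p0=[0.02])
print(f"fitted rate constant k = {k_fit[0]:.4f} 1/min, std = {np.sqrt(k_cov[0, 0]):.4f}")
```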
Abstract:
Fireside deposits can be found in many types of utility and industrial furnaces. Deposits in furnaces are problematic because they can reduce heat transfer, block gas paths and cause corrosion. To tackle these problems, it is vital to estimate the influence of deposits on heat transfer, to minimize deposit formation and to optimize deposit removal, and it is beneficial to have a good understanding of the mechanisms of fireside deposit formation. Numerical modeling is a powerful tool for investigating heat transfer in furnaces, and it can provide valuable information for understanding the mechanisms of deposit formation. In addition, a sub-model of deposit formation is generally an essential part of a comprehensive furnace model. This work investigates two specific processes of fireside deposit formation in two industrial furnaces. The first process is the slagging wall found in furnaces with molten deposits running down the walls. A slagging wall model is developed to take into account the two-layer structure of the deposits. With the slagging wall model, the thickness and the surface temperature of the molten deposit layer can be calculated. The slagging wall model is used to predict the surface temperature and the heat transfer to a specific section of a superheater tube panel, with the boundary condition obtained from a Kraft recovery furnace model. The slagging wall model is also incorporated into the computational fluid dynamics (CFD)-based Kraft recovery furnace model and applied to the lower furnace walls. The implementation of the slagging wall model includes a grid simplification scheme. The wall surface temperature calculated with the slagging wall model is used as the heat transfer boundary condition. A simulation of a Kraft recovery furnace is performed, and the results are compared with two other cases and with measurements. In the two other cases, a uniform wall surface temperature and a wall surface temperature calculated with a char bed burning model are used as the heat transfer boundary conditions. In this particular furnace, the wall surface temperatures from the three cases are similar and lie within the range of the measurements. Nevertheless, the wall surface temperature profiles with the slagging wall model and the char bed burning model are different because the deposits are represented differently in the two models. In addition, the slagging wall model is shown to be computationally efficient. The second process is deposit formation due to thermophoresis of fine particles towards the heat transfer surface. This process is considered in the simulation of a heat recovery boiler of the flash smelting process. In order to determine whether small dust particles stay on the wall, a criterion based on an analysis of the forces acting on the particle is applied. A time-dependent simulation of deposit formation in the heat recovery boiler is carried out and the influence of the deposits on heat transfer is investigated. The locations prone to deposit formation in the heat recovery boiler are also identified. Modeling of the two processes in the two industrial furnaces enhances the overall understanding of the processes. The sub-models developed in this work can be applied to other similar deposit formation processes with carefully defined boundary conditions.
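As a rough illustration of what a two-layer deposit implies for wall heat transfer, the sketch below evaluates 1-D steady conduction through a solid and a molten layer in series; all thicknesses, conductivities and the heat flux are assumed values, and the thesis's slagging wall model, which also solves for the molten-layer thickness from the running slag flow, is not reproduced here.

```python
# 1-D steady conduction through a two-layer fireside deposit on a cooled wall:
# the temperature rises across each layer in proportion to its thermal resistance.
T_wall = 600.0       # tube/wall surface temperature under the deposit [K] (assumed)
q = 50e3             # net heat flux through the deposit [W/m2] (assumed)
k_solid, d_solid = 1.0, 0.010   # solid deposit conductivity [W/(m K)] and thickness [m]
k_melt,  d_melt  = 1.5, 0.002   # molten layer conductivity [W/(m K)] and thickness [m]

# Series thermal resistances of the two layers (per unit area).
R_solid = d_solid / k_solid
R_melt = d_melt / k_melt

T_interface = T_wall + q * R_solid    # solid/molten interface temperature
T_surface = T_interface + q * R_melt  # deposit surface temperature facing the gas

print(f"interface temperature: {T_interface:.0f} K")
print(f"deposit surface temperature: {T_surface:.0f} K")
```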
Abstract:
In this study we discuss the electronic, structural, and optical properties of titanium dioxide nanoparticles, as well as the properties of Ni(II) diimine dithiolato complexes as dyes in dye-sensitized TiO2-based solar cells. The above-mentioned properties have been modeled using computational codes based on density functional theory (DFT). The results show slight evidence of structure-dependent band gap broadening, and clear blue shifts in the absorption spectra and refractive index functions of ultra-small TiO2 particles. It is also shown that these properties depend strongly on the shape of the nanoparticles. Regarding the Ni(II) diimine dithiolato complexes as dyes in dye-sensitized TiO2-based solar cells, it is shown, based on the experimental electrochemical investigation and the DFT studies, that all the studied diimine derivatives could serve as potential candidates for light harvesting, but the efficiencies of the studied dyes are not very promising.
Abstract:
The aim of this work is to study the results of tensile tests on austenitic stainless steel type 304 and to build accurate FE models based on the test results. The tensile tests were performed at the Central Research Institute of Structural Materials (Prometey) in Saint Petersburg and Mariyenburg, Russia. The test specimens were produced at Lappeenranta University of Technology in the Laboratory of Steel Structures. In total, four different tests were performed: two with base material specimens and two with transverse butt weld specimens. Each kind of specimen was tested at room temperature and at low temperature. By comparing the room- and low-temperature results of similar specimens, the work hardening that affects austenitic steels below room temperature can be studied. The specimens are modeled accurately and then imported for nonlinear FEM analysis. Using the data obtained from the tensile tests, the aim is to make the models behave as the specimens did during the tests. From the analyzed FE model results, the aim is to calculate stress-strain curves that correspond to the results acquired from the tensile tests.
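A common preprocessing step when turning tensile-test data into a nonlinear FE material definition is the conversion from engineering to true stress and strain; the sketch below illustrates it with made-up data points and an assumed Young's modulus, and is valid only up to the onset of necking.

```python
# Conversion of engineering stress-strain data into true stress / logarithmic
# plastic strain pairs, the form typically entered into a nonlinear FE material model.
import numpy as np

E = 193e3  # assumed Young's modulus of type 304 austenitic steel [MPa]

# Hypothetical engineering strain [-] and stress [MPa] pairs from a tensile test.
eng_strain = np.array([0.002, 0.01, 0.05, 0.10, 0.20, 0.40])
eng_stress = np.array([230.0, 260.0, 330.0, 400.0, 520.0, 700.0])

true_stress = eng_stress * (1.0 + eng_strain)
true_strain = np.log(1.0 + eng_strain)
plastic_strain = true_strain - true_stress / E  # subtract the elastic part

for s, ep in zip(true_stress, plastic_strain):
    print(f"true stress {s:7.1f} MPa, plastic strain {max(ep, 0.0):.4f}")
```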
Abstract:
In the present work, liquid-solid flow at industrial scale is modeled using the commercial computational fluid dynamics (CFD) software ANSYS Fluent 14.5. In the literature, there are few studies on liquid-solid flow at industrial scale, and no information can be found about the particular case with the modified geometry. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application, including its boundary-layer characteristics, is studied. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime. Thus, a careful estimate of the flow regime is recommended before modeling; a computational tool is developed for this purpose in this thesis. The homogeneous multiphase model is valid only for homogeneous suspension, and the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspension where the pipe Froude number is greater than 1.0, while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With an increasing material density ratio and a decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include the simplifications of the Navier-Stokes equations made in the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, secondary flows perpendicular to the main flow in particular affect the location of erosion.
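A minimal sketch of such a flow-regime check is given below, using the densimetric pipe Froude number definition common in slurry transport; the exact definition, threshold use and tool implementation in the thesis are assumptions here.

```python
# Flow-regime check based on the densimetric pipe Froude number
# Fr = V / sqrt(g * D * (rho_s / rho_l - 1)); Fr above roughly 1.0 suggests the
# suspension is maintained, while lower values suggest particles tend to settle.
import math

def pipe_froude_number(velocity, diameter, rho_solid, rho_liquid, g=9.81):
    return velocity / math.sqrt(g * diameter * (rho_solid / rho_liquid - 1.0))

# Hypothetical industrial-scale case: 3 m/s slurry of sand-density solids in a 0.4 m pipe.
Fr = pipe_froude_number(velocity=3.0, diameter=0.4, rho_solid=2650.0, rho_liquid=1000.0)
regime = ("suspension maintained (DPM/mixture applicable)" if Fr > 1.0
          else "settling likely (mixture/Eulerian model recommended)")
print(f"Fr = {Fr:.2f}: {regime}")
```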
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in high kilo-ampere currents. An alternative for increasing the power level without raising the voltage level is provided by multiphase machines. Multiphase machines are used for instance in ship propulsion systems, aerospace applications, electric vehicles, and other high-power applications including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also the problem of high current harmonics, which are easily generated because of the small impedance of the harmonic current paths. However, multiphase machines provide special characteristics compared with their three-phase counterparts: multiphase machines have better fault tolerance and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume, and thus increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on the diagonalization of the inductance matrix. The double-star machine is a special type of multiphase machine. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is treated as a parameter. The diagonalization of the inductance matrix results in a simplified model structure, in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame in which they can be easily controlled. The work also presents methods to determine the machine inductances by finite-element analysis and, on-site, by means of voltage-source inverters. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine with the sets displaced by 30 electrical degrees. The derived transformation and, consequently, the decoupled d–q machine model are shown to capture the behavior of an actual machine with acceptable accuracy. Thus, the proposed model is suitable for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
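For orientation, the sketch below builds the widely used vector space decomposition (VSD) transformation for a double-star winding with a 30-degree displacement and shows how a balanced fundamental maps only into the torque-producing subspace; this is the textbook form, not necessarily the transformation derived in the thesis from the inductance matrix diagonalization.

```python
# Vector space decomposition (VSD) for an asymmetrical six-phase (double-star) winding:
# phase quantities are mapped into alpha-beta (torque-producing), x-y (harmonic) and
# two zero-sequence components.
import numpy as np

def vsd_matrix(displacement=np.pi / 6):
    """Amplitude-invariant VSD matrix for two three-phase sets displaced by `displacement`."""
    theta = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3,
                      displacement, displacement + 2 * np.pi / 3, displacement + 4 * np.pi / 3])
    return np.vstack([
        np.cos(theta),       # alpha
        np.sin(theta),       # beta
        np.cos(5 * theta),   # x (collects 5th/7th-type harmonics)
        np.sin(5 * theta),   # y
        [1, 1, 1, 0, 0, 0],  # zero sequence, set 1
        [0, 0, 0, 1, 1, 1],  # zero sequence, set 2
    ]) / 3.0

# Example: balanced fundamental currents in both sets map only to alpha-beta.
T = vsd_matrix()
theta_e = 0.7  # electrical angle [rad]
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
i_set1 = np.cos(theta_e - angles)              # balanced currents, set 1
i_set2 = np.cos(theta_e - angles - np.pi / 6)  # set 2, lagging by 30 electrical degrees
i_vsd = T @ np.concatenate([i_set1, i_set2])
print("alpha, beta:", i_vsd[:2])
print("x, y (expected ~0 for a pure fundamental):", i_vsd[2:4])
```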
Abstract:
Lectio praecursoria, Åbo Akademi University 7 June 2013.
Abstract:
The aim of this study was to model light interception and distribution in a mixed canopy of common cocklebur (Xanthium strumarium) and corn. An experiment was conducted in a factorial arrangement based on a randomized complete block design with three replications in Gonabad in the 2006-2007 and 2007-2008 seasons. The factors were corn densities of 7.5, 8.5 and 9.5 plants per meter of row and common cocklebur densities of 0, 2, 4, 6 and 8 plants per meter of row. The INTERCOM model was used, with the parabolic function of leaf area density replaced by a triangular function. The vertical distribution of the species' leaf area showed that corn concentrated most of its leaf area in the 80-100 cm layer, while common cocklebur concentrated its leaf area at 35-50 cm of canopy height. Model sensitivity analysis showed that the leaf area index, species height, the height at which the maximum leaf area occurs (hm), and the extinction coefficient influence the light interception rate of each species. In both species, the density distribution of leaf area along the canopy height fit a triangular function, and the height at which the maximum leaf area was observed changed with density. There was a correlation between the percentage of radiation absorbed by the weed and the percentage of corn seed yield loss (r² = 0.89). The ideal corn type in competition with the weed was determined up to the tasseling stage; it indicates that corn needs greater height and leaf area, as well as a lower extinction coefficient, to compete successfully against the weed.
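A simplified illustration of the kind of light-interception calculation involved is sketched below: a Beer-Lambert extinction law applied over triangular leaf area density profiles of the two species. All canopy parameters are assumed values, and this is not the INTERCOM model itself.

```python
# Canopy light interception with a Beer-Lambert extinction law and triangular
# vertical leaf area density (LAD) profiles for corn and common cocklebur.
import numpy as np

def triangular_lad(z, height, lai, hm):
    """Leaf area density [m2 leaf / m3] with a triangular profile peaking at height hm."""
    peak = 2.0 * lai / height  # chosen so the profile integrates to the LAI
    lad = np.where(z <= hm, peak * z / hm, peak * (height - z) / (height - hm))
    return np.clip(np.where(z <= height, lad, 0.0), 0.0, None)

# Assumed canopy parameters: height [m], LAI [-], peak height hm [m], extinction coefficient k.
corn = dict(height=2.2, lai=3.5, hm=0.9, k=0.6)
weed = dict(height=1.5, lai=1.2, hm=0.45, k=0.8)

z = np.linspace(0.0, 2.2, 221)  # vertical grid, canopy top at 2.2 m
dz = z[1] - z[0]

# Cumulative k*LAD from the canopy top downward (both species attenuate the light).
kLAD = corn["k"] * triangular_lad(z, corn["height"], corn["lai"], corn["hm"]) \
     + weed["k"] * triangular_lad(z, weed["height"], weed["lai"], weed["hm"])
cum_from_top = np.cumsum(kLAD[::-1])[::-1] * dz
relative_light = np.exp(-cum_from_top)  # fraction of incoming radiation at height z

print(f"light reaching the ground: {relative_light[0]:.2%}")
print(f"intercepted by the mixed canopy: {1.0 - relative_light[0]:.2%}")
```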
Abstract:
The theoretical part of the study focused on business process management and business process modeling; the goal was to find a new business process modeling method for an electrical accessories manufacturing enterprise. The focus was to find a few candidate business process modeling methods from which the company could choose the one best suited to its needs. The study was carried out as qualitative research, with an action study and a case study as the most important ways to collect data. In the empirical part of the study, examples of the company's processes modeled with the new method, as well as the process modeling procedure, are presented. The new way of modeling processes especially improves the visual presentation of the processes and the understanding of how employees should work at the organizational interfaces of a process and at the interfaces between different processes. The result of the study is a new, unified way to model the company's processes, which makes the process models easier to understand and create. The improved readability makes it possible to reduce the costs caused by the unclear old process models.