995 results for graphical methods


Relevance: 20.00%

Publisher:

Abstract:

Nowadays, software testing and quality assurance are of great value in the software development process. Software testing is not a single concrete discipline; it is a process of validation and verification that starts with the idea of a future product and finishes only at the end of the product's maintenance. Industry places great importance on software testing methods and tools that can be applied in different testing phases. The initial objectives of this thesis were to provide a sufficient literature review of the different testing phases and, for each phase, to define a method that can be used effectively to improve software quality. The software testing phases chosen for study are: unit testing, integration testing, functional testing, system testing, acceptance testing and usability testing. The research showed that there are many software testing methods that can be applied at different phases, and in most cases the choice of method should depend on the software's type and specification. In the thesis, a problem related to each of the phases was identified; a method that can help eliminate this problem was then suggested and described in detail.
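As an illustration of the unit testing phase discussed above, a minimal example in Python's built-in unittest framework; the function under test, parse_price, is a hypothetical example, not taken from the thesis:

```python
import unittest

def parse_price(text: str) -> float:
    """Hypothetical unit under test: parse a price string into a float."""
    value = float(text)
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

class TestParsePrice(unittest.TestCase):
    # Unit tests exercise one function in isolation, the first
    # of the testing phases studied in the thesis.
    def test_valid_price(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            parse_price("-1")

if __name__ == "__main__":
    unittest.main(exit=False)
```

The same function would later be exercised again, indirectly, in the integration and system testing phases.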

Abstract:

Three sensitive spectrophotometric methods are presented for the determination of finasteride in bulk and in tablets. The methods rely on the use of bromate-bromide reagent and three dyes, namely methyl orange, indigo carmine and thymol blue, as reagents. They involve the addition of a measured excess of bromate-bromide reagent to finasteride in acid medium and, after the bromination reaction is judged to be complete, the unreacted bromine is determined by reacting it with a fixed amount of either methyl orange, measuring the absorbance at 520 nm (method A); indigo carmine, measuring the absorbance at 610 nm (method B); or thymol blue, measuring the absorbance at 550 nm (method C). In all the methods, the amount of in situ generated bromine that reacted corresponds to the amount of finasteride. The absorbance measured at the respective wavelength is found to increase linearly with the concentration of finasteride. Beer's law is obeyed in the ranges 0.25-2.0, 0.5-6.0 and 1-12 µg mL^-1 for methods A, B and C, respectively. The calculated molar absorptivity values are 5.7 × 10^4, 3.12 × 10^4 and 1.77 × 10^4 L mol^-1 cm^-1, respectively, for methods A, B and C, and the corresponding Sandell sensitivity values are 0.0065, 0.012 and 0.021 µg cm^-2. The limits of detection (LOD) and quantification (LOQ) are also reported for all the methods. Accuracy, and intra-day and inter-day precision, were established according to the current ICH guidelines. The methods were successfully applied to the determination of finasteride in commercially available tablets, and the results were found to agree closely with the label claim. The results were statistically compared with those of a reference method by applying Student's t-test and the F-test. The accuracy and reliability of the methods were further confirmed by performing recovery tests via the standard addition procedure.
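The calibration step shared by all three methods, fitting a Beer's-law line of absorbance against concentration and reading an unknown sample off it, can be sketched in Python; the calibration points below are invented for illustration, not measured values:

```python
def fit_line(conc, absorb):
    """Ordinary least-squares fit of absorbance = slope * concentration + intercept."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(absorb) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, absorb))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

# Invented calibration standards for method A (µg/mL vs absorbance at 520 nm)
conc = [0.25, 0.5, 1.0, 1.5, 2.0]
absorb = [0.11, 0.21, 0.42, 0.63, 0.84]
slope, intercept = fit_line(conc, absorb)

# Read an unknown sample's concentration off the calibration line
unknown_abs = 0.50
unknown_conc = (unknown_abs - intercept) / slope
```

With these points the fitted slope is about 0.42 absorbance units per µg/mL, so an absorbance of 0.50 maps to roughly 1.2 µg/mL, inside the linear range claimed for method A.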

Abstract:

The number of autonomous wireless sensor and control nodes has been increasing rapidly during the last decade. Until recently, these wireless nodes have been powered with batteries, which has led to a short life cycle and a high need for maintenance. Due to these battery-related problems, new energy sources have been studied for powering wireless nodes. One solution is energy harvesting, i.e. extracting energy from the ambient environment. Energy harvesting can provide a long-lasting power source for sensor nodes with no need for maintenance. In this thesis, various energy harvesting technologies are studied, focusing on the theory of each technology and on state-of-the-art solutions from published studies and commercial products. In addition to energy harvesting, energy storage and energy management solutions are also studied as subsystems of a complete energy source solution. Wireless nodes are also used in heavy-duty vehicles, so a reliable, long-lasting and maintenance-free power source is needed in this kind of environment as well. A forestry harvester has been used as a case study of the feasibility of energy harvesting in a forestry harvester's sliding boom. The energy harvester should be able to produce a few milliwatts to power the target system, an independent limit switch.

Abstract:

A direct, extraction-free spectrophotometric method has been developed for the determination of acebutolol hydrochloride (ABH) in pharmaceutical preparations. The method is based on ion-pair complex formation between the drug and two acidic sulphonaphthalein dyes, namely bromocresol green (BCG) and bromothymol blue (BTB). Conformity to Beer's law enabled the assay of the drug in the range 0.5-13.8 µg mL^-1 with BCG and 1.8-15.9 µg mL^-1 with BTB. Compared with a reference method, the results obtained were of equal accuracy and precision. In addition, the methods were found to be specific for the analysis of acebutolol hydrochloride in the presence of the excipients that are co-formulated with the drug.

Abstract:

Highly sensitive and selective spectrophotometric methods (A and B) were developed for the determination of micro amounts of olanzapine (OLZ). Method A (the direct method) is based on the oxidation of olanzapine with a known excess of iodine monochloride (ICl) in an acidic medium. Under the same conditions, thymol blue is iodinated by the unreacted ICl, and the absorbance of the uniodinated thymol blue is measured at 536 nm. The decrease in ICl concentration is a measure of the drug concentration. In method B (the indirect method), OLZ is oxidized by a known excess of Ce(IV) in sulfuric acid medium, followed by the reaction of the unreacted Ce(IV) with leuco crystal violet (LCV) to form crystal violet (CV), which is measured in an acetate buffer medium (pH 4.9) at 580 nm. The methods obey Beer's law in the concentration ranges 0.2-1.6 µg mL^-1 (method A) and 0.1-1.4 µg mL^-1 (method B). The developed procedures have been successfully applied to the determination of OLZ in pure form and in dosage forms. The results exhibit no interference from excipients. The reliability of the methods was established by parallel determination of OLZ against the reference method.

Abstract:

Cutin and suberin are structural and protective polymers of plant surfaces. The epidermal cells of the aerial parts of plants are covered with an extracellular cuticular layer, which consists of the polyester cutin, highly resistant cutan, cuticular waxes and the polysaccharides that link the layer to the epidermal cells. A similar protective layer is formed by the polyaromatic-polyaliphatic biopolymer suberin, which is present particularly in the cell walls of the phellem layer of the periderm of the underground parts of plants (e.g. roots and tubers) and in the bark of trees. In addition, suberization is a major factor in wound healing and wound periderm formation regardless of the plant tissue. Knowledge of the composition and functions of cuticular and suberin polymers is important for understanding the physiological properties of the plants and their nutritional quality when these plants are consumed as foods. The aims of the practical work were to assess the chemical composition of the cuticular polymers of several northern berries and seeds and of the suberin of two varieties of potato. Cutin and suberin were studied as isolated polymers and, after depolymerization, as soluble monomers and solid residues. Chemical and enzymatic depolymerization techniques were compared and a new chemical depolymerization method was developed. Gas chromatography with mass spectrometric detection (GC-MS) was used to assess the monomer compositions. Polymer investigations were conducted with solid-state carbon-13 cross-polarization magic-angle-spinning nuclear magnetic resonance spectroscopy (13C CP-MAS NMR), Fourier transform infrared spectroscopy (FTIR) and microscopic analysis. Furthermore, the development of suberin over one year of post-harvest storage was investigated, and the cuticular layers of berries grown in the north and south of Finland were compared.
The results show that the amounts of isolated cuticular layers and cutin monomers, as well as the monomeric compositions, vary greatly between the berries. The monomer composition of the seeds was found to differ from that of the corresponding berry peels. The berry cutin monomers were composed mostly of long-chain aliphatic ω-hydroxy acids with various mid-chain functionalities (double bonds, epoxy, hydroxy and keto groups). Substituted α,ω-diacids predominated over ω-hydroxy acids in the potato suberin monomers, and slight differences were found between the varieties. The newly developed closed-tube chemical method was found to be suitable for cutin and suberin analysis and was preferred over the solvent-consuming and laborious reflux method. Enzymatic hydrolysis with cutinase was less effective than chemical methanolysis and showed specificity towards α,ω-diacid bonds. According to 13C CP-MAS NMR and FTIR, the depolymerization residues contained significant amounts of aromatic structures, polysaccharides and possibly cutan-type aliphatic moieties. Cultivation location seems to have an effect on cuticular composition. The materials studied contained significant amounts of different types of biopolymers that could be utilized for several purposes, with or without further processing. The importance of the so-called waste material from industrial processing of berries and potatoes as a source of either dietary fiber or specialty chemicals should be investigated further in detail. The evident impact of cuticular and suberin polymers, among other fiber components, on human health should be investigated in clinical trials. These by-product materials may be used as value-added fiber fractions in the food industry and as raw materials for specialty chemicals such as lubricants and emulsifiers, or as building blocks for novel polymers.

Abstract:

Currently, numerous high-throughput technologies are available for the study of human carcinomas, and many variations of these techniques have been described in the literature. The common denominator of these methodologies is the large amount of data obtained in a single experiment, in a short time and at fairly low cost. However, these methods also suffer from several problems and limitations. The purpose of this study was to test the applicability of two selected high-throughput methods, cDNA and tissue microarrays (TMA), in cancer research. Two common human malignancies, breast and colorectal cancer, were used as examples. This thesis aims to present some practical considerations that need to be addressed when applying these techniques. cDNA microarrays were applied to screen for aberrant gene expression in breast and colon cancers. Immunohistochemistry was used to validate the results and to evaluate the association of selected novel tumour markers with patient outcome. The type of histological material used in immunohistochemistry was evaluated, especially considering the applicability of whole tissue sections and different types of TMAs. Special attention was paid to the methodological details of the cDNA microarray and TMA experiments. In conclusion, many potential tumour markers were identified in the cDNA microarray analyses. Immunohistochemistry could be applied to validate the observed gene expression changes of selected markers and to associate their expression changes with patient outcome. In the current experiments, both TMAs and whole tissue sections could be used for this purpose. This study showed for the first time that securin and p120 catenin protein expression predict breast cancer outcome and that carbonic anhydrase IX immunopositivity associates with the outcome of rectal cancer. The predictive value of these proteins was statistically evident also in multivariate analyses, with up to a 13.1-fold risk of cancer-specific death in a specific subgroup of patients.

Abstract:

The aim of this work was to develop and validate simple, accurate and precise spectroscopic methods (multicomponent, dual-wavelength and simultaneous equations) for the simultaneous estimation and dissolution testing of ofloxacin and ornidazole tablet dosage forms. The dissolution medium was 900 mL of 0.01 N HCl, using a paddle apparatus at a stirring rate of 50 rpm. The drug release was evaluated by the developed and validated spectroscopic methods. Ofloxacin and ornidazole showed λmax at 293.4 and 319.6 nm, respectively, in 0.01 N HCl. The methods were validated to meet the requirements for a global regulatory filing. The validation covered linearity, precision and accuracy. In addition, recovery studies and dissolution studies of three different tablets were compared, and the results obtained show no significant difference among the products.
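The simultaneous-equations approach mentioned above reduces to solving a 2×2 linear system: the mixture's absorbance at each wavelength is the sum of each drug's absorptivity multiplied by its concentration. A sketch with invented absorptivity and absorbance values (real values must come from calibration):

```python
def simultaneous_equations(A1, A2, ax1, ay1, ax2, ay2):
    """Solve for the concentrations cx, cy of a two-drug mixture.

    A1, A2   : mixture absorbances at wavelengths 1 and 2
    ax1, ay1 : absorptivities of drugs X and Y at wavelength 1
    ax2, ay2 : absorptivities of drugs X and Y at wavelength 2
    """
    det = ax1 * ay2 - ay1 * ax2          # Cramer's rule on the 2x2 system
    cx = (A1 * ay2 - ay1 * A2) / det
    cy = (ax1 * A2 - A1 * ax2) / det
    return cx, cy

# Invented absorptivities and mixture absorbances (illustration only)
cx, cy = simultaneous_equations(A1=0.26, A2=0.30,
                                ax1=0.10, ay1=0.02,
                                ax2=0.03, ay2=0.08)
```

The method fails if the determinant is near zero, i.e. when the two drugs' spectra are too similar at the chosen wavelengths, which is why the two λmax values are used as the measurement wavelengths.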

Abstract:

The skill of programming is a key asset for every computer science student. Many studies have shown that this is a hard skill to learn and that the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to aid the learning process during the last few decades. The studies in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. The thesis includes results from multiple empirical studies of the effects that the introduction and usage of these tools have on students' opinions and performance, and of the implications from a teacher's point of view. The results show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tools motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and that one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded and have limited teaching resources: they allow us to tackle this problem with automatic assessment of the exercises best suited to the web (such as tracing and simulation), since this supports students' independent learning regardless of time and place.
In summary, we can use a course's resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even to increase the performance of the students. There are also methodological results from this thesis which contribute to insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation used by the tool. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Secondly, we have relevant experience of conducting teaching-related experiments, and thus we can support our colleagues in learning the essential know-how of research-based improvement of their teaching. This approach can turn academic teaching into publications, and by utilizing it we can significantly increase the adoption of new tools and techniques and overall increase the knowledge of best practices. In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.

Abstract:

The results shown in this thesis are based on selected publications from the 2000s. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed (CFB) combustors by developing and applying proper experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in the development of combustion and emission performance and of the availability and controls of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied together with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as functions of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and on an analysis of the fluctuation of the flue gas oxygen concentration.
The effect of high oxygen concentrations on fuel combustion behaviour was also studied, to evaluate the potential of CFB boilers for applying oxygen-firing technology to CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory-scale phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis shows the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot scale). CFB technology has been scaled up successfully from industrial scale to utility scale during the last decade. The work shown in this thesis has, for its part, supported this development by producing new detailed information on combustion under CFB conditions.

Abstract:

Credit risk assessment is an integral part of banking. Credit risk means that the return will not materialise if the customer fails to fulfil its obligations. Thus, a key component of banking is setting acceptance criteria for granting loans. The theoretical part of the study focuses on the key components of banks' credit risk assessment methods, as described in the literature, when extending credit to large corporations. The main component is the Basel II Accord, which sets the regulatory requirements for banks' credit risk assessment methods. The empirical part comprises, as its primary source, an analysis of the annual reports and risk management reports of major Nordic banks. As a secondary source, complementary interviews were carried out with senior credit risk assessment personnel. The findings indicate that all major Nordic banks use a combination of quantitative and qualitative information in their credit risk assessment models when extending credit to large corporations. The relative weight of qualitative information depends on the selected approach to the credit rating, i.e. point-in-time or through-the-cycle.

Abstract:

Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, etc. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjustable to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as far as possible. The methods also need to be robust when applied to different forest types. Since there are generally no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of a "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In large study areas with dense forests, field work is expensive and should therefore be minimized. To achieve cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory.
The mathematical model parameter definition steps are automated, and the cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of new area characteristics.
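The variable selection step motivated above can be illustrated with a simple greedy filter: rank candidate variables by correlation with the response and skip any that are nearly collinear with an already chosen variable. This is a minimal sketch with invented data and feature names, not the thesis's actual selection procedure:

```python
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def select_variables(features, response, max_vars=2, collinearity_limit=0.9):
    """Greedy filter: pick the features most correlated with the response,
    skipping any feature too collinear with an already selected one."""
    ranked = sorted(features,
                    key=lambda name: -abs(correlation(features[name], response)))
    chosen = []
    for name in ranked:
        if len(chosen) >= max_vars:
            break
        if all(abs(correlation(features[name], features[c])) < collinearity_limit
               for c in chosen):
            chosen.append(name)
    return chosen

# Invented plot-level data: stand volume as response, three candidate variables,
# one of which is strongly collinear with another
features = {
    "mean_height": [5, 10, 15, 20, 25],
    "mean_height_sq": [25, 100, 225, 400, 625],
    "texture": [3, 1, 4, 1, 5],
}
volume = [10, 20, 30, 40, 50]
chosen = select_variables(features, volume)
```

Here "mean_height_sq" is rejected because it is almost perfectly collinear with "mean_height", exactly the redundancy that variable selection is meant to remove before model fitting.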

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code, proving after each addition that the code is consistent with the invariants. In this way the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically, and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
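The flavour of invariant-based development can be approximated in ordinary code by stating the loop invariant explicitly and checking it at every step. In Socos such conditions are proved statically; the run-time assertions below are only an illustrative analogue:

```python
def sum_of(values):
    """Sum a list, with the loop invariant stated and checked at run time.

    Invariant: total == sum(values[:i])
    """
    total, i = 0, 0
    assert total == sum(values[:i])       # invariant holds initially
    while i < len(values):
        total += values[i]
        i += 1
        assert total == sum(values[:i])   # invariant maintained by the loop body
    # At exit, the invariant together with i == len(values)
    # implies total == sum(values): the postcondition.
    return total
```

In invariant-based programming the invariant is written first and the loop body is then derived so that each addition provably preserves it; here the order is reversed purely for illustration.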

Abstract:

New luminometric particle-based methods were developed to quantify protein and to count cells. The developed methods rely on the interaction of the sample with nano- or microparticles and on different principles of detection. In the fluorescence quenching, time-resolved luminescence resonance energy transfer (TR-LRET) and two-photon excitation fluorescence (TPX) methods, the sample prevents the adsorption of labeled protein to the particles. Depending on the system, the addition of the analyte increases or decreases the luminescence. In the dissociation method, the adsorbed protein protects the Eu(III) chelate on the surface of the particles from dissociation at low pH. The experimental setups are user-friendly and rapid and require neither hazardous test compounds nor elevated temperatures. The sensitivity of the protein quantification (from 40 to 500 pg of bovine serum albumin in a sample) was 20-500-fold better than that of the most sensitive commercial methods. The quenching method exhibited low protein-to-protein variability, and the dissociation method was insensitive to the assay contaminants commonly found in biological samples. Fewer than ten eukaryotic cells were detected and quantified with all the developed methods under optimized assay conditions. Furthermore, two applications, a method for the detection of protein aggregation and a cell viability test, were developed by utilizing the TR-LRET method. Protein aggregation could be detected at a more than 10,000 times lower concentration, 30 μg/L, than with the known methods of UV240 absorbance and dynamic light scattering. The TR-LRET method was combined with a nucleic acid assay using a cell-impermeable dye to measure the percentage of dead cells in a single-tube test with cell counts below 1000 cells/tube.

Abstract:

The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there are numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems add noise to the results, which in many cases causes the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions for the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up.
These are the critical areas for which alternative methods of modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar, as the pressure drop over the orifice approaches zero, only in rare situations: for example when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed that uses a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary condition. In the dynamic simulation of fluid power circuits, a tradeoff exists between accuracy and calculation speed; this investigation is made for the two-regime orifice flow model. Especially inside many types of valves, as well as between them, there are very small volumes. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero.
If fixed-step explicit algorithms for solving ordinary differential equations (ODE) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. The method is also freely applicable regardless of the integration routine used. The strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
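A minimal sketch of the two-regime orifice model described above: a turbulent square-root law with a smooth odd cubic replacing it near zero pressure drop, matched to the turbulent curve's value and slope at the transition pressure. The coefficient values and the odd-cubic form are illustrative assumptions, not the thesis's exact parametrization:

```python
import math

def orifice_flow(dp, K=1.0e-7, p_tr=1.0e5):
    """Two-regime orifice flow: Q = K*sqrt(|dp|)*sign(dp) above the
    transition pressure p_tr, smooth odd cubic Q = a*dp + b*dp**3 below it.

    a and b are chosen so that the cubic's value and first derivative
    match the turbulent curve at dp = p_tr, removing the infinite
    derivative dQ/dp at dp = 0.  (K and p_tr are illustrative values.)
    """
    a = 5.0 * K / (4.0 * math.sqrt(p_tr))
    b = -K / (4.0 * p_tr ** 2.5)
    if abs(dp) <= p_tr:
        return a * dp + b * dp ** 3       # laminar/transition region
    sign = 1.0 if dp > 0 else -1.0
    return sign * K * math.sqrt(abs(dp))  # turbulent region
```

Because the cubic is an odd function, the model handles reversing flow directions, and its bounded derivative at dp = 0 is exactly what keeps a fixed-step explicit integrator stable near zero pressure drop.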