958 results for Finite Difference Time Domain Method


Relevance: 100.00%

Abstract:

Biogeochemical processes in the coastal region, including the coastal area of the Great Lakes, are of great importance due to complex physical, chemical and biological characteristics that differ from those of either the adjoining land or the open-water system. Particle-reactive radioisotopes, both naturally occurring (210Pb, 210Po and 7Be) and man-made (137Cs), have proven to be useful tracers for these processes in many systems. However, a systematic isotope study on the northwest coast of the Keweenaw Peninsula in Lake Superior has not yet been performed. In this dissertation research, field sampling, laboratory measurements and numerical modeling were conducted to understand the biogeochemistry of the radioisotope tracers and some particle-related coastal processes. In the first part of the dissertation, radioisotope activities of 210Po and 210Pb were measured in a variety of samples (dissolved phase, suspended particles, sediment-trap material, surficial sediment). A complete picture of the distribution and disequilibrium of this pair of isotopes was obtained. The application of a simple box model utilizing these field observations reveals short isotope residence times in the water column and a significant contribution from sediment resuspension (for both particles and isotopes). The results imply a highly dynamic coastal region. In the second part of the dissertation, this conclusion is examined further. Based on intensive sediment coring, the spatial distribution of isotope inventories (mainly 210Pb, 137Cs and 7Be) in the nearshore region was determined. Isotope-based focusing factors categorized most of the sampling sites as non-depositional or temporarily depositional zones. A two-dimensional steady-state box-in-series model was developed and applied to individual transects with the 210Pb inventories as model input. The modeling framework included both the water column and the upper sediments down to the depth of unsupported 210Pb penetration.
The model was used to predict isotope residence times and cross-margin fluxes of sediments and isotopes at different locations along each transect. The time scale for sediment focusing from the nearshore to the offshore regions of a transect was on the order of 10 years. The possibility of longshore sediment movement was indicated by high 137Cs:210Pb inventory ratios. Local deposition of fine particles, including fresh organic carbon, may explain the observed distribution of benthic organisms such as Diporeia. In the last part of the dissertation, the isotope tracers 210Pb and 210Po were coupled into a hydrodynamic model for Lake Superior. The model was adapted from an existing 2-D finite difference physical-biological model that had previously been applied successfully to Lake Superior. Using the field results from the first part of the dissertation as initial conditions, the model predicted the isotope distribution in the water column with reasonable accuracy. The modeling experiments demonstrated the potential of a hydrodynamic model for studying radioisotope biogeochemistry in the lake, although further refinements are necessary.
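The box-model reasoning in the first part of the abstract can be sketched in a few lines: at steady state, the residence time of a particle-reactive isotope is its standing inventory divided by its total removal flux, and the resuspension contribution follows from a flux balance. The sketch below is illustrative only; all numerical values are hypothetical, not the dissertation's field data.

```python
# Steady-state box-model estimate of a particle-reactive isotope's
# residence time: tau = standing inventory / total removal flux.
# All numbers below are illustrative, not field data.

def residence_time(inventory_dpm_m2, flux_dpm_m2_d):
    """Residence time (days) = water-column inventory / removal flux."""
    return inventory_dpm_m2 / flux_dpm_m2_d

def resuspension_fraction(trap_flux, primary_flux):
    """Fraction of the sediment-trap flux attributable to resuspension,
    assuming trap_flux = primary settling flux + resuspension flux."""
    return (trap_flux - primary_flux) / trap_flux

tau = residence_time(inventory_dpm_m2=500.0, flux_dpm_m2_d=50.0)
f_resusp = resuspension_fraction(trap_flux=80.0, primary_flux=20.0)
print(tau, f_resusp)
```

A short residence time (days rather than years) combined with a large resuspension fraction is the kind of signature the abstract summarizes as "a highly dynamic coastal region".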

Abstract:

In process industries, make-and-pack production is used to produce food and beverages, chemicals, and metal products, among others. This type of production process allows the fabrication of a wide range of products in relatively small amounts using the same equipment. In this article, we consider a real-world production process (cf. Honkomp et al. 2000. The curse of reality – why process scheduling optimization problems are difficult in practice. Computers & Chemical Engineering, 24, 323–328.) comprising sequence-dependent changeover times, multipurpose storage units with limited capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists of computing a production schedule such that a given demand of packed products is fulfilled, all technological constraints are satisfied, and the production makespan is minimised. None of the models in the literature covers all of the technological constraints that occur in such make-and-pack production processes. To close this gap, we develop an efficient mixed-integer linear programming model that is based on a continuous time domain and general-precedence variables. We propose novel types of symmetry-breaking constraints and a preprocessing procedure to improve the model performance. In an experimental analysis, we show that small- and moderate-sized instances can be solved to optimality within short CPU times.
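To make the objective concrete, the sketch below exhaustively minimises the makespan of a tiny single-machine instance with sequence-dependent changeover times. The article's actual model is a continuous-time MILP with general-precedence variables and many further constraints; this brute-force toy (with invented job data) only illustrates how changeovers enter the makespan.

```python
from itertools import permutations

# Toy instance: three jobs with processing times and a sequence-dependent
# changeover matrix. All data are hypothetical.
proc = {"A": 4, "B": 3, "C": 5}
change = {("A", "B"): 2, ("A", "C"): 1,
          ("B", "A"): 2, ("B", "C"): 3,
          ("C", "A"): 1, ("C", "B"): 2}

def makespan(seq):
    """Total completion time of a job sequence on one machine."""
    total = proc[seq[0]]
    for i, j in zip(seq, seq[1:]):
        total += change[(i, j)] + proc[j]   # changeover, then next job
    return total

best = min(permutations(proc), key=makespan)
print(best, makespan(best))
```

For realistic instance sizes this enumeration is hopeless (n! sequences), which is exactly why the article formulates the problem as a MILP with symmetry-breaking constraints instead.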

Abstract:

Patients suffering from cystic fibrosis (CF) show thick secretions, mucus plugging and bronchiectasis in bronchial and alveolar ducts. This results in substantial structural changes of the airway morphology and heterogeneous ventilation. Disease progression and treatment effects are monitored by so-called gas washout tests, where the change in concentration of an inert gas is measured over a single breath or multiple breaths. The test result, derived from the profile of the measured concentration, is a marker for the severity of the ventilation inhomogeneity and is strongly affected by the airway morphology. However, it is hard to attribute underlying obstructions to specific parts of the airways, especially if they occur in the lung periphery. In order to support the analysis of lung function tests (e.g. multi-breath washout), we developed a numerical model of the entire airway tree, coupling a lumped parameter model for the lung ventilation with a 4th-order accurate finite difference model of a 1D advection-diffusion equation for the transport of an inert gas. The boundary conditions for the flow problem comprise the pressure and flow profile at the mouth, which are typically known from clinical washout tests. The natural asymmetry of the lung morphology is approximated by a generic, fractal, asymmetric branching scheme applied to the conducting airways. A conducting airway ends when its dimension falls below a predefined limit, and a model acinus is then connected to each terminal airway. The morphology of an acinus unit comprises a network of expandable cells, and a regional, linear constitutive law describes the pressure-volume relation between the pleural gap and the acinus. The cyclic expansion (breathing) of each acinus unit depends on the resistance of the feeding airway and on the flow resistance and stiffness of the cells themselves.
Special care was taken in the development of a conservative numerical scheme for the gas transport across bifurcations, handling spatially and temporally varying advective and diffusive fluxes over a wide range of scales. Implicit time integration was applied to account for the numerical stiffness resulting from the discretized transport equation. Local or regional modifications of the airway dimensions, resistance or tissue stiffness are introduced to mimic pathological airway restrictions typical of CF. This leads to a more heterogeneous ventilation of the model lung. As a result, the concentration in some distal parts of the lung model remains elevated for a longer duration. The inert gas concentration at the mouth towards the end of the expirations is composed of gas from regions with very different washout efficiency, which results in a steeper slope of the corresponding part of the washout profile.
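The transport model at the heart of this abstract is a 1D advection-diffusion equation. As a minimal illustration of its discretization, the sketch below advances an inert-gas front with first-order upwind advection and central diffusion under explicit Euler time stepping; the thesis itself uses a 4th-order accurate scheme with implicit time integration on a branching tree, and all parameters here are hypothetical.

```python
# Minimal 1D advection-diffusion step (first-order upwind advection,
# central diffusion, explicit Euler) as a stand-in for the higher-order
# implicit scheme described in the abstract. Parameters are hypothetical.

def step(c, u, D, dx, dt):
    """Advance concentration profile c one time step (assumes u > 0)."""
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -u * (c[i] - c[i - 1]) / dx                  # upwind advection
        dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # central diffusion
        new[i] = c[i] + dt * (adv + dif)
    return new  # boundary values held fixed (Dirichlet)

c = [1.0] + [0.0] * 49          # gas front entering at the proximal end
for _ in range(100):
    c = step(c, u=0.5, D=0.01, dx=0.1, dt=0.01)
print(c[5])
```

With these coefficients the explicit update is a convex combination of neighbouring values (dt*(u/dx + 2D/dx^2) < 1), so the scheme respects the discrete maximum principle; the stiffness that motivates implicit integration in the thesis arises when airway dimensions, and hence these coefficients, vary over orders of magnitude.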

Abstract:

MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of 1H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine.
The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
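Two of the most important features reported above, frequency-domain skewness and kurtosis, are cheap summary statistics of the spectrum. The pure-Python sketch below computes them for two toy "spectra" (invented for illustration); the actual classifier in the article is a random forest trained on many such features.

```python
import math

# Skewness and excess kurtosis of a spectrum, two of the features the
# article reports as most important for quality classification.
# The toy spectra below are hypothetical.

def skewness(x):
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return sum((v - m) ** 3 for v in x) / (n * s ** 3)

def kurtosis(x):
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * s2 ** 2) - 3.0

# A clean spectrum concentrates intensity in a few narrow peaks, which
# shows up as strongly positive skewness and kurtosis; a noise-dominated
# spectrum is far flatter.
peaky = [0.0] * 95 + [1.0, 5.0, 10.0, 5.0, 1.0]
flat = [1.0, 2.0] * 50
print(skewness(peaky), kurtosis(peaky), kurtosis(flat))
```

The intuition is that sharply peaked, heavy-tailed intensity distributions (high skewness/kurtosis) indicate resolved metabolite peaks, whereas broad, symmetric distributions suggest baseline distortion or noise.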

Abstract:

Background: The failure rate of health information systems is high, partially due to fragmented, incomplete, or incorrect identification and description of specific and critical domain requirements. In order to systematically transform work requirements into a real information system, an explicit conceptual framework is essential to summarize the work requirements and guide system design. Recently, Butler, Zhang, and colleagues proposed a conceptual framework called Work Domain Ontology (WDO) to formally represent users' work. This WDO approach has been successfully demonstrated in a real-world design project on aircraft scheduling. However, as a top-level conceptual framework, WDO has not defined an explicit and well-specified schema (WDOS), and it does not have a generalizable and operationalized procedure that can be easily applied to develop a WDO. Moreover, a WDO has not been developed for any concrete healthcare domain. These limitations hinder the utility of WDO in real-world information systems in general and in health information systems in particular. Objective: The objective of this research is to formalize the WDOS, operationalize a procedure to develop a WDO, and evaluate the WDO approach using the Self-Nutrition Management (SNM) work domain. Method: Concept analysis was used to formalize the WDOS. A focus group interview was conducted to capture concepts in the SNM work domain. Ontology engineering methods were adopted to model the SNM WDO. Part of the concepts under the primary goal "staying healthy" for SNM were selected and transformed into a semi-structured survey to evaluate the acceptance, explicitness, completeness, consistency and experience dependency of the SNM WDO. Result: Four concepts, "goal, operation, object and constraint", were identified and formally modeled in the WDOS with definitions and attributes. Seventy-two SNM WDO concepts under the primary goal were selected and transformed into semi-structured survey questions.
The evaluation indicated that the major concepts of the SNM WDO were accepted by the 41 overweight subjects. The SNM WDO is largely independent of users' domain experience but partially dependent on their SNM application experience. Twenty-three of the 41 paired concepts showed significant correlations. Two concepts were identified as ambiguous, and eight additional concepts were recommended to improve the completeness of the SNM WDO. Conclusion: The preliminary WDOS is ready, together with an operationalized development procedure. The SNM WDO has been developed to guide future SNM application design. This research is an essential step towards Work-Centered Design (WCD).
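The four WDOS concepts named in the results (goal, operation, object, constraint) can be pictured as a small schema of linked record types. The sketch below is an illustration only: the attribute names and the example instance are assumptions for the SNM domain, not the schema formalized in the dissertation.

```python
from dataclasses import dataclass, field

# Illustrative encoding of the four WDOS concepts as record types.
# Attribute names and the sample instance are hypothetical.

@dataclass
class Goal:
    name: str
    subgoals: list = field(default_factory=list)

@dataclass
class WorkObject:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Constraint:
    description: str

@dataclass
class Operation:
    name: str
    acts_on: WorkObject      # the object the operation manipulates
    serves: Goal             # the goal the operation contributes to
    constraints: list = field(default_factory=list)

meal = WorkObject("meal", {"calories": 650})
healthy = Goal("staying healthy")
log_meal = Operation("log a meal", acts_on=meal, serves=healthy,
                     constraints=[Constraint("record within the same day")])
print(log_meal.serves.name)
```

Linking each operation to the goal it serves and the object it acts on, under explicit constraints, is the kind of traceability from work requirements to design that the WDO approach aims to provide.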

Abstract:

Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion-averaged values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented within the framework of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Physical fluxes are then transformed into modal fluxes in the eigenspace of the diffusion matrix. It is thus possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
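The core algebraic step described above, diagonalizing the multigroup matrix and mapping physical fluxes into modal fluxes, can be shown for a two-group case. The 2x2 matrix below is hypothetical, chosen only so the eigendecomposition is easy to verify; it is not an actual diffusion matrix from the paper.

```python
import math

# Modal transformation in two groups: diagonalize A = R diag(lam) R^-1
# and map physical fluxes phi into modal fluxes xi = R^-1 phi.
# Matrix entries are hypothetical.

def eig2(a, b, c, d):
    """Eigenvalues and eigenvector matrix of [[a, b], [c, d]] (b != 0)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr / 4 - det)
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    # For b != 0, (b, lam - a) is an eigenvector for eigenvalue lam.
    R = [[b, b], [lam1 - a, lam2 - a]]   # eigenvectors as columns
    return (lam1, lam2), R

def solve2(R, phi):
    """Modal fluxes xi with R @ xi = phi, via Cramer's rule."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    xi1 = (phi[0] * R[1][1] - R[0][1] * phi[1]) / det
    xi2 = (R[0][0] * phi[1] - phi[0] * R[1][0]) / det
    return xi1, xi2

(lam1, lam2), R = eig2(2.0, -1.0, -1.0, 2.0)   # hypothetical 2-group matrix
xi = solve2(R, phi=[1.0, 0.5])                  # physical -> modal fluxes
phi_back = [R[0][0] * xi[0] + R[0][1] * xi[1],  # modal -> physical again
            R[1][0] * xi[0] + R[1][1] * xi[1]]
print(lam1, lam2, xi, phi_back)
```

In the modal basis each equation decouples, which is what lets ACMFD express interface corrections as jumps of modal fluxes rather than ratios of physical fluxes.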

Abstract:

Within the framework of the Collaborative Project for a European Sodium Fast Reactor, the reactor physics group at UPM is working on the extension of its in-house multi-scale advanced deterministic code COBAYA3 to Sodium Fast Reactors (SFR). COBAYA3 is a 3D multigroup neutron kinetics diffusion code that can be used either as a pin-by-pin code or as a stand-alone nodal code by using the analytic nodal diffusion solver ANDES. It is coupled with thermal-hydraulics codes such as COBRA-TF and FLICA, allowing transient analysis of LWRs at both fine-mesh and coarse-mesh scales. In order to also enable 3D pin-by-pin and nodal coupled NK-TH simulations of SFRs, different developments are in progress. This paper presents the first steps towards the application of COBAYA3 to this type of reactor. The ANDES solver, already extended to triangular-Z geometry, has been applied to fast reactor steady-state calculations. The required cross-section libraries were generated with the ERANOS code for several configurations. The limitations encountered in the application of the Analytic Coarse Mesh Finite Difference (ACMFD) method (implemented inside ANDES) to fast reactors are presented, and the sensitivity of the method when using a high number of energy groups is studied. ANDES performance is assessed by comparison with the results provided by ERANOS, using a mini-core model with 33 energy groups. Furthermore, an NEA benchmark for a small 3D FBR in hexagonal-Z geometry with 4 energy groups is also employed to verify the behavior of the code with few energy groups.

Abstract:

The development of a global instability analysis code is presented, coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses [1, 2], with the finite-volume-based spatial discretization used in standard aerodynamics codes. The key advantage of the time-stepping method over matrix-forming approaches is that the former avoids the computer-storage issues associated with the latter methodology. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated against solutions of the BiGlobal eigenvalue problem (EVP), and it has been shown that small errors in the base flow do not affect the results significantly. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and a time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
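The storage advantage of time stepping comes from never forming the stability matrix: only its action on a vector (one linearized run of the flow solver) is needed, and an iterative eigenvalue method recovers the leading mode. The sketch below illustrates the idea with plain power iteration on a hypothetical 3x3 operator standing in for the linearized time-step map; production codes use Krylov methods (e.g. Arnoldi) rather than power iteration.

```python
import math

# Matrix-free flavour of time-stepping stability analysis: only the map
# v -> L(v) is needed, here played by a hypothetical 3x3 matrix. Power
# iteration recovers the leading eigenvalue (growth per time step).

A = [[0.9, 0.2, 0.0],
     [0.0, 1.1, 0.1],
     [0.0, 0.0, 0.5]]

def apply(A, v):
    """Stand-in for one linearized DNS time step acting on perturbation v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, v, iters=200):
    for _ in range(iters):
        w = apply(A, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = apply(A, v)
    return sum(x * y for x, y in zip(w, v))   # Rayleigh quotient

lam = power_iteration(A, [1.0, 1.0, 1.0])
print(lam)   # the leading eigenvalue of this triangular matrix is 1.1
```

A growth factor above one per step flags instability; the memory footprint is a few flow-field vectors, versus the dense matrix storage that makes the matrix-forming approach prohibitive for 3D problems.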

Abstract:

This paper reports the studies carried out to develop and calibrate optimal models for the objectives of this work. In particular, a quarter-bogie model for the vehicle, wheel-rail contact with the Lagrange multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D coupled vehicle-track model has also been developed to contrast with the results obtained with the 2D model. The calculations were carried out in the time domain, and envelopes of the relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension. The latter loads are relevant for evaluating the performance of the infrastructure.
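A stripped-down time-domain version of such a quarter-bogie model can be written in a few lines: a wheelset mass on a linearized contact stiffness following the rail elevation, and a bogie mass on the primary suspension. Everything below is a hypothetical illustration (invented parameters, a sinusoidal irregularity, linearized contact instead of the paper's Lagrange-multiplier formulation).

```python
import math

# Minimal 2-DOF quarter-bogie model integrated in the time domain.
# m1: wheelset on a linearized contact stiffness kc following the rail
# elevation z(t); m2: half-bogie on the primary suspension (k, c).
# All parameter values are hypothetical.

m1, m2 = 1000.0, 2500.0          # wheelset / half-bogie mass [kg]
kc, k, c = 1.0e8, 1.0e6, 2.0e4   # contact and suspension stiffness/damping

def simulate(amp=0.002, freq=5.0, dt=1e-4, t_end=2.0):
    """Return the envelope (peak) primary-suspension force [N]."""
    x1 = v1 = x2 = v2 = 0.0
    peak_force, t = 0.0, 0.0
    while t < t_end:
        z = amp * math.sin(2 * math.pi * freq * t)   # rail irregularity
        f_susp = k * (x1 - x2) + c * (v1 - v2)       # primary suspension
        a1 = (kc * (z - x1) - f_susp) / m1
        a2 = f_susp / m2
        v1 += a1 * dt; x1 += v1 * dt                 # semi-implicit Euler
        v2 += a2 * dt; x2 += v2 * dt
        peak_force = max(peak_force, abs(f_susp))
        t += dt
    return peak_force

print(simulate())
```

Running the loop over many irregularity profiles and speeds, and keeping the peak values, is the "envelope of relevant results" idea the abstract refers to.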

Abstract:

This project aims to establish how to perform a correct and accurate analysis of SMATV networks (Satellite Master Antenna Television), which form part of the ICT (Common Telecommunications Infrastructure), using the TDA (Time Domain Analysis) method. First, a theoretical study of ICTs and of the foundations of the TDA method is carried out, serving as an introduction to the main topic of the project: using the AWR simulation program to characterize the most suitable signal for quality measurements in SMATV networks with the TDA technique, and to carry out a concise study of these networks. This is achieved through the proper definition of the parameters of the input signal that would be injected into the network in future test measurements. Once a reference signal has been obtained, different devices and elements that make up SMATV networks are characterized in order to verify that measurements made with the TDA method are as valid as those made with vector network analysis (VNA).

Abstract:

Performing three-dimensional pin-by-pin full-core calculations based on an improved solution of the multi-group diffusion equation is an affordable option nowadays for computing accurate local safety parameters for light water reactors. Since an approximation to the transport equation is solved, appropriate correction factors, such as interface discontinuity factors, are required to nearly reproduce the fully heterogeneous transport solution. Calculating exact pin-by-pin discontinuity factors requires knowledge of the heterogeneous neutron flux distribution, which depends on the boundary conditions of the pin-cell as well as on the local variables along the nuclear reactor operation. As a consequence, it is impractical to compute them for each possible configuration; however, inaccurate correction factors are one major source of error in core analysis when using multi-group diffusion theory. An alternative for generating accurate pin-by-pin interface discontinuity factors is to build a functional fit that incorporates the environment dependence into the computed values. This paper suggests a methodology to account for the neighborhood effect based on the Analytic Coarse-Mesh Finite Difference method for the multi-group diffusion equation. It has been applied to both definitions of interface discontinuity factors, the one based on the Generalized Equivalence Theory and the one based on Black-Box Homogenization, and for different few-group energy structures. Conclusions are drawn regarding the optimal functional fit, and demonstrative results are obtained with the multi-group pin-by-pin diffusion code COBAYA3 for representative PWR configurations.
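The functional-fitting idea can be illustrated with the simplest possible case: an ordinary least-squares fit of a discontinuity factor against a single environment descriptor. The descriptor, data points and linear form below are all hypothetical; the paper studies which functional form and which environment variables are optimal.

```python
# Sketch of a functional fit for interface discontinuity factors:
# fit f(x) = a + b*x by ordinary least squares, where x is some
# environment descriptor of the neighbouring node and f the reference
# discontinuity factor. All data points are hypothetical.

def linear_fit(xs, ys):
    """Return (a, b) minimizing sum((a + b*x - y)^2)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]        # environment descriptor values
ys = [1.00, 1.02, 1.04, 1.06, 1.08]   # reference discontinuity factors
a, b = linear_fit(xs, ys)
predict = lambda x: a + b * x          # fitted factor for a new environment
print(a, b, predict(1.2))
```

Once such a fit is tabulated offline, the core solver can evaluate environment-corrected discontinuity factors at run time instead of storing one factor per possible configuration.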

Abstract:

In this chapter, we describe the main features as well as the basic steps of the Boundary Element Method (BEM) as applied to elastostatic problems, and compare them with other numerical procedures. As we shall show, it is easy to appreciate the advantages of the BEM, but it is also advisable to refrain from a possible unrestrained enthusiasm, as there are limitations to its usefulness in certain types of problems. Nevertheless, the number of problems to which it can usefully be applied is sufficient to justify the interest and activity that the new procedure has aroused among researchers all over the world. Briefly speaking, the most frequently used version of the BEM as applied to elastostatics works with the fundamental solution, i.e. the singular solution of the governing equations, as an influence function, and tries to satisfy the boundary conditions of the problem with the aid of a discretization scheme which consists exclusively of boundary elements. As in other numerical methods, the BEM was developed thanks to the computational possibilities offered by modern computers, on a totally "classical" basis. That is, the theoretical grounds are based on linear elasticity theory, incorporated long ago into the curricula of most engineering schools. Its delay in gaining popularity is probably due to the enormous momentum with which the Finite Element Method (FEM) penetrated the professional and academic media. Nevertheless, the fact that these methods were developed before the BEM has been beneficial, because the BEM successfully uses the results and techniques studied in past decades. Some authors even consider the BEM as a particular case of the FEM, while others view both methods as special cases of the general weighted-residual technique.
The first paper usually cited in connection with the BEM as applied to elastostatics is that of Rizzo, even though the works of Jaswon et al., Massonet and Oliveira were published at about the same time, the reason probably being the attractiveness of the "direct" approach over the "indirect" one. The work of Rizzo and the subsequent work of Cruse initiated a fruitful period with applications of the direct BEM to problems of elastostatics, elastodynamics, fracture, etc. The next key contribution was that of Lachat and Watson, incorporating all the FEM discretization philosophy in what is sometimes called the "second BEM generation". This has, no doubt, led directly to the current developments. Among the various researchers who worked on elastostatics employing the direct BEM, one can additionally mention Rizzo and Shippy, Cruse et al., Lachat and Watson, Alarcón et al., Brebbia et al., Howell and Doyle, Kuhn and Möhrmann, and Patterson and Sheikh; among those who used the indirect BEM, one can additionally mention Benjumea and Sikarskie, Butterfield, Banerjee et al., Niwa et al., and Altiero and Gavazza. An interesting version of the indirect method, called the Displacement Discontinuity Method (DDM), has been developed by Crouch. A comprehensive study of various special aspects of the elastostatic BEM has been carried out by Heise, while review-type articles on the subject have been written by Watson and Hartmann. At the present time, the method is well established and is being used for the solution of a variety of problems in engineering mechanics. Numerous introductory and advanced books have been published, as well as research-oriented ones. In this sense, it is worth noting the series of conferences promoted by Brebbia since 1978, which have provoked a continuous research effort all over the world in relation to the BEM. In the following sections, we shall concentrate on developing the direct BEM as applied to elastostatics.

Abstract:

This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates all the modal parameters reasonably well even in the presence of 30% measurement noise. Finally, the advantages and disadvantages of the method are discussed.

Abstract:

During the last two decades the topic of human-induced vibration has attracted a lot of attention among civil engineering practitioners and academics alike. This type of problem is usually encountered in pedestrian footbridges or the floors of paperless offices. Slender designs are becoming increasingly popular, and as a consequence, the importance of paying attention to vibration serviceability also increases. This paper summarizes the results obtained from measurements taken at different points of an aluminium catwalk 6 m in length by 0.6 m in width. Measurements were carried out while subjecting the structure to different actions: 1) a static test, in which a steel cylinder of 35 kg was placed in the middle of the catwalk; 2) a dynamic test, in which the structure was excited with single impulses; and 3) a dynamic test with people walking on the catwalk. Identification of the mechanical properties of the structure is an achievement of the paper. Indirect methods were used to estimate properties including the support stiffness, the beam bending stiffness, the mass of the structure (using the Rayleigh method and an iterative matrix method), the natural frequency (using time domain and frequency domain analysis) and the damping ratio (by calculating the logarithmic decrement). Experimental results and numerical predictions for the response of the aluminium catwalk subjected to walking loads have been compared. The damping of this lightweight structure depends on the amplitude of vibration, which complicates the tuning of a structural model. In the light of the results obtained, it seems that the walking load model used is not appropriate, as the predicted transient vibration values (TVVs) are much higher than the measured ones.
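The logarithmic-decrement estimate of the damping ratio mentioned above is a two-line calculation: delta = (1/n) ln(x0/xn) over n cycles of free decay, and zeta = delta / sqrt(4 pi^2 + delta^2). The peak amplitudes in the sketch are hypothetical, not the catwalk measurements.

```python
import math

# Damping identification by logarithmic decrement from a free-decay record.
# Peak amplitudes below are hypothetical.

def log_decrement(x0, xn, n_cycles):
    """delta = (1/n) * ln(x0 / xn) from peaks n cycles apart."""
    return math.log(x0 / xn) / n_cycles

def damping_ratio(delta):
    """zeta = delta / sqrt(4*pi^2 + delta^2)."""
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

delta = log_decrement(x0=1.20, xn=0.40, n_cycles=5)   # peaks 5 cycles apart
zeta = damping_ratio(delta)
print(delta, zeta)
```

Because the paper finds that damping depends on vibration amplitude, one would repeat this estimate over different portions of the decay (large vs small peaks) rather than assume a single value.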

Abstract:

An evolution of the finite difference method has been the development of the generalized finite difference (GFD) method, which can be applied to irregular grids or clouds of points. In this method a Taylor series expansion is used together with a moving least squares (MLS) approximation; the explicit difference formulae for irregular clouds of points can then be easily obtained using a simple Cholesky decomposition. The MLS-GFD method is a mesh-free method employing only points. One contribution of this Thesis is the application of the MLS-GFDM to the modelling of elliptic anisotropic electrical conductivity problems, including the case of real tissues in which the fiber direction is not fixed but varies throughout the tissue. This Thesis also presents the extension of the generalized finite difference method to the explicit solution of anisotropic parabolic equations. The explicit method includes a stability limit, formulated for the case of irregular clouds of nodes, that can be easily calculated. In addition, a new analytical solution for a homogeneous anisotropic parabolic equation is presented, and the explicit MLS-GFDM is applied to anisotropic parabolic electrical conductivity problems. The evident difficulty of performing direct measurements in electrocardiology has motivated wide interest in the numerical simulation of cardiac models. The main contribution of this Thesis is the application of an explicit scheme based on the MLS-GFDM, together with operator splitting, to the modelling of monodomain electrical conductivity problems, including the case of anisotropic real tissues. We present a highly efficient, accurate and conditionally stable algorithm to solve the monodomain model, which describes the electrical activity of the heart. The model consists of an anisotropic parabolic partial differential equation (PDE) coupled to a system of ordinary differential equations (ODEs) describing the electrochemical reactions in the cardiac cells. The resulting system is challenging to solve numerically because of its complexity. We propose a method based on operator splitting and a meshless method for solving the PDE, together with a Runge-Kutta method for solving the system of ODEs for the membrane and ionic currents.
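The operator-splitting structure described in the abstract, a diffusion (PDE) substep followed by a reaction (ODE) substep within each time step, can be sketched on a monodomain-like toy problem. For illustration the meshless GFDM spatial operator is replaced by a regular-grid finite difference and the ionic model by a simple cubic reaction; all parameters are hypothetical.

```python
# Operator-splitting sketch for a monodomain-style reaction-diffusion
# problem: per time step, advance the diffusion part of the PDE, then
# integrate the local reaction ODE at each node (one RK2 step standing in
# for the Runge-Kutta ionic-model solve). Parameters are hypothetical.

def diffusion_step(u, D, dx, dt):
    """Explicit central-difference diffusion substep (endpoints fixed)."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + dt * D * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx**2
    return new

def reaction_step(u, dt):
    """One RK2 (midpoint) step of du/dt = u*(1-u)*(u-0.1) per node,
    a bistable cubic standing in for the ionic-current ODE system."""
    f = lambda v: v * (1.0 - v) * (v - 0.1)
    return [v + dt * f(v + 0.5 * dt * f(v)) for v in u]

u = [1.0] * 5 + [0.0] * 45       # "stimulated" region on the left
for _ in range(400):
    u = diffusion_step(u, D=0.1, dx=0.5, dt=0.05)
    u = reaction_step(u, dt=0.05)
print(u[10], u[40])              # an excitation front propagates rightward
```

Splitting lets each substep use the method best suited to it: an explicit spatial operator with an easily computed stability limit for the diffusion, and a Runge-Kutta integrator for the stiff local reactions, exactly the division of labour proposed in the Thesis.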