938 results for Digital Mathematical Library


Relevance: 30.00%

Abstract:

Mathematical models have provided key insights into the pathogenesis of hepatitis C virus (HCV) in vivo, suggested predominant mechanism(s) of drug action, explained confounding patterns of viral load changes in HCV infected patients undergoing therapy, and presented a framework for therapy optimization. In this article, I present an overview of the major advances in the mathematical modeling of HCV dynamics.
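The canonical framework behind this line of work is the three-compartment model of Neumann et al. (1998): target cells T, infected cells I, and free virus V, with therapy blocking a fraction of virion production. A minimal Euler-integrated sketch, with illustrative rather than fitted parameter values:

```python
import numpy as np

# Standard three-compartment HCV dynamics (Neumann et al. 1998):
# target cells T, infected cells I, free virus V; therapy blocks a
# fraction eps of virion production. Parameter values are illustrative.
s, d = 1e4, 0.01               # target-cell supply and death (per day)
beta = 2e-7                    # de novo infection rate
delta, p, c = 0.2, 10.0, 6.0   # infected-cell death, virion production, clearance
eps = 0.9                      # treatment efficacy

def simulate(days=14.0, dt=1e-3):
    # start from the pre-treatment steady state
    T = c * delta / (beta * p)
    I = (s - d * T) / delta
    V = p * I / c
    Vs = np.empty(int(days / dt))
    for k in range(Vs.size):
        Vs[k] = V
        dT = s - d * T - beta * V * T
        dI = beta * V * T - delta * I
        dV = (1.0 - eps) * p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    return Vs
```

With eps close to 1, this sketch reproduces the biphasic viral-load decline referred to above: a fast first phase set by the clearance rate c and a slower second phase set by the infected-cell death rate delta.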

Relevance: 30.00%

Abstract:

Interaction between the hepatitis C virus (HCV) envelope protein E2 and the host receptor CD81 is essential for HCV entry into target cells. The number of E2-CD81 complexes necessary for HCV entry has remained difficult to estimate experimentally. Using the recently developed cell culture systems that allow persistent HCV infection in vitro, the dependence of HCV entry and kinetics on CD81 expression has been measured. We reasoned that analysis of the latter experiments using a mathematical model of viral kinetics may yield estimates of the number of E2-CD81 complexes necessary for HCV entry. Here, we constructed a mathematical model of HCV viral kinetics in vitro, in which we accounted explicitly for the dependence of HCV entry on CD81 expression. Model predictions of viral kinetics are in quantitative agreement with experimental observations. Specifically, our model predicts triphasic viral kinetics in vitro, where the first phase is characterized by cell proliferation, the second by the infection of susceptible cells and the third by the growth of cells refractory to infection. By fitting model predictions to the above data, we were able to estimate the threshold number of E2-CD81 complexes necessary for HCV entry into human hepatoma-derived cells. We found that depending on the E2-CD81 binding affinity, between 1 and 13 E2-CD81 complexes are necessary for HCV entry. With this estimate, our model captured data from independent experiments that employed different HCV clones and cells with distinct CD81 expression levels, indicating that the estimate is robust. Our study thus quantifies the molecular requirements of HCV entry and suggests guidelines for intervention strategies that target the E2-CD81 interaction. Further, our model presents a framework for quantitative analyses of cell culture studies now extensively employed to investigate HCV infection.
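The threshold idea can be illustrated (this is an illustration, not the paper's actual kinetic model) by letting the number of E2-CD81 complexes formed on a cell be Poisson-distributed with mean proportional to CD81 expression, so that entry requires at least m complexes:

```python
import math

def p_entry(cd81, m, a=1.0):
    """P(at least m E2-CD81 complexes form), with the complex count
    Poisson-distributed and its mean a*cd81 proportional to CD81
    expression. The scale a and the Poisson assumption are illustrative."""
    lam = a * cd81
    # P(X >= m) = 1 - sum_{k < m} e^{-lam} lam^k / k!
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(m))
```

The entry-versus-expression curve steepens as the threshold m grows, which is what makes m identifiable from the CD81-titration experiments described above.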

Relevance: 30.00%

Abstract:

Ergonomic design of products demands accurate human dimensions, i.e., anthropometric data. Manual measurement of live subjects has several limitations: it is time-consuming, it requires the presence of the subjects for every new measurement, it involves physical contact, etc. Hence the data currently available are limited, and anthropometric data related to facial features are especially difficult to obtain. In this paper, we discuss a methodology to automatically detect facial features and landmarks from scanned human head models. Segmentation of the face into meaningful patches corresponding to facial features is achieved with watershed algorithms and mathematical morphology tools. Many important physiognomical landmarks are identified heuristically.
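The core of the segmentation step, a seeded watershed, fits in a few lines as a priority-flood sketch; this is a simplified variant for illustration, not the authors' implementation:

```python
import heapq
import numpy as np

def watershed(elev, markers):
    # Minimal seeded (priority-flood) watershed: labelled seeds grow
    # outward in order of increasing elevation until every pixel is
    # claimed by some catchment basin.
    lab = markers.copy()
    h, w = elev.shape
    heap = [(int(elev[i, j]), i, j)
            for i in range(h) for j in range(w) if lab[i, j]]
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w and lab[ni, nj] == 0:
                lab[ni, nj] = lab[i, j]
                heapq.heappush(heap, (int(elev[ni, nj]), ni, nj))
    return lab

# Two "valleys" (stand-ins for facial patches) separated by a ridge at
# column 3, with one seed marker per patch:
elev = np.array([[3 - abs(j - 3) for j in range(7)] for _ in range(5)])
markers = np.zeros((5, 7), dtype=int)
markers[2, 0], markers[2, 6] = 1, 2
lab = watershed(elev, markers)
```

Each basin floods up to the ridge, so the labels partition the surface into patches around the seeds.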

Relevance: 30.00%

Abstract:

We present a comprehensive numerical study of spiral- and scroll-wave dynamics in a state-of-the-art mathematical model for human ventricular tissue with fiber rotation, transmural heterogeneity, myocytes, and fibroblasts. Our mathematical model introduces fibroblasts randomly, to mimic diffuse fibrosis, in the ten Tusscher-Noble-Noble-Panfilov (TNNP) model for human ventricular tissue; the passive fibroblasts in our model do not exhibit an action potential in the absence of coupling with myocytes; and we allow for a coupling between nearby myocytes and fibroblasts. Our study of a single myocyte-fibroblast (MF) composite, with a single myocyte coupled to N-f fibroblasts via a gap-junctional conductance G(gap), reveals five qualitatively different responses for this composite. Our investigations of two-dimensional domains with a random distribution of fibroblasts in a myocyte background reveal that, as the percentage P-f of fibroblasts increases, the conduction velocity of a plane wave decreases until there is conduction failure. If we consider spiral-wave dynamics in such a medium we find, in two dimensions, a variety of nonequilibrium states (temporally periodic, quasiperiodic, chaotic, and quiescent) and an intricate sequence of transitions between them; we also study the analogous sequence of transitions for three-dimensional scroll waves in a three-dimensional version of our mathematical model that includes both fiber rotation and transmural heterogeneity. We thus elucidate random-fibrosis-induced nonequilibrium transitions, which lead to conduction block for spiral waves in two dimensions and scroll waves in three dimensions. We explore possible experimental implications of our mathematical and numerical studies for plane-, spiral-, and scroll-wave dynamics in cardiac tissue with fibrosis.
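The headline effect, plane-wave conduction slowing with fibroblast percentage up to conduction failure, can be reproduced in a far simpler setting than the TNNP model: a 1D cable of Barkley-type excitable cells in which a random fraction p_f of sites carries an extra passive, fibroblast-like load current. All parameters here are illustrative choices, not values from the study:

```python
import numpy as np

def arrival_time(p_f, g_f=1.0, n=200, dx=0.5, dt=0.01, seed=0):
    # Barkley-type excitable cable as a stand-in for the TNNP tissue
    # model; a random fraction p_f of sites carries a passive load
    # current g_f*u mimicking fibroblast coupling.
    a, b, eps, D = 0.75, 0.06, 0.05, 1.0
    rng = np.random.default_rng(seed)
    load = (rng.random(n) < p_f) * g_f
    u = np.zeros(n)
    v = np.zeros(n)
    u[:5] = 1.0                                  # stimulate the left end
    for step in range(20000):
        lap = np.empty(n)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = (u[1] - u[0]) / dx**2           # no-flux boundaries
        lap[-1] = (u[-2] - u[-1]) / dx**2
        du = u * (1.0 - u) * (u - (v + b) / a) / eps + D * lap - load * u
        u = u + dt * du
        v = v + dt * (u - v)
        if u[-1] > 0.5:
            return step * dt                     # front reached the far end
    return np.inf                                # conduction failure
```

The arrival time grows with p_f, and for a large enough fibroblast fraction the function returns inf, the analogue of the conduction failure described in the abstract.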

Relevance: 30.00%

Abstract:

Cardiac fibroblasts, when coupled functionally with myocytes, can modulate the electrophysiological properties of cardiac tissue. We present systematic numerical studies of such modulation of electrophysiological properties in mathematical models for (a) single myocyte-fibroblast (MF) units and (b) two-dimensional (2D) arrays of such units; our models build on earlier ones and allow for zero-, one-, and two-sided MF couplings. Our studies of MF units elucidate the dependence of the action-potential (AP) morphology on parameters such as E-f, the fibroblast resting-membrane potential, the fibroblast conductance G(f), and the MF gap-junctional coupling G(gap). Furthermore, we find that our MF composite can show autorhythmic and oscillatory behaviors in addition to an excitable response. Our 2D studies use (a) both homogeneous and inhomogeneous distributions of fibroblasts, (b) various ranges for parameters such as G(gap), G(f), and E-f, and (c) zero-sided, one-sided, and two-sided connections of fibroblasts with myocytes. We show, in particular, that the plane-wave conduction velocity CV decreases as a function of G(gap) for zero-sided and one-sided couplings; however, for two-sided coupling, CV decreases initially and then increases as a function of G(gap), and, eventually, we observe that conduction failure occurs for low values of G(gap). In our homogeneous studies, we find that the rotation speed and stability of a spiral wave can be controlled through either G(gap) or E-f. Our studies with fibroblast inhomogeneities show that a spiral wave can get anchored to a local fibroblast inhomogeneity. We also study the efficacy of a low-amplitude control scheme, which has been suggested for the control of spiral-wave turbulence in mathematical models for cardiac tissue, in our MF model both with and without heterogeneities.
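A single MF unit of this kind can be sketched with a FitzHugh-Nagumo myocyte coupled to n_f identical passive fibroblasts; this is a toy stand-in for the ionic models used in the study, with E_f, G_f, and G_gap keeping the roles named above and all values illustrative:

```python
import numpy as np

def resting_vm(n_f=2, g_gap=0.0, g_f=0.1, e_f=-0.2, t_end=400.0, dt=0.01):
    # FitzHugh-Nagumo myocyte (v, w) coupled via g_gap to n_f identical
    # passive fibroblasts (vf) with leak conductance g_f toward the
    # fibroblast resting potential e_f; returns the settled myocyte
    # membrane variable.
    v, w, vf = -1.2, -0.6, e_f
    for _ in range(int(t_end / dt)):
        i_gap = g_gap * (v - vf)          # current into each fibroblast
        dv = v - v**3 / 3 - w - n_f * i_gap
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        dvf = -g_f * (vf - e_f) + i_gap
        v, w, vf = v + dt * dv, w + dt * dw, vf + dt * dvf
    return v
```

Because the fibroblasts rest at a less polarized potential than the myocyte, increasing g_gap pulls the myocyte resting value toward e_f, the simplest instance of the E-f and G(gap) dependence studied above.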

Relevance: 30.00%

Abstract:

Early afterdepolarizations (EADs), which are abnormal oscillations of the membrane potential at the plateau phase of an action potential, are implicated in the development of cardiac arrhythmias like Torsade de Pointes. We carry out extensive numerical simulations of the TP06 and ORd mathematical models for human ventricular cells with EADs. We investigate the different regimes in both these models, namely, the parameter regimes where they exhibit (1) a normal action potential (AP) with no EADs, (2) an AP with EADs, and (3) an AP with EADs that does not return to the resting potential. We also study the dependence of EADs on the rate at which we pace a cell, with the specific goal of elucidating EADs that are induced by slow- or fast-rate pacing. In our simulations in two- and three-dimensional domains, in the presence of EADs, we find the following wave types: (A) waves driven by the fast sodium current and the L-type calcium current (Na-Ca-mediated waves); (B) waves driven only by the L-type calcium current (Ca-mediated waves); (C) phase waves, which are pseudo-travelling waves. Furthermore, we compare the wave patterns of the various wave types (Na-Ca-mediated, Ca-mediated, and phase waves) in both these models. We find that the two models produce qualitatively similar results in terms of exhibiting Na-Ca-mediated wave patterns that are more chaotic than those for the Ca-mediated and phase waves. However, there are quantitative differences in the wave patterns of each wave type. The Na-Ca-mediated waves in the ORd model show short-lived spirals but the TP06 model does not. The TP06 model supports more Ca-mediated spirals than the ORd model, and it exhibits more phase-wave patterns than does the ORd model.

Relevance: 30.00%

Abstract:

This study examines the suitability of storing the Diário da Câmara dos Deputados (DCD) in the Digital Library of the Câmara dos Deputados, with a view to retrieving its informational content. The literature review highlights the importance of the DCD as the official source of publicity for parliamentary activities, and grounds the Digital Library as an institutional repository and an important tool to support the organization and retrieval of bibliographic information at the Câmara dos Deputados. The general objective of the study was to propose a set of physical and subject description metadata as requirements for the organization and representation of the informational contents of the DCD. The methodology employed allowed, beyond the theoretical grounding, an analysis of the information systems of the Legislative House and of the contents of the Diário, and, through a questionnaire, an assessment of how the users of the Coordenação de Relacionamento, Pesquisa e Informação view the retrieval of the Diário's information from the corporate and local information systems. In conclusion, it was found that the great majority of the information published in the DCD can be retrieved from several information systems, which, however, does not guarantee the quality and timeliness of that retrieval.

Relevance: 30.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
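The circuit claim can be checked numerically in a toy setting: model the node voltages of a resistor chain as a Gaussian Markov random field whose precision (inverse covariance) matrix is the conductance Laplacian plus a leak term, and the sparsity pattern of the estimated inverse covariance recovers the branch topology. All values here are illustrative, and this uses plain matrix inversion rather than the graphical lasso:

```python
import numpy as np

# 5-node resistor chain: build the conductance Laplacian L, add a leak
# to ground so the precision matrix Q is invertible, sample voltages
# from N(0, Q^-1), and read the edges off the estimated precision.
rng = np.random.default_rng(0)
n = 5
L = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:    # unit-conductance branches
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
Q = L + np.eye(n)                                # leak term -> invertible

samples = rng.multivariate_normal(np.zeros(n), np.linalg.inv(Q), size=50000)
Q_hat = np.linalg.inv(np.cov(samples, rowvar=False))

edges = {(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(Q_hat[i, j]) > 0.5}              # threshold small entries
```

The off-diagonal support of Q_hat matches the resistor branches; the graphical lasso step in the dissertation addresses exactly the regime where far fewer samples are available and plain inversion becomes unreliable.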

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
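The classical fluid model being refined here is, in its simplest single-link form, a primal rate update against a congestion price; the sketch below makes the usual simplifying assumption that the link sees the source rate directly, which is exactly what the buffering-aware model in the thesis removes (all values illustrative):

```python
# Kelly-style primal congestion controller on a single link:
# rate update x <- x + k*(w - x*p(x)), with link price p(x) = x/C.
# Queueing/buffering effects are deliberately ignored here.
def equilibrium_rate(w=2.0, C=8.0, k=0.1, steps=2000):
    x = 0.1
    for _ in range(steps):
        price = x / C            # price grows with link utilization
        x += k * (w - x * price)
    return x
```

The update settles at the point where the willingness-to-pay w balances the congestion charge, i.e., w = x*p(x), giving x* = sqrt(w*C).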

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
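The nonconvexity enters through the AC power-flow equations; in the linearized DC approximation the problem is already a linear program, which the following 2-bus toy (illustrative numbers, not the dissertation's relaxation) makes concrete:

```python
from scipy.optimize import linprog

# DC (linearized) OPF on a 2-bus system: cheap generator g1 at bus 1,
# expensive generator g2 at bus 2, a 10 MW load at bus 2, and a 6 MW
# line between the buses. In the DC approximation the line flow here
# equals g1, so the line limit becomes a bound on g1.
res = linprog(
    c=[1.0, 3.0],                       # $/MW for g1, g2
    A_eq=[[1.0, 1.0]], b_eq=[10.0],     # total generation = load
    bounds=[(0.0, 6.0),                 # g1 capped by the 6 MW line
            (0.0, None)],
)
g1, g2 = res.x
```

The cheap generator is dispatched up to the line limit and the remainder is served locally; in the AC problem the balance constraints become nonconvex quadratics, which is what the proposed convex relaxation (with phase shifters and power over-delivery) handles.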

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 30.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
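The role of the SVD in these frameworks is easy to state concretely: precoding with V and equalizing with U^H turns y = Hx + n into parallel scalar subchannels whose gains are the singular values of H:

```python
import numpy as np

# SVD transceiver for a flat MIMO channel H = U diag(s) V^H:
# precode the data with V at the transmitter and apply U^H at the
# receiver; the effective channel is exactly diagonal.
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(H)
V = Vh.conj().T
H_eff = U.conj().T @ H @ V       # equals diag(s)
```

The GMD and GTD mentioned above generalize this step: instead of a diagonal factor with unequal gains, they produce triangular factors whose diagonals can be equalized or shaped.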

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at subchannels.
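The defining property of the GMD, a triangular factor whose diagonal entries all equal the geometric mean of the singular values, can be verified directly in the 2x2 case, following the rotation construction of Jiang, Hager and Li:

```python
import numpy as np

def gmd_2x2(A):
    # 2x2 geometric mean decomposition A = Q R P^T: R is upper
    # triangular with both diagonal entries equal to the geometric
    # mean of the singular values of A.
    U, s, Vt = np.linalg.svd(A)
    s1, s2 = s
    sbar = np.sqrt(s1 * s2)
    if np.isclose(s1, s2):
        return U, np.diag(s), Vt.T
    c = np.sqrt((sbar**2 - s2**2) / (s1**2 - s2**2))
    t = np.sqrt(1.0 - c**2)
    G2 = np.array([[c, -t], [t, c]])              # right Givens rotation
    B = np.diag(s) @ G2
    r = np.hypot(B[0, 0], B[1, 0])
    G1 = np.array([[B[0, 0] / r, -B[1, 0] / r],
                   [B[1, 0] / r,  B[0, 0] / r]])  # zeros B[1,0] from the left
    R = G1.T @ B
    return U @ G1, R, Vt.T @ G2
```

A short algebra check explains the result: the right rotation is chosen so that the first column of G1^T Sigma G2 has norm sbar, and since determinants are preserved the second diagonal entry must also be sbar.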

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderate high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the super-imposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for the orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, a MMSE estimator for frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.

Relevance: 30.00%

Abstract:

Fuzzification is introduced into gray-scale mathematical morphology by using two-input one-output fuzzy rule-based inference systems. The fuzzy inferring dilation or erosion is defined from the approximate reasoning over the two consequences of a dilation or an erosion and an extended rank-order operation. Fuzzy inference systems with many rules and fuzzy membership functions are further reduced to a simple fuzzy system formulated by only an exponential two-input one-output function. Such a one-function fuzzy inference system is able to approximate complex fuzzy inference systems by using two specified parameters within it: a proportion to characterize the fuzzy degree and an exponent to depict the nonlinearity of the inference. The proposed fuzzy inferring morphological operators tend to keep object details comparable to the structuring element and to smooth the conventional morphological operations. Based on digital area coding of a gray-scale image, incoherent optical correlation for neighboring connection, and optical thresholding for rank-order operations, a fuzzy inference system can be realized optically in parallel. (C) 1996 Society of Photo-Optical Instrumentation Engineers.
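A hedged sketch of the fuzzification idea: replacing the crisp max of gray-scale dilation by a log-sum-exp (an exponential soft maximum) yields a one-parameter fuzzy rank-order operation. This illustrates the principle of an exponential soft rank-order operator, not the paper's exact inference function:

```python
import numpy as np

def windows3(img):
    # stack the 3x3 neighbourhood of every interior pixel along axis -1
    return np.stack([img[i:i + img.shape[0] - 2, j:j + img.shape[1] - 2]
                     for i in range(3) for j in range(3)], axis=-1)

def soft_dilate(img, beta=5.0):
    """Fuzzy (soft) gray-scale dilation with a flat 3x3 structuring
    element: log-sum-exp over each window. As beta -> inf this
    recovers the crisp max, i.e., ordinary dilation; small beta
    smooths the operation, as described for the fuzzy operators."""
    w = windows3(img.astype(float))
    return np.log(np.exp(beta * w).sum(axis=-1)) / beta
```

The soft result always lies between the crisp maximum and the maximum plus log(9)/beta, so the parameter beta plays the role of a fuzziness/nonlinearity knob.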

Relevance: 30.00%

Abstract:

A cascaded Fresnel digital hologram (CFDH) is proposed, together with its mathematical derivation. Its application to watermarking has been demonstrated by a simulation procedure, in which the watermark image to be hidden is encoded into the phase of the host image. The watermark image can be deciphered by the CFDH setup; the reconstructed image shows good quality and the error is almost zero. Compared with previous techniques, this is a lensless architecture that minimizes the hardware requirements, and it can be used for the encryption of digital images.

Relevance: 30.00%

Abstract:

A cascaded Fresnel digital hologram (CFDH) is proposed, together with its mathematical derivation. Its application to watermarking has been demonstrated by a simulation procedure, in which the watermark image to be hidden is encoded into the phase of the host image. The watermark image can be deciphered by the CFDH setup; the reconstructed image shows good quality and the error is almost zero. Compared with previous techniques, this is a lensless architecture that minimizes the hardware requirements. (c) 2006 Elsevier GmbH. All rights reserved.

Relevance: 30.00%

Abstract:

The experimental portion of this thesis estimates the power spectral density of very-low-frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts: it is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, though α appears to be about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain the aforementioned data is included.

The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data-processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are its mathematical implications for the statistical properties of the noise? Finally, the variance of the spectral estimate obtained through the Blackman/Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
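The estimate-and-average procedure can be mimicked numerically: shape white noise to a 1/f^1.3 spectrum, average the periodograms of 10 independent "sources" (mirroring the time-multiplexed averaging above), and fit the log-log slope. A plain periodogram stands in here for the Blackman-Tukey estimator:

```python
import numpy as np

def estimate_alpha(alpha=1.3, n=2**14, sources=10, seed=0):
    # Synthesize 1/f^alpha noise by spectral shaping, then recover
    # alpha from the slope of the averaged periodogram on log-log axes.
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=1.0)[1:]        # skip the DC bin
    psd = np.zeros_like(f)
    for _ in range(sources):
        spec = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)
        spec *= f ** (-alpha / 2.0)          # |X(f)|^2 ~ 1/f^alpha
        x = np.fft.irfft(np.concatenate(([0], spec)), n)   # time series
        psd += np.abs(np.fft.rfft(x)[1:]) ** 2             # periodogram
    slope, _ = np.polyfit(np.log(f), np.log(psd / sources), 1)
    return -slope
```

Averaging the 10 periodograms reduces the scatter of each log-spectral point, which is the same variance-reduction argument the thesis makes for time-multiplexing 10 physical noise sources.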

Relevance: 30.00%

Abstract:

Constitutive modeling in granular materials has historically been based on macroscopic experimental observations that, while usually effective at predicting the bulk behavior of these types of materials, suffer important limitations when it comes to understanding the physics behind the grain-to-grain interactions that cause the material to behave macroscopically in a given way when subjected to certain boundary conditions.

The advent of the discrete element method (DEM) in the late 1970s helped scientists and engineers to gain a deeper insight into some of the most fundamental mechanisms at the grain scale. However, one of the most critical limitations of classical DEM schemes has been their inability to account for complex grain morphologies. Instead, simplified geometries such as discs, spheres, and polyhedra have typically been used. Fortunately, in the last fifteen years, there has been increasing development of new computational as well as experimental techniques, such as non-uniform rational basis splines (NURBS) and 3D X-ray Computed Tomography (3DXRCT), which are contributing to the creation of new tools that enable the inclusion of complex grain morphologies in DEM schemes.

Yet, as the scientific community is still developing these new tools, a gap remains in thoroughly understanding the physical relations connecting the grain and continuum scales, as well as in the development of discrete techniques that can predict the emergent behavior of granular materials without resorting to phenomenology, but rather can directly unravel the micro-mechanical origin of macroscopic behavior.

In order to contribute towards closing the aforementioned gap, we have developed a micro-mechanical analysis of macroscopic peak strength, critical state, and residual strength in two-dimensional non-cohesive granular media, where typical continuum constitutive quantities such as frictional strength and dilation angle are explicitly related to their corresponding grain-scale counterparts (e.g., inter-particle contact forces, fabric, particle displacements, and velocities), providing an across-the-scale basis for better understanding and modeling granular media.

In the same way, we utilize a new DEM scheme (LS-DEM) that takes advantage of a mathematical technique called level set (LS) to enable the inclusion of real grain shapes into a classical discrete element method. After calibrating LS-DEM with respect to real experimental results, we exploit part of its potential to study the dependency of critical state (CS) parameters such as the critical state line (CSL) slope, CSL intercept, and CS friction angle on the grain's morphology, i.e., sphericity, roundness, and regularity.
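The core LS-DEM representation can be sketched for circular grains, where the signed distance is analytic: each grain stores a level set plus boundary nodes, and contact is detected by evaluating one grain's nodes in the other grain's level set. This is a minimal illustration of the representation, not the calibrated LS-DEM implementation:

```python
import numpy as np

class Grain:
    # A grain as a level set (signed distance, negative inside) plus a
    # ring of discrete boundary nodes, the two ingredients of LS-DEM.
    def __init__(self, center, radius, n_nodes=64):
        self.c = np.asarray(center, dtype=float)
        self.r = radius
        ang = np.linspace(0.0, 2.0 * np.pi, n_nodes, endpoint=False)
        self.nodes = self.c + radius * np.c_[np.cos(ang), np.sin(ang)]

    def phi(self, pts):
        # analytic signed distance of a circular grain
        return np.linalg.norm(pts - self.c, axis=-1) - self.r

def penetration(a, b):
    """Maximum penetration depth of grain a's boundary nodes into
    grain b's level set (0.0 means no contact)."""
    d = b.phi(a.nodes)
    return max(0.0, float(-d.min()))
```

For an arbitrary scanned grain the analytic phi is replaced by a discretized signed-distance grid, but the node-in-level-set contact query stays exactly the same, which is what makes real morphologies affordable.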

Finally, we introduce a first computational algorithm to "clone" the grain morphologies of a sample of real digital grains. This cloning algorithm allows us to generate an arbitrary number of cloned grains that satisfy the same morphological features (e.g., roundness and aspect ratio) displayed by their real parents and can be included into a DEM simulation of a given mechanical phenomenon. In turn, this will help with the development of discrete techniques that can directly predict the engineering scale behavior of granular media without resorting to phenomenology.

Relevance: 30.00%

Abstract:

Geotechnical engineering is one of the major areas of civil engineering; it studies the interaction of man-made constructions, or of natural phenomena, with the geological environment, which in the great majority of cases consists of partially saturated soils. The performance of works such as stabilization, dam containment, retaining walls, foundations, and roads is therefore conditioned on a correct prediction of the water flow inside the soil. However, since the regions to be studied with respect to water-flow prediction commonly span areas on the order of square kilometers, the solution of the mathematical models requires computational meshes of very large proportions, leading to serious limitations in computational memory and processing time. To overcome these limitations, efficient numerical methods must be employed in the solution of the problem under analysis. Therefore, iterative methods for the solution of large sparse linear and nonlinear systems must be used in this type of application. In view of the relevance of the subject, this research approximated a solution of the Richards partial differential equation by the finite volume method in two dimensions, employing the Picard and Newton methods with improved computational efficiency. To this end, iterative Krylov-subspace techniques for solving linear systems with preconditioning matrices were used, through the numerical library Portable, Extensible Toolkit for Scientific Computation (PETSc). The results indicate that when the Richards equation is solved with the PICARD-KRYLOV approach, regardless of the soil-evaluation model, the best combination for solving the linear systems is the stabilized biconjugate gradient method with the SOR preconditioner. On the other hand, when the van Genuchten equations are used, the combination of the conjugate gradient method with the SOR preconditioner should be preferred. When the NEWTON-KRYLOV approach is adopted, the stabilized biconjugate gradient method is the most efficient for solving the linear system of the Newton step, and as to the preconditioner, block Jacobi should be preferred. Finally, there is evidence that the PICARD-KRYLOV method can be more advantageous than the NEWTON-KRYLOV method when they are employed to solve the Richards partial differential equation.
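The linear-solver combination singled out above, stabilized biconjugate gradients with an SOR-type preconditioner, can be reproduced with SciPy on a Poisson-type matrix standing in for one linearized (Picard or Newton) step of the Richards equation; PETSc's SOR is approximated here by a symmetric Gauss-Seidel sweep (SSOR with omega = 1):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab, spsolve_triangular

# 2D Laplacian on a 30x30 grid as a stand-in for the sparse system of
# one Picard/Newton step of a discretized Richards equation.
n = 30
I = sp.eye(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
b = np.ones(A.shape[0])

D = sp.diags(A.diagonal())
DL = sp.tril(A).tocsr()                  # D + L (lower part with diagonal)
DU = sp.triu(A).tocsr()                  # D + U (upper part with diagonal)

def ssor_apply(r):
    # symmetric Gauss-Seidel: z = (D+U)^-1 D (D+L)^-1 r
    y = spsolve_triangular(DL, r, lower=True)
    return spsolve_triangular(DU, D @ y, lower=False)

M = LinearOperator(A.shape, matvec=ssor_apply)
x, info = bicgstab(A, b, M=M, atol=0.0)  # stabilized biconjugate gradients
```

Swapping `bicgstab` for `cg` gives the conjugate-gradient/SOR pairing recommended above for the van Genuchten case; the block-Jacobi preconditioner mentioned for NEWTON-KRYLOV would replace `ssor_apply` with independent solves on diagonal blocks.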