917 results for PHASE-SPACE APPROACH


Relevance: 30.00%

Abstract:

This paper explores how wikis may be used to support primary education students' collaborative interaction and how such an interaction process can be characterised. The overall aim of this study is to analyse the collaborative processes of students working together in a wiki environment, in order to see how primary students can actively create a shared context for learning in the wiki. Educational literature has already reported that wikis may support collaborative knowledge-construction processes, but in our study we claim that a dialogic perspective is needed to accomplish this: students must develop an intersubjective orientation towards each other's perspectives to co-construct knowledge about a topic. For this purpose, our project utilised a 'Thinking Together' approach to help students develop an intersubjective orientation towards one another and to support the creation of a 'dialogic space' in which to co-construct new understanding in a wiki science project. The students' asynchronous interaction process in a primary classroom -- which led to the creation of a science text in the wiki -- was analysed and characterised using a dialogic approach to the study of CSCL practices. Our results illustrate how the Thinking Together approach became embedded within the wiki environment and in the students' collaborative processes. We argue that a dialogic approach to examining interaction can be used to help design more effective pedagogic approaches related to the use of wikis in education and to equip learners with the competences they need to participate in the global knowledge-construction era.


Background: Average energies of nuclear collective modes may be efficiently and accurately computed using a nonrelativistic constrained approach without reliance on a random phase approximation (RPA). Purpose: To extend the constrained approach to the relativistic domain and to establish its impact on the calibration of energy density functionals. Methods: Relativistic RPA calculations of the giant monopole resonance (GMR) are compared against the predictions of the corresponding constrained approach using two accurately calibrated energy density functionals. Results: We find excellent agreement, at the 2% level or better, between the predictions of the relativistic RPA and the corresponding constrained approach for magic (or semimagic) nuclei ranging from 16O to 208Pb. Conclusions: An efficient and accurate method is proposed for incorporating nuclear collective excitations into the calibration of future energy density functionals.
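As a toy illustration of how an average collective energy follows from sum-rule moments (the quantity a constrained calculation yields without a full RPA), the sketch below evaluates the centroid estimate sqrt(m1/m-1) for an assumed Lorentzian strength distribution. All numbers are illustrative, not taken from the paper.

```python
import numpy as np

# Toy strength distribution: a Lorentzian centered at E0 (MeV), mimicking a GMR peak.
E = np.linspace(5.0, 35.0, 4001)   # excitation energy grid, MeV
E0, gamma = 14.0, 1.0              # assumed peak position and width
S = gamma / ((E - E0)**2 + gamma**2)
dE = E[1] - E[0]

m1  = np.sum(E * S) * dE    # energy-weighted moment m_1
mm1 = np.sum(S / E) * dE    # inverse energy-weighted moment m_{-1}

# Centroid estimate used to compare constrained and RPA results.
E_centroid = float(np.sqrt(m1 / mm1))
print(round(E_centroid, 1))
```

For a narrow, nearly symmetric peak the estimate lands close to the assumed peak energy, which is why the moment-based constrained approach can track the RPA centroid so well.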


Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.
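A minimal sketch of the macroscopic capillary description mentioned above: the Young-Laplace relation fixes the critical radius below which no mechanically stable bubble exists, consistent with the finite threshold sizes found in the paper. The surface tension and pressure difference below are assumed illustrative values, not results from the study.

```python
import math

# Young-Laplace threshold for a bubble in the capillary (macroscopic) description:
# below r_star = 2*sigma/dp there is no mechanically stable solution.
sigma = 0.072      # surface tension, N/m (roughly water at room temperature; assumed)
dp    = 1.0e6      # pressure difference between bubble and bulk liquid, Pa (illustrative)

r_star = 2.0 * sigma / dp                           # critical radius, m
W_star = 16.0 * math.pi * sigma**3 / (3.0 * dp**2)  # classical work of formation, J

print(r_star)  # 1.44e-07
```

The same threshold radius is what the square gradient model reproduces away from the spinodals, despite its far greater microscopic detail.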


Coronary artery magnetic resonance imaging (MRI) has the potential to provide the cardiologist with relevant diagnostic information on the coronary artery disease of patients. The major challenge of cardiac MRI, though, is dealing with all the sources of motion that can corrupt the images, degrading the diagnostic information they provide. This thesis therefore focused on the development of new MRI techniques that change the standard approach to cardiac motion compensation in order to increase the efficiency of cardiovascular MRI, to provide more flexibility and robustness, and to deliver new temporal and tissue information.
The proposed approaches help in advancing coronary magnetic resonance angiography (MRA) in the direction of an easy-to-use and multipurpose tool that can be translated to the clinical environment. The first part of the thesis focused on the study of coronary artery motion through gold-standard imaging techniques (x-ray angiography) in patients, in order to measure the precision with which the coronary arteries assume the same position beat after beat (coronary artery repositioning). We learned that intervals with minimal coronary artery repositioning occur in peak systole and in mid-diastole, and we responded with a new pulse sequence (T2-post) that is able to provide peak-systolic imaging. Such a sequence was tested in healthy volunteers and, from the image-quality comparison, we learned that the proposed approach provides coronary artery visualization and contrast-to-noise ratio (CNR) comparable with the standard acquisition approach, but with increased signal-to-noise ratio (SNR). The second part of the thesis explored a completely new paradigm for whole-heart cardiovascular MRI. The proposed technique acquires the data continuously (free-running), instead of being triggered, thus increasing the efficiency of the acquisition and providing four-dimensional (4D) images of the whole heart, while respiratory self-navigation allows for the scan to be performed in free breathing. This enabling technology allows for anatomical and functional evaluation in four dimensions, with high spatial and temporal resolution and without the need for contrast-agent injection. The enabling step is the use of a golden-angle-based 3D radial trajectory, which allows for a continuous sampling of k-space and a retrospective selection of the timing parameters of the reconstructed dataset.
The free-running 4D acquisition was then combined with a compressed-sensing reconstruction algorithm that further increases the temporal resolution of the 4D dataset, while at the same time increasing the overall image quality by removing undersampling artifacts. The obtained 4D images provide visualization of the whole coronary artery tree in each phase of the cardiac cycle and, at the same time, allow for the assessment of cardiac function with a single free-breathing scan. The quality of the coronary arteries provided by the frames of the free-running 4D acquisition is in line with that obtained with the standard ECG-triggered one, and the cardiac function evaluation matched that measured with gold-standard stacks of 2D cine acquisitions. Finally, the last part of the thesis focused on the development of an ultrashort echo time (UTE) acquisition scheme for in vivo detection of calcification in the coronary arteries. Recent studies showed that UTE imaging allows for the visualization of coronary artery plaque calcification ex vivo, since it is able to detect the short-T2 components of the calcification. The motion of the heart, though, has so far prevented this technique from being applied in vivo. An ECG-triggered, self-navigated, 3D radial, triple-echo UTE acquisition was therefore developed and tested in healthy volunteers. The proposed sequence combines a 3D self-navigation approach with a 3D radial UTE acquisition, enabling data collection during free breathing. Three echoes are simultaneously acquired to extract the short-T2 components of the calcification, while a water-fat separation technique allows for proper visualization of the coronary arteries. Even though the results are still preliminary, the proposed sequence showed great potential for the in vivo visualization of coronary artery calcification. In conclusion, the thesis presents three novel MRI approaches aimed at improved characterization and assessment of atherosclerotic coronary artery disease.
These approaches provide new anatomical and functional information in four dimensions, and support tissue characterization for coronary artery plaques.
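The retrospective ordering enabled by golden-angle sampling can be sketched as follows. The TR, RR interval, and number of reconstructed phases are hypothetical values chosen for illustration, and the 2D golden angle stands in for the 3D golden-angle trajectory the thesis actually uses.

```python
import numpy as np

GOLDEN_ANGLE = 111.24611  # degrees; 2D golden angle (the real sequence uses a 3D variant)

n_readouts = 20000
# Each continuously acquired readout is rotated by the golden angle, so any
# subset of consecutive readouts covers k-space near-uniformly.
angles = (np.arange(n_readouts) * GOLDEN_ANGLE) % 360.0

# Retrospective binning: with free-running sampling, each readout is assigned to
# a cardiac phase a posteriori. Toy timing: TR = 2 ms, RR = 1 s, 20 phases
# -> 500 readouts per heartbeat, 25 per phase.
readouts_per_rr, n_phases = 500, 20
phase = (np.arange(n_readouts) % readouts_per_rr) // (readouts_per_rr // n_phases)

counts = np.bincount(phase, minlength=n_phases)
print(counts.min(), counts.max())  # 1000 1000: every phase bin is evenly filled
```

Because the phase assignment happens after the scan, the temporal resolution (here 50 ms) is a reconstruction choice rather than an acquisition parameter, which is the key flexibility of the free-running paradigm.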


Freshwater ecosystems and their biodiversity are presently seriously threatened by global development and population growth, leading to increases in nutrient inputs and intensification of eutrophication-induced problems in receiving fresh waters, particularly in lakes. Climate change constitutes another threat, exacerbating the symptoms of eutrophication as well as species migration and loss. Unequivocal evidence of climate change impacts is still highly fragmented despite intensive research, in part due to the variety and uncertainty of climate models and underlying emission scenarios, but also due to the different approaches applied to study its effects. We first describe the strengths and weaknesses of the multi-faceted approaches that are presently available for elucidating the effects of climate change in lakes, including space-for-time substitution, time series, experiments, palaeoecology and modelling. Reviewing combined results from studies based on the various approaches, we describe the likely effects of climate change on biological communities, trophic dynamics and the ecological state of lakes. We further discuss potential mitigation and adaptation measures to counteract the effects of climate change on lakes and, finally, we highlight some of the future challenges that we face to improve our capacity for successful prediction.


How do plants that move and spread across landscapes become branded as weeds and thereby objects of contention and control? We outline a political ecology approach that builds on a Lefebvrian understanding of the production of space, identifying three scalar moments that make plants into 'weeds' in different spatial contexts and landscapes. The three moments are: the operational scale, which relates to empirical phenomena in nature and society; the observational scale, which defines formal concepts of these phenomena and their implicit or explicit 'biopower' across institutional and spatial categories; and the interpretive scale, which is communicated through stories and actions expressing human feelings or concerns regarding the phenomena and processes of socio-spatial change. Together, these three scalar moments interact to produce a political ecology of landscape transformation, where biophysical and socio-cultural processes of daily life encounter formal categories and modes of control as well as emotive and normative expectations in shaping landscapes. Using three exemplar 'weeds' - acacia, lantana and ambrosia - our political ecology approach to landscape transformations shows that weeds do not act alone and that invasives are not inherently bad organisms. Humans and weeds go together; plants take advantage of spaces and opportunities that we create. The human desire to preserve certain social values in landscapes, in contradiction to their actual transformations, is often at the heart of definitions of and conflicts over weeds or invasives.


Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information, and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
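A minimal sketch of the two-stage (delayed-acceptance) MCMC idea described above, where a cheap corrected proxy screens proposals before the expensive exact model is evaluated. The toy Gaussian log-likelihoods below stand in for the proxy-plus-error-model and exact flow responses; they are not the thesis's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_loglik(x):
    # Stands in for the expensive exact flow simulation (toy standard-normal target).
    return -0.5 * x**2

def proxy_loglik(x):
    # Cheap approximate model plus an error-model correction: close to, but not
    # identical with, the exact log-likelihood.
    return -0.5 * (0.8 * x)**2 - 0.17 * x**2

x, n_exact, chain = 0.0, 0, []
for _ in range(5000):
    prop = x + rng.normal()
    # Stage 1: screen the proposal using only the corrected proxy.
    if rng.random() < min(1.0, np.exp(proxy_loglik(prop) - proxy_loglik(x))):
        # Stage 2: the exact model is run only for proposals that pass stage 1.
        n_exact += 1
        a2 = min(1.0, np.exp(exact_loglik(prop) - exact_loglik(x)
                             + proxy_loglik(x) - proxy_loglik(prop)))
        if rng.random() < a2:
            x = prop
    chain.append(x)

samples = np.array(chain)
print(len(chain), n_exact)
```

Because the second-stage ratio divides out the proxy, detailed balance with respect to the exact target is preserved, while exact-model evaluations are spent only on proposals the proxy already likes: the better the error model, the closer the stage-2 acceptance gets to one.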


Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge-repulsion energies derived from traditional formulas. It is possible to obtain reliable a priori molecular orbitals and electron excitation properties after configuration interaction of singly excited determinants, maintaining interpretative possibilities even though it is a simplified Hamiltonian. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) ratify the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, show reliability, leading to a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm of the "dark state." In this very important case, our model gives a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.
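The computed 483 nm transition and the 500 nm experimental "dark state" can be compared on an energy scale with the Planck relation; this is a generic unit conversion, not part of the CNDOL method itself.

```python
# Planck relation: E[eV] = hc / lambda ~ 1239.84 eV*nm / lambda[nm].
def nm_to_ev(lam_nm):
    return 1239.84 / lam_nm

# Computed (483 nm) vs experimental (500 nm) first allowed transition.
print(round(nm_to_ev(483.0), 2), round(nm_to_ev(500.0), 2))  # 2.57 2.48
```

On this scale the agreement corresponds to a deviation of roughly 0.09 eV, small for a semiempirical Hamiltonian of this size.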


Coherent anti-Stokes Raman scattering (CARS) is a powerful laser-spectroscopy method with which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this thesis is to develop a new phase-retrieval algorithm for CARS. It utilizes the maximum entropy method and a new wavelet approach for spectroscopic background correction of the phase function. The method was developed to be easily automated and used on a large number of spectra of different substances. The algorithm was successfully tested on experimental data.
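A rough sketch of the background-correction step on synthetic data. The iterative polynomial clipping below is a simple stand-in for the thesis's wavelet approach, and all line shapes, parameters, and thresholds are invented for illustration.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 500)
# Synthetic "phase function": two narrow spectral lines on a slowly varying background.
lines = 0.8 * np.exp(-((x - 0.3) / 0.01)**2) + 0.5 * np.exp(-((x - 0.7) / 0.015)**2)
background = 0.4 + 0.3 * x**2
signal = lines + background

# Background estimate by iterative clipping: points far above the current smooth
# fit are excluded, so the fit follows the baseline rather than the lines.
keep = np.ones_like(x, dtype=bool)
fit = np.zeros_like(x)
for _ in range(10):
    coeffs = np.polyfit(x[keep], signal[keep], 3)
    fit = np.polyval(coeffs, x)
    keep = signal <= fit + 0.01

corrected = signal - fit
err = float(np.abs(corrected - lines).max())
print(err < 0.05)
```

The point of any such baseline step, wavelet-based or otherwise, is the same: remove the slowly varying component without distorting the narrow spectral features that carry the retrieved phase.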


Improving educational quality is an important public policy goal. However, its success requires identifying factors associated with student achievement. At the core of these proposals lies the principle that increased public school quality can make the school system more efficient, resulting in correspondingly stronger performance by students. Nevertheless, the public educational system is not devoid of competition, which arises, among other factors, through the efficiency of management and the geographical location of schools. Moreover, families in Spain appear to choose a school on the grounds of location. In this environment, the objective of this paper is to analyze whether geographical space has an impact on the relationship between the technical quality of public schools (measured by an efficiency score) and the school demand index. To do this, an empirical application is performed on a sample of 1,695 public schools in the region of Catalonia (Spain). This application shows the effects of spatial autocorrelation on the estimation of the parameters and how these problems are addressed through spatial econometric models. The results confirm that space has a moderating effect on the relationship between efficiency and school demand, although only in urban municipalities.
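Spatial autocorrelation of the kind the paper controls for is commonly quantified with Moran's I. The sketch below computes it for a hypothetical clustered pattern with a rook-contiguity weight matrix; the toy data are not the Catalan sample.

```python
import numpy as np

def morans_i(values, W):
    """Moran's I: (n / sum(W)) * sum_ij W_ij z_i z_j / sum_i z_i^2, z = deviations."""
    z = values - values.mean()
    num = (W * np.outer(z, z)).sum()
    den = (z**2).sum()
    return len(values) / W.sum() * num / den

# Toy 1D arrangement of "municipalities": neighbours are adjacent units.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

clustered = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])  # similar values cluster together
print(round(morans_i(clustered, W), 2))  # 0.6: strong positive spatial autocorrelation
```

A significantly positive value like this signals that ordinary least squares residuals are not independent across space, which is exactly what motivates spatial econometric specifications (spatial lag or spatial error models).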


The transport of macromolecules, such as low-density lipoprotein (LDL), and their accumulation in the layers of the arterial wall play a critical role in the creation and development of atherosclerosis. Atherosclerosis is a disease of large arteries, e.g., the aorta and the coronary, carotid, and other proximal arteries, that involves a distinctive accumulation of LDL and other lipid-bearing materials in the arterial wall. Over time, plaque hardens and narrows the arteries, reducing the flow of oxygen-rich blood to organs and other parts of the body. This can lead to serious problems, including heart attack, stroke, or even death. It has been proven that the accumulation of macromolecules in the arterial wall depends not only on the ease with which materials enter the wall, but also on the hindrance to the passage of materials out of the wall posed by underlying layers. Therefore, attention was drawn to the fact that the wall structure of large arteries differs from that of other, disease-resistant vessels. Atherosclerosis tends to be localized in regions of curvature and branching in arteries, where fluid shear stress (shear rate) and other fluid mechanical characteristics deviate from their normal spatial and temporal distribution patterns in straight vessels. On the other hand, the smooth muscle cells (SMCs) residing in the media layer of the arterial wall respond to mechanical stimuli such as shear stress. Shear stress may affect SMC proliferation and migration from the media layer to the intima, as occurs in atherosclerosis and intimal hyperplasia. The study of the flow of blood and other body fluids and of heat transport through the arterial wall is one of the advanced applications of porous media research in recent years. The arterial wall may be modeled at both the macroscopic scale (as a continuous porous medium) and the microscopic scale (as a heterogeneous porous medium).
In the present study, the governing equations of mass, heat and momentum transport have been solved for different species and for the interstitial fluid within the arterial wall by means of computational fluid dynamics (CFD). The simulation models are based on the finite element (FE) and finite volume (FV) methods. The wall structure has been modeled by treating the wall layers as porous media with different properties. In order to study heat transport through human tissues, the simulations have been carried out for a non-homogeneous model of porous media. The tissue is composed of blood vessels, cells, and an interstitium, the latter consisting of interstitial fluid and extracellular fibers. Numerical simulations are performed in a two-dimensional (2D) model to determine the effect of the shape and configuration of the discrete phase on the convective and conductive features of heat transfer, e.g. in the interstitium of biological tissues. In addition, the governing equations of momentum and mass transport have been solved in a heterogeneous porous media model of the media layer, which has a major role in the transport and accumulation of solutes across the arterial wall. The transport of adenosine 5'-triphosphate (ATP) is simulated across the media layer as a benchmark to observe how SMCs affect species mass transport. The transport of interstitial fluid has also been simulated while the deformation of the media layer (due to high blood pressure) and of its constituents, such as SMCs, is included in the model. In this context, the effect of pressure variation on the shear stress induced over SMCs by the interstitial flow is investigated in both 2D and three-dimensional (3D) geometries of the media layer. The influence of hypertension (high pressure) on the transport of low-density lipoprotein (LDL) through deformable arterial wall layers is also studied. This is due to the pressure-driven convective flow across the arterial wall.
The intima and media layers are assumed to be homogeneous porous media. The results of the present study reveal that the ATP concentration over the surface of SMCs and within the bulk of the media layer depends significantly on the distribution of cells. Moreover, the magnitude and distribution of shear stress over the SMC surface are affected by the transmural pressure and by the deformation of the media layer of the aorta wall. This work shows that the second or even subsequent layers of SMCs may bear shear stresses of the same order of magnitude as the first layer does if the cells are arranged in an arbitrary manner. This study brings new insights into the simulation of the arterial wall, as simplifications made in previous studies have been avoided. The SMC configurations used here, with elliptic cross sections, closely resemble the physiological conditions of the cells. Moreover, the deformation of SMCs under high transmural pressure, which accompanies compaction of the media layer, has been studied for the first time. Results also demonstrate that the LDL concentration through the intima and media layers changes significantly as the wall layers are compressed by the transmural pressure. It was also noticed that the fraction of leaky junctions across the endothelial cells and the area fraction of fenestral pores over the internal elastic lamina dramatically affect the LDL distribution through the thoracic aorta wall. The simulation techniques introduced in this work may also trigger new ideas for simulating porous media in biomedical, biomechanical, chemical, and environmental engineering applications.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

We present an analytical procedure to perform the local noise analysis of a semiconductor junction when both the drift and diffusive parts of the current are important. The method takes into account space-inhomogeneous and hot-carrier conditions in the framework of the drift-diffusion model, and it can be effectively applied to the local noise analysis of different devices (n+nn+ diodes, Schottky barrier diodes, field-effect transistors, etc.) operating under strongly inhomogeneous distributions of the electric field and charge concentration.
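As a rough illustration of the regime the abstract targets, the sketch below evaluates the two contributions of the drift-diffusion current density, J = q(μnE + D dn/dx), on a 1D carrier profile. The mobility, field, and density profile are invented for illustration; the noise analysis itself is not reproduced here.

```python
import numpy as np

# Drift-diffusion current density J = q*(mu*n*E + D*dn/dx) on a 1D grid,
# separating the drift and diffusive contributions. Profiles and material
# constants are illustrative, not taken from the paper.
q = 1.602e-19          # electron charge [C]
kT_q = 0.0259          # thermal voltage at 300 K [V]
mu = 0.135             # electron mobility [m^2/(V s)]
D = mu * kT_q          # Einstein relation [m^2/s]

x = np.linspace(0.0, 1e-6, 201)          # 1 um sample
n = 1e21 * np.exp(-x / 0.2e-6)           # decaying carrier density [1/m^3]
E = 1e5 * np.ones_like(x)                # uniform field [V/m]

J_drift = q * mu * n * E                 # drift part
J_diff = q * D * np.gradient(n, x)       # diffusive part
J = J_drift + J_diff
print(J_drift[0], J_diff[0])             # comparable magnitudes, opposite signs
```

With this profile the diffusive term at the injecting contact is of the same order as the drift term and opposes it, which is exactly the situation where neither contribution to the current (or to the noise) can be neglected.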

Relevância:

30.00% 30.00%

Publicador:

Resumo:

A neural network procedure to solve inverse chemical kinetic problems is discussed in this work. Rate constants are calculated from the product concentration of an irreversible consecutive reaction: the hydrogenation of the citral molecule, a process of industrial interest. Both simulated and experimental data are considered. Errors of up to 7% in the simulated concentrations were assumed in order to investigate the robustness of the inverse procedure. The proposed method is also compared with two common methods in nonlinear analysis: the Simplex and Levenberg-Marquardt approaches. In all situations investigated, the neural network approach was numerically stable and robust with respect to deviations in the initial conditions and to experimental noise.
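For context, a consecutive irreversible reaction A → B → C has closed-form concentrations, and the Levenberg-Marquardt method cited as a comparison can recover the rate constants from the product concentration alone. The sketch below does this on noiseless synthetic data with made-up rate constants; it does not reproduce the paper's neural network.

```python
import numpy as np
from scipy.optimize import least_squares

# Forward model of A -> B -> C with first-order steps k1, k2, and a
# Levenberg-Marquardt fit of (k1, k2) to the product concentration C(t).
# Rate constants and time grid are invented for illustration.
def concentrations(t, k1, k2, a0=1.0):
    a = a0 * np.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return a, b, a0 - a - b          # C by mass balance

t = np.linspace(0.1, 10.0, 50)
k_true = (0.8, 0.3)
_, _, c_obs = concentrations(t, *k_true)

def residual(k):
    return concentrations(t, k[0], k[1])[2] - c_obs

fit = least_squares(residual, x0=[1.0, 0.1], method='lm')
print(fit.x)     # recovers (k1, k2), possibly in swapped order
```

Note that C(t) is symmetric under swapping k1 and k2, so the fit may return the pair in either order; this identifiability issue is one reason inverse kinetic problems are nontrivial.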

Relevância:

30.00% 30.00%

Publicador:

Resumo:

A simple and fast approach to solid phase extraction is described here and used to determine trace amounts of Pb2+ and Cu2+ ions. The solid phase support is sodium dodecyl sulfate (SDS)-coated γ-alumina modified with the bis(2-hydroxyacetophenone)-1,6-hexanediimine (BHAH) ligand. The adsorbed ions were stripped from the solid phase by 6 mL of 4 M nitric acid as eluent, and the eluting solution was analyzed by flame atomic absorption spectrometry (FAAS). The sorption recovery of the metal ions was investigated with regard to the effects of pH, the amounts of ligand, γ-alumina, and surfactant, and the amount and type of eluent. Complexation of BHAH with Pb2+ and Cu2+ ions was examined spectrophotometrically using the HypSpec program. The detection limit for Cu2+ was 7.9 µg L-1 with a relative standard deviation of 1.67%, while that for Pb2+ was 6.4 µg L-1 with a relative standard deviation of 1.64%. A preconcentration factor of 100 was achieved for both ions. The method was successfully applied to determine analyte concentrations in samples of liver, parsley, cabbage, and water.
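Two of the reported figures of merit can be sanity-checked with back-of-the-envelope arithmetic. Assuming the preconcentration factor is the sample-to-eluate volume ratio (a common definition) and using the IUPAC-style 3σ criterion for the detection limit, the sketch below shows the calculation; the blank statistics and calibration slope are invented for illustration.

```python
# Back-of-the-envelope checks on the reported figures of merit.
# PF is taken here as the sample-to-eluate volume ratio (an assumption);
# the blank standard deviation and slope below are made up.
V_eluate_mL = 6.0
PF = 100
V_sample_mL = PF * V_eluate_mL       # implied sample volume
print(V_sample_mL)                   # 600.0

# IUPAC-style detection limit: LOD = 3 * s_blank / slope
s_blank = 0.0008     # std. dev. of blank absorbance readings (made up)
slope = 0.30         # calibration slope, absorbance per mg L-1 (made up)
lod_mg_L = 3 * s_blank / slope
print(round(lod_mg_L * 1000, 1))     # in ug L-1
```

With these invented blank statistics the 3σ detection limit comes out at a few µg L-1, the same order as the values reported for Cu2+ and Pb2+.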