970 results for Mathematical diffusion models
Abstract:
This paper presents an approach for developing a methodology to generate a practice-oriented data basis for numerical investigations in maritime empty-container logistics. The approach is illustrated with an exemplary application case. The results are intended to support test runs for empty-container logistics scenarios and thus to provide a basis for the development and evaluation of organisational improvement approaches, mathematical optimisation models, corresponding solution algorithms, and practice-oriented simulation environments.
Abstract:
The work presented here aims to reduce the cost of multijunction solar cell technology by developing ways to manufacture such cells on cheap substrates such as silicon. In particular, our main objective is the growth of III-V semiconductors on silicon substrates for photovoltaic applications. The goal is to create a GaAsP/Si virtual substrate onto which other III-V cells could be integrated with an interesting efficiency potential. This technology involves several challenges due to the difficulty of growing III-V materials on silicon. In this paper, we present our first work towards developing such a structure, focused on the development of phosphorus diffusion models in silicon and on the preparation of an optimal silicon surface for the subsequent growth of III-V materials.
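As a point of reference for what a phosphorus diffusion model in silicon can look like, the following is a minimal sketch using the standard constant-source (complementary error function) solution of Fick's second law; the surface concentration and diffusivity are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (not from the paper): constant-source phosphorus diffusion in silicon,
# modeled with the complementary-error-function solution of Fick's second law.
# Surface concentration Cs (cm^-3) and diffusivity D (cm^2/s) are illustrative placeholders.
import numpy as np
from scipy.special import erfc

def phosphorus_profile(x_cm, t_s, Cs=1e20, D=3e-14):
    """Dopant concentration C(x, t) = Cs * erfc(x / (2*sqrt(D*t)))."""
    return Cs * erfc(x_cm / (2.0 * np.sqrt(D * t_s)))

x = np.linspace(0.0, 1e-4, 200)             # depth grid, 0 to 1 micron (in cm)
profile = phosphorus_profile(x, t_s=1800)   # 30-minute drive-in (illustrative)
```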
Abstract:
In this paper, several mathematical programming models are presented for setting the number of services on a specified system of bus lines intended to serve the high demand levels that may arise from the disruption of rapid transit services or during massive events. These models determine two basic quantities: a) the number of bus units assigned to each line and b) the number of services assigned to those units. Passenger flow assignment to lines can be treated as system-optimal, in the sense that units and services are assigned by minimizing a linear combination of operation costs and total user travel time. The models account for the delays buses experience due to passenger boarding and alighting and to queueing at stations, as well as the delays passengers experience waiting at stations. For a congested, strategy-based, user-optimal passenger assignment model with strict capacities on the bus lines, the use of the method of successive averages is shown.
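As a rough illustration of this class of model, the sketch below is a heavily simplified linear program that chooses services per line to cover demand under a fleet-hours budget while minimizing a weighted sum of operating cost and in-vehicle time; all names and numbers are assumptions for illustration, not the paper's formulation.

```python
# Hypothetical, heavily simplified instance of a service-setting model: choose services
# per line to cover demand under a fleet-hours budget while minimizing a linear
# combination of operating cost and in-vehicle time. Numbers are illustrative only.
import numpy as np
from scipy.optimize import linprog

demand      = np.array([1200.0, 800.0, 500.0])   # passengers per line
capacity    = 80.0                               # passengers per service
op_cost     = np.array([50.0, 60.0, 45.0])       # cost per service on each line
travel_time = np.array([0.6, 0.9, 0.4])          # bus-hours per service
alpha, beta = 1.0, 30.0                          # weights: cost vs. time

c = alpha * op_cost + beta * travel_time          # linear objective coefficients
A_ub = np.vstack([-capacity * np.eye(3),          # capacity * s_l >= demand_l
                  travel_time.reshape(1, -1)])    # total bus-hours <= budget
b_ub = np.concatenate([-demand, [40.0]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("services per line:", res.x)
```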
Abstract:
Mathematical models used for the understanding of coastal seabed morphology play a key role in beach nourishment projects. These projects have become the fundamental strategy for coastal maintenance in recent years. Accordingly, the accuracy of these models is vital to optimize the costs of coastal regeneration projects. Planning such interventions requires methodologies that do not generate uncertainties in their interpretation. This paper presents a study and comparison of mathematical simulation models of the coastline, as well as of the model elements that are a source of uncertainty. The equilibrium profile (EP) and the offshore limit corresponding to the depth of closure (DoC) have been analyzed taking into account different timescale ranges. The results have thus been compared using data sets from three different periods, identified as present, past and future. The accuracy of the beach-profile data collection and the calculation of the median grain size from the collected samples are the two main factors considered in this paper. These data can generate high uncertainties and reduce the accuracy of nourishment projects; together they can generate excessive costs due to a possible excess or shortage of the sand used for the nourishment. The main goal of this paper is the development of a new methodology to increase the accuracy of the existing equilibrium beach profile models, improving the inputs used in such models and the fitting of the formulae used to obtain the seabed shape. This new methodology has been applied and tested on Valencia's beaches.
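For context, a widely used equilibrium beach profile is the Dean-type power law h(x) = A·x^(2/3). The sketch below is an assumption for illustration, not the paper's fitted model; the scale parameter A, which would normally be estimated from grain-size or settling-velocity data, is treated as a plain input.

```python
# Minimal sketch (assumption, not the paper's methodology): Dean-type equilibrium
# beach profile h(x) = A * x**(2/3), with the scale parameter A treated as an input
# that would normally be fitted from grain-size / settling-velocity data.
import numpy as np

def equilibrium_profile(x_m, A=0.1):
    """Water depth (m) at cross-shore distance x_m (m) for a Dean-type profile."""
    return A * np.power(x_m, 2.0 / 3.0)

x = np.linspace(0.0, 500.0, 251)     # cross-shore distance from the shoreline (m)
h = equilibrium_profile(x, A=0.1)    # illustrative profile down to the closure depth
```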
Abstract:
Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using Molecular Dynamics (MD) simulated atomistic trajectories, we test the assumption of normal diffusion in the phase space for bulk water at ambient conditions by checking the equivalence of the transport to the random walk model. Contrary to common expectations, we have found that some statistical features of the transport in the phase space differ from those of normal diffusion models. This implies a non-random character of the path search process by the reacting complexes in water solutions. Our further numerical experiments show that a significant long period of non-stationarity in the transition probabilities of the segments of molecular trajectories can account for the observed non-uniform filling of the phase space. Surprisingly, the characteristic periods of the model non-stationarity are hundreds of nanoseconds, that is, time scales much longer than the typical lifetimes of known liquid water molecular structures (several picoseconds).
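A standard first check of the normal-diffusion assumption is whether the mean squared displacement grows linearly with lag time. The sketch below is an illustration of that check on an array of MD positions, not the authors' analysis code.

```python
# Minimal sketch (assumption, not the authors' analysis code): test for normal diffusion
# by checking whether the mean squared displacement (MSD) grows linearly in lag time,
# as a random-walk / normal-diffusion model predicts.
import numpy as np

def mean_squared_displacement(positions):
    """positions: array of shape (n_frames, n_atoms, 3). Returns (lags, MSD per lag)."""
    n_frames = positions.shape[0]
    lags = np.arange(1, n_frames)
    msd = np.empty(lags.size)
    for i, lag in enumerate(lags):
        disp = positions[lag:] - positions[:-lag]
        msd[i] = np.mean(np.sum(disp**2, axis=-1))
    return lags, msd

# Example on a synthetic Brownian trajectory (illustrative only): a log-log fit of
# MSD vs. lag should give a slope close to 1 for normal diffusion.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.standard_normal((1000, 10, 3)), axis=0)
lags, msd = mean_squared_displacement(traj)
```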
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Consisting mainly of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with 8 wt% tar yields and conversion of volatiles close to 100%. 90 wt% of the AHR was transformed into gas by combustion, with volatile conversions above 90%. Gasification with 5 vol% O2 / 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation. The activation energy (EA) and pre-exponential factor (k0 in s-1) for pyrolysis (EA=80 kJ/mol, ln k0=14), gasification (EA=69 kJ/mol, ln k0=13) and combustion (EA=42 kJ/mol, ln k0=8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction for pyrolysis and three-dimensional diffusion for combustion.
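The Arrhenius parameters quoted above follow from linear fits of the usual form ln k = ln k0 - EA/(R·T). The sketch below shows such a fit on purely illustrative data, not the thesis' measurements.

```python
# Minimal sketch (assumption, not the thesis code): extract Arrhenius parameters by
# fitting ln k = ln k0 - EA/(R*T) to rate constants measured at several temperatures.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_fit(T_K, k_s):
    """Linear fit of ln k vs. 1/T; returns (EA in J/mol, ln k0)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T_K), np.log(np.asarray(k_s)), 1)
    return -slope * R, intercept

# Illustrative data only
T = [973.0, 1073.0, 1173.0, 1273.0]   # temperatures (K)
k = [0.8, 2.1, 4.9, 10.5]             # rate constants (1/s)
EA, ln_k0 = arrhenius_fit(T, k)
```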
Abstract:
AMS Subj. Classification: 92C30
Abstract:
Random walk models with temporal correlation (i.e. memory) are of interest in the study of anomalous diffusion phenomena. The random walk and its generalizations hold a prominent place in the characterization of various physical, chemical and biological phenomena. Temporal correlation is an essential feature of anomalous diffusion models. Models with long-range temporal correlation can be called non-Markovian, whereas their short-range counterparts are Markovian. Within this context, we review the existing models with temporal correlation: full memory (the elephant walk model) and partial memory (the Alzheimer walk model and the walk model with a Gaussian memory profile). These models show superdiffusion with a Hurst exponent H > 1/2. In this work we study a superdiffusive random walk model with exponentially decaying memory. This seems to be a self-contradictory statement, since it is well known that random walks with exponentially decaying temporal correlations can be approximated arbitrarily well by Markov processes, and that central limit theorems prohibit superdiffusion for Markovian walks with finite variance of step sizes. The solution to the apparent paradox is that the model is genuinely non-Markovian, due to a time-dependent decay constant associated with the exponential behavior. In the end, we discuss ideas for future investigations.
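To make the construction concrete, the sketch below simulates a one-dimensional walk whose steps recall the past with exponentially decaying weights. It is an assumption for illustration only and, unlike the paper's model, keeps the decay constant fixed.

```python
# Minimal sketch (assumption, not the authors' model): a one-dimensional random walk
# whose next step copies a past step chosen with exponentially decaying memory weights
# (probability p), or is drawn at random otherwise. The paper's time-dependent decay
# constant is not reproduced here; tau is fixed for simplicity.
import numpy as np

def memory_walk(n_steps, p=0.75, tau=50.0, rng=None):
    rng = rng or np.random.default_rng(0)
    steps = [rng.choice([-1, 1])]
    for t in range(1, n_steps):
        if rng.random() < p:
            weights = np.exp(-(t - np.arange(t)) / tau)   # recent steps weigh more
            past = rng.choice(t, p=weights / weights.sum())
            steps.append(steps[past])
        else:
            steps.append(rng.choice([-1, 1]))
    return np.cumsum(steps)

x = memory_walk(5000)   # position trace; the MSD scaling gives the Hurst exponent
```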
Abstract:
Several previous studies have shown that submarine mass-movements can profoundly impact the shape of pore water profiles. Therefore, pore water geochemistry and diffusion models have been proposed as tools for identifying and dating recent (at most several thousands of years old) mass-transport deposits (MTDs). In particular, sulfate profiles have been shown to indicate transient pore water conditions generated by submarine landslides. After mass-movements that result in the deposition of sediment packages with distinct pore water signatures, the sulfate profiles can be kink-shaped and, with time, evolve into concave and then linear shapes due to molecular diffusion. Here we present data from RV METEOR cruise M78/3 along the continental margin off Uruguay and Argentina. Sulfate profiles of 15 gravity cores are compared with the respective acoustic facies recorded by a sediment echosounder system. Our results show that in this very dynamic depositional setting, non-steady-state profiles occur frequently but are not exclusively associated with mass-movements. Three sites that show acoustic indications of recent MTDs are presented in detail. Where recent MTDs are identified, a geochemical transport/reaction model is used to estimate the time that has elapsed since the perturbation of the pore water system and, thus, the timing of the MTD emplacement. We conclude that geochemical analyses are a powerful complementary tool in the identification of recent MTDs and provide a simple and accurate way of dating such deposits.
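The basic dating idea can be illustrated with a bare-bones diffusion calculation: relax an idealized kink-shaped sulfate profile by 1D molecular diffusion and compare the modelled profile at different elapsed times with the measured one. The sketch below is an assumption for illustration, far simpler than the cruise's transport/reaction model, and all parameter values are placeholders.

```python
# Minimal sketch (assumption, not the cruise's transport/reaction model): relax a
# kink-shaped sulfate profile by 1D molecular diffusion using explicit finite
# differences. Diffusivity, grid and concentrations are illustrative placeholders.
import numpy as np

def diffuse(profile, D=3e-10, dz=0.05, dt=1e6, t_total=3.15e10):
    """Evolve a 1D concentration profile for t_total seconds (~1000 years by default)."""
    c = profile.copy()
    r = D * dt / dz**2                     # must stay below 0.5 for stability (here ~0.12)
    for _ in range(int(t_total / dt)):
        c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return c

z = np.arange(0.0, 10.0, 0.05)                              # depth below seafloor (m)
kink = np.where(z < 4.0, 28.0, 28.0 - 5.0 * (z - 4.0))      # idealized kink-shaped profile (mM)
aged = diffuse(np.clip(kink, 0.0, None))                    # profile after ~1000 years of diffusion
```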
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included simple, naïve metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
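For reference, the two simpler figures of merit mentioned here can be computed as in the sketch below. It is a minimal illustration under assumed inputs (a known signal template and a stack of noise-only ROIs), not the dissertation's implementation.

```python
# Minimal sketch (assumption, not the dissertation's implementation): contrast-to-noise
# ratio (CNR) and the detectability index of a non-prewhitening (NPW) matched-filter
# observer, computed from a known signal template and noise-only ROI realizations.
import numpy as np

def cnr(signal_roi, background_roi):
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

def npw_dprime(template, noise_rois):
    """template: expected signal (difference) image; noise_rois: (n, N, N) noise-only ROIs."""
    w = template.ravel()                              # NPW observer uses the template itself
    noise = noise_rois.reshape(noise_rois.shape[0], -1)
    responses = noise @ w                             # observer responses to noise-only ROIs
    return (w @ w) / responses.std()                  # signal response / response spread

# Illustrative example with a synthetic square signal and white noise
rng = np.random.default_rng(0)
template = np.zeros((32, 32)); template[12:20, 12:20] = 5.0
d_prime = npw_dprime(template, rng.normal(0.0, 10.0, size=(100, 32, 32)))
```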
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
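The dissertation's irregular-ROI NPS estimator is not reproduced here; as a baseline for comparison, the sketch below shows the conventional ensemble NPS estimate from repeated scans of the same square ROI, which is an assumption for illustration only.

```python
# Minimal sketch (assumption, not the dissertation's irregular-ROI method): estimate a
# 2D noise power spectrum (NPS) from an ensemble of repeated scans by subtracting the
# ensemble mean and averaging the squared Fourier magnitude of square noise ROIs.
import numpy as np

def nps_2d(roi_stack, pixel_size_mm=0.5):
    """roi_stack: (n_realizations, N, N) array of the same ROI from repeated scans."""
    noise = roi_stack - roi_stack.mean(axis=0, keepdims=True)   # remove deterministic background
    n, N, _ = noise.shape
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))) ** 2
    return spectra.mean(axis=0) * pixel_size_mm**2 / (N * N)    # standard NPS normalization

# Sanity check on synthetic white noise: the resulting NPS should be roughly flat.
rng = np.random.default_rng(0)
nps = nps_2d(rng.normal(0.0, 10.0, size=(50, 64, 64)))
```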
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
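As an illustration of what such an analytical lesion model can look like, the sketch below builds a circularly symmetric lesion with a given size, contrast, and sigmoid edge profile that could be voxelized and added to an image ROI. The functional form and parameters are assumptions for illustration, not the dissertation's models.

```python
# Minimal sketch (assumption, not the dissertation's models): an analytical lesion model
# with size, contrast and a smooth (sigmoid) edge profile, which can be voxelized and
# added to a patient image ROI to create a "hybrid" image with known ground truth.
import numpy as np

def lesion_model(shape=(64, 64), radius=8.0, contrast=-15.0, edge_width=1.5, center=None):
    cy, cx = center or (shape[0] / 2.0, shape[1] / 2.0)
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(y - cy, x - cx)
    return contrast / (1.0 + np.exp((r - radius) / edge_width))   # HU offset per pixel

# In practice the model would be added to a real patient ROI; a zero background is used here.
hybrid_roi = np.zeros((64, 64)) + lesion_model()
```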
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course, "Principles and Practice of Fire Modelling", run by the University of Greenwich attempts to bridge the divide between the expert and the general user, providing them with the expertise they need to understand the results of mathematical fire modelling. The course and the associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background; they offer a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars. Those attending also gain experience in using the methods during "hands-on" tutorial and workshop sessions. On completion of this short course, those participating should: - be familiar with the concept of zone and field modelling; - be familiar with zone and field model assumptions; - have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling; - be able to select and use the most appropriate mathematical software and demonstrate their use in compartment fire applications; and - be able to interpret model predictions. The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help predict fire development and to determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.
Abstract:
The focus of the current dissertation is to study qualitatively the underlying physics of vortex shedding and wake dynamics for long aspect-ratio aerodynamic bodies in incompressible viscous flow through the use of the KLE method. We carried out a long series of numerical experiments on flow around a cylinder at low Reynolds numbers. The study of flow at low Reynolds numbers provides insight into the fluid physics and also plays a critical role when applied to stalled turbine rotors. Many of the conclusions about the qualitative nature of the physical mechanisms characterizing vortex formation, shedding and further interaction analyzed here at low Re could be extended to other Re regimes and help to understand the separation of the boundary layers on airfoils and other aerodynamic surfaces. In the long run, this work aims to provide a better understanding of the complex multi-physics problems involving fluid-structure-control interaction through improved mathematical and computational models of the multi-physics process. Besides the scientific conclusions produced, the research work on streamlined and bluff-body conditions will also serve as a valuable guide for the future design of blade aerodynamics and the placement of wind turbines and hydrokinetic turbines, increasing the efficiency in the use of expensive workforce, supplies, and infrastructure. After an introductory section describing the main fields of application of wind power and hydrokinetic turbines, we describe the main features and theoretical background of the numerical method used here. Then, we present the analysis of the numerical experimentation results for the oscillatory regime right before the onset of vortex shedding for circular cylinders. We verified the wake length of the closed near-wake behind the cylinder, analysed the decay of the wake in the wake formation region, and then studied the St-Re relationship at Reynolds numbers before the wake sheds, comparing it with experimental data. We found a theoretical model that describes the time evolution of the amplitude of fluctuations in the vorticity field on the twin vortex wake, which accurately matches the numerical results in terms of the frequency of the oscillation and rate of decay. We also proposed a model based on an analog circuit that is able to interpret the flow concerned by reducing the number of degrees of freedom. It follows the idea of the non-linear oscillator and resembles the dynamic mechanism of the closed near-wake with a commonly configured sine-wave oscillator. This low-dimensional circuital model may also help to understand the underlying physical mechanisms, related to vorticity transport, that give origin to those oscillations.
Abstract:
This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough that it can incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely-sampled observations. It applies to situations in which the process is known to remain within an interval between observations, by way of either a known constraint or through the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation. This approach leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis present several implications for empirical research. The advantages are realized in the form of efficiency gains, bias reduction and in the flexibility of model specification. A bias arises in the presence of bounding information that is ignored, while it is mitigated within this framework. An efficiency gain arises, in the sense that the statistical methods make use of conditioning information, as revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that the more precise model estimation exposes other sources of misspecification. Some processes reveal themselves to be unlikely candidates for a given diffusion model, once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model. This approach is used to produce a set of algorithms to make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates, which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
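To fix ideas about the likelihood machinery involved, the sketch below estimates the parameters of a simple mean-reverting diffusion from discretely sampled data via the Euler transition density. It is an assumption for illustration, far simpler than the dissertation's framework, and deliberately omits the bounding information that framework is built to exploit.

```python
# Minimal sketch (assumption, far simpler than the dissertation's framework): maximum
# likelihood estimation of dX = kappa*(theta - X) dt + sigma dW from discretely sampled
# data via the Euler transition density. The bounding information central to the
# dissertation (constraints/censoring at a known bound) is not modelled here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, x, dt):
    kappa, theta, sigma = params
    mean = x[:-1] + kappa * (theta - x[:-1]) * dt      # Euler conditional mean
    sd = sigma * np.sqrt(dt)                           # Euler conditional std. dev.
    return -norm.logpdf(x[1:], loc=mean, scale=sd).sum()

# Illustrative use on a simulated path
rng = np.random.default_rng(1)
dt, x = 1.0 / 252.0, [0.02]
for _ in range(2000):
    x.append(x[-1] + 0.5 * (0.03 - x[-1]) * dt + 0.01 * np.sqrt(dt) * rng.standard_normal())
x = np.array(x)
fit = minimize(neg_log_lik, x0=[0.3, 0.02, 0.02], args=(x, dt),
               bounds=[(1e-4, None), (None, None), (1e-4, None)])
```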
Abstract:
We discuss in this paper equations describing processes involving non-linear and higher-order diffusion. We focus on a particular case, $u_t = 2\lambda^2 (u u_x)_x + \lambda^2 u_{xxxx}$, which is put into analogy with the KdV equation. A balance of nonlinearity and higher-order diffusion enables the existence of self-similar solutions, describing diffusive shocks. These shocks are continuous solutions with a discontinuous higher-order derivative at the shock front. We argue that they play a role analogous to the soliton solutions in the dispersive case. We also discuss several physical instances where such equations are relevant.
Abstract:
The partial replacement of NaCl by KCl is a promising alternative for producing a cheese with lower sodium content, since KCl does not change the final quality of the cheese product. In order to assure proper salt proportions, mathematical models are employed to control the production process and simulate the multicomponent diffusion during the reduced-salt cheese ripening period. The generalized Fick's second law is widely accepted as the primary mass transfer model within solid foods. The Finite Element Method (FEM) was used to solve the resulting system of differential equations. NaCl and KCl multicomponent diffusion was simulated using a 20% (w/w) static brine with 70% NaCl and 30% KCl during the salting and ripening of Prato cheese (a Brazilian semi-hard cheese). The theoretical results were compared with experimental data and indicated deviations of 4.43% for NaCl and 4.72% for KCl, validating the proposed model for the production of good-quality, reduced-sodium cheeses.
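As a much simpler point of comparison with the FEM model described above, the sketch below solves one-dimensional Fick's-second-law uptake of each salt into a slab with explicit finite differences; the diffusivities, geometry and brine concentrations are illustrative assumptions, not the paper's values.

```python
# Minimal sketch (assumption, not the paper's FEM model): 1D Fick's-second-law diffusion
# of NaCl and KCl into a cheese slab, solved with explicit finite differences and a
# fixed brine concentration at the surface. Diffusivities and geometry are illustrative.
import numpy as np

def salt_uptake(D, c_brine, length=0.05, nx=101, t_total=2 * 24 * 3600.0):
    dx = length / (nx - 1)
    dt = 0.4 * dx**2 / D                       # explicit stability criterion
    c = np.zeros(nx)                           # salt concentration in the cheese (kg/m3)
    c[0] = c_brine                             # brine-side boundary held constant
    for _ in range(int(t_total / dt)):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0] = c_brine                         # Dirichlet boundary at the brine surface
        c[-1] = c[-2]                          # zero-flux boundary at the slab centre
    return c

c_nacl = salt_uptake(D=2.0e-10, c_brine=0.7 * 200.0)   # 70% NaCl share of the brine (illustrative)
c_kcl  = salt_uptake(D=1.8e-10, c_brine=0.3 * 200.0)   # 30% KCl share (illustrative)
```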