887 results for cashew nut kernel


Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis was to develop a new gluten-free bakery product. The product development work was carried out together with Vuohelan Herkkupuoti and took place at the Vuohela bakery in Hartola. The process began in spring 2007, and the product was intended to be launched at the Keliakia (coeliac) fair at the turn of November and December. The product development process started with drawing up a plan. The plan was followed by research, which in this case meant a customer survey. The survey was used to find out what kinds of gluten-free bakery products were missing from the market. The results of the customer survey were not the sole starting point of the product development, but they were indicative. Important points raised by the survey were the absence of nuts and soy, fibre content, low fat and salt content, and the high price level of the products. Competitors' products were weighted towards sweet pastries. Launching at the fair set its own requirements for the product. Some of the products conceived during the ideation stage were to go into production later. The product chosen for development was a nut-free and soy-free, low-salt, low-fat, fibre-rich, non-sweet product that is easy to eat on its own. Next, the structure, composition and taste of the product were designed. First, four doughs were made, from which the best in texture and taste was chosen. After this, different flavourings were tried mixed into the doughs, on top of them and inside them. The best result was a pesto and goat cheese filling spread inside. For cost reasons, ordinary pesto containing cashew nuts was chosen, so the nut-free goal was not achieved despite the intention. The pesto can later be replaced with a nut-free pesto if one becomes available from the wholesaler. The goal of the product development process was to develop a new gluten-free bakery product that can be sold both in the bakery's own shop and in retail stores. The process progressed in smooth cooperation on a fast schedule. Unfortunately, the schedule was not kept and the product could not be finished in time for the Keliakia fair; the launch was postponed to 2008. The product design was not wasted, however, as a product missing from the market was successfully developed and can be brought to market with pride. The result of the product development was a successful snack product that is easy to enjoy without any accompaniments.

Relevance:

10.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Many problems then arise, ranging from prospecting for new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterisation of the subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then run into the main limitation of these approaches, namely the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to construct an error model, which is then used to correct the remaining approximate responses and to predict the response of the complex model. This method maximises the use of the available information without any perceptible increase in computation time, making the uncertainty propagation more accurate and more robust. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalised mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, it is necessary to reduce its dimensionality. In this respect, the novelty of the presented work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximising the retained information, but also allows the quality of the error model to be diagnosed in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations in agreement with the observations.
However, these methods suffer from a very low acceptance rate for high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, two-stage MCMC, was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimise the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
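To make the two-stage idea concrete, the following is a minimal, self-contained sketch (not the thesis code): a cheap proxy, corrected by a placeholder error model, screens each proposal, and the expensive exact simulator is run only for proposals that pass the first stage. All function names (run_exact, run_proxy, correct) and the toy likelihoods are hypothetical stand-ins for the flow simulators and the FPCA-based error model described above.

```python
import numpy as np

# Hypothetical stand-ins for the expensive exact flow simulator and the cheap proxy;
# in the thesis these would be flow/transport simulations of a geostatistical realization.
def run_exact(theta):
    return np.exp(-0.5 * np.sum(theta ** 2))          # placeholder "exact" likelihood

def run_proxy(theta):
    return np.exp(-0.5 * np.sum((1.1 * theta) ** 2))  # biased, cheaper approximation

def correct(proxy_like):
    # Placeholder error model: in the thesis, a functional (FPCA-based) regression
    # learned from a training subset; here just an illustrative correction.
    return proxy_like ** 0.9

def two_stage_mcmc(n_iter=5000, step=0.5, rng=np.random.default_rng(0)):
    theta = np.zeros(2)
    like_exact = run_exact(theta)
    like_cheap = correct(run_proxy(theta))
    chain, exact_calls = [], 0
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        cheap_prop = correct(run_proxy(prop))
        # Stage 1: screen the proposal using only the corrected proxy.
        if rng.random() < min(1.0, cheap_prop / like_cheap):
            # Stage 2: run the exact model only for proposals that survived stage 1.
            exact_prop = run_exact(prop)
            exact_calls += 1
            alpha = min(1.0, (exact_prop / like_exact) * (like_cheap / cheap_prop))
            if rng.random() < alpha:
                theta, like_exact, like_cheap = prop, exact_prop, cheap_prop
        chain.append(theta.copy())
    return np.array(chain), exact_calls

samples, n_exact = two_stage_mcmc()
print(f"{n_exact} exact-model calls for {len(samples)} MCMC iterations")
```

The second-stage acceptance ratio cancels the proxy screening bias, so the chain still targets the exact posterior while the exact model is called only for promising proposals.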

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to produce a checklist to be used in inspections so that the inspections become a high-quality and uniform product. The work outlines the laws, standards and the directive affecting machine safety, and discusses the development and present state of the law and of the regulations issued under it. In addition, it is clarified who is responsible for safety at each stage of a machine's life cycle, how safety should be monitored, and what is required for harmonising the regulations. The main emphasis is on the machine-safety requirements concerning electrical systems and on what is required of electrical systems. It is briefly discussed who, at each stage of a machine's development, must carry out the risk assessment required for the safety measures. In this work, the requirements for electrical systems are divided into two groups: electrical requirements and requirements placed on control circuits. The electrical requirements are divided into six sub-items, as they are arranged in the standard, and the control circuits are correspondingly divided into seven sub-items in accordance with the standard. The inspection requirements are presented under each item, as they are presented in the standard. A tool is attached to the work so that the areas involved in inspecting a machine do not rest solely on memory. With the tool, the inspections performed by different inspectors can be harmonised so that they correspond to an inspection under the Occupational Safety and Health Act. During the work, one real inspection of a machine was carried out, which gave the development of the tool its own basis in reality. The inspection was performed at the Saint-Gobain Isover Oy factory.

Relevance:

10.00%

Publisher:

Abstract:

Abstract This work studies the multi-label classification of turns in Simple English Wikipedia talk pages into dialog acts. The dataset used was created and multi-labelled by Ferschke et al. (2012). The first part analyses dependencies between labels, in order to examine the annotation coherence and to determine a classification method. Then, a multi-label classification is computed after transforming the problem by binary relevance. Regarding features, whereas Ferschke et al. (2012) use features such as uni-, bi- and trigrams, the time distance between turns or the indentation level of the turn, other features are considered here: lemmas, part-of-speech tags and the meaning of verbs (according to WordNet). The dataset authors applied approaches such as Naive Bayes or Support Vector Machines. The present paper proposes, as an alternative, to use Schoenberg transformations which, following the example of kernel methods, transform the original Euclidean distances into other Euclidean distances in a space of high dimensionality. Résumé This work studies the supervised multi-label classification into dialog acts of contributors' turns on the talk pages of Simple English Wikipedia. The dataset considered was created and multi-labelled by Ferschke et al. (2012). A first part analyses the relationships between labels in order to examine the coherence of the annotations and to determine a classification method. Then, a supervised multi-label classification is carried out after binary recoding of the labels. Regarding the variables, whereas Ferschke et al. (2012) use features such as uni-, bi- and trigrams, the time between turns or the indentation of a turn, other descriptors are considered here: lemmas, part-of-speech categories and the meanings of verbs (according to WordNet). The dataset authors employed approaches such as Naive Bayes or Support Vector Machines (SVM) for classification. This article proposes, as an alternative, to use and extend linear discriminant analysis to Schoenberg transformations which, like kernel methods, transform the original Euclidean distances into other Euclidean distances in a high-dimensional space.
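As an illustration of the binary relevance transformation mentioned in both abstracts, here is a minimal sketch with toy data and invented feature and label names: each dialog-act label is treated as an independent binary classification problem. scikit-learn's LogisticRegression merely stands in for the Naive Bayes, SVM or discriminant classifiers actually discussed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 6 turns described by 4 (e.g. lemma/POS-derived) features,
# and a multi-label target with 3 hypothetical dialog-act labels.
X = np.array([[1, 0, 2, 1], [0, 1, 0, 3], [2, 1, 1, 0],
              [0, 0, 3, 1], [1, 2, 0, 0], [2, 0, 1, 2]], dtype=float)
Y = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0],
              [0, 0, 1], [1, 0, 0], [0, 1, 1]])   # rows: turns, columns: labels

# Binary relevance: fit one independent binary classifier per label column.
models = [LogisticRegression(max_iter=1000).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict_labels(turn_features):
    x = np.asarray(turn_features, dtype=float).reshape(1, -1)
    return np.array([m.predict(x)[0] for m in models])   # one 0/1 decision per label

print(predict_labels([1, 1, 1, 1]))
```

The label-dependency analysis mentioned in the first part is precisely what binary relevance ignores, which is why the abstracts examine annotation coherence before adopting it.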

Relevance:

10.00%

Publisher:

Abstract:

GDP has usually been used as a proxy for human well-being. Nevertheless, other social aspects should also be considered, such as life expectancy, infant mortality, educational enrolment and crime. In this paper we investigate not only economic convergence but also social convergence between regions of a developing country, Colombia, over the period 1975-2005. We consider several techniques in our analysis: sigma convergence, stochastic kernel estimations, and several empirical models to estimate the beta convergence parameter (cross-section and panel estimates, with and without spatial dependence). The main results confirm that there is convergence in Colombia in key social variables, although not in the classic economic variable, GDP per capita. We also find that spatial autocorrelation reinforces convergence processes through deepening market and social factors, while isolation condemns regions to non-convergence.
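For readers unfamiliar with the beta convergence parameter, the following is a worked toy sketch of the simplest cross-section regression behind it, on synthetic data rather than the Colombian series used in the paper: average growth is regressed on the initial log level, and a negative slope indicates convergence.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, T = 30, 30                                    # e.g. regions observed over 30 years
y0 = rng.lognormal(mean=8.0, sigma=0.6, size=n_regions)  # initial GDP per capita (synthetic)
# Synthetic convergence process: poorer regions grow faster, plus noise.
growth = 0.06 - 0.02 * (np.log(y0) - 8.0) + 0.005 * rng.standard_normal(n_regions)

# Cross-section beta-convergence regression: growth_i = a + b * log(y0_i) + e_i
A = np.column_stack([np.ones(n_regions), np.log(y0)])
(a, b), *_ = np.linalg.lstsq(A, growth, rcond=None)

# Implied annual speed of convergence from b = -(1 - exp(-beta*T)) / T
beta = -np.log(1 + b * T) / T if 1 + b * T > 0 else float("nan")
print(f"slope b = {b:.4f} (negative => convergence), implied speed beta = {beta:.4f}/yr")
```

Panel estimates and spatial-dependence terms, as used in the paper, extend this same regression rather than replace it.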

Relevance:

10.00%

Publisher:

Abstract:

The most suitable method for estimating size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, assuming that data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, emerges as the most reliable and generalizable method of size diversity evaluation.
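A minimal sketch of the recommended procedure, under the assumption that a Gaussian kernel and simple grid integration are acceptable stand-ins for the estimator used in the study: sizes are divided by their geometric mean, the pdf is estimated with a kernel, and the continuous Shannon expression H = -∫ p(x) ln p(x) dx is evaluated numerically. The sample data and grid settings below are illustrative only.

```python
import numpy as np
from scipy.stats import gaussian_kde

def size_diversity(sizes, grid_points=2048):
    """Shannon diversity H = -integral of p(x) ln p(x) dx for a continuous size variable,
    estimated with a Gaussian kernel after standardizing by the sample geometric mean."""
    sizes = np.asarray(sizes, dtype=float)
    x = sizes / np.exp(np.mean(np.log(sizes)))      # divide by the sample geometric mean
    kde = gaussian_kde(x)                            # nonparametric pdf estimate
    pad = 3 * x.std()
    grid = np.linspace(x.min() - pad, x.max() + pad, grid_points)
    p = np.clip(kde(grid), 1e-300, None)             # avoid log(0)
    dx = grid[1] - grid[0]
    return float(-np.sum(p * np.log(p)) * dx)        # grid approximation of the integral

# Example with simulated body sizes (log-normal, as in one of the reference distributions).
sizes = np.random.default_rng(2).lognormal(mean=1.0, sigma=0.8, size=500)
print(size_diversity(sizes))
```

Because the geometric mean of the standardized data is one, the abstract's invariance under log-transformation holds in theory; the kernel estimate reproduces it only up to numerical error.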

Relevance:

10.00%

Publisher:

Abstract:

Aim: Emerging polyploids may depend on environmental niche shifts for successful establishment. Using the alpine plant Ranunculus kuepferi as a model system, we explore the niche shift hypothesis at different spatial resolutions and in contrasting parts of the species' range. Location: European Alps. Methods: We sampled 12 individuals from each of 102 populations of R. kuepferi across the Alps, determined their ploidy levels, derived coarse-grain (100 m x 100 m) environmental descriptors for all sampling sites by downscaling WorldClim maps, and calculated fine-scale environmental descriptors (2 m x 2 m) from indicator values of the vegetation accompanying the sampled individuals. Both coarse- and fine-scale variables were also computed for 8239 vegetation plots from across the Alps. Subsequently, we compared niche optima and breadths of the diploid and tetraploid cytotypes by combining principal components analysis and kernel smoothing procedures. Comparisons were done separately for the coarse- and fine-grain data sets and for the sympatric, allopatric and total sets of populations. Results: All comparisons indicate that the niches of the two cytotypes differ in optima and/or breadths, but the results vary in important details. The whole-range analysis suggests that differentiation along the temperature gradient is most important. However, sympatric comparisons indicate that this climatic shift was not a direct response to competition with diploid ancestors. Moreover, fine-grained analyses demonstrate a niche contraction of tetraploids, especially in the sympatric range, that goes undetected with coarse-grained data. Main conclusions: Although the niche optima of the two cytotypes differ, separation along ecological gradients was probably less decisive for polyploid establishment than a shift towards facultative apomixis, a particularly effective strategy to avoid minority cytotype exclusion. In addition, our results suggest that coarse-grained analyses overestimate the niche breadths of widely distributed taxa. Niche comparison analyses should hence be conducted at environmental data resolutions appropriate for the organism and question under study.
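A schematic sketch of the comparison machinery described in the Methods, on synthetic data and with illustrative choices of components, bandwidth and summary statistics (not those of the study): environmental descriptors are projected with PCA and each cytotype's occurrence density is kernel-smoothed to read off a niche optimum and a crude breadth measure.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Synthetic environmental descriptors (rows: plots/occurrences, columns: variables).
env_background = rng.normal(size=(800, 5))                        # e.g. Alpine vegetation plots
env_diploid    = rng.normal(loc=[0.5, 0.0, 0, 0, 0], size=(120, 5))
env_tetraploid = rng.normal(loc=[-0.5, 0.3, 0, 0, 0], size=(120, 5))

# 1) Common environmental space built from the background plots.
pca = PCA(n_components=2).fit(env_background)

def niche_summary(occurrences):
    scores = pca.transform(occurrences)
    kde = gaussian_kde(scores.T)                       # kernel-smoothed occurrence density
    optimum = scores[np.argmax(kde(scores.T))]         # densest point ~ niche optimum
    breadth = float(np.sqrt(np.linalg.det(np.cov(scores.T))))  # crude 2D spread measure
    return optimum, breadth

for name, occ in [("diploid", env_diploid), ("tetraploid", env_tetraploid)]:
    opt, br = niche_summary(occ)
    print(name, "optimum:", np.round(opt, 2), "breadth:", round(br, 3))
```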

Relevance:

10.00%

Publisher:

Abstract:

In this paper a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by the conjunction of non-parametric kernel density estimation (which allows the colour behaviour to be estimated) and classical co-occurrence matrix based texture features. Region information is thereby defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image taking both information sources into account. Furthermore, experimental results are shown which demonstrate the performance of the proposed method.
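A small sketch of the two region descriptors the method combines, under illustrative assumptions (toy data, a single co-occurrence offset, manual quantization; not the authors' implementation): a kernel density estimate models a region's colour distribution, and a gray-level co-occurrence matrix yields classical texture statistics.

```python
import numpy as np
from scipy.stats import gaussian_kde

def colour_model(region_pixels):
    """Nonparametric (kernel) density estimate of a region's colour distribution.
    region_pixels: (n, 3) array of colour values (e.g. RGB or L*a*b*)."""
    return gaussian_kde(np.asarray(region_pixels, dtype=float).T)

def cooccurrence_features(gray, offset=(0, 1), levels=8):
    """Co-occurrence matrix for one offset and a few classical texture statistics."""
    q = np.floor(gray.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    C = np.zeros((levels, levels))
    np.add.at(C, (a.ravel(), b.ravel()), 1)
    C /= C.sum()
    i, j = np.indices(C.shape)
    return {"contrast": float(np.sum(C * (i - j) ** 2)),
            "energy": float(np.sum(C ** 2)),
            "homogeneity": float(np.sum(C / (1.0 + np.abs(i - j))))}

# Toy usage on a random "region": colour density of a probe pixel, plus texture features.
rng = np.random.default_rng(4)
pixels = rng.integers(0, 256, size=(500, 3))
patch = rng.integers(0, 256, size=(64, 64))
kde = colour_model(pixels)
print(kde(np.array([[128.0], [128.0], [128.0]])), cooccurrence_features(patch))
```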

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented and, as a new application for distance transforms, they are applied to gray-level image compression. Both new distance transforms are extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels or by weighting the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant, but gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the chessboard distance of the binary image. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
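A compact, chamfer-style reading of the two-pass idea described above (an illustrative sketch, not the author's reference implementation): the local step cost between 8-neighbours is taken here as the absolute gray-level difference plus one, and forward/backward raster sweeps are repeated until the distance map stops changing, mirroring the "more than one iteration round" noted in the abstract.

```python
import numpy as np

def dtocs_like(gray, roi_mask, max_iter=10):
    """Two-pass gray-level distance transform sketch (DTOCS-like).
    gray: 2D gray-level image; roi_mask: True where distances are computed,
    False pixels act as the zero-distance reference set."""
    d = np.where(roi_mask, np.inf, 0.0)
    h, w = gray.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited in a raster scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for _ in range(max_iter):                      # complicated images may need several rounds
        changed = False
        for offsets, rows in ((fwd, range(h)), (bwd, range(h - 1, -1, -1))):
            cols = range(w) if offsets is fwd else range(w - 1, -1, -1)
            for y in rows:
                for x in cols:
                    if not roi_mask[y, x]:
                        continue
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            # Local step cost: gray-level difference plus one (chessboard step).
                            cand = d[ny, nx] + abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                            if cand < d[y, x]:
                                d[y, x] = cand
                                changed = True
        if not changed:
            break
    return d

# Toy example: distances inside a 5x5 region from its border (mask is False outside).
img = np.arange(25).reshape(5, 5) % 7
mask = np.zeros((5, 5), bool)
mask[1:4, 1:4] = True
print(dtocs_like(img, mask))
```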

Relevance:

10.00%

Publisher:

Abstract:

The fractions of dichloromethane extracts of leaves from andiroba (Carapa guianensis - Meliaceae), caapi (Banisteriopsis caapi - Malpighiaceae), cocoa (Theobroma cacao - Sterculiaceae), Brazil nut (Bertholletia excelsa - Lecytidaceae), cupuaçu (Theobroma grandiflorum - Sterculiaceae), marupá (Simaruba amara - Simaroubaceae) and rubber tree (Hevea brasiliensis - Euphorbiaceae) were analyzed by HT-HRGC and HT-HRGC-MS. Esters of homologous series of fatty acids and long-chain alcohols, phytol, amyrins and tocopherols were characterized. The characterization of the compounds was based mainly on mass spectral data and, in addition, on the usual spectroscopic data (¹H and ¹³C NMR, IR).

Relevance:

10.00%

Publisher:

Abstract:

To improve the tannin assay in cashew apple, several parameters were examined, including (1) extraction solvents, (2) the effects of water content and boiling time on the butanol-acid reaction, and (3) the correlation between the vanillin and butanol-acid assays of tannin in cashew apples. Acetone at 50-70% extracted the greatest amount of tannin from cashew apples. The concentration of water in the butanol reagent was adjusted, and the boiling time of the butanol reaction was reduced to 15 min. Tannin from unripe cashew apples was purified on Sephadex LH-20 in order to obtain a tannin standard for the butanol assay. The vanillin assay showed a high correlation with the butanol-acid assay.

Relevance:

10.00%

Publisher:

Abstract:

Social, technological, and economic time series are punctuated by events which are usually assumed to be random, albeit with some hierarchical structure. It is well known that the interevent statistics observed in these contexts differ from the Poissonian profile: they are long-tail distributed, with resting and active periods interwoven. Understanding the mechanisms that generate consistent statistics has therefore become a central issue. The approach we present is taken from the continuous-time random-walk formalism and represents an analytical alternative to the models of nontrivial priority that have recently been proposed. Our analysis also goes one step further by looking at the multifractal structure of the interevent times of human decisions. We analyze here the intertransaction time intervals of several financial markets. We observe that the empirical data exhibit a subtle multifractal behavior. Our model explains this structure by taking the pausing-time density in the form of a superstatistics, where the integral kernel quantifies the heterogeneous nature of the executed tasks. A stretched exponential kernel provides a multifractal profile valid over a certain limited range. A suggested heuristic analytical profile is capable of covering a broader region.
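In generic superstatistical form (an illustrative formalisation; the paper's exact kernel and parametrisation may differ), the pausing-time density is an exponential mixture whose rate is weighted by a kernel f, with a stretched exponential as one concrete choice:

```latex
\psi(t) \;=\; \int_{0}^{\infty} f(\lambda)\,\lambda\, e^{-\lambda t}\, \mathrm{d}\lambda ,
\qquad
\int_{0}^{\infty} f(\lambda)\, \mathrm{d}\lambda \;=\; 1 ,
\qquad
f(\lambda) \;\propto\; \exp\!\left[-\left(\lambda/\lambda_{0}\right)^{\beta}\right],
\quad 0 < \beta < 1 .
```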

Relevance:

10.00%

Publisher:

Abstract:

The production of cashew apple wine aims to reduce the waste in Brazilian cashew production. Since the fermentation of cashew apple produces a good cashew wine, a study of the alcoholic fermentation kinetics of cashew apple and a physico-chemical characterization of the product were carried out. The cashew wine was produced in a stirred batch reactor. The results of the physico-chemical analyses of volatiles, residual sugars, total acidity and pH of the cashew wine showed that their concentrations were within the standard limits established by the Brazilian legislation for fruit wines.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to evaluate the crystal structure of binary mixtures of palm kernel fat and fish oil, before and after chemical and enzymatic interesterification. The crystal structure was analyzed by polarized light microscopy. The addition of fish oil did not change the crystallization characteristics of palm kernel fat; spherulites of types A and B were observed. However, as a result of chemical and enzymatic interesterification, smaller crystals were obtained. There was no difference between chemical and enzymatic interesterification, probably owing to acyl migration in discontinuous processes catalyzed by lipases.

Relevance:

10.00%

Publisher:

Abstract:

The speed of traveling fronts for a two-dimensional model of a delayed reaction-dispersal process is derived analytically and from molecular dynamics simulations. We show that the one-dimensional (1D) and two-dimensional (2D) versions of a given kernel do not always yield the same speed. It is also shown that the speeds of time-delayed fronts may be higher than those predicted by the corresponding non-delayed models. This result is shown for systems with peaked dispersal kernels, which lead to ballistic transport.
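For context on why the 1D and 2D versions of a kernel need not give the same speed, a standard linear-spreading-speed expression for non-delayed, discrete-time reaction-dispersal models is sketched below; the delayed model treated in the paper modifies this, and its exact form may differ.

```latex
v \;=\; \min_{s>0} \frac{1}{s\,T}\,
        \ln\!\Big[ R_{0}\, \widehat{k}_{1}(s) \Big],
\qquad
\widehat{k}_{1}(s) \;=\; \int_{-\infty}^{\infty} k_{1}(x)\, e^{s x}\, \mathrm{d}x ,
```

where R_0 is the net reproductive rate, T the generation time and k_1 the one-dimensional marginal of the dispersal kernel along the direction of propagation. For a radially symmetric 2D kernel this marginal differs from the 1D version of the same kernel, which is why the two front speeds need not coincide; for strongly peaked kernels leading to ballistic transport, the linearized expression itself may no longer apply.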