15 results for Hold-up problem

in CentAUR: Central Archive University of Reading - UK


Relevance: 100.00%

Abstract:

Time-dependent gas hold-up generated in the 0.3 and 0.6 m diameter vessels using high-viscosity castor oil and carboxymethyl cellulose (CMC) solution was compared on the basis of impeller speed (N) and gas velocity (V_G). Two types of hold-up were distinguished: the hold-up due to tiny bubbles (ε_ft) and the total hold-up (ε_f), which included both large and tiny bubbles. It was noted that vessel diameter (i.e. the scale of operation) significantly influences (i) the trends and values of ε_f and ε_ft, and (ii) the values of τ (a constant reflecting the time dependency of hold-up). The results showed that a scale-independent correlation for gas hold-up of the form ε_f or ε_ft = A (N or P_G/V)^a (V_G)^b, where a and b are positive constants, is not appropriate for viscous liquids. This warrants further investigation into the effect of vessel diameter on gas hold-up in impeller-agitated high-viscosity liquids (μ or μ_a > 0.4 Pa s).
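The power-law correlation form discussed above can be sketched numerically. The constants below are purely illustrative assumptions, not fitted values from the study; indeed the abstract's point is that no single set of constants carries across vessel scales for viscous liquids.

```python
# Sketch of the power-law hold-up correlation epsilon = A * N**a * V_G**b.
# A, a, b are hypothetical per-scale fits, chosen only for illustration.

def gas_holdup(N, V_G, A, a, b):
    """Correlation form: epsilon = A * N**a * V_G**b (N in 1/s, V_G in m/s)."""
    return A * N ** a * V_G ** b

# Hypothetical constants fitted separately for each vessel scale:
fit_03m = dict(A=0.02, a=0.8, b=0.5)   # 0.3 m diameter vessel (assumed)
fit_06m = dict(A=0.05, a=0.6, b=0.4)   # 0.6 m diameter vessel (assumed)

# Same operating point, different scales -> different predicted hold-up,
# which is why a single scale-independent correlation fails here.
eps_small = gas_holdup(N=5.0, V_G=0.01, **fit_03m)
eps_large = gas_holdup(N=5.0, V_G=0.01, **fit_06m)
```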

Relevance: 80.00%

Abstract:

Bubble inclusion is one of the fastest growing operations practiced in the food industry. A variety of aerated foods is currently available in supermarkets, and newer products are emerging all the time. This paper aims to combine knowledge on chocolate aeration with studies performed on bubble formation and dispersion characteristics. More specifically, we have investigated bubble formation induced by applying vacuum. Experimental methods to determine gas hold-up (the volume fraction of air), bubble section distributions along specific planes, and chocolate rheological properties are presented. This study concludes that decreasing pressures elevate gas hold-up values, owing to an increase in the number of bubble nuclei formed and the release of a greater volume of dissolved gases. Furthermore, bubbles are observed to be larger at lower pressures for a set amount of gas, because the internal pressure needs to be in equilibrium with the surrounding pressure. Temperature-induced changes to the properties of the chocolate have less of an effect on bubble formation. On the other hand, when different fats and emulsifiers were added to a standard chocolate recipe, milk fat was found to significantly increase the gas hold-up values and the mean bubble-section diameters. It is hypothesized that this behavior is related to the way milk fats, which contain different fatty acids from cocoa butter, crystallize and influence the setting properties of the final product. It is highlighted that apparent viscosity at low shear rates, as well as setting behavior, plays an important role in bubble formation and entrainment.
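As a rough illustration of the pressure-equilibrium argument above (our own sketch, not the paper's model): treating the trapped gas in a bubble as an isothermal ideal gas, Boyle's law P·V = const implies the bubble radius grows as the surrounding pressure falls.

```python
# Isothermal ideal-gas sketch: P1 * V1 = P2 * V2 for a spherical bubble,
# so r2 = r1 * (P1 / P2)**(1/3). Pressures and radii below are illustrative.

def bubble_radius(r_ref, p_ref, p):
    """Radius of a spherical bubble after an isothermal pressure change."""
    return r_ref * (p_ref / p) ** (1.0 / 3.0)

# A 0.5 mm bubble formed at atmospheric pressure, re-equilibrated under
# vacuum at 25 kPa (hypothetical values), grows to roughly 0.8 mm.
r = bubble_radius(r_ref=0.5e-3, p_ref=101_325.0, p=25_000.0)
```

Surface tension (the Laplace pressure term) is neglected here; for bubbles at this scale in viscous chocolate it is a second-order correction.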

Relevance: 80.00%

Abstract:

Bubbles impart a unique texture, chew, and mouth-feel to foods. However, little is known about the relationship between the structure of such products and consumer response in terms of mouth-feel and eating experience. The objective of this article is to investigate the sensory properties of 4 types of bubble-containing chocolates, produced using different gases: carbon dioxide, nitrogen, nitrous oxide, and argon. The structure of these chocolates was characterized in terms of (1) gas hold-up values, determined by density measurements, and (2) bubble size distribution, measured by image analysis of X-ray microtomograph sections. Bubble size distributions were obtained by measuring bubble volumes after reconstructing 3D images from the tomographic sections. A sensory study was undertaken by a nonexpert panel of 20 panelists, and their responses were analyzed using qualitative descriptive analysis (QDA). The results show that chocolates made from the 4 gases could be divided into 2 groups on the basis of bubble volume and gas hold-up: the samples produced using carbon dioxide and nitrous oxide had a distinctly higher gas hold-up and contained larger bubbles than those produced using argon and nitrogen. The sensory study also demonstrated that chocolates made with the latter gases were perceived to be harder, less aerated, slow to melt in the mouth, and different in overall flavor intensity. These products were further found to be creamier than the chocolates made using carbon dioxide and nitrous oxide; the latter samples also showed a higher intensity of cocoa flavor.
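The density-based determination of gas hold-up mentioned above can be sketched as follows. The relation is standard (the gas phase contributes negligible mass), but the density values in the example are illustrative assumptions, not measurements from the study.

```python
# Gas hold-up from density: an aerated sample of density rho_aerated made
# from a non-aerated matrix of density rho_solid has air volume fraction
# phi = 1 - rho_aerated / rho_solid (the mass of the gas is neglected).

def gas_holdup_from_density(rho_aerated, rho_solid):
    """Volume fraction of gas in an aerated sample, from two densities."""
    if rho_solid <= 0:
        raise ValueError("solid density must be positive")
    return 1.0 - rho_aerated / rho_solid

# Illustrative numbers: solid chocolate ~1270 kg/m^3, aerated sample
# ~1000 kg/m^3, giving a hold-up of about 0.21 (i.e. ~21% air by volume).
holdup = gas_holdup_from_density(1000.0, 1270.0)
```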

Relevance: 80.00%

Abstract:

This article examines the determinants of creditor concentration. The empirical evidence presented supports the proposition of Bolton and Scharfstein (1996) that, for negotiation reasons, high-quality borrowers tend to borrow from multiple sources, and runs contrary to the theoretical prediction of Bris and Welch (2005). This finding implies the existence of hold-up problems in the financing of small businesses, where conveying information between lenders is difficult. It is further supported by the evidence that dispersed bank relationships are associated with longer relationship histories and closer physical distance to lenders.

Relevance: 30.00%

Abstract:

In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (Preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the operator ∇·γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.

Relevance: 30.00%

Abstract:

CVD is a common killer in both the Western and the developing world. It is a multifactorial disease influenced by many environmental and genetic factors. Although public health advice to date has principally taken the form of prescribed population-based recommendations, this approach has been surprisingly unsuccessful in reducing CVD risk. This outcome may be explained, in part, by the extreme variability between individuals in response to dietary manipulations, and by interactions between diet and an individual's genetic background, which are defined by the term 'nutrigenetics'. The shift towards personalised nutritional advice is a very attractive proposition: in principle, an individual could be genotyped and given dietary advice specifically tailored to their genetic make-up. Evidence-based research into interactions between fixed genetic variants, nutrient intake and biomarkers of CVD risk is increasing, but still limited. The present paper reviews the evidence for interactions between dietary fat and three common polymorphisms in the apoE, apoAI and PPARγ genes. Increased knowledge of how these and other genes influence dietary response should improve the understanding of personalised nutrition. While targeted dietary advice may have considerable potential for reducing CVD risk, the ethical issues associated with its routine use need careful consideration.

Relevance: 30.00%

Abstract:

A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, against an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II for larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and calculates good approximate Pareto sets for the larger instances tested. The fronts calculated by KES are shown to be superior to the NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast, owing to its low complexity.
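For context, a minimal sketch of the Pareto-dominance bookkeeping that underlies comparisons like those above (our illustration; this is not the KES algorithm itself). Solutions are objective vectors, e.g. (weight, cost) of candidate spanning trees, with both objectives minimized.

```python
# Pareto dominance and front extraction for a bi-objective minimization
# problem. Solutions are tuples of objective values (lower is better).

def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in solutions
            if not any(dominates(q, p) for q in solutions if q != p)]

# Illustrative objective vectors for five candidate solutions:
points = [(3, 9), (4, 7), (6, 6), (5, 8), (8, 2)]
front = pareto_front(points)   # (5, 8) is dominated by (4, 7)
```

The "true Pareto front" the abstract mentions is exactly this non-dominated set computed over the complete (exhaustively enumerated) solution space.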

Relevance: 30.00%

Abstract:

Tobacco addiction represents a major public health problem, and most addicted smokers take up the habit during adolescence. We need to know why. With the aim of gaining a better understanding of the meanings that smoking and tobacco addiction hold for young people, 85 focused interviews were conducted with adolescents from economically deprived areas of Northern Ireland. By adopting a qualitative approach within the community rather than the school context, the adolescents were given the opportunity to express their views freely and in confidence. Children seem to differentiate conceptually between child smoking and adult smoking. Whereas adults smoke to cope with life, and are thus perceived by children as lacking control over their consumption, child smoking is motivated by attempts to achieve the status of being cool and hard, and to gain group membership. Adults have personal reasons for smoking, while child smoking is profoundly social. Adults are perceived as dependent on nicotine, and addiction is at the core of the children's understanding of adult smoking. Child smoking, on the other hand, is seen as oriented around social relations, so that addiction is less relevant. These ideas leave young people vulnerable to nicotine addiction. It is clearly important that health promotion efforts seek to understand, and take into account, the actions of children within the context of their own world-view in order to secure their health.

Relevance: 30.00%

Abstract:

Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset, in the form of classification rules, to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
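A much-simplified sketch of the core Prism step described above: greedily specialize a rule for one target class by repeatedly adding the attribute-value test with the highest precision on the instances still covered, until the rule covers only target-class instances. This is our illustration of the sequential idea, not the paper's distributed implementation.

```python
# One Prism-style rule induction step for a single target class.
# instances: list of (attribute_dict, class_label) pairs.

def induce_rule(instances, target):
    """Greedily build a conjunction of attribute-value tests for `target`."""
    rule, covered = {}, list(instances)
    while covered and any(lbl != target for _, lbl in covered):
        best, best_prec = None, -1.0
        for attrs, _ in covered:
            for a, v in attrs.items():
                if a in rule:          # each attribute tested at most once
                    continue
                sub = [(x, l) for x, l in covered if x.get(a) == v]
                prec = sum(l == target for _, l in sub) / len(sub)
                if prec > best_prec:   # pick the most precise test
                    best, best_prec = (a, v), prec
        if best is None:               # no attribute left to specialize on
            break
        rule[best[0]] = best[1]
        covered = [(x, l) for x, l in covered if x.get(best[0]) == best[1]]
    return rule

# Tiny illustrative dataset (hypothetical):
data = [({"outlook": "sunny", "windy": "no"},  "play"),
        ({"outlook": "sunny", "windy": "yes"}, "stay"),
        ({"outlook": "rain",  "windy": "no"},  "stay")]
rule = induce_rule(data, "play")
```

Full Prism would repeat this, removing covered instances and inducing further rules for each class; the modular structure (one rule at a time) is what makes the per-class work easy to distribute.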

Relevance: 30.00%

Abstract:

In a series of papers, Killworth and Blundell have proposed to study the effects of a background mean flow and topography on Rossby wave propagation by means of a generalized eigenvalue problem formulated in terms of the vertical velocity, obtained from a linearization of the primitive equations of motion. However, it has been known for a number of years that this eigenvalue problem contains an error, which Killworth was prevented from correcting himself by his unfortunate passing, and whose correction is therefore taken up in this note. Here, the author shows in the context of quasigeostrophic (QG) theory that the error can ultimately be traced to the fact that the eigenvalue problem for the vertical velocity is fundamentally a nonlinear one (the eigenvalue appears in both the numerator and the denominator), unlike that for the pressure. The reason this nonlinear term is lacking in the Killworth and Blundell theory is the neglect of the depth dependence of a depth-dependent term. This nonlinear term is shown, on idealized examples, to alter significantly the Rossby wave dispersion relation in the high-wavenumber regime, but is otherwise irrelevant in the long-wave limit, in which case the eigenvalue problems for the vertical velocity and the pressure are both linear. In the general dispersive case, however, one should first solve the generalized eigenvalue problem for the vertical structure of the pressure and, if needed, diagnose the vertical structure of the vertical velocity from it.

Relevance: 30.00%

Abstract:

The response of the Southern Ocean to a repeating seasonal cycle of ozone loss is studied in two coupled climate models and found to comprise both fast and slow processes. The fast response is similar to the inter-annual signature of the Southern Annular Mode (SAM) on Sea Surface Temperature (SST), on to which the ozone-hole forcing projects in the summer. It comprises enhanced northward Ekman drift inducing negative summertime SST anomalies around Antarctica, earlier sea ice freeze-up the following winter, and northward expansion of the sea ice edge year-round. The enhanced northward Ekman drift, however, results in upwelling of warm waters from below the mixed layer in the region of seasonal sea ice. With sustained bursts of westerly winds induced by ozone depletion, this warming from below eventually dominates over the cooling from the anomalous Ekman drift. The resulting slow-timescale response (years to decades) leads to warming of SSTs around Antarctica and ultimately a reduction in sea-ice cover year-round. This two-timescale behavior - rapid cooling followed by slow but persistent warming - is found in the two coupled models analysed, one with an idealized geometry, the other a complex global climate model with realistic geometry. Processes that control the timescale of the transition from cooling to warming, and their uncertainties, are described. Finally, we discuss the implications of our results for rationalizing previous studies of the effect of the ozone hole on SST and sea-ice extent.

Relevance: 30.00%

Abstract:

Advances in hardware technologies make it possible to capture and process data in real-time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) develops data mining algorithms that allow us to analyse these continuous streams of data in real-time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if the application requires little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. Problems of scalability to Big Data of traditional data mining algorithms for static (non-streaming) datasets have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
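A minimal sketch of KNN over a sliding window, the kind of adaptive stream classifier the paragraph above takes as its basis (our illustration; the paper's parallelisation is not reproduced here). The bounded window is what provides real-time adaptation: old instances fall out as new ones arrive.

```python
# Sliding-window KNN for stream classification. The deque's maxlen bounds
# memory and implicitly forgets stale concepts as the stream evolves.
from collections import deque

class StreamKNN:
    def __init__(self, k=3, window=1000):
        self.k = k
        self.window = deque(maxlen=window)   # (features, label) pairs

    def learn(self, x, label):
        """Absorb a labelled instance; the oldest one is evicted when full."""
        self.window.append((x, label))

    def predict(self, x):
        """Majority vote among the k nearest instances in the current window."""
        dist = lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a, x))
        nearest = sorted(self.window, key=lambda item: dist(item[0]))[:self.k]
        votes = {}
        for _, lbl in nearest:
            votes[lbl] = votes.get(lbl, 0) + 1
        return max(votes, key=votes.get)

clf = StreamKNN(k=3, window=100)
for x, y in [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"),
             ((5, 6), "b"), ((1, 0), "a")]:
    clf.learn(x, y)
pred = clf.predict((0.5, 0.5))   # "a"
```

A parallel variant would partition the window (or the distance computations) across workers and merge the per-partition nearest neighbours before voting.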

Relevance: 30.00%

Abstract:

We establish a general framework for a class of multidimensional stochastic processes over [0,1] under which, with probability one, the signature (the collection of iterated path integrals in the sense of rough paths) is well defined and determines the sample paths of the process up to reparametrization. In particular, by using the Malliavin calculus, we show that our method applies to a class of Gaussian processes including fractional Brownian motion with Hurst parameter H > 1/4, the Ornstein–Uhlenbeck process and the Brownian bridge.
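To make the object concrete: a minimal sketch (ours, under the usual rough-path conventions) computing the first two signature levels of a piecewise-linear path via Chen's relation. Level 1 is the total increment; level 2 collects the iterated integrals, whose antisymmetric part is the Lévy area.

```python
# First two levels of the path signature of a piecewise-linear path.
# level1[i] = integral of dX^i; level2[i][j] = iterated integral of dX^i dX^j.

def signature_levels_1_2(points):
    """points: list of d-dimensional tuples sampled along the path."""
    d = len(points[0])
    s1 = [0.0] * d
    s2 = [[0.0] * d for _ in range(d)]
    for p, q in zip(points, points[1:]):
        delta = [qi - pi for pi, qi in zip(p, q)]
        for i in range(d):
            for j in range(d):
                # Chen's relation for appending one linear segment:
                # new cross terms plus the segment's own 1/2 * delta_i * delta_j
                s2[i][j] += s1[i] * delta[j] + 0.5 * delta[i] * delta[j]
        for i in range(d):
            s1[i] += delta[i]
    return s1, s2

# An L-shaped path: right one unit, then up one unit.
s1, s2 = signature_levels_1_2([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
# s1 == [1.0, 1.0]; Levy area = (s2[0][1] - s2[1][0]) / 2 == 0.5
```

Reversing the order of the two segments leaves level 1 unchanged but flips the Lévy area, which is exactly why the signature can distinguish paths with identical endpoints.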