925 results for Ingredients


Relevance: 10.00%

Abstract:

[EN] The effect of including two different crab meals in diets on fillet quality parameters was investigated in a six-month growth trial with red porgy (Pagrus pagrus). A high quality fish meal and fish oil diet was used as a control (Diet C). Fish meal protein in the control diet was replaced by increasing levels of protein from a river crab meal (Procambarus clarkii) (CR) and a marine crab meal (Chaceon affinis) (CM), at 10% and 20% each. The inclusion of both crab meals in diets, at either the 10% or the 20% substitution level, did not affect the texture quality parameters of the flesh except for adhesiveness, where animals fed CR20 showed the lowest value with respect to those fed Diet C. Compared to the control fish, a reduction in fillet lipid oxidation, as indicated by the TBARS index, was observed for fish fed both crab meal based diets at the higher inclusion level (20%). Increasing dietary levels of the marine crab meal led to an increase in the monoenoic, n-9 and oleic fatty acid content of the fillets. Results indicate that both crab meals used in the present study are suitable as alternative ingredients for red porgy diets in terms of fish flesh quality.

Relevance: 10.00%

Abstract:

[EN] The effect of dietary inclusion of two types of crab meal on growth, feed utilization and skin coloration was investigated in a growth trial with red porgy (Pagrus pagrus) of 233 g initial body weight over a 6-month feeding period. A high quality fish meal and fish oil diet was used as a control (Diet C). Fish meal protein in the control diet was replaced by increasing dietary levels of protein derived from a river crab meal (Procambarus clarkii) (CR) and a marine crab meal (Chaceon affinis) (CM), at 10% and 20% each. Regarding growth, fish fed the CM20 diet showed the highest absolute final weight and percentage of the initial weight. For animals fed the crab meal based diets, colour was better than for fish fed the control diet, with skin redness similar to that of wild specimens. For both ingredients, increasing dietary inclusion was accompanied by an increase in colour saturation, the value obtained for the CR meal being higher than that for the CM meal. The present results indicate that the crab meals used in this study are suitable as partial replacers for fish meal in diets for red porgy.

Relevance: 10.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, within the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. Then the stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers, because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe could be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it helps in the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH-RH correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, ...) are now well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
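The cut-off argument above can be written compactly with a standard textbook balance (a schematic sketch using conventional synchrotron formulas; the numerical coefficients are textbook values, not results of this thesis). Systematic turbulent acceleration at rate γ/τ_acc competes with synchrotron and inverse Compton losses, which scale as γ²:

$$
\left.\frac{d\gamma}{dt}\right|_{\rm acc}\simeq\frac{\gamma}{\tau_{\rm acc}},\qquad
\left.\frac{d\gamma}{dt}\right|_{\rm loss}\simeq-\beta\,\gamma^{2},\qquad
\beta\propto B^{2}+B_{\rm CMB}^{2},\quad
B_{\rm CMB}\simeq 3.2\,(1+z)^{2}\ \mu\mathrm{G},
$$

so the balance gives $\gamma_{\max}\simeq(\beta\,\tau_{\rm acc})^{-1}$ and a synchrotron cut-off frequency

$$
\nu_{c}\simeq\frac{3}{2}\,\gamma_{\max}^{2}\,\frac{eB}{2\pi m_{e}c}
\approx 4.2\times10^{8}
\left(\frac{\gamma_{\max}}{10^{4}}\right)^{2}
\left(\frac{B}{\mu\mathrm{G}}\right)\ \mathrm{Hz},
$$

which rises with acceleration efficiency (shorter τ_acc); this is why high-frequency observations select only the most efficiently accelerating mergers.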

Relevance: 10.00%

Abstract:

Phenol and cresols are a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. At present these primary phenolic building blocks are produced by refining processes from fossil hydrocarbons: 5% of the worldwide production comes from coal (which contains 0.2% phenols) through the distillation of the tar residue after the production of coke, while 95% of the current world production of phenol is obtained by the distillation and cracking of crude oil. In nature, phenolic compounds are present in terrestrial higher plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% phenols) represents a substantial, presently underexploited source of secondary chemical building blocks. These phenolic derivatives are currently used in tens of thousands of tons to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen) and polymers (e.g. poly p-vinylphenol, a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste represents a low cost, abundant raw material (250 million tons per year) which does not subtract land use and processing resources from necessary sustainable food production. The class of phenolic compounds essentially comprises simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids and lignans. As in the case of coke production, the removal of the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes, huge processing and marketing opportunities open up, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients. Following this approach, we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyse the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process made it possible to liquefy up to 60% of the treated matter. The enzymatic treatment was also able to solubilise up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolysed wheat bran, based on adsorption/desorption on the styrene-divinylbenzene weak anion-exchange resin Amberlite IRA 95, was developed. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption working parameters were optimized for the crude enzymatic wheat bran hydrolysate. The extraction process developed had an overall yield of 82% and gave concentrated extracts containing up to 3000 ppm of ferulic acid. The crude enzymatic wheat bran hydrolysate and the concentrated extract were finally used as substrates in a bioconversion process of ferulic acid into vanillin through resting-cell fermentation. The bioconversion process gave vanillin yields of 60-70% within 5-6 hours of fermentation. Our findings are the first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
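To make the mass balance of the process chain concrete, here is a back-of-the-envelope sketch in Python. The per-step yields are the ones quoted above; the ferulic acid content of bran (0.5% w/w) is an assumed, illustrative round number, not a figure from this work:

```python
# Rough mass balance for the wheat bran -> ferulic acid -> vanillin chain.
# Step yields are those quoted in the abstract; the bran ferulic acid
# content (0.5% w/w) is an assumed, illustrative value.

bran_kg = 1000.0            # feed: 1 t of wheat bran
fa_content = 0.005          # assumed ferulic acid content (w/w)
fa_solubilised = 0.30       # enzymatic step: up to 30% of alkali-extractable FA
extraction_yield = 0.82     # resin adsorption/desorption step
bioconversion_yield = 0.65  # vanillin from FA, mid-point of the 60-70% range

# molar mass ratio: vanillin (152.15 g/mol) per ferulic acid (194.18 g/mol)
mw_ratio = 152.15 / 194.18

fa_recovered = bran_kg * fa_content * fa_solubilised * extraction_yield
vanillin_kg = fa_recovered * bioconversion_yield * mw_ratio

print(f"ferulic acid recovered: {fa_recovered:.2f} kg")  # ~1.23 kg
print(f"vanillin produced:      {vanillin_kg:.2f} kg")   # ~0.63 kg
```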

Relevance: 10.00%

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a cornerstone of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers and, as a result, most of them are at an early stage and still belong to a context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented, and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating methodologies: in fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
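The distinction between environment abstractions, topology abstractions and layered descriptions can be illustrated with a small sketch (all class names here are hypothetical illustrations, not SODA's own vocabulary or API):

```python
# Illustrative sketch of the two kinds of environment abstraction named
# above, plus a layered description; names are invented, not SODA's.

class EnvironmentAbstraction:
    """Environment entity encapsulating a function, e.g. a shared message space."""
    def __init__(self, name):
        self.name = name
        self._items = []

    def put(self, item):
        self._items.append(item)

    def take(self):
        return self._items.pop() if self._items else None


class TopologyAbstraction:
    """Environment entity representing a (logical or physical) spatial structure."""
    def __init__(self):
        self._adj = {}  # node -> set of adjacent nodes

    def connect(self, a, b):
        self._adj.setdefault(a, set()).add(b)
        self._adj.setdefault(b, set()).add(a)

    def neighbours(self, node):
        return self._adj.get(node, set())


class Layer:
    """One level of a multi-layered system description (0 = most abstract)."""
    def __init__(self, level, agents, abstractions):
        self.level = level
        self.agents = agents              # agent abstractions at this level
        self.abstractions = abstractions  # environment/topology abstractions
```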

Relevance: 10.00%

Abstract:

CHEMICAL COMMUNICATION IN BLATTARIA: CONTRIBUTION TO THE IMPROVEMENT OF CONTROL TECHNIQUES. The management of cockroach infestations in the urban environment has undergone some changes in recent years, moving to the predominant use of baits, thanks to awareness of the risks connected with the use of spray insecticides. The effectiveness of a bait is determined by the collective performance of its components, including active and inert ingredients, the food attractant and any other attractive odour. This research focused on the behavioural responses of Italian synanthropic cockroaches to semiochemicals and food attractants, with the purpose of evaluating possible practical applications and of contributing to the improvement of control techniques. Behavioural and morphological studies were carried out on Supella longipalpa (F.), a small cockroach that is spreading in Italy. Behavioural assays showed that the fourth and fifth tergites of females are significantly more attractive than other regions of the body. Cuticular pores and ducts ending in glandular structures (observed with a scanning electron microscope, SEM) are present in large numbers on these tergites, suggesting that they could be involved in the production and release of the sexual pheromone. Cockroaches produce an aggregation pheromone that is excreted along with their frass and consists of volatile and non-volatile compounds, mainly amines and steroidal glycosides. The effectiveness of faecal extracts obtained from the excrement of Blattella germanica (L.), Blatta orientalis L., Periplaneta americana (L.) and S. longipalpa was evaluated, at both the intraspecific and interspecific levels, using a Y-tube olfactometer. Bioassays showed that faecal extracts obtained with methanol have good intraspecific attractiveness and, in some cases, also elicited interspecific behavioural responses. A gel was prepared, with physical characteristics conferring good resistance to dehydration, as a potential basis for a new bait; faecal extracts, obtained with methanol from B. germanica and S. longipalpa frass, were then added to the gel. Arena tests showed that the new gel containing faecal extracts is more attractive than some commercial gel formulations used for comparison: it was the only product that attracted 100% of the insects placed in the arenas within 4-5 days. In conclusion, the substances involved in the chemical communication of Blattaria may be able to effectively increase the attractiveness of products for monitoring and controlling cockroaches.

Relevance: 10.00%

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, 50 and more years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer some of them. As a consequence, a huge number of papers are continuously published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second arises when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above mentioned situations. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results.
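For concreteness, here is a minimal sketch of how a Gomory mixed integer (GMI) cut, mentioned above in connection with Chapter 2, is read off a single simplex tableau row. This is the textbook formula, not code from the thesis, and it assumes all nonbasic variables are at their lower bound 0:

```python
from math import floor

def gmi_cut(b, row, is_integer):
    """Gomory mixed integer cut from a tableau row  x_B = b - sum_j a_j x_j.

    Returns coefficients g_j of the valid cut  sum_j g_j x_j >= 1, assuming
    x_B must be integer and all nonbasic x_j are at lower bound 0.
    Textbook GMI formula; illustrative only.
    """
    f0 = b - floor(b)
    assert 0 < f0 < 1, "basic variable already integer: no cut"
    cut = []
    for a, integer in zip(row, is_integer):
        if integer:                      # integer nonbasic variable
            f = a - floor(a)
            g = f / f0 if f <= f0 else (1 - f) / (1 - f0)
        else:                            # continuous nonbasic variable
            g = a / f0 if a >= 0 else -a / (1 - f0)
        cut.append(g)
    return cut

# Example row: x_B = 3.4 - 2.3*x1 - 0.6*x2, with x1 integer, x2 continuous.
print(gmi_cut(3.4, [2.3, 0.6], [True, False]))  # -> [0.75, 1.5]
```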
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general purpose cutting planes) can improve on the branch-and-cut methods proposed in the literature.
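To fix ideas, a schematic sketch of the destroy-and-repair loop described above. All names are invented for this illustration, and the `repair` callback stands in for the MIP-based neighborhood exploration of the thesis (where the reinsertion step is an integer program solved by a general purpose MIP solver):

```python
import random

def destroy_and_repair(routes, cost, repair, iters=100, destroy_frac=0.2, seed=0):
    """Schematic destroy-and-repair loop for a VRP-like problem.

    routes : initial solution, a list of routes (lists of customers)
    cost   : callable scoring a complete solution
    repair : callable reinserting removed customers into a partial solution;
             in the thesis this step solves an integer program, here it is
             an abstract callback.
    """
    rng = random.Random(seed)
    best = routes
    for _ in range(iters):
        # destroy: randomly remove a fraction of the customers
        customers = [c for r in best for c in r]
        removed = rng.sample(customers, max(1, int(destroy_frac * len(customers))))
        partial = [[c for c in r if c not in removed] for r in best]
        # repair: re-insert the removed customers (MIP in the real method)
        candidate = repair(partial, removed)
        if cost(candidate) < cost(best):
            best = candidate
    return best
```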

Relevance: 10.00%

Abstract:

Within the wide range of data that the subject of nutrition offers to historical observation, this investigation focuses on one of the functions that food serves in the social context: to signify cultural identity. In this context, we analyse the ways in which industrially produced pasta attained its status as one of the symbolic forms of twentieth-century Italian food, contributing to a sense of social identity that forms part of the process of nation-building developed during the twentieth century. The nature of the relationship between pasta and Italian food is analysed over a period of almost a century (1886-1984) through a variety of sources: government enquiries, cookery books, gastronomic guides and menus of official dinners. The assemblage of such documents in one study allows the investigation of certain themes across a wide range of gastronomical cultures active within the national borders. In this way, links are made between the production, adoption, reception and dissemination of the ingredients and Italian Unification. This method has made it possible to recover one possible form of historical knowledge of twentieth-century gastronomy and of the experiences by which it was influenced.

Relevance: 10.00%

Abstract:

Bread dough, and particularly wheat dough, due to its viscoelastic behaviour, is probably the most dynamic and complicated rheological system, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide a wealth of information about dough formulation, structure and processing. This explains why dough rheology has been a matter of investigation for several decades. In this research, rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of doughs and final products such as bread. In order to study the structural aspects of the food products, image analysis techniques were used to integrate the information coming from the empirical and fundamental rheological measurements. Evaluation of dough properties was carried out by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a Texture Analyser; small deformation rheological measurements were performed on a controlled stress-strain rheometer; moreover, the structure of the different doughs was observed using image analysis, while bread characteristics were studied using texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view. To this aim, the following materials were prepared and analysed:
- frozen dough prepared without yeast;
- frozen dough and bread made with frozen dough;
- doughs obtained using different fermentation methods;
- doughs made with Kamut® flour;
- dough and bread prepared with the addition of ginger powder;
- final products coming from different bakeries.
The influence of sub-zero storage time on non-fermented and fermented dough viscoelastic performance and on the final product (bread) was evaluated using small deformation and large deformation methods. In general, the longer the sub-zero storage time, the weaker the positive viscoelastic attributes. The effect of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs was investigated using empirical and fundamental analysis, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen in the other types of fermentation. The beneficial action of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed also when the dough was subjected to low temperatures (24 hours and 48 hours at 4°C).
Small deformation oscillatory measurements and large deformation mechanical tests provided useful information on the rheological properties of samples prepared with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb grain characteristics are the major quality attributes of bread products. The different samples analysed, "Coppia Ferrarese", "Pane Comune Romagnolo" and "Filone Terra di San Marino", showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA, which are indicators of the quality of fresh bread, decreased during storage. Using empirical rheological tests we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare them; but since these products are handmade, the differences could be regarded as added value. In conclusion, small deformation (in fundamental units) and large deformation methods played a significant role in monitoring the influence of different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of the formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence final quality (e.g. raw materials, bread-making procedure, and time and temperature of fermentation and baking).
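As an illustration of how the TPA parameters mentioned above (hardness, cohesiveness, springiness) are typically extracted from a two-compression force curve, here is a sketch using the standard definitions; the array names and the time-based springiness proxy are assumptions of this example, not details taken from the thesis:

```python
import numpy as np

def tpa_parameters(t, force, gap_idx):
    """Classic texture profile analysis from a double-compression test.

    t, force : time and force arrays for the whole two-bite test
    gap_idx  : index separating the first and second compression cycles
    Standard definitions: hardness = peak force of bite 1,
    cohesiveness = area2/area1, springiness = contact duration ratio
    (a common proxy at constant crosshead speed).
    """
    t1, f1 = t[:gap_idx], force[:gap_idx]
    t2, f2 = t[gap_idx:], force[gap_idx:]

    hardness = f1.max()
    area1 = np.trapz(np.clip(f1, 0, None), t1)   # positive work, bite 1
    area2 = np.trapz(np.clip(f2, 0, None), t2)   # positive work, bite 2
    cohesiveness = area2 / area1

    dur1 = t1[f1 > 0][-1] - t1[f1 > 0][0]        # contact duration, bite 1
    dur2 = t2[f2 > 0][-1] - t2[f2 > 0][0]        # contact duration, bite 2
    springiness = dur2 / dur1

    gumminess = hardness * cohesiveness
    chewiness = gumminess * springiness
    return dict(hardness=hardness, cohesiveness=cohesiveness,
                springiness=springiness, gumminess=gumminess,
                chewiness=chewiness)
```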

Relevance: 10.00%

Abstract:

Gnocchi is a typical Italian potato-based fresh pasta that can be either homemade or industrially manufactured. The homemade traditional product is consumed fresh on the day it is produced, whereas the industrially manufactured one is vacuum-packed in polyethylene and usually stored under refrigerated conditions. At the industrial level, most kinds of gnocchi are usually produced from potato derivatives (e.g. flakes, dehydrated products or flour) to which soft wheat flour, salt, some emulsifiers and aromas are added. Recently, a novel type of gnocchi emerged on the Italian pasta market, intended to be as similar as possible to the traditional homemade one. It is industrially produced from fresh potatoes as the main ingredient, together with soft wheat flour, pasteurized liquid eggs and salt, and the product undergoes industrial steam cooking and mashing treatments. Neither preservatives nor emulsifiers are included in the recipe. The main aim of this work was to examine the industrial manufacture of gnocchi, in order to improve the quality characteristics of the final product, through the study of the main steps of production, starting from the raw and steam cooked tubers, through the semi-finished materials such as the potato puree and the formulated dough. For this purpose, the enzymatic activity of the raw and steam cooked potatoes, the main characteristics of the puree (colour, texture and starch), the interaction among the ingredients of differently formulated doughs and the basic quality aspects of the final product were investigated. The results obtained in this work indicated that steam cooking affected the analysed enzymes (pectin methylesterase and α- and β-amylases) differently in different tissues of the tuber. PME remained active in the cortex and may therefore affect the texture of cooked potatoes to be used as the main ingredient in the production of gnocchi. The starch degrading enzymes (α- and β-amylases) were inactivated both in the cortex and in the pith of the tuber. The study performed on the potato puree showed that, of the two samples analysed, the product obtained with the dual lower-pressure treatment seemed the more suitable for the production of gnocchi, in terms of its better physicochemical and textural properties. It did not show the aggregation phenomena responsible for the hard lumps which may occur in this kind of semi-finished product. The textural properties of the gnocchi doughs were not influenced by the different formulations as expected. Among the ingredients involved in the preparation of the different samples, soft wheat flour seemed to be the most crucial in affecting the quality features of gnocchi doughs. As a consequence of the interactive effect of the ingredients on the physicochemical and textural characteristics of the different doughs, a uniform and well-defined separation among samples was not obtained. In the comparison of the different kinds of gnocchi, the best physicochemical and textural properties were detected in the sample made with fresh tubers. This was probably due not only to the use of fresh steam cooked potatoes, but also to the pasteurized liquid eggs and to the absence of any kind of emulsifier, additive or preservative.

Relevance: 10.00%

Abstract:

[EN] The aims of the present work were: 1) to develop and validate sensitive, substance-specific methods for the quantitative determination of anionic, non-ionic and amphoteric surfactants and their metabolites in aqueous environmental samples, using high-performance mass spectrometric instruments; 2) to obtain aerobic, polar degradation products of surfactants in a laboratory fixed-bed bioreactor (FBBR) simulating real environmental conditions, whose biocoenosis was derived from surface water; 3) in order to elucidate the degradation mechanism of surfactants, to identify and characterise by mass spectrometry new metabolites obtained in 2), and to follow the primary and subsequent degradation; 4) to obtain information on the input and behaviour of surfactants and their degradation products under different hydrological and climatic conditions through quantitative investigations in wastewater and surface water; 5) to study the behaviour of persistent surfactant metabolites in waterworks treating polluted surface water, and to determine their occurrence in drinking water; 6) to assess possible harmful effects of newly discovered metabolites by means of ecotoxicological bioassays; 7) to demonstrate the environmental relevance of the degradation studies by comparing the field data with the results of the laboratory experiments. The compounds investigated were selected on the basis of their production volume and their novelty on the surfactant market. They comprised the detergent ingredients linear alkylbenzene sulfonates (LAS), the surfactant with the highest production volume; the two non-ionic surfactants alkyl glucamides (AG) and alkyl polyglucosides (APG); and the amphoteric surfactant cocamidopropyl betaine (CAPB). In addition, the polymeric dye transfer inhibitor polyvinylpyrrolidone (PVP) was investigated.

Relevance: 10.00%

Abstract:

Being basic ingredients of numerous daily-life products of significant industrial importance, as well as basic building blocks for biomaterials, charged hydrogels continue to pose a series of unanswered challenges for scientists, even after decades of practical applications and intensive research efforts. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces which renders scientific investigations of their characteristic properties quite difficult. Hence, early on, computer simulations were used to link analytical theory and empirical experiments, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Due to the immense numerical effort, even for high performance supercomputers, system sizes and time scales were rather restricted until recently, and only now has it become possible to also simulate a network of charged macromolecules. This is the topic of the present thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. For this purpose an extensible simulation package for research on soft matter systems, ESPResSo for short, was created, which puts a particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and a consistent parallelization reduced the computation time necessary for solving the equations of motion, even in the case of long-range electrostatics and large numbers of particles, making it possible to tackle even expensive calculations and applications. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles and integrators, while staying easily accessible for newcomers thanks to a Tcl-script steering level controlling the C-implemented simulation core. Numerous analysis routines provide the means to investigate system properties and observables on the fly. Even though analytical theories have agreed on the modelling of networks in past years, our numerical MD simulations show that, even in the case of simple model systems, fundamental theoretical assumptions hold only in a small parameter regime, preventing correct predictions of observables. Applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium in good solvent and close to the Theta-point by introducing appropriate model modifications. This became possible by enhancing known simple scaling arguments with the components deemed crucial in our detailed study, through which a generalized model could be constructed. In this way, agreement between the predicted final system volume of swollen polyelectrolyte gels and the results of the computer simulations could be shown over the entire investigated range of parameters, for different network sizes, charge fractions and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the amount of swelling based only on the system parameters used. Without the need for measured observables as input, minimizing the free energy alone already determines the equilibrium behaviour.
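The free-energy argument invoked here can be sketched in its simplest textbook form (a schematic scaling balance between counterion osmotic pressure and chain elasticity, not the generalized model developed in the thesis): for a gel with $N_c$ monovalent counterions and $N_{ch}$ strands of $N$ monomers and bond length $b$,

$$
F(V)\approx
\underbrace{N_{c}k_{B}T\left[\ln\!\left(\frac{N_{c}}{V}\right)-1\right]}_{F_{\rm ion}}
+\underbrace{\frac{3}{2}\,N_{ch}\,k_{B}T\,\frac{R^{2}(V)}{N b^{2}}}_{F_{\rm el}},
\qquad
\left.\frac{\partial F}{\partial V}\right|_{V_{\rm eq}}=0,
$$

with the strand end-to-end distance scaling as $R\propto V^{1/3}$. Equating the resulting pressures, $\Pi_{\rm ion}\simeq k_{B}T\,N_{c}/V=\Pi_{\rm el}$, fixes the equilibrium volume $V_{\rm eq}$ from the system parameters alone.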
In poor solvent the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of the like-charged monomers and drives the polyelectrolytes towards collapse. Depending on the chosen parameters a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl-necklace structures. This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also theoretically predicted, could be detected for the first time for networks as well. An analysis of the total structure factors confirmed the first evidence for the existence of such structures found in experimental results.

Relevance: 10.00%

Abstract:

[EN] Summary: The aim of the present work was to apply pharmaceutical-technological means of sustained release to selected anti-asthmatic drugs for pulmonary application. To this end, microparticles were to be prepared and characterised pharmaceutically and biopharmaceutically. The glucocorticoid budesonide and the β2-sympathomimetic salbutamol, in the form of its base and its salt, were used as model substances. The selection was made according to physicochemical (lipophilicity, molecular weight) and therapeutic (half-life of the effect, application frequency) criteria. Polymer-based microparticles enable controlled release of the drugs over a predetermined period. Physiologically safe excipients were selected (the polylactide R 202H and the poly(lactide-co-glycolides) RG 502H and RG 752-S), with different co-glycolide contents and different molecular weights, which are suitable in principle for delaying release and have already proven themselves in parenteral application. Spray drying is described as a suitable pharmaceutical-technological process for the preparation of microparticles in the 1-10 micrometre size range which encapsulate the drug with the highest possible loading. The spray-dried powders were characterised physicochemically by scanning electron microscopy (morphology), laser diffraction (particle size distribution), DSC and X-ray powder diffraction (thermal behaviour), and low-temperature nitrogen adsorption (specific surface area). In addition, the drug loading of the spray-dried polymer microparticles was determined by HPLC. The biopharmaceutical characterisation of the spray-dried powders comprised the in vitro release kinetics and the stability of the microparticles. Additionally, cell culture experiments and in vivo experiments in mice were performed to test the effects of the spray-dried microparticles and of the excipient with regard to sustained release. In the in vivo experiments, airway resistance and the prolongation of the expiratory phase (penh) were chosen as parameters for an anti-asthmatic effect. The lung lavage fluid was also examined. The results show that spray drying makes it possible to prepare polymer microparticles which, owing to their particle size of d50 ≤ 5.8 µm, are able to reach the lower regions of the lung. The morphology of the microparticles depends on the product being sprayed. Thermodynamically and by X-ray powder diffraction the products are amorphous, but they remain stable in this state over long periods. The recovery of the drug amount employed in the spray-dried polymer microparticles, and the release experiments characterising the sustained-release properties of the polymers used, show that spray drying budesonide and salbutamol with these polymers makes it possible to prepare sustained-release microparticles. The recovery of budesonide and salbutamol in the spray-dried polymer microparticles corresponds almost exactly to the amount employed; for salbutamol sulfate this is not the case.
In cell culture experiments with the murine cell line RAW 264.7 there were indications that, at concentrations of 10^-6 M and 10^-8 M, the down-regulation of the IL-6 concentration by the spray embedding of 9.1% budesonide with PLGA was more pronounced than with unencapsulated budesonide. In addition, in vivo experiments with intranasal and intraperitoneal administration were performed, comparing the budesonide-polymer spray embedding with unencapsulated budesonide. After intraperitoneal administration, the budesonide spray embedding had the best effects with regard to the suppression of penh and of airway resistance, even at increasing methacholine concentrations. The analysis of the lung lavage fluid very clearly shows the down-regulation of the IL-6 concentration in the lung by the budesonide spray embedding. Preparations are currently being made to test a device capable of generating a microspray, so that intratracheal administration would be possible.

Relevance: 10.00%

Abstract:

Deep convection by pyro-cumulonimbus clouds (pyroCb) can transport large amounts of forest fire smoke into the upper troposphere and lower stratosphere. Here, results from numerical simulations of such deep convective smoke transport are presented. The structure, shape and injection height of the pyroCb simulated for a specific case study are in good agreement with observations. The model results confirm that substantial amounts of smoke are injected into the lower stratosphere. Small-scale mixing processes at the cloud top result in a significant enhancement of smoke injection into the stratosphere. Sensitivity studies show that the release of sensible heat by the fire plays an important role in the dynamics of the pyroCb. Furthermore, the convection is found to be very sensitive to the background meteorological conditions. While the abundance of aerosol particles acting as cloud condensation nuclei (CCN) has a strong influence on the microphysical structure of the pyroCb, the CCN effect on the convective dynamics is rather weak. The release of latent heat dominates the overall energy budget of the pyroCb, but since most of the cloud water originates from moisture entrained from the background atmosphere, the fire-released moisture makes only a minor contribution to the convection dynamics. Sufficient fire heating, favourable meteorological conditions, and small-scale mixing processes at the cloud top are identified as the key ingredients for troposphere-to-stratosphere transport by pyroCb convection.
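The dominance of latent heat can be made plausible with a textbook estimate (illustrative numbers, not values from the simulations): condensing a mixing ratio $q_c$ of water vapour warms an air parcel by

$$
\Delta T \simeq \frac{L_v\,q_c}{c_p}
\approx \frac{(2.5\times10^{6}\ \mathrm{J\,kg^{-1}})\,(10^{-3})}{1005\ \mathrm{J\,kg^{-1}\,K^{-1}}}
\approx 2.5\ \mathrm{K}
$$

per gram of condensate per kilogram of air, so the few g/kg of water condensed from entrained background moisture can release far more energy aloft than the fire's initial sensible heat pulse, consistent with the budget described above.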

Relevance: 10.00%

Abstract:

The role of the human gut microbiota in the host's health has been widely studied in the last decade. Notably, it has recently been demonstrated that diet and nutritional status are among the most important modifiable determinants of human health, through a plethora of presumptive mechanisms among which microbiota-mediated processes are thought to have a relevant role. At present, probiotics and prebiotics represent a useful dietary approach for influencing the composition and activity of the human gut microbial community. The present study is composed of two main sections, aimed at elucidating the probiotic potential of the yeast strain K. marxianus B0399, as well as the promising putative prebiotic activity ascribable to four different flours naturally rich in dietary fibre. Here, by in vitro studies, we demonstrated that K. marxianus B0399 possesses a number of beneficial and strain-specific properties desirable for a microorganism considered for application as a probiotic. Subsequently, we investigated the impact of a novel probiotic yoghurt containing B. animalis subsp. lactis Bb12 and K. marxianus B0399 on the gut microbiota of a cohort of subjects suffering from IBS and enrolled in an in vivo clinical study. We demonstrated that the beneficial effects described for the probiotic yoghurt were not associated with significant modifications of the human intestinal microbiota. Additionally, using a colonic model system, we investigated the impact of different flours (wholegrain rye and wheat, chickpeas and lentils 50:50, and barley milled grains) on the intestinal microbiota composition and metabolomic output, combining molecular and cellular analyses with an NMR metabolomics approach. We demonstrated that each tested flour produced peculiar and positive modulations of the intestinal microbiota composition and of its small-molecule metabolome, thus supporting the utilisation of these ingredients in the development of a variety of potentially prebiotic food products aimed at improving human health.