924 results for Alternative food systems
Abstract:
Fraud is a global problem that has demanded increasing attention owing to the rapid expansion of modern technology and communication. When statistical techniques are used to detect fraud, a critical factor is whether a fraud detection model is accurate enough to correctly classify a case as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining predicted values from models fitted to several replicated datasets and then to combine them into a single predictive classification in order to improve accuracy. In this paper we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors classification. Via a large simulation study and various real datasets, we found that probabilistic networks are a strong modeling option with high predictive capacity, and that they gain markedly from the bagging procedure when compared with traditional techniques. (C) 2012 Elsevier Ltd. All rights reserved.
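The resample-fit-vote loop described above can be sketched in a few lines. The base learner here is a deliberately trivial one-feature threshold rule standing in for the paper's k-dependence probabilistic networks, and the data are synthetic; this is only an illustration of the bagging procedure, not the authors' method.

```python
import numpy as np

def bagging_predict(X, y, X_new, n_boot=25, rng=None):
    # Bagging sketch: fit a trivial threshold classifier to each
    # bootstrap replicate, then combine the replicates' predictions
    # by majority vote.  (The paper's base learners are k-dependence
    # probabilistic networks; a threshold rule stands in for them.)
    rng = np.random.default_rng(rng)
    n = len(y)
    votes = np.zeros(len(X_new))
    used = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample n cases with replacement
        Xb, yb = X[idx], y[idx]
        if not (yb == 0).any() or not (yb == 1).any():
            continue                         # replicate missed a class entirely
        # base learner: midpoint between the two class means of the resample
        thr = (Xb[yb == 0].mean() + Xb[yb == 1].mean()) / 2.0
        votes += (X_new > thr)
        used += 1
    return (votes / used > 0.5).astype(int)  # majority vote across replicates
```

With well-separated classes, every replicate votes the same way; the benefit of bagging shows up when the base learner is unstable on resampled data.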
Abstract:
OBJECTIVE: To describe and compare three alternative methods for controlling classical friction: self-ligating brackets (SLB), special brackets (SB) and special elastomeric ligatures (SEB). METHODS: The study compared the Damon MX, Smart Clip, In-Ovation and Easy Clip self-ligating bracket systems, the special Synergy brackets, and Morelli twin brackets with special 8-shaped elastomeric ligatures. New and used Morelli brackets with new and used elastomeric ligatures served as controls. All brackets had 0.022 x 0.028-in slots. 0.014-in nickel-titanium and 0.019 x 0.025-in stainless steel wires were tied to first-premolar steel brackets using each archwire ligation method and pulled by an Instron machine at a speed of 0.5 mm/minute. Prior to the mechanical tests, binding in the device was ruled out. Statistical analysis consisted of the Kruskal-Wallis test and multiple non-parametric comparisons at a 1% significance level. RESULTS: When a 0.014-in archwire was employed, all ligation methods exhibited classical friction forces close to zero, except Morelli brackets with new and used elastomeric ligatures, which displayed 64 and 44 cN, respectively. When a 0.019 x 0.025-in archwire was employed, all ligation methods exhibited values close to zero, except the In-Ovation brackets, which yielded 45 cN, and the Morelli brackets with new and used elastomeric ligatures, which displayed 82 and 49 cN, respectively. CONCLUSIONS: The Damon MX, Easy Clip, Smart Clip and Synergy bracket systems and the 8-shaped ligatures proved equally effective for controlling classical friction with 0.014-in nickel-titanium and 0.019 x 0.025-in steel archwires, while the In-Ovation was efficient with 0.014-in archwires but, with 0.019 x 0.025-in archwires, exhibited friction similar to that of conventional brackets with used elastomeric ligatures.
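The Kruskal-Wallis statistic used in the analysis above is computed directly from ranks. The friction readings below are hypothetical stand-ins (in cN) chosen only to show the calculation; the sketch assumes no tied values and omits the tie correction.

```python
import numpy as np

def kruskal_h(*groups):
    # Kruskal-Wallis H statistic, no tie correction (assumes all
    # observations are distinct).
    data = np.concatenate([np.asarray(g, float) for g in groups])
    ranks = data.argsort().argsort() + 1.0   # ranks 1..N
    N = len(data)
    h = 0.0
    start = 0
    for g in groups:
        r = ranks[start:start + len(g)]      # this group's ranks
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (N * (N + 1)) * h - 3.0 * (N + 1)

# hypothetical classical-friction readings (cN) for three ligation methods
new_elastomeric  = [60, 66, 63, 65, 64]
used_elastomeric = [42, 47, 44, 45, 43]
self_ligating    = [1, 0, 3, 2, 4]
h = kruskal_h(new_elastomeric, used_elastomeric, self_ligating)
```

A large H here simply reflects that the three invented groups occupy non-overlapping rank blocks; the study's own comparisons were made at the 1% level with non-parametric multiple comparisons.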
Abstract:
Modern food production is a complex, globalized system in which what we eat and how it is produced are increasingly disconnected. This thesis examines some of the ways in which global trade has changed the mix of inputs to food and feed, and how this affects food security and our perceptions of sustainability. One useful indicator of the ecological impact of trade in food and feed products is the Appropriated Ecosystem Areas (ArEAs), which estimates the terrestrial and aquatic areas needed to produce all the inputs to particular products. The method is introduced in Paper I and used to calculate and track changes in imported subsidies to Swedish agriculture over the period 1962-1994. In 1994, Swedish consumers needed agricultural areas outside their national borders to satisfy more than a third of their food consumption needs. The method is then applied to Swedish meat production in Paper II to show that the term “Made in Sweden” is often a misnomer. In 1999, almost 80% of manufactured feed for Swedish pigs, cattle and chickens was dependent on imported inputs, mainly from Europe, Southeast Asia and South America. Paper III examines ecosystem subsidies to intensive aquaculture in two nations: shrimp production in Thailand and salmon production in Norway. In both countries, aquaculture was shown to rely increasingly on imported subsidies. The rapid expansion of aquaculture turned these countries from fishmeal net exporters to fishmeal net importers, increasingly using inputs from the Southeastern Pacific Ocean. As the examined agricultural and aquacultural production systems became globalized, levels of dependence on other nations’ ecosystems, the number of external supply sources, and the distance to these sources steadily increased. Dependence on other nations is not problematic, as long as we are able to acknowledge these links and sustainably manage resources both at home and abroad. 
However, ecosystem subsidies are seldom recognized or made explicit in national policy or economic accounts. Economic systems are generally not designed to receive feedbacks when the status of remote ecosystems changes, much less to respond in an ecologically sensitive manner. Papers IV and V discuss the problem of “masking” of the true environmental costs of production for trade. One of our conclusions is that, while the ArEAs approach is a useful tool for illuminating environmentally-based subsidies in the policy arena, it does not reflect all of the costs. Current agricultural and aquacultural production methods have generated substantial increases in production levels, but if policy continues to support the focus on yield and production increases alone, taking the work of ecosystems for granted, vulnerability can result. Thus, a challenge is to develop a set of complementary tools that can be used in economic accounting at national and international scales that address ecosystem support and performance. We conclude that future resilience in food production systems will require more explicit links between consumers and the work of supporting ecosystems, locally and in other regions of the world, and that food security planning will require active management of the capacity of all involved ecosystems to sustain food production.
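As a rough illustration of the ArEAs-style accounting discussed above, the area appropriated by a product can be approximated by dividing each input quantity by the yield of the ecosystem supplying it and summing. The feed recipe and yields below are invented for the example and are not figures from the papers.

```python
def appropriated_area(inputs):
    # Simplified ArEAs-style accounting: for each input to a product,
    # the appropriated ecosystem area is (tonnes used) divided by the
    # (tonnes per hectare) the source ecosystem yields.
    return sum(tonnes / yield_t_per_ha
               for tonnes, yield_t_per_ha in inputs.values())

feed_inputs = {                            # hypothetical pig-feed recipe
    "barley (domestic)":   (0.60, 4.0),    # 0.60 t used, 4.0 t/ha yield
    "soy meal (imported)": (0.25, 2.5),
    "fishmeal (imported)": (0.05, 0.8),    # aquatic area via fish yield
}
area = appropriated_area(feed_inputs)      # hectares per tonne of feed
imported = (0.25 / 2.5 + 0.05 / 0.8) / area   # imported share of the area
```

Even with these made-up numbers, the low-yield imported inputs dominate the appropriated area, which is the kind of hidden dependence the ArEAs indicator is designed to expose.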
Abstract:
Work carried out by: Garijo, J. C., Hernández León, S.
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAHs) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings in various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; additionally, it is very difficult and increasingly expensive to find new landfill sites for final disposal. The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Technologies that have been used include high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is applied to soil and groundwater at the site without removing the contaminated soil or groundwater, and is based on providing optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that have been removed from the site by excavation (soil) or pumping (water); hazardous contaminants are converted efficiently into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAHs in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a free phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is illustrated schematically in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies in the literature indicate that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAHs in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may simply add one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing the rate and extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
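The rate-limiting role of mass transfer discussed above can be illustrated with a minimal two-compartment model (an assumption of this sketch, not a model from the thesis): sorbed PAH desorbs to solution at a first-order rate, and only the dissolved fraction is biodegraded. When desorption is much slower than biodegradation, the desorption constant controls overall removal.

```python
def simulate(k_des, k_bio, s0=1.0, c0=0.0, dt=0.01, t_end=50.0):
    # Two-compartment sketch of desorption-limited biodegradation:
    # sorbed PAH (s) desorbs to solution (c) at first-order rate k_des,
    # and only the dissolved fraction is biodegraded at rate k_bio.
    # Rate constants are illustrative, not measured values.
    s, c, degraded = s0, c0, 0.0
    for _ in range(int(t_end / dt)):       # explicit Euler integration
        desorb = k_des * s * dt
        biodeg = k_bio * c * dt
        s -= desorb
        c += desorb - biodeg
        degraded += biodeg
    return s, c, degraded

# fast microbes in both runs; only the desorption rate differs
slow_desorption = simulate(k_des=0.05, k_bio=5.0)
fast_desorption = simulate(k_des=1.0, k_bio=5.0)
```

With identical microbial kinetics, far more contaminant remains in the slow-desorption run, mirroring the observation that sorbed PAHs degrade much more slowly than dissolved ones.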
Abstract:
Today’s pet food industry is growing rapidly, with pet owners demanding high-quality diets for their pets. The primary role of diet is to provide enough nutrients to meet metabolic requirements, while giving the consumer a feeling of well-being. Diet nutrient composition and digestibility are of crucial importance for the health and well-being of animals. A recent strategy to improve the quality of food is the use of “nutraceuticals” or “functional foods”. At the moment, probiotics and prebiotics are among the most studied and frequently used functional food compounds in pet foods. The present thesis reports results from three different studies. The first study aimed to develop a simple laboratory method to predict pet food digestibility. The developed method was based on the two-step multi-enzymatic incubation assay described by Vervaeke et al. (1989), with some modifications in order to better represent the digestive physiology of dogs. A trial was then conducted to compare the in vivo digestibility of pet foods with the in vitro digestibility given by the newly developed method. Correlation coefficients showed a close correlation between the digestibility data for total dry matter and crude protein obtained with the in vivo and in vitro methods (0.9976 and 0.9957, respectively). Ether extract presented a lower correlation coefficient, although still close to 1 (0.9098). Based on the present results, the new method could be considered an alternative system for evaluating dog food digestibility, reducing the need to use experimental animals in digestibility trials. The second part of the study aimed to isolate from dog faeces a Lactobacillus strain capable of exerting a probiotic effect on the dog intestinal microflora. A L. animalis strain was isolated from the faeces of 17 adult healthy dogs. The isolated strain was first studied in vitro, added to a canine faecal inoculum (at a final concentration of 6 Log CFU/mL) that was incubated in anaerobic serum bottles and syringes simulating the large intestine of dogs. Samples of fermentation fluid were collected at 0, 4, 8, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms, clostridia). Subsequently, the L. animalis strain was fed to nine dogs having lactobacilli counts lower than 4.5 Log CFU per g of faeces. The study indicated that the L. animalis strain was able to survive gastrointestinal passage and transitorily colonize the dog intestine. Both in vitro and in vivo results showed that the L. animalis strain positively influenced the composition and metabolism of the intestinal microflora of dogs. The third trial investigated in vitro the effects of several non-digestible oligosaccharides (NDO) on the composition and metabolism of the dog intestinal microflora. Substrates were fermented using a canine faecal inoculum incubated in anaerobic serum bottles and syringes. Substrates were added at a final concentration of 1 g/L (inulin, FOS, pectin, lactitol, gluconic acid) or 4 g/L (chicory). Samples of fermentation fluid were collected at 0, 6, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms). Gas production was measured throughout the 24 h of the study. Among the tested NDO, lactitol showed the best prebiotic properties: it reduced coliforms and increased lactobacilli counts, enhanced microbial fermentation and promoted the production of SCFA while decreasing BCFA. All the substrates investigated showed one or more positive effects on the metabolism or composition of the dog faecal microflora.
Further studies (in particular in vivo studies with dogs) will be needed to confirm the prebiotic properties of lactitol and evaluate its optimal level of inclusion in the diet.
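The in vivo / in vitro agreement reported in the first study is an ordinary Pearson correlation. The sketch below shows the computation on made-up digestibility coefficients; the thesis's actual data are not reproduced here.

```python
import numpy as np

# Hypothetical dry-matter digestibility coefficients (%) for five diets,
# measured in vivo and predicted in vitro.  These values are invented
# to illustrate how a correlation such as the reported 0.9976 would be
# computed; they are not the study's data.
in_vivo  = np.array([82.1, 78.4, 85.0, 74.2, 80.3])
in_vitro = np.array([81.5, 77.9, 84.6, 73.1, 79.8])
r = np.corrcoef(in_vivo, in_vitro)[0, 1]   # Pearson correlation coefficient
```

A coefficient this close to 1 is what justifies using the in vitro assay as a stand-in for animal trials.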
Abstract:
Chemistry can contribute in many different ways to solving the challenges we face in modifying our inefficient, fossil-fuel-based energy system. The present work was motivated by the search for efficient photoactive materials to be employed in the context of the energy problem: materials for energy-efficient devices and for the production of renewable electricity and fuels. We presented a new class of copper complexes that could find application in lighting technologies, serving as luminescent materials in LEC, OLED and WOLED devices. These technologies may provide substantial energy savings in the lighting sector. Moreover, copper complexes have recently been used as light-harvesting compounds in dye-sensitized photoelectrochemical solar cells, which offer a viable alternative to silicon-based photovoltaic technologies. We also presented a few supramolecular systems containing fullerene, e.g. dendrimers, dyads and triads. The most complex among these arrays, which contain porphyrin moieties, are presented in the final chapter. They undergo photoinduced energy- and electron-transfer processes, including long-lived charge-separated states, i.e. the fundamental processes required to power artificial photosynthetic systems.
Abstract:
Marine soft-bottom systems show high variability across multiple spatial and temporal scales. Natural and anthropogenic sources of disturbance act together in shaping benthic sedimentary characteristics and species distribution. Describing such spatial variability is necessary to understand the ecological processes behind it; however, better estimates of spatial patterns require methods that take into account the complexity of the sedimentary system. This PhD thesis aims to contribute both to improving the methodological approaches to the study of biological variability in soft-bottom habitats and to increasing our knowledge of the effects that different processes (both natural and anthropogenic) can have on the benthic communities of a large area of the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker’s index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft-bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species biases estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, an unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index, but it behaves more favourably as species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages.
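To make the quantities concrete: Whittaker's beta is total (gamma) species richness divided by mean per-sample (alpha) richness, and a Michaelis-Menten-type transformation maps it onto a bounded scale. The abstract does not give the exact form of the thesis's probability index, so the transform below is only an illustrative member of that family, not the index itself.

```python
def whittaker_beta(samples):
    # samples: a list of sets, one set of species names per sample
    gamma = len(set().union(*samples))                   # total species richness
    alpha = sum(len(s) for s in samples) / len(samples)  # mean sample richness
    return gamma / alpha                                 # Whittaker's beta

def mm_rescaled(beta_w):
    # Michaelis-Menten-type rescaling onto [0, 1): 0 when all samples
    # share the same species, approaching 1 as heterogeneity grows.
    # Illustrative form only; the thesis derives its index differently.
    x = beta_w - 1.0
    return x / (1.0 + x)
```

For two identical samples beta is 1 (rescaled 0); for two completely disjoint samples of equal richness beta is 2 (rescaled 0.5), and a single rare species shifts the bounded version far less than it shifts raw beta.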
Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna depends strongly on the spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problems often arise when unrepresentative samples are taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although recognized globally, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish trawling) sources of disturbance. Although a few studies in this area have aimed at understanding which of these processes mainly affect the macrobenthos, they were conducted at small spatial scales, designed to examine local changes in benthic communities or in particular species. To describe all the putative processes occurring in the entire area, however, a high sampling effort at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was studied extensively in this thesis. I have described in detail the spatial patterns of both sedimentary characteristics and macrobenthic organisms, and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect against a background of natural spatial variability.
The results suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that may have important consequences for benthic assemblages, affecting for example their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability, affecting both abundance patterns and beta diversity. More locally, however, effects probably related to organic enrichment or pollution from the Po river input have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a greater effect on macrofauna distribution than any effect related to the presence of gas platforms. The main effect of gas platforms is restricted to small spatial scales and related to a change in habitat complexity caused by natural dislodgement, or removal during structure cleaning, of the mussels that colonize their legs; the accumulation of mussels on the sediment plausibly affects benthic infauna composition. All the components of the study presented in this thesis highlight the need to consider carefully the methodological aspects of studying sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful for understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment and, if so, for planning sustainable strategies for proper management of the affected area.
Abstract:
In recent decades, the growth of industrial activity and of world food demand, together with the intensification of natural resource exploitation and its direct connection to pollution, have aroused increasing public interest in initiatives linked to the regulation of food production, as well as in the institution of modern legislation for consumer protection. This work was planned around some important themes related to the marine environment, collecting and presenting data obtained from studies on different marine species of commercial interest (Chamelea gallina, Mytilus edulis, Ostrea edulis, Crassostrea gigas, Salmo salar, Gadus morhua). These studies evaluated, from a biochemical and biomolecular point of view, the effects of variations in important physical and chemical parameters (temperature; xenobiotics such as drugs, hydrocarbons and pesticides) on the cells involved in immune defence (haemocytes) and on some important enzymatic systems involved in xenobiotic biotransformation (the cytochrome P450 complex) and in the related antioxidant defence processes (superoxide dismutase, catalase, heat shock proteins). Oxygen is essential to the biological response of a living organism. Its consumption in the normal physiological processes of cellular respiration and in the biotransformation of foreign substances leads to the formation of reactive oxygen species (ROS), which are potentially toxic and responsible for damage to biological macromolecules, with consequent worsening of pathologies. Such processes can lead to a qualitative alteration of the derived products, but also to a general state of suffering that in the most serious cases can cause the death of the organism, with important economic repercussions on the output of livestock farming, fishing and aquaculture.
In this study it also seemed interesting to apply alternative methodologies currently in use in the medical field (cytofluorimetry) and in proteomic studies (two-dimensional electrophoresis, mass spectrometry), with the aim of identifying new biomarkers to place alongside traditional methods for the quality control of food of animal origin. From the results, some relevant aspects of each experiment can be pointed out: 1. The cytofluorimetric techniques applied to O. edulis and C. gigas could lead to important developments in the search for alternative methods that quickly and precisely identify the origin of a specific sample, helping to counter possible food fraud, in this case related, for example, to the presence of a different species that is qualitatively distinct but morphologically similar. A concrete prospect for applying this method in the inspection field must be confirmed by further laboratory tests, including in vivo experiments to evaluate, in the whole organism, the effects of factors assessed only on haemocytes in vitro. These elements therefore suggest the possibility of adapting cytofluorimetric methods to the study of animal organisms of food interest, even before they enter industrial processing, providing useful information about the possible presence of contamination sources that can induce an increase in immune defence and an alteration of normal cellular parameter values. 2. The C. gallina immune system showed an interesting dose- and time-dependent response to benzo[a]pyrene (B[a]P) exposure, with a significant decrease in the expression and activity of one of the most important antioxidant defence enzymes in haemocytes and haemolymph.
The data obtained are confirmed by several measurements of physiological parameters which, together with the decrease in the activity of 7-ethoxyresorufin-O-deethylase (EROD, linked to xenobiotic biotransformation processes) during exposure, underline the major effects of B[a]P action. The identification of basal levels of EROD supports the possible presence of the CYP1A subfamily in invertebrates, which is still controversial today; it had never previously been identified in C. gallina and never isolated in immune cells, as confirmed in this study by the identification of a CYP1A-immunopositive protein (CYP1A-IPP). This protein could prove a good biomarker, underpinning a simple and quick method able to give clear information about the presence of specific pollutants, even at low concentrations, in the environments where these organisms are usually fished before being commercialized. 3. This experiment evaluated the effect of the antibiotic chloramphenicol (CA) on an important species of commercial interest, Chamelea gallina. Chloramphenicol is a drug still used in some developing countries, including in the veterinary field. Controls to evaluate its presence in food products of animal origin can prove ineffective when its concentration is below the sensitivity limit of the instruments usually employed in this type of analysis. The negative effects of CA on the CYP1A-IPP proteins highlighted in this work seem to be due to attack by free radicals generated by the action of the antibiotic, leading to a significant alteration of the biotransformation mechanisms. It seems particularly interesting to note the close relationship in C. gallina between the SOD/CAT and CYP450 systems, both actively involved in detoxification mechanisms, especially when compared with the few similar works available today on molluscs, a group comprising numerous species that enter the food chain and on which constant controls are necessary to evaluate, rapidly and effectively, the presence of possible contamination. 4. The investigations on fishes (Gadus morhua and Salmo salar) and on a bivalve mollusc (Mytilus edulis) allowed the evaluation of different aspects of the possibility of identifying a biomarker for the health of organisms of food interest, and consequently for the quality of the final product, through 2DE methodologies. In the seafood field these techniques are currently used with moderate success only for vertebrates (fishes), while their application to invertebrates (molluscs) presents many difficulties. The results obtained in this work have highlighted several problems in the correct identification of proteins isolated from animal organisms for which no complete genomic sequence currently exists. This leads to identities being attributed on the basis of comparison with similar proteins in other animal groups, with the risk of obtaining inaccurate data that, above all, conflict with those obtained for the same animals by other authors. Nevertheless, the data obtained in this work by MALDI-ToF analysis are objective, and the collected spectra can be re-analysed in the future once the genomic databases for the species studied have been updated. 4-A. The investigation into the presence of HSP70 isoforms directly induced by different stress phenomena, such as the presence of B[a]P, used two-dimensional electrophoresis in C. gallina, which allowed numerous proteins to be isolated on 2DE gels and several spots to be collected, currently under analysis by MALDI-ToF-MS.
The present preliminary work has therefore made it possible to acquire and refine important methodologies for the study of cellular parameters and in the proteomic field, which show great potential not only for applications in medicine and veterinary science, but also in food inspection, with links to toxicology and environmental pollution. This study thus contributes to the search for new, rapid methodologies that can strengthen inspection strategies, integrating with existing ones while improving the general background of information on the health status of the animal organism considered, with the still hypothetical possibility of replacing traditional techniques in the food field in particular cases.
Resumo:
In order to improve animal welfare, Council Directive 1999/74/EC (defining minimum standards for the welfare of laying hens) will ban conventional cage systems from 2012 in favour of enriched cages or floor systems. As a consequence, an increased risk of bacterial contamination of the eggshell is expected (EFSA, 2005). Furthermore, egg-associated salmonellosis is an important public health problem throughout the world (Roberts et al., 1994). In this regard, the introduction of efficient measures to reduce eggshell contamination by S. Enteritidis or other bacterial pathogens, and thus to prevent any potential or additional food safety risk for human health, may be envisaged. Hot air pasteurization can be a viable alternative for decontaminating the surface of the eggshell, yet few studies have examined the decontamination power of this technique on table eggs (Hou et al., 1996; James et al., 2002). The aim of this study was to develop innovative techniques to remove surface contamination of shell eggs using hot air under natural or forced convection. Initially, two simplified finite element models describing the thermal interaction between the air and the egg were developed, for natural and forced convection respectively. The numerical models were validated using an egg simulant equipped with a type-K (Chromel/Alumel) thermocouple. Once validated, the models allowed the selection of a thermal cycle that kept the inner temperature always below 55°C. Subsequently, a specific apparatus composed of two hot air generators, one cold air generator and a rolling cylinder support was built to thermally condition the eggs. The decontamination power of the thermal treatments was evaluated on shell eggs experimentally inoculated with Salmonella Enteritidis, Escherichia coli or Listeria monocytogenes, and on shell eggs carrying only the indigenous microflora.
The applicability of the treatments was further evaluated by comparing quality traits of treated and untreated eggs immediately after treatment and after 28 days of storage at 20°C. The results showed that the treatment characterized by two shots of hot air at 350°C for 8 sec, spaced by a cooling interval of 32 (forced convection), reduced the bacterial population by more than 90% (Salmonella Enteritidis and Listeria monocytogenes). No statistically significant differences were observed between treated and untreated eggs for E. coli or for the indigenous microflora. A 2.6 log reduction in the Salmonella Enteritidis load was observed on eggs immediately after treatment in an oven at 200°C for 200 minutes (natural convection). Furthermore, no detrimental effects on the quality traits of treated eggs were recorded. These results support hot air techniques as an effective industrial process for the surface decontamination of table eggs.
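The abstract reports reductions in two conventions: a percentage kill and a decimal (log10) reduction. How the two relate can be checked with a small illustrative helper (not part of the study itself):

```python
def log_reduction_to_survival(log_red: float) -> float:
    """Surviving fraction N/N0 corresponding to a decimal (log10) reduction."""
    return 10.0 ** (-log_red)

def percent_reduction(log_red: float) -> float:
    """Percentage of the initial population removed."""
    return 100.0 * (1.0 - log_reduction_to_survival(log_red))

# A 1-log reduction corresponds to 90% removal; the 2.6-log reduction
# reported for Salmonella Enteritidis corresponds to ~99.75% removal.
```

This makes the two results directly comparable: the ">90%" forced-convection result is at least a 1-log reduction, while the 2.6-log natural-convection result removes roughly 99.75% of the inoculated population.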
Resumo:
In the last years of research, I focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. An ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up positioning and intracranial haemorrhage. The model behaved in a clinically realistic way given inputs from published data on traumatized patients and cases encountered by clinicians, and the pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding and obstruction of cerebrospinal fluid outflow).
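One of the standard published relationships that models of this kind typically build on is the mono-exponential craniospinal pressure-volume curve, which explains why small volume additions (e.g. a haemorrhage) raise intracranial pressure slowly at first and then steeply. A minimal sketch, with hypothetical parameter values that are not taken from the thesis:

```python
import math

# Illustrative parameter values only; the thesis model implements
# individual published equations, not these numbers.
P_BASE = 9.5   # baseline ICP, mmHg
E_COEF = 0.11  # craniospinal elastance coefficient, 1/mL

def icp_after_volume_load(delta_v_ml: float) -> float:
    """Mono-exponential pressure-volume relationship: ICP grows
    exponentially with the added intracranial volume."""
    return P_BASE * math.exp(E_COEF * delta_v_ml)

# With these illustrative parameters, a 20 mL volume load more than
# doubles the baseline ICP.
```

The steepening of this curve as volume is added is what makes the simulated perturbations (bleeding, CSF outflow obstruction) produce clinically recognizable ICP responses.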
Based on the results, we believe the model would be useful to teach the complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the CO2 concentration that best balances intracranial pressure against perfusion. We believe this model would be useful for both beginners and advanced learners. It could also be used by practicing clinicians to model individual patients, entering the effects of needed clinical manipulations and then running the model to test for optimal combinations of therapeutic manoeuvres. II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the appearance of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays the major role during unilateral occlusion of the middle cerebral artery.
In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved, but, by individualizing parameters, may also represent a valuable tool to help with difficult clinical decisions. III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (the Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (the same model as in point I). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase).
Then, patients with severe head injury were simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. Under these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in helping clinicians find the balance between the clinical benefits of the Cushing response and its shortcomings. IV. A Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments.
In the latter case they generate experiments to be performed to gather the missing data.
Resumo:
In territories where food production is scattered across many small or medium-sized, or even household, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the production process periodically carried out. Coupling high-efficiency micro-cogeneration energy units with easily operated biomass conversion equipment, suitable for treating different materials, would bring many important advantages to farmers and to the community as well; increasing the feedstock flexibility of gasification units is therefore now seen as a further paramount step towards their widespread adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose and are therefore discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work was divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel spreadsheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (process temperatures, above all) among the considered fuels were related to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, mainly on the basis of their chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are assumed to regulate the main solid conversion steps involved in the gasification process.
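The water-gas shift equilibrium assumed in the gasification zone (CO + H2O ⇌ CO2 + H2) can be sketched as follows, using Moe's empirical correlation for the equilibrium constant. This is a standard textbook approximation chosen here for illustration; the thesis's own spreadsheet correlations may differ:

```python
import math

def wgs_equilibrium_constant(temp_k: float) -> float:
    """Water-gas shift equilibrium constant Kp as a function of
    temperature, via Moe's correlation: Kp = exp(4577.8/T - 4.33)."""
    return math.exp(4577.8 / temp_k - 4.33)

def wgs_conversion(temp_k: float) -> float:
    """Equilibrium CO conversion x for an equimolar CO/H2O feed.
    Kp = x^2 / (1-x)^2  =>  x = sqrt(Kp) / (1 + sqrt(Kp))."""
    root = math.sqrt(wgs_equilibrium_constant(temp_k))
    return root / (1.0 + root)
```

Because the shift reaction is exothermic, the equilibrium conversion falls as the gasification-zone temperature rises, which is one reason the predicted syngas composition is so sensitive to the temperature profile of each fuel.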
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own heat of pyrolysis) all have comparable weights on the process development, so that the corresponding time can depend on any one of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to an estimate of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species.
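The dependence of the drying/heating time on particle size described above can be illustrated with a lumped-capacitance estimate for convective heating of a single particle. This is a deliberate simplification, strictly valid only at low Biot number, and all numerical values below are hypothetical rather than taken from the thesis:

```python
import math

def lumped_heating_time(rho, cp, d_particle, h, t_gas, t0, t_target):
    """Time for a spherical particle to reach t_target by convective
    heating, assuming a uniform particle temperature (lumped model):
    t = (rho*cp*V)/(h*A) * ln((T_gas - T0)/(T_gas - T_target)),
    with V/A = d/6 for a sphere. Note the linear scaling with d."""
    tau = rho * cp * (d_particle / 6.0) / h
    return tau * math.log((t_gas - t0) / (t_gas - t_target))

# Illustrative values (not from the thesis): a 10 mm wood particle,
# rho = 600 kg/m3, cp = 1500 J/(kg K), h = 50 W/(m2 K),
# heated from 25 °C toward 700 °C gas until it reaches 150 °C.
t_heat = lumped_heating_time(600.0, 1500.0, 0.010, 50.0, 700.0, 25.0, 150.0)
```

The V/A = d/6 term makes the estimated time grow linearly with particle diameter, which is consistent with the finding that drying time depends especially on particle size.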
Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone as calculated for the different biomasses. In this case a total gasifier height of around 2400 mm would be obtained. Besides, by arranging the air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at any given time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for complete solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream, and suitable gas cleaning systems consequently have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h respectively. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their own consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ('paths'), following technical constraints determined mainly from the performance analysis of the cleaning units themselves and from the presumable synergistic effects of contaminants on the correct operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in path design was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequently relevant air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Beyond these two solutions, which seem unavoidable in gas cleaning line design, fully high-temperature gas cleaning lines also proved unachievable for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for ammonia abatement at high temperature) were not suitable for the large-scale units, because of the sharp increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements were finally designed for each considered plant size, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary materials consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption required by water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable.
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
Resumo:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well.
Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to this acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
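For context on the compressed triangular storage mentioned above: the conventional baseline such schemes improve upon is packed storage, which keeps only the n(n+1)/2 nonzero entries of a triangular matrix in a contiguous 1-D array. The thesis's GPU-friendly layout is not reproduced here; this sketch only shows the standard row-major packed indexing:

```python
def packed_index(i: int, j: int) -> int:
    """Index of element (i, j), with i >= j, of a lower-triangular
    matrix stored row-major with no zero padding: row i starts after
    0 + 1 + ... + i = i*(i+1)//2 earlier elements."""
    assert i >= j, "only the lower triangle is stored"
    return i * (i + 1) // 2 + j

def packed_size(n: int) -> int:
    """Storage for an n x n triangular matrix: n*(n+1)//2 entries
    instead of n*n."""
    return n * (n + 1) // 2
```

Packed storage halves memory traffic relative to dense storage, but its irregular row lengths hinder coalesced GPU memory access, which is presumably the problem the thesis's novel scheme addresses.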
Resumo:
The consumer demand for natural, minimally processed, fresh-like and functional food has led to an increasing interest in emerging technologies. The aim of this PhD project was to study three innovative food processing technologies currently used in the food sector. Ultrasound-assisted freezing, vacuum impregnation and pulsed electric fields were investigated using laboratory-scale systems and semi-industrial pilot plants. Furthermore, analytical and sensory techniques were developed to evaluate the quality of food and vegetable matrices obtained by traditional and emerging processes. Ultrasound was found to be a valuable technique to improve the freezing process of potatoes, anticipating the onset of nucleation, mainly when applied during the supercooling phase. A study of the effects of pulsed electric fields on the phenol and enzymatic profile of melon juice was carried out, and the statistical treatment of the data was performed through a response surface method. Next, flavour enrichment of apple sticks was carried out using different techniques, such as atmospheric, vacuum and ultrasound technologies and their combinations. The second section of the thesis deals with the development of analytical methods for the discrimination and quantification of phenol compounds in vegetable matrices, such as chestnut bark extracts and olive mill waste water. The management of waste disposal in the olive mill sector was approached with the aim of reducing the amount of waste while recovering valuable by-products to be used in different industrial sectors. Finally, the sensory analysis of boiled potatoes was carried out through the development of a quantitative descriptive procedure for the study of Italian and Mexican potato varieties.
An update on flavour development in fresh and cooked potatoes was prepared, and a sensory glossary, including general and specific definitions related to organic products and used in the European project Ecropolis, was drafted.
Resumo:
The study carried out during the PhD programme focused on the evaluation and monitoring of the different thermo-oxidative degradations of frying oils. To this end, a screening of the main oils on the Italian market was first performed, followed by the formulation of two vegetable oil blends that were subjected to two experimental plans of controlled, standardized frying in the laboratory, followed by two frying plans carried out in two different real-world settings, namely a canteen and a restaurant. Each of the two blends was compared with two reference oils. For this purpose, the fatty acid profile, the oxidative and hydrolytic stability, the smoke point, the polar compounds, the total tocopherol content and the volatile compounds were determined on both the crude oils and the oils subjected to the different frying processes. The study made it possible to identify one of the formulated blends as a valid alternative to palm oil, which is widely used for frying foods, and also provided more precise indications on the type and extent of the modifications that occur during frying, depending on the conditions employed.