Abstract:
My aim is to develop a theory of cooperation within the organization and to test it empirically. Drawing upon social exchange theory, social identity theory, the idea of collective intentions, and social constructivism, the central assumption of my work is that both cooperation and the organization itself are continually shaped and restructured by the actions, judgments, and symbolic interpretations of the parties involved. Therefore, I propose that the decision to cooperate, expressed for instance as an intention to cooperate, reflects and depends on a three-step social process shaped by the interpretations of the actors involved. The first step entails an instrumental evaluation of cooperation in terms of social exchange. In the second step, this “social calculus” is translated into cognitive, emotional and evaluative reactions directed toward the organization. Finally, once the identification process is completed and membership awareness is established, I propose that individuals will start to think largely in terms of “We” instead of “I”. Self-goals are redefined at the collective level, and the outcomes for self, others, and the organization become practically interchangeable. I apply my theory to an important cooperative problem in management research: knowledge exchange within organizations. Hence, I conducted a quantitative survey among the members of the virtual community “www.borse.it” (n=108), whose members freely decide to exchange their knowledge about the stock market among themselves. Because of the confirmatory requirements and the structural complexity of the proposed theory (i.e., the proposal that instrumental evaluations induce social identity, which in turn causes collective intentions), I use Structural Equation Modeling to test all hypotheses in this dissertation. The empirical survey-based study found support for the proposed theory of cooperation. The findings suggest that an appropriate conceptualization of the decision to exchange knowledge is one where collective intentions depend proximally on social identity (i.e., cognitive identification, affective commitment, and evaluative engagement) with the organization, and this identity depends on instrumental evaluations of cooperation (i.e., perceived value of the knowledge received, assessment of past reciprocity, expected reciprocity, and expected social outcomes of the exchange). Furthermore, I find that social identity fully mediates the effects of instrumental motives on collective intentions.
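The mediation structure just described (instrumental evaluations -> social identity -> collective intentions) maps naturally onto an SEM specification. As a minimal sketch, assuming lavaan-style syntax via the Python semopy package and hypothetical item names (the dissertation does not publish its model code):

import pandas as pd
from semopy import Model

desc = """
# latent constructs measured by survey items (x1..z3 are hypothetical)
instrumental =~ x1 + x2 + x3 + x4
identity     =~ y1 + y2 + y3
collective   =~ z1 + z2 + z3
# structural part: identity fully mediates instrumental -> collective
identity   ~ instrumental
collective ~ identity
"""

data = pd.read_csv("survey.csv")  # hypothetical file of item responses (n=108)
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values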
Abstract:
The present study analyses rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for informed decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advances in building construction technology and in agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that have acted on building allocation; indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology comprises several conceptual steps that cover the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and method used, and the calibration and evaluation of the model. Different combinations of factors in various parts of the territory generate more or less favourable conditions for building allocation, and the existence of buildings is evidence of such favourable conditions. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of these driving conditions, since they represent the expression of the action of driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS spatial analysis tools make it possible to associate presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where no buildings exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following this methodology is applied to a case study to test the methodology's validity. The study area chosen for the test is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values ranging from 0 to 1 for building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretative capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on data availability. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
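As a sketch of the core estimation step, the presence/absence model reduces to a binomial GLM (logistic regression) over sampled points. The snippet below is illustrative only, with a hypothetical input file and covariate names standing in for the actual driving-force variables:

import pandas as pd
import statsmodels.api as sm

# Each row is a sampled point: 1 = building present, 0 = stochastically
# generated absence; covariates stand in for candidate driving forces.
points = pd.read_csv("sampled_points.csv")  # hypothetical file
drivers = ["slope", "dist_to_road", "dist_to_town", "land_capability"]

X = sm.add_constant(points[drivers])
y = points["building_present"]

# Binomial GLM with logit link = logistic regression.
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())  # coefficient signs and sizes indicate each driver's influence

# The probability surface in [0, 1] is obtained by applying the fitted
# model to every grid cell carrying the same covariates.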
Abstract:
Bread dough, and particularly wheat dough, due to its viscoelastic behaviour, is probably the most dynamic and complicated rheological system, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide a great deal of information about dough formulation, structure and processing. This explains why dough rheology has been a matter of investigation for several decades. In this research, rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of doughs and final products such as bread. In order to study the structural aspects of the food products, image analysis techniques were used to integrate the information coming from the empirical and fundamental rheological measurements. Evaluation of dough properties was carried out by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a Texture Analyser; small-deformation rheological measurements were performed on a controlled stress-strain rheometer; the structure of the different doughs was observed using image analysis; and bread characteristics were studied using texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view. To this end, the following materials were prepared and analysed: frozen dough made without yeast; frozen dough and bread made from frozen dough; doughs obtained using different fermentation methods; doughs made from Kamut® flour; dough and bread made with the addition of ginger powder; and final products from different bakeries. The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough, and on the final product (bread), was evaluated using small- and large-deformation methods. In general, the longer the sub-zero storage time, the lower the positive viscoelastic attributes. The effects of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs were investigated using empirical and fundamental analysis, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen in the other types of fermentation. The beneficial effect of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed even when the dough was subjected to low temperatures (24 and 48 hours at 4°C).
Small-deformation oscillatory measurements and large-deformation mechanical tests provided useful information on the rheological properties of samples made with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb grain characteristics are the major quality attributes of bread products. The different samples analysed, “Coppia Ferrarese”, “Pane Comune Romagnolo” and “Filone Terra di San Marino”, showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA and indicators of fresh bread quality, decreased during storage. Using empirical rheological tests, we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare the samples; but since these products are handmade, the differences could be regarded as added value. In conclusion, small-deformation (in fundamental units) and large-deformation methods played a significant role in monitoring the influence of different ingredients, processing conditions and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence final quality (e.g. raw material, bread-making procedure, time and temperature of fermentation and baking).
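As an aside on how the TPA parameters cited above are typically extracted, the sketch below computes hardness, cohesiveness and springiness from a double-compression force-time curve using their standard textbook definitions; the synthetic curve and cycle split are assumptions, since instrument exports vary:

import numpy as np

def tpa_parameters(t, f, split_idx):
    """t, f: time and force arrays of a double-compression test;
    split_idx: index separating the two compression cycles."""
    t1, f1 = t[:split_idx], f[:split_idx]
    t2, f2 = t[split_idx:], f[split_idx:]
    hardness = f1.max()              # peak force of the first compression
    area1 = np.trapz(f1, t1)         # work of the first compression
    area2 = np.trapz(f2, t2)         # work of the second compression
    cohesiveness = area2 / area1     # resistance of the structure to a second "bite"
    d1 = t1[f1 > 0][-1] - t1[f1 > 0][0]   # duration of force detection, cycle 1
    d2 = t2[f2 > 0][-1] - t2[f2 > 0][0]   # duration of force detection, cycle 2
    springiness = d2 / d1
    return hardness, cohesiveness, springiness

# Demo on a synthetic curve: two triangular peaks of 10 N and 8 N.
t = np.linspace(0.0, 4.0, 401)
f = np.concatenate([np.interp(t[:201], [0, 1, 2], [0, 10, 0]),
                    np.interp(t[201:], [2, 3, 4], [0, 8, 0])])
print(tpa_parameters(t, f, split_idx=201))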
Abstract:
Age-related physiological changes in the gastrointestinal tract, as well as modifications in lifestyle, nutritional behaviour, and functionality of the host immune system, inevitably affect the gut microbiota. The study presented here focuses on the application and comparison of two different microarray approaches for the characterization of the human gut microbiota, the HITChip and the HTF-Microb.Array, with particular attention to the effects of the aging process on the composition of this ecosystem. Using the Human Intestinal Tract Chip (HITChip), recently developed at Wageningen University, the Netherlands, we explored the age-related changes of the gut microbiota across the whole adult lifespan, from young adults through the elderly to centenarians. We observed that the microbial composition and diversity of the gut ecosystem of young adults and seventy-year-old people are highly similar but differ significantly from those of centenarians. After 100 years of symbiotic association with the human host, the microbiota is characterized by a rearrangement of the Firmicutes population and an enrichment in facultative anaerobes. The presence of such a compromised microbiota in centenarians is associated with an increased inflammation status, also known as inflamm-aging, as determined by a range of peripheral blood inflammatory markers. In parallel, we undertook the development of our own phylogenetic microarray with a lower number of targets, aimed at describing the structure of the human gut microbiota at a high taxonomic level. The resulting chip, called the High Taxonomic level Fingerprinting Microbiota Array (HTF-Microb.Array), was based on the Ligase Detection Reaction (LDR) technology, which allowed us to develop a fast and sensitive tool for fingerprinting the human gut microbiota in terms of presence/absence of the principal groups. Validation on artificial DNA mixes, as well as a pilot study involving eight healthy young adults, demonstrated that the HTF-Microb.Array can successfully characterize the human gut microbiota, yielding results broadly in accordance with the most recent characterizations. Conversely, the evaluation of the relative abundance of the target groups on the basis of relative probe fluorescence intensity still presents some hindrances, as demonstrated by comparing the HTF-Microb.Array and HITChip high-taxonomic-level fingerprints of the same centenarians.
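To illustrate the presence/absence fingerprinting idea, the sketch below applies a simple thresholding rule (signal above the mean plus two standard deviations of negative-control probes); both the rule and the numbers are assumptions for illustration, not the HTF-Microb.Array's actual calling procedure:

import numpy as np

def call_presence(signal, negative_controls, k=2.0):
    """signal: per-probe fluorescence intensities;
    negative_controls: intensities of probes with no expected target."""
    threshold = negative_controls.mean() + k * negative_controls.std()
    return signal > threshold  # boolean fingerprint: True = group detected

probes = np.array([1250.0, 310.0, 980.0, 90.0])
blanks = np.array([120.0, 140.0, 110.0, 130.0])
print(call_presence(probes, blanks))  # -> [ True  True  True False]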
Abstract:
The research performed during the PhD candidature was intended to evaluate the quality of white wines as a function of the reduction in SO2 use during the first steps of the winemaking process. In order to investigate the mechanism and intensity of the interactions occurring between lysozyme and the principal macro-components of musts and wines, a series of experiments on model wine solutions was undertaken, focusing on polyphenols, SO2, oenological tannins, pectins, ethanol, and sugars. In the second part of this research program, a series of conventional sulphite-added vinifications was compared with vinifications in which sulphur dioxide was replaced by lysozyme, in order to define potential winemaking protocols suitable for the production of SO2-free wines. To reach this final goal, the technological performance of two selected yeast strains with a low aptitude to produce SO2 during fermentation was also evaluated. The data obtained suggest that the addition of lysozyme and oenological tannins during alcoholic fermentation could represent a promising alternative to the use of sulphur dioxide and a reliable starting point for the production of SO2-free wines. The different vinification protocols studied influenced the composition of the volatile profile of the wines at the end of alcoholic fermentation, especially with regard to alcohols and ethyl esters, partly as a consequence of the yeast's response to the presence or absence of sulphites during fermentation, contributing in different ways to the sensory profiles of the wines. In fact, amino acid analysis showed that lysozyme can affect the consumption of nitrogen as a function of the yeast strain used in fermentation. During bottle storage, the evolution of volatile compounds is affected by the presence of SO2 and oenological tannins, confirming their positive role in scavenging oxygen and in maintaining ester levels, thereby avoiding a decline in wine quality. Even though a natural decrease in the phenolic profiles was found, due to oxidation caused by the oxygen dissolved in the medium during storage, the presence of SO2 together with tannins counteracted the decay of the phenolic content present at the end of fermentation. Tannins also played a central role in preserving the polyphenolic profile of the wines during storage, confirming their antioxidant properties as reductants. Our study of the fundamental chemistry relevant to the oxidative phenolic spoilage of white wines demonstrated the ability of glutathione to inhibit the production of yellow xanthylium cation pigments, generated from flavanols and glyoxylic acid, at the concentrations at which it typically exists in wine. The ability of glutathione to bind glyoxylic acid rather than acetaldehyde may enable glutathione to be used as a 'switch' for glyoxylic acid-induced polymerisation mechanisms, as opposed to the equivalent acetaldehyde-induced polymerisation, in processes such as micro-oxidation. Further research is required to assess the ability of glutathione to prevent xanthylium cation production during the in-situ production of glyoxylic acid and in the presence of sulphur dioxide.
Abstract:
Nowadays, it is clear that creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; they also imply higher productivity and, as a consequence, smaller reactors. In addition, an indirect gain could derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. Potentially, the impact of these productions on the environment could have been much worse than it is, had continuous effort not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, contributing to moving the chemical industry toward a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation, and innovative reactor configurations and process design. In order to implement these ideas in real projects, the development of more efficient reactions is a primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency. In catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidized compound. As a matter of fact, only in a few oxidation reactions is total, or near-total, conversion achieved, and selectivity is usually limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, sometimes the cost of the oxidant further penalizes the process. During my PhD work, I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The “greener” approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric to a catalytic oxidation), also avoiding the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to efficiently activate hydrogen peroxide. The overall process would be carried out in two steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions that simulate the in-situ generation of hydrogen peroxide, that is, using a low concentration of the oxidant. Then, experiments were carried out using higher hydrogen peroxide concentrations.
The study of the reaction mechanism was fundamental to obtaining indications about the best operative conditions and improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The industrial process for phenol is the oxidation of cumene with oxygen, which co-produces acetone. This can be considered a case of how economics can drive the sustainability issue: the new process, which yields phenol directly, avoids the co-production of acetone (a burden for phenol, because market demand for the two products differs considerably) and might be economically advantageous compared with the conventional process, provided a high selectivity to phenol is obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity results obtained with TS-1 samples having different chemical-physical properties, and analysing in detail the effect of the more important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an increase in selectivity to the para di-hydroxylated compound and a decrease in selectivity to the ortho isomer would be desirable. In this case too, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and, ultimately, to make the industrial process more flexible, so that process performance can be adapted to fluctuations in market demand. The reaction was carried out both in a stirred batch reactor and in a re-circulating fixed-bed reactor. In the first system, the effect of various reaction parameters on catalytic behaviour was investigated: type of solvent or co-solvent, and particle size. With the second reactor type, I investigated the possibility of using a continuous system, with the catalyst shaped as extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by transformation into valuable chemicals. This molecule is nowadays produced in large amounts as a co-product of biodiesel synthesis; it is therefore considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the oxidation of glycerol in the liquid phase with hydrogen peroxide and TS-1; however, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein: the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. Since the oxidation step from acrolein to acrylic acid is already optimized at the industrial level, we decided to investigate the first step of the process in depth. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (a main problem usually encountered in glycerol dehydration).
Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thus combining the acid functionality (necessary for the dehydration step) with the redox one (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply “green chemistry” rules and strategies; in particular, I studied new, greener approaches for the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
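Since yield, selectivity and space-time yield are singled out above as the metrics of reaction efficiency, the short sketch below states their standard molar definitions; the numbers are illustrative, not results from this work:

def conversion(n_in, n_out):
    return (n_in - n_out) / n_in

def selectivity(n_product, n_reacted, stoich=1.0):
    return n_product / (stoich * n_reacted)

def space_time_yield(mass_product_kg, reactor_volume_m3, time_h):
    return mass_product_kg / (reactor_volume_m3 * time_h)  # kg m^-3 h^-1

n_in, n_out, n_prod = 1.00, 0.40, 0.45   # mol substrate in/out, mol product (hypothetical)
X = conversion(n_in, n_out)              # 0.60
S = selectivity(n_prod, n_in - n_out)    # 0.75
print(f"X = {X:.2f}, S = {S:.2f}, yield = X*S = {X * S:.2f}")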
Abstract:
In territories where food production is mostly scattered across small/medium-size or even domestic farms, a large amount of heterogeneous residues is produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore vary widely during the year, according to the individual production processes carried out. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their wide diffusion in rural areas, and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose and are therefore discussed in this work: the impact of fuel properties on gasification process development, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification as the particular technology examined. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach lay in the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these assumptions the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data for particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, based mainly on their chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are assumed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost totally achieved by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights in the process development, so that the corresponding time can depend on any of these factors, according to the particular fuel gasified and the conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are to be used, heavier-duty gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Unlike other research efforts in the same field, the main scope here is to define general arrangements for gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h respectively. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints determined mainly from the performance analysis of the cleaning units and from the presumable synergistic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in path design was the removal of tars from the gas stream, to prevent filter plugging and/or pipe clogging. To this end, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequent significant air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but must nevertheless be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Beyond these two solutions, which seem unavoidable in gas cleaning line design, fully high-temperature gas cleaning lines could not be achieved for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements were finally designed for each considered plant size, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, including total pressure drops, total energy losses, number of units and consumption of secondary materials.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss of the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter computed on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
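As an illustration of the water-gas shift equilibrium assumed in the kinetic-free model of the first part, the sketch below evaluates a commonly used empirical correlation for the equilibrium constant; the correlation is a literature stand-in, not necessarily the exact expression used in the thesis:

import math

def wgs_equilibrium_constant(T_K):
    # CO + H2O <-> CO2 + H2 ; K = (x_CO2 * x_H2) / (x_CO * x_H2O) at equilibrium.
    # Empirical correlation, T in kelvin (illustrative).
    return math.exp(4577.8 / T_K - 4.33)

for T in (900.0, 1000.0, 1100.0):
    print(f"T = {T:.0f} K  ->  K = {wgs_equilibrium_constant(T):.2f}")
# K crosses 1 near ~1060 K: below, the CO2/H2 side is favoured; above, CO/H2O.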
Abstract:
The MTDL (multi-target-directed ligand) design strategy is used to develop single chemical entities able to modulate multiple targets simultaneously. The development of such compounds might disclose new avenues for the treatment of a variety of pathologies (e.g. cancer, AIDS, neurodegenerative diseases) for which an effective cure is urgently needed. This strategy has been successfully applied to Alzheimer's disease (AD) because of its multifactorial nature, involving cholinergic dysfunction, amyloid aggregation, and oxidative stress. Although many biological entities have been recognized as possibly AD-relevant, only four acetylcholinesterase inhibitors (AChEIs) and one NMDA receptor antagonist are used in therapy. Unfortunately, such compounds are not disease-modifying agents, behaving only as cognition enhancers. The MTDL strategy is therefore emerging as a powerful drug design paradigm: pharmacophores of different drugs are combined in the same structure to afford hybrid molecules. In principle, each pharmacophore of these new drugs should retain the ability to interact with its specific site(s) on the target and, consequently, to produce specific pharmacological responses that, taken together, should slow or block the neurodegenerative process. To this end, the design and synthesis of several examples of MTDLs for combating neurodegenerative diseases have been published. This seems to be the most appropriate approach for addressing the complexity of AD and may provide new drugs for tackling its multifactorial nature and, hopefully, stopping its progression. According to this emerging strategy, in this thesis work different classes of new molecular structures based on the MTDL approach have been developed. Moreover, curcumin and its constrained analogs have recently received remarkable interest, as their unique conjugated structure shows a pleiotropic profile that we considered a suitable framework for developing MTDLs. In fact, besides its well-known direct antioxidant activity, curcumin displays a wide range of biological properties, including anti-inflammatory and anti-amyloidogenic activities and an indirect antioxidant action through activation of the cytoprotective enzyme heme oxygenase (HO-1). Thus, since many lines of evidence suggest that oxidative stress and mitochondrial impairment have a central role in age-related neurodegenerative diseases such as AD, we designed mitochondria-targeted antioxidants by connecting curcumin analogs to different polyamine chains that, with the aid of electrostatic forces, might drive the selected antioxidant moiety into mitochondria.
Abstract:
Food technology today means reducing agricultural food waste, improving food security, enhancing food sensory properties, and enlarging food markets and food economies. Food technologists must be highly skilled technicians with sound scientific knowledge of food hygiene, food chemistry, industrial technologies, food engineering, sensory evaluation, and analytical chemistry. Their role is to apply the modern vision of science to the field of human nutrition, raising knowledge in food science. The present PhD project starts from the aim of studying and improving frozen fruit quality. The freezing process is very effective in preserving the initial characteristics of the raw material, but pre-treatments before freezing are necessary to improve quality, in particular the texture and enzymatic activity of frozen foods. Osmotic Dehydration (OD) and Vacuum Impregnation (VI) are useful techniques for modifying the composition of fruits and vegetables and preparing them for the freezing process. These techniques permit the introduction of cryo-protective agents into the food matrices without significant changes to the original structure, although they cause a slight leaching of important intrinsic compounds. For example, phenolic and polyphenolic compounds in apples and nectarines treated with hypertonic solutions decrease slightly, but the concentration effect of the water removal driven by the osmotic gradient yields a final phenolic content similar to that of the raw material. In many experiments, a very important change in fruit composition concerned the aroma profile. This occurred in strawberries osmo-dehydrated under vacuum or at atmospheric pressure. The increase in some volatiles, probably due to fermentative metabolism induced by the osmotic stress of the hypertonic treatment, modified the sensory profile of the frozen fruits, resulting in better consumer acceptability: consumers preferred treated to untreated frozen fruits. Among the different processes used, a very interesting result was obtained with an osmotic pre-treatment carried out at refrigerated temperature for a long time. The final quality of the frozen strawberries was very high, and a peculiar increase in the phenolic profile was detected. This interesting phenomenon was probably due to the induction of phenolic biosynthesis (for example as a reaction to osmotic stress), or to hydrolysis of polymeric phenolic compounds. Alongside these investigations into the cryo-stabilization and dehydrofreezing of fruits, deeper investigations of VI techniques were carried out, including studies of texture changes in vacuum-impregnated prickly pear and of the use of VI and ultrasound (US) for aroma enrichment of fruit pieces. Moreover, to develop sensory evaluation tools and analytical determinations (of volatiles and phenolic compounds), further research was carried out and published in these fields, dealing specifically with off-flavour development during storage of boiled potato, and with capillary zone electrophoresis (CZE) and high-performance liquid chromatography (HPLC) determination of phenolic compounds.
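For reference, osmotic dehydration is usually followed through the standard water-loss and solids-gain mass-balance indices sketched below; the numbers are hypothetical, not data from these experiments:

def water_loss(m0, xw0, mt, xwt):
    # Water lost per unit initial mass: (initial water - final water) / m0.
    return (m0 * xw0 - mt * xwt) / m0

def solids_gain(m0, xs0, mt, xst):
    # Solids gained per unit initial mass: (final solids - initial solids) / m0.
    return (mt * xst - m0 * xs0) / m0

m0, xw0, xs0 = 100.0, 0.90, 0.10   # initial mass (g), water and solids fractions
mt, xwt, xst = 80.0, 0.82, 0.18    # after treatment (hypothetical)
print(f"WL = {water_loss(m0, xw0, mt, xwt):.3f} g/g")   # 0.244
print(f"SG = {solids_gain(m0, xs0, mt, xst):.3f} g/g")  # 0.044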
Abstract:
With business environments no longer confined to geographical borders, the new wave of digital technologies has given organizations an enormous opportunity to bring together their distributed workforce and develop the ability to work together despite being apart (Prasad & Akhilesh, 2002). Presupposing creativity to be a social process, we question how this phenomenon occurs when the configuration of the team is substantially modified. Very little is known about the impact of interpersonal relationships on creativity (Kurtzberg & Amabile, 2001). In order to analyse the ways in which the creative process may develop, we ought to take into consideration the fact that participants are dealing with quite an atypical situation. Firstly, in these cases socialization takes place among individuals belonging to a geographically dispersed workplace, where interpersonal relationships are mediated by the computer, and where trust must be developed among persons who have never met one another. Participants not only have multiple addresses and locations but, above all, different nationalities, cultures, attitudes, thoughts, working patterns, and languages. Therefore, the central research question of this thesis is as follows: “How does the creative process unfold in globally distributed teams?” With a qualitative approach, we used the case study of the Business Unit of Volvo 3P, an arm of the Volvo Group. Throughout this research, we interviewed seven teams engaged in the development of a new product in the chassis and cab areas for the brands Volvo and Renault Trucks, teams that were geographically distributed across Brazil, Sweden, France and India. Our research suggests that corporate values, alongside intrinsic motivation and the nature of the task, lay down the necessary foundations for the development of the creative process in globally distributed teams.
Abstract:
Fire blight, caused by the Gram-negative bacterium Erwinia amylovora, is one of the most destructive bacterial diseases of pomaceous plants. The development of reliable methods to control this disease is therefore urgently needed. This research investigated the possibility of interfering, by altering plant metabolism, with the interactions occurring between E. amylovora, the host plant and the epiphytic microbial community, in order to obtain more effective control of fire blight. Prohexadione-calcium and trinexapac-ethyl, two dioxygenase inhibitors, were chosen as chemical tools to influence plant metabolism: these compounds inhibit the 2-oxoglutarate-dependent dioxygenases and therefore greatly influence plant metabolism. Moreover, dioxygenase inhibitors have been found to enhance plant resistance to a wide range of pathogens; in particular, their application seems a promising method to control fire blight. From the cited literature, it is assumed that these compounds increase plant defence mainly through a transient alteration of flavonoid metabolism. We tried to demonstrate that the reduction in disease susceptibility could be partially due to an indirect influence on the microbial community established on the plant surface. The possibility of influencing the interactions occurring in the epiphytic microbial community is particularly interesting; in fact, the relationships among different bacterial populations on the plant surface are a key factor for more effective biological control of plant diseases. Furthermore, we evaluated the possibility of combining the application of dioxygenase inhibitors with biological control, in order to develop an integrated strategy for the control of fire blight. The first step of this study was the isolation of a pathogenic strain of E. amylovora. In addition, we isolated different epiphytic bacteria that meet the general requirements for biological control agents. Subsequently, the effect of dioxygenase inhibitor treatment on the microbial community was investigated on different plant organs (stigmas, nectaries and leaves), and an increase in the epiphytic microbial population was found. Further experiments were performed with the aim of explaining this effect. In particular, changes in the sugar content of nectar were observed; by decreasing the osmotic potential of nectar, these changes might allow more consistent growth of epiphytic bacteria on blossoms. Similar differences were found on leaves as well. The interactions between E. amylovora and the host plant were investigated in depth by advanced microscopy, studying the influence of dioxygenase inhibitor and SAR inducer application on the infection process and on the migration of the pathogen inside different plant tissues. These microscopy techniques, combined with the use of gfp-labelled E. amylovora, allowed the development of a bioassay method for screening the efficacy of resistance inducers. The final part of the work demonstrated that the reduction in disease susceptibility observed in plants treated with prohexadione-calcium is mainly due to the accumulation of a novel phytoalexin: luteoforol. This 3-deoxyflavonoid was proven to have strong antimicrobial activity.
Abstract:
In gasoline Port Fuel Injection (PFI) and Gasoline Direct Injection (GDI) internal combustion engines, the liquid fuel might be injected into a gaseous ambient in a superheated state, resulting in flash boiling of the fuel. The importance of investigating and predicting such a process lies in its influence on liquid fuel atomization and vaporization, and thus on combustion, with direct implications for engine performance and exhaust gas emissions. The topic of the present PhD research is the numerical analysis of the behaviour of superheated fuel during the injection process, in high-pressure injection systems like those equipping GDI engines. Particular emphasis is placed on investigating the effects of the fuel superheating degree on atomization dynamics and spray characteristics. The present work examines flash evaporation and flash boiling modelling from an engineering point of view, aiming to keep the complex physics involved as simple as possible while capturing the main characteristics of a superheated fuel injection.
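As a sketch of how the superheating degree can be quantified, the snippet below compares the injected-liquid temperature with the saturation temperature at chamber pressure via the Antoine equation; the constants used are for water and serve purely as placeholders for the fuel-specific constants a real analysis would require:

import math

A, B, C = 8.07131, 1730.63, 233.426  # Antoine constants (water, mmHg, degC, 1-100 degC)

def p_sat_mmHg(T_C):
    return 10 ** (A - B / (C + T_C))

def T_sat_C(p_mmHg):
    return B / (A - math.log10(p_mmHg)) - C  # inverted Antoine equation

T_liquid = 120.0   # injected liquid temperature, degC (hypothetical)
p_amb = 760.0      # chamber pressure, mmHg (~1 atm)
dT = T_liquid - T_sat_C(p_amb)      # superheating degree
Rp = p_sat_mmHg(T_liquid) / p_amb   # pressure ratio, an alternative superheat index
print(f"superheat = {dT:.1f} K, p_sat/p_amb = {Rp:.2f}")  # flash boiling if > 0 / > 1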
Abstract:
The research performed during the PhD course was intended to assess innovative applications of near-infrared reflectance spectroscopy (NIR) in the beer production chain. The purpose is to measure by NIR the "malting quality" (MQ) parameter of barley, to monitor the malting process, and to establish whether a certain type of barley is suitable for the production of beer and spirits; NIR is also applied to monitor the brewing process. First of all, it was possible to check the quality of raw materials such as barley, maize and barley malt using a rapid, non-destructive and reliable method with a low error of prediction. The most interesting result at this level was that the repeatability of the NIR calibration models developed was comparable to that of the reference methods. Moreover, for malt, new kinds of validation were used in order to estimate the real predictive power of the proposed calibration models and to understand long-term effects. Furthermore, the precision of all the calibration models developed for malt evaluation was estimated and statistically compared with the reference methods, with good results. New calibration models were then developed for monitoring the malting process, measuring the moisture content and other malt quality parameters during germination. It was also possible to obtain by NIR an estimate of the "malting quality" (MQ) of barley and to predict whether its germination will be rapid and uniform, and whether a certain type of barley is suitable for the production of beer and spirits. Finally, the NIR technique was applied to monitor the brewing process, using correlations between the NIR spectra of beer and analytical parameters, and to assess beer quality. These results are potentially very useful for the actors involved in the beer production chain, especially the calibration models suitable for the control of the malting process and for the assessment of the "malting quality" of barley, which need to be deepened in future studies.
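NIR calibration models of this kind are commonly built with partial least squares (PLS) regression; since the abstract does not name the algorithm, the sketch below is an illustrative stand-in on synthetic spectra, including the cross-validated error (RMSECV) typically used to judge predictive power:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))   # 60 samples x 200 wavelengths (synthetic spectra)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)  # synthetic reference values

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10)          # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))  # root-mean-square error of CV
print(f"RMSECV = {rmsecv:.3f}")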
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated through preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to increase the optimal pose convergence basin, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics.
Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
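The core pose-estimation step described above can be sketched as a 6-degrees-of-freedom optimization that aligns projected model points with image features. The toy mono-planar version below runs on synthetic data and is not the thesis's algorithm, but it shows why local optimizers need a good starting pose:

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

FOCAL = 1000.0  # pinhole focal length, arbitrary units

def project(points3d, pose):
    rx, ry, rz, tx, ty, tz = pose
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    p = points3d @ R.T + np.array([tx, ty, tz])
    return FOCAL * p[:, :2] / p[:, 2:3]  # perspective projection to the image plane

def cost(pose, model3d, target2d):
    return np.sum((project(model3d, pose) - target2d) ** 2)

rng = np.random.default_rng(1)
model = rng.normal(size=(50, 3))                     # stand-in for a bone/prosthesis model
true_pose = np.array([0.05, -0.02, 0.1, 2.0, -1.0, 300.0])
target = project(model, true_pose)                   # synthetic "fluoroscopic" features

start = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 290.0])   # user-supplied starting pose
res = minimize(cost, start, args=(model, target), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
print(res.x.round(3))  # approaches true_pose only if started within the convergence basin
# Note the weak sensitivity of the cost to tz: out-of-plane translation is the
# poorly constrained direction in mono-planar setups, as the error figures above show.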
Abstract:
Life is full of uncertainties. Legal rules should have a clear intention, motivation and purpose in order to diminish daily uncertainties. However, practice shows that their consequences are complex and hard to predict. For instance, tort law has the general objectives of deterring future negligent behavior and compensating the victims of someone else's negligence. Achieving these goals is particularly difficult in medical malpractice cases. To start with, patients who seek medical care are typically already sick. If harm materializes during treatment, it might be very hard to assess whether it was due to substandard medical care or to the patient's poor health. Moreover, the practice of medicine has a positive externality for society, meaning that the design of legal rules is crucial: for instance, they should not lead physicians to avoid practicing just because they are afraid of being sued even when they acted according to the standard level of care. The empirical literature on medical malpractice has developed substantially in the past two decades, with the American case being the most studied; evidence from civil law tradition countries is more difficult to find. The aim of this thesis is to contribute to the empirical literature on medical malpractice, using two civil law countries as case studies: Spain and Italy. The goal of this thesis is to investigate, in the first place, some of the consequences of having two separate sub-systems (administrative and civil) coexisting within the same legal system, which is common in civil law tradition countries with a public national health system (such as Spain, France and Portugal). When this holds, different procedures might apply depending on the type of hospital where the injury took place (essentially, whether it is a public or a private hospital). Therefore, a patient injured in a public hospital should file a claim in the administrative courts, while a patient suffering an identical medical accident in a private hospital should file a claim in the civil courts. A natural question the reader might pose is why both administrative and civil courts should decide medical malpractice cases. Moreover, can this specialization of courts influence how judges decide medical malpractice cases? In the past few years there has been general concern with patient safety, which is currently on the agenda of several national governments, and some initiatives have been taken at the international level with the aim of preventing harm to patients during treatment and care. A negligently injured patient might present a claim against the health care provider with the aim of being compensated for the economic loss and for pain and suffering. In several European countries, health care is mainly provided by a public national health system, which means that if a patient harmed in a public hospital succeeds in a claim against the hospital, public expenditure increases because the State takes part in the litigation process. This poses a problem in a context of increasing national health expenditure and public debt. In Italy, with the aim of increasing patient safety, some regions implemented a monitoring system for medical malpractice claims; properly implemented, this reform should also allow a reduction in medical malpractice insurance costs. This thesis is organized as follows.
Chapter 1 provides a review of the empirical literature on medical malpractice, presenting studies on the outcomes and merit of claims, costs, and defensive medicine. Chapter 2 presents an empirical analysis of medical malpractice claims reaching the Spanish Supreme Court. The focus is on reversal rates for civil and administrative decisions. Administrative decisions appealed by the plaintiff have the highest reversal rates. The results show a bias in the lower administrative courts, which tend to side with the State. We provide a detailed explanation for these results, which may lie in the organization of administrative judges' careers. Chapter 3 assesses predictors of compensation in medical malpractice cases appealed to the Spanish Supreme Court and investigates the amount of damages awarded to patients. The results show horizontal equity between administrative and civil decisions (controlling for observable case characteristics) and vertical inequity (patients suffering more severe injuries tend to receive higher payouts). To carry out these analyses, a database of medical malpractice decisions appealed to the Administrative and Civil Chambers of the Spanish Supreme Court from 2006 to 2009 (designated the Spanish Supreme Court Medical Malpractice Dataset, SSCMMD) was created. A description of how the SSCMMD was built, and of the Spanish legal system, is presented as well. Chapter 4 includes an empirical investigation of the effect of a monitoring system for medical malpractice claims on insurance premiums. In Italy, some regions adopted this policy in different years, while others did not. The study uses data on insurance premiums from Italian public hospitals for the years 2001-2008; this is a significant difference from previous work, as most studies use the insurance company as the unit of analysis. Although insurance premiums rose from 2001 to 2008, the increase was lower for regions that adopted a monitoring system for medical claims. Possible implications of this system are also discussed. Finally, Chapter 5 discusses the main findings, describes possible future research, and concludes.
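Chapter 4's region-by-year adoption pattern lends itself to a two-way fixed-effects (difference-in-differences) regression; the sketch below shows one such specification with hypothetical column names, which may differ from the exact model estimated in the thesis:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_premiums_2001_2008.csv")  # hypothetical panel: hospital-year rows
# 'monitoring' = 1 in region-years after the region adopted the claims monitoring system
model = smf.ols("log_premium ~ monitoring + C(region) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["region"]}
)
print(model.summary())  # a negative 'monitoring' coefficient = slower premium growth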