Resumo:
In recent years my research has focused on a range of physiological problems. Together with my supervisors, I developed and refined mathematical models intended as valid tools for a better understanding of important clinical issues. The aim of this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and building clinical decision support systems useful for intensive care unit patients.

I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model produced clinically realistic output when given inputs from published cases of traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding and obstruction of cerebrospinal fluid outflow). Based on the results, we believe the model would be useful to teach the complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the CO2 tension that achieves the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could also be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations and then running the model to test for optimal combinations of therapeutic manoeuvres).

II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, owing to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory, with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow.
Conversely, the distal collateral circulation plays the major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics: it not only explains generalized results in terms of the physiological mechanisms involved but, by individualizing parameters, may also represent a valuable tool to help with difficult clinical decisions.

III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (the Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and intracranial pressure dynamics (the same model described in point I). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to the vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects observed experimentally when intracranial pressure is artificially elevated and maintained at a constant level (an increase in arterial pressure and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Patients with severe head injury were then simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings.

IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and the respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data in simulations of normoxic and hyperoxic hypercapnia. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate.
The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments. In the latter case they point to experiments to be performed to gather the missing data.
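As a rough illustration of the kind of lumped-parameter modelling described above, the following sketch integrates a toy intracranial pressure compartment with a monoexponential pressure-volume relationship and pressure-driven cerebrospinal fluid absorption. It is a minimal, hypothetical Python example; the parameter values, variable names and functional forms are assumptions chosen for illustration, not the equations of the thesis model.

```python
# Minimal, illustrative sketch (not the thesis model): a lumped-parameter
# intracranial compartment with an exponential pressure-volume relationship
# and a simple pressure-driven CSF absorption pathway. All values are assumed.

import numpy as np

def simulate_icp(map_mmhg=90.0, e_brain=0.11, p0=10.0, r_out=8.0,
                 csf_formation=0.35, t_end=600.0, dt=0.1):
    """Integrate a toy ICP model: dV/dt = formation - absorption,
    ICP = p0 * exp(e_brain * V_added) (monoexponential craniospinal compliance)."""
    n = int(t_end / dt)
    v_added = 0.0            # extra volume stored in the craniospinal space (ml)
    icp_trace = np.empty(n)
    for i in range(n):
        icp = p0 * np.exp(e_brain * v_added)
        absorption = max(icp - p0, 0.0) / r_out     # ml/min, pressure-driven outflow
        v_added += (csf_formation - absorption) * dt / 60.0   # dt in seconds
        icp_trace[i] = icp
    cpp = map_mmhg - icp_trace[-1]                  # cerebral perfusion pressure
    return icp_trace, cpp

icp, cpp = simulate_icp()
print(f"final ICP ~ {icp[-1]:.1f} mmHg, CPP ~ {cpp:.1f} mmHg")
```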
Resumo:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of this pavement was compared to the performance of the same pavement structure with different kinds of asphalt concrete used as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison to a conventional asphalt concrete. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced; the low-environmental-impact materials, such as warm mix asphalt and rubberized asphalt concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the different design approaches it provides, and specifically of the I-R procedure. In Chapter IV, the experimental program is presented together with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
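The incremental-recursive idea described above can be conveyed with a very small loop in which the damage state computed in one time increment is fed back as input to the next one, progressively softening the surface layer. This is a hypothetical Python sketch: the damage-growth and modulus-reduction relations and all numerical values are placeholders, not the calibrated CalME models.

```python
# Illustrative sketch of an incremental-recursive damage loop in the spirit of
# the I-R procedure: the state computed in one increment (damage and the
# resulting surface-layer modulus) is reused as input to the next increment.
# The relations and constants below are hypothetical placeholders.

def run_incremental_recursive(months=240, esals_per_month=20_000,
                              e_intact=6000.0,       # MPa, undamaged modulus (assumed)
                              k_damage=1.5e-7):      # hypothetical damage-rate constant
    damage = 0.0
    history = []
    for month in range(months):
        modulus = e_intact * (1.0 - 0.8 * min(damage, 1.0))   # stiffness drops with damage
        strain_factor = e_intact / modulus                    # softer layer -> larger strains
        damage += k_damage * esals_per_month * strain_factor  # damage grows with traffic
        history.append((month, modulus, min(damage, 1.0)))
    return history

for month, modulus, damage in run_incremental_recursive()[::60]:
    print(f"month {month:3d}: E = {modulus:7.0f} MPa, damage = {damage:.3f}")
```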
Resumo:
In this thesis the impact of R&D expenditures on firm market value and stock returns is examined in a sample of European listed firms for the period 2000-2009. I apply different linear and GMM econometric estimations to test the impact of R&D on market prices, and I construct country portfolios based on firms' ratio of R&D expenditure to market capitalization to study the effect of R&D on stock returns. The results confirm that more innovative firms obtain a better market valuation; investors consider R&D an asset that produces long-term benefits for corporations. The impact of R&D on firm value differs across countries and is significantly modulated by the financial and legal environment in which firms operate. Other firm and industry characteristics also seem to play a determinant role when investors value R&D. First, only larger firms with lower financial leverage that operate in highly innovative sectors decide to disclose their R&D investment. Second, markets assign a premium to small firms operating in high-tech sectors compared to larger enterprises in low-tech industries. On the other hand, I provide empirical evidence indicating that highly R&D-intensive firms may generally exacerbate the mispricing problems related to firm valuation. As R&D contributes to the estimation of future stock returns, portfolios comprising highly R&D-intensive stocks may earn significant excess returns compared to less innovative ones after controlling for size and book-to-market risk. Further, the most innovative firms are generally riskier in terms of stock volatility, but not systematically riskier than low-tech firms. Firms that operate in Continental Europe suffer more mispricing than their Anglo-Saxon peers, but the former are less volatile, other things being equal. The sectors in which firms operate also matter for the impact of R&D on stock returns; this effect is much stronger in high-tech industries.
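A minimal sketch of the portfolio-sorting step described above: each year firms are ranked by their R&D-to-market-capitalization ratio, grouped into quantile portfolios, and the subsequent equal-weighted returns of the groups are compared. The column names and the simulated data in this Python sketch are assumptions for illustration only, and the size and book-to-market risk adjustments mentioned in the abstract are not shown here.

```python
# Toy illustration of sorting firms into R&D-intensity portfolios and comparing
# subsequent equal-weighted returns. Data and column names are invented.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
firms = pd.DataFrame({
    "firm": np.repeat([f"F{i}" for i in range(40)], 5),
    "year": np.tile(range(2000, 2005), 40),
    "rd_to_mcap": rng.uniform(0.0, 0.3, 200),
    "next_year_return": rng.normal(0.06, 0.2, 200),
})

# Assign each firm-year to an R&D-intensity tercile within its year
firms["rd_tercile"] = (
    firms.groupby("year")["rd_to_mcap"]
    .transform(lambda x: pd.qcut(x, 3, labels=["low", "mid", "high"]))
)

# Equal-weighted portfolio return per year and tercile, then averaged over years
portfolio_returns = (
    firms.groupby(["year", "rd_tercile"], observed=True)["next_year_return"]
    .mean()
    .groupby(level="rd_tercile", observed=True)
    .mean()
)
print(portfolio_returns)
print("high-minus-low spread:", portfolio_returns["high"] - portfolio_returns["low"])
```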
Resumo:
The last couple of decades have witnessed a reappraisal of spatial design-based techniques. Usually, information on the spatial location of the individuals of a population has been used to develop efficient sampling designs. This thesis offers a new technique for inference on both individual values and global population values that is able to employ, at the estimation stage, the spatial information available before sampling, by rewriting a deterministic interpolator within a design-based framework. The resulting point estimator of the individual values is treated both in the case of finite spatial populations and in that of continuous spatial domains, while the theory on the estimator of the global population value covers the finite population case only. A fairly broad simulation study compares the results of the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods in order to control different aspects of the spatial structure. The simulation outcomes show that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of the spatial information substantially improves design-based spatial inference on individual values.
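To make the idea concrete, the sketch below predicts the unsampled individual values of a small synthetic finite population with a deterministic interpolator computed from a simple random sample drawn without replacement, and compares the sample mean with the true population mean. Inverse-distance weighting is used here only as a common example of a deterministic interpolator, not necessarily the one rewritten in the thesis; the data-generating choices are arbitrary.

```python
# Toy example: predict unsampled individual values of a finite spatial
# population with a deterministic interpolator (inverse-distance weighting)
# based on a simple random sample without replacement (SRSWOR).

import numpy as np

rng = np.random.default_rng(1)
N, n = 400, 40
coords = rng.uniform(0, 1, size=(N, 2))
# Population values with a smooth spatial trend plus noise (toy superpopulation)
values = 10 + 5 * coords[:, 0] + 3 * coords[:, 1] + rng.normal(0, 0.5, N)

sample_idx = rng.choice(N, size=n, replace=False)   # SRSWOR

def idw_predict(target_xy, sample_xy, sample_y, power=2.0):
    d = np.linalg.norm(sample_xy - target_xy, axis=1)
    if np.any(d == 0):                  # target coincides with a sampled unit
        return sample_y[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * sample_y) / np.sum(w)

predictions = np.array([
    idw_predict(coords[i], coords[sample_idx], values[sample_idx])
    for i in range(N)
])
print("RMSE of interpolated individual values:",
      np.sqrt(np.mean((predictions - values) ** 2)).round(3))
print("true mean:", values.mean().round(3),
      " SRSWOR sample mean:", values[sample_idx].mean().round(3))
```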
Resumo:
The present work is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe the diffusion process X between spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to specify the model. One way to approach this problem is to treat x_0 and S as parameters of a statistical model and to estimate them. The present work discusses four different cases, in which the membrane potential X between spikes is assumed to be, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we interpret as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be stated explicitly; moreover, the optimality of these estimators is shown using LAN theory. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normal. In the last chapter the efficiency of the minimum-distance estimators is examined on simulated data; furthermore, applications to real data sets and their results are discussed in detail.
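As a concrete illustration of the first case, the Python sketch below assumes the drift mu and volatility sigma of the Brownian motion are known, draws iid interspike intervals from the corresponding inverse Gaussian first-passage law, and recovers the barrier distance a = S - x_0 by numerical maximum likelihood. The parameter values are arbitrary and the code is a self-contained toy example, not the estimator implementation of the thesis.

```python
# Toy version of the Brownian-motion-with-drift case: interspike intervals are
# first-passage times of level a = S - x_0, which follow an inverse Gaussian
# law with mean a/mu and shape a^2/sigma^2. Estimate a by maximum likelihood.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
mu, sigma, a_true, n = 1.5, 0.8, 2.0, 500

# Draw inverse Gaussian variates via the Michael-Schucany-Haas method
mean, shape = a_true / mu, a_true**2 / sigma**2
nu = rng.standard_normal(n) ** 2
x = mean + (mean**2 * nu) / (2 * shape) - (mean / (2 * shape)) * np.sqrt(
    4 * mean * shape * nu + mean**2 * nu**2)
t = np.where(rng.uniform(size=n) <= mean / (mean + x), x, mean**2 / x)

def neg_log_lik(a):
    # Inverse Gaussian log-density written in terms of the barrier distance a
    return -np.sum(np.log(a) - 0.5 * np.log(2 * np.pi * sigma**2 * t**3)
                   - (a - mu * t) ** 2 / (2 * sigma**2 * t))

a_hat = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded").x
print(f"true a = {a_true}, estimated a = {a_hat:.3f}")
```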
Resumo:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a large amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational applications are becoming more frequent. The fact that many NWP centres have recently put into operation convection-permitting forecast models, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is required to prevent radar errors from degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma subscale; this scale can be modelled only with NWP models at the highest resolution, such as the COSMO-2 model. One of the problems in modelling extreme convective events concerns the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with a resolution of about a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work presents some preliminary experiments on coupling a high-resolution meteorological model with a hydrological one.
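The role of the quality information in latent heat nudging can be sketched schematically: the model's latent heating profile is scaled towards the radar-derived precipitation rate, and the correction is damped wherever the radar quality index is low. The Python sketch below is a simplified, hypothetical illustration of that weighting idea, not the operational COSMO implementation.

```python
# Schematic sketch of quality-weighted latent heat nudging (LHN): scale the
# model heating towards the radar rain rate, damped by a quality index in [0, 1].
# Weighting scheme and numbers are illustrative assumptions.

import numpy as np

def lhn_increment(lh_model_profile, rain_model, rain_radar, quality,
                  max_scale=2.0):
    """Return an additive latent-heating increment for one model column."""
    if rain_model <= 0.0 or rain_radar < 0.0:
        return np.zeros_like(lh_model_profile)
    scale = np.clip(rain_radar / rain_model, 1.0 / max_scale, max_scale)
    # A poor-quality radar pixel contributes little to the nudging
    return quality * (scale - 1.0) * lh_model_profile

profile = np.array([0.1, 0.8, 1.5, 1.0, 0.3])   # K/h, toy heating profile
print(lhn_increment(profile, rain_model=1.0, rain_radar=3.0, quality=0.7))
```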
Resumo:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a constraint programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
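The modular precedence idea can be conveyed with a tiny feasibility check: in a cyclic schedule of period λ, a precedence (i → j) with duration d_i is satisfied when start_j + k·λ ≥ start_i + d_i for some integer iteration shift k ≥ 0. The Python sketch below computes the smallest feasible shift per arc for a candidate schedule; it merely illustrates the concept and is not the filtering algorithm developed in the thesis.

```python
# Minimal check of modular precedences in a cyclic schedule with a given period.
# Activities, durations and the candidate start times below are invented.

import math

def check_modular_precedences(starts, durations, arcs, period):
    """starts/durations: dicts activity -> value; arcs: iterable of (i, j)."""
    shifts = {}
    for i, j in arcs:
        slack = starts[i] + durations[i] - starts[j]
        # Smallest non-negative integer k with starts[j] + k*period >= end of i
        k = max(0, math.ceil(slack / period))
        shifts[(i, j)] = k
    return shifts

starts = {"a": 0, "b": 2, "c": 5}
durations = {"a": 4, "b": 6, "c": 3}
arcs = [("a", "b"), ("b", "c"), ("c", "a")]   # cyclic dependency across iterations
print(check_modular_precedences(starts, durations, arcs, period=8))
```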
Resumo:
Climate-change-related impacts, notably coastal erosion, inundation and flooding from sea level rise and storms, will increase in the coming decades, enhancing the risks for coastal populations. Further recourse to coastal armoring and other engineered defenses to address risk reduction will exacerbate threats to coastal ecosystems. Alternatively, the protection services provided by healthy ecosystems are emerging as a key element in climate adaptation and disaster risk management. I examined two distinct approaches to coastal defense on the basis of their ecological and ecosystem conservation values. First, I analyzed the role of coastal ecosystems in providing services for hazard risk reduction. The wave-attenuation value of coral reefs was quantitatively demonstrated using a meta-analysis approach. Results indicate that coral reefs can provide wave attenuation comparable to hard engineered artificial defenses, and at lower cost. Conservation and restoration of existing coral reefs are cost-effective management options for disaster risk reduction. Second, I evaluated the possibility of enhancing the ecological value of artificial coastal defense structures (CDS) as habitats for marine communities. I documented the suitability of CDS to support native, ecologically relevant, habitat-forming canopy algae, exploring the feasibility of enhancing the ecological value of CDS by promoting the growth of desired species. Juveniles of Cystoseira barbata can be successfully transplanted to both natural and artificial habitats and are affected neither by the lack of surrounding adult algal individuals nor by substratum orientation. Transplantation success was, however, limited by biotic disturbance from macrograzers on CDS compared to natural habitats. Future work should explore the reasons behind the different ecological functioning of artificial and natural habitats, unravelling the factors and mechanisms that cause it. Understanding the functioning of the systems associated with artificial habitats is key to allowing environmental managers to identify proper mitigation options and to forecast the impact of alternative coastal development plans.
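The quantity underlying the meta-analytic wave-attenuation result can be illustrated with a toy calculation: each (invented) study contributes a relative reduction in wave height between the seaward and landward sides of the reef, and the study-level effects are combined with a simple sample-size weighted mean. A real meta-analysis would use variance-based weights and random-effects models; all numbers below are illustrative assumptions, not data from the thesis.

```python
# Toy wave-attenuation effect sizes for hypothetical studies, combined with a
# sample-size weighted mean. All values are invented for illustration.

studies = [
    # (study id, incident wave height m, transmitted wave height m, n observations)
    ("reef_A", 1.8, 0.5, 12),
    ("reef_B", 2.4, 0.4, 8),
    ("reef_C", 1.1, 0.3, 20),
]

effects = []
for name, h_in, h_out, n in studies:
    reduction = 1.0 - h_out / h_in          # fraction of wave height removed
    effects.append((name, reduction, n))
    print(f"{name}: {reduction:.0%} wave height reduction")

weighted_mean = sum(r * n for _, r, n in effects) / sum(n for _, _, n in effects)
print(f"weighted mean attenuation: {weighted_mean:.0%}")
```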
Resumo:
The market's challenges lead firms to collaborate with other organizations in order to create joint ventures, alliances and consortia, defined as "Interorganizational Networks" (IONs) (Provan, Fish and Sydow, 2007). Some of these IONs are managed through shared participant governance (Provan and Kenis, 2008): a team composed of entrepreneurs and/or directors of each firm of an ION. The research focuses on this kind of management team and is based on an input-process-output model: some input variables (work group diversity, intra-team friendship network density) have a direct influence on the process (team identification, shared leadership, interorganizational trust, team trust and intra-team communication network density), which in turn influences the team outputs: individual innovation behaviors and team effectiveness (team performance, work group satisfaction and ION affective commitment). Data were collected on a sample of 101 entrepreneurs grouped into 28 ION governance teams, and the research hypotheses were tested through path analysis and multilevel models. As expected, team trust and shared leadership are positively and directly related to team effectiveness, while team identification and interorganizational trust are indirectly related to the team outputs. The friendship network density among team members has positive effects on team trust and on the communication network density and, through the latter, it improves the teammates' ION affective commitment. Shared leadership and its effects on team effectiveness are fostered by higher levels of team identification and weakened by higher levels of work group diversity, specifically gender diversity. Finally, the communication network density and shared leadership at the individual level are related to the frequency of individual innovative behaviors. The dissertation's results give a wider and more precise indication about the management of interfirm networks through "shared" forms of governance.
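The nesting of individual entrepreneurs within governance teams is what motivates the multilevel models mentioned above. The Python sketch below fits a random-intercept model with statsmodels on simulated data, regressing an individual-level outcome on two predictors with a random intercept per team; the variable names and data are placeholders, not the study's measures or results.

```python
# Random-intercept (multilevel) model on simulated data: entrepreneurs (level 1)
# nested in governance teams (level 2). Variable names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
teams = np.repeat(np.arange(28), 4)                 # roughly 4 members per team
n = len(teams)
team_effect = rng.normal(0, 0.5, 28)[teams]         # team-level random intercepts
df = pd.DataFrame({
    "team": teams,
    "shared_leadership": rng.normal(0, 1, n),
    "communication_density": rng.normal(0, 1, n),
})
df["innovative_behavior"] = (0.4 * df["shared_leadership"]
                             + 0.3 * df["communication_density"]
                             + team_effect + rng.normal(0, 1, n))

model = smf.mixedlm("innovative_behavior ~ shared_leadership + communication_density",
                    df, groups=df["team"]).fit()
print(model.summary())
```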
Resumo:
During my PhD I developed an innovative technique to reproduce in vitro the 3D thymic microenvironment, to be used for the growth and differentiation of thymocytes and possibly as a transplant replacement in conditions of depressed thymic immune regulation. The work was carried out in the laboratory of Tissue Engineering at the University Hospital in Basel, Switzerland, under the tutorship of Prof. Ivan Martin. Since a number of studies have suggested that the 3D structure of the thymic microenvironment might play a key role in regulating the survival and functional competence of thymocytes, I focused my effort on the isolation and purification of the extracellular matrix of the mouse thymus. Specifically, based on the assumption that TEC can favour the differentiation of pre-T lymphocytes, I developed a specific decellularization protocol to obtain the intact, DNA-free extracellular matrix of the adult mouse thymus. Two different protocols satisfied the main characteristics of a decellularized matrix, according to qualitative and quantitative assays. In particular, the quantity of DNA was less than 10% in absolute value, no positive staining for cells was found, and the 3D structure and composition of the ECM were maintained. In addition, I was able to prove that the decellularized matrices were not cytotoxic for the cells themselves and were able to increase the expression of MHC II antigens compared to control cells grown in standard conditions. I also showed that TECs grow and proliferate for up to ten days on top of the decellularized matrix. After a complete characterization of the culture system, these innovative natural scaffolds could be used to improve the standard culture conditions of TEC, to study in vitro the action of different factors on their differentiation genes, and to test the ability of TECs to induce the in vitro maturation of seeded T lymphocytes.
Resumo:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example a guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow; therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, owing to the potentially catastrophic consequences. We propose a method that gives a certified answer as to whether a linear program is feasible or infeasible, or returns "unknown". The advantage of our method is that it is reasonably fast and rarely answers "unknown". It works by computing a safe solution that is, in some sense, the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
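The certification principle can be conveyed by a small sketch: a candidate point returned by a floating-point LP solver is re-checked against the constraints Ax ≤ b in exact rational arithmetic, so that a "feasible" verdict can be trusted even though the solver itself used floating point. The Python code below shows only this verification step on assumed toy data; the relative-interior construction and the safe objective bounds of the thesis are not reproduced here.

```python
# Re-check a candidate LP solution against A x <= b using exact rationals, so
# the feasibility verdict does not depend on floating-point rounding.
# The constraint data and candidate point below are invented toy values.

from fractions import Fraction

def certify_feasible(A, b, x, slack=Fraction(0)):
    """Return True iff A x <= b - slack holds exactly (entries convertible to Fraction)."""
    for row, bi in zip(A, b):
        lhs = sum(Fraction(a) * Fraction(xj) for a, xj in zip(row, x))
        if lhs > Fraction(bi) - slack:
            return False
    return True

# Constraints: x1 + x2 <= 4, -x1 <= 0, -x2 <= 0 ; candidate from a float solver
A = [[1, 1], [-1, 0], [0, -1]]
b = [4, 0, 0]
x_candidate = [Fraction(19, 10), Fraction(21, 10)]   # exact rational copy of 1.9, 2.1
print(certify_feasible(A, b, x_candidate))           # True: certified feasible
```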
Resumo:
In the present thesis I study the contribution of inventory management to firm value from a risk management perspective. I find a significant contribution of inventories to the value of risk management, especially through the operating flexibility channel. In contrast, I do not find evidence supporting the view of inventories as a reserve of liquidity. Inventories substitute, albeit not perfectly, for derivatives or cash holdings. The substitution between hedging with derivatives and inventory is moderated by the correlation between cash flow and the underlying asset in the derivative contract. Hedge ratios increase with the effectiveness of derivatives. The decision to hedge with cash holdings or inventories is strongly influenced by the degree of complementarity between production factors and by cash flow volatility. In addition, I provide a risk-management-based explanation of the secular substitution between inventories and cash holdings documented, among others, in Bates et al. (2009, Journal of Finance). In a sample of U.S. firms between 1980 and 2006, I empirically confirm the negative relation between inventories and cash, and I provide evidence of the poor performance of investment cash flow sensitivities as a measure of financial constraints, also in the case of inventory investment. This result can be explained by firms' scarce reliance on inventories as a reserve of liquidity. Finally, as an extension of my study, I contrast the theoretical predictions of a model of the integrated management of inventories, trade credit and cash holdings with empirical data.
Resumo:
Epoxy resins are widely used materials owing to their high added value, deriving from high mechanical properties and thermal resistance; for this reason they are employed both as coatings on metals in aerospace and in food packaging. However, their preparation uses dangerous reagents such as bisphenol A and epichlorohydrin, classified respectively as suspected of damaging fertility and as carcinogenic. Therefore, to satisfy the ever-growing attention to environmental problems and human safety, we are considering alternative "green" processes that use reagents obtained as by-products of other processes and mild experimental conditions, and that are also economically sustainable and attractive to industry. Following previous results, we carried out the reaction leading to the formation of diphenolic acid (DPA), its allylation, and the subsequent epoxidation of the double bonds, all in aqueous solvent. In a second step the obtained products were cross-linked at high temperature, with and without the use of hardeners. Tests such as release in aqueous solution, scratch testing and DSC analysis were then performed on the resulting resin.
Resumo:
This thesis presents a process-based modelling approach to quantify carbon uptake by lichens and bryophytes at the global scale. Based on the modelled carbon uptake, potential global rates of nitrogen fixation, phosphorus uptake and chemical weathering by the organisms are estimated. In this way, the significance of lichens and bryophytes for global biogeochemical cycles can be assessed. The model uses gridded climate data and key properties of the habitat (e.g. disturbance intervals) to predict the processes which control net carbon uptake, namely photosynthesis, respiration, water uptake and evaporation. It relies on equations used in many dynamic vegetation models, combined with concepts specific to lichens and bryophytes, such as poikilohydry or the effect of water content on CO2 diffusivity. To incorporate the great functional variation of lichens and bryophytes at the global scale, the model parameters are characterised by broad ranges of possible values instead of a single, globally uniform value. The predicted terrestrial net uptake of 0.34 to 3.3 Gt/yr of carbon and the global patterns of productivity are in accordance with empirically derived estimates. Based on the simulated estimates of net carbon uptake, further impacts of lichens and bryophytes on biogeochemical cycles are quantified at the global scale. The focus is on three processes, namely nitrogen fixation, phosphorus uptake and chemical weathering. The presented estimates take the form of potential rates, meaning that the amounts of nitrogen and phosphorus needed by the organisms to build up biomass are quantified, also accounting for resorption and leaching of nutrients. Subsequently, the potential phosphorus uptake on bare ground is used to estimate chemical weathering by the organisms, assuming that they release weathering agents to obtain phosphorus. The predicted requirement ranges from 3.5 to 34 Tg/yr for nitrogen and from 0.46 to 4.6 Tg/yr for phosphorus. Estimates of chemical weathering are between 0.058 and 1.1 km³/yr of rock. These values seem to have a realistic order of magnitude and support the notion that lichens and bryophytes have the potential to play an important role in global biogeochemical cycles.
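The idea of replacing a single global parameter set with broad ranges can be sketched by drawing many candidate "strategies" from assumed trait ranges and computing, for each, a net carbon uptake (photosynthesis minus respiration, limited by the fraction of time the organisms are metabolically active). The functional forms, ranges and units in the Python sketch below are stand-ins chosen for illustration, not the model's actual equations.

```python
# Sample many hypothetical lichen/bryophyte "strategies" from broad trait ranges
# and compute a toy net carbon uptake for each. Ranges and units are assumed.

import numpy as np

rng = np.random.default_rng(4)
n_strategies = 1000

max_photosynthesis = rng.uniform(20.0, 200.0, n_strategies)   # gC m^-2 yr^-1 (assumed)
respiration_fraction = rng.uniform(0.2, 0.8, n_strategies)    # share of GPP respired
wet_fraction_of_year = rng.uniform(0.1, 0.9, n_strategies)    # time metabolically active

gpp = max_photosynthesis * wet_fraction_of_year
net_uptake = gpp * (1.0 - respiration_fraction)

print(f"net uptake across strategies: {net_uptake.min():.1f} to "
      f"{net_uptake.max():.1f} gC m^-2 yr^-1 (median {np.median(net_uptake):.1f})")
```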
Resumo:
A growing interest in new sources of energy has led in recent years to the development of a new generation of catalysts for alcohol dehydrogenative coupling (ADC). This green, atom-efficient reaction is capable of turning alcohol derivatives into higher-value and chemically more attractive ester molecules, and it finds interesting applications in the transformation of the large variety of products deriving from biomass. In the present work, a new series of ruthenium-PNP pincer complexes is investigated for the transformation of 1-butanol, one of the most challenging substrates for this type of reaction, into butyl butyrate, a short-chain symmetrical ester widely used in the flavor industry. Since the reaction kinetics depends on hydrogen diffusion, the study aimed at identifying the proper reactor type and the right catalyst concentration to avoid mass-transfer interference and to obtain dependable data. A comparison of catalytic activities and productivities was made to establish the role of the different ligands bonded both to the PNP backbone and to the ruthenium metal centre, and hence to find the best catalyst for this type of reaction.