Abstract:
Multivariate analyses of UV-Vis spectral data from cachaca wood extracts provide a simple and robust model to classify aged Brazilian cachacas according to the wood species used in the maturation barrels. The model is based on the inspection of 93 extracts of oak and different Brazilian wood species, obtained using a non-aged cachaca as the extraction solvent. Application of PCA (Principal Components Analysis) and HCA (Hierarchical Cluster Analysis) leads to the identification of 6 clusters of cachaca wood extracts (amburana, amendoim, balsamo, castanheira, jatoba, and oak). LDA (Linear Discriminant Analysis) affords classification of 10 different wood species used in the cachaca extracts (amburana, amendoim, balsamo, cabreuva-parda, canela-sassafras, castanheira, jatoba, jequitiba-rosa, louro-canela, and oak) with an accuracy ranging from 80% (amendoim and castanheira) to 100% (balsamo and jequitiba-rosa). The methodology provides a low-cost alternative to methods based on liquid chromatography and mass spectrometry for classifying cachacas aged in barrels made of different wood species.
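As a rough illustration of this chemometric workflow, the sketch below runs PCA for exploratory projection and a cross-validated LDA classifier on a spectral matrix. The data, component counts, and printed accuracy are placeholders, not the study's values.

```python
# Minimal PCA + LDA sketch for spectral classification (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: UV-Vis absorbances (n_extracts x n_wavelengths); y: wood species labels.
rng = np.random.default_rng(0)
X = rng.random((93, 300))               # placeholder for 93 measured spectra
y = np.tile(np.arange(10), 10)[:93]     # placeholder labels for 10 wood species

# Exploratory step: project the standardized spectra onto 2 principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Supervised step: PCA for dimension reduction, then LDA; accuracy by 5-fold CV.
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {acc.mean():.2f}")
```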
Abstract:
The uplift capacity of helical anchors normally increases with the number of helical plates. The rate of capacity gain is variable, considering that the disturbance caused by the anchor installation is generally more pronounced in the soil mass above the upper plates than above the lower plates, because the upper soil layers are penetrated more times. The present investigation examines the effect of the number of helices on the performance of helical anchors in sand, based on the results of centrifuge model tests. Uplift loading tests were performed on 12 different types of piles installed in two containers of dry sand prepared with different densities. The measured fractions of the uplift capacity related to each individual helical plate of multi-helix anchors were compared with the fractions predicted by the individual bearing method. The results of this investigation indicate that in double- and triple-helix anchors, the contributions of the second and third plate to the total anchor uplift capacity decreased with the increase of sand relative density and plate diameter. In addition, these experiments demonstrated that the variation of the anchor load-displacement behavior with the number of helices also depends on these parameters.
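The individual bearing method mentioned above treats each helical plate as an independent bearing element and sums the contributions. Below is a minimal sketch under a common simplified formulation, in which each plate contributes its area times gamma' * H * Nq; the breakout factor, soil parameters, and anchor geometry are all assumptions, not the paper's data.

```python
import math

# Sketch of the individual bearing method: anchor uplift capacity taken as the
# sum of independent plate contributions. All values below are hypothetical.
GAMMA_EFF = 16.0  # effective unit weight of dry sand, kN/m^3 (assumed)
N_Q = 20.0        # uplift breakout factor (assumed constant here; in practice
                  # it depends on relative depth and sand density)

def plate_capacity(diameter_m, depth_m):
    """Uplift contribution of one helical plate: A * gamma' * H * Nq."""
    area = math.pi * diameter_m ** 2 / 4.0
    return area * GAMMA_EFF * depth_m * N_Q

# Hypothetical triple-helix anchor: 0.3 m plates at 3.0, 4.5 and 6.0 m depth.
plates = [(0.3, 3.0), (0.3, 4.5), (0.3, 6.0)]
total = sum(plate_capacity(d, h) for d, h in plates)
for i, (d, h) in enumerate(plates, start=1):
    q = plate_capacity(d, h)
    print(f"plate {i} at {h:.1f} m: {q:6.1f} kN ({q / total:.0%} of total)")
print(f"predicted uplift capacity: {total:.1f} kN")
```

The centrifuge results discussed above amount to measuring how far the real plate fractions deviate from the independent-contribution fractions this method predicts.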
Abstract:
Aspects related to the users' cooperative work are not considered in the traditional approach of software engineering, since the user is viewed independently of his/her workplace environment or group, with the individual model generalized to the study of the collective behavior of all users. This work proposes a software requirements process to address cooperative work in information systems in which coordination of the users' actions is distributed and communication among users occurs indirectly, through the data entered while using the software. To achieve this goal, this research draws on ergonomics, the 3C cooperation model, awareness, and software engineering concepts. Action-research is used as the research methodology, applied in three cycles during the development of a corporate workflow system in a technological research company. This article discusses the third cycle, which corresponds to the process that refines the cooperative work requirements with the software in actual use in the workplace, where the inclusion of a computer system changes the users' workplace from face-to-face interaction to interaction mediated by the software. The results showed that a higher degree of users' awareness of their own activities and of the other system users contributes to a decrease in errors and in inappropriate use of the system.
Abstract:
The cutting tools with the highest wear resistance are those manufactured by the powder metallurgy process, which combines the development of materials and design properties, features of shape-making technology, and sintering. The global cutting tool market consumes about US$ 12 billion annually; therefore, any research that improves tool designs and machining process techniques adds value or reduces costs. The aim is to describe the Spark Plasma Sintering (SPS) of cutting tools in functionally graded materials, to show the suitability of this structural design through a thermal residual stress model and, lastly, to present two kinds of inserts. For this, three cutting tool materials were used (Al2O3-ZrO2, Al2O3-TiC and WC-Co). The samples were sintered by SPS at 1300 °C and 70 MPa. The results showed that mechanical and thermal displacements may be separated during thermal treatment for analysis. Besides, the absence of cracks indicated coherence between the experimental results and the predicted residual stresses.
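As a back-of-the-envelope illustration of the kind of estimate underlying such thermal residual stress models, the sketch below evaluates the biaxial thermal-mismatch stress between two bonded layers on cooling from the sintering temperature. The material properties are nominal handbook-style values and the thin-layer formula is a simplification, not the model used in the study.

```python
# Thermal-mismatch stress estimate between two bonded layers after cooling
# from the SPS temperature. Properties are nominal assumed values.
E = {"Al2O3": 380e9, "WC-Co": 600e9}         # Young's modulus, Pa (nominal)
NU = {"Al2O3": 0.22, "WC-Co": 0.22}          # Poisson ratio (nominal)
ALPHA = {"Al2O3": 8.0e-6, "WC-Co": 5.5e-6}   # thermal expansion, 1/K (nominal)

dT = 1300.0 - 25.0                 # cooling range from sintering temperature, K
d_alpha = ALPHA["Al2O3"] - ALPHA["WC-Co"]

# Biaxial mismatch stress magnitude in the Al2O3 layer (thin-layer
# approximation); tensile in the higher-CTE layer on cooling. This is an
# upper bound: creep and graded interlayers relax much of it in practice.
sigma = E["Al2O3"] / (1.0 - NU["Al2O3"]) * d_alpha * dT
print(f"mismatch stress estimate: {sigma / 1e6:.0f} MPa")
```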
Abstract:
Introduction
1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment
Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence coupled with their potential carcinogenicity makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).
1.2 Remediation technologies
Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap and containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination, UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and the lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.
1.3 Bioremediation of PAH contaminated soil & groundwater
Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003) and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years.
In situ bioremediation is a technique that is applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, and is based on providing optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater which has been removed from the site via excavation (soil) or pumping (water). Hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.
1.4 Bioavailability of PAH in the subsurface
Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a free phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed by soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, there are several other factors influencing the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs and environmental factors (temperature, moisture, pH, degree of contamination).
Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).
1.5 Increasing the bioavailability of PAH in soil
Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of synthetic surfactants may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al.
showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
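To make the rate-limiting role of desorption concrete, a toy two-compartment model (sorbed and aqueous PAH pools, both first-order; all rate constants hypothetical) reproduces the qualitative behavior sketched in Figure 1: when desorption is much slower than aqueous biodegradation, overall removal is controlled by mass transfer, not by microbial kinetics.

```python
# Toy bioavailability model: first-order desorption from soil feeds the
# aqueous pool, where first-order biodegradation occurs. Rates are assumed.
import numpy as np
from scipy.integrate import solve_ivp

k_des = 0.05   # desorption (mass-transfer) rate, 1/day (assumed, slow)
k_bio = 1.0    # aqueous biodegradation rate, 1/day (assumed, fast)

def rhs(t, y):
    sorbed, aqueous = y
    desorb = k_des * sorbed
    return [-desorb, desorb - k_bio * aqueous]

sol = solve_ivp(rhs, (0.0, 120.0), [100.0, 0.0], dense_output=True)
for ti in np.linspace(0, 120, 5):
    s, a = sol.sol(ti)
    print(f"day {ti:5.0f}: sorbed {s:6.2f}, aqueous {a:5.3f}, "
          f"degraded {100.0 - s - a:6.2f}")
```

With k_des much smaller than k_bio, the aqueous pool stays nearly empty and the removal curve tracks the desorption rate, which is the bioavailability limitation discussed above.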
Abstract:
Nowadays, licensing practices have increased in importance and relevance, driving the widespread diffusion of markets for technologies. Firms are shifting from a tactical to a strategic attitude towards licensing, addressing both business and corporate level objectives. The Open Innovation Paradigm has been embraced. Firms rely more and more on collaboration and external sourcing of knowledge. This new model of innovation requires firms to leverage external technologies to unlock the potential of their internal innovative efforts. In this context, firms’ competitive advantage depends both on their ability to recognize available opportunities inside and outside their boundaries and on their readiness to exploit them in order to fuel their innovation process dynamically. Licensing is one of the ways available to firms to reap the advantages associated with an open attitude in technology strategy. From the licensee’s point of view this implies challenging the so-called not-invented-here syndrome, affecting the more traditional firms that emphasize the myth of internal research and development supremacy. This also entails understanding the so-called cognitive constraints affecting the perfect functioning of markets for technologies, which are associated with the costs of assimilation, integration and exploitation of external knowledge by recipient firms. My thesis aims at shedding light on new, interesting issues associated with in-licensing activities that have been neglected by the literature on licensing and markets for technologies. The reason for this gap is the “perspective bias” affecting the works within this stream of research. With very few notable exceptions, they have generally been concerned with the investigation of the so-called licensing dilemma of the licensor (whether to license out or to internally exploit the in-house developed technologies) while neglecting the licensee’s perspective. In my opinion, this has left room for improving the understanding of the determinants and conditions affecting licensing-in practices. From the licensee’s viewpoint, the licensing strategy deals with the search, integration, assimilation and exploitation of external technologies. As such it lies at the very heart of a firm’s technology strategy. Improving our understanding of this strategy is thus required to assess the full implications of in-licensing decisions as they shape firms’ innovation patterns and the evolution of their technological capabilities. It also allows for understanding the so-called cognitive constraints associated with the not-invented-here syndrome. In recognition of that, the aim of my work is to contribute to the theoretical and empirical literature explaining the determinants of the licensee’s behavior, by providing a comprehensive theoretical framework as well as ad-hoc conceptual tools to understand and overcome frictions and to ease the achievement of satisfactory technology transfer agreements in the marketplace. Aiming at this, I investigate licensing-in in three different ways, developed in three research papers. In the first work, I investigate the links between licensing and the patterns of firms’ technological search diversification according to the frameworks of the search literature, resource-based theory and the theory of general purpose technologies. In the second paper, which continues where the first one left off, I analyze the new concept of learning-by-licensing, in terms of the development of new knowledge inside the licensee firms (e.g.
new patents) some years after the acquisition of the license, according to the dynamic capabilities perspective. Finally, in the third study, I deal with the determinants of the remuneration structure of patent licenses (form and amount), and in particular with the role of the upfront fee from the licensee’s perspective. Aiming at this, I combine the insights of two theoretical approaches: agency theory and real options theory.
Abstract:
The present study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modeling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advancements in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover different aspects related to the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for the collection of data, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such favourable conditions. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. The presence or absence of buildings can thus be adopted as an indicator of these driving conditions, since it represents the expression of the action of driving forces in the land suitability sorting process. The existence of correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out using spatial data regarding the periurban and rural area of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface in which cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban area of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for the calibration.
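A minimal sketch of the kind of presence/absence model described above: a binomial GLM (logistic regression) linking building occurrence to candidate driving forces, here with synthetic covariates standing in for the real GIS layers.

```python
# Presence/absence GLM sketch: logistic regression of building occurrence on
# candidate driving forces. Covariates and response are synthetic stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
slope = rng.random(n)        # e.g., terrain slope layer (assumed covariate)
road_dist = rng.random(n)    # e.g., distance to roads layer (assumed covariate)
X = sm.add_constant(np.column_stack([slope, road_dist]))

# Synthetic response: presence more likely near roads and on gentle slopes.
p = 1.0 / (1.0 + np.exp(-(0.5 - 2.0 * slope - 3.0 * road_dist)))
y = rng.binomial(1, p)

model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.summary())

# Fitted probabilities in [0, 1]: the analogue of the probability grid surface.
prob_surface = model.predict(X)
```

Mapped back onto the raster grid, these fitted probabilities give the 0-1 surface the abstract describes, which can then be compared against the observed 2005 building distribution.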
Abstract:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a large amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations has long been recognized for improving the initial conditions of high-resolution NWP models, and operational application is becoming more frequent. The fact that many NWP centres have recently taken convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing the quality information required to avoid radar errors degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma scale; this scale can be modeled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems of modeling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated in a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of about a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. Then the convective capabilities of the COSMO-2 model are investigated through some case studies. Finally, this work shows some preliminary experiments on the coupling of a high-resolution meteorological model with a hydrological one.
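As a schematic of the LHN idea referred to above: the model's latent-heating profile in a grid column is scaled by the ratio of radar-observed to modelled rain rate, with the scaling bounded and weighted by a radar quality factor. This is a conceptual sketch, not the COSMO-2 implementation; the bounds and the form of the quality weighting are assumptions.

```python
# Schematic latent heat nudging (LHN) increment for one grid column.
import numpy as np

def lhn_increment(lh_profile, rr_model, rr_radar, quality,
                  f_min=0.5, f_max=2.0):
    """Temperature increment per model level (conceptual sketch).

    quality in [0, 1] downweights uncertain radar estimates, reflecting the
    radar data quality description discussed above; f_min/f_max bound the
    nudging (assumed limits).
    """
    if rr_model <= 0.0 or quality <= 0.0:
        return np.zeros_like(lh_profile)
    scale = np.clip(rr_radar / rr_model, f_min, f_max)
    return quality * (scale - 1.0) * lh_profile

lh = np.array([0.1, 0.4, 0.8, 0.5, 0.2])  # K per step, hypothetical profile
print(lhn_increment(lh, rr_model=2.0, rr_radar=4.0, quality=0.8))
```

The role of the quality factor is visible here: a column with poor radar quality receives a proportionally smaller heating adjustment, so radar errors cannot dominate the initial conditions.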
Abstract:
During the last decade peach and nectarine fruit have lost considerable market share, due to increased consumer dissatisfaction with quality at retail markets. This is mainly due to the harvesting of too-immature fruit and to high ripening heterogeneity. The main problem is that the traditionally used maturity indices are not able to objectively detect the fruit maturity stage, nor the variability present in the field, leading to difficult post-harvest management of the product and to high fruit losses. To assess fruit ripening more precisely, other techniques and devices can be used. Recently, a new non-destructive maturity index based on vis-NIR technology, the Index of Absorbance Difference (IAD), which correlates with fruit degreening and ethylene production, was introduced, and the IAD was used to study peach and nectarine fruit ripening from the “field to the fork”. In order to choose the best techniques to improve fruit quality, a detailed description of the tree structure, fruit distribution and ripening evolution on the tree was undertaken. In more detail, an architectural model (PlantToon®) was used to describe the tree structure and the IAD was applied to characterize the maturity stage of each fruit. Their combined use provided an objective and precise evaluation of fruit ripening variability, related to different training systems, crop load, fruit exposure and internal temperature. Based on simple field assessments of fruit maturity (as IAD) and growth, a model for early prediction of harvest date and yield was developed and validated. The relationship between the non-destructive maturity index IAD and fruit shelf-life was also confirmed. Finally, the obtained results were validated by consumer tests: fruit sorted into different maturity classes obtained different consumer acceptance. The improved knowledge led to an innovative management of peach and nectarine fruit, from “field to market”.
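For reference, the IAD is commonly computed as the difference in absorbance between a band near the chlorophyll-a absorption peak (around 670 nm) and a nearby reference band (around 720 nm). The sketch below applies this definition to a synthetic spectrum; the exact wavelengths and the spectrum are assumptions, not the study's instrument settings.

```python
# Index of Absorbance Difference (IAD) sketch on a synthetic vis-NIR spectrum.
import numpy as np

wavelengths = np.arange(400, 1000)  # nm, hypothetical spectral grid
# Synthetic absorbance with a chlorophyll-like peak near 670 nm.
absorbance = np.exp(-((wavelengths - 670) / 40.0) ** 2)

def iad(wl, a, band=670, ref=720):
    """Absorbance difference between the chlorophyll band and the reference."""
    return float(a[np.searchsorted(wl, band)] - a[np.searchsorted(wl, ref)])

# IAD decreases as chlorophyll degrades during ripening.
print(f"IAD = {iad(wavelengths, absorbance):.2f}")
```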
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers’ responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed which work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budget are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
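As a toy illustration of the static data-partitioning scheme such a runtime applies to a parallel loop, the sketch below splits iterations into near-equal chunks, one per core, so no per-iteration synchronization is needed. Python is used here only for brevity; a real runtime would implement this in C on the target cores, and the core count and loop size are arbitrary.

```python
# Static loop partitioning: near-equal chunks of iterations, one per core.
def static_chunks(n_iters: int, n_cores: int):
    """Yield (core_id, iteration_range) pairs; early cores take the remainder."""
    base, rem = divmod(n_iters, n_cores)
    start = 0
    for core in range(n_cores):
        size = base + (1 if core < rem else 0)
        yield core, range(start, start + size)
        start += size

for core, chunk in static_chunks(10, 4):
    print(f"core {core}: iterations {list(chunk)}")
```

The cost trade-off analyzed in the dissertation lives exactly here: a static split costs almost nothing at runtime but balances load poorly for irregular work, which is why dynamic schemes and their synchronization overheads matter on embedded many-cores.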
Abstract:
The aim of the present thesis was to investigate the influence of lower-limb joint models on musculoskeletal model predictions during gait. We started our analysis with a baseline model, i.e., the state-of-the-art lower-limb model (spherical joint at the hip and hinge joints at the knee and ankle) created from MRI of a healthy subject in the Medical Technology Laboratory of the Rizzoli Orthopaedic Institute. We varied the models of the knee and ankle joints, including: knee and ankle joints with mean instantaneous axis of rotation, universal joint at the ankle, scaled-generic-derived planar knee, subject-specific planar knee model, subject-specific planar ankle model, spherical knee, and spherical ankle. The joint model combinations, corresponding to 10 musculoskeletal models, were implemented in a typical inverse dynamics problem, including inverse kinematics, inverse dynamics, static optimization and joint reaction analysis algorithms solved using the OpenSim software, to calculate joint angles, joint moments, muscle forces and activations, and joint reaction forces during 5 walking trials. The predicted muscle activations were qualitatively compared to experimental EMG to evaluate the accuracy of model predictions. A planar joint at the knee, a universal joint at the ankle and spherical joints at the knee and at the ankle produced appreciable variations in model predictions during gait trials. The planar knee joint model reduced the discrepancy between the predicted activation of the rectus femoris and the EMG (with respect to the baseline model), and the reduced peak knee reaction force was considered more accurate. The use of the universal joint, with the introduction of the subtalar joint, worsened the muscle activation agreement with the EMG, and increased ankle and knee reaction forces were predicted. The spherical joints, in particular at the knee, worsened the muscle activation agreement with the EMG. A substantial increase in joint reaction forces at all joints was predicted despite the good agreement of the joint kinematics with those of the baseline model. The introduction of the universal joint had a negative effect on the model predictions. The cause of this discrepancy is likely to be found in the definition of the subtalar joint, and thus in the particular subject's anthropometry used to create the model and define the joint pose. We concluded that the implementation of complex joint models does not have marked effects on the joint reaction forces during gait. Computed results were similar in magnitude and pattern to those reported in the literature. Nonetheless, the introduction of a planar joint model at the knee had a positive effect on the predictions, while the use of a spherical joint at the knee and/or at the ankle is absolutely unadvisable, because it predicted unrealistic joint reaction forces.
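The static optimization step mentioned above resolves muscle redundancy by minimizing a cost, typically the sum of squared activations, subject to the joint moment balance obtained from inverse dynamics. A minimal single-joint sketch follows, with hypothetical moment arms and maximal forces rather than the parameters of the Rizzoli model.

```python
# Static optimization sketch: activations minimizing sum of squared
# activations while reproducing a required joint moment. Values are made up.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.05, 0.03, -0.04])          # moment arms of 3 muscles, m
f_max = np.array([1500.0, 900.0, 1200.0])  # maximal isometric forces, N
m_required = 40.0                          # joint moment from inverse dynamics, N*m

objective = lambda a: np.sum(a ** 2)
# Equality constraint: sum of (activation * f_max * moment arm) = joint moment.
constraint = {"type": "eq", "fun": lambda a: r @ (a * f_max) - m_required}

res = minimize(objective, x0=np.full(3, 0.1), bounds=[(0.0, 1.0)] * 3,
               constraints=[constraint])
print("activations:", np.round(res.x, 3))
```

The point of the exercise is visible in the result: the antagonist (negative moment arm) is driven toward zero, and the load is shared between the agonists in proportion to their strength, which is the pattern then compared against EMG.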
Abstract:
Efficient energy storage and conversion is playing a key role in overcoming the present and future challenges in energy supply. Batteries provide portable, electrochemical storage of green energy sources and potentially allow for a reduction of the dependence on fossil fuels, which is of great importance with respect to the issue of global warming. In view of both energy density and energy drain, rechargeable lithium ion batteries outperform other present accumulator systems. However, despite great efforts over the last decades, the ideal electrolyte in terms of key characteristics such as capacity, cycle life, and, most important, reliable safety, has not yet been identified.

Steps ahead in lithium ion battery technology require a fundamental understanding of lithium ion transport, salt association, and ion solvation within the electrolyte. Indeed, well-defined model compounds allow for systematic studies of molecular ion transport. Thus, in the present work, based on the concept of ‘immobilizing’ ion solvents, three main series with a cyclotriphosphazene (CTP), hexaphenylbenzene (HPB), and tetramethylcyclotetrasiloxane (TMS) scaffold were prepared. Lithium ion solvents, among others ethylene carbonate (EC), which, together with propylene carbonate, has proven to fulfill safety and market concerns in commercial lithium ion batteries, were attached to the different cores via alkyl spacers of variable length.

All model compounds were fully characterized, pure and thermally stable up to at least 235 °C, covering the required broad range of glass transition temperatures from -78.1 °C up to +6.2 °C. While the CTP models tend to rearrange at elevated temperatures over time, which questions the general stability of alkoxide-related (poly)phosphazenes, both the HPB- and CTP-based models show no evidence of core stacking. In particular the CTP derivatives represent good solvents for various lithium salts, exhibiting no significant differences in the ionic conductivity σ_dc and thus indicating comparable salt dissociation and rather independent motion of cations and anions.

In general, temperature-dependent bulk ionic conductivities investigated via impedance spectroscopy follow a Williams-Landel-Ferry (WLF) type behavior. Modifications of the alkyl spacer length were shown to influence ionic conductivities only in combination with changes in glass transition temperatures. Though the glass transition temperatures of the blends are low, their conductivities are only in the range of typical polymer electrolytes. The highest σ_dc obtained at ambient temperatures was 6.0 x 10^-6 S cm^-1, strongly suggesting a rather tight coordination of the lithium ions to the solvating 2-oxo-1,3-dioxolane moieties, supported by the increased σ_dc values for the oligo(ethylene oxide) based analogues.

Further insights into the mechanism of lithium ion dynamics were derived from 7Li and 13C solid-state NMR investigations. While localized ion motion was probed by, e.g., 7Li spin-lattice relaxation measurements with apparent activation energies E_a of 20 to 40 kJ/mol, long-range macroscopic transport was monitored by Pulsed-Field Gradient (PFG) NMR, providing an E_a of 61 kJ/mol. The latter is in good agreement with the values determined from bulk conductivity data, indicating that the ion transport relevant for bulk conduction is the long-range transport detected by PFG NMR. However, the μm-scale diffusion is rather slow, emphasizing the strong lithium coordination to the carbonyl oxygens, which hampers sufficient ionic conductivity and suggests exploring ‘softer’ solvating moieties in future electrolytes.
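For illustration, a WLF-type temperature dependence of the conductivity can be written as log σ(T) = log σ(T_ref) + C1 (T − T_ref) / (C2 + T − T_ref). The sketch below fits this form to made-up data; the reference temperature, starting values, and data points are all assumptions, not the measurements of this work.

```python
# WLF-type fit of temperature-dependent ionic conductivity (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def wlf(T, log_sigma_ref, C1, C2, T_ref=298.0):
    """log10 of conductivity following a WLF-type temperature dependence."""
    return log_sigma_ref + C1 * (T - T_ref) / (C2 + (T - T_ref))

T = np.array([273.0, 283.0, 298.0, 313.0, 333.0, 353.0])        # K
log_sigma = np.array([-8.5, -7.4, -6.0, -5.0, -4.0, -3.3])      # made-up data

# Fit log_sigma_ref, C1, C2 (T_ref stays fixed at its default value).
popt, _ = curve_fit(wlf, T, log_sigma, p0=[-6.0, 10.0, 100.0])
print("log sigma(T_ref), C1, C2 =", np.round(popt, 2))
```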
Abstract:
This dissertation deals with two specific aspects of a potential hydrogen-based energy economy, namely the problems of energy storage and energy conversion. In order to contribute to the solution of these problems, the structural and dynamical properties of two promising materials for hydrogen storage (lithium imide/amide) and proton conduction (poly[vinyl phosphonic acid]) are modeled on an atomistic scale by means of first principles molecular dynamics simulation methods.

In the case of the hydrogen storage system lithium amide/imide (LiNH_2/Li_2NH), the focus was on the interplay of structural features and nuclear quantum effects. For these calculations, Path-Integral Molecular Dynamics (PIMD) simulations were used. The structures of these materials at room temperature were elucidated; in collaboration with an experimental group, very good agreement between calculated and experimental solid-state 1H-NMR chemical shifts was observed. Specifically, the structure of Li_2NH features a disordered arrangement of the Li lattice, which was not reported in previous studies. In addition, a persistent precession of the NH bonds was observed in our simulations. We provide evidence that this precession is the consequence of a toroid-shaped effective potential in which the protons in the material are immersed. This potential is essentially flat along the torus azimuthal angle, which might lead to important quantum delocalization effects of the protons over the torus.

On the energy conversion side, the dynamics of protons in a proton conducting polymer (poly[vinyl phosphonic acid], PVPA) was studied by means of a steered ab-initio Molecular Dynamics approach applied to a simplified polymer model. The focus was put on understanding the microscopic proton transport mechanism in polymer membranes, and on characterizing the relevance of the local environment, particularly the effect of water molecules, which participate in the hydrogen bonding network in the material. The results indicate that these water molecules are essential for the effectiveness of proton conduction. A water-mediated Grotthuss mechanism is identified as the main contributor to proton conduction, which agrees with the experimentally observed decay in conductivity for the same material in the absence of water molecules.

The gain in understanding the microscopic processes and structures present in these materials can help the development of new materials with improved properties, thus contributing to the solution of problems in the implementation of fuel cells.
Abstract:
In this study a novel method, MicroJet reactor technology, was developed to enable the custom preparation of nanoparticles.
Danazol/HPMCP HP50 and Gliclazide/Eudragit S100 nanoparticles were used as model systems for the investigation of the effects of process parameters and microjet reactor setup on the nanoparticle properties during the microjet reactor construction.
Following the feasibility study of the microjet reactor system, three different nanoparticle formulations were prepared using fenofibrate as the model drug. Fenofibrate nanoparticles stabilized with poloxamer 407 (FN), fenofibrate nanoparticles in hydroxypropyl methyl cellulose phthalate (HPMCP) matrix (FHN) and fenofibrate nanoparticles in HPMCP and chitosan matrix (FHCN) were prepared under controlled precipitation using MicroJet reactor technology. Particle sizes of all the nanoparticle formulations were adjusted to 200-250 nm.
The changes in the experimental parameters altered the system thermodynamics, resulting in the production of nanoparticles between 20-1000 nm (PDI<0.2) with high drug loading efficiencies (96.5% at a 20:1 polymer:drug ratio).
Drug release from all nanoparticle formulations was fast and complete after 15 minutes in both FaSSIF and FeSSIF media, whereas in mucoadhesiveness tests only the FHCN formulation was found to be mucoadhesive. Results of the Caco-2 studies revealed that the % dose absorbed values were significantly higher (p<0.01) for FHCN in both cases where FaSSIF and FeSSIF were used as the transport buffer.
Abstract:
Sustainable development is one of the biggest challenges of the twenty-first century. Various universities have begun the debate about the content of this concept and the ways in which to integrate it into their policy, organization and activities. Universities have a special responsibility to take a leading position by demonstrating best practices that sustain and educate a sustainable society. For that reason universities have the opportunity to create a culture of sustainability for today's students, and to set their expectations for how the world should be. This thesis aims at analyzing how Delft University of Technology and the University of Bologna face the challenge of becoming a sustainable campus. In this context, both universities have been studied and analyzed following the International Sustainable Campus Network (ISCN) methodology, which provides a common framework to formalize commitments and goals at campus level. In particular, this work aims to highlight which key performance indicators are essential to reach sustainability; as a consequence, the following aspects have been taken into consideration: energy use, water use, solid waste and recycling, and carbon emissions. Subsequently, in order to provide a better understanding of the current state of sustainability at the University of Bologna and Delft University of Technology, and of potential strategies to achieve the stated objective, a SWOT analysis has been undertaken. Strengths, weaknesses, opportunities and threats have been identified to understand how the two universities can implement a synergy to improve each other. To frame a “Sustainable SWOT”, the model proposed by People and Planet has been considered, so it has been necessary to evaluate important matters such as policy, investment, management, education and engagement. In this regard, it has been fundamental to involve the main sustainability coordinators of the two universities; this has been achieved through a brainstorming session. Partnerships are key to the achievement of sustainability. The creation of a bridge between the two universities aims to join forces and to create a new generation of talent. As a result, people can become able to support universities in the exchange of information, ideas, and best practices for achieving sustainable campus operations and integrating sustainability in research and teaching. For this purpose the project "SUCCESS" has been presented; the project aims to create an interactive European campus network that can be considered a strategic key player for sustainable campus innovation in Europe. Specifically, the main key performance indicators have been analyzed, and the importance they have for the two universities and their strategic impact have been highlighted. For this reason, a survey was conducted with people who play crucial roles for sustainability within the two universities, and they were asked to evaluate the KPIs of the project. This assessment has been relevant because it has represented the foundation to develop a strategy to create a true collaboration.