931 results for "Model free kinetics"


Relevance: 30.00%

Abstract:

Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields in the vicinity of the free surface and, depending on the breaker characteristics, different mechanisms of air entrainment are usually observed. Several laboratory experiments have investigated the role of air bubbles in the wave breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since turbulent energy dissipation involves the air-water mixture. Recent advances in numerical modelling have provided valuable insight into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). Single-phase numerical models, in which the constitutive equations are solved only for the liquid phase, neglect the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may induce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing in air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (Cornell Breaking waves And Structures, 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998). In the first part of the work both fluids are considered incompressible, while the second part treats the modelling of air compressibility. The mathematical formulation and the numerical solution of the governing equations of COBRAS2 are derived and some model-experiment comparisons are shown. In particular, validation tests are performed in order to prove the stability and accuracy of the model. The simulation of a large air bubble rising in an otherwise quiescent water pool shows that the model reproduces the physics of the process realistically. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium in which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analysed and compared with experimental data and other numerical models in order to investigate the influence of air on wave breaking mechanisms and to highlight the capability and accuracy of the model. Finally, the modelling of air compressibility is included in the newly developed model and validated, showing an accurate reproduction of the processes. Some preliminary tests on wave impact on vertical walls are also performed: since the modelling of the air flow allows a more realistic reproduction of breaking wave propagation, the dependence of the impact pressure values on the breaker shape and aeration characteristics is studied and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.
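As a hedged illustration of the two-phase idea discussed above (not the actual COBRAS2 formulation), the following minimal Python sketch shows how a water volume fraction, as produced by a free-surface tracking algorithm, could define the mixture density and viscosity used in the momentum equations of an incompressible air-water solver; all property values are standard reference numbers.

```python
import numpy as np

# Illustrative sketch (not the COBRAS2 formulation): in a two-phase solver a
# water volume fraction alpha (1 = water, 0 = air) defines the mixture
# density and viscosity used in the momentum equations.
RHO_WATER, RHO_AIR = 1000.0, 1.2      # kg/m^3
MU_WATER, MU_AIR = 1.0e-3, 1.8e-5     # Pa*s

def mixture_properties(alpha):
    """Return mixture density and viscosity from the water volume fraction."""
    alpha = np.clip(alpha, 0.0, 1.0)
    rho = alpha * RHO_WATER + (1.0 - alpha) * RHO_AIR
    mu = alpha * MU_WATER + (1.0 - alpha) * MU_AIR
    return rho, mu

# Example: a column of cells crossing an aerated free surface.
alpha = np.array([1.0, 1.0, 0.7, 0.2, 0.0])
rho, mu = mixture_properties(alpha)
print(rho)   # decreases smoothly from the water value to the air value
```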

Relevance: 30.00%

Abstract:

The research performed during the PhD candidature was intended to evaluate the quality of white wines as a function of the reduction in SO2 use during the first steps of the winemaking process. In order to investigate the mechanism and intensity of the interactions occurring between lysozyme and the principal macro-components of musts and wines, a series of experiments on model wine solutions was undertaken, focusing attention on polyphenols, SO2, oenological tannins, pectins, ethanol and sugars. In the second part of this research programme, a series of conventional sulphite-added vinifications was compared to vinifications in which sulphur dioxide was replaced by lysozyme, in order to define potential winemaking protocols suitable for the production of SO2-free wines. To reach this final goal, the technological performance of two selected yeast strains with a low aptitude to produce SO2 during fermentation was also evaluated. The data obtained suggest that the addition of lysozyme and oenological tannins during alcoholic fermentation could represent a promising alternative to the use of sulphur dioxide and a reliable starting point for the production of SO2-free wines. The different vinification protocols studied influenced the composition of the volatile profile of the wines at the end of alcoholic fermentation, especially with regard to alcohols and ethyl esters, partly as a consequence of the yeast's response to the presence or absence of sulphites during fermentation, and contributed in different ways to the sensory profiles of the wines. In fact, the amino acid analysis showed that lysozyme can affect nitrogen consumption as a function of the yeast strain used in fermentation. During bottle storage, the evolution of volatile compounds is affected by the presence of SO2 and oenological tannins, confirming their positive role in scavenging oxygen and in maintaining the amounts of esters above certain levels, thus avoiding a decline in wine quality. Even though a natural decrease in the phenolic profiles was found, due to oxidation caused by the oxygen dissolved in the medium during the storage period, the presence of SO2 together with tannins counteracted the decay of the phenolic content present at the end of fermentation. Tannins also played a central role in preserving the polyphenolic profile of the wines during storage, confirming their antioxidant properties as reductants. Our study of the fundamental chemistry relevant to the oxidative phenolic spoilage of white wines demonstrated the suitability of glutathione for inhibiting the production of yellow xanthylium cation pigments, generated from flavanols and glyoxylic acid, at the concentration at which it typically occurs in wine. The ability of glutathione to bind glyoxylic acid rather than acetaldehyde may enable glutathione to be used as a 'switch' for glyoxylic acid-induced polymerisation mechanisms, as opposed to the equivalent acetaldehyde polymerisation, in processes such as micro-oxidation. Further research is required to assess the ability of glutathione to prevent xanthylium cation production during the in-situ production of glyoxylic acid and in the presence of sulphur dioxide.

Relevance: 30.00%

Abstract:

In territories where food production is mostly scattered across many small or medium-size, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely during the year, according to the individual production processes being carried out. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their widespread adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work was divided into two main parts. The first one focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel spreadsheets, considering different values of the air-to-biomass ratio and taking downdraft gasification as the particular application examined. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the distribution of oxygen among the different oxidation reactions (involving the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these relations the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are assumed to regulate the main solid conversion steps involved in the gasification process.
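The water-gas shift equilibrium assumed in the gasification zone can be illustrated with the minimal Python sketch below. This is not the thesis's Excel model; the equilibrium-constant correlation Keq(T) = exp(4577.8/T − 4.33) is a commonly used literature expression adopted here as an assumption, and the example gas composition is hypothetical.

```python
import numpy as np

def wgs_equilibrium(n_CO, n_H2O, n_CO2, n_H2, T):
    """Shift a raw gas composition (mol) to water-gas shift equilibrium,
    CO + H2O <-> CO2 + H2, at temperature T (K).

    Keq(T) from a commonly used correlation (assumption, not the thesis's value).
    """
    K = np.exp(4577.8 / T - 4.33)
    # (n_CO2 + x)(n_H2 + x) = K (n_CO - x)(n_H2O - x)  ->  quadratic in the extent x
    a = 1.0 - K
    b = n_CO2 + n_H2 + K * (n_CO + n_H2O)
    c = n_CO2 * n_H2 - K * n_CO * n_H2O
    roots = np.roots([a, b, c])
    # keep the real root that leaves all mole numbers non-negative
    for x in sorted(r.real for r in roots if abs(r.imag) < 1e-9):
        out = (n_CO - x, n_H2O - x, n_CO2 + x, n_H2 + x)
        if all(n >= -1e-12 for n in out):
            return out
    raise ValueError("no physical root found")

# Example: hypothetical raw gas (mol) in a 1073 K gasification zone
print(wgs_equilibrium(0.30, 0.15, 0.10, 0.20, 1073.0))
```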
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and therefore temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, the working temperature, the particle size and the very nature of the biomass (through its own heat of pyrolysis) all have comparable weights on the process development, so that the corresponding time may depend on any one of these factors according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears to be suitable for more than one biomass species. Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of about 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since the gasification and pyrolysis times were found to change considerably even with small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
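The contrast between kinetically limited and heat-transfer-limited steps can be illustrated, purely as an order-of-magnitude sketch, with the snippet below; the Arrhenius parameters, particle properties and heat-transfer coefficient are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def char_gasification_time(T, A=1.0e4, E=150e3, X=0.95):
    """Time (s) to reach conversion X for a first-order Arrhenius rate.
    A (1/s) and E (J/mol) are hypothetical placeholder parameters."""
    k = A * np.exp(-E / (R_GAS * T))
    return -np.log(1.0 - X) / k

def lumped_heating_time(d_p, T_gas, T0, T_target,
                        rho=650.0, cp=1500.0, h=60.0):
    """Lumped-capacitance heating time (s) of a spherical particle of
    diameter d_p (m); property values are hypothetical placeholders."""
    tau = rho * cp * (d_p / 6.0) / h           # characteristic time, sphere (V/A = d/6)
    return tau * np.log((T_gas - T0) / (T_gas - T_target))

# Example: 10 mm wood-like particle in a 1100 K gas stream
print(char_gasification_time(1100.0))                    # kinetically controlled step
print(lumped_heating_time(0.01, 1100.0, 300.0, 700.0))   # heat-transfer controlled step
```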
Differently from other research efforts in the same field, the main scope here is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints derived mainly from the performance analysis of the cleaning units and from the likely synergic effects of the contaminants on the proper operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the design of the paths was the removal of tars from the gas stream, in order to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was identified as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration and a consequently relevant air consumption for this operation were calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which appear unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved not to be feasible for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard degree was technically demonstrated, even in the case where several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary materials consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
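A minimal sketch of how such path comparisons could be organised is given below; the unit names, pressure drops and power demands are hypothetical placeholders, not the thesis's design data.

```python
# Hypothetical sketch: rank candidate gas cleaning "paths" by total pressure
# drop and auxiliary power demand (all unit data are placeholders only).
units = {
    "cyclone":        {"dp_mbar": 10.0, "kW": 0.2},
    "ceramic_filter": {"dp_mbar": 25.0, "kW": 0.1},
    "tar_cracker":    {"dp_mbar": 15.0, "kW": 1.5},
    "nahcolite_bed":  {"dp_mbar": 12.0, "kW": 0.0},
    "carbon_bed":     {"dp_mbar": 18.0, "kW": 0.0},
    "water_scrubber": {"dp_mbar": 30.0, "kW": 0.8},
}

paths = {
    "dry_line": ["cyclone", "ceramic_filter", "tar_cracker",
                 "nahcolite_bed", "carbon_bed"],
    "wet_line": ["cyclone", "tar_cracker", "water_scrubber"],
}

def path_totals(path):
    """Total pressure drop (mbar), auxiliary power (kW) and unit count."""
    dp = sum(units[u]["dp_mbar"] for u in path)
    power = sum(units[u]["kW"] for u in path)
    return dp, power, len(path)

for name, path in paths.items():
    dp, power, n = path_totals(path)
    print(f"{name}: {n} units, {dp:.0f} mbar total pressure drop, {power:.1f} kW")
```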

Relevance: 30.00%

Abstract:

Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; developing a model able to describe such a process would make it possible to predict the microstructure obtained from the treatment and the consequent mechanical properties of the material. During a heat treatment a steel can undergo two different kinds of phase transitions (p.t.): diffusive (second-order p.t.) and displacive (first-order p.t.). In this thesis an attempt is made to describe both within a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy. The model is completed with a suitable description of the free energy, from which the constitutive relations are derived. The equations are then cast in variational form and different numerical techniques are used to deal with the principal features of the model: time dependency, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
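As a hedged, heavily reduced illustration of the diffusive part of such a model (not the coupled thermo-chemo-mechanical formulation, and not its DOLFIN implementation), the following sketch advances a 1D Cahn-Hilliard equation with a double-well free energy by explicit finite differences; all parameter values are illustrative.

```python
import numpy as np

# Reduced 1D Cahn-Hilliard sketch with a double-well free energy and periodic
# boundaries; the thesis couples this equation with momentum and internal-energy
# balances and solves the system with FEM in DOLFIN.
N, L = 128, 1.0
dx = L / N
M, kappa, W = 1.0, 1.0e-4, 1.0        # mobility, gradient coefficient, well height
dt = 2.5e-7                            # small explicit time step for stability

def lap(f):
    """Periodic second-difference Laplacian."""
    return (np.roll(f, -1) + np.roll(f, 1) - 2.0 * f) / dx**2

rng = np.random.default_rng(0)
c = 0.5 + 0.02 * rng.standard_normal(N)        # near-critical initial composition

for _ in range(20000):
    # chemical potential mu = df/dc - kappa * lap(c), with f = W c^2 (1 - c)^2
    mu = 2.0 * W * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * lap(c)
    c += dt * M * lap(mu)                       # dc/dt = M * lap(mu)

print(c.min(), c.max())   # composition separates towards the two wells (c ~ 0 and c ~ 1)
```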

Relevance: 30.00%

Abstract:

Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping ratios and vibration modes. Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations must be made concerning the damping. The fourth chapter still deals with the post-processing of data acquired after a vibration test, but this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined, in order to determine which type of model captures the real dynamic behaviour of the bridge more accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input and, finally, the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
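As a hedged companion to the classical FFT-based FRF construction mentioned for the second chapter (not the ellipse-based method proposed in the thesis), the sketch below estimates an FRF with the standard H1 estimator and recovers the natural frequency of a simulated single-degree-of-freedom oscillator; all signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import csd, welch

def h1_frf(force, response, fs, nperseg=1024):
    """Classical H1 FRF estimate: H1(f) = S_xy(f) / S_xx(f)."""
    f, Sxy = csd(force, response, fs=fs, nperseg=nperseg)
    _, Sxx = welch(force, fs=fs, nperseg=nperseg)
    return f, Sxy / Sxx

# Illustrative data: an SDOF oscillator (fn = 5 Hz, 2% damping, unit mass)
# excited by white noise and integrated with a simple semi-implicit scheme.
fs, T = 200.0, 60.0
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(1)
force = rng.standard_normal(t.size)

wn, zeta = 2.0 * np.pi * 5.0, 0.02
x = np.zeros(t.size)
v = 0.0
for i in range(1, t.size):
    a = force[i] - 2.0 * zeta * wn * v - wn**2 * x[i - 1]
    v += a / fs
    x[i] = x[i - 1] + v / fs

f, H = h1_frf(force, x, fs)
print(f[np.argmax(np.abs(H))])   # peak frequency close to 5 Hz
```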

Relevance: 30.00%

Abstract:

Despite intensive research during the last decades, the theoretical understanding of supercooled liquids and the glass transition is still far from being complete. Besides analytical investigations, the so-called energy-landscape approach has turned out to be very fruitful. In the literature, many numerical studies have demonstrated that, at sufficiently low temperatures, all thermodynamic quantities can be predicted with the help of the properties of local minima in the potential energy landscape (PEL). The main purpose of this thesis is to strive for an understanding of dynamics in terms of the potential energy landscape. In contrast to the study of static quantities, this requires knowledge of the barriers separating the minima. Up to now, the general viewpoint has been that thermally activated processes ('hopping') determine the dynamics only below Tc (the critical temperature of mode-coupling theory), in the sense that relaxation rates follow from local energy barriers. As we show here, this viewpoint should be revised, since the temperature dependence of the dynamics is governed by hopping processes already below 1.5 Tc. Using the example of a binary mixture of Lennard-Jones particles (BMLJ), we establish a quantitative link between the diffusion coefficient, D(T), and the PEL topology. This is achieved in three steps. First, we show that it is essential to consider whole superstructures of many PEL minima, called metabasins, rather than single minima; this is a consequence of strong correlations within groups of PEL minima. Second, we show that D(T) is inversely proportional to the average residence time in these metabasins. Third, the temperature dependence of the residence times is related to the depths of the metabasins, as given by the surrounding energy barriers. We further discuss that the study of small (but not too small) systems is essential, in that one deals with a less complex energy landscape than in large systems. In a detailed analysis of different system sizes, we show that the small BMLJ system considered throughout the thesis is free of major finite-size-related artifacts.
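The quantitative link described above, D(T) inversely proportional to the mean metabasin residence time, can be illustrated with the toy sketch below; the grouping of minima into metabasins is deliberately simplified to runs of revisits to the same minimum, which is only a crude stand-in for the actual metabasin construction, and the jump-length prefactor is an assumption.

```python
import numpy as np

def residence_times(minimum_ids):
    """Split a trajectory of visited-minimum labels into consecutive visits
    (simplified: a 'metabasin' here is a maximal run of the same label)
    and return the residence times in units of the sampling step."""
    ids = np.asarray(minimum_ids)
    change = np.flatnonzero(ids[1:] != ids[:-1]) + 1
    bounds = np.concatenate(([0], change, [ids.size]))
    return np.diff(bounds)

def diffusion_estimate(minimum_ids, a2_over_6=1.0):
    """D ~ a^2 / (6 <tau_MB>); the effective jump length a is an assumption."""
    tau = residence_times(minimum_ids).mean()
    return a2_over_6 / tau

traj = [0, 0, 0, 1, 1, 0, 0, 2, 2, 2, 2, 3]     # toy trajectory of minima labels
print(residence_times(traj))                    # [3 2 2 4 1]
print(diffusion_estimate(traj))
```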

Relevance: 30.00%

Abstract:

The free endings of dorsal root ganglion (DRG) neurons are responsible for the detection of painful stimuli. Thermal, chemical or mechanical stimuli evoke ion currents across the membrane and thereby changes in membrane potential. These noxiously induced currents can be modulated to a large extent by chemical substances and other stimuli. The ion channel TRPV1 is responsible for the detection of numerous chemical stimuli and of at least part of the noxious heat stimuli. In this work, some of the mechanisms leading to the rapid sensitization of heat-evoked ion currents were elucidated. For this purpose, acutely dissociated rat DRG neurons were used as a model of their peripheral endings and investigated by whole-cell recordings using the patch-clamp technique. The use of trypsin during the preparation of DRG neurons has no functional influence on heat- or capsaicin-induced currents, but improves the recording conditions for the patch-clamp procedure. In 144 acutely dissociated DRG neurons, the current responses to three one-second heat stimuli, applied at intervals of 40 s by superfusion with extracellular solution heated to 45.3-46.3 °C, were measured. Repetitively reproducible heat-induced inward currents of about 160 pA were obtained; no tachyphylaxis and almost no inactivation were observed. Immediately before the second heat stimulus, the neurons were superfused for two seconds with extracellular solution containing control solution, 0.5 μM capsaicin, 10 μM sodium nitroprusside or 10 μM YC-1. There was no indication that nitric oxide or guanylate cyclase contributes significantly to the sensitization of heat-induced currents in DRG neurons, although a washout of cytosolic factors required for this signalling pathway, caused by the experimental setup, cannot be excluded. At a concentration of 0.5 μM, capsaicin applied for two seconds elicits a very small inward current of about 33 pA and within two seconds leads to a rapidly reversible sensitization of heat-induced inward currents in DRG neurons (p<0.01). The extent of the sensitization is proportional to the size of the capsaicin-induced current (r = −0.7, p<0.001). Keeping the intracellular calcium concentration constant by means of the calcium chelator BAPTA prevents the capsaicin-induced sensitization of heat-induced currents in DRG neurons. Consequently, despite its fast kinetics, the capsaicin-induced sensitization is not based on a synergistic action of the two agonists capsaicin and heat on their common receptor; rather, it depends on an increase in the intracellular free calcium concentration. Changes in cellular function are frequently mediated by protein kinases. ERK (extracellular signal-regulated kinase), a member of the MAP kinase family, is activated by MEK (MAPK/ERK kinase) upon membrane depolarization and calcium influx into the cell. Blockade of the MEK/ERK cascade by the specific MEK inhibitor U0126 likewise abolishes the capsaicin-induced sensitization of the heat responses. Application of capsaicin thus leads within two seconds to a rapidly reversible sensitization of heat-evoked ion currents in nociceptive DRG neurons.

This sensitization is brought about by a calcium influx into the cell and the resulting activation of protein kinases. The MEK/ERK cascade is an intracellular signalling system that can be activated very rapidly (well under 2 s) and plays a decisive role in regulating the sensitivity of nociceptive DRG neurons; its fast kinetics can only be explained by a membrane-bound, or at least membrane-near, localization of these protein kinases. Application of ten-second heat stimuli also triggers a sensitization of heat-evoked ion currents that is as pronounced as the sensitization produced by 0.5 μM capsaicin (p<0.005). The ever-growing understanding of how the nociceptive system works constantly opens up new approaches for the development of new analgesics. For example, the modulation of specific intracellular protein kinases could favourably influence the phosphorylation state, and thus the activation properties, of the ion channels that serve the transduction of noxious stimuli. Newer, even more specific MEK inhibitors may open up new possibilities for research and, later, for therapy.

Relevance: 30.00%

Abstract:

During my PhD I developed an innovative technique to reproduce in vitro the 3D thymic microenvironment, to be used for the growth and differentiation of thymocytes and, possibly, for transplantation in conditions of depressed thymic immune regulation. The work was carried out in the Tissue Engineering laboratory at the University Hospital in Basel, Switzerland, under the tutorship of Prof. Ivan Martin. Since a number of studies have suggested that the 3D structure of the thymic microenvironment might play a key role in regulating the survival and functional competence of thymocytes, I focused my effort on the isolation and purification of the extracellular matrix of the mouse thymus. Specifically, based on the assumption that TECs (thymic epithelial cells) can favour the differentiation of pre-T lymphocytes, I developed a specific decellularization protocol to obtain the intact, DNA-free extracellular matrix of the adult mouse thymus. Two different protocols satisfied the main requirements for a decellularized matrix, according to qualitative and quantitative assays. In particular, the quantity of DNA was less than 10% in absolute value, no positive staining for cells was found and the 3D structure and composition of the ECM were maintained. In addition, I was able to show that the decellularized matrices were not cytotoxic for the cells themselves and were able to increase the expression of MHC II antigens compared with control cells grown in standard conditions. I also showed that TECs grow and proliferate for up to ten days on top of the decellularized matrix. After a complete characterization of the culture system, these innovative natural scaffolds could be used to improve the standard culture conditions of TECs, to study in vitro the action of different factors on their differentiation genes, and to test the ability of TECs to induce the in vitro maturation of seeded T lymphocytes.

Relevance: 30.00%

Abstract:

Microemulsions are thermodynamically stable, macroscopically homogeneous but microscopically heterogeneous mixtures of water and oil stabilised by surfactant molecules. They have unique properties such as ultralow interfacial tension, large interfacial area and the ability to solubilise other immiscible liquids. Depending on temperature and concentration, non-ionic surfactants self-assemble into micelles and into flat lamellar, hexagonal and sponge-like bicontinuous morphologies. Microemulsions exhibit three different macroscopic phases: (a) a 1-phase microemulsion (isotropic), (b) a 2-phase microemulsion coexisting with either expelled water or oil and (c) a 3-phase microemulsion coexisting with expelled water and oil.

One of the most important fundamental questions in this field is the relation between the properties of the surfactant monolayer at the water-oil interface and those of the microemulsion. This monolayer forms an extended interface whose local curvature determines the structure of the microemulsion. The main part of my thesis deals with quantitative measurements of the temperature-induced phase transitions of water-oil-nonionic-surfactant microemulsions and their interpretation using the temperature-dependent spontaneous curvature, c0(T), of the surfactant monolayer. In the 1-phase region, conservation of the components determines the droplet (domain) size R, whereas in the 2-phase region it is determined by the temperature dependence of c0(T). The Helfrich bending free energy density includes the dependence of the droplet size on c0(T); its standard form is recalled below.
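For reference, the standard Helfrich form that the truncated last sentence presumably refers to reads, with bending rigidity κ, saddle-splay modulus κ̄ and principal curvatures c1, c2:

\[
f_b \;=\; \frac{\kappa}{2}\bigl(c_1 + c_2 - 2c_0(T)\bigr)^2 + \bar{\kappa}\,c_1 c_2
\;\;\overset{c_1=c_2=1/R}{=}\;\;
2\kappa\left(\frac{1}{R} - c_0(T)\right)^{2} + \frac{\bar{\kappa}}{R^{2}},
\]

so that in the 2-phase region, where R is not fixed by component conservation, minimising the bending energy gives R(T) ≈ 1/c0(T), up to a Gaussian-curvature correction.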

Relevance: 30.00%

Abstract:

This thesis focuses on the design and characterization of a novel, artificial minimal model membrane system with chosen physical parameters, intended to mimic a nanoparticle uptake process driven exclusively by adhesion and by the softness of the bilayer. The realization is based on polymersomes composed of poly(dimethylsiloxane)-b-poly(2-methyloxazoline) (PDMS-b-PMOXA) and nanoscopic colloidal particles (polystyrene, silica), together with the use of powerful characterization techniques.

PDMS-b-PMOXA polymersomes with a radius Rh ~ 100 nm, a size polydispersity PD = 1.1 and a membrane thickness h = 16 nm were prepared using the film rehydration method. Owing to their suitable mechanical properties (a Young's modulus of ~17 MPa and a bending modulus of ~7·10⁻⁸ J), together with their long-term stability and modifiability, these polymersomes can be used as model membranes to study physical and physicochemical aspects of the transmembrane transport of nanoparticles. A combination of photon (PCS) and fluorescence (FCS) correlation spectroscopies provides the species selectivity necessary for a unique internalization study encompassing two main efforts.

As a proof of concept, the first effort focused on the interaction of nanoparticles (Rh,NP SiO2 = 14 nm, Rh,NP PS = 16 nm; cNP = 0.1 g L-1) and polymersomes (Rh,P = 112 nm; cP = 0.045 g L-1) of fixed size and concentration. Identification of a modified form factor of the polymersome entities, selectively seen in the PCS experiment, enabled precise monitoring and a quantitative description of the incorporation process. Combining PCS and FCS led to an estimate of the number of incorporated particles per polymersome (about 8 in the examined system) and to the development of an appropriate methodology for the kinetics and dynamics of the internalization process.

The second effort aimed at establishing the phenomenology necessary to facilitate comparison with theories. The size and concentration of the nanoparticles were chosen as the most important system variables (Rh,NP = 14-57 nm; cNP = 0.05-0.2 g L-1). It was revealed that the incorporation process can be controlled to a significant extent by changing the nanoparticle size and concentration. On average, 7 to 11 NPs with Rh,NP = 14 nm and 3 to 6 NPs with Rh,NP = 25 nm can be internalized into the present polymersomes by changing the initial nanoparticle concentration in the range 0.1-0.2 g L-1. Rapid internalization of the particles by the polymersomes is observed only above a critical threshold particle concentration, which depends on the nanoparticle size.

With regard to possible pathways for particle uptake, cryogenic transmission electron microscopy (cryo-TEM) revealed two different incorporation mechanisms depending on the size of the involved nanoparticles: cooperative incorporation of groups of nanoparticles, or incorporation of single nanoparticles. Conditions for nanoparticle uptake and for the controlled filling of polymersomes were presented.

In the framework of this thesis, the experimental observation of the transmembrane transport of spherical PS and SiO2 NPs into polymersomes via an internalization process was reported and examined quantitatively for the first time. In summary, the work performed within this thesis may have a significant impact on the development of cell model systems and thus on the improved understanding of transmembrane transport processes. The present experimental findings help create the missing phenomenology necessary for a detailed understanding of a phenomenon of great relevance in transmembrane transport. The fact that transmembrane transport of nanoparticles can be performed by an artificial model system without any additional stimuli has a fundamental impact on the understanding not only of the nanoparticle invagination process but also of the interaction of nanoparticles with biological as well as polymeric membranes.
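The hydrodynamic radii quoted above follow from the diffusion coefficients measured by PCS/FCS through the Stokes-Einstein relation; a minimal sketch is given below (the viscosity of water at 25 °C is assumed as a placeholder, and the example diffusion coefficient is illustrative).

```python
import numpy as np
from scipy.constants import Boltzmann as kB

def hydrodynamic_radius(D, T=298.15, eta=0.89e-3):
    """Stokes-Einstein: R_h = k_B T / (6 pi eta D), with D in m^2/s and
    eta in Pa*s (water at 25 C assumed as a placeholder)."""
    return kB * T / (6.0 * np.pi * eta * D)

# Example: D = 2.2e-12 m^2/s gives R_h of roughly 1e-7 m (~100 nm),
# the order of magnitude of the polymersomes studied here.
print(hydrodynamic_radius(2.2e-12))
```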

Relevance: 30.00%

Abstract:

Until a few years ago, 3D modelling was a topic confined to a professional environment. Nowadays, technological innovations, the 3D printer above all, have attracted novice users to this application field. This sudden breakthrough has not been supported by adequate software solutions: the 3D editing tools currently available do not assist the non-expert user during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which is largely built on two-dimensional input/output devices and strongly affected by obvious geometrical constraints. We identified three main phases that characterize the creation and management of 3D virtual models. We investigated these directions, evaluating and simplifying the classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling to create 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. In pursuing these goals, we asked how new gesture-based interaction technologies can be successfully employed in 3D modelling environments, how the depth perception and the interaction in 3D environments could be improved, and which operations could be developed to simplify the classical virtual model editing paradigm. Our main aim was to propose a set of solutions with which a common user can turn an idea into a 3D virtual model, drawing in the air just as he would on paper. Moreover, we tried to use gestures and mid-air movements to explore and interact with the 3D virtual environment, and we studied simple and effective 3D form transformations. The work was carried out adopting the discrete representation of the models, thanks to its intuitiveness, but especially because it is full of open challenges.

Relevance: 30.00%

Abstract:

At Airbus GmbH (Hamburg) a new design of the Rear Pressure Bulkhead (RPB) for the A320 family has been developed. The new model is formed with vacuum forming technology, during which the wrinkling phenomenon occurs. This thesis describes an analytical model for the prediction of wrinkling based on the energy method of Timoshenko. Large-deflection theory is used to analyse two case studies: a simply supported circular thin plate stamped by a spherical punch, and a simply supported circular thin plate formed with the vacuum forming technique. If the edges are free to displace radially, thin plates develop radial wrinkles near the edge at a central deflection approximately equal to four plate thicknesses (w0/h ≈ 4) when stamped by a spherical punch, and about three plate thicknesses (w0/h ≈ 3) when formed with the vacuum forming technique. Initially there are four symmetrical wrinkles, but their number increases as the central deflection is increased. Using experimental results, the snap-through phenomenon is also described.
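The onset criterion quoted in the abstract can be encoded in a trivial helper, given here only as an illustration; the thresholds are those stated above, while the function, its name and the example numbers are hypothetical.

```python
# Onset thresholds quoted in the abstract: w0/h ~ 4 (spherical punch),
# w0/h ~ 3 (vacuum forming). Everything else here is illustrative.
THRESHOLDS = {"punch": 4.0, "vacuum": 3.0}

def wrinkling_expected(w0, h, process):
    """Return (True/False, w0/h): True if the central deflection w0 exceeds
    the onset threshold (in plate thicknesses h) for the given process."""
    ratio = w0 / h
    return ratio >= THRESHOLDS[process], ratio

print(wrinkling_expected(w0=6.0, h=1.6, process="vacuum"))   # (True, 3.75)
print(wrinkling_expected(w0=5.0, h=1.6, process="punch"))    # (False, 3.125)
```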

Relevance: 30.00%

Abstract:

We propose a new and clinically oriented approach to perform atlas-based segmentation of brain tumor images. A mesh-free method is used to model tumor-induced soft tissue deformations in a healthy brain atlas image, with subsequent registration of the modified atlas to the pathologic patient image. The atlas is seeded with a tumor position prior, and tumor growth simulating the tumor mass effect is performed with the aim of improving the registration accuracy in the case of patients with space-occupying lesions. We perform tests on 2D axial slices of five different patient data sets and show that the approach gives good results for the segmentation of white matter, grey matter, cerebrospinal fluid and the tumor.
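As a hedged, strongly simplified 2D stand-in for the mass-effect step (not the mesh-free model used in the paper), the sketch below pushes atlas pixels radially away from a tumor seed with an exponentially decaying displacement, using backward mapping; the function name, displacement law and parameter values are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_mass_effect(atlas, seed, d_max=5.0, sigma=20.0):
    """Warp a 2D atlas slice by pushing tissue radially away from `seed`
    (row, col); displacement magnitude d_max * exp(-r / sigma) in pixels.
    Backward mapping: the output at x samples the atlas at x - u(x)."""
    rows, cols = np.indices(atlas.shape, dtype=float)
    dr, dc = rows - seed[0], cols - seed[1]
    r = np.hypot(dr, dc) + 1e-9
    mag = d_max * np.exp(-r / sigma)
    src_rows = rows - mag * dr / r
    src_cols = cols - mag * dc / r
    return map_coordinates(atlas, [src_rows, src_cols], order=1, mode="nearest")

# Toy example: a square "structure" in an empty slice gets pushed outward.
atlas = np.zeros((128, 128))
atlas[40:90, 40:90] = 1.0
warped = radial_mass_effect(atlas, seed=(64, 64))
print(atlas.sum(), warped.sum())
```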

Relevance: 30.00%

Abstract:

Regional citrate anticoagulation (RCA) during hemodialysis (HD) has several advantages over heparin anticoagulation, but calcium (Ca) derangements are a major concern, necessitating repeated monitoring of systemic ionized Ca (Ca2+). We developed a mathematical model of Ca and citrate (Ci) kinetics during RCA.

Relevance: 30.00%

Abstract:

We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model to simulate soft-tissue deformations caused by the tumor mass-effect. The tumor growth model, which is formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image, prior to the registration step. The method is non-parametric, simple and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively with promising results on eight datasets comprising simulated images and real patient data.
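As a generic illustration of Markov Random Field energy minimization (not the paper's mesh-free tumor growth formulation), the following sketch runs Iterated Conditional Modes on a small Potts model; the unary costs, grid size and parameters are synthetic.

```python
import numpy as np

def icm_potts(unary, beta=1.0, n_iter=5):
    """Iterated Conditional Modes for a Potts MRF.
    unary: array (H, W, L) of per-pixel label costs; beta weighs the
    penalty for disagreeing with the 4-neighbourhood."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)                 # start from the unary minimum
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                neigh = [labels[x, y] for x, y in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < H and 0 <= y < W]
                costs = unary[i, j].copy()
                for l in range(L):
                    costs[l] += beta * sum(l != n for n in neigh)
                labels[i, j] = costs.argmin()
    return labels

# Toy example: noisy two-label unary term, smoothed by the pairwise term.
rng = np.random.default_rng(0)
truth = np.zeros((20, 20), dtype=int)
truth[:, 10:] = 1
unary = np.stack([(truth == l) * -1.0 + 0.3 * rng.standard_normal((20, 20))
                  for l in (0, 1)], axis=2)
print((icm_potts(unary, beta=0.5) == truth).mean())   # close to 1.0
```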