10 results for "Non-isothermal method"

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 80.00%

Abstract:

In recent years, environmental concerns and the expected shortage of fossil reserves have driven further development of biomaterials. Among them, poly(lactide) (PLA) offers attractive properties such as good processability and excellent tensile strength and stiffness, comparable to those of some commercial petroleum-based polymers (PP, PS, PET, etc.). This biobased polymer is also biodegradable and biocompatible. However, one great disadvantage of commercial PLA is its slow crystallization rate, which restricts its use in many fields. The use of nanofillers is viewed as an efficient strategy to overcome this problem. In this thesis, the effect of bionanofillers on neat PLA and on blends of poly(L-lactide) (PLA) with poly(ε-caprolactone) (PCL) has been investigated. The nanofillers used are poly(L-lactide-co-ε-caprolactone) and poly(L-lactide-b-ε-caprolactone) grafted on cellulose nanowhiskers, and neat cellulose nanowhiskers (CNW). The grafting of poly(L-lactide-co-caprolactone) and poly(L-lactide-b-caprolactone) onto the nanocellulose was performed by the "grafting from" technique, in which the polymerization reaction is initiated directly on the substrate surface. The reaction conditions were chosen after a temperature and solvent screening. The effect of the bionanofillers on PLA and on the 80/20 PLA/PCL blend was evaluated by non-isothermal and isothermal DSC analysis. Non-isothermal DSC scans show a nucleating effect of the bionanofillers on PLA, detectable during PLA crystallization from the glassy state. The cold crystallization temperature is reduced upon addition of poly(L-lactide-b-caprolactone) grafted on cellulose nanowhiskers, which is the best-performing bionanofiller as a nucleating agent. On the other hand, isothermal DSC analysis of the overall crystallization rate indicates that cellulose nanowhiskers are the best nucleating agents during isothermal crystallization from the melt state.
In conclusion, the nanofillers behave differently depending on the processing conditions. However, the efficiency of our nanofillers as nucleating agents was clearly demonstrated under both isothermal and non-isothermal conditions.
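As context for the isothermal DSC results, the overall crystallization rate is commonly quantified with the Avrami model, X(t) = 1 − exp(−k·tⁿ). The abstract does not state the thesis's fitting procedure, so the sketch below only illustrates the model and the crystallization half-time, with purely hypothetical parameters k and n:

```python
import math

def avrami_fraction(t, k, n):
    """Relative crystallinity X(t) = 1 - exp(-k * t^n) (Avrami model)."""
    return 1.0 - math.exp(-k * t ** n)

def half_time(k, n):
    """Crystallization half-time: the time at which X(t) = 0.5."""
    return (math.log(2.0) / k) ** (1.0 / n)

# Hypothetical isothermal parameters: rate constant k (min^-n), Avrami exponent n
k, n = 0.05, 2.5
t_half = half_time(k, n)   # ~2.9 min for these illustrative values
```

A shorter half-time at a given crystallization temperature indicates a faster overall crystallization rate, which is how nucleating efficiency is typically compared between samples.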

Relevance: 80.00%

Abstract:

In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections by moving a detector and an X-ray tube around the object within a limited angular range. Reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem whose objective function contains a data-fitting term and a regularization term. The contribution of this thesis is to use compressed sensing techniques, in particular replacing the standard least-squares data-fitting problem with the minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM), and a version of the more standard scaled projected gradient algorithm (SGP) that handles the 1-norm. We performed experiments and analysed the performance of the two methods, comparing relative errors, numbers of iterations, computation times and the quality of the reconstructed images. In conclusion, we found that the 1-norm and Total Variation are valid tools in the formulation of the minimization problem for image reconstruction in Digital Tomosynthesis; the new ADM algorithm reached a relative error comparable to that of a version of the classic SGP algorithm, and proved better in speed and in the early appearance of the structures representing the masses.
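The objective described above, a 1-norm data-fitting term plus Total Variation regularization, can be written as min_x ||Ax − b||₁ + λ·TV(x). The thesis's ADM and SGP solvers are not reproduced here; as a minimal sketch, the code below minimizes this objective on a tiny 1D problem by plain subgradient descent, with a hypothetical operator A and data b:

```python
# Toy sketch of an l1 data-fitting term plus Total Variation regularization,
# solved by plain subgradient descent (NOT the thesis's ADM/SGP algorithms).

def sign(v):
    return (v > 0) - (v < 0)

def objective(A, b, x, lam):
    """||A x - b||_1 + lam * TV(x), with TV(x) = sum_i |x[i+1] - x[i]|."""
    res = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in range(len(b))]
    tv = sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))
    return sum(abs(r) for r in res) + lam * tv

def subgradient_step(A, b, x, lam, step):
    """One subgradient step on the l1 + TV objective."""
    m, n = len(b), len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    g = [sum(A[i][j] * sign(r[i]) for i in range(m)) for j in range(n)]
    for i in range(n - 1):               # subgradient of the TV term
        s = sign(x[i + 1] - x[i])
        g[i] -= lam * s
        g[i + 1] += lam * s
    return [x[j] - step * g[j] for j in range(n)]

# Tiny deterministic example: identity operator, piecewise-constant target
A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
b = [1.0, 1.0, 0.0, 0.0]
x = [0.0] * 4
best = objective(A, b, x, 0.1)
for _ in range(200):
    x = subgradient_step(A, b, x, 0.1, 0.1)
    best = min(best, objective(A, b, x, 0.1))
```

In the real tomosynthesis problem A is the (large, sparse) projection operator and TV is two-dimensional; the structure of the objective, however, is the same.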

Relevance: 80.00%

Abstract:

Poly(lactide) (PLA) is one of the best candidates to replace conventional petroleum-based polymers, since it is biobased, biocompatible and biodegradable. However, commercial PLA materials typically have a low crystallization rate, resulting in long processing times and low production efficiency. In this work the effects of two nanofillers, MMT30B and MMT30B-g-P(LA-co-CL), on the crystallization rate of neat PLA and of a PLA/PCL blend were investigated. MMT30B-g-P(LA-co-CL) was synthesized by an in situ grafting reaction, carried out in xylene at 140 °C following the results of a screening. The grafted copolymers were characterized by 1H-NMR, ATR-IR and TGA. Solvent-cast films were obtained by mixing MMT30B-g-P(LA-co-CL) at 5% (w/w) with neat PLA and with the PLA/PCL blend, comparing their properties with those of the corresponding blends with and without 5% (w/w) of unmodified clay. SEM images of the PLA-based blends show that MMT30B aggregates into larger particles than MMT30B-g-P(LA-co-CL), a behavior correlated with the better exfoliation of the MMT30B-g-P(LA-co-CL) clay layers. SEM images of the PLA/PCL-based blends exhibit the typical sea-island morphology characteristic of immiscible blends: PLA is the matrix, while PCL is finely dispersed in droplets. MMT30B does not reduce the PCL droplet size, while MMT30B-g-P(LA-co-CL) does, which means that MMT30B-g-P(LA-co-CL) can migrate to the PLA-PCL interface, acting as a compatibilizer. Non-isothermal DSC cooling scans show fractionated crystallization of the PCL phase in PLA/PCL/MMT30B-g-P(LA-co-CL), confirming the compatibilizing effect of MMT30B-g-P(LA-co-CL). At the same time, MMT30B-g-P(LA-co-CL) better nucleates the PLA phase, both in neat PLA and in the PLA/PCL blend, promoting crystallization during the heating scans. In isothermal conditions, both nanofillers increase the crystallization rate of the PLA phase in neat PLA, while in the PLA/PCL blends the effect is masked by the nucleating effect of PCL.

Relevance: 30.00%

Abstract:

The assessment of the safety of existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at studying the response of the elements of these infrastructures. This activity therefore focuses on the investigation of the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modeling and comparison with experimental results. These elements are characterized by shear behaviour and shear failure, whose modeling is a hard challenge from a computational point of view, due to the brittle behaviour combined with three-dimensional effects. The numerical modeling of the failure is studied through Sequentially Linear Analysis (SLA), an alternative finite element method with respect to traditional incremental and iterative approaches. The comparison between the two numerical techniques represents one of the first such works in a three-dimensional setting, and is carried out using one of the experimental tests executed on reinforced concrete slabs. The advantage of SLA is that it avoids the well-known convergence problems of typical non-linear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment on the whole structure. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modeling of the constraints and the sensitivity of the solution to the mesh density. This detailed analysis of the main parameters proved a strong influence of the tensile fracture energy, mesh density and chosen model on the solution, in terms of the force-displacement diagram, the distribution of the crack patterns and the shear failure mode.
SLA showed great potential, but it requires further development regarding two modeling aspects: load conditions (constant and proportional loads) and the softening behaviour of brittle materials (like concrete) in three dimensions, in order to widen its applicability to these new contexts of study.
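The event-by-event logic of SLA described above (solve a linear system, scale the load until one element reaches its strength, then reduce that element's stiffness and strength, and repeat) can be traced on a deliberately simple system. The sketch below assumes fully brittle springs in parallel rather than the thesis's 3D slab model, so a failed element loses its stiffness in a single event:

```python
def sla_parallel_springs(stiffness, strength):
    """Sequentially linear analysis of brittle springs in parallel:
    each cycle performs a linear solve, scales the load to the first
    failure, then removes (fully softens) the critical spring."""
    events = []                        # (critical load, displacement) per event
    alive = list(range(len(stiffness)))
    while alive:
        K = sum(stiffness[i] for i in alive)            # linear solve: u = P / K
        # displacement at which each surviving spring reaches its strength
        u_fail = {i: strength[i] / stiffness[i] for i in alive}
        crit = min(alive, key=lambda i: u_fail[i])      # first element to fail
        u = u_fail[crit]
        events.append((K * u, u))                       # load scaled to this event
        alive.remove(crit)                              # brittle: stiffness -> 0
    return events

# Two hypothetical springs: stiffnesses [2, 1], strengths [1, 2]
events = sla_parallel_springs([2.0, 1.0], [1.0, 2.0])   # [(1.5, 0.5), (2.0, 2.0)]
```

Plotting the event sequence gives the saw-tooth load-displacement curve typical of SLA, without any incremental-iterative convergence loop.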

Relevance: 30.00%

Abstract:

The seismic behaviour of one-storey asymmetric structures has been studied since the 1970s by a number of research works, which identified the coupled nature of the translational-torsional response of this class of systems, leading to severe displacement magnifications at the perimeter frames and therefore to significant increases of the local peak seismic demand on the structural elements with respect to equivalent non-eccentric systems (Kan and Chopra 1987). These studies identified the fundamental parameters (such as the fundamental period TL, the normalized eccentricity e and the torsional-to-lateral frequency ratio Ωθ) governing the torsional behaviour of in-plan asymmetric structures, and its trends. It has been clearly recognized that asymmetric structures characterized by Ωθ > 1, referred to as torsionally stiff systems, behave quite differently from structures with Ωθ < 1, referred to as torsionally flexible systems. Previous research works by some of the authors proposed a simple closed-form estimate of the maximum torsional response of one-storey elastic systems (Trombetti et al. 2005, Palermo et al. 2010), leading to the so-called "Alpha method" for the evaluation of the displacement magnification factors at the corner sides. The present paper provides an upgrade of the Alpha method that removes the assumption of linear elastic response of the system. The main objective is to evaluate how the excursion of the structural elements into the inelastic field (due to reaching the yield strength) affects the displacement demand of one-storey in-plan asymmetric structures. The system proposed by Chopra and Goel in 2007, which is claimed to capture the main features of the non-linear response of in-plan asymmetric systems, is used to perform a large parametric analysis varying all the fundamental parameters of the system, including the inelastic demand, by varying the force reduction factor from 2 to 5.
Magnification factors for the different force reduction factors are proposed, and comparisons with the results obtained from linear analyses are provided.

Relevance: 30.00%

Abstract:

In the last decade the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load-carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. The technique consists of inserting carbon fibre reinforced polymer laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically an epoxy adhesive or a cement grout. Up to now, research efforts have mainly focused on structural aspects such as bond behaviour, flexural and/or shear strengthening effectiveness, and the energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the most widespread adhesives used to bond the reinforcement to the concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resins, for which proper curing conditions must be assured. Therefore, non-destructive methods that allow monitoring the curing process of the epoxy resins in NSM CFRP systems are desirable, in view of obtaining continuous information about the effectiveness of curing and the expected bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for the continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems.
This objective is pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method), originally developed for monitoring the early stiffness evolution of cement-based materials. The experimental program was composed of two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips, conducted to assess the evolution of the bond behaviour between CFRP and concrete from early ages; and (ii) EMM-ARM tests, carried out to monitor the progressive stiffness development of the structural adhesive used in CFRP applications. To verify the capability of the proposed method for evaluating the elastic modulus of the epoxy, the static E-modulus was also determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the possibility of implementing a new method for the continuous monitoring and quality control of NSM CFRP applications.
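EMM-ARM infers the evolving elastic modulus from the resonance frequency of a composite beam under ambient vibration. The actual EMM-ARM identification is not given in the abstract; as a rough illustration of the underlying idea only, the sketch below back-calculates E from the first bending frequency of an idealized simply supported Euler-Bernoulli beam, with hypothetical geometry and mass values:

```python
import math

def modulus_from_frequency(f1, length, mu, inertia):
    """Back-calculate E from the first bending frequency of an idealized
    simply supported Euler-Bernoulli beam:
        f1 = (pi / (2 L^2)) * sqrt(E I / mu)
    f1: first natural frequency [Hz], length: span L [m],
    mu: mass per unit length [kg/m], inertia: second moment of area I [m^4]."""
    return (2.0 * f1 * length ** 2 / math.pi) ** 2 * mu / inertia

# Purely illustrative (hypothetical) numbers for a small resin-filled mould
E = modulus_from_frequency(f1=120.0, length=0.45, mu=0.35, inertia=8.0e-10)
```

Tracking f1 continuously as the adhesive cures, and repeating this back-calculation, is the essence of the stiffness-monitoring idea; the real method additionally accounts for the mould's own stiffness and the actual boundary conditions.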

Relevance: 30.00%

Abstract:

Today we know that ordinary matter represents only a small fraction of the total mass content of the Universe. The hypothesis of the existence of Dark Matter, a new kind of matter that interacts only gravitationally and, possibly, through the weak force, has been supported by numerous pieces of evidence at both galactic and cosmological scales. Efforts devoted to the search for so-called WIMPs (Weakly Interacting Massive Particles), the generic name given to Dark Matter particles, have multiplied in recent years. The XENON1T experiment, currently under construction at the Laboratori Nazionali del Gran Sasso (LNGS) and expected to start taking data by the end of 2015, will mark a significant step forward in the direct search for Dark Matter, which is based on the detection of elastic collisions on target nuclei. XENON1T is the current phase of the XENON project, which has already built the XENON10 (2005) and XENON100 (2008, still running) experiments and also foresees a further upgrade called XENONnT. The XENON1T detector uses about 3 tonnes of liquid xenon (LXe) and is based on a dual-phase Time Projection Chamber (TPC). Detailed Monte Carlo simulations of the detector geometry, together with dedicated measurements of the material radioactivity and estimates of the purity of the xenon used, allowed the expected background to be predicted accurately. In this thesis we present a study of the expected sensitivity of XENON1T, carried out with the statistical method known as the Profile Likelihood (PL) Ratio, which, within a frequentist approach, allows a proper treatment of systematic uncertainties. As a first step, the sensitivity was estimated with the simplified Likelihood Ratio method, which does not account for any systematics.
In this way the impact of the main systematic uncertainty for XENON1T, namely that on the scintillation-light yield of xenon for low-energy nuclear recoils, could be assessed. The final results obtained with the PL method indicate that XENON1T will be able to significantly improve the current WIMP exclusion limits; the maximum sensitivity reaches a cross section σ = 1.2·10⁻⁴⁷ cm² for a WIMP mass of 50 GeV/c² and a nominal exposure of 2 tonne·years. These results are in line with the ambitious goal of XENON1T to lower the current limits on the WIMP cross section σ by two orders of magnitude. With such performance, and considering 1 tonne of LXe as fiducial mass, XENON1T will be able to surpass the current limits (LUX experiment, 2013) after only 5 days of data taking.
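The simplified likelihood ratio mentioned above can be illustrated on a single-bin Poisson counting experiment with a known expected background (no systematic uncertainties, hence no profiling of nuisance parameters). This is a toy sketch, not the XENON1T analysis code, and all numbers are hypothetical:

```python
import math

def neg2_log_lr(n, s, b):
    """q(s) = -2 ln[ L(n | s+b) / L(n | s_hat+b) ] for a single-bin Poisson
    counting experiment with known expected background b; s_hat = max(0, n - b)
    is the unconstrained (non-negative) best-fit signal."""
    def logL(mu):
        # Poisson log-likelihood for observed count n, constants dropped
        return n * math.log(mu) - mu
    s_hat = max(0.0, n - b)
    return -2.0 * (logL(s + b) - logL(s_hat + b))

# Hypothetical counts: 5 observed events, 2 expected background events;
# the test statistic grows as the hypothesized signal s moves away from the fit.
q_fit = neg2_log_lr(n=5, s=3.0, b=2.0)    # best-fit signal: q = 0
q_big = neg2_log_lr(n=5, s=10.0, b=2.0)   # disfavoured signal: q > 0
```

In the full Profile Likelihood Ratio method, nuisance parameters describing the systematics are additionally maximized ("profiled") in both numerator and denominator before forming the same ratio.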

Relevance: 30.00%

Abstract:

Radiotherapy is a widely used technique for cancer treatment. Currently, delivery mainly relies on intensity modulated radiotherapy (IMRT, the superposition of intensity-modulated fields), a recent development of which is volumetric modulated arc therapy (VMAT, continuous irradiation along an uninterrupted arc). Plan generation requires experience and skill: a dosimetrist selects cost functions and objectives, and a treatment planning system (TPS) optimizes the arrangement of the intensity-modulated segments. If the physician judges the result unsatisfactory, the process starts over (trial and error). An alternative is automatic plan generation. Erasmus-iCycle, software developed at ErasmusMC (Rotterdam, The Netherlands), is a multi-criteria optimization algorithm for radiotherapy plans that performs intensity optimization based on a wish list. Its output consists of Pareto-optimal intensity-modulated plans. Automatic generation guarantees greater consistency and higher quality with reduced working time. In this study, an automatic VMAT plan generation procedure was developed and evaluated for lung carcinoma. A wish list was built through an iterative procedure on a small group of patients, in collaboration with medical physicists and oncologists, and then validated on a larger group of patients. In the vast majority of cases, the automatic plans were judged by the oncologists to be better than the corresponding manually generated clinical IMRT plans. Only in a few cases was a quick patient-specific manual tuning necessary to satisfy all clinical requirements. For a subgroup of patients it was shown that the quality of the automatic VMAT plans was equivalent or superior to that of VMAT plans generated manually by an experienced dosimetrist.
Overall, the possibility of automatically generating high-quality VMAT radiotherapy plans with minimal human interaction was demonstrated. Clinical introduction of the automatic procedure at ErasmusMC has begun (October 2015).

Relevance: 30.00%

Abstract:

Mean cardiac output (CO) is an essential parameter for the proper management and monitoring of patients during their stay in the intensive care unit. This work starts from the article by Theodore G. Papaioannou, Orestis Vardoulis, and Nikos Stergiopulos entitled "The 'systolic volume balance' method for the non-invasive estimation of cardiac output based on pressure wave analysis", published in the American Journal of Physiology-Heart and Circulatory Physiology in March 2012. That article proposes a potentially non-invasive method for monitoring mean cardiac output, based on physical and hemodynamic principles, which uses pressure waveform analysis together with a non-invasive calibration method and is ultimately expressed by the equation Qsvb = (C·PPao)/(T − (Psm,aorta·ts)/Pm). The authors validated this formula, with good results, only on a distributed model of the systemic circulation; it has not yet been validated in vivo. The goal of this work is a critical analysis of this formula for the estimation of the mean cardiac output Qsvb. The formula proposed in the article is verified for the case in which the systemic circulation is approximated by windkessel-type models. The study shows that the formula yields results with negligible errors only if the systemic circulation is approximated by the classic two-element windkessel model (WK2) and the aortic flow by a rectangular wave. Approximating the systemic circulation with the three-element windkessel model (WK3), or describing the aortic flow with a triangular wave, produces errors that are no longer negligible, ranging from 7-9% for WK2 with a triangular aortic flow wave to errors larger than 20% for WK3 with either aortic flow approximation.
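The Qsvb formula quoted above can be evaluated directly once its inputs are known. In the sketch below the symbols are read as arterial compliance C, aortic pulse pressure PPao, heart period T, mean systolic aortic pressure Psm,aorta, systole duration ts and mean pressure Pm; the numerical values are purely illustrative, not taken from the thesis:

```python
def cardiac_output_svb(C, PP_ao, T, Ps_mean, t_s, P_mean):
    """Systolic-volume-balance estimate of mean cardiac output:
        Q_svb = (C * PP_ao) / (T - Ps_mean * t_s / P_mean)
    C: arterial compliance [mL/mmHg], PP_ao: aortic pulse pressure [mmHg],
    T: heart period [s], Ps_mean: mean systolic aortic pressure [mmHg],
    t_s: systole duration [s], P_mean: mean arterial pressure [mmHg].
    Returns flow in mL/s (multiply by 0.06 for L/min)."""
    return (C * PP_ao) / (T - Ps_mean * t_s / P_mean)

# Hypothetical hemodynamic values for illustration only
Q = cardiac_output_svb(C=1.2, PP_ao=40.0, T=0.8, Ps_mean=110.0, t_s=0.3, P_mean=93.0)
Q_lpm = Q * 0.06   # on the order of a physiological 5-7 L/min for these inputs
```

The sensitivity analysis in the thesis amounts to feeding this formula with pressures generated by WK2 or WK3 models under rectangular or triangular aortic flow waves and comparing Qsvb with the true imposed flow.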

Relevance: 30.00%

Abstract:

A comparison of the main design methods for unpaved roads is presented in this paper. An unpaved road consists of an unbound aggregate base course lying on a usually weak subgrade. A geosynthetic may be placed between the two, with reinforcing and separating functions. The goal of a design method is to find the appropriate thickness of the base course, knowing at least the traffic volume, wheel load, tire pressure, undrained cohesion of the subgrade, allowable rut depth and influence of the reinforcement. Geosynthetics can reduce the thickness or the quality of the aggregate required and improve the durability of an unpaved road. Geotextiles help save aggregate through interface friction and separation, while geogrids do so through interlocking between their apertures and the stones of the base course. In the last chapter a case study is discussed, and design thicknesses are calculated with two design methods for the three possible cases (unreinforced, geotextile-reinforced, geogrid-reinforced).