951 results for Kuhn-Tucker type necessary optimality conditions
Abstract:
Overweight and obesity in youth are a worldwide public health problem. Overweight and obesity in childhood and adolescence have a substantial effect on many organ systems, resulting in clinical conditions such as metabolic syndrome, early atherosclerosis, dyslipidemia, hypertension and type 2 diabetes (T2D). Obesity and the type of body fat distribution are still the core aspects of insulin resistance and seem to be the pathophysiologic links common to metabolic syndrome, cardiovascular disease and T2D. The earlier the clustering of risk factors appears and the longer the time of exposure, the greater the chance of developing coronary disease with a more severe endpoint. The age at which the event may occur seems to be related to the presence and aggregation of risk factors throughout life.
Abstract:
Background Human T-cell lymphotropic virus type 1 (HTLV-1) is the etiologic agent of adult T-cell leukemia/lymphoma (ATLL), HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP), infective dermatitis associated with HTLV-1 (IDH), and various other clinical conditions. Several of these diseases can occur in association. Objective To report an association of diseases related to HTLV-1 infection occurring in an unusual age group. Methods Dermatological and laboratory exams were performed consecutively in HTLV-1-infected individuals from January 2008 to July 2010 in the HTLV Outpatient Clinic at the Institute of Infectious Diseases “Emilio Ribas” in São Paulo, Brazil. Results A total of 193 individuals (73 HAM/TSP and 120 asymptomatic carriers) were evaluated, three of whom presented an association of adult-onset IDH and HAM/TSP. In all three cases, the patients were affected by IDH after the development and progression of HAM/TSP-associated symptoms. Limitations The number of cases is small because of the rarity of these diseases. Conclusion We draw attention to the possibility of co-presentation of adult-onset IDH in patients with a previous diagnosis of HAM/TSP, although IDH is a disease classically described in children. Thus, dermatologists should be aware of these diagnoses in areas endemic for HTLV-1 infection.
Abstract:
Niemann-Pick disease type C (NP-C) is a rare, progressive, irreversible disease leading to disabling neurological manifestations and premature death. The estimated disease incidence is 1:120,000 live births, but this likely represents an underestimate, as the disease may be under-diagnosed due to its highly heterogeneous presentation. NP-C is characterised by visceral, neurological and psychiatric manifestations that are not specific to the disease and that can be found in other conditions. The aim of this review is to provide non-specialists with an expert-based, detailed description of NP-C signs and symptoms, including how they present in patients and how they can be assessed. Early disease detection should rely on seeking a combination of signs and symptoms, rather than isolated findings. Examples of combinations which are strongly suggestive of NP-C include: splenomegaly and vertical supranuclear gaze palsy (VSGP); splenomegaly and clumsiness; splenomegaly and schizophrenia-like psychosis; psychotic symptoms and cognitive decline; and ataxia with dystonia, dysarthria/dysphagia and cognitive decline. VSGP is a hallmark of NP-C and becomes highly specific to the disease when it occurs in combination with other manifestations (e.g. splenomegaly, ataxia). In young infants (<2 years), abnormal saccades may first manifest as slowing and shortening of upward saccades, long before gaze palsy onset. While visceral manifestations tend to predominate during the perinatal and infantile period (2 months–6 years of age), neurological and psychiatric involvement is more prominent during the juvenile/adult period (>6 years of age). Psychosis in NP-C is atypical and variably responsive to treatment. Progressive cognitive decline, which always occurs in patients with NP-C, manifests as memory and executive impairment in juvenile/adult patients. Disease prognosis mainly correlates with the age at onset of the neurological signs, with early-onset forms progressing faster. Therefore, a detailed and descriptive picture of NP-C signs and symptoms may help improve disease detection and early diagnosis, so that therapy with miglustat (Zavesca®), the only available treatment approved to date, can be started as soon as neurological symptoms appear, in order to slow disease progression.
Abstract:
In the present study we compared the effects of leucine supplementation and of its metabolite β-hydroxy-β-methyl butyrate (HMB) on the ubiquitin-proteasome system and the PI3K/Akt pathway during two distinct atrophic conditions, hindlimb immobilization and dexamethasone treatment. Leucine supplementation was able to minimize the reduction in rat soleus mass driven by immobilization. On the other hand, leucine supplementation was unable to provide protection against soleus mass loss in dexamethasone-treated rats. Interestingly, HMB supplementation was unable to provide protection against mass loss under either condition. While only type I fiber cross-sectional area (CSA) was protected in the immobilized soleus of leucine-supplemented rats, none of the fiber types were protected by leucine supplementation in rats under dexamethasone treatment. In addition, and in line with the muscle mass results, HMB treatment did not attenuate the CSA decrease in any fiber type under either immobilization or dexamethasone treatment. While leucine supplementation was able to minimize the increased expression of both Mafbx/Atrogin and MuRF1 in immobilized rats, leucine was only able to minimize Mafbx/Atrogin expression in dexamethasone-treated rats. In contrast, HMB was unable to restrain the increase of those atrogenes in immobilized rats, whereas in dexamethasone-treated rats HMB minimized the increased expression of Mafbx/Atrogin. The amount of ubiquitinated proteins, as expected, was increased in immobilized and dexamethasone-treated rats, and only leucine was able to block this increase, and only in immobilized rats. Leucine supplementation maintained soleus tetanic peak force in immobilized rats at the normal level. On the other hand, HMB treatment failed to maintain tetanic peak force regardless of the atrophic condition. The present data suggest that the anti-atrophic effects of leucine are not mediated by its metabolite HMB.
Abstract:
Aerosol particles are likely to be important contributors to our future climate. Furthermore, in recent years the effects on human health arising from emissions of particulate material have gained increasing attention. In order to quantify the effect of aerosols on both climate and human health, we need to better quantify the interplay between sources and sinks of aerosol particle number and mass on large spatial scales. So far, long-term regional observations of aerosol properties have been scarce, although they are argued to be necessary to advance our knowledge of the regional and global distribution of aerosols. In this context, regional studies of aerosol properties and aerosol dynamics are truly important areas of investigation. This thesis is devoted to investigations of aerosol number size distributions observed over the course of one year at five stations covering an area from southern Sweden to northern Finland. The thesis aims to describe aerosol size distribution dynamics from both a quantitative and a qualitative point of view, focusing on the properties of, and changes in, the aerosol size distribution as a function of location, season, source area, transport pathways and links to various meteorological conditions. The investigations performed in this thesis show that, although the basic behaviour of the aerosol number size distribution in terms of seasonal and diurnal characteristics is similar at all stations in the measurement network, the aerosol over the Nordic countries is characterised by a typically sharp gradient in aerosol number and mass. This gradient is argued to derive from the geographical locations of the stations relative to the dominant sources and transport pathways. It is clear that the source area significantly determines the aerosol size distribution properties, but it is also evident that transport conditions, in terms of the frequency of precipitation and cloudiness, in some cases control the evolution of the number size distribution even more strongly. Aerosol dynamic processes during clear-sky transport are, however, argued to be similarly important. Southerly transport of marine air and northerly transport of air from continental sources are studied in detail under clear-sky conditions by performing a pseudo-Lagrangian box-model evaluation of the two type cases. Results from both modelling and observations suggest that nucleation events contribute to an increase in integral number during southerly transport of comparably clean marine air, while number depletion dominates the evolution of the size distribution during northerly transport. This difference is largely explained by the different concentrations of pre-existing aerosol surface area associated with the two type cases. Mass is found to accumulate in many of the individual transport cases studied. This mass increase is argued to be controlled by the emission of organic compounds from the boreal forest, which puts the boreal forest in a central position for estimates of aerosol forcing on a regional scale.
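As a schematic illustration of the source-sink balance discussed above (a minimal sketch, not a formula taken from the thesis), the evolution of the total particle number concentration N in a pseudo-Lagrangian box can be written as a competition between a nucleation source and coagulation and deposition sinks:

\[
\frac{dN}{dt} = J_{\text{nuc}} - K_{\text{coag}}\,N^{2} - k_{\text{dep}}\,N
\]

where J_nuc is the new-particle formation rate, K_coag an effective coagulation coefficient and k_dep a deposition/scavenging rate; whether the first term or the last two dominate decides whether a transport case shows number increase or number depletion, which is the distinction drawn between the southerly and northerly cases above.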
Abstract:
Primary stability of stems in cementless total hip replacements is recognized to play a critical role in long-term survival and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue, and different approaches have been explored to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies, but they do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been published describing further results obtained with that device. In this scenario, it was believed that an instrument able to measure intra-operatively the stability achieved by an implanted stem would consistently improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant. It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables connecting the device to the stem; and the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants, and repeatability tests showed that it was reliable. A calibration procedure was then performed to convert the angular readout into a linear displacement measurement, which is clinically relevant information and is simple for the surgeon to read in real time. The second study reported in my thesis concerns the evaluation of the possibility of obtaining predictive information on the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check, prior to implantation, whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from the previously validated device, adapted for this specific purpose, and is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. A high correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro, as in a closed loop, verification was performed on two hip patients, confirming the results obtained in-vitro and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community but was already in use in other scientific fields: the vibration analysis technique. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluate the extent of fixation of dental implants has been explored, even if its validity in this field is still under discussion. Several studies have been published recently on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on the vibration analysis technique to measure the extent of implant stability intra-operatively. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to devices based on direct micromotion measurement. The prototype developed consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-based device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality and different femur anatomies were tested, and several levels of press-fitting were considered. The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging distinction for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the acquisitions made with the vibration-based tool to two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed a high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and has become fairly popular in some countries: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacements, its effect on the outcome of resurfacing implants had not yet been investigated in-vitro, but only clinically. The work was therefore aimed at verifying whether it was possible to apply one of the intra-operative devices just validated to the measurement of micromotion in resurfacing implants.
To do that, a preliminary study was performed to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure the micromotions of resurfacing implants. This included a set of in-vitro loading scenarios covering the range of directions spanned by the hip resultant force in the most typical motor tasks. The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromovement associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. In conclusion, four intra-operative tools were developed and fully validated during these three years of research activity. Use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of the stability of the stem, and could thus be useful to the surgeon when checking whether the pre-operative planning was correct. The device based on the vibration technique showed good accuracy and small dimensions, and thus has great potential to become an instrument appreciated by the surgeon; it still needs a clinical evaluation and must be industrialized as well. The in-vitro tool worked very well and can be applied to assess resurfacing implants pre-clinically.
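As an illustrative sketch of the vibration-based indicator described above, the shift in resonance frequency of the stem-bone system could be estimated from two accelerometer recordings, for example as below (a minimal NumPy sketch under assumed signal names and sampling rate, not the acquisition software developed in the thesis):

```python
import numpy as np

def resonance_frequency(signal, fs):
    """Frequency (Hz) of the dominant spectral peak of a windowed response signal."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def resonance_shift(response_before, response_after, fs):
    """Shift in resonance frequency (Hz) between two press-fitting stages."""
    return resonance_frequency(response_after, fs) - resonance_frequency(response_before, fs)

# Synthetic example: a stiffer (more stable) interface raises the dominant frequency.
fs = 10_000.0                        # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
loose  = np.sin(2 * np.pi * 800 * t) + 0.1 * np.random.randn(t.size)
seated = np.sin(2 * np.pi * 950 * t) + 0.1 * np.random.randn(t.size)
print(f"resonance shift: {resonance_shift(loose, seated, fs):.1f} Hz")
```

A larger upward shift between two press-fitting stages indicates a stiffer bone-implant interface, which is the trend correlated with residual micromotion in the study summarized above.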
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with a high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems that takes this trade-off into account. The first scenario considers a network with a high number of nodes, deployed over a given geographical area without detailed planning, which have to transmit data towards a coordinator node, named the sink, assumed to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a distant receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle), and that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals coherently in order to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by all nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the need for simple nodes (to this aim, the computational complexity is moved to the receiver onboard the UAV), and the importance of guaranteeing a high level of energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performances are evaluated taking the energy efficiency of the network into consideration. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter that allows a local binary decision on the presence or absence of the target (fire) to be taken.
The second function is that each node receives the decision made by the previous node of the chain, compares it with the decision deriving from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was implemented and tested in a six-month on-field experimentation.
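As a minimal sketch of the chain-detection idea (the fusion rules actually derived in the thesis are not reproduced here; the OR-style threshold rule and the numbers below are assumptions for illustration only), each node combines the decision received from the previous node with its own local temperature observation and forwards a single bit to the next node:

```python
from dataclasses import dataclass

@dataclass
class ChainNode:
    threshold_c: float = 60.0  # hypothetical local temperature threshold (degrees C)

    def local_decision(self, temperature_c: float) -> int:
        """1 = fire present, 0 = fire absent, based only on the local sensor reading."""
        return int(temperature_c >= self.threshold_c)

    def fuse(self, previous_decision: int, temperature_c: float) -> int:
        """OR-style fusion of the upstream decision with the local one
        (one possible rule, chosen here for illustration)."""
        return max(previous_decision, self.local_decision(temperature_c))

# A short chain: the single decision bit propagates node by node towards the sink.
readings = [25.0, 31.0, 72.0, 40.0]   # temperatures observed by four consecutive nodes
decision = 0                          # decision entering the first node
for temperature in readings:
    decision = ChainNode().fuse(decision, temperature)
print("decision delivered to the sink:", decision)   # -> 1 (fire detected)
```

Forwarding only a single decision bit is what keeps the sensor-to-sensor throughput low, while the choice of fusion rule governs the probability of error at the last stage of the chain.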
Abstract:
If the historian's job is to understand the past as it was understood by the people who lived it, then it is perhaps not far-fetched to think that it is also necessary to communicate the results of research with the tools that belong to an era and that shape the mentality of those who live in it. Emerging technologies, especially in the area of multimedia such as virtual reality, allow historians to communicate the experience of the past through more of the senses. How does history collaborate with information technologies, in particular regarding the possibility of making virtual historical reconstructions, with related examples and reviews? What most concerns historians is whether a reconstruction of a past event, experienced through its recreation in pixels, is a method of historical knowledge that can be considered valid. In other words, is the emotion that navigating a 3D environment can arouse a means capable of transmitting knowledge? Or is the idea we have of the past and of its study subtly changed the moment it is disseminated through 3D graphics? For some time, however, the discipline has begun to come to terms with this situation, forced above all by the invasiveness of this type of media, by the spectacularization of the past and by a partial, unscientific popularization of the past. In a post-literary world we must begin to accept that the visual culture in which we are immersed is changing our relationship with the past: this does not make the knowledge gained so far false, but it is necessary to recognize that there is more than one historical truth, sometimes written, sometimes visual. The computer has become a ubiquitous platform for the representation and dissemination of information, and methods of interaction and representation are constantly evolving; it is along these two tracks that the offer of information technology at the service of history moves. The purpose of this thesis is precisely to explore, through the use and experimentation of different tools and information technologies, how the past can be narrated effectively through three-dimensional objects and virtual environments, and how, as characterizing elements of communication, they can collaborate, in this particular case, with the discipline of history. This research reconstructs some lines of the history of the main factories active in Turin during the Second World War. Recalling the close relationship that exists between structures and individuals, and in this city in particular between the factory and the workers' movement, it is inevitable to delve into the events of the Turin workers' movement, which during the Liberation struggle was a political and social actor of primary importance in the city. In the city, understood as a biological entity involved in the war, the factory (or the factories) becomes the conceptual nucleus through which to read the city: the factories were the main targets of the bombings, and it is in the factories that a war of liberation was fought between the working class and the factory and city authorities. The factory becomes the place of the "usurpation of power" of which Weber speaks, the stage on which the various episodes of the war take place: strikes, deportations, occupations...
The model of the city represented here is not a simple visualization but an information system in which the modelled reality is represented by objects that serve as the theatre for events with a precise chronological placement; within it, it is possible to select static renders (images), pre-computed films (animations) and interactively navigable scenes, as well as to search bibliographic sources and scholars' comments specifically linked to the event in question. The objective of this work is to make the historical disciplines and computer science interact, through different projects, across the various technological opportunities the latter offers. The reconstruction possibilities offered by 3D are thus placed at the service of research, offering an integral vision capable of bringing us closer to the reality of the period under consideration and conveying all the results onto a single presentation platform. Dissemination: the "Mappa Informativa Multimediale Torino 1945" project. On a practical level, the project provides a navigable interface (Flash technology) representing the map of the city at the time, through which it is possible to gain a vision of the places and times in which the Liberation took shape, both on a conceptual and on a practical level. This intertwining of coordinates in space and time not only improves the understanding of the phenomena, but also creates greater interest in the subject through the use of highly effective (and appealing) popularization tools, without losing sight of the need to validate the historical theses, positioning itself as a teaching platform. Such a context requires an in-depth study of the historical events in order to reconstruct clearly a map of the city that is precise both topographically and at the level of multimedia navigation. The preparation of the map has to follow current standards, so the software solutions used are those provided by Adobe Illustrator for the realization of the topography and by Macromedia Flash for the creation of a navigation interface. The underlying descriptive data can of course be consulted, being contained in the media support and fully referenced in the bibliography. It is the continuous evolution of information technologies and the massive spread of computer use that is leading to a substantial change in historical study and learning; academic institutions and economic operators have taken up the demand coming from users (teachers, students, cultural heritage professionals) for a wider diffusion of historical knowledge through its computerized representation. On the educational side, the reconstruction of a historical reality through computer tools also allows non-historians to experience first-hand the problems of research, such as missing sources, gaps in the chronology and the assessment of the veracity of facts through evidence. Information technologies allow a complete, unified and exhaustive vision of the past, conveying all the information onto a single platform and allowing even non-specialists to understand immediately what is being discussed. The best history book, by its nature, cannot do this, because it divides and organizes the information differently. In this way students are given the opportunity to learn through a representation different from those they are used to.
The central premise of the project is that students' learning outcomes can be improved if a concept or a piece of content is communicated through multiple channels of expression, in our case through text, images and a multimedia object. Teaching: the Conceria Fiorio is one of the symbolic places of the Turin Resistance, and the project is a virtual reality reconstruction of the Conceria Fiorio tannery in Turin. The reconstruction serves to enrich historical culture both for those who produce it, through careful research of the sources, and for those who then use it, especially young people, who, attracted by the playful aspect of the reconstruction, learn more easily. Building a 3D artefact gives students the basis for recognizing and expressing the correct relationship between the model and the historical object. The phases of work leading to the 3D reconstruction of the tannery were: an in-depth historical research, based on sources, which may be archival documents or archaeological excavations, iconographic and cartographic sources, etc.; the modelling of the buildings on the basis of the historical research, to provide the polygonal geometric structure that allows three-dimensional navigation; and the realization, through computer graphics tools, of the 3D navigation. Unreal Technology is the name given to the graphics engine used in numerous commercial video games. One of the fundamental features of this product is a tool called Unreal editor, with which it is possible to build virtual worlds, and this is what was used for this project. UnrealEd (UEd) is the software for creating levels for Unreal and for games based on the Unreal engine; the free version of the editor was used. The final result of the project is a navigable virtual environment depicting an accurate reconstruction of the Conceria Fiorio at the time of the Resistance. The user can visit the building and view specific information about some points of interest. Navigation is in first person; a process of "spectacularization" of the visited environments through appropriate furnishings gives the user greater immersion, making the environment more credible and immediately readable. The Unreal Technology architecture made it possible to obtain a good result in a very short time, without any programming work being necessary. This engine is therefore particularly suitable for the rapid creation of prototypes of decent quality, although the presence of a certain number of bugs makes it partly unreliable. Using a video game editor for this reconstruction points towards the possibility of its use in teaching: what 3D simulations allow in this specific case is to let students experience the work of historical reconstruction, with all the problems the historian must face in recreating the past. This work aims to be, for historians, a step towards the creation of a wider expressive repertoire that includes three-dimensional environments. The risk of spending time learning how this technology for generating virtual spaces works makes those involved in teaching sceptical, but the experience of projects developed elsewhere, especially abroad, helps to show that it is a good investment.
The fact that a software house which creates a highly successful video game includes in its product a set of tools allowing users to create their own worlds to play in is symptomatic of the fact that the computer literacy of average users is growing ever more rapidly, and that the use of an editor such as the Unreal Engine will in the future be an activity within the reach of an ever wider public. This puts us in a position to design more immersive teaching modules, in which the experience of researching and reconstructing the past is interwoven with the more traditional study of the events of a given period. Interactive virtual worlds are often described as the key cultural form of the twenty-first century, as cinema was for the twentieth. The purpose of this work has been to suggest that there are great opportunities for historians in employing 3D objects and environments, and that they must seize them. Consider the fact that aesthetics has an effect on epistemology, or at least on the form that the results of historical research take when they have to be disseminated. A historical analysis carried out superficially or with incorrect premises can still be disseminated and gain credit in many circles if it is spread with attractive, modern means. This is why it is not worth burying a good piece of work in some library, waiting for someone to discover it; this is why historians must not ignore 3D. Our ability, as scholars and students, to perceive important ideas and trends often depends on the methods we use to represent the data and the evidence. For historians to obtain the benefit that 3D brings with it, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. A historical reconstruction can be very useful from an educational point of view not only for those who visit it but also for those who build it: the research phase necessary for the reconstruction can only increase the cultural background of the developer. Conclusions: the most important thing has been the opportunity to gain experience in using media of this kind to narrate and make the past known. Reversing the cognitive paradigm I had learned in my humanities studies, I tried to derive what we might call "universal laws" from the objective data that emerged from these experiments. From an epistemological point of view, computer science, with its ability to handle impressive masses of data, gives scholars the possibility of formulating hypotheses and then confirming or refuting them through reconstructions and simulations. My work has gone in this direction, seeking to learn and use current tools that in the future will have an ever greater presence in communication (including scientific communication) and that are the communication media of choice for certain age groups (adolescents). Pushing the terms to the extreme, we can say that the challenge that visual culture today poses to the traditional methods of doing history is the same that Herodotus and Thucydides posed to the tellers of myths and legends. Before Herodotus there was myth, which was a perfectly adequate means of narrating and giving meaning to the past of a tribe or a city.
In a post-literary world, our knowledge of the past is subtly changing the moment we see it represented in pixels, or when information emerges not on its own but through interactivity with the medium. Our ability as scholars and students to perceive important ideas and trends often depends on the methods we use to represent the data and the evidence. For historians to obtain the benefit implicit in 3D, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. The experiences collected in the preceding pages lead us to think that in a not too distant future a tool such as the computer will be the only means through which knowledge is transmitted, and, from a teaching point of view, its interactivity allows a degree of student involvement that no other modern communication medium can match.
Abstract:
Gnocchi is a typical Italian potato-based fresh pasta that can be either homemade or industrially manufactured. The homemade traditional product is consumed fresh on the day it is produced, whereas the industrially manufactured one is vacuum-packed in polyethylene and usually stored under refrigerated conditions. At the industrial level, most kinds of gnocchi are produced using potato derivatives (i.e. flakes, dehydrated products or flour) to which soft wheat flour, salt, emulsifiers and aromas are added. Recently, a novel type of gnocchi has emerged on the Italian pasta market, intended to be as similar as possible to the traditional homemade product. It is industrially produced from fresh potatoes as the main ingredient, together with soft wheat flour, pasteurized liquid eggs and salt; the potatoes undergo industrial steam-cooking and mashing treatments, and neither preservatives nor emulsifiers are included in the recipe. The main aim of this work was to investigate the industrial manufacture of gnocchi in order to improve the quality characteristics of the final product, by studying the main steps of the production, starting from the raw and steam-cooked tubers, through the semi-finished materials, such as the potato puree and the formulated dough. For this purpose, the enzymatic activity of the raw and steam-cooked potatoes, the main characteristics of the puree (colour, texture and starch), the interaction among the ingredients of differently formulated doughs, and the basic quality aspects of the final product were investigated. The results obtained in this work indicated that steam cooking influenced the analysed enzymes (pectin methylesterase and α- and β-amylases) differently in different tissues of the tuber. PME was still active in the cortex and may therefore affect the texture of cooked potatoes to be used as the main ingredient in the production of gnocchi. The starch-degrading enzymes (α- and β-amylases) were inactivated both in the cortex and in the pith of the tuber. The study performed on the potato puree showed that, between the two analysed samples, the product obtained with the dual lower-pressure treatment seemed the most suitable for the production of gnocchi, in terms of its better physicochemical and textural properties; it did not show the aggregation phenomena responsible for hard lumps, which may occur in this kind of semi-finished product. The textural properties of the gnocchi doughs were not influenced by the different formulations as expected. Among the ingredients involved in the preparation of the different samples, soft wheat flour seemed to be the most crucial in affecting the quality features of the gnocchi doughs. As a consequence of the interactive effect of the ingredients on the physicochemical and textural characteristics of the different doughs, a uniform and well-defined separation among the samples was not obtained. In the comparison of the different kinds of gnocchi, the best physicochemical and textural properties were detected in the sample made with fresh tubers. This was probably due not only to the use of fresh steam-cooked potatoes, but also to the pasteurized liquid eggs and to the absence of any emulsifier, additive or preservative.
Abstract:
Nowadays, it is clear that creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient in comparison with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; they also imply an increase in productivity and, as a consequence, smaller reactors can be used. In addition, an indirect gain may derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. Potentially, the impact of these productions on the environment could have been much worse than it is, had a continuous effort not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, helping to move the chemical industry towards a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation, and innovative reactor configurations and process design. In order to turn these ideas into real projects, the development of more efficient reactions is a primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency. In the case of catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidized compound. As a matter of fact, only in a few oxidation reactions is total, or close to total, conversion achieved, and usually the selectivity is limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, sometimes the cost of the oxidant further penalizes the process. During my PhD work, I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The "greener" approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric oxidation to a catalytic one), which also avoids the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to efficiently activate hydrogen peroxide. The overall process would be carried out in two different steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions that simulate the in-situ generation of hydrogen peroxide, i.e. using a low concentration of the oxidant. Then, experiments were carried out using higher hydrogen peroxide concentrations.
The study of the reaction mechanism was fundamental to obtaining indications about the best operating conditions and to improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone. This can be considered a case in which economics drives the sustainability issue; in fact, a new process yielding phenol directly, besides avoiding the co-production of acetone (a burden for phenol, because the market requirements for the two products are quite different), might be economically convenient with respect to the conventional process, provided a high selectivity to phenol is obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity results obtained with TS-1 samples having different chemical-physical properties, and by analysing in detail the effect of the more important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already industrially applied but, for economic reasons, an improvement of the selectivity to the para di-hydroxylated compound and a decrease of the selectivity to the ortho isomer would be desirable. In this case too, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and, ultimately, to make the industrial process more flexible, so that its performance can be adapted to fluctuations in market requirements. The reaction was carried out both in a batch stirred reactor and in a re-circulating fixed-bed reactor. In the first system, the effect of various reaction parameters on the catalytic behaviour was investigated: the type of solvent or co-solvent, and the particle size. With the second reactor type, I investigated the possibility of using a continuous system with the catalyst shaped into extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by means of its transformation into valuable chemicals. This molecule is nowadays produced in large amounts, being a co-product of biodiesel synthesis; it is therefore considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the oxidation of glycerol in the liquid phase, with hydrogen peroxide and TS-1; however, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein; the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. Since the oxidation step from acrolein to acrylic acid is already optimized at the industrial level, we decided to investigate the first step of the process in depth. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (one of the main problems usually encountered in glycerol dehydration).
Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox functionality (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply the "green chemistry" rules and strategies; in particular, I studied new, greener approaches for the synthesis of chemicals (Part A and Part B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
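For reference, the efficiency metrics named above (yield, selectivity and space-time yield) can be written compactly for the conversion of a reactant A into a desired product P; these are the standard textbook definitions (assuming 1:1 stoichiometry), quoted only to fix the terminology, not taken from the thesis:

\[
X_A = \frac{n_{A,0}-n_A}{n_{A,0}}, \qquad
S_P = \frac{n_P}{n_{A,0}-n_A}, \qquad
Y_P = X_A\,S_P = \frac{n_P}{n_{A,0}}, \qquad
\text{STY} = \frac{m_P}{V_{\text{reactor}}\;t}
\]

where n denotes moles (subscript 0 for the feed), m_P the mass of product formed, V_reactor the reactor volume and t the reaction time; a selective oxidation is efficient when both the conversion X_A and the selectivity S_P are high at an acceptable space-time yield.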
Abstract:
ABSTRACT: The present work dealt with the purification, heterologous expression, characterization, molecular analysis, mutation and crystallization of the enzyme vinorin synthase. The enzyme plays an important role in ajmaline biosynthesis, since in an acetyl-CoA-dependent reaction it catalyses the conversion of the sarpagan alkaloid 16-epi-vellosimine to vinorin, forming the ajmalan skeleton. After purification of vinorin synthase from hybrid cell cultures of Rauvolfia serpentina/Rhazya stricta using five chromatographic separation methods (anion exchange chromatography on SOURCE 30Q, hydrophobic interaction chromatography on SOURCE 15PHE, chromatography on MacroPrep Ceramic Hydroxyapatite, anion exchange chromatography on Mono Q and size exclusion chromatography on Superdex 75), vinorin synthase could be enriched 991-fold from 2 kg of cell culture tissue. The SDS gel prepared after the purification allowed a clear assignment of the protein band as vinorin synthase. Digestion of the enzyme band with the endoproteinase LysC and subsequent sequencing of the cleavage peptides yielded four peptide sequences. A database comparison (SwissProt) showed no homology to sequences of known plant enzymes. With degenerate primers, derived from one of the obtained peptide fragments and from a conserved region of known acetyltransferases, a first cDNA fragment of vinorin synthase was amplified. The nucleotide sequence was completed by RACE-PCR, leading to a full-length cDNA clone of 1263 bp coding for a protein of 421 amino acids (46 kDa). The vinorin synthase gene was ligated into the pQE2 expression vector, which codes for an N-terminal 6x His-tag. The enzyme was then, for the first time, successfully expressed in E. coli on the mg scale and purified to homogeneity. Thanks to the successful overexpression, vinorin synthase could be characterized in detail. The KM value for the substrate gardneral was determined as 20 µM and 41.2 µM, respectively, and Vmax was 1 pkat and 1.71 pkat, respectively. After successful cleavage of the His-tag, the kinetic parameters were determined again (KM 7.5 µM and 27.52 µM, Vmax 0.7 pkat and 1.21 pkat, respectively). The co-substrate shows a KM value of 60.5 µM (Vmax 0.6 pkat). Vinorin synthase has a temperature optimum of 35 °C and a pH optimum of 7.8. Homology comparisons with other enzymes showed that vinorin synthase belongs to a still small family of, so far, 10 acetyltransferases. In all enzymes of this family, an HxxxD and a DFGWG motif are 100 % conserved. Based on these homology comparisons and on inhibitor studies, 11 amino acids conserved in this protein family were exchanged for alanine, in order to identify the amino acids of the catalytic triad (Ser/Cys-His-Asp) postulated in the literature. Mutation of all the conserved serines and cysteines did not yield any mutant with a complete loss of enzyme activity; only the mutations H160A and D164A resulted in a complete loss of activity. This result refutes the theory of a catalytic triad and shows that the amino acids H160 and D164 are the ones exclusively involved in the catalytic reaction. To verify these results and to fully elucidate the reaction mechanism, vinorin synthase was crystallized.
The crystals obtained so far (crystal size in µm: x: 150, y: 200, z: 200) belong to space group P212121 (primitive orthorhombic) and diffract to 3.3 Å. Since no crystal structure of a protein homologous to vinorin synthase is yet available, the structure could not be solved completely. To solve the phase problem, the method of multiple-wavelength anomalous dispersion (MAD) is now being used in an attempt to elucidate the first crystal structure in this enzyme family.
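The kinetic parameters reported above (KM and Vmax) refer to the standard Michaelis-Menten rate law, quoted here only to make the reported numbers easier to interpret:

\[
v = \frac{V_{\max}\,[S]}{K_M + [S]}
\]

so KM is the substrate (or co-substrate) concentration at which the reaction proceeds at half its maximal velocity Vmax.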
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of this pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials were analyzed: two warm mix asphalts and a rubberized asphalt concrete. The first two chapters summarize the steps necessary to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced; the low-environmental-impact materials, warm mix asphalt and rubberized asphalt concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the different design approaches it provides, with specific attention to the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are presented in Chapters V and VI respectively. From these laboratory test data, the parameters of the CalME I-R models for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
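As a schematic illustration of the incremental-recursive idea described above (a minimal sketch in which the damage law is a placeholder power-law expression with assumed coefficients, not one of CalME's calibrated models), each time increment accumulates damage and the degraded layer modulus is fed back, recursively, into the next increment:

```python
def incremental_recursive(E0_mpa, load_reps_per_step, n_steps, a=2.0e-6, b=0.6):
    """Toy incremental-recursive loop: damage grows with load repetitions and the
    damaged layer modulus of one increment is the input to the next increment.
    a and b are placeholder damage-law coefficients (assumed values)."""
    modulus, damage, history = E0_mpa, 0.0, []
    for step in range(n_steps):
        # placeholder damage increment: a softer (already damaged) layer damages faster
        d_damage = a * load_reps_per_step * (E0_mpa / modulus) ** b
        damage = min(1.0, damage + d_damage)
        modulus = E0_mpa * (1.0 - damage)   # recursive update fed to the next increment
        history.append((step, round(modulus, 1), round(damage, 4)))
    return history

for step, modulus, damage in incremental_recursive(6000.0, 50_000, 5):
    print(f"increment {step}: modulus = {modulus} MPa, damage = {damage}")
```

In CalME the same recursive structure is applied with the calibrated master-curve, fatigue-damage and permanent-shear-strain models, and the loop output is translated into predictions of layer moduli, fatigue cracking, rutting and roughness, as summarized above.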
Resumo:
Traffic is forecast to increase in the future, while at the same time there is a lack of space and financial resources to build additional roads. The existing capacities must therefore be used more effectively through better traffic control, e.g. by means of traffic management systems. This requires spatially resolved data that describe the areal distribution of traffic, but such data are currently missing. Until now, traffic data could only be collected where fixed measuring installations are located, and these cannot provide the missing data. Remote sensing systems offer the possibility of acquiring such data over wide areas with a view from above. After decades of experience with remote sensing methods for detecting and investigating a wide variety of phenomena on the Earth's surface, this methodology is now being applied to the field of traffic within a pilot project. Since the end of the 1990s, traffic has been observed with airborne optical and infrared imaging systems. However, under poor weather conditions, and in particular under cloud cover, no usable images can be acquired. With an imaging radar method, data are collected independently of weather, daylight and cloud conditions. This thesis investigates to what extent traffic data can be acquired, processed and usefully applied with airborne synthetic aperture radar (SAR). Not only are the new technique of along-track interferometry (ATI) and the processing of the acquired traffic data presented in detail, but a data set generated with this methodology is also compared with a traffic simulation and evaluated. Finally, an outlook on future developments of radar remote sensing for traffic data acquisition is given.
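The ATI processing chain itself is detailed in the thesis; purely as an illustration of the underlying principle, the following Python sketch applies the standard first-order ATI relation between interferometric phase and line-of-sight velocity. The antenna configuration (single transmit, dual receive, so that the effective time lag is baseline divided by twice the platform speed), the sign convention and the numerical values are assumptions for this example and are not taken from the thesis.

```python
# Minimal sketch of the first-order along-track interferometry (ATI) relation
# used to estimate the line-of-sight velocity of a moving vehicle.
# Configuration and parameter values are illustrative assumptions.

import math


def ati_radial_velocity(phase_rad: float, wavelength_m: float,
                        baseline_m: float, platform_speed_mps: float) -> float:
    """Line-of-sight (radial) velocity from the ATI phase difference.

    Assumes a single-transmit / dual-receive antenna pair, so the effective
    time lag between the two SAR images is tau = baseline / (2 * platform speed).
    """
    tau_s = baseline_m / (2.0 * platform_speed_mps)
    return phase_rad * wavelength_m / (4.0 * math.pi * tau_s)


if __name__ == "__main__":
    # Illustrative airborne X-band example (wavelength about 3.1 cm).
    v_r = ati_radial_velocity(phase_rad=0.8, wavelength_m=0.031,
                              baseline_m=0.9, platform_speed_mps=90.0)
    print(f"estimated radial velocity: {v_r:.2f} m/s")
```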
Resumo:
This thesis focuses on the ceramic process for the production of optical grade transparent materials to be used as laser hosts. In order to be transparent, a ceramic material must exhibit a very low concentration of defects. Defects are mainly represented by secondary or grain boundary phases and by residual pores. Strict control of the stoichiometry is mandatory to avoid the formation of secondary phases, whereas residual porosity needs to remain below 150 ppm. In order to fulfill these requirements, specific experimental conditions must be combined. In addition, the powders need to be nanometric, or at least sub-micrometric, and extremely pure. On the other hand, nanometric powders aggregate easily, which leads to poor, inhomogeneous packing during shaping by pressing and to the formation of residual pores during sintering. Very fine powders are also difficult to handle and tend to absorb water on their surface. Finally, powder manipulation (weighing operations, solvent removal, spray drying, shaping, etc.) easily introduces impurities. All these features must be fully controlled in order to avoid the formation of defects that act as scattering sources and thus decrease the transparency of the material. The important role played by processing on the transparency of ceramic materials is often underestimated. In the literature a high level of transparency has been reported by many authors, but the experimental process, in particular the powder treatment and shaping, is seldom described in detail, and important information necessary to reproduce the reported results is often missing. The main goal of the present study is therefore to provide additional information on how the experimental conditions affect the microstructural evolution of YAG-based ceramics and thus the final properties, in particular transparency. Commercial powders are used to prepare YAG materials doped with Nd or Yb by reactive sintering under high vacuum. These dopants have been selected as the most appropriate for high energy and high peak power lasers. Concerning the powder treatment, the thesis focuses on the influence of the solvent removal technique (rotavapor versus spray drying of suspensions in ethanol), the ball milling duration and speed, the suspension concentration, the solvent ratio, and the type and amount of dispersant. The influence of the powder type and process on powder packing, as well as the pressure conditions during shaping by pressing, are also described. Finally, calcination, sintering under high vacuum and in a clean atmosphere, and post-sintering cycles are studied and related to the final microstructure, analyzed by SEM-EDS and HR-TEM, and to the optical and laser properties.
Resumo:
The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and his social context; from this need the interdisciplinary field called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which has been superseded by the "berry picking" model: search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed: it is now expressed through the query together with its own context. In fact, the method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing any IR system led to its development as a middleware layer between the user and the IR system. The system therefore has just two possible actions: rewriting the query and reordering the results. Equivalent actions are described in the PS literature, which generally exploits information derived from the analysis of user behavior, whereas the proposed approach exploits knowledge provided by the user. The thesis goes further by proposing a novel assessment procedure, following the "Cranfield paradigm", in order to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness obtained and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
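As a rough illustration of the middleware idea described above (query rewriting plus result reordering, driven by a knowledge base provided by the user rather than by behavioral logging), the following Python sketch is a minimal mock-up. The class names, the expansion strategy and the reordering heuristic are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of a personalization middleware between the user and an
# existing IR system: it rewrites the query using a local knowledge base and
# reorders the returned results. All names and heuristics are illustrative.

from typing import Callable


class LocalKnowledgeBase:
    """Tiny user-provided knowledge base: term -> related context terms."""

    def __init__(self, contexts: dict[str, list[str]]):
        self.contexts = contexts

    def expand(self, query: str) -> str:
        extra = [t for term in query.split() for t in self.contexts.get(term, [])]
        return " ".join([query, *extra]) if extra else query


class PersonalizationMiddleware:
    def __init__(self, kb: LocalKnowledgeBase,
                 search: Callable[[str], list[str]]):
        self.kb = kb
        self.search = search  # the underlying, unmodified IR system

    def run(self, query: str) -> list[str]:
        rewritten = self.kb.expand(query)          # action 1: rewrite the query
        results = self.search(rewritten)
        context_terms = set(rewritten.split())
        # action 2: reorder by overlap with the user's context terms
        return sorted(results,
                      key=lambda doc: -len(context_terms & set(doc.lower().split())))


if __name__ == "__main__":
    kb = LocalKnowledgeBase({"jaguar": ["car", "engine"]})
    fake_engine = lambda q: ["jaguar habitat rainforest", "jaguar car engine review"]
    middleware = PersonalizationMiddleware(kb, fake_engine)
    print(middleware.run("jaguar"))  # the automotive result is ranked first
```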