4 results for "fill"
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The aim of this work is to deepen the understanding of the hot-filling process known as nitro-hot-fill (NHF), used for PET containers. Our goal is to reproduce the industrial process on a laboratory scale in order to optimize its parameters and improve the stability of the containers, also through the use of raw materials with enhanced characteristics, employing formulations suited to hot treatments. The process consists of filling the bottle at a temperature between 80 °C and 85 °C, followed by the injection of nitrogen in order to prevent implosion during cooling to room temperature. This market segment is growing strongly, since many beverages require an aseptic container; the NHF process has the advantage of using the heat of the product itself to sterilize the bottle. This is also the source of the critical issues of the process: several measures must be taken to make a bottle processable in this way, since the increase in internal pressure due to the nitrogen injection is accompanied by a temperature close to the glass transition temperature. Our research focused on the design of the bottle, the stretch blow moulding process, the influence of the moisture absorbed by the PET, the material used, and other process parameters, in order to produce containers able to withstand NHF filling.
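The pressure drop that drives implosion can be illustrated with a crude isochoric ideal-gas estimate of the sealed headspace as it cools from filling temperature to room temperature. This is a deliberate simplification (the real effect also involves liquid contraction and vapour condensation), and the numerical values are assumptions, not data from the thesis:

```python
# Crude illustration only: isochoric ideal-gas estimate of the
# headspace pressure after hot filling, ignoring vapour pressure,
# liquid contraction, and bottle deformation. All values assumed.

def headspace_pressure(p_fill_kpa: float, t_fill_c: float, t_cool_c: float) -> float:
    """Pressure after cooling at constant volume: p2 = p1 * T2 / T1 (Kelvin)."""
    t1 = t_fill_c + 273.15
    t2 = t_cool_c + 273.15
    return p_fill_kpa * t2 / t1

# Sealed at ~85 degC and atmospheric pressure, cooled to 20 degC:
p = headspace_pressure(101.325, 85.0, 20.0)
print(f"{p:.1f} kPa")  # -> 82.9 kPa, a partial vacuum
```

The resulting sub-atmospheric headspace is what the nitrogen injection counteracts.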
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a great deal of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, thereby speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain-pipelined linguistic classifiers performing tasks such as Named Entity Recognition, Entity Resolution, and Taxonomy and Relation Extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing, with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
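The plain-pipelined architecture criticized above (NER feeding relation extraction, which emits linked-data triples) can be sketched in a few lines. This toy uses a hand-made gazetteer and a single surface pattern, both purely hypothetical stand-ins for the statistical classifiers real systems use:

```python
# Toy sketch of a plain-pipelined flow: NER -> relation extraction
# -> subject/predicate/object triples. The gazetteer and the single
# "X is in Y" pattern are hypothetical illustrations, not a real system.

import re

GAZETTEER = {"Bologna": "City", "Italy": "Country"}  # assumed mini-lexicon

def ner(text):
    """Dictionary-based named-entity recognition: (surface, type, offset)."""
    pattern = r"\b(" + "|".join(map(re.escape, GAZETTEER)) + r")\b"
    return [(m.group(), GAZETTEER[m.group()], m.start())
            for m in re.finditer(pattern, text)]

def extract_triples(text):
    """Pattern-based relation extraction over the recognized entities."""
    triples = []
    entities = ner(text)
    for subj, s_type, _ in entities:
        for obj, o_type, _ in entities:
            if s_type == "City" and o_type == "Country" and \
               re.search(rf"{subj}\s+is\s+in\s+{obj}", text):
                triples.append((subj, "locatedIn", obj))
    return triples

print(extract_triples("Bologna is in Italy."))
# -> [('Bologna', 'locatedIn', 'Italy')]
```

Each stage only sees the previous stage's output, which is exactly why such pipelines struggle to capture frame-level structure spanning several entities at once.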
Abstract:
Laser shock peening (LSP) is a technique, similar to shot peening, that imparts compressive residual stresses in materials to improve their fatigue resistance. The ability to use a high-energy laser pulse to generate shock waves, inducing a compressive residual stress field in metallic materials, has applications in multiple fields such as turbo-machinery, airframe structures, and medical appliances. The transient nature of the LSP phenomenon and the high rate of the laser dynamics make real-time in-situ measurement of the laser/material interaction very challenging. For this reason, and because of the high cost of experimental tests, reliable analytical methods for predicting the detailed effects of LSP are needed to understand the potential of the process. The aim of this work has been the prediction of the residual stress field after the laser peening process by means of finite element modelling. The work has been carried out in the Stress Methods department of Airbus Operations GmbH (Hamburg) and includes an investigation of the compressive residual stresses induced by laser shock peening, a study of mesh sensitivity, optimization and tuning of the model using physical and numerical parameters, and validation of the model by comparison with experimental results. The model has been built with the Abaqus/Explicit commercial software, starting from considerations made in previous works. FE analyses are mesh-sensitive: by increasing the number of elements and decreasing their size, the software is able to resolve even the details of the real phenomenon. However, these details could be merely an amplification of the real phenomenon; for this reason it was necessary to optimize the size and number of the mesh elements. A new model has been created with a finer mesh in the through-thickness direction, because this direction is the one most involved in the process deformations.
This increase in the global number of elements has been offset by an in-plane size reduction of the elements far from the peened area, in order to avoid excessive computational costs. The efficiency and stability of the analyses have been improved by using bulk viscosity coefficients, a purely numerical parameter available in Abaqus/Explicit. A plastic rate sensitivity study has also been carried out, and a new set of Johnson-Cook model coefficients has been chosen. These investigations led to a more controllable and reliable model, valid even for more complex geometries. Moreover, the study of the material properties highlighted a gap in the model concerning the simulation of the surface conditions. Modelling of the ablative layer employed during the real process has been used to fill this gap. In the real process the ablative layer is a very thin sheet of pure aluminium stuck on the workpiece; in the simulation it has been reproduced simply as a 100 µm layer made of a material with a yield point of 10 MPa. All these new settings have been applied to a set of analyses run with different geometry models to verify the robustness of the model. The calibration of the model against the experimental results was based on stress and displacement measurements carried out both on the surface and in depth. The good correlation between the simulation and the experimental test results proved the model to be reliable.
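The rate-dependent plasticity mentioned above follows the standard Johnson-Cook flow-stress form, sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m). A minimal sketch of the formula is below; the coefficient values are illustrative assumptions, not the calibrated set chosen in the thesis:

```python
# Johnson-Cook flow stress (standard form). Coefficients here are
# illustrative assumptions, not the thesis's calibrated parameter set.
import math

def johnson_cook_stress(eps_p, eps_rate, A, B, n, C, eps_rate0=1.0,
                        T=None, T_room=None, T_melt=None, m=None):
    """sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m).
    The thermal softening term is skipped when temperatures are not given,
    a common simplification for room-temperature LSP simulations."""
    sigma = (A + B * eps_p**n) * (1.0 + C * math.log(eps_rate / eps_rate0))
    if T is not None:
        t_star = (T - T_room) / (T_melt - T_room)  # homologous temperature
        sigma *= 1.0 - t_star**m
    return sigma

# Strain rates in LSP reach ~1e6 1/s, so the rate term matters:
s = johnson_cook_stress(eps_p=0.1, eps_rate=1e6, A=350.0, B=400.0, n=0.3, C=0.01)
print(f"{s:.0f} MPa")  # -> 627 MPa
```

At LSP strain rates the logarithmic rate term raises the flow stress noticeably, which is why the rate sensitivity study and the choice of the C coefficient affect the predicted residual stresses.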
Abstract:
The aim of this thesis is to analyse the main translation issues related to the subtitling into English of the Italian social movie Italy in a day, a crowdsourced film comprising a selection of video clips sent by ordinary people, showing occurrences of everyday life on a single day, October 26th, 2013. My dissertation consists of four chapters. The first provides a general overview of audiovisual translation, from the description of the characteristics of filmic products to a summary of the most important audiovisual translation modes; a theoretical framework of the discipline is also provided, through the analysis of the major contributions of Translation Studies and the multidisciplinary approach proposed by the scholar Frederic Chaume. The second chapter offers insight into the practice of subtitling, examining its technical parameters and its spatial and temporal constraints, together with the advantages and pitfalls of this translation mode. The main criteria for quality assessment are also outlined, as well as the procedures carried out in the creation of subtitles within a professional environment, with a particular focus on the production of subtitles for the DVD industry. In the third chapter a definition of social movie is provided and the audiovisual material is accurately described, both in form and content. The creation of the subtitling project is illustrated here: after giving some information about the software employed, every step of the process is explained. In the final chapter the main translation challenges are highlighted. In the first part, some text reduction techniques in the shift from oral to written language are presented; then the culture-specific references and the linguistic variation in the film are analysed, and the compensating strategies adopted to fill the linguistic and cultural gap are commented on and justified, taking into account the needs and expectations of the target audience.
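The spatial and temporal constraints mentioned above can be made concrete with a small checker. The limits used here (at most 2 lines, 37 characters per line, about 17 characters per second) are commonly cited guideline values, not the specific norms adopted in the thesis, and real DVD specifications vary by client:

```python
# Sketch of a subtitle constraint check. The limits are commonly
# cited guideline values (assumed here), not fixed industry rules.

MAX_LINES = 2            # spatial: at most two lines on screen
MAX_CHARS_PER_LINE = 37  # spatial: typical maximum line length
MAX_CPS = 17.0           # temporal: reading speed in characters/second

def check_subtitle(lines, duration_s):
    """Return a list of violated spatial/temporal constraints."""
    issues = []
    if len(lines) > MAX_LINES:
        issues.append("too many lines")
    if any(len(line) > MAX_CHARS_PER_LINE for line in lines):
        issues.append("line too long")
    cps = sum(len(line) for line in lines) / duration_s
    if cps > MAX_CPS:
        issues.append(f"reading speed {cps:.1f} cps exceeds {MAX_CPS}")
    return issues

print(check_subtitle(["Hello there."], 2.0))  # -> [] (within limits)
print(check_subtitle(["Italy in a day is a crowdsourced film,"], 1.0))
```

Checks of this kind motivate the text reduction techniques discussed in the final chapter: dialogue that cannot be read in the time available must be condensed rather than translated in full.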