8 results for machine tools and accessories

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

With this thesis I aim to illustrate the results of my research in the field of Semantic Publishing, consisting in the development of a set of methodologies, tools and prototypes, together with the study of a concrete use case, aimed at the application and focusing of Semantic Lenses.

Relevance:

100.00%

Publisher:

Abstract:

The need to effectively manage the documentation covering the entire production process, from the concept phase through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the '90s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software applications used by companies for their process automation. Editor programs allow users to store the information related to the production chain in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Several software tools that capture and store documents and information automatically in the PLM system have been developed in recent years. One of them is the "DirectPLM" application, developed by the Italian company Focus PLM, which is designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present "DirectPLM2", a new version of the previous DirectPLM application. It was designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the concrete command implementation, which was previously strongly dependent on Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each one devised for a specific PLM system. In fact, the company can focus its development effort only on the specific set of software components that provide the specialized functions interacting with that particular PLM system. This allows a shorter time to market and gives the company a significant competitive advantage.
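The separation described above is essentially an adapter-style design: PLM-agnostic business logic programmed against an abstract command interface, with one concrete connector per PLM system. Below is a minimal Python sketch of this idea; the names (PLMConnector, ArasInnovatorConnector, DocumentPublisher, store_document) are illustrative assumptions and are not taken from the actual DirectPLM2 code.

```python
from abc import ABC, abstractmethod

class PLMConnector(ABC):
    """Abstract command layer: everything that talks to a concrete PLM system."""

    @abstractmethod
    def store_document(self, item_type: str, metadata: dict, payload: bytes) -> str:
        """Persist a document in the PLM system and return its new item id."""

class ArasInnovatorConnector(PLMConnector):
    """Specialized components for one specific PLM system (here Aras Innovator)."""

    def store_document(self, item_type: str, metadata: dict, payload: bytes) -> str:
        # A real connector would issue calls to the Aras server here;
        # this stub only simulates the behaviour.
        print(f"Storing {item_type} with metadata {metadata} in Aras Innovator")
        return "ARAS-0001"

class DocumentPublisher:
    """PLM-agnostic business logic: works with any PLMConnector."""

    def __init__(self, connector: PLMConnector):
        self.connector = connector

    def publish_from_editor(self, file_path: str, author: str) -> str:
        # Capture the editor output and hand it to whichever PLM backend is configured.
        with open(file_path, "rb") as f:
            payload = f.read()
        metadata = {"source": file_path, "author": author}
        return self.connector.store_document("Document", metadata, payload)
```

With this layout, supporting a different PLM system only requires a new connector subclass; the business-logic layer stays unchanged, which is the kind of decoupling the abstract describes.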

Relevance:

100.00%

Publisher:

Abstract:

Following the internationalization of contemporary higher education, academic institutions based in non-English-speaking countries are increasingly urged to produce content in English to address prospective international students and personnel, as well as to increase their attractiveness. The demand for English translations in the institutional academic domain is consequently increasing at a rate exceeding the capacity of the translation profession. Resources for assisting non-native authors and translators in the production of appropriate texts in L2 are therefore required in order to help academic institutions and professionals streamline their translation workload. Such resources include: (i) parallel corpora to train machine translation systems and multilingual authoring tools; and (ii) translation memories for computer-aided translation tools. The purpose of this study is to create and evaluate reference resources like those mentioned in (i) and (ii) through the automatic sentence alignment of a large set of Italian and English as a Lingua Franca (ELF) institutional academic texts given as equivalent but not necessarily parallel (i.e. translated). In this framework, a set of alignment algorithms and alignment tools is examined in order to identify the most profitable one(s) in terms of accuracy and time- and cost-effectiveness. In order to determine the text pairs to align, a sample is selected according to document length similarity (in characters) and subsequently evaluated in terms of extent of noisiness/parallelism, alignment accuracy and content leverageability. The results of these analyses serve as the basis for the creation of an aligned bilingual corpus of academic course descriptions, which is eventually used to create a translation memory in TMX format.
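As an illustration of two steps in such a pipeline (selecting text pairs by character-length similarity and exporting aligned segments to TMX), here is a minimal Python sketch; the threshold, the function names and the TMX header values are illustrative assumptions, not the actual tools or settings evaluated in the study.

```python
import xml.etree.ElementTree as ET

def length_similarity(doc_a: str, doc_b: str) -> float:
    """Character-length ratio in [0, 1]; 1.0 means identical length."""
    la, lb = len(doc_a), len(doc_b)
    return min(la, lb) / max(la, lb) if max(la, lb) else 0.0

def select_pairs(it_docs, en_docs, threshold=0.8):
    """Keep only those document pairs whose lengths are similar enough."""
    return [(it, en) for it, en in zip(it_docs, en_docs)
            if length_similarity(it, en) >= threshold]

def write_tmx(segment_pairs, path):
    """Dump aligned (Italian, English) segment pairs as a minimal TMX file."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {
        "creationtool": "aligner", "creationtoolversion": "0.1",
        "segtype": "sentence", "o-tmf": "none", "adminlang": "en",
        "srclang": "it", "datatype": "plaintext",
    })
    body = ET.SubElement(tmx, "body")
    for it_seg, en_seg in segment_pairs:
        tu = ET.SubElement(body, "tu")
        for lang, seg in (("it", it_seg), ("en", en_seg)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = seg
    ET.ElementTree(tmx).write(path, encoding="utf-8", xml_declaration=True)
```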

Relevance:

100.00%

Publisher:

Abstract:

In recent years, locating people and objects and communicating with them in real time has become a common need in everyday life. At present, indoor location systems lack a dominant technology, unlike outdoor location systems, where GPS is dominant. In fact, each indoor location technology presents a set of features that prevents its use across all application scenarios; however, thanks to its characteristics, it can coexist well with other similar technologies, without any of them being dominant or more widely adopted than the other indoor location systems. In this context, the European project SELECT studies the opportunity of combining all these different features in an innovative system that can be used in a large number of application scenarios. The goal of the project is to realize a wireless system in which a network of fixed readers is able to query one or more tags attached to the objects to be located. The SELECT consortium is composed of European institutions and companies, including Datalogic S.p.A. and CNIT, which deal with the software and firmware development of the baseband receiving section of the readers, whose function is to acquire and process the information received from generic tagged objects. Since the SELECT project is highly innovative, one of the key stages of the system design is the debug phase. This work aims to study and develop tools and techniques for debugging the firmware of the baseband receiving section of the readers.
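As one hedged illustration of what a debug technique for such a receiving chain could look like, the Python sketch below replays a recorded acquisition through a decoding stage under test and compares the decoded tag identifiers with the expected ones; the function names and data layout are purely hypothetical and are not taken from the SELECT project.

```python
def run_debug_case(decode_fn, recorded_samples, expected_tag_ids):
    """Replay recorded baseband data through the decoding stage under test
    and compare the decoded tag identifiers against the expected ones."""
    decoded = set(decode_fn(recorded_samples))
    expected = set(expected_tag_ids)
    missing, spurious = expected - decoded, decoded - expected
    print(f"decoded={sorted(decoded)}  missing={sorted(missing)}  spurious={sorted(spurious)}")
    return not missing and not spurious

# Example with a trivial stand-in decoder (the real one lives in the reader firmware):
if run_debug_case(lambda samples: [s["tag_id"] for s in samples],
                  [{"tag_id": 7}, {"tag_id": 12}],
                  expected_tag_ids=[7, 12]):
    print("test case passed")
```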

Relevance:

100.00%

Publisher:

Abstract:

This dissertation was conducted within the Language Toolkit project, which aims to integrate the worlds of work and university. In particular, it consists of the translation into English of documents commissioned by the Italian company TR Turoni, and its primary purpose is to demonstrate that, in the field of translation for companies, existing translation support tools and software can optimise and facilitate the translation process. The work consists of five chapters. The first introduces the Language Toolkit project, the TR Turoni company and its relationship with the CERMAC export consortium. After outlining the current state of company internationalisation, the importance of professional translators in enhancing the competitiveness of companies entering new international markets is highlighted. Chapter two provides an overview of the texts to be translated, focusing on their textual function and typology and on the addressees. After that, manual translation and the main software developed specifically for translators are described, with a focus on computer-assisted translation (CAT) and machine translation (MT). The third chapter presents the target texts and the corresponding translations. Chapter four is dedicated to the analysis of the translation process. The first two texts were translated manually, with the support of a purpose-built specialized corpus. The following two documents were translated with the software SDL Trados Studio 2011 and its applications. The last texts were submitted to the Google Translate service and then to a process of pre- and post-editing. Finally, in chapter five conclusions are drawn about the main limitations and potential of the different translation techniques. In addition, the importance of an integrated use of all the available instruments is underlined.
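For the manual translations supported by a purpose-built specialized corpus, a translator typically extracts frequent domain terms and phraseology from the corpus before translating. The Python sketch below shows one very rough way to do that; the folder name, the bigram heuristic and the frequency threshold are illustrative assumptions, not the procedure actually followed in the dissertation.

```python
import re
from collections import Counter
from pathlib import Path

def extract_candidate_terms(corpus_dir: str, min_count: int = 3):
    """Very rough term-candidate extraction from a folder of plain-text files."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        words = re.findall(r"[a-zà-ù]+", text)
        counts.update(words)                                        # single words
        counts.update(" ".join(p) for p in zip(words, words[1:]))   # bigrams
    return [(term, n) for term, n in counts.most_common() if n >= min_count]

# Hypothetical corpus folder; print the 20 most frequent candidates.
for term, n in extract_candidate_terms("corpus_en")[:20]:
    print(f"{n:4d}  {term}")
```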

Relevance:

100.00%

Publisher:

Abstract:

The aim of this dissertation is to provide a translation from English into Italian of a highly specialized scientific article published by the online journal ALTEX. In this text, the authors propose a roadmap for how to overcome the acknowledged scientific gaps for the full replacement of systemic toxicity testing using animals. The main reasons behind this particular choice are my personal interest in the specialized translation of scientific texts and in the alternatives to animal testing. Moreover, this translation was directly requested by the Italian molecular biologist and clinical biochemist Candida Nastrucci. Since it was not possible to translate the whole article within this project, I decided to translate only the introduction, the chapter about skin sensitization, and the conclusion. I intend to use the resources that were created for this project to translate the rest of the article in the near future. In this study, I will show how a translator can translate such a specialized text with the help of a field expert, using CAT tools and a specialized corpus. I will also discuss whether machine translation can prove useful in translating this type of document. This work is divided into six chapters. The first one introduces the main topic of the article and explains my reasons for choosing this text; the second one contains an analysis of the text type, focusing on the differences and similarities between Italian and English conventions. The third chapter provides a description of the resources that were used to translate this text, i.e. the corpus and the CAT tools. The fourth one contains the actual translation, side by side with the original text, while the fifth one provides a general comment on the translation difficulties, an analysis of my translation choices and strategies, and a comment on the relationship between the field expert and the translator. Finally, the last chapter discusses whether machine translation and post-editing can be an advantageous strategy for translating this type of document. The project also contains two appendixes. The first one includes 54 complex terminological sheets, while the second one includes 188 simple terminological sheets.
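A terminological sheet of the kind mentioned above is essentially a structured record pairing a source term with its equivalent and supporting information. The Python sketch below shows one possible data structure for such records; the fields are typical of terminological work but are assumptions here, not the actual layout of the sheets in the appendixes.

```python
from dataclasses import dataclass, field

@dataclass
class TerminologicalSheet:
    """One bilingual terminological record (fields are illustrative)."""
    term_en: str
    term_it: str
    domain: str
    definition_en: str = ""
    definition_it: str = ""
    context_en: str = ""
    context_it: str = ""
    sources: list = field(default_factory=list)

# Example record for a term that appears in the translated chapter.
sheet = TerminologicalSheet(
    term_en="skin sensitization",
    term_it="sensibilizzazione cutanea",
    domain="toxicology",
    sources=["ALTEX article (source text)"],
)
print(sheet.term_en, "->", sheet.term_it)
```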

Relevance:

100.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an extensive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad field of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed by large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
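To make the CNN side of such a comparison concrete, the sketch below defines a small convolutional network and a training loop in PyTorch that could be run on object-recognition subsets of increasing size; the architecture and hyperparameters are illustrative assumptions, not the networks evaluated in the thesis, and the HTM counterpart (e.g. built with Numenta's NuPIC library) is omitted here.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A deliberately small CNN for 32x32 RGB images and 10 object classes."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)            # (N, 64, 8, 8)
        return self.classifier(x.flatten(1))

def train_subset(model, loader, epochs=5, lr=1e-3):
    """Train on a (possibly small) data subset to probe data-efficiency."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```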

Relevance:

100.00%

Publisher:

Abstract:

In this Bachelor's thesis I want to provide readers with tools and scripts for the control of a 7-DOF manipulator, backed up by some theory of Robotics and Computer Science in order to better contextualize the work done. In practice, we will look at the most common software and development environments used to cope with our task: these include ROS, visual simulation with V-REP and RViz, and an almost "stand-alone" ROS extension called MoveIt!, a very complete programming interface for trajectory planning and obstacle avoidance. As we will better appreciate and understand in the introduction chapter, the capability of detecting collision objects through a camera sensor and re-planning to the desired end-effector pose is not enough. In fact, this work is part of a more complex system in which recognition of particular objects is needed. Using a ROS package and customized scripts, a detailed procedure is provided on how to distinguish a particular object, retrieve its reference frame with respect to a known one, and then allow navigation to that target. Together with the technical details, the aim is also to provide working scripts and a dedicated appendix (A) to refer to when putting things together.
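As a hedged example of the kind of script discussed above, the following Python sketch uses the moveit_commander interface to add a detected object to the planning scene and move the end effector to a target pose while avoiding it; the planning group name "manipulator", the box pose and the target offset are illustrative assumptions, not values from the thesis.

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

# Minimal MoveIt! flow: add a collision object seen by the camera, then plan
# and execute a motion of the 7-DOF arm to a target end-effector pose.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("simple_motion_demo")

scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("manipulator")  # group name is robot-specific

rospy.sleep(1.0)  # give the planning scene time to come up

# A detected obstacle, expressed in the planning frame (values are illustrative).
box_pose = PoseStamped()
box_pose.header.frame_id = group.get_planning_frame()
box_pose.pose.position.x = 0.5
box_pose.pose.position.y = 0.0
box_pose.pose.position.z = 0.25
box_pose.pose.orientation.w = 1.0
scene.add_box("detected_object", box_pose, size=(0.1, 0.1, 0.1))

# Target end-effector pose (in practice derived from the object's frame via tf).
target = group.get_current_pose()
target.pose.position.z += 0.10
group.set_pose_target(target)

if group.go(wait=True):          # plan around the collision object and execute
    rospy.loginfo("Target pose reached")
group.stop()
group.clear_pose_targets()
moveit_commander.roscpp_shutdown()
```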