Abstract:
The aim of the study was to examine the state of internal communication in the case companies. The companies belong to two case value networks operating in the information and communication technology sector. Internal communication was chosen as the research area because it forms the basis for external, inter-company communication. The focus of the study was on web-based communication and on the properties of the web from the value-network perspective. Both qualitative and quantitative methods were used in the research process. The quantitative part of the study was carried out as a web survey, whose results showed that internal communication in the case companies is based mainly on the use of traditional communication tools. In other words, use of the web is minor, which is influenced by many different factors. The web nevertheless has several properties that improve communication in a value network, and these web-based tools should therefore be taken into account when a general communication system is designed. In the theoretical part of the study, a classification of communication tools based on the property of interactivity was defined. In addition, the concept of the value network was defined. The empirical part consisted of the implementation of the web survey and the reporting of its results, after which a concluding chapter summarized the most significant findings and possible topics for further research.
Abstract:
Mottling is one of the key defects in offset printing. Mottling can be defined as unwanted unevenness of print. In this work, the diameter of a mottle spot is defined to be between 0.5 and 10.0 mm. There are several types of mottling, but the reason behind the problem is still not fully understood. Several commercial machine vision products for the evaluation of print unevenness have been presented. Two of the methods used in these products have been implemented in this thesis: the cluster method and the band-pass method. The properties of the human visual system have been taken into account in the implementation of both methods. The index produced by the cluster method is a weighted sum of the number of spots found, and the index produced by the band-pass method is a weighted sum of the coefficients of variation of gray levels for each spatial band. Both methods produce larger indices for visually poor samples, so they can discern good samples from poor ones. The difference between the indices for good and poor samples is slightly larger with the cluster method. However, without samples evaluated by human experts, the goodness of these results is still questionable. This comparison will be left to the next phase of the project.
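As a sketch of the band-pass idea described above, the index below sums weighted coefficients of variation of gray levels over annular spatial-frequency bands. The band edges and weights are hypothetical placeholders, not the thesis's calibrated values, which also include the human-vision weighting mentioned in the abstract:

```python
import numpy as np

def bandpass_mottle_index(gray, band_edges=(2, 4, 8, 16), weights=None):
    """Illustrative band-pass mottle index: a weighted sum of the
    coefficient of variation of gray levels in each spatial band.
    band_edges (in frequency-domain pixels) and weights are assumptions."""
    if weights is None:
        weights = [1.0] * (len(band_edges) - 1)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    yy, xx = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)  # distance from the zero-frequency center
    index = 0.0
    for wgt, lo, hi in zip(weights, band_edges[:-1], band_edges[1:]):
        mask = (radius >= lo) & (radius < hi)        # annular frequency band
        band = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
        cv = band.std() / (gray.mean() + 1e-12)      # coefficient of variation
        index += wgt * cv
    return index
```

A perfectly even print yields an index near zero, while noisy (mottled) prints produce larger indices, matching the behavior described for the implemented methods.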
Abstract:
A method for the analysis of high-speed solid-rotor induction motors is presented. The analysis is based on a new combination of the three-dimensional linear method and the transfer matrix method. Both saturation and finite-length effects are taken into account. The active region of the solid rotor is divided into saturated and unsaturated parts. The time dependence is assumed to be sinusoidal, and phasor quantities are used in the solution. The method is applied to the calculation of smooth solid rotors manufactured of different materials. Six rotor materials are tested: three construction steels, pure iron, a cobalt-iron alloy and an aluminium alloy. The results obtained by the method agree fairly well with the measured quantities.
Abstract:
During the last decade, high-speed motor technology has been applied increasingly often in the medium and large power range. In particular, applications involving gas movement and compression seem to be the most important area in which high-speed machines are used. By manufacturing the induction motor rotor core of one single piece of steel, an extremely rigid rotor construction is achieved for the high-speed motor. In a mechanical sense, the solid rotor may be the best possible rotor construction. Unfortunately, the electromagnetic properties of a solid rotor are poorer than those of the traditional laminated rotor of an induction motor. This thesis analyses methods for improving the electromagnetic properties of a solid-rotor induction machine. The slip of the solid rotor is reduced notably if the solid rotor is axially slitted. The slitting patterns of the solid rotor are examined, and it is shown how the slitting parameters affect the produced torque. Methods for decreasing the harmonic eddy currents on the surface of the rotor are also examined. The motivation for this is to improve the efficiency of the motor to reach the efficiency standard of a laminated-rotor induction motor. To carry out these research tasks, finite element analysis is used. An analytical calculation of solid rotors based on the multi-layer transfer-matrix method is developed, especially for the calculation of axially slitted solid rotors equipped with well-conducting end rings. The calculation results are verified by finite element analysis and laboratory measurements. Prototype motors of 250–300 kW at 140 Hz were tested to verify the results. Utilization factor data are given for several other prototypes, the largest of which delivers 1000 kW at 12,000 min⁻¹.
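The slip reduction discussed above can be made concrete with the standard induction machine relations n_s = 60·f/p and s = (n_s − n)/n_s. The two-pole assumption and the rotor speed used below are illustrative choices, not values taken from the thesis:

```python
def synchronous_speed_rpm(f_hz, pole_pairs):
    """Synchronous speed of an induction machine in rpm: n_s = 60 * f / p."""
    return 60.0 * f_hz / pole_pairs

def slip(n_sync_rpm, n_rotor_rpm):
    """Per-unit slip: s = (n_s - n) / n_s."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

# For the 140 Hz prototypes, assuming a two-pole machine (one pole pair):
n_s = synchronous_speed_rpm(140, 1)   # 8400 rpm
s = slip(n_s, 8300.0)                 # hypothetical 8300 rpm operating speed
```

Axial slitting lets flux penetrate deeper into the rotor, which lowers the slip needed for a given torque; the relations above show why even a small rpm gain at constant frequency directly reduces the slip losses.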
Abstract:
This thesis analyses the calculation performed by the FanSave and PumpSave energy-saving tools. With these programs, the energy consumption of variable-speed-drive control for fans and pumps can be compared with that of other control methods. FanSave covers centrifugal and axial fans, while PumpSave deals with centrifugal pumps. The programs can also select a suitable frequency converter from the ABB range. As initial values, the programs need information about the appliance, such as the flow rate and efficiencies. Operating time is an important factor in calculating the annual energy consumption, and it is specified by its length and profile. Basic theory related to fans and pumps is introduced, without detailed dimensioning instructions. FanSave and PumpSave cover various flow control methods, which are introduced in the thesis in terms of their operating principles and suitability. The squirrel-cage motor and the frequency converter are also introduced because of their close involvement with fans and pumps. The second part of the thesis compares the results of FanSave's and PumpSave's calculation with calculation based on performance curves. Laboratory tests were also made with a centrifugal and an axial fan, as well as with a centrifugal pump. With the results of this thesis, the calculation of these programs can be adjusted to be more accurate, and some new features can be added.
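The savings such tools estimate for variable-speed control rest on the standard pump and fan affinity laws (flow ∝ N, head ∝ N², shaft power ∝ N³). A minimal sketch with a hypothetical duty point, not the tools' actual dimensioning data:

```python
def affinity_scaled(flow, head, power, speed_ratio):
    """Affinity laws for a speed change N2/N1 = speed_ratio (same impeller,
    geometrically similar conditions): flow ~ N, head ~ N^2, power ~ N^3."""
    return flow * speed_ratio, head * speed_ratio**2, power * speed_ratio**3

def annual_energy_kwh(shaft_power_kw, hours):
    """Annual energy for a constant operating point."""
    return shaft_power_kw * hours

# Hypothetical duty point: 75 kW at full speed, turned down to 80 % flow
# with a variable speed drive; throttling would keep power near full load.
_, _, p_vsd = affinity_scaled(100.0, 50.0, 75.0, 0.8)       # 38.4 kW
saving = annual_energy_kwh(75.0, 4000) - annual_energy_kwh(p_vsd, 4000)
```

The cubic power relation is what makes variable-speed control so attractive: a 20 % speed reduction roughly halves the shaft power, which is the effect the comparison against throttling and other control methods quantifies.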
Abstract:
We tested and compared the performance of the Roach formula, the Partin tables and three Machine Learning (ML) algorithms based on decision trees in identifying node-positive (N+) prostate cancer (PC). 1,555 cN0 and 50 cN+ PC patients were analyzed. Results were also verified on an independent population of 204 operated cN0 patients with a known pN status (187 pN0, 17 pN1). ML performed better, also when tested on the surgical population, with accuracy, specificity, and sensitivity ranging between 48-86%, 35-91%, and 17-79%, respectively. ML potentially allows better prediction of the nodal status of PC, allowing a better tailoring of pelvic irradiation.
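The reported accuracy, specificity and sensitivity follow from a standard confusion matrix. The counts below are hypothetical, chosen only to sum to the 204-patient surgical population; they are not the study's results:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, specificity and sensitivity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)   # correctly identified pN0
    sensitivity = tp / (tp + fn)   # correctly identified pN1
    return accuracy, specificity, sensitivity

# Hypothetical counts for a 204-patient population (187 pN0, 17 pN1):
acc, spec, sens = classification_metrics(tp=12, fp=30, tn=157, fn=5)
```

With only 17 pN1 cases, sensitivity rests on very few patients, which helps explain the wide 17-79% range reported across the compared predictors.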
Abstract:
This study compares the impact of quality management tools on the performance of organisations using the ISO 9001:2000 standard as the basis for a quality-management system and those using the EFQM model for this purpose. A survey was conducted among 107 experienced and independent quality-management assessors. The study finds that organisations with quality-management systems based on the ISO 9001:2000 standard tend to use general-purpose qualitative tools, and that these have a relatively positive impact on their general performance. In contrast, organisations adopting the EFQM model tend to use more specialised quantitative tools, which produce significant improvements in specific aspects of their performance. The findings of the study will enable organisations to choose the most effective quality-improvement tools for their particular quality strategy.
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
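One well-known instance of such a fast cross-validation scheme (illustrative here, not necessarily the exact algorithm of the thesis) is the closed-form leave-one-out residual for kernel regularized least-squares, which avoids retraining n times:

```python
import numpy as np

def rls_loo_residuals(K, y, lam):
    """Leave-one-out residuals for kernel regularized least-squares in
    closed form: e_i = (y_i - yhat_i) / (1 - H_ii), where H = K (K + lam I)^-1
    is the hat matrix. One matrix inversion replaces n separate retrainings."""
    n = len(y)
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))
```

Because the hat matrix is computed once, the whole leave-one-out estimate costs little more than a single training run, which is what makes cross-validated model selection practical for these learners.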
Abstract:
Overall Equipment Effectiveness (OEE) is a key metric of operational excellence. OEE monitors the actual performance of equipment relative to its performance capabilities under optimal manufacturing conditions. It looks at the entire manufacturing environment, measuring, in addition to equipment availability, the production efficiency while the equipment is available to run products, as well as the efficiency loss that results from scrap, rework, and yield losses. The analysis of OEE provides improvement opportunities for the operation. One of the tools used for OEE improvement is the Six Sigma DMAIC methodology, a set of practices originally developed to improve processes by eliminating defects. It asserts that continuous effort to reduce variation in process outputs is key to business success, and that manufacturing and business processes can be measured, analysed, improved and controlled. In the case of the Bottomer line AD2378 in the Papsac Maghreb Casablanca plant, the OEE figure reached 48.65%, which is below the accepted OEE group performance. This required immediate action to improve the OEE. This Master's thesis focuses on the application of the Six Sigma DMAIC methodology to OEE improvement on the Bottomer line AD2378 in the Papsac Maghreb Casablanca plant. First, the use of Six Sigma DMAIC and OEE in operations measurement is discussed. Afterwards, the DMAIC phases allow the identification of the improvement focus, the identification of the causes of low OEE performance, and the design of improvement solutions. These are implemented to allow further tracking of the improvement's impact on plant operations.
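The OEE figure itself is the product of the availability, performance and quality rates. The component rates below are hypothetical, chosen only to land near the reported 48.65%:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of the three rates,
    each expressed as a fraction between 0 and 1."""
    return availability * performance * quality

# Hypothetical component rates giving an OEE close to the reported figure:
value = oee(0.75, 0.81, 0.80)   # 0.486, i.e. about 48.6 %
```

Decomposing a low OEE into its three factors is exactly what the Measure and Analyze phases of DMAIC exploit: each factor points to a different loss category (downtime, speed losses, or scrap and rework).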
Abstract:
The aim of this work is to design a flywheel generator for a diesel hybrid working machine; the detailed design of the generator is carried out. Mobile machines are commonly used in industry: road building machines, tree harvesting machines, boring machines, trucks and other equipment. These machines work with a hydraulic drive system, which provides good service properties and a high technical level. Manufacturers of mobile machines strive to satisfy all customer requirements and to modernize the drive system. A description of the frequency inverter is also presented, since the power electronics system is one of the basic parts of the structure designed in the project.
Abstract:
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken uses full parsing (syntactic analysis of the entire structure of sentences) and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general-English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance, and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time.
To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships. The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6,000 entities, 2,500 relationships and 28,000 syntactic dependencies in 1,100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships.
Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
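The dependency-based, formalism-neutral evaluation described above typically reduces to precision and recall over (head, dependent, relation) triples once parses are converted to a shared representation. A minimal sketch with invented example triples:

```python
def dependency_prf(gold, predicted):
    """Precision, recall and F1 over (head, dependent, relation) triples,
    the usual basis for comparing parsers across formalisms."""
    gold, predicted = set(gold), set(predicted)
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if correct else 0.0
    return precision, recall, f1

# Invented triples for "protein binds receptor"; relation labels are
# illustrative, not from any of the evaluated parsers.
gold = {("binds", "protein", "nsubj"), ("binds", "receptor", "obj")}
pred = {("binds", "protein", "nsubj"), ("binds", "receptor", "nmod")}
p, r, f = dependency_prf(gold, pred)
```

This scoring only works once both parsers emit the same scheme, which is why the conversion methodology between parse representations matters for any cross-parser comparison.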
Abstract:
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR (the European infrastructure for biological information), that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools.
Abstract:
This report describes the preparation, execution and results of implementing a route calculation system. The Open Source Routing Machine (OSRM) project is a high-performance routing engine that uses OpenStreetMap data to compute the shortest path between two points. This final project uses not only OpenStreetMap data but also custom data in shapefile format, and displays the results in a web viewer. The viewer allows the user, in a simple way, to request routes from the OSRM server that was set up, obtaining the desired route in a few milliseconds.
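OSRM serves routes over its documented HTTP interface at /route/v1/{profile}/{coordinates}, with coordinates given longitude-first. A small sketch that only builds the request URL a viewer like the one described would send; the localhost base address and the example coordinates are placeholders:

```python
def osrm_route_url(base, profile, coords, overview="false"):
    """Build an OSRM HTTP route request:
    GET {base}/route/v1/{profile}/{lon1},{lat1};{lon2},{lat2}
    coords is a list of (lon, lat) pairs, longitude first as OSRM expects."""
    path = ";".join(f"{lon:.6f},{lat:.6f}" for lon, lat in coords)
    return f"{base}/route/v1/{profile}/{path}?overview={overview}"

# Placeholder server and coordinates (two points in Barcelona):
url = osrm_route_url("http://localhost:5000", "driving",
                     [(2.170, 41.387), (2.174, 41.403)])
```

The server answers with JSON containing the route geometry and duration; the millisecond response times mentioned above come from OSRM's precomputed contraction-based graph, so the web viewer only needs to issue this request and draw the returned geometry.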