996 results for Electric transformer industry
Abstract:
This research report illustrates and examines new operation models for decreasing fixed costs and transforming them into variable costs in the paper industry. The report presents two cases: a new operation model for material logistics in maintenance, and an examination of forklift truck fleet outsourcing solutions. Conventional material logistics in maintenance operations is described and some problems related to it are identified. A new operation model that solves some of these problems is presented, including descriptions of procurement and service contracts and sources of added value. Forklift truck fleet outsourcing solutions are examined by outlining the responsibilities of the host company and the service provider both before and after outsourcing. The customer buys outsourcing services in order to improve its investment productivity. The mechanism by which these services affect the customer company's investment productivity is illustrated.
Abstract:
VALOSADE is a research project of Professor Anita Lukka's VALORE research team at Lappeenranta University of Technology. VALOSADE is part of the ELO technology programme of Tekes. SMILE is one of four subprojects of VALOSADE. The SMILE study focuses on the case of a company network composed of small and micro-sized mechanical maintenance service providers and forest industry companies as large-scale customers. The basic theme of the SMILE study is communication and ebusiness in supply and demand networks. The aim of the study is to develop an ebusiness strategy, an ebusiness model and e-processes among the local SME service providers and, on the other hand, between the local service provider network and the forest industry customers in the maintenance and operations service business. A literature review, interviews and benchmarking are used as research methods in this qualitative case study. The first SMILE report, 'Ebusiness between Global Company and Its Local SME Supplier Network', created the background for the SMILE study by examining general trends of ebusiness in supply chains and networks of different industries. This second phase of the study concentrates on the case network background, such as business relationships, information systems and business objectives; the core processes in the maintenance and operations service network; development needs in communication among the network participants; and ICT solutions that respond to needs in a changing environment. In the theory part of the report, different ebusiness models and frameworks are introduced and compared to the empirical case data. From this analysis of the empirical data, recommendations for the development of the network information system are derived. In a process industry such as the forest industry, it is crucial to achieve a high level of operational efficiency and reliability, which places great demands on maintenance and operations.
Therefore, partnerships or strategic alliances are needed between the network participants. In partnerships and alliances, deep communication is important, and therefore the information systems in the network are also critical. Communication, coordination and collaboration will increase in the case network in the future, because network resources must be optimised to improve the competitive capability of the forest industry customers and the efficiency of their service providers. At present, ebusiness systems are not common in this maintenance network. A network information system shared by the forest industry customers and their local service providers is in fact the only genuine network information system in the whole network, but its utilisation has been quite insignificant. The current system does not add enough value either for the customers or for the local service providers: it acts only as an infomediary that shares static information with the network partners. Instead, the network information system should be a transaction intermediary that integrates the internal processes of the network companies; a network information system that provides common standardised processes for the local service providers; and an infomediary that shares static and dynamic information at the right time, to the right partner, at the right cost, in the right format and of the right quality. This study provides recommendations on how to develop the system further so that it adds value to the network companies. Ebusiness scenarios, vision, objectives, strategies, application architecture, the ebusiness model, core processes and a development strategy must all be considered in the next development step of the network information system. The core processes in the case network are demand/capacity management, customer/supplier relationship management, service delivery management, knowledge management and cash flow management.
Most of the benefits of ebusiness solutions come from making operational-level processes, such as service delivery management and cash flow management, electronic.
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds, and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm.
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, these allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval of Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone, perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I. air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ between the opposite directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that convert the unconventional navigation data to industry-standard formats. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys.
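The bin dimensions and velocity-analysis coverage quoted above can be checked with a short sketch. It assumes the standard common-midpoint halving rule (bin size = half the receiver or line spacing), which is conventional practice rather than something stated explicitly in the abstract:

```python
# Consistency sketch of the Survey II acquisition geometry. The midpoint
# relation (bin = spacing / 2) is the standard CMP rule, assumed here.

receiver_spacing = 2.5     # m, along each 24-channel streamer
streamer_interval = 7.5    # m, set by the two retractable booms

inline_bin = receiver_spacing / 2      # -> 1.25 m, as quoted
crossline_bin = streamer_interval / 2  # -> 3.75 m, as quoted

# Semblance analysis: every second of the 180 CMP lines, every 50th CMP,
# 600 spectra in total -> implies roughly 333 CMPs per analysed line.
lines_analysed = 180 // 2                 # 90 in-lines
cmps_per_line = 600 / lines_analysed * 50  # ~333 CMPs (back-calculated)
```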
In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and collecting the waves that are reflected at geological discontinuities at different depths and then travel back up to the surface, where they are recorded. The signals collected in this way not only give information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, their possible deformations or fractures, and thus their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles, which provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the last few decades, three-dimensional (3-D) seismics has given new impetus to the study of the subsurface. While it is now fully mastered for imaging large geological structures both on land and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, delivers final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution of the order of ten metres, the instrument developed in this work makes it possible to see details of the order of one metre. The new system is based on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control the navigation and trigger the shots of the seismic source using differential GPS (dGPS) receivers on the boat and at the end of each streamer, which makes it possible to position the instruments with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, particularly as regards positioning. After processing, the data reveal several main seismic facies corresponding in particular to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on the vertical and horizontal seismic sections. The excellent quality of the data and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, which opens the way to its application in the fields of the environment and civil engineering.
Abstract:
The aim of this research study was to establish guidelines for the outsourcing of logistics processes. The study was limited to spare parts and business-to-business (B2B) markets in the metal industry. It can be applied as a manual for outsourcing, especially of warehousing and transportation activities. The study also touches on other important areas of spare part logistics, such as manufacturing, customer service, procurement, quality control, reverse and recycling logistics, logistics technologies and value-added services. The study method consisted of three parts: firstly, exchanging views with logistics experts in the participating companies; secondly, compiling material based on the author's practical experience in the logistics business with several international and domestic logistics service providers and vendors; and thirdly, references to the literature. Because the outsourcing of logistics functions can be treated broadly and from different points of view, this study concentrates mainly on giving general-level guidelines for defining a logistics strategy, hints for both the tendering process and the implementation project, and the aftercare of the business partnership.
Abstract:
Process development will be largely driven by the main equipment suppliers. The reason for this development is their ambition to supply complete plants or process systems instead of single pieces of equipment. The pulp and paper companies' interest lies in product development, as their main goal is to create winning brands and effective brand management. Design engineering companies will find their niche in detail engineering based on approved process solutions, and their development work will focus on increasing the efficiency of engineering work. Process design is a content-producing profession that requires certain special characteristics: creativity, carefulness, the ability to work as a member of a design team according to time schedules, and fluency in oral as well as written presentation. In the future, process engineers will increasingly need knowledge of chemistry as well as information and automation technology. Process engineering tools are developing rapidly. At the moment, these tools are good enough for static sizing and balancing, but dynamic simulation tools are not yet adequate for the complicated chemical reactions of pulp and paper chemistry. Dynamic simulation and virtual mill models are used as tools for training operators. Computational fluid dynamics will certainly gain ground in process design.
Abstract:
The main research problem of this thesis is to find means of promoting the recovery of packaging waste generated in the fast food industry. This recovery is demanded by packaging waste legislation and expected by the public. The means are revealed through the general factors influencing the recovery of packaging waste, analysed by a multidisciplinary literature review and a case study focusing on the packaging waste management of McDonald's Oy, operating in Finland. The existing solid waste infrastructure does not promote the recovery of packaging waste generated in the fast food industry. The theoretical recovery rate of the packaging waste is high, 93 %, while the actual recovery rate is only 29 %, consisting of secondary packaging manufactured from cardboard. The total recovery potential of packaging waste is 64 %, corresponding to 1 230 tonnes of recoverable packaging waste. The achievable recovery potential of 33 %, equalling 647 tonnes of packaging waste, could be recovered but is not, mainly because of non-working waste management practices. The theoretical recovery potential of 31 %, equalling 583 tonnes of packaging waste, cannot be recovered by the existing solid waste infrastructure because of the obscure status of commercial waste, the improper operation of producer organisations, and municipal autonomy. A sorting experiment indicated that it is possible to reach the achievable recovery potential within the existing solid waste infrastructure; this is promoted by waste producer-oriented waste management practices. The theoretical recovery potential can be reached by increasing the consistency of the solid waste infrastructure through governmental action.
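The tonnage figures above are mutually consistent, which a short check makes explicit. The total mass of packaging waste generated is not stated directly in the abstract; it is back-calculated here from "64 % corresponds to 1 230 tonnes", so treat it as an estimate:

```python
# Consistency check of the recovery figures quoted in the abstract.

recoverable = 1230          # tonnes, total recovery potential (64 %)
achievable = 647            # tonnes, achievable with working waste practices (33 %)
infrastructure_bound = 583  # tonnes, blocked by the current infrastructure (31 %)

# The two sub-potentials should add up to the total recoverable mass.
assert achievable + infrastructure_bound == recoverable

# Implied total packaging waste generated, in tonnes (back-calculated).
total_waste = recoverable / 0.64   # ~1 922 t
```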
Abstract:
This work deals with the cooling of high-speed electric machines, such as motors and generators, through the air gap. It consists of numerical and experimental modelling of gas flow and heat transfer in an annular channel. Velocity and temperature profiles are modelled in the air gap of a high-speed test machine. Local and mean heat transfer coefficients and total friction coefficients are obtained for a smooth rotor-stator combination over a large velocity range. The aim is to solve the heat transfer both numerically and experimentally. The FINFLO software, developed at Helsinki University of Technology, has been used for the flow solution, and the commercial IGG and FieldView programs for grid generation and post-processing. The annular channel is discretized as a sector mesh. Calculations are performed at constant mass flow rate for six rotational speeds, and the effect of turbulence is computed with three turbulence models. The friction coefficient and velocity factor are obtained via the total friction power. The first part of the experimental section consists of finding suitable sensors and calibrating them in a straight pipe. After preliminary tests, an RdF sensor is glued onto the stator and rotor surfaces. Telemetry is needed to measure the heat transfer coefficients at the rotor. The mean heat transfer coefficients are measured in a test machine at four cooling air mass flow rates over a wide Couette Reynolds number range. The calculated friction and heat transfer coefficients are compared with measured and semi-empirical data. Heat is transferred from the hotter stator and rotor surfaces to the cooler air flow in the air gap, not from the rotor to the stator via the air gap, although the stator temperature is lower than the rotor temperature. The calculated friction coefficients fit well with the semi-empirical equations and preceding measurements.
At constant mass flow rate, the rotor heat transfer coefficient reaches a saturation point at higher rotational speeds, while the heat transfer coefficient of the stator grows uniformly. The magnitudes of the heat transfer coefficients are almost constant across the different turbulence models. The calibration of the sensors in a straight pipe is only an advisory step in the selection process. Telemetry is tested under the pipe conditions and compared to the same measurements with a plain sensor. Over the velocity range considered, the measured heat transfer coefficients and those from the semi-empirical equation are higher than the numerical results. Friction and heat transfer coefficients are presented over a large velocity range in the report. The goals are reached acceptably using numerical and experimental research. The next challenge is to obtain results for grooved stator-rotor combinations; the work already contains results for an air gap with a grooved stator with 36 slots. The velocity field given by the numerical method does not match the expected flow mode in every respect: the absence of secondary Taylor vortices is evident when using time-averaged numerical simulation.
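The Couette Reynolds number mentioned above characterises the air-gap flow regime. A minimal sketch of its usual definition (rotor surface speed times gap width over kinematic viscosity) follows; the machine dimensions, speed and viscosity below are hypothetical placeholders, not values from the study:

```python
import math

# Illustrative Couette Reynolds number for an annular rotor-stator gap.
# Definition assumed: Re = u_rotor * delta / nu, with u_rotor the rotor
# surface speed, delta the radial gap width, nu the kinematic viscosity.

def couette_reynolds(rpm, rotor_radius_m, gap_m, nu=15.7e-6):
    """Couette Reynolds number; nu defaults to air at roughly room temp."""
    u_rotor = 2.0 * math.pi * rpm / 60.0 * rotor_radius_m  # surface speed, m/s
    return u_rotor * gap_m / nu

# e.g. a hypothetical 30 000 rpm rotor of 30 mm radius with a 2 mm air gap
re_couette = couette_reynolds(30_000, 0.030, 0.002)   # on the order of 1e4
```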
Abstract:
This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing and the Mobile Peer-to-Peer paradigm. The technologies were investigated from the perspective of three mobile operators, a device manufacturer and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovation, and the concept of the business model. The literature review builds a propositional framework for estimating the amount of radical change in the companies' business models via two intermediate variables: the disruptiveness potential of a new technology, and its strategic importance to the firm. The data was gathered in group discussion sessions in each company. The results of the case analyses were brought together to evaluate how firms interpret potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, the firms did not see the new technologies as a threat in terms of potential competence disruption.
Abstract:
The marketplace of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In this effort, the focus has been on hard technology, while less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the attainable profit is wasted due to poor quality of planning and workmanship. This thesis examines the distribution of production errors in the production flow of sheet-metal-part-based constructions. The objective is to analyze the origins of production errors in this production flow. Employee empowerment is also investigated in theory, and its role in reducing the overall number of production errors is discussed. The study is most relevant to the sheet metal part fabricating industry, which produces sheet-metal-part-based constructions for the electronics and telecommunications industries. It concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory, the work phases most susceptible to production errors were identified. It can be assumed that most production errors arise in manually operated work phases and in mass production work phases. However, no common pattern of production error distribution in the production flow could be found in the collected data. The most important finding was that, in each case factory studied, most of the production errors belong to the category of human-activity-based errors. This result indicates that most of the problems in the production flow are related to employees or work organization. Development activities must therefore be focused on developing employee skills or the work organization. Employee empowerment provides the right tools and methods to achieve this.
Resumo:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system's performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined, and it is shown how different types of current measurement error affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model is created to analyse the effect of frequency converter non-idealities on the performance of electric drives. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems, and it can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of the speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is also applicable to speed-sensorless drives.
The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. Applying the compensation method reduces the speed and torque ripple of the test drive. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
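The compensation principle outlined above — track the amplitude of a selected speed harmonic over time and adjust a current-error estimate until that amplitude shrinks — can be sketched as a simple search. The snippet below is a hypothetical illustration only, not the thesis's actual algorithm: the function names, the single-bin DFT, and the step-search logic are all assumptions made for the example.

```python
import numpy as np

def harmonic_amplitude(speed, fs, f_target):
    """Amplitude of one harmonic of the speed signal via a single-bin DFT.
    speed: sampled speed signal, fs: sampling rate (Hz), f_target: harmonic (Hz)."""
    t = np.arange(len(speed)) / fs
    # Correlate the signal with sine and cosine at the target frequency.
    c = np.mean(speed * np.cos(2 * np.pi * f_target * t))
    s = np.mean(speed * np.sin(2 * np.pi * f_target * t))
    return 2 * np.hypot(c, s)

def compensate_offset(measure_speed, fs, f_el, step=0.01, iters=20):
    """Adjust a current-offset estimate to minimise the speed ripple at the
    electrical fundamental f_el. measure_speed(offset_est) returns a speed
    record taken with that compensation value applied (hypothetical interface)."""
    offset_est = 0.0
    best = harmonic_amplitude(measure_speed(offset_est), fs, f_el)
    for _ in range(iters):
        # Try a small step in each direction and keep whichever reduces the ripple.
        for trial in (offset_est + step, offset_est - step):
            amp = harmonic_amplitude(measure_speed(trial), fs, f_el)
            if amp < best:
                best, offset_est = amp, trial
    return offset_est
```

As a point of orientation, a phase-current offset error typically shows up as ripple at the electrical fundamental, while a gain error appears at twice that frequency, which is why a single selected harmonic can serve as the feedback quantity.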
Resumo:
Position sensitive particle detectors are needed in high energy physics research. This thesis describes the development of fabrication processes and characterization techniques for the silicon microstrip detectors used in the search for elementary particles at the European Organization for Nuclear Research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have a better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as a detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced, and the detailed structure of a double-sided ac-coupled strip detector is presented. The fabrication aspects of strip detectors are discussed, starting from the process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are displayed. The beam test setups and their results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research had found that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors, to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
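The pn-junction analysis mentioned above rests on the standard abrupt-junction relations between depletion width, reverse bias and effective doping. The sketch below is a generic textbook illustration, not taken from the thesis; the built-in voltage and doping values are illustrative assumptions.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_SI = 11.9      # relative permittivity of silicon
Q = 1.602e-19      # elementary charge, C

def depletion_width(v_bias, n_eff, v_bi=0.6):
    """Depletion width (m) of a one-sided abrupt pn junction under reverse
    bias v_bias (V), for effective doping n_eff (m^-3): W = sqrt(2*eps*(Vbi+V)/(q*N))."""
    return math.sqrt(2 * EPS0 * EPS_SI * (v_bi + v_bias) / (Q * n_eff))

def full_depletion_voltage(thickness, n_eff):
    """Reverse bias (V) needed to deplete a sensor of the given thickness (m):
    V_fd = q*N*d^2 / (2*eps)."""
    return Q * n_eff * thickness**2 / (2 * EPS0 * EPS_SI)
```

For a typical 300 µm sensor with an effective doping of about 10^12 cm^-3, these formulas give a full-depletion voltage of roughly 70 V, which is the kind of number such a spreadsheet analysis would produce.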
Resumo:
In recent years, Semantic Web (SW) research has produced significant outcomes. Various industries have adopted SW technologies, while the ‘deep web’ has yet to reach the critical transformation point at which the majority of its data will be exploited through SW value layers. In this article we analyse SW applications from a ‘market’ perspective. We set out the key requirements for real-world, SW-enabled information systems, and we discuss the major difficulties that have delayed SW uptake. This article contributes to the SW and knowledge management literature by providing a context for discourse towards best practices in SW-based information systems.
Resumo:
World mango production is spread over 100 countries that together produce over 34.3 million tons of fruit annually. Eighty percent of this production is based in the top nine producing nations, which also consume upward of 90% of their production domestically. One to two percent of the fruit is traded internationally into markets in the European Community, USA, Arabian Peninsula and Asia. This paper outlines some of the recent research and development advances in mango breeding and genomics, rootstock development, disease management and harvest technologies that are influencing the production and quality of mango fruit traded domestically and internationally.