157 results for Surveying Methods
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that provides a reliable haptic sense in the human-manipulator interaction and ideal position control in the manipulator-task environment interaction. The proposed method has the characteristics of a universal technique independent of the actual control algorithm, and it can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has a haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems; the new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
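To make the directed random search concrete, the sketch below pairs a standard PSO velocity update with an E. coli-style chemotactic swim/tumble step and scores every candidate parameter set on a simulator before anything would reach the real process. It is a minimal illustration under stated assumptions, not the thesis implementation: the `simulate` cost function, the hyperparameters and the search bounds are all hypothetical.

```python
import numpy as np

def pso_bacterial_search(simulate, dim, n_particles=20, iters=50,
                         w=0.7, c1=1.5, c2=1.5, tumble_step=0.1):
    """Directed random search: PSO augmented with an E. coli-style
    chemotactic swim/tumble. Every candidate controller parameter set
    is scored on the real-time simulator (simulate) before any
    parameters would be passed on to the real process."""
    rng = np.random.default_rng()
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate parameters
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([simulate(p) for p in x])       # personal best costs
    g = pbest[pcost.argmin()].copy()                 # global best

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        for i in range(n_particles):
            cost = simulate(x[i])
            # Chemotaxis: if the move improved the cost, keep "swimming"
            # in the same direction; otherwise "tumble" to a random one.
            if cost < pcost[i]:
                x[i] += tumble_step * v[i] / (np.linalg.norm(v[i]) + 1e-12)
            else:
                x[i] += tumble_step * rng.standard_normal(dim)
            cost = simulate(x[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = x[i].copy(), cost
        g = pbest[pcost.argmin()].copy()
    return g  # best parameters, validated on the simulator only
```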
Abstract:
Fluid handling systems such as pump and fan systems are found to have significant potential for energy efficiency improvements. To realize this energy-saving potential, easily implementable methods are needed to monitor the system output, because this information is needed both to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable-speed drives have proven able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate across the whole operating range of the fluid handling device. To apply model-based system monitoring to a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
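As an illustration of the kind of known estimation method the paper combines, the sketch below implements the widely used QP-curve approach: the drive's own estimates of rotational speed and shaft power are referred to the rated speed with the pump affinity laws, and the flow rate is read from the rated-speed characteristic curve. The curve values and the interface are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Characteristic curve from a pump datasheet at rated speed n0 (rpm):
# flow rate Q (m^3/h) versus shaft power P (kW). Values are made up.
n0 = 1450.0
Q_curve = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
P_curve = np.array([2.0, 2.8, 3.5, 4.1, 4.6])

def estimate_flow_qp(n, P_shaft):
    """QP-curve method: refer the drive's speed and shaft power
    estimates to the rated speed with the affinity laws (P ~ n^3,
    Q ~ n), read the flow from the rated-speed curve, and scale back."""
    P_ref = P_shaft * (n0 / n) ** 3            # power referred to n0
    Q_ref = np.interp(P_ref, P_curve, Q_curve) # flow at rated speed
    return Q_ref * (n / n0)                    # flow at actual speed n

print(estimate_flow_qp(n=1200.0, P_shaft=2.6))  # estimated flow, m^3/h
```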
Abstract:
Laser beam welding (LBW) is applicable to a wide range of industrial sectors and has a fifty-year history. It is nevertheless still considered an unusual method, with applications typically limited to the welding of thin sheet metal. With a new generation of high-power lasers there has been renewed interest in thick-section LBW (also known as keyhole laser welding). A growing body of publications during 2001-2011 indicates increasing interest in laser welding for many industrial applications, and in the last ten years an increasing number of studies have examined ways to increase the efficiency of the process. Expanding the thickness range and efficiency of LBW makes the process an option for industrial applications dealing with thick-metal welding: shipbuilding, offshore structures, pipelines, power plants and other industries. The advantages provided by LBW, such as high process speed, high productivity and low heat input, may revolutionize these industries and significantly reduce process costs. The research to date has focused either on increasing the efficiency by optimizing process parameters or on the process fundamentals, rather than on process and workpiece modifications. The argument of this thesis is that the efficiency of the laser beam process can be increased in a straightforward way under workshop conditions. Throughout this dissertation, the term “efficiency” refers to welding process efficiency; specifically, an increase in efficiency means an increase in weld penetration depth without increasing the laser power level or decreasing the welding speed. The methods investigated are: modifications of the workpiece (edge surface roughness and the air gap between the joining plates); modification of the ambient conditions (local reduction of the pressure in the welding zone); and modification of the welding process (preheating of the welding zone). Approaches to improving the efficiency are analyzed and compared both separately and in combination. These experimentally proven methods confirm previous findings and contribute additional evidence that expands the opportunities for laser beam welding applications. The focus of this research was primarily on the effects of edge surface roughness preparation and a pre-set air gap between the plates on weld quality and penetration depth. To date, there has been no reliable evidence that such modifications of the workpiece have a positive effect on welding efficiency. The other methods were tested in combination with the two methods mentioned above; the most promising, combination with the reduced-pressure method, resulted in at least a 100% increase in efficiency. The results of this thesis support the idea that joining these methods in one modified process will provide modern engineering with an effective tool for many novel applications, with potential benefits to a range of industries.
Abstract:
Energy efficiency is an important topic in the electric motor drive market. Although more efficient electric motor types are available, the induction motor remains the most common industrial motor type. IEC methods for determining the losses and efficiency of converter-fed induction motors were introduced recently with the release of technical specification IEC/TS 60034-2-3. The main interests of this study are determining the induction motor losses with IEC/TS 60034-2-3 method 2-3-A and assessing the practical applicability of the method. Method 2-3-A introduces a specific test converter waveform to be used in the measurements. Differences between the induction motor losses with a test converter supply and with a DTC converter supply are investigated. In the IEC methods, the tests are run at the motor's rated fundamental voltage, which, in practice, requires the frequency converter to be fed with a raised input voltage. In this study, the tests are run on both frequency converters with an artificially raised converter input voltage, resulting in the rated motor fundamental input voltage required by IEC. For comparison, the tests are run with both converters on a normal grid input voltage supply, which results in a lower motor fundamental voltage and a reduced flux level but should be more relevant from a practical point of view. According to IEC method 2-3-A, the tests are run at rated motor load, and to ensure comparability of the results, the rated load is also used in the grid-fed converter measurements, although the motor is overloaded while producing the rated torque at the reduced flux level. The IEC 2-3-A method also requires sinusoidal-supply test results obtained with IEC method 2-1-1B. Therefore, the induction motor losses with the recently updated IEC 60034-2-1 method 2-1-1B are determined at the motor's rated voltage, but also at two lower motor voltages corresponding to the output fundamental voltages of the two grid-supplied converters. Method 2-3-A was found to be complex to apply, but the results were stable. According to the results, method 2-3-A and the test converter supply are usable for comparing the losses and efficiency of different induction motors at the operating point of rated voltage, rated frequency and rated load, but the measurements do not give any prediction of the motor losses in the final application. One might therefore strongly criticize the method's main principles. It seems that the release of IEC 60034-2-3 as a technical specification instead of a final standard was, for now, justified, since the practical relevance of the main method is questionable.
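The core comparison underlying such measurements can be sketched as follows: total losses are the difference between electrical input power and mechanical shaft power, and the additional losses caused by a converter waveform appear as the difference between converter-supply and sinusoidal-supply losses. This is a generic illustration of the principle, not the study's procedure; the function and all numeric values are made-up assumptions, not measurement results.

```python
import math

def total_loss(P_in_el, T_shaft, n_rpm):
    """Total loss by the input-output principle: electrical input power
    minus mechanical shaft power (torque times angular speed), in W."""
    P_out = T_shaft * 2.0 * math.pi * n_rpm / 60.0
    return P_in_el - P_out

# Illustrative rated-load measurements (values invented for the example):
P_loss_conv = total_loss(P_in_el=11_450.0, T_shaft=65.0, n_rpm=1470.0)  # converter supply
P_loss_sin  = total_loss(P_in_el=11_200.0, T_shaft=65.0, n_rpm=1475.0)  # sinusoidal supply

# Additional losses attributable to the converter waveform:
P_additional = P_loss_conv - P_loss_sin
print(P_loss_conv, P_loss_sin, P_additional)
```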
Abstract:
Phenomena in the cyber domain, especially threats to security and privacy, have proven to be an increasingly heated topic addressed by different writers and scholars at an increasing pace, both nationally and internationally. However, little public research has been done on the subject of cyber intelligence. The main research question of the thesis was: To what extent is the applicability of cyber intelligence acquisition methods circumstantial? The study was conducted in a sequential manner, starting with defining the concept of intelligence in the cyber domain and identifying its key attributes, followed by identifying the range of intelligence methods in the cyber domain, the criteria influencing their applicability, and the types of operatives utilizing cyber intelligence. The methods and criteria were refined into a hierarchical model. The existing conceptions of cyber intelligence were mapped through an extensive literature study of a wide variety of sources. The established understanding was further developed through 15 semi-structured interviews with experts of different backgrounds, whose wide range of points of view proved to substantially enhance the perspective on the subject. Four of the interviewed experts participated in a relatively extensive survey based on the constructed hierarchical model of cyber intelligence, which was formulated into an Analytic Hierarchy Process (AHP) hierarchy and executed in the Expert Choice Comparion online application. It was concluded that intelligence in the cyber domain is an endorsing, cross-cutting intelligence discipline that adds value to all aspects of conventional intelligence, that it bears a substantial number of characteristic traits, both advantageous and disadvantageous, and that the applicability of cyber intelligence methods is partly circumstantially limited.
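For readers unfamiliar with AHP, the sketch below shows the basic computation behind such a hierarchy: a priority vector is derived from a pairwise-comparison matrix, here by the row geometric mean, a common approximation of the principal eigenvector method. The criteria and judgment values are invented for illustration and are not the survey's results.

```python
import numpy as np

def ahp_priorities(M):
    """Priority vector of an AHP pairwise-comparison matrix via the
    row geometric mean, normalized to sum to one."""
    M = np.asarray(M, dtype=float)
    gm = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return gm / gm.sum()

# Illustrative 3x3 comparison of criteria influencing the applicability
# of a cyber intelligence method (Saaty's 1-9 scale, values made up):
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
print(ahp_priorities(M))  # e.g. ~[0.65, 0.23, 0.12]
```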
Abstract:
Consumer neuroscience (neuromarketing) is an emerging field of marketing research which uses brain imaging techniques to study the neural conditions and processes that underlie consumption. The purpose of this study was to map this fairly new and growing field in Finland by studying the opinions of both Finnish consumers and marketing professionals towards it, comparing these opinions to the current consumer neuroscience literature, and on that basis evaluating the usability of brain imaging techniques as a marketing research method. A mixed-methods research design was chosen for this study. Quantitative data was collected from 232 consumers and 28 marketing professionals by means of online surveys. Both respondent groups had either neutral opinions or lacked knowledge about the four themes chosen for this study: the benefits, limitations and challenges, ethical issues, and future prospects of consumer neuroscience. Qualitative interview data was collected from two individuals from Finnish neuromarketing companies to deepen the insights gained from the quantitative research. The four interview themes were the same as in the surveys, and the interviewees' answers were mostly in line with the current literature, although more optimistic about the future of the field. The interviews also exposed a gap between academic consumer neuroscience research and practical-level applications. The results of this study suggest that there are still many unresolved challenges and that the relevant populations either have neutral opinions or lack information about consumer neuroscience. The practical-level applications are, however, already being used successfully, and this new field of marketing research is growing both globally and in Finland.
Abstract:
An active magnetic bearing is a type of bearing that uses a magnetic field to levitate the rotor. These bearings require continuous control of the electromagnet currents, together with feedback consisting of the rotor position and the measured electromagnet currents. Because of this, different identification methods can be implemented with no additional hardware. The focus of this thesis was to implement and test identification methods for an active magnetic bearing system and to update the rotor model. Magnetic center calibration locates the magnetic center of the rotor; rotor model identification identifies a model of the rotor; and rotor model update refines the rotor model based on identification data. These methods were implemented and tested on a real machine in which the rotor was levitated with active magnetic bearings, and the functionality of the methods was verified. The methods were developed with further extension in mind and so that they can easily be applied to different machines.
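As a hedged illustration of what magnetic center calibration can involve, the sketch below finds, on one axis, the rotor position at which the opposing electromagnet currents balance during a slow position sweep. This is one plausible realization of the idea, not necessarily the algorithm implemented in the thesis; the function, names and values are hypothetical.

```python
import numpy as np

def magnetic_center(positions, i_top, i_bottom):
    """Locate a one-axis magnetic center as the rotor position where
    the opposing electromagnet currents balance: sweep the position
    reference across the air gap, record steady-state currents, and
    interpolate the zero crossing of their difference."""
    diff = np.asarray(i_top, float) - np.asarray(i_bottom, float)
    order = np.argsort(diff)            # np.interp needs ascending x
    return float(np.interp(0.0, diff[order], np.asarray(positions)[order]))

# Illustrative sweep data (made-up values):
pos  = [-0.2, -0.1, 0.0, 0.1, 0.2]      # mm, relative to geometric center
itop = [1.9, 1.6, 1.3, 1.0, 0.7]        # A
ibot = [0.7, 1.0, 1.3, 1.6, 1.9]        # A
print(magnetic_center(pos, itop, ibot)) # -> 0.0 in this symmetric example
```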
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering to study and process biological data. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second examines cell lineage specification in mouse embryonic stem cells.
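As a minimal example of the kind of analysis step such tools automate for gene expression data, the sketch below runs a per-gene Welch's t-test between two sample groups with a Bonferroni correction for multiple testing. It is a generic illustration, not one of the thesis's actual tools; the data are synthetic.

```python
import numpy as np
from scipy import stats

def differential_expression(expr, groups, alpha=0.05):
    """Per-gene Welch's t-test between two groups of samples, with
    Bonferroni correction; returns a boolean mask of significant genes."""
    g1 = expr[:, groups == 0]
    g2 = expr[:, groups == 1]
    t, p = stats.ttest_ind(g1, g2, axis=1, equal_var=False)
    return p < alpha / expr.shape[0]

rng = np.random.default_rng(0)
expr = rng.normal(8.0, 1.0, (1000, 20))  # 1000 genes x 20 samples (log2 scale)
expr[:50, 10:] += 3.0                    # spike in 50 truly changed genes
groups = np.array([0] * 10 + [1] * 10)
print(differential_expression(expr, groups).sum())  # most of the 50 recovered
```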
Abstract:
The strongest wish of the customer concerning chemical pulp features is consistent, uniform quality. Variation may be controlled and reduced by using statistical methods. However, studies addressing the application and benefits of statistical methods in the forest product sector are scarce. This customer wish is thus the root cause of the motivation behind this dissertation. The research problem addressed by this dissertation is that companies in the chemical forest product sector require new knowledge for improving their utilization of statistical methods. To gain this new knowledge, the research problem is studied from five complementary viewpoints: challenges and success factors, organizational learning, problem solving, economic benefit, and statistical methods as management tools. The five research questions generated on the basis of these viewpoints are answered in four research papers, which are case studies based on empirical data collection. This research as a whole complements the literature dealing with the use of statistical methods in the forest products industry. Practical examples of the application of statistical process control, case-based reasoning, the cross-industry standard process for data mining, and performance measurement methods in the context of chemical forest products manufacturing are brought to the attention of the scientific community, and the benefit of applying these methods is estimated or demonstrated. The purpose of this dissertation is to find pragmatic ideas that help companies in the chemical forest product sector improve their utilization of statistical methods. The main practical implications of this doctoral dissertation can be summarized in four points:
1. It is beneficial to reduce variation in chemical forest product manufacturing processes.
2. Statistical tools can be used to reduce this variation (see the sketch after this list).
3. Problem solving in chemical forest product manufacturing processes can be intensified through the use of statistical methods.
4. There are certain success factors and challenges that need to be addressed when implementing statistical methods.
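As a concrete example of point 2, the sketch below computes Shewhart individuals-chart control limits, a basic statistical process control tool for detecting abnormal variation. The measurement values are made up for illustration and are not the dissertation's data.

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals chart: center line at the mean, control
    limits at +/- 3 sigma, with sigma estimated from the average moving
    range divided by 1.128 (the d2 constant for subgroups of 2)."""
    x = np.asarray(x, float)
    center = x.mean()
    sigma = np.abs(np.diff(x)).mean() / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

# Illustrative pulp-quality measurements (e.g. brightness, made up):
data = [87.1, 86.8, 87.3, 87.0, 86.9, 87.4, 87.2, 86.7, 87.1, 87.0]
lcl, cl, ucl = individuals_chart_limits(data)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
out_of_control = [v for v in data if not lcl <= v <= ucl]  # points to investigate
```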
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified.

The main research question of this study was: What is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. the easiness of applying the method correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method

In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist, and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof? What kinds of uncertainty and other limitations are included in these methods?

Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process (AHP) was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain firsthand experience of the applicability of different lie detection and veracity assessment methods.

The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies concentrate on a scenario where roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
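To show how the multi-criteria ranking combines the pieces, the sketch below aggregates per-criterion scores with AHP-derived criterion weights into an overall applicability score per method. The criterion names follow the study's five criteria, but all weights and scores are invented for illustration and do not reproduce the study's results.

```python
import numpy as np

# Criterion weights (e.g. from AHP pairwise comparisons) for the five
# criteria: Accuracy, Ease of Use, Time Required, No Need for Special
# Equipment, Unobtrusiveness. Values are illustrative only.
weights = np.array([0.35, 0.25, 0.15, 0.15, 0.10])

# Scores of three hypothetical methods against each criterion (0..1):
methods = {
    "features_of_discourse": np.array([0.6, 0.8, 0.8, 1.0, 0.9]),
    "cbca":                  np.array([0.7, 0.4, 0.3, 1.0, 0.6]),
    "polygraph":             np.array([0.8, 0.3, 0.2, 0.0, 0.1]),
}

# Overall applicability = weighted sum of criterion scores.
for name, scores in methods.items():
    print(name, round(float(weights @ scores), 3))
```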
Abstract:
The purpose of this work was to describe and compare sourcing practices and challenges in different geographies, to discuss possible options for advancing the sustainability of global sourcing, and to provide examples that answer why sourcing driven by sustainability principles is so challenging to implement. The focus was on a comparison between Europe, Asia and South America from the perspective of sustainability adoption. By analyzing the sourcing practices of the case company it was possible to describe the main differences and challenges of each continent, the available sourcing options, supplier relationships and ways to foster positive change. In this qualitative case study, the gathered theoretical material was compared with the extensive sourcing practices of the case company in a vast supplier network. Sourcing specialists were interviewed and the information they provided was analyzed in order to see how different research results and theories reflect reality and to find answers to the proposed research questions.
Abstract:
Software performance is a holistic matter that is affected by every phase of the software life cycle. Performance problems often lead to project delays, cost overruns and, in some cases, to the complete failure of the project. Software performance engineering (SPE) is a software-oriented approach that offers techniques for developing software with good performance. This master's thesis examines these techniques and selects from among them those that are suited to solving performance problems in the development of two IT device management products. The end result of the work is an updated version of the current product development process that takes into account application performance challenges in the different phases of the product life cycle.