32 results for Grid-based clustering approach
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Productivity and profitability are important concepts and measures describing the performance and success of a firm. We know that an increase in productivity decreases the costs per unit produced and leads to better profitability. This common knowledge is not, however, enough in the modern business environment. Productivity improvement is one means among others for increasing the profitability of operations. There are many means to increase productivity. The use of these means presupposes operative decisions, and these decisions presuppose information about the effects of these means. Productivity improvement actions are in general made at floor level, with machines, cells, activities, and human beings. Profitability is most meaningful at the level of the whole firm. It has been very difficult or even impossible to analyze closely enough the economic aspects of the changes at floor level with traditional costing systems. New ideas in accounting have only recently brought in elements which make it possible to consider these phenomena where they actually happen. The aim of this study is to support the selection of objects for productivity improvement, and to develop a method to analyze the effects of a productivity change in an activity on the profitability of a firm. A framework for systemizing the economic management of productivity improvement is developed in this study. This framework is a systematic, two-stage way to analyze the effects of productivity improvement actions in an activity on the profitability of a firm. At the first stage of the framework, a simple selection method based on the worth, possibility, and necessity of the improvement actions in each activity is presented. This method is called Urgency Analysis. In the second stage it is analyzed how much a certain change of productivity in an activity affects the profitability of a firm.
A theoretical calculation model with which it is possible to analyze the effects of a productivity improvement in monetary terms is presented. On the basis of this theoretical model, a tool is made for the analysis at the firm level. The usefulness of this framework was empirically tested with data from the profit center of one medium-sized Finnish firm operating in the metal industry. It is shown that the framework provides valuable information about the economic effects of productivity improvement for supporting the management in their decision making.
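The abstract does not state how the worth, possibility, and necessity of an improvement action are combined in Urgency Analysis; the scoring scheme below is a hypothetical sketch of how such a selection stage could rank activities, not the thesis's actual method.

```python
# Hypothetical Urgency Analysis sketch: each activity is scored on the
# worth, possibility, and necessity of improving it (here on a 0-5
# scale), and activities are ranked by the product of the three scores.
# The scoring rule and the example activities are invented.

def urgency_score(worth, possibility, necessity):
    """Combine the three selection criteria into one urgency score."""
    return worth * possibility * necessity

activities = {
    "welding cell":  (4, 3, 5),
    "assembly line": (2, 4, 2),
    "paint shop":    (5, 2, 3),
}

ranked = sorted(activities, key=lambda a: urgency_score(*activities[a]),
                reverse=True)
print(ranked)   # activities in descending order of urgency
```

The highest-ranked activities would then be passed to the second stage, where the monetary effect of the productivity change is analyzed.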
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of three current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion.
Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is fairly accurate indeed. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
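The generalized mid-point family can be illustrated on a much simpler problem than the Gurson-Tvergaard return mapping. The sketch below integrates the scalar linear ODE y' = λy with the one-parameter update y_{n+1} = y_n + hλ((1-a)y_n + a·y_{n+1}): a = 0 is Euler forward, a = 1 is Euler backward, and a = 0.5 is the second-order true mid-point rule that the thesis finds most accurate. This toy model stands in for, and does not reproduce, the actual stress-update algorithm.

```python
import math

# Generalized mid-point integration of y' = lam*y.  For this linear
# model problem the implicit update can be solved in closed form:
#   y_new = y * (1 + (1-a)*h*lam) / (1 - a*h*lam)

def integrate(a, lam=-2.0, y0=1.0, T=1.0, steps=20):
    h = T / steps
    y = y0
    for _ in range(steps):
        y = y * (1.0 + (1.0 - a) * h * lam) / (1.0 - a * h * lam)
    return y

exact = math.exp(-2.0)
errors = {a: abs(integrate(a) - exact) for a in (0.0, 0.5, 1.0)}
# the true mid-point rule (a = 0.5) is markedly more accurate than
# either Euler forward (a = 0) or Euler backward (a = 1)
```

The same second-order advantage of the mid-point evaluation carries over, with much more algebra, to the plastic strain-increment update of the Gurson-Tvergaard model.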
Abstract:
Case-based reasoning (CBR) is a recent approach to problem solving and learning that has attracted a lot of attention in recent years. In this work, the CBR methodology is used to reduce the time and amount of resources spent on carrying out experiments to determine the viscosity of a new slurry. The aims of this work are: to develop a CBR system to support the decision-making process about the type of slurry behaviour, to collect a sufficient volume of qualitative data for the case base, and to calculate the viscosity of Newtonian slurries. First, a literature review of the types of fluid flow and of Newtonian and non-Newtonian slurries is presented. Some physical properties of the suspensions are also considered. The second part of the literature review provides an overview of the case-based reasoning field. Different models and stages of CBR cycles, and the benefits and disadvantages of this methodology, are considered subsequently. A brief review of CBR tools is also given. Finally, some results of the work and opportunities for modernizing the system are presented. To develop a decision support system for slurry viscosity determination, the MS Office Excel software application was used. The designed system consists of three parts: the workspace, the case base, and a section for calculating the viscosity of Newtonian slurries. The first and second sections are intended to work with Newtonian and Bingham fluids. In the last section, the apparent viscosity can be calculated for Newtonian slurries.
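The two ingredients described above can be sketched in a few lines: retrieving the most similar stored slurry case by nearest-neighbour distance, and computing the apparent viscosity of a Newtonian slurry as shear stress divided by shear rate. The case base, the features, and the distance measure below are invented for illustration; the actual system is built in Excel.

```python
# Minimal CBR-style retrieval plus Newtonian viscosity calculation.
# Features and cases are hypothetical.

def retrieve(case_base, query):
    """Return the stored case whose features are closest to the query."""
    def dist(case):
        return sum((case[k] - query[k]) ** 2 for k in query)
    return min(case_base, key=dist)

def apparent_viscosity(shear_stress, shear_rate):
    """Newtonian fluid: mu = tau / (shear rate), in Pa*s."""
    return shear_stress / shear_rate

case_base = [
    {"solids_frac": 0.10, "temp": 20.0, "behaviour": "Newtonian"},
    {"solids_frac": 0.45, "temp": 25.0, "behaviour": "Bingham"},
]
query = {"solids_frac": 0.12, "temp": 21.0}
best = retrieve(case_base, query)
print(best["behaviour"])               # -> Newtonian
print(apparent_viscosity(2.0, 100.0))  # -> 0.02
```

In a full CBR cycle the retrieved case would then be reused, revised against the experiment, and retained back into the case base.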
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth, or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method, developed in this study, is based on using meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses an Eulerian fixed grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in the regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
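The semi-Lagrangian idea underlying the method can be sketched for the simplest case, 1D linear advection u_t + c·u_x = 0 on a periodic grid: each grid value is updated by tracing the characteristic back to its departure point and interpolating there. This toy sketch omits the particle set, the adaptivity, and the monotone projection of the actual method; the grid and profile are invented.

```python
# One semi-Lagrangian step for u_t + c*u_x = 0, periodic grid,
# linear interpolation at the departure point of each characteristic.

def semi_lagrangian_step(u, c, dt, dx):
    n = len(u)
    out = []
    for i in range(n):
        x_dep = i * dx - c * dt          # departure point of node i
        j = int(x_dep // dx)             # left neighbour (may wrap)
        w = (x_dep - j * dx) / dx        # interpolation weight
        out.append((1 - w) * u[j % n] + w * u[(j + 1) % n])
    return out

n, dx, c = 8, 1.0, 1.0
u = [1.0 if i == 2 else 0.0 for i in range(n)]   # a steep front
dt = dx / c      # departure points land exactly on grid nodes here
u1 = semi_lagrangian_step(u, c, dt, dx)
# the front has moved one cell to the right without smearing
```

Note that the characteristic tracing imposes no CFL restriction on dt, one of the attractions of semi-Lagrangian schemes; with general dt the linear interpolation does smear fronts, which is what the particle representation and adaptivity of the thesis are designed to counteract.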
Abstract:
This Master's thesis considers new methods for independent component analysis (ICA), based on colligation and on cross-moments. The colligation method is based on the colligation of weights; it uses two types of probability distributions instead of one, building on a general independence criterion. The colligation approach is applied with two asymptotic expansions, the Gram-Charlier and Edgeworth expansions, which are used to estimate the probability densities in these methods. The thesis also uses a cross-moment method based on fourth-order cross-moments; this method is very similar to the FastICA algorithm. Both methods are examined on a linear mixture of two independent variables. The source signals and the mixing matrices are unknown, except for the number of signal sources. The colligation method and its modifications are compared with FastICA and JADE. A comparative analysis of performance and CPU time is also carried out for the cross-moment-based methods, FastICA, and JADE with several mixed pairs.
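The idea behind the fourth-order cross-moment method, as in FastICA, is that for a whitened two-dimensional mixture the sources differ from the observations only by a rotation, and the right rotation extremizes a fourth-order statistic. The sketch below is not the thesis's algorithm: it uses invented sources, an invented mixing angle, and a brute-force grid search over the unmixing angle that maximizes total absolute excess kurtosis, where real FastICA uses a fixed-point iteration.

```python
import math, random

random.seed(1)
N = 4000
b = 1 / math.sqrt(2)                 # Laplace scale for unit variance

def laplace():
    u = random.random() - 0.5
    return -b * math.copysign(math.log(1 - 2 * abs(u)), u)

s1 = [laplace() for _ in range(N)]                                    # super-Gaussian
s2 = [random.uniform(-math.sqrt(3), math.sqrt(3)) for _ in range(N)]  # sub-Gaussian

theta = 0.6                          # "unknown" mixing rotation
x1 = [math.cos(theta) * a + math.sin(theta) * c for a, c in zip(s1, s2)]
x2 = [-math.sin(theta) * a + math.cos(theta) * c for a, c in zip(s1, s2)]

def excess_kurtosis(y):
    m = sum(y) / len(y)
    v = sum((t - m) ** 2 for t in y) / len(y)
    return sum((t - m) ** 4 for t in y) / len(y) / v ** 2 - 3

def contrast(phi):
    c, s = math.cos(phi), math.sin(phi)
    y1 = [c * a + s * d for a, d in zip(x1, x2)]
    y2 = [-s * a + c * d for a, d in zip(x1, x2)]
    return abs(excess_kurtosis(y1)) + abs(excess_kurtosis(y2))

# grid search over unmixing rotations in [0, 90) degrees
best_phi = max((k * math.pi / 180 for k in range(90)), key=contrast)

def corr(a, c):
    ma, mc = sum(a) / len(a), sum(c) / len(c)
    num = sum((p - ma) * (q - mc) for p, q in zip(a, c))
    da = math.sqrt(sum((p - ma) ** 2 for p in a))
    dc = math.sqrt(sum((q - mc) ** 2 for q in c))
    return num / (da * dc)

y1 = [math.cos(best_phi) * a + math.sin(best_phi) * d for a, d in zip(x1, x2)]
recovery = max(abs(corr(y1, s1)), abs(corr(y1, s2)))   # close to 1
```

The recovered component matches one of the sources up to sign and permutation, the usual ICA ambiguities.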
Abstract:
The capabilities, and thus design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore's law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies. New architectures have emerged in on-chip multiprocessing in order to utilize the tremendous advances of fabrication technology. Platform-based design is a possible solution for addressing these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. The existing design methodologies pertaining to the platform-based design approach do not provide full automation at every level of the design process, and sometimes the co-design of platform-based systems leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design, in the context of the SegBus platform, a distributed communication architecture. This research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process. Both the platform and the application are modeled using the Unified Modeling Language. This thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile. Object Constraint Language constraints are used to support structurally correct platform construction.
An emulator is then introduced to allow performance estimation of the solution, as accurate as possible, at high abstraction levels. VHDL code is automatically generated, in the form of "snippets" to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves the companies with less time for development. Software testing is an expensive activity, because it requires a great deal of manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing.
We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
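In its simplest form, generating tests from a behavioral model can be sketched as follows: the system under test is modelled as a finite state machine, and one test case is derived per transition by a breadth-first search from the initial state. The toy media-player model below is invented for illustration; the UML state machines used in the thesis are considerably richer.

```python
from collections import deque

# Model-based test generation sketch: derive the shortest event
# sequence exercising every transition of a finite state machine.

model = {                      # state -> {event: next_state}
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def shortest_path(start, target_state, target_event):
    """BFS for the shortest event sequence ending with the target transition."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == target_state:
            return path + [target_event]
        for event, nxt in model[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return None

tests = [shortest_path("stopped", s, e)
         for s in model for e in model[s]]
print(len(tests))   # one test case per transition in the model
```

Each generated sequence is then executed against the implementation, and the observed behaviour is compared with the model; for performance testing, many such sequences can be run concurrently.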
Abstract:
The thesis gives a decision support framework that has a significant impact on the economic performance and viability of a hydropower company. The study addresses the short-term hydropower planning problem in the Nordic deregulated electricity market. The basics of the Nordic electricity market, trading mechanisms, hydropower system characteristics, and production planning are presented in the thesis. The related modelling theory and optimization methods are covered as well. The thesis provides a mixed integer linear programming model applied in a successive linearization method for optimal bidding and scheduling decisions in short-horizon hydropower system operation. A scenario-based deterministic approach is exploited for modelling uncertainty in market price and inflow. The thesis proposes a calibration framework to examine the physical accuracy and economic optimality of the decisions suggested by the model. A calibration example is provided with data from a real hydropower system, using a commercial modelling application with the mixed integer linear programming solver CPLEX.
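The scenario-based deterministic idea can be illustrated on a toy problem: choose an hourly discharge schedule that maximizes expected revenue over a set of price scenarios, subject to a water budget. The real model in the thesis is a mixed integer linear program solved with CPLEX; here a brute-force search over a tiny discrete schedule stands in for it, and all numbers are invented.

```python
import itertools

# Toy scenario-based scheduling: maximize expected revenue over price
# scenarios subject to a total water budget.

scenarios = [          # (probability, hourly prices in EUR/MWh)
    (0.5, [30.0, 50.0, 40.0]),
    (0.5, [35.0, 45.0, 60.0]),
]
levels = (0, 1, 2)     # admissible discharge per hour (units of water)
budget = 3             # total water available over the horizon
efficiency = 1.0       # MWh per unit of water (assumed constant)

def expected_revenue(schedule):
    return sum(p * sum(q * price * efficiency
                       for q, price in zip(schedule, prices))
               for p, prices in scenarios)

best = max((s for s in itertools.product(levels, repeat=3)
            if sum(s) <= budget), key=expected_revenue)
print(best)   # discharge concentrated in the highest expected-price hours
```

In the actual model, the head-dependent (nonlinear) production function is handled by successive linearization, and binary variables capture unit commitment, which is why an MILP solver is needed.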
Abstract:
The interconnection of loads and small-scale generation forms a new type of distribution system, the microgrid. Microgrids can be operated together with the utility grid or operated autonomously as an island. These small grids present a new paradigm for the construction of low voltage distribution systems. Microgrids in distribution systems can become small, controllable units which react immediately to the system's changes. Along with that, microgrids can realize special features such as increased reliability, reduced losses, voltage sag correction, and uninterruptible supply. The goals of the thesis are to explain the principles of a microgrid's functioning, to clarify the main ideas and positive features of microgrids, to find out and prove their advantages, and to explain why they are nowadays so popular all over the world. The practical aims of the thesis are to construct and build a test setup of a microgrid in the laboratory, based on two inverters from SMA Technologie AG, and to test all the main modes and parameters of the microgrid's operation. A further purpose of the thesis is to test the main component of the microgrid, the battery inverter, which controls all the processes and energy flows inside a microgrid and communicates with the main grid. Based on the data received, the main contribution of the thesis consists of an assessment of the established microgrid from the points of view of reliability, economy, and simplicity of operation, and an evaluation of the advisability of its use in different conditions. Moreover, the thesis aims to give recommendations and advice for future investigations of the built system.
Abstract:
The major objective of this thesis is to describe and analyse how a rail carrier is engaged in an intermodal freight transportation network through its role and position. Because the role as a conceptualisation has many parallels with the position, both phenomena are evaluated theoretically and empirically. VR Cargo (a strategic business unit of the Finnish railway company VR Ltd.) was chosen as the focal firm surrounded by the actors of the focal net. Because networks are sets of relationships rather than sets of actors, it is essential to describe the dimensions of the relationships created over time, thus having a past, present, and future. Roles are created during the long common history shared by the actors, especially when IM networks are considered. The presence of roles is embedded in the tasks, and the future is anchored to the expectations. Furthermore, in this study role refers to network dynamics, and to incremental and radical changes in the network, in a similar way as position refers to stability and to the influences of bonded structures. The main purpose of the first part of the study was to examine how the two distinctive views that have a dominant position in modern logistics, the network view (particularly the IMP-based network approach) and the managerial view (represented by Supply Chain Management), differ, especially when intermodalism is under consideration. In this study intermodalism was defined as a form of interorganisational behaviour characterized by the physical movement of unitized goods with Intermodal Transport Units, using more than one mode, as performed by the net of operators. At this particular stage the study relies mainly on theoretical evaluation, broadened by some discussions with practitioners. This is essential, because the continuous dialogue between theory and practice is highly emphasized.
Some managerial implications are discussed on the basis of the theoretical examination, and a tentative model for empirical analysis in subsequent research is suggested. The empirical investigation, which relies on interviews among the members of the focal net, shows that the major role of the focal company in the network is that of the common carrier. This role has behavioural and functional characteristics, such as an executive's disclosure expressing strategic will, attached to stable and predictable managerial and organisational behaviour. Most important is the notion that the focal company is neutral towards all the other operators, and willing to enhance and strengthen the collaboration with all the members of the IM network. This also means that all the accounts are aimed at being treated equally in terms of customer satisfaction. Besides, the adjustments intensify the adopted role. However, the focal company is also obliged to sustain its role, as it still has a government-granted right to be the sole provider of railway operations on domestic tracks. In addition, the roles of a dominator, principal, partner, subcontractor, and integrator were present, appearing either in a dyadic relationship or in a net(work) context. In order to reveal the different roles, a dualistic interpretation of the concept of role/position was employed.
Abstract:
Due to the functional requirements of structural details, brackets with and without scallops are frequently used in bridges, decks, ships, and offshore structures. Scallops are designed to serve as passageways for fluids, and to reduce weld length and plate distortions. Moreover, scallops are used to avoid the intersection of two or more welds, because of the inevitable inherent initial crack present in all but fully penetrated welds and the formation of a multi-axial stress state at the weld intersection. Welding all around the scallop corner increases the possibility of brittle fracture even when the bracket is not loaded by a primary load. Omitting the scallop will leave an initial crack in the corner if the bracket is welded with fillet welds, and if the two weld runs cross, this gives a 3D residual stress situation. Therefore, the presence or absence of a scallop calls for 3D finite element analysis (FEA) of the fatigue resistance of both types of brackets using the effective notch stress approach. FEMAP 10.1 with NX NASTRAN was used for the 3D FEA. The first and main objective of this research was to investigate and compare the fatigue resistance of brackets with and without scallops. The secondary goal was the fatigue design of scallops in case they cannot be avoided for some reason. The fatigue resistance of both types of brackets was determined with the effective notch stress approach, using a 1 mm fictitiously rounded radius based on the IIW recommendation. Identical geometrical, boundary, and loading conditions were used for the determination and comparison of the fatigue resistance of both types of brackets using linear 3D FEA. Moreover, the size effect of the bracket length was also studied using 2D shell element FEA. In the case of brackets with a scallop, the flange plate weld toe at the corner of the scallop was found to exhibit the highest effective notch stress, which made the flange plate weld toe critical for fatigue failure.
In contrast, the weld root and weld toe at the weld intersections were the most highly stressed locations for brackets without a scallop. Thus the weld toe for brackets with a scallop, and the weld root and weld toe for brackets without a scallop, were found to be the critical areas for fatigue failure. With identical parameters for both types of brackets, brackets without a scallop showed the highest stresses, except in the case of a fully penetrated weld. Furthermore, the fatigue resistance of brackets without a scallop was strongly affected by the lack-of-penetration length, and it was found that the stress decreased as the weld penetration was increased. Despite the fact that the very presence of a scallop reduces the stiffness and at the same time induces a stress concentration, based on the 3D FEA it is worth concluding that using a scallop provided better fatigue resistance when both types of brackets were fillet welded. However, brackets without a scallop had the highest fatigue resistance when a full penetration weld was used. This thesis also showed that the weld toe for brackets with a scallop was the only highly stressed area, unlike brackets without a scallop, in which both the weld toe and the weld root were the critical locations for fatigue failure when different types of boundary conditions were used. Weld throat thickness, plate thickness, scallop radius, lack-of-penetration length, boundary conditions, and weld quality affected the fatigue resistance of both types of brackets. As a result, the bracket design procedure, and especially the welding quality and post-weld treatment techniques, significantly affect the fatigue resistance of both types of brackets.
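Once the effective notch stress range at the critical location is known from the FEA, the fatigue life follows from an S-N curve of the form N = 2·10^6 · (FAT/Δσ)^m. The sketch below uses FAT 225 MPa with slope m = 3, which is the IIW effective notch stress class commonly cited for steel with the 1 mm reference radius; treat these values as an assumption and check the applicable recommendation before use.

```python
# Back-of-the-envelope fatigue life from an effective notch stress
# range using an S-N curve: N = 2e6 * (FAT / delta_sigma)^m.

def cycles_to_failure(delta_sigma, fat=225.0, m=3.0):
    """Fatigue life in cycles for an effective notch stress range [MPa]."""
    return 2.0e6 * (fat / delta_sigma) ** m

print(cycles_to_failure(225.0))   # -> 2000000.0 at the FAT reference
print(cycles_to_failure(450.0))   # -> 250000.0
```

With slope m = 3, doubling the notch stress range divides the predicted life by eight, which is why small geometric details such as the scallop corner radius matter so much.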
Abstract:
The impact of internationalisation motives on the choice of operation mode in fashion retail has previously been studied only sparsely. Researchers have mainly focused on identifying the motives for internationalisation and on analysing the suitability of different operation modes. The aim of this Master's thesis is to show how the internationalisation motives of a fashion retailer affect the choice of operation mode from the perspective of a Finnish company. The primary data of this theory-driven qualitative case study was collected through thematic interviews in a company considering internationalisation to Russia. The results show that the internationalisation motives of fashion retail significantly affect the choice of operation mode, but alongside the motives other factors also have an influence, most importantly the need for control and the market conditions of the target country. In addition, the need to adapt the assortment to the target market must be taken into account as a key factor. Besides the context-dependent causal relationship between the motives and other factors and the choice of operation mode, the results indicate, at a rough level, the relative importance of the influencing factors and their degree of influence on the choice of operation mode.
Abstract:
Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The literature on agency suggests different implications for the use of export intermediaries. However, only a few studies provide a view on import intermediaries. This thesis tries for its part to fill this research gap by studying import intermediaries in EU–Russia trade from a Russian industrial company's point of view. The aim is to describe import intermediation and explain the need for import intermediary companies in EU–Russia trade. The theoretical framework of this thesis originates from an article by Peng and York (2001), in which they study the performance of export intermediaries. This thesis applies resource-based theory, transaction cost theory, and agency cost theory, following the idea of Peng and York. The resource-based theory approach is utilised for describing an ideal import intermediary company, and transaction cost theory provides a basis for understanding the benefits of using the services of import intermediary companies, while agency cost theory is applied in order to understand the risks the Russian industrial company faces when it decides to use the services of import intermediaries. The study is performed in the form of a case interview with a representative of a major Russian metallurgy company. The results of the study suggest that an ideal intermediary has the skills required specifically for the import process, in order to save the principal company's time and money. The intermediary company helps reduce the amount of time the managers and staff of the principal company spend making imports possible, thus reducing salary costs and providing the possibility to concentrate on the company's core competencies. The benefits of using the services of import intermediary companies are the reduced transaction costs, especially salary costs, which are minimised because of the effectiveness and specialisation of import intermediaries.
Intermediaries are specialised in the import process and thus need less time and fewer resources to organise imports. They also help to reduce fixed salary costs, because their services can be used only when needed. The risk of being misled by intermediaries is minimised by the competition in the import intermediary market: if an intermediary attempts fraud, it is replaced by a rival.
Abstract:
This research is a continuation of, and joint work with, a Master's thesis recently completed in this department by Hemamali Chathurangani Yashika Jayathunga. The mathematical system of equations in the designed Heat Exchanger Network synthesis has been extended by adding a number of pieces of equipment, such as heat exchangers, mixers, and dividers. The solution of the system is obtained, and the optimal setting of the valves (each divider contains a valve) is calculated by introducing grid-based optimization. Finding the best positions of the valves leads to maximization of the transferred heat in the hot stream and minimization of the pressure drop in the cold stream. The aim of the thesis is achieved by applying cost optimization to model an optimized network.
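The grid-based optimization mentioned above can be sketched as follows: each divider valve opening is discretised on a grid, all combinations are evaluated, and the setting that best trades transferred heat against pressure drop is kept. The two objective functions below are invented stand-ins for the actual network model.

```python
import itertools

# Grid search over valve openings, trading heat transfer against
# pressure drop.  Both response functions are hypothetical.

def transferred_heat(v1, v2):          # hypothetical network response
    return 10 * v1 + 6 * v2 - 4 * v1 * v2

def pressure_drop(v1, v2):             # hypothetical penalty term
    return 3 * v1 ** 2 + 2 * v2 ** 2

grid = [i / 10 for i in range(11)]     # valve openings 0.0 .. 1.0
best = max(itertools.product(grid, repeat=2),
           key=lambda v: transferred_heat(*v) - pressure_drop(*v))
print(best)   # the valve setting with the best heat/pressure trade-off
```

The cost of a full grid search grows exponentially with the number of valves, so for larger networks a coarser grid followed by local refinement, or a gradient-based method, would typically replace the exhaustive enumeration.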