834 results for Multicommodity capacitated network design problem
Abstract:
In this paper, the optimum design of 3R manipulators is formulated and solved by using an algebraic formulation of the workspace boundary. Manipulator design can be approached as an optimization problem in which the objective functions are the manipulator size and the workspace volume, and the constraints can be given as a prescribed workspace volume. The numerical solution of the optimization problem is investigated by using two different numerical techniques, namely sequential quadratic programming and simulated annealing. Numerical examples illustrate the design procedure and show the efficiency of the proposed algorithms.
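As a hedged illustration of the first technique, the sketch below sets up a toy constrained design problem and solves it with SciPy's SLSQP implementation of sequential quadratic programming; the four-parameter link description and the workspace-volume proxy are placeholders, not the paper's algebraic boundary formulation.

```python
# Minimal sketch of the SQP approach using SciPy's SLSQP solver.
# The design variables (a2, a3, d2, d3) and the torus-like volume proxy
# below are illustrative assumptions, not the paper's formulation.
import numpy as np
from scipy.optimize import minimize

V_PRESCRIBED = 2.0  # prescribed workspace volume (arbitrary units)

def manipulator_size(x):
    a2, a3, d2, d3 = x
    return a2 + a3 + d2 + d3           # simple size measure: sum of link parameters

def workspace_volume(x):
    a2, a3, d2, d3 = x
    # crude proxy: volume of the torus-like region swept by the wrist
    r_out = a2 + a3
    r_in = max(abs(a2 - a3), 1e-9)
    return 2 * np.pi**2 * ((r_out + r_in) / 2) * ((r_out - r_in) / 2)**2

result = minimize(
    manipulator_size,                  # objective: minimize manipulator size
    x0=[1.0, 0.8, 0.3, 0.2],
    method="SLSQP",                    # sequential quadratic programming
    bounds=[(0.1, 2.0)] * 4,
    constraints=[{"type": "ineq",      # workspace volume >= prescribed value
                  "fun": lambda x: workspace_volume(x) - V_PRESCRIBED}],
)
print(result.x, workspace_volume(result.x))
```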
Abstract:
The aim of this master’s thesis was to specify a system requiring minimal configuration and providing maximal connectivity in the vein of Skype, but for device management purposes. As peer-to-peer applications are pervasive, and especially as Skype is known to provide this functionality, the research focused on these technologies. The resulting specification was a hybrid of a tiered hierarchical network structure and a Kademlia-based DHT. A prototype was produced as a proof-of-concept for the hierarchical topology, demonstrating that the specification was feasible.
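For reference, here is a minimal sketch of the XOR metric at the heart of any Kademlia-based DHT; the SHA-1-derived node IDs and the bucket-index helper are illustrative assumptions, not details of the thesis's specification.

```python
# Minimal sketch of Kademlia's XOR distance metric, which underlies DHT
# routing: contacts are sorted into k-buckets by the position of the
# highest differing bit of their IDs. IDs here are 160-bit integers
# derived from names purely for illustration (assumes distinct IDs).
import hashlib

def node_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    return a ^ b                                  # Kademlia distance is bitwise XOR

def bucket_index(a: int, b: int) -> int:
    return xor_distance(a, b).bit_length() - 1    # index of highest differing bit

alice, bob = node_id("alice"), node_id("bob")
print(bucket_index(alice, bob))                   # which of Alice's k-buckets holds Bob
```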
Abstract:
This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows that there is a need for more comprehensive research on performance measurement in networks and on the use of measurement information in their management. This study examines the development process and uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single case study. The empirical data was collected in a Finnish collaborative network, which consists of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the level of the single network partner. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the design of the performance measurement system. The process model has been tested in another collaborative network. The study also examines the factors affecting the process of designing the measurement system. The results show that a participatory development style, the network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and of what kinds of uses of performance information can be identified in a collaborative network. The results show that the performance measurement system is an applicable tool for managing the performance of a network. The results reveal that trust and openness increased during the utilisation of the performance measurement system, and operations became more transparent. The study also presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current stage of performance management in a collaborative network and to develop it further.
Abstract:
At present, permanent magnet synchronous generators (PMSGs) are of great interest. Since they have no electrical excitation losses, highly efficient, lightweight and compact PMSGs equipped with damper windings work perfectly when connected to a network. However, in island operation, the generator (or parallel generators) alone is responsible for building up the network and maintaining its voltage and reactive power level. Thus, in island operation, a PMSG faces very tight constraints, which are difficult to meet, because the flux produced by the permanent magnets (PMs) is constant and the voltage of the generator cannot be controlled. Traditional electrically excited synchronous generators (EESGs) can easily meet these constraints, because the field winding current is controllable. The main drawback of the conventional EESG is the relatively high excitation loss. This doctoral thesis presents a study of an alternative solution termed a hybrid excitation synchronous generator (HESG). HESGs are a special class of electrical machines, where the total rotor current linkage is produced by the simultaneous action of two different excitation sources: electrical and permanent magnet (PM) excitation. An overview of existing HESGs is given. Several HESGs are introduced and compared with the conventional EESG from technical and economic points of view. In the study, the armature-reaction-compensated permanent magnet synchronous generator with alternated current linkages (ARC-PMSG with ACL) showed a better performance than the other options. Therefore, this machine type is studied in more detail. An electromagnetic design and a thermal analysis are presented. To verify the operation principle and the electromagnetic design, a down-sized prototype of 69 kVA apparent power was built. The experimental results are presented and compared with the predicted ones. A prerequisite for an ARC-PMSG with ACL is an even number of pole pairs (p = 2, 4, 6, …) in the machine. Naturally, the HESG technology is not limited to even-pole-pair machines. However, the analysis of machines with p = 3, 5, 7, … becomes more complicated, especially if analytical tools are used, and is outside the scope of this thesis. The contribution of this study is to propose a solution where an ARC-PMSG replaces an EESG in electrical power generation while meeting all the requirements set for generators, given for instance by ship classification societies, particularly as regards island operation. The maximum power level when applying the technology studied here is mainly limited by the economy of the machine: the larger the machine, the smaller the efficiency benefit. It seems, however, that machines up to ten megawatts of power could benefit from the technology, and in low-power applications, for instance in the 500 kW range, the efficiency increase can be significant.
Abstract:
Value networks have been studied extensively in academic research, but a tool for value network mapping has been missing. The objective of this study was to design a tool (process) for value network mapping in cross-sector collaboration. Furthermore, the study addressed a future perspective of collaboration, aiming to map the value network potential. The study also investigated how to realize the full potential of collaboration by creating new value in the collaboration process; these actions are part of the mapping process proposed in the study. The implementation and testing of the mapping process were realized through a case study of cross-sector collaboration in welfare services for the elderly in Eastern Finland. Key representatives in elderly care from the public, private and third sectors were interviewed, and a workshop with experts from every sector was also conducted. The value network mapping process designed in this study consists of specific steps that help managers and experts to understand how to produce a complex value network map and how to enhance it. Furthermore, it makes it easier to understand how new value can be created in the collaboration process. The map can be used to motivate participants to engage responsibly in collaboration and to be fully committed in their interactions. It can also be used as a motivator for organizations that intend to engage in a collaboration process. Additionally, a value network map is a starting point for many value network analyses. Finally, the enhanced value network map can be used as a performance measurement tool in cross-sector collaboration.
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to computation-intensive but not data-intensive applications is often infeasible in practical implementations. This thesis performs an architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in one component makes the connected fault-free components inoperative. A resource sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also guides narrowing down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
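As a hedged illustration of the first evaluation parameter, the sketch below estimates average packet latency on a 2D-mesh NoC under deterministic XY routing; the mesh size, constant per-hop delay, and uniform random traffic are illustrative assumptions, not the simulation setups used in the thesis.

```python
# Minimal sketch: average packet latency on a 2D-mesh NoC with XY routing,
# where latency grows with the Manhattan distance between source and
# destination. All parameters below are assumed for illustration.
import random

MESH = 4          # 4x4 mesh (hypothetical size)
HOP_DELAY = 3     # cycles per router/link traversal (assumed constant)

def xy_hops(src, dst):
    # XY routing: travel fully along X, then along Y
    (sx, sy), (dx, dy) = src, dst
    return abs(sx - dx) + abs(sy - dy)

random.seed(0)
nodes = [(x, y) for x in range(MESH) for y in range(MESH)]
packets = [(random.choice(nodes), random.choice(nodes)) for _ in range(10_000)]
latencies = [HOP_DELAY * (xy_hops(s, d) + 1) for s, d in packets]  # +1 for ejection
print(sum(latencies) / len(latencies), "cycles on average")
```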
Abstract:
In just-in-time, assemble-to-order production environments, the scheduling of material requirements and production tasks, even though difficult, is of paramount importance. Different enterprise resource planning solutions with master scheduling functionality have been created to ease this problem, and they work as expected unless there is a problem in the material flow. This case-based candidate’s thesis introduces a tool for the Microsoft Dynamics AX multisite environment that can be used by site managers and production coordinators to get an overview of the current open sales order base and to prioritize production in the event of material shortages in order to avoid partial deliveries.
Abstract:
The importance of after-sales service, or service in general, can be seen and experienced by customers every day with industrial as well as other non-industrial services and products. This dissertation, drawing on theory and experience, focuses on practical engineering implications, specifically the management of customer issues in the after-sales phase in the mobile phone arena. The main objective of this doctoral dissertation is to investigate customer after-sales issue management, specifically regarding mobile phones. The case studies focus on issue resolution time and on corrective actions. This dissertation consists of a main body, four peer-reviewed journal articles, and one manuscript currently under review by a peer-reviewed journal. The main body of this dissertation examines the elements of customer satisfaction, loyalty, and retention with respect to corrective actions addressing customer issues and issue resolution time, through literature and empirical studies. The five independent works are case studies supporting the thesis research questions. This study examines four questions: 1) What are the factors affecting corrective actions for customers? 2) How can customer issue resolution time be controlled? 3) What are the factors affecting processes in the service chain? and 4) How can communication be measured in a service chain? In this work, both quantitative and qualitative analysis methods are used. The main body of the thesis reviews the literature regarding the elements that bridge the five case studies. The case studies of the articles and surveys lean more toward the methodology of critical positivism and then apply the interpretive approach in interpreting the results. The case study articles employ various statistical methods to analyze and interpret the empirical and survey data. The statistical methods were used to create a model that is useful for significantly optimizing issue resolution time. Moreover, it was found that samples provided by the customer for verifying issues improve neither the perceived quality of corrective actions nor the perceived quality of issue resolution time. The term “service” in this work is limited to the technical services that are provided by product manufacturers and after-sales authorized service vendors. On the basis of this research, it has been observed that corrective actions and issue resolution time are associated with customer satisfaction and hence, according to induction theory, with customer loyalty and retention. This thesis utilizes knowledge of marketing and customer relationships to contribute to the existing body of knowledge concerning information and communication technology for the after-sales service recovery of mobile terminals. The models established in the thesis contribute to the existing knowledge of the after-sales process of dealing with customer issues in the field of mobile phones. The findings suggest that process managers could focus more on the communication and training provided to staff as new technology evolves rapidly. The study also suggests that managers formulate strategies for keeping customers informed on a regular basis of the status of issues that have been escalated for corrective action. The findings also lay the foundation for the comprehensive objective of controlling the entire product development process, starting with conceptualization. This implies that robust design should be applied to new products so that problems affecting customer service quality are not repeated.
The objective will be achieved when the entire service chain from product development to the final user can be modeled and this model can be used to support the organization at all levels.
Abstract:
Finnish design has attracted global attention lately, and companies within the industry have potential in international markets. Because networks have been found to be extremely helpful in a firm’s international business operations and their usefulness is not fully exploited, their role in Finnish design companies is investigated. Accordingly, this study concentrates on understanding the role of networks in the internationalization process of Finnish design companies. This was investigated by describing the internationalization process of Finnish design companies, analyzing what kinds of networks are related to their internationalization process, and analyzing how networks are utilized in that process. The theoretical framework explores the Finnish design industry, the internationalization process, and networks. The Finnish design industry is introduced in general, and in this research the concept of design is defined to refer to the textile, furniture, clothing, and lighting equipment industries. The theories of the internationalization process, the Uppsala model and Luostarinen’s operation modes, are explored in detail. The Born Global theory, a contrary view to stage models, is also discussed. The concept of the network is investigated, networks are classified into business and social networks, and the network approach to internationalization is discussed. The research is conducted empirically, and the research method is a descriptive case study. Four case companies are investigated: the interior decoration unit of L-Fashion Group, Globe Hope, Klo Design, and Melaja Ltd. Data is collected through semi-structured interviews, and the analysis is done in the following way: the case companies are introduced, their internationalization processes and networks are described, and finally the case companies are compared in the form of a cross-case analysis. This research showed that cooperation with social networks, such as locals or employees who have experience of the target market, can be extremely helpful at the beginning of a Finnish design company’s internationalization process. This study also indicated that public organizations do not necessarily enhance the internationalization process from a design company’s point of view. In addition, the research showed that there is cooperation between small Finnish design companies, whereas large design companies are not as open to cooperation with competitors.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes, and thereby also the parallelism, explicit. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
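To make the firing rule described above concrete, here is a minimal sketch, in Python rather than RVC-CAL, of dynamically scheduled dataflow actors that communicate only through queues; the actor names, token rates, and the naive round-robin schedule are illustrative assumptions.

```python
# Minimal sketch of dataflow execution: an actor may fire, independently of
# other actors, once each input queue holds at least its consumption rate.
from collections import deque

class Actor:
    def __init__(self, name, in_rates, out_queues, fn):
        self.name, self.in_rates, self.out_queues, self.fn = name, in_rates, out_queues, fn
        self.inputs = [deque() for _ in in_rates]

    def can_fire(self):
        # firing rule: every input queue holds enough tokens
        return all(len(q) >= r for q, r in zip(self.inputs, self.in_rates))

    def fire(self):
        # consume tokens from each input port, produce one token per output
        tokens = [[q.popleft() for _ in range(r)] for q, r in zip(self.inputs, self.in_rates)]
        for out in self.out_queues:
            out.append(self.fn(tokens))

# two-actor pipeline: 'sum' consumes 2 tokens per firing, 'double' consumes 1
double = Actor("double", [1], [deque()], lambda t: 2 * t[0][0])
summer = Actor("sum", [2], [double.inputs[0]], lambda t: sum(t[0]))
summer.inputs[0].extend([1, 2, 3, 4])

while summer.can_fire() or double.can_fire():   # naive dynamic schedule
    for a in (summer, double):
        if a.can_fire():
            a.fire()
print(list(double.out_queues[0]))               # -> [6, 14]
```

A quasi-static scheduler, in the spirit of the thesis, would replace the `while` loop's repeated `can_fire` tests with pre-computed static firing sequences, leaving only a few decisions to run-time.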
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts to convey web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying semiotic (i.e., the study of signs) concepts to web interface signs will contribute to discovering new and important perspectives on web user interface design and evaluation. The thesis mainly focuses on web interface signs and uses the theory of semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights into designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies are carried out with a total of 74 participants in Finland. The steps of a design science research process are followed while the studies were designed and conducted; these include (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data is collected using observations in a usability testing lab, by analytical (expert) inspection, with questionnaires, and in structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics are used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs, and by providing a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation – SIDE) for interface sign design and evaluation in order to make them intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics to design and evaluate interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.)
and (b) the contributions of the SIDE framework from the evaluators’ perspective.
Abstract:
Continuous loading and unloading can cause the breakdown of cranes. In seeking a solution to this problem, the use of intelligent control systems for improving the fatigue life of cranes has been studied in mechatronics since 1994. This research focuses on the use of neural networks to develop an algorithm for mapping the stresses on a crane. The intelligent algorithm was designed to be part of the crane’s system; the design process started with SolidWorks and ANSYS, continued with co-simulation using MSC Adams software incorporated in MATLAB-Simulink, and finally used a MATLAB neural network (NN) for the optimization process. The flexibility of the boom accounted for the accuracy of the maximum stress results in the ADAMS model. The flexibility created in ANSYS produced more accurate results than the flexibility model built in ADAMS/View using discrete links. The compatibility between the ADAMS and ANSYS software was paramount for the efficiency and accuracy of the results. Von Mises stress analysis was the most suitable for this thesis work because the hydraulic boom was made from construction steel FE-510 of steel grade S355 with a yield strength of 355 MPa. Von Mises theory was well suited for further analysis due to the ductility of the material and the repeated tensile and shear loading. Neural network predictions for the maximum stresses were then compared with the co-simulation results for accuracy, and the comparison showed that the results obtained from the neural network model were sufficiently accurate in predicting the maximum stresses on the boom, compared with the co-simulation.
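As a hedged illustration of the surrogate-model idea, the sketch below trains a small feed-forward network on synthetic (load case, maximum stress) pairs and uses it as a cheap stress predictor; it is a Python/scikit-learn stand-in for the MATLAB NN toolbox used in the thesis, and the data-generating toy formula is purely illustrative, not crane physics.

```python
# Minimal sketch: a neural network trained to map load cases to maximum
# von Mises stress, as a fast surrogate for expensive co-simulation runs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic stand-ins for co-simulation runs: [payload (kg), boom angle (rad)]
X = rng.uniform([500, 0.1], [3000, 1.4], size=(200, 2))
# toy "maximum stress" in MPa (illustrative formula plus noise)
y = 80 + 0.05 * X[:, 0] * np.cos(X[:, 1]) + rng.normal(0, 2, 200)

# small feed-forward network; inputs are standardized for stable training
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X[:150], y[:150])                       # train on "co-simulation" samples
err = np.abs(model.predict(X[150:]) - y[150:]).mean()
print(f"mean abs. error on held-out load cases: {err:.1f} MPa")
```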
Abstract:
In the design of electrical machines, efficiency improvements have become very important. However, there are at least two significant cases in which the compactness of electrical machines is critical and the tolerance of extremely high losses is valued: vehicle traction, where very high torque density is desired at least temporarily, and direct-drive wind turbine generators, whose mass should be acceptably low. As ever higher torque densities and ever more compact electrical machines are developed for these purposes, thermal issues, i.e. the avoidance of over-temperatures and damage in conditions of high heat losses, are becoming of utmost importance. Excessive temperatures of critical machine components, such as the insulation and permanent magnets, easily cause failures of the whole electrical equipment. In electrical machines with excitation systems based on permanent magnets, special attention must be paid to the rotor temperature because of the temperature-sensitive properties of permanent magnets. The allowable temperature of NdFeB magnets is usually significantly less than 150 °C. The practical problem is that the part of the machine where the permanent magnets are located should stay cooler than the copper windings, which can easily tolerate temperatures of 155 °C or 180 °C. Therefore, new cooling solutions should be developed in order to cool permanent magnet electrical machines with high torque density and, consequently, highly concentrated losses in the stator. In this doctoral dissertation, direct and indirect liquid cooling techniques for permanent magnet synchronous electrical machines (PMSMs) with high torque density are presented and discussed. The aim of this research is to analyse the thermal behaviour of the machines using the most applicable and accurate thermal analysis methods and to propose new, practical machine designs based on these analyses. Computational fluid dynamics (CFD) simulations of the heat transfer inside the machines and lumped parameter thermal network (LPTN) simulations, both presented herein, are used for the analyses. Detailed descriptions of the simulated thermal models are also presented. Most of the theoretical considerations and simulations have been verified via experimental measurements on a copper tooth-coil (motorette) and on various prototypes of electrical machines. The indirect liquid cooling systems of a 100 kW axial flux (AF) PMSM and a 110 kW radial flux (RF) PMSM are analysed here by means of simplified 3D CFD conjugate thermal models of parts of both machines. In terms of results, a significant temperature drop of 40 °C in the stator winding and 28 °C in the rotor of the AF PMSM was achieved with the addition of highly thermally conductive materials into the machine: copper bars inserted in the teeth, and potting material around the end windings. In the RF PMSM, the potting material resulted in a temperature decrease of 6 °C in the stator winding, and in a decrease of 10 °C in the rotor-embedded permanent magnets. Two types of unique direct liquid cooling systems for low-power machines are analysed herein to demonstrate the effectiveness of the cooling systems in conditions of highly concentrated heat losses. LPTN analysis and CFD thermal analysis (the latter being particularly useful for unique designs) were applied to simulate the temperature distribution within the machine models. Oil-immersion cooling provided good cooling capability for a 26.6 kW PMSM of a hybrid vehicle.
A direct liquid cooling system for the copper winding with inner stainless steel tubes was designed for an 8 MW direct-drive PM synchronous generator. The design principles of this cooling solution are described in detail in this thesis. The thermal analyses demonstrate that the stator winding and rotor magnet temperatures are kept significantly below their critical temperatures with a demineralized water flow. A comparison study of the coolant agents indicates that propylene glycol is more effective than ethylene glycol in arctic conditions.
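To illustrate the LPTN method mentioned above, here is a minimal sketch of a three-node lumped parameter thermal network solved for steady-state temperatures; the node layout, thermal resistances, and loss figures are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch of a lumped parameter thermal network (LPTN): machine parts
# become nodes, heat paths become thermal resistances, and the steady-state
# temperatures follow from the linear system G*T = b.
import numpy as np

T_COOLANT = 40.0                        # coolant temperature, °C (assumed)
# nodes: 0 = stator winding, 1 = stator core, 2 = rotor magnets
P = np.array([2000.0, 800.0, 150.0])    # heat losses per node, W (assumed)
R_wc, R_cf, R_rc = 0.008, 0.010, 0.060  # winding-core, core-coolant, rotor-core, K/W

# conductance matrix of the network (each path contributes 1/R)
G = np.array([
    [ 1/R_wc,                 -1/R_wc,           0.0],
    [-1/R_wc,  1/R_wc + 1/R_cf + 1/R_rc,     -1/R_rc],
    [    0.0,                 -1/R_rc,        1/R_rc],
])
b = P + np.array([0.0, T_COOLANT / R_cf, 0.0])   # coolant enters via the core path
T = np.linalg.solve(G, b)
print(dict(zip(["winding", "core", "magnets"], T.round(1))))   # °C
```

With these assumed values the winding settles around 85 °C and the magnets below 80 °C, showing how an LPTN lets a designer check magnet temperatures against their limit at negligible computational cost compared to CFD.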
Abstract:
Massive Open Online Courses (MOOCs) have been at the center of attention in recent years. However, the main problem of all online learning environments is their lack of personalization according to the learners’ knowledge, learning styles and other learning preferences. This research explores the parameters and features used for personalization in the literature and, based on them, evaluates how well current MOOC platforms have been personalized. It then proposes a design framework for the personalization of MOOC platforms that fulfils most of the personalization parameters in the literature, including learning style, as well as the personalization features. An assessment of the proposed design framework shows that the framework supports the personalization of MOOCs well.
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables the parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents’ functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
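As a hedged illustration of the kind of dynamic reconfiguration mechanism such agents could implement, the sketch below remaps tasks away from a core reported faulty onto the least-loaded healthy core; the agent structure and task names are hypothetical stand-ins for the formally refined models of the thesis, written in Python rather than VHDL.

```python
# Minimal sketch of a reconfiguration agent: on a fault report, all tasks
# of the faulty core are migrated to the least-loaded healthy core so the
# platform keeps executing, at reduced but acceptable performance.
class PlatformAgent:
    def __init__(self, num_cores):
        self.healthy = set(range(num_cores))
        self.tasks = {c: [] for c in range(num_cores)}   # core -> task list

    def assign(self, task, core):
        self.tasks[core].append(task)

    def report_fault(self, core):
        # dynamic reconfiguration: migrate every task off the faulty core
        self.healthy.discard(core)
        for task in self.tasks.pop(core, []):
            target = min(self.healthy, key=lambda c: len(self.tasks[c]))
            self.tasks[target].append(task)              # least-loaded healthy core

agent = PlatformAgent(num_cores=4)
for i, t in enumerate(["fft", "filter", "encode", "crc"]):
    agent.assign(t, i % 4)
agent.report_fault(2)                                    # transient fault on core 2
print(agent.tasks)                                       # 'encode' migrated elsewhere
```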