24 results for "Higly Efficient"


Relevance: 20.00%

Abstract:

Pumping processes requiring a wide range of flow rates are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable-speed control allows the required process output to be delivered with a varying number of operating pump units and selected rotational speed references. However, optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system under varying operating conditions. In typical parallel pumping applications, such as wastewater treatment and various cooling and water delivery tasks, the information available for system modelling can be limited, and the lack of real-time operation point monitoring often prevents accurate energy-efficiency optimization. Hence, easily implementable control strategies that can be adopted with minimal system data are needed. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in systems where each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the operating conditions suitable for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining for each parallel pump unit a speed reference that promotes energy-efficient operation of the pumping system.
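To illustrate the idea of characteristic-curve-based operation point estimation, the sketch below scales a pump's nominal QP (flow vs. shaft power) and QH (flow vs. head) curves to the current rotational speed with the affinity laws and solves for the flow rate from the shaft power estimate that a frequency converter can provide. The nominal speed and curve coefficients are invented for the example; this is a minimal sketch of the general technique, not the estimation scheme implemented in the thesis.

```python
# Minimal sketch of QP-curve-based pump operation point estimation.
# All numeric values below are illustrative assumptions, not thesis data.
import numpy as np

N_NOM = 1450.0                    # assumed nominal speed (rpm)
QP = (2.0, 0.35, -0.002)          # assumed fit P(Q) = 2 + 0.35*Q - 0.002*Q^2 (kW vs. l/s)
QH = (32.0, -0.01, -0.003)        # assumed fit H(Q) = 32 - 0.01*Q - 0.003*Q^2 (m vs. l/s)

def estimate_operating_point(p_kw: float, n_rpm: float):
    """Estimate (flow, head) from measured shaft power and rotational speed.

    The measured power is referred to nominal speed with the affinity law
    P ~ n^3, the nominal QP curve is solved for the flow, and the flow and
    head are scaled back with Q ~ n and H ~ n^2.
    """
    k = n_rpm / N_NOM
    p0 = p_kw / k**3                              # power referred to nominal speed
    c0, c1, c2 = QP
    roots = np.roots([c2, c1, c0 - p0])           # solve P(Q0) = p0
    candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0]
    if not candidates:
        raise ValueError("measured power is outside the scaled QP curve")
    q0 = min(candidates)                          # root on the rising part of the QP curve
    a0, a1, a2 = QH
    q = k * q0                                    # flow at the actual speed
    h = k**2 * (a0 + a1 * q0 + a2 * q0**2)        # head from the affinity-scaled QH curve
    return q, h
```

For these illustrative curves, estimate_operating_point(8.0, 1305.0) returns roughly 28.1 l/s and 23.3 m, without any flow meter in the loop.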

Relevance: 20.00%

Abstract:

In recent years, chief information officers (CIOs) around the world have identified Business Intelligence (BI) as their top priority and as the best way to enhance their enterprises' competitiveness. Yet many enterprises are struggling to realize the business value that BI promises. This discrepancy raises important questions, for example: what are the critical success factors of Business Intelligence and, more importantly, how can it be ensured that a Business Intelligence program enhances an enterprise's competitiveness? The main objective of the study is to find out how it can be ensured that a BI program meets its goals in providing competitive advantage to an enterprise. The objective is approached with a literature review and a qualitative case study. For the literature review, the main objective is broken down into three research questions (RQs). RQ1: What is Business Intelligence and why is it important for modern enterprises? RQ2: What are the critical success factors of Business Intelligence programs? RQ3: How can it be ensured that the CSFs are met? The qualitative case study covers the BI program of a Finnish global manufacturing company. The research questions for the case study are as follows. RQ4: What is the current state of the case company's BI program and what are the key areas for improvement? RQ5: In what ways could the case company's Business Intelligence program be improved? The case company's BI program is researched using the following methods: action research, semi-structured interviews, maturity assessment, and benchmarking. The literature review shows that Business Intelligence is a technology-based information process that contains a series of systematic activities driven by the specific information needs of decision-makers. The objective of BI is to provide accurate, timely, fact-based information that enables taking actions leading to competitive advantage. There are many reasons for the importance of Business Intelligence, two of the most important being: 1) it helps to bridge the gap between an enterprise's current and desired performance, and 2) it helps an enterprise stay aligned with its key performance indicators and thus with its key objectives. The literature review also shows that there are known critical success factors (CSFs) for Business Intelligence programs which must be met if the above-mentioned value is to be achieved, for example: committed management support and sponsorship, a business-driven development approach, and sustainable data quality. The literature review further shows that the most common challenges are related to these CSFs and, more importantly, that overcoming them requires a more comprehensive form of BI called Enterprise Performance Management (EPM). EPM links measurement to strategy by focusing on what is measured and why. The case study shows that many of the challenges faced in the case company's BI program are related to the above-mentioned CSFs. The main challenges are: lack of support and sponsorship from the business, lack of visibility into overall business performance, lack of a rigorous BI development process, lack of a clear purpose for the BI program, and poor data quality. To overcome these challenges, the case company should define and design an enterprise metrics framework, ensure that BI development requirements are gathered and prioritized by the business, focus on data quality and ownership, and finally define clear goals for the BI program and then support and sponsor these goals.

Relevance: 20.00%

Abstract:

One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to one based on software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in increased operational cost, while under-provisioning leads to subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
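The reactive side of such a provisioning scheme can be summarized in a few lines. The sketch below is a simplified illustration of the general idea, not the ARVUE or ACVAS algorithms themselves: it scales the VM fleet on CPU-utilization thresholds and rejects sessions that would overload a VM. The thresholds, limits, and data model are assumptions made for the example.

```python
# Minimal sketch: reactive threshold-based VM provisioning plus a
# session-based admission control check. All constants are illustrative.
from dataclasses import dataclass, field

SCALE_UP_UTIL = 0.80      # assumed upper average-utilization threshold
SCALE_DOWN_UTIL = 0.30    # assumed lower average-utilization threshold
MAX_SESSIONS_PER_VM = 200 # assumed per-VM session capacity

@dataclass
class Vm:
    cpu_util: float = 0.0
    sessions: int = 0

@dataclass
class Provisioner:
    vms: list = field(default_factory=lambda: [Vm()])

    def reconcile(self) -> None:
        """Reactive scaling: add a VM when the fleet runs hot (to avoid
        subpar service), remove the least-used VM when it runs cold
        (to avoid paying for over-provisioned capacity)."""
        avg = sum(vm.cpu_util for vm in self.vms) / len(self.vms)
        if avg > SCALE_UP_UTIL:
            self.vms.append(Vm())
        elif avg < SCALE_DOWN_UTIL and len(self.vms) > 1:
            idle = min(self.vms, key=lambda vm: vm.sessions)
            self.vms.remove(idle)

    def admit(self, vm: Vm) -> bool:
        """Admission control: refuse a new session on a VM that is already
        saturated instead of letting the VM become overloaded."""
        if vm.sessions >= MAX_SESSIONS_PER_VM or vm.cpu_util > SCALE_UP_UTIL:
            return False
        vm.sessions += 1
        return True
```

A proactive variant would replace the threshold test in reconcile() with a load prediction, which is the direction the hybrid and prediction-based approaches above take.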

Relevance: 20.00%

Abstract:

The driving forces in current flame retardant research are increased fire safety combined with flame retardant formulations that fulfill the criteria of sustainable production and products. In recent years, important questions have been raised about the environmental safety of antimony and, in particular, of brominated flame retardants. As a consequence, this doctoral thesis describes efforts to develop new halogen-free flame retardants based on various radical generators and phosphorus compounds. The investigation first focused on compounds capable of generating alkyl radicals, in order to study their role in the flame retardancy of polypropylene. The family of azoalkanes was selected as the cleanest and most convenient source of free alkyl radicals, and a number of symmetrical and unsymmetrical azoalkanes of the general formula R-N=N-R' were prepared. The experimental results show that in the series of different-sized azocycloalkanes, the flame retardant efficacy decreased in the following order: R = R' = cyclohexyl > cyclopentyl > cyclobutyl > cyclooctanyl > cyclododecanyl. In the series of aliphatic azoalkanes, the efficacy decreased as follows: R = R' = n-alkyl > tert-butyl > tert-octyl. The most striking difference in flame retardant efficacy was observed in thick polypropylene plaques of 1 mm: for example, azocyclohexane (AZO) performed much better in thick PP sections than the commercial reference FR (Flamestab® NOR116). In addition, some of the prepared azoalkane flame retardants, e.g. 4,4'-bis(cyclohexylazocyclohexyl)methane (BISAZO), exhibited non-burning dripping behaviour. Extrusion coating experiments with flame-retarded low-density polyethylene (LDPE) on a standard machine-finished Kraft paper were carried out to investigate the potential of azoalkanes in multilayer facings. The results show that azocyclohexane (AZO) and 4,4'-bis(cyclohexylazocyclohexyl)methane (BISAZO) can significantly improve the flame retardant properties of LDPE-coated paper already at 0.5 wt.% loadings, provided that the maximum extrusion temperature of 260 °C is not exceeded and the coating weight is kept low, at 13 g/m². In addition, various triazene-based flame retardants (R-N1=N2-N3R'R'') were prepared. For example, polypropylene samples containing a very low concentration of only 0.5 wt.% of 4,4'-bis(3,3-dimethyltriazene)diphenyl ether and other triazenes passed the DIN 4102-1 test with B2 classification. Notably, no burning dripping could be detected, and the average burning times were very short, with exceptionally low weight losses. Triazene compounds therefore constitute a new and interesting family of radical generators for the flame retarding of polymeric materials. The high flame retardant potential of triazenes can be attributed to their ability to generate various types of radicals during thermal decomposition. According to thermogravimetric analysis/Fourier transform infrared spectroscopy/mass spectrometry, triazene units are homolytically cleaved into various aminyl and resonance-stabilized aryl radicals and different CH fragments, with simultaneous evolution of elemental nitrogen. Furthermore, the potential of thirteen aliphatic, aromatic, thiuram, and heterocyclic substituted organic disulfide derivatives of the general formula R-S-S-R' as a new group of halogen-free flame retardants for polypropylene films was investigated. In the DIN 4102-1 ignitibility test it was demonstrated for the first time that many of the disulfides alone can provide effective flame retardancy and self-extinguishing properties to polypropylene films already at concentrations as low as 0.5 wt.%. Within the disulfide family, the highest FR activity was recorded for 5,5'-dithiobis(2-nitrobenzoic acid): very low values for burning length (53 mm) and burning time (10 s) reflect its significantly increased fire retardant performance compared both to the other compounds in this series and to Flamestab® NOR116. Finally, two new phosphorus-based flame retardants were synthesized: P,P-diphenylphosphinic hydrazide (PAH) and melamine phenyl phosphonate (MPhP). The DIN 4102-1 test and the more stringent UL94 vertical burning test (UL94 V) were used to assess the formulations' ability to extinguish a flame once ignited. A very strong synergistic effect with azoalkanes was found, i.e. in combination with these radical generators even a UL94 V-0 rating could be obtained.

Relevance: 20.00%

Abstract:

Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. For this purpose, an efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
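The ridge notion used here can be made concrete with a small example. A point lies on a one-dimensional ridge of a density when the gradient has no component along the Hessian eigenvectors with the most negative eigenvalues; ascending the density only within that eigen-subspace drives a point onto the ridge. The sketch below does this with plain subspace-constrained gradient ascent on a Gaussian kernel density estimate; it illustrates the definition only and is not the trust region Newton method developed in the thesis.

```python
# Minimal sketch: projecting a point onto a density ridge by
# subspace-constrained gradient ascent on a Gaussian KDE.
import numpy as np

def kde_grad_hess(x, data, h):
    """Gradient and Hessian of an (unnormalized) Gaussian KDE at x."""
    d = data - x                                   # (n, dim) differences
    w = np.exp(-np.sum(d**2, axis=1) / (2 * h**2)) # kernel weights
    grad = (d * w[:, None]).sum(axis=0) / h**2
    hess = (np.einsum('ni,nj->ij', d * w[:, None], d) / h**4
            - w.sum() * np.eye(x.size) / h**2)
    return grad, hess

def project_to_ridge(x, data, h=0.5, steps=200, lr=0.05):
    """Move x toward the 1-D ridge: ascend the density only within the
    subspace orthogonal to the leading Hessian eigenvector (the ridge
    tangent), so motion along the ridge itself is suppressed."""
    for _ in range(steps):
        g, H = kde_grad_hess(x, data, h)
        _, vecs = np.linalg.eigh(H)      # eigenvalues in ascending order
        V = vecs[:, :-1]                 # drop the top (tangent) eigenvector
        x = x + lr * V @ (V.T @ g)       # constrained ascent step
    return x

if __name__ == "__main__":
    # Noisy points along a circle: the ridge approximates the circle.
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 500)
    data = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(500, 2))
    print(project_to_ridge(np.array([1.3, 0.1]), data))
```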

Relevance: 20.00%

Abstract:

The aim of this thesis was to create a process for all multi-site ramp-up (MSRU) projects in the case company, in order to enable simultaneous ramp-ups early in the market. The research was conducted as a case study in one company, using semi-structured interviews. Processes already exist and are in use for MSRU cases. Interviews with 20 ramp-up specialists revealed the topics to be improved: project team setup; roles, responsibilities, and the recommended project organization; communication; product change management practices; competence and know-how transfer practices; and the support model. More R&D support and involvement is needed in MSRU projects. The DCM's role in MSRU projects, within the PMT team, is very important: the DCM should be the business owner of the project. It is recommended that product programs take care of the product and repair training for new products in the volume factories. R&D's participation in competence transfers is essential in MSRU projects. Project communication could be shared through a dedicated intranet community, and blogging and tweeting could be considered in the communication plan. If hundreds of change notes are still open in the ramp-up phase, it should be considered whether the product should be approved for volume ramp-up at all. PMT support is also important, and MSRU projects should be planned, budgeted, and executed jointly. Finally, a new MSRU process, to be used in all MSRU projects, is presented in this thesis.

Relevance: 20.00%

Abstract:

Due to various advantages such as flexibility, scalability, and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables the parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of both performance and power consumption. Moreover, the high level of on-chip integration increases the probability of various faults and of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults, which can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure the correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in a hardware description language, namely VHDL.
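As a rough intuition for the kind of dynamic reconfiguration mechanism meant here, the sketch below shows an agent that migrates tasks away from faulty or overheated cores. The data model, thermal threshold, and migration policy are invented for the illustration; they stand in for behaviour that the thesis develops and verifies through formal refinement rather than ad hoc code.

```python
# Minimal sketch: an agent remapping tasks away from faulty or hot cores.
from dataclasses import dataclass, field

TEMP_LIMIT = 85.0  # assumed thermal threshold (degrees Celsius)

@dataclass
class Core:
    core_id: int
    temperature: float = 40.0
    faulty: bool = False
    tasks: list = field(default_factory=list)

class ReconfigurationAgent:
    """Monitors a set of cores and remaps tasks off cores that report
    faults or exceed the thermal threshold."""

    def __init__(self, cores):
        self.cores = cores

    def reconfigure(self):
        healthy = [c for c in self.cores
                   if not c.faulty and c.temperature < TEMP_LIMIT]
        if not healthy:
            raise RuntimeError("no healthy cores left; resilience exhausted")
        for core in self.cores:
            if (core.faulty or core.temperature >= TEMP_LIMIT) and core.tasks:
                # Migrate to the least-loaded healthy core. Note that the
                # overall task set is preserved, which is exactly the kind of
                # property a formal refinement step would be obliged to prove.
                target = min(healthy, key=lambda c: len(c.tasks))
                target.tasks.extend(core.tasks)
                core.tasks.clear()
```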

Relevance: 20.00%

Abstract:

Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and to transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of the smart region, where people have limited access to regular healthcare services, and equally in urban areas, where hospitals can be overcrowded and obtaining healthcare may take substantial time. However, such a system may generate a large amount of data. To realize an efficient IoT-based remote health monitoring system, it is imperative to study its network communication needs, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements, and data generation. Furthermore, the thesis proposes IReHMo, an IoT-based remote health monitoring architecture that allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, namely HTTP, MQTT, and CoAP, have been evaluated and compared against each other. The results show that CoAP is the most efficient protocol for transmitting small healthcare data payloads to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis presents a scalability analysis to determine the feasibility of deploying the IReHMo-CoAP combination at large scale in regions of northern Sweden.
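As an illustration of the kind of CoAP transmission evaluated here, the sketch below posts a small sensor reading using the aiocoap Python library; CoAP's compact binary header, compared with HTTP's text headers, is what keeps per-message overhead low for frequent, small health readings. The endpoint URI, resource path, and payload format are hypothetical, not those of IReHMo.

```python
# Minimal sketch: pushing one health reading to a CoAP server with aiocoap.
# The server URI and JSON payload schema are illustrative assumptions.
import asyncio
import json

from aiocoap import Context, Message, POST

async def push_reading(heart_rate: int) -> None:
    ctx = await Context.create_client_context()
    payload = json.dumps({"hr": heart_rate}).encode()
    request = Message(
        code=POST,
        payload=payload,
        uri="coap://health.example.com/observations",  # hypothetical endpoint
    )
    response = await ctx.request(request).response
    print("server replied:", response.code)

if __name__ == "__main__":
    asyncio.run(push_reading(72))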

Relevance: 20.00%

Abstract:

The increasing performance of computers has made it possible to solve algorithmically problems for which manual, and possibly inaccurate, methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine the distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem, the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
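The geometric core of the fetch length problem is casting a ray from a study point in a given direction and finding the nearest intersected shoreline segment. The brute-force sketch below shows that single step for shoreline segments given as (ax, ay, bx, by) tuples; it scans every segment and is meant only to make the problem concrete, whereas the thesis develops algorithms that are vastly more efficient on polygon sets with millions of vertices.

```python
# Minimal sketch: fetch length in one direction by brute-force
# ray-vs-segment intersection. Segment representation is an assumption.
import math

def ray_segment_distance(px, py, angle, ax, ay, bx, by):
    """Distance from (px, py) along `angle` (radians) to segment A-B,
    or math.inf if the ray misses the segment."""
    dx, dy = math.cos(angle), math.sin(angle)   # unit ray direction
    ex, ey = bx - ax, by - ay                   # segment direction
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                      # ray parallel to segment
        return math.inf
    # Solve p + t*d = a + u*e for ray parameter t and segment parameter u.
    t = ((ax - px) * ey - (ay - py) * ex) / denom
    u = ((ax - px) * dy - (ay - py) * dx) / denom
    return t if t >= 0 and 0 <= u <= 1 else math.inf

def fetch_length(px, py, angle, shoreline_segments):
    """Fetch = distance to the nearest shoreline in the given direction."""
    return min((ray_segment_distance(px, py, angle, *seg)
                for seg in shoreline_segments), default=math.inf)

if __name__ == "__main__":
    # A study point at the origin, with one shoreline segment due east.
    print(fetch_length(0.0, 0.0, 0.0, [(5.0, -1.0, 5.0, 1.0)]))  # -> 5.0
```

Since the ray direction is a unit vector, the parameter t is directly the sought distance, which is why no square root is needed after the intersection test.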