884 results for Cost Estimation System
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which utilize cost drivers to allocate the costs of activities to cost objects. The selection of appropriate cost drivers is essential for allocating costs accurately and reliably and for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as part of the case company's applied ABC project, using statistical research as the main research method supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as it does not provide substantially better results while increasing measurement costs, complexity and the load of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
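As a rough illustration of the regression-based driver validation described above, the sketch below fits a simple linear model for each candidate driver and compares their coefficients of determination. All driver values and costs are invented for illustration, not data from the study.

```python
# Sketch of cost-driver validation via simple linear regression.
import numpy as np

def r_squared(driver, cost):
    """Fit cost = a + b * driver by least squares and return R^2."""
    A = np.column_stack([np.ones_like(driver), driver])
    coeffs, *_ = np.linalg.lstsq(A, cost, rcond=None)
    residuals = cost - A @ coeffs
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((cost - cost.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical monthly observations:
drops  = np.array([120, 150, 90, 200, 170, 130], dtype=float)
weight = np.array([8.0, 9.5, 6.1, 13.2, 11.0, 8.4])      # tonnes
cost   = np.array([2400, 3000, 1850, 3950, 3400, 2600])  # EUR

print(f"R^2 (delivery drops):  {r_squared(drops, cost):.3f}")
print(f"R^2 (delivery weight): {r_squared(weight, cost):.3f}")
```

A higher R² indicates a driver that explains more of the cost variation; a practicality analysis (measurement cost, ease of use) would then be weighed against the statistical fit, as the study describes.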
Abstract:
The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore’s law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. In answer to these pressures, modern-day systems have moved towards on-chip multiprocessing technologies, and new on-chip multiprocessing architectures have emerged to utilize the tremendous advances of fabrication technology. Platform-based design is a possible solution to these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. The existing design methodologies pertaining to the platform-based design approach do not provide full automation at every level of the design process, and the co-design of platform-based systems sometimes leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design in the context of the SegBus platform - a distributed communication architecture. This research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process. Both the platform and the application are modeled using the Unified Modeling Language. This thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile. Object Constraint Language constraints are used to support structurally correct platform construction.
An emulator is also introduced to allow performance estimation of the solution at high abstraction levels that is as accurate as possible. VHDL code is automatically generated in the form of “snippets” to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
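Constraint-based structural checking of a platform model could, in spirit, look like the following sketch. The segment/arbiter rules and all names here are hypothetical simplifications for illustration, not the actual SegBus UML profile or its OCL constraints.

```python
# Illustrative "correct-by-design" structural check on a toy platform model.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    arbiters: list = field(default_factory=list)  # exactly one expected
    devices: list = field(default_factory=list)   # at least one expected

def check_platform(segments):
    """Return a list of constraint violations (empty if structurally valid)."""
    errors = []
    for seg in segments:
        if len(seg.arbiters) != 1:
            errors.append(f"{seg.name}: expected exactly 1 arbiter, "
                          f"found {len(seg.arbiters)}")
        if not seg.devices:
            errors.append(f"{seg.name}: must contain at least one device")
    return errors

platform = [Segment("seg0", ["arb0"], ["dsp0", "mem0"]),
            Segment("seg1", [], ["dsp1"])]
print(check_platform(platform))  # seg1 violates the arbiter constraint
```

In an actual UML/OCL setting such rules would be attached to the profile and evaluated by the modeling tool rather than hand-coded; the sketch only conveys the idea of rejecting structurally invalid platforms before code generation.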
Abstract:
Chickpea yield potential is limited by weed competition in typical chickpea-growing areas of Pakistan, where a zero-tillage crop is grown on moisture conserved from rains received during August and September. The objective of this work was to evaluate the growth and yield characteristics of chickpea grown in coexistence with increasing densities of wild onion (Asphodelus tenuifolius). The experiment comprised six density levels, viz. zero, 20, 40, 80, 160 and 320 plants m-2 of A. tenuifolius. A decrease in chickpea primary and secondary branches per plant, pods per plant, seeds per pod, 100-seed weight and seed yield was observed due to greater dry matter accumulation by increasing densities of A. tenuifolius. The increase in A. tenuifolius density accelerated chickpea yield losses, which reached maximum values of 28, 35, 42, 50, 58 and 96% at 20, 40, 80, 160 and 320 A. tenuifolius plants m-2, respectively. The yield loss estimation model showed that chickpea losses at infinite A. tenuifolius density were 60%. A yield reduction of 2.52% was predicted for each additional A. tenuifolius plant m-2. It is concluded that A. tenuifolius has a strong influence on chickpea seed yield, which showed a linear response over the range of densities studied.
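The two reported parameters (an initial slope of 2.52% loss per weed plant m-2 and a 60% asymptote at infinite density) are the parameters of the classic rectangular-hyperbola yield loss model; the sketch below assumes that model form, which may differ from the exact model used in the paper, and its predictions will not reproduce every observed value.

```python
# Rectangular-hyperbola yield loss model, parameterised with the
# values reported in the abstract: initial slope I = 2.52 % loss
# per weed plant m^-2, asymptote A = 60 %.
def yield_loss(density, I=2.52, A=60.0):
    """Percent chickpea yield loss at a given A. tenuifolius density."""
    return I * density / (1.0 + I * density / A)

for d in (0, 20, 80, 320):
    print(f"{d:4d} plants m^-2 -> {yield_loss(d):5.1f} % loss")
```

At 20 plants m-2 this predicts roughly 27% loss, close to the reported 28%; at the highest densities the model saturates toward the 60% asymptote rather than reproducing the reported 96%, illustrating the limits of a single fitted curve.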
Abstract:
In the doctoral dissertation, low-voltage direct current (LVDC) distribution system stability, supply security and power quality are evaluated by computational modelling and by measurements on an LVDC research platform. Computational models for LVDC network analysis are developed. Time-domain simulation models are implemented in the PSCAD/EMTDC simulation environment. The PSCAD/EMTDC models of the LVDC network are applied to transient behaviour and power quality studies. The LVDC network power loss model is developed in a MATLAB environment and is capable of fast estimation of the network and component power losses. The model integrates analytical equations that describe the power loss mechanisms of the network components with power flow calculations. For the LVDC network research platform, a monitoring and control software solution is developed. The solution is used to deliver measurement data for verification of the developed models and for analysis of the modelling results. In the work, the power loss mechanisms of the LVDC network components and their main dependencies are described. The energy loss distribution of the LVDC network components is presented. Power quality measurements and current spectra are provided, and harmonic pollution on the DC network is analysed. The transient behaviour of the network is verified through time-domain simulations. DC capacitor guidelines for an LVDC power distribution network are introduced. The power loss analysis results show that one of the main optimisation targets for an LVDC power distribution network should be reduction of the no-load losses and efficiency improvement of the converters at partial loads. Low-frequency spectra of the network voltages and currents are shown, and harmonic propagation is analysed. Power quality at the LVDC network point of common coupling (PCC) is discussed. The power quality standard requirements are shown to be met by the LVDC network.
The network behaviour during transients is analysed by time-domain simulations. The network is shown to be transient-stable during large-scale disturbances. Measurement results on the LVDC research platform confirming this are presented in the work.
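As a minimal illustration of the loss-modelling idea (a constant no-load term plus load-dependent terms), the sketch below shows why partial-load converter efficiency is dominated by no-load losses. All coefficients are invented for illustration, not values from the dissertation.

```python
# Toy converter loss model: P(I) = P0 + a*I + b*I^2,
# with a constant no-load term P0 and load-dependent terms.
def converter_loss(i_load, p0=25.0, a=0.8, b=0.05):
    """Power loss in watts for a given load current (A)."""
    return p0 + a * i_load + b * i_load ** 2

def efficiency(p_out, i_load):
    p_loss = converter_loss(i_load)
    return p_out / (p_out + p_loss)

# Efficiency drops at partial load because the no-load term dominates:
for p_out, i in ((200.0, 2.0), (2000.0, 20.0)):
    print(f"{p_out:6.0f} W out -> efficiency {efficiency(p_out, i):.3f}")
```

With these assumed numbers the converter is noticeably less efficient at 10% load than at full load, which mirrors the dissertation's conclusion that no-load losses and partial-load efficiency are key optimisation targets.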
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms to ensure cost-efficient scalability of multi-tier web applications and an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
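A reactive provisioning policy of the general kind discussed can be sketched as a simple threshold controller. This is not the ARVUE algorithm itself; the thresholds and step sizes are illustrative assumptions.

```python
# Threshold-based reactive VM provisioning sketch.
def provision(current_vms, utilization, low=0.3, high=0.8,
              min_vms=1, max_vms=20):
    """Return the new VM count given average cluster utilization (0..1)."""
    if utilization > high:      # scale out to avoid overload
        return min(current_vms + 1, max_vms)
    if utilization < low:       # scale in to cut cost
        return max(current_vms - 1, min_vms)
    return current_vms          # within the target band: hold

print(provision(4, 0.9))   # -> 5
print(provision(4, 0.1))   # -> 3
print(provision(1, 0.1))   # -> 1 (floor prevents scaling to zero)
```

The dead band between the two thresholds prevents oscillation, and the floor and ceiling bound both cost and capacity; a hybrid reactive-proactive approach would additionally forecast load before acting.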
Abstract:
Data management consists of collecting, storing, and processing data into a format that provides value-adding information for the decision-making process. The development of data management has enabled the design of increasingly effective database management systems to support business needs. Thus, in addition to advanced systems designed for reporting purposes, operational systems also allow reporting and data analysis. The research method used in the theoretical part is qualitative research, and the research type in the empirical part is a case study. The objective of this paper is to examine database management system requirements from the reporting management and data management perspectives. In the theoretical part these requirements are identified and the appropriateness of the relational data model is evaluated. In addition, key performance indicators applied to the operational monitoring of production are studied. The study reveals that appropriate operational key performance indicators of production take into account time, quality, flexibility and cost aspects. Manufacturing efficiency in particular has been highlighted. In this paper, reporting management is defined as continuous monitoring of given performance measures. According to the literature review, a data management tool should cover performance, usability, reliability, scalability, and data privacy aspects in order to fulfill reporting management's demands. A framework is created for the system development phase based on these requirements and is used in the empirical part of the thesis, where such a system is designed and created for reporting management purposes for a company operating in the manufacturing industry. Relational data modeling and database architectures are utilized when the system is built on a relational database platform.
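One widely used operational production KPI that combines the time, quality and efficiency aspects mentioned above is overall equipment effectiveness (OEE). A minimal sketch with invented factor values:

```python
# OEE = availability x performance x quality, each expressed as a
# ratio in 0..1. The factor values below are illustrative.
def oee(availability, performance, quality):
    """Overall equipment effectiveness as the product of its factors."""
    return availability * performance * quality

print(f"OEE = {oee(0.90, 0.95, 0.98):.3f}")  # -> 0.838
```

Because OEE is a product, a shortfall in any single factor caps the whole indicator, which is why monitoring the three factors separately is usually more actionable than tracking the composite alone.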
Abstract:
Pheochromocytomas are rare chromaffin cell tumors that nevertheless must be excluded in large numbers of patients who develop sustained or episodic hypertension as well as in many others with suggestive symptoms or with a familial history of pheochromocytoma. Diagnosis of pheochromocytoma depends importantly on biochemical evidence of excess catecholamine production by a tumor. Imperfect sensitivity and specificity of commonly available biochemical tests and the low incidence of the tumor among the tested population mean that considerable time and effort can be expended in confirming or ruling out pheochromocytoma in patients where the tumor is suspected. Measurements of plasma free metanephrines provide a superior test compared to other available tests for diagnosis of pheochromocytoma. In particular, the high sensitivity of plasma free metanephrines means that a normal test result reliably excludes all but the smallest of pheochromocytomas so that no other tests are necessary. Measurements of plasma free metanephrines, when systematically combined with other diagnostic procedures outlined in this review, provide a more efficient, reliable and cost-effective approach for diagnosis of pheochromocytoma than offered by previously available approaches.
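The effect of low incidence on test interpretation can be made concrete with Bayes' rule: even a highly specific test yields many false positives when prevalence is low, while a highly sensitive test makes a normal result a reliable exclusion. The sensitivity, specificity and prevalence below are illustrative assumptions, not figures from the review.

```python
# Predictive values of a diagnostic test via Bayes' rule.
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test)."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

def npv(sensitivity, specificity, prevalence):
    """Negative predictive value: P(no disease | negative test)."""
    tn = specificity * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    return tn / (tn + fn)

# Assumed ~0.5 % prevalence among tested hypertensive patients:
print(f"PPV = {ppv(0.99, 0.89, 0.005):.3f}")
print(f"NPV = {npv(0.99, 0.89, 0.005):.5f}")
```

With these assumed numbers, only a few percent of positive results reflect a true tumour, yet the negative predictive value is nearly 1, which mirrors the review's point that a normal plasma free metanephrine result reliably excludes pheochromocytoma.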
Abstract:
The main objective of this Master’s thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses produced in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct an allocation model that can be modified to suit other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis starts with a literature review of the existing theory of ABC and an inspection of cost information, followed by interviews with officials to get a general view of the requirements for the model to be constructed. Familiarization with the company started with the existing cost accounting methods. The main proposals for a new allocation model were revealed through the interviews and were utilized in setting targets for developing the new allocation method. As a result of this thesis, an Excel-based model is created based on the theoretical and empirical data. The new system is able to handle overhead costs in more detail, improving cost awareness and transparency in cost allocations and enhancing the products’ cost structure. The improved cost awareness is achieved by selecting the best possible cost drivers for this situation. Capacity changes are also taken into consideration; for example, the use of practical or normal capacity instead of theoretical capacity is suggested. Some recommendations for further development are also made concerning capacity handling and cost collection.
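Driver-based allocation of an overhead pool, as in the Excel model described, reduces to spreading each cost pool over products in proportion to their driver quantities. A sketch with an invented pool, driver and figures:

```python
# Driver-based overhead allocation sketch.
def allocate(pool_cost, driver_quantities):
    """Allocate pool_cost over cost objects by their driver share."""
    total = sum(driver_quantities.values())
    return {obj: pool_cost * q / total
            for obj, q in driver_quantities.items()}

# Hypothetical machine-hour driver for one overhead pool:
machine_hours = {"product_A": 120, "product_B": 60, "product_C": 20}
print(allocate(10_000.0, machine_hours))
# product_A consumes 120/200 of the driver, so it carries 60 % of the pool
```

Using practical or normal capacity rather than theoretical capacity, as the thesis suggests, would mean setting the denominator to the realistic driver volume so that the cost of unused capacity is not loaded onto products.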
Abstract:
Electrical machine drives are the most electrical-energy-consuming systems worldwide. The largest proportion of drives is found in industrial applications. There are, however, many other applications that are also based on the use of electrical machines, because they have a relatively high efficiency, a low noise level, and do not produce local pollution. Electrical machines can be classified into several categories. One of the most commonly used electrical machine types (especially in industry) is the induction motor, also known as the asynchronous machine. Induction motors have a mature production process and a robust rotor construction. However, in a world pursuing higher energy efficiency with reasonable investments, not every application benefits from using this type of motor drive. The main drawback of induction motors is the fact that they need slip-caused, and thus loss-generating, current in the rotor, and additional stator current for magnetic field production along with the torque-producing current. This can reduce the electric motor drive efficiency, especially in low-speed, low-power applications. Often, when high torque density is required together with low losses, it is desirable to apply permanent magnet technology, because in this case there is no need to use current to produce the basic excitation of the machine. This promotes the effectiveness of copper use in the stator, and further, there is no rotor current in these machines. Again, if permanent magnets with a high remanent flux density are used, the air gap flux density can be higher than in conventional induction motors. These advantages have raised the popularity of permanent magnet synchronous machines (PMSMs) in some challenging applications, such as hybrid electric vehicles (HEVs), wind turbines, and home appliances. Usually, a correctly designed PMSM has a higher efficiency and consequently lower losses than its induction machine counterparts.
Therefore, the use of these electrical machines reduces the energy consumption of the whole system to some extent, which provides good motivation to apply permanent magnet technology to electrical machines. However, the cost of high-performance rare earth permanent magnets in these machines may not be affordable in many industrial applications, because the tight competition between manufacturers dictates the rules of low-cost and highly robust solutions, where asynchronous machines seem to be more feasible at the moment. The two main electromagnetic components of an electrical machine are the stator and the rotor. In the case of a conventional radial flux PMSM, the stator contains the magnetic circuit lamination and the stator winding, and the rotor consists of rotor steel (laminated or solid) and permanent magnets. The lamination itself does not significantly influence the total cost of the machine, even though it can considerably increase the construction complexity, as it requires a special assembly arrangement. However, thin metal sheet processing methods are very effective and economically feasible. Therefore, the cost of the machine is mainly affected by the stator winding and the permanent magnets. The work proposed in this doctoral dissertation comprises a description and analysis of two approaches to PMSM cost reduction: one on the rotor side and the other on the stator side. The first approach, on the rotor side, includes the use of low-cost and abundant ferrite magnets together with a tooth-coil winding topology and an outer rotor construction. The second approach, on the stator side, exploits the use of a modular stator structure instead of a monolithic one. PMSMs with the proposed structures were thoroughly analysed by finite element method (FEM) based tools. It was found that by implementing the described principles, some favourable characteristics of the machine (mainly concerning the machine size) will inevitably be compromised.
However, the main target of the proposed approaches is not to compete with conventional rare earth PMSMs, but to reduce the price at which they can be implemented in industrial applications, keeping their dimensions at the same level or lower than those of a typical electrical machine used in the industry at the moment. The measurement results of the prototypes show that the main performance characteristics of these machines are at an acceptable level. It is shown that with certain specific actions it is possible to achieve a desirable efficiency level of the machine with the proposed cost reduction methods.
Abstract:
Knowledge of the radiochemical purity of radiopharmaceuticals is mandatory, and it can be evaluated by several methods and techniques. Planar chromatography is the technique normally employed in nuclear medicine, since it is simple, rapid and usually of low cost. There is no standard system for the chromatographic technique, but price, separation efficiency and short execution time must be considered. We have studied an alternative system using common chromatographic stationary phases and alcohol or alcohol:chloroform mixtures as the mobile phase, using the lipophilic radiopharmaceutical [99mTc(MIBI)6]+ as a model. Whatman 1 modified-phase paper and absolute ethanol, Whatman 1 paper and methanol:chloroform (25:75), Whatman 3MM paper and ethanol:chloroform (25:75), and the more expensive ITLC-SG and 1-propanol:chloroform (10:90) were suitable systems for the direct determination of the radiochemical purity of [99mTc(MIBI)6]+, since impurities such as 99mTc-reduced-hydrolyzed (RH), 99mTcO4- and the [99mTc(cysteine)2]-complex were completely separated from the radiopharmaceutical, which moved toward the front of the chromatographic systems while the impurities were retained at the origin. The time required for analysis was 4 to 15 min, which is appropriate for nuclear medicine routines.
Abstract:
Fluid handling systems such as pump and fan systems are found to have a significant potential for energy efficiency improvements. To deliver the energy saving potential, there is a need for easily implementable methods to monitor the system output. This is because information is needed to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable speed drives have proven to be able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate in the whole operation range of the fluid handling device. To apply model-based system monitoring in a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
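One well-known ingredient of such model-based estimation is scaling the pump's nominal QH curve to the present rotational speed with the affinity laws, then reading the operating point off the estimated head. The sketch below uses an illustrative quadratic curve and is not the paper's combined method.

```python
# Affinity-law scaling of a pump QH curve.
def head_from_flow(q, n, n0=1450.0, a=30.0, b=-0.002):
    """Pump head (m) at flow q (m^3/h) and speed n (rpm).
    Nominal curve H0(q) = a + b*q^2 is valid at speed n0; the
    affinity laws Q ~ n, H ~ n^2 give H(q, n) = r^2 * H0(q / r)."""
    r = n / n0
    return r ** 2 * (a + b * (q / r) ** 2)

print(f"{head_from_flow(50.0, 1450.0):.2f} m")  # nominal speed
print(f"{head_from_flow(25.0, 725.0):.2f} m")   # half speed, half flow
```

At half speed and half flow the head falls to one quarter of its nominal value, exactly as the affinity laws predict; the accuracy limits of this simple scaling at far-off-nominal operating points are one motivation for combining several estimation methods, as the paper does.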
Abstract:
Transportation plays a major role in the gross domestic product of various nations. There are, however, many obstacles hindering the transportation sector. Achieving cost-efficiency along with proper delivery times, high frequency and reliability is not a straightforward task. Furthermore, environmental friendliness has increased in importance across the whole transportation sector. This development will change roles inside the transportation sector. Even now, but especially in the future, decisions regarding the transportation sector will be based partly on emission levels and other externalities originating from transportation, in addition to pure transportation costs. There are several factors which could have an impact on the transportation sector. IMO’s sulphur regulation is estimated to increase the costs of short sea shipping in the Baltic Sea. The price development of energy could change the roles of different transport modes. Higher awareness of the environmental impacts originating from transportation could also have an impact on the price level of more polluting transport modes. According to earlier research, increased inland transportation, modal shift and slow-steaming are possible results of these changes in the transportation sector. Possible changes in the transportation sector and ways to settle potential obstacles are studied in this dissertation. Furthermore, means to improve cost-efficiency and to decrease the environmental impacts originating from transportation are researched. A hypothetical Finnish dry port network and the Rail Baltica transport corridor are studied in this dissertation. Their benefits and disadvantages are studied with different methodologies. These include gravitational models, which were optimized with linear integer programming, discrete-event and system dynamics simulation, an interview study and a case study. The geographical focus is on the Baltic Sea Region, but the results can be adapted to other geographical locations with discretion.
The results indicate that the dry port concept has benefits, but optimization of the location and number of dry ports plays an important role. In addition, the utilization of dry ports for freight transportation should be carefully managed, since only a certain share of the total freight volume can be cost-efficiently transported through dry ports. If dry ports are created and located without proper planning, they could actually increase the transportation costs and delivery times of the whole transportation system. With an optimized dry port network, transportation costs in Finland can be lowered with three to five dry ports, and environmental impacts can be lowered with up to nine dry ports. If more dry ports are added to the system, the benefits become very minor, i.e. the payback time of the investments becomes extremely long. Furthermore, a dry port network could support major transport corridors such as Rail Baltica. Based on an analysis of statistics and an interview study, there could be enough freight volume available for Rail Baltica, especially if North-West Russia is part of the northern end of the corridor. Transit traffic to and from Russia (especially through the Baltic States) plays a large role. It could be possible to increase transit traffic through Finland by connecting the potential Finnish dry port network and the studied transport corridor. Additionally, the sulphur emission regulation is assumed to increase the attractiveness of Rail Baltica in the year 2015. Part of the transit traffic could be rerouted along Rail Baltica instead of the Baltic Sea, since the price level of sea transport could increase due to the sulphur regulation. The hypothetical Finnish dry port network and the Rail Baltica transport corridor could both benefit each other. The dry port network could gain more market share from Russia, but also from Central Europe, which is at the other end of Rail Baltica.
In addition, countries further east could be connected to achieve a higher potential freight volume by rail.
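The gravity-model idea used for freight flow estimation can be sketched as follows; the masses, distances and decay exponent are invented for illustration and are not values from the dissertation.

```python
# Gravity-model sketch: flow between two nodes is proportional to the
# product of their "masses" and decays with distance.
def gravity_flow(mass_i, mass_j, distance, k=1.0, beta=2.0):
    """Estimated freight flow between two nodes."""
    return k * mass_i * mass_j / distance ** beta

# Candidate dry port locations scored against one demand region
# (region mass e.g. annual tonnes of demand, illustrative):
region = 500.0
for name, mass, dist in (("port_A", 300.0, 100.0),
                         ("port_B", 200.0, 50.0)):
    print(name, gravity_flow(region, mass, dist))
```

In this toy case the smaller but closer candidate attracts the larger flow, illustrating why the location of dry ports matters as much as their number; in the dissertation such flow estimates feed a linear integer program that selects the optimal set of locations.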
Electromagnetic and thermal design of a multilevel converter with high power density and reliability
Abstract:
Electric energy demand has been growing constantly as the global population increases. To avoid electric energy shortage, renewable energy sources and energy conservation are emphasized all over the world. The role of power electronics in energy saving and development of renewable energy systems is significant. Power electronics is applied in wind, solar, fuel cell, and micro turbine energy systems for the energy conversion and control. The use of power electronics introduces an energy saving potential in such applications as motors, lighting, home appliances, and consumer electronics. Despite the advantages of power converters, their penetration into the market requires that they have a set of characteristics such as high reliability and power density, cost effectiveness, and low weight, which are dictated by the emerging applications. In association with the increasing requirements, the design of the power converter is becoming more complicated, and thus, a multidisciplinary approach to the modelling of the converter is required. In this doctoral dissertation, methods and models are developed for the design of a multilevel power converter and the analysis of the related electromagnetic, thermal, and reliability issues. The focus is on the design of the main circuit. The electromagnetic model of the laminated busbar system and the IGBT modules is established with the aim of minimizing the stray inductance of the commutation loops that degrade the converter power capability. The circular busbar system is proposed to achieve equal current sharing among parallel-connected devices and implemented in the non-destructive test set-up. In addition to the electromagnetic model, a thermal model of the laminated busbar system is developed based on a lumped parameter thermal model. The temperature and temperature-dependent power losses of the busbars are estimated by the proposed algorithm. 
The Joule losses produced by non-sinusoidal currents flowing through the busbars in the converter are estimated taking into account the skin and proximity effects, which have a strong influence on the AC resistance of the busbars. A lifetime estimation algorithm is implemented to investigate the influence of the cooling solution on the reliability of the IGBT modules. As efficient cooling solutions have a low thermal inertia, they cause excessive temperature cycling of the IGBTs. Thus, a reliability analysis is required when selecting the cooling solution for a particular application. Control of the cooling solution based on the use of a heat flux sensor is proposed to reduce the amplitude of the temperature cycles. The developed methods and models are verified experimentally on a laboratory prototype.
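The frequency dependence of busbar resistance through the skin effect can be illustrated with a crude first-order sketch: harmonic currents see a higher AC resistance because the skin depth shrinks with frequency. The correction formula, dimensions and harmonic spectrum below are simplifications and assumptions, not the dissertation's model (which also covers the proximity effect).

```python
# Crude skin-effect correction for busbar Joule losses.
import math

RHO_CU = 1.68e-8          # copper resistivity, ohm*m
MU0 = 4e-7 * math.pi      # vacuum permeability, H/m

def skin_depth(f_hz):
    """Skin depth in copper at frequency f_hz (m)."""
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0))

def joule_loss(harmonics, r_dc, thickness):
    """Sum I^2*R over (frequency, I_rms) pairs; R is raised when the
    skin depth is smaller than half the busbar thickness (crude)."""
    total = 0.0
    for f, i_rms in harmonics:
        delta = skin_depth(f)
        r_ac = r_dc * max(1.0, (thickness / 2) / delta)
        total += i_rms ** 2 * r_ac
    return total

# Fundamental plus a switching-frequency ripple component (invented):
print(f"{joule_loss([(50.0, 100.0), (10e3, 8.0)], 1e-4, 5e-3):.3f} W")
```

Even a small 10 kHz ripple current contributes disproportionately to the losses because its skin depth (well under a millimetre in copper) concentrates the current near the busbar surface.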
Abstract:
This thesis was done as part of the NEOCARBON project. The aim of the NEOCARBON project is to study a fully renewable energy system utilizing Power-to-Gas or Power-to-Liquid technology for energy storage. Power-to-Gas consists of two main operations: hydrogen production via electrolysis and methane production via methanation. Methanation requires carbon dioxide and hydrogen as raw materials. This thesis studies potential carbon dioxide sources within Finland. The different sources are ranked by the cost and energy penalty of carbon capture, carbon biogenicity and compatibility with Power-to-Gas. It can be concluded that Finland has enough CO2 point sources to provide a national PtG system with sufficient amounts of carbon. The pulp and paper industry is the single largest producer of biogenic CO2 in Finland. A single unit capable of grid balancing operations and energy transformations via Power-to-Gas and Gas-to-Power can be obtained by coupling biogas plants with biomethanation and CHP units.
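The methanation step follows the Sabatier reaction, CO2 + 4 H2 -> CH4 + 2 H2O, which fixes the hydrogen demand per tonne of captured CO2:

```python
# Stoichiometry of the Sabatier reaction; molar masses in g/mol.
M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04

def methanation_per_tonne_co2():
    """Tonnes of H2 required and CH4 produced per tonne of CO2."""
    h2 = 4 * M_H2 / M_CO2
    ch4 = M_CH4 / M_CO2
    return h2, ch4

h2, ch4 = methanation_per_tonne_co2()
print(f"{h2:.3f} t H2 in, {ch4:.3f} t CH4 out per t CO2")
```

Roughly 0.18 t of hydrogen is needed for every tonne of CO2 methanated, which is why the electrolysis step dominates the energy balance of a PtG chain.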
Abstract:
This literature review aims to clarify what is known about map matching using inertial sensors and what the requirements are for map matching, the inertial sensors, their placement and possible complementary positioning technology. The target is to develop a wearable location system that can position itself within a complex construction environment automatically with the aid of an accurate building model. The wearable location system should work on a tablet computer which runs an augmented reality (AR) solution and is capable of tracking and visualizing 3D-CAD models in the real environment. The wearable location system is needed to support the AR system in initializing the accurate camera pose calculation and in automatically finding the right location in the 3D-CAD model. One type of sensor which does seem applicable to people tracking is the inertial measurement unit (IMU). IMU sensors in aerospace applications, based on laser gyroscopes, are big but provide very accurate position estimation with limited drift. Small and light units such as those based on Micro-Electro-Mechanical Systems (MEMS) sensors are becoming very popular, but they have a significant bias and therefore suffer from large drifts and require a calibration method such as map matching. The system requires very little fixed infrastructure; the monetary cost is proportional to the number of users rather than to the coverage area, as is the case for traditional absolute indoor location systems.
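Why MEMS dead reckoning needs a correction source such as map matching can be seen from double integration of a constant accelerometer bias: the position error grows quadratically with time. The bias value below is an assumed, typical order of magnitude for consumer-grade MEMS, not a figure from the review.

```python
# Position error from an uncorrected constant acceleration bias.
def drift_error(bias, t):
    """Position error (m) after double-integrating a constant
    acceleration bias (m/s^2) for t seconds: e = 0.5 * bias * t^2."""
    return 0.5 * bias * t ** 2

# An assumed 0.01 m/s^2 bias:
for t in (10, 60, 300):
    print(f"after {t:3d} s: {drift_error(0.01, t):8.1f} m")
```

After only a minute the error reaches tens of metres, far beyond room scale, which is why periodic corrections against the building model (map matching) or another absolute reference are essential for indoor tracking.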