991 results for Efficient implementation


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to examine the implementation of an ERP system and to provide a roadmap for carrying it out successfully. In addition, the thesis surveys the advantages and benefits Konecranes gained by taking an ERP system into use. The implementation project, its phases and the other significant stages of ERP projects are described in detail. ERP implementation is first examined on the basis of the literature, and then on the basis of the author's experiences and observations of an ERP implementation, comparing practice against theory. ERP systems are expensive and their implementation is time-consuming. Over the last decade, companies have increasingly begun to adopt ERP systems. ERP systems have gained growing popularity thanks to, among other things, their ability to integrate and streamline operations and to provide up-to-date information in real time. Even successful ERP projects leave room for improvement. When measuring the success of an ERP project, both quantitative and qualitative metrics should be used. It is easy to rely on quantitative metrics alone, yet the qualitative aspects are often more important. People must be committed to the common goal through communication. Poor first impressions are hard to change. Even if an ERP project seems to have succeeded because everything looks good "on paper", in the end it is the people using the system who decide whether the project was a success. The moment of go-live should be regarded as the first phase of an ERP project.

Relevance:

30.00%

Publisher:

Abstract:

The variability observed in drug exposure has a direct impact on the overall response to a drug. The largest part of the variability between dose and drug response resides in the pharmacokinetic phase, i.e. in the dose-concentration relationship. Among the possibilities offered to clinicians, Therapeutic Drug Monitoring (TDM; the monitoring of drug concentration measurements) is one of the useful tools to guide pharmacotherapy. TDM aims at optimizing treatments by individualizing dosage regimens based on blood drug concentration measurements. Bayesian calculations, relying on a population pharmacokinetic approach, currently represent the gold-standard TDM strategy. However, this approach requires expertise and computational assistance, which has limited its wide implementation in routine patient care. The overall objective of this thesis was to implement robust tools providing Bayesian TDM to clinicians in modern routine patient care. To that end, the aims were (i) to elaborate an efficient and ergonomic computer tool for Bayesian TDM, EzeCHieL; (ii) to provide algorithms for Bayesian forecasting of drug concentrations and for software validation, relying on population pharmacokinetics; and (iii) to address some relevant issues encountered in clinical practice, with a focus on neonates and drug adherence. First, the current state of existing software was reviewed, which allowed specifications to be established for the development of EzeCHieL. Then, in close collaboration with software engineers, a fully integrated software package, EzeCHieL, was developed. EzeCHieL provides population-based predictions, Bayesian forecasting and an easy-to-use interface. It makes it possible to assess how expected an observed concentration is in a patient compared to the whole population (via percentiles), to assess the suitability of the predicted concentration relative to the target concentration, and to provide dosing adjustment. It thus allows both a priori and a posteriori Bayesian individualization of drug dosing.
The implementation of Bayesian methods requires characterising drug disposition and quantifying its variability through a population approach. Population pharmacokinetic analyses were performed and Bayesian estimators were provided for candidate drugs in populations of interest: anti-infective drugs administered to neonates (gentamicin and imipenem). The developed models were implemented in EzeCHieL and also served as a validation tool by comparing EzeCHieL concentration predictions against predictions from the reference software (NONMEM®). The models used need to be adequate and reliable; for instance, extrapolation from adults or children to neonates is not possible. This work therefore proposes models for neonates based on the concept of developmental pharmacokinetics. Patient adherence is also an important concern for the development of drug models and for a successful outcome of pharmacotherapy. A last study assesses the impact of routine patient adherence measurement on model definition and TDM interpretation. In conclusion, our results offer solutions to assist clinicians in interpreting blood drug concentrations and to improve the appropriateness of drug dosing in routine clinical practice.
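The a posteriori individualization described above rests on maximum a posteriori (MAP) Bayesian estimation: a population prior on the pharmacokinetic parameters is combined with the patient's measured concentration. The sketch below is a deliberately minimal illustration of the idea, not the EzeCHieL implementation: a one-compartment IV bolus model with a log-normal prior on clearance and a log-normal residual error, solved by a simple grid search. All names and values are hypothetical.

```python
import math

def map_clearance(c_obs, dose, v, t, cl_pop, omega, sigma):
    """MAP estimate of clearance CL for a one-compartment IV bolus model.

    Model:  C(t) = dose / V * exp(-CL/V * t)
    Prior:  log(CL)    ~ N(log(cl_pop), omega^2)   (between-subject variability)
    Error:  log(c_obs) ~ N(log(C(t)),  sigma^2)    (residual error)
    """
    def neg_log_post(cl):
        c_pred = dose / v * math.exp(-cl / v * t)
        prior = (math.log(cl) - math.log(cl_pop)) ** 2 / (2 * omega ** 2)
        lik = (math.log(c_obs) - math.log(c_pred)) ** 2 / (2 * sigma ** 2)
        return prior + lik

    # A grid search is enough for a one-parameter posterior:
    # scan cl_pop * e^[-1, 1] in small logarithmic steps.
    grid = [cl_pop * math.exp(x / 200) for x in range(-200, 201)]
    return min(grid, key=neg_log_post)
```

For example, with a population clearance of 5 L/h, V = 20 L, a 100 mg dose and an observed 2 mg/L at 4 h (above the population prediction of about 1.84 mg/L), the MAP clearance lands between the purely data-implied value and the population value, shrunk toward the prior.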

Relevance:

30.00%

Publisher:

Abstract:

Many software companies have begun to pay increasing attention to the quality of their software products. As a result, most of them have chosen software testing as the means of improving that quality. Testing should not be limited to the software product itself but should cover the entire software development process. Validation testing focuses on ensuring that the end product meets its requirements, whereas verification testing is preventive testing that aims to eliminate defects before they ever reach the source code. The work on which this Master's thesis is based was carried out during the early spring and summer of 2003, commissioned by Necsom Oy. Necsom is a small Finnish software company whose research and development unit operates in Lappeenranta. This thesis first introduces software testing and different ways of organising it. It also gives general guidelines for writing the test plans and test cases that successful and efficient testing requires. After this theory has been covered, an example is presented of how internal software testing was implemented at Necsom. Finally, the conclusions drawn from following the testing process in practice are presented, together with suggestions for further action.

Relevance:

30.00%

Publisher:

Abstract:

The constantly growing number of mobile phone users and the development of the Internet into a general source of information and entertainment have created a need for a service that connects a mobile workstation to computer networks. GPRS is a new technology that offers a faster, more efficient and more economical connection to packet data networks, such as the Internet and intranets, than existing mobile networks (e.g. NMT and GSM). The goal of this thesis was to implement, in a workstation environment, the communication drivers needed for testing the GPRS Packet Control Unit (PCU). Real mobile networks are too expensive and do not provide sufficient log output to be used for GPRS testing in the early stages of software development. For this reason, PCU software testing is carried out in a more flexible and more easily controlled environment that does not impose hard real-time requirements. The new operating environment and connection medium required a new implementation of the communication drivers, the parts of the software responsible for the connections between the PCU and the other units of the GPRS network. The result of this work is the workstation versions of the required communication drivers. The thesis examines different data transfer methods and protocols from the viewpoint of the software under test, the implemented drivers and testing. The interface implemented by each driver and the degree of implementation, i.e. which functions were implemented and which were left out, are presented. The structure and operation of the drivers are described to the extent relevant to the operation of the software.
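The drivers' role, a fixed messaging interface whose transport is swapped for a workstation-friendly medium, can be sketched as follows. This is illustrative only; the class and method names are invented and are not the thesis's actual driver interfaces.

```python
from queue import Queue

class CommDriver:
    """Interface each communication driver implements (hypothetical names)."""
    def send(self, unit, message):
        raise NotImplementedError
    def receive(self):
        raise NotImplementedError

class LoopbackDriver(CommDriver):
    """Workstation stand-in: routes messages through an in-process queue
    instead of the real network medium, so the software under test can be
    exercised without hard real-time constraints or expensive network gear."""
    def __init__(self):
        self._queue = Queue()

    def send(self, unit, message):
        # In a real driver this would serialize and transmit; here we
        # just hand the message to the in-memory queue.
        self._queue.put((unit, message))

    def receive(self):
        return self._queue.get()
```

Because the test environment implements the same interface as the production drivers, the software under test needs no changes when moved between the two.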

Relevance:

30.00%

Publisher:

Abstract:

Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted as super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The task is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has been strongly drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. Firstly, we briefly review the Bayesian and variational dual formulations of the current state-of-the-art methods dedicated to fetal MRI reconstruction. Secondly, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms against residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization with respect to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
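The inverse-problem formulation can be made concrete with a toy example. The sketch below minimizes a data-fidelity term plus a smoothed Total Variation penalty for a 1-D signal observed through block-average downsampling, using plain explicit gradient descent (the baseline technique mentioned above, not the fast convex solver this work builds on). All sizes and weights are arbitrary.

```python
import math

def tv_sr_1d(y, factor, lam=0.1, step=0.1, iters=300, eps=0.1):
    """Gradient descent on 0.5 * ||D x - y||^2 + lam * TV_eps(x), where D is
    block-average downsampling by `factor` and TV_eps(x) is the smoothed
    total-variation term sum_i sqrt((x[i+1]-x[i])^2 + eps^2).

    A 1-D stand-in for the SR inverse problem: recover a high-resolution
    signal x from a low-resolution observation y while preserving edges.
    """
    n = len(y) * factor
    x = [v for v in y for _ in range(factor)]  # initial nearest-neighbour upsample
    for _ in range(iters):
        # Data term gradient: D^T (D x - y), with D the block average.
        r = [sum(x[i * factor:(i + 1) * factor]) / factor - y[i]
             for i in range(len(y))]
        g = [r[i // factor] / factor for i in range(n)]
        # Smoothed-TV gradient: d / sqrt(d^2 + eps^2) for each difference.
        for i in range(n):
            if i > 0:
                d = x[i] - x[i - 1]
                g[i] += lam * d / math.sqrt(d * d + eps * eps)
            if i < n - 1:
                d = x[i] - x[i + 1]
                g[i] += lam * d / math.sqrt(d * d + eps * eps)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

On a step signal the reconstruction keeps the edge sharp while matching the low-resolution block averages, which is exactly the behaviour that makes TV attractive over quadratic smoothing.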

Relevance:

30.00%

Publisher:

Abstract:

As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs that does not support multicast (one-to-many) traffic, while the other two are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level.
Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and package density compared to traditional 2D ICs. Moreover, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and the power dissipation on each layer with a small performance penalty.
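Congestion-aware output selection in a 2D mesh can be sketched as follows: among the minimal-path directions toward the destination, the router picks the neighbour reporting the lowest congestion. This is an illustrative sketch of the general technique, not the specific algorithms of this thesis; coordinates and direction names are assumptions.

```python
def select_output(current, dest, congestion):
    """Congestion-aware output selection for a 2-D mesh router.

    `current` and `dest` are (x, y) router coordinates; 'E'/'W' move along
    x and 'N'/'S' along y. `congestion` maps a direction to the congestion
    level reported by the neighbouring router in that direction.
    Returns the chosen output port, or None if the packet has arrived.
    """
    x, y = current
    dx, dy = dest
    # Minimal (shortest-path) candidate directions only, so the route
    # stays deadlock-manageable and never takes a detour.
    candidates = []
    if dx > x:
        candidates.append('E')
    elif dx < x:
        candidates.append('W')
    if dy > y:
        candidates.append('N')
    elif dy < y:
        candidates.append('S')
    if not candidates:
        return None  # packet has reached its destination
    # Adaptivity: among minimal directions, prefer the least congested one.
    return min(candidates, key=lambda d: congestion[d])
```

Restricting the choice to minimal directions keeps the path length optimal; the adaptivity lies purely in which of the (at most two) productive ports is taken.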

Relevance:

30.00%

Publisher:

Abstract:

Green IT is a term that covers various tasks and concepts related to reducing the environmental impact of IT. At the enterprise level, Green IT has significant potential to generate sustainable cost savings: the total number of devices is growing and electricity prices are rising. The lifecycle of a computer can be made more environmentally sustainable using Green IT, e.g. by using energy-efficient components and by implementing device power management. The challenge in using power management at the enterprise level is how to measure and follow up the impact of power management policies. In this thesis, a power management feature was developed for a configuration management system. The feature can be used to automatically power PCs down and on according to a pre-defined schedule and to estimate the total power usage of devices. Measurements indicate that with this feature device power consumption can be monitored quite precisely and reduced, which generates electricity cost savings and lessens the environmental impact of IT.
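The kind of estimate such a feature produces can be sketched with simple arithmetic: hours powered on at active draw plus the remaining hours at standby draw, summed over the fleet. This is a minimal illustration under assumed wattages, not the thesis's actual estimation model.

```python
def estimate_energy_kwh(powered_hours_per_day, active_watts, idle_watts,
                        device_count, days=30):
    """Estimate fleet energy use (kWh) under a power-down schedule.

    Devices draw `active_watts` while powered on and `idle_watts`
    (soft-off / standby) for the rest of each 24-hour day.
    """
    per_device_wh = days * (powered_hours_per_day * active_watts
                            + (24 - powered_hours_per_day) * idle_watts)
    return device_count * per_device_wh / 1000.0  # Wh -> kWh
```

For example, 100 PCs powered 10 h/day at 60 W with a 2 W standby draw use 1884 kWh over a 30-day month; comparing that figure against an always-on baseline (24 h at 60 W) is how a policy's savings can be followed up.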

Relevance:

30.00%

Publisher:

Abstract:

In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and can therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their mesh-based counterparts in terms of network cost, system performance and energy efficiency.

Relevance:

30.00%

Publisher:

Abstract:

Pumping processes requiring a wide range of flow are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, the use of variable-speed control allows the required output for the process to be delivered with a varying number of operated pump units and selected rotational speed references. However, the optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information required for system modelling in typical parallel pumping applications, such as wastewater treatment and various cooling and water delivery pumping tasks, can be limited, and the lack of real-time operating point monitoring often sets limits on accurate energy efficiency optimization. Hence, alternative, easily implementable control strategies that can be adopted with minimal system data are necessary. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in system scenarios in which each parallel pump unit consists of a centrifugal pump, an electric motor and a frequency converter. Firstly, the suitable operating conditions for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operating point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operating point estimation and the sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies.
An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operating point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining, for each parallel pump unit, a speed reference that yields energy-efficient operation of the pumping system.
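Characteristic-curve-based operating point estimation can be illustrated with the affinity laws: a quadratic fit of the pump's QH curve at nominal speed scales its head with the square of the relative speed, and the operating point is the intersection with the system curve. The sketch below is a minimal hypothetical example with invented coefficients, not the estimation method implemented in any particular frequency converter.

```python
import math

def operation_point(r, a, b, h_static, k):
    """Operating point of a variable-speed centrifugal pump.

    Pump head at relative speed r (affinity laws applied to a quadratic
    QH-curve fit at nominal speed):   H_pump(Q) = a * r^2 - b * Q^2
    System curve (static head plus friction losses):
                                      H_sys(Q)  = h_static + k * Q^2
    Returns (flow Q, head H) at the intersection of the two curves.
    """
    num = a * r * r - h_static
    if num <= 0:
        return 0.0, h_static  # pump cannot overcome the static head
    q = math.sqrt(num / (b + k))
    return q, h_static + k * q * q
```

Given such an estimate for each unit, a controller can compare candidate speed references and unit counts without any flow meter, which is the premise of the control strategy described above.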

Relevance:

30.00%

Publisher:

Abstract:

In a market where companies similar in size and resources are competing, it is challenging to have any advantage over the others. In order to stay afloat, a company needs the capability to perform with fewer resources and yet provide better service. Hence, the development of efficient processes that can cut costs and improve performance is crucial. As a business expands, its processes become complicated, and large amounts of data need to be managed and available on request. Different tools are used in companies to store and manage data, which facilitates better production and transactions. In the modern business world, the most utilized tool for that purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is to study how competitive advantage can be achieved by implementing a proprietary ERP system in a company: an ERP system that is created in-house and tailor-made to match and align with business needs and processes. The market is full of ERP software, but choosing the right one is a big challenge. Identifying the key features that need improvement in processes and data management, choosing the right ERP, implementing it, and the follow-up constitute a long and expensive journey that companies undergo. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it to their own business needs, while others focus on creating their own system with in-house IT capabilities. In this research, a case company is used, and the author tries to identify and analyze why the organization in question decided to pursue the development of a proprietary ERP system, how it was implemented, and whether it has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important in order to make the right decisions at the strategic level and implement them at the operational level.
The duration of the project in the case company has been longer than anticipated. However, it has been reported that projects buying a ready-made product from a vendor are also delayed and completed over budget. Overall, the implementation of the proprietary ERP in the case company has been successful, both in terms of business performance figures and the usability of the system by employees. As future research, a study statistically comparing the ROI of both approaches, buying a ready-made product and creating one's own ERP, would be beneficial.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the study was to form a strategic process model and a project management tool to help future IFRS change implementation projects. The results were designed based on the theoretical framework of Total Quality Management, drawing on facts collected during the empirical case study of the IAS 17 change. The use of a process-oriented approach for IFRS standard changes after the initial IFRS implementation is justified with the following arguments: 1) well-designed process tools lead to the optimization of resources, and 2) with the help of process stages and related tasks, it is easy to ensure an efficient way of working and managing the project, as well as to make sure that all necessary stakeholders are included in the change process. This research follows a qualitative approach and the analysis is descriptive in format. The first part of the study is a literature review, and the latter part was conducted as a case study. The data were collected in the case company through interviews and observation. The main findings are a process model for the IFRS standard change process and a checklist-formatted management tool for upcoming IFRS standard change projects. The process flow follows the main cornerstones of the IASB's standard-setting process, and the management tool has been divided into stages accordingly.

Relevance:

30.00%

Publisher:

Abstract:

Finnish design and consulting companies deliver robust and cost-efficient steel structure solutions to a large number of manufacturing companies worldwide. The recently introduced EN 1090-2 standard obliges these companies to specify the execution class of steel structures for their customers. This, however, requires clarifying, understanding and interpreting the sophisticated procedure of execution class assignment. The objective of this research is to provide a clear explanation of, and guidance through, the process of execution class assignment for a given steel structure, and to support the implementation of the EN 1090-2 standard at Rejlers Oy, a Finnish design and consulting company. This objective is accomplished by creating a guideline for designers that elaborates the four-step process of execution class assignment for a steel structure or its part. Steps one to three define the consequence class (the projected consequences of structural failure), the service category (the hazards associated with the service use of the steel structure) and the production category (the peculiarities of the manufacturing process), based on the ductility class (the capacity of the structure to withstand deformations) and the behaviour factor (which corresponds to the structure's seismic behaviour). The final step is the execution class assignment, taking the results of the previous steps into account. The main research method is an in-depth literature review of the European family of standards for steel structures. A further research approach is a series of interviews with representatives of Rejlers Oy and its clients, the results of which have been used to evaluate the level of EN 1090-2 awareness. Rejlers Oy will use the developed guideline to improve its services and obtain greater customer satisfaction.
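The four-step procedure reduces, in its final step, to a table lookup keyed on the consequence class, service category and production category. The sketch below shows only the shape of that lookup; the partial default table here is an illustrative placeholder and must not be taken as the normative mapping, which has to be read from EN 1090-2 itself.

```python
def assign_execution_class(cc, sc, pc, table=None):
    """Final step of EN 1090-2 execution class assignment (lookup sketch).

    cc: consequence class, 'CC1'..'CC3'
    sc: service category,  'SC1'..'SC2'
    pc: production category, 'PC1'..'PC2'
    table: the (cc, sc, pc) -> EXC mapping taken from the standard.
           The default entries below are ILLUSTRATIVE placeholders only.
    """
    if table is None:
        table = {
            ('CC1', 'SC1', 'PC1'): 'EXC1',
            ('CC2', 'SC1', 'PC1'): 'EXC2',
            ('CC3', 'SC2', 'PC2'): 'EXC3',
            # remaining combinations to be filled in from the standard
        }
    # EN 1090-2 states that EXC2 applies when no execution class is specified,
    # so it is used here as the fallback.
    return table.get((cc, sc, pc), 'EXC2')
```

Encoding the mapping as data keeps the guideline's decision steps (determine CC, SC and PC first) separate from the standard's table, which can then be maintained in one place.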

Relevance:

30.00%

Publisher:

Abstract:

For the past decades, large-scale educational reforms have been elaborated and implemented in many countries, often resulting in partial or complete failure. These results led researchers to study policy processes in order to address this particular challenge. Studies on implementation processes brought to light a causal relationship between the implementation process and the effectiveness of a reform. This study aims to describe the implementation process of educational change in Finland, which has produced efficient educational reforms over the last 50 years. The case used for the purpose of this study is the national reform of undivided basic education (yhtenäinen peruskoulu) implemented at the end of the 1990s. This research therefore aims to describe how the Finnish undivided basic education reform was implemented. The research was carried out using a pluralist and structuralist approach to the policy process and was analyzed according to the hybrid model of the implementation process. The data were collected using a triangulation of methods, i.e. documentary research, interviews and questionnaires. The data were qualitative and were analyzed using content analysis methods. This study concludes that the undivided basic education reform was applied in a very decentralized manner, which reflects the decentralized system present in Finland. Central authorities provided a clear vision of the purpose of the reform but did not control the implementation process. They rather provided extensive support in the form of information transmission and the development of collaborative networks. Local authorities had complete autonomy in terms of decision-making and the implementation process. Discussions, debates and decisions regarding implementation took place at the local level and included the participation of all actors present in the field.
Implementation methods differ from one region to another, which is a consequence of the varying levels of commitment of local actors as well as the diversity of local realities. The reform was implemented according to existing structures and values, which means that it was in cohesion with the context in which it was implemented. These results cannot be generalized to all implementation processes of educational change in Finland, but they give a good insight into what the model used in Finland could be. Future studies could attempt to confirm the model described here by studying other reforms that have taken place in Finland.


Relevance:

30.00%

Publisher:

Abstract:

The attached file was created with Scientific WorkPlace LaTeX.