996 results for Implementation accuracy


Relevance:

20.00%

Publisher:

Abstract:

This study is qualitative action research with elements of design work, in the form of constructing a tangible model implementation framework. The empirical data were gathered via two questionnaires administered in connection with four workshop events involving twelve participants: five representing maintenance customers, three maintenance service providers, and four equipment providers. The thesis has two main research objectives, corresponding to its two complementary focus areas. First, the value-based life-cycle model, whose first version had been developed prior to this thesis, required updating to increase its real-life applicability as an inter-firm decision-making tool in industrial maintenance. This objective was fulfilled by improving the appearance, intelligibility, and usability of the model, and certain new features were also added. The workshop participants from the collaborating companies were reasonably pleased with the changes, although the model's intelligibility will require further attention in the future, as the main results, charts, and values were all considered somewhat hard to understand; the upgraded appearance and the new features satisfied them the most. Second, and more importantly, the premises of a possible inter-firm implementation process for the model needed to be considered. This objective was delivered in two consecutive steps: first, a bipartite, open-books-supported implementation framework was created and its characteristics discussed in theory; then, the prerequisites and pitfalls of increasing inter-organizational information transparency were studied in an empirical context. One of the main findings was that the organizations are not yet prepared for network-wide information disclosure, as dyadic collaboration was favored instead; they would, however, be willing to share information at least bilaterally. Another major result was that the present state of the companies' cost accounting systems will need enhancement before implementation, since accurate and sufficiently detailed maintenance data are not available. It will also be crucial to create a supporting and mutually agreed network infrastructure, as there are hardly any collaborative models, methods, or tools currently in use. Lastly, the questions of mutual trust and of the predominant purchasing strategies are essential for cooperation: if inter-organizational activities are expanded, a more relational approach should be favored in this regard. Mutual trust was also recognized as a significant cooperation factor, but it is hard to measure in practice.

Relevance:

20.00%

Publisher:

Abstract:

In a just-in-time, assemble-to-order production environment, the scheduling of material requirements and production tasks is difficult but of paramount importance. Enterprise resource planning solutions with master-scheduling functionality have been created to ease this problem, and they work as expected unless there is a disruption in the material flow. This case-based bachelor's thesis introduces a tool for a Microsoft Dynamics AX multisite environment that site managers and production coordinators can use to get an overview of the current open sales order base and to prioritize production in the event of material shortages, so as to avoid partial deliveries.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of steganography is to hide a secret message within other information. Based on the literature, this thesis examines steganography and the digital watermarking of images. The thesis also includes an experimental part, which presents a test system developed for detecting watermarked images, together with the results of the test runs. In the test runs, series of images were watermarked with selected watermarking methods while varying their parameters. Feature extraction is performed on the images to be recognized, and the extracted features are passed as parameters to a classifier, which makes the final detection decision. The study produced working software for embedding a watermark and for detecting watermarked images within a set of images. According to the results, a detection accuracy of over 95 percent can be reached with a suitable feature extractor and a support vector machine classifier.
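The detection pipeline described above (feature extraction followed by classification) can be sketched in Python. The thesis used a support vector machine classifier; this minimal stand-in uses a single toy feature and a fixed threshold, so the feature, threshold, and image representation are purely illustrative assumptions.

```python
def extract_feature(original, candidate):
    """Toy feature: mean absolute pixel difference between a candidate
    image and its unwatermarked original (both flat lists of 0-255 values)."""
    return sum(abs(a - b) for a, b in zip(original, candidate)) / len(original)

def classify(original, candidate, threshold=1.0):
    """Threshold classifier standing in for the SVM used in the thesis:
    flag the candidate as watermarked if the feature exceeds the threshold."""
    return extract_feature(original, candidate) > threshold
```

In the thesis the classifier operated on a vector of extracted features rather than a single scalar, but the decision structure is the same: features in, a binary watermarked/not-watermarked decision out.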

Relevance:

20.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

For certain applications of the polymerase chain reaction (PCR), it may be necessary to consider the accuracy of replication. The breakthrough that made PCR user friendly was the commercialization of Thermus aquaticus (Taq) DNA polymerase, an enzyme that would survive the high temperatures needed for DNA denaturation. The development of enzymes with an inherent 3' to 5' exonuclease proofreading activity, lacking in Taq polymerase, would be an improvement when higher fidelity is needed. We used the forward mutation assay to compare the fidelity of Taq polymerase and Thermotoga maritima (ULTMA™) DNA polymerase, an enzyme that does have proofreading activity. We did not find significant differences in the fidelity of either enzyme, even when using optimal buffer conditions, thermal cycling parameters, and number of cycles (0.2% and 0.13% error rates for ULTMA™ and Taq, respectively, after reading about 3,000 bases each). We conclude that for sequencing purposes there is no difference in using a DNA polymerase that contains an inherent 3' to 5' exonuclease activity for DNA amplification. Perhaps the specificity and fidelity of PCR are complex issues influenced by the nature of the target sequence, as well as by each PCR component.
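As a back-of-the-envelope illustration of how such error rates are expressed, the sketch below computes mutations per base read. The mutation counts are invented so that the arithmetic reproduces the quoted percentages; they are not the study's raw data.

```python
def error_rate(mutations_observed, bases_read):
    """Observed error rate per base: mutations found / bases sequenced."""
    return mutations_observed / bases_read

# Illustrative counts chosen only to reproduce the quoted rates over
# roughly 3,000 bases read per enzyme; not the study's actual counts.
taq = error_rate(4, 3000)    # about 0.13%
ultma = error_rate(6, 3000)  # 0.2%
```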

Relevance:

20.00%

Publisher:

Abstract:

Entrepreneurial marketing is a newly established term, and more specific studies are needed to understand the concept fully. SMEs exhibit entrepreneurial marketing elements more visibly in their marketing and therefore provide fruitful insights for this research. SME marketing has gained more recognition during the past years, and in some cases innovative characteristics can be identified despite constraints such as a lack of certain resources. The purpose of this research is to study entrepreneurial marketing characteristics and SME processes in order to widen the understanding of, and gain more insight into, entrepreneurial marketing. In addition, the planning and implementation of entrepreneurial marketing processes are examined in order to cover the full range of SME marketing activities. The research was conducted as a qualitative study, and data gathering was based on a semi-structured interview survey involving nine company interviews. A multiple-case research design was used to analyze the data so that focus and clarity could be maintained in an organized manner. The case companies were chosen from different business fields so that more variation and insight could be captured. The empirical results suggest that the two examined processes, networking and word-of-mouth communication, are very important for the case companies, which supports previous research. The entrepreneurial marketing characteristics, however, varied: some were more visible and recognizable than others. On closer examination of the processes, the companies did not fully realize that networking and word-of-mouth marketing could be used as efficiently as other, more conventional marketing methods.

Relevance:

20.00%

Publisher:

Abstract:

In a market where companies similar in size and resources are competing, it is challenging to gain any advantage over others. To stay afloat, a company needs the capability to perform with fewer resources and yet provide better service; developing efficient processes that cut costs and improve performance is therefore crucial. As a business expands, its processes become complicated, and large amounts of data need to be managed and made available on request. Different tools are used in companies to store and manage these data and thereby facilitate production and transactions; in the modern business world, the most widely used tool for that purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is on how competitive advantage can be achieved by implementing a proprietary ERP system in the company: an ERP system that is created in-house and tailor-made to match and align with the company's business needs and processes. The market is full of ERP software, but choosing the right package is a major challenge. Identifying the key features that need improvement in processes and data management, choosing the right ERP, implementing it, and following up is a long and expensive journey. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it to their own business needs, while others focus on creating their own system with in-house IT capabilities. In this research a case company is used, and the author seeks to identify and analyze why the organization in question decided to pursue the development of a proprietary ERP system, how the system was implemented, and whether the implementation has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important for making the right decisions at the strategic level and implementing them at the operational level.
The project in the case company has lasted longer than anticipated, but delays and budget overruns have also been reported for projects based on ready-made vendor products. Overall, the case company's implementation of a proprietary ERP has been successful, both in terms of business performance figures and in the usability of the system as judged by employees. As future research, a study statistically comparing the ROI of the two approaches, buying a ready-made product versus creating one's own ERP, would be beneficial.

Relevance:

20.00%

Publisher:

Abstract:

In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade, because the process parameters are optimized for digital transistors and scaling involves a reduction of the supply voltages. In general, the challenge in analog circuit design is that all the salient design metrics, such as power, area, bandwidth, and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility and reconfigurability cannot be easily achieved. Here, it is discussed whether these obstacles can be worked around by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and features a charge-based, built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate-oxide thickness in deep sub-micron (DSM) processes is, however, too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using "double"-oxide transistors, which are intended to provide devices that operate at higher supply voltages than the general-purpose devices. In practice, however, technology scaling poses several challenges, which are addressed in this thesis.
To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected two-layer SNN with 256 FGT synapses, in which the adaptive properties of the FGT are exploited. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
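As an illustration of the learning rule mentioned above, a pair-based exponential STDP weight update can be sketched in software. The time constants and learning-rate amplitudes below are generic textbook assumptions, not values from the thesis or its analog FGT realization.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike pair with dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre depresses it,
    with the magnitude decaying exponentially in |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

In the chip, an update of this kind is realized compactly in the charge domain of each FGT synapse rather than computed digitally; the sketch only shows the input-output shape of the rule.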

Relevance:

20.00%

Publisher:

Abstract:

Innovative gas-cooled reactors, such as the pebble bed reactor (PBR) and the gas-cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas-cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated; the restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open-source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. The Russian ASTRA criticality experiments were calculated: pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate the porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between the discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional, air-cooled, smooth and rib-roughened rod geometries housed inside a hexagonal flow channel representing a sub-channel of a single GFR fuel rod. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared with the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used; an additional calculation with a v2-f turbulence model showed a significant improvement in the heat transfer results, most likely due to the better performance of that model in separated-flow problems. Further investigations are suggested before CFD is used to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries.
It is suggested that the viewpoints of numerical modelling be included in the planning of experiments, to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physics aspects of the experiments should also be considered and documented in reasonable detail.
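The per-cell porosity calculation described above can be illustrated with a small sketch. The original was a MATLAB code; this Python stand-in estimates the void fraction of one axis-aligned mesh cell by Monte Carlo sampling against equal-radius spheres, which is an assumed approach for illustration rather than the thesis's actual algorithm.

```python
import random

def cell_porosity(cell_min, cell_max, centers, radius, samples=20000, seed=1):
    """Monte Carlo estimate of the porosity (void fraction) of one
    axis-aligned CFD cell partially occupied by equal-radius spheres.
    cell_min/cell_max: (x, y, z) corners; centers: sphere centre points."""
    rng = random.Random(seed)
    r2 = radius * radius
    void = 0
    for _ in range(samples):
        # Draw a uniform random point inside the cell
        p = [rng.uniform(lo, hi) for lo, hi in zip(cell_min, cell_max)]
        inside = any(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) <= r2
                     for c in centers)
        void += not inside
    return void / samples
```

Applied over every cell of the mesh, this yields the porosity field that the coupled thermal-hydraulics model consumes; an empty cell returns 1.0 and a fully covered cell returns 0.0.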

Relevance:

20.00%

Publisher:

Abstract:

The limiting dilution assay (LDA) is extensively used by immunologists to assess the frequency of responders in a cell population. A series of studies addressing the statistical method of choice in an LDA have been published; however, none of these studies has addressed how many wells should be employed in a given assay. The objective of this study was to demonstrate how a researcher can predict the number of wells needed to obtain results with a given accuracy and, therefore, to help in choosing an experimental design that better fulfills one's expectations. We present the rationale underlying the computation of the expected relative error, based on simple binomial distributions. A series of simulated in machina experiments were performed to test the validity of the a priori computation of expected errors, confirming the predictions. The step-by-step procedure of the relative error estimation is given. We also discuss the constraints under which an LDA must be performed.
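The kind of a priori computation described above can be sketched with the standard relative-error formula for a binomial proportion; the exact expressions used in the paper may differ, so the formula and example parameters here are assumptions for illustration.

```python
import math

def relative_error(p, n):
    """Expected relative error of a binomial proportion estimated from n wells:
    sd(p_hat) / p = sqrt(p * (1 - p) / n) / p = sqrt((1 - p) / (p * n))."""
    return math.sqrt((1 - p) / (p * n))

def wells_needed(p, rel_err):
    """Smallest number of wells n for which the expected relative error
    of the estimated positive-well fraction p stays at or below rel_err."""
    return math.ceil((1 - p) / (p * rel_err ** 2))
```

For example, if 20% of wells are expected to score positive, reaching a 10% expected relative error requires on the order of 400 wells, which shows why this calculation matters when planning plate counts.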

Relevance:

20.00%

Publisher:

Abstract:

In recent years the phenomenon called crowdsourcing has been acknowledged as an innovative form of value creation that must be taken seriously. Crowdsourcing can be defined as the act of outsourcing tasks originally performed inside an organization, or assigned externally in the form of a business relationship, to an undefined, large, heterogeneous mass of potential actors. This thesis constructs a framework for the successful implementation of crowdsourcing initiatives. Firms that rely entirely on their own research and ideas cannot compete with the innovative capacity of crowd-powered firms. Crowdsourcing has become one of the key capabilities of businesses because it adds external innovative capacity to the firm's existing internal resources: by utilizing crowdsourcing, the business gains access to an enormous pool of competence and knowledge. However, various risks remain, such as uncertainty about the structure of the crowd and the loss of internal know-how. The Crowdsourcing Success Framework introduces a step-by-step model for implementing crowdsourcing into the everyday operations of the business, starting from the decision to utilize crowdsourcing and continuing through planning, organizing, and execution. Finally, the thesis presents the success factors of a crowdsourcing initiative.

Relevance:

20.00%

Publisher:

Abstract:

The main objective of the study was to form a strategic process model and a project management tool to support future IFRS change implementation projects. These research results were designed on the basis of the theoretical framework of Total Quality Management and on the facts collected during an empirical case study of the IAS 17 change. The use of a process-oriented approach for IFRS standard changes after the initial IFRS implementation is justified with the following arguments: 1) well-designed process tools lead to the optimization of resources, and 2) with the help of process stages and related tasks, it is easy to ensure an efficient way of working and of managing the project, as well as to include all necessary stakeholders in the change process. The research follows a qualitative approach, and the analysis is descriptive in format. The first part of the study is a literature review, and the latter part was conducted as a case study; the data were collected in the case company through interviews and observation. The main findings are a process model for the IFRS standard change process and a checklist-formatted management tool for upcoming IFRS standard change projects. The process flow follows the main cornerstones of the IASB's standard-setting process, and the management tool has been divided into stages accordingly.