18 results for Processing Time
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating the simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm and the execution environment. In this thesis a conservative, parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. This thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. Novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the simulation time required. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. The multiple message simulation technique forms groups of messages, as it simulates several messages before it releases the newly created messages. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy for the simulation results is required. Distributed simulation is also analyzed in order to find out the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis. Critical path analysis allows the determination of a lower bound for the simulation time. In this thesis critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
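The thesis builds on conservative synchronization with null messages. As a rough illustration of the basic idea (not of Diworse itself, nor of its null message cancellation, multiple message simulation and channel reduction modifications), the following Python sketch shows a logical process that only consumes events proven safe by its input channel clocks and that advertises a lower bound on its future sends via null messages; all names and parameters are illustrative.

```python
import heapq

class LogicalProcess:
    """Minimal conservative logical process (LP): an event may be consumed only
    when every input channel guarantees that no earlier message can still arrive."""

    def __init__(self, name, lookahead, neighbors):
        self.name = name
        self.lookahead = lookahead      # minimum delay added to every outgoing timestamp
        self.neighbors = neighbors      # names of LPs this LP sends to
        self.channel_clock = {}         # latest timestamp seen per input channel
        self.event_queue = []           # min-heap of (timestamp, payload)

    def receive(self, sender, timestamp, payload=None):
        # A null message (payload is None) only advances the channel clock;
        # a newer null message simply supersedes older ones on the same channel.
        self.channel_clock[sender] = timestamp
        if payload is not None:
            heapq.heappush(self.event_queue, (timestamp, payload))

    def safe_time(self):
        # Events with timestamps below every channel clock can no longer be preempted.
        return min(self.channel_clock.values()) if self.channel_clock else 0.0

    def step(self, send):
        """Consume all safe events, then broadcast a null message as a promise
        that nothing earlier than safe_time() + lookahead will be sent later."""
        while self.event_queue and self.event_queue[0][0] <= self.safe_time():
            ts, payload = heapq.heappop(self.event_queue)
            for n in self.neighbors:
                send(self.name, n, ts + self.lookahead, payload)
        for n in self.neighbors:
            send(self.name, n, self.safe_time() + self.lookahead, None)
```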
Abstract:
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim in this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: The first two applications are binary classification and regression related to the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding. It is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to the evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
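The dissertation introduces its own hold-out method for evaluation; as background, the sketch below shows a conventional repeated hold-out evaluation of a text classifier with scikit-learn. The toy documents, labels and model choice are placeholders, not material from the dissertation.

```python
# Conventional repeated hold-out evaluation of a text classifier (scikit-learn).
# The toy clinical-style documents and labels below are placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

docs = ["patient stable overnight", "blood pressure dropped suddenly",
        "pain well controlled", "fever spiked in the evening",
        "oxygen saturation fell overnight", "ventilator settings unchanged"]
labels = np.array([0, 1, 0, 1, 1, 0])        # e.g. 1 = clinically notable event

scores = []
for seed in range(5):                        # repeat the split to estimate variance
    X_train, X_test, y_train, y_test = train_test_split(
        docs, labels, test_size=0.33, stratify=labels, random_state=seed)
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_train), y_train)
    scores.append(f1_score(y_test, clf.predict(vec.transform(X_test))))

print(f"F1: {np.mean(scores):.2f} +/- {np.std(scores):.2f}")
```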
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
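For readers unfamiliar with memristor dynamics, the following sketch integrates the linear dopant-drift memristor model of Strukov et al. (2008) with explicit Euler steps. It is a textbook illustration of how the memristance depends on the history of the current through the device, not the device model developed in the thesis, and all parameter values are assumed.

```python
# Minimal sketch of the linear dopant-drift memristor model (Strukov et al., 2008),
# integrated with explicit Euler; parameter values are illustrative only.
import math

R_ON, R_OFF = 100.0, 16e3       # ohm, limiting resistances
D = 10e-9                       # m, device thickness
MU_V = 1e-14                    # m^2 s^-1 V^-1, dopant mobility
dt, f = 1e-5, 1.0               # time step (s) and drive frequency (Hz)

x = 0.1                         # normalized state w/D, kept in [0, 1]
for n in range(200_000):        # two periods of a 1 Hz sinusoidal drive
    t = n * dt
    v = 1.2 * math.sin(2 * math.pi * f * t)
    m = R_ON * x + R_OFF * (1.0 - x)        # memristance for the current state
    i = v / m
    x += dt * MU_V * R_ON / D**2 * i        # dw/dt = mu_v * R_on / D * i, with x = w/D
    x = min(max(x, 0.0), 1.0)               # hard window: keep the state in bounds

print(f"final memristance: {R_ON * x + R_OFF * (1 - x):.0f} ohm")
```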
Abstract:
This thesis work focused on the carbonate precipitation of magnesium using magnesium hydroxide (Mg(OH)2) and carbon dioxide (CO2) gas at ambient temperature and pressure. The rate of dissolution of Mg(OH)2 and the precipitation kinetics were investigated under different operating conditions. The conductivity and pH of the solution were monitored in-line with a Consort meter, and the solid samples obtained from the precipitation reaction were analysed with a Malvern Mastersizer laser diffraction analyzer to obtain particle size distributions (PSD) of the crystal samples. The Mg2+ concentration profiles were also determined from the liquid phase of the precipitate by ion chromatography (IC) analysis. The crystal morphology of the obtained precipitates was also investigated and discussed in this work. For the carbonation reaction of magnesium hydroxide in the present work, it was found that magnesium carbonate trihydrate (nesquehonite) was the main product and that its formation occurred at a pH of around 7-8. The stirrer speed has a significant effect on the dissolution rate of Mg(OH)2. The highest obtained Mg2+ concentration level was 0.424 mol L-1 for 470 rpm and 0.387 mol L-1 for 560 rpm, which corresponded to processing times of 45 min and 40 min, respectively. The particle size distribution shows that the average particle size keeps increasing during the reaction as CO2 is being fed to the system. The carbonation process is kinetically favored and simple, as nesquehonite formation occurs in a very short time. Nesquehonite is a thermodynamically and chemically stable solid product, which allows for long-term storage of CO2. Since the carbonation reaction is a complex system which includes dissolution of magnesium hydroxide particles, absorption of CO2, chemical reaction and crystallization, the dissolution of magnesium hydroxide was also studied in hydrochloric acid (HCl) solvent with and without nitrogen (N2) inert gas. In the dissolution part it was found that the impeller speed had an effect on the dissolution rate. The higher the impeller speed, the higher the pH of the solution, although this was not the case for the highest speed of 650 rpm. Therefore, it was concluded that the optimum stirrer speed was 560 rpm. The influence of the inert gas N2 on the dissolution rate of Mg(OH)2 particles could be seen from the measured pH, electric conductivity and Mg2+ concentration curves.
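A back-of-envelope stoichiometric check of the overall carbonation reaction, Mg(OH)2 + CO2 + 2 H2O -> MgCO3·3H2O (nesquehonite), illustrates the CO2 storage capacity discussed above; the feed mass and conversion in the sketch are assumed values, not results from the thesis.

```python
# Back-of-envelope stoichiometry for Mg(OH)2 + CO2 + 2 H2O -> MgCO3·3H2O.
# Feed mass and conversion are assumed; this is an illustration, not thesis data.
M_MGOH2 = 58.32          # g/mol
M_CO2 = 44.01            # g/mol
M_NESQUEHONITE = 138.36  # g/mol, MgCO3·3H2O

feed_mgoh2 = 1000.0      # g of Mg(OH)2 fed (assumed)
conversion = 0.90        # assumed fractional conversion to nesquehonite

mol_reacted = conversion * feed_mgoh2 / M_MGOH2
print(f"CO2 bound:            {mol_reacted * M_CO2:.0f} g")
print(f"nesquehonite formed:  {mol_reacted * M_NESQUEHONITE:.0f} g")
```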
Abstract:
Crystallization is employed in different industrial processes. The method and operation can differ depending on the nature of the substances involved. The aim of this study is to examine the effect of various operating conditions on the crystal properties within a chemical engineering design window, with a focus on ultrasound-assisted cooling crystallization. Continuous crystallization seeks to eliminate batch-to-batch variations while offering fewer manufacturing steps and faster production times. Scale-up of continuous processes is considered straightforward compared to batch processes, owing to the increase of processing time in the specific reactor. In the cooling crystallization process, ultrasound can be used to control the crystal properties. Different model compounds were used to define suitable process parameters for the modular crystallizer, using equal operating conditions in each module. A final temperature of 20 °C was employed in all experiments while the other operating conditions differed. The studied process parameters and the configuration of the crystallizer were manipulated to achieve continuous operation without crystal clogging along the crystallization path. The results from the continuous experiments were compared with the batch crystallization results and analysed using the Malvern Morphologi G3 instrument to determine the crystal morphology and CSD. The modular crystallizer was operated successfully with three different residence times. At optimal process conditions, a longer residence time gives smaller crystals and a narrower CSD. Based on the findings, at a constant initial solution concentration, the residence time had a clear influence on the crystal properties. The equal-supersaturation criterion in each module gave better results than the other cooling profiles. The combination of continuous crystallization and ultrasound has large potential to overcome clogging, to obtain reproducible and narrow CSDs, specific crystal morphologies and uniform particle sizes, and to exclude milling stages, in comparison to batch processes.
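For reference, the residence time of a continuously operated module follows directly from tau = V/Q; the short sketch below applies this to a train of modules in series. The module volumes and flow rate are assumed values, not the experimental settings of the study.

```python
# Illustrative residence-time bookkeeping for a modular crystallizer in continuous
# operation; module volumes and flow rate are assumed, not the study's settings.
module_volume_ml = [50.0, 50.0, 50.0]   # three identical modules in series (assumed)
flow_rate_ml_min = 10.0                 # volumetric feed rate (assumed)

tau_per_module = [v / flow_rate_ml_min for v in module_volume_ml]   # tau = V / Q
total_tau = sum(tau_per_module)
print(f"residence time per module: {tau_per_module[0]:.1f} min")
print(f"total residence time:      {total_tau:.1f} min")
```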
Abstract:
This master's thesis deals with the measurement of paper surface roughness, which is one of the central problems in the study of paper materials. The measurement methods used in the paper industry have many drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, strict requirements on laboratory conditions, and slowness. This work investigated methods based on optical scattering for determining surface roughness. Machine vision and image processing techniques were studied on rough paper surfaces. The algorithms used in the study were implemented in Matlab®. The results obtained demonstrate the possibility of measuring surface roughness by means of imaging. The best correspondence between the traditional and the imaging-based methods was obtained with a method based on the fractal dimension.
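The best-performing imaging method was based on the fractal dimension. As a generic illustration (the original algorithms were written in Matlab®, and the exact estimator used in the thesis may differ), the sketch below estimates the fractal dimension of a thresholded surface image by standard box counting.

```python
# Generic box-counting estimate of the fractal dimension of a binarized surface
# image; a textbook approach, not necessarily the thesis's exact method.
import numpy as np

def box_count_dimension(binary_img):
    """Estimate the fractal dimension of a 2-D boolean array by box counting."""
    n = min(binary_img.shape)
    n = 2 ** int(np.floor(np.log2(n)))                    # largest power-of-two window
    img = binary_img[:n, :n]
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)    # box sizes n/2 ... 2
    counts = []
    for s in sizes:
        blocks = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())                       # boxes containing any pixel
    # dimension = slope of log(count) against log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
demo = rng.random((256, 256)) > 0.5                       # placeholder for a thresholded image
print(f"estimated fractal dimension: {box_count_dimension(demo):.2f}")
```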
Abstract:
Industrial applications nowadays increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing. To achieve it, both the hardware and the software must be tested. The main focus of this work is on hardware testing and hardware testability, because a reliable hardware platform is the foundation for future real-time systems. The thesis presents the design of a processor board suitable for digital signal processing. The processor board is intended for predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied in the design of the processor board together with older methods. Experiences and observations concerning the suitability of the methods are reported at the end of the work. The aim of the work is to develop a sub-component for a web-based monitoring system that has been developed at the Department of Electrical Engineering at Lappeenranta University of Technology.
Abstract:
Raw measurement data does not always immediately convey useful information, but applying mathematical and statistical analysis tools to the measurement data can improve the situation. Data analysis can offer benefits like acquiring meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, mathematical tools like Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB) are used. Later on, linear regression is used to build a model based on a subset of variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both the SB and linear regression models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model can fit the whole available dataset well, and it is therefore proposed as future work to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
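As a schematic of the modeling step described above, the sketch below fits a linear regression on a subset of variables and compares the variance of the predictions with the variance of the response, which is where the underestimation noted in the abstract would show up. The synthetic data and the selected variable indices are placeholders for the plant measurements and the SB weights.

```python
# Fit a linear model on a subset of process variables and compare predicted and
# true variance; synthetic data stands in for the plant measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))               # 5 candidate process variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=1.0, size=200)

selected = [0, 2]                           # variables with the largest (assumed) SB weights
model = LinearRegression().fit(X[:, selected], y)
y_hat = model.predict(X[:, selected])

print(f"R^2:                     {r2_score(y, y_hat):.2f}")
print(f"variance of true data:   {y.var():.2f}")
print(f"variance of predictions: {y_hat.var():.2f}")   # underestimation shows up here
```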
Abstract:
The ability of the supplier firm to generate and utilise customer-specific knowledge has attracted increasing attention in the academic literature during the last decade. It has been argued that customer knowledge should be treated as a strategic asset, just like any other intangible asset. Yet, at the same time it has been shown that the management of customer-specific knowledge is challenging in practice, and that many firms are better at acquiring customer knowledge than at making use of it. This study examines customer knowledge processing in the context of key account management in large industrial firms. This focus was chosen because key accounts are demanding and complex. It is not unusual for a single key account relationship to constitute a complex web of relationships between the supplier and the key account, thus easily leading to the dispersion of customer-specific knowledge in the supplier firm. Although the importance of customer-specific knowledge generation has been widely acknowledged in the literature, surprisingly little attention has been paid to the processes through which firms generate, disseminate and use such knowledge internally for enhancing the relationships with their major, strategically important key account customers. This thesis consists of two parts. The first part comprises a theoretical overview and draws together the main findings of the study, whereas the second part consists of five complementary empirical research papers based on survey data gathered from large industrial firms in Finland. The findings suggest that the management of customer knowledge generated about and from key accounts is a three-dimensional process consisting of acquisition, dissemination and utilization. It can be concluded from the results that customer-specific knowledge is a strategic asset, because the supplier's customer knowledge processing activities have a positive effect on the supplier's key account performance. Moreover, in examining the determinants of each phase separately, the study identifies a number of intra-organisational factors that facilitate the process in supplier firms. The main contribution of the thesis lies in linking the concept of customer knowledge processing to the previous literature on key account management. Moreover, given that this literature is mainly conceptual or case-based, a further contribution is to examine its consequences and determinants based on quantitative empirical data.
Abstract:
In dentistry, yttrium partially stabilized zirconia (ZrO2) has become one of the most attractive ceramic materials for prosthetic applications. The aim of this series of studies was to evaluate whether certain treatments used in the manufacturing process, such as sintering time, color shading or heat treatment of zirconia, affect the material properties. Another aim was to evaluate the load-bearing capacity and marginal fit of manually copy-milled custom-made versus prefabricated commercially available zirconia implant abutments. Mechanical properties such as flexural strength and surface microhardness were determined for green-stage milled and sintered yttrium partially stabilized zirconia after different sintering times, coloring processes and heat treatments. Scanning electron microscopy (SEM) was used for analyzing the possible changes in the surface structure of the zirconia material after reduced sintering time, coloring and heat treatments. A possible phase change from the tetragonal to the monoclinic phase was evaluated by X-ray diffraction analysis (XRD). The load-bearing capacity of different implant abutments was measured, and the fit between abutment and implant replica was examined with SEM. The results of these studies showed that the shorter sintering time or the thermocycling did not affect the strength or surface microhardness of zirconia. Coloring of zirconia decreased strength compared to un-colored control zirconia, and some of the colored zirconia specimens also showed a decrease in surface microhardness. Coloring also affected the dimensions of zirconia: significantly decreased shrinkage was found for colored zirconia specimens during sintering. Heat treatment of zirconia did not seem to affect the material's mechanical properties, but when a thin coating of wash and glaze porcelain was fired on the tensile side of the disc, the flexural strength decreased significantly. Furthermore, it was found that thermocycling increased the monoclinic phase on the surface of the zirconia. Color shading or heat treatment did not seem to affect the phase transformation, but small monoclinic peaks were detected on the surface of the heat-treated specimens with a thin coating of wash and glaze porcelain on the opposite side. Custom-made zirconia abutments showed a load-bearing capacity comparable to that of the prefabricated commercially available zirconia abutments. However, the fit of the custom-made abutments was less satisfactory than that of the commercially available abutments. These studies suggest that zirconia is a durable material and that treatments other than color shading used in the manufacturing process of the zirconia bulk material do not affect the material's strength. The decrease in strength and the dimensional changes after color shading need to be taken into account when fabricating zirconia substructures for fixed dental prostheses. Manually copy-milled custom-made abutments have acceptable load-bearing capacity, but the marginal accuracy has to be evaluated carefully.
Abstract:
Chaotic behaviour is one of the hardest problems that can occur in nonlinear dynamical systems with severe nonlinearities. It makes the system's responses unpredictable and causes them to behave similarly to noise. In some applications it should be avoided. One approach to detecting chaotic behaviour is finding the Lyapunov exponent by examining the dynamical equation of the system, but this requires a model of the system. The goal of this study is the diagnosis of chaotic behaviour by exploring only the data (signal), without using any dynamical model of the system. In this work two methods are tested on time series data collected from the sensors of an AMB (Active Magnetic Bearing) system. The first method finds the largest Lyapunov exponent using the Rosenstein method. The second method is a 0-1 test for identifying chaotic behaviour. These two methods are used to detect whether the data is chaotic. The Rosenstein method requires finding the minimum embedding dimension, for which the Cao method is used. The Cao method does not give just the minimum embedding dimension; it also gives the order of the nonlinear dynamical equation of the system and shows how much the system's signals are corrupted by noise. At the end of this research a runs test is introduced to show that the data is not excessively noisy.
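The 0-1 test mentioned above can be sketched compactly. The version below follows the Gottwald-Melbourne construction (translation variables, mean square displacement, correlation with time) and is demonstrated on the logistic map as a stand-in for the AMB sensor data; it is an illustrative implementation, not the exact one used in the study.

```python
# Compact 0-1 test for chaos: K close to 1 suggests chaotic dynamics, K close to 0
# regular dynamics. Demonstrated on the logistic map, not on AMB sensor data.
import numpy as np

def zero_one_test(x, n_c=50, rng=np.random.default_rng(1)):
    x = np.asarray(x, dtype=float)
    N = len(x)
    n_cut = N // 10                          # use only short displacements for M(n)
    ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        j = np.arange(1, N + 1)
        p = np.cumsum(x * np.cos(j * c))     # translation variables
        q = np.cumsum(x * np.sin(j * c))
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in range(1, n_cut)])
        ks.append(np.corrcoef(np.arange(1, n_cut), M)[0, 1])   # growth-rate correlation
    return np.median(ks)

def logistic(r, n=2000, x0=0.3):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

print(f"K (r=3.99): {zero_one_test(logistic(3.99)):.2f}")   # expected near 1 (chaotic)
print(f"K (r=3.20): {zero_one_test(logistic(3.20)):.2f}")   # expected near 0 (periodic)
```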
Abstract:
The purpose of the study is to examine and increase knowledge on customer knowledge processing in a B2B context from a sales perspective. Further objectives include identifying possible inhibiting and enabling factors in each phase of the process. The theoretical framework is based on the customer knowledge management literature. The study is a qualitative study, in which the research method utilized is a case study. The empirical part was implemented in a case company by conducting in-depth interviews with the company's value-selling champions located internationally. The context was the maintenance business. Altogether 17 interviews were conducted. The empirical findings indicate that customer knowledge processing has not been clearly defined within the maintenance business line. The main inhibiting factors in acquiring customer knowledge are lack of time and the vast amount of customer knowledge received. The enabling factors recognized are good customer relationships and sales representatives' communication skills. Internal dissemination of knowledge is mainly inhibited by lack of time and by restrictions in customer relationship management systems; enabling factors are the composition of the sales team and updated customer knowledge. Utilization is inhibited by a lack of goals for using the customer knowledge and by the low quality of the knowledge. Moreover, customer knowledge is not systematically updated nor analysed. Management of customer knowledge is based on the CRM system. As an implication of the study, it is suggested that the case company define customer knowledge processing in order to support the maintenance business process.
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, speed and acceleration were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy for the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves calculated from the tracking data. Local Regression filtering and Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results show that tracking and filtering methods are suitable for high-speed hand-tracking and trajectory-data post-processing.
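As an illustration of the trajectory post-processing step, the sketch below smooths a tracked 2-D position signal and differentiates it into speed and acceleration curves. It uses Savitzky-Golay smoothing and finite differences as a generic stand-in for the Local Regression filtering and Unscented Kalman Smoother evaluated in the thesis; the synthetic trajectory and frame rate are assumed.

```python
# Turn a tracked 2-D finger trajectory into speed and acceleration curves:
# Savitzky-Golay smoothing plus finite differences (generic pipeline, not the
# thesis's exact filters); the synthetic trajectory is a placeholder.
import numpy as np
from scipy.signal import savgol_filter

fps = 500.0                                   # assumed high-speed camera frame rate
t = np.arange(0, 1.0, 1.0 / fps)
x = 100 * np.sin(2 * np.pi * 2 * t) + np.random.default_rng(0).normal(0, 0.5, t.size)
y = 50 * t ** 2 + np.random.default_rng(1).normal(0, 0.5, t.size)

# smooth the pixel coordinates before differentiating to keep noise from exploding
xs = savgol_filter(x, window_length=31, polyorder=3)
ys = savgol_filter(y, window_length=31, polyorder=3)

vx, vy = np.gradient(xs, 1.0 / fps), np.gradient(ys, 1.0 / fps)
ax, ay = np.gradient(vx, 1.0 / fps), np.gradient(vy, 1.0 / fps)

speed = np.hypot(vx, vy)                      # px/s
accel = np.hypot(ax, ay)                      # px/s^2
print(f"peak speed: {speed.max():.0f} px/s, peak acceleration: {accel.max():.0f} px/s^2")
```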
Abstract:
The aim of this master's thesis is to research and analyze how purchase invoice processing can be automated and streamlined in a system renewal project. The impacts of workflow automation on invoice handling are studied in terms of time, cost and quality aspects. Purchase invoice processing has a lot of potential for automation because of its labor-intensive and repetitive nature. As a case study combining both qualitative and quantitative methods, the topic is approached from a business process management point of view. The current process was first explored through interviews and workshop meetings to create a holistic understanding of the process at hand. Requirements for process streamlining were then researched focusing on specified vendors and their purchase invoices, which helped to identify the critical factors for successful invoice automation. To optimize the flow from invoice receipt to approval for payment, the invoice receiving process was outsourced and the automation functionalities of the new system were utilized in invoice handling. The quality of invoice data and the need for simple structured purchase order (PO) invoices were emphasized in the system testing phase. Hence, consolidated invoices containing references to multiple PO or blanket release numbers should be simplified in order to use automated PO matching. With non-PO invoices, it is important to receive the buyer reference details in an applicable invoice data field so that automation rules can be created to route invoices to a review and approval flow. In the beginning of the project, invoice processing was seen as inefficient in terms of both time and cost, and it required a lot of manual labor to carry out all tasks. In accordance with the testing results, it was estimated that over half of the invoices could be automated within a year after system implementation. Processing times could be reduced remarkably, which would then result in savings of up to 40 % in annual processing costs. Due to several advancements in the purchase invoice process, business process quality could also be perceived as improved.
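The routing logic described above can be summarized schematically. The sketch below encodes one possible rule set: single-PO invoices go to automated matching, non-PO invoices with a usable buyer reference go to a review and approval flow, and the rest fall back to manual handling. The field names and categories are hypothetical, not those of the case company's system.

```python
# Schematic invoice-routing rule in the spirit described above; all field names
# and flow labels are hypothetical.
def route_invoice(invoice: dict) -> str:
    po_refs = invoice.get("po_numbers", [])
    if len(po_refs) == 1:
        return "auto_po_matching"            # simple structured PO invoice
    if len(po_refs) > 1:
        return "manual_handling"             # consolidated invoice: simplify upstream
    if invoice.get("buyer_reference"):
        return "review_and_approval_flow"    # non-PO invoice with a usable reference
    return "manual_handling"

print(route_invoice({"po_numbers": ["4500123"], "buyer_reference": None}))
print(route_invoice({"po_numbers": [], "buyer_reference": "j.doe"}))
```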