28 results for Scheduling algorithms and analysis


Abstract:

Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
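As a minimal sketch of the kind of simplification pipeline surveyed here, the snippet below decimates a triangle mesh with quadric error metrics, one of the most commonly used simplification algorithms; the Open3D library and the hypothetical input file model.stl are my assumptions, not choices named in the paper.

```python
import open3d as o3d

# Load a (hypothetical) detailed CAD mesh; STL is a common CAD exchange format.
mesh = o3d.io.read_triangle_mesh("model.stl")
mesh.compute_vertex_normals()
print(f"original: {len(mesh.triangles)} triangles")

# Quadric-error-metric decimation: repeatedly collapse the edge whose
# collapse introduces the least geometric error, until the target is met.
target = max(1000, len(mesh.triangles) // 10)  # keep roughly 10% of the triangles
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simplified.compute_vertex_normals()
print(f"simplified: {len(simplified.triangles)} triangles")

o3d.io.write_triangle_mesh("model_simplified.stl", simplified)
```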

Abstract:

Diabetic retinopathy, age-related macular degeneration and glaucoma are the leading causes of blindness worldwide. Automatic methods for diagnosis exist, but their performance is limited by the quality of the data. Spectral retinal images provide a significantly better representation of the colour information than common grayscale or red-green-blue retinal imaging, and have the potential to improve the performance of automatic diagnosis methods. This work studies the image processing techniques required for composing spectral retinal images with accurate reflection spectra, including wavelength channel image registration, spectral and spatial calibration, illumination correction, and the estimation of depth information from image disparities. The composition of a spectral retinal image database of patients with diabetic retinopathy is described. The database includes gold standards for a number of pathologies and retinal structures, marked by two expert ophthalmologists. The diagnostic applications of the reflectance spectra are studied using supervised classifiers for lesion detection. In addition, inversion of a model of light transport is used to estimate histological parameters from the reflectance spectra. Experimental results suggest that the methods for composing, calibrating and postprocessing spectral images presented in this work can be used to improve the quality of the spectral data. The experiments on the direct and indirect use of the data show the diagnostic potential of spectral retinal data over standard retinal images. The use of spectral data could improve automatic and semi-automated diagnostics for the screening of retinal diseases, for the quantitative detection of retinal changes in follow-up, for clinically relevant end-points in clinical studies, and for the development of new therapeutic modalities.
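As an illustrative sketch of the wavelength-channel registration step, the code below aligns two spectral channel images by phase correlation; scikit-image and a pure-translation misalignment model are my assumptions here, not choices stated in the thesis.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_channel(reference, channel):
    """Align one wavelength channel to a reference channel,
    assuming the misalignment is (approximately) a pure translation."""
    # Estimated (row, col) shift needed to register `channel` with
    # `reference`, refined to subpixel accuracy by upsampling.
    offset, _error, _phase = phase_cross_correlation(
        reference, channel, upsample_factor=10)
    # Apply the estimated shift to bring the channel into registration.
    return nd_shift(channel, shift=offset)

# Synthetic demo: a random "retina" and a translated copy of it.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
moving = nd_shift(ref, shift=(3.0, -5.0))
aligned = register_channel(ref, moving)
print(np.abs(aligned - ref).mean())  # small residual after registration
```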

Abstract:

The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. Especially the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine. The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on the analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method studies. This eliminates the need for developing analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
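The site-matching question rests on a standard calculation: weighting the turbine's power curve, including drive-train losses, by the site's wind speed distribution to estimate annual energy production. The sketch below does this with a Weibull distribution; the Weibull parameters, the power curve, and the flat 94% drive-train efficiency are illustrative assumptions, not values from the dissertation, which instead obtains losses from time-domain simulation.

```python
import numpy as np

# Site model: Weibull wind speed distribution (illustrative parameters).
k, c = 2.0, 7.5                                     # shape [-], scale [m/s]
v = np.linspace(0.0, 30.0, 3001)                    # wind speed grid [m/s]
pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

# Turbine model: a simple illustrative power curve [W] with
# cut-in 3 m/s, rated 12 m/s at 2 MW, cut-out 25 m/s.
rated = 2.0e6
power = np.where((v >= 3) & (v < 12), rated * ((v - 3) / 9) ** 3, 0.0)
power = np.where((v >= 12) & (v <= 25), rated, power)

# Drive-train losses: here a flat 94% efficiency; a time-domain simulation
# would instead give efficiency as a function of the operating point.
eta = 0.94

# Annual energy production: integrate power over the wind distribution.
hours_per_year = 8760.0
aep_wh = hours_per_year * np.trapz(eta * power * pdf, v)
print(f"AEP ~ {aep_wh / 1e9:.2f} GWh/year")
```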

Abstract:

The main objective of this study is to evaluate whether four software alternatives are adequate tools for production scheduling and which of the tools suits the case company. Secondary objectives are to describe the current and target state of production scheduling by means of process modelling, to determine the user needs for the tool, and to define prioritized selection criteria for the tool. The theoretical part of the study examines the logic and challenges of production scheduling. The selection of scheduling software is examined in parallel with process modelling, and the scheduling software alternatives and the methods for determining user needs are reviewed. The empirical part establishes the relation of the study to the case company's strategy. User needs are determined through interviews and analysed with a QFD matrix. The current and target state processes of the case company's production scheduling are modelled so that the suitability of the software as a tool supporting the scheduling process can be evaluated. The results of the study are prioritized selection criteria for the scheduling tool, i.e. the most important functional properties derived from the user needs, an evaluation of the system vendors, and recommendations for further actions and further research.
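A QFD (house of quality) prioritization like the one described reduces to a weighted matrix product: user-need importance weights times a need-versus-feature relationship matrix give an importance score per functional property. The sketch below shows the arithmetic with invented needs, features, and weights, since the actual criteria are the thesis's result.

```python
import numpy as np

# Hypothetical user needs with importance weights from interviews (1-5).
needs = ["fast rescheduling", "capacity visibility", "ERP integration"]
weights = np.array([5, 4, 3])

# Hypothetical tool features, and the QFD relationship matrix:
# rows = needs, columns = features, with strengths 0 / 1 / 3 / 9.
features = ["drag-and-drop Gantt", "finite-capacity engine", "ERP connector"]
relation = np.array([
    [3, 9, 1],
    [9, 3, 0],
    [0, 1, 9],
])

# Technical importance: weighted column sums, then rank the features.
scores = weights @ relation
for rank, i in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"{rank}. {features[i]} (score {scores[i]})")
```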

Abstract:

The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not interfere with the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
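One way to read "application-specific information scheduling" is that per-work-unit cost estimates extracted from the application let the scheduler place heterogeneous work units on heterogeneous processors more accurately than a blind round-robin. The sketch below is one such heuristic (largest estimated work unit first, to the processor that would finish it earliest); the cost figures and speed ratings are invented for illustration and the heuristic is a stand-in, not the thesis's actual algorithms.

```python
def schedule(work_units, speeds):
    """Greedy LPT-style mapping: assign each work unit, largest
    estimated cost first, to the processor that would finish it earliest.
    work_units: {unit_id: estimated_cost}   (application-specific info)
    speeds:     {proc_id: relative_speed}   (heterogeneous processors)
    """
    loads = {p: 0.0 for p in speeds}        # accumulated cost per processor
    assignment = {}
    for unit, cost in sorted(work_units.items(), key=lambda x: -x[1]):
        # Projected finish time of each processor if it also ran this unit.
        proc = min(speeds, key=lambda p: (loads[p] + cost) / speeds[p])
        assignment[unit] = proc
        loads[proc] += cost
    return assignment

units = {"u1": 40, "u2": 10, "u3": 25, "u4": 25}   # estimated unit costs
procs = {"fast": 2.0, "slow": 1.0}                  # relative speeds
print(schedule(units, procs))
# -> {'u1': 'fast', 'u3': 'slow', 'u4': 'fast', 'u2': 'slow'}
```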

Abstract:

During the last few years the need for new motor types has grown, since both high efficiency and accurate dynamic performance are demanded in industrial applications. For this reason, new effective control systems such as direct torque control (DTC) have been developed. Permanent magnet synchronous motors (PMSM) are well suited for new adjustable-speed AC inverter drives, because their efficiency and power factor do not depend on the pole pair number and speed to the same extent as in induction motors. Therefore, an induction motor (IM) with a mechanical gearbox can often be replaced with a direct PM motor drive. Space as well as costs will be saved, because the efficiency increases and the cost of maintenance decreases. This thesis deals with the design criteria, analytical calculation and analysis of the permanent magnet synchronous motor for both sinusoidal air-gap flux density and rectangular air-gap flux density. It is examined how the air-gap flux, flux densities, inductances and torque can be estimated analytically for salient-pole and non-salient-pole motors. Analytical calculations have been used to seek the optimal construction for machines rotating at relatively low speeds of 300 rpm to 600 rpm, which are suitable speeds e.g. in the pulp and paper industry. The calculations are verified by finite element calculations and by measurements of a prototype motor. The prototype motor is a 45 kW, 600 rpm PMSM with buried V-magnets, which is a very appropriate construction for high-torque motors with high performance. With the purpose-built prototype machine it is possible not only to verify the analytical calculations but also to show whether the 600 rpm PMSM can replace the 1500 rpm IM with a gear. It can also be tested whether the outer dimensions of the PMSM may be the same as for the IM and whether the PMSM in this case can produce a 2.5-fold torque, in consequence of which it may be possible to achieve the same power. The thesis also considers the question of how to design a permanent magnet synchronous motor for relatively low-speed applications that require high motor torque and efficiency as well as bearable costs of permanent magnet materials. It is shown how a selection of different parameters affects the motor properties. Key words: permanent magnet synchronous motor, PMSM, surface magnets, buried magnets
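The 2.5-fold torque requirement follows directly from P = Tω: at the same power, torque scales inversely with speed, and 1500/600 = 2.5. A quick check using the prototype's numbers (45 kW at 600 rpm, both stated in the abstract):

```python
import math

P = 45e3                  # rated power [W], from the abstract
n_pm, n_im = 600, 1500    # speeds [rpm] of the PMSM and the geared IM

def torque(power_w, speed_rpm):
    """Shaft torque from P = T * omega, with omega in rad/s."""
    omega = 2 * math.pi * speed_rpm / 60
    return power_w / omega

T_pm, T_im = torque(P, n_pm), torque(P, n_im)
print(f"PMSM torque: {T_pm:.0f} Nm, IM torque: {T_im:.0f} Nm")
print(f"ratio: {T_pm / T_im:.1f}")   # -> 2.5, the factor quoted above
```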

Abstract:

This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized midpoint algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true midpoint algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is fairly accurate indeed. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
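For reference, the dilational model in question is the Gurson-Tvergaard yield function; the standard form below, with Tvergaard's parameters q1, q2, q3 and void volume fraction f, is quoted from the general literature, since the abstract itself does not reproduce the equation.

```latex
\Phi(\sigma_{\mathrm{eq}}, \sigma_m, f)
  = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
  + 2 q_1 f \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right)
  - \left(1 + q_3 f^{2}\right) = 0
```

Here σ_eq is the macroscopic von Mises equivalent stress, σ_m the hydrostatic stress, σ_y the matrix yield stress, and f the void volume fraction; the hydrostatic term inside cosh is what makes the model dilational and drives the softening as f grows.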

Abstract:

Statistical analyses of measurements that can be described by statistical models are essential in astronomy and in scientific inquiry in general. The sensitivity of such analyses and modelling approaches, and of the consequent predictions, is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential for estimating the properties of the population of such planets, and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness through various examples and detailed descriptions of the mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to extract as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
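As a concrete illustration of Bayesian parameter estimation on Doppler data, the sketch below fits a circular-orbit radial velocity model v(t) = K sin(2πt/P + φ) + γ to synthetic measurements with a basic Metropolis sampler; the model, priors, and data are simplified stand-ins for the far richer machinery the thesis reviews.

```python
import numpy as np

rng = np.random.default_rng(1)

def rv_model(t, K, P, phi, gamma):
    """Circular-orbit radial velocity curve [m/s]."""
    return K * np.sin(2 * np.pi * t / P + phi) + gamma

# Synthetic "Doppler measurements": truth K=12 m/s, P=30 d, noise 3 m/s.
t = np.sort(rng.uniform(0, 200, 60))
truth = (12.0, 30.0, 0.7, 5.0)
v = rv_model(t, *truth) + rng.normal(0, 3.0, t.size)

def log_posterior(theta):
    K, P, phi, gamma = theta
    if not (0 < K < 100 and 1 < P < 100 and 0 <= phi < 2 * np.pi):
        return -np.inf                          # flat priors with hard bounds
    resid = v - rv_model(t, K, P, phi, gamma)
    return -0.5 * np.sum((resid / 3.0) ** 2)    # Gaussian likelihood

# Metropolis sampling: random-walk proposals, accept with prob. e^(dlogp).
theta = np.array([10.0, 29.0, 0.5, 0.0])
step = np.array([0.5, 0.05, 0.1, 0.5])
logp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                  # discard burn-in
print("posterior means (K, P, phi, gamma):", chain.mean(axis=0))
```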

Abstract:

This master's thesis studies the case company's current purchase invoice process and the challenges related to it. Like most master's theses, this study consists of both a theoretical and an empirical part. The purpose of this work is to combine the theoretical and empirical parts so that the theoretical part brings value to the empirical case study. The case company's main business is frequency converters for both low voltage AC and DC drives and medium voltage AC drives, which are used across all industries and applications. The main focus of this study is on modelling the current invoice process. When the existing process is modelled with discipline and care, the current challenges can be understood better. The empirical study relies heavily on interviews and existing, yet fragmented, data. This, along with the author's own calculations and analysis, creates the foundation for the empirical part of this master's thesis.

Abstract:

Outsourcing and offshoring, or any combination of these, have not just become a popular phenomenon but are viewed as one of the most important management strategies due to the new possibilities opened by globalization. They have been seen as a way to save costs and improve customer service. Executing offshoring and offshore outsourcing successfully can be more complex than initially expected. Potential cost savings from offshoring and offshore outsourcing are often based on lower manufacturing costs. However, these benefits might be offset by a more complex supply chain with service level challenges that can in turn increase costs. Therefore, analyzing the total cost effects of offshoring and outsourcing is necessary. The aim of this Master's thesis was to construct a total cost model based on academic literature to calculate the total costs, and to analyze the reasonability of offshoring and offshore outsourcing the case company's production compared to insourcing it. The research data was mainly quantitative and collected mainly from the case company's past sales and production records. In addition, management-level interviews at the case company were conducted. The information from these interviews was used to qualify the necessary quantitative data and to add supporting information that could not be gathered from the quantitative data. Both data collection and analysis were guided by a theoretical frame of reference based on academic literature concerning offshoring and outsourcing, statistical calculation of demand, and total costs. The results confirm the theory that offshoring and offshore outsourcing would reduce total costs, as both options result in lower total annual costs than insourcing, mainly due to lower manufacturing costs. However, increased demand uncertainty would make the offshore outsourcing alternative more risky and difficult to manage. Therefore, when assessing the overall impact of the alternatives, offshoring is the preferable option. As the main cost savings in offshore outsourcing came from lower manufacturing costs, more specifically labour costs, the logistics costs did not have an essential effect on total costs in this case company. Management should therefore pay attention first to manufacturing costs and then to logistics costs when choosing the best production sourcing option for the company.
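A total cost comparison of this kind boils down to summing the annual cost components per sourcing alternative. The sketch below shows the structure of such a model with entirely invented figures; the case company's actual cost data and component breakdown are not disclosed in the abstract.

```python
# Hypothetical annual cost components per sourcing alternative (EUR).
# The component list follows the usual total-cost literature:
# manufacturing, logistics, inventory carrying, and coordination/risk.
alternatives = {
    "insourcing": {
        "manufacturing": 5_000_000, "logistics": 200_000,
        "inventory": 150_000, "coordination": 50_000,
    },
    "offshoring": {
        "manufacturing": 3_600_000, "logistics": 450_000,
        "inventory": 300_000, "coordination": 150_000,
    },
    "offshore outsourcing": {
        "manufacturing": 3_400_000, "logistics": 450_000,
        "inventory": 350_000, "coordination": 250_000,
    },
}

totals = {name: sum(parts.values()) for name, parts in alternatives.items()}
for name, total in sorted(totals.items(), key=lambda x: x[1]):
    print(f"{name:>22}: {total:>12,.0f} EUR/year")
```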

Abstract:

Carbonic anhydrases are enzymes, found ubiquitously in all organisms, that catalyze the hydration of carbon dioxide to bicarbonate and a proton, and the reverse reaction. They are crucial in respiration, bone resorption, pH regulation, ion transport, and photosynthesis in plants. Of the five classes of carbonic anhydrases (α, β, γ, δ, ζ), this study focused on the α carbonic anhydrases. In mammals, this class consists of 16 subfamilies, including three catalytically inactive enzymes known as Carbonic Anhydrase Related Proteins. Their inactivity is due to the loss of one or more histidine residues in the active site. The aim of this thesis was an evolutionary analysis of carbonic anhydrase sequences from organisms whose lineages span back to the Cambrian age. The work was carried out in two phases. The first phase was sequence collection, drawing on several biological sequence databases; its scope included sequence alignment and both manual and automated analysis of the sequences using a number of analysis tools. The second phase was phylogenetic analysis and the exploration of the subcellular location of the proteins, which was key to the evolutionary analysis. With these methods, the desired results were accomplished. Certain thought-provoking sequences were encountered and analyzed thoroughly, and the phylogenetic analysis both bolstered previous findings and produced new ones, laying the bedrock for future, more intensive studies.
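A minimal version of the phylogenetic step might look like the Biopython sketch below: align homologous sequences, compute pairwise distances, and build a neighbor-joining tree. The toy sequences and the identity-distance choice are mine; the thesis does not state which tools or models it used.

```python
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

# Toy pre-aligned fragments standing in for CA active-site regions;
# the CARP entry has a histidine substituted, mimicking the lost residue.
aln = MultipleSeqAlignment([
    SeqRecord(Seq("HHWGYGKHNGPEHW"), id="CA2_human"),
    SeqRecord(Seq("HHWGYGQHNGPEHW"), id="CA2_mouse"),
    SeqRecord(Seq("HHWGYREHNGPEHW"), id="CA7_human"),
    SeqRecord(Seq("QHWGYSKHNGPEHW"), id="CARP8_human"),
])

# Pairwise identity distances, then a neighbor-joining tree.
dm = DistanceCalculator("identity").get_distance(aln)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)
```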

Abstract:

Ancient starch analysis is a microbotanical method in which starch granules are extracted from archaeological residues and the botanical source is identified. The method is an important addition to established palaeoethnobotanical research, as it can reveal ancient microremains of starchy staples such as cereal grains and seeds. In addition, starch analysis can detect starch originating from underground storage organs, which are rarely discovered using other methods. Because starch is tolerant of acidic soils, unlike most organic matter, starch analysis can be successful in northern boreal regions. Starch analysis has potential in the study of cultivation, plant domestication, wild plant usage and tool function, as well as in locating activity areas at sites and discovering human impact on the environment. The aim of this study was to experiment with the starch analysis method in Finnish and Estonian archaeology by building a starch reference collection from cultivated and native plant species, by developing sampling, measuring and analysis protocols, by extracting starch residues from archaeological artefacts and soils, and by identifying their origin. The purpose of this experiment was to evaluate the suitability of the method for the study of subsistence strategies in prehistoric Finland and Estonia. A total of 64 archaeological samples were analysed from four Late Neolithic sites in Finland and Estonia, with radiocarbon dates ranging between 2904 calBC and 1770 calBC. The samples yielded starch granules, which were compared with the starch reference collection and descriptions in the literature. Cereal-type starch was identified from the Finnish Kiukainen culture site and from the Estonian Corded Ware site. The samples from the Finnish Corded Ware site yielded underground storage organ starch, which may be the first evidence of the use of rhizomes as food in Finland; no cereal-type starch was observed at this site. Although the sample sets were limited, the experiment confirmed that starch granules have been preserved well in the archaeological material of Finland and Estonia, and that differences between subsistence patterns, as well as evidence of cultivation and wild plant gathering, can be discovered using starch analysis. By collecting large sample sets and addressing the three most important issues – preventing contamination, collecting adequate references and understanding taphonomic processes – starch analysis can substantially contribute to research on ancient subsistence in Finland and Estonia.
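The comparison of archaeological granules against a reference collection can be thought of as a simple morphometric matching problem. The sketch below matches one measured granule to the nearest reference taxon by feature distance; the taxa, the two features (length and a shape index), and all numbers are hypothetical illustrations, not data from the study.

```python
import numpy as np

# Hypothetical reference collection: mean (length_um, shape_index) per taxon,
# standing in for measurements of modern reference starches.
reference = {
    "Hordeum vulgare (barley)":  np.array([18.0, 0.85]),
    "Triticum (wheat)":          np.array([22.0, 0.90]),
    "Polypodium (fern rhizome)": np.array([40.0, 0.55]),
}

def identify(granule, refs=reference):
    """Nearest-centroid match of one measured granule to the references."""
    return min(refs, key=lambda taxon: np.linalg.norm(granule - refs[taxon]))

# A measured archaeological granule: length 21 um, shape index 0.88.
print(identify(np.array([21.0, 0.88])))   # -> closest reference taxon
```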