374 results for Iteration


Relevance: 10.00%

Abstract:

In this paper we study the set of periods of holomorphic maps on compact manifolds, using the periodic Lefschetz numbers introduced by Dold and Llibre, which can be computed from the homology class of the map. We show that these numbers contain information about the existence of periodic points of a given period; and, if we assume the map to be transversal, then they give us the exact number of such periodic orbits. We apply this result to the complex projective space of dimension n and to some special type of Hopf surfaces, partially characterizing their set of periods. In the first case we also show that any holomorphic map of CP(n) of degree greater than one has infinitely many distinct periodic orbits, hence generalizing a theorem of Fornaess and Sibony. We then characterize the set of periods of a holomorphic map on the Riemann sphere, hence giving an alternative proof of Baker's theorem.
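
For background, the Lefschetz number of a self-map f of a compact manifold M, and the periodic Lefschetz numbers of Dold and Llibre obtained from it by Möbius inversion, can be written as follows (standard definitions recalled here, not quoted from the paper):

\[
L(f) = \sum_{k=0}^{\dim M} (-1)^k \, \mathrm{tr}\bigl(f_{*k}\colon H_k(M;\mathbb{Q}) \to H_k(M;\mathbb{Q})\bigr),
\qquad
\ell(f^m) = \sum_{r \mid m} \mu\!\left(\frac{m}{r}\right) L(f^r),
\]

where \mu is the Möbius function. Since each f_{*k} is determined by the homology class of the map, both quantities are computable from it; nonvanishing of \ell(f^m) is the kind of condition that signals periodic points associated with the period m.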

Relevance: 10.00%

Abstract:

In this thesis we present the design of a systematic, integrated, computer-based approach for detecting potential disruptions from an industry perspective. Following the design science paradigm, we iteratively develop several multi-actor multi-criteria artifacts dedicated to environment scanning. The contributions of this thesis are both theoretical and practical. We demonstrate the successful use of multi-criteria decision-making methods for technology foresight. Furthermore, we illustrate the design of our artifacts using build-and-evaluate loops supported by a field study of the Swiss mobile payment industry. To increase the relevance of this study, we systematically interviewed key Swiss experts for each design iteration. As a result, our research provides a realistic picture of the current situation in the Swiss mobile payment market and reveals previously undiscovered weak signals of future trends. Finally, we suggest a generic design process for environment scanning.

Relevance: 10.00%

Abstract:

This research has focused on the development of a tuned systematic design methodology that gives the best performance in a computer-aided environment and utilises a cross-technological approach, specifically tested with and for laser-processed microwave mechanics. A tuned design process scheme is also presented. Because of the currently large production volumes of microwave and radio-frequency mechanics, even slight improvements in design methodologies or manufacturing technologies would offer reasonable possibilities for cost reduction. The typical number of required iteration cycles could be reduced to one fifth of normal. The research area dealing with the methodologies is divided firstly into function-oriented, performance-oriented or manufacturability-oriented product design. Alternatively, various approaches can be developed for customer-oriented, quality-oriented, cost-oriented or organisation-oriented design. However, the real need for improvements lies between these two extremes. This means that an effective methodology for designers should be neither too limited (like performance-oriented design) nor too general (like organisation-oriented design), but should include the context of the design environment. This is the area where the current research is focused. To test the developed tuned design methodology for laser processing (TDMLP) and the tuned optimising algorithm for laser processing (TOLP), seven different industrial product applications for microwave mechanics were designed, CAD-modelled and manufactured by laser in small production series. To verify that the performance of these products meets the required level and to ensure the objectiveness of the results, extensive laboratory tests were used for all designed prototypes. As an example, a Ku-band horn antenna can be laser-processed from steel in 2 minutes while obtaining an electrical performance comparable to that of classical aluminium units, and the residual resistance of a laser joint in steel could be limited to 72 milliohms.

Relevance: 10.00%

Abstract:

The aim of this work is to identify areas for improving the process efficiency of a pressurized water reactor nuclear power plant. First, means of improving the efficiency of an ideal steam power plant process are sought from the literature. From these, the most suitable are selected for examination in a real power plant: enhancing feedwater preheating by increasing the extraction steam flow and by adding heat transfer surface to the feedwater preheaters. The study seeks the best possible efficiency by varying the pipe size of the extraction steam lines and the number of tubes in the preheaters. The iteration step of the discrete optimization is determined by means of the partial derivatives of the efficiency. The changes are simulated with the APROS simulation software, using a VVER-440 model of the Loviisa power plant. It was found that by increasing only the pipe sizes of the extraction steam lines, and thereby the mass flow, the efficiency of the Loviisa plant can be improved at best from 32.75% to 32.85%. Adding heat transfer surface to the feedwater preheaters gives a larger improvement in efficiency: from 32.75% to 32.99%. In these cases all extraction steam lines or all feedwater preheater heat surfaces were modified. Some smaller modifications were also examined; of these, the best efficiency gain was obtained by increasing the heat transfer surface of the high-pressure preheaters and from the combined effect of modifying the second extraction steam line (RD12) and its corresponding feedwater preheater.
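
To make the optimization step concrete, here is a minimal sketch (in Python) of a discrete coordinate-wise scheme of the kind described above: one-sided finite differences over the discrete design grid stand in for the partial derivatives of the efficiency, and the design variable with the largest positive derivative is advanced one step. The toy efficiency function only stands in for an APROS run of the plant model; all names and numbers are invented for the illustration.

import numpy as np

def toy_efficiency(x):
    # Stand-in for one APROS simulation of the plant model: a smooth
    # concave surrogate with an interior optimum. Invented for the sketch.
    x = np.asarray(x, dtype=float)
    return 0.3275 + 0.0008 * x.sum() - 0.0001 * (x ** 2).sum()

def step(x, n_levels, eta=toy_efficiency):
    # One discrete-optimization iteration: one-sided finite-difference
    # partial derivatives of efficiency pick the design variable to advance.
    base = eta(x)
    gains = [eta(x[:i] + [x[i] + 1] + x[i+1:]) - base
             if x[i] + 1 < n_levels[i] else -np.inf
             for i in range(len(x))]
    i = int(np.argmax(gains))
    if gains[i] <= 0:
        return x, base, True                  # no improving neighbour: stop
    return x[:i] + [x[i] + 1] + x[i+1:], base + gains[i], False

# e.g. three design variables: two line-size indices, one tube-count index
x, done = [0, 0, 0], False
while not done:
    x, eff, done = step(x, [8, 8, 8])
print(x, round(eff, 5))                       # -> [4, 4, 4] on this surrogate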

Relevance: 10.00%

Abstract:

A Fortran77 program, SSPBE, designed to solve the spherically symmetric Poisson-Boltzmann equation using a cell model for ionic macromolecular aggregates or macroions is presented. The program includes an adsorption model for ions at the aggregate surface. The working algorithm solves the Poisson-Boltzmann equation in the integral representation using the Picard iteration method. Input parameters are introduced via an ASCII file, sspbe.txt. Output files yield the radial distances versus the mean field potentials and average molar ion concentrations, the molar concentration of ions at the cell boundary, the self-consistent degree of ion adsorption at the surface and other related data. Ion binding to ionic, zwitterionic and reverse micelles is presented as a representative example of the applications of the SSPBE program.
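
As background on the working algorithm, damped Picard (successive substitution) iteration lags the nonlinear term of the Poisson-Boltzmann equation so that each pass solves only a linear problem. The sketch below applies that idea to a dimensionless spherically symmetric PB equation on a cell, with a fixed surface potential instead of the SSPBE adsorption model and a finite-difference discretization instead of the integral representation; it illustrates the iteration scheme only, not the program itself, and all values are invented.

import numpy as np

# Dimensionless spherically symmetric PB equation on a cell:
#   psi'' + (2/r) psi' = kappa^2 sinh(psi),  psi(a) = psi_s,  psi'(R) = 0.
a, R, kappa, psi_s, N = 1.0, 3.0, 1.0, 2.0, 400
r = np.linspace(a, R, N)
h = r[1] - r[0]

def solve_linear(rhs):
    # Finite-difference solve of the linear problem with the PB source lagged.
    A = np.zeros((N, N))
    b = rhs.copy()
    A[0, 0] = 1.0
    b[0] = psi_s                              # fixed surface potential
    for i in range(1, N - 1):
        A[i, i - 1] = 1.0 / h**2 - 1.0 / (r[i] * h)
        A[i, i]     = -2.0 / h**2
        A[i, i + 1] = 1.0 / h**2 + 1.0 / (r[i] * h)
    A[-1, -1], A[-1, -2] = 1.0, -1.0          # zero field at the cell boundary
    b[-1] = 0.0
    return np.linalg.solve(A, b)

psi = np.zeros(N)
for it in range(2000):                        # damped Picard iteration
    update = solve_linear(kappa**2 * np.sinh(psi)) - psi
    psi += 0.15 * update                      # under-relaxation; may need tuning
    if np.abs(update).max() < 1e-8:
        break
print(it, psi[0], psi[-1])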

Relevance: 10.00%

Abstract:

This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-value distance transform (EDTOCS) on gray level images. Both the DTOCS and the EDTOCS require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates the gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, whereas their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
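
The two-pass principle extends to gray level images roughly as follows: a forward raster scan propagates distance values from the already-visited half of the 8-neighbourhood, a backward scan from the other half, and each local step costs the gray-level difference to the neighbour plus one (the chessboard increment). The Python sketch below follows that reading of the DTOCS update rule; the exact kernel definition should be taken from the thesis.

import numpy as np

def dtocs(gray, region, n_rounds=4):
    # Chessboard-like gray-level distance transform in the spirit of the
    # DTOCS: two raster passes per round, local step cost |dG| + 1.
    # `region` marks pixels whose distance is computed; False pixels act
    # as sources with distance 0.
    d = np.where(region, np.inf, 0.0)
    H, W = gray.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # visited in forward scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # visited in backward scan
    for _ in range(n_rounds):                     # complex images need >1 round
        changed = False
        for offs, rows, cols in ((fwd, range(H), range(W)),
                                 (bwd, range(H - 1, -1, -1), range(W - 1, -1, -1))):
            for y in rows:
                for x in cols:
                    if not region[y, x]:
                        continue
                    for dy, dx in offs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W:
                            cand = d[ny, nx] + abs(float(gray[y, x]) - float(gray[ny, nx])) + 1
                            if cand < d[y, x]:
                                d[y, x] = cand
                                changed = True
        if not changed:
            break
    return d

# toy example: distances across a gray ramp, sources on the left edge
img = np.tile(np.arange(8), (8, 1)).astype(float)
mask = np.ones_like(img, dtype=bool)
mask[:, 0] = False
print(dtocs(img, mask))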

Relevance: 10.00%

Abstract:

A model was created to solve heat and mass balances during off-design load calculations. These equations are complex and nonlinear. The main new ideas used in the created off-design model of a kraft recovery boiler are: the use of heat flows as torn iteration variables instead of the current practice of using mass flows; vectorizing the equation solving, thus speeding up the process; using non-dimensional variables for solving the multiple heat transfer surface problem; and using a new procedure for calculating pressure losses. The recovery boiler heat and mass balances are reduced to vector form. It is shown that these vectorized equations can be solved virtually without iteration. The iteration speed is enhanced by the derived method of calculating multiple heat transfer surfaces simultaneously. To achieve this quick convergence, the heat flows were used as the torn iteration parameters. A new method to handle pressure loss calculations with linearization is presented; it allows less time to be spent calculating pressure losses. The derived vector representation of the steam generator was used to calculate off-design operation parameters for a 3000 tds/d example recovery boiler. The model was used to study recovery boiler part-load operation and the effect of a black liquor dry solids increase on recovery boiler dimensioning. Heat flows to surface elements for part-load calculations can be closely approximated with a previously defined exponent function, and the exponential method can be used for the prediction of fouling in kraft recovery boilers. For similar furnaces, firing 80% dry solids liquor produces a lower hearth heat release rate than 65% dry solids liquor if we fire at constant steam flow. The furnace outlet temperatures show that a capacity increase achieved by a firing rate increase produces higher loadings than one achieved by a dry solids increase. The economizers, boiler banks and furnaces can be dimensioned smaller if the black liquor dry solids content is increased. The main problem with increased black liquor dry solids content is the decrease in the heat available for superheating. Whenever possible, the furnace exit temperature should be increased by decreasing the furnace height. An increase in the furnace exit temperature is usually opposed because of fear of increased corrosion.
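
To illustrate tearing on heat flows, the sketch below solves a minimal counterflow train of three heat-transfer surfaces. The surface duties Q are the torn variables: each sweep recomputes every duty from effectiveness-NTU relations at the inlet temperatures implied by the current Q, then substitutes back. The configuration and all numbers are invented and far simpler than the recovery boiler model of the thesis.

import numpy as np

# Illustrative gas-side train: superheater (0), boiler bank (1), economizer (2).
# The gas runs 0 -> 1 -> 2; the water/steam side runs 2 -> 1 -> 0 (counterflow).
UA = np.array([40e3, 60e3, 80e3])      # W/K, assumed
Cg, Cw = 45e3, 35e3                    # gas / water-side capacity flows, W/K
T_gas_in, T_w_in = 950.0, 110.0        # furnace exit gas and economizer inlet, C

def eps(ua):
    # Counterflow effectiveness-NTU relation.
    cmin, cmax = min(Cg, Cw), max(Cg, Cw)
    ntu, cr = ua / cmin, cmin / cmax
    e = np.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

E, Cmin = np.array([eps(u) for u in UA]), min(Cg, Cw)

Q = np.zeros(3)                        # torn iteration variables: surface duties, W
for it in range(200):
    Tg = np.array([T_gas_in,
                   T_gas_in - Q[0] / Cg,
                   T_gas_in - (Q[0] + Q[1]) / Cg])       # gas inlet temperatures
    Tw = np.array([T_w_in + (Q[2] + Q[1]) / Cw,
                   T_w_in + Q[2] / Cw,
                   T_w_in])                              # water inlet temperatures
    Q_new = E * Cmin * (Tg - Tw)       # duties from current inlet temperatures
    if np.abs(Q_new - Q).max() < 1.0:  # 1 W tolerance
        break
    Q += 0.5 * (Q_new - Q)             # relaxation keeps the substitution stable
print(it, np.round(Q / 1e6, 2), "MW")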

Relevance: 10.00%

Abstract:

This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to highlight the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void matrix geometry, Thomason's failure criterion has been modified, and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is fairly accurate indeed. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology.
This application has shown how the damage parameters of both the base material and the heat affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
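
For reference, the dilational Gurson-Tvergaard yield function discussed above is commonly written as (standard form from the literature, not quoted from the thesis):

\[
\Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
+ 2 q_1 f^{*} \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right)
- 1 - q_3 (f^{*})^{2} = 0,
\]

where \sigma_{\mathrm{eq}} is the macroscopic von Mises stress, \sigma_m the hydrostatic (mean) stress, \sigma_y the matrix flow stress, f^{*} the effective void volume fraction, and q_1, q_2, q_3 (often q_3 = q_1^2) Tvergaard's adjustment parameters. The hydrostatic/deviatoric stress decomposition used in the integration algorithms above acts on precisely these two stress arguments.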

Relevance: 10.00%

Abstract:

The aim of this paper is to present a simple way of treating the general equation for acid-base titrations based on the concept of degree of dissociation, and to propose a new spreadsheet approach for simulating the titration of mixtures of polyprotic compounds. The general expression, without any approximation, is calculated by a simple iteration method, making number manipulation easy and painless. The user-friendly spreadsheet was developed using MS-Excel and Visual-Basic-for-Excel. Several graphs are drawn to help visualize the titration behavior. A Monte Carlo function for error simulation was also implemented. Two examples, the titration of alkalinity and of McIlvaine buffer, are presented.
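
As an illustration of the iterative treatment (a sketch of the general method in Python, not the spreadsheet's VBA code): at each titrant volume the charge balance is solved for [H+], with the alpha fractions (degrees of dissociation) of each polyprotic acid computed from its stepwise dissociation constants; bisection on a logarithmic scale serves as the simple, robust iteration. The acid, its pKa values and the concentrations are invented for the example.

import numpy as np

def alphas(h, Ks):
    # Degree-of-dissociation fractions of an n-protic acid at [H+] = h;
    # alpha[j] is the fraction of species carrying charge -j.
    n = len(Ks)
    prods = [1.0]
    for K in Ks:
        prods.append(prods[-1] * K)
    num = np.array([p * h ** (n - j) for j, p in enumerate(prods)])
    return num / num.sum()

def charge_balance(h, acids, c_na, Kw=1e-14):
    # Net charge of the solution: H+ and Na+ versus OH- and the acid anions.
    q = h + c_na - Kw / h
    for c, Ks in acids:
        a = alphas(h, Ks)
        q -= c * np.dot(np.arange(len(a)), a)
    return q

def pH(acids, c_na):
    # Bisection on a log scale for [H+]: the simple, robust iteration.
    lo, hi = 1e-14, 1.0
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        if charge_balance(mid, acids, c_na) > 0.0:
            hi = mid
        else:
            lo = mid
    return -np.log10(mid)

# 50 mL of a 0.02 M diprotic acid (invented pKa 2.1 and 7.2) titrated
# with 0.1 M NaOH; dilution by the added titrant is included.
V0, c0, ct = 50.0, 0.02, 0.1
for Vb in (0.0, 5.0, 10.0, 15.0, 20.0, 25.0):
    dil = V0 + Vb
    acids = [(c0 * V0 / dil, [10**-2.1, 10**-7.2])]
    print(f"{Vb:5.1f} mL  pH {pH(acids, ct * Vb / dil):5.2f}")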

Relevance: 10.00%

Abstract:

Modern sophisticated telecommunication devices require ever more comprehensive testing to ensure quality. The number of test cases needed to ensure sufficient test coverage has increased rapidly, and this demand can no longer be met by manual testing alone. New agile development models also require the execution of all test cases with every iteration. This has led manufacturers to use test automation more than ever to achieve adequate testing coverage and quality. This thesis is divided into three parts. The first part presents the evolution of cellular networks and examines software testing, test automation and the influence of the development model on testing. The second part describes a process that was used to implement a test automation scheme for functional testing of the LTE core network MME element. Agile development models and the Robot Framework test automation tool were used in implementing the scheme. The third part presents two alternative models for integrating this test automation scheme into a continuous integration process. As a result, the test automation scheme for functional testing was implemented, and almost all new functional-level test cases can now be automated with it. In addition, two models for integrating the scheme into a wider continuous integration pipeline were introduced. The shift in testing from a traditional waterfall model to a new agile-development-based model also proved successful.

Relevance: 10.00%

Abstract:

This work examines the technical productization steps required for CE marking, using as an example the design process of an automated cloth-changing device for a pressure filter. The work investigates the options for classifying the auxiliary devices of a pressure filter so that they can be productized in the European Economic Area (EEA). The cloth-changing device used as an example was designed using the systematic machine design method. The risk analysis required for CE marking was carried out for the device in accordance with the standard SFS-EN ISO 12100:2010. The resulting prototype largely fulfils the requirements set for the device. However, the cost estimate exceeds the desired cost price because of the relatively high price of the light curtains. According to the cost estimate, the prototype can nevertheless be manufactured inexpensively, since the light curtains are not mandatory for the functional tests of the device. Before productization, however, the possibility of replacing the light curtains with other safety technology must be investigated. After the design phase, the safety of the device can be considered to be at least at an adequate level. The risk analysis is, however, a living document, and the safety of the device must be verified by testing the prototype. Based on this work, it can be concluded that taking the hazards the device may cause into account at the very beginning of product design can speed up the product development process. Identifying hazardous situations at an early stage of design reduces the number of risks, and thereby the need for risk reduction. This reduces the number of iteration rounds between structural design and risk analysis, which in turn speeds up the productization process.

Relevance: 10.00%

Abstract:

This work presents a formulation of contact with friction between elastic bodies. This is a nonlinear problem due to the unilateral constraints (non-inter-penetration of the bodies) and friction. The solution of this problem can be found using optimization concepts, modelling the problem as a constrained minimization problem. The finite element method is used to construct the approximation spaces. The minimization problem has the total potential energy of the elastic bodies as the objective function, the non-inter-penetration conditions are represented by inequality constraints, and equality constraints are used to deal with the friction. Due to the presence of two friction conditions (stick and slip), specific equality constraints are present or absent according to the current condition. Since the Coulomb friction condition depends on the normal and tangential contact stresses related to the constraints of the problem, a condition-dependent constrained minimization problem is devised. An augmented Lagrangian method for constrained minimization is employed to solve this problem. When applied to a contact problem, this method yields Lagrange multipliers which have the physical meaning of contact forces, which allows the friction condition to be checked at each iteration. These concepts make it possible to devise a computational scheme which leads to good numerical results.
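
A minimal sketch of the augmented Lagrangian idea on a toy contact-like problem, assuming NumPy and SciPy are available: a spring is pushed against a rigid obstacle, the non-penetration condition g(x) <= 0 is the unilateral constraint, and the multiplier update converges to the contact force. Friction and the finite element discretization of the paper are omitted, and all numbers are invented.

import numpy as np
from scipy.optimize import minimize

# Toy contact problem: a spring of stiffness k wants its end at x0 = 1.2,
# but a rigid obstacle enforces x <= 1.0.
k, x0, wall = 100.0, 1.2, 1.0
f = lambda x: 0.5 * k * (x[0] - x0) ** 2       # elastic energy
g = lambda x: x[0] - wall                      # non-penetration: g(x) <= 0

lam, rho = 0.0, 100.0                          # multiplier and penalty parameter
x = np.array([0.0])
for outer in range(40):
    # Augmented Lagrangian of the inequality-constrained problem.
    La = lambda z: f(z) + (0.5 / rho) * (max(0.0, lam + rho * g(z)) ** 2 - lam ** 2)
    x = minimize(La, x, method="Nelder-Mead",
                 options={"xatol": 1e-10, "fatol": 1e-12}).x
    lam_new = max(0.0, lam + rho * g(x))       # multiplier update
    if abs(lam_new - lam) < 1e-6:
        lam = lam_new
        break
    lam = lam_new
# x -> 1.0 (the end sits on the obstacle); lam -> k*(x0 - wall) = 20,
# the Lagrange multiplier with the physical meaning of the contact force.
print(x[0], lam)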

Relevance: 10.00%

Abstract:

In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopt the equivalent variational inequality formulation. We propose a two-level iterative algorithm, where the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using the element-by-element strategy, which is easily parallelized. We analyse the behaviour of two physical parameters and discuss some numerical results. We also analyse results related to the performance of a parallel implementation of the algorithm.
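
The two-level structure can be sketched on a one-dimensional obstacle problem, the archetypal variational inequality (the paper itself treats the cavitation problem in a journal bearing in 2D). The outer penalty iteration replaces the constraint u >= 0 by a penalty term acting where the constraint is violated; each outer step then yields a symmetric positive-definite linear system for the inner conjugate gradient iteration. All numbers are invented for the sketch.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# 1D obstacle problem as a stand-in variational inequality:
#   -u'' = f on (0,1), u(0) = u(1) = 0, subject to u >= 0.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
f = 50.0 * np.sin(4.0 * np.pi * x)     # forcing that pushes u negative in places

eps = 1e-6                             # penalty parameter
u = np.zeros(n)
for outer in range(50):                # outer iteration: penalty method
    active = (u < 0.0).astype(float)   # penalise where u >= 0 is violated
    P = sp.diags(active / eps)
    u_new, _ = cg(A + P, f, x0=u)      # inner iteration: conjugate gradients
    done = np.array_equal(active, (u_new < 0.0).astype(float))
    u = u_new
    if done:
        break
print(outer, u.min())                  # min(u) is O(-eps): constraint enforced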

Relevance: 10.00%

Abstract:

This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables. It is a quite recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter, and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. The ADUDT uses narrow voltage pulses, with a duration in the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, with the possibility of avoiding overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable. Lower du/dt values reduce the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering provides lower inductance values and a smaller physical size of the filter itself; the filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead-time-induced zero-current clamping (ZCC) effect in the pulse pattern. It gives more flexibility to the pattern structure, which could help in the timing deviation compensation design. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions present in real-world electronics operating on a microsecond timescale bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, in the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for phase leg blanking times, giving flexibility to the pulse pattern structure and dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach of using a fixed delay compensation value was tried in the test setup measurements.
The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects. The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage), and as such is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned in the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations: the usefulness of the correctly calculated pattern is reduced by the voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously. The effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis. The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite core and air core inductors. Other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a 43 A load current motor drive system and was able to bring the filter output peak voltage from 980 V (with the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111 W-126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter.
With the active du/dt method (air-core inductors), the filter mass was 2.4 kg, 17% of the 14 kg of the passive du/dt method's filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with the active du/dt filtering applying the new control principle.
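
As background on why narrow pulses can remove the overshoot: for an ideal lossless LC filter fed by a two-level bridge with no load current, holding the input at the DC link voltage for one sixth of the resonance period, returning it to zero for another sixth, and then applying the DC link voltage permanently brings the filter output exactly to its target with zero residual oscillation. The Python sketch below simulates that textbook charge-control pattern; it shows the principle only and is not the thesis's control algorithm, which additionally handles load current, dead times and timing deviations. The component values are invented.

import numpy as np

# Ideal lossless LC filter fed by a two-level bridge (no load current).
L, C, Udc = 50e-6, 1e-6, 540.0
w = 1.0 / np.sqrt(L * C)               # filter resonance, rad/s
T = 2.0 * np.pi / w                    # resonance period

t1 = t2 = T / 6.0                      # pattern: Udc for t1, zero for t2, then Udc

def u_in(t):
    # Charge-control pulse pattern from the bridge.
    if t < t1:
        return Udc
    if t < t1 + t2:
        return 0.0
    return Udc

def deriv(v, i, t):
    # State equations: C dv/dt = i, L di/dt = u_in - v.
    return i / C, (u_in(t) - v) / L

dt = T / 1200.0                        # t1 is an exact multiple of dt
v = i = 0.0
vmax = 0.0
for k in range(3600):                  # three resonance periods, fixed-step RK4
    t = k * dt
    k1 = deriv(v, i, t)
    k2 = deriv(v + 0.5 * dt * k1[0], i + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = deriv(v + 0.5 * dt * k2[0], i + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = deriv(v + dt * k3[0], i + dt * k3[1], t + dt)
    v += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    i += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    vmax = max(vmax, v)
print(round(vmax, 2), "V peak;", Udc, "V target")  # no overshoot beyond solver error

The T/6 + T/6 timing follows from the phase-plane geometry of the lossless filter; load current and real switching delays shift it, which is precisely what the thesis's control method and timing-deviation compensation address.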

Relevance: 10.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014