828 results for "compression parallel"
Abstract:
This thesis briefly presents the basic operation and use of centrifugal pumps and parallel pumping applications. The characteristics of parallel pumping applications are compared to electrical circuits in order to find analogies between these technical fields. The purpose of studying circuits is to find out whether common software tools for solving circuit behavior could be used to study parallel pumping applications. The empirical part of the thesis introduces a simulation environment for parallel pumping systems, based on the circuit components of Matlab Simulink software. The created simulation environment enables the study of variable-speed-controlled parallel pumping systems under different control methods. The introduced simulation environment was evaluated by building a simulation model of an actual parallel pumping system at Lappeenranta University of Technology. The simulated performance of the parallel pumps was compared to measured values from the actual system. The gathered information shows that if the initial data on the system and pump performance is adequate, the circuit-based simulation environment can be exploited to study parallel pumping systems. The introduced simulation environment can represent the actual operation of parallel pumps with reasonable accuracy. Thereby the circuit-based simulation can be used as a research tool to develop new control methods for parallel pumps.
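As a toy illustration of the pump-circuit analogy described above (head plays the role of voltage, flow rate the role of current), the steady-state operating point of identical pumps in parallel against a common system curve can be solved directly. All curve coefficients here are hypothetical, not taken from the thesis:

```python
import math

# Hypothetical quadratic pump curve: H(q) = H0 - a*q**2  (head in m, flow in m^3/s)
H0, a = 40.0, 900.0
# Hypothetical system curve: H(Q) = k*Q**2
k = 400.0

def parallel_operating_point(n_pumps):
    """Total flow Q where n identical parallel pumps meet the system curve.

    Each pump carries Q/n, so the operating point solves
    H0 - a*(Q/n)**2 = k*Q**2.
    """
    Q = math.sqrt(H0 / (k + a / n_pumps**2))
    H = k * Q**2
    return Q, H

Q1, H1 = parallel_operating_point(1)
Q2, H2 = parallel_operating_point(2)
# Adding a second pump increases flow, but less than doubles it,
# because the required system head rises with Q**2.
print(f"1 pump:  Q={Q1:.3f} m^3/s, H={H1:.1f} m")
print(f"2 pumps: Q={Q2:.3f} m^3/s, H={H2:.1f} m")
```

The same fixed-point structure is what a circuit solver exploits: the pump curve acts as a nonlinear source and the pipe network as a nonlinear load.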
Abstract:
Numerous studies assess the correlation between genetic and species diversities, but the processes underlying the observed patterns have only received limited attention. For instance, varying levels of habitat disturbance across a region may locally reduce both diversities due to extinctions and to increased genetic drift during population bottlenecks and founder events. We investigated the regional distribution of genetic and species diversities of a coastal sand dune plant community along 240 kilometers of coastline with the aim of testing for a correlation between the two diversity levels. We further quantify and tease apart the respective contributions of natural and anthropogenic disturbance factors to the observed patterns. We detected a significant positive correlation between the two variables. We further revealed a negative impact of urbanization: sites with a high amount of recreational infrastructure within 10 km of coastline had significantly lowered genetic and species diversities. On the other hand, a measure of natural habitat disturbance had no effect. This study shows that parallel variation of genetic and species diversities across a region can be traced back to human landscape alteration, provides arguments for a more resolute dune protection, and may help to design priority conservation areas.
Abstract:
I extend Spence's signaling model by assuming that some workers are overconfident (they underestimate their marginal cost of acquiring education) and some are underconfident. Firms cannot observe workers' productive abilities and beliefs but know the fractions of high-ability, overconfident, and underconfident workers. I find that biased beliefs lower the wage spread and compress the wages of unbiased workers. I show that gender differences in self-confidence can contribute to the gender pay gap. If education raises productivity, men are overconfident, and women underconfident, then women will, on average, earn less than men. Finally, I show that biased beliefs can improve welfare.
Abstract:
The strength properties of the paper coating layer are very important in converting and printing operations. Too high or too low coating strength can cause several problems in printing. One of the problems caused by the strength of the coating is cracking at the fold. After printing, the paper is folded into its final form and the pages are stapled together. In folding, the paper coating can crack, causing aesthetic damage to the printed image or, in the worst case, the centre sheet can fall off during stapling. When the paper is folded, one side undergoes tensile stresses and the other side compressive stresses. If the difference between these stresses is too high, the coating can crack at the fold. To better predict and prevent cracking at the fold, it is useful to know the strength properties of the coating layer. The tensile strength of the coating layer has been measured before, but not the compressive strength. This study sought a way to measure the compressive strength of the coating layer and investigated how different coatings behave in compression. The short-span crush test, which is used to measure the in-plane compressive strength of paperboards, was applied to measure the compressive strength of the coating layer. In this method the free span of the specimen is very small, which prevents buckling. The compressive strength of both free coating films and coated paper was measured. The tensile strength and the Bendtsen air permeance of the coating film were also measured. The results showed that the shape of the pigment has a great effect on the strength of the coating. Platy pigment gave much better strength than round or needle-like pigment. On the other hand, calcined kaolin, which is also platy but whose particles are aggregated, decreased the strength substantially. The difference in strength can be explained by the packing of the particles, which affects the porosity and thus the strength.
The platy kaolin packs much better than the others and creates a less porous structure. The results also showed that the binder properties have a great effect on the compressive strength of the coating layer. The amount of latex and the glass transition temperature, Tg, affect the strength. As the amount of latex increases, so does the strength of the coating: a larger amount of latex binds the pigment particles together better and decreases the porosity. Compressive strength increased with increasing Tg, because a hard latex gives a stiffer and less elastic film than a soft latex.
Abstract:
Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they only involve software modification instead of changes to the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): six in parallel and an additional four in serial. In this work, the problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structural parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
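The sensitivity model mentioned above relates pose errors to structural parameter errors. A generic numerical stand-in for such a model (not the thesis' vector arithmetic method) is a finite-difference Jacobian of a kinematic function; here a toy two-link planar arm replaces the 10-DOF IWR kinematics:

```python
import math

def numerical_sensitivity(f, params, eps=1e-6):
    """Finite-difference sensitivity of a pose function f(params) -> pose
    vector with respect to each structural parameter.

    Returns J with J[i][j] = d pose_j / d param_i.
    """
    base = f(params)
    J = []
    for i in range(len(params)):
        p = list(params)
        p[i] += eps
        J.append([(a - b) / eps for a, b in zip(f(p), base)])
    return J

def planar_arm_pose(link_lengths, th1=0.3, th2=0.4):
    """Toy 2-link planar arm end-effector position; link lengths play the
    role of the structural parameters subject to manufacturing tolerance."""
    l1, l2 = link_lengths
    x = l1 * math.cos(th1) + l2 * math.cos(th1 + th2)
    y = l1 * math.sin(th1) + l2 * math.sin(th1 + th2)
    return [x, y]

J = numerical_sensitivity(planar_arm_pose, [1.0, 0.8])
print(J)  # row 0: pose change per unit error in l1, row 1: per unit error in l2
```

Large entries in such a Jacobian flag the tolerances that dominate the pose error, which is exactly what an analytical sensitivity model is used for.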
Abstract:
Background: Since barrier protection measures to avoid contact with allergens are being increasingly developed, we assessed the clinical efficacy and tolerability of a topical nasal microemulsion made of glycerol esters in patients with allergic rhinitis. Methods: Randomized, controlled, double-blind, parallel-group, multicentre, multinational clinical trial in which adult patients with allergic rhinitis or rhinoconjunctivitis due to sensitization to birch, grass or olive tree pollens received treatment with the topical microemulsion or placebo during the pollen seasons. Efficacy variables included scores in the mini-RQLQ questionnaire, the number and severity of nasal, ocular and lung signs and symptoms, the need for symptomatic medications and patients' satisfaction with treatment. Adverse events were also recorded. Results: Demographic characteristics were homogeneous between groups and mini-RQLQ scores did not differ significantly at baseline (visit 1). From symptoms recorded in the diary cards, the microemulsion (ME) group showed statistically significantly better scores for nasal congestion (0.72 vs. 1.01; p = 0.017) and mean total nasal symptoms (0.7 vs. 0.9; p = 0.045). At visit 2 (pollen season), lower values were observed in the mini-RQLQ in the ME group, although there were no statistically significant differences between groups in either the full analysis set (FAS) or the patients-completing-treatment (PPS) population. The results obtained in the nasal symptoms domain of the mini-RQLQ at visit 2 showed the largest difference (−0.43; 95% CI: −0.88 to 0.02) for the ME group in the FAS population. The topical microemulsion was safe and well tolerated, and no major discomforts were observed. Satisfaction ratings with the treatment were similar between the groups. Conclusions: The topical application of the microemulsion is a feasible and safe therapy in the prevention of allergic symptoms, particularly nasal congestion.
Abstract:
This thesis deals with distance transforms, which are a fundamental tool in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, they are applied to gray-level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, where the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. Also a new morphological image decompression scheme, the 8 kernels' method, is presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
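A minimal sketch of a two-pass transform in the spirit of the DTOCS described above, assuming the local step cost between 8-neighbors is the gray-value difference plus one (the thesis' exact definition may differ in detail):

```python
import math

def dtocs(gray, region):
    """Two-pass chessboard-style transform in the spirit of the DTOCS.

    gray:   2-D list of gray values.
    region: 2-D list of booleans; True marks pixels whose distance is
            computed, False marks the zero-distance background.
    The step cost between 8-neighbors is |gray difference| + 1, so on a
    constant-gray image this reduces to the ordinary chessboard distance.
    """
    h, w = len(gray), len(gray[0])
    d = [[math.inf if region[y][x] else 0 for x in range(w)] for y in range(h)]

    def relax(y, x, neighbors):
        for dy, dx in neighbors:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cost = abs(gray[y][x] - gray[ny][nx]) + 1
                if d[ny][nx] + cost < d[y][x]:
                    d[y][x] = d[ny][nx] + cost

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # mask for the forward raster pass
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]      # mirrored mask for the backward pass
    for y in range(h):
        for x in range(w):
            relax(y, x, fwd)
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            relax(y, x, bwd)
    return d
```

Only the two buffers named in the abstract appear: the gray image and the region mask (the distance map overwrites the latter's role); repeating the two passes on the returned map would give the extra iteration rounds mentioned for complicated images.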
Abstract:
We present parallel characterizations of two different values in the framework of restricted cooperation games. The restrictions are introduced as a finite sequence of partitions defined on the player set, each of them being coarser than the previous one, hence forming a structure of different levels of a priori unions. On the one hand, we consider a value first introduced in Ref. [18], which extends the Shapley value to games with different levels of a priori unions. On the other hand, we introduce another solution for the same type of games, which extends the Banzhaf value in the same manner. We characterize these two values using logically comparable properties.
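The classical Shapley value that both extensions build on can be computed directly from its permutation definition. A minimal sketch with hypothetical coalition worths (the levels-of-a-priori-unions extensions discussed in the abstract go beyond this):

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Shapley value by enumerating all player orderings.

    v maps a frozenset coalition to its worth, with v[empty] == 0.
    Each player receives the average of its marginal contributions
    over all n! orders of arrival.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v[with_p] - v[coalition]
            coalition = with_p
    return {p: phi[p] / factorial(n) for p in players}

# A 3-player glove-style game (hypothetical worths): a coalition is worth 1
# only if it contains player "a" and at least one of "b", "c".
players = ("a", "b", "c")
v = {frozenset(): 0, frozenset("a"): 0, frozenset("b"): 0, frozenset("c"): 0,
     frozenset("ab"): 1, frozenset("ac"): 1, frozenset("bc"): 0,
     frozenset("abc"): 1}
print(shapley_value(players, v))
```

Restricted-cooperation values of the kind characterized in the paper replace the uniform average over all orderings with averages respecting the partition levels; the Banzhaf analogue drops the ordering weights altogether.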
Abstract:
The purpose of this thesis was to investigate the compression of filter cakes at high filtration pressures with five different test materials, and to compare the energy consumption of high-pressure compression with that of thermal drying. The secondary target was to investigate the particle deformation of the test materials during filtration and compression. The literature part covers the basic theory of filtration and compression and the basic parameters that influence the filtration process. It also briefly describes all of the test materials, including their properties and their industrial production and processing. Theoretical equations for calculating the energy consumption of filtration under different conditions are also presented. At the beginning of the experimental part, basic filtration tests were done with all five test materials. Filtration tests were made at eight different pressures, from 6 bar up to 100 bar, using a piston-press pressure filter. The filtration tests were then repeated using a cylinder with a smaller slurry volume than in the first series. Separate filtration tests were also done to investigate the deformation of solid particles during filtration and to find the optimal curve for raising the filtration pressure. The energy consumption difference between high-pressure filtration and an ideal thermal drying process was determined partly experimentally and partly using theoretical equations. By comparing these two water removal methods, the optimal ranges for their use were found with respect to energy efficiency. The results of the measurements show that the filtration rate increased and the moisture content of the filter cakes decreased as the filtration pressure was increased. The porosity of the filter cakes also mainly decreased when the filtration pressure was increased.
Particle deformation during filtration was observed only with coal particles.
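The order-of-magnitude gap behind the energy comparison above can be sketched with idealized figures: thermal drying must supply at least the latent heat of evaporation per kilogram of water, while ideal pressure filtration only performs pressure-volume work on the filtrate. The thesis' equations include further terms (cake resistance, pump efficiency), so these numbers are illustrative only:

```python
# Rough, idealized energy per kilogram of water removed.
LATENT_HEAT = 2.26e6      # J/kg, evaporation of water near 100 degrees C
WATER_DENSITY = 1000.0    # kg/m^3

def thermal_drying_energy():
    """Ideal thermal drying: at least the latent heat per kg of water."""
    return LATENT_HEAT

def filtration_energy(pressure_pa):
    """Ideal pressure filtration: pressure-volume work to push 1 kg of
    filtrate (volume 1/rho) out of the cake, W = p * V."""
    return pressure_pa * (1.0 / WATER_DENSITY)

for bar in (6, 100):
    e = filtration_energy(bar * 1e5)
    print(f"{bar:>3} bar filtration: {e / 1e3:7.1f} kJ per kg of water")
print(f"thermal drying:     {thermal_drying_energy() / 1e3:7.1f} kJ per kg of water")
```

Even at 100 bar the mechanical work is two orders of magnitude below the latent heat, which is why squeezing cakes drier mechanically before any thermal step pays off in the ranges examined.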
Abstract:
Most modern passenger aeroplanes use air cycle cooling. A high-speed air cycle is a reliable and light option, but not very efficient. This thesis presents research work done to design a novel vapour cooling cycle for aeroplanes. Due to advancements in high-speed permanent magnet motors, the vapour cycle is seen as a competitive alternative to the air cycle in aeroplanes. The aerospace industry places tighter demands on the weight, reliability and environmental effects of the machinery than those met by conventional chillers, and thus modifications to conventional design are needed. The thesis is divided into four parts: the initial screening of the working fluid, the 1-D design and performance values of the compressor, the 1-D off-design value predictions of the compressor, and the 3-D design of the compressor. R245fa was selected as the working fluid based on the screening study. The off-design range of the compressor was predicted to be wide and suitable for the application. The air-conditioning system developed is considerably smaller than previous designs using centrifugal compressors.
Abstract:
The maximum realizable power throughput of power electronic converters may be limited or constrained by technical or economical considerations. One solution to this problem is to connect several power converter units in parallel. The parallel connection can be used to increase the current carrying capacity of the overall system beyond the ratings of individual power converter units. Thus, it is possible to use several lower-power converter units, produced in large quantities, as building blocks to construct high-power converters in a modular manner. High-power converters realized by using parallel connection are needed for example in multimegawatt wind power generation systems. Parallel connection of power converter units is also required in emerging applications such as photovoltaic and fuel cell power conversion. The parallel operation of power converter units is not, however, problem free. This is because parallel-operating units are subject to overcurrent stresses, which are caused by unequal load current sharing or currents that flow between the units. Commonly, the term 'circulating current' is used to describe both the unequal load current sharing and the currents flowing between the units. Circulating currents, again, are caused by component tolerances and asynchronous operation of the parallel units. Parallel-operating units are also subject to stresses caused by unequal thermal stress distribution. Both of these problems can, nevertheless, be handled with a proper circulating current control. To design an effective circulating current control system, we need information about circulating current dynamics. The dynamics of the circulating currents can be investigated by developing appropriate mathematical models. In this dissertation, circulating current models are developed for two different types of parallel two-level three-phase inverter configurations.
The models, which are developed for an arbitrary number of parallel units, provide a framework for analyzing circulating current generation mechanisms and developing circulating current control systems. In addition to developing circulating current models, modulation of parallel inverters is considered. It is illustrated that depending on the parallel inverter configuration and the modulation method applied, common-mode circulating currents may be excited as a consequence of the differential-mode circulating current control. To prevent the common-mode circulating currents that are caused by the modulation, a dual modulator method is introduced. The dual modulator basically consists of two independently operating modulators, the outputs of which eventually constitute the switching commands of the inverter. The two independently operating modulators are referred to as primary and secondary modulators. In its intended usage, the same voltage vector is fed to the primary modulators of each parallel unit, and the inputs of the secondary modulators are obtained from the circulating current controllers. To ensure that voltage commands obtained from the circulating current controllers are realizable, it must be guaranteed that the inverter is not driven into saturation by the primary modulator. The inverter saturation can be prevented by limiting the inputs of the primary and secondary modulators. Because of this, also a limitation algorithm is proposed. The operation of both the proposed dual modulator and the limitation algorithm is verified experimentally.
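A common way to make the differential-mode circulating current concrete (a standard convention, not necessarily the dissertation's exact definition) is the deviation of each unit's current from the average of all unit currents:

```python
def circulating_currents(unit_currents):
    """Differential-mode circulating current of each parallel unit,
    taken here as the deviation of the unit current from the average
    unit current. The deviations sum to zero by construction, so they
    describe current that shuffles between units rather than feeding
    the load."""
    n = len(unit_currents)
    avg = sum(unit_currents) / n
    return [i - avg for i in unit_currents]

# Two parallel units sharing a 100 A load unequally:
print(circulating_currents([60.0, 40.0]))  # [10.0, -10.0]
```

A circulating current controller drives exactly these deviations to zero; the common-mode component discussed in the abstract is the part that this differential-mode definition cannot see.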
Abstract:
The aim of this thesis is to describe hybrid drive design problems and the advantages and difficulties related to the drive. A review of possible hybrid constructions and of the benefits of parallel, series and series-parallel hybrids is given. In the thesis, analytical and finite element calculations of permanent magnet synchronous machines with embedded magnets were done. The finite element calculations were done using Cedrat's Flux 2D software. This machine is planned to be used as a motor-generator in a low-power parallel hybrid vehicle. The boundary conditions for the design were obtained from Lucas-TVS Ltd., India. Design requirements, briefly:
• The system DC voltage level is 120 V, which implies Uphase = 49 V (RMS) in a three-phase system.
• A power output of 10 kW at the base speed of 1500 rpm (torque of 65 Nm) is desired.
• The maximum outer diameter should not be more than 250 mm, and the maximum core length should not exceed 40 mm.
The main difficulty the author met was the dimensional restrictions. After several possible constructions had been designed and analyzed, they were compared and the final design was selected. A dimensioned and detailed design is performed. The effects of different parameters, such as the number of poles, the number of turns and the magnetic geometry, are discussed. The best modification offers a considerable reduction of volume.
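The 49 V phase voltage quoted in the requirements follows from the 120 V DC link under the usual assumption that the line-to-line voltage amplitude is limited to the DC-link voltage (sinusoidal modulation, no overmodulation):

```python
import math

U_DC = 120.0  # V, DC-link voltage

# Line-to-line amplitude limited to U_DC gives a line-to-line RMS of
# U_DC / sqrt(2); dividing by sqrt(3) yields the phase RMS voltage.
U_phase = U_DC / (math.sqrt(3) * math.sqrt(2))
print(round(U_phase))  # 49
```

Space-vector or third-harmonic modulation would raise this ceiling by about 15 percent, so the 49 V figure is the conservative sinusoidal limit.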
Abstract:
Novel biomaterials are needed to fill the demand for tailored bone substitutes required by an ever-expanding array of surgical procedures and techniques. Wood, a natural fiber composite, modified with heat treatment to alter its composition, may provide a novel approach to the further development of hierarchically structured biomaterials. The suitability of wood as a model biomaterial, as well as the effects of heat treatment on the osteoconductivity of wood, was studied by placing untreated and heat-treated (at 220, 200 and 140 degrees C for 2 h) birch implants (size 4 x 7 mm) into drill cavities in the distal femur of rabbits. The follow-up period was 4, 8 and 20 weeks in all in vivo experiments. The flexural properties of wood, as well as dimensional changes and hydroxyapatite formation on the surface of wood (untreated, 140 degrees C and 200 degrees C heat-treated wood), were tested using 3-point bending and compression tests and immersion in simulated body fluid. The effect of premeasurement grinding and the effect of heat treatment on the surface roughness and contour of wood were tested with contact stylus and non-contact profilometry. The effects of heat treatment of wood on its interactions with biological fluids were assessed using two different test media and real human blood in liquid penetration tests. The results of the in vivo experiments showed implanted wood to be well tolerated, with no implants rejected due to foreign body reactions. Heat treatment had significant effects on the biocompatibility of wood, allowing host bone to grow into tight contact with the implant, with occasional bone ingrowth into the channels of the wood implant. The results of the liquid immersion experiments showed hydroxyapatite formation only in the most extensively heat-treated wood specimens, which supported the results of the in vivo experiments.
Parallel conclusions could be drawn based on the results of the liquid penetration test where human blood had the most favorable interaction with the most extensively heat‐treated wood of the compared materials (untreated, 140 degrees C and 200 degrees C heat‐treated wood). The increasing biocompatibility was inferred to result mainly from changes in the chemical composition of wood induced by the heat treatment, namely the altered arrangement and concentrations of functional chemical groups. However, the influence of microscopic changes in the cell walls, surface roughness and contour cannot be totally excluded. The heat treatment was hypothesized to produce a functional change in the liquid distribution within wood, which could have biological relevance. It was concluded that the highly evolved hierarchical anatomy of wood could yield information for the future development of bulk bone substitutes according to the ideology of bioinspiration. Furthermore, the results of the biomechanical tests established that heat treatment alters various biologically relevant mechanical properties of wood, thus expanding the possibilities of wood as a model material, which could include e.g. scaffold applications, bulk bone applications and serving as a tool for both mechanical testing and for further development of synthetic fiber reinforced composites.