Abstract:
The objective of this thesis was to identify the effects of different factors on the tension and tension relaxation of a wet paper web after high-speed straining. The study was motivated by the plausible connection between wet web mechanical properties and wet web runnability on paper machines shown by previous studies. The mechanical properties of wet paper were examined using a fast tensile test rig with a strain rate of 1000%/s. Most of the tests were carried out with laboratory handsheets, but samples from a pilot paper machine were also used. The tension relaxation of paper was evaluated as the tension remaining after 0.475 s of relaxation (residual tension). The tensile and relaxation properties of wet webs were found to be strongly dependent on the quality and amount of fines. With low fines content, the tensile strength and residual tension of wet paper were mainly determined by the mechanical interactions between fibres at their contact points. As fines strengthen the mechanical interaction in the network, the fibre properties also become important. Fibre deformations caused by the mechanical treatment of pulp were shown to reduce the mechanical properties of both dry and wet paper; however, the effect was significantly larger for wet paper. An increase of filler content from 10% to 25% greatly reduced the tensile strength of dry paper, but did not significantly impair wet web tensile strength or residual tension. Increased filler content was shown to increase the dryness of the wet web after the press section, which partly compensates for the reduction of fibrous material in the web. It is also plausible that fillers increase entanglement friction between fibres, which is beneficial for wet web strength. Different contaminants present in white water during sheet formation resulted in lowered surface tension and increased dryness after wet pressing. The addition of different contaminants reduced the tensile strength of the dry paper. This reduction could not be explained by the reduced surface tension, but rather by the tendency of the contaminants to interfere with inter-fibre bonding. Additionally, wet web strength was not affected by the changes in the surface tension of white water or by possible changes in the hydrophilicity of fibres caused by the added contaminants. Spraying different polymers on wet paper before wet pressing had a significant effect on both dry and wet web tensile strength, whereas wet web elastic modulus and residual tension were essentially unaffected. We suggest that the increase of dry and wet paper strength may arise from molecular-level interactions between these chemicals and fibres. The most significant increases in dry and wet paper strength were achieved with a dual application of anionic and cationic polymers. Furthermore, selectively adding papermaking chemicals to different fibre fractions (as opposed to adding chemicals to the whole pulp) improved the wet web mechanical properties and the drainage of the pulp suspension.
Abstract:
Cutting of thick-section stainless steel and mild steel, and medium-section aluminium using a high-power ytterbium fibre laser was experimentally investigated in this study. Theoretical models of the laser power requirement for cutting a metal workpiece and of the melt removal rate were also developed. The calculated laser power requirement was correlated with the laser power used for cutting a 10 mm stainless steel workpiece and a 15 mm mild steel workpiece using the ytterbium fibre laser and the CO2 laser. Nitrogen assist gas was used for cutting stainless steel and oxygen for mild steel. It was found that the incident laser power required for cutting at a given cutting speed was lower for fibre laser cutting than for CO2 laser cutting, indicating a higher absorptivity of the fibre laser beam by the workpiece and a higher melting efficiency for the fibre laser beam than for the CO2 laser beam. The difficulty in achieving efficient melt removal during high-speed cutting of the 15 mm mild steel workpiece with oxygen assist gas using the ytterbium fibre laser can be attributed to the high melting efficiency of the ytterbium fibre laser. The calculated melt flow velocity and melt film thickness correlated well with the location of the boundary layer separation point on the 10 mm stainless steel cut edges. An increase in the melt film thickness, caused by deceleration of the melt particles in the boundary layer by the viscous shear forces, results in the flow separation. The melt flow velocity increases with an increase in assist gas pressure and cut kerf width, resulting in a reduction in the melt film thickness, and the boundary layer separation point moves closer to the bottom cut edge. The cut edge quality was examined by visual inspection of the cut samples and by measurement of the cut kerf width, boundary layer separation point, cut edge squareness (perpendicularity) deviation, and cut edge surface roughness as output quality factors. Different regions of cut edge quality in 10 mm stainless steel and 4 mm aluminium workpieces were defined for different combinations of cutting speed and laser power. Optimization of processing parameters for a high cut edge quality in 10 mm stainless steel was demonstrated.
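At its simplest, a laser power requirement model of this kind reduces to a lumped power balance: the absorbed fraction of the incident beam must at least supply the enthalpy of the material melted out of the kerf. The following form is a generic textbook sketch, not the exact model of the thesis (which also treats the melt removal rate and, in oxygen cutting, the exothermic reaction energy):

$$A\,P_L \;\gtrsim\; \rho\, w\, d\, v \left[\, c_p\,(T_m - T_0) + L_m \,\right],$$

where A is the absorptivity, P_L the incident laser power, ρ the density, w the kerf width, d the sheet thickness, v the cutting speed, c_p the specific heat, T_m the melting temperature, T_0 the ambient temperature, and L_m the latent heat of fusion. The finding that fibre laser cutting needs less incident power at a given speed then corresponds to a larger effective A.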
Abstract:
By alloying metals with other materials, one can modify the metal’s characteristics or compose an alloy that has certain desired characteristics that no pure metal has. The field is vast and complex, and the phenomena that govern the behaviour of alloys are numerous. Theories alone cannot penetrate such complexity, and the scope of experiments is also limited. This is why the relatively new field of ab initio computational methods has much to give to this field. With these methods, one can extend the understanding given by theories, predict how some systems might behave, and obtain information that is inaccessible to physical experiments. This thesis seeks to contribute to the collective knowledge of this field through two cases. The first part examines the oxidation of Ag/Cu, namely, the adsorption dynamics and oxygen-induced segregation of the surface. Our results demonstrate that the presence of Ag on the Cu(100) surface layer strongly inhibits dissociative adsorption. Our results also confirmed that surface reconstruction does happen, as experiments had suggested. Our studies indicate that 0.25 ML of oxygen is enough for Ag to diffuse towards the bulk, under the copper oxide layer. The other part elucidates the complex interplay of various energy and entropy contributions to the phase stability of paramagnetic duplex steel alloys. We were able to produce a phase stability map from first principles, and it agrees with experiments rather well. Our results also show that entropy contributions play a very important role in defining the phase stability. This is, to the author’s knowledge, the first ab initio study on this subject.
Abstract:
The increasing power demand and emerging applications drive the design of electrical power converters towards modularization. Despite the wide use of modularized power stage structures, the control schemes used are often traditional, in other words, centralized. The flexibility and re-usability of these controllers are typically poor. With a dedicated distributed control scheme, the flexibility and re-usability of the system parts, the building blocks, can be increased. Only a few distributed control schemes have been introduced for this purpose, but their breakthrough has not yet taken place. A demand for the further development of flexible control schemes for building-block-based applications clearly exists. The control topology, communication, synchronization, and functionality allocation aspects of building-block-based converters are studied in this doctoral thesis. A distributed control scheme that can be easily adapted to building-block-based power converter designs is developed. The example applications are a parallel and a series connection of building blocks. The building block used in the implementations of both applications is a commercial off-the-shelf two-level three-phase frequency converter with a custom-designed controller card. The major challenge with the parallel connection of power stages is the synchronization of the building blocks. The effect of synchronization accuracy on the system performance is studied. The functionality allocation and control scheme design are challenging in series-connected multilevel converters, mainly because of the large number of modules. Various multilevel modulation schemes are analyzed with respect to their implementation, and this information is used to develop a flexible control scheme for modular multilevel inverters.
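To illustrate the role of synchronization in such series-connected designs, the sketch below generates phase-shifted triangular carriers for N modules, one of the standard multilevel modulation schemes; it is a minimal illustration, not the control scheme of the thesis, and all names are placeholders.

```python
import numpy as np

def phase_shifted_carriers(n_modules, f_carrier, t):
    """Triangular carriers shifted by 1/N of the carrier period per module,
    so the effective switching frequency at the output is N * f_carrier."""
    carriers = []
    for k in range(n_modules):
        x = (t * f_carrier + k / n_modules) % 1.0          # fractional carrier period
        carriers.append(2.0 * np.abs(2.0 * x - 1.0) - 1.0)  # triangle wave in [-1, 1]
    return np.array(carriers)

# Each building block compares the common reference against its own carrier.
t = np.linspace(0.0, 0.02, 20000)             # one 50 Hz fundamental period
ref = 0.8 * np.sin(2.0 * np.pi * 50.0 * t)    # shared modulation reference
gates = ref[None, :] > phase_shifted_carriers(4, 1000.0, t)  # 4 modules, 1 kHz carriers
```

Any timing error between the module carriers shifts the interleaving and distorts the synthesized multilevel waveform, which is why synchronization accuracy is a central question for both the parallel and series connections.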
Abstract:
This dissertation is based on 5 articles which deal with the reaction mechanisms of the following selected industrially important organic reactions: 1. dehydrocyclization of n-butylbenzene to produce naphthalene, 2. dehydrocyclization of 1-(p-tolyl)-2-methylbutane (MB) to produce 2,6-dimethylnaphthalene, 3. esterification of neopentyl glycol (NPG) with different carboxylic acids to produce monoesters, and 4. skeletal isomerization of 1-pentene to produce 2-methyl-1-butene and 2-methyl-2-butene. The results of initial- and integral-rate experiments of n-butylbenzene dehydrocyclization over a self-made chromia/alumina catalyst were applied when investigating reaction 2. Reaction 2 was performed using commercial chromia/alumina catalysts of different acidity, platinum on silica, and vanadium/calcium/alumina as catalysts. On all catalysts used for the dehydrocyclization, the major reactions were fragmentation of MB and of 1-(p-tolyl)-2-methylbutenes (MBes), dehydrogenation of MB, double bond transfer, hydrogenation, and 1,6-cyclization of MBes. Minor reactions were 1,5-cyclization of MBes and methyl group fragmentation of the 1,6-cyclization products. The esterification reactions of NPG were performed using three different carboxylic acids: propionic, isobutyric and 2-ethylhexanoic acid. Commercial heterogeneous gellular (Dowex 50WX2) and macroreticular (Amberlyst 15) resins and homogeneous para-toluene sulfonic acid were used as catalysts. At first, NPG reacted with a carboxylic acid to form the corresponding monoester and water. The monoester then esterified with the carboxylic acid to form the corresponding diester. In the disproportionation reaction, two monoester molecules formed NPG and the corresponding diester. All three reactions can attain equilibrium. Concerning esterification, water was removed from the reactor in order to prevent the backward reaction. Skeletal isomerization experiments of 1-pentene were performed over an HZSM-22 catalyst. Isomerization reactions of three different kinds were detected: double bond, cis-trans and skeletal isomerization. Minor side reactions were dimerization and fragmentation. Monomolecular and bimolecular reaction mechanisms for skeletal isomerization explained the experimental results almost equally well. Pseudohomogeneous kinetic parameters of reactions 1 and 2 were estimated by ordinary least-squares fitting. Concerning reactions 3 and 4, the kinetic parameters were estimated by the least-squares method, but the possible cross-correlation and identifiability of the parameters were also determined using the Markov chain Monte Carlo (MCMC) method. Finally, using the MCMC method, the estimation of model parameters and predictions were performed according to the Bayesian paradigm. According to the fitting results, the suggested reaction mechanisms explained the experimental results rather well. When the possible cross-correlation and identifiability of the parameters (reactions 3 and 4) were determined using the MCMC method, the parameters were well identified, and no pathological cross-correlation could be seen between any parameter pair.
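As a hedged sketch of the MCMC step, the following random-walk Metropolis sampler fits a hypothetical first-order rate law; the actual rate expressions, data, and sampler of the thesis differ, and every name here is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concentration data generated from c(t) = c0 * exp(-k t).
t_data = np.linspace(0.0, 10.0, 20)
c_data = np.exp(-0.3 * t_data) + rng.normal(0.0, 0.01, t_data.size)

def log_posterior(theta):
    k, sigma = theta
    if k <= 0.0 or sigma <= 0.0:               # flat priors on positive values
        return -np.inf
    resid = c_data - np.exp(-k * t_data)
    # Gaussian likelihood; its maximum coincides with the least-squares fit.
    return -0.5 * np.sum((resid / sigma) ** 2) - t_data.size * np.log(sigma)

theta = np.array([0.5, 0.05])                  # initial guess for (k, sigma)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [0.02, 0.002])
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                       # Metropolis accept/reject
    chain.append(theta)
chain = np.array(chain)[5000:]                 # discard burn-in

# The posterior samples expose parameter identifiability and cross-correlation,
# e.g. via np.corrcoef(chain.T) or scatter plots of the chain.
```

In this Bayesian setting, the spread of the chain replaces single point estimates, which is precisely what enables the identifiability and cross-correlation checks mentioned above.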
Abstract:
In recent years, the vulnerability of power networks to natural hazards has attracted attention. Moreover, operating at the limits of the network transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer compared with the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model describes the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered. A comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to the passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented. As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed. It is shown how different control system structures affect the performance. Performance meeting the requirements is achieved by using only one output measurement when operating in a rigid network. Similar performance can be achieved in a weak grid by means of a DC voltage measurement. A further improvement can be achieved when adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
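A first-pass version of such a capacitance guideline can be written as an energy-buffer bound (a generic estimate, not the exact dimensioning rule of the thesis): if the capacitor must supply the customer-end load power P for an interval Δt while the DC voltage is allowed to dip from V to V − ΔV, then

$$\tfrac{1}{2}\,C\left[V^2 - (V-\Delta V)^2\right] \;\ge\; P\,\Delta t
\quad\Longrightarrow\quad
C \;\ge\; \frac{2\,P\,\Delta t}{V^2 - (V-\Delta V)^2} \;\approx\; \frac{P\,\Delta t}{V\,\Delta V},$$

where the approximation holds for small ripple, ΔV ≪ V.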
Centralized Motion Control of a Linear Tooth Belt Drive: Analysis of the Performance and Limitations
Abstract:
A centralized robust position control scheme for an electrically driven tooth belt drive is designed in this doctoral thesis. Both a cascaded control structure and a PID-based position controller are discussed. The performance and the limitations of the system are analyzed, and design principles for the mechanical structure and the control design are given. These design principles are also suitable for most motion control applications where mechanical resonance frequencies and control loop delays are present. One of the major challenges in the design of a controller for machinery applications is that the values of the parameters in the system model (parametric uncertainty) or the system model itself (non-parametric uncertainty) are seldom known accurately in advance. In this thesis, a systematic analysis of the parameter uncertainty of the linear tooth belt drive model is presented, and the effect of the variation of a single parameter on the performance of the total system is shown. The total variation of the model parameters is taken into account in the control design phase using Quantitative Feedback Theory (QFT). The thesis also introduces a new method to analyze reference feedforward controllers applying the QFT. The performance of the designed controllers is verified by experimental measurements. The measurements confirm the control design principles given in this thesis.
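As a minimal illustration of the PID-based position loop discussed above (a generic discrete-time form with placeholder gains, not the QFT-tuned controller of the thesis):

```python
class DiscretePID:
    """Parallel-form PID with a filtered derivative, sampled every ts seconds.
    The derivative filter limits high-frequency gain, which matters when
    mechanical resonances and loop delays are present."""
    def __init__(self, kp, ki, kd, ts, n_filt=10.0):
        self.kp, self.ki, self.kd, self.ts, self.n = kp, ki, kd, ts, n_filt
        self.integral = 0.0
        self.d_state = 0.0
        self.prev_error = 0.0

    def update(self, position_ref, position_meas):
        error = position_ref - position_meas
        self.integral += self.ki * self.ts * error
        # First-order low-pass on the raw difference derivative (pole at n_filt rad/s).
        alpha = self.ts * self.n / (1.0 + self.ts * self.n)
        raw_d = (error - self.prev_error) / self.ts
        self.d_state += alpha * (raw_d - self.d_state)
        self.prev_error = error
        return self.kp * error + self.integral + self.kd * self.d_state
```

In a QFT design, the gains and the derivative filter would be shaped so that the loop meets its tracking and stability bounds over the whole parameter uncertainty set, rather than for one nominal model.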
Abstract:
Electricity distribution network operation (NO) models are challenged as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers have to adapt to competitive techno-economic business models regarding the operation of increasingly intelligent distribution networks. Factors driving the changes towards new business models within network operation include: increased investments in distribution automation (DA), regulatory frameworks for annual profit limits and quality through outage costs, increasing end-customer demands, climatic changes, and the increasing use of data system tools such as the Distribution Management System (DMS). The doctoral thesis addresses the questions a) whether conditions and qualifications exist for competitive markets within electricity distribution network operation and b) if so, what the limitations and required business mechanisms are. This doctoral thesis aims to provide an analytical business framework, primarily for electric utilities, for the evaluation and development of dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model has been addressed through the use of the strategic business hierarchy levels of mission, vision and strategy for the definition of the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics. The main scientific contributions include further development of extended transaction cost economics (TCE) for governance decisions within electricity networks and validation of the usability of the methodology for the electricity distribution industry. Moreover, the DMS benefit evaluations in the thesis, based on outage cost calculations, propose theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results in the thesis are generally validated by surveys and questionnaires.
Abstract:
The developing energy markets and rising energy system costs have sparked the need to find new forms of energy production and to increase the self-sufficiency of energy production. One alternative is gasification, whose principles have been known for decades, but only recently has the technology become a true alternative. However, in order to meet the requirements of modern energy production methods, it is necessary to study the phenomenon thoroughly. In order to understand the gasification process better and to optimize it from the viewpoint of ecology and energy efficiency, it is necessary to develop effective and reliable modeling tools for gasifiers. The main aims of this work have been to understand gasification as a process and, furthermore, to further develop an existing three-dimensional circulating fluidized bed modeling tool for the modeling of gasification. The model is applied to two gasification processes of 12 and 50 MWth. The results of the modeling and the measurements have been compared and reviewed. The work was done in co-operation with Lappeenranta University of Technology and Foster Wheeler Energia Oy.
Abstract:
There are several filtration applications in the pulp and paper industry where the capacity and cost-effectiveness of the processes are of importance. Ultrafiltration is used to clean process water. Ultrafiltration is a membrane process that separates a certain component or compound from a liquid stream. The pressure difference across the membrane sieves macromolecules smaller than 0.001-0.02 μm through the membrane. When optimizing the capacity of the filtration process, online information about the condition of the membrane is needed. Fouling and compaction of the membrane both affect the capacity of the filtration process. In fouling, a “cake” layer builds up on the surface of the membrane. This layer blocks molecules from passing through the membrane, thereby decreasing the yield of the process. In compaction, the membrane structure is flattened because of the high pressure applied. A higher pressure increases the capacity but may damage the structure of the membrane permanently. Information about the compaction is needed to operate the filters effectively. The objective of this study was to develop an accurate system for online monitoring of the condition of the membrane using ultrasound reflectometry. Measurements of ultrafiltration membrane compaction were made successfully utilizing ultrasound. The results were confirmed by permeate flux decline, measurements of compaction with a micrometer, mechanical compaction using a hydraulic piston, and a scanning electron microscope (SEM). The scientific contribution of this thesis is the introduction of a secondary ultrasound transducer to determine the speed of sound in the fluid used. The speed of sound is highly dependent on the temperature and pressure in the filters. When the exact speed of sound is obtained by the reference transducer, the effect of temperature and pressure is eliminated. This speed is then used to calculate the distances with higher accuracy. As the accuracy, or the resolution, of the ultrasound measurement is increased, the method can be applied to a wider range of applications, especially processes where fouling layers are thinner because of smaller macromolecules. With the help of the transducer, a membrane compaction of 13 μm was measured at a pressure of 5 bar. The results were verified with the permeate flux decline, which indicated that compaction had taken place. The measurements of compaction with a micrometer showed a compaction of 23–26 μm. The results are in the same range and confirm the compaction. Mechanical compaction measurements were made using a hydraulic piston, and the result was the same 13 μm as obtained by applying ultrasound time-domain reflectometry (UTDR). A scanning electron microscope (SEM) was used to study the structure of the samples before and after the compaction.
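The essence of the reference-transducer correction can be sketched in a few lines (a simplified pulse-echo illustration with hypothetical numbers, not the full measurement system):

```python
def speed_of_sound(ref_path_m, ref_round_trip_s):
    """Calibrate c in the actual fluid using a reference transducer whose
    echo path length is known; this removes the temperature and pressure
    dependence of c from the distance result."""
    return 2.0 * ref_path_m / ref_round_trip_s

def echo_distance(c_fluid, round_trip_s):
    """Pulse-echo distance: the pulse travels to the reflector and back."""
    return c_fluid * round_trip_s / 2.0

# Example with made-up round-trip times on the microsecond scale:
c = speed_of_sound(0.020, 26.30e-6)        # ~1521 m/s in the pressurized fluid
before = echo_distance(c, 13.150e-6)       # membrane surface before compaction
after = echo_distance(c, 13.167e-6)        # surface position after compaction
compaction_um = (after - before) * 1e6     # ~13 um, the order reported above
```

Because the distance scales linearly with c, even a fraction-of-a-percent error in the assumed speed of sound would swamp a micrometre-scale compaction, which is why the reference transducer is the key to the reported resolution.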
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes is pushing the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm’s performance in experiments with the colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic (ROC) analysis, and it follows medical decision-making practice, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made publicly available on the web to set baseline results for the automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented. The optic disc localisation is discussed because normal eye fundus structures are fundamental in the characterisation of DR.
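The one-class idea can be sketched with scikit-learn as follows (a minimal illustration; the thesis’ own features, model orders, and thresholds differ, and the input arrays are hypothetical): fit a Gaussian mixture to colour features of one lesion type only, then flag pixels whose likelihood under that density is high enough.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_lesion_model(lesion_rgb, n_components=8):
    """lesion_rgb: (n_pixels, 3) colour samples from expert-annotated lesions."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0)
    return gmm.fit(lesion_rgb)

def detect(gmm, image_rgb, log_lik_threshold):
    """Mark pixels that are likely under the single lesion-class density.
    Sweeping the threshold traces out the ROC curve used for evaluation."""
    flat = image_rgb.reshape(-1, 3).astype(float)
    scores = gmm.score_samples(flat)        # log p(x | lesion class)
    return (scores > log_lik_threshold).reshape(image_rgb.shape[:2])
```

Note that only the lesion class is modelled; everything else in the fundus image is rejected implicitly, which is what distinguishes this one-class formulation from a conventional two-class classifier.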
Abstract:
Project management has evolved in recent decades. Project portfolio management, together with multi-project management, is an emerging area in the project management field in practice, and correspondingly in academic research and forums. In multi-project management, projects cannot be handled in isolation from each other, as they often have interdependencies that have to be taken into account. If the interdependencies between projects are evaluated during the selection process, the success rate of the project portfolio is increased. Interdependencies can be human-resource, technological, and/or market based. Despite the fact that interdependency as a phenomenon has roots in the 1960s and is related to famous management theories, it has not been studied much, although in practice most companies use it to a great extent. Some research on interdependency exists, but prior publications have not emphasized the phenomenon per se, because a practical orientation towards practitioner techniques prevails in the literature. This research applies method triangulation: electronic surveys and a multiple case study. The research concentrates on small to large companies in Estonia and Finland, mainly in the construction, engineering, ICT, and machinery industries. The literature review reveals that interdependencies are deeply involved in R&D and innovation. The survey analysis shows that companies are aware of interdependency issues in general, but they lack the detailed knowledge to use them thoroughly. Empirical evidence also indicates that interdependency techniques influence the success rate and other efficiency aspects to different extents. There are many similarities in interdependency-related managerial issues in companies of varying sizes and countries in Northern Europe. Differences found in the study include the fact that smaller companies face more difficulties in implementing and evaluating interdependency procedures. Country differences between Estonia and Finland stem from historical and cultural reasons, such as the special features of a transition country compared to a mature country. An overview of the dominant problems, best practices, and commonly used techniques associated with interdependency is provided in the study. Empirical findings show that many interdependency techniques are not used in practice. A multiple case study was performed to find out how interdependencies are managed in real life on a daily basis. The results show that interdependencies are mostly managed in an informal manner. A description of managing the interdependencies and of the implementation procedures is given. Interdependency procedures are hard to implement, especially in smaller companies, and companies have difficulties in evaluating them. The study contains detailed results on how companies have implemented working solutions to manage interdependencies on a daily basis.
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry owing to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena, such as nucleation, crystal growth and agglomeration. All these phenomena are dependent on supersaturation, i.e. the difference between the actual liquid phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect the induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of the crystalline end products, e.g. crystal size and crystal habit, can be influenced by the management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on the MSMPR theory showed that the model is feasible for a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
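In the standard notation for the driving force mentioned above, the absolute and relative supersaturation are

$$\Delta c = c - c^{*}(T), \qquad S = \frac{c}{c^{*}(T)},$$

where c is the actual solute concentration and c*(T) the solubility at temperature T; nucleation, growth and agglomeration rates are all expressed as functions of these quantities, which is why local variations in Δc caused by imperfect mixing propagate directly into the product crystal properties.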
Abstract:
Interest in hole-doped mixed-valence manganite perovskites is connected to the ‘colossal’ magnetoresistance. This effect, a huge drop of the resistivity, ρ, in an external magnetic field, B, usually attains its maximum value near the ferromagnetic Curie temperature, TC. This thesis investigates the conductivity mechanisms and magnetic properties of the manganite perovskite compounds LaMnO3+δ, La1-xCaxMnO3, La1-xCaxMn1-yFeyO3 and La1-xSrxMn1-yFeyO3. When the present work was started, the key role of the phase separation and its influence on the properties of the colossal magnetoresistive materials were not clear. Our main results are based on the temperature dependences of the magnetoresistance and magnetothermopower, investigated in the temperature interval of 4.2-300 K in magnetic fields up to 10 T. The magnetization was studied in the same temperature range in weak (up to 0.1 T) magnetic fields. LaMnO3+δ is the parent compound for the preparation of the hole-doped CMR materials. The dependences of such parameters as the Curie temperature, TC, the Coulomb gap, Δ, the rigid gap, γ, and the localization radius, a, on pressure, p, are observed in LaMnO3+δ. It has been established that these dependences can be interpreted by an increase of the electron bandwidth and a decrease of the polaron potential well when p is increased. Generally, pressure stimulates delocalization of the electrons in LaMnO3+δ. Doping of LaMnO3 with Ca, leading to La1-xCaxMnO3, changes the Mn3+/Mn4+ ratio significantly and brings additional disorder to the crystal lattice. Phase separation in the form of a mixture of the ferromagnetic and spin glass phases was observed and investigated in La1-xCaxMnO3 at x between 0 and 0.4. The influence of the replacement of Mn by Fe is studied in La0.7Ca0.3Mn1−yFeyO3 and La0.7Sr0.3Mn1−yFeyO3. Asymmetry of the soft Coulomb gap and of the rigid gap in the density of localized states, a small shift of the centre of the gaps with respect to the Fermi level, and cubic asymmetry of the density of states are obtained in La0.7Ca0.3Mn1−yFeyO3. The suppression of TC with y is connected to the breaking of the double-exchange interaction by doping with Fe, whereas the irreversibility and the critical behavior of the magnetic susceptibility are determined by the phase separation and the frustrated magnetic state of La0.7Sr0.3Mn1−yFeyO3.
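For context, hopping conduction in the presence of a soft Coulomb gap is commonly described by the Efros-Shklovskii variable-range-hopping law (a standard form given here for orientation; the expressions actually fitted in the thesis, including the rigid-gap term γ, are more detailed):

$$\rho(T) = \rho_0 \exp\!\left[\left(\frac{T_0}{T}\right)^{1/2}\right], \qquad k_B T_0 \approx \frac{2.8\, e^{2}}{\kappa\, a},$$

where a is the localization radius and κ the dielectric permittivity, which is how the gap parameters and the localization radius enter the resistivity analysis.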
Abstract:
Print quality and the printability of paper are very important attributes when modern printing applications are considered. In prints containing images, high print quality is a basic requirement. Tone unevenness and non-uniform glossiness of printed products are the most disturbing factors influencing overall print quality. These defects are caused by non-ideal interactions of paper, ink and printing devices in high-speed printing processes. Since print quality is a perceptual characteristic, the measurement of unevenness in accordance with human vision is a significant problem. In this thesis, the mottling phenomenon is studied. Mottling is a printing defect characterized by a spotty, non-uniform appearance in solid printed areas. Print mottle is usually the result of uneven ink laydown or non-uniform ink absorption across the paper surface, and it is especially visible in midtone imagery or areas of uniform color, such as solids and continuous-tone screen builds. By using existing knowledge of visual perception and known methods to quantify print tone variation, a new method for print unevenness evaluation is introduced. The method is compared to previous results in the field and is supported by psychometric experiments. Pilot studies were made to estimate the effect of the optical characteristics of the paper prior to printing on the unevenness of the printed area after printing. Instrumental methods for print unevenness evaluation have been compared, and the results of the comparison indicate that the proposed method produces better results in terms of correspondence with visual evaluation. The method has been successfully implemented as an industrial application and has proved to be a reliable substitute for visual expertise.
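A common way to quantify print tone variation in rough accordance with human contrast sensitivity is a band-pass coefficient-of-variation analysis; the sketch below is illustrative of that family of methods, not a reproduction of the method proposed in the thesis, and the band spacing is a placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_cov(reflectance, sigma_small, sigma_large):
    """Coefficient of variation within one spatial-frequency band,
    isolated with a difference of Gaussians."""
    band = (gaussian_filter(reflectance, sigma_small)
            - gaussian_filter(reflectance, sigma_large))
    return band.std() / reflectance.mean()

def mottle_index(reflectance, sigmas=(1, 2, 4, 8, 16)):
    # Octave-spaced bands; a perceptual method would weight the bands
    # the eye is most sensitive to more heavily than the others.
    return sum(band_cov(reflectance, s, 2 * s) for s in sigmas)

# reflectance: 2-D array scanned from a solid printed area (hypothetical input).
```

Psychometric experiments, as used in the thesis, are then needed to check that such an instrumental index actually ranks samples the way human observers do.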