Abstract:
This thesis is devoted to investigations of three typical representatives of the II-V diluted magnetic semiconductors: Zn1-xMnxAs2, (Zn1-xMnx)3As2 and p-CdSb:Ni. When this work started, the family of II-V semiconductors was represented only by compounds belonging to the subgroup II3-V2, such as (Zn1-xMnx)3As2, whereas the other materials mentioned above had not been investigated at all. Pronounced low-field magnetic irreversibility, accompanied by a ferromagnetic transition, is observed in Zn1-xMnxAs2 and (Zn1-xMnx)3As2 near 300 K. These features give evidence for the presence of nanosize MnAs magnetic clusters, responsible for a frustrated magnetic ground state. In addition, (Zn1-xMnx)3As2 demonstrates a large paramagnetic response due to a considerable amount of single Mn ions and small antiferromagnetic clusters. A similar paramagnetic system exists in Zn1-xMnxAs2, but it is much weaker. Distinct low-field magnetic irreversibility, accompanied by rapid saturation of the magnetization with increasing magnetic field, is observed near room temperature in p-CdSb:Ni as well. This behavior is connected to a frustrated magnetic state determined by Ni-rich magnetic Ni1-xSbx nanoclusters. Their pronounced non-sphericity and preferred orientations are responsible for the strong anisotropy of the coercivity and saturation magnetization of p-CdSb:Ni. Parameters of the Ni1-xSbx nanoclusters are estimated. The low-temperature resistivity of p-CdSb:Ni is governed by a hopping mechanism of charge transfer. The variable-range hopping conductivity observed in zero magnetic field shows a tendency to transform into nearest-neighbor hopping conductivity in a non-zero magnetic field. The Hall effect in p-CdSb:Ni reveals the presence of a positive normal contribution and a negative anomalous contribution to the Hall resistivity.
The normal Hall coefficient is governed mainly by holes activated into the valence band, whereas the anomalous Hall effect, attributable to the Ni1-xSbx nanoclusters with ferromagnetically ordered internal spins, exhibits a low-temperature power-law resistivity scaling.
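The distinction between the two hopping regimes mentioned above can be illustrated with the standard textbook temperature laws (Mott variable-range hopping versus activated nearest-neighbor hopping); the prefactors and characteristic temperatures below are arbitrary illustrative values, not fitted thesis data.

```python
import math

def mott_vrh_resistivity(T, rho0=1.0, T0=1.0e5):
    """Mott 3D variable-range hopping: rho = rho0 * exp((T0/T)^(1/4))."""
    return rho0 * math.exp((T0 / T) ** 0.25)

def nnh_resistivity(T, rho0=1.0, Ea_over_k=300.0):
    """Nearest-neighbor hopping: rho = rho0 * exp(Ea / (k*T))."""
    return rho0 * math.exp(Ea_over_k / T)

# Both mechanisms give resistivity that falls as temperature rises; in
# practice they are distinguished by plotting ln(rho) against T^(-1/4)
# (VRH) or T^(-1) (NNH) and checking which plot yields a straight line.
for T in (10.0, 20.0, 40.0):
    print(T, mott_vrh_resistivity(T), nnh_resistivity(T))
```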
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas: image quality itself and image fidelity, both dealing with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three quality attributes: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model have proven effective in the respective experiments.
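As a rough illustration of the kernel-similarity idea (a generic Gaussian RBF kernel applied to two spectra, not the thesis's exact fidelity measures), a minimal sketch could look as follows; the sigma value and the toy spectra are arbitrary assumptions.

```python
import math

def rbf_kernel_similarity(x, y, sigma=0.1):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).

    x and y are spectra sampled at the same wavelengths; the result is
    1.0 for identical spectra and decays toward 0 as they diverge.
    """
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

# Toy reflectance spectra at a handful of wavelength bands.
spectrum_a = [0.10, 0.25, 0.40, 0.35, 0.20]
spectrum_b = [0.12, 0.24, 0.38, 0.36, 0.22]

print(rbf_kernel_similarity(spectrum_a, spectrum_a))  # identical -> 1.0
print(rbf_kernel_similarity(spectrum_a, spectrum_b))  # close, but < 1.0
```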
Abstract:
In recent times of global turmoil, the need for uncertainty management has become ever more pressing. The need for enhanced foresight especially concerns capital-intensive industries, which need to commit their resources and assets with long-term planning horizons. Scenario planning has been acknowledged to have many virtues - and limitations - concerning the mapping of the future and the illustration of alternative development paths. The present study was initiated to address both the need for improved foresight in two capital-intensive industries, i.e. the paper and steel industries, and the imperfections in current scenario practice. The research problem has been approached by engendering a problem-solving vehicle which combines elements of, e.g., a generic scenario process, face-to-face group support methods, deductive scenario reasoning and causal mapping into a fully integrated scenario process. The process, called the SAGES scenario framework, has been empirically tested by creating alternative futures for two capital-intensive industries, i.e. the paper and steel industries. Three scenarios for each industry were engendered, together with the identification of the key megatrends, the most important foreign investment determinants, key future drivers and leading indicators for the materialisation of the scenarios. The empirical results revealed a two-fold outlook for the paper industry, while the future of the steel industry was seen as much more positive. The research found support for utilising group support systems in the scenario and strategic planning context, with some limitations. Key perceived benefits include high time-efficiency, productivity and lower resource-intensiveness. Group support also seems to enhance participant satisfaction, encourage innovative thinking and provide the users with personalised qualitative scenarios.
Abstract:
The traditional forest industry is a good example of the changing nature of the competitive environment in many industries. Faced with drastic challenges, forest industry companies are forced to search for new value-creating strategies in order to create competitive advantage. The emerging bioenergy business is now offering promising avenues for value creation for both the forest and energy sectors because of their complementary resources and knowledge with respect to bioenergy production from forest-based biomass. The key objective of this dissertation is to examine the sources of sustainable competitive advantage and the value-creation opportunities that are emerging at the intersection between the forest and energy industries. The research topic is considered from different perspectives in order to provide a comprehensive view of the phenomenon. The study discusses the business opportunities that are related to producing bioenergy from forest-based biomass, and sheds light on the greatest challenges and threats influencing the success of collaboration between the forest and energy sectors. In addition, it identifies existing and potential bioenergy actors, and considers the resources and capabilities needed in order to prosper in the bioenergy field. The value-creation perspective is founded on strategic management accounting, the theoretical frameworks are adopted from the field of strategic management, and the future aspect is taken into account through the application of futures studies research methodology. This thesis consists of two parts. The first part provides a synthesis of the overall dissertation, and the second part comprises four complementary research papers. The research setting is explorative in nature, and both qualitative and quantitative research methods are used. As a result, the thesis lays the foundation for non-technological studies on bioenergy.
It gives an example of how to study new value-creation opportunities at an industrial intersection, and discusses the main determinants affecting the value-creation process. In order to accomplish these objectives the phenomenon of value creation at the intersection between the forest and energy industries is theorized and connected with the dynamic resource-based view of the firm.
Abstract:
An oscillating overvoltage has become a common phenomenon at the motor terminal in inverter-fed variable-speed drives. The problem has emerged since modern insulated gate bipolar transistors have become the standard choice as the power switch component in low-voltage frequency converter drives. The overvoltage phenomenon is a consequence of the pulse shape of the inverter output voltage and impedance mismatches between the inverter, motor cable, and motor. The overvoltages are harmful to the electric motor, and may cause, for instance, insulation failure in the motor. Several methods have been developed to mitigate the problem. However, most of them are based on filtering with lossy passive components, the drawbacks of which are typically their cost and size. In this doctoral dissertation, the application of a new active du/dt filtering method based on a low-loss LC circuit and active control to eliminate the motor overvoltages is discussed. The main benefits of the method are the controllability of the output voltage du/dt within certain limits, considerably smaller inductances in the filter circuit resulting in a smaller physical component size, and excellent filtering performance when compared with typical traditional du/dt filtering solutions. Moreover, no additional components are required, since the active control of the filter circuit takes place in the process of the upper-level PWM modulation using the same power switches as the inverter output stage. Further, the active du/dt method will benefit from the development of semiconductor power switch modules as new technologies and materials emerge, because the method requires additional switching in the output stage of the inverter and the generation of narrow voltage pulses. Since additional switching is required in the output stage, additional losses are generated in the inverter as a result of the application of the method.
Considerations on the application of the active du/dt filtering method in electric drives are presented together with experimental data in order to verify the potential of the method.
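The motivation for actively controlling the LC circuit can be seen from the step response of an ideal (undamped) LC low-pass filter: a hard voltage step makes the output ring to twice the step amplitude, which is exactly the kind of overshoot the switching pattern must be shaped to avoid. The sketch below uses the closed-form undamped response; the component values and the 540 V step are illustrative assumptions, not the dissertation's design values.

```python
import math

def lc_step_response(t, V_step, L, C):
    """Capacitor voltage of an ideal series-LC low-pass filter after a
    voltage step V_step at t = 0: v_c(t) = V_step * (1 - cos(w0 * t)),
    where w0 = 1 / sqrt(L * C). Undamped, so it overshoots to 2 * V_step."""
    w0 = 1.0 / math.sqrt(L * C)
    return V_step * (1.0 - math.cos(w0 * t))

# Arbitrary illustrative values: 2 uH filter inductor, 100 nF capacitor.
L, C = 2e-6, 100e-9
w0 = 1.0 / math.sqrt(L * C)
peak = lc_step_response(math.pi / w0, 540.0, L, C)  # half a resonance period
print(peak)  # -> 1080.0, i.e. 2x the 540 V step
```

The active du/dt idea is to replace the single hard step with a sequence of short charging pulses timed so that the capacitor arrives at the bus voltage with no residual ringing.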
Abstract:
Concentrated winding permanent magnet machines and their electromagnetic properties are studied in this doctoral thesis. The thesis includes a number of main tasks related to the application of permanent magnets in concentrated winding open slot machines. Suitable analytical methods are required for the first design calculations of a new machine. Concentrated winding machines differ from conventional integral slot winding machines in such a way that adapted analytical calculation methods are needed. A simple analytical model for calculating concentrated winding axial flux machines is provided. The next three main design tasks are discussed in more detail in the thesis. The magnetic length of rotor surface magnet machines is studied, and it is shown that the traditional methods have to be modified also in this respect. An important topic in this study has been to evaluate and minimize the rotor permanent magnet Joule losses by using segmented magnets in the calculations and experiments. The determination of the magnetizing and leakage inductances for a concentrated winding machine and the torque production capability of concentrated winding machines with different pole pair numbers are studied, and the results are compared with the corresponding properties of integral slot winding machines. The thesis introduces a new practical permanent magnet motor type for industrial use. The special features of the machine are based on the option of using concentrated winding open slot constructions of permanent magnet synchronous machines in the normal speed ranges of industrial motors, for instance up to 3000 min-1, without excessive rotor losses. By applying the analytical equations and methods introduced in the thesis, a 37 kW 2400 min-1 12-slot 10-pole axial flux machine with rotor-surface-mounted magnets is designed. The performance of the designed motor is determined by experimental measurements and finite element calculations.
Abstract:
In a paper machine, it is undesirable for the boundary layer flows on the fabric and roll surfaces to travel into the closing nips, where they create overpressure. In this thesis, the aerodynamic behavior of grooved and smooth rolls is compared in order to understand the nip flow phenomena, which are the main reason why vacuum and grooved roll constructions are designed. A common method to remove the boundary layer flow from the closing nip is to use a vacuum roll construction. The downside of the use of vacuum rolls is high operational costs due to pressure losses in the vacuum roll shell. The deep grooved roll has the same goal: to create a pressure difference over the paper web and keep the paper attached to the roll or fabric surface in the drying pocket of the paper machine. A literature review revealed that the aerodynamic functionality of the grooved roll is not very well known. In this thesis, the aerodynamic functionality of the grooved roll in interaction with a permeable or impermeable wall is studied by varying the groove properties. Computational fluid dynamics simulations are utilized as the research tool. The simulations have been performed with commercial fluid dynamics software, ANSYS Fluent. Simulation results obtained with 3- and 2-dimensional fluid dynamics models are compared to laboratory-scale measurements. The measurements have been made with a grooved roll simulator designed for this research. The variables in the comparison are the paper or fabric wrap angle, surface velocities, groove geometry and wall permeability. Present-day computational and modeling resources limit grooved roll fluid dynamics simulations in the paper machine scale. Based on the analysis of the aerodynamic functionality of the grooved roll, a grooved roll simulation tool is proposed. The smooth roll simulations show that the closing nip pressure does not depend on the length of boundary layer development.
The surface velocity increase affects the pressure distribution in the closing and opening nips. The 3D grooved roll model reveals the aerodynamic functionality of the grooved roll. With the optimal groove size it is possible to avoid closing nip overpressure and keep the web attached to the fabric surface in the area of the wrap angle. The groove flow friction and minor losses play a different role when the wrap angle is changed. The proposed 2D grooved roll simulation tool is able to replicate the aerodynamic behavior of the grooved roll with reasonable accuracy. With a small wrap angle, the chosen approach for calculating the groove friction losses predicts the pressure distribution correctly. With a large wrap angle, the groove friction loss produces too large pressure gradients, and the way of calculating the air flow friction losses in the groove has to be reconsidered. The aerodynamic functionality of the grooved roll is based on minor and viscous losses in the closing and opening nips as well as in the grooves. The proposed 2D grooved roll model is a simplification intended to reduce computational and modeling effort. The simulation tool makes it possible to simulate complex paper machine constructions in the paper machine scale. In order to use the grooved roll as a replacement for the vacuum roll, the grooved roll properties have to be considered on the basis of the web handling application.
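The friction and minor losses referred to above follow the standard pipe-flow relations: the Darcy-Weisbach friction loss with a laminar friction factor f = 64/Re, plus minor losses expressed through a loss coefficient K. The groove geometry, loss coefficient and air velocity below are illustrative assumptions, not the thesis's measurement conditions.

```python
def groove_pressure_losses(velocity, length, hydraulic_diameter,
                           K_minor=1.5, rho=1.2, nu=1.5e-5):
    """Friction and minor pressure losses for laminar air flow in a groove.

    Darcy-Weisbach friction loss: dp = f * (L / D_h) * rho * v**2 / 2,
    with f = 64 / Re for laminar flow and Re = v * D_h / nu.
    Minor loss (entry/exit, bends):  dp = K * rho * v**2 / 2.
    """
    Re = velocity * hydraulic_diameter / nu
    f = 64.0 / Re
    dynamic_pressure = 0.5 * rho * velocity ** 2
    dp_friction = f * (length / hydraulic_diameter) * dynamic_pressure
    dp_minor = K_minor * dynamic_pressure
    return dp_friction, dp_minor

# Illustrative groove: 2 mm hydraulic diameter, 0.5 m flow path, 5 m/s air.
dp_f, dp_m = groove_pressure_losses(5.0, 0.5, 2e-3)
print(dp_f, dp_m)  # friction loss 360 Pa, minor loss 22.5 Pa
```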
Abstract:
It is necessary to use highly specialized robots in ITER (International Thermonuclear Experimental Reactor), both in the manufacturing and in the maintenance of the reactor, due to its demanding environment. The sectors of the ITER vacuum vessel (VV) require more stringent tolerances than normally expected for the size of the structure involved. The VV consists of nine sectors that are to be welded together, and it has a toroidal chamber structure. The task of the designed robot is to carry the welding apparatus along a path with a stringent tolerance during the assembly operation. In addition to the initial vacuum vessel assembly, after a limited running period, sectors need to be replaced for repair. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work. One version is a purely parallel manipulator, and another is a hybrid manipulator in which parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance in high-speed and high dynamic loading conditions. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy and a high force/torque capacity in a reduced workspace. Parallel manipulators have a mechanical architecture where all of the links are connected to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining and repair of the VV of ITER. The process of the assembly and machining of the vacuum vessel needs a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built; they have six and ten degrees of freedom, driven by hydraulic cylinders and electrical servo motors. Kinematic models for the proposed robots were defined and two prototypes built.
Experiments for machine cutting and laser welding with the 6-DOF robot were carried out. It was demonstrated that the parallel robots are capable of holding all necessary machining tools and welding end-effectors in all positions accurately and stably inside the vacuum vessel sector. The kinematic models appeared to be complex especially in the case of the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out, ensuring sufficient stiffness during the robot motion. The entire design and testing processes of the robots appeared to be complex tasks due to the high specialization of the manufacturing technology needed in the ITER reactor, while the results demonstrate the applicability of the proposed solutions quite well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
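One reason parallel architectures are attractive is that their inverse kinematics is direct: each actuator length follows from the platform pose by a distance computation, with no chain of joint transformations. A planar sketch of this principle (hypothetical geometry, not the 6-DOF or 10-DOF ITER robots) might look as follows.

```python
import math

def leg_lengths(platform_pos, platform_angle, base_joints, platform_joints):
    """Inverse kinematics of a planar parallel manipulator.

    For each leg i, the actuator length is the distance between the fixed
    base joint a_i and the platform joint b_i rotated by the platform
    angle and translated by the platform position:
        L_i = || p + R(theta) * b_i - a_i ||
    """
    px, py = platform_pos
    c, s = math.cos(platform_angle), math.sin(platform_angle)
    lengths = []
    for (ax, ay), (bx, by) in zip(base_joints, platform_joints):
        wx = px + c * bx - s * by   # platform joint in the world frame
        wy = py + s * bx + c * by
        lengths.append(math.hypot(wx - ax, wy - ay))
    return lengths

# Toy 3-leg planar manipulator (hypothetical joint layout).
base = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
platform = [(-0.2, 0.0), (0.2, 0.0), (0.0, 0.2)]
print(leg_lengths((1.0, 1.0), 0.0, base, platform))
```

The forward problem (pose from leg lengths) is the hard direction for parallel mechanisms, which is consistent with the kinematic models becoming complex for the redundant 10-DOF robot.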
Abstract:
The front end of innovation is regarded as one of the most important steps in building new software products or services, and the most significant benefits in software development can be achieved through improvements in the front end activities. Problems in the front end phase have an impact on customer dissatisfaction with delivered software, and on the effectiveness of the entire software development process. When these processes are improved, the likelihood of delivering high quality software and business success increases. This thesis highlights the challenges and problems related to the early phases of software development, and provides new methods and tools for improving performance in the front end activities of software development. The theoretical framework of this study comprises two fields of research. The first section belongs to the field of innovation management, and especially to the management of the early phases of the innovation process, i.e. the front end of innovation. The second section of the framework is closely linked to the processes of software engineering, especially to the early phases of the software development process, i.e. the practice of requirements engineering. Thus, this study extends the theoretical knowledge and discloses the differences and similarities in these two fields of research. In addition, this study opens up a new strand for academic discussion by connecting these research directions. Several qualitative business research methodologies have been utilized in the individual publications to solve the research questions. The theoretical and managerial contribution of the study can be divided into three areas: 1) processes and concepts, 2) challenges and development needs, and 3) means and methods for the front end activities of software development. 
First, the study discloses the differences and similarities between the concepts of the front end of innovation and requirements engineering, and proposes a new framework for managing the front end of the software innovation process, bringing business and innovation perspectives into software development. Furthermore, the study discloses managerial perceptions of the similarities and differences in the concept of the front end of innovation between the software industry and the traditional industrial sector. Second, the study highlights the challenges and development needs in the front end phase of software development, especially challenges in communication, such as linguistic problems, ineffective communication channels, a communication gap between users/customers and software developers, and the participation of multiple persons in software development. Third, the study proposes new group methods for improving the front end activities of software development, especially customer need assessment and the elicitation of software requirements.
Abstract:
Strategic development of distribution networks plays a key role in asset management in electricity distribution companies. Owing to the capital-intensive nature of the field and the long time span of the companies' operations, the significance of a strategy is emphasised. A well-devised strategy combines awareness of the challenges posed by the operating environment and the future targets of the distribution company. Economic regulation, ageing infrastructure, scarcity of resources and tightening supply requirements, together with the challenges created by climate change, put pressure on the strategy work. On the other hand, technology development related to network automation and underground cabling assists in answering these challenges. This dissertation aims at developing process knowledge and establishing a methodological framework by which key issues related to network development can be addressed. Moreover, the work develops tools by which the effects of changes in the operating environment on the distribution business can be analysed in the strategy work. To this end, the work discusses certain characteristics of the distribution business and describes the strategy process at a principle level. Further, the work defines the subtasks in the strategy process and presents the key elements in the strategy work and long-term network planning. The work delineates the factors having either a direct or indirect effect on strategic planning and development needs in the networks; in particular, outage costs constitute an important part of the economic regulation of the distribution business, reliability thus being a key driver in network planning. The dissertation describes the methodology and tools applied to cost and reliability analyses in the strategy work.
The work focuses on determination of the techno-economic feasibility of different network development technologies; these feasibility surveys are linked to the economic regulation model of the distribution business, in particular from the viewpoint of reliability of electricity supply and allowed return. The work introduces the asset management system developed for research purposes and to support the strategy work, the calculation elements of the system and initial data used in the network analysis. The key elements of this asset management system are utilised in the dissertation. Finally, the study addresses the stages of strategic decision-making and compilation of investment strategies. Further, the work illustrates implementation of strategic planning in an actual distribution company environment.
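A common way to monetise reliability in regulation models of this kind is a customer interruption cost with a fixed component per interrupted kW and an energy component per undelivered kWh; the sketch below uses that generic two-term model with placeholder unit prices, not the specific regulation model or parameter values discussed in the dissertation.

```python
def outage_cost(interruptions, cost_per_kw=1.1, cost_per_kwh=11.0):
    """Customer interruption cost for a set of unplanned outages.

    Each interruption is a (interrupted_power_kW, duration_h) pair; the
    model charges a fixed price per interrupted kW plus an energy price
    per undelivered kWh. Unit prices here are illustrative placeholders.
    """
    return sum(P * (cost_per_kw + cost_per_kwh * t) for P, t in interruptions)

# Two example outages: 500 kW for 0.5 h and 200 kW for 2 h.
print(outage_cost([(500.0, 0.5), (200.0, 2.0)]))  # -> 7920.0
```

In a planning tool, such costs would be summed over expected fault rates per network section, making cabling and automation investments directly comparable against the outage costs they avoid.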
Abstract:
Virtuaaliammattikorkeakoulu, VirtuaaliAMK
Abstract:
Stratospheric ozone can be measured accurately using a limb scatter remote sensing technique in the UV-visible spectral region of solar light. The advantages of this technique include a good vertical resolution and a good daytime coverage of the measurements. In addition to ozone, UV-visible limb scatter measurements contain information about NO2, NO3, OClO, BrO and aerosols. There are currently several satellite instruments continuously scanning the atmosphere and measuring the UV-visible region of the spectrum, e.g., the Optical Spectrograph and Infrared Imager System (OSIRIS) launched on the Odin satellite in February 2001, and the Scanning Imaging Absorption SpectroMeter for Atmospheric CartograpHY (SCIAMACHY) launched on Envisat in March 2002. Envisat also carries the Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument, which also measures limb-scattered sunlight under bright limb occultation conditions. These conditions occur during daytime occultation measurements. The global coverage of satellite measurements is far better than that of any other ozone measurement technique, but the measurements are still sparse in the spatial domain. Measurements are also repeated relatively rarely over a given area, and the composition of the Earth's atmosphere changes dynamically. Assimilation methods are therefore needed in order to combine the information of the measurements with an atmospheric model. In recent years, the focus of assimilation algorithm research has turned towards filtering methods. The traditional extended Kalman filter (EKF) method takes into account not only the uncertainty of the measurements, but also the uncertainty of the evolution model of the system. However, the computational cost of the full-blown EKF increases rapidly as the number of model parameters increases. Therefore, the EKF method cannot be applied directly to the stratospheric ozone assimilation problem.
The work in this thesis is devoted to the development of inversion methods for satellite instruments and the development of assimilation methods used with atmospheric models.
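The predict/update cycle underlying the EKF-based assimilation described above can be sketched in the scalar case (real ozone assimilation acts on high-dimensional state vectors with full covariance matrices, which is precisely why the full EKF becomes too expensive). All numerical values below are toy illustrations.

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z    : new measurement
    F, Q : model dynamics and model-error variance (the 'evolution model')
    H, R : observation operator and measurement-error variance
    """
    # Predict: propagate the state and inflate uncertainty by model error.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement, weighted by the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Assimilate three noisy 'ozone' measurements into an initial guess of 0.
x, P = 0.0, 1.0
for z in (0.9, 1.1, 1.0):
    x, P = kalman_step(x, P, z)
print(x, P)  # estimate moves toward ~1.0 and the variance shrinks
```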
Abstract:
Cooling crystallization is one of the most important purification and separation techniques in the chemical and pharmaceutical industry. The product of a cooling crystallization process is always a suspension that contains both the mother liquor and the product crystals, and therefore the first process step following crystallization is usually solid-liquid separation. The properties of the produced crystals, such as their size and shape, can be affected by modifying the conditions during the crystallization process. The filtration characteristics of solid-liquid suspensions, on the other hand, are strongly influenced by the particle properties as well as the properties of the liquid phase. It is thus obvious that the effect of changes made to the crystallization parameters can also be seen in the course of the filtration process. Although the relationship between crystallization and filtration is widely recognized, the number of publications where these unit operations have been considered in the same context seems to be surprisingly small. This thesis explores the influence of different crystallization parameters in an unseeded batch cooling crystallization process on the external appearance of the product crystals and on the pressure filtration characteristics of the obtained product suspensions. Crystallization experiments are performed by crystallizing sulphathiazole (C9H9N3O2S2), a well-known antibiotic agent, from different mixtures of water and n-propanol in an unseeded batch crystallizer. The crystallization parameters studied are the composition of the solvent, the cooling rate in crystallization experiments carried out with a constant cooling rate throughout the whole batch, the cooling profile, and the mixing intensity during the batch.
The obtained crystals are characterized by using an automated image analyzer and the crystals are separated from the solvent through constant pressure batch filtration experiments. Separation characteristics of the suspensions are described by means of average specific cake resistance and average filter cake porosity, and the compressibilities of the cakes are also determined. The results show that fairly large differences can be observed between the size and shape of the crystals, and it is also shown experimentally that the changes in the crystal size and shape have a direct impact on the pressure filtration characteristics of the crystal suspensions. The experimental results are utilized to create a procedure that can be used for estimating the filtration characteristics of solid-liquid suspensions according to the particle size and shape data obtained by image analysis. Multilinear partial least squares regression (N-PLS) models are created between the filtration parameters and the particle size and shape data, and the results presented in this thesis show that relatively obvious correlations can be detected with the obtained models.
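The average specific cake resistance mentioned above is classically extracted from constant-pressure filtration data via the linearized Ruth equation, t/V = (mu*alpha*c / (2*A^2*dp)) * V + mu*Rm / (A*dp), where the slope of a t/V-versus-V plot gives alpha and the intercept gives the medium resistance Rm. The sketch below generates ideal data from an assumed alpha and recovers it by a straight-line fit; all parameter values are illustrative, not the thesis's experimental conditions.

```python
def cake_resistance_from_filtration(volumes, times, mu, c, A, dp):
    """Fit t/V = a*V + b by least squares and return (alpha, Rm).

    From the Ruth constant-pressure filtration equation:
      slope     a = mu * alpha * c / (2 * A**2 * dp)
      intercept b = mu * Rm / (A * dp)
    """
    y = [t / V for t, V in zip(times, volumes)]
    n = len(volumes)
    mean_V = sum(volumes) / n
    mean_y = sum(y) / n
    slope = (sum(V * yi for V, yi in zip(volumes, y)) - n * mean_V * mean_y) \
        / (sum(V * V for V in volumes) - n * mean_V ** 2)
    intercept = mean_y - slope * mean_V
    alpha = slope * 2.0 * A ** 2 * dp / (mu * c)
    Rm = intercept * A * dp / mu
    return alpha, Rm

# Generate ideal data with assumed alpha = 1e11 m/kg and Rm = 1e10 1/m.
mu, c, A, dp = 1e-3, 10.0, 0.01, 1e5      # Pa.s, kg/m3, m2, Pa
a = mu * 1e11 * c / (2 * A ** 2 * dp)
b = mu * 1e10 / (A * dp)
volumes = [1e-4 * k for k in range(1, 6)]  # cumulative filtrate volumes, m3
times = [V * (a * V + b) for V in volumes]
alpha, Rm = cake_resistance_from_filtration(volumes, times, mu, c, A, dp)
print(alpha, Rm)  # recovers ~1e11 and ~1e10
```

Cake compressibility is then typically characterized by repeating such fits at several pressures and fitting alpha = alpha0 * dp**n.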