44 results for Frequency stability
Abstract:
The aim of this Master's thesis was to test the frequency of occurrence, the position in grammatical structure, and the functions of filled pauses (er and erm) as reported in Kjellmer's (2003) corpus study. As material I used the speech of five American politicians on the talk show Larry King Live. In my study I applied Kjellmer's research methods, which I adapted considerably to suit my much smaller material. My approach was thus inductive, unlike that of the study being tested. My material was deliberately limited, because I wanted to find out whether Kjellmer's findings, based on extensive material, also describe the use of filled pauses in a smaller data set. I transcribed my material (101 minutes in total) orthographically. In my analysis I estimated the frequency of filled pauses for each speaker and for the group as a whole by relating the number of filled pauses to the total word count. I then carried out a traditional grammatical analysis of the structures that are preceded by, or contain, a filled pause, and on the basis of their position I classified the filled pauses at the word, phrase, and clause levels. Finally, I analysed the use of filled pauses by applying the functions proposed by Kjellmer (hesitation, marking of turn-taking, attracting attention and establishing contact, emphasis, and repair) and their characteristics to my own material. According to my study, filled pauses occur relatively often in the speech of the five politicians examined. Differences between speakers were, however, considerable. According to my grammatical classification, the word, phrase, and clause levels do not fully describe the placement of filled pauses, since filled pauses also preceded, among other things, modifier clauses, which do not correspond to the clause level in English. The functional analysis of my material showed that filled pauses generally correspond to one or more of the functions proposed by Kjellmer. In addition, my study indicates that filled pauses have at least one structural function. Based on my analysis, Kjellmer's findings are thus largely applicable to a smaller material. The shortcomings of his study turned out to be the lack of contextual information important for functional analysis and the focus on filled pauses occurring only in certain grammatical structures. In general, on the basis of my study I can conclude that filled pauses are still insufficiently understood and that further research on their grammatical placement and functions is needed.
Abstract:
The ongoing global financial crisis has demonstrated the importance of a systemwide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that eventually may lead to a systemic financial crisis. Effective tools are crucial, as they allow early policy actions to decrease or prevent the further build-up of risks or to otherwise enhance the shock-absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to use the display as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise the subsequent steps addressed in this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
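To make the dimension-reduction idea behind the SOFSM concrete, the sketch below trains a toy Self-Organizing Map with plain NumPy and maps each observation to its best-matching unit on a 2-D grid. It is only a minimal illustration of the standard SOM algorithm; the grid size, the learning-rate and neighbourhood schedules, and the random placeholder data are assumptions of this sketch, not the SOFSM configuration of the thesis.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch: projects high-dimensional
# financial-stability indicators onto a 2-D grid of prototype vectors.
rng = np.random.default_rng(0)

n_rows, n_cols, n_dim = 6, 8, 5          # 6x8 map, 5 indicators per observation (assumed)
data = rng.normal(size=(400, n_dim))     # placeholder for standardized indicators
weights = rng.normal(size=(n_rows, n_cols, n_dim))

# Grid coordinates used to compute neighbourhood distances on the map.
grid = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij"), axis=-1)

n_epochs = 20
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)                 # decaying learning rate
    sigma = max(3.0 * (1.0 - epoch / n_epochs), 0.5)    # shrinking neighbourhood radius
    for x in data:
        # Best-matching unit (BMU): prototype closest to the input in data space.
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # Gaussian neighbourhood on the 2-D grid around the BMU.
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_dist2 / (2 * sigma**2))[..., None]
        # Pull prototypes towards the input, weighted by the neighbourhood.
        weights += lr * h * (x - weights)

# Each observation is now represented by its BMU cell on the low-dimensional display.
bmus = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                         (n_rows, n_cols)) for x in data]
print(bmus[:5])
```

In the SOFSM setting, the rows of `data` would correspond to standardized macro-financial indicators per entity and period, and the resulting map cells could be labelled (for example by crisis stage) so that the grid serves as the risk-identification display onto which further information is linked.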
Abstract:
In the present work, the bifurcation behaviour of the solutions of the Rayleigh equation and of the corresponding spatially distributed system is analysed. Conditions for oscillatory and monotonic loss of stability are obtained. In the case of oscillatory loss of stability, the associated linear spectral problem is analysed. For the nonlinear problem, recurrent formulas for the general term of the asymptotic approximation of the self-oscillations are derived, and the stability of the periodic mode is analysed. The Lyapunov-Schmidt method is used to construct the asymptotic approximation. The correspondence between the periodic solutions of the ODE and of the PDE is investigated, and the influence of diffusion on the frequency of the self-oscillations is analysed. Several numerical experiments are performed to support the theoretical findings.
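For reference, a commonly cited form of the Rayleigh equation referred to above is the self-oscillation equation below; the symbol \(\mu\) for the bifurcation parameter is a notational assumption of this note, and the spatially distributed system mentioned in the abstract adds a diffusion term to it.

\[
\ddot{x} \;-\; \mu\left(1 - \dot{x}^{2}\right)\dot{x} \;+\; x \;=\; 0 .
\]

Oscillatory loss of stability of the zero solution corresponds to a complex-conjugate pair of eigenvalues of the linearized problem crossing the imaginary axis as \(\mu\) passes through its critical value, which is the setting in which the Lyapunov-Schmidt construction of the periodic mode applies.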
Abstract:
Protein engineering aims to improve the properties of enzymes and affinity reagents by genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms in order to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen binding fragment (Fab) display on filamentous phages in the third article, and in the fourth study, novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (ScFv), in order to find the most suitable display platform for the library at hand. As a result of these studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed. The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared with traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The comparison of the display formats showed that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in yielding a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as ScFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to the practical know-how in directed evolution and contains useful information for scientists in the field to support their undertakings.
Abstract:
Pumping processes requiring a wide flow range are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable speed control allows the required process output to be delivered with a varying number of operated pump units and selected rotational speed references. However, optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information available for system modelling in typical parallel pumping applications, such as waste water treatment and various cooling and water delivery pumping tasks, can be limited, and the lack of real-time operation point monitoring often sets limits for accurate energy efficiency optimization. Hence, alternative, easily implementable control strategies that can be adopted with minimal system data are necessary. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable speed controlled parallel pumps in systems where each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the operating conditions suitable for variable speed controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable speed controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining, for each parallel pump unit, a speed reference that results in energy-efficient operation of the pumping system.
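As a rough illustration of characteristic curve-based operation point estimation, the sketch below scales a nominal QH curve with the pump affinity laws (Q ∝ n, H ∝ n²) and intersects it with a static-plus-quadratic system curve. The curve coefficients, nominal speed, and system parameters are illustrative assumptions of this sketch, not values from the thesis.

```python
import numpy as np

# Sketch of characteristic-curve-based operating point estimation for a
# variable speed pump: the nominal QH curve is scaled with the affinity laws
# and intersected with the system curve. All numeric values are assumptions.

n_nom = 1450.0                    # nominal rotational speed (rpm), assumed
a, c = 32.0, -0.01                # H_nom(Q) = a + c*Q^2  (H in m, Q in l/s), assumed
h_static, k_sys = 5.0, 0.005      # system curve H_sys(Q) = h_static + k_sys*Q^2, assumed

def pump_head(q, n):
    """Pump head (m) at flow q (l/s) and speed n (rpm), affinity-scaled:
    H(q, n) = (n/n_nom)^2 * H_nom(q * n_nom / n)."""
    r = n / n_nom
    return r**2 * (a + c * (q / r) ** 2)

def operating_point(n):
    """Numerically intersect the speed-scaled pump curve with the system curve."""
    q = np.linspace(0.01, 60.0, 6000)
    diff = pump_head(q, n) - (h_static + k_sys * q**2)
    i = np.argmin(np.abs(diff))
    return q[i], h_static + k_sys * q[i] ** 2

for n in (1450.0, 1200.0, 1000.0):
    q, h = operating_point(n)
    print(f"n = {n:6.0f} rpm -> Q ≈ {q:5.1f} l/s, H ≈ {h:5.1f} m")
```

In practice, a frequency converter can form a comparable estimate from its internal rotational speed and shaft power estimates together with the pump's published characteristic curves, which is what makes the flow-meterless control strategy described above feasible.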
Abstract:
In the doctoral dissertation, low-voltage direct current (LVDC) distribution system stability, supply security and power quality are evaluated by computational modelling and measurements on an LVDC research platform. Computational models for the LVDC network analysis are developed. Time-domain simulation models are implemented in the time-domain simulation environment PSCAD/EMTDC. The PSCAD/EMTDC models of the LVDC network are applied to the transient behaviour and power quality studies. The LVDC network power loss model is developed in a MATLAB environment and is capable of fast estimation of the network and component power losses. The model integrates analytical equations that describe the power loss mechanism of the network components with power flow calculations. For an LVDC network research platform, a monitoring and control software solution is developed. The solution is used to deliver measurement data for verification of the developed models and analysis of the modelling results. In the work, the power loss mechanism of the LVDC network components and its main dependencies are described. Energy loss distribution of the LVDC network components is presented. Power quality measurements and current spectra are provided and harmonic pollution on the DC network is analysed. The transient behaviour of the network is verified through time-domain simulations. DC capacitor guidelines for an LVDC power distribution network are introduced. The power loss analysis results show that one of the main optimisation targets for an LVDC power distribution network should be reduction of the no-load losses and efficiency improvement of converters at partial loads. Low-frequency spectra of the network voltages and currents are shown, and harmonic propagation is analysed. Power quality in the LVDC network point of common coupling (PCC) is discussed. Power quality standard requirements are shown to be met by the LVDC network. The network behaviour during transients is analysed by time-domain simulations. The network is shown to be transient stable during large-scale disturbances. Measurement results on the LVDC research platform proving this are presented in the work.
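The dissertation's power loss model combines analytical component loss equations with power flow calculations; the fragment below sketches one generic form such a per-converter loss equation often takes (a constant no-load term plus current-proportional and resistive terms). The coefficient values, the 10 kW rating, and the 750 V DC level are assumptions of this illustration, not the models or parameters of the work.

```python
# Generic converter loss model: constant no-load losses plus current-proportional
# and resistive (I^2) terms. The coefficients below are illustrative assumptions.

P0 = 150.0     # no-load losses (W): control, gate drive, magnetizing losses
k1 = 4.0       # current-proportional losses (V): switching, diode drops
k2 = 0.5       # resistive losses (ohm): conduction in switches, filters, cabling

def converter_losses(i_out: float) -> float:
    """Total converter losses (W) at output current i_out (A)."""
    return P0 + k1 * i_out + k2 * i_out**2

def efficiency(p_out: float, u_out: float) -> float:
    """Converter efficiency at output power p_out (W) and output voltage u_out (V)."""
    i_out = p_out / u_out
    return p_out / (p_out + converter_losses(i_out))

# Partial loads illustrate why no-load losses dominate at light load.
for load in (0.1, 0.25, 0.5, 1.0):
    p = load * 10_000.0                      # 10 kW nominal output, assumed
    print(f"{load:4.0%} load -> efficiency ≈ {efficiency(p, 750.0):.3f}")
```

Evaluated at partial loads, a model of this shape makes the point reported above visible: the constant no-load term dominates the losses at light load, which is why no-load loss reduction and partial-load converter efficiency are natural optimisation targets.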
Abstract:
Scanning optics create different types of phenomena and limitations in the cladding process compared to cladding with static optics. This work concentrates on identifying and explaining the special features of laser cladding with scanning optics. Scanner optics change the energy input mechanism of the cladding process: laser energy is introduced into the process through a relatively small laser spot which moves rapidly back and forth, distributing the energy over a relatively large area. The moving laser spot was observed to cause dynamic movement in the melt pool. Because of the different energy input mechanism, scanner optics can make the cladding process unstable if the parameters are not selected carefully. Especially the laser beam intensity and the scanning frequency have a significant role in process stability. The scanning frequency determines how long the laser beam interacts with a specific location, i.e. the local specific energy input. It was determined that if the scanning frequency is too low, under 40 Hz, the scanned beam can start to vaporize material. The intensity, in turn, determines in how large packages this energy is delivered, and if the intensity of the laser beam was too high, over 191 kW/cm2, the laser beam started to vaporize material. If vapor formation was observed in the melt pool, the process started to resemble laser alloying due to deep penetration of the laser beam into the substrate. Scanner optics enable more process flexibility than static optics. Numerical adjustment of the scanning amplitude enables adjustment of the clad bead width. In turn, scanner power modulation (where the laser power is adjusted according to where the scanner is pointing) enables modification of the clad bead cross-section geometry, because the laser power can be adjusted locally and thus affects how much material the laser beam melts in each sector. Power modulation is also an important factor in terms of process stability. When a linear scanner is used, oscillation of the scanning mirror causes a dwell time at the edges of the scanning amplitude, where the scanning mirror changes its direction of movement. This can cause excessive energy input in this area, which in turn can cause vaporization and process instability. This instability can be avoided by decreasing the energy in this region through power modulation. Powder feeding parameters also have a significant role in process stability. It was determined that with certain powder feeding parameter combinations the powder cloud behaviour became unstable because of vaporizing powder material in the powder cloud. This was noticed mainly when the scanning frequency or the powder feeding gas flow, or both, were low, or when a steep powder feeding angle was used. When powder material vaporized, it created a vapor flow that prevented the powder material from reaching the melt pool, and thus dilution increased. Powder material vaporization was also noticed to produce emission of light in the wavelength range of visible light, and the intensity of this emission was noticed to correlate with the amount of vaporization in the powder cloud.
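The two stability limits quoted above (vaporization below roughly 40 Hz scanning frequency and above roughly 191 kW/cm2 beam intensity) can be put into perspective with a back-of-the-envelope calculation of spot intensity and per-pass interaction time. The laser power, spot diameter, and scanning amplitude in the sketch are illustrative assumptions, not the parameters used in the experiments.

```python
import math

# Rough check against the two stability limits reported above:
# beam intensity (vaporization above ~191 kW/cm^2) and scanning frequency
# (vaporization below ~40 Hz). All numeric parameters are assumptions.

P = 4000.0            # laser power, W (assumed)
d_spot = 0.2          # laser spot diameter on the surface, cm (2 mm, assumed)
amplitude = 2.0       # scanning amplitude (track width), cm (assumed)
f_scan = 100.0        # scanning frequency, Hz (assumed)

# Average intensity in the spot, kW/cm^2.
spot_area = math.pi * (d_spot / 2) ** 2
intensity = P / spot_area / 1000.0

# The beam sweeps the full amplitude twice per period, so the local
# interaction time per pass is roughly (spot diameter / sweep speed).
sweep_speed = 2.0 * amplitude * f_scan          # cm/s
dwell_per_pass = d_spot / sweep_speed           # s

print(f"intensity         ≈ {intensity:7.1f} kW/cm^2 (vaporization limit ~191 kW/cm^2)")
print(f"dwell per pass    ≈ {dwell_per_pass*1e6:7.1f} µs at {f_scan:.0f} Hz")
print(f"local energy/pass ≈ {intensity * 1000 * dwell_per_pass:7.2f} J/cm^2")
```

Lowering the scanning frequency in such an estimate lengthens the dwell time and therefore the local energy delivered per pass, which is consistent with the observation above that too low a frequency drives the process towards vaporization.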
Abstract:
The purpose of this minor-subject thesis is to examine how Finnish and Italian students of English recognize the meanings of English idioms. In particular, the influence of the students' native language on idiom comprehension is examined, as well as the influence of different properties of the idioms. In addition, it is examined how the students themselves think they can use idioms and whether they consider learning idioms important. 35 Finnish university students of English and 34 Italian university students of English took part in the study. The data were collected with a multiple-choice questionnaire. The idioms were selected from the Collins Cobuild Dictionary of Idioms (2001). The questionnaire contained 36 idioms chosen from three frequency bands. From each frequency band, four idioms were chosen that have an equivalent in both Finnish and Italian, four that have an equivalent only in Finnish, and four that have an equivalent only in Italian. For each idiom, four alternative meanings were given, of which one or two were correct according to the dictionaries. The results seem to indicate that both the Finnish and the Italian students had difficulties in recognizing the meanings of the idioms. However, there was also a statistically significant difference between the Finns and the Italians: the Finns knew the idioms considerably better than the Italians. The participants understood idioms that have an equivalent in their native language significantly more easily than idioms that have no equivalent. In addition, the respondents also seemed to benefit from the closeness of the figurative and literal meanings of the idioms, that is, from their transparency. The frequency of the idioms, by contrast, did not seem to affect their comprehension. According to the Finnish and Italian students of English, studying idioms is useful and necessary. The results show that understanding idioms is challenging even for advanced learners. The native language appears to have a strong influence on idiom comprehension, and it is precisely similarity that is beneficial. More attention should be paid to the role of the native language in foreign language learning and vocabulary learning, and idioms and other figurative language should also be taught to more advanced learners.
Abstract:
Nowadays, global business trends force the adoption of innovative ICTs into supply chain management (SCM). In particular, RFID technology is in high demand among SCM professionals due to its business advantages, such as improving the accuracy and velocity of SCM processes, which leads to a decrease in operational costs. Nevertheless, the question of the RFID technology's impact on the efficiency of warehouse processes in the SCM remains open. The goal of the present study is to test whether RFID technology can improve order picking velocity in a warehouse of a big logistics company. In order to achieve this goal, the following objectives have been developed: 1) defining the scope of RFID technology applications in the SCM; 2) justification of the RFID technology's impact on SCM processes; 3) defining the place of the warehouse order picking process in the SCM; 4) identification and systematization of existing methods of order picking velocity improvement; 5) choosing the study object and gathering empirical data about the number of orders and the number of hours spent per order line daily during 5 months; 6) processing and analysis of the empirical data; 7) conclusions about the impact of RFID technology on the speed of the order picking process. As a result of the research, it has been found that the speed of the order picking process has not changed over time after the RFID adoption. It has been concluded that in order to achieve a positive effect on the speed of the order picking process with the use of RFID technology, it is necessary to simultaneously implement changes in logistics and organizational management in 3PL logistics companies. Practical recommendations have been forwarded to the management of the company for further investigation and action.
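A minimal sketch of the kind of before/after comparison described in objectives 5 and 6 is given below: daily order-line and working-hour records are converted into picking speed, and the periods before and after the RFID go-live are compared. The file name, column names, go-live date, and the choice of Welch's t-test are assumptions of this sketch, not details taken from the study.

```python
import pandas as pd
from scipy import stats

# Hypothetical daily picking log with columns: date, order_lines, hours.
df = pd.read_csv("picking_log.csv", parse_dates=["date"])   # hypothetical file
df["lines_per_hour"] = df["order_lines"] / df["hours"]

go_live = pd.Timestamp("2015-03-01")                         # hypothetical RFID go-live date
before = df.loc[df["date"] < go_live, "lines_per_hour"]
after = df.loc[df["date"] >= go_live, "lines_per_hour"]

# Welch's t-test: does mean picking speed differ before vs. after adoption?
t, p = stats.ttest_ind(after, before, equal_var=False)
print(f"mean before: {before.mean():.1f}, mean after: {after.mean():.1f} lines/hour")
print(f"Welch t = {t:.2f}, p = {p:.3f}")
```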
Abstract:
In recent years, there have been studies that show a correlation between the hyperactivity of children and the use of artificial food additives, including colorants. This has, in part, led to a preference for natural products over products with artificial additives. Consumers have also become more aware of health issues. Natural food colorants have many bioactive functions, mainly the vitamin A activity of carotenoids and antioxidativity, and therefore they could be more easily accepted by consumers. However, natural colorant compounds are usually unstable, which restricts their usage. Microencapsulation could be one way to enhance the stability of natural colorant compounds and thus enable their better usage as food colorants. Microencapsulation is a term used for processes in which the active material is totally enveloped in a coating or capsule, and thus it is separated and protected from the surrounding environment. In addition to protection by the capsule, microencapsulation can also be used to modify the solubility and other properties of the encapsulated material, for example, to incorporate fat-soluble compounds into aqueous matrices. The aim of this thesis work was to study the stability of two natural pigments, lutein (a carotenoid) and betanin (a betalain), and to determine possible ways to enhance their stability with different microencapsulation techniques. Another aim was the extraction of the pigments without the use of organic solvents and the development of previously used extraction methods. The stability of the pigments in microencapsulated pigment preparations, and in model foods containing these, was studied by measuring the pigment content after storage in different conditions. Preliminary studies on the bioavailability of the microencapsulated pigments and sensory evaluation of the consumer acceptance of model foods containing microencapsulated pigments were also carried out. Enzyme-assisted oil extraction was used to extract lutein from marigold (Tagetes erecta) flower without organic solvents, and the yield was comparable to solvent extraction of lutein from the same flowers. The effects of temperature, extraction time, and beet:water ratio on the extraction efficiency of betanin from red beet (Beta vulgaris) were studied, and the optimal conditions for maximum yield and maximum betanin concentration were determined. In both cases, extraction at 40 °C was better than extraction at 80 °C, and extraction for five minutes was as efficient as for 15 or 30 minutes. For maximum betanin yield, a beet:water ratio of 1:2 was better, with possibly repeated extraction, but for maximum betanin concentration, a ratio of 1:1 was better. Lutein was incorporated into oil-in-water (o/w) emulsions with a polar oil fraction from oat (Avena sativa) as an emulsifier and mixtures of guar gum and xanthan gum or locust bean gum and xanthan gum as stabilizers to retard creaming. The stability of lutein in these emulsions was quite good, with 77 to 91 percent of the lutein retained after storage in the dark at 20 to 22 °C for 10 weeks, whereas in spray-dried emulsions the retention of lutein was 67 to 75 percent. The retention of lutein in oil was also good, at 85 percent. Betanin was incorporated into the inner w1 water phase of a water1-in-oil-in-water2 (w1/o/w2) double emulsion with a primary w1/o emulsion droplet size of 0.34 μm, a secondary w1/o/w2 emulsion droplet size of 5.5 μm, and an encapsulation efficiency of betanin of 89 percent.
In vitro intestinal lipid digestion was performed on the double emulsion, and during the first two hours, coalescence of the inner water phase droplets was observed, and the sizes of the double emulsion droplets increased quickly because of aggregation. This period also corresponded to a gradual release of betanin, with a final release of 35 percent. The double emulsion structure was retained throughout the three-hour experiment. Betanin was also spray dried and incorporated into model juices with different pH and dry matter content. The model juices were stored in the dark at -20, 4, 20–24 or 60 °C (accelerated test) for several months. Betanin degraded quite rapidly in all of the samples, and a higher temperature and a lower pH accelerated the degradation. The stability of betanin was much better in the spray-dried powder, with practically no degradation during six months of storage in the dark at 20 to 24 °C and good stability also for six months in the dark at 60 °C, with 60 percent retention. Consumer acceptance of model juices colored with spray-dried betanin was compared with that of similar model juices colored with anthocyanins or beet extract. Consumers preferred the beet extract and anthocyanin colored model juices over the juices colored with spray-dried betanin. However, spray-dried betanin did not impart any off-odors or off-flavors into the model juices, contrary to the beet extract. In conclusion, this thesis describes novel solvent-free extraction and encapsulation processes for lutein and betanin from plant sources. Lutein showed good stability in oil and in o/w emulsions, but slightly inferior stability in spray-dried emulsions. In vitro intestinal lipid digestion showed a good stability of the w1/o/w2 double emulsion and quite high retention of betanin during digestion. Consumer acceptance of model juices colored with spray-dried betanin was not as good as that of model juices colored with anthocyanins, but addition of betanin to a real berry juice, where the added betanin would mix with natural berry anthocyanins, could produce a more acceptable color. Overall, further studies are needed to obtain natural colorants with good stability for use in food products.
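For clarity, the encapsulation efficiency quoted for betanin above is normally computed along the lines of the following commonly used definition (the exact analytical procedure of the thesis may differ):

\[
\mathrm{EE}\,(\%) \;=\; \frac{m_{\text{total}} - m_{\text{free}}}{m_{\text{total}}} \times 100,
\]

where \(m_{\text{total}}\) is the total amount of pigment used and \(m_{\text{free}}\) the non-encapsulated pigment recovered from the outer phase.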
Abstract:
The most common reason for a low-voltage induction motor breakdown is a bearing failure. Along with the increasing popularity of modern frequency converters, bearing failures have become the most important motor fault type. Conditions in which bearing currents are likely to occur are generated as a side effect of fast du/dt switching transients. Once present, different types of bearing currents can accelerate the mechanical wear of bearings by causing deformation of metal parts in the bearing and degradation of the lubricating oil properties. The bearing current phenomena are well known, and several bearing current measurement and mitigation methods have been proposed. Nevertheless, in order to develop more feasible methods to measure and mitigate bearing currents, better knowledge of the phenomena is required. When mechanical wear is caused by bearing currents, the resulting aging impact has to be monitored and dealt with. Moreover, because of the stepwise aging mechanism, periodically executed condition monitoring measurements have been found ineffective. Thus, there is a need for feasible bearing current measurement methods that can be applied in parallel with the normal operation of series-production drive systems. In order to reach the objectives of feasibility and applicability, nonintrusive measurement methods are preferred. In this doctoral dissertation, the characteristics and conditions of bearings that are related to the occurrence of different kinds of bearing currents are studied. Further, the study introduces some nonintrusive radio-frequency-signal-based approaches to detect and measure parameters that are associated with the accelerated bearing wear caused by bearing currents.
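For orientation, the bearing-current literature commonly relates the converter's common-mode voltage to the voltage appearing over the bearing through the capacitive voltage divider formed by the machine's parasitic capacitances. The bearing voltage ratio below uses the usual symbols and is quoted here as general background, not as a result of this dissertation:

\[
\mathrm{BVR} \;=\; \frac{u_{\mathrm{b}}}{u_{\mathrm{com}}} \;\approx\; \frac{C_{\mathrm{wr}}}{C_{\mathrm{wr}} + C_{\mathrm{rf}} + C_{\mathrm{b}}},
\]

where \(C_{\mathrm{wr}}\) is the winding-to-rotor, \(C_{\mathrm{rf}}\) the rotor-to-frame, and \(C_{\mathrm{b}}\) the bearing capacitance.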
Abstract:
A high-frequency cycloconverter acts as a direct ac-to-ac power converter circuit that does not require a diode bridge rectifier. The bridgeless topology makes it possible to remove the forward voltage drop losses that are present in a diode bridge. In addition, the on-state losses can be reduced to 1.5 times the on-state resistance of the switches in half-bridge operation of the cycloconverter. A high-frequency cycloconverter is reviewed, and the charging effect of the dc-capacitors in "back-to-back" or synchronous mode operation is analyzed. In addition, a control method is introduced for regulating the dc-voltage of the ac-side capacitors in the synchronous operation mode. The controller regulates the dc-capacitors and prevents the switches from reaching the overvoltage level. This is accomplished by varying the phase shift between the upper and the lower gate signals. By adding phase shift between the gate signal pairs, the charge stored in the energy storage capacitors can be discharged through the resonant load, and the output resonant current amplitude can be substantially improved. These goals are analyzed and illustrated with simulations. The theory is supported with practical measurements in which the proposed control method is implemented in an FPGA device and tested with a high-frequency cycloconverter using super-junction power MOSFETs as switching devices.
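The factor of 1.5 times the on-state resistance mentioned above can be read as a conduction-loss estimate of the following form; this rewriting is an interpretation for illustration, not a formula quoted from the work:

\[
P_{\text{cond}} \;\approx\; 1.5\, R_{\mathrm{DS(on)}}\, I_{\mathrm{rms}}^{2}.
\]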
Abstract:
Almost every problem of design, planning, and management in technical and organizational systems has several conflicting goals or interests. Nowadays, multicriteria decision models represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty due to lack of data, inadequacy of mathematical models to real-time processes, calculation errors, etc. In practice, this uncertainty usually leads to undesirable outcomes where the solutions are very sensitive to any changes in the input parameters. Investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the found solutions behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches: qualitative and quantitative. The qualitative approach describes the behaviour of solutions under small perturbations of the initial data. The stability of solutions is defined in terms of the existence of a neighbourhood in the initial data space; any perturbed problem from this neighbourhood is stable with respect to the set of efficient solutions of the initial problem. The other approach to stability analysis studies quantitative measures such as the stability radius. This approach gives information about the limits of perturbations of the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis, several results were obtained, including attainable bounds for the stability radii of Pareto-optimal and lexicographically optimal portfolios of the investment problem under Savage's criterion, Wald's criterion, and the criterion of extreme optimism. In addition, special classes of the problem for which the stability radii are expressed by closed-form formulae were identified. The investigations were carried out using different combinations of the Chebyshev, Manhattan, and Hölder metrics, which allowed the perturbations of the input parameters to be measured in different ways.
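A typical formal definition of the stability radius used in this line of work is sketched below; the notation is generic (C the matrix of input parameters, P(C) the set of efficient portfolios, and the norm one of the Chebyshev, Manhattan, or Hölder metrics mentioned above), and the exact formulation in the thesis may differ:

\[
\rho \;=\; \sup\bigl\{\varepsilon \ge 0 \;:\; P(C + B) \subseteq P(C)\ \text{for every perturbation } B\ \text{with } \lVert B \rVert < \varepsilon \bigr\}.
\]

In words, the stability radius is the largest level of perturbation of the input data for which no new efficient portfolios appear.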