18 results for adiabatic invariant
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle for effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented, in which distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform using parallel processing and local information are introduced.
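As a concrete illustration of the global features discussed above, the classical line Hough transform can be sketched in a few lines of NumPy. This is a minimal, hypothetical example of the standard accumulator voting scheme, not the modified, locally informed technique of the thesis:

```python
import numpy as np

def hough_lines(img, n_theta=180):
    """Vote for lines rho = x*cos(t) + y*sin(t) in a binary image."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(img)
    for x, y in zip(xs, ys):                     # each edge point votes once per theta
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A horizontal line y = 5 should produce a peak near theta = 90 degrees, rho = 5.
img = np.zeros((20, 20), dtype=bool)
img[5, :] = True
acc, thetas, diag = hough_lines(img)
rho_i, th_i = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak recovers the line parameters; robustness of exactly this voting step is what the distortion-tolerance modifications target.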
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then extends the program incrementally, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer uses the tool to reduce a large verification problem into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
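The invariant-first workflow can be illustrated with a small sketch in which the proof obligations are approximated by run-time assertions. In Socos such conditions are discharged by a theorem prover; the code below is only an analogy in plain Python:

```python
def sum_to(n: int) -> int:
    """Sum 0..n with the loop invariant made explicit.

    Invariant-based development states the invariant first; here the
    proof obligations (invariant initialization, maintenance, and the
    postcondition) are merely checked at run time with assertions.
    """
    assert n >= 0                       # precondition
    i, s = 0, 0
    while i < n:
        assert s == i * (i + 1) // 2    # invariant holds on loop entry
        i += 1
        s += i
        assert s == i * (i + 1) // 2    # ...and is maintained by the body
    assert s == n * (n + 1) // 2        # postcondition follows
    return s
```

Each assertion corresponds to a verification condition that a tool like Socos would generate and attempt to prove once, instead of checking on every run.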
Abstract:
The share of variable renewable energy in electricity generation has grown exponentially during recent decades, and with the heightened pursuit of environmental targets the trend is set to continue at an increased pace. The two most important resources, wind and insolation, both suffer from intermittency, creating a need for regulation and posing a threat to grid stability. One way to deal with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found insufficient for studying transient situations because of their simplifications, and despite its importance, part-load operation had not previously been investigated with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with the literature and was validated through analytical calculations. The part-load performance was validated against models shown in the literature, showing good correlation. By introducing wind resource and electricity demand data to the model, grid operation of CAES was studied. To enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load, and the implementation of heat losses in the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is an important development target if system operation is to be optimised in the future.
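The cycle-efficiency figure can be put in context with a heavily idealized single-stage estimate. All numbers below are assumptions chosen for illustration; the Apros model of the thesis resolves dynamics, part load and VGV behaviour that this sketch ignores:

```python
# Idealized adiabatic CAES round trip (assumed numbers, single stage).
GAMMA = 1.4
K = (GAMMA - 1) / GAMMA

def caes_efficiency(r, t_amb=288.0, eta_c=0.85, eta_t=0.88, eps_tes=0.90):
    """Round-trip efficiency of an idealized adiabatic CAES cycle.

    r        : charge/discharge pressure ratio
    eta_c/t  : isentropic efficiencies of compressor and turbine
    eps_tes  : effectiveness of the thermal energy storage
    Throttling, TES heat leak and motor/generator losses are ignored,
    so this is an optimistic upper bound, not the 58.7% design point.
    """
    dT_s = t_amb * (r**K - 1)          # isentropic temperature rise
    dT = dT_s / eta_c                  # actual rise = compressor work / cp
    t3 = t_amb + eps_tes * dT          # turbine inlet after the heat store
    w_out = eta_t * t3 * (1 - r**-K)   # turbine work / cp
    return w_out / dT                  # work out / work in

eta = caes_efficiency(r=25.0)
```

Degrading the heat-store effectiveness in this sketch immediately lowers the round-trip efficiency, which is why TES heat losses matter for longer simulations.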
Abstract:
Abstract
Abstract:
This work studied the adhesion of a filtered pulp cake to a steel surface. Typical washing-process values were used as variables. In addition, the behaviour of the compressed-air pulse used to detach the pulp cake in an industrial washing device was studied. The literature part examined the formation of a filtered pulp cake, adhesion, and the factors affecting adhesion from the viewpoint of cake detachment. The flow of the compressed air used in cake detachment was also treated as an adiabatic flow. In the detachment experiments of the experimental part, the adhesion forces of the filtered pulp cake on the walls of a steel cone and on a perforated plate were studied. The experiments examined the effect of typical process variables, such as temperature, pH and dry solids content, on the force required to detach the cake. The effects of the cone angle and the surface finish were also studied. Based on the adhesion experiments, the limiting pressure required to detach the pulp cake in the filtrate channel of a compartment of an industrial washing device could be calculated. In the compressed-air experiments of the experimental part, the pressure effect and flow of the compressed air used for detachment in a compartment of an industrial washing device were studied, together with the influence of structural and process parameters on them. The study was also complemented by video imaging, following the propagation of the pressure wave with the aid of a tracer.
Abstract:
Electrocaloric cooling, based on the ability of a material to change temperature when an electric field is applied under adiabatic conditions, is a relatively new and challenging direction in ferroelectrics research. In this work we report analytical, simulation and experimental data for BaSrTiO3 thin-film and bulk ceramic samples. A detailed discussion of the theoretical basis of the electrocaloric effect is included. The experimental and computational results demonstrated here exemplify a rational approach to the problem of solid-state cooler construction.
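The standard indirect route to the electrocaloric temperature change, dT = -(T / (rho c)) * Int (dP/dT)_E dE, can be sketched numerically. The material constants and the pyroelectric-coefficient curve below are assumed, order-of-magnitude values for a BST-like film, not data from the work:

```python
import numpy as np

# Indirect estimate of the adiabatic electrocaloric temperature change.
# ALL material numbers here are assumed, illustrative values.
T = 300.0                                   # operating temperature, K
rho = 5.6e3                                 # density, kg/m^3
c_E = 450.0                                 # specific heat, J/(kg K)
E = np.linspace(0.0, 30e6, 301)             # field sweep 0..30 MV/m
dP_dT = -4e-4 * np.exp(-E / 20e6)           # pyroelectric coeff., C/(m^2 K)
# trapezoidal integral of (dP/dT) over the field sweep
integral = np.sum(0.5 * (dP_dT[1:] + dP_dT[:-1]) * np.diff(E))
delta_T = -(T / (rho * c_E)) * integral     # adiabatic temperature change, K
```

With these assumed numbers the estimate comes out on the order of 1 K, the magnitude of effect that motivates thin-film electrocaloric coolers.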
Abstract:
The dynamic behavior of both isothermal and non-isothermal single-column chromatographic reactors with an ion-exchange resin as the stationary phase was investigated. The reactor performance was interpreted by using results obtained when studying the effect of the resin properties on the equilibrium and kinetic phenomena occurring simultaneously in the reactor. Mathematical models were derived for each phenomenon and combined to simulate the chromatographic reactor. The phenomena studied include phase equilibria in multicomponent liquid mixture–ion-exchange resin systems, chemical equilibrium in the presence of a resin catalyst, diffusion of liquids in gel-type and macroporous resins, and chemical reaction kinetics. Above all, attention was paid to the swelling behavior of the resins and how it affects the kinetic phenomena. Several poly(styrene-co-divinylbenzene) resins with different cross-link densities and internal porosities were used. Esterification of acetic acid with ethanol to produce ethyl acetate and water was used as a model reaction system. Choosing an ion-exchange resin with a low cross-link density is beneficial in the case of the present reaction system: the amount of ethyl acetate as well as the ethyl acetate to water mole ratio in the effluent stream increase with decreasing cross-link density. The enhanced performance of the reactor is mainly attributed to the increasing reaction rate, which in turn originates from the phase equilibrium behavior of the system. Mass transfer considerations also favor the use of resins with low cross-link density. The diffusion coefficients of liquids in the gel-type ion-exchange resins were found to fall rapidly when the extent of swelling became low. Glass transition of the polymer was not found to significantly retard the diffusion in sulfonated PS–DVB ion-exchange resins. It was also shown that non-isothermal operation of a chromatographic reactor can be used to significantly enhance the reactor performance.
In the case of the exothermic model reaction system and a near-adiabatic column, a positive thermal wave (higher temperature than in the initial state) was found to travel together with the reactive front. This further increased the conversion of the reactants. Diffusion-induced volume changes of the ion-exchange resins were studied in a flow-through cell. It was shown that describing the swelling and shrinking kinetics of the particles calls for a mass transfer model that explicitly includes the limited expansibility of the polymer network. A good description of the process was obtained by combining the generalized Maxwell-Stefan approach and an activity model derived from the thermodynamics of polymer solutions and gels. The swelling pressure in the resin phase was evaluated by using a non-Gaussian expression for the polymer chain length distribution. Dimensional changes of the resin particles necessitate the use of non-standard mathematical tools for dynamic simulations. A transformed coordinate system, in which the mass of the polymer was used as the spatial variable, was applied when simulating the chromatographic reactor columns as well as the swelling and shrinking kinetics of the resin particles. Shrinking of the particles in a column leads to the formation of dead volume on top of the resin bed. In ordinary Eulerian coordinates, this results in a moving discontinuity that in turn causes numerical difficulties in the solution of the PDE system. The motion of the discontinuity was eliminated by spanning two calculation grids in the column that overlapped at the top of the resin bed. The reactive and non-reactive phase equilibrium data were correlated with a model derived from the thermodynamics of polymer solutions and gels. The thermodynamic approach used in this work is best suited to high degrees of swelling, because the polymer matrix may be in the glassy state when the extent of swelling is low.
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favorable temperature and composition distributions, unattainable in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the SCR reaction (usually carried out in the range 280-350°C) is not enough to sustain the chemical reaction by itself. Normal operation therefore usually requires a supply of supplementary heat, which increases the overall cost of the process. The main advantage obtainable through forced unsteady-state operation of exothermic reactions is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation exploits the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour even when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady-state reactor - meets the above characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Its main disadvantage, beside its advantages, is the 'wash out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration that is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated.
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the feeding position of the reactants. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation while eliminating the 'wash out' phenomenon. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentrations and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times for which an ignited state can be reached and maintained. Even so, a proper study of the complex behavior of the RN may provide the information needed to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern improving the contact modalities between reactants, the possibility of thermal wave storage inside the reactor, and the kinetic activity of the catalyst used. Attention to these aspects is important when high activity even at low feed temperatures and low emissions of unconverted reactants are the main operational concerns.
Also, the prediction of the pseudo-steady or steady-state performance of the reactor (conversion, selectivity and thermal behavior) and of the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on overall process performance and a precise evaluation of the range of operating parameters for which sustained dynamic behavior is obtained. An a priori estimation of the system parameters reduces the computational effort: the convergence of unsteady-state reactor systems usually requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behavior in the case of low exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the general problem of the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found most suitable for the process. In order to gain information quickly and easily about forced unsteady-state reactor design and operation, the important system parameters and their values, the mathematical description, the mathematical methods for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach was used.
This approach, using the experience of past similar problems and their adapted solutions, may provide a method for obtaining information and solutions for new problems in forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and RN as recuperative and regenerative devices and of achieving sustained auto-thermal behavior in the case of the low exothermic SCR reaction of NOx with ammonia and low-temperature gas feeding.
Besides the influence of the thermal effect, the influence of the principal operating parameters - switching time, inlet flow rate and initial catalyst temperature - was stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the process constraints to be fulfilled. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for the SCR of NOx with ammonia, both in usual operation and from the perspective of control strategy implementation. Simplified theoretical models have also been proposed to describe the performance of the forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that have not been analyzed yet. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
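The heat-trapping idea behind the RFR can be caricatured with a one-dimensional advection toy model: without flow reversal the thermal wave leaves the bed, while periodic reversal keeps it inside. This is a deliberately crude sketch with assumed parameters; the models in the thesis couple gas and solid energy balances, kinetics and ammonia adsorption:

```python
import numpy as np

# Toy 1D picture of heat-wave trapping in a reverse flow reactor (RFR):
# a hot zone is advected through the bed by the gas flow; reversing the
# flow periodically keeps the stored heat inside, while steady flow
# washes it out.  No kinetics or gas/solid coupling is modeled.
def run_bed(n=100, steps=2000, cfl=0.5, switch_every=None):
    x = np.arange(n)
    T = np.exp(-((x - n / 2) ** 2) / 50.0)   # initial hot zone mid-bed
    direction = 1
    for step in range(steps):
        if switch_every and step > 0 and step % switch_every == 0:
            direction = -direction            # reverse the flow
        if direction > 0:                     # first-order upwind advection
            T[1:] -= cfl * (T[1:] - T[:-1])
            T[0] = 0.0                        # cold feed at the inlet
        else:
            T[:-1] -= cfl * (T[:-1] - T[1:])
            T[-1] = 0.0
    return T

heat_kept = run_bed(switch_every=40).max()    # periodic flow reversal
heat_lost = run_bed(switch_every=None).max()  # steady flow: wave exits
```

With reversal the temperature peak survives inside the bed; with steady flow it is advected out, which is the qualitative point of RFR operation (the switching time plays exactly the role the abstract assigns to it).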
Abstract:
In recent years, evolutionary algorithms have proven to be effective methods for solving global optimization problems. Their particular strengths are general applicability and the ability to find a global solution without getting stuck in local optima of the objective function. The goal of this work is to develop a new mutation operation, based on the normal distribution, for the differential evolution algorithm, one of the newest evolution-based optimization algorithms. The new method is expected to further reduce both the risk of premature convergence of the population and the risk of the algorithm's states getting stuck, and it can be theoretically shown to converge. This does not hold for the original differential evolution, since its state transitions have been shown to get stuck with a small probability. In this work the behaviour of the new method is examined experimentally using multi-constraint problems as test cases. The constraint functions are handled with a method developed by Jouni Lampinen, based on the principle of Pareto optimality; this also yields further experimental evidence on the behaviour of that method. All the test problems used could be solved both with the original differential evolution and with the version using the new mutation operation. The new method, however, proved more reliable in cases where the original algorithm had difficulties. In addition, most problems could be solved reliably with a smaller population size than with the original differential evolution. The new method also better supports the use of control parameter values that make the search rotation invariant. Computationally, the new method is slightly heavier than the original differential evolution, and it requires one additional control parameter.
However, values as generally applicable as possible were determined for the new control parameters, with which a large variety of problems can be solved.
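For reference, the baseline that such a mutation operation modifies, classic DE/rand/1/bin, can be written compactly as follows. This is the standard algorithm, not the normal-distribution mutation variant developed in the thesis:

```python
import numpy as np

def de_rand_1_bin(f, lo, hi, pop_size=20, f_weight=0.7, cr=0.9, gens=200, seed=1):
    """Classic differential evolution, DE/rand/1/bin.

    The thesis replaces the difference-vector mutation below with a
    normally distributed mutation; that variant is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    d = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: scaled difference of two random members added to a third
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = pop[r1] + f_weight * (pop[r2] - pop[r3])
            # binomial crossover, forcing at least one mutant component
            mask = rng.random(d) < cr
            mask[rng.integers(d)] = True
            trial = np.clip(np.where(mask, mutant, pop[i]), lo, hi)
            # greedy one-to-one selection
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    i_best = int(fit.argmin())
    return pop[i_best], float(fit[i_best])

sphere = lambda x: float(np.dot(x, x))
x_best, f_best = de_rand_1_bin(sphere, np.full(5, -5.0), np.full(5, 5.0))
```

The difference-vector mutation is what makes the classic scheme sensitive to control parameters; replacing it with a distribution-based sampler is the kind of change the thesis studies.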
Abstract:
Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied: to each local pattern of cell states a real value is associated, interpreted as the "energy" (or "mass", or ...) of that pattern. The overall "energy" of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often the energy values are non-negative integers, interpreted as the number of "particles" distributed over a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation law by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with the radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA.
We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian, and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of the Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
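A concrete instance of such a conservation law is the elementary CA rule 184 ("traffic"), which conserves the number of 1-cells; each conserved particle has the microscopic explanation of a car moving one cell to the right into an empty cell. The check below verifies the invariant numerically on a periodic configuration:

```python
import numpy as np

# Elementary CA rule 184 ("traffic") is number-conserving: the count of
# 1-cells (particles) is invariant under the dynamics.
def step_rule184(c):
    left, right = np.roll(c, 1), np.roll(c, -1)
    # a cell is 1 after the step iff a car stays (it is 1 and the cell to
    # its right is occupied) or a car arrives (left neighbour 1, cell 0)
    return (c & right) | (left & (1 - c))

rng = np.random.default_rng(0)
c = rng.integers(0, 2, size=64)          # random periodic configuration
energies = [int(c.sum())]
for _ in range(50):
    c = step_rule184(c)
    energies.append(int(c.sum()))        # total "energy" after each step
```

The list `energies` contains a single repeated value, i.e. the particle count is exactly conserved over the whole evolution.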
Abstract:
This Master's thesis deals with developing the production planning and control of an industrial company in small-batch production. The subject is the Wind Power Generators profit unit of ABB Oy, which manufactures standard products on a customer-order-driven basis. The thesis first presents the theory of production and production control, covering basics such as definitions, goals and tasks as well as the production control process and production control information technology. The empirical part following the theory presents the means developed in this work for improving production control. The study was carried out through theoretical and empirical research. The theoretical research involved studying Finnish and foreign literature sources. The empirical research was carried out through independent problem solving, which included analysing the development targets, specifying more detailed development needs, and development work through experiments. The main goal of the study was to find out how developing production control can improve the productivity and profitability of the profit unit in question. Based on the main goal, six sub-goals were formed: improving delivery reliability, raising the capacity utilisation rate, developing capacity planning, shortening throughput times, specifying the requirements for a new ERP system, and defining the production control process. For the first four sub-goals, software applications were built that enable the planning and control of those sub-goals. For the applications, for example, the operation sequences with their throughput times, the load groups and their capacities, the loads imposed by the products, and the critical tools were defined for each product. The work showed that information technology is of great help in production control. Increased transparency, improved information flow, simulation possibilities and graphical presentation ease the preparation of various plans and thus improve the quality of decision making.
The foundation for exploiting information technology is the disciplined updating of the basic and transaction data of production. For this reason, information systems should be built as simple as possible.
Abstract:
In this thesis the interaction of an electromagnetic field with matter is studied from various aspects in the general framework of cold atoms. Our subjects cover a wide spectrum of phenomena, ranging from semiclassical few-level models to fully quantum mechanical interaction with structured reservoirs leading to non-Markovian open quantum system dynamics. Within closed quantum systems, we propose a selective method to manipulate the motional state of atoms in a time-dependent double-well potential and interpret the method in terms of adiabatic processes. Also, we derive a simple wave-packet model, based on distributions of generalized eigenstates, explaining the finite visibility of interference in overlapping continuous-wave atom lasers. In the context of open quantum systems, we develop an unraveling of non-Markovian dynamics in terms of piecewise deterministic quantum jump processes confined to the Hilbert space of the reduced system - the non-Markovian quantum jump method. As examples, we apply it to simple 2- and 3-level systems interacting with a structured reservoir. Also, in the context of ion-cavity QED, we study entanglement generation based on collective Dicke modes under experimentally realistic conditions, including photonic losses and atomic spontaneous decay.
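In the Markovian limit, the quantum jump unraveling that the non-Markovian method generalizes reduces, for a freely decaying two-level atom, to a simple stochastic jump process whose ensemble average reproduces exponential decay. The sketch below illustrates only that Markovian limit, not the NMQJ method itself:

```python
import numpy as np

# Monte Carlo wave function / quantum jump unraveling, Markovian limit:
# a two-level atom starts in |e> and decays at rate gamma.  Each
# trajectory is excited until a random quantum jump |e> -> |g> occurs;
# averaging over trajectories recovers P_e(t) = exp(-gamma t).
def excited_population(gamma=1.0, dt=0.002, t_max=2.0, n_traj=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_max / dt))
    pops = np.zeros(n_steps + 1)
    for _ in range(n_traj):
        excited = True
        for k in range(n_steps + 1):
            pops[k] += excited
            if excited and rng.random() < gamma * dt:
                excited = False          # quantum jump to the ground state
    return pops / n_traj

p = excited_population()   # p[k] approximates exp(-gamma * k * dt)
```

In the non-Markovian generalization developed in the thesis, jump rates can become temporarily negative and jumps can be reversed; none of that structure appears in this Markovian toy.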
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. 
The aim of using it in instruction is to address algorithms and their implementation in a way that puts the focus on learning algorithmic thinking and programming rather than on mastering a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the mathematical foundations underlying programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
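As a small illustration of the style these ideas suggest (our own example, not taken from the thesis), a loop invariant can be written as an executable Python assertion rather than left implicit:

```python
def int_sqrt(n: int) -> int:
    """Largest r with r*r <= n, developed around an explicit invariant."""
    assert n >= 0  # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n  # loop invariant: checked, not merely claimed
        r += 1
    # postcondition follows from the invariant and the exit condition
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```

Python's simple syntax keeps the logic visible: the invariant and postcondition read almost exactly as they would in a predicate-logic annotation.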
Resumo:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Among the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players in which the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems.
The key idea of the proposed approach is to use a parameterized achievement scalarizing function for solution calculation and to steer the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models and ideas collected and analyzed in this thesis create a sound and relevant foundation for developing more sophisticated, integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
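A standard weighted Chebyshev achievement scalarizing function (in the Wierzbicki style) conveys the mechanism described above; this is a generic sketch, not the thesis's specific parameterization, and all names are ours:

```python
def achievement(objectives, reference, weights, rho=1e-6):
    """Augmented weighted Chebyshev achievement scalarizing function.
    For minimized objectives, smaller ASF values are better.
    objectives, reference, weights are equal-length sequences."""
    terms = [w * (f - q) for f, q, w in zip(objectives, reference, weights)]
    # the small augmentation term rules out weakly efficient solutions
    return max(terms) + rho * sum(terms)

def pick_solution(candidates, reference, weights):
    """Choose the candidate objective vector minimizing the ASF.
    Re-running with new weights is what steers the interaction."""
    return min(candidates, key=lambda f: achievement(f, reference, weights))
```

Changing the weights between iterations shifts which trade-off the decision maker obtains: with equal weights a balanced vector wins, while down-weighting one objective tolerates a worse value on it.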
Resumo:
Both atom localization and Raman cooling, considered in this thesis, reflect recent progress in the area of all-optical methods. We focus on the two-dimensional (2D) case, using a four-level tripod-type atomic scheme both for atom localization within the optical half-wavelength and for efficient subrecoil Raman cooling. In the first part, we discuss the principles of 1D atom localization, accompanied by an example based on the measurement of a spontaneously emitted photon. Modifying this example, one achieves sub-wavelength localization of a three-level Λ-type atom by measuring the population in its upper state. We go further and obtain 2D sub-wavelength localization for a four-level tripod-type atom. The spatial distribution of the upper-state population forms structures such as spikes, craters and waves. The second part of the thesis is devoted to Raman cooling. The cooling process is controlled by a sequence of velocity-selective transfers from one ground state to another. So far, 1D deep subrecoil cooling has been carried out with sequences of square or Blackman pulses applied to Λ-type atoms. We discuss instead the transfer of atoms by stimulated Raman adiabatic passage (STIRAP), which provides robustness against variations in pulse duration, provided the cooling time does not play a critical role. A tripod-type atomic scheme is used for 2D Raman cooling, allowing one to increase the efficiency and simplify the realization of the cooling.
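The STIRAP transfer mentioned above can be sketched numerically for a resonant three-level Λ system; this is our own minimal integration with illustrative parameter values, not code or parameters from the thesis:

```python
import math

def stirap_final_populations(omega0=20.0, width=1.0, delay=1.4,
                             t0=-4.0, t1=4.0, dt=0.001):
    """RK4 integration of the resonant three-level STIRAP Schrodinger
    equation (rotating-wave approximation, hbar = 1). Counterintuitive
    pulse order: the Stokes pulse precedes the pump pulse, so the dark
    state rotates from |1> to |3>."""
    def omega_p(t):  # pump couples |1>-|2>, peaks late
        return omega0 * math.exp(-((t - delay / 2) / width) ** 2)
    def omega_s(t):  # Stokes couples |2>-|3>, peaks early
        return omega0 * math.exp(-((t + delay / 2) / width) ** 2)
    def deriv(t, c):  # dc/dt = -i H(t) c with H the RWA coupling matrix
        wp, ws = omega_p(t), omega_s(t)
        return [-0.5j * wp * c[1],
                -0.5j * (wp * c[0] + ws * c[2]),
                -0.5j * ws * c[1]]
    c = [1.0 + 0j, 0j, 0j]  # start in ground state |1>
    t = t0
    for _ in range(int(round((t1 - t0) / dt))):
        k1 = deriv(t, c)
        k2 = deriv(t + dt / 2, [ci + dt / 2 * ki for ci, ki in zip(c, k1)])
        k3 = deriv(t + dt / 2, [ci + dt / 2 * ki for ci, ki in zip(c, k2)])
        k4 = deriv(t + dt, [ci + dt * ki for ci, ki in zip(c, k3)])
        c = [ci + dt / 6 * (a + 2 * b + 2 * d + e)
             for ci, a, b, d, e in zip(c, k1, k2, k3, k4)]
        t += dt
    return [abs(ci) ** 2 for ci in c]
```

With a sufficiently adiabatic pulse area (here omega0 * width = 20), nearly all population ends in |3> while the lossy intermediate state |2> stays almost unpopulated, which is the robustness STIRAP offers over fixed-area pulses.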