981 results for DYNAMIC PROGRAMMING


Relevance: 20.00%

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers.

Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs, and it is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant.

The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer uses the tool to reduce a large verification problem into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm.

Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
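The verified sorting algorithm mentioned above is developed from its invariants inside Socos and PVS; as a rough, hypothetical illustration of the underlying idea only, the sketch below states the invariant of a selection sort explicitly and checks it with run-time assertions, whereas in invariant-based programming the corresponding verification conditions are discharged as proofs rather than tests.

```python
# Minimal illustration (not the Socos/PVS workflow itself) of the idea behind
# invariant-based programming: the invariant is stated first and the code is
# extended in steps that must preserve it. Here the invariant of a selection
# sort is merely checked at run time with assertions.

def is_sorted(a, hi):
    """True if the prefix a[0:hi] is in non-decreasing order."""
    return all(a[i] <= a[i + 1] for i in range(hi - 1))

def selection_sort(a):
    n = len(a)
    for i in range(n):
        # Invariant: a[0:i] is sorted and every element of a[0:i]
        # is <= every element of a[i:n].
        assert is_sorted(a, i)
        assert all(x <= y for x in a[:i] for y in a[i:])
        m = min(range(i, n), key=lambda k: a[k])   # index of the minimum of a[i:n]
        a[i], a[m] = a[m], a[i]
    assert is_sorted(a, n)   # postcondition: the whole array is sorted
    return a

print(selection_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```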

Relevance: 20.00%

Abstract:

Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS.

Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues. Structured derivations is a logic-based approach to teaching mathematics in which formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation, while at the same time becoming more confident with formalisms. The Python programming language was originally designed with education in mind and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and it makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
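As a small, purely hypothetical classroom-style example of the point about Python (not material from the thesis itself), the binary search below keeps attention on the algorithm and on the invariant that drives its correctness rather than on language mechanics:

```python
# Hypothetical teaching example: compact Python syntax, explicit invariant.

def binary_search(a, x):
    """Return an index i with a[i] == x in the sorted list a, or -1."""
    lo, hi = 0, len(a)          # invariant: x can only occur in a[lo:hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1        # everything below mid+1 is < x
        elif a[mid] > x:
            hi = mid            # everything from mid upward is > x
        else:
            return mid
    return -1                   # a[lo:hi] is empty, so x is not present

print(binary_search([1, 3, 5, 7, 11], 7))   # -> 3
```

The invariant comment is the kind of logical statement that structured derivations and invariant-based programming aim to make explicit.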

Relevance: 20.00%

Abstract:

The cell has a so-called cytoskeleton that, among other things, gives the cell structural support and participates in its shape and movement functions. Intermediate filaments are an important part of the cytoskeleton, and they have long been known for their essential roles in maintaining cellular organization and tissue integrity. In recent years it has become clear that intermediate filaments are functionally more versatile than previously thought, as a range of studies has demonstrated their importance in various signalling processes. These protein networks interact with kinases and other key signalling factors and thereby take part in the cell's signalling machinery. The intermediate filament protein nestin is often used as a marker for stem cells, but its physiological functions are largely unknown. An interaction between nestin and a signalling complex consisting of cyclin-dependent kinase 5 (Cdk5) and its activator protein p35 was discovered in our laboratory before this thesis work began. The aim of my thesis was therefore to investigate the functional significance of nestin in the regulation of the Cdk5/p35 complex. Cdk5 is a multifunctional kinase that regulates both development and stress responses in nerves and muscles. We showed that nestin protects neuronal stem cells during oxidative stress through its ability to inhibit the harmful activity of Cdk5. By anchoring the Cdk5/p35 complex, nestin regulates the subcellular localization of Cdk5/p35 and reduces the cleavage of p35 into the more stable activator p25. We also demonstrated the activation mechanism of Cdk5 during muscle cell differentiation. Protein kinase C zeta (PKCzeta) was revealed to be able to accelerate the cleavage of p35 into p25 and thereby increase Cdk5 activity. Through its ability to regulate the Cdk5 signalling complex, nestin could steer the differentiation of muscle cells. This doctoral thesis has substantially advanced the understanding of the regulatory mechanisms that control Cdk5 activation, and it presents nestin and PKCzeta as critical factors in this regulation. The thesis also provides new information on the cellular functions of nestin, which we have shown to be an important regulator of cell survival and differentiation.

Relevance: 20.00%

Abstract:

The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines such as cranes and excavators. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is a basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations, and efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems introduce noise into the results, which in many cases causes the simulation run to fail.

Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suited to stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or eliminate them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. Two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed.

Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar, as the pressure drop over the orifice approaches zero, only in rare situations: for example when a valve is closed, when an actuator is driven against an end stop, or when an external force makes an actuator reverse its direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop comes close to zero, since the first derivative of the flow with respect to the pressure drop approaches infinity when the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed in which a cubic spline function describes the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits a trade-off exists between accuracy and calculation speed, and this trade-off is investigated for the two-regime flow orifice model.

Especially inside many types of valves, as well as between them, there exist very small volumes. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation, and particularly in real-time simulation these problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero, and if fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure obtained by numerical integration in a separate cascade loop. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are avoided completely. Moreover, the method is freely applicable regardless of the integration routine used.

A further strength of both methods is that they can be used together with a semi-empirical modelling method that does not necessarily require any geometrical data of the valves and actuators to be modelled; most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how they improve the dynamic simulation of various hydraulic circuits.
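As a rough numerical sketch of the two-regime orifice idea described above (the coefficient values, the transition pressure drop and the particular cubic polynomial below are illustrative assumptions, not the exact formulation of the thesis), the turbulent square-root law can be replaced below a transition pressure drop by a cubic spline whose value and first derivative match the turbulent model at the boundary:

```python
import numpy as np

def orifice_flow(dp, K=2.0e-7, dp_tr=2.0e5):
    """Two-regime orifice model (illustrative parameter values).

    Turbulent regime:  Q = K * sign(dp) * sqrt(|dp|)     for |dp| >= dp_tr
    Transition regime: cubic spline Q = a*dp + b*dp**3   for |dp| <  dp_tr,
    chosen so that Q and dQ/d(dp) are continuous at |dp| = dp_tr.
    """
    q_tr = K * np.sqrt(dp_tr)            # flow at the transition point
    dq_tr = 0.5 * K / np.sqrt(dp_tr)     # slope of the turbulent model there
    # Solve a*dp_tr + b*dp_tr**3 = q_tr and a + 3*b*dp_tr**2 = dq_tr.
    b = (dq_tr * dp_tr - q_tr) / (2.0 * dp_tr**3)
    a = dq_tr - 3.0 * b * dp_tr**2
    dp = np.asarray(dp, dtype=float)
    turbulent = K * np.sign(dp) * np.sqrt(np.abs(dp))
    spline = a * dp + b * dp**3
    return np.where(np.abs(dp) >= dp_tr, turbulent, spline)

# The derivative stays finite through dp = 0, unlike the pure sqrt model.
for dp in (-4e5, -1e5, 0.0, 1e5, 4e5):
    print(dp, orifice_flow(dp))
```

Because the spline has a finite slope at zero pressure drop, the eigenvalue blow-up caused near zero by the pure square-root model is avoided.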
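In the same spirit, the pseudo-dynamic idea of replacing pressure integration in a very small volume by a separately iterated steady-state pressure can be sketched as follows; the relaxation loop, the gain and the parameter values here are simplified stand-ins for the cascade loop of the thesis:

```python
import math

def turb_flow(dp, K):
    """Signed turbulent orifice flow Q = K * sign(dp) * sqrt(|dp|)."""
    return math.copysign(K * math.sqrt(abs(dp)), dp)

def pseudo_dynamic_pressure(p_up, p_down, K_in, K_out, p0, gain=5.0e9, n_iter=200):
    """Steady-state pressure of a very small node volume between two orifices.

    Instead of integrating dp/dt = (Be/V) * (Q_in - Q_out) with a tiny V
    (which makes the ODE system stiff), the pressure is driven by an inner
    iteration loop to the value where the net flow vanishes.
    """
    p = p0
    for _ in range(n_iter):
        q_net = turb_flow(p_up - p, K_in) - turb_flow(p - p_down, K_out)
        p += gain * q_net          # relaxation toward Q_in = Q_out (illustrative gain)
    return p

# Example: 100 bar upstream, 1 bar downstream, equal orifices -> the node
# pressure settles roughly midway so that inflow equals outflow.
p = pseudo_dynamic_pressure(100e5, 1e5, K_in=2e-7, K_out=2e-7, p0=50e5)
print(p / 1e5, "bar")
```

The surrounding simulation would call such a routine at each time step instead of integrating dp/dt = (Be/V)(Q_in - Q_out) for the tiny volume, so the smallest volumes no longer dictate the usable step size.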

Relevance: 20.00%

Abstract:

In this thesis, simple methods have been sought to lower the teacher's threshold for starting to apply constructive alignment in instruction. From the phases of the instructional process, aspects that the teacher can improve with little effort have been identified. Teachers have been interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that, in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students' background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see whether their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students are different learners than engineering students in general and that there is a connection between a student's culture and learning style. In this thesis, a simple self-assessment scale based on Bloom's revised taxonomy has also been developed. A statistical analysis of the test results indicates that the scale is in general quite reliable, although individual students still slightly overestimate or underestimate their knowledge levels. For students, being able to follow their own progress is motivating; for a teacher, self-assessment results give information about how the class is proceeding and what the students' level of knowledge is.

Relevance: 20.00%

Abstract:

The aim of the study was to create and evaluate an intervention programme for Tanzanian children from a low-income area who are at risk of reading and writing difficulties. Learning difficulties, including reading and writing difficulties, are likely to be behind many of the common school problems in Tanzania, but they are not well understood and research is needed. The design of the study included an identification and intervention phase with follow-up. A group-based dynamic assessment approach was used to identify children at risk of difficulties in reading and writing, and the same approach was used in the intervention. The study was a randomized experiment with one experimental and two control groups. For the experimental and control groups, a total of 96 children (46 girls and 50 boys) in grade one were screened out of 301 children from two schools in a low-income urban area of Dar es Salaam. One third of the children, the experimental group, participated in an intensive literacy-skills training programme for five weeks, six hours per week, aimed at promoting reading and writing ability, while the children in the control groups had a mathematics and art programme. Follow-up was performed five months after the intervention. The intervention programme and the tests were based on the Zambian BASAT (Basic Skill Assessment Tool, Ketonen & Mulenga, 2003), but the content was drawn from the Kiswahili school curriculum in Tanzania. The main components of the training and testing programme were the same, differing only in content. The training process differed from traditional training in Tanzanian schools in that the principles of teaching and training in dynamic assessment were followed. Feedback was the cornerstone of the training, and the focus was on supporting the children in exploring knowledge and strategies for performing the tasks. The experimental group improved significantly more (p < .001) than the control groups during the intervention from pre-test to follow-up (repeated-measures ANOVA), and no differences between the control groups were observed. The effect was significant on all the measures: phonological awareness, reading skills, writing skills and overall literacy skills. A transfer effect on school marks in Kiswahili and English was also found. Following a discussion of the results, suggestions for further research and adaptation of the programme are presented.

Relevance: 20.00%

Abstract:

The aim of this study is to identify the practical programming topics that university students at the beginning of their studies consider the most difficult, and to compile a lecture handout to be used in the next offering of the Practical Programming (Käytännön ohjelmointi) course. The research method was constructive: after specifying the goal, the lecture handout was implemented by compiling source material on the defined topic areas into a unified, readable whole. Universities generally do not teach software testing before advanced software engineering courses, which is a shortcoming from the perspective of working life. This work presents arguments for emphasizing practically oriented topics in programming courses already in the early stages of university studies. The work examines feedback from the Practical Programming course, in which students were found to consider linked lists, pointers, dynamic memory management, data structures and version control the most difficult topics of the course. Through the lecture material, the work aims to develop the university teaching of practical programming at Lappeenranta University of Technology; the material includes, among other things, theory, the key commands students need, web links and a programming style guide.

Relevance: 20.00%

Abstract:

In this thesis, a general approach is devised to model electrolyte sorption from aqueous solutions on solid materials. Electrolyte sorption is often considered an unwanted phenomenon in ion exchange, and its potential as an independent separation method has not been fully explored. The solid sorbents studied here are porous and non-porous organic or inorganic materials, with or without specific functional groups attached to the solid matrix. Accordingly, the sorption mechanisms include physical adsorption, chemisorption on the functional groups, and partition restricted by electrostatic or steric factors. The model is tested in four case studies dealing with chelating adsorption of transition metal mixtures, physical adsorption of metal and metalloid complexes from chloride solutions, size exclusion of electrolytes in nano-porous materials, and electrolyte exclusion of electrolyte/non-electrolyte mixtures. The model parameters are estimated using experimental data from equilibrium and batch kinetic measurements, and they are used to simulate actual single-column fixed-bed separations.

Phase equilibrium between the solution and solid phases is described using the thermodynamic Gibbs-Donnan model and various adsorption models, depending on the properties of the sorbent. The three-dimensional thermodynamic approach is used for volume sorption in gel-type ion exchangers and in nano-porous adsorbents, and satisfactory correlation is obtained provided that both mixing and exclusion effects are adequately taken into account. Two-dimensional surface adsorption models are successfully applied to the physical adsorption of complex species and to the chelating adsorption of transition metal salts; in the latter case, a comparison is also made with complex formation models. Results of the mass transport studies show that uptake rates even in a competitive high-affinity system can be described by constant diffusion coefficients when the adsorbent structure and the phase equilibrium conditions are adequately included in the model. Furthermore, a simplified solution based on the linear driving force approximation and the shrinking-core model is developed for very non-linear adsorption systems.

In each case study, the actual separation is carried out batch-wise in fixed beds, and the experimental data are simulated and correlated using the parameters derived from equilibrium and kinetic data. Good agreement between the calculated and experimental breakthrough curves is usually obtained, indicating that the proposed approach is useful in systems that at first sight appear very different. For example, the important improvement in copper separation from concentrated zinc sulfate solution at elevated temperatures is correctly predicted by the model. In some cases, however, re-adjustment of the model parameters is needed due to e.g. high solution viscosity.
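As a rough illustration of the kind of simplified rate model mentioned above (a linear driving force approximation combined with a Langmuir-type isotherm; the single-column discretization and all parameter values below are illustrative assumptions, not the fitted models of the thesis), a fixed-bed breakthrough curve can be simulated as follows:

```python
import numpy as np

# Illustrative fixed-bed breakthrough simulation: the column is discretized
# into well-mixed cells, solid-phase uptake follows a linear driving force
# (LDF) rate law, and equilibrium is given by a Langmuir isotherm.

n_cells, L, eps = 50, 0.25, 0.4          # cells, bed length [m], bed voidage
u, c_feed = 1.0e-3, 100.0                # interstitial velocity [m/s], feed [mol/m3]
q_max, b = 500.0, 0.05                   # Langmuir capacity [mol/m3], affinity [m3/mol]
k_ldf = 0.01                             # LDF mass-transfer coefficient [1/s]
dz = L / n_cells
dt = 0.2 * dz / u                        # time step safely below the CFL limit

c = np.zeros(n_cells)                    # fluid-phase concentration profile
q = np.zeros(n_cells)                    # solid-phase loading profile

t, t_end = 0.0, 4000.0
history = []
while t < t_end:
    q_eq = q_max * b * c / (1.0 + b * c)             # Langmuir equilibrium loading
    dq = k_ldf * (q_eq - q)                          # LDF uptake rate
    c_in = np.concatenate(([c_feed], c[:-1]))        # upstream cell concentrations
    dc = u / dz * (c_in - c) - (1.0 - eps) / eps * dq
    c += dt * dc
    q += dt * dq
    t += dt
    history.append((t, c[-1] / c_feed))              # outlet relative concentration

# Print a coarse breakthrough curve at the column outlet.
for i in range(0, len(history), len(history) // 10):
    print(f"t = {history[i][0]:7.1f} s   c_out/c_feed = {history[i][1]:.3f}")
```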

Relevance: 20.00%

Abstract:

The use of agile methods is becoming more common in software development, and good quality management is therefore required of them. There are many different agile methods, but they all share similar core values and principles. This work examines three agile methods: Scrum, eXtreme Programming (XP) and the Dynamic Systems Development Method (DSDM). For each method, it is examined how quality management is handled. The work also discusses the differences between agile and traditional methods, as well as the kinds of projects in which agile methods are worth using.

Relevance: 20.00%

Abstract:

Linear programming models are effective tools to support initial or periodic planning of agricultural enterprises; they require, however, technical coefficients that can be determined using computer simulation models. This paper, presented in two parts, deals with the development, application and testing of a methodology and a computational modeling tool to support the planning of irrigated agriculture activities. Part I covers the development and application, including a sensitivity analysis, of a multiyear linear programming model to optimize financial return and water use at the farm level for the Jaíba irrigation scheme, Minas Gerais State, Brazil, using data on crop irrigation requirements and yields obtained from previous simulations with the MCID model. The linear programming model produced a crop pattern for which a maximum total net present value of R$ 372,723.00 was obtained for the four-year period. Constraints on monthly water availability, labor, land and production were critical in the optimal solution. Regarding water use optimization, it was verified that expressive reductions in irrigation requirements can be achieved with only small reductions in the maximum total net present value.
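A minimal sketch of the kind of farm-level linear programme described above (crop areas as decision variables, net present value as the objective, land and monthly water availability as constraints); all crop names and coefficients are invented for illustration and are not the Jaíba model:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: irrigated area (ha) of two hypothetical crops.
npv_per_ha = np.array([1500.0, 2200.0])          # net present value, R$/ha (illustrative)
water_per_ha = np.array([[1200.0, 2000.0],       # month 1 water use, m3/ha
                         [1800.0, 1500.0]])      # month 2 water use, m3/ha
water_cap = np.array([150000.0, 160000.0])       # monthly water availability, m3
land_cap = 100.0                                 # total land, ha

# linprog minimizes, so negate the NPV coefficients to maximize.
res = linprog(
    c=-npv_per_ha,
    A_ub=np.vstack([water_per_ha, np.ones((1, 2))]),
    b_ub=np.concatenate([water_cap, [land_cap]]),
    bounds=[(0, None), (0, None)],
    method="highs",
)
print("areas (ha):", res.x)
print("max NPV (R$):", -res.fun)
```

The model in the paper is multiyear and also carries labor and production constraints, but the basic structure, maximizing net present value subject to resource limits, is the same.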

Relevance: 20.00%

Abstract:

The objective of this study was to model mathematically and to simulate the dynamic behavior of an auger-type fertilizer applicator (AFA) in order to enable variable-rate application (VRA) and reduce the coefficient of variation (CV) of the application, proposing a controller for the angular speed θ' of the motor drive shaft. The model input was θ' and the response was the fertilizer mass flow, as a function of the applicator's construction, the fertilizer density, the fill factor and the end position of the auger. The model was used to simulate an open-loop control system with an electric drive for the AFA using an armature voltage (V_A) controller. By introducing a sinusoidal excitation signal in V_A with optimized amplitude and phase delay, and thereby varying θ' over an operation cycle, the CV was reduced from 29.8% (constant V_A) to 11.4%. The development of the mathematical model was a first step towards the introduction of electric drive systems and closed-loop control for the implementation of AFAs with low CV in VRA.
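A rough numerical sketch of the idea (using a hypothetical pulsating flow-per-revolution profile and hand-picked modulation values, not the fitted AFA model of the study): modulating the shaft speed in counter-phase with the flow pulsation reduces the CV of the delivered mass flow over one revolution.

```python
import numpy as np

# Hypothetical model: the mass discharged per unit shaft angle pulsates once
# per revolution because of the auger end position; the mass flow is that
# profile times the shaft angular speed. Sampling over one revolution in the
# shaft angle is a simplification.
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
flow_per_rad = 1.0 + 0.3 * np.sin(theta)          # kg/rad, illustrative pulsation

def coefficient_of_variation(x):
    return 100.0 * np.std(x) / np.mean(x)

# Constant-speed drive.
omega_const = np.full_like(theta, 10.0)           # rad/s
cv_const = coefficient_of_variation(flow_per_rad * omega_const)

# Sinusoidal speed modulation in counter-phase with the pulsation
# (amplitude and phase chosen by hand here; the study optimizes them).
omega_mod = 10.0 * (1.0 - 0.28 * np.sin(theta))
cv_mod = coefficient_of_variation(flow_per_rad * omega_mod)

print(f"CV, constant speed:  {cv_const:.1f} %")
print(f"CV, modulated speed: {cv_mod:.1f} %")
```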

Relevance: 20.00%

Abstract:

This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European debt crisis of 2010. The seven test economies are members of the European Monetary Union and are in different economic conditions. The time-varying hedge ratio was constructed from conditional variances and correlations estimated with multivariate GARCH models. Three different underlying portfolios are used: national equity markets, government bond markets and the combination of the two. These underlying portfolios were hedged using credit default swaps. The empirical part includes in-sample and out-of-sample analyses based on constant and dynamic models. In almost every case the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with the prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable instruments for hedging and behave more like speculative instruments.
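A minimal sketch of how a time-varying hedge ratio follows from conditional second moments, h_t = Cov_t(r_asset, r_hedge) / Var_t(r_hedge). An EWMA (RiskMetrics-style) estimator stands in here for the multivariate GARCH/DCC models of the thesis, and the return series are simulated rather than the CDS and market data actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns of the underlying position and the hedging
# instrument (stand-ins for the equity/bond portfolios and CDS positions).
T, rho = 1000, -0.6
z = rng.multivariate_normal([0.0, 0.0],
                            [[1.0, rho], [rho, 1.0]], size=T) * 0.01
r_asset, r_hedge = z[:, 0], z[:, 1]

# EWMA conditional covariance matrix (simplified stand-in for DCC-GARCH).
lam = 0.94
cov = np.cov(np.vstack([r_asset, r_hedge]))        # initialize with the sample covariance
hedge_ratio = np.empty(T)
for t in range(T):
    r_t = np.array([r_asset[t], r_hedge[t]])
    cov = lam * cov + (1.0 - lam) * np.outer(r_t, r_t)
    # Variance-minimizing hedge ratio given the conditional moments.
    hedge_ratio[t] = cov[0, 1] / cov[1, 1]

print("mean hedge ratio:", hedge_ratio.mean())
print("last 5 hedge ratios:", np.round(hedge_ratio[-5:], 3))
```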

Relevance: 20.00%

Abstract:

This master's thesis examines the relationship between dynamic capabilities and operational-level innovations. In addition, measures for the concept of dynamic capabilities are developed. The study was carried out in the magazine publishing industry, which is considered favourable for examining dynamic capabilities since the sector is characterized by rapid change. As a basis for the study and the measure development, a literature review was conducted. Data for the empirical section were gathered through a survey targeted at the editors-in-chief of Finnish consumer magazines. The relationship between dynamic capabilities and innovation was examined using multiple linear regression. The results indicate that dynamic capabilities have an effect on the emergence of radical innovations. No effect of environmental dynamism on radical innovations was detected, nor was the effect of dynamic capabilities on innovation greater in a turbulent operating environment.