28 results for Mathematical and Computer Modelling
Abstract:
Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems. An appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and a computer program that reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and of steel structures for flat and ridge roofs. This thesis demonstrates that the modelling time, the most time-consuming part of the process, is significantly reduced. Modelling errors are reduced and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings and penalty factors for the constraints.
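The abstract does not spell out the selection rule itself; the sketch below shows one well-known penalty-free way to compare candidate designs (feasibility-based tournament rules in the style of Deb), offered as an illustrative assumption rather than the rule actually proposed in the thesis.

```python
# Sketch of a penalty-free pairwise selection rule (feasibility-based rules in
# the style of Deb); whether this matches the thesis's rule is an assumption.
def total_violation(constraints: list[float]) -> float:
    """Constraints are expressed as g_i(x) <= 0; positive values are violations."""
    return sum(max(0.0, g) for g in constraints)

def better(obj_a: float, cons_a: list[float], obj_b: float, cons_b: list[float]) -> bool:
    """Return True if candidate A should be preferred over candidate B."""
    va, vb = total_violation(cons_a), total_violation(cons_b)
    if va == 0.0 and vb == 0.0:
        return obj_a < obj_b          # both feasible: compare mass/cost directly
    if (va == 0.0) != (vb == 0.0):
        return va == 0.0              # a feasible design always beats an infeasible one
    return va < vb                    # both infeasible: smaller total violation wins

# Example: a lighter but infeasible truss loses to a heavier feasible one.
print(better(950.0, [0.02], 1020.0, [-0.1, -0.3]))  # False
```

Because the comparison never mixes objective values with constraint violations, no penalty weights need to be tuned, which is consistent with the "black box" usability claimed above.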
Abstract:
This work deals with the cooling of high-speed electric machines, such as motors and generators, through an air gap. It consists of numerical and experimental modelling of gas flow and heat transfer in an annular channel. Velocity and temperature profiles are modelled in the air gap of a high-speed test machine. Local and mean heat transfer coefficients and total friction coefficients are obtained for a smooth rotor-stator combination over a large velocity range. The aim is to solve the heat transfer numerically and experimentally. The FINFLO software, developed at Helsinki University of Technology, has been used in the flow solution, and the commercial IGG and FieldView programs for grid generation and post-processing. The annular channel is discretized as a sector mesh. The calculation is performed with constant mass flow rate at six rotational speeds. The effect of turbulence is calculated using three turbulence models. The friction coefficient and velocity factor are obtained via the total friction power. The first part of the experimental section consists of finding proper sensors and calibrating them in a straight pipe. After preliminary tests, an RdF sensor is glued on the stator and rotor surfaces. Telemetry is needed to measure the heat transfer coefficients at the rotor. The mean heat transfer coefficients are measured in a test machine at four cooling air mass flow rates over a wide Couette Reynolds number range. The calculated friction and heat transfer coefficients are compared with measured and semi-empirical data. Heat is transferred from the hotter stator and rotor surfaces to the cooler air flow in the air gap, not from the rotor to the stator via the air gap, although the stator temperature is lower than the rotor temperature. The calculated friction coefficients fit well with the semi-empirical equations and preceding measurements. At constant mass flow rate the rotor heat transfer coefficient reaches a saturation point at a higher rotational speed, while the heat transfer coefficient of the stator grows uniformly. The magnitudes of the heat transfer coefficients are almost constant across the different turbulence models. The calibration of sensors in a straight pipe is only an advisory step in the selection process. Telemetry is tested in the pipe conditions and compared with the same measurements made with a plain sensor. The measured heat transfer coefficients and those from the semi-empirical equation are higher than the numerical data over the velocity range considered. Friction and heat transfer coefficients are presented over a large velocity range in the report. The goals are reached acceptably using numerical and experimental research. The next challenge is to obtain results for grooved stator-rotor combinations. The work also contains results for an air gap with a grooved stator with 36 slots. The velocity field given by the numerical method does not match the estimated flow mode in every respect. The absence of secondary Taylor vortices is evident when using time-averaged numerical simulation.
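As a rough illustration of the quantities mentioned above, the snippet below computes a Couette-type Reynolds number and converts a Nusselt number into a mean heat transfer coefficient. The choice of the gap width as length scale, the rotor surface speed as reference velocity, and the air properties are assumptions for the example, not the exact definitions used in the thesis.

```python
import math

# Illustrative definitions only: gap width as length scale, rotor surface speed
# as reference velocity, and standard air properties are assumed here.
def couette_reynolds(rotor_radius_m: float, gap_m: float, rpm: float,
                     rho: float = 1.2, mu: float = 1.8e-5) -> float:
    """Re = rho * U_rotor * gap / mu, with U_rotor the rotor surface speed."""
    u_rotor = 2.0 * math.pi * rotor_radius_m * rpm / 60.0
    return rho * u_rotor * gap_m / mu

def heat_transfer_coefficient(nusselt: float, gap_m: float, k_air: float = 0.026) -> float:
    """h = Nu * k / D_h, taking the hydraulic diameter of the annulus as 2 * gap."""
    return nusselt * k_air / (2.0 * gap_m)

# Example: 50 mm rotor radius, 1 mm air gap, 30 000 rpm.
print(couette_reynolds(0.05, 0.001, 30_000))
```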
Abstract:
The study "Ilmarisen Suomi" ("Ilmarinen's Finland") and its makers offers new information and a historical interpretation of the building of high-technology Finland in the post-war era. The book describes the wide-ranging activities of the makers of the ESKO computer and the fate of the machine in the 1950s. The Matematiikkakonekomitea (Committee for Mathematical Machines, 1954-1960), which commissioned ESKO, intended the device to be Finland's first computer, but according to the interpretation presented in the book the committee also had broader national ideals and goals, such as founding a national central computing bureau. Early computers were called mathematics machines, a name descriptive of their use. The book is the first thorough account of, and at the same time the first study on, ESKO and its makers' project in the 1950s. The Matematiikkakonekomitea was led by leading scientists of the time, Rolf Nevanlinna and Erkki Laurila. The doctoral study asks how the acquisition of the country's first computer was justified, what the Matematiikkakonekomitea actually did, and what national motives in particular the activities of the machine's makers expressed. The study draws on a wide range of archival material, literature and interviews from Finland, Germany and Sweden, making particular use of research literature from the history of technology and from social-scientific science and technology studies. The book examines in detail how ESKO's makers combined technology with national justifications and built a new, technically skilled "Ilmarinen's Finland", together and in competition with other actors, turning technology into a national project for Finns. On the basis of the study of the Matematiikkakonekomitea and the ESKO project, it can be said of the relationship between Finns and technology that technology did not merely become a national matter for Finns; it was deliberately made into a national project, and one that was by no means unanimous even in the post-war era. According to the study, the domestic committee achieved a great deal and produced consequences that were even more significant, despite the fact that ESKO was completed badly behind schedule, in 1960. The committee contributed to the success of IBM in Finland, to the beginning of state-led science policy, and to the emergence of electronics expertise at the Kaapelitehdas (Cable Factory), the predecessor of Nokia.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to web databases of interest are already discovered and known to query systems. However, such assumptions mostly do not hold true because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to provide input values to search interfaces manually and then extract the required data from the result pages. Filling out forms manually is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, the automation of querying and retrieving data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
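A minimal sketch of the kind of representation discussed above: a search interface described by its action URL, HTTP method, and a mapping from human-readable field labels to input names, plus a helper that fills the form and fetches the result page. The class, field names and example URL are hypothetical illustrations, not the data model or query language actually proposed in the thesis.

```python
# Hypothetical, simplified representation of a web search interface and a helper
# that fills it out; not the thesis's actual data model or query language.
from dataclasses import dataclass, field
from urllib.parse import urlencode
from urllib.request import urlopen

@dataclass
class SearchInterface:
    """A web search form: its action URL, HTTP method, and labelled fields."""
    action_url: str
    method: str = "GET"
    fields: dict[str, str] = field(default_factory=dict)  # human label -> input name

def query(interface: SearchInterface, terms: dict[str, str]) -> str:
    """Fill the fields identified by their labels and fetch the result page HTML."""
    params = {interface.fields[label]: value for label, value in terms.items()}
    if interface.method.upper() == "GET":
        return urlopen(interface.action_url + "?" + urlencode(params)).read().decode()
    return urlopen(interface.action_url, data=urlencode(params).encode()).read().decode()

# Example: a hypothetical book-search database.
books = SearchInterface("http://example.org/search", "GET", {"Title": "q", "Author": "au"})
# html = query(books, {"Title": "deep web"})  # result page, to be parsed for records
```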
Abstract:
There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the financial resources required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations governing an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, in which case the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical application in industry. Most of the numerical simulations were done with the commercial software Fluent, and user-defined functions were added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole. Furthermore, the result corresponds to the experimental observation that the RBC is deformed during its movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The obtained results agreed well with the correlation of Macdonald et al. (1979) for the range of actual flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
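For orientation, the packed-bed correlation of Macdonald et al. (1979) is an Ergun-type expression. The sketch below uses the classical particle-diameter form with the commonly quoted smooth-particle constants A = 180 and B = 1.8; these constants and the Reynolds number definition are assumptions here, since the thesis defines its Reynolds number via pore permeability and interstitial velocity.

```python
# Ergun-type friction factor with the smooth-particle constants commonly quoted
# for Macdonald et al. (1979); A, B and the Re definition are assumptions here.
def dimensionless_pressure_drop(re_p: float, porosity: float,
                                a: float = 180.0, b: float = 1.8) -> float:
    """f = (dp/L) * d_p / (rho * U**2) * eps**3 / (1 - eps) = A*(1-eps)/Re_p + B,
    with Re_p = rho * U * d_p / mu the particle Reynolds number."""
    return a * (1.0 - porosity) / re_p + b

# Example: low Re_p and porosity 0.4 -> the viscous term dominates.
print(dimensionless_pressure_drop(10.0, 0.4))
```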
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, distance transforms are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, but their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
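The following sketch illustrates the two-pass, chamfer-style propagation described above for a DTOCS-like transform, in which the local step between 8-neighbours costs the gray-value difference plus one. It is a simplified illustration only; the exact kernels and distance definitions should be taken from the thesis itself.

```python
import numpy as np

def dtocs_like(gray: np.ndarray, region: np.ndarray, iterations: int = 4) -> np.ndarray:
    """Two-pass chamfer-style propagation on a gray-level image (simplified sketch).

    gray   : 2-D array of gray values.
    region : boolean mask, True where distances are computed; False pixels act
             as sources with distance 0.
    Local step cost between 8-neighbours is |gray difference| + 1 (assumed here).
    """
    big = np.iinfo(np.int64).max // 4
    dist = np.where(region, big, 0).astype(np.int64)
    h, w = gray.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited in a raster scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # mirror set for the reverse scan
    for _ in range(iterations):                   # complicated images may need several rounds
        for offsets, rows, cols in ((fwd, range(h), range(w)),
                                    (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in rows:
                for x in cols:
                    if not region[y, x]:
                        continue
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = abs(int(gray[y, x]) - int(gray[ny, nx])) + 1
                            dist[y, x] = min(dist[y, x], dist[ny, nx] + step)
    return dist
```

Only the gray-level image and the distance buffer are touched during the sweeps, which mirrors the two-image-buffer claim made in the abstract.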
Abstract:
The objective of this thesis is to shed light on the vertical vibration of granular materials, which is of potential interest in the power generation industry. The main focus is on investigating the drag force and frictional resistance that influence the movement of a granular material (in the form of glass beads) contained in a vessel which is subjected to sinusoidal oscillation. The thesis is divided into three parts: theoretical analysis, experiments and computer simulations. The theoretical part of the study presents the physical phenomena underlying the vibration of granular materials. Experiments are designed to determine the fundamental parameters that contribute to the behavior of vibrating granular media. The numerical simulations involve three different software applications: FLUENT, LS-DYNA and ANSYS Workbench. The goal of these simulations is to test theoretical and semi-empirical models for granular materials in order to validate their compatibility with the experimental findings, to assist in predicting their behavior, and to estimate quantities that are hard to measure in the laboratory.
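For reference, sinusoidally vibrated granular beds are commonly characterised by the dimensionless acceleration Γ = Aω²/g. The helper below is an illustrative sketch of that standard parameter, not a quantity reported in the abstract.

```python
import math

# Illustrative helper: dimensionless acceleration Gamma = A * omega^2 / g for a
# sinusoidally oscillated vessel (not a quantity taken from the thesis).
def dimensionless_acceleration(amplitude_m: float, frequency_hz: float, g: float = 9.81) -> float:
    """Gamma for vessel motion x(t) = A * sin(omega * t)."""
    omega = 2.0 * math.pi * frequency_hz
    return amplitude_m * omega ** 2 / g

# Example: 2 mm amplitude at 20 Hz gives Gamma ~ 3.2, above the lift-off threshold Gamma > 1.
print(dimensionless_acceleration(2e-3, 20.0))
```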
Abstract:
In general, models of ecological systems can be broadly categorized as ’top-down’ or ’bottom-up’ models, based on the hierarchical level at which the model processes are formulated. The structure of a top-down, also known as phenomenological, population model can be interpreted in terms of population characteristics, but it typically lacks an interpretation on a more basic level. In contrast, bottom-up, also known as mechanistic, population models are derived from assumptions and processes on a more basic level, which allows interpretation of the model parameters in terms of individual behavior. Both approaches, phenomenological and mechanistic modelling, have their advantages and disadvantages in different situations. However, mechanistically derived models might be better at capturing the properties of the system at hand, and thus give more accurate predictions. In particular, when models are used for evolutionary studies, mechanistic models are more appropriate, since natural selection takes place on the individual level, and in mechanistic models the direct connection between model parameters and individual properties has already been established. The purpose of this thesis is twofold. Firstly, a systematic way to derive mechanistic discrete-time population models is presented. The derivation is based on combining explicitly modelled, continuous processes on the individual level within a reproductive period with a discrete-time maturation process between reproductive periods. Secondly, as an example of how evolutionary studies can be carried out with mechanistic models, the evolution of the timing of reproduction is investigated. Thus, these two lines of research, the derivation of mechanistic population models and evolutionary studies, complement each other.
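As a toy illustration of the recipe described above, the sketch below integrates a simple density-dependent mortality process within a reproductive period and couples it to a discrete reproduction and maturation step between periods. The specific equations and parameter values are illustrative assumptions, not the models derived in the thesis.

```python
# Toy example: continuous density-dependent juvenile mortality within a season,
# followed by a discrete reproduction/maturation step between seasons.
# Equations and parameter values are illustrative assumptions only.
from scipy.integrate import solve_ivp

def within_season_survivors(juveniles0: float, mu: float, t_season: float) -> float:
    """Integrate dJ/dt = -mu * J^2 (crowding mortality) over one reproductive period."""
    sol = solve_ivp(lambda t, j: [-mu * j[0] ** 2], (0.0, t_season), [juveniles0])
    return float(sol.y[0, -1])

def next_generation(adults: float, fecundity: float = 10.0,
                    mu: float = 0.05, t_season: float = 1.0) -> float:
    """One step of the mechanistically derived discrete-time map."""
    juveniles = fecundity * adults                            # reproduction at the start of the season
    return within_season_survivors(juveniles, mu, t_season)  # survivors mature into next-year adults

# Iterating the map gives the population dynamics between reproductive periods.
n = 2.0
for year in range(5):
    n = next_generation(n)
    print(year + 1, round(n, 3))
```

Integrating this particular mortality law analytically recovers a Beverton-Holt-type map, which illustrates how the parameters of the resulting discrete-time model inherit an individual-level interpretation.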
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure throughout a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract, systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of comparing observations concerning different musical parameters and of combining CSA with statistical and perhaps other music-analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic and set-class similarity measurements were proposed. The most advanced results were attained by employing automated function generation, comparable to so-called genetic programming, to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly, independent of the type of similarity function employed in the analysis.
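A minimal sketch of the core idea: score how strongly one chosen comparison structure (here a pitch-class set) is present in each segment of a piece. The segmentation into plain note lists and the Jaccard-style overlap are deliberate simplifications, not the similarity measures developed in the study.

```python
# Sketch of evaluating the prevalence of a chosen comparison structure (a
# pitch-class set) across pre-segmented material; the overlap score used here
# is a simplification, not one of the study's own similarity measures.
def prevalence(segments: list[list[int]], structure: set[int]) -> list[float]:
    """For each segment (a list of MIDI pitches), return the overlap between its
    pitch-class content and the chosen pitch-class set (0 = absent, 1 = identical)."""
    scores = []
    for notes in segments:
        pcs = {n % 12 for n in notes}
        union = pcs | structure
        scores.append(len(pcs & structure) / len(union) if union else 0.0)
    return scores

# Example: track the prevalence of the C major triad {0, 4, 7} over three segments.
print(prevalence([[60, 64, 67], [62, 65, 69], [60, 65, 69]], {0, 4, 7}))
```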
Abstract:
Lipotoxicity is a condition in which fatty acids (FAs) are not efficiently stored in adipose tissue and overflow to non-adipose tissue, causing organ damage. A defect of adipose tissue FA storage capability can be the primary culprit in the insulin resistance condition that characterizes many of the severe metabolic diseases affecting people nowadays. Obesity, in this regard, constitutes the gateway to, and a risk factor for, the major killers of modern society, such as cardiovascular disease and cancer. A deep understanding of the pathogenetic mechanisms that underlie obesity and the insulin resistance syndrome is a challenge for modern medicine. In the last twenty years of scientific research, FA metabolism and its dysregulations have been the object of numerous studies. Development of more targeted and quantitative methodologies is required, on the one hand, to investigate and dissect organ metabolism and, on the other hand, to test the efficacy and mechanisms of action of novel drugs. The combination of functional and anatomical imaging answers this need, since it provides more understanding and more information than we have ever had. The first purpose of this study was to investigate abnormalities of substrate organ metabolism, with special reference to FA metabolism, in obese drug-naïve subjects at an early stage of disease. Secondly, trimetazidine (TMZ), a metabolic drug thought to inhibit FA oxidation (FAO), was evaluated for the first time in obese subjects to test a whole-body and organ metabolism improvement, based on the hypothesis that FAO is increased at an early stage of the disease. A third objective was to investigate the relationship between ectopic fat accumulation surrounding the heart and coronaries and impaired myocardial perfusion in patients at risk of coronary artery disease (CAD). In the current study a new methodology was developed using PET imaging with 11C-palmitate and compartmental modelling for the non-invasive in vivo study of liver FA metabolism, and a similar approach was used to study FA metabolism in skeletal muscle, adipose tissue and the heart. The results of the different substudies point in the same direction. Obesity, at an early stage, is associated with an impairment in the esterification of FAs in adipose tissue and skeletal muscle, which is accompanied by upregulation of FAO in skeletal muscle, liver and heart. The inability to store fat may initiate a cascade of events leading to FA oversupply to lean tissue, overload of the oxidative pathway, and accumulation of toxic lipid species and triglycerides, and it was paralleled by a proportional growth in insulin resistance. In subjects with CAD, the accumulation of ectopic fat inside the pericardium is associated with impaired myocardial perfusion, presumably via a paracrine/vasocrine effect. At the beginning of the disease, TMZ is not detrimental to health; on the contrary, at the single-organ level (heart, skeletal muscle and liver) it seems beneficial, while no relevant effects were found on adipose tissue function. Taken altogether, these findings suggest that adipose tissue storage capability should be preserved, if it is not possible to prevent excessive fat intake in the first place.
Abstract:
Earlier management studies have found a relationship between managerial qualities and subordinate impacts, but the effect of managers' social competence on leader perceptions has not been solidly established. To fill this research gap, the present work embarks on a quantitative empirical effort to identify predictors of successful leadership. In particular, this study investigates relationships between perceived leader behavior and three self-report instruments used to measure managerial capability: 1) the WOPI Work Personality Inventory, 2) Raven's general intelligence scale, and 3) the Emotive Communication Scale (ECS). This work complements previous research by resorting to both self-reports and other-reports: the results acquired from the managerial sample are compared to subordinate perceptions as measured through the ECS other-report and the WOPI360 multi-source appraisal. The quantitative research comprises a sample of 80 superiors and 354 subordinates operating in eight Finnish organizations. The strongest predictive value emerged from the ECS self- and other-reports and certain personality dimensions. In contrast, supervisors' logical intelligence did not correlate with leadership perceived as socially competent by subordinates. Sixteen of the superiors rated as most socially competent by their subordinates were selected for case analysis. Their qualitative narratives evidence the role of life history and post-traumatic growth in developing managerial skills. The results contribute to leadership theory in four ways. First, the ECS self-report devised for this research offers a reliable scale for predicting socially competent leader ability. Second, the work identifies dimensions of personality and emotive skills that can be considered predictors of managerial ability and drawn on in leader recruitment and career planning. Third, the Emotive Communication Model delineated on the basis of the empirical data allows for the systematic design and planning of communication and leadership education. Fourth, this work furthers understanding of personal growth strategies and the role of life history in leader development and training. Finally, this research advances educational leadership by conceptualizing and operationalizing effective managerial communication. The Emotive Communication Model directs pedagogic attention in engineering education to assertion, emotional availability and inspiration skills. The proposed methodology addresses classroom management strategies drawing on problem-based learning, student empowerment, collaborative learning, and so-called socially competent teachership founded on teacher immediacy and perceived caring, all strategies that move away from student compliance and teacher modelling. The ultimate educational objective embraces the development of individual engineers and organizational leaders who not only possess traditional analytical and technical expertise and substantive knowledge but are also intelligent creatively, practically, and socially.
Abstract:
The aim of the present set of studies was to explore primary school children's Spontaneous Focusing On quantitative Relations (SFOR) and its role in the development of rational number conceptual knowledge. The specific goals were to determine whether it is possible to identify a spontaneous quantitative focusing tendency that indexes children's tendency to recognize and utilize quantitative relations in non-explicitly mathematical situations, and whether this tendency has an impact on the development of rational number conceptual knowledge in late primary school. To this end, we report on six original empirical studies that measure SFOR in children aged five to thirteen years and the development of rational number conceptual knowledge in ten- to thirteen-year-olds. SFOR measures were developed to determine whether there are substantial differences in SFOR that are not explained by the ability to use quantitative relations. A measure of children's conceptual knowledge of the magnitude representations of rational numbers and the density of rational numbers is utilized to capture the process of conceptual change with rational numbers in late primary school students. Finally, SFOR tendency was examined in relation to the development of rational number conceptual knowledge in these students. Study I concerned the first attempts to measure individual differences in children's spontaneous recognition and use of quantitative relations in 86 Finnish children from the ages of five to seven years. Results revealed that there were substantial inter-individual differences in the spontaneous recognition and use of quantitative relations in these tasks. This was particularly true for the oldest group of participants, who were in grade one (roughly seven years old). However, the study did not control for the ability to solve the tasks using quantitative relations, so it was not clear whether these differences were due to ability or to SFOR. Study II investigated more deeply the nature of the two tasks reported in Study I, through the use of a stimulated-recall procedure examining children's verbalizations of how they interpreted the tasks. Results reveal that participants were able to verbalize reasoning about their quantitative relational responses, but not about their responses based on exact number. Furthermore, participants' non-mathematical responses revealed a variety of other aspects, beyond quantitative relations and exact number, on which participants focused in completing the tasks. These results suggest that exact number may be more easily perceived than quantitative relations. These tasks were also revealed to contain both mathematical and non-mathematical aspects which the participants interpreted as relevant. Study III investigated individual differences in SFOR in 84 children, aged five to nine, from the US and is the first to report on the connection between SFOR and other mathematical abilities. The cross-sectional data revealed that there were individual differences in SFOR. Importantly, these differences were not entirely explained by the ability to solve the tasks using quantitative relations, suggesting that SFOR is partially independent of the ability to use quantitative relations. In other words, the lack of use of quantitative relations on the SFOR tasks was not solely due to participants being unable to solve the tasks using quantitative relations, but due to a lack of spontaneous attention to the quantitative relations in the tasks.
Furthermore, SFOR tendency was found to be related to arithmetic fluency among these participants. This is the first evidence to suggest that SFOR may be a partially distinct aspect of children's existing mathematical competences. Study IV presented a follow-up study of the first graders who participated in Studies I and II, examining SFOR tendency as a predictor of their conceptual knowledge of fraction magnitudes in fourth grade. Results revealed that first graders' SFOR tendency was a unique predictor of fraction conceptual knowledge in fourth grade, even after controlling for general mathematical skills. These results are the first to suggest that SFOR tendency may play a role in the development of rational number conceptual knowledge. Study V presents a longitudinal study of the development of 263 Finnish students' rational number conceptual knowledge over a one-year period. During this time participants completed a measure of conceptual knowledge of magnitude representations and the density of rational numbers at three time points. First, a Latent Profile Analysis indicated that a four-class model, differentiating between participants with high magnitude comparison and density knowledge, was the most appropriate. A Latent Transition Analysis revealed that few students displayed sustained conceptual change with density concepts, though conceptual change with magnitude representations was present in this group. Overall, this study indicated that there were severe deficiencies in conceptual knowledge of rational numbers, especially concepts of density. The longitudinal Study VI presented a synthesis of the previous studies in order to detail specifically the role of SFOR tendency in the development of rational number conceptual knowledge. Thus, the same participants from Study V completed a measure of SFOR, along with the rational number test, including a fourth time point. Results revealed that SFOR tendency was a predictor of rational number conceptual knowledge after two school years, even after taking into consideration prior rational number knowledge (through the use of residualized SFOR scores), arithmetic fluency, and non-verbal intelligence. Furthermore, those participants with higher-than-expected SFOR scores improved significantly more on magnitude representation and density concepts over the four time points. These results indicate that SFOR tendency is a strong predictor of rational number conceptual development in late primary school children. The results of the six studies reveal that within children's existing mathematical competences a spontaneous quantitative focusing tendency, named spontaneous focusing on quantitative relations, can be identified. Furthermore, this tendency is found to play a role in the development of rational number conceptual knowledge in primary school children. Results suggest that conceptual change with the magnitude representations and density of rational numbers is rare among this group of students. However, those children who are more likely to notice and use quantitative relations in situations that are not explicitly mathematical seem to have an advantage in the development of rational number conceptual knowledge. It may be that these students gain quantitatively more and qualitatively better self-initiated deliberate practice with quantitative relations in everyday situations due to an increased SFOR tendency. This suggests that it may be important to promote this type of mathematical activity in teaching rational numbers.
Furthermore, these results suggest that there may be a series of spontaneous quantitative focusing tendencies that have an impact on mathematical development throughout the learning trajectory.
Abstract:
Computer Supported Collaborative Learning (CSCL) is a widely adopted teaching and learning approach. However, some problems can still be found when CSCL takes place. Studies show that using game-like mechanics can increase motivation and engagement, as well as modelling the behavior of players. Gamification is a rapidly growing trend that applies the same mechanics: it refers to the use of game design elements in non-game contexts. This thesis combines the gamification concept with computer-supported collaborative learning in the field of software engineering education. Finally, a gamified prototype system is designed.