855 results for Rietveld refinement
Abstract:
My study examines the educational thought of J. A. Hollo (1885–1967), centred on Bildung-oriented education (sivistyskasvatus). Hollo was a versatile cultural figure who worked as a critic, author, translator, and educational theorist. Alongside J. V. Snellman, he can be regarded as one of the most significant Finnish educational thinkers, yet no doctoral-level study of his educational thought has previously been undertaken. My research questions are the following: 1. What is Hollo's view of education, the world of education, and the theory of education? 2. What is Hollo's conception of the significance of the educator and the educatee in the educational event? 3. What elements belong to Bildung-oriented education, that is, to education understood as guiding growth? The study is a work of philosophy of education. My research method is systematic analysis, and my approach is hermeneutic. The primary sources are Hollo's writings on education, the most important of which are Mielikuvitus ja sen kasvattaminen I–II (1918, 1919), Kasvatuksen maailma (1927), Kasvatuksen teoria (1927), and Itsekasvatus ja elämisen taito (1931). According to Hollo, the world of education is a relatively autonomous form of life (Lebensform) with an ontological character of its own, i.e., sui generis. The study of education should not be reduced to psychology or philosophy, because it would thereby lose its scientific autonomy. For Hollo, the theory of education is a theory for the sake of practice. In constructing a theory of education, one must take into account the holistic perspective characteristic of the world of education and its aim of serving life. Educating is always also ethical activity, and the goal of education is the good life. According to Hollo, the educator's task is to create a coherent cultural foundation for the educatee. This can happen only through broad Bildung-oriented education, whose backbone is the humanist tradition of classical antiquity. Such education comprises intellectual, ethical, religious, aesthetic, and practical education.
With the help of imagination, the educator can unite the domains of education into a coherent whole; without imagination, phenomena would remain fragmented, mutually separate pieces in the human mind. The teacher's personality is a significant factor in education, and it should be taken into account in the selection decisions of teacher education, that is, in the education of educators. It is important for the teacher-educator to study the humanities broadly, because education is concerned with the human being. Above all, the educator's ethical and aesthetic capacities should be trained. In this way the educator learns to use imagination in the educational event so as to become an educationally perceptive guide of growth, one who understands what calls for special attention in each situation. The study shows that Hollo's human-scientific, phenomenological-hermeneutic view of education is a counter-paradigm not only to empirical educational science but also to the present-day technical-economic ethos, which on the one hand threatens to instrumentalize education and on the other to scientize educational research in the wrong way. As a result, the study of education and its questions risk being pushed to the margins of educational discussion, or even disappearing altogether. The danger for education and educational research is their excessive harnessing as an extension of economic production, which hinders the realization of humanity. At the end of the study I present my view of an ideal school, based in part on Hollo's conception of education. Hollo's view remains topical and is a significant contribution to the discussion of education, its theory, and its practice.
Abstract:
Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support developing such systems efficiently; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about network system development is growing; at the same time, the mathematical background necessary for using formal techniques is a barrier that prevents network designers from employing them efficiently. Thus, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems while taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as reusable network development. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, i.e., wireless sensor network architectures, as well as to a pervasive class of static networks, i.e., network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (UPPAAL SMC).
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied, which avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated into real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction use and in a live augmented reality application.
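The photometric cost minimization described above can be illustrated with a toy 1-D sketch: estimating a sub-pixel shift between two intensity signals by Gauss-Newton on raw-pixel residuals. This is a hypothetical miniature of the cost structure only, not the thesis implementation, which handles full 6-DoF pose, perspective projection, and lens distortion.

```python
import math

def sample(signal, x):
    """Linearly interpolate signal at real-valued x, clamping at the borders."""
    if x <= 0.0:
        return signal[0]
    if x >= len(signal) - 1:
        return signal[-1]
    i = int(x)
    f = x - i
    return (1.0 - f) * signal[i] + f * signal[i + 1]

def estimate_shift(ref, moved, iters=20):
    """Gauss-Newton on the photometric cost E(t) = sum_x (moved(x+t) - ref(x))^2."""
    t = 0.0
    for _ in range(iters):
        jtj = 0.0
        jtr = 0.0
        for x in range(len(ref)):
            r = sample(moved, x + t) - ref[x]        # raw-pixel residual
            g = (sample(moved, x + t + 0.5)
                 - sample(moved, x + t - 0.5))       # central intensity gradient
            jtj += g * g
            jtr += g * r
        if jtj == 0.0:
            break
        t -= jtr / jtj                               # Gauss-Newton update
    return t

# Synthetic smooth signal and a copy shifted by 1.5 samples.
ref = [math.exp(-((x - 10.0) ** 2) / 8.0) for x in range(21)]
moved = [math.exp(-((x - 11.5) ** 2) / 8.0) for x in range(21)]
t_hat = estimate_shift(ref, moved)   # converges near 1.5
```

Because the measurements enter the cost unmodified, the estimator minimizes sensor error directly, which is the defining property of direct methods summarized in the abstract.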
Abstract:
In recent decades, human activities have continually increased greenhouse gas emissions into the atmosphere, with a direct impact on global climate warming. As a European Union member, Finland has developed a national action plan to promote renewable energy generation, pursuing the objectives of Directive 2009/28/EC, and has made the plan publicly available. Finland is enhancing its national security of energy supply by increasing the diversity of its energy mix. There are significant plans to develop onshore and offshore wind energy generation in the country over the next few decades, alongside other renewable energy sources. To anticipate future changes, many scenario methods have been developed and adapted to the energy industry. This Master's thesis explored the fuzzy cognitive map (FCM) approach to scenario development, which captures expert knowledge in graphical form and uses these captures to test and refine raw scenarios. The prospects of Finnish wind energy development up to the year 2030 were considered with the aid of the FCM technique. Five positive raw scenarios were developed, and three of them were tested against an integrated expert knowledge map using graphical simulation. As an outcome, the study derives robust scenarios from those defined in advance, based on the simulation results. The thesis was conducted in such a way that the existing knowledge captured from the expert panel can be reused to test and deploy different sets of scenarios for Finnish wind energy development.
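The FCM simulation step behind such scenario testing can be sketched in a few lines: concept activations are repeatedly pushed through the weighted causal links and a squashing function until the map settles. The concepts and weights below are hypothetical illustrations, not the expert map from the thesis.

```python
import math

def sigmoid(x, lam=1.0):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(state, weights, lam=1.0):
    """One synchronous FCM update: A_i <- f(A_i + sum_j w[j][i] * A_j)."""
    n = len(state)
    return [
        sigmoid(state[i] + sum(weights[j][i] * state[j]
                               for j in range(n) if j != i), lam)
        for i in range(n)
    ]

def simulate(state, weights, steps=100, tol=1e-5):
    """Iterate the map until activations settle or the step budget runs out."""
    for _ in range(steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

# Hypothetical concepts: C0 policy support, C1 investment, C2 installed capacity.
W = [
    [0.0, 0.6, 0.3],  # policy support strengthens investment and capacity
    [0.0, 0.0, 0.7],  # investment strengthens installed capacity
    [0.2, 0.0, 0.0],  # capacity feeds back weakly into policy support
]
final = simulate([0.8, 0.5, 0.2], W)
```

Testing a raw scenario then amounts to clamping some concepts to the values the scenario asserts, simulating, and checking whether the settled activations of the remaining concepts are consistent with the scenario's storyline.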
Abstract:
Increasing the utilization of renewable energy is a challenge that is being tackled in different ways. Among the most promising options for renewable energy are various biomasses, and the bioenergy field offers numerous emerging business opportunities. The actors in the field rarely have all the know-how and resources needed for exploiting these opportunities, and thus it is reasonable to seize them in cooperation. Networking is not an easy task to carry out, however, and in addition to its advantages for the firms engaged, it sets numerous challenges as well. The development of a network is the result of several steps firms need to take. In order to gain optimal advantage from their networks, firms need to weigh with whom, why and how they should cooperate. In addition, not everything depends on the firms themselves, as several factors in the external environment set their own enablers and barriers for cooperation. The formation of a network around a business opportunity is thus a multiphase process. The objective of this thesis is to depict this process via a step-by-step analysis and thus increase understanding of the whole development path from an entrepreneurial opportunity to a successful business network. The empirical evidence has been gathered by discussing the opportunities of refining animal manure into biogas and utilizing forest biomass for heating in Finland. The thesis comprises two parts: the first part provides an overview of the study, and the second part includes five research publications. The results reveal that it is essential to identify and analyze all the steps in the development process of a network, and several frameworks are used in the thesis to analyze these steps. The frameworks combine theoretical views with the practical experiences of the empirical study, and thus give new multifaceted views to the discussion on SME networking.
The results indicate that the ground for cooperation should be investigated adequately by taking account of the preconditions in all three contexts in which the actors operate: the social context, the region and the institutional environment. If the project advances to exploitation, the assets and objectives of the actors should be matched, which creates a need for relationships and sub-networks differing in breadth and depth. Different relationships and networks require different kinds of maintenance and management. Moreover, the actors should have the capability to change the formality or strategy of the relationships if needed. The drivers for these changes come along with the changing environment, which causes changes in the objectives of the actors and, in this way, in the whole network. Bioenergy, as the empirical field of the study, well represents an industrial field with many emerging opportunities, a motley group of actors, and sensitivity to fast changes.
Abstract:
Nowadays, computer-based systems tend to become more complex and to control increasingly critical functions affecting different areas of human activity. Failures of such systems may result in the loss of human lives as well as significant damage to the environment; therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, various industrial standards prescribe the use of rigorous techniques for the development and verification of such systems: the more critical the system, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on it should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements can be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases.
This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
Abstract:
We investigated the level of expression of neuronal nitric oxide synthase (nNOS) in the retinorecipient layers of the rat superior colliculus during early postnatal development. Male and female Lister rats ranging in age between the day of birth (P0) and the fourth postnatal week were used in the present study. Two biochemical methods were used, i.e., in vitro measurement of NOS specific activity by the conversion of [³H]-arginine to [³H]-citrulline, and analysis of Western blotting immunoreactive bands from superior colliculus homogenates. As revealed by Western blotting, very weak immunoreactive bands were observed as early as P0-2, and their intensity increased progressively at least until P21. The analysis of specific activity of NOS showed similar results. There was a progressive increase in enzymatic activity until near the end of the second postnatal week, and a nonsignificant tendency to an increase until the end of the third week was also observed. Thus, these results indicated an increase in the amount of nNOS during the first weeks after birth. Our results confirm and extend previous reports using histochemistry for NADPH-diaphorase and immunocytochemistry for nNOS, which showed a progressive increase in the number of stained cells in the superficial layers during the first two postnatal weeks, reaching an adult pattern at the end of the third week. Furthermore, our results suggested that nNOS is present in an active form in the rat superior colliculus during the period of refinement of the retinocollicular pathway.
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications, and with their computational power these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to the postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
The need for industries to remain competitive in the welding business has created the necessity to develop innovative processes that can exceed customers' demands. Significant developments in improving weld efficiency during the past decades still have drawbacks, specifically in weld strength properties. Recent innovative technologies have produced the smallest possible solid materials, known as nanomaterials, and their introduction into welding production has improved weld strength properties and helped overcome unstable microstructures in the weld. This study uses a qualitative research method to elaborate the methods of introducing nanomaterials into weldments and the characteristics of welds produced by different welding processes. The study focuses mainly on changes in microstructural formation and strength properties of the welded joint, and also discusses the factors behind these improvements resulting from the addition of nanomaterials. The addition of nanomaterials modifies the physics of the joining region, thereby yielding significant improvement in strength properties together with a stable microstructure in the weld. Nanomaterials are introduced into welding processes by coating the base metal, by addition to the filler metal, or by using a nanostructured base metal. However, owing to their minute size, adding nanomaterials directly to the weld poses complications. The factors with the greatest influence on joint integrity are the dispersion, characteristics, quantity and selection of the nanomaterials. The addition of nanomaterials does not affect the fundamental properties and characteristics of the base metal and filler metal; in some cases, however, nanomaterial additions lead to deterioration of joint properties through unstable microstructural formations. Research is still ongoing to achieve high joint integrity in various materials through different welding processes, and on other factors that influence joint strength.
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error-detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Owing to the variety of properties to be analysed, such a framework should have an integrated nature: to ensure functional correctness it should rely on formal modelling and verification, while to assess the impact of changes on such properties as performance and reliability it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness.
Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support: the Rodin platform. Proof-based verification, together with the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, achieving resilience also requires analysing a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with the integration of such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud domain.
Abstract:
In much of the previous research in the field of interactive storytelling, the focus has been on the creation of complete systems, which are then evaluated based on user experience. Less focus has been placed on finding general solutions to problems that manifest in many different types of interactive storytelling systems. The goal of this thesis was to identify candidate metrics that a system could use to predict player behaviour or how players experience the story they are presented with, and to put these metrics to an empirical test. The three metrics used were morality, relationships and conflict. The game used for user testing of the metrics, Regicide, is an interactive storytelling experience created in conjunction with Eero Itkonen. Data collected through user testing, in the form of internal system data and survey answers, were used to evaluate hypotheses for each metric. Of the three chosen metrics, morality performed the best in this study. Though further research and refinement may be required, the results were promising and point to the conclusion that user responses to questions of morality are a strong predictor of their choices in similar situations later in the course of an interactive story. A similar examination of user relationships with other characters in the story did not produce promising results, but several methodological problems were recognized, and further research with a better-optimized system may yield different results. On the subject of conflict, several aspects proposed by Ware et al. (2012) were evaluated separately; the results were inconclusive, with the aspect of directness showing the most promise.
Abstract:
The steadily growing share of distributed generation in the overall electrical power system is currently a worldwide tendency, driven by several factors: the difficulty of reinforcing and maintaining the distribution networks of large cities; widening environmental concerns, which promote both energy-efficiency measures and the installation of renewables-based generation, which is inherently distributed; increased power quality and reliability needs; and progress in information technology, which makes it feasible to harmonize the needs and interests of generators and consumers of different energy types. At this stage, the volume of system-interconnected distributed generation has reached a level where it broadly affects system operation under emergency and post-emergency conditions in several EU countries. The previously applicable approach of preventively tripping such units in case of a fault, protecting the generating equipment from damage and the relay protection and automation from maloperation, is therefore no longer acceptable. In addition, the withstand capability and transient electromechanical stability of generating technologies interconnected in the proximity of load nodes have improved significantly since low-voltage ride-through (LVRT) requirements, followed by supporting techniques, were introduced in grid codes. Both aspects mean that relay protection and auto-reclosing must operate in the presence of distributed generation that is generally connected after the grid planning and construction phases. This thesis proposes solutions to the emerging need to ensure correct operation of the equipment in question with the least possible grid reinforcement, treated separately for every type of distributed generation technology that has reached technical maturity to date and for the network's protection.
New generating technologies are represented by equivalent models in the calculation of the initial steady-state short-circuit current used to dimension current-sensing relay protection, following widely adopted short-circuit calculation practices such as IEC 60909 and VDE 0102. The phenomenon of unintentional islanding, which affects auto-reclosing, is addressed; the protection schemes used to eliminate a sustained island are listed and characterized by reliability- and implementation-related factors, and they also form a crucial part of realizing the proposed measures for relieving protection operation.
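For orientation, the IEC 60909 initial symmetrical short-circuit current for a three-phase fault is I_k'' = c·U_n / (√3·|Z_k|). The sketch below evaluates it for a made-up medium-voltage case; a real study also needs impedance correction factors and the near-to-/far-from-generator distinction the standard defines.

```python
import math

def initial_sc_current(u_n, z_k, c=1.1):
    """I_k'' in amperes: u_n is the line-to-line voltage in volts, z_k the
    complex short-circuit impedance in ohms, and c the IEC 60909 voltage
    factor (1.1 for maximum currents at medium voltage)."""
    return c * u_n / (math.sqrt(3) * abs(z_k))

# Hypothetical 20 kV feeder with Z_k = 0.5 + j2.0 ohm at the fault location.
ik = initial_sc_current(20e3, complex(0.5, 2.0))   # roughly 6.2 kA
```

Equivalent source models for the new generating technologies would enter such a calculation through their contribution to Z_k (or as additional current sources), which is exactly why their faithful representation matters for dimensioning current-sensing protection.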
Abstract:
The "Java Intelligent Tutoring System" (JITS) research project focused on designing, constructing, and determining the effectiveness of an Intelligent Tutoring System for beginner Java programming students at the postsecondary level. The participants in this research were students in the School of Applied Computing and Engineering Sciences at Sheridan College. This research involved consistently gathering input from students and instructors using JITS as it developed. A cyclic process of designing, developing, testing, and refinement was used for the construction of JITS to ensure that it adequately meets the needs of students and instructors. The second objective of this dissertation was to determine the effectiveness of learning within this environment. The main findings indicate that JITS is a richly interactive ITS that engages students on Java programming problems. JITS is equipped with a sophisticated personalized feedback mechanism that models and supports each student in his/her learning style. The assessment component involved two main quantitative experiments to determine the effectiveness of JITS in terms of student performance. In both experiments a statistically significant difference was found between the control group and the experimental group (i.e., the JITS group). The main effect for Test (i.e., pre- and posttest), F(1, 35) = 119.43, p < .001, was qualified by a Test by Group interaction, F(1, 35) = 4.98, p < .05, and a Test by Time interaction, F(1, 35) = 43.82, p < .001. Similar findings were obtained for the second experiment: the Test by Group interaction revealed F(1, 92) = 5.36, p < .025. In both experiments the JITS groups outperformed the corresponding control groups at posttest.
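As a reminder of how such F-statistics are assembled from between- and within-group variance, here is a minimal one-way ANOVA sketch on hypothetical gain scores. The study itself used a mixed (Test × Group) design, which this simpler between-groups case only approximates; the numbers below are invented for illustration.

```python
def f_statistic(groups):
    """One-way ANOVA F = MS_between / MS_within for a list of sample lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical pre-to-post gain scores for a control and a tutored group.
control = [2.0, 3.0, 1.0, 2.5, 1.5]
tutored = [5.0, 6.5, 4.5, 6.0, 5.5]
f = f_statistic([control, tutored])   # 49.0 for these made-up numbers
```

A large F simply says that the spread between the group means dwarfs the spread inside the groups; significance is then read off the F-distribution with (k − 1, n − k) degrees of freedom.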
Abstract:
The new Physiotherapy and Occupational Therapy programmes, based in the Faculty of Health Sciences, McMaster University (Hamilton, Ontario), are unique. The teaching and learning philosophies utilized are based on learner-centred and self-directed learning theories. The 1991 admissions process of these programmes attempted to select individuals who would make highly qualified professionals and who would have the necessary skills to complete such unique programmes. In order to: 1. learn more about the concept of self-directed learning and its related characteristics in health care professionals; 2. examine the relationship between various student characteristics (personal, learner, and those assessed during the admissions process) and final course grades; and 3. determine which, if any, student characteristics could be considered predictors of success in learner-centred programmes requiring self-directed learning skills, a correlational research design was developed and carried out. Thirty Occupational Therapy and thirty Physiotherapy students were asked to complete two instruments: a questionnaire developed by the author and the Oddi Continuing Learning Inventory (Oddi, 1986). Course grades and ratings of students during the admissions process were also obtained. Both questionnaires were examined for reliability, and factor analyses were conducted to determine construct validity. Data obtained from the questionnaires, course grades and student ratings (from the admissions process) were analyzed and compared using the contingency coefficient, Pearson's product-moment correlation coefficient, and the multiple regression analysis model.
The research findings demonstrated a positive relationship (as identified by contingency coefficient or Pearson r values) between various course grades and the following personal and learner characteristics: field of study of the highest level of education achieved, level of education achieved, sex, marital status, motivation for completing the programmes, reasons for enrolling in the programmes, decision to enrol in the programmes, employment history, preferred learning style, strong self-concept, and the identification of various components of the concept of self-directed learning. In most cases, the relationships were significant at the 0.01 or 0.001 levels. Results of the multiple regression analyses demonstrated that several learner and admissions characteristic variables had R² values that accounted for the largest proportion of the variance in several dependent variables; thus, these variables could be considered predictors of success. The learner characteristics included level of education and strong self-concept. The admissions characteristics included the ability to evaluate strengths, the ability to give feedback, curiosity and creativity, and communication skills. It is recommended that research continue to be conducted to substantiate the relationships found between course grades and characteristic variables in more diverse populations. "Success in self-directed programmes" from the learner's perspective should also be investigated. The Oddi Continuing Learning Inventory should continue to be researched; further research may lead to refinement or further development of the instrument, and may provide further insight into self-directed learner attributes. The concept of self-directed learning continues to be incorporated into educational programmes, and thus should continue to be explored.