999 results for Scenario Programming, Markup Languages, 3D Virtualworlds
Abstract:
Software integration is a stage in the software development process in which separate components are assembled into a single product. It is important to manage the risks involved and to be able to integrate smoothly, because software cannot be released without first integrating it. Furthermore, it has been shown that the integration and testing phase can account for up to 40% of the overall project costs. These issues can be mitigated by a software engineering practice called continuous integration. This thesis work presents how continuous integration was introduced into the author's employer organisation. This includes studying how the continuous integration process works and creating the technical basis for using the process on future projects. The implemented system supports software written in the C and C++ programming languages on the Linux platform, but the general concepts can be applied to any programming language and platform by selecting the appropriate tools. The results demonstrate in detail which issues need to be solved when the process is adopted in a corporate environment. Additionally, they provide an implementation and a process description suitable for the organisation. The results show that continuous integration can reduce the risks involved in a software process and increase the quality of the product as well.
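As an illustration of the core idea, a continuous integration server essentially runs a fixed sequence of build stages and rejects the change on the first failure. The sketch below is a minimal, hypothetical pipeline runner; the stage names and commands are placeholders, not the thesis's actual tooling:

```python
import subprocess

def run_pipeline(steps):
    """Run each (name, command) stage in order; stop at the first failure,
    mirroring a CI build that rejects a broken commit."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"FAILED at {name}"
    return "SUCCESS"

# Hypothetical stages for a C/C++ project on Linux; the commands are
# placeholders ("true" always succeeds) standing in for real build steps.
stages = [
    ("configure", ["true"]),   # e.g. cmake ..
    ("build",     ["true"]),   # e.g. make -j4
    ("test",      ["true"]),   # e.g. ctest
]
print(run_pipeline(stages))  # SUCCESS when every stage exits with 0
```

The essential property is fail-fast feedback: a broken stage stops the pipeline immediately, so the offending change is identified while it is still small.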
Abstract:
This article addresses the application of basic competences in the Primary Education curriculum. Its aim is to offer strategies to help teachers integrate basic competences into their programming and assessment methods. To this end, and to anticipate possible difficulties in implementing basic competences, the first part of the article analyses the current situation through a reading of various legal documents in force. The second part then provides tools to facilitate this integration from the areas of language and mathematics. We take this approach from the didactics of language and mathematics because of their instrumental character for the acquisition of other knowledge.
Abstract:
We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low-light illumination conditions. Since photon-counting images are very sparse, the calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, the polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon-starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging.
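For reference, the degree of polarization is computed from the Stokes parameters as DoP = sqrt(S1² + S2² + S3²)/S0, and the division is exactly where sparse photon-counting data needs care, since S0 (the total intensity) can be zero. A minimal sketch, with the zero-count convention as an assumption:

```python
import math

def degree_of_polarization(s0, s1, s2, s3):
    """Degree of polarization from the Stokes parameters.
    With photon-starved data S0 can be zero, so guard against division by it."""
    if s0 <= 0:
        return 0.0  # no detected photons: DoP is undefined; report 0 by convention
    return math.sqrt(s1**2 + s2**2 + s3**2) / s0

print(degree_of_polarization(1.0, 0.6, 0.8, 0.0))  # 1.0 (fully polarized)
print(degree_of_polarization(1.0, 0.0, 0.0, 0.0))  # 0.0 (unpolarized)
```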
Abstract:
Alzheimer's disease (AD) is considered the main cause of cognitive decline in adults. The available therapies for AD treatment seek to maintain the activity of the cholinergic system through inhibition of the enzyme acetylcholinesterase. However, butyrylcholinesterase (BuChE) can be considered an alternative target for AD treatment. Aiming to develop new BuChE inhibitors, we built robust 3D QSAR models with high predictive power. The best model presents a good fit (r²=0.82, q²=0.76, with two PCs) and high predictive power (r²predict=0.88). Analysis of the regression vector shows that steric properties are of considerable importance for the inhibition of BuChE.
Abstract:
Total spectrofluorimetry combined with Principal Component Analysis (PCA) was used to classify samples of diesel oil, biodiesel, vegetable oil and residual oil into different groups, as well as to identify the addition of non-transesterified residual vegetable oil, instead of biodiesel, to diesel oil. Using this method, the samples of diesel oil, mixtures of biodiesel in diesel and mixtures of residual oil in diesel were separated into well-defined groups.
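As a rough illustration of the chemometric step, here is a minimal PCA sketch on synthetic data. The real study used full fluorescence spectra; the group means, noise level and NumPy-based SVD implementation are illustrative assumptions:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto their first principal components via SVD."""
    Xc = X - X.mean(axis=0)          # mean-center each spectral variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # scores in the reduced space

# Two synthetic "spectral" groups differing along one direction
# (standing in for e.g. diesel vs. residual-oil-in-diesel samples).
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.1, (5, 4))
group_b = rng.normal(1.0, 0.1, (5, 4))
scores = pca_scores(np.vstack([group_a, group_b]))
# The first principal component separates the two groups: their scores
# fall on opposite sides of zero, giving the "well-defined groups".
```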
Abstract:
Imide compounds have shown biological activity, and they can be easily synthesized in good yields. The objective of this paper was the rational design of imides and sulfonamides with antinociceptive activity using the 3D-QSAR/CoMFA approach. The studies were performed using two data sets: the first consisted of 39 cyclic imides, while the second consisted of 39 imides and 15 sulfonamides. The 3D-QSAR/CoMFA models have shown that the steric effect is important for the antinociceptive activity of imide and sulfonamide compounds. Ten new compounds with improved potential antinociceptive activity have been proposed by de novo design LeapFrog simulations.
Abstract:
Solid-state silicon detectors have replaced conventional ones in almost all recent high-energy physics experiments. Pixel silicon sensors have no alternative in the area near the interaction point because of their high resolution and fast operation speed. However, present detectors hardly withstand high radiation doses. The forthcoming upgrade of the LHC in 2014 requires the development of a new generation of pixel detectors able to operate under a tenfold increase in luminosity. The planar fabrication technique has physical limitations: improving the radiation hardness reduces the sensitivity of the detector. In this respect, a 3D pixel detector seems to be the most promising device for overcoming these difficulties. The objective of this work was to model the structure of a 3D stripixel detector and to simulate the electrical characteristics of the device. The Silvaco Atlas software was used for these purposes. The structures of single- and double-sided dual-column detectors with active edges were described using its command language. Simulations of these detectors have shown that the electric field inside the active area is distributed more uniformly than in the planar structure. A smaller interelectrode spacing leads to a stronger field and also decreases the collection time, which makes the new type of detector more radiation resistant. Other advantages found are a lower full depletion voltage and an increased charge collection efficiency. The 3D stripixel detectors have thus demonstrated improved characteristics and will be a suitable replacement for planar ones.
Abstract:
A novel heteronuclear 3d-4f compound with the formula NdCu3L3·13H2O (where H3L is the Schiff base derived from 5-bromosalicylaldehyde and glycylglycine, and L³⁻ = C11H8N2O4Br) was obtained. It was characterized by elemental and thermal analyses and magnetic measurements. The Cu(II)-Nd(III) compound is stable up to 323 K. During the dehydration process the water molecules are lost in two stages. The magnetic susceptibility data for this complex change with temperature according to the Curie-Weiss law with θ = −35 K. The magnetic moment decreases from 5.00 µB at 303 K to 4.38 µB at 76 K.
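The reported Curie-Weiss behaviour can be sanity-checked numerically. In the sketch below, θ = −35 K is taken from the abstract, while the Curie constant C is a back-fitted assumption chosen so that μ_eff(303 K) ≈ 5.00 µB; with that C, the predicted moment at 76 K comes out close to the reported 4.38 µB:

```python
import math

def mu_eff(T, C, theta):
    """Effective magnetic moment (in Bohr magnetons) for a paramagnet
    obeying the Curie-Weiss law chi = C / (T - theta), using
    mu_eff = 2.828 * sqrt(chi * T)."""
    chi = C / (T - theta)
    return 2.828 * math.sqrt(chi * T)

theta = -35.0  # Weiss constant reported in the abstract, in K
C = 3.49       # assumed Curie constant, back-fitted to mu_eff(303 K) = 5.00
print(round(mu_eff(303, C, theta), 2))  # ≈ 5.00
print(round(mu_eff(76,  C, theta), 2))  # ≈ 4.37, close to the reported 4.38
```

The decrease of μ_eff on cooling is consistent with the negative θ, i.e. with antiferromagnetic-type deviations from pure Curie behaviour.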
Abstract:
The skill of programming is a key asset for every computer science student. Many studies have shown that this is a hard skill to learn and that the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to support the learning process during the last few decades. The studies conducted in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies on what effects the introduction and usage of these tools have on students' opinions and performance, and what the implications are from a teacher's point of view. The results show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tools motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and that one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. These kinds of tools can help us cope with the fact that many programming courses are overcrowded while teaching resources are limited. Such tools allow us to tackle this problem by utilizing automatic assessment for exercises that are best suited to the web (such as tracing and simulation), since this supports students' independent learning regardless of time and place.
In summary, we can use our courses' resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even increase the performance of the students. This thesis also offers methodological results that contribute to developing insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation used by the tool. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Secondly, we have relevant experience in conducting teaching-related experiments, and thus we can help our colleagues learn the essential know-how of research-based improvement of their teaching. This approach can transform academic teaching into publications, significantly increase the adoption of new tools and techniques, and overall increase the knowledge of best practices. In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time easily conduct multi-national research projects.
Abstract:
The age-old adage goes that nothing in this world lasts but change, and this generation has indeed seen unprecedented changes. Business managers do not have the luxury of going with the flow: they have to plan ahead and devise strategies that will meet the changing conditions, however stormy the weather seems to be. This demand raises the question of whether there is something a manager or planner can do to circumvent the eye of the storm in the future. Intuitively, one can either run the risk of something happening without preparing, or one can try to prepare. Preparing by planning for each eventuality and contingency would be impractical and prohibitively expensive, so one needs to develop foreknowledge, or foresight, past the horizon of the present and the immediate future. The research mission of this study is to support strategic technology management by designing an effective and efficient scenario method that induces foresight in practicing managers. The design science framework guides this study in developing and evaluating the IDEAS method. The IDEAS method is an electronically mediated scenario method that is specifically designed to be effective and accessible. The design is based on the state of the art in scenario planning, and the product is a technology-based artifact that solves the foresight problem. This study demonstrates the utility, quality and efficacy of the artifact through a multi-method empirical evaluation, first by experimental testing and then through two case studies. The construction of the artifact is rigorously documented, both as justification knowledge and as principles of form and function on the general level, and later through the description and evaluation of instantiations. The design thus contributes both to practice and to the foundations of design.
The IDEAS method contributes to the state of the art in scenario planning by offering a lightweight and intuitive scenario method for resource-constrained applications. Additionally, the study contributes to the foundations and methods of design by forging a clear design science framework that is followed rigorously. To summarize, the IDEAS method is offered for strategic technology management, with a confident belief that it will enable gaining foresight and aid its users in choosing trajectories past the gales of creative destruction and off to a brighter future.
Abstract:
Western societies have been faced with the fact that overweight, impaired glucose regulation and elevated blood pressure are already prevalent in pediatric populations. This will inevitably mean an increase in later manifestations of cardio-metabolic diseases. The dilemma has been suggested to stem from fetal life, and it is surmised that the early nutritional environment plays an important role in the process called programming. The aim of the present study was to characterize the early nutritional determinants associated with cardio-metabolic risk factors in fetuses, infants and children. Further, the study was designed to establish whether dietary counseling initiated in early pregnancy can modify this cascade. Healthy mother-child pairs (n=256) participating in a dietary intervention study were followed from early pregnancy to childhood. The intervention included detailed dietary counseling by a nutritionist targeting saturated fat intake in excess of recommendations and fiber consumption below recommendations. Cardio-metabolic programming was studied by characterizing the offspring's cardio-metabolic risk factors, such as over-activation of the autonomic nervous system, elevated blood pressure and an adverse metabolic status (e.g. a high serum split proinsulin concentration). Fetal cardiac sympathovagal activation was measured during labor. Postnatally, the children's blood pressure was measured at six-month and four-year follow-up visits. Further, the infants' metabolic status was assessed by means of growth and serum biomarkers (32-33 split proinsulin, leptin and adiponectin) at the age of six months. This study showed that fetal cardiac sympathovagal activity was positively associated with maternal pre-pregnancy body mass index, indicating adverse cardio-metabolic programming in the offspring.
Further, a reduced risk of high split proinsulin in infancy and lower blood pressure in childhood were found in those offspring whose mothers' weight gain and dietary amount and type of fats during pregnancy were as recommended. Of note, maternal dietary counseling from early pregnancy onwards could ameliorate the offspring's metabolic status by reducing the risk of a high split proinsulin concentration, although it had no effect on the other cardio-metabolic markers in the offspring. In the postnatal period, breastfeeding proved to entail benefits in cardio-metabolic programming. Finally, the recommended dietary protein and total fat content in the child's diet were important nutritional determinants reducing blood pressure at the age of four years. The intrauterine and immediate postnatal periods comprise a window of opportunity for interventions aiming to reduce the risk of cardio-metabolic disorders, and bring the prospect of achieving health benefits within one generation.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program, adding code in steps and proving after each addition that the code is consistent with the invariants. In this way the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, the automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
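The flavour of invariant-based programming can be imitated at run time: state the loop invariant first, then check after every step that the code preserves it. The sketch below uses Python assertions as a stand-in for the machine-checked proofs a tool like Socos produces, and echoes the verified sorting algorithm the abstract mentions:

```python
def insertion_sort(xs):
    """Insertion sort with its loop invariant checked at run time.
    In invariant-based programming the invariant is written first and the
    code is proven to maintain it; here assert statements merely check it
    dynamically instead of proving it for all inputs."""
    a = list(xs)
    for i in range(1, len(a)):
        # Invariant: the prefix a[0..i) is sorted.
        assert all(a[k] <= a[k + 1] for k in range(i - 1))
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]   # shift larger elements right
            j -= 1
        a[j + 1] = x          # insert x; the prefix a[0..i+1) is now sorted
    # Postcondition: the whole array is sorted.
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    return a

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```

The difference from the thesis's approach is, of course, that a verification tool discharges these conditions once and for all via a theorem prover, whereas asserts only test them on the inputs actually run.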
Abstract:
This Master's thesis focuses on cloud application development for the Google App Engine cloud platform based on a six-phase waterfall model, and on examining the possibilities and limitations that the Google App Engine platform offers for application development. Based on the study, the six-phase waterfall model is suitable for cloud application development provided that the requirements specification is precise already in the early phase of development. The study produced a requirements specification for the MikkoMail cloud application, and on its basis the MikkoMail cloud application was built on the Google App Engine platform. The Google App Engine platform supports only the Python and Java programming languages and offers no support for external database services. For this reason, the Google App Engine platform is suited to small, medium-sized and pilot-type application development projects.
Abstract:
In this thesis, simple methods have been sought to lower the teacher's threshold for starting to apply constructive alignment in instruction. Among the phases of the instructional process, aspects that the teacher can improve with little effort have been identified. Teachers were interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that, in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students' background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see whether their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students differ as learners from engineering students in general, and that there is a connection between a student's culture and learning style. In this thesis, a simple self-assessment scale based on Bloom's revised taxonomy has been developed. A statistical analysis of the test results indicates that in general the scale is quite reliable, but individual students may still slightly over- or underestimate their knowledge levels. For students, being able to follow their own progress is motivating, and for a teacher, self-assessment results give information about how the class is proceeding and what the level of the students' knowledge is.