63 results for dialogue tool


Relevance: 20.00%

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the statecharts familiar to programmers.

Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, the automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently.

The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. It is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm.

Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
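To make the workflow concrete, here is a minimal sketch of programming around a stated invariant (my illustration in Python, not the Socos notation; in Socos the invariant lives in a diagram and the corresponding verification conditions are discharged by PVS/Yices rather than checked at runtime):

    # A loop developed "invariant first": the invariant is stated before the
    # body, and every added statement must be shown to re-establish it.
    # Runtime assertions stand in here for machine-checked proof obligations.
    def sum_of(xs: list) -> int:
        total, i = 0, 0
        # Invariant: 0 <= i <= len(xs) and total == sum(xs[:i])
        assert 0 <= i <= len(xs) and total == sum(xs[:i])
        while i < len(xs):
            total += xs[i]
            i += 1
            assert 0 <= i <= len(xs) and total == sum(xs[:i])
        # The invariant plus the exit condition i == len(xs) yield the postcondition.
        assert total == sum(xs)
        return total

    assert sum_of([3, 1, 4]) == 8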

Relevance: 20.00%

Abstract:

The aim of this study is to analyse the content of the interdisciplinary conversations in Göttingen between 1949 and 1961. The task is to compare models for describing reality presented by quantum physicists and theologians. Descriptions of reality in different disciplines are conditioned by the development of the concept of reality in philosophy, physics and theology. Our basic problem is stated in the question: How is it possible for the intramental image to match the external object? Cartesian knowledge presupposes clear and distinct ideas in the mind prior to observation, resulting in a true correspondence between the observed object and the cogitative observing subject. The Kantian synthesis between rationalism and empiricism emphasises an extended character of representation. The human mind is not a passive receiver of external information but actively construes intramental representations of external reality in the epistemological process. Heidegger's aim was to reach a more primordial mode of understanding reality than is possible in the Cartesian subject-object distinction. In Heidegger's philosophy, ontology as being-in-the-world is prior to knowledge concerning being. Ontology can be grasped only in the totality of being (Dasein), not merely as an object of reflection and perception.

According to Bohr, quantum mechanics introduces an irreducible loss in representation, which classically understood is a deficiency in knowledge. The conflicting aspects (particle and wave pictures) in our comprehension of physical reality cannot be completely accommodated in a single coherent model of reality. What Bohr rejects is not realism, but the classical Einsteinian version of it. By the use of complementary descriptions, Bohr tries to save a fundamentally realistic position.

The fundamental question in Barthian theology is the problem of God as an object of theological discourse. Dialectics is Barth's way to express knowledge of God while avoiding a speculative theology and a human-centred religious self-consciousness. In Barthian theology, the human capacity for knowledge, independently of revelation, is insufficient to comprehend the being of God. Our knowledge of God is real knowledge in revelation, and our words are made to correspond with the divine reality in an analogy of faith. The point of the Bultmannian demythologising programme was to claim the real existence of God beyond our faculties. We cannot simply define God as a human ideal of existence or a focus of values. The theological programme of Bultmann emphasised the notion that we can talk meaningfully of God only insofar as we have existential experience of his intervention.

Common to all these twentieth-century philosophical, physical and theological positions is a form of anti-Cartesianism. Consequently, in regard to their epistemology, they can be labelled antirealist. This common insight also made it possible to find a common meeting point between the different disciplines. In this study, the different standpoints from all three areas and the conversations in Göttingen are analysed in the framework of realism/antirealism. One of the first tasks in the Göttingen conversations was to analyse the nature of the likeness between the complementary structures in quantum physics introduced by Niels Bohr and the dialectical forms in the Barthian doctrine of God. The reaction against epistemological Cartesianism, the metaphysics of substance and a deterministic description of reality was the common point of departure for theologians and physicists in the Göttingen discussions. In his complementarity, Bohr anticipated the crossing of traditional epistemic boundaries and the generalisation of epistemological strategies by introducing interpretative procedures across various disciplines.

Relevance: 20.00%

Abstract:

Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in prevention and in the provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the functioning of the test, there was special interest in analyzing the explanatory power of certain contextual factors related to the home and school.

Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge, and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high, good, average and low performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test halfway through grade two were also used for validation.

The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized textbooks for reading in grade one. Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good, and the items discriminated well. Principal component analysis revealed one powerful latent dimension (an initial literacy factor), accounting for 93% of the variance. This implies that any set of the subtests of the group test could be used for screening and prediction. K-Means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two had already been identified by the group test in grade one.

The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the studied school factors, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having attended nursery school was also of importance. Based on the findings of the study, a short version of the group test was created. It is suggested for use in screening in grade one, aiming at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research, as well as actions for improving the literacy skills of Tanzanian children, are presented.
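As an illustration of the construct-validity analysis described above, a short sketch of checking for a single dominant component and then clustering children into four groups (the data are simulated, not the study's, and the scikit-learn tooling is my assumption):

    # PCA on 7 subtest scores to test for one latent "initial literacy"
    # factor, then K-Means with four clusters (at-risk, strugglers,
    # readers, good readers).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    ability = rng.normal(size=337)                    # one latent dimension
    scores = ability[:, None] + 0.3 * rng.normal(size=(337, 7))

    pca = PCA().fit(scores)
    print(pca.explained_variance_ratio_[0])           # close to 1 -> one factor

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    print(np.bincount(labels))                        # cluster sizes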

Relevance: 20.00%

Abstract:

The purpose of this study is to examine how well risk parity works in terms of risk, return and diversification relative to the more traditional minimum variance, 1/N and 60/40 portfolios. The risk parity portfolios were constituted of five risk sources: three common asset classes and two alternative beta investment strategies. The three common asset classes were equities, bonds and commodities, and the alternative beta investment strategies were carry trade and trend following. The risk parity portfolios were constructed using five different risk measures, of which four were tail risk measures: standard deviation, Value-at-Risk, Expected Shortfall, modified Value-at-Risk and modified Expected Shortfall. We also studied how sensitive risk parity is to the choice of risk measure. The hypothesis is that risk parity portfolios provide better return with the same amount of risk and are better diversified than the benchmark portfolios. We used two data sets: monthly data from the years 1989-2011 and weekly data from the years 2000-2011. The empirical studies showed that risk parity portfolios provide better diversification, since the diversification is made at the level of risk. The risk-based portfolios provided superior return compared to the asset-based portfolios. Using tail risk measures in risk parity portfolios does not necessarily provide a better hedge against tail events than standard deviation.
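To illustrate the construction, a minimal sketch of risk parity weighting under the standard-deviation risk measure (the covariance matrix and the solver choice are mine, for illustration only): weights are chosen so that each of the five risk sources contributes an equal share of portfolio volatility.

    # Equal-risk-contribution weights: minimize the squared deviations of
    # each asset's risk contribution from the equal share vol/n.
    import numpy as np
    from scipy.optimize import minimize

    def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
        n = cov.shape[0]

        def objective(w):
            vol = np.sqrt(w @ cov @ w)
            rc = w * (cov @ w) / vol          # risk contribution per asset
            return np.sum((rc - vol / n) ** 2)

        res = minimize(objective, np.full(n, 1 / n),
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq",
                                     "fun": lambda w: w.sum() - 1}])
        return res.x

    # Five risk sources (equities, bonds, commodities, carry, trend);
    # the diagonal covariance below is made up for the example.
    cov = np.diag([0.04, 0.01, 0.05, 0.02, 0.03])
    print(risk_parity_weights(cov))   # lower-volatility sources get more weight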

Relevance: 20.00%

Abstract:

This three-phase study was conducted to examine the effect of the Breast Cancer Patient's Pathway program (BCPP) on breast cancer patients' empowering process, from the viewpoint of the difference between knowledge expectations and perceptions of received knowledge, knowledge level, quality of life, anxiety and treatment-related side effects during the breast cancer treatment process. The BCPP is an Internet-based patient education tool describing a flow chart of the patient pathway during the breast cancer treatment process, from diagnostic tests to the follow-up after treatments. The ultimate goal of this study was to evaluate the effect of the BCPP on breast cancer patients' empowerment when the patient pathway is used as a patient education tool.

In phase I, a systematic literature review was carried out to chart the solutions and outcomes of Internet-based educational programs for breast cancer patients. In phase II, a Delphi study was conducted to evaluate the usability of the web pages and the adequacy of their content. In phase III, the BCPP program was piloted with 10 patients, and patients were then randomised to an intervention group (n=50) and a control group (n=48).

According to the results of this study, the Internet is an effective patient education tool for increasing knowledge, and the BCPP can be used as a patient education method supporting other education methods. However, breast cancer patients' perceptions of received knowledge were not fulfilled: their knowledge expectations exceeded the perceived amount of received knowledge. Although the control group patients' knowledge expectations were better met by the knowledge they received in hospital than those of the patients in the intervention group, no statistically significant differences were found between the groups in terms of quality of life, anxiety and treatment-related side effects. However, anxiety decreased faster in the intervention group when looking at differences within the groups across the measurement times. In the intervention group, the difference between knowledge expectations and perceptions of received knowledge correlated significantly with quality of life and anxiety, and the intervention group's knowledge level was significantly higher than the control group's. These results support the theory that the empowering process requires the patient's awareness of knowledge expectations and perceptions of received knowledge. There is a need to develop patient education, including oral and written education and the BCPP, so that the knowledge patients perceive they have received fulfils their knowledge expectations and facilitates the empowering process. Further research is needed on the process of cognitive empowerment with breast cancer patients, and new patient education methods are needed to increase breast cancer patients' awareness of what they know.

Relevance: 20.00%

Abstract:

The purpose of the study is to determine the general features of a supply chain performance management system, to assess the current state of performance management in the case company's mills, and to make proposals for improvement, i.e. what the future state of the performance management system should look like. The study covers four phases, consisting of a theory part and a case company part. The theoretical review builds an understanding of performance management and measurement. The current state analysis assesses the current state of performance management in the mills. Results and proposals for improvement are derived from the current state analysis, and finally the conclusions, with answers to the research questions, are presented. A supply chain performance management system consists of five areas: performance measurement and metrics; action plans; performance tracking; performance dialogue; and rewards, consequences and actions. The results of the study revealed that all mills were at a quite average level in performance management and that there is room for improvement. The performance improvement matrix created in the study served as a tool for assessing current performance management and could also serve in the future for mapping the current state after the transformation process. Limited harmonization was revealed, as there were different ways of working and managing performance across the mills. Many good ideas existed, though actions are needed to make progress. There is also a need to harmonize the KPI structure.

Relevance: 20.00%

Abstract:

Pumping systems account for over 20% of all electricity consumption in European industry. Optimization and correct design of such systems is therefore important, and there is a considerable amount of unrealized energy saving potential in old pumping systems. The energy efficiency, and therefore also the energy consumption, of a pumping system depends heavily on the correct dimensioning and selection of devices. In this work, a graphical optimization tool for pumping systems is developed in the Matlab programming language. The tool selects an optimal pump, electric motor and frequency converter for an existing pumping process and calculates the life cycle costs of the whole system. The tool can be used as an aid when choosing the machinery and for analyzing the energy consumption of existing systems. Results given by the tool are compared to the results of laboratory tests. The selection of the pump and motor works reasonably well, but the frequency converter selection still needs development.
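As a sketch of the kind of life cycle cost comparison such a tool performs (a reduced form of the common Europump/Hydraulic Institute LCC formula; the prices, efficiencies and lifetime below are illustrative assumptions, not values from the tool):

    # Life cycle cost reduced to investment + present value of energy cost.
    def life_cycle_cost(investment_eur: float,
                        shaft_power_kw: float,
                        efficiency: float,
                        hours_per_year: float = 8760,
                        price_eur_per_kwh: float = 0.10,
                        years: int = 15,
                        interest: float = 0.05) -> float:
        electric_power_kw = shaft_power_kw / efficiency
        annual_energy_cost_eur = electric_power_kw * hours_per_year * price_eur_per_kwh
        annuity = (1 - (1 + interest) ** -years) / interest   # sum of discount factors
        return investment_eur + annual_energy_cost_eur * annuity

    # A pricier but more efficient combination can win on life cycle cost:
    print(life_cycle_cost(10_000, shaft_power_kw=15, efficiency=0.60))
    print(life_cycle_cost(14_000, shaft_power_kw=15, efficiency=0.75))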

Relevance: 20.00%

Abstract:

The aim of the study is to describe storytelling as a brand-building tool and to identify the distinctive characteristics of storytelling in social media. Drawing on earlier theories and publications, a framework is assembled that shows how the concepts influence one another. The study shows that storytelling strengthens all dimensions of brand equity, mainly thanks to its ability to evoke emotions and improve memorability. In the empirical part, qualitative content analysis is used to examine how common brand storytelling currently is and to describe the kinds of stories companies tell. The data consist of video-format stories shared on Facebook by the hundred most popular brands. The study showed that brand storytelling in social media is so far rather limited. The stories can be classified according to the story types familiar from the literature. Most of the stories presented by brands aim to evoke feelings of admiration and nostalgia in the audience, but many stories also contain humorous features.

Relevance: 20.00%

Abstract:

This report describes means for indoor localization in the special, challenging circumstances of the marine industry. The work has been carried out in the MARIN project, in which a tool based on mobile augmented reality technologies is developed for the marine industry. The tool can be used for various inspection and documentation tasks, and it aims to improve the efficiency of design and construction work by offering the possibility to visualize the newest 3D-CAD model in the real environment. Indoor localization is needed to support the system in initializing the accurate camera pose calculation and in automatically finding the right location in the 3D-CAD model. The suitability of each indoor localization method to the specific environment and circumstances is evaluated.

Relevance: 20.00%

Abstract:

Diffusion tensor imaging (DTI) is an advanced magnetic resonance imaging (MRI) technique based on the free thermal motion (diffusion) of water molecules. The properties of diffusion can be represented using parameters such as fractional anisotropy, mean diffusivity, axial diffusivity and radial diffusivity, which are calculated from the DTI data. These parameters can be used to study the microstructure of fibrous tissue such as brain white matter. The aim of this study was to investigate the reproducibility of region-of-interest (ROI) analysis and to determine associations between white matter integrity and antenatal and early postnatal growth at term age using DTI. Antenatal growth was studied using both the ROI and the tract-based spatial statistics (TBSS) method, and postnatal growth using only the TBSS method.

The infants included in this study were born before 32 gestational weeks or with a birth weight of less than 1,501 g, and they were imaged with a 1.5 T MRI system at term age. A total of 132 infants met the inclusion criteria between June 2004 and December 2006. After applying the exclusion criteria, 76 preterm infants (ROI) and 36 preterm infants (TBSS) were accepted into the study.

The ROI analysis was quite reproducible at term age, although reproducibility varied between white matter structures and diffusion parameters. Normal antenatal growth was positively associated with white matter maturation at term age. The ROI analysis showed associations only in the corpus callosum, whereas TBSS revealed associations in several brain white matter areas. Infants with normal antenatal growth showed more mature white matter than infants born small for gestational age. The gestational age at birth had no significant association with white matter maturation at term age. It was also observed that good early postnatal growth was negatively associated with white matter maturation at term age. Growth-restricted infants seemed to have delayed brain maturation that was not fully compensated at term, despite catch-up growth.
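The diffusion parameters named above have standard definitions in terms of the eigenvalues of the fitted diffusion tensor; the following generic NumPy sketch of those formulas is mine, not the thesis's processing pipeline:

    # DTI scalar parameters from the eigenvalues l1 >= l2 >= l3 of the tensor.
    import numpy as np

    def dti_parameters(tensor: np.ndarray) -> dict:
        """3x3 symmetric diffusion tensor -> FA, MD, AD, RD."""
        l1, l2, l3 = sorted(np.linalg.eigvalsh(tensor), reverse=True)
        md = (l1 + l2 + l3) / 3                        # mean diffusivity
        fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                     / (l1 ** 2 + l2 ** 2 + l3 ** 2))  # fractional anisotropy
        return {"FA": fa, "MD": md,
                "AD": l1,                              # axial diffusivity
                "RD": (l2 + l3) / 2}                   # radial diffusivity

    # Strongly anisotropic example tensor (mm^2/s), typical of white matter:
    print(dti_parameters(np.diag([1.7e-3, 0.3e-3, 0.3e-3])))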