11 results for Formal Methods

in Aston University Research Archive


Relevance: 100.00%

Abstract:

Heuristics, simulation, artificial intelligence techniques and combinations thereof have all been employed in the attempt to make computer systems adaptive, context-aware, reconfigurable and self-managing. This paper complements such efforts by exploring the possibility of achieving runtime adaptiveness using mathematically based techniques from the area of formal methods. It is argued that formal methods @ runtime represents a feasible approach, and promising preliminary results are summarised to support this viewpoint. The survey of existing approaches to employing formal methods at runtime is accompanied by a discussion of their challenges and of the future research required to overcome them. © 2011 Springer-Verlag.
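The runtime-verification idea sketched in the abstract above can be illustrated with a minimal monitor: a small automaton, derived from a safety property, checks each event of the running system as it occurs. The property and the event names below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of "formal methods at runtime": a monitor derived from a
# safety property is checked against the live event stream.
# Hypothetical property: every 'request' is answered by a 'grant' before
# the next 'request' may occur.

class SafetyMonitor:
    def __init__(self):
        self.pending = False   # automaton state: is a request outstanding?
        self.violated = False

    def observe(self, event):
        """Advance the automaton on one event; return True while safe."""
        if event == "request":
            if self.pending:           # second request before any grant
                self.violated = True
            self.pending = True
        elif event == "grant":
            self.pending = False
        return not self.violated

monitor = SafetyMonitor()
trace = ["request", "grant", "request", "request"]
results = [monitor.observe(e) for e in trace]
print(results)  # the final event violates the property
```

A real runtime-verification framework would synthesise such monitors from a temporal-logic formula rather than hand-code them, but the runtime checking loop has this shape.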

Relevance: 100.00%

Abstract:

This chapter explores ways in which rigorous mathematical techniques, termed formal methods, can be employed to improve the predictability and dependability of autonomic computing. Model checking, formal specification and quantitative verification are presented in the contexts of conflict detection in autonomic computing policies and of the implementation of goal and utility-function policies in autonomic IT systems. Each of these techniques is illustrated using a detailed case study and analysed to establish its merits and limitations. The analysis is then used as a basis for discussing the challenges and opportunities of this endeavour to transition the development of autonomic IT systems from the current practice of using ad-hoc methods and heuristics towards a more principled approach. © 2012, IGI Global.
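As an illustration of the utility-function policies mentioned above, here is a minimal sketch in which an autonomic manager selects the configuration that maximises a utility over predicted response time and cost. The configurations, numbers and utility form are all invented for illustration, not taken from the chapter's case studies.

```python
# Sketch of a utility-function policy: the autonomic manager scores each
# candidate configuration and adapts to the one with highest utility.
# All values below are hypothetical.

configs = [
    {"name": "small",  "servers": 1, "resp_time": 2.0, "cost": 1.0},
    {"name": "medium", "servers": 2, "resp_time": 1.0, "cost": 2.0},
    {"name": "large",  "servers": 4, "resp_time": 0.6, "cost": 4.0},
]

def utility(cfg, max_resp=1.5, cost_weight=0.2):
    # Hard constraint: zero utility if response time exceeds the target,
    # otherwise reward lower cost.
    if cfg["resp_time"] > max_resp:
        return 0.0
    return 1.0 - cost_weight * cfg["cost"]

best = max(configs, key=utility)
print(best["name"])  # "medium": meets the target at lower cost than "large"
```

The point of the quantitative-verification work is that such utility values can be computed from verified probabilistic models rather than guessed, so the selected configuration provably meets its targets.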

Relevance: 70.00%

Abstract:

There is an increasing emphasis on the use of software to control safety-critical plants across a wide range of applications. The importance of ensuring the correct operation of such potentially hazardous systems points to an emphasis on verifying the system against a suitably secure specification. However, the process of verification is often made more complex by the concurrency and real-time considerations inherent in many applications. A response to this is the use of formal methods for the specification and verification of safety-critical control systems. These provide a mathematical representation of a system which permits reasoning about its properties. This thesis investigates the use of the formal method Communicating Sequential Processes (CSP) for the verification of a safety-critical control application. CSP is a discrete-event process algebra with a compositional axiomatic semantics that supports verification by formal proof. The application is an industrial case study concerning the concurrent control of a real-time high-speed mechanism. The case study shows that the axiomatic verification method employed is complex: it requires the user to have a relatively comprehensive understanding of both the proof system and the application. Through a series of observations, the thesis notes that CSP possesses the scope to support a more procedural approach to verification in the form of testing. The thesis investigates this testing technique and proposes the method of Ideal Test Sets. By exploiting the underlying structure of the CSP semantic model, it is shown that for certain processes and specifications the obligation of verification can be reduced to testing the specification over a finite subset of the behaviours of the process.
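The reduction described above, from proof obligations to testing over a finite subset of behaviours, can be sketched in the style of CSP's traces model: enumerate bounded-length traces of a process and check a trace predicate (the specification) over each. The toy process and specification below are assumptions for illustration, not the thesis's industrial case study.

```python
# Sketch of trace-based testing in the spirit of CSP's traces model:
# a specification is a predicate over traces, and verification is reduced
# to checking it over a finite set of the process's behaviours.

def traces(transitions, state, depth):
    """All traces of length <= depth of a labelled transition system."""
    if depth == 0:
        return [[]]
    result = [[]]
    for (src, event, dst) in transitions:
        if src == state:
            for t in traces(transitions, dst, depth - 1):
                result.append([event] + t)
    return result

# A mutual-exclusion-style process: acquire then release, repeatedly.
lts = [("free", "acquire", "busy"), ("busy", "release", "free")]

def spec(trace):
    # Safety spec: never two 'acquire' events without a 'release' between.
    held = 0
    for e in trace:
        held += 1 if e == "acquire" else -1
        if held not in (0, 1):
            return False
    return True

finite_behaviours = traces(lts, "free", 4)
print(all(spec(t) for t in finite_behaviours))  # True
```

The thesis's contribution is identifying when such a finite test set is *ideal*, i.e. when passing it implies the specification holds over all (possibly infinitely many) behaviours; this sketch only shows the testing side of that reduction.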

Relevance: 70.00%

Abstract:

The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models, and supports more effective software design in terms of understanding, sharing and reuse in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. In this paper, a methodology with tool support is proposed to automatically derive ontological metadata from formal software models and semantically describe them.

Relevance: 70.00%

Abstract:

Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the process of software development, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models, and supports more effective software design in terms of understanding, sharing and reuse in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology with tool support to automatically derive ontological metadata from formal software models and semantically describe them. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic querying of software models and other artefacts. © 2014.
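A minimal sketch of the kind of metadata derivation described above: each schema in a toy Z-like model yields subject-predicate-object triples that a Semantic Web environment could store and query. The namespace, vocabulary and model are hypothetical, not the paper's actual ontology.

```python
# Sketch of deriving ontological metadata from a formal model: schemas
# and their declared variables become RDF-style triples.
# The namespace and the toy birthday-book model are illustrative only.

NS = "http://example.org/formal#"   # hypothetical vocabulary namespace

z_model = {
    "BirthdayBook": {"kind": "StateSchema", "declares": ["known", "birthday"]},
    "AddBirthday":  {"kind": "Operation",   "declares": ["name?", "date?"]},
}

def to_triples(model):
    """Flatten a schema dictionary into (subject, predicate, object) triples."""
    triples = []
    for schema, info in model.items():
        triples.append((NS + schema, NS + "hasKind", NS + info["kind"]))
        for var in info["declares"]:
            triples.append((NS + schema, NS + "declares", var))
    return triples

triples = to_triples(z_model)
print(len(triples))  # 2 kind triples + 4 declaration triples = 6
```

In practice such triples would be serialised with an RDF library and published so that semantic queries can cross-link models with related documents, which is the interconnection the framework aims at.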

Relevance: 60.00%

Abstract:

We report the case of a neologistic jargonaphasic and ask whether her target-related and abstruse neologisms are the result of a single deficit, which affects some items more severely than others, or two deficits: one to lexical access and the other to phonological encoding. We analyse both correct/incorrect performance and errors and apply both traditional and formal methods (maximum-likelihood estimation and model selection). All evidence points to a single deficit at the level of phonological encoding. Further characteristics are used to constrain the locus still further. V.S. does not show the type of length effect expected of a memory component, nor the pattern of errors associated with an articulatory deficit. We conclude that her neologistic errors can result from a single deficit at a level of phonological encoding that immediately follows lexical access where segments are represented in terms of their features. We do not conclude, however, that this is the only possible locus that will produce phonological errors in aphasia, or, indeed, jargonaphasia.
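The maximum-likelihood and model-selection reasoning mentioned above can be sketched as a comparison of a one-rate (single-deficit) model against a two-rate (two-deficit) model on correct/incorrect counts, scored by AIC. The counts below are invented for illustration; they are not the patient data from the study.

```python
import math

# Sketch of model selection between a single-deficit model (one shared
# error rate) and a two-deficit model (a separate rate per item set),
# fitted by maximum likelihood and compared by AIC.
# (correct, total) counts on two hypothetical item sets:
data = [(42, 60), (38, 60)]

def log_lik(p, correct, total):
    """Binomial log-likelihood (dropping the constant binomial coefficient)."""
    return correct * math.log(p) + (total - correct) * math.log(1 - p)

# Single-deficit model: one shared rate (1 parameter); MLE is the pooled rate.
pooled = sum(c for c, _ in data) / sum(n for _, n in data)
ll_one = sum(log_lik(pooled, c, n) for c, n in data)
aic_one = 2 * 1 - 2 * ll_one

# Two-deficit model: a separate rate per item set (2 parameters).
ll_two = sum(log_lik(c / n, c, n) for c, n in data)
aic_two = 2 * 2 - 2 * ll_two

print(aic_one < aic_two)  # similar rates: the simpler model wins on AIC
```

The extra parameter always fits at least as well (ll_two >= ll_one); the penalty term is what lets the simpler, single-deficit account be preferred when the improvement in fit is small.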

Relevance: 60.00%

Abstract:

This research involves a study of the questions of what is considered safe, how safety levels are defined or decided, and according to whom. Tolerable or acceptable risk questions raise various issues: about the values and assumptions inherent in such levels; about decision-making frameworks at the highest level of policy making as well as at the individual level; and about the suitability and competency of decision-makers to decide and to communicate their decisions. The wide-ranging topics covering philosophical and practical concerns examined in the literature review reveal the multidisciplinary scope of this research. To support this theoretical study, empirical research was undertaken at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA). ESTEC is a large, multinational, high-technology organisation which presented an ideal case study for exploring how decisions are made with respect to safety, from a personal as well as an organisational perspective. A qualitative methodology was employed to gather, analyse and report the findings of this research. Significant findings reveal how experts perceive risks, and the prevalence of informal decision-making processes due in part to the inadequacy of formal methods for deciding risk tolerability. In the field of occupational health and safety, this research has highlighted the importance of, and need for, criteria to decide whether a risk is great enough to warrant attention in setting standards and priorities for risk control and resources. From a wider perspective, and with the recognition that risk is an inherent part of life, the establishment of risk tolerability levels can be viewed as cornerstones indicating our progress, expectations and values, in life and work, in an increasingly litigious, knowledgeable and global society.

Relevance: 60.00%

Abstract:

This paper summarizes the scientific work presented at the 32nd European Conference on Information Retrieval. It demonstrates that information retrieval (IR) as a research area continues to thrive, with progress being made in three complementary sub-fields: IR theory and formal methods, together with indexing and query representation issues; Web IR as a primary application area; and research into evaluation methods and metrics. It is the combination of these areas that gives IR its solid scientific foundations. The paper also illustrates that significant progress has been made in other areas of IR. The keynote speakers addressed three such subject fields: social search engines using personalization and recommendation technologies, the renewed interest in applying natural language processing to IR, and multimedia IR as another fast-growing area.

Relevance: 60.00%

Abstract:

We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
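The idea of quantifying strength of evidence without forcing a binary accept/reject choice can be sketched with Akaike weights, which convert AIC differences into relative model support. The model names and AIC values below are invented for illustration, not results from the paper.

```python
import math

# Sketch of graded evidence via Akaike weights: each model's AIC
# difference from the best model is converted into a normalised weight,
# so support is a proportion rather than a binary verdict.
# Hypothetical models and AIC scores:
aics = {"single-deficit": 154.8, "two-deficit": 156.2}

def akaike_weights(aic_by_model):
    best = min(aic_by_model.values())
    rel = {m: math.exp(-0.5 * (a - best)) for m, a in aic_by_model.items()}
    total = sum(rel.values())
    return {m: r / total for m, r in rel.items()}

weights = akaike_weights(aics)
print(weights)  # the better model gets most, but not all, of the support
```

The weights sum to one, so a close contest shows up as a near-even split rather than being hidden behind a single "significant/not significant" verdict, which is exactly the contrast with hypothesis testing drawn above.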

Relevance: 30.00%

Abstract:

Introduction-The design of the UK MPharm curriculum is driven by the Royal Pharmaceutical Society of Great Britain (RPSGB) accreditation process and the EU directive (85/432/EEC).[1] Although the RPSGB is informed about teaching activity in UK Schools of Pharmacy (SOPs), there is no database which aggregates information to provide the whole picture of pharmacy education within the UK. The aim of the teaching, learning and assessment study [2] was to document and map current programmes in the 16 established SOPs. Recent developments in programme delivery have resulted in a focus on deep learning (for example, through problem-based learning approaches) and on being more student-centred and less didactic through lectures. The specific objectives of this part of the study were (a) to quantify the content and modes of delivery of material as described in course documentation and (b) having categorised the range of teaching methods, to ask students to rate how important they perceived each one to be for their own learning (using a three-point Likert scale: very important, fairly important or not important). Material and methods-The study design compared three datasets: (1) quantitative course document review, (2) qualitative staff interviews and (3) a quantitative student self-completion survey. All 16 SOPs provided a set of their undergraduate course documentation for the year 2003/4. The documentation variables were entered into Excel tables. A self-completion questionnaire was administered to all year-four undergraduates (n=1847) in 15 SOPs within Great Britain, using a pragmatic mixture of methods. The survey data were analysed (n=741) using SPSS, excluding non-UK students who may have undertaken part of their studies within a non-UK university. Results and discussion-Interviews showed that individual teachers and course module leaders determine the choice of teaching methods used.
Content review of the documentary evidence showed that 51% of the taught element of the course was delivered using lectures, 31% using practicals (including computer-aided learning) and 18% using small-group or interactive teaching. There was high uniformity across the schools for the first three years; variation in the final year was due to the project. The average number of hours per year across 15 schools (data for one school were not available) was: year 1: 408 hours; year 2: 401 hours; year 3: 387 hours; year 4: 401 hours. The survey showed that students perceived lectures to be the most important method of teaching after dispensing or clinical practicals. Taking the very important rating only: 94% (n=694) dispensing or clinical practicals; 75% (n=558) lectures; 52% (n=386) workshops; 50% (n=369) tutorials; 43% (n=318) directed study. Scientific laboratory practicals were rated very important by only 31% (n=227). The study shows that the teaching of pharmacy to undergraduates in the UK is still essentially didactic, with a high proportion of formal lectures and high levels of staff-student contact. Schools consider lectures still to be the most cost-effective means of delivering the core syllabus to large cohorts of students. However, this limits the scope for optionality within teaching; the scope for small-group work is reduced, as is the opportunity to develop multi-professional learning or practice placements. Although novel teaching and learning techniques such as e-learning have expanded considerably over the past decade, schools of pharmacy have concentrated on lectures as the best way of coping with the huge expansion in student numbers. References [1] Council Directive concerning the coordination of provisions laid down by law, regulation or administrative action in respect of certain activities in the field of pharmacy. Official Journal of the European Communities 1985;85/432/EEC. [2] Wilson K, Jesson J, Langley C, Clarke L, Hatfield K. MPharm Programmes: Where are we now? Report commissioned by the Pharmacy Practice Research Trust, 2005.

Relevance: 30.00%

Abstract:

Purpose: Diabetes is a leading cause of visual impairment in the working-age population in the UK. This study looked at the causes of severe visual impairment (SVI) in patients attending a diabetic eye clinic, and at the influence on the rate of SVI, over a 12-year period, of introducing retinal screening programmes in the hospital and the community in 1993 (reviews in 1992, 1998 and 2004). Methods: Medical records of all patients attending the diabetic eye clinic over a period of 5 months (April to August) in 1992, 1998 and 2004 were reviewed. The data collected for each patient included age, sex, ethnic origin, diabetes (type, duration and treatment), best corrected visual acuity (current and at presentation), type and duration of retinopathy, and attendance record at both the diabetic clinic and the diabetic eye clinic. In this study, SVI is defined as a visual acuity of 6/36 or worse in at least one eye. Results: In 1992, of a total of 245 patients, 58 (23.6%) had SVI: 38 (15.5% of total) due to diabetic retinopathy [31 (12.6%) maculopathy, 2 (0.8%) vitreous haemorrhage and 5 (2%) retinal detachment] and 20 (8.1%) due to causes other than diabetic retinopathy. In 1998, of a total of 297, 77 patients (25.9%) had SVI: 33 (11.1% of total) due to diabetic retinopathy [19 (6.4%) maculopathy, 9 (3%) proliferative retinopathy, 8 (2.7%) vitreous haemorrhage and 3 (1%) retinal detachment] and 44 (14.8%) due to causes other than diabetic retinopathy. In 2004, of a total of 471, 72 patients (15.2%) had SVI: 46 (9.7% of total) due to diabetic retinopathy [37 (7.8%) maculopathy, 1 (0.2%) proliferative retinopathy, 6 (1.8%) vitreous haemorrhage and 2 (0.4%) retinal detachment] and 26 (5.5%) due to causes other than diabetic retinopathy. Conclusions: The introduction of a formalised annual diabetic review including retinal screening, together with a community retinal screening programme, has reduced the rate of severe visual impairment due to diabetic retinopathy in patients attending the diabetic eye clinic from 15.5% in 1992 to 9.7% in 2004.
Keywords: diabetic retinopathy