32 results for software, translation, validation tool, VMNET, Wikipedia, XML
in Aston University Research Archive
Abstract:
The traditional waterfall software life cycle model has several weaknesses. One problem is that a working version of a system is unavailable until a late stage in the development; any omissions and mistakes in the specification that remain undetected until that stage can be costly to rectify. The operational approach, which emphasises the construction of executable specifications, can help to remedy this problem. An operational specification may be exercised to generate the behaviours of the specified system, thereby serving as a prototype to facilitate early validation of the system's functional requirements. Recent ideas have centred on using an existing operational method such as JSD in the specification phase of object-oriented development. An explicit transformation phase following specification is necessary in this approach because differences in abstraction between the two domains need to be bridged. This research explores an alternative approach: developing an operational specification method specifically for object-oriented development. By incorporating object-oriented concepts in operational specifications, the specifications have the advantage of directly facilitating implementation in an object-oriented language without requiring further significant transformations. In addition, object-oriented concepts can help the developer manage the complexity of the problem domain specification, whilst providing the user with a specification that closely reflects the real world, so that the specification and its execution can be readily understood and validated. A graphical notation has been developed for the specification method which can capture the dynamic properties of an object-oriented system. A tool has also been implemented, comprising an editor to facilitate the input of specifications and an interpreter which can execute the specifications and graphically animate the behaviours of the specified systems.
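As a purely illustrative sketch of the executable-specification idea (hypothetical Python, not the thesis's graphical notation or tool), an object can be specified as states plus event-triggered transitions, and a small interpreter can "animate" the specified behaviour:

# Minimal sketch of an executable object-oriented specification.
# Hypothetical illustration only; the thesis defines its own notation,
# editor and interpreter.

class ObjectSpec:
    """Specification of one object: states and event-triggered transitions."""
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"{self.name}: event {event!r} invalid in state {self.state!r}")
        before, self.state = self.state, self.transitions[key]
        print(f"{self.name}: {before} --{event}--> {self.state}")

# Executing the specification generates system behaviour for early validation.
lift = ObjectSpec("Lift", "idle", {
    ("idle", "call"): "moving",
    ("moving", "arrive"): "doors_open",
    ("doors_open", "close"): "idle",
})
for event in ["call", "arrive", "close"]:
    lift.fire(event)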
Abstract:
The present thesis is located within the framework of descriptive translation studies and critical discourse analysis. Modern translation studies have increasingly taken into account the complexities of power relations and ideological management involved in the production of translations. Paradoxically, persuasive political discourse has not been much touched upon, except for studies following functional (e.g. Schäffner 2002) or systemic-linguistic approaches (e.g. Calzada Pérez 2001). By taking 11 English translations of Hitler’s Mein Kampf as prime examples, the thesis aims to contribute to a better understanding of the translation of politically sensitive texts. Actors involved in political discourse are usually more concerned with the emotional appeal of their message than they are with its factual content. When such political discourse becomes the locus of translation, it may equally be crafted rhetorically, being used as a tool to persuade. It is thus the purpose of the thesis to describe subtle ‘persuasion strategies’ in institutionally translated political discourse. The subject of the analysis is an illustrative corpus of four full-text translations, two abridgements, and five extract translations of Mein Kampf. Methodologically, the thesis pursues a top-down approach. It begins by delineating sociocultural and situative-agentive conditions as causal factors impinging on the individual translations. Such interactive and interpersonal factors determined textual choices. The overall textual analysis consists of an interrelated corpus-driven and corpus-based approach. It demonstrates how corpus software can be fruitfully harnessed to discern ‘ideological significations’ in the translated texts. Altogether, the thesis investigates how translational decision-makers attempted to position the source text author and his narrative in line with overall rhetorical purposes.
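The corpus-driven analysis described typically rests on keyness statistics. As an illustrative sketch only (invented frequencies, not the thesis's corpus data), Dunning's log-likelihood keyness of a word in one corpus against a reference corpus can be computed as follows:

import math

def log_likelihood(freq_a, corpus_a_size, freq_b, corpus_b_size):
    """Dunning log-likelihood keyness of a word in corpus A vs corpus B."""
    expected_a = corpus_a_size * (freq_a + freq_b) / (corpus_a_size + corpus_b_size)
    expected_b = corpus_b_size * (freq_a + freq_b) / (corpus_a_size + corpus_b_size)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2.0 * ll

# Hypothetical counts: a word appearing 120 times in a 300,000-word
# translation corpus and 15 times in a 1,000,000-word reference corpus.
print(round(log_likelihood(120, 300_000, 15, 1_000_000), 2))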
Abstract:
BACKGROUND: Gilles de la Tourette syndrome (GTS) is a chronic childhood-onset neuropsychiatric disorder with a significant impact on patients' health-related quality of life (HR-QOL). Cavanna et al. (Neurology 2008; 71: 1410-1416) developed and validated the first disease-specific HR-QOL assessment tool for adults with GTS (Gilles de la Tourette Syndrome-Quality of Life Scale, GTS-QOL). This paper presents the translation, adaptation and validation of the GTS-QOL for young Italian patients with GTS. METHODS: A three-stage process involving 75 patients with GTS recruited through three Departments of Child and Adolescent Neuropsychiatry in Italy led to the development of a 27-item instrument (Gilles de la Tourette Syndrome-Quality of Life Scale in children and adolescents, C&A-GTS-QOL) for the assessment of HR-QOL through a clinician-rated interview for 6-12 year-olds and a self-report questionnaire for 13-18 year-olds. RESULTS: The C&A-GTS-QOL demonstrated satisfactory scaling assumptions and acceptability. Internal consistency reliability was high (Cronbach's alpha > 0.7) and validity was supported by interscale correlations (range 0.4-0.7), principal-component factor analysis and correlations with other rating scales and clinical variables. CONCLUSIONS: The present version of the C&A-GTS-QOL is the first disease-specific HR-QOL tool for Italian young patients with GTS, satisfying criteria for acceptability, reliability and validity. © 2013 - IOS Press and the authors. All rights reserved.
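For reference, the internal-consistency statistic reported here, Cronbach's alpha, can be computed directly from item-level scores; a minimal Python sketch with invented data (not the C&A-GTS-QOL responses):

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists (one list per item)."""
    k = len(items)
    n = len(items[0])
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Illustrative 4-item, 5-respondent example (not the study's data).
scores = [[3, 4, 2, 5, 4], [3, 5, 2, 4, 4], [2, 4, 3, 5, 3], [3, 4, 2, 4, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values > 0.7 indicate acceptable reliability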
Abstract:
Monitoring land-cover changes on sites of conservation importance allows environmental problems to be detected, solutions to be developed and the effectiveness of actions to be assessed. However, the remoteness of many sites or a lack of resources means these data are frequently not available. Remote sensing may provide a solution, but large-scale mapping and change detection may not be appropriate, necessitating site-level assessments. These need to be easy to undertake, rapid and cheap. We present an example of a Web-based solution based on free and open-source software and standards (including PostGIS, OpenLayers, Web Map Services, Web Feature Services and GeoServer) to support assessments of land-cover change (and validation of global land-cover maps). Authorised users are provided with means to assess land-cover visually and may optionally provide uncertainty information at various levels: from a general rating of their confidence in an assessment to a quantification of the proportions of land-cover types within a reference area. Versions of this tool have been developed for the TREES-3 initiative (Simonetti, Beuchle and Eva, 2011), which monitors tropical land-cover change through ground-truthing at latitude/longitude degree confluence points, and for monitoring change within and around Important Bird Areas (IBAs) by BirdLife International and the Royal Society for the Protection of Birds (RSPB). In this paper we present results from the second of these applications. We also present further details on the potential use of the land-cover change assessment tool on sites of recognised conservation importance, in combination with NDVI and other time series data from the eStation (a system for receiving, processing and disseminating environmental data). We show how the tool can be used to increase the usability of earth observation data by local stakeholders and experts, and assist in evaluating the impact of protection regimes on land-cover change.
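The abstract names standard OGC services; as an illustration of how such a stack is typically queried, the following sketch assembles a standard WMS 1.1.1 GetMap request (the endpoint and layer name are placeholders, not the project's actual service):

from urllib.parse import urlencode

# Hypothetical GeoServer endpoint and layer; the project's actual
# services and layer names are not given in the abstract.
base_url = "https://example.org/geoserver/wms"
params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "landcover:iba_assessment",
    "bbox": "-1.5,52.0,-1.0,52.5",   # lon/lat bounding box around a site
    "width": 512,
    "height": 512,
    "srs": "EPSG:4326",
    "format": "image/png",
}
print(f"{base_url}?{urlencode(params)}")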
Abstract:
The yeast Saccharomyces cerevisiae is an important model organism for the study of cell biology. The similarity between yeast and human genes and the conservation of fundamental pathways means it can be used to investigate characteristics of healthy and diseased cells throughout the lifespan. Yeast is an equally important biotechnological tool that has long been the organism of choice for the production of alcoholic beverages, bread and a large variety of industrial products. For example, yeast is used to manufacture biofuels, lubricants, detergents, industrial enzymes, food additives and pharmaceuticals such as anti-parasitics, anti-cancer compounds, hormones (including insulin), vaccines and nutraceuticals. Its function as a cell factory is possible because of the speed with which it can be grown to high cell yields, the knowledge that it is generally recognized as safe (GRAS) and the ease with which metabolism and cellular pathways, such as translation, can be manipulated. In this thesis, these two pathways are explored in the context of their biotechnological application to ageing research: (i) understanding translational processes during the high-yielding production of membrane protein drug targets and (ii) the manipulation of yeast metabolism to study the molecule, L-carnosine, which has been proposed to have anti-ageing properties. In the first of these themes, the yeast strains spt3Δ, srb5Δ, gcn5Δ and yTHCBMS1 were examined, since they have previously been demonstrated to dramatically increase the yields of a target membrane protein (the aquaporin, Fps1) compared with wild-type cells. The mechanisms underlying this discovery were therefore investigated. All high-yielding strains were shown to have an altered translational state (mostly characterised by an initiation block) and constitutive phosphorylation of the translational initiation factor, eIF2α. The relevance of the initiation block was further supported by the finding that other strains with known initiation blocks are also high yielding for Fps1. A correlation in all strains between increased Fps1 yields and increased production of the transcriptional activator protein, Gcn4, suggested that yields are subject to translational control. Analysis of the 5′ untranslated region (UTR) of FPS1 revealed two upstream open reading frames (uORFs). Mutagenesis data suggest that high-yielding strains may circumvent these control elements through either a leaky-scanning or a re-initiation mechanism. In the second theme, the dipeptide L-carnosine (β-alanyl-L-histidine) was investigated: it has previously been shown to inhibit the growth of cancer cells but delay senescence in cultured human fibroblasts and extend the lifespan of male fruit flies. To understand these apparently contradictory properties, the effects of L-carnosine on yeast were studied. S. cerevisiae can respire aerobically when grown on a non-fermentable carbon source as a substrate but has a respiro-fermentative metabolism when grown on a fermentable carbon source; these metabolisms mimic normal cell and cancerous cell metabolisms, respectively. When yeast were grown on fermentable carbon sources in the presence of L-carnosine, a reduction in cell growth and viability was observed, which was not apparent for cells grown on a non-fermentable carbon source. The metabolism-dependent mechanism was confirmed in the respiratory yeast species Pichia pastoris. Further analysis of S. cerevisiae strains with deletions in their nutrient-sensing pathway, which result in an increase in respiratory metabolism, confirmed the metabolism-dependent effects of L-carnosine.
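The uORF analysis mentioned above can be illustrated with a simple scan of a 5′ UTR for upstream AUG codons followed by in-frame stop codons; a minimal sketch with a made-up sequence (not the real FPS1 leader):

STOPS = {"TAA", "TAG", "TGA"}

def find_uorfs(utr):
    """Return (start, end) pairs of upstream ORFs in a 5' UTR (DNA alphabet)."""
    uorfs = []
    for start in range(len(utr) - 2):
        if utr[start:start + 3] != "ATG":
            continue
        for pos in range(start + 3, len(utr) - 2, 3):
            if utr[pos:pos + 3] in STOPS:
                uorfs.append((start, pos + 3))
                break
    return uorfs

# Hypothetical 5' UTR containing two uORFs.
utr = "CCAATGGCATAAGGCTTATGACCTGACC"
print(find_uorfs(utr))  # [(3, 12), (17, 26)]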
Abstract:
Many software engineers have found that it is difficult to understand, incorporate and use different formal models consistently in the process of software development, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models and supports more effective software design in terms of understanding, sharing and reusing in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect the knowledge about formal software models and other related documents using semantic technology. We first propose a methodology, with tool support, to automatically derive ontological metadata from formal software models and semantically describe them. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic queries to software models and other artefacts. © 2014.
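The kind of ontological metadata derivation the paper proposes can be sketched with a generic RDF toolkit. The following illustrative Python uses rdflib with a hypothetical namespace and the classic BirthdayBook Z schema as a toy subject; it is not the paper's actual ontology or tool:

from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespace; the paper's actual ontology is not reproduced here.
FM = Namespace("http://example.org/formal-models#")

g = Graph()
schema = URIRef("http://example.org/models/BirthdayBook")
g.add((schema, RDF.type, FM.ZSchema))
g.add((schema, FM.declares, Literal("known : P NAME")))
g.add((schema, FM.documentedBy, URIRef("http://example.org/docs/birthday-book.pdf")))

# Serialised metadata can then be shared and queried over the Semantic Web.
print(g.serialize(format="turtle"))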
Abstract:
As machine tools continue to become increasingly repeatable and accurate, high-precision manufacturers may be tempted to consider how they might utilise machine tools as measurement systems. In this paper, we have explored this paradigm by attempting to repurpose state-of-the-art coordinate measuring machine Uncertainty Evaluating Software (UES) for a machine tool application. We performed live measurements on all the systems in question. Our findings have highlighted some gaps with UES when applied to machine tools, and we have attempted to identify the sources of variation which have led to discrepancies. Implications of this research include requirements to evolve the algorithms within the UES if it is to be adapted for on-machine measurement, improve the robustness of the input parameters, and most importantly, clarify expectations.
Abstract:
The involvement of oxidatively modified low density lipoprotein (LDL) in the development of coronary heart disease (CHD) is widely described. We have produced two antibodies recognizing the lipid oxidation product malondialdehyde (MDA) on whole LDL or ApoB-100. The antibodies were utilized in the development of an ELISA for quantitation of MDA-LDL in human plasma. Intra- and inter-assay coefficients of variation (% CV) were measured as 4.8 and 7.7%, respectively, and the sensitivity of the assay as 0.04 μg/ml MDA-LDL. Recovery of standard MDA-LDL from native LDL was 102%, indicating the ELISA to be specific with no interference from other biomolecules. Further validation of the ELISA was carried out against two established methods for measurement of lipid peroxidation products: MDA by HPLC and F2-isoprostanes by GC-MS. Results indicated that MDA-LDL is formed at a later stage of oxidation than either MDA or F2-isoprostanes. In vivo analysis demonstrated that the ELISA was able to determine steady-state concentrations of plasma MDA-LDL (an end marker of lipid peroxidation). A reference range of 34.3 ± 8.8 μg/ml MDA-LDL was established for healthy individuals. Further, the ELISA was used to show significantly increased plasma MDA-LDL levels in subjects with confirmed ischemic heart disease, and could therefore possibly be of benefit as a diagnostic tool for assessing CHD risk. © 2003 Elsevier Inc.
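The precision figures quoted follow the standard definition of the coefficient of variation, CV = (SD / mean) × 100; a minimal sketch with invented replicate readings:

from statistics import mean, stdev

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate assay readings."""
    return stdev(replicates) / mean(replicates) * 100

# Illustrative replicate MDA-LDL readings (ug/ml); not the study's data.
intra_assay = [33.8, 35.1, 34.4, 36.0, 33.5]
print(f"intra-assay CV = {cv_percent(intra_assay):.1f}%")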
Abstract:
Information technology has increased both the speed and medium of communication between nations. It has brought the world closer, but it has also created new challenges for translation — how we think about it, how we carry it out and how we teach it. Translation and Information Technology has brought together experts in computational linguistics, machine translation, translation education, and translation studies to discuss how these new technologies work, the effect of electronic tools, such as the internet, bilingual corpora, and computer software, on translator education and the practice of translation, as well as the conceptual gaps raised by the interface of human and machine.
Abstract:
A current EPSRC project, 'Product Introduction Process: a Simulation in the Extended Enterprise' (PIPSEE), is discussed. PIPSEE attempts to improve the execution of the product introduction process (PIP) within an extended enterprise in the aerospace sector. The modus operandi for accomplishing this has been to develop process understanding amongst a core team, spanning four different companies, through process modelling, review and improvement recommendation. In parallel, a web-based simulation capability is being used to conduct simulation experiments and to disseminate findings by training others in the lessons that have been learned. It is intended that the use of the PIPSEE simulator should encourage radical thinking about the 'fuzzy front end' of the PIP. This presents a topical, exciting and challenging research problem.
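As a purely illustrative sketch of this kind of process simulation (stage names, owners and durations are invented, not the PIPSEE model), a product introduction process can be stepped through as a sequence of timed stages spanning partner companies:

import random

# Invented stages and durations; illustrative only, not the PIPSEE model.
STAGES = [
    ("concept definition", "Company A", (2, 5)),
    ("design review", "Company B", (1, 3)),
    ("prototype build", "Company C", (4, 8)),
    ("flight-test planning", "Company D", (2, 4)),
]

def simulate_pip(seed=None):
    """Simulate one pass through the product introduction process (weeks)."""
    rng = random.Random(seed)
    clock = 0.0
    for stage, owner, (lo, hi) in STAGES:
        duration = rng.uniform(lo, hi)
        clock += duration
        print(f"week {clock:5.1f}: {owner} completed {stage} ({duration:.1f} wk)")
    return clock

total = simulate_pip(seed=42)
print(f"total lead time: {total:.1f} weeks")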
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification and the performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error-prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies, such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are presented to show how the tool works in the early design phase for fault prevention, before the program is ever run.
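The deadlock check described, generating a reachability tree from a Petri-net model and flagging markings with no enabled transitions, can be sketched generically. The toy net below (two processes acquiring two shared resources in opposite orders) is an invented example, not output of the tool's Occam translation:

from collections import deque

# A Petri net as transitions with input/output place multisets.
TRANSITIONS = {
    "A_takes_r1": ({"A_idle": 1, "r1": 1}, {"A_holds_r1": 1}),
    "A_takes_r2": ({"A_holds_r1": 1, "r2": 1}, {"A_done": 1, "r1": 1, "r2": 1}),
    "B_takes_r2": ({"B_idle": 1, "r2": 1}, {"B_holds_r2": 1}),
    "B_takes_r1": ({"B_holds_r2": 1, "r1": 1}, {"B_done": 1, "r1": 1, "r2": 1}),
}
INITIAL = {"A_idle": 1, "B_idle": 1, "r1": 1, "r2": 1}

def enabled(marking, inputs):
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, inputs, outputs):
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def dead_markings(initial):
    """Breadth-first reachability tree; collect markings with no enabled transition."""
    seen, dead, queue = set(), [], deque([initial])
    while queue:
        marking = queue.popleft()
        key = frozenset(marking.items())
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(marking, ins, outs)
                      for ins, outs in TRANSITIONS.values() if enabled(marking, ins)]
        if successors:
            queue.extend(successors)
        else:
            dead.append(marking)
    return dead

# Dead markings include proper termination (A_done, B_done) and the circular
# wait {A_holds_r1, B_holds_r2}; the latter is the deadlock to eliminate.
for m in dead_markings(INITIAL):
    print(m)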
Abstract:
CONTEXT: The homeless are a significant and growing group within society. They have demonstrably greater physical and mental health needs than the housed, and yet often have difficulty accessing primary health care. Medical 'reluctance' to look after homeless people is increasingly suggested as part of the problem, and medical education may have a role in ameliorating this. OBJECTIVES: This paper reports on the development and validation of a questionnaire specifically developed to measure medical students' attitudes towards the homeless. METHOD AND RESULTS: The Attitudes Towards the Homeless Questionnaire, developed using the views of over 370 medical students, was shown to have a Pearson test-retest reliability coefficient of 0.8 and a Cronbach's alpha coefficient of 0.74. CONCLUSIONS: The Attitudes Towards the Homeless Questionnaire appears to be a valid and reliable instrument which can measure students' attitudes towards the homeless. It could be a useful tool in assessing the effectiveness of educational interventions.
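For reference, the test-retest figure quoted is a Pearson correlation between repeated administrations of the questionnaire; a minimal sketch with invented totals (not the study's data):

from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between test and retest questionnaire totals."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Invented test/retest totals for five students.
test = [42, 55, 38, 60, 47]
retest = [44, 53, 40, 58, 49]
print(f"test-retest r = {pearson_r(test, retest):.2f}")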
Abstract:
The objective of this research is to design and build a groupware system which will allow members of a distributed group more flexibility in performing software inspection. Software inspection, which is part of non-execution-based testing in software development, is a group activity. The groupware system aims to improve the acceptability of groupware and improve software quality by providing a software inspection tool that is flexible and adaptable. The groupware system provides a flexible structure for software inspection meetings and extends the structure of the software inspection meeting itself, allowing software inspection meetings to use all four quadrants of the space-time matrix: face-to-face, distributed synchronous, distributed asynchronous, and same place-different time. This opens up new working possibilities. The flexibility and adaptability of the system allow work to switch rapidly between synchronous and asynchronous interaction. A model for a flexible groupware system was developed, based on a review of the literature and on questionnaires. A prototype based on the model was built using Java and WWW technology. To test the effectiveness of the system, an evaluation was conducted. Questionnaires were used to gather responses from the users. The evaluation ascertained that the model developed is flexible and adaptable to the different working modes, and that the system is capable of supporting several different models of the software inspection process.
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
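The "data driven" principle described can be illustrated generically: a self-describing XML document names components and their parameters, and a small loader instantiates them at runtime. The component names and schema below are hypothetical, not the Fluid project's format:

import xml.etree.ElementTree as ET

# Hypothetical content format; the Fluid project's actual schema is not
# given in the abstract.
SCENE_XML = """
<scene>
  <component type="ParticleEmitter" rate="200" lifetime="1.5"/>
  <component type="Camera" fov="60"/>
</scene>
"""

class ParticleEmitter:
    def __init__(self, rate, lifetime):
        self.rate, self.lifetime = int(rate), float(lifetime)

class Camera:
    def __init__(self, fov):
        self.fov = float(fov)

REGISTRY = {"ParticleEmitter": ParticleEmitter, "Camera": Camera}

def load_scene(xml_text):
    """Instantiate components from content data: data drives runtime behaviour."""
    root = ET.fromstring(xml_text)
    return [REGISTRY[el.get("type")](**{k: v for k, v in el.attrib.items() if k != "type"})
            for el in root.findall("component")]

for component in load_scene(SCENE_XML):
    print(type(component).__name__, vars(component))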
Abstract:
PURPOSE. To establish an alternative method, sequential and diameter response analysis (SDRA), to determine dynamic retinal vessel responses and their time course in serial stimulation, compared with the established method of averaged diameter responses and standard static assessment. METHODS. SDRA focuses on individual time and diameter responses, taking into account the fluctuation in baseline diameter, providing improved insight into reaction patterns when compared with established methods as delivered by retinal vessel analyzer (RVA) software. SDRA patterns were developed with measurements from 78 healthy nonsmokers and subsequently validated in a group of 21 otherwise healthy smokers. Fundus photography and retinal vessel responses were assessed by RVA, intraocular pressure by contact tonometry, and blood pressure by sphygmomanometry. RESULTS. Compared with the RVA software method, SDRA demonstrated a marked difference in retinal vessel responses to flickering light (P < 0.05). As a validation of that finding, SDRA showed a strong relation between baseline retinal vessel diameter and subsequent dilatory response in both healthy subjects and smokers (P < 0.001). The RVA software was unable to detect this difference or to find a difference in retinal vessel arteriovenous ratio between smokers and nonsmokers (P = 0.243). However, SDRA revealed that smokers' vessels showed both an increased level of arterial baseline diameter fluctuation before flicker stimulation (P = 0.005) and an increased stiffness of retinal arterioles (P = 0.035) compared with those in nonsmokers. These differences were unrelated to intraocular pressure or systemic blood pressure. CONCLUSIONS. SDRA shows promise as a tool for the assessment of vessel physiology. Further studies are needed to explore its application in patients with vascular diseases.
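As a generic illustration of the quantities SDRA considers (the abstract does not specify the algorithm; the window choice and data below are invented), baseline fluctuation and percent dilation can be derived from a vessel-diameter time series around flicker onset:

from statistics import mean, stdev

def diameter_response(diameters, flicker_onset):
    """Baseline fluctuation and percent dilation from a vessel-diameter series.
    Illustrative only; not the SDRA algorithm itself."""
    baseline = diameters[:flicker_onset]
    base_mean = mean(baseline)
    fluctuation = stdev(baseline)          # pre-stimulus variability
    peak = max(diameters[flicker_onset:])
    dilation_pct = (peak - base_mean) / base_mean * 100
    return fluctuation, dilation_pct

# Invented series (arbitrary measurement units, one sample per time step).
series = [118, 120, 119, 121, 120, 122, 126, 129, 131, 130, 127, 124]
fluct, dil = diameter_response(series, flicker_onset=6)
print(f"baseline fluctuation = {fluct:.2f}, dilation = {dil:.1f}%")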