827 results for Tool use
Abstract:
Since the advent of high-level programming languages (HLPLs) in the early 1950s, researchers have sought ways to automate the construction of HLPL compilers. To this end, a variety of Translator Writing Tools (TWTs) have been developed over the last three decades. However, only a few of these tools have gained significant commercial acceptance. This thesis re-examines traditional compiler construction techniques, along with a number of previous TWTs, and proposes a new, improved tool for automated compiler construction called the Aston Compiler Constructor (ACC). This new tool allows the specification of complete compilation systems using a high-level, compiler-oriented specification notation called the Compiler Construction Language (CCL). This specification notation is based on a modern variant of Backus Naur Form (BNF) and an extended variant of Attribute Grammars (AGs). The implementation and processing of the CCL is discussed, along with an extensive CCL example. The CCL is shown to have extensive expressive power, to be convenient to use and highly readable, and thus to be a superior alternative both to earlier TWTs and to traditional compiler construction techniques. The execution performance of CCL specifications is evaluated and shown to be acceptable. A number of related areas are also addressed, including tools for the rapid construction of individual compiler components, and tools for the construction of compilation systems for multiprocessor operating systems and hardware. This latter area is expected to become of particular interest in future years due to the anticipated increase in the use of multiprocessor architectures.
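The CCL notation itself is not reproduced in the abstract, but the attribute-grammar idea it extends can be illustrated with a minimal sketch. Everything below (the toy grammar, node structure and attribute names) is a hypothetical illustration of synthesized-attribute evaluation in general, not the ACC or the CCL.

```python
# A minimal illustration of the attribute-grammar idea underlying tools
# like ACC: each grammar production carries semantic rules that compute
# attribute values over the parse tree. The toy grammar here is invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    symbol: str                # grammar symbol, e.g. "expr" or "digit"
    children: List["Node"] = field(default_factory=list)
    text: str = ""             # lexeme for terminal nodes

def synthesize_val(node: Node) -> int:
    """Compute the synthesized attribute 'val' bottom-up, as an
    attribute evaluator generated from an AG specification would."""
    if node.symbol == "digit":         # rule: digit.val = int(lexeme)
        return int(node.text)
    if node.symbol == "expr":          # rule: expr.val = lhs.val + rhs.val
        lhs, rhs = node.children
        return synthesize_val(lhs) + synthesize_val(rhs)
    raise ValueError(f"no semantic rule for {node.symbol}")

# Parse tree for "2 + 3", as a parser generated from the BNF might build it.
tree = Node("expr", [Node("digit", text="2"), Node("digit", text="3")])
print(synthesize_val(tree))  # -> 5
```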
Abstract:
The main objective of the work presented in this thesis is to investigate the two sides of the flute of a twist drill: the face and the heel. The flute face was designed to yield straight diametral lips which could be extended to eliminate the chisel edge, so that a single cutting edge is obtained. Since drill rigidity and space for chip conveyance must be balanced as a compromise, a theoretical expression is deduced which enables optimum chip disposal capacity to be described in terms of drill parameters; this expression is used to describe the flute heel side. Another main objective is to study the effect on drill performance of changing the conventional drill flute. Drills were manufactured according to the new flute design, and tests were run to compare the performance of a conventional-flute drill with the non-conventional design put forward. The results showed that a 50% reduction in thrust force and an approximately 18% reduction in torque were attained for the new design. The flank wear, measured at the outer corner, was found to be less for the new-design drill than for the conventional one in the majority of cases. Hole quality, roundness, size and roughness were also considered as further aspects of drill performance. Improvement in hole quality is shown to arise under certain cutting conditions; accordingly, it might be possible to accept a hole produced in one pass of the new drill where a drilled and reamed hole would previously have been required. A subsidiary objective is to design the form milling cutter to be employed for milling the foregoing special flute from the drill blank, allowing for the interference effect; a mathematical analysis, implemented computationally, is used. To control the grinding parameters, a prototype drill grinder was designed and built on the framework of an existing Cincinnati cutter grinder. The design and build of the new grinder are based on a computer-aided drill point geometry analysis. In addition to the conical grinding concept, the new grinder is also used to produce a spherical point, likewise using computer-aided drill point geometry analysis.
Abstract:
This work is undertaken in an attempt to understand the processes at work at the cutting edge of the twist drill. Extensive drill life testing performed by the University has reinforced a survey of previously published information. This work demonstrated that there are two specific aspects of drilling which have not previously been explained comprehensively. The first concerns the interrelating of process data between differing drilling situations. No method is currently available which allows the cutting geometry of drilling to be defined numerically, so such comparisons, where made, are purely subjective. Section one examines this problem by taking as an example a 4.5 mm drill suitable for use with aluminium. This drill is examined using a prototype solid modelling program to explore how the required numerical information may be generated. The second aspect is the analysis of drill stiffness: what aspects of drill stiffness account for the very great difference in performance between short-, medium- and long-flute-length drills? These differences exist between drills of identical point geometry, and the practical superiority of short drills has been known to shop-floor drilling operatives since drilling was first introduced. This problem has been dismissed repeatedly as over-complicated, but section two provides a first approximation and shows that, at least for smaller drills of 4.5 mm, the effects are highly significant. Once the cutting action of the twist drill is defined geometrically, a huge body of machinability data becomes applicable to the drilling process. Work remains to interpret the very high inclination angles of the drill cutting process in terms of cutting forces and tool wear, but aspects of drill design may already be looked at in new ways, with the prospect of a more analytical approach rather than the present mix of experience and trial and error. Other problems are specific to the twist drill, such as the behaviour of the chips in the flute. It is now possible to predict the initial direction of chip flow leaving the drill cutting edge; in future, the parameters of further chip behaviour may also be explored within this geometric model.
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid-cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
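The thesis's segmentation methods are only named in the abstract; as a general illustration, the sketch below computes one common texture measure (local variance in a moving window) of the kind often added as an extra band before classification. The window size and the synthetic two-class scene are invented; this is not the sequential or parallel segmentation method evaluated in the work.

```python
# A minimal sketch of a moving-window texture measure: per-pixel local
# variance, which tends to be low in homogeneous forest stands and high
# in heterogeneous (e.g. urban) areas. Purely illustrative.

import numpy as np

def local_variance(band: np.ndarray, win: int = 3) -> np.ndarray:
    """Return per-pixel variance over a win x win neighbourhood."""
    pad = win // 2
    padded = np.pad(band, pad, mode="reflect")
    out = np.empty_like(band, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

rng = np.random.default_rng(0)
# Synthetic scene: a smooth "forest" block beside a noisy "urban" block.
scene = np.hstack([np.full((32, 16), 80.0),
                   rng.normal(120, 25, (32, 16))])
texture = local_variance(scene)
print(texture[:, :16].mean(), texture[:, 16:].mean())  # low vs high
```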
Abstract:
Motivation: T-cell epitope identification is a critical immunoinformatic problem within vaccine design. To be an epitope, a peptide must bind an MHC protein. Results: Here, we present EpiTOP, the first server predicting MHC class II binding based on proteochemometrics, a QSAR approach for ligands binding to several related proteins. EpiTOP uses a quantitative matrix to predict binding to 12 HLA-DRB1 alleles. It identifies 89% of known epitopes within the top 20% of predicted binders, reducing laboratory labour, materials and time by 80%. EpiTOP is easy to use, gives comprehensive quantitative predictions and will be expanded and updated with new quantitative matrices over time.
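EpiTOP's fitted proteochemometric matrices are not given in the abstract; the sketch below only illustrates the general quantitative-matrix idea it rests on: an additive, position-specific score over a 9-residue binding core, with the best-scoring core taken as the predicted binder. The matrix values and the sequence are invented placeholders, not EpiTOP's coefficients.

```python
# A minimal sketch of quantitative-matrix (PSSM-style) scoring: each
# position of a 9-residue binding core contributes an additive term per
# amino acid. Matrix values are deterministic placeholders.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Hypothetical 9-position quantitative matrix: qm[pos][aa] -> float.
qm = [{aa: 0.1 * ((pos * 7 + i) % 11 - 5)
       for i, aa in enumerate(AMINO_ACIDS)}
      for pos in range(9)]

def score_core(core: str) -> float:
    """Additive score of one 9-mer binding core."""
    return sum(qm[pos][aa] for pos, aa in enumerate(core))

def best_core(protein: str) -> str:
    """Slide a 9-residue window over the sequence; return the top core."""
    cores = [protein[i:i + 9] for i in range(len(protein) - 8)]
    return max(cores, key=score_core)

seq = "MKTFFVLLLCTFTVLKA"  # illustrative sequence only
top = best_core(seq)
print(top, round(score_core(top), 2))
```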
Abstract:
G protein coupled receptors (GPCRs) are highly flexible and dynamic proteins, which are able to interact with diverse ligands, effectors, and regulatory proteins. Site-directed mutagenesis (SDM) is a powerful tool for providing insight into how these proteins actually work, both in its own right and when used in conjunction with information provided by other techniques such as crystallography or molecular modelling. Mutagenesis has been used to identify and characterise a myriad of functionally important residues, motifs and domains within the GPCR architecture, and to identify aspects of similarity and differences between the major families of GPCRs. This chapter presents the necessary information for undertaking informative SDM of these proteins. Whilst this is relevant to protein structure/function studies in general, specific pitfalls and protocols suited to investigating GPCRs in particular will be highlighted.
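As a small illustration of the kind of routine step such SDM protocols involve, the sketch below designs a mutagenic oligo carrying a desired codon change flanked by template-matching sequence. The template, flank length and codon choice are hypothetical, not taken from the chapter's protocols.

```python
# A minimal sketch of mutagenic-primer design for site-directed
# mutagenesis: the new codon is embedded between flanks that match the
# template. All sequences here are invented for illustration.

TEMPLATE = "ATGGCTAGCAAAGGAGAAGAACTTTTC"  # hypothetical coding sequence

def mutagenic_primer(template: str, codon_index: int,
                     new_codon: str, flank: int = 9) -> str:
    """Replace codon `codon_index` (0-based) and return a primer with
    `flank` nucleotides of matching sequence on each side."""
    if len(new_codon) != 3:
        raise ValueError("codon must be 3 nt")
    start = codon_index * 3
    left = max(0, start - flank)
    right = min(len(template), start + 3 + flank)
    return template[left:start] + new_codon + template[start + 3:right]

# e.g. mutate codon 4 (AAA, Lys) to GCA (Ala) -> a K4A mutant primer
print(mutagenic_primer(TEMPLATE, 3, "GCA"))
```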
Abstract:
Formulating manufacturing business strategy is often fragmented, inasmuch as current tools address upstream and downstream vertical integration with product integration or, more recently, product and infrastructure integration. Rarely do tools address all of these dimensions in a holistic manner. The research described in this paper was undertaken in the MAPSTRAT project: a scoping study with industrial partners aiming to satisfy this business need. A comprehensive literature study is described and contextualized using six case studies. The paper stresses the importance of ‘joined-up thinking’ and outlines plans for an appropriate tool that is under development.
Abstract:
Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise and allow interrogation of the key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated into the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby, it is hoped, improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation. When integrated into the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can either be a standard ISO-compliant metadata record supplied with the dataset or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
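As an illustration of the facet-availability assessment described above, the sketch below checks producer and feedback metadata for the eight informational aspects and reports which can be shown on a label. The field names and plain-dict encoding are invented stand-ins; the actual GeoViQua documents are ISO-based metadata records processed by the GEO label service.

```python
# A minimal sketch of GEO-label-style facet availability: producer
# metadata supplies five facets, feedback metadata the other three.
# All keys below are hypothetical placeholders.

PRODUCER_FACETS = {
    "producer_profile": "producer",
    "producer_comments": "quality_comments",
    "standards_compliance": "standards",
    "citations": "citations",
    "quantitative_quality": "quality_measures",
}
FEEDBACK_FACETS = {
    "community_advice": "comments",
    "expert_review": "expert_reviews",
    "user_ratings": "ratings",
}

def assess_facets(producer_md: dict, feedback_md: dict) -> dict:
    """Return availability (True/False) for each of the eight facets."""
    facets = {name: bool(producer_md.get(key))
              for name, key in PRODUCER_FACETS.items()}
    facets.update({name: bool(feedback_md.get(key))
                   for name, key in FEEDBACK_FACETS.items()})
    return facets

producer_md = {"producer": "EO Centre", "standards": ["ISO 19115"]}
feedback_md = {"ratings": [4, 5], "comments": []}
print(assess_facets(producer_md, feedback_md))
```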
Abstract:
This project is focused on exchanging knowledge between ABS, UKBI and managers of business incubators in the UK. The project relates to the exploitation of the extant knowledge base on assessing and improving business incubation management practice and performance, and builds on two earlier studies. It addresses a pressing need for assessing and benchmarking business incubation input, process and outcome performance and highlighting best practice. The overarching aim of this project was to obtain proof-of-concept for a business incubation performance assessment and benchmarking online tool, fine-tune it, and put it into use by nurturing a community of business incubation management practice aligned by the resultant tool. The purpose was to offer an appropriate set of measures, in areas identified as critical by relevant research on business incubation performance management and impact, against which: (1) the input and process performance of business incubation management practice can be assessed and benchmarked within the auspices of a community of incubator managers concerned with best practice; and (2) the outcome performance and impact of business incubators can be assessed longitudinally. As such, the developed online assessment framework is geared towards the needs of researchers, policy makers and practitioners concerned with business incubation performance, added value and impact.
Abstract:
Developers of interactive software are confronted by an increasing variety of software tools to help engineer the interactive aspects of software applications. Typically resorting to ad hoc means of tool selection, developers are often dissatisfied with their chosen tool because it lacks required functionality or does not fit seamlessly within the context in which it is to be used. This paper describes a system for evaluating the suitability of user interface development tools for use in software development organisations and projects, such that the selected tool appears ‘invisible’ within its anticipated context of use. The paper also outlines and presents the results of an informal empirical study and a series of observational case studies of the system.
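The paper's evaluation system is not reproduced in the abstract; as a rough illustration of context-sensitive tool selection in general, the sketch below ranks candidate tools by criteria weighted for the anticipated context of use. The criteria, weights and ratings are invented, not the paper's instrument.

```python
# A minimal sketch of context-weighted suitability scoring: each
# criterion's weight reflects its importance in the organisation or
# project where the tool must appear 'invisible'. Values are invented.

CRITERIA_WEIGHTS = {
    "functionality": 0.4,
    "integration": 0.35,   # fit with the existing toolchain and process
    "learnability": 0.25,
}

candidates = {             # ratings on a 0-5 scale per criterion
    "ToolA": {"functionality": 5, "integration": 2, "learnability": 4},
    "ToolB": {"functionality": 4, "integration": 5, "learnability": 3},
}

def suitability(ratings: dict) -> float:
    """Weighted sum of criterion ratings."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

ranked = sorted(candidates, key=lambda t: suitability(candidates[t]),
                reverse=True)
for tool in ranked:
    print(tool, round(suitability(candidates[tool]), 2))
```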
Abstract:
There is increasing pressure on university staff to provide ever more information and resources to students. This study investigated student opinions on (audio) podcasts and (video) vodcasts, and how well they met requirements and aided learning processes. Two experiments at Aston University examined student opinion on, and usage of, podcasts and vodcasts for a selection of their psychology lectures. Recordings were produced first using a hand-held camcorder, and then by the in-house media department. WebCT was used to distribute the podcasts and vodcasts, and attitude questionnaires were circulated at two time points. Overall, students indicated that podcasts and vodcasts were a beneficial additional resource for learning, particularly when used in conjunction with lecturers’ slides and as a tool for revision/assessment. The online material gave students an increased understanding of the material, supplementing and enhancing their learning without being a substitute for traditional lectures. There is scope for the provision of portable media files to become standard practice within higher education, integrating distance and online learning with traditional approaches to improve teaching and learning.
Abstract:
Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the process of software development, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models, and supports more effective software design in terms of understanding, sharing and reuse in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology, with tool support, to automatically derive ontological metadata from formal software models and describe them semantically. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. Finally, a method with a prototype tool is presented to enhance semantic querying of software models and other artefacts.
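As a small illustration of deriving ontological metadata from a formal model, the sketch below (using the third-party rdflib package) turns a toy Z-like schema into RDF triples. The namespace, property names and schema encoding are placeholders, not the paper's actual ontology or framework.

```python
# A minimal sketch of emitting semantic metadata for a formal model:
# a toy Z-style schema is described as RDF and serialized as Turtle.
# Requires: pip install rdflib. All names below are hypothetical.

from rdflib import Graph, Literal, Namespace, RDF

FM = Namespace("http://example.org/formal-models#")

# Toy in-memory representation of a Z schema: name plus declared variables.
schema = {"name": "BirthdayBook", "variables": ["known", "birthday"]}

g = Graph()
g.bind("fm", FM)
subject = FM[schema["name"]]
g.add((subject, RDF.type, FM.ZSchema))           # type the schema
for var in schema["variables"]:
    g.add((subject, FM.declaresVariable, Literal(var)))

print(g.serialize(format="turtle"))
```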
Abstract:
The experience accumulated over many years in polyparametric cognitive modeling of different physiological processes (electrocardiogram, electroencephalogram, electroreovasogram and others), together with the diagnostic methods developed on this basis, gives grounds for formulating a new methodology of system analysis in biology. The gist of the methodology is the parametrization of fractals of electrophysiological processes, a matrix description of the functional state of an object using a unified set of parameters, and the construction of a polyparametric cognitive geometric model with artificial-intelligence algorithms. The geometric model displays the parameter relationships in a form adequate to the requirements of the system approach. The objective character of the model elements and the high degree of formalization, which facilitates the use of mathematical methods, are advantages of these models. At the same time, the geometric images are easily interpreted in physiological and clinical terms. Polyparametric modeling is an object-oriented tool possessing advanced functional facilities and some distinctive features.
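As a rough illustration of the geometric-model idea, the sketch below normalizes a unified set of physiological parameters against reference norms and maps them onto the vertices of a radar-like polygon, whose shape then summarizes the functional state. The parameter names, norms and readings are invented; the actual polyparametric models are far richer.

```python
# A minimal sketch of mapping a unified parameter set onto a geometric
# image: one spoke per parameter, radius = measured/norm, so an 'ideal'
# state is a regular polygon of radius 1. Values are invented.

import math

NORMS = {"p1": 60.0, "p2": 120.0, "p3": 0.8, "p4": 35.0}  # reference values

def geometric_image(readings: dict) -> list:
    """Return polygon vertices (x, y) for the normalized parameters."""
    n = len(readings)
    vertices = []
    for k, (name, value) in enumerate(sorted(readings.items())):
        r = value / NORMS[name]          # deviation from the norm
        angle = 2 * math.pi * k / n      # spoke direction
        vertices.append((r * math.cos(angle), r * math.sin(angle)))
    return vertices

patient = {"p1": 72.0, "p2": 135.0, "p3": 0.6, "p4": 35.0}
for x, y in geometric_image(patient):
    print(f"{x:6.2f} {y:6.2f}")
```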
Abstract:
METPEX is a three-year FP7 project which aims to develop a pan-European tool to measure the quality of the passenger's experience of multimodal transport. Initial work has led to the development of a comprehensive set of variables relating to different passenger groups, forms of transport and journey stages. This paper addresses the main challenges in transforming the variables into usable, accessible computer-based tools allowing for the real-time collection of information across multiple journey stages in different EU countries. Non-computer-based measurement instruments will be used to gather information from those who may not have, or be familiar with, mobile technology. Smartphone-based measurement instruments will also be used, hosted in two applications. The mobile applications need to be easy to use, configurable and adaptable according to the context of use. They should also be inherently interesting and rewarding for the participant, whilst allowing for the collection of high-quality, valid and reliable data from all journey types and stages (from planning, through entry into and egress from different transport modes, to travel on public and personal vehicles and support of active forms of transport such as cycling and walking). During all phases of data collection and processing, the privacy of the participant is respected and ensured.
Abstract:
The mechanisms for regulating PIKfyve complex activity are currently emerging. The PIKfyve complex, consisting of the phosphoinositide kinase PIKfyve (also known as FAB1), VAC14 and FIG4, is required for the production of phosphatidylinositol-3,5-bisphosphate (PI(3,5)P2). PIKfyve function is required for homeostasis of the endo/lysosomal system and is crucially implicated in neuronal function and integrity, as loss-of-function mutations in the PIKfyve complex lead to neurodegeneration in mouse models and human patients. Our recent work has shown that the intracellular domain of the Amyloid Precursor Protein (APP), a molecule central to the aetiology of Alzheimer's disease, binds to VAC14 and enhances PIKfyve function. Here we utilise this recent advance to create an easy-to-use tool for increasing PIKfyve activity in cells. We fused APP's intracellular domain (AICD) to the HIV TAT domain, a cell-permeable peptide that allows proteins to penetrate cells. The resultant TAT-AICD fusion protein is cell permeable and triggers an increase in PI(3,5)P2. Using the PI(3,5)P2-specific GFP-ML1Nx2 probe, we show that cell-permeable AICD alters PI(3,5)P2 dynamics. TAT-AICD also provides partial protection from pharmacological inhibition of PIKfyve. These three lines of evidence show that the APP intracellular domain activates the PIKfyve complex in cells, a finding that is important for our understanding of the mechanism of neurodegeneration in Alzheimer's disease.