942 results for software, translation, validation tool, VMNET, Wikipedia, XML
Abstract:
Scientific studies multiply day by day, and computer programs make human life easier. Scientists examine the neural structure of the human brain and model it in the computer, giving the result the name artificial neural network; such networks can then be applied to increasingly complex problems. The purpose of this study is to estimate the fuel economy of an automobile engine by using an artificial neural network (ANN) algorithm. Engine characteristics were simulated by using the "Neuro Solution" software. The same data were also used in MATLAB to compare MATLAB's performance on such a problem and to show its validity. The number of cylinders, displacement, power, weight, acceleration and vehicle production year are used as input data, and miles per gallon (MPG) is used as the target. An artificial neural network model was developed; 70% of the data were used for training, 15% for testing and 15% for validation. In creating the model, the number of neurons was carefully selected to increase the speed of the network. Since the problem has a nonlinear structure, multiple layers are used in the model.
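As an illustration of the kind of model described, the following is a minimal sketch, assuming scikit-learn and a hypothetical auto_mpg.csv file with the standard Auto MPG attributes; the column names, file path, network size and 70/15/15 split handling are illustrative assumptions, not the authors' Neuro Solution or MATLAB setup.

```python
# Minimal sketch: a small multilayer network predicting MPG from engine features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

data = pd.read_csv("auto_mpg.csv")  # hypothetical data file
X = data[["cylinders", "displacement", "horsepower", "weight", "acceleration", "model_year"]]
y = data["mpg"]

# 70% training, 15% validation, 15% test, as in the abstract.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

# Multiple hidden layers, since the problem is nonlinear.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))
print("test R^2:", model.score(X_test, y_test))
```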
Abstract:
Educational games such as quizzes, quests, puzzles, mazes and logical problems may be modeled as multimedia board games. Within the scope of the ADOPTA project under development at the Faculty of Mathematics and Informatics at Sofia University, a formal model for the presentation of such educational board games was devised and elaborated. Educational games can be modeled as special board mini-games, with a board of any form and any types of positions. Figures (objects) with certain properties are placed on defined positions, and formal rules are then defined for the manipulation of these figures and the resulting effects. The model has been found to be general enough to allow the description and execution control of more complex logical problems, which are solved through several actions delivered to/by the player according to formal rules and context conditions, and, in general, of any learning activities and their workflow. It is used as a basis for a software platform providing facilities for easy construction of multimedia board games and their execution. The platform consists of a game designer (i.e., a game authoring tool) and a game run-time controller communicating with each other through a game repository. Many examples of educational board games suitable for didactic purposes, self-evaluations, etc. have been created and modeled; such games are intended to be designed easily by authors with no IT skills or experience. By means of game metadata descriptions, these games are to be included into narrative storyboards and then delivered to learners with an appropriate profile according to their learning style, preferences, etc. Moreover, the use of artificial intelligence agents is planned as well, either as virtual opponents of the player or as virtual advisers helping him/her to find the right solution within a given domain, such as discovering a treasure using a location map, finding the best tour in a virtual museum, guessing an unknown word in a hangman game, and many others.
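A minimal sketch of what such a board model might look like, written in plain Python rather than the ADOPTA formalism; the class names, the representation of positions as named slots, and the rule interface are all illustrative assumptions.

```python
# Illustrative sketch only: a board of arbitrary positions, figures with
# properties placed on positions, and rules expressed as a move predicate
# plus an effect. Names are hypothetical, not taken from the ADOPTA model.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Optional

@dataclass
class Figure:
    name: str
    properties: Dict[str, Any] = field(default_factory=dict)

@dataclass
class Board:
    positions: Dict[str, Optional[Figure]]   # a board of any form: just named slots

@dataclass
class Rule:
    allowed: Callable[[Board, str, str], bool]   # may the figure move from src to dst?
    effect: Callable[[Board, str, str], None]    # the resulting effect of the move

def apply_move(board: Board, rule: Rule, src: str, dst: str) -> bool:
    """Move a figure from src to dst if the rule permits it, applying its effect."""
    if board.positions.get(src) is None or not rule.allowed(board, src, dst):
        return False
    rule.effect(board, src, dst)
    board.positions[dst] = board.positions[src]
    board.positions[src] = None
    return True
```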
Abstract:
Fetal phonocardiography, a passive and low-cost acoustic recording of fetal heart sounds, is a valuable alternative to traditional diagnostic tools for recording fetal heart rate and monitoring general fetal wellbeing. This paper presents simulation software for fetal phonocardiographic signals corresponding to different fetal physiological states and recording conditions (for example, different kinds and levels of noise). The software can be useful for testing and assessing algorithms that extract fetal heart rate from fetal phonocardiographic recordings, and as a teaching tool for demonstrations to medical students and others. © 2010 IEEE.
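A highly simplified sketch of what such a simulator does, assuming NumPy: periodic S1/S2 sound bursts at a chosen fetal heart rate plus additive noise at a chosen level. The burst shapes, frequencies and noise model are placeholders, not the signal model used in the paper.

```python
# Toy fetal-PCG generator: Gaussian-enveloped tone bursts per heartbeat plus noise.
import numpy as np

def simulate_fpcg(duration_s=10.0, fs=1000, fhr_bpm=140, noise_std=0.05):
    t = np.arange(0, duration_s, 1.0 / fs)
    signal = np.zeros_like(t)
    beat_period = 60.0 / fhr_bpm
    for beat_start in np.arange(0, duration_s, beat_period):
        # (onset within beat, tone frequency, envelope width) for S1 and S2
        for onset, freq, width in [(0.0, 40.0, 0.05), (0.3 * beat_period, 60.0, 0.03)]:
            centre = beat_start + onset
            signal += np.exp(-((t - centre) ** 2) / (2 * width ** 2)) * np.sin(2 * np.pi * freq * t)
    return t, signal + np.random.normal(0.0, noise_std, size=t.shape)

t, fpcg = simulate_fpcg()  # e.g. feed this to an FHR extraction algorithm under test
```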
Abstract:
ACM Computing Classification System (1998): J.2.
Abstract:
OBJECTIVE: The authors developed and validated a clozapine-specific side-effects scale capable of eliciting the subjectively unpleasant side-effects of clozapine. METHODS: Questions from the original Glasgow Antipsychotic Side-effects Scale (GASS) were compared to a list of the most commonly reported clozapine side-effects, and those with a significant subjective burden were included in the GASS for Clozapine (GASS-C). The original authors of the GASS and a group of mental health professionals from the UK and Ireland were enlisted to comment on the questions in the GASS-C based on their clinical experience. 110 clozapine outpatients from two sites completed the GASS-C, the original GASS and a repeat GASS-C. Statistical analyses were performed using SPSS for Windows version 19. RESULTS: The GASS-C was shown to have construct validity, in that Spearman's correlation coefficient was 0.816 (p<0.001) with the original GASS, whilst Cohen's kappa coefficient was >0.77 (p<0.001) for one question and >0.81 (p<0.001) for the remaining relevant questions. The GASS-C was also shown to have strong test-retest reliability, in that Cronbach's alpha coefficient was >0.907 (p<0.001), whilst Cohen's kappa coefficient was >0.81 (p<0.001) for 12 questions and >0.61 (p<0.001) for the remaining four questions. CONCLUSION: The GASS-C is a valid and reliable clinical tool that enables a systematic assessment of the subjectively unpleasant side-effects of clozapine. Future research should focus on how the scale can be utilised as a clinical tool to improve real-world outcomes such as adherence to clozapine therapy and quality of life.
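For illustration, the statistics reported above (Spearman's correlation, Cohen's kappa, Cronbach's alpha) can be computed as in the following sketch, assuming NumPy, SciPy and scikit-learn and purely synthetic stand-in data; the real GASS-C items and patient responses are not reproduced here.

```python
# Sketch of the reliability/validity statistics on made-up data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
gass = rng.integers(0, 60, size=110)              # stand-in totals, original GASS
gass_c = gass + rng.integers(-5, 6, size=110)     # stand-in totals, GASS-C

rho, p = spearmanr(gass, gass_c)                  # construct validity vs. original GASS

item_t1 = rng.integers(0, 4, size=110)            # one item at first administration
item_t2 = item_t1.copy()                          # and at retest
kappa = cohen_kappa_score(item_t1, item_t2)       # per-item test-retest agreement

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(rho, p, kappa, cronbach_alpha(rng.integers(0, 4, size=(110, 16))))
```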
Abstract:
Software product line modeling aims at capturing a set of software products in an economic yet meaningful way. We introduce a class of variability models that capture the sharing between the software artifacts forming the products of a software product line (SPL) in a hierarchical fashion, in terms of commonalities and orthogonalities. Such models are useful when analyzing and verifying all products of an SPL, since they provide a scheme for divide-and-conquer-style decomposition of the analysis or verification problem at hand. We define an abstract class of SPLs for which variability models can be constructed that are optimal w.r.t. the chosen representation of sharing. We show how the constructed models can be fed into a previously developed algorithmic technique for compositional verification of control-flow temporal safety properties, so that the properties to be verified are iteratively decomposed into simpler ones over orthogonal parts of the SPL, and are not re-verified over the shared parts. We provide tool support for our technique, and evaluate our tool on a small but realistic SPL of cash desks.
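A caricature of the divide-and-conquer idea in plain Python (not the authors' compositional verification algorithm): results for shared artifacts are cached so they are not re-verified per product. The function names, the product representation and the cash-desk example data are assumptions.

```python
# Shared parts of the SPL are checked once; per-product work covers only the rest.
from typing import Callable, Dict, Set

def verify_spl(products: Dict[str, Set[str]],
               check: Callable[[str], bool]) -> Dict[str, bool]:
    verified: Dict[str, bool] = {}            # artifact -> result, shared across products
    results: Dict[str, bool] = {}
    for name, artifacts in products.items():
        for artifact in artifacts:
            if artifact not in verified:      # shared artifacts are not re-verified
                verified[artifact] = check(artifact)
        results[name] = all(verified[a] for a in artifacts)
    return results

# e.g. three cash-desk products sharing a common core artifact
print(verify_spl({"basic":   {"core", "scanner"},
                  "loyalty": {"core", "scanner", "loyalty_card"},
                  "express": {"core", "express_lane"}},
                 check=lambda artifact: artifact != "express_lane"))
```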
Abstract:
The growth of the discipline of translation studies has been accompanied by a renewed reflection on the object of research and our metalanguage. These developments have also been necessitated by the diversification of professions within the language industry. The very label translation is often avoided in favour of alternative terms, such as localisation (of software), transcreation (of advertising), transediting (of information from press agencies). The competences framework developed for the European Master’s in Translation network speaks of experts in multilingual and multimedia communication to account for the complexity of translation competence. This paper addresses the following related questions: (i) How can translation competence in such a wide sense be developed in training programmes? (ii) Do some competences required in the industry go beyond translation competence? and (iii) What challenges do labels such as transcreation pose?
Abstract:
Purpose: To develop and validate a classification system for focal vitreomacular traction (VMT) with and without macular hole based on spectral domain optical coherence tomography (SD-OCT), intended to aid in decision-making and prognostication. Methods: A panel of retinal specialists convened to develop this system. A literature review followed by discussion on a wide range of cases formed the basis for the proposed classification. Key features on OCT were identified and analysed for their utility in clinical practice. A final classification was devised based on two sequential, independent validation exercises to improve interobserver variability. Results: This classification tool pertains to idiopathic focal VMT assessed by a horizontal line scan using SD-OCT. The system uses width (W), interface features (I), foveal shape (S), retinal pigment epithelial changes (P), elevation of vitreous attachment (E), and inner and outer retinal changes (R) to give the acronym WISPERR. Each category is scored hierarchically. Results from the second independent validation exercise indicated a high level of agreement between graders: intraclass correlation ranged from 0.84 to 0.99 for continuous variables and Fleiss' kappa values ranged from 0.76 to 0.95 for categorical variables. Conclusions: We present an OCT-based classification system for focal VMT that allows anatomical detail to be scrutinised and scored qualitatively and quantitatively using a simple, pragmatic algorithm, which may be of value in clinical practice as well as in future research studies.
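A simple data-record sketch of the WISPERR categories named in the abstract, in Python; the field types and the example values are placeholders, not the published grading levels.

```python
# One grading record per eye/scan, with one field per WISPERR category.
from dataclasses import dataclass

@dataclass
class WisperrGrade:
    width_um: float          # W: width of vitreomacular attachment (continuous)
    interface: int           # I: interface features, hierarchical grade
    foveal_shape: int        # S: foveal shape grade
    rpe_changes: int         # P: retinal pigment epithelial changes
    elevation: int           # E: elevation of vitreous attachment
    retinal_changes: int     # R: inner and outer retinal changes

example = WisperrGrade(width_um=350.0, interface=1, foveal_shape=2,
                       rpe_changes=0, elevation=1, retinal_changes=1)
```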
Abstract:
Social software is increasingly being used in higher and further education to support teaching and learning processes. These applications provide students with social and cognitive stimulation and also add to the interaction between students and educators. However, in addition to its benefits, the introduction of social software into a course environment can also have adverse implications for students, educators and the education institution as a whole, a phenomenon which has received much less attention in the literature. In this study we explore the various implications of introducing social software into a course environment in order to identify the associated benefits, but also the potential drawbacks. We draw on data from 20 social software initiatives in UK-based higher and further education institutions to identify the diverse experiences and concerns of students and educators. The findings are presented in the form of a SWOT analysis, which allows us to better understand the otherwise ambiguous implications of social software in terms of its strengths, weaknesses, opportunities and threats. From the analysis we have derived concrete recommendations for the use of social software as a teaching and learning tool.
Abstract:
This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content, and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems, by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text string found in related word-forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach here is "XML-centric", in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base. The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
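A minimal sketch of the edit-distance idea described above, matching a misspelling to the closest suggestion; this is the standard Levenshtein dynamic program, not the thesis's GRiST-specific processing, and the vocabulary shown is hypothetical.

```python
# Levenshtein edit distance and a simple "closest correction" lookup.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = curr
    return prev[-1]

def best_correction(word: str, vocabulary) -> str:
    return min(vocabulary, key=lambda v: edit_distance(word, v))

print(best_correction("anxeity", ["anxiety", "agitation", "apathy"]))  # -> "anxiety"
```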
Abstract:
Cash optimisation is a long-studied area of operations research, and in recent years both operational research and quantitative finance have paid much attention to cash management issues. This paper presents a cash management study for a bank, based on real-world data and carried out with mixed integer linear programming (MILP) models. Deterministic and stochastic approaches are compared. The classical cash management problem is extended in two directions: the currency management of the bank branch is also considered, as is the possibility of cash transports between bank branches. The MILP problems were solved with glpk (GNU Linear Programming Kit), a free software package, so the paper also gives a picture of the usability and limitations of this solver.
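A toy sketch in the spirit of the study, solving a tiny one-branch, two-currency fixed-charge cash-ordering problem with GLPK; PuLP is used here only as a convenient front end (an assumption, since the paper used glpk directly), and all data, bounds and costs are made up.

```python
# Minimal fixed-charge cash ordering MILP, solved with GLPK through PuLP.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, GLPK_CMD

days = range(3)
currencies = ["HUF", "EUR"]
demand = {("HUF", 0): 120, ("HUF", 1): 140, ("HUF", 2): 160,   # net cash need per day
          ("EUR", 0): 10,  ("EUR", 1): 12,  ("EUR", 2): 14}
hold_cost = {"HUF": 0.01, "EUR": 0.02}                          # cost of holding stock
order_fixed = 5.0                                               # fixed cost per cash order

idx = [(c, d) for c in currencies for d in days]
prob = LpProblem("branch_cash", LpMinimize)
stock = LpVariable.dicts("stock", idx, lowBound=0)
order = LpVariable.dicts("order", idx, lowBound=0)
placed = LpVariable.dicts("placed", idx, cat="Binary")

prob += lpSum(hold_cost[c] * stock[(c, d)] + order_fixed * placed[(c, d)] for c, d in idx)
for c in currencies:
    for d in days:
        prev = stock[(c, d - 1)] if d > 0 else 0
        prob += stock[(c, d)] == prev + order[(c, d)] - demand[(c, d)]   # cash balance
        prob += order[(c, d)] <= 10000 * placed[(c, d)]                  # fixed-charge link

prob.solve(GLPK_CMD(msg=False))   # requires the glpsol binary to be installed
```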
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. However, providing such a methodology is a challenge because of agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach to verifying models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems, and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency and safety of the medical information processing system. From successfully modeling and analyzing the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis, but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system demonstrates the high flexibility, efficiency and low cost of mobile agent technologies.
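A very informal Python sketch of the "dynamic channel" intuition: an agent's communication endpoint is rebound when the agent migrates, rather than being fixed at design time. This is not the PrT-net formalism, and all names are illustrative.

```python
# Toy mobile-agent messaging: delivery follows the agent's *current* host binding.
import queue

class Host:
    def __init__(self, name):
        self.name, self.inbox = name, queue.Queue()

class MobileAgent:
    def __init__(self, name, host):
        self.name, self.host = name, host
    def migrate(self, new_host):
        self.host = new_host                            # rebind the agent's channel
    def send(self, target_agent, msg):
        target_agent.host.inbox.put((self.name, msg))   # delivered via current binding

clinic, lab = Host("clinic"), Host("lab")
collector, analyzer = MobileAgent("collector", clinic), MobileAgent("analyzer", clinic)
analyzer.migrate(lab)                 # the analyzer moves; its channel moves with it
collector.send(analyzer, "patient record 42")
print(lab.inbox.get())                # ('collector', 'patient record 42')
```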
Abstract:
Modern software systems are often large and complicated. To better understand, develop, and manage large software systems, researchers have studied software architectures that provide the top level overall structural design of software systems for the last decade. One major research focus on software architectures is formal architecture description languages, but most existing research focuses primarily on the descriptive capability and puts less emphasis on software architecture design methods and formal analysis techniques, which are necessary to develop correct software architecture design. Refinement is a general approach of adding details to a software design. A formal refinement method can further ensure certain design properties. This dissertation proposes refinement methods, including a set of formal refinement patterns and complementary verification techniques, for software architecture design using Software Architecture Model (SAM), which was developed at Florida International University. First, a general guideline for software architecture design in SAM is proposed. Second, specification construction through property-preserving refinement patterns is discussed. The refinement patterns are categorized into connector refinement, component refinement and high-level Petri nets refinement. These three levels of refinement patterns are applicable to overall system interaction, architectural components, and underlying formal language, respectively. Third, verification after modeling as a complementary technique to specification refinement is discussed. Two formal verification tools, the Stanford Temporal Prover (STeP) and the Simple Promela Interpreter (SPIN), are adopted into SAM to develop the initial models. Fourth, formalization and refinement of security issues are studied. A method for security enforcement in SAM is proposed. The Role-Based Access Control model is formalized using predicate transition nets and Z notation. The patterns of enforcing access control and auditing are proposed. Finally, modeling and refining a life insurance system is used to demonstrate how to apply the refinement patterns for software architecture design using SAM and how to integrate the access control model. The results of this dissertation demonstrate that a refinement method is an effective way to develop a high assurance system. The method developed in this dissertation extends existing work on modeling software architectures using SAM and makes SAM a more usable and valuable formal tool for software architecture design.
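A plain-Python illustration of the Role-Based Access Control relation and the auditing pattern mentioned above; this is not the dissertation's predicate transition net or Z formalisation, and the roles, users and objects are invented for the example.

```python
# RBAC access check (user -> roles -> permissions) plus an audit trail of decisions.
from typing import Dict, List, Set, Tuple

role_permissions: Dict[str, Set[Tuple[str, str]]] = {
    "agent":   {("policy", "read")},
    "actuary": {("policy", "read"), ("policy", "price")},
    "admin":   {("policy", "read"), ("policy", "price"), ("policy", "delete")},
}
user_roles: Dict[str, Set[str]] = {"alice": {"actuary"}, "bob": {"agent"}}
audit: List[Tuple[str, str, str, bool]] = []

def check_access(user: str, obj: str, op: str) -> bool:
    return any((obj, op) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

def audited_access(user: str, obj: str, op: str) -> bool:
    decision = check_access(user, obj, op)
    audit.append((user, obj, op, decision))   # every decision is recorded
    return decision

print(audited_access("alice", "policy", "price"))   # True
print(audited_access("bob", "policy", "delete"))    # False
```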
Abstract:
Software architecture is the abstract design of a software system. It plays a key role as a bridge between requirements and implementation, and is a blueprint for development. The architecture represents a set of early design decisions that are crucial to a system. Mistakes in those decisions are very costly if they remain undetected until the system is implemented and deployed. This is where formal specification and analysis fit in. Formal specification makes sure that an architecture design is represented in a rigorous and unambiguous way. Furthermore, a formally specified model allows the use of different analysis techniques for verifying the correctness of those crucial design decisions. This dissertation presented a framework, called SAM, for formal specification and analysis of software architectures. In terms of specification, formalisms and mechanisms were identified and chosen to specify software architecture based on different analysis needs. Formalisms for specifying properties were also explored, especially in the case of non-functional properties. In terms of analysis, the dissertation explored both the verification of functional properties and the evaluation of non-functional properties of software architecture. For the verification of functional properties, methodologies were presented on how to apply existing model checking techniques to a SAM model. For the evaluation of non-functional properties, the dissertation first showed how to incorporate stochastic information into a SAM model, and then explained how to translate the model to existing tools and conduct the analysis using those tools. To alleviate the analysis work, we also provided a tool to automatically translate a SAM model for model checking. All the techniques and methods described in the dissertation were illustrated by examples or case studies, which also served the purpose of advocating the use of formal methods in practice.
A framework for transforming, analyzing, and realizing software designs in unified modeling language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language due to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and production-quality industrial tool support. However, there is a lack of precise definition of the semantics of individual UML notations, as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into software designs in UML, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML, and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework I propose consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can be adopted to check whether UML designs are complete or consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework in this investigation is creative and flexible, since it can not only be used to verify and validate UML designs but also provides an approach to build models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
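A small sketch of the "weave runtime monitoring code" idea in Python: a property check wrapped around an implementation function. The decorator, the invariant and the function are illustrative assumptions, not SAM Parser's generated monitors.

```python
# A monitor decorator that checks a design-level property on every call.
import functools

def monitor(property_check, on_violation=print):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not property_check(args, kwargs, result):
                on_violation(f"property violated in {fn.__name__}{args} -> {result}")
            return result
        return wrapper
    return decorate

@monitor(lambda args, kwargs, result: result >= 0)   # e.g. an invariant from the design
def withdraw(balance: float, amount: float) -> float:
    return balance - amount

withdraw(100.0, 30.0)    # passes silently
withdraw(100.0, 150.0)   # reports a property violation at runtime
```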