42 results for Run-Time Code Generation, Programming Languages, Object-Oriented Programming

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

The object-oriented finite element method (OOFEM) has attracted the attention of many researchers. Compared with the traditional finite element method, OOFEM software has advantages in maintainability and reuse. Moreover, it is easier to extend the architecture to a distributed one. In this paper, we introduce a distributed architecture for an object-oriented finite element preprocessor. A comparison between the distributed system presented in the paper and a centralised system shows that the former greatly improves the performance of mesh generation. Other finite element analysis modules could be extended according to this architecture.
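
The abstract gives no implementation details, so the following is only a rough sketch of how a preprocessor might distribute mesh generation over worker processes; the class and function names (SubDomain, DistributedPreprocessor, generate_mesh) and the trivial grid mesher are assumptions for illustration, not the system described in the paper.

```python
# Sketch only: farm mesh generation for sub-domains out to worker processes.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass


@dataclass
class SubDomain:
    """A rectangular patch of the analysis domain to be meshed independently."""
    x0: float
    x1: float
    y0: float
    y1: float
    target_size: float  # requested element edge length


def generate_mesh(sub: SubDomain) -> list[tuple[float, float]]:
    """Stand-in mesh generator: return node coordinates on a uniform grid."""
    nx = max(2, int((sub.x1 - sub.x0) / sub.target_size) + 1)
    ny = max(2, int((sub.y1 - sub.y0) / sub.target_size) + 1)
    dx = (sub.x1 - sub.x0) / (nx - 1)
    dy = (sub.y1 - sub.y0) / (ny - 1)
    return [(sub.x0 + i * dx, sub.y0 + j * dy) for i in range(nx) for j in range(ny)]


class DistributedPreprocessor:
    """Splits the domain into sub-domains and meshes them in parallel."""

    def __init__(self, workers: int = 4):
        self.workers = workers

    def mesh(self, subdomains: list[SubDomain]) -> list[list[tuple[float, float]]]:
        with ProcessPoolExecutor(max_workers=self.workers) as pool:
            return list(pool.map(generate_mesh, subdomains))


if __name__ == "__main__":
    parts = [SubDomain(i, i + 1.0, 0.0, 1.0, 0.25) for i in range(4)]
    meshes = DistributedPreprocessor().mesh(parts)
    print([len(m) for m in meshes])
```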

Relevance:

100.00%

Publisher:

Abstract:

The objective of the research for this thesis is to develop techniques for building an executable model of a real-time system. The model is to be used early in the development of the system, not only to detect errors in the system's specification but also to validate the developer's expectations of how the system will operate. A graphical specification of real-time systems called the transformation schema was chosen as the basis for the model. Two executable models of a real-time system are described.
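
The thesis's two models are not described in the abstract, so the fragment below is only a toy illustration of the general idea of making a transformation-schema-like specification executable: transformations fire when data arrives on their input flows, driven by a simple scheduler. All names (Flow, Transformation, run) are assumptions for the sketch.

```python
# Toy executable model: transformations connected by flows, fired by a scheduler.
from collections import deque


class Flow:
    """A data or event flow between transformations."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()


class Transformation:
    def __init__(self, name, inputs, outputs, action):
        self.name, self.inputs, self.outputs, self.action = name, inputs, outputs, action

    def ready(self):
        return all(f.queue for f in self.inputs)

    def fire(self):
        args = [f.queue.popleft() for f in self.inputs]
        for out, value in zip(self.outputs, self.action(*args)):
            out.queue.append(value)


def run(transformations, steps=10):
    """Very simple scheduler: fire any ready transformation, repeatedly."""
    for _ in range(steps):
        fired = False
        for t in transformations:
            if t.ready():
                t.fire()
                fired = True
        if not fired:
            break


if __name__ == "__main__":
    raw, scaled = Flow("raw"), Flow("scaled")
    scale = Transformation("scale", [raw], [scaled], lambda x: (x * 2,))
    raw.queue.extend([1, 2, 3])
    run([scale])
    print(list(scaled.queue))  # [2, 4, 6]
```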

Relevance:

100.00%

Publisher:

Abstract:

Regular expressions are used to parse textual data to match patterns and extract variables. They have been implemented in a vast number of programming languages, with a significant quantity of research devoted to improving their operational efficiency. However, regular expressions are limited to finding linear matches. Little research has been done on producing object-oriented results, which would allow textual or binary data to be converted into multi-layered objects. This is significantly relevant as many of today's data formats are object-based. This paper extends our previous work by detailing an algorithmic approach to perform object-oriented parsing, and provides an initial benchmarking study of the contributed algorithms.
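
The paper's own algorithm is not given in the abstract; the sketch below only illustrates the general notion of turning flat regular-expression matches into a layered object structure. The record format and the Order/Item classes are invented for the example.

```python
# Sketch: regexes find flat matches; a second step assembles them into nested objects.
import re
from dataclasses import dataclass, field


@dataclass
class Order:
    order_id: str
    items: list["Item"] = field(default_factory=list)


@dataclass
class Item:
    sku: str
    qty: int


ORDER_RE = re.compile(r"^ORDER (?P<id>\w+)$")
ITEM_RE = re.compile(r"^\s+ITEM (?P<sku>\w+) x(?P<qty>\d+)$")


def parse(text: str) -> list[Order]:
    """Convert line-oriented text into nested Order/Item objects."""
    orders: list[Order] = []
    for line in text.splitlines():
        if m := ORDER_RE.match(line):
            orders.append(Order(m["id"]))
        elif (m := ITEM_RE.match(line)) and orders:
            orders[-1].items.append(Item(m["sku"], int(m["qty"])))
    return orders


if __name__ == "__main__":
    sample = "ORDER A17\n  ITEM widget x3\n  ITEM gadget x1"
    print(parse(sample))
```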

Relevance:

100.00%

Publisher:

Abstract:

The Finite Element Method (FEM) has been widely used in science and engineering since the 1960s. The vast majority of FEM software is procedure-oriented; however, this conventional style of designing FEM software encounters problems in the maintenance, reuse, and extension of the software. Recently, the object-oriented finite element method has attracted the attention of many researchers, and interest in it continues to grow. In this paper, the object-oriented finite element (OOFE) approach is briefly introduced. Then the design and development of an integrated OOFE system is described. A comparison of the integrated OOFE system and a procedure-oriented system shows that our OOFE system has many advantages.
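
As a rough illustration of the object-oriented style the abstract contrasts with procedure-oriented code (not the paper's actual class design), elements can be modelled as polymorphic objects that compute their own stiffness matrices, so the assembly loop never changes when a new element type is added. Element, Bar1D, Spring and assemble are names assumed for this sketch.

```python
# Sketch: polymorphic elements behind a common interface.
from abc import ABC, abstractmethod


class Element(ABC):
    @abstractmethod
    def stiffness(self) -> list[list[float]]:
        """Return the local stiffness matrix."""


class Bar1D(Element):
    """Two-node axial bar: k = EA/L * [[1, -1], [-1, 1]]."""
    def __init__(self, E: float, A: float, L: float):
        self.E, self.A, self.L = E, A, L

    def stiffness(self) -> list[list[float]]:
        k = self.E * self.A / self.L
        return [[k, -k], [-k, k]]


class Spring(Element):
    def __init__(self, k: float):
        self.k = k

    def stiffness(self) -> list[list[float]]:
        return [[self.k, -self.k], [-self.k, self.k]]


def assemble(elements: list[Element]) -> list[list[list[float]]]:
    """The assembly loop works on the abstract interface only, so adding a
    new element type does not require changing this code."""
    return [e.stiffness() for e in elements]


if __name__ == "__main__":
    print(assemble([Bar1D(210e9, 1e-4, 2.0), Spring(5e3)]))
```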

Relevance:

100.00%

Publisher:

Abstract:

With the move from paper to electronic records, the health industry relies increasingly on technology to maintain and update information on the well-being of patients. This reliance on technology requires an acute level of protection from unwanted technological disasters and/or human threats. Research shows insufficiencies in the implementation and use of security controls, and current analysis methods lack techniques for analysing both the technical and social aspects of security. The aim of this paper is to introduce an information security evaluation methodology for health information systems based on UML.

Relevance:

100.00%

Publisher:

Abstract:

Requirements engineering is an initial phase in the development of software applications and information systems. It is concerned with understanding and specifying the customer's requirements for the system to be delivered. Throughout the literature, this is agreed to be one of the most crucial and, unfortunately, problematic phases in development. Despite the diversity of research directions, approaches and methods, understanding and management of the process remain limited. Among contemporary approaches to improving the current practice of requirements engineering, the Formal Object-Oriented Method (FOOM) has been introduced as a promising new solution. The FOOM approach to requirements engineering is based on a synthesis of socio-organisational theory, the object-oriented approach, and mathematical formal specification. The entire FOOM specification process is evolutionary and involves a large volume of changes in requirements. During this process, requirements evolve through informal, semi-formal, and formal forms while maintaining a semantic link between these forms and, most importantly, conforming to the customer's requirements. A deep understanding of the complexity of the requirements model and its dynamics is critical in improving requirements engineering process management.

This thesis investigates the benefits of documenting both the evolution of the requirements model and the rationale for that evolution. Design explanation explains and justifies the deliberations of, and decisions made during, the design activity. In this thesis, design explanation is used to describe the requirements engineering process in order to improve understandability of, and traceability within, the evolving requirements specification. The design explanation recorded during this research project also assisted the researcher in gaining insights into the creative and opportunistic characteristics of the requirements engineering process.

This thesis offers an interpretive investigation into incorporating design explanation within FOOM in order to extend and strengthen the method. The researcher's interpretation and analysis of the collected data highlight an insight-driven and opportunistic process rather than a strictly and systematically predefined one. In fact, the process was not smoothly evolutionary, but involved occasional 'crisis' points at which the model was reconceptualised, simplified and restructured. Therefore, the contributions of the thesis lie not only in an effective incorporation of design explanation within FOOM, but also in a deeper understanding of the dynamic process of requirements engineering. This new understanding of the complexity of the requirements model and its dynamics suggests new directions for future research and forms a basis for a new approach to process management.

Relevance:

100.00%

Publisher:

Abstract:

A powerful image editing system called OVIE is described, which provides fast and accurate creation, composition, rendering and other manipulation of image contents. Flexibility and convenience are achieved by including two modules, image decomposition and image vectorization, to understand and represent an image respectively. To understand an image comprehensively, we propose to integrate image segmentation, shape completion and image completion techniques to ensure seamless image editing. For image representation, the array of pixels is replaced by vector data with geometric editability, since geometry-based editing has physical meaning and is thus more natural and intuitive for users. Compared to existing work, our system is more convenient and can generate higher-quality effects. © 2012 IEEE.
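
OVIE's actual vector representation is not given in the abstract; the fragment below is only a generic illustration of why vector data is geometrically editable: edits become transforms on control points rather than per-pixel operations. The Path class and its methods are assumptions for the sketch.

```python
# Sketch: a vectorised region edited by geometric transforms instead of pixel ops.
from dataclasses import dataclass


@dataclass
class Path:
    """A closed polygonal region described by its control points and a fill colour."""
    points: list[tuple[float, float]]
    fill: tuple[int, int, int]

    def translate(self, dx: float, dy: float) -> "Path":
        return Path([(x + dx, y + dy) for x, y in self.points], self.fill)

    def scale(self, s: float, cx: float = 0.0, cy: float = 0.0) -> "Path":
        # Scale about the centre (cx, cy).
        return Path([(cx + s * (x - cx), cy + s * (y - cy)) for x, y in self.points],
                    self.fill)


if __name__ == "__main__":
    region = Path([(0, 0), (10, 0), (10, 10), (0, 10)], fill=(200, 30, 30))
    edited = region.translate(5, 5).scale(2.0, cx=5, cy=5)
    print(edited.points)
```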

Relevance:

100.00%

Publisher:

Abstract:

Performance is a crucial attribute for most software, making performance analysis an important software engineering task. The difficulty is that modern applications are challenging to analyse for performance. Many profiling techniques used in real-world software development struggle to provide useful results when applied to large-scale object-oriented applications. There is a substantial body of research into software performance generally, but currently there exists no survey of this research that would help identify approaches useful for object-oriented software. To provide such a review, we performed a systematic mapping study of empirical performance analysis approaches that are applicable to object-oriented software. Using keyword searches against leading software engineering research databases and manual searches of relevant venues, we identified over 5,000 related articles published since January 2000. From these we systematically selected 253 applicable articles and categorised them according to ten facets that capture the intent, implementation and evaluation of the approaches. Our mapping study results allow us to highlight the main contributions of the existing literature and identify areas where there are interesting opportunities. We also find that, despite the research including approaches specifically aimed at object-oriented software, there are significant challenges in providing actionable feedback on the performance of large-scale object-oriented applications.

Relevance:

100.00%

Publisher:

Abstract:

The majority of existing application profiling techniques aggregate and report performance costs by method or calling context. Modern large-scale object-oriented applications consist of thousands of methods with complex calling patterns. Consequently, when profiled, their performance costs tend to be thinly distributed across many thousands of locations with few easily identifiable optimisation opportunities. However, experienced performance engineers know that there are repeated patterns of method calls in the execution of an application that are induced by the libraries, design patterns and coding idioms used in the software. Automatically identifying and aggregating costs over these patterns of method calls allows us to identify opportunities to improve performance based on optimising these patterns. We have developed an analysis technique that is able to identify the entry point methods, which we call subsuming methods, of such patterns. Our offline analysis runs over previously collected runtime performance data structured in a calling context tree, such as produced by a large number of existing commercial and open source profilers. We have evaluated our approach on the DaCapo benchmark suite, showing that our analysis significantly reduces the size and complexity of the runtime performance data set, facilitating its comprehension and interpretation. We also demonstrate, with a collection of case studies, that our analysis identifies new optimisation opportunities that can lead to significant performance improvements (from 20% to over 50% improvement in our case studies).
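
The published analysis is more sophisticated than this, but the core idea of folding calling-context-tree costs up into a subsuming entry-point method can be sketched as follows. The CCTNode layout, the cost fields, and the rule that any method listed in a given pattern set subsumes its whole subtree are simplifying assumptions made for illustration.

```python
# Sketch: attribute CCT costs to the topmost "subsuming" method of a pattern.
from dataclasses import dataclass, field


@dataclass
class CCTNode:
    method: str
    cost: float                      # exclusive cost of this calling context
    children: list["CCTNode"] = field(default_factory=list)


def subtree_cost(node: CCTNode) -> float:
    return node.cost + sum(subtree_cost(c) for c in node.children)


def aggregate(root: CCTNode, pattern_methods: set[str]) -> dict[str, float]:
    """Charge each pattern subtree to its subsuming entry point."""
    totals: dict[str, float] = {}

    def walk(n: CCTNode, inside_pattern: bool) -> None:
        if not inside_pattern and n.method in pattern_methods:
            # n is a subsuming method: charge its whole subtree to it.
            totals[n.method] = totals.get(n.method, 0.0) + subtree_cost(n)
            inside = True
        else:
            if not inside_pattern:
                totals[n.method] = totals.get(n.method, 0.0) + n.cost
            inside = inside_pattern
        for c in n.children:
            walk(c, inside)

    walk(root, False)
    return totals


if __name__ == "__main__":
    tree = CCTNode("main", 1.0, [
        CCTNode("List.add", 0.5, [CCTNode("Array.grow", 4.0)]),
        CCTNode("compute", 2.0),
    ])
    # Cost of Array.grow is folded into List.add, the pattern's entry point.
    print(aggregate(tree, {"List.add"}))
```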

Relevance:

100.00%

Publisher:

Abstract:

A common characteristic of parallel/distributed programming languages is that a single language is used to specify both the overall organisation of the distributed application and its functionality. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, allowing application designers to concentrate freely on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages are for loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication.

A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system to generate intermediate C code that is compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system.

DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process-oriented, asynchronous message-passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify distributed applications that execute on a cluster of computers. Both languages are used to specify orthogonal components of an application: on the one hand the organisation of processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes.

In addition to these languages, a DAL parser, a DAPL parser, and a compilation system have been designed and developed in this project. The compilation system takes DAL and DAPL programs and generates object modules based on machine code, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project. Its purpose is to create, manage, and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes operations such as the deletion, addition, replacement, and movement of processes, as well as detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell).

This suite of languages and tools allowed distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs of several processes may require only a few dozen lines to specify, compared to the several hundred lines of equivalent C code generated by the compilation system. Furthermore, the DAL and DAPL compilation system is successful at generating binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
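
DAL and DAPL syntax is not reproduced here; the fragment below is only a Python analogue of the separation the thesis describes, with process connectivity declared in one table and per-process functionality defined independently, joined by a small runner. All names and the threading/queue wiring are assumptions for the sketch, not the DAS implementation.

```python
# Sketch: connectivity and functionality specified separately, wired at run time.
import queue
import threading

# --- connectivity only (the "DAL-like" part) ---------------------------
CONNECTIONS = [("producer", "consumer")]      # producer sends to consumer

# --- functionality only (the "DAPL-like" part) --------------------------
def producer(send, recv):
    for i in range(3):
        send(i)
    send(None)                                # end-of-stream marker

def consumer(send, recv):
    while (msg := recv()) is not None:
        print("got", msg)

PROCESSES = {"producer": producer, "consumer": consumer}

# --- runner: binds each process body to its declared channels -----------
def run():
    channels = {pair: queue.Queue() for pair in CONNECTIONS}
    threads = []
    for name, body in PROCESSES.items():
        outs = [q for (src, _), q in channels.items() if src == name]
        ins = [q for (_, dst), q in channels.items() if dst == name]
        send = outs[0].put if outs else (lambda _msg: None)
        recv = ins[0].get if ins else (lambda: None)
        threads.append(threading.Thread(target=body, args=(send, recv)))
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    run()
```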

Relevance:

100.00%

Publisher:

Abstract:

The "external structure" of an object-oriented system refers here to the graphs of objects and classes. The class structure graph, or class model, is derived from the object structure graph, or object model, and in this operation structural information is lost, or never made explicit. Although object-oriented programming languages capture the class model as declarations, contradictory assumptions about object model properties may be made, introducing faults into the design. Consistent assumptions about the object model can be specified in the code using assertions such as Eiffel's invariants, preconditions and postconditions. Three examples specifying the external structure are considered.
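
The paper's three Eiffel examples are not reproduced in the abstract; the sketch below is only a Python analogue of the idea, using plain assertions in the roles of Eiffel's invariant, require and ensure clauses to pin down one external-structure property (every child's parent link points back to its container). The Node class is an assumption for the illustration.

```python
# Sketch: Eiffel-style contracts as assertions over the object graph.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.parent: "Node | None" = None
        self.children: list["Node"] = []
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Class invariant over the object model (Eiffel: invariant clause).
        assert all(c.parent is self for c in self.children)

    def add_child(self, child: "Node") -> None:
        # Precondition (Eiffel: require).
        assert child.parent is None, "child must not already be attached"
        self.children.append(child)
        child.parent = self
        # Postcondition (Eiffel: ensure) plus invariant re-check.
        assert child in self.children and child.parent is self
        self._check_invariant()


if __name__ == "__main__":
    root, leaf = Node("root"), Node("leaf")
    root.add_child(leaf)
    print([c.name for c in root.children], leaf.parent.name)
```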