103 results for Run-Time Code Generation, Programming Languages, Object-Oriented Programming
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling is proposed for rural and urban scenarios respectively, and an integrated system is developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain the 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed to perform feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS segmented image to extract road objects. The road surface detected in LiDAR intensity images is used as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the Bundaberg datasets, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
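The marking-extraction step described above pairs directional filtering with Otsu thresholding. Below is a minimal, hypothetical Python/OpenCV sketch of that general idea; the Gabor parameters, orientation count and file names are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch only: enhance bright pavement markings with a small Gabor
# filter bank, then separate them from the road surface via Otsu's threshold.
import cv2
import numpy as np

def extract_markings(gray):
    """gray: single-channel (uint8) image of an already-detected road surface."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):          # 8 assumed orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0,
                                    theta=theta, lambd=10.0, gamma=0.5)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    # Keep the strongest response per pixel and rescale to 8-bit.
    enhanced = cv2.normalize(np.max(responses, axis=0), None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method picks a global threshold separating markings from pavement.
    _, markings = cv2.threshold(enhanced, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return markings

if __name__ == "__main__":
    road = cv2.imread("road_surface.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    cv2.imwrite("markings.png", extract_markings(road))
```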
Abstract:
Nowadays, Workflow Management Systems (WfMSs) and, more generally, Process Management Systems (PMSs), which are process-aware Information Systems (PAISs), are widely used to support many human organizational activities, ranging from well-understood, relatively stable and structured processes (supply chain management, postal delivery tracking, etc.) to processes that are more complicated, less structured and may exhibit a high degree of variation (health-care, emergency management, etc.). Every aspect of a business process involves a certain amount of knowledge, which may be complex depending on the domain of interest. The adequate representation of this knowledge is determined by the modeling language used. Some processes behave in a way that is well understood, predictable and repeatable: the tasks are clearly delineated and the control flow is straightforward. Recent discussions, however, illustrate the increasing demand for solutions for knowledge-intensive processes, where these characteristics are less applicable. The actors involved in the conduct of a knowledge-intensive process have to deal with a high degree of uncertainty. Tasks may be hard to perform and the order in which they need to be performed may be highly variable. Modeling knowledge-intensive processes can be complex, as it may be hard to capture at design-time what knowledge is available at run-time. In realistic environments, for example, actors lack important knowledge at execution time or this knowledge can become obsolete as the process progresses. Even if each actor (at some point) has perfect knowledge of the world, it may not be certain of its beliefs at later points in time, since tasks by other actors may change the world without those changes being perceived. Typically, a knowledge-intensive process cannot be adequately modeled by classical, state-of-the-art process/workflow modeling approaches. In some respects there is a lack of maturity when it comes to capturing the semantic aspects involved and reasoning about them. The main focus of the 1st International Workshop on Knowledge-intensive Business Processes (KiBP 2012) was investigating how techniques from different fields, such as Artificial Intelligence (AI), Knowledge Representation (KR), Business Process Management (BPM), Service Oriented Computing (SOC), etc., can be combined with the aim of improving the modeling and enactment phases of a knowledge-intensive process. The workshop was held as part of the program of the 2012 Knowledge Representation & Reasoning International Conference (KR 2012) in Rome, Italy, in June 2012, and was hosted by the Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti of Sapienza Università di Roma, with financial support of the University, through grant 2010-C26A107CN9 TESTMED, and the EU Commission through the projects FP7-25888 Greener Buildings and FP7-257899 Smart Vortex. This volume contains the 5 papers accepted and presented at the workshop. Each paper was reviewed by three members of the internationally renowned Program Committee. In addition, a further paper was invited for inclusion in the workshop proceedings and for presentation at the workshop. Two keynote talks completed the scientific program: one by Marlon Dumas (Institute of Computer Science, University of Tartu, Estonia) on "Integrated Data and Process Management: Finally?" and the other by Yves Lespérance (Department of Computer Science and Engineering, York University, Canada) on "A Logic-Based Approach to Business Processes Customization". We would like to thank all the Program Committee members for their valuable work in selecting the papers, Andrea Marrella for his valuable work as publication and publicity chair of the workshop, and Carola Aiello and the consulting agency Consulta Umbria for the organization of this successful event.
Abstract:
Navigation through tessellated solids in GEANT4 can degrade computational performance, especially if the tessellated solid is large and comprises many facets. Redefining a tessellated solid as a mesh of tetrahedra is common in other computational techniques, such as finite element analysis, as computations need only consider local tetrahedra rather than the tessellated solid as a whole. Herein we describe a technique that allows for automatic tetrahedral meshing of tessellated solids in GEANT4 and the subsequent loading of these meshes as assembly volumes; loading nested tessellated solids and tetrahedral meshes is also examined. As the technique makes the geometry suitable for automatic optimisation using smart voxels, navigation through a simple tessellated volume has been found to be more than two orders of magnitude faster than that through the equivalent tessellated solid. Speed increases of more than two orders of magnitude were also observed for a more complex tessellated solid with voids and concavities. The technique was benchmarked for geometry load time, simulation run time and memory usage. Source code enabling the described functionality in GEANT4 has been made freely available on the Internet.
Abstract:
The rapid growth of services available on the Internet and exploited through ever-globalizing business networks poses new challenges for service interoperability. New services, from consumer “apps” to enterprise suites and platform and infrastructure resources, are vying for demand with quickly evolving and overlapping capabilities, and shorter cycles of extending service access from user interfaces to software interfaces. Services, drawn from a wider global setting, are subject to greater change and heterogeneity, creating new requirements for structural and behavioral interface adaptation. In this paper, we analyze service interoperability scenarios in global business networks and propose new patterns for service interactions, beyond those proposed over the last 10 years through the development of Web service standards and process choreography languages. In contrast to these, we reduce the design-time knowledge assumed for adapting services in favour of run-time mismatch resolution, extend the focus from bilateral to multilateral messaging interactions, and propose declarative ways in which services and interactions take part in long-running conversations via the explicit use of state.
Abstract:
This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis, where risks are detected during process execution. The approach has been realized via a distributed, sensor-based architecture. At design-time, sensors are defined to specify risk conditions which, when fulfilled, indicate that negative process states (faults) are likely to eventuate. Both historical and current process execution data can be used to compose such conditions. At run-time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a business process management system to report the results to process administrators, who may take remedial actions. The proposed architecture has been implemented on top of the YAWL system and evaluated through performance measurements and usability tests with students. The results show that risk conditions can be computed efficiently and that the approach is perceived as useful by the participants in the tests.
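As a concrete illustration of the sensor-based architecture described above, the following is a minimal, hypothetical Python sketch of the sensor / sensor-manager interaction; the risk condition, class names and data fields are invented for illustration and do not reproduce the YAWL-based implementation.

```python
# Hypothetical sketch: sensors evaluate risk conditions over process data and
# the sensor manager collects triggered risks for the monitoring component.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class RiskSensor:
    name: str
    condition: Callable[[Dict], bool]      # composed from current + historical data

    def check(self, process_data: Dict) -> bool:
        return self.condition(process_data)

@dataclass
class SensorManager:
    sensors: List[RiskSensor] = field(default_factory=list)

    def poll(self, process_data: Dict) -> List[str]:
        """Return the names of all sensors whose risk condition holds, so the
        monitoring component can alert process administrators."""
        return [s.name for s in self.sensors if s.check(process_data)]

if __name__ == "__main__":
    overdue = RiskSensor(
        "case-overdue",
        lambda d: d["elapsed_hours"] > 1.2 * d["historical_mean_hours"],  # invented rule
    )
    manager = SensorManager([overdue])
    print(manager.poll({"elapsed_hours": 30, "historical_mean_hours": 20}))
```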
Abstract:
The tertiary sector is an important employer and its growth is well above average. The Texo project’s aim is to support this development by making services tradable. The composition of new or value-added services is a cornerstone of the proposed architecture; it is, however, intended to cater for build-time. Yet, at run-time, unforeseen exceptions may occur and users’ requirements may change. Varying circumstances require immediate sensemaking of the situation’s context and call for prompt extensions of existing services. Lightweight composition technology provided by the RoofTop project enables domain experts to create simple widget-like applications, also termed enterprise mashups, without extensive methodological skills. In this way, RoofTop can assist and extend the idea of service delivery through the Texo platform and is a further step towards a next-generation internet of services.
Abstract:
Lyngbya majuscula is a cyanobacterium (blue-green alga) occurring naturally in tropical and subtropical coastal areas worldwide. Deception Bay, in Northern Moreton Bay, Queensland, has a history of Lyngbya blooms, and forms a case study for this investigation. The South East Queensland (SEQ) Healthy Waterways Partnership, a collaboration between government, industry, research and the community, was formed to address issues affecting the health of the river catchments and waterways of South East Queensland. The Partnership coordinated the Lyngbya Research and Management Program (2005-2007), which culminated in a Coastal Algal Blooms (CAB) Action Plan for harmful and nuisance algal blooms such as Lyngbya majuscula. This first phase of the project was predominantly of a scientific nature and also facilitated the collection of additional data to better understand Lyngbya blooms. The second phase of the project, the SEQ Healthy Waterways Strategy 2007-2012, is now underway to implement the CAB Action Plan and as such is more management focussed. As part of the first phase of the project, a Science model for the initiation of a Lyngbya bloom was built using Bayesian Networks (BNs). The structure of the Science Bayesian Network was built by the Lyngbya Science Working Group (LSWG), which was drawn from diverse disciplines. The BN was then quantified with annual data and expert knowledge. Scenario testing confirmed the expected temporal nature of bloom initiation and it was recommended that the next version of the BN be extended to take this into account. Elicitation for this BN thus occurred at three levels: design, quantification and verification. The first level involved construction of the conceptual model itself, definition of the nodes within the model and identification of sources of information to quantify the nodes. The second level included elicitation of expert opinion and representation of this information in a form suitable for inclusion in the BN. The third and final level concerned the specification of scenarios used to verify the model. The second phase of the project provides the opportunity to update the network with the newly collected detailed data obtained during the previous phase. Specifically, the temporal nature of Lyngbya blooms is of interest, and management efforts need to be directed to the periods in which the Bay is most vulnerable to bloom initiation. To model the temporal aspects of Lyngbya we are using Object Oriented Bayesian Networks (OOBNs) to create ‘time slices’ for each of the periods of interest during the summer. OOBNs provide a framework to simplify knowledge representation and facilitate reuse of nodes and network fragments. An OOBN is more hierarchical than a traditional BN, with any sub-network able to contain other sub-networks. Connectivity between OOBNs is an important feature and allows information flow between the time slices. This study demonstrates more sophisticated use of expert information within Bayesian networks, combining expert knowledge with data (categorized using expert-defined thresholds) within an expert-defined model structure. Based on the results from the verification process, the experts are able to target areas requiring greater precision and those exhibiting temporal behaviour. The time slices incorporate the data for that time period for each of the temporal nodes (instead of using the annual data from the previous static Science BN) and include lag effects to allow the effect from one time slice to flow to the next.
We demonstrate a concurrent steady increase in the probability of initiation of a Lyngbya bloom and conclude that the inclusion of temporal aspects in the BN model is consistent with the perceptions of Lyngbya behaviour held by the stakeholders. This extended model provides a more accurate representation of the increased risk of algal blooms in the summer months and shows that the opinions elicited to inform a static BN can be readily extended to a dynamic OOBN, providing more comprehensive information for decision makers.
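To make the time-slice idea above concrete, here is a deliberately simplified, hypothetical Python sketch of how the bloom state of one slice can feed the next via a lag effect; the conditional probabilities and number of slices are invented placeholders, not the elicited OOBN.

```python
# Illustrative only: propagate bloom-initiation probability across time slices,
# where each slice depends on its own conditions and on the previous slice (lag).
def slice_bloom_probability(p_prev_bloom, p_conditions_favourable,
                            p_fav_and_prev=0.8, p_fav_only=0.5,
                            p_prev_only=0.3, p_baseline=0.05):
    """P(bloom in this slice), marginalised over the two binary parents."""
    f, b = p_conditions_favourable, p_prev_bloom
    return (f * b * p_fav_and_prev
            + f * (1 - b) * p_fav_only
            + (1 - f) * b * p_prev_only
            + (1 - f) * (1 - b) * p_baseline)

p_bloom = 0.0
for p_fav in (0.2, 0.5, 0.7):      # three hypothetical summer time slices
    p_bloom = slice_bloom_probability(p_bloom, p_fav)
    print(round(p_bloom, 3))
```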
Abstract:
Recent studies have linked the ability of novice (CS1) programmers to read and explain code with their ability to write code. This study extends earlier work by asking CS2 students to explain object-oriented data structures problems that involve recursion. Results show a strong correlation between ability to explain code at an abstract level and performance on code writing and code reading test problems for these object-oriented data structures problems. The authors postulate that there is a common set of skills concerned with reasoning about programs that explains the correlation between writing code and explaining code. The authors suggest that an overly exclusive emphasis on code writing may be detrimental to learning to program. Non-code writing learning activities (e.g., reading and explaining code) are likely to improve student ability to reason about code and, by extension, improve student ability to write code. A judicious mix of code-writing and code-reading activities is recommended.
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first generation, production rule legal expert systems. Our work supplements rule-based reasoning with case-based reasoning and intelligent information retrieval. This research specifies an approach to the case-based retrieval problem which relies heavily on an extended object-oriented / rule-based system architecture that is supplemented with causal background information. Machine learning techniques and a distributed agent architecture are used to help simulate the reasoning process of lawyers. In this paper, we outline our implementation of the hybrid IKBALS II Rule-Based Reasoning / Case-Based Reasoning system. It makes extensive use of an automated case representation editor and background information.
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first generation, production rule legal expert systems. Our work integrates rule-based and case-based reasoning with intelligent information retrieval. When using the case-based reasoning methodology, or in our case the specialisation of case-based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object-oriented / rule-based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule-based reasoning, case-based reasoning and case-based retrieval is contrasted with the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case-based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains rather than single purpose legal expert systems, we focus on principles behind developing legal knowledge based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule-based part of our system, which provides advice on the Credit Act, has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule-based legal knowledge based systems. We explain the process of integrating this existing commercial rule-based system with the case-based reasoning and retrieval architecture.
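As an illustration of rules acting as retrieval indices, the sketch below uses scikit-learn's decision tree purely as a stand-in for the specialised induction algorithm mentioned above: the induced tree's leaves index the case base, and a new fact situation retrieves the past cases that fall into the same leaf. The features, cases and outcomes are toy data, not IKBALS content.

```python
# Hypothetical sketch: induce rules (a shallow decision tree) from past cases
# and use the resulting leaves as indices for case-based retrieval.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X_cases = np.array([[1, 0, 3], [1, 1, 2], [0, 0, 1], [0, 1, 4], [1, 1, 1]])  # toy features
y_outcomes = np.array([1, 1, 0, 0, 1])            # toy outcomes used for induction

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_cases, y_outcomes)
case_index = tree.apply(X_cases)                  # leaf id = rule-based index per case

query = np.array([[1, 0, 2]])                     # new fact situation
leaf = tree.apply(query)[0]
retrieved = np.where(case_index == leaf)[0]       # cases sharing the query's leaf
print("Candidate precedent cases:", retrieved.tolist())
```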
Abstract:
Service mismatches involve the adaptation of structural and behavioural interfaces of services, which in practice incurs long lead times through manual coding effort. We propose a framework, complementary to conventional service adaptation, to extract comprehensive and semantically normalised service interfaces, useful for interoperability in large business networks and the Internet of Services. The framework supports introspection and analysis of large and overloaded operational signatures to derive focal artefacts, namely the underlying business objects of services. A more simplified and comprehensive service interface layer is created based on these, and rendered into semantically normalised interfaces, given an ontology accrued through the framework from service analysis history. This opens up the prospect of supporting capability comparisons across services, and run-time request backtracking and adjustment, as consumers discover new features of a service's operations through corresponding features of similar services. This paper provides a first exposition of the service interface synthesis framework, describing patterns having novel requirements for unilateral service adaptation, and algorithms for interface introspection and business object alignment. A prototype implementation and analysis of web services drawn from commercial logistic systems are used to validate the algorithms and identify open challenges and future research directions.
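The following is a deliberately naive, hypothetical illustration of the interface-introspection idea: operation names are split into tokens and grouped by their non-verb remainder, approximating the "underlying business objects" that the framework derives. Real signature analysis and ontology-based alignment are far richer; the operation names and verb list are invented.

```python
# Toy sketch: group operation names by a shared noun phrase to approximate
# the business objects behind an overloaded service interface.
import re
from collections import defaultdict

operations = ["createShipmentOrder", "cancelShipmentOrder", "trackShipmentOrder",
              "getInvoice", "sendInvoiceReminder"]          # invented examples
VERBS = {"create", "cancel", "track", "get", "send"}        # assumed verb list

business_objects = defaultdict(list)
for op in operations:
    tokens = [t.lower() for t in re.findall(r"[A-Z]?[a-z]+", op)]
    noun = "".join(t.capitalize() for t in tokens if t not in VERBS)
    business_objects[noun].append(op)

for obj, ops in sorted(business_objects.items()):
    print(obj, "->", ops)
```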
Using Agents for Mining Maintenance Data while Interacting in 3D Object-oriented Virtual Environments
Abstract:
This report demonstrates the development of: (a) an object-oriented representation to provide a 3D interactive environment using data provided by Woods Bagot; (b) the basis of agent technology for mining building maintenance data; and (c) 3D interaction in virtual environments using the object-oriented representation. The application of data mining over an industry maintenance database was demonstrated in the previous report.
Abstract:
Australia has no nationally accepted building products life cycle inventory (LCI) database for use in building Ecologically Sustainable Development (ESD) assessment (BEA) tools. Information about the sustainability of the supply chain is further limited by industry’s lack of real capacity to deliver objective information on process and product environmental impact. Recognition of these deficits emerged during compilation of a National LCI database to inform LCADesign, a prototype 3-dimensional object-oriented computer-aided design (3-D CAD) commercial building design tool. Development of this Australian LCI represents 24 staff-years of effort since 1995. Further development of LCADesign extensions is proposed as essential to support key applications demanded by a more holistic theoretical framework calling for modules of new building and construction industry tools. A proposed tool, conceptually called LCADetails, is to serve the building product industries’ own needs as well as those of commercial building design, amongst other industries’ prospective needs. In this paper, the proposition is examined that the existing National LCI database should be further expanded to serve Australian building product industries’ needs as well as to provide details for their client base from a web-based portal containing a module of practical supply and procurement applications. Along with improved supply chain assessment services, this proposed portal is envisaged to facilitate industry environmental life cycle improvement assessment and to support decision-making, providing accredited data for operational reporting capabilities, load-based reasoning and BEA applications. This paper provides an overview of developments to date, including a novel 3-D CAD information and communications technology (ICT) platform for more holistic integration of existing tools for true cost assessment. Future prospects are also conceptualised, based on a new holistic life cycle assessment framework, LCADevelop, which considers stakeholder relationships and their need for a range of complementary tools that leverage automated functions of such ICT platforms to inform dimensionally defined operations for sectors such as automotive, civil, transport and industrial applications.
Abstract:
The ability to assess a commercial building for its impact on the environment at the earliest stage of design is a goal which is achievable by integrating several approaches into a single procedure directly from the 3D CAD representation. Such an approach enables building design professionals to make informed decisions on the environmental impact of a building and its alternatives during the design development stage instead of at the post-design stage, where options become limited. The indicators of interest are those which relate to consumption of resources and energy, contributions to pollution of air, water and soil, and impacts on the health and wellbeing of people in the built environment as a result of constructing and operating buildings. 3D object-oriented CAD files contain a wealth of building information which can be interrogated for details required for analysis of the performance of a design. The quantities of all components in the building can be automatically obtained from the 3D CAD objects and their constituent materials identified to calculate a complete list of the amounts of all building products such as concrete, steel, timber, plastic, etc. When this information is combined with a life cycle inventory database, key internationally recognised environmental indicators can be estimated. Such a fully integrated tool, known as LCADesign, has been created for automated eco-efficiency assessment of commercial buildings direct from 3D CAD. This paper outlines the key features of LCADesign and its application to environmental assessment of commercial buildings.
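The core calculation implied above is a combination of CAD-derived quantities with per-unit inventory factors. The Python sketch below shows that idea in its simplest form; every quantity and factor is an illustrative placeholder, not LCADesign or LCI data.

```python
# Hypothetical sketch: multiply material take-off quantities by per-unit LCI
# factors to estimate one environmental indicator for a design.
takeoff_m3 = {"concrete": 120.0, "steel": 4.5, "timber": 18.0}   # from 3D CAD objects

# Illustrative factors: kg CO2-equivalent per m3 of material.
lci_gwp = {"concrete": 300.0, "steel": 15000.0, "timber": 60.0}

gwp = sum(qty * lci_gwp[material] for material, qty in takeoff_m3.items())
print(f"Estimated global warming potential: {gwp:.0f} kg CO2-eq")
```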
Abstract:
Buildings consume resources and energy, contribute to pollution of our air, water and soil, impact the health and well-being of populations and constitute an important part of the built environment in which we live. The ability to assess their design, with a view to reducing that impact, automatically from their 3D CAD representations enables building design professionals to make informed decisions on the environmental impact of building structures. Contemporary 3D object-oriented CAD files contain a wealth of building information. LCADesign has been designed as a fully integrated approach for automated eco-efficiency assessment of commercial buildings direct from 3D CAD. LCADesign accesses the 3D CAD detail through Industry Foundation Classes (IFCs) - the international standard file format for defining architectural and constructional CAD graphic data as 3D real-world objects - to permit construction professionals to interrogate these intelligent drawing objects for analysis of the performance of a design. The automated take-off provides quantities of all building components, whose specific production processes, logistics and raw material inputs are identified where necessary to calculate a complete list of quantities for all products such as concrete, steel, timber, plastic, etc., and combines this information with the life cycle inventory database to estimate key internationally recognised environmental indicators such as CML, EPS and Eco-indicator 99. This paper outlines the key modules of LCADesign and their role in delivering an automated eco-efficiency assessment for commercial buildings.
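For the IFC access step described above, the sketch below uses the open-source ifcopenshell library as a stand-in for LCADesign's own IFC layer, simply listing element counts that a take-off would start from; the file name and element types are illustrative assumptions.

```python
# Hypothetical sketch: open an IFC model and count a few element types as the
# starting point for an automated quantity take-off.
import ifcopenshell

model = ifcopenshell.open("commercial_building.ifc")     # illustrative path

element_counts = {
    ifc_type: len(model.by_type(ifc_type))
    for ifc_type in ("IfcWall", "IfcSlab", "IfcBeam", "IfcColumn")
}
print(element_counts)   # these objects feed the material and quantity extraction
```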