151 results for Adaptive object model

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design to discriminate between a set of models. The model discrimination utility that we advocate is fully Bayesian and based upon mutual information, which SMC provides a convenient way to estimate. Our experience suggests that the approach works well on sets of either discrete or continuous models and outperforms other model discrimination approaches.
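
The utility can be sketched in a few lines. Below is a minimal, self-contained illustration (not the authors' SMC implementation) that estimates the mutual information utility by plain Monte Carlo for two toy regression models; the model forms, noise level and design grid are all invented for the example.

```python
# Sketch: mutual-information-based design selection for discriminating
# between two toy regression models. The SMC machinery of the paper is
# replaced by plain Monte Carlo over the model prior for brevity.
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.5  # assumed observation noise

# Two candidate mean functions: linear vs quadratic (invented toy models).
MODELS = [lambda d: 1.0 + 2.0 * d, lambda d: 1.0 + 2.0 * d ** 2]

def log_lik(y, d, model):
    mu = model(d)
    return -0.5 * ((y - mu) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2 * np.pi))

def mi_utility(d, weights, n_sim=2000):
    """Monte Carlo estimate of the mutual information between the model
    indicator and the observation taken at design point d."""
    total = 0.0
    for _ in range(n_sim):
        m = rng.choice(len(MODELS), p=weights)              # draw a model
        y = MODELS[m](d) + SIGMA * rng.standard_normal()    # simulate data
        ll = np.array([log_lik(y, d, mod) for mod in MODELS])
        total += ll[m] - np.log(np.dot(weights, np.exp(ll)))  # log p(y|m,d) - log p(y|d)
    return total / n_sim

candidates = np.linspace(0.0, 2.0, 9)
best = max(candidates, key=lambda d: mi_utility(d, np.array([0.5, 0.5])))
print(f"next design point: {best:.2f}")  # the point separating the models best
```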

Relevance:

90.00%

Publisher:

Abstract:

This paper discusses the use of models in automatic computer forensic analysis, and proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgements as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis to provide an investigator with the information needed to decide whether manual analysis is required.
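
As a rough illustration of the "objects with attributes and inter-relationships" idea, the following hypothetical sketch shows how such a profile might be represented in code; the class and field names are illustrative and not taken from the paper.

```python
# Hypothetical sketch of a computer profile as linked objects; names are
# illustrative placeholders, not the paper's actual model elements.
from dataclasses import dataclass, field

@dataclass
class ProfileObject:
    kind: str                                     # e.g. "user", "application", "file"
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)   # links to other objects

    def relate(self, other, relation):
        self.related.append((relation, other))

# Build a tiny profile: a user account that installed a browser.
user = ProfileObject("user", {"name": "alice", "last_login": "2011-03-02"})
app = ProfileObject("application", {"name": "firefox"})
user.relate(app, "installed")

# An investigator (or automated reasoning engine) can walk the
# relationships to judge probable usage of the machine.
for relation, obj in user.related:
    print(user.attributes["name"], relation, obj.attributes["name"])
```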

Relevance:

90.00%

Publisher:

Abstract:

Despite being positioned as a standard for the exchange of operation and maintenance data, the database heritage of the MIMOSA OSA-EAI is clearly evident in the relational model at its core. The XML schema (XSD) definitions, which are used for communication between asset management systems, are based on the MIMOSA common relational information schema (CRIS), a relational model, and consequently many database concepts permeate the communications layer. The adoption of a relational model leads to several deficiencies and overlooks advances in object-oriented approaches. For an upcoming version of the specification, the common conceptual object model (CCOM) sees a transition to fully utilising object-oriented features for the standard. Unified Modelling Language (UML) is used as a medium for documentation as well as for facilitating XSD code generation. This paper details some of the decisions faced in developing the CCOM and provides a glimpse into the future of asset management and data exchange models.
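
Purely for orientation, the sketch below contrasts the object-oriented flavour the CCOM moves towards: assets as typed objects with composition and associations rather than foreign-key rows. The class names are hypothetical and not drawn from the CCOM itself.

```python
# Illustrative sketch only: an object-oriented asset representation in the
# spirit of a conceptual object model. Class names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Measurement:
    name: str
    value: float
    unit: str

@dataclass
class Asset:
    serial: str
    children: List["Asset"] = field(default_factory=list)        # composition
    measurements: List[Measurement] = field(default_factory=list)

pump = Asset("P-001")
pump.children.append(Asset("M-017"))                              # motor sub-asset
pump.measurements.append(Measurement("vibration", 2.3, "mm/s"))
print(pump)
```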

Relevance:

90.00%

Publisher:

Abstract:

Health care systems are highly dynamic, not just due to developments and innovations in diagnosis and treatment, but also by virtue of emerging management techniques supported by modern information and communication technology. A multitude of stakeholders, such as patients, nurses, general practitioners or social carers, can be integrated by modeling the complex interactions necessary for managing the provision and consumption of health care services. Furthermore, it is the availability of Service-oriented Architecture (SOA) that supports these integration efforts by enabling the flexible and reusable composition of autonomous, loosely-coupled and web-enabled software components. However, there is still a gap between SOA and predominantly business-oriented perspectives (e.g. business process models). The alignment of both views is crucial, not just for the guided development of SOA but also for the sustainable evolution of holistic enterprise architectures. In this paper, we combine the Semantic Object Model (SOM) and the Business Process Modelling Notation (BPMN) into a model-driven approach to service engineering. By addressing a business system in Home Telecare and deriving a business process model that can eventually be controlled and executed by machines, in particular by composed web services, the full potential of a process-centric SOA is exploited.
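
To make the end goal concrete, here is a hedged sketch of a derived process being executed by composed web services; the task names and endpoints are hypothetical, and a real deployment would use a BPMN engine rather than this simple loop.

```python
# Hypothetical sketch: a derived Home Telecare process executed by calling
# web services in sequence. Endpoints and task names are invented.
import urllib.request

PROCESS = [  # ordered tasks of a simplified, invented Home Telecare process
    ("record vital signs", "https://example.org/telecare/vitals"),
    ("assess readings", "https://example.org/telecare/assess"),
    ("notify carer", "https://example.org/telecare/notify"),
]

def run_process(patient_id: str):
    for task, endpoint in PROCESS:
        req = urllib.request.Request(endpoint, data=patient_id.encode(),
                                     method="POST")
        with urllib.request.urlopen(req) as resp:   # invoke the web service
            print(task, "->", resp.status)

# run_process("patient-42")  # would execute the composed services in order
```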

Relevance:

80.00%

Publisher:

Abstract:

Executive Summary

The objective of this report was to use the Sydney Opera House as a case study of the application of Building Information Modelling (BIM). The Sydney Opera House is a complex, large building with a very irregular configuration, which makes it a challenging test. A number of key concerns are evident at SOH:

• the building structure is complex, and building service systems, already the major cost of ongoing maintenance, are undergoing technology change, with new computer-based services becoming increasingly important;
• the current "documentation" of the facility comprises several independent systems, some overlapping, and is inadequate to support current and future services;
• the building has reached a milestone age in terms of the condition and maintainability of key public areas and service systems, the functionality of spaces, and longer-term strategic management;
• many business functions, such as space or event management, require up-to-date information about the facility that is currently inadequately delivered, and is expensive and time-consuming to update and deliver to customers;
• major building upgrades are being planned that will put considerable strain on existing Facilities Portfolio services and on their capacity to manage them effectively.

While some of these concerns are unique to the House, many will be common to larger commercial and institutional portfolios. The work described here supported a complementary task which sought to identify whether a building information model, an integrated building database, could be created to support asset and facility management functions (see Sydney Opera House - FM Exemplar Project, Report Number: 2005-001-C-4, Building Information Modelling for FM at Sydney Opera House), a business strategy that has been well demonstrated.

The development of the BIMSS (Open Specification for BIM) has been surprisingly straightforward. The lack of technical difficulties in converting the House's existing conventions and standards to the new model-based environment can be related to three key factors:

• SOH Facilities Portfolio, the internal group responsible for asset and facility management, already has well-established building and documentation policies in place. The setting of, and adherence to, well-thought-out operational standards has been based on the need to create an environment that is understood by all users and that addresses the major business needs of the House.
• The second factor is the nature of the IFC Model Specification used to define the BIM protocol. The IFC standard is based on building practice and nomenclature widely used in the construction industries across the globe. For example, the nomenclature of building parts (e.g. IfcWall) corresponds to normal terminology but extends the traditional drawing environment currently used for design and documentation. This demonstrates that the international IFC model accurately represents local practice for building data representation and management.
• A BIM environment sets up opportunities for innovative processes that can exploit the rich data in the model and improve services and functions for the House. Several high-level processes have been identified that could benefit from standardized Building Information Models, such as maintenance processes using engineering data, business processes using scheduling, venue access and security data, and benchmarking processes using building performance data.

The new technology matches business needs for current and new services. The adoption of IFC-compliant applications opens the way forward for shared building model collaboration and new processes, a significant new focus of the BIM standards. In summary, SOH's current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. These BIM standards and their application to the Opera House are intended as a template for other organisations to adopt for their own procurement and facility management activities. Appendices provide an overview of the IFC Integrated Object Model and an understanding of IFC Model Data.
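
For readers wanting to experiment with IFC data, the snippet below shows how a building model can be queried in Python with the open-source ifcopenshell library (a toolkit choice assumed here, not one prescribed by the SOH project; the file name is a placeholder).

```python
# Sketch: querying an IFC building model with ifcopenshell (an assumed
# toolkit choice). "opera_house.ifc" is a placeholder file name.
import ifcopenshell

model = ifcopenshell.open("opera_house.ifc")
walls = model.by_type("IfcWall")          # IFC nomenclature, e.g. IfcWall
print(f"{len(walls)} walls in the model")
for wall in walls[:5]:
    print(wall.GlobalId, wall.Name)       # standard IFC root attributes
```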

Relevance:

80.00%

Publisher:

Abstract:

Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes for a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of the various data formats stored on modern computer systems compound the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of the potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model, a computer system which is the subject of investigation can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems. The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
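
The inconsistency-detection idea can be illustrated with a toy example: the same activity is inferred from independent evidence sources, and disagreement between them flags possible tampering. Everything below is hypothetical and far simpler than the thesis prototype.

```python
# Toy sketch of inconsistency detection between evidence sources; all
# source names and event dates are invented for illustration.
def detect_inconsistencies(sources):
    """sources maps an evidence source to the activity dates it implies."""
    findings = []
    baseline_name, baseline = next(iter(sources.items()))
    for name, dates in sources.items():
        missing = baseline - dates
        if missing:
            findings.append(
                f"{name} lacks events seen in {baseline_name}: {sorted(missing)}")
    return findings

sources = {
    "filesystem timestamps": {"2010-01-05", "2010-02-11"},
    "browser history":       {"2010-01-05"},    # 2010-02-11 scrubbed?
}
for finding in detect_inconsistencies(sources):
    print(finding)   # disagreement hints at disguised activity
```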

Relevance:

80.00%

Publisher:

Abstract:

Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only to initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, in which keystroke dynamics based on free text is used continuously to verify the identity of a user in real time. We improve on existing keystroke dynamics based verification schemes in four aspects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they interacted with their mailboxes during their daily activities.
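
A minimal sketch of the continuous-verification loop follows; the digraph-latency distance used here is a deliberately simple stand-in for the paper's distance measure, and the threshold is an arbitrary assumption.

```python
# Sketch: keystroke-dynamics verification via digraph latencies. The
# distance and threshold are simplified assumptions, not the paper's.
def digraph_latencies(events):
    """events: list of (key, press_time_ms); returns mean latency per digraph."""
    out = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        out.setdefault(k1 + k2, []).append(t2 - t1)
    return {dg: sum(v) / len(v) for dg, v in out.items()}

def distance(profile, sample):
    shared = profile.keys() & sample.keys()
    if not shared:
        return float("inf")            # nothing to compare yet
    return sum(abs(profile[d] - sample[d]) for d in shared) / len(shared)

profile = digraph_latencies([("t", 0), ("h", 95), ("e", 180)])   # enrolled
sample  = digraph_latencies([("t", 0), ("h", 102), ("e", 190)])  # live text
THRESHOLD = 25.0   # assumed decision threshold (ms)
print("accept" if distance(profile, sample) < THRESHOLD else "reject")
```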

Relevance:

40.00%

Publisher:

Abstract:

Ecological dynamics characterizes adaptive behavior as an emergent, self-organizing property of interpersonal interactions in complex social systems. The authors conceptualize and investigate constraints on the dynamics of decisions and actions in the multiagent system of team sports. They studied coadaptive interpersonal dynamics in rugby union to model potential control parameter and collective variable relations in attacker-defender dyads. A videogrammetry analysis revealed how some agents generated fluctuations by adapting displacement velocity to create phase transitions and destabilize dyadic subsystems near the try line. Agent interpersonal dynamics exhibited characteristics of chaotic attractors, and the informational constraints of rugby union boxed dyadic systems into a low-dimensional attractor. The data suggest that decisions and actions of agents in sports teams may be characterized as emergent, self-organizing properties governed by laws of dynamical systems at the ecological scale. Further research is needed to generalize this conceptual model of adaptive behavior to other multiagent populations.
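
The kind of quantity such studies track can be sketched directly from positional data: interpersonal distance as a candidate collective variable and closing speed as a candidate control parameter. The trajectories and sampling rate below are invented for illustration.

```python
# Sketch: candidate collective variable (interpersonal distance) and
# control parameter (closing speed) from invented positional data.
import numpy as np

dt = 0.04                                   # assumed 25 Hz videogrammetry
attacker = np.array([[0.0, 0.0], [0.3, 0.1], [0.7, 0.2], [1.2, 0.2]])
defender = np.array([[5.0, 0.0], [4.8, 0.0], [4.5, 0.1], [4.1, 0.1]])

gap = np.linalg.norm(attacker - defender, axis=1)   # interpersonal distance
closing_speed = -np.diff(gap) / dt                  # relative approach velocity

print("distance (m):", gap.round(2))
print("closing speed (m/s):", closing_speed.round(2))
# Abrupt changes in closing speed near the try line would mark the
# fluctuations said to destabilize the dyad.
```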

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a travel time prediction model and evaluates its performance and transferability. Advanced Traveler Information Systems (ATIS) are gaining importance, increasing the need for accurate, timely and useful information for travelers. Travel time information quantifies the traffic condition in a way that is easy for users to understand. The proposed travel time prediction model is based on an efficient use of nearest neighbor search and is calibrated for optimal performance using Genetic Algorithms. Results indicate that the proposed model performs better than the naïve model presently in use.
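
A hedged sketch of the nearest-neighbour prediction step follows; the Genetic Algorithm calibration of the paper is reduced here to a single assumed neighbour count k, and the traffic data is synthetic.

```python
# Sketch: k-nearest-neighbour travel time prediction on synthetic data.
# The paper's GA-calibrated parameters are reduced to an assumed k.
import numpy as np

rng = np.random.default_rng(1)
# historical records: [flow, occupancy] -> observed travel time (minutes)
states = rng.uniform(0, 1, size=(500, 2))
times = 10 + 8 * states[:, 1] + rng.normal(0, 0.5, 500)

def predict(current_state, k=10):
    dists = np.linalg.norm(states - current_state, axis=1)
    nearest = np.argsort(dists)[:k]          # k most similar past states
    return times[nearest].mean()             # average their travel times

print(f"predicted travel time: {predict(np.array([0.4, 0.7])):.1f} min")
```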

Relevance:

40.00%

Publisher:

Abstract:

We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
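
One low-level metric in the spirit of the model can be sketched as follows; the paper computes its metrics from compiled Java bytecode, whereas this toy works on a hand-written field list and is only meant to convey the idea.

```python
# Toy sketch of a low-level attribute-exposure metric: the fraction of a
# class's fields that are non-private, a crude readability proxy. The
# metric name and formula here are illustrative, not the paper's.
def attribute_exposure(fields):
    """fields: list of (name, is_private) pairs; lower score = better."""
    if not fields:
        return 0.0
    exposed = sum(1 for _, private in fields if not private)
    return exposed / len(fields)

account = [("_pin", True), ("balance", False), ("owner", False)]
print(f"exposure: {attribute_exposure(account):.2f}")  # 0.67: two of three exposed
```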

Relevance:

40.00%

Publisher:

Abstract:

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. We tackle in this paper the problem of adaptivity to this condition in the context of model selection, in a general learning framework. Actually, we consider a weaker version of this condition that allows one to take into account that learning within a small model can be much easier than within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that only depend on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not demonstrate strong margin adaptivity.
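
For orientation, one common statement of the Mammen-Tsybakov margin condition for binary classification is reproduced below; the paper itself works with a weaker, model-dependent variant in a more general framework.

```latex
% One common form of the Mammen--Tsybakov margin condition for binary
% classification, with regression function \eta(x) = P(Y = 1 \mid X = x):
% there exist C > 0 and \alpha \ge 0 such that
\[
  \mathbb{P}\bigl( 0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr)
  \;\le\; C\, t^{\alpha}
  \qquad \text{for all } t > 0 .
\]
```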

Relevance:

40.00%

Publisher:

Abstract:

Preservation and enhancement of transportation infrastructure is critical to continuous economic development in Australia. Of particular importance are road infrastructure assets, due to their high setup costs and their social and economic impact on the national economy. Continuous availability of road assets, however, is contingent upon their effective design, condition monitoring, maintenance, renovation and upgrading. In order to achieve this, data exchange, integration, and interoperability are required across municipal boundaries. However, there are no agreed reference frameworks that consistently describe road infrastructure assets. As a consequence, the specifications and technical solutions chosen to manage road assets do not provide adequate detail and quality of information to support asset lifecycle management processes, and decisions are taken based on perception rather than reality. This paper presents a road asset information model, which serves as a reference framework to link other kinds of information with asset information, integrate different data suppliers, and provide a foundation for a service-driven integrated information framework for community infrastructure and asset management.
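
As a purely illustrative sketch, the following hypothetical data model shows how a reference framework might link records from different suppliers under one stable asset identifier; none of the field names are drawn from the paper.

```python
# Hypothetical sketch of a reference-framework entry for a road asset;
# identifiers, types and record fields are all invented for illustration.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RoadAsset:
    asset_id: str                      # stable id across municipal boundaries
    asset_type: str                    # e.g. "pavement segment", "bridge"
    location: str
    supplier_records: Dict[str, dict] = field(default_factory=dict)

    def attach(self, supplier: str, record: dict):
        """Integrate a supplier's data under the shared reference id."""
        self.supplier_records[supplier] = record

segment = RoadAsset("QLD-RD-0042-S3", "pavement segment", "Bruce Hwy km 42")
segment.attach("condition-survey", {"rutting_mm": 7.5, "year": 2009})
segment.attach("maintenance-log", {"last_resurfaced": 2004})
print(segment.supplier_records)
```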

Relevance:

40.00%

Publisher:

Abstract:

Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g. illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g. a frontal face being compared to a non-frontal face). To address this problem, we propose a novel approach to enhance nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to resemble the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single model approaches and other recent techniques, such as Sparse Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis.
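
The core geometric step of nearest-points methods, the distance between the convex hulls of two image sets, can be sketched as a small constrained least-squares problem; the data below is random and the paper's clustering stages are omitted.

```python
# Sketch: distance between the convex hulls of two point sets, found by
# constrained least squares. Random data; clustering steps omitted.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (5, 3))   # set 1: 3 samples (columns) in 5-D
Y = rng.normal(2, 1, (5, 4))   # set 2: 4 samples

def hull_distance(X, Y):
    nx, ny = X.shape[1], Y.shape[1]

    def objective(w):
        a, b = w[:nx], w[nx:]
        return np.sum((X @ a - Y @ b) ** 2)   # squared gap between hull points

    cons = [{"type": "eq", "fun": lambda w: w[:nx].sum() - 1},   # convex combos
            {"type": "eq", "fun": lambda w: w[nx:].sum() - 1}]
    w0 = np.concatenate([np.full(nx, 1 / nx), np.full(ny, 1 / ny)])
    res = minimize(objective, w0, bounds=[(0, 1)] * (nx + ny), constraints=cons)
    return np.sqrt(res.fun)

print(f"convex hull distance: {hull_distance(X, Y):.3f}")
```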

Relevance:

40.00%

Publisher:

Abstract:

This paper presents an object-oriented world model for the road traffic environment of autonomous (driverless) city vehicles. The developed World Model is a software component of the autonomous vehicle's control system, representing the vehicle's view of its road environment. Regardless of whether information is known a priori, obtained through on-board sensors, or received through communication, the World Model stores and updates it in real time, notifies the decision-making subsystem about relevant events, and provides access to its stored information. The design is based on software design patterns, and its application programming interface provides both asynchronous and synchronous access to its information. Experimental results from both a 3D simulation and real-world experiments show that the approach is applicable and real-time capable.
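
A hedged sketch of the interface style described, synchronous queries plus event notification via the observer pattern, is shown below; class, method and event names are illustrative, not the paper's actual API, and the event push merely stands in for truly asynchronous delivery.

```python
# Illustrative sketch of a world model with observer-style notification
# and on-demand queries. Names are hypothetical, not the paper's API.
from typing import Callable, Dict, List

class WorldModel:
    def __init__(self):
        self._objects: Dict[str, dict] = {}                 # tracked entities
        self._listeners: List[Callable[[str, dict], None]] = []

    def subscribe(self, listener: Callable[[str, dict], None]):
        """Event-driven access: decision making registers for notifications."""
        self._listeners.append(listener)

    def update(self, obj_id: str, state: dict):
        """Called by sensor fusion / communication in real time."""
        self._objects[obj_id] = state
        for notify in self._listeners:
            notify(obj_id, state)                           # push notification

    def get(self, obj_id: str) -> dict:
        """Synchronous access: query the stored state on demand."""
        return self._objects[obj_id]

wm = WorldModel()
wm.subscribe(lambda oid, s: print("event:", oid, s))
wm.update("pedestrian-1", {"position": (12.0, 3.5), "speed": 1.2})
print(wm.get("pedestrian-1"))
```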