428 results for capability-based framework


Relevance:

30.00%

Publisher:

Abstract:

Reducing complexity in information systems is an important topic in both research and industry. One strategy for dealing with complexity is separation of concerns, which results in less complex, more easily maintainable, and more reusable systems. Separation of concerns can be addressed through the aspect-oriented paradigm. Although this paradigm has been well researched in programming, it is still at a preliminary stage in the area of Business Process Management. While some efforts have been made to extend business process modelling with aspect-oriented capability, it has not yet been investigated how aspect-oriented business process models should be executed at runtime. In this paper, we propose a generic solution for supporting the execution of aspect-oriented business process models, based on the principle behind dynamic weaving of aspects. The solution is formally specified using Coloured Petri Nets, and the resulting formal specification serves as the blueprint for the implementation of a service module in the framework of a state-of-the-art Business Process Management System. Using this artefact, a case study is performed in which two simplified processes from a real business in the banking domain are modelled and executed in an aspect-oriented manner. Through this case study, we also demonstrate that aspect-oriented modularization increases the reusability of business process models in practice while reducing their complexity.
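To make the dynamic weaving principle concrete, the sketch below shows a minimal runtime interceptor that weaves cross-cutting advice (e.g. auditing) around process activities as they execute. The class, pointcut, and activity names are hypothetical illustrations, not the paper's Coloured Petri Net specification or its BPMS service module.

```python
# Minimal sketch of dynamic aspect weaving for process activities.
# Pointcuts, advice, and activity names are illustrative assumptions.

class DynamicWeaver:
    def __init__(self):
        self.aspects = []  # (pointcut predicate, before advice, after advice)

    def register(self, pointcut, before=None, after=None):
        """Register an aspect: advice to run around matching activities."""
        self.aspects.append((pointcut, before, after))

    def execute(self, activity_name, activity_fn, *args):
        """Weave matching advice around the activity at call time."""
        matched = [a for a in self.aspects if a[0](activity_name)]
        for _, before, _ in matched:
            if before:
                before(activity_name)
        result = activity_fn(*args)
        for _, _, after in matched:
            if after:
                after(activity_name)
        return result

weaver = DynamicWeaver()
weaver.register(lambda name: name.startswith("transfer"),
                before=lambda n: print(f"[audit] entering {n}"),
                after=lambda n: print(f"[audit] leaving {n}"))
weaver.execute("transfer_funds", lambda: print("moving money"))
```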

Relevance:

30.00%

Publisher:

Abstract:

In this article we develop a hierarchical framework of ordinary capabilities, dynamic functional capabilities, and dynamic learning capabilities. These three levels of capabilities differ across four interdependent internal dimensions: predominant resources, routine patterning, learning, and strategic intent. The levels are also influenced by external environmental velocity. The framework advances the ongoing debate surrounding the capability hierarchy and offers a novel view of capabilities. We also indicate how the framework can lead future research toward a validated, measurable model that contributes to resolving the definitional and associated measurement debates around ordinary and dynamic capabilities.

Relevance:

30.00%

Publisher:

Abstract:

The Thailand education reform adopted cooperative learning to improve the quality of education. However, the introduction and maintenance of cooperative learning have reportedly been difficult and uncertain because of cultural differences. This study proposes a conceptual framework that connects Thai cultural values with the elements of cooperative learning, and reports a small-scale research project implemented in a Thai primary mathematics class with one teacher and thirty-two Grade 4 students. The results revealed that three components, namely the preparation of teachers, instructional strategies, and the preparation of students, can serve as vehicles for integrating culture into cooperative learning.

Relevance:

30.00%

Publisher:

Abstract:

The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing objective functions for different tasks; most, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that it offers similar performance on small-scale problems. We also demonstrate its capability on large-scale problems, where it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
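The core idea of trans-dimensional simulated annealing is that proposal moves may change the dimensionality of the state, here the number of cameras, as well as perturb it. The sketch below is a minimal illustration under assumed ingredients (a toy coverage objective with a per-camera cost, Gaussian perturbations, geometric cooling); it is not the paper's statistical formulation.

```python
import math
import random

# Trans-dimensional simulated annealing for camera placement (toy version).
# Objective, radii, costs, and the cooling schedule are illustrative.

def coverage_score(cameras, points, radius=2.0, cost_per_camera=3.0):
    """Score a configuration: covered points minus a per-camera cost."""
    covered = sum(
        1 for p in points
        if any(math.dist(p, c) <= radius for c in cameras)
    )
    return covered - cost_per_camera * len(cameras)

def neighbour(cameras, bounds=(0.0, 10.0)):
    """Propose a move: perturb, add, or remove a camera (dimension may change)."""
    new = list(cameras)
    move = random.choice(["perturb", "add", "remove"])
    if move == "perturb" and new:
        i = random.randrange(len(new))
        x, y = new[i]
        new[i] = (x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
    elif move == "add":
        new.append((random.uniform(*bounds), random.uniform(*bounds)))
    elif move == "remove" and len(new) > 1:
        new.pop(random.randrange(len(new)))
    return new

def anneal(points, steps=20000, t0=5.0, cooling=0.9997):
    state = [(random.uniform(0, 10), random.uniform(0, 10))]
    score, temp = coverage_score(state, points), t0
    for _ in range(steps):
        cand = neighbour(state)
        cand_score = coverage_score(cand, points)
        # Accept improvements always; worse states with Boltzmann probability.
        if cand_score >= score or random.random() < math.exp((cand_score - score) / temp):
            state, score = cand, cand_score
        temp *= cooling
    return state, score

if __name__ == "__main__":
    pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
    best, best_score = anneal(pts)
    print(f"{len(best)} cameras, score {best_score:.1f}")
```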

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Most barriers and enablers of sustainable projects are related to procurement. This study proposes a framework for evaluating green procurement practices throughout the lifecycle of road construction projects and demonstrates its application through an Australian case study.

Design/methodology/approach: The study links the phases of road construction with incentive mechanisms for proactively motivating behavioural change. A holistic view of utilised and potential incentives is developed through a literature review and a state-of-practice review, the latter based on interviews and 90 policy and procurement documents across five Australian states.

Findings: An evaluation framework with seven procurement stages is suggested to describe current green procurement incentives throughout the delivery lifecycle of road construction projects. The Australian case study provided useful data for identifying gaps and strong points of the different states regarding how far sustainability and greenhouse gas (GHG) emission reduction elements are integrated into their procurement practices. This understanding was used to draw recommendations for the future advancement of green procurement.

Originality/value: Government entities across the globe can considerably influence the achievement of sustainability and GHG targets by using their procurement practices and requirements to create incentives for contractors and suppliers to engage in more GHG-conscious practices. The present study provides a systematic account of how green procurement practices can be underpinned, using the Australian road construction industry as a case study, and distinguishes between strong and weak links in the green procurement chain to draw recommendations for future initiatives.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes an online learning control system that uses the strategy of Model Predictive Control (MPC) within a model-based, locally weighted learning framework. The new approach, named Locally Weighted Learning Model Predictive Control (LWL-MPC), is proposed as a solution for learning to control robotic systems with nonlinear and time-varying dynamics. The paper demonstrates the capability of LWL-MPC to perform online learning while controlling the joint trajectories of a low-cost, three-degree-of-freedom elastic joint robot. The learning performance is investigated both in an initial learning phase and when the system dynamics change due to a heavy object being added to the tool point. An experiment on the real elastic joint robot is presented, and LWL-MPC is shown to successfully learn to control the system with and without the object. The results highlight the capability of the learning control system to accommodate the lack of mechanical consistency and linearity in a low-cost robot arm.
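The locally weighted learning component can be illustrated with a short locally weighted regression (LWR) sketch: the dynamics model is fit locally around each query point from stored transitions, so it can be updated online simply by adding samples. This is a generic LWR sketch under assumed kernel and bandwidth choices, not the authors' LWL-MPC implementation.

```python
import numpy as np

# Locally weighted regression for dynamics prediction (generic sketch).
# Bandwidth, features, and regularisation are illustrative assumptions.

class LocallyWeightedModel:
    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth
        self.X, self.Y = [], []  # (state, action) inputs -> next-state outputs

    def add_sample(self, x, y):
        """Store one observed transition for later local fits (online update)."""
        self.X.append(np.asarray(x, float))
        self.Y.append(np.asarray(y, float))

    def predict(self, query):
        """Fit a weighted affine model around the query point and evaluate it."""
        X = np.vstack(self.X)
        Y = np.vstack(self.Y)
        q = np.asarray(query, float)
        # Gaussian kernel: nearby samples dominate the local fit.
        w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * self.bandwidth ** 2))
        Xb = np.hstack([X, np.ones((len(X), 1))])  # affine features
        W = np.diag(w)
        # Ridge-regularised weighted least squares for numerical stability.
        beta = np.linalg.solve(Xb.T @ W @ Xb + 1e-6 * np.eye(Xb.shape[1]),
                               Xb.T @ W @ Y)
        return np.append(q, 1.0) @ beta
```

In an MPC loop, such a model would be queried repeatedly to roll out candidate action sequences over the prediction horizon and select the best one.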

Relevance:

30.00%

Publisher:

Abstract:

Efficient and effective feature detection and representation is an important consideration when processing videos, and a large number of applications, such as motion analysis, 3D scene understanding, and tracking, depend on it. Among feature description methods, local features are becoming increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational complexity, their performance is still too limited for real-world applications. Furthermore, the rapid increase in the uptake of mobile devices has increased the demand for algorithms that can run with reduced memory and computational requirements. In this paper we propose a semi-binary feature detector/descriptor based on the BRISK detector, which can detect and represent videos with significantly reduced computational requirements while achieving performance comparable to state-of-the-art spatio-temporal feature descriptors. First, the BRISK feature detector is applied on a frame-by-frame basis to detect interest points; the detected key points are then compared across consecutive frames to test for significant motion. Key points with significant motion are encoded with the BRISK descriptor in the spatial domain and the Motion Boundary Histogram in the temporal domain. The descriptor is not only lightweight but also has lower memory requirements because of the binary nature of the BRISK descriptor, opening up the possibility of applications on hand-held devices. We evaluate the detector/descriptor combination in the context of action classification with a standard, popular bag-of-features with SVM framework. Experiments are carried out on two popular datasets of varying complexity, and we demonstrate performance comparable to other descriptors at reduced computational complexity.
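The detection stage described above can be sketched with OpenCV: BRISK keypoints are detected per frame, and only those exhibiting significant inter-frame motion are kept and described. The motion test below uses Farneback optical flow with an arbitrary threshold as a stand-in for the paper's motion criterion, and the temporal Motion Boundary Histogram stage is omitted.

```python
import cv2
import numpy as np

# Frame-wise BRISK detection with a motion filter (hedged sketch).
# The flow-based motion test and its threshold are illustrative stand-ins.

def motion_filtered_brisk(video_path, motion_thresh=1.5):
    brisk = cv2.BRISK_create()
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    features = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        kps = brisk.detect(gray, None)
        # Keep keypoints whose local flow magnitude indicates real motion.
        moving = [kp for kp in kps
                  if np.linalg.norm(flow[int(kp.pt[1]), int(kp.pt[0])]) > motion_thresh]
        # Describe the surviving keypoints with the binary BRISK descriptor.
        moving, desc = brisk.compute(gray, moving)
        if desc is not None:
            features.append(desc)
        prev_gray = gray
    cap.release()
    return features
```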

Relevance:

30.00%

Publisher:

Abstract:

Floods are among the most devastating events affecting primarily tropical, archipelagic countries such as the Philippines. With current climate change predictions including rising sea levels, the intensification of typhoon strength, and a general increase in mean annual precipitation throughout the Philippines, it has become paramount to prepare for the future so that the increased flood risk does not translate into greater economic and human loss. Field work and data gathering were carried out within the framework of an internship at the former German Technical Cooperation (GTZ), in cooperation with the Local Government Unit (LGU) of Ormoc City, Leyte, the Philippines, in order to develop a dynamic computer-based flood model for the basin of the Pagsangaan River. To this end, different geo-spatial analysis tools such as PCRaster and ArcGIS, hydrological analysis packages, and basic engineering techniques were assessed and implemented. The aim was to develop a dynamic flood model and to use the development process to determine the required data, their availability, and their impact on the results, as a case study for flood early warning systems in the Philippines. The hope is that such projects can help to reduce flood risk by feeding the results of worst-case scenario analyses and current climate change predictions into city planning for municipal development, monitoring strategies, and early warning systems. The project was developed using a 1D-2D coupled model in SOBEK (Deltares' hydrological modelling software package) and also served as a case study for analyzing the influence of factors such as land use, schematization, time step size, and tidal variation on the flood characteristics. Several sources of relevant satellite data were compared, such as Digital Elevation Models (DEMs) from ASTER and SRTM, as well as satellite rainfall data from the GIOVANNI server (NASA) and field gauge data. Different methods were used to partially calibrate and validate the model, which was finally used to simulate and study two climate change scenarios based on scenario A1B predictions. It was observed that large areas currently considered not prone to floods will become low-risk areas (0.1-1 m water depth), and that larger sections of the floodplains upstream of the Lilo-an Bridge will become moderate-risk areas (1-2 m water depth). The flood hazard maps created during the project will be presented to the LGU, and the model will be used by GTZ's Local Disaster Risk Management Department to map a larger set of possible flood-prone areas as a function of rainfall intensity and to study possible improvements to the current early warning system and the monitoring of the basin section belonging to Ormoc City. Recommendations on further enhancing the geo-hydro-meteorological data to improve the model's accuracy, mainly in areas of interest, will also be presented to the LGU.

Relevance:

30.00%

Publisher:

Abstract:

Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for information organisation and management. It has become increasingly important and has attracted wide attention from researchers in different fields. This paper first reviews feature selection methods, implementation algorithms, and applications of text classification. However, because the knowledge extracted by current data-mining techniques for text classification contains much noise, considerable uncertainty arises in the classification process, from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve performance. Further improving the process of knowledge extraction and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed that uses Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and a decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining, and related fields.
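The Rough Set machinery the paper builds on can be illustrated in a few lines: under an indiscernibility relation induced by document features, a target concept is bracketed by a lower approximation (documents certainly in the concept) and an upper approximation (documents possibly in it), and the boundary between the two contains exactly the hard-to-separate cases. The documents and features below are toy assumptions.

```python
from collections import defaultdict

# Lower/upper approximations of a target set under indiscernibility.
# Documents and attribute values here are hypothetical examples.

def approximations(objects, attrs, target):
    """Return (lower, upper) approximations of `target` given attributes."""
    # Group objects that are indiscernible (identical attribute values).
    classes = defaultdict(set)
    for obj in objects:
        classes[tuple(attrs[obj])].add(obj)
    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:    # certainly in the target concept
            lower |= eq_class
        if eq_class & target:     # possibly in the target concept
            upper |= eq_class
    return lower, upper

# Example: four documents described by (has_term_A, has_term_B).
docs = {"d1", "d2", "d3", "d4"}
features = {"d1": (1, 0), "d2": (1, 0), "d3": (0, 1), "d4": (1, 1)}
relevant = {"d1", "d3"}
low, up = approximations(docs, features, relevant)
print("certain:", low, "possible:", up)  # d1/d2 fall in the boundary region
```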

Relevance:

30.00%

Publisher:

Abstract:

Background: Post-stroke recovery is demanding, and a growing number of studies have examined the effectiveness of self-management programs for stroke survivors. However, no systematic review has summarized the effectiveness of theory-based stroke self-management programs.

Objectives: To present the best available research evidence on the effectiveness of theory-based self-management programs for community-dwelling stroke survivors' recovery.

Inclusion criteria: Types of participants: community-residing adults aged 18 years or above with a clinical diagnosis of stroke. Types of interventions: studies examining the effectiveness of a self-management program underpinned by a theoretical or conceptual framework for community-dwelling stroke survivors. Types of studies: randomized controlled trials. Types of outcomes: primary outcomes included health-related quality of life and self-management behaviors; secondary outcomes included physical (activities of daily living), psychological (self-efficacy, depressive symptoms), and social outcomes (community reintegration, perceived social support).

Search strategy: A three-step approach was adopted to identify all relevant published and unpublished studies in English or Chinese.

Methodological quality: The methodological quality of the included studies was assessed using the Joanna Briggs Institute (JBI) critical appraisal checklist for experimental studies.

Data collection: A standardized JBI data extraction form was used. There was no disagreement between the two reviewers on the data extraction results.

Data synthesis: Incomplete details about the number of participants and the results in two studies made meta-analysis impossible, so a narrative summary of the effectiveness of stroke self-management programs is presented.

Results: Three studies were included. The key methodological concerns included insufficient information about random assignment and allocation concealment, unreported reliability and validity of the measuring instruments, absence of intention-to-treat analysis, and small sample sizes. The three programs were designed on the basis of the Stanford Chronic Disease Self-Management Program and were underpinned by the principles of self-efficacy. One study showed improvement in the intervention group in family and social roles three months after program completion, and in work productivity at six months, as measured by the Stroke Specific Quality of Life Scale (SSQOL); the intervention group also had an increased mean self-efficacy score in communicating with physicians six months after program completion, and the mean changes from baseline in these variables differed significantly from the control group. No significant difference was found in time spent in aerobic exercise between the intervention and control groups at three and six months after program completion. Another study, also using the SSQOL, showed a significant treatment-by-time interaction effect on family roles, fine motor tasks, self-care, and work productivity, but no significant interaction on self-efficacy. The third study showed improvement in quality of life, community participation, and depressive symptoms among participants receiving the stroke self-management program, the Stanford Chronic Disease Self-Management Program, or usual care six months after program completion, with no significant difference between the groups.

Conclusions: The evidence about the effectiveness of theory-based stroke self-management programs for community-dwelling stroke survivors' recovery is inconclusive, but preliminary findings suggest potential benefits in improving stroke survivors' quality of life and self-efficacy.

Relevance:

30.00%

Publisher:

Abstract:

The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach owing to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of these technical uncertainties. Nevertheless, little research has examined the effects of the uncertainties of generic WSN platforms or verified the capability of SHM-oriented WSNs, particularly in demanding SHM applications such as modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. The effects of the most inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are then investigated intensively. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are used as clean data and contaminated with different levels of data pollutants to simulate the practical uncertainties of both WSN platforms. Statistical analyses are employed comprehensively to uncover the distribution pattern of the uncertainty's influence on the OMDI approach. The results show that the uncertainties of generic WSNs can seriously affect level 1 OMDI methods that utilize mode shapes, and that SHM-oriented WSNs can substantially lessen this impact and recover true structural information without resorting to costly computational solutions.
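The contamination step can be sketched as follows: clean wired-sensor accelerations are degraded with a per-channel synchronization (time-shift) error and random data loss, the two inherent WSN uncertainties named above. The magnitudes, the wrap-around shift, and the last-value imputation are illustrative assumptions, not the study's exact pollution models.

```python
import numpy as np

# Emulate WSN uncertainties on clean acceleration data (hedged sketch):
# per-channel synchronization error and random data loss.

def contaminate(acc, fs, max_shift_s=0.01, loss_rate=0.02, seed=0):
    """acc: (n_samples, n_channels) clean data; fs: sampling rate in Hz."""
    rng = np.random.default_rng(seed)
    n, ch = acc.shape
    dirty = np.empty_like(acc)
    for c in range(ch):
        # Synchronization error: shift each channel by a random sample offset.
        shift = int(rng.uniform(-max_shift_s, max_shift_s) * fs)
        dirty[:, c] = np.roll(acc[:, c], shift)
    # Data loss: drop samples at random and hold the last received value.
    lost = rng.random(dirty.shape) < loss_rate
    for c in range(ch):
        col = dirty[:, c]
        for i in np.flatnonzero(lost[:, c]):
            col[i] = col[i - 1] if i > 0 else 0.0
    return dirty
```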

Relevance:

30.00%

Publisher:

Abstract:

Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station itself. The existing network-based processing modes, whether executed in real time or post-processed, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based, and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters, and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles; covariance information for the estimated parameters may also optionally be provided. In this mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models; the distinction lies in how the user receiver software deals with the corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually from single reference stations. With station-based solutions from three reference stations at distances of 22–103 km, the user receiver positioning results under various schemes show that the proposed station-augmented PPP and ambiguity-fixed PPP solutions improve accuracy with respect to standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
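A station-based solution in this distributed mode can be pictured as a compact record per reference station, mirroring the parameters listed above, that a data server forwards to nearby PPP/RTK users. The field names below are illustrative assumptions, not a standardized message format from the paper.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple, List

# Hypothetical record for a reference receiver-specific solution,
# mirroring the parameters the abstract enumerates.

@dataclass
class StationSolution:
    station_id: str
    epoch: float                                # solution time (GPS seconds)
    receiver_clock: float                       # precise receiver clock (s)
    zenith_trop_delay: float                    # zenith tropospheric delay (m)
    code_biases: Dict[str, float]               # differential code biases per signal
    ambiguities: Dict[int, float]               # per-satellite carrier ambiguities
    iono_delays: Dict[int, float]               # per-satellite slant iono delays (m)
    line_of_sight: Dict[int, Tuple[float, float]]  # (azimuth, elevation) per satellite
    covariance: Optional[List[List[float]]] = None  # optional parameter covariance

# A user-side data server would subscribe to solutions from nearby stations
# (the paper's tests use three stations at 22-103 km) and feed them into the
# unified PPP/RTK observation models.
```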

Relevance:

30.00%

Publisher:

Abstract:

Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that would otherwise be expensive or impractical to investigate. Its recent gain in popularity can be attributed in part to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as it uses explicit data at a fine level of detail, and computation-intensive, as it requires many interactions between agents, which can learn and pursue goals. With the growing availability of data and the increase in computing power, these concerns are fading. Nonetheless, updating or extending a model as more information becomes available can be problematic because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims to answer a range of questions about the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that it has been developed in a compositional manner; this encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. This approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles from the software engineering domain, information hiding and separation of concerns, were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information about the model entities was separated into (a) assets, which describe the entities' physical characteristics, and (b) agents, which describe their behaviour according to their goals and previous learning experiences. This diverges from the traditional approach, where both aspects are often conflated, and it has many advantages in terms of the reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics remain the same: two identical battery systems will be used differently depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on the simulation to be run; for example, data can describe the environment the agents respond to (e.g. weather for solar panels) or the assets and their relation to one another (e.g. the network assets).

Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of assets and agents using factories, and schedules their execution, either sequentially or in parallel for faster runs. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets. Simulations have been run to understand the potential impact of changes to the network in terms of assets (e.g. the installation of decentralised generators) or behaviours (e.g. responses to different management aims). While the platform has been developed within the context of a project focusing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
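The asset/agent separation can be sketched directly, using the battery example from the text: the asset class carries only physical characteristics, while the agent class carries a behaviour and goal, so the same asset class can be reused under different agents. Class and parameter names are illustrative, not taken from the MODAM codebase.

```python
# Asset/agent separation (illustrative sketch of the stated design):
# physical description in the asset, behaviour and goal in the agent.

class BatteryAsset:
    """Physical characteristics only: identical for identical hardware."""
    def __init__(self, capacity_kwh, depth_of_discharge, lifetime_cycles):
        self.capacity_kwh = capacity_kwh
        self.depth_of_discharge = depth_of_discharge
        self.lifetime_cycles = lifetime_cycles
        self.charge_kwh = 0.0

class PeakShavingAgent:
    """One behaviour for using a battery; another agent (e.g. backup power)
    could reuse the same asset class with a different goal."""
    def __init__(self, asset, demand_threshold_kw):
        self.asset = asset
        self.demand_threshold_kw = demand_threshold_kw

    def step(self, demand_kw, hours=1.0):
        """Discharge above the demand threshold, recharge below it."""
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        if demand_kw > self.demand_threshold_kw:
            dispatch = min(self.asset.charge_kwh,
                           (demand_kw - self.demand_threshold_kw) * hours)
            self.asset.charge_kwh -= dispatch
            return demand_kw - dispatch / hours   # net demand seen by the grid
        self.asset.charge_kwh = min(usable,
                                    self.asset.charge_kwh + 2.0 * hours)
        return demand_kw
```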

Relevance:

30.00%

Publisher:

Abstract:

This paper conceptualizes and defines knowledge governance (KG) in project-based organizations (PBOs). Two key contributions are advanced: a multi-faceted view of KG, and an understanding of KG in PBOs as distinguished from knowledge management and organizational learning concepts. The conceptual framework addresses macro- and micro-level elements of KG and their interaction, and our definition of KG in PBOs highlights the contingent nature of KG processes in relation to their organizational context. These contributions provide a novel platform for understanding KG in PBOs.