375 results for Random processes
Abstract:
In recent years, the trade-off between flexibility and support has become a leading issue in workflow technology. In this paper we show how an imperative modeling approach used to define stable and well-understood processes can be complemented by a modeling approach that enables automatic process adaptation and exploits planning techniques to deal with environmental changes and exceptions that may occur during process execution. To this end, we designed and implemented a Custom Service that allows the Yawl execution environment to delegate the execution of subprocesses and activities to the SmartPM execution environment, which is able to automatically adapt a process to deal with emerging changes and exceptions. We demonstrate the feasibility and validity of the approach by showing the design and execution of an emergency management process defined for train derailments.
Abstract:
Process models are usually depicted as directed graphs, with nodes representing activities and directed edges control flow. While structured processes with pre-defined control flow have been studied in detail, flexible processes including ad-hoc activities need further investigation. This paper presents the flexible process graph, a novel approach to model processes in the context of a dynamic environment and adaptive process participant behavior. The approach allows execution constraints to be defined that are more restrictive than traditional ad-hoc processes and less restrictive than traditional control flow, thereby balancing structured control flow with unstructured ad-hoc activities. The flexible process graph focuses on what can be done to perform a process, with process participants' routing decisions based on the current process state. As a formal grounding, the approach uses hypergraphs, where each edge can associate any number of nodes; hypergraphs are used to formally define the execution semantics of processes. We provide a process scenario to motivate and illustrate the approach.
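To make the hypergraph grounding concrete, here is a minimal, hypothetical sketch (Python; not taken from the paper, whose execution semantics are richer): a process is represented as a hypergraph whose edges may join any number of activity nodes, with a simple routing query over the current process state.

```python
# Minimal sketch (assumed representation, not the paper's formal semantics):
# a process as a hypergraph whose edges may associate any number of activity
# nodes, plus a query for activities reachable from the current state.

from dataclasses import dataclass, field

@dataclass
class Hypergraph:
    nodes: set = field(default_factory=set)      # activities
    edges: list = field(default_factory=list)    # each edge is a frozenset of nodes

    def add_edge(self, *activities):
        edge = frozenset(activities)
        self.nodes |= edge
        self.edges.append(edge)

    def enabled(self, completed):
        """Activities reachable from the completed set via some hyperedge."""
        completed = set(completed)
        reachable = set()
        for edge in self.edges:
            if edge & completed:                 # edge touches the current state
                reachable |= edge - completed
        return reachable

# Hypothetical example: 'review' and 'approve' become available once 'draft' is done.
hg = Hypergraph()
hg.add_edge("draft", "review", "approve")
hg.add_edge("review", "archive")
print(hg.enabled({"draft"}))   # {'review', 'approve'} (set order may vary)
```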
Abstract:
An in vivo screen has been devised for NF-κB p50 activity in Escherichia coli exploiting the ability of the mammalian transcription factor to emulate a prokaryotic repressor. Active intracellular p50 was shown to repress the expression of a green fluorescent protein reporter gene allowing for visual screening of colonies expressing active p50 on agar plates. A library of mutants was constructed in which the residues Y267, L269, A308 and V310 of the dimer interface were simultaneously randomised and twenty-five novel functional interfaces were selected which repressed the reporter gene to similar levels as the wild-type protein. The leucine-269 alanine-308 core was repeatedly, but not exclusively, selected from the library whilst a diversity of predominantly non-polar residues were selected at positions 267 and 310. These results indicate that L269 and A308 may form a hot spot of interaction and allow an insight into the processes of dimer selectivity and evolution within this family of transcription factors.
Abstract:
Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust feature extraction algorithms and investigation of how different types of occlusion affect recognition performance. Previous FER studies in this area have been limited: they have focused on recovery strategies for the loss of local texture information, tested only a few types of occlusion, and predominantly used a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes with clear and solid glasses. Both matched and mismatched train and test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The results on parameter sensitivity demonstrate a certain level of robustness of the approach to changes in the orientation and scale of the Gabor filters, the size of templates, and occlusion ratios. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with smaller reductions in accuracy from occlusion of the eyes or mouth.
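As an illustration of the template-based idea, the following hedged sketch (Python with NumPy; the template sizes, distance measure and search window are assumptions, not the authors' implementation) samples random part-face templates from a gallery image and converts each into a match-distance feature for a probe image; an occluded region only affects the templates that happen to overlap it.

```python
# Illustrative sketch only: random part-face templates turned into
# template match distance features (parameters are assumed values).

import numpy as np

rng = np.random.default_rng(0)

def random_templates(gallery_img, n_templates=50, size=12):
    """Sample random square templates (position + pixels) from a gallery image."""
    h, w = gallery_img.shape
    out = []
    for _ in range(n_templates):
        r = rng.integers(0, h - size)
        c = rng.integers(0, w - size)
        out.append(((r, c), gallery_img[r:r + size, c:c + size].copy()))
    return out

def match_distance_features(probe_img, templates, search=4):
    """Minimum Euclidean distance of each template within a small search window."""
    h, w = probe_img.shape
    feats = []
    for (r, c), tpl in templates:
        size = tpl.shape[0]
        best = np.inf
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr <= h - size and 0 <= cc <= w - size:
                    patch = probe_img[rr:rr + size, cc:cc + size]
                    best = min(best, np.linalg.norm(patch - tpl))
        feats.append(best)
    return np.array(feats)

# Synthetic demo: a probe that is a noisy copy of the gallery image.
gallery = rng.random((64, 64))
probe = gallery + 0.05 * rng.random((64, 64))
tpls = random_templates(gallery)
print(match_distance_features(probe, tpls).shape)   # (50,) feature vector
```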
Abstract:
Service processes such as financial advice, booking a business trip or conducting a consulting project have emerged as units of analysis of high interest for the business process and service management communities in practice and academia. While the transactional nature of production processes is relatively well understood and deployed, the less predictable and highly interactive nature of service processes still lacks appropriate methodological grounding in many areas. This paper proposes the framework of a process laboratory as a new IT artefact to facilitate the holistic analysis and simulation of such service processes. Using financial services as an example, it is shown how such a process laboratory can be used to reduce the complexity of service process analysis and to facilitate operational service process control.
Abstract:
Conceptual modeling is an important tool for understanding and revealing weaknesses of business processes. Yet, current practice in reengineering projects often treats the as-is process model simply as a brainstorming tool. This approach relies heavily on the intuition of the participants and misses a clear description of the quality requirements. Against this background, we identify four generic categories of business process quality and populate them with quality requirements from related research. We refer to the resulting framework as the Quality of Business Process (QoBP) framework. Furthermore, we present findings from applying the QoBP framework in a case study with a major Australian bank, showing that it helps to systematically fill the white space between as-is and to-be process modeling.
Abstract:
This article presents an overview of two aspects of the role the internet now plays in the court system - first, the extent to which judges, administrators and court officials at the different levels in the court hierarchy are using the internet to deliver enhanced access to the Australian justice system for the community as a whole, and second, how they have embraced that same technology as an aid for accessing information for better judgment delivery and administration.
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
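A hedged sketch of the random-projection idea follows (Python/SciPy; the log-Euclidean Gaussian kernel, the anchor set and the way hyperplanes are drawn are illustrative assumptions, not necessarily the construction used in the paper): each SPD matrix is mapped through kernel evaluations and projected onto random hyperplanes, yielding an ordinary Euclidean feature vector usable by unmodified learning algorithms.

```python
# Assumed construction for illustration: SPD matrices -> RKHS via a
# log-Euclidean Gaussian kernel -> projections onto random hyperplanes
# spanned by kernel evaluations against a small anchor set.

import numpy as np
from scipy.linalg import logm

def random_spd(d, rng):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)          # well-conditioned SPD matrix

def log_euclidean_kernel(X, Y, sigma=1.0):
    diff = logm(X) - logm(Y)
    return np.exp(-np.linalg.norm(diff, "fro") ** 2 / (2 * sigma ** 2))

def project(X, anchors, W, sigma=1.0):
    """Projection coefficients of X onto random hyperplanes W over the anchor set."""
    k = np.array([log_euclidean_kernel(X, Z, sigma) for Z in anchors])
    return W @ k                             # one coefficient per hyperplane

rng = np.random.default_rng(0)
anchors = [random_spd(5, rng) for _ in range(20)]      # anchor SPD matrices (assumed)
W = rng.standard_normal((10, len(anchors)))            # 10 random hyperplanes
x = random_spd(5, rng)
print(project(x, anchors, W).shape)                    # (10,) Euclidean feature vector
```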
Abstract:
We investigated the effects of handling and fixation processes on the two-photon fluorescence spectroscopy of endogenous fluorophores in mouse skeletal muscle. The skeletal muscle was handled in one of two ways: either sectioned without storage or sectioned following storage in a freezer. The two-photon fluorescence spectra measured for different storage or fixation periods show differences among samples that were stored in water or fixed in either formalin or methanol. The spectroscopic results indicate that formalin was the least disruptive fixative, having only a weak effect on the two-photon fluorescence spectroscopy of muscle tissue, whereas methanol had a significant influence on one of the autofluorescence peaks. The two handling processes yielded similar spectral information, indicating no difference in effect between them.
Abstract:
To better understand how freshwater ecosystems respond to changes in catchment land-use, it is important to develop measures of ecological health that include aspects of both ecosystem structure and function. This study investigated measures of nutrient processes as potential indicators of stream ecosystem health across a land-use gradient from relatively undisturbed to highly modified. A total of seven indicators (potential denitrification; an index of denitrification potential relative to sediment organic matter; benthic algal growth on artificial substrates amended with (a) N only, (b) P only, and (c) N and P; and δ15N of aquatic plants and benthic sediment) were measured at 53 streams in southeast Queensland, Australia. The indicators were evaluated on their response to a defined gradient of agricultural land-use disturbance as well as on practical aspects of using the indicators as part of a monitoring program. Regression models based on descriptors of the disturbance gradient explained a large proportion of the variation in six of the seven indicators. The denitrification index, algal growth on the N-amended substrate, and δ15N of aquatic plants showed the best regression fits. However, the δ15N value of benthic sediment was found to be the best indicator overall for incorporation into a monitoring program, as samples were relatively easy to collect and process, and were successfully collected at more than 90% of the study sites.
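For illustration only, a minimal regression sketch of the kind described above (Python/NumPy; the descriptors, coefficients and data are synthetic and hypothetical, not the study's measurements): an ordinary least-squares model relates one indicator to descriptors of the disturbance gradient and reports the proportion of variation explained.

```python
# Minimal sketch with hypothetical variables: OLS fit of one nutrient-process
# indicator against descriptors of a land-use disturbance gradient.

import numpy as np

rng = np.random.default_rng(1)
n_sites = 53                                       # sites, as in the study
disturbance = rng.random((n_sites, 2))             # e.g. % agriculture, riparian loss (assumed)
true_beta = np.array([2.0, -1.0])
indicator = disturbance @ true_beta + 0.5 + 0.1 * rng.standard_normal(n_sites)

X = np.column_stack([np.ones(n_sites), disturbance])        # intercept + descriptors
beta, res, *_ = np.linalg.lstsq(X, indicator, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((indicator - pred) ** 2) / np.sum((indicator - indicator.mean()) ** 2)
print(beta, round(r2, 3))        # fitted coefficients and variance explained
```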
Abstract:
Introduction: The importance of in vitro biomechanical testing in today's understanding of spinal pathology and treatment modalities cannot be stressed enough. Different studies have used differing levels of dissection of their spinal segments for their testing protocols [1, 2]. The aim of this study was to assess the impact of sequentially removing the costovertebral joints and partially resecting the spinous process on the stiffness of the immature thoracic bovine spinal segment.

Materials and Methods: Thoracic spines from 6-8 week old calves were used. Each spine was dissected and divided into motion segments with 5 cm of attached rib on each side and full spinous processes, including levels T4-T11 (n=28). They were potted in polymethylmethacrylate. An Instron Biaxial materials testing machine with a custom-made jig was used for testing. The segments were tested in flexion/extension, lateral bending and axial rotation at 37°C and 100% humidity, using moment control to a maximum of 1.75 Nm at a loading rate of 0.3 Nm per second. They were first tested intact for ten load cycles, with data collected from the tenth cycle. Progressive dissection was performed by removing first the attached ribs, followed by the spinous process at its base. Biomechanical testing was carried out after each level of dissection using the same protocol. Statistical analysis of the data was performed using repeated measures ANOVA.

Results: In combined flexion/extension there was a significant reduction in stiffness of 16% (p=0.002). This occurred mainly after resection of the ribs (14%, p=0.024) and mainly in flexion, where stiffness reduced by 22% (p=0.021). In extension, stiffness dropped by 13% (p=0.133). However, there was no further significant change in stiffness on resection of the spinous process (<1%, p=1.00). In lateral bending there was a significant decrease in stiffness of 13% (p<0.001). This comprised a drop of 11% on resection of the ribs (p=0.009) and a further 8% on resection of the spinous process (p=0.014). There was no difference between left and right bending. In axial rotation there was no significant change in stiffness after each stage of dissection (p=0.253). There was no difference between left and right rotation.

Conclusion: The costovertebral joints play a significant role in providing stability to the bovine thoracic spine in both flexion/extension and lateral bending, whereas the spinous processes play a minor role. Both elements have little effect on axial rotation stability.
Abstract:
The occurrence of extreme water level events along low-lying, highly populated and/or developed coastlines can lead to devastating impacts on coastal infrastructure. It is therefore very important that the probabilities of extreme water levels are accurately evaluated to inform flood and coastal management and future planning. The aim of this study was to provide estimates of present day extreme total water level exceedance probabilities around the whole coastline of Australia, arising from combinations of mean sea level, astronomical tide and storm surges generated by both extra-tropical and tropical storms, but exclusive of surface gravity waves. The study has been undertaken in two main stages.

In the first stage, a high-resolution (~10 km along the coast) depth-averaged hydrodynamic model has been configured for the whole coastline of Australia using the Danish Hydraulics Institute's Mike21 modelling suite of tools. The model has been forced with astronomical tidal levels, derived from the TPX07.2 global tidal model, and meteorological fields, from the US National Center for Environmental Prediction's global reanalysis, to generate a 61-year (1949 to 2009) hindcast of water levels. This model output has been validated against measurements from 30 tide gauge sites around Australia with long records. At each of the model grid points located around the coast, time series of annual maxima and the several highest water levels for each year were derived from the multi-decadal water level hindcast and fitted to extreme value distributions to estimate exceedance probabilities. Stage 1 provided a reliable estimate of the present day total water level exceedance probabilities around southern Australia, which is mainly impacted by extra-tropical storms. However, as the meteorological fields used to force the hydrodynamic model only weakly include the effects of tropical cyclones, the resultant water level exceedance probabilities were underestimated around western, northern and north-eastern Australia at higher return periods. Even if the resolution of the meteorological forcing were adequate to represent tropical cyclone-induced surges, multi-decadal periods yield insufficient instances of tropical cyclones to enable the use of traditional extreme value extrapolation techniques.

Therefore, in the second stage of the study, a statistical model of tropical cyclone tracks and central pressures was developed using historic observations. This model was then used to generate synthetic events representing 10,000 years of cyclone activity for the Australia region, with characteristics based on the observed tropical cyclones over the last ~40 years. Wind and pressure fields, derived from these synthetic events using analytical profile models, were used to drive the hydrodynamic model to predict the associated storm surge response. A random time period within the tropical cyclone season was chosen, and astronomical tidal forcing for this period was included to account for non-linear interactions between the tidal and surge components. For each model grid point around the coast, annual maximum total levels for these synthetic events were calculated and used to estimate exceedance probabilities. The exceedance probabilities from stages 1 and 2 were then combined to provide a single estimate of present day extreme water level probabilities around the whole coastline of Australia.
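As a small illustration of the stage 1 extreme value analysis (Python/SciPy; the GEV family, parameter values and synthetic annual maxima are assumptions for illustration, not the study's data): fit a distribution to annual maximum water levels at one grid point and convert it to return levels and exceedance probabilities.

```python
# Illustrative sketch with synthetic data: fit a GEV distribution to 61
# annual maxima and derive return levels / exceedance probabilities.

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=1.2, scale=0.15,
                               size=61, random_state=rng)    # 61-year hindcast (synthetic)

c, loc, scale = genextreme.fit(annual_maxima)                # fitted GEV parameters

# Water level with a 1% annual exceedance probability (100-year return level)
level_100yr = genextreme.ppf(1 - 0.01, c, loc=loc, scale=scale)
# Annual exceedance probability of a given level, e.g. 1.8 m
p_exceed = genextreme.sf(1.8, c, loc=loc, scale=scale)
print(round(level_100yr, 2), round(p_exceed, 4))
```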
Abstract:
Spreading cell fronts are essential features of development, repair and disease processes. Many mathematical models used to describe the motion of cell fronts, such as Fisher's equation, invoke a mean-field assumption which implies that there is no spatial structure, such as cell clustering, present. Here, we examine the presence of spatial structure using a combination of in vitro circular barrier assays, discrete random walk simulations and pair correlation functions. In particular, we analyse discrete simulation data using pair correlation functions to show that spatial structure can form in a spreading population of cells either through sufficiently strong cell-to-cell adhesion or sufficiently rapid cell proliferation. We analyse images from a circular barrier assay describing the spreading of a population of MM127 melanoma cells using the same pair correlation functions. Our results indicate that the spreading melanoma cell populations remain very close to spatially uniform, suggesting that the strength of cell-to-cell adhesion and the rate of cell proliferation are both sufficiently small so as not to induce any spatial patterning in the spreading populations.
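A minimal sketch of a pair correlation calculation of the kind used here (Python/NumPy; the binning, normalisation and periodic domain are assumptions for illustration): for a spatially uniform population the estimated pair correlation function stays close to one, which is the signature reported for the spreading melanoma populations.

```python
# Assumed estimator for illustration: pair correlation function of cell
# positions on a periodic square domain, normalised against complete
# spatial randomness (values near 1 mean no clustering).

import numpy as np

def pair_correlation(points, L, bin_width=1.0, r_max=20.0):
    n = len(points)
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for i in range(n):
        d = np.abs(points - points[i])
        d = np.minimum(d, L - d)                  # periodic boundary distances
        r = np.hypot(d[:, 0], d[:, 1])
        r = r[r > 0]                              # exclude the point itself
        counts += np.histogram(r, bins=edges)[0]
    density = n / L ** 2
    shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    expected = n * density * shell_area           # expected counts under randomness
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts / expected

rng = np.random.default_rng(0)
L = 100.0
cells = rng.random((500, 2)) * L                  # uniformly placed "cells" (synthetic)
r, g = pair_correlation(cells, L)
print(np.round(g[:5], 2))                         # close to 1 for a uniform population
```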
Abstract:
Tailoring the density of random single-walled carbon nanotube (SWCNT) networks is of paramount importance for various applications, yet it remains a major challenge due to insufficient catalyst activation in most growth processes. Here we report a simple and effective method to maximise the number of active catalyst nanoparticles using catalytic chemical vapor deposition (CCVD). By modulating short pulses of acetylene into a methane-based CCVD growth process, the density of SWCNTs is dramatically increased by up to three orders of magnitude without increasing the catalyst density or degrading the nanotube quality. In the framework of a vapor-liquid-solid model, we attribute the enhanced growth to the high dissociation rate of acetylene at high temperatures at the nucleation stage, which can be effective both in supersaturating the larger catalyst nanoparticles and in overcoming the nanotube nucleation energy barrier of the smaller catalyst nanoparticles. These results are highly relevant to numerous applications of random SWCNT networks in next-generation energy, sensing and biomedical devices.
Abstract:
The quick detection of an abrupt unknown change in the conditional distribution of a dependent stochastic process has numerous applications. In this paper, we pose a minimax robust quickest change detection problem for cases where there is uncertainty about the post-change conditional distribution. Our minimax robust formulation is based on the popular Lorden criterion of optimal quickest change detection. Under a condition on the set of possible post-change distributions, we show that the widely known cumulative sum (CUSUM) rule is asymptotically minimax robust under our Lorden minimax robust formulation as the false alarm constraint becomes more strict. We also establish general asymptotic bounds on the detection delay of misspecified CUSUM rules (i.e. CUSUM rules that are designed with post-change distributions that differ from those of the observed sequence). We exploit these bounds to compare the delay performance of asymptotically minimax robust, asymptotically optimal, and other misspecified CUSUM rules. In simulation examples, we illustrate that asymptotically minimax robust CUSUM rules can provide better detection delay performance at greatly reduced computational effort compared to competing generalised likelihood ratio procedures.
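For context, a small sketch of a CUSUM rule (Python/SciPy; the Gaussian pre- and post-change model and the threshold are illustrative assumptions, not the paper's setting): the statistic accumulates log-likelihood ratios and stops at the first threshold crossing, and designing it with a post-change distribution that differs from the true one gives the misspecified CUSUM rules analysed above.

```python
# Assumed Gaussian mean-shift model for illustration: a CUSUM rule that
# accumulates log-likelihood ratios and declares a change when the
# statistic exceeds a threshold h (set by the false-alarm constraint).

import numpy as np
from scipy.stats import norm

def cusum_stopping_time(x, mu0, mu1_design, sigma, h):
    """Return the first index at which the CUSUM statistic exceeds h (or None)."""
    s = 0.0
    for t, xt in enumerate(x):
        llr = norm.logpdf(xt, mu1_design, sigma) - norm.logpdf(xt, mu0, sigma)
        s = max(0.0, s + llr)              # reflected random walk of LLRs
        if s >= h:
            return t
    return None

rng = np.random.default_rng(0)
change_point = 200
x = np.concatenate([rng.normal(0.0, 1.0, change_point),      # pre-change
                    rng.normal(1.0, 1.0, 300)])              # true post-change mean 1.0

print(cusum_stopping_time(x, mu0=0.0, mu1_design=1.0, sigma=1.0, h=8.0))   # matched design
print(cusum_stopping_time(x, mu0=0.0, mu1_design=0.5, sigma=1.0, h=8.0))   # misspecified design
```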