815 results for "find it fast"


Relevance: 30.00%

Publisher:

Abstract:

Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently in order to adjust to changes in workload, season, guidelines, or regulations, for example. Early detection of business process changes based on their event logs – also known as business process drift detection – enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the window, the method strikes a trade-off between classification accuracy and drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales well enough to be applicable to online drift detection.
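To make the core idea concrete, here is a minimal sketch (not the paper's actual algorithm) of a statistical test over run distributions in two consecutive windows, using a chi-square test of homogeneity; the window contents, the fixed window size, and the significance level are illustrative assumptions, and the paper's adaptive window sizing is omitted.

```python
# A minimal sketch of drift detection over run distributions, assuming a
# log window is a list of "runs" (hashable trace abstractions).
from collections import Counter
from scipy.stats import chi2_contingency

def drift_detected(reference_window, detection_window, alpha=0.05):
    """Test whether two adjacent windows draw runs from the same distribution."""
    categories = sorted(set(reference_window) | set(detection_window))
    ref_counts = Counter(reference_window)
    det_counts = Counter(detection_window)
    table = [
        [ref_counts[c] for c in categories],
        [det_counts[c] for c in categories],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha  # reject H0 => run distributions differ => drift

# Example: runs 'a'/'b' dominate before the drift, 'c' after it.
before = ['a'] * 40 + ['b'] * 10
after = ['a'] * 10 + ['c'] * 40
print(drift_detected(before, after))  # True
```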

Relevance: 30.00%

Publisher:

Abstract:

This article examines variations in performance between fast-growth – the so-called gazelle – firms. Specifically, we investigate how the level of growth affects future profitability and how this relationship is moderated by firm strategy. Hypotheses are developed regarding the moderated growth–profitability relationship and are tested using longitudinal data from a sample of 964 Danish gazelle firms. We find a positive relationship between growth and profitability among gazelle firms. This relationship is moderated, however, by market strategy; it is stronger for firms pursuing a broad market strategy than for firms pursuing a niche strategy. This study contributes to the current literature by providing a more nuanced view of the growth–profitability relationship and by investigating the future performance potential of gazelle firms.
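As an illustration of how such a moderation hypothesis is commonly tested, here is a hedged sketch of a regression with a growth × strategy interaction term; the variable names and simulated data are hypothetical, not the paper's specification.

```python
# A sketch of a moderated regression: the interaction coefficient captures
# how a broad market strategy strengthens the growth-profitability link.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 964  # sample size matching the abstract; the data here are simulated
df = pd.DataFrame({
    'growth': rng.normal(size=n),
    'broad_strategy': rng.integers(0, 2, size=n),  # 1 = broad market strategy
})
# Simulate a stronger growth effect under a broad strategy.
df['profitability'] = (0.2 * df['growth']
                       + 0.3 * df['growth'] * df['broad_strategy']
                       + rng.normal(scale=0.5, size=n))

model = smf.ols('profitability ~ growth * broad_strategy', data=df).fit()
print(model.params)  # growth:broad_strategy is the moderation term
```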

Relevance: 30.00%

Publisher:

Abstract:

This paper asks: to what scale, and at what speed, does society need to reduce its ecological footprint and improve resource productivity to prevent further overshoot and return within the limits of the earth's ecological life support systems? The paper shows that a large range of studies now find that engineering sustainable solutions requires roughly an order-of-magnitude improvement in resource productivity (sometimes called Factor 10, or a 90% reduction) by 2050 to achieve real and lasting ecological sustainability. This marks a significant challenge for engineers – indeed for all designers and architects – as best practice in engineering sustainable solutions will need to meet large resource productivity targets. The paper brings together examples of best practice in achieving these large targets from around the world, and highlights key resources and texts for engineers who wish to learn how to do it. But engineers need to be realistic and patient: significant barriers to achieving Factor 4-10 exist, such as the fact that infrastructure and technology rollover and replacement are often slow. This slow rollover of the built environment and technology is the context within which most engineers work, making the goal of achieving Factor 10 all the more challenging. However, the paper demonstrates that by using best practice in engineering sustainable solutions, and by addressing the relevant market, information and institutional failures, it is possible to achieve Factor 10 over the next 50 years. This paper draws on recent publications by The Natural Edge Project (TNEP) and partners, including Hargroves, K. and Smith, M. (eds) (2005) The Natural Advantage of Nations: Business Opportunities, Innovation and Governance for the 21st Century, and the TNEP Engineering Sustainable Solutions Program - Critical Literacies for Engineers Portfolio. Both projects have the significant support of Engineers Australia, its College of Environmental Engineers and the Society of Sustainability and Environmental Engineering.
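For reference, the "Factor F" language in the abstract maps onto percentage reductions as follows:

```latex
% A Factor-F improvement means delivering the same service with 1/F the
% resources, so the implied reduction is 1 - 1/F:
\[
  \text{resource use} \;\to\; \frac{1}{F}\,\text{resource use},
  \qquad F = 10 \;\Rightarrow\; 1 - \tfrac{1}{10} = 90\%\ \text{reduction}.
\]
```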

Relevance: 30.00%

Publisher:

Abstract:

Background: Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities.

Results: In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach of high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that, as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited to are those with larger populations that would be too slow to simulate using Gillespie's stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments.

Conclusions: The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as such methods must typically be run many thousands of times.
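The Euler τ-leap baseline mentioned above is easy to state: each reaction channel fires a Poisson-distributed number of times per step. Below is a minimal sketch for a toy birth-death system; the reaction system, rates, and step size are illustrative, and the Stochastic Bulirsch-Stoer method itself is more involved and not reproduced here.

```python
# Euler tau-leap for a birth-death process: X -> X+1 at rate b,
# X -> X-1 at rate d*X.
import numpy as np

rng = np.random.default_rng(42)

def euler_tau_leap(x0, b, d, tau, t_end):
    """Fire each reaction channel a Poisson number of times per step."""
    x, t, path = x0, 0.0, [(0.0, x0)]
    while t < t_end:
        births = rng.poisson(b * tau)        # propensity of X -> X+1
        deaths = rng.poisson(d * x * tau)    # propensity of X -> X-1
        x = max(x + births - deaths, 0)      # keep the population non-negative
        t += tau
        path.append((t, x))
    return path

trajectory = euler_tau_leap(x0=100, b=10.0, d=0.1, tau=0.1, t_end=50.0)
print(trajectory[-1])  # population fluctuates near the steady state b/d = 100
```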

Relevance: 30.00%

Publisher:

Abstract:

Since we still know very little about stem cells in their natural environment, it is useful to explore their dynamics through modelling and simulation as well as experimentally. Most models of stem cell systems are based on deterministic differential equations that ignore the natural heterogeneity of stem cell populations. This is not appropriate at the level of individual cells and niches, where randomness is more likely to affect dynamics. In this paper, we introduce a fast stochastic method for simulating, over time, a metapopulation of stem cell niche lineages – that is, many sub-populations that together form a heterogeneous metapopulation. By selecting the common limiting timestep, our method ensures that the entire metapopulation is simulated synchronously. This is important, as it allows us to introduce interactions between separate niche lineages, which would otherwise be impossible. We extend our method to enable the coupling of many lineages into niche groups, where differentiated cells are pooled within each niche group. Using this method, we explore the dynamics of the haematopoietic system from a demand control system perspective. We find that coupling together niche lineages allows the organism to regulate blood cell numbers as closely as possible to the homeostatic optimum. Furthermore, coupled lineages respond better than uncoupled ones to random perturbations, here the loss of some myeloid cells. This could imply that it is advantageous for an organism to connect its niche lineages into groups. Our results suggest that a potentially fruitful empirical direction will be to understand how stem cell descendants communicate with the niche, and how cancer may arise as a result of a failure of such communication.
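A toy sketch of the synchronisation idea follows, assuming a simple stochastic birth-death model per lineage and a crude leap condition (both placeholders, not the paper's model): each step, one common limiting timestep is taken as the smallest candidate step over all lineages, so the whole metapopulation advances in lockstep and lineages could, in principle, interact.

```python
# Synchronous simulation of several lineages with one shared timestep.
import numpy as np

rng = np.random.default_rng(7)

def candidate_tau(x, birth, death, eps=0.03):
    """Crude leap condition: limit the expected relative change per step."""
    total_rate = birth * x + death * x + 1e-9
    return max(eps * x, 1.0) / total_rate

def step_lineage(x, birth, death, tau):
    births = rng.poisson(birth * x * tau)
    deaths = rng.poisson(death * x * tau)
    return max(x + births - deaths, 0)

def simulate_metapopulation(x0, birth, death, t_end):
    xs, t = list(x0), 0.0
    while t < t_end:
        # Common limiting timestep: every lineage takes the same tau.
        tau = min(candidate_tau(x, birth, death) for x in xs)
        xs = [step_lineage(x, birth, death, tau) for x in xs]
        t += tau
    return xs

print(simulate_metapopulation([50, 80, 120], birth=1.0, death=1.0, t_end=5.0))
```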

Relevance: 30.00%

Publisher:

Abstract:

We find in complementary experiments and event-driven simulations of sheared inelastic hard spheres that the velocity autocorrelation function ψ(t) decays much faster than the t^(-3/2) decay obtained for a fluid of elastic spheres at equilibrium. Particle displacements are measured in experiments inside a gravity-driven flow sheared by a rough wall. The average packing fraction obtained in the experiments is 0.59, and the packing fraction in the simulations is varied between 0.5 and 0.59. The motion is observed to be diffusive over long times except in experiments where there is layering of particles parallel to boundaries, and diffusion is inhibited between layers. Regardless, a rapid decay of ψ(t) is observed, indicating that this is a feature of the sheared dissipative fluid, and is independent of the details of the relative particle arrangements. An important implication of our study is that the non-analytic contribution to the shear stress may not be present in a sheared inelastic fluid, leading to a wider range of applicability of kinetic theory approaches to dense granular matter.
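For readers who want to reproduce the measured quantity, here is a minimal sketch of estimating ψ(t) = ⟨v(0)·v(t)⟩ from particle velocity time series; the array layout and the toy data are assumptions, not the paper's analysis pipeline.

```python
# Velocity autocorrelation function, averaged over particles and time
# origins; velocities has shape (n_frames, n_particles, 3).
import numpy as np

def vacf(velocities, max_lag):
    """Return psi(t) for lags 0..max_lag-1, normalised so psi(0) = 1."""
    n_frames = velocities.shape[0]
    psi = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.sum(velocities[: n_frames - lag] * velocities[lag:], axis=-1)
        psi[lag] = dots.mean()
    return psi / psi[0]

rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 50, 3))  # uncorrelated toy data: psi drops at once
print(vacf(v, max_lag=5))
```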

Relevance: 30.00%

Publisher:

Abstract:

Many wireless applications demand a fast mechanism to detect the packet from the node with the highest priority ("best node") only, while packets from nodes with lower priority are irrelevant. In this paper, we introduce an extremely fast contention-based multiple access algorithm that selects the best node and requires only local information about the nodes' priorities. The algorithm, which we call Variable Power Multiple Access Selection (VP-MAS), uses the local channel state information from the accessing nodes to the receiver, and maps the priorities onto the receive power. It is based on a key result showing that mapping onto a set of discrete receive power levels is optimal, when the power levels are chosen to exploit the packet capture that inherently occurs in a wireless physical layer. The VP-MAS algorithm adjusts the expected number of users that contend in each step and their respective transmission powers, depending on whether previous transmission attempts resulted in capture, idle channel, or collision. We also show how reliable information regarding the total received power at the receiver can be used to improve the algorithm by enhancing the feedback mechanism. The algorithm detects the packet from the best node in 1.5 to 2.1 slots, which is considerably lower than the 2.43-slot average achieved by the best algorithm known to date.
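A heavily hedged toy sketch of two ingredients described above — quantising priorities onto discrete receive power levels and classifying a slot as idle, capture, or collision — follows. All numeric values and the simple capture rule are illustrative assumptions; the paper's optimised level choice and per-outcome adaptation are not reproduced.

```python
# Toy model: priority -> discrete power level; a slot is "captured" when
# the strongest packet sufficiently dominates the interference.
import numpy as np

rng = np.random.default_rng(1)

POWER_LEVELS = np.array([1.0, 4.0, 16.0])  # discrete receive powers (assumed)
CAPTURE_RATIO = 3.0                         # SIR needed for capture (assumed)

def to_power_level(priority):
    """Quantise a priority in [0, 1) onto one of the discrete levels."""
    return POWER_LEVELS[int(priority * len(POWER_LEVELS))]

def slot_outcome(transmit_powers):
    """Return 'idle', 'capture', or 'collision' for one contention slot."""
    if len(transmit_powers) == 0:
        return 'idle'
    powers = np.sort(transmit_powers)[::-1]
    interference = powers[1:].sum()
    if interference == 0 or powers[0] / interference >= CAPTURE_RATIO:
        return 'capture'
    return 'collision'

priorities = rng.uniform(size=8)
# Only nodes above a contention threshold transmit; adapting this threshold
# to capture/idle/collision feedback is where the real algorithm operates.
contenders = [to_power_level(p) for p in priorities if p >= 0.75]
print(slot_outcome(np.array(contenders)))
```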

Relevance: 30.00%

Publisher:

Abstract:

The feasibility of different modern analytical techniques for the mass spectrometric detection of anabolic androgenic steroids (AAS) in human urine was examined in order to improve on prevailing analytical practice and to find reasonable strategies for effective sports drug testing. A comparative study of the sensitivity and specificity of gas chromatography (GC) combined with low-resolution (LRMS) and high-resolution mass spectrometry (HRMS) in the screening of AAS was carried out with four metabolites of methandienone. Measurements were done in selected ion monitoring mode, with HRMS using a mass resolution of 5000. With HRMS the detection limits were considerably lower than with LRMS, enabling detection of steroids at levels as low as 0.2-0.5 ng/ml. However, even with HRMS, the biological background hampered the detection of some steroids.

The applicability of liquid-phase microextraction (LPME) was studied with metabolites of fluoxymesterone, 4-chlorodehydromethyltestosterone, stanozolol and danazol. Factors affecting the extraction process were studied, and a novel LPME method with in-fiber silylation was developed and validated for GC/MS analysis of the danazol metabolite. The method allowed precise, selective and sensitive analysis of the metabolite and enabled simultaneous filtration, extraction, enrichment and derivatization of the analyte from urine without any further sample preparation steps.

Liquid chromatographic/tandem mass spectrometric (LC/MS/MS) methods utilizing electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were developed and applied to the detection of oxandrolone and metabolites of stanozolol and 4-chlorodehydromethyltestosterone in urine. All methods exhibited high sensitivity and specificity. ESI showed the best applicability, however, and an LC/ESI-MS/MS method for routine screening of nine 17-alkyl-substituted AAS was therefore developed, enabling fast and precise measurement of all analytes with detection limits below 2 ng/ml.

The potential of chemometrics to resolve complex GC/MS data was demonstrated with samples prepared for AAS screening. Acquired full-scan spectral data (m/z 40-700) were processed by the OSCAR algorithm (Optimization by Stepwise Constraints of Alternating Regression). The deconvolution process was able to extract from a GC/MS run more than twice as many components as there were visible chromatographic peaks. Severely overlapping components, as well as components hidden in the chromatographic background, could be isolated successfully.

All of the studied techniques proved to be useful analytical tools for improving the detection of AAS in urine. The superiority of a given procedure is, however, compound-dependent, and the different techniques complement each other.

Relevance: 30.00%

Publisher:

Abstract:

A fast iterative scheme based on the Newton method is described for finding the reciprocal of a finite-segment p-adic number (Hensel code). The rate of generation of reciprocal digits per step can be made quadratic or of higher order by a proper choice of the starting value and the iterating function. The extension of this method to finding the inverse transform of the Hensel code of a rational polynomial over a finite field is also indicated.
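The underlying iteration is the classical Newton reciprocal update x ← x(2 − ax), which doubles the number of correct digits per step; here is a minimal modular-arithmetic sketch in the Hensel-code spirit (the function name and interface are illustrative).

```python
# Newton's method for a modular reciprocal: each iterate doubles the
# number of correct p-adic digits (quadratic convergence).
def hensel_reciprocal(a, p, n):
    """Return x with a*x = 1 (mod p**n); requires gcd(a, p) == 1."""
    x = pow(a, -1, p)      # starting value: inverse modulo p (Python 3.8+)
    k = 1
    while k < n:
        k = min(2 * k, n)  # working precision doubles each iteration
        x = x * (2 - a * x) % p**k
    return x

print(hensel_reciprocal(3, 5, 4))  # 417, since 3*417 = 1251 = 2*625 + 1
```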

Relevance: 30.00%

Publisher:

Abstract:

There’s a polyester mullet skirt gracing a derrière near you. It’s short at the front, long at the back, and it’s also known as the hi-lo skirt. Like fads that preceded it, the mullet skirt has a short fashion life, and although it will remain potentially wearable for years, it’s likely to soon be heading to the charity shop or to landfill...

Relevance: 30.00%

Publisher:

Abstract:

Erroneous genotypes that pass standard quality control (QC) can have a severe impact on genome-wide association studies, genotype imputation, and the estimation of heritability and prediction of genetic risk based on single nucleotide polymorphisms (SNPs). To detect such genotyping errors, a simple two-locus QC method, based on the difference in the test statistic of association between single SNPs and pairs of SNPs, was developed and applied. The proposed approach could detect many problematic SNPs with statistical significance, even in real data where standard single-SNP QC analyses fail to detect them. Depending on the data set used, the number of erroneous SNPs that were not filtered out by standard single-SNP QC but were detected by the proposed approach varied from a few hundred to thousands. Using simulated data, it was shown that the proposed method is powerful and performs better than the other existing methods tested. The power of the proposed approach to detect erroneous genotypes was approximately 80% for a 3% error rate per SNP. This novel QC approach is easy to implement and computationally efficient, and can lead to better-quality genotypes for subsequent genotype-phenotype investigations.
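A rough sketch of the stated idea follows: compare a SNP's single-locus association statistic with the joint statistic of the SNP paired with a partner SNP. The genotype coding, choice of chi-square statistic, and flagging rule are assumptions; the paper's actual test construction and significance calibration are not reproduced here.

```python
# Two-locus QC score: the jump in association statistic when a SNP is
# tested jointly with a partner, for case-control data with 0/1/2 genotypes.
import numpy as np
from scipy.stats import chi2_contingency

def assoc_stat(labels, categories):
    """Chi-square statistic of a phenotype x category contingency table."""
    cat_index = {c: i for i, c in enumerate(sorted(set(categories)))}
    table = np.zeros((2, len(cat_index)))
    for y, c in zip(labels, categories):
        table[y, cat_index[c]] += 1
    stat, _, _, _ = chi2_contingency(table, correction=False)
    return stat

def two_locus_qc_score(labels, snp, partner_snp):
    single = assoc_stat(labels, snp)
    paired = assoc_stat(labels, list(zip(snp, partner_snp)))
    return paired - single  # large values flag a possibly erroneous SNP

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=500)   # case-control labels
g1 = rng.integers(0, 3, size=500)  # genotype at the SNP under test
g2 = rng.integers(0, 3, size=500)  # genotype at the partner SNP
print(two_locus_qc_score(y, list(g1), list(g2)))
```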

Relevance: 30.00%

Publisher:

Abstract:

New algorithms for the continuous wavelet transform are developed that are easy to apply, consist of a single-pass finite impulse response (FIR) filter, and run several times faster than the fastest existing algorithms. The single-pass filter, named WT-FIR-1, is made possible by applying constraint equations to least-squares estimation of the filter coefficients, which removes the need for separate low-pass and high-pass filters. Non-dyadic two-scale relations are developed, and it is shown that filters based on them can work more efficiently than dyadic ones. Example applications to the Mexican hat wavelet are presented.
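As background to why a single-pass FIR filter suffices, recall that the CWT at a fixed scale is a convolution of the signal with the sampled, scaled wavelet. Here is a minimal sketch using the Mexican hat; the tap design is plain sampling, not the paper's constrained least-squares WT-FIR-1 coefficients.

```python
# CWT at one scale as a single FIR filtering pass (direct convolution).
import numpy as np

def mexican_hat(t):
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt_fixed_scale(signal, scale, width=8.0, dt=1.0):
    """CWT of a 1-D signal at one scale via direct FIR convolution."""
    n_taps = int(2 * width * scale / dt) + 1          # support +/- width*scale
    t = (np.arange(n_taps) - n_taps // 2) * dt
    taps = mexican_hat(t / scale) / np.sqrt(scale)    # sampled, scaled wavelet
    return np.convolve(signal, taps, mode='same') * dt

x = np.sin(2 * np.pi * np.linspace(0, 4, 512))
coeffs = cwt_fixed_scale(x, scale=16.0)
print(coeffs.shape)  # (512,)
```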

Relevance: 30.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis.

Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH.

HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations.

HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; it can therefore be seen as a probabilistic model of recombinations and point mutations.

BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference.

Local alignment significance is a computational problem where one is interested in whether the local similarities of two sequences are due to the sequences being related, or merely due to chance. The similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
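For context, the classical Karlin-Altschul approximation that standard local alignment significance estimates rely on (and which the thesis's framework tightens into an upper bound) is the following baseline formula:

```latex
% Karlin--Altschul statistics: the best local alignment score S of two
% random sequences of lengths m and n is approximately Gumbel-distributed,
\[
  \Pr(S \ge s) \;\approx\; 1 - \exp\!\left(-K\,m\,n\,e^{-\lambda s}\right),
\]
% where K and \lambda are parameters determined by the scoring scheme.
```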

Relevance: 30.00%

Publisher:

Abstract:

Web data can often be represented in free tree form; however, few methods exist for mining free trees. In this paper, a computationally fast algorithm, FreeS, is presented that discovers all frequently occurring free subtrees in a database of labelled free trees. FreeS is designed using an optimal canonical form, BOCF, that can uniquely represent free trees even in the presence of isomorphism. To avoid the enumeration of false-positive candidates, it uses an enumeration approach based on a tree-structure-guided scheme. This paper presents lemmas that introduce conditions governing the generation of free tree candidates during enumeration. An empirical study using both real and synthetic datasets shows that FreeS is scalable and significantly outperforms (i.e., is a few orders of magnitude faster than) the state-of-the-art frequent free tree mining algorithms HybridTreeMiner and FreeTreeMiner.
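For intuition about canonical forms for free trees, here is a minimal sketch of the classic centroid-plus-AHU encoding, which assigns isomorphic free trees identical codes. BOCF is the paper's own, different construction, and vertex labels are omitted here for brevity.

```python
# Canonical form for an unlabeled free tree: root at the centroid(s) and
# encode rooted subtrees recursively with sorted child encodings (AHU).
def centroids(adj):
    """Return the one or two centroids of a free tree (adjacency dict)."""
    n = len(adj)
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    leaves = [v for v in adj if degree[v] <= 1]
    removed = len(leaves)
    while removed < n:  # peel leaves layer by layer
        next_leaves = []
        for leaf in leaves:
            for nbr in adj[leaf]:
                degree[nbr] -= 1
                if degree[nbr] == 1:
                    next_leaves.append(nbr)
        removed += len(next_leaves)
        leaves = next_leaves
    return leaves  # the last one or two vertices peeled off

def encode(v, parent, adj):
    """Canonical string of the subtree rooted at v."""
    children = sorted(encode(u, v, adj) for u in adj[v] if u != parent)
    return '(' + ''.join(children) + ')'

def canonical_form(adj):
    return min(encode(c, None, adj) for c in centroids(adj))

# Two drawings of the same 4-vertex path get the same code.
t1 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
t2 = {0: [2], 2: [0, 3], 3: [2, 1], 1: [3]}
print(canonical_form(t1) == canonical_form(t2))  # True
```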

Relevance: 30.00%

Publisher:

Abstract:

There is an increasing uptake of cloud computing services (CCS). CCS are adopted in the form of a utility, and they incorporate the business risks of the service providers and intermediaries. The adoption of CCS will therefore change the risk profile of an organization. In this situation, organizations need to develop competencies by reconsidering their IT governance structures in order to achieve a desired level of IT-business alignment and maintain their risk appetite to source business value from CCS. We use resource-based theories to suggest that collaborative board oversight of CCS, competencies relating to CCS information and financial management, and a CCS-related continuous audit program can contribute to business process performance improvements and overall firm performance. Using survey data, we find evidence of a positive association between these IT governance considerations and business process performance. We also find evidence of a positive association between business process performance improvements and overall firm performance. The results suggest that the proposed considerations on IT governance structures can contribute to CCS-related IT-business alignment and lead to the anticipated business value from CCS. This study provides guidance to organizations on the competencies required to secure business value from CCS.