961 results for Forensics computer science
Abstract:
This paper reports on an experiment conducted to determine the extent to which group dynamics affects the effectiveness of software development teams. The experiment was conducted on software engineering project students at the Queensland University of Technology (QUT).
Abstract:
Software as a Service (SaaS) has recently been gaining more and more attention from software users and providers. This has raised many new challenges for SaaS providers in delivering better SaaSes that suit everyone's needs at minimum cost. One emerging approach to tackling this challenge is to deliver the SaaS as a composite SaaS. Delivering it in such a way has a number of benefits, including flexible offering of the SaaS functions and decreased subscription costs for users. However, this approach also introduces new problems for SaaS resource management in a Cloud data centre. We present the problem of composite SaaS resource management in a Cloud data centre, specifically its initial placement and resource optimization problems, aiming to improve the SaaS performance based on its execution time as well as to minimize resource usage. Our approach differs from the existing literature in that it addresses the problems arising from composite SaaS characteristics, focusing on the SaaS requirements, constraints and interdependencies. The problems are tackled using evolutionary algorithms. Experimental results demonstrate the efficiency and scalability of the proposed algorithms.
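The abstract above does not detail the evolutionary algorithms used. Purely as an illustration of how such a placement search can be set up, the sketch below assigns SaaS components to servers with a simple genetic algorithm whose penalised cost combines an estimated execution time with an overload penalty; the cost model, parameters and names are all hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: a minimal genetic algorithm that assigns SaaS
# components to servers. The cost model (execution time plus an overload
# penalty) and all parameters are hypothetical, not taken from the paper.
import random

N_COMPONENTS = 12
N_SERVERS = 4
CPU_DEMAND = [random.randint(1, 4) for _ in range(N_COMPONENTS)]     # per component
CPU_CAPACITY = [10] * N_SERVERS                                      # per server
EXEC_TIME = [random.uniform(0.5, 2.0) for _ in range(N_COMPONENTS)]  # base time

def cost(placement):
    """Lower is better: total execution time plus a penalty for overloaded servers."""
    load = [0] * N_SERVERS
    for comp, srv in enumerate(placement):
        load[srv] += CPU_DEMAND[comp]
    # crude model: components on a busy server run proportionally slower
    time = sum(EXEC_TIME[c] * (1 + load[placement[c]] / CPU_CAPACITY[placement[c]])
               for c in range(N_COMPONENTS))
    overload = sum(max(0, load[s] - CPU_CAPACITY[s]) for s in range(N_SERVERS))
    return time + 100.0 * overload

def crossover(a, b):
    cut = random.randrange(1, N_COMPONENTS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randrange(N_SERVERS) if random.random() < rate else g for g in p]

population = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)]
              for _ in range(50)]
for _ in range(200):
    population.sort(key=cost)
    parents = population[:10]                                 # truncation selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(40)]

best = min(population, key=cost)
print("best placement:", best, "cost:", round(cost(best), 2))
```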
Abstract:
Several track-before-detection approaches for image-based aircraft detection have recently been examined in an important automated aircraft collision detection application. A particularly popular approach is a two-stage processing paradigm that involves a morphological spatial filter stage (which aims to emphasize the visual characteristics of targets) followed by a temporal or track filter stage (which aims to emphasize the temporal characteristics of targets). In this paper, we propose new spot detection techniques for this two-stage processing paradigm that fuse together raw and morphological images, or fuse together various different morphological images (we call these approaches morphological reinforcement). On the basis of flight test data, the proposed morphological reinforcement operations are shown to offer superior signal-to-noise characteristics when compared to standard spatial filter options (such as the close-minus-open and adaptive contour morphological operations). However, system operating characteristic curves, which examine detection versus false alarm characteristics after both processing stages, illustrate that system performance is very data dependent.
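The close-minus-open (CMO) filter named above is a standard morphological spot detector; what follows is only a guessed illustration of how responses from the raw image and from morphological filters at different scales might be fused, not the paper's actual reinforcement operators.

```python
# Sketch of a close-minus-open (CMO) spot filter plus a hypothetical fusion of
# several responses; the paper's actual reinforcement operators are not
# reproduced here.
import numpy as np
from scipy.ndimage import grey_closing, grey_opening, uniform_filter

def cmo(image, size):
    """Close-minus-open: emphasises small bright or dark spots of roughly `size`."""
    return grey_closing(image, size=size) - grey_opening(image, size=size)

def fused_response(image):
    raw_highpass = np.abs(image - uniform_filter(image, size=15))  # crude background removal
    r1 = cmo(image, size=(3, 3))
    r2 = cmo(image, size=(5, 5))
    # hypothetical reinforcement: keep a pixel only where all responses agree
    return np.minimum(np.minimum(r1, r2), raw_highpass)

frame = np.random.rand(120, 160).astype(np.float32)
frame[60, 80] += 3.0                       # synthetic point target
response = fused_response(frame)
print("peak response at:", np.unravel_index(response.argmax(), response.shape))
```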
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts in the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown change detection algorithm based on a relative entropy based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is shown to have performance similar to the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
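For readers unfamiliar with it, the classical CUSUM baseline that the proposed scheme is compared against can be sketched as below, here for the simpler case of a known mean shift in i.i.d. Gaussian observations rather than the HMM setting of the paper.

```python
# Minimal CUSUM change detector for a known mean shift in Gaussian noise.
# This is the classical baseline the abstract refers to, not the paper's
# relative-entropy HMM estimator.
import numpy as np

def cusum(observations, mu0, mu1, sigma, threshold):
    """Return the index of the first sample at which the CUSUM statistic crosses `threshold`."""
    # per-sample log-likelihood ratio of N(mu1, sigma^2) against N(mu0, sigma^2)
    llr = (mu1 - mu0) / sigma**2 * (observations - (mu0 + mu1) / 2)
    s = 0.0
    for k, step in enumerate(llr):
        s = max(0.0, s + step)             # CUSUM recursion
        if s > threshold:
            return k
    return None

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 300),     # pre-change samples
                       rng.normal(1.0, 1.0, 300)])    # change occurs at index 300
print("alarm raised at sample:", cusum(data, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0))
```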
Abstract:
There are many use cases in business process management that require the comparison of behavioral models. For instance, verifying equivalence is the basis for assessing whether a technical workflow correctly implements a business process, or whether a process realization conforms to a reference process. This paper proposes an equivalence relation for models that describe behaviors based on the concurrency semantics of net theory and for which an alignment relation has been defined. This equivalence, called isotactics, preserves the level of concurrency of aligned operations. Furthermore, we elaborate on the conditions under which an alignment relation can be classified as an abstraction. Finally, we show that alignment relations induced by structural refinements of behavioral models are indeed behavioral abstractions.
Abstract:
Recently, Software as a Service (SaaS) in Cloud computing has become more and more significant among software users and providers. To offer a SaaS with flexible functions at low cost, SaaS providers have focused on decomposing the SaaS functionalities, also known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of the challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of a Cloud data centre, resources that have been initially allocated to SaaS components may become overloaded or wasted. As such, reconfiguration of the components' placement is triggered to maintain the performance of the composite SaaS. However, existing approaches often ignore the communication or dependencies between SaaS components in their implementation. In a composite SaaS, it is important to include these elements, as they directly affect the performance of the SaaS. This paper proposes a Grouping Genetic Algorithm (GGA) for multiple composite SaaS application component clustering in Cloud computing to address this gap. To the best of our knowledge, this is the first attempt to handle the placement reconfiguration of multiple composite SaaS in a dynamic Cloud environment. The experimental results demonstrate the feasibility and scalability of the GGA.
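As a rough, hypothetical sketch of the grouping idea (components that communicate heavily are kept in the same group, e.g. the same VM), the toy GA below uses a grouping representation and a group-preserving mutation; the communication matrix, capacity limit and operators are invented for illustration and are not the paper's GGA.

```python
# Rough sketch of a grouping-style GA that clusters communicating components
# into groups (e.g. one group per VM). The communication matrix, capacities
# and operators are illustrative; this is not the paper's GGA.
import random

N = 10                                                          # components
COMM = [[random.random() for _ in range(N)] for _ in range(N)]  # pairwise traffic
MAX_GROUP_SIZE = 4

def cost(groups):
    """Traffic crossing group boundaries plus a penalty for oversized groups."""
    label = {}
    for g, members in enumerate(groups):
        for m in members:
            label[m] = g
    cross = sum(COMM[i][j] for i in range(N) for j in range(i + 1, N)
                if label[i] != label[j])
    penalty = sum(max(0, len(members) - MAX_GROUP_SIZE) for members in groups)
    return cross + 10.0 * penalty

def random_solution():
    groups = [[] for _ in range(3)]
    for c in range(N):
        random.choice(groups).append(c)
    return [g for g in groups if g]

def mutate(groups):
    """Group-preserving mutation: move one component to another (or a new) group."""
    groups = [list(g) for g in groups]
    src = random.choice([g for g in groups if g])
    comp = src.pop(random.randrange(len(src)))
    dst = random.choice(groups + [[]])
    dst.append(comp)
    if dst not in groups:
        groups.append(dst)
    return [g for g in groups if g]

population = [random_solution() for _ in range(30)]
for _ in range(300):
    population.sort(key=cost)
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(20)]
print("best grouping:", min(population, key=cost))
```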
Abstract:
Network RTK (Real-Time Kinematic) is a technology based on GPS (Global Positioning System), or more generally GNSS (Global Navigation Satellite System), observations to achieve centimeter-level positioning accuracy in real time. It is enabled by a network of Continuously Operating Reference Stations (CORS). CORS placement is an important problem in the design of network RTK, as it directly affects not only the installation and running costs of the network RTK but also the Quality of Service (QoS) it provides. In our preliminary research on CORS placement, we proposed a polynomial heuristic algorithm for a so-called location-based CORS placement problem. From a computational point of view, location-based CORS placement is a large-scale combinatorial optimization problem. Thus, although the heuristic algorithm is efficient in computation time, it may not be able to find an optimal or near-optimal solution. Aiming to improve the quality of solutions, this paper proposes a repairing genetic algorithm (RGA) for the location-based CORS placement problem. The RGA has been implemented and compared to the heuristic algorithm through experiments. Experimental results show that the RGA produces better-quality solutions than the heuristic algorithm.
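The "repairing" idea can be illustrated with a toy example: after crossover or mutation produces an infeasible placement, a repair step greedily adds stations until a feasibility constraint is met. The coverage-radius model below is invented for the example and is not the paper's formulation of location-based CORS placement.

```python
# Hypothetical illustration of the "repair" step in a repairing GA: an
# infeasible station selection (some users not covered) is greedily repaired.
# The coverage-radius model and all parameters are invented for the example.
import random

random.seed(1)
SITES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
USERS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
RADIUS = 40.0

def covers(site, user):
    return (site[0] - user[0]) ** 2 + (site[1] - user[1]) ** 2 <= RADIUS ** 2

def repair(selection):
    """Greedily add stations until every reachable user is covered."""
    selection = set(selection)
    uncovered = [u for u in USERS
                 if not any(covers(SITES[s], u) for s in selection)]
    while uncovered:
        candidates = [s for s in range(len(SITES)) if s not in selection]
        if not candidates:
            break                              # nothing left to add
        best = max(candidates,
                   key=lambda s: sum(covers(SITES[s], u) for u in uncovered))
        if not any(covers(SITES[best], u) for u in uncovered):
            break                              # remaining users cannot be covered
        selection.add(best)
        uncovered = [u for u in uncovered if not covers(SITES[best], u)]
    return selection

candidate = {s for s in range(len(SITES)) if random.random() < 0.2}   # possibly infeasible
print("selected sites before repair:", sorted(candidate))
print("selected sites after repair: ", sorted(repair(candidate)))
```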
Abstract:
Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary when choosing an appropriate approach to dealing with the dynamic evolution of mission-critical or non-stop business systems. The rationale is that the impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. To predict this impact in advance, the challenge is two-fold. First, a unified benchmark is necessary to expose QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance on the appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them to concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with updated impact analysis and guidance.
Abstract:
Client puzzles are cryptographic problems that are neither easy nor hard to solve. Most puzzles are based on either number-theoretic or hash inversion problems. Hash-based puzzles are very efficient but have so far been shown secure only in the random oracle model; number-theoretic puzzles, while secure in the standard model, tend to be inefficient. In this paper, we solve the problem of constructing cryptographic puzzles that are secure in the standard model and are very efficient. We present an efficient number-theoretic puzzle that satisfies the puzzle security definition of Chen et al. (ASIACRYPT 2009). To prove the security of our puzzle, we introduce a new variant of the interval discrete logarithm assumption which may be of independent interest, and show this new problem to be hard under reasonable assumptions. Our experimental results show that, for a 512-bit modulus, the solution verification time of our proposed puzzle can be up to 50x and 89x faster than the Karame-Capkun puzzle and Rivest et al.'s time-lock puzzle, respectively. In particular, the solution verification time of our puzzle is only 1.4x slower than that of Chen et al.'s efficient hash-based puzzle.
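The Rivest et al. time-lock puzzle used above as a comparison baseline is easy to sketch: the generator, who knows the factorisation of n, computes 2^(2^t) mod n cheaply by reducing the exponent modulo phi(n), while the solver must perform t sequential squarings. The toy parameters below are far too small for real use, and the paper's own interval-discrete-log puzzle is not reproduced here.

```python
# Toy sketch of the Rivest-Shamir-Wagner time-lock puzzle mentioned as a
# comparison baseline in the abstract (not the paper's own puzzle).
# Parameters are far too small for any real security.
from sympy import randprime

def generate_puzzle(bits=64, t=100_000):
    p = randprime(2**(bits - 1), 2**bits)
    q = randprime(2**(bits - 1), 2**bits)
    while q == p:
        q = randprime(2**(bits - 1), 2**bits)
    n, phi = p * q, (p - 1) * (q - 1)
    # generator shortcut: knowing phi(n), reduce the exponent 2**t modulo phi(n)
    answer = pow(2, pow(2, t, phi), n)
    return n, t, answer

def solve_puzzle(n, t):
    # the solver has no factorisation of n, so must do t sequential squarings
    x = 2
    for _ in range(t):
        x = (x * x) % n
    return x

n, t, answer = generate_puzzle()
assert solve_puzzle(n, t) == answer
print("puzzle solved:", hex(answer))
```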
Abstract:
Hybrid system representations have been exploited in a number of challenging modelling situations, including situations where the original nonlinear dynamics are too complex (or too imprecisely known) to be directly filtered. Unfortunately, the question of how best to design suitable hybrid system models has not yet been fully addressed, particularly in situations involving model uncertainty. This paper proposes a novel joint state-measurement relative entropy rate based approach for the design of hybrid system filters in the presence of (parameterised) model uncertainty. We also present a design approach suitable for suboptimal hybrid system filters. The benefits of our proposed approaches are illustrated through design examples and simulation studies.
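For reference, the relative entropy rate between two stationary processes with laws P and Q is usually defined as the limit below; the paper's joint state-measurement variant builds on this standard notion.

\[
\mathcal{R}(P \,\|\, Q) = \lim_{k \to \infty} \frac{1}{k}\, D\big(P_{X_{1:k}} \,\big\|\, Q_{X_{1:k}}\big),
\qquad
D(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx .
\]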
In the pursuit of effective affective computing: the relationship between features and registration
Abstract:
For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models and subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
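To make the "appearance descriptor versus raw pixels under misalignment" comparison concrete, a toy version of the experiment might look like the sketch below; skimage's HOG stands in for the descriptors studied in the paper, and the random "face crop" and two-pixel shift are placeholders rather than data from the cited sets.

```python
# Toy comparison of raw pixels against a HOG appearance descriptor under a
# small alignment error; the face data and the descriptors of the paper are
# not reproduced, and skimage's HOG is only a stand-in.
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
face = rng.random((64, 64))                  # placeholder for an aligned face crop
shifted = np.roll(face, shift=2, axis=1)     # simulate a 2-pixel misalignment

def correlation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

raw_corr = correlation(face, shifted)
hog_corr = correlation(hog(face, pixels_per_cell=(8, 8), cells_per_block=(2, 2)),
                       hog(shifted, pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
print(f"raw-pixel correlation under misalignment: {raw_corr:.3f}")
print(f"HOG correlation under misalignment:       {hog_corr:.3f}")
```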
Abstract:
It is a big challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, two issues arise when using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
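A rough sketch of the three-way term split described above might look like the following; the document collections, frequency scoring and threshold are invented for illustration and do not reproduce the paper's pattern-mining approach.

```python
# Illustrative three-way split of extracted terms into positive specific,
# general, and negative specific categories, based on how often each term
# occurs in positive versus selected negative (offender) documents.
# The documents, scores and threshold are invented for the example.
from collections import Counter

positive_docs = [["cloud", "placement", "genetic", "algorithm"],
                 ["cloud", "resource", "placement"],
                 ["genetic", "algorithm", "cloud"]]
negative_docs = [["cloud", "weather", "rain"],
                 ["weather", "forecast", "cloud"]]

def doc_frequency(docs):
    counts = Counter(t for doc in docs for t in set(doc))
    return {t: c / len(docs) for t, c in counts.items()}

pos_f, neg_f = doc_frequency(positive_docs), doc_frequency(negative_docs)

def categorise(term, margin=0.3):
    p, n = pos_f.get(term, 0.0), neg_f.get(term, 0.0)
    if p - n > margin:
        return "positive specific"
    if n - p > margin:
        return "negative specific"
    return "general"

for term in sorted(set(pos_f) | set(neg_f)):
    print(f"{term:10s} -> {categorise(term)}")
```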
Abstract:
Data quality has become a major concern for organisations. The rapid growth in the size and technology of databases and data warehouses has brought significant advantages in accessing, storing, and retrieving information. At the same time, rapid data throughput and heterogeneous access pose great challenges for maintaining high data quality. Yet, despite the importance of data quality, the literature has usually reduced data quality to detecting and correcting poor data such as outliers and incomplete or inaccurate values. As a result, organisations are unable to assess data quality efficiently and effectively. Having an accurate and proper data quality assessment method will enable users to benchmark their systems and monitor their improvement. This paper introduces a granule mining approach for measuring the degree of randomness in erroneous data, which enables decision makers to conduct accurate quality assessments and locate the most severely affected data, thereby providing an accurate estimate of the human and financial resources required for quality improvement tasks.
Abstract:
The potential of multiple distribution static synchronous compensators (DSTATCOMs) to improve the voltage profile of radial distribution networks has been reported in the literature by a few authors. However, the operation of multiple DSTATCOMs across a distribution feeder may introduce control interactions and/or voltage instability. This study proposes a control scheme that alleviates interactions among controllers and ensures proper reactive power sharing among DSTATCOMs. A generalised mathematical model is presented to analyse the interactions among any number of DSTATCOMs in the network. The criterion for controller design is developed by conducting eigenvalue analysis on this mathematical model. The proposed control scheme is tested in the time domain on a sample radial distribution feeder installed with multiple DSTATCOMs, and test results are presented.
Abstract:
Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given the current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these "local best matches". This approach removes the need for global matching performance from the vision front end; instead, it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets, we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
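A stripped-down version of the sequence-matching idea can be sketched as follows: frames are downsampled and normalised, a sum-of-absolute-differences matrix is built between the two traverses, and a match is scored over a whole sequence (a constant-velocity diagonal) rather than a single frame. SeqSLAM's contrast enhancement, velocity search and other details are omitted, and the synthetic "day" and "night" frames below are placeholders.

```python
# Minimal sketch of the sequence-matching idea behind SeqSLAM: build a
# difference matrix between two traverses from downsampled, normalised frames,
# then score candidate matches over a whole sequence instead of a single frame.
# Contrast enhancement, the velocity search and all other details are omitted.
import numpy as np

def preprocess(frame, size=(8, 16)):
    """Downsample by block averaging, then zero-mean / unit-variance normalise."""
    h = (frame.shape[0] // size[0]) * size[0]
    w = (frame.shape[1] // size[1]) * size[1]
    small = frame[:h, :w].reshape(size[0], h // size[0],
                                  size[1], w // size[1]).mean(axis=(1, 3))
    return (small - small.mean()) / (small.std() + 1e-6)

def difference_matrix(ref_frames, query_frames):
    ref = np.stack([preprocess(f).ravel() for f in ref_frames])
    qry = np.stack([preprocess(f).ravel() for f in query_frames])
    return np.abs(qry[:, None, :] - ref[None, :, :]).sum(axis=2)   # sum of absolute differences

def best_sequence_match(D, seq_len=10):
    """Find the reference start index whose constant-velocity diagonal of
    differences against the last `seq_len` query frames is smallest."""
    n_query, n_ref = D.shape
    q0 = n_query - seq_len
    scores = [D[q0:q0 + seq_len, r:r + seq_len].diagonal().sum()
              for r in range(n_ref - seq_len + 1)]
    return int(np.argmin(scores)), float(min(scores))

rng = np.random.default_rng(0)
route = [rng.random((64, 128)) for _ in range(60)]                 # "day" traverse
revisit = [f * 0.4 + 0.1 * rng.random((64, 128)) for f in route]   # darker, noisy revisit
D = difference_matrix(route, revisit)
start, score = best_sequence_match(D)
print(f"query sequence matched reference frames starting at index {start} (score {score:.1f})")
```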