982 results for 104-642


Relevance:

10.00%

Publisher:

Abstract:

This full-day workshop explores how insights from artefacts created during data collection and analysis are translated into prototypes. It is particularly concerned with getting closer to people's experience of shaping a design space. The workshop draws inspiration from data products resulting from interactions in natural, unbuilt places, with the intention of supporting both those whose work involves integrating understandings of such experiences into design and those interested in the way such material provokes ideas and inspiration for design.

Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen increased uptake of business process management technology in industry. As a result, organizations are trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding and adapting those models may be more effective than developing them from scratch. As process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. Experiments are conducted to demonstrate that our proposal achieves a significant reduction in query evaluation time.
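
The abstract does not describe the index structure itself; as a minimal illustrative sketch only (assuming, hypothetically, that each model is summarised by its set of task labels), an inverted index from labels to model identifiers already captures the idea of touching only candidate models rather than scanning the whole repository.

# Minimal sketch, not the paper's index structure: an inverted index mapping
# task labels to the process models that contain them, so a query only
# inspects candidate models instead of the whole repository.
from collections import defaultdict

def build_index(models):
    """models: dict mapping model_id -> set of task labels (assumed representation)."""
    index = defaultdict(set)
    for model_id, labels in models.items():
        for label in labels:
            index[label].add(model_id)
    return index

def query(index, required_labels):
    """Return ids of models containing every required label."""
    candidate_sets = [index.get(label, set()) for label in required_labels]
    return set.intersection(*candidate_sets) if candidate_sets else set()

repo = {
    "m1": {"receive order", "check credit", "ship goods"},
    "m2": {"receive order", "reject order"},
    "m3": {"check credit", "approve loan"},
}
idx = build_index(repo)
print(query(idx, {"receive order", "check credit"}))  # {'m1'}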

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a model of complex-system design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources over the life cycle of buildings, the active involvement of occupants in micro-climate control within buildings, and the natural environmental context. The interactions of these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to capture holistic and ongoing environmental performance. The complexity theory of dissipative structures provides a microscopic formulation of open-system evolution, which offers a system design framework for evolving building environmental performance towards an optimum of sustainability in architecture.

Relevance:

10.00%

Publisher:

Abstract:

Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called ‘season’. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education and have previously collaborated in research on the methodological developments and applications of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models," third edition, Chapman & Hall/CRC, 2008. In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
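
The book's implementations live in the R package 'season'; purely as a loose illustration of one standard technique for estimating the size and timing of a seasonal peak (a cosinor fit, sketched here in plain Python rather than the package's own code, with made-up monthly counts):

# Illustrative cosinor sketch, not the 'season' package: fit a sinusoid to
# monthly counts and read off the amplitude (size of the seasonal swing) and
# the month of the peak. The counts below are invented example data.
import numpy as np

months = np.arange(1, 13)
counts = np.array([90, 85, 80, 70, 60, 55, 58, 65, 72, 80, 88, 92], dtype=float)

# Cosinor model: counts ~ b0 + b1*cos(2*pi*month/12) + b2*sin(2*pi*month/12)
theta = 2 * np.pi * months / 12
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b0, b1, b2 = np.linalg.lstsq(X, counts, rcond=None)[0]

amplitude = np.hypot(b1, b2)              # size of the seasonal effect
phase = np.arctan2(b2, b1)                # radians; converted to month of peak below
peak_month = (phase / (2 * np.pi)) * 12 % 12
print(f"amplitude = {amplitude:.1f}, peak near month {peak_month:.1f}")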

Relevance:

10.00%

Publisher:

Abstract:

ERP systems generally implement controls to prevent certain common kinds of fraud. However, there remains an imperative need to detect more sophisticated patterns of fraudulent activity, as evidenced by the legal requirement for company audits and the common incidence of fraud. This paper describes the design and implementation of a framework for detecting patterns of fraudulent activity in ERP systems. We include descriptions of six fraud scenarios and the process of specifying and detecting occurrences of those scenarios in ERP user log data using the prototype software we have developed. The test results for detecting these scenarios in log data have been verified and confirm the success of our approach, which generalizes to other ERP systems.
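
The six scenarios are not spelled out in the abstract; the sketch below illustrates the general idea with one commonly cited pattern (the same user both creating and approving a purchase order, a segregation-of-duties violation). The log fields and the scenario are assumptions for illustration, not the paper's specification.

# Hypothetical example of scanning simplified ERP user-log records for one
# classic fraud pattern: a single user both creating and approving the same
# purchase order. Field names and records are invented for illustration.
from collections import defaultdict

log = [
    {"user": "u17", "action": "create_po",  "doc": "PO-1001"},
    {"user": "u22", "action": "approve_po", "doc": "PO-1001"},
    {"user": "u31", "action": "create_po",  "doc": "PO-1002"},
    {"user": "u31", "action": "approve_po", "doc": "PO-1002"},  # suspicious
]

def segregation_violations(entries):
    actions_by_doc = defaultdict(dict)
    for e in entries:
        actions_by_doc[e["doc"]].setdefault(e["action"], set()).add(e["user"])
    return [
        doc for doc, acts in actions_by_doc.items()
        if acts.get("create_po", set()) & acts.get("approve_po", set())
    ]

print(segregation_violations(log))  # ['PO-1002']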

Relevance:

10.00%

Publisher:

Abstract:

CFD has been successfully used in the optimisation of aerodynamic surfaces for a given set of parameters such as Mach number and angle of attack. In a multidisciplinary design optimisation, however, one deals with situations where the parameters have some uncertainty attached to them. An optimisation carried out for fixed values of the input parameters may give a design that is totally unacceptable under off-design conditions. The challenge is to develop a robust design procedure that takes into account fluctuations in the input parameters. In this work we attempt this using a modified Taguchi approach, incorporated into an evolutionary algorithm with many features developed in-house. The method is tested on a UCAV design which simultaneously handles aerodynamics, electromagnetics and maneuverability. Results demonstrate that the method has considerable potential.
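
The abstract does not detail the modified Taguchi formulation; as a generic sketch of robust evaluation only, a candidate design can be scored over sampled off-design conditions with a penalty on the spread of its performance, so the optimiser favours designs that stay good when conditions fluctuate. The surrogate objective and noise ranges below are invented placeholders, not the authors' problem setup.

# Generic robust-evaluation sketch (not the authors' modified Taguchi method):
# sample the uncertain operating conditions, evaluate the design at each
# sample, and combine mean performance with a spread penalty.
import random

def drag(design, mach, alpha):
    # Placeholder surrogate standing in for an expensive CFD evaluation.
    return (design - 0.5) ** 2 + 0.1 * (mach - 0.8) ** 2 + 0.05 * alpha ** 2

def robust_fitness(design, n_samples=50, k=1.0):
    samples = []
    for _ in range(n_samples):
        mach = random.gauss(0.8, 0.02)    # nominal Mach 0.8 with scatter
        alpha = random.gauss(2.0, 0.5)    # nominal angle of attack 2 deg
        samples.append(drag(design, mach, alpha))
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean + k * var ** 0.5          # lower is better: mean plus spread penalty

print(robust_fitness(0.5), robust_fitness(0.9))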

Relevance:

10.00%

Publisher:

Abstract:

Background: The objective of routine outpatient assessment of well-functioning patients after primary total hip arthroplasty (THA) is to detect asymptomatic failure of prostheses and guide recommendations for early intervention. We have observed that revision of THAs in asymptomatic patients is highly uncommon. We therefore question the need for routine follow-up of patients after THA. Methods: A prospective analysis of an orthopaedic database identified 158 patients who received 177 revision THAs over a 4-year period. A retrospective chart review was conducted. Patient demographics, primary and revision surgery parameters and follow-up information were recorded and cross-referenced with AOA NJRR data. Results: 110 THAs in 104 patients (average age 70.4 (SD 9.8) years) were analysed. There were 70 (63.6%) total, 13 (11.8%) femoral and 27 (24.5%) acetabular revisions. The indications for revision were aseptic loosening (70%), dislocation (8.2%), peri-prosthetic fracture (7.3%), osteolysis (6.4%) and infection (4.5%). Only 4 (3.6%) were asymptomatic revisions. A mean of 5.3 (SD 5.2) and 1.9 (SD 5.3) follow-up appointments were required before revision in patients with and without symptoms, respectively. The average time from primary to revision surgery was 11.8 (SD 7.23) years. Conclusions: We conclude that, for patients with prostheses with excellent long-term clinical results as validated by Joint Registries, routine follow-up of asymptomatic THA should be questioned and requires further investigation. Based on this study, the current practice of routine follow-up of asymptomatic THA may be excessively costly and unnecessary, and a less resource-intensive review method may be more appropriate.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the approach taken to the clustering task at INEX 2009 by a group at the Queensland University of Technology. The Random Indexing (RI) K-tree has been used with a representation that is based on the semantic markup available in the INEX 2009 Wikipedia collection. The RI K-tree is a scalable approach to clustering large document collections. This approach has produced quality clustering when evaluated using two different methodologies.
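
Random Indexing itself is compact enough to sketch: each term is assigned a sparse random index vector, and a document's representation is the sum of the vectors of its terms, so every document lives in a fixed low dimension regardless of vocabulary size. The sketch below is a generic illustration with arbitrary parameters, not the exact INEX 2009 configuration or representation.

# Generic Random Indexing sketch (not the INEX 2009 setup): each term gets a
# sparse ternary random vector; a document vector is the sum of its terms'
# vectors, giving fixed-dimensional representations suitable for K-tree
# clustering at scale.
import numpy as np

rng = np.random.default_rng(0)
DIM, NONZERO = 500, 10            # index-vector dimension, number of +/-1 entries

def index_vector():
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

term_vectors = {}

def document_vector(tokens):
    doc = np.zeros(DIM)
    for tok in tokens:
        if tok not in term_vectors:
            term_vectors[tok] = index_vector()
        doc += term_vectors[tok]
    return doc

d1 = document_vector("random indexing maps documents into a fixed dimension".split())
d2 = document_vector("k tree clustering of document vectors".split())
print(np.dot(d1, d2))             # similarity between the two document vectors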

Relevance:

10.00%

Publisher:

Abstract:

A common scenario in many pairing-based cryptographic protocols is that one argument in the pairing is fixed, as a long-term secret key or a constant parameter in the system. In these situations, the runtime of Miller's algorithm can be significantly reduced by storing precomputed values that depend on the fixed argument, prior to the input or existence of the second argument. In light of recent developments in pairing computation, we show that the computation of the Miller loop can be sped up by up to 37% if precomputation is employed, with our method being up to 19.5% faster than previous precomputation techniques.

Relevance:

10.00%

Publisher:

Abstract:

The advantages of a spherical imaging model are increasingly well recognized within the robotics community. Perhaps less well known is the use of the sphere for attitude estimation, control and scene structure estimation. This paper proposes the sphere as a unifying concept, not just for cameras, but for sensor fusion, estimation and control. We review and summarize relevant work in these areas and illustrate this with relevant simulation examples for spherical visual servoing and scene structure estimation.
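
The spherical imaging model underlying this view is simple to state: a scene point expressed in the camera frame is projected onto the unit sphere by normalising it, which is the common ground between wide-angle cameras and other direction-only measurements. A minimal sketch of just that projection (generic, not the paper's full servoing or estimation framework):

# Minimal spherical camera model sketch: project a 3D point (camera frame)
# onto the unit view sphere by normalising it. Generic illustration only.
import numpy as np

def spherical_projection(point_cam):
    """Return the unit-sphere direction of a 3D point in the camera frame."""
    p = np.asarray(point_cam, dtype=float)
    r = np.linalg.norm(p)
    if r == 0:
        raise ValueError("a point at the camera centre has no direction")
    return p / r

print(spherical_projection([1.0, 2.0, 2.0]))   # [0.333..., 0.666..., 0.666...]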

Relevance:

10.00%

Publisher:

Abstract:

This report explains the objectives, datasets and evaluation criteria of both the clustering and classification tasks set in the INEX 2009 XML Mining track. The report also describes the approaches and results obtained by the different participants.

Relevance:

10.00%

Publisher:

Abstract:

The most costly operations encountered in pairing computations are those that take place in the full extension field Fpk. At high levels of security, the complexity of operations in Fpk dominates the complexity of the operations that occur in the lower degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller’s algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller’s algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller’s algorithm that involves new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speed ups in most practical situations of between 5 and 30 percent.
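
A rough order-of-magnitude note on why the extension field dominates (a standard textbook estimate, not a figure from the paper): with schoolbook arithmetic, one multiplication in the degree-k extension costs about k^2 base-field multiplications, so at a common embedding degree such as k = 12 a single full extension field multiplication is worth roughly 144 base-field multiplications, while a multiplication in a subfield of degree k/6 = 2 costs only about 4. In LaTeX notation:

% Schoolbook estimate, constants and reductions ignored:
\mathrm{M}_{\mathbb{F}_{p^k}} \approx k^{2}\,\mathrm{M}_{\mathbb{F}_p},
\qquad k = 12:\quad
\mathrm{M}_{\mathbb{F}_{p^{12}}} \approx 144\,\mathrm{M}_{\mathbb{F}_p},
\quad
\mathrm{M}_{\mathbb{F}_{p^{2}}} \approx 4\,\mathrm{M}_{\mathbb{F}_p}.

Saving even one full extension field operation per Miller iteration therefore outweighs saving several subfield operations.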

Relevance:

10.00%

Publisher:

Abstract:

Research on efficient pairing implementation has focussed on reducing the loop length and on using high-degree twists. Existence of twists of degree larger than 2 is a very restrictive criterion but luckily constructions for pairing-friendly elliptic curves with such twists exist. In fact, Freeman, Scott and Teske showed in their overview paper that often the best known methods of constructing pairing-friendly elliptic curves over fields of large prime characteristic produce curves that admit twists of degree 3, 4 or 6. A few papers have presented explicit formulas for the doubling and the addition step in Miller’s algorithm, but the optimizations were all done for the Tate pairing with degree-2 twists, so the main usage of the high-degree twists remained incompatible with more efficient formulas. In this paper we present efficient formulas for curves with twists of degree 2, 3, 4 or 6. These formulas are significantly faster than their predecessors. We show how these faster formulas can be applied to Tate and ate pairing variants, thereby speeding up all practical suggestions for efficient pairing implementations over fields of large characteristic.

Relevance:

10.00%

Publisher:

Abstract:

Miller’s algorithm for computing pairings involves performing multiplications between elements that belong to different finite fields. Namely, elements in the full extension field Fpk are multiplied by elements contained in proper subfields Fpk/d, and by elements in the base field Fp. We show that significant speedups in pairing computations can be achieved by delaying these “mismatched” multiplications for an optimal number of iterations. Importantly, we show that our technique can be easily integrated into traditional pairing algorithms; implementers can exploit the computational savings herein by applying only minor changes to existing pairing code.

Relevance:

10.00%

Publisher:

Abstract:

Minimizing the complexity of group key exchange (GKE) protocols is an important milestone towards their practical deployment. An interesting approach to this goal is to simplify the design of GKE protocols by using generic building blocks. In this paper we investigate the possibility of founding GKE protocols on a primitive called a multi key encapsulation mechanism (mKEM) and describe the advantages and limitations of this approach. In particular, we show how to design a one-round GKE protocol which satisfies the classical requirement of authenticated key exchange (AKE) security, yet without forward secrecy. As a result, we obtain the first one-round GKE protocol secure in the standard model. We also conduct our analysis using recent formal models that take into account both outsider and insider attacks as well as the notion of key compromise impersonation resilience (KCIR). In contrast to previous models, we show how to model both outsider and insider KCIR within the definition of mutual authentication. Our analysis additionally implies that the insider security compiler by Katz and Shin from ACM CCS 2005 can be used to achieve more than what is shown in the original work, namely both outsider and insider KCIR.
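
The mKEM primitive itself is the easiest part to illustrate: a single encapsulation addresses a whole set of public keys and yields one session key that every listed recipient can decapsulate, which is what makes a one-round group exchange plausible. The toy sketch below uses hashed-ElGamal-style wrapping over a tiny group purely to show that interface; it is deliberately insecure, its parameters are arbitrary, and it is not the construction or protocol analysed in the paper.

# Toy multi-recipient KEM (mKEM) interface sketch: one encapsulation targets
# many public keys and yields a single shared key. Deliberately insecure toy
# parameters; illustrative only, not the paper's construction.
import hashlib
import secrets

P = 2**127 - 1   # small Mersenne prime, toy modulus only
G = 3            # toy generator

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)                       # (secret, public)

def h(value):
    return hashlib.sha256(str(value).encode()).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encapsulate(public_keys):
    """One encapsulation for a set of recipients: returns (ciphertext, shared key)."""
    k = secrets.token_bytes(32)                  # session key shared by all recipients
    r = secrets.randbelow(P - 2) + 1
    c0 = pow(G, r, P)
    wraps = {i: xor(k, h(pow(y, r, P))) for i, y in public_keys.items()}
    return (c0, wraps), k

def decapsulate(secret, index, ciphertext):
    c0, wraps = ciphertext
    return xor(wraps[index], h(pow(c0, secret, P)))

keys = {i: keygen() for i in range(3)}           # three group members
ct, k = encapsulate({i: pk for i, (sk, pk) in keys.items()})
assert all(decapsulate(sk, i, ct) == k for i, (sk, pk) in keys.items())
print("all members recovered the same session key")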