73 results for LOGICAL OPERATIONS
at Queensland University of Technology - ePrints Archive
Abstract:
This paper improves implementation techniques of elliptic curve cryptography. We introduce new formulae and algorithms for the group law on Jacobi quartic, Jacobi intersection, Edwards, and Hessian curves. The proposed formulae and algorithms can save time in suitable point representations. To support our claims, a cost comparison is made with classic scalar multiplication algorithms using previous and current operation counts. Most notably, the best speeds are obtained from Jacobi quartic curves, which provide the fastest timings for most scalar multiplication strategies, benefiting from the proposed 2M + 5S + 1D point doubling and 7M + 3S + 1D point addition algorithms. Furthermore, the new addition algorithm provides an efficient way to protect against side channel attacks based on simple power analysis (SPA).
Keywords: efficient elliptic curve arithmetic, unified addition, side channel attack.
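To put the quoted operation counts in context, the sketch below estimates the total field-operation cost of a plain double-and-add scalar multiplication using the Jacobi quartic doubling and addition costs from the abstract. The M/S/D weightings and the 256-bit scalar are illustrative assumptions, not figures from the paper:

```python
# Costs of field operations, normalised to one multiplication (M).
# S = squaring, D = multiplication by a curve constant; the 0.8 and 0.5
# weightings are common rules of thumb, not figures from the paper.
M, S, D = 1.0, 0.8, 0.5

DBL = 2 * M + 5 * S + 1 * D   # Jacobi quartic doubling cost from the abstract
ADD = 7 * M + 3 * S + 1 * D   # Jacobi quartic addition cost from the abstract

def double_and_add_cost(bits, hamming_weight):
    """Field-operation cost of left-to-right double-and-add scalar
    multiplication: one doubling per bit, one addition per set bit."""
    return bits * DBL + hamming_weight * ADD

# A random 256-bit scalar has roughly 128 set bits on average.
print("~%.0f M-equivalents" % double_and_add_cost(256, 128))
```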
Abstract:
Forensic analysis requires the acquisition and management of many different types of evidence, including individual disk drives, RAID sets, network packets, memory images, and extracted files. Often the same evidence is reviewed by several different tools or examiners in different locations. We propose a backwards-compatible redesign of the Advanced Forensic Format, an open, extensible file format for storing and sharing evidence, arbitrary case-related information, and analysis results among different tools. The new specification, termed AFF4, is designed to be simple to implement and is built upon the well-supported ZIP file format specification. Furthermore, the AFF4 implementation retains downward compatibility with existing AFF files.
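As an illustration of the ZIP-based container design described above, the sketch below stores an evidence segment and case metadata as members of an ordinary ZIP archive. The member names are assumptions for illustration; this is the container idea only, not the actual AFF4 layout or API:

```python
import zipfile

# Write an evidence container: metadata plus a segment of acquired data,
# both as members of a standard ZIP archive (member names assumed).
with zipfile.ZipFile("evidence.af4.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("information.turtle",
               "# case-related metadata would be serialised here\n")
    z.writestr("image/00000000", b"\x00" * 512)

# Any standard ZIP tool can enumerate and extract the members.
with zipfile.ZipFile("evidence.af4.zip") as z:
    print(z.namelist())
```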
Abstract:
Six Sigma provides a framework for quality improvement and business excellence. Introduced in the 1980s in manufacturing, the concept of Six Sigma has gained popularity in service organizations. After initial success in healthcare and banking, Six Sigma has gradually gained traction in other types of service industries, including hotels and lodging. Starwood Hotels and Resorts was the first hospitality giant to embrace Six Sigma. In 2001, Starwood adopted the method to develop innovative, customer-focused solutions and to transfer these solutions throughout the global organization. To analyze Starwood's use of Six Sigma, the authors collected data from articles, interviews, presentations and speeches published in magazines, newspapers and Web sites. These sources provided corroborating detail and served as the basis for inferences. Financial metrics can explain the success of Six Sigma in any organization, and the research uncovered no shortage of examples of Starwood's success as measured by Six Sigma project metrics.
Abstract:
Computer simulation has been widely accepted as an essential tool for the analysis of many engineering systems. It is nowadays perceived to be the most readily available and feasible means of evaluating operations in real railway systems. Based on practical experience and theoretical models developed in various applications, this paper describes the design of a general-purpose simulation system for train operations. Its prime objective is to provide a single comprehensive computer-aided engineering tool for most studies on railway operations, so that various aspects of railway systems with different operating characteristics can be investigated and analysed in depth. The system consists of three levels of simulation. The first is a single-train simulator which calculates the running time of a train between specific points under different track geometry and traction conditions. The second is a dual-train simulator which finds the minimum headway between two trains under different movement constraints, such as signalling systems. The third is a whole-system multi-train simulator which carries out process simulation of the real operation of a railway system according to a practical or planned train schedule or headway, and produces an overall evaluation of system performance.
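As a minimal illustration of the first simulation level, the sketch below integrates a single train's equation of motion to obtain the running time between two points. The traction and resistance models and all parameter values are simplified illustrative assumptions, not the paper's:

```python
def running_time(distance, mass=400e3, f_max=300e3, p_max=4e6,
                 v_limit=25.0, dt=0.1):
    """Return seconds for a train to cover `distance` metres from rest.

    Tractive effort is force-limited at low speed and power-limited at
    high speed; resistance follows a Davis-type quadratic. All values
    are illustrative assumptions.
    """
    t, x, v = 0.0, 0.0, 0.0
    while x < distance:
        f_tract = min(f_max, p_max / max(v, 0.1))  # tractive-effort curve
        f_res = 2000 + 30 * v + 5 * v * v          # Davis-type resistance
        a = (f_tract - f_res) / mass
        v = min(v + a * dt, v_limit)               # enforce line speed limit
        x += v * dt
        t += dt
    return t

print("running time: %.1f s" % running_time(2000.0))
```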
Abstract:
Operations management is an area concerned with the production of goods and services, ensuring that business operations are efficient in utilizing resources and effective in meeting customer requirements. It deals with the design and management of products, processes, services and supply chains, and considers the acquisition, development, and effective and efficient utilization of resources. Unlike other engineering subjects, the content of these units can be very broad. It is therefore necessary to cover the content most relevant to contemporary industries, and to understand which engineering management skills are critical for engineers working in contemporary organisations. Most operations management books cover traditional techniques. For example, inventory management is an important topic, and all OM books deal with effective methods of managing inventory; however, a newer trend in OM is just-in-time (JIT) delivery, or the minimization of inventory. It is therefore important to decide whether to emphasise keeping inventory (as most books suggest) or minimizing it. Similarly, for OM decisions such as forecasting, optimization and linear programming, most organisations nowadays use software, so it is important to determine whether some of this software should be introduced in tutorial/laboratory classes and, if so, which packages. It is established in the teaching and learning literature that there must be strong alignment between unit objectives, assessment and learning activities to engage students in learning, and that engaging students is vital for learning. However, engineering units (more specifically, operations management) are quite different from other majors: alignment between objectives, assessment and learning activities alone cannot guarantee student engagement. Unit content must be practically oriented, and the skills developed should be those demanded by industry. The present active-learning research, using a multi-method research approach, redesigned the operations management content based on the latest developments in the engineering management area and the needs of Australian industries. The redesigned unit significantly improved student engagement and learning: students engage with learning when they find that the content helps develop skills necessary in their practical lives.
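Since the abstract contrasts traditional inventory management with JIT and asks which software belongs in tutorial classes, a small worked example may help make this concrete. Below is a minimal sketch of the classic economic order quantity (EOQ) calculation, one of the traditional techniques referred to; the demand, ordering-cost and holding-cost figures are illustrative assumptions, not values from the unit:

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Classic EOQ: the order size minimising ordering plus holding cost."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative figures: units/year, $ per order, $ per unit per year.
D, S, H = 12_000, 50.0, 2.4
q = eoq(D, S, H)
print("EOQ: %.0f units, ~%.1f orders/year" % (q, D / q))
```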
Abstract:
This paper presents a method for calculating the in-bucket payload volume on a dragline for the purpose of estimating the material's bulk density in real time. Knowledge of the bulk density can provide instant feedback to mine planning and scheduling to improve blasting and, in turn, provide a more uniform bulk density across the excavation site. Furthermore, costs and emissions in dragline operation, maintenance and downstream material processing can be reduced. The main challenge is to determine an accurate position and orientation of the bucket under the constraint of real-time performance. The proposed solution uses a range, bearing and tilt sensor to locate and scan the bucket between the lift and dump stages of the dragline cycle. Various scanning strategies are investigated for their benefits in this real-time application. The bucket is segmented from the scene using cluster analysis, while the pose of the bucket is calculated using the iterative closest point (ICP) algorithm. Payload points are segmented from the bucket by a fixed-distance neighbour clustering method to preserve boundary points and exclude low-density clusters introduced by overhead chains and the spreader bar. A height grid is then used to represent the payload, from which the volume can be calculated by summing over the grid cells. We show volume calculated on a scaled system with an accuracy of greater than 95 per cent.
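A minimal sketch of the final step described above: representing the payload as a height grid and summing over the cells to obtain volume. The grid resolution and the synthetic points are illustrative assumptions, and segmentation and ICP pose estimation are assumed to have already produced payload points in bucket coordinates:

```python
import numpy as np

def payload_volume(points, cell=0.1):
    """Estimate payload volume from segmented payload points (x, y, z).

    Bins points into a 2-D height grid and sums mean cell height times
    cell footprint area, mirroring the height-grid summation in the
    abstract. The `cell` resolution is an illustrative assumption.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    grid = {}
    for i, j, h in zip(ix, iy, z):
        grid.setdefault((i, j), []).append(h)
    return sum(np.mean(hs) for hs in grid.values()) * cell * cell

# Synthetic stand-in for segmented payload points over a 1 m x 1 m base.
pts = np.random.rand(5000, 3)
print("volume ~ %.3f m^3" % payload_volume(pts))
```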
Abstract:
In the face of increasing concern over global warming and climate change, interest in the utilization of solar energy for building operations is rapidly growing. In this entry, the importance of using renewable energy in building operations is first introduced. This is followed by a general overview of the energy from the sun and the methods of utilizing solar energy. Possible applications of solar energy in building operations are then discussed, including the use of solar energy in the forms of daylighting, hot water heating, space heating and cooling, and building-integrated photovoltaics.
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance, and capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances.

The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to portray accurately the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to produce output that could be validated with data acquired at the same sites, so that the outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions so that they could be applied, where possible, to other similar systems and facilities.

It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for calibrating and validating the models over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations owing to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, as some unusual manoeuvres were considered unwarranted to model. Nevertheless, the models developed contain the principal processes of freeway operations: merging and lane changing.

Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb-lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic, and theory was established to account for this behaviour. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major-stream flow rate which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated, and critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway.

Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor-stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor- and major-stream delays across all minor- and major-stream flows, and pseudo-empirical relationships were established to predict average delays. Major-stream average delays are limited to 0.5 s, insignificant compared with minor-stream delay, which reaches infinity at capacity. Minor-stream delays were shown to be smaller when unsignalised intersections are located upstream of on-ramps than when signalised intersections are, and smaller still when ramp metering is installed; smaller delays correspond to improved merge-area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model, and merging probabilities can be predicted for given taper lengths, a most useful performance measure. The model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance, and to provide further insight into the nature of operations.
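As an illustration of the headway model named above, the sketch below samples headways from Cowan's M3 distribution and estimates the proportion of major-stream gaps exceeding a critical gap. The bunching proportion and the 3 s critical gap are illustrative assumptions; the thesis calibrates such parameters against site data:

```python
import numpy as np

rng = np.random.default_rng(0)

def m3_headways(q, delta=1.0, alpha=0.6, n=100_000):
    """Sample headways from Cowan's M3 model.

    q     : major-stream flow (veh/s)
    delta : minimum headway (s); the abstract uses 1 s
    alpha : proportion of free (unbunched) vehicles -- assumed here
    Free headways are delta plus an exponential tail with rate
    lam = alpha * q / (1 - delta * q), so the mean headway is 1/q.
    """
    lam = alpha * q / (1.0 - delta * q)
    free = rng.random(n) < alpha
    h = np.full(n, delta)
    h[free] += rng.exponential(1.0 / lam, free.sum())
    return h

# Proportion of major-stream gaps exceeding an assumed 3 s critical gap.
h = m3_headways(q=1200 / 3600.0)
print("mean headway (s):", h.mean())    # approx 3600/1200 = 3 s
print("P(headway > 3 s):", (h > 3.0).mean())
```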
Abstract:
Object identification and tracking have become critical for automated on-site construction safety assessment. The primary objective of this paper is to present the development of a testbed to analyze the impact of object identification and tracking errors caused by data collection devices and algorithms used for safety assessment. The testbed models workspaces for earthmoving operations and simulates safety-related violations, including speed limit violations, access violations to dangerous areas, and close proximity violations between heavy machinery. Three different cases were analyzed based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of device and algorithm errors were investigated for safety planning purposes.
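A minimal sketch of the kind of error-impact experiment such a testbed supports: Gaussian tracking error is injected into two machine positions and the effect on close-proximity violation detection is measured. All numeric values are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def proximity_alarm_rate(true_dist, sigma, threshold=5.0, trials=100_000):
    """Estimate how tracking error changes proximity-violation detection.

    Perturbs two tracked machine positions with Gaussian error of std
    dev `sigma` (metres) and reports how often the measured separation
    falls below the alarm threshold. All values are assumptions.
    """
    p1 = rng.normal([0.0, 0.0], sigma, (trials, 2))
    p2 = rng.normal([true_dist, 0.0], sigma, (trials, 2))
    measured = np.linalg.norm(p2 - p1, axis=1)
    return (measured < threshold).mean()

# True separation just above the 5 m threshold: error creates false alarms.
print("false-alarm rate:", proximity_alarm_rate(true_dist=6.0, sigma=1.0))
# True separation below the threshold: error causes missed violations.
print("detection rate:  ", proximity_alarm_rate(true_dist=4.0, sigma=1.0))
```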
Abstract:
Regardless of technology benefits, safety planners still face difficulties explaining errors related to the use of different technologies and evaluating how those errors affect the performance of safety decision making. This paper presents a preliminary error-impact analysis testbed to model object identification and tracking errors caused by image-based devices and algorithms, and to analyze the impact of those errors on spatial safety assessment of earthmoving and surface mining activities. More specifically, this research designed a testbed to model workspaces for earthmoving operations, to simulate safety-related violations, and to apply different object identification and tracking errors to the data collected and processed for spatial safety assessment. Three different cases were analyzed based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of the errors were investigated for safety planning purposes.
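As a complementary sketch for the speed-limit violation case mentioned in the related abstract above, the example below shows how position-tracking error alone can distort speed estimates derived from successive fixes. Again, all values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def speed_violation_rate(true_speed, sigma_pos, dt=1.0, limit=20.0, n=100_000):
    """Sketch of how position-tracking error distorts speed-limit checks.

    Speed is estimated from two noisy position fixes `dt` seconds apart;
    each fix has Gaussian error with std dev `sigma_pos` (metres). The
    20 km/h limit and other values are illustrative assumptions.
    """
    d_true = true_speed / 3.6 * dt          # km/h -> metres travelled in dt
    x0 = rng.normal(0.0, sigma_pos, n)
    x1 = rng.normal(d_true, sigma_pos, n)
    est_speed = np.abs(x1 - x0) / dt * 3.6  # back to km/h
    return (est_speed > limit).mean()

# A hauler travelling at 18 km/h can be flagged as speeding purely
# through tracking error.
print("false violation rate:", speed_violation_rate(18.0, sigma_pos=1.0))
```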