253 results for Graph DBMS, Benchmarking, OLAP, NoSQL
Abstract:
In this paper, techniques for scheduling additional train services (SATS) are considered, as is train scheduling involving general time window constraints, fixed operations, maintenance activities and periods of section unavailability. The SATS problem is important because additional services must often be given access to the railway and subsequently integrated into current timetables. The SATS problem therefore considers the competition for railway infrastructure between new services and existing services belonging to the same or different operators. The SATS problem is characterised as a hybrid job shop scheduling problem with time window constraints. To solve this problem, constructive algorithm and metaheuristic scheduling techniques that operate upon a disjunctive graph model of train operations are utilised. Numerical investigations show that the proposed framework and associated techniques are effective.
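To make the disjunctive graph model mentioned in this abstract more concrete, the following is a minimal illustrative sketch, not the authors' implementation: operations are nodes with running times, fixed route precedences are conjunctive arcs, conflicts between trains over a shared track section are disjunctive arcs, and once every disjunctive arc has been oriented the earliest start times follow from a longest-path computation. All operation names, durations and conflicts below are invented.

```python
# Minimal sketch of a disjunctive graph for train scheduling (illustrative only;
# operation names, durations and section conflicts are invented for the example).
from collections import defaultdict

class DisjunctiveGraph:
    def __init__(self):
        self.duration = {}                    # operation -> running time on its section
        self.conjunctive = defaultdict(list)  # fixed arcs: same train, consecutive sections
        self.disjunctive = []                 # unordered conflicts: two trains, same section

    def add_operation(self, op, duration):
        self.duration[op] = duration

    def add_route_arc(self, op_a, op_b):
        self.conjunctive[op_a].append(op_b)

    def add_conflict(self, op_a, op_b):
        self.disjunctive.append((op_a, op_b))

    def earliest_start_times(self, orientation):
        """Longest-path computation once every disjunctive arc has been oriented.
        `orientation` maps each conflict pair to the operation scheduled first."""
        arcs = defaultdict(list)
        for a, succs in self.conjunctive.items():
            arcs[a].extend(succs)
        for pair in self.disjunctive:
            first = orientation[pair]
            second = pair[1] if first == pair[0] else pair[0]
            arcs[first].append(second)
        start = {op: 0.0 for op in self.duration}
        for _ in range(len(self.duration)):   # Bellman-Ford style relaxation
            for a, succs in arcs.items():
                for b in succs:
                    start[b] = max(start[b], start[a] + self.duration[a])
        return start

g = DisjunctiveGraph()
for op, d in [("T1_s1", 4), ("T1_s2", 3), ("T2_s1", 5)]:
    g.add_operation(op, d)
g.add_route_arc("T1_s1", "T1_s2")            # train T1 traverses section 1 then section 2
g.add_conflict("T1_s1", "T2_s1")             # T1 and T2 both need section 1
print(g.earliest_start_times({("T1_s1", "T2_s1"): "T1_s1"}))
```

A real scheduler would additionally encode time windows, fixed operations and section unavailability as arcs or node release times; the sketch only shows the core graph mechanics.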
Abstract:
Train scheduling is a complex and time-consuming task of vital importance. To schedule trains more accurately and efficiently than permitted by current techniques, a novel hybrid job shop approach has been proposed and implemented. Unique characteristics of train scheduling are first incorporated into a disjunctive graph model of train operations. A constructive algorithm that utilises this model is then developed. The constructive algorithm is a general procedure that constructs a schedule using insertion, backtracking and dynamic route selection mechanisms. It provides a significant search capability and is valid for any objective criteria. Simulated Annealing and Local Search meta-heuristic improvement algorithms are also adapted and extended. An important feature of these approaches is a new compound perturbation operator, consisting of many unitary moves, that allows trains to be shifted feasibly and more easily within the solution. A numerical investigation and case study are provided and demonstrate that high quality solutions are obtainable on real-sized applications.
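The compound perturbation operator described in this abstract can be pictured as a chain of unitary moves applied to one train inside a simulated annealing loop. The sketch below is a speculative reconstruction of that idea only; the schedule representation (per-section operation orderings), the placeholder cost function and all parameters are assumptions, not the paper's data structures.

```python
# Speculative sketch of a compound perturbation inside simulated annealing.
import math, random

def unitary_move(order, train):
    """Swap the chosen train's operation one position earlier in a section ordering."""
    i = next((k for k, (t, _) in enumerate(order) if t == train and k > 0), None)
    if i is None:
        return False
    order[i - 1], order[i] = order[i], order[i - 1]
    return True

def compound_move(schedule, train, steps):
    """Apply several unitary moves to shift one train within the schedule."""
    for _ in range(steps):
        section = random.choice(list(schedule))
        unitary_move(schedule[section], train)

def anneal(schedule, cost, trains, temp=10.0, cooling=0.95, iters=300):
    current = cost(schedule)
    best_schedule, best = schedule, current
    for _ in range(iters):
        trial = {s: list(order) for s, order in schedule.items()}   # copy
        compound_move(trial, random.choice(trains), steps=random.randint(1, 3))
        delta = cost(trial) - current
        if delta < 0 or random.random() < math.exp(-delta / temp):  # Metropolis rule
            schedule, current = trial, current + delta
            if current < best:
                best_schedule, best = schedule, current
        temp *= cooling
    return best_schedule, best

# Invented example: two sections, two trains; the toy cost sums T1's queue
# positions, so the search prefers schedules that move T1 forward.
example = {"s1": [("T2", "op21"), ("T1", "op11")],
           "s2": [("T2", "op22"), ("T1", "op12")]}
cost = lambda sched: sum(i for order in sched.values()
                         for i, (t, _) in enumerate(order) if t == "T1")
print(anneal(example, cost, trains=["T1", "T2"]))
```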
Abstract:
Background/aim: A timely evaluation of the Australian Competency Standards for Entry-Level Occupational Therapists© (1994) was conducted. This thorough investigation comprised a literature review exploring the concept of competence and the applications of competency standards; systematic benchmarking of the Australian Occupational Therapy Competency Standards (OT AUSTRALIA, 1994) against other national and international competency standards and other affiliated documents, from occupational therapy and other cognate disciplines; and extensive nationwide consultation with the professional community. This paper explores and examines the similarities and disparities between occupational therapy competency standards documents available in English from Australia and other countries. Methods: An online search for national occupational therapy competency standards located 10 documents, including the Australian competencies. Results: Four 'frameworks' were created to categorise the documents according to their conceptual underpinnings: Technical-Prescriptive, Enabling, Educational and Meta-Cognitive. Other characteristics that appeared to impact the design, content and implementation of competency standards, including definitions of key concepts, authorship, national and cultural priorities, scope of services, intended use and review mechanisms, were revealed. Conclusion: The proposed 'frameworks' and identification of influential characteristics provided a 'lens' through which to understand and evaluate competency standards. While consistent application of and attention to some of these characteristics appear to consolidate and affirm the authority of competency standards, it is suggested that the national context should be a critical determinant of the design and content of the final document. The Australian Occupational Therapy Competency Standards (OT AUSTRALIA, 1994) are critiqued accordingly, and preliminary recommendations for revision are proposed.
Abstract:
Mobile robots are widely used in many industrial fields, and path planning is one of the most important topics in mobile robot research. Path planning for a mobile robot is the task of finding a collision-free route through the robot's environment with obstacles, from a specified start location to a desired goal destination, while satisfying certain optimization criteria. Most existing path planning methods, such as the visibility graph, cell decomposition and potential field methods, are designed with a focus on static environments, in which there are only stationary obstacles. However, in practical systems such as marine science research, robots in the mining industry and RoboCup games, robots usually face dynamic environments, in which both moving and stationary obstacles exist. Because of the complexity of dynamic environments, research on path planning among dynamic obstacles is limited; only a small number of papers have been published in this area, compared with hundreds of reports on path planning in stationary environments in the open literature. Recently, a genetic algorithm based approach was introduced to plan the optimal path for a mobile robot in a dynamic environment with moving obstacles. However, as the number of obstacles in the environment increases, and as the moving speed and direction of the robot and obstacles change, the size of the problem to be solved grows sharply, and the performance of the genetic algorithm based approach deteriorates significantly. This motivates the present research. This research develops and implements a simulated annealing algorithm based approach to find the optimal path for a mobile robot in a dynamic environment with moving obstacles. The simulated annealing algorithm is an optimization algorithm similar in principle to the genetic algorithm, but our investigation and simulations have indicated that the simulated annealing based approach is simpler and easier to implement. Its performance is also shown to be superior to that of the genetic algorithm based approach in both online and offline processing times, as well as in obtaining the optimal solution for path planning of the robot in the dynamic environment. The first step of many path planning methods is to search for an initial feasible path for the robot. A commonly used method for finding the initial path is to randomly pick some vertices of the obstacles in the search space; this is time consuming in both static and dynamic path planning and has an important impact on the efficiency of dynamic path planning. This research proposes a heuristic method to search for the feasible initial path efficiently, which is then incorporated into the proposed simulated annealing algorithm based approach for dynamic robot path planning. Simulation experiments have shown that, with the heuristic method incorporated, the developed simulated annealing algorithm based approach requires much shorter processing time to obtain the optimal solutions in the dynamic path planning problem. Furthermore, the quality of the solution, as characterised by the length of the planned path, is also improved with the incorporated heuristic method for both online and offline path planning.
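As a generic illustration of a simulated-annealing path planner of the kind this thesis describes (not the thesis implementation; the circular obstacle model, seed path and parameters are invented), a path can be held as a list of waypoints, perturbed by nudging one intermediate waypoint, rejected if it collides, and otherwise accepted by the Metropolis criterion on path length.

```python
# Generic simulated-annealing path planner sketch (illustrative assumptions:
# circular static obstacles, straight-line segments, Euclidean length cost).
import math, random

OBSTACLES = [((4.0, 4.0), 1.5), ((7.0, 2.0), 1.0)]   # (centre, radius)

def collides(p, q, samples=20):
    """Check a straight segment p->q against all circular obstacles."""
    for i in range(samples + 1):
        t = i / samples
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - cx, y - cy) < r for (cx, cy), r in OBSTACLES):
            return True
    return False

def feasible(path):
    return not any(collides(path[i], path[i + 1]) for i in range(len(path) - 1))

def length(path):
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def anneal_path(path, temp=5.0, cooling=0.98, iters=2000):
    best, best_len = list(path), length(path)
    cur, cur_len = list(path), best_len
    for _ in range(iters):
        trial = list(cur)
        i = random.randrange(1, len(trial) - 1)          # never move start/goal
        dx, dy = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
        trial[i] = (trial[i][0] + dx, trial[i][1] + dy)
        if feasible(trial):
            delta = length(trial) - cur_len
            if delta < 0 or random.random() < math.exp(-delta / temp):
                cur, cur_len = trial, length(trial)
                if cur_len < best_len:
                    best, best_len = list(cur), cur_len
        temp *= cooling
    return best, best_len

start, goal = (0.0, 0.0), (10.0, 10.0)
initial = [start, (2.0, 6.0), (6.0, 8.0), goal]          # a feasible seed path
print(anneal_path(initial))
```

In a dynamic setting the obstacle positions would be updated between planning cycles and the previous solution reused as the seed path; the sketch only shows the static inner loop.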
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, 'Evaluating Information Systems', is broad and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and those resources need to be invested in a way that provides greatest benefit to the organization. IT expenditure represents a substantial portion of any organization's investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment, and is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis - a 'management' emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on the technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits; in addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal - The Journal of Strategic Information Systems - to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic 'Information Systems Success Measurement', then drills deeper to become even more focused in Part 4 on 'Benchmarking Information Systems'. In other words, the chapter drills down from Parts 1 and 2 (the value of IS), to Part 3 (measuring IS success), to Part 4 (benchmarking IS). While the commencing Parts 1 and 2 are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts 3 and 4 admittedly reflect the author's more specific interests. Thus, the three chapter foci - the value of IS, measuring IS success, and benchmarking IS - are not mutually exclusive; rather, each subsequent focus is in most respects a subset of the former.
Parts 1 and 2, 'The Value of IS', take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, 'Information Systems Success Measurement', focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs - System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use - later suggesting a seventh construct, Service Quality (DeLone and McLean 2003). These six constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, 'Benchmarking Information Systems', drills deeper again, focusing more specifically on a measure of the IS that can be used as a 'benchmark'. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems. Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and systems contexts.
Abstract:
This is an experimental study into the permeability and compressibility properties of bagasse pulp pads. Three experimental rigs were custom-built for this project, and the experimental work is complemented by modelling work. Both the steady-state and dynamic behaviour of pulp pads are evaluated in the experimental and modelling components of this project. Bagasse, the fibrous residue that remains after sugar is extracted from sugarcane, is normally burnt in Australia to generate steam and electricity for the sugar factory. A study into bagasse pulp was motivated by the possibility of making highly value-added pulp products from bagasse for the financial benefit of sugarcane millers and growers. The bagasse pulp and paper industry is a multibillion dollar industry (1). Bagasse pulp could replace eucalypt pulp, which is more widely used in the local production of paper products. An opportunity exists for replacing the large quantity of mainly generic paper products imported to Australia, including 949,000 tonnes of generic photocopier papers (2). The use of bagasse pulp for paper manufacture is the main application area of interest for this study. Bagasse contains a large quantity of short parenchyma cells called 'pith'. Around 30% of the shortest fibres are removed from bagasse prior to pulping, yet despite the 'depithing' operations in conventional bagasse pulp mills, a large amount of pith remains in the pulp. Amongst Australian paper producers there is a perception that the high quantity of short fibres in bagasse pulp leads to poor filtration behaviour at the wet-end of a paper machine. Bagasse pulp's poor filtration behaviour reduces paper production rates, and consequently revenue, when compared with paper production using locally made eucalypt pulp. Pulp filtration can be characterised by two interacting factors: permeability and compressibility. Surprisingly, there has previously been very little rigorous investigation into either bagasse pulp permeability or compressibility; only freeness testing of bagasse pulp has been published in the open literature. As a result, this study has focussed on a detailed investigation of the filtration properties of bagasse pulp pads. As part of this investigation, three options for improving the permeability and compressibility properties of Australian bagasse pulp pads were examined. Two options for further pre-treating depithed bagasse prior to pulping were considered: firstly, bagasse was fractionated based on size, producing 'coarse' and 'medium' bagasse fractions; secondly, bagasse was collected after being processed on two types of juice extraction technology, i.e. from a sugar mill and from a sugar diffuser. Finally, one method of post-treating the bagasse pulp was investigated: the effects of chemical additives, which are known to improve freeness, were assessed for their effect on pulp pad permeability and compressibility. Pre-treated Australian bagasse pulp samples were compared with several benchmark pulp samples: a sample of commonly used kraft Eucalyptus globulus pulp, a sample of depithed Argentinean bagasse which is used for commercial paper production, and a sample of Australian bagasse depithed as per typical factory operations, produced for benchmarking purposes. The steady-state pulp pad permeability and compressibility parameters were determined experimentally using two purpose-built experimental rigs.
In reality, steady-state conditions do not exist on a paper machine: the permeability changes as the sheet compresses over time. Hence, a dynamic model was developed which uses the experimentally determined steady-state permeability and compressibility parameters as inputs. The filtration model was developed with a view to designing pulp processing equipment that is suitable specifically for bagasse pulp, and the predicted results of the dynamic model were compared with experimental data. The effectiveness of polymeric and microparticle chemical additives in improving the retention of short fibres and increasing the drainage rate of a bagasse pulp slurry was determined in a third purpose-built rig, a modified Dynamic Drainage Jar (DDJ). These chemical additives were then used in the making of a pulp pad, and their effect on the steady-state and dynamic permeability and compressibility of bagasse pulp pads was determined. The most important finding from this investigation was that Australian bagasse pulp was produced with higher permeability than eucalypt pulp, despite a higher overall content of short fibres. It is thought that this research outcome could enable Australian paper producers to switch from eucalypt pulp to bagasse pulp without sacrificing paper machine productivity. It is thought that two factors contributed to the high permeability of the bagasse pulp pad. Firstly, the thicker cell walls of the bagasse pulp fibres resulted in high fibre stiffness. Secondly, the bagasse pulp had a large proportion of fibres longer than 1.3 mm. These attributes helped to reinforce the pulp pad matrix. The steady-state permeability and compressibility parameters for the eucalypt pulp were consistent with those found by previous workers. It was also found that Australian pulp derived from the 'coarse' bagasse fraction had higher steady-state permeability than that from the 'medium' fraction; however, there was no difference between bagasse pulp originating from a diffuser and from a mill. The bagasse pre-treatment options investigated in this study were not found to affect the steady-state compressibility parameters of a pulp pad. The dynamic filtration model was found to give predictions that were in good agreement with experimental data for pads made from samples of pre-treated bagasse pulp, provided at least some pith was removed prior to pulping. Applying vacuum to a pulp slurry in the modified DDJ dramatically reduced the drainage time. At any level of vacuum, bagasse pulp benefitted from chemical additives, as quantified by reduced drainage time and increased retention of short fibres. Using the modified DDJ, it was observed that under specific conditions a benchmark depithed bagasse pulp drained more rapidly than the 'coarse' bagasse pulp. In steady-state permeability and compressibility experiments, the addition of chemical additives improved the pad permeability and compressibility of a benchmark bagasse pulp with a high quantity of short fibres; importantly, this effect was not observed for the 'coarse' bagasse pulp. However, dynamic filtration experiments showed a small observable improvement in filtration for the 'medium' bagasse pulp as well. The mechanism of bagasse pulp pad consolidation appears to be fibre realignment, with chemical additives acting to lubricate the consolidation process. This study was complemented by pulp physical and chemical property testing and a microscopy study.
In addition to its high pulp pad permeability, 'coarse' bagasse pulp often (but not always) had physical properties superior to those of a benchmark depithed bagasse pulp.
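For orientation only, the steady-state permeability and compressibility parameters referred to in this abstract are usually framed in the pulp-pad filtration literature by Darcy's law together with an empirical power-law compressibility relation; the generic forms are sketched below and are not the thesis's specific model or fitted values.

```latex
% Generic steady-state pad filtration relations (standard literature forms,
% not the thesis's model or parameters; symbols: u superficial velocity,
% k permeability, \mu viscosity, p_\ell liquid pressure, p_s compacting
% pressure, c pad solids concentration, M and N empirical constants).
\begin{align}
  u &= \frac{k}{\mu}\,\frac{\mathrm{d}p_{\ell}}{\mathrm{d}x}, \\
  c &= M\,p_{s}^{\,N}, \\
  p_{\ell} + p_{s} &= p_{\text{applied}}.
\end{align}
```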
Abstract:
Purpose – The purpose of this study is to examine and extend Noer’s theoretical model of the new employment relationship. Design/methodology/approach – Case study methodology is used to scrutinise the model. The results of a literature-based survey on the elements underpinning the five values in the model are analysed from dual perspectives of individual and organization using a multi-source assessment instrument. A schema is developed to guide and inform a series of focus group discussions from an analysis of the survey data. Using content analysis, the transcripts from the focus group discussions are evaluated using the model’s values and their elements. The transcripts are also reviewed for implicit themes. The case studied is Flight Centre Limited, an Australian-based international retail travel company. Findings – Using this approach, some elements of the five values in Noer’s model are identified as characteristic of the company’s psychological contract. Specifically, to some extent, the model’s values of flexible deployment, customer focus, performance focus, project-based work, and human spirit and work can be applied in this case. A further analysis of the transcripts validates three additional values in the psychological contract literature: commitment; learning and development; and open information. As a result of the findings, Noer’s model is extended to eight values. Research limitations/implications – The study offers a research-based model of the new employment relationship. Since generalisations from the case study findings cannot be applied directly to other settings, the opportunity to test this model in a variety of contexts is open to other researchers. Originality/value – In practice, the methodology used is a unique process for benchmarking the psychological contract. The process may be applied in other business settings. By doing so, organization development professionals have a consulting framework for comparing an organization’s dominant psychological contract with the extended model presented here.
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker is used to speed up the subject detection and super-resolution processes by tracking moving subjects and cropping a region of interest around the subject's face, which reduces the number and size of the image frames to be super-resolved. In this paper, experiments have been conducted to demonstrate how the optical flow super-resolution method used improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
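Because the abstract names an Eigenface recogniser among the benchmark systems, a compact, textbook-style PCA sketch of that general technique follows. It is not the evaluation pipeline used in the paper; the image size and random gallery are placeholders standing in for aligned face crops.

```python
# Textbook Eigenface (PCA) recognition sketch; the gallery here is random
# placeholder data standing in for aligned, same-size face crops.
import numpy as np

def train_eigenfaces(gallery, n_components=20):
    """gallery: (n_faces, h*w) matrix of flattened, aligned face images."""
    mean = gallery.mean(axis=0)
    centred = gallery - mean
    # SVD of the centred gallery gives the principal components ("eigenfaces").
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    eigenfaces = vt[:n_components]
    weights = centred @ eigenfaces.T          # each face projected into face space
    return mean, eigenfaces, weights

def identify(probe, mean, eigenfaces, weights):
    """Return the index of the gallery face whose projection is nearest the probe."""
    w = (probe - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

rng = np.random.default_rng(0)
gallery = rng.random((50, 32 * 32))           # 50 fake 32x32 "face" images
mean, eigenfaces, weights = train_eigenfaces(gallery)
probe = gallery[7] + 0.01 * rng.random(32 * 32)   # noisy copy of gallery face 7
print(identify(probe, mean, eigenfaces, weights)) # expected: 7
```

In a super-resolution evaluation of the kind described, the probe would be the enhanced (or interpolated, or low-resolution) face crop rather than a noisy gallery copy.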
Abstract:
In the title salt, C12H11N2O2+·C7H4NO5-, the cations and anions interact through asymmetric cyclic pyridinium-carboxylate N-H...O,O' hydrogen-bonding associations [graph set R12(4)], giving discrete heterodimers having weak cation-anion π-π aromatic ring interactions [minimum ring centroid separation = 3.7116 (9) Å].
Abstract:
In the structure of the title compound, the salt 2(C12H10N3O4+)·(C12H8O6S2)2-·3H2O, determined at 173 K, the biphenyl-4,4'-disulfonate dianions lie across crystallographic inversion centres with the sulfonate groups interacting head-to-head through centrosymmetric cyclic bis(water)-bridged hydrogen-bonding associations [graph set R4/4(11)], forming chain structures. The 2-(2,4-dinitrobenzyl)pyridinium cations are linked to these chains through N+-H...O(water) hydrogen bonds, and a two-dimensional network structure is formed through water bridges between sulfonate and 2-nitro O atoms, while the structure also has weak cation-anion π-π aromatic ring interactions [minimum ring centroid separation 3.8441 (13) Å].
Abstract:
The crystal structure of the 2:1 proton-transfer compound of brucine with biphenyl-4,4'-disulfonate, bis(2,3-dimethoxy-10-oxostrychnidinium) biphenyl-4,4'-disulfonate hexahydrate (1), has been determined at 173 K. Crystals are monoclinic, space group P21, with Z = 2 in a cell with a = 8.0314(2), b = 29.3062(9), c = 12.2625(3) Å, β = 101.331(2)°. The crystallographic asymmetric unit comprises two brucinium cations, a biphenyl-4,4'-disulfonate dianion and six water molecules of solvation. The brucinium cations form a variant of the common undulating and overlapping head-to-tail sheet sub-structure. The sulfonate dianions are also linked head-to-tail by hydrogen bonds into parallel zig-zag chains through clusters of six water molecules, of which five are inter-associated, featuring conjoint cyclic eight-membered hydrogen-bonded rings [graph sets R33(8) and R34(8)] comprising four of the water molecules and closed by sulfonate O acceptors. These chain structures occupy the cavities between the brucinium cation sheets and are linked to them peripherally through both brucine N+-H...O(sulfonate) and O(carbonyl)...H-O(water) to sulfonate O bridging hydrogen bonds, forming an overall three-dimensional framework structure. This structure determination confirms the importance of water in the stabilization of certain brucine compounds which have inherent crystal instability.
Abstract:
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection to combat fraud. In this paper we present a role-mining-inspired approach to representing user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour, and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph representing the relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both using SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
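The abstract describes set-based transaction profiles and a directed acyclic graph over them. One plausible, simplified reading is sketched below: profiles are sets of transaction codes, an edge points from a profile to any profile that strictly contains it, and a segregation-of-duties rule flags profiles containing a conflicting combination. This is a hypothetical reconstruction, not the paper's two algorithms; the role names are invented and the SAP transaction codes are used only for illustration.

```python
# Hypothetical sketch: build a DAG over set-based transaction profiles where an
# edge A -> B means profile A is a proper subset of profile B.
from itertools import combinations

profiles = {
    "clerk":      frozenset({"FB60"}),                  # enter vendor invoice
    "payer":      frozenset({"FB60", "F110"}),          # invoice entry + payment run
    "supervisor": frozenset({"FB60", "F110", "XK01"}),  # + create vendor master
}

def profile_dag(profiles):
    """Edges point from smaller profiles to strictly larger ones."""
    edges = set()
    for (name_a, a), (name_b, b) in combinations(profiles.items(), 2):
        if a < b:
            edges.add((name_a, name_b))
        elif b < a:
            edges.add((name_b, name_a))
    return edges

def segregation_alerts(profiles, conflicting):
    """Flag profiles that contain a complete set of conflicting duties."""
    return [name for name, txns in profiles.items() if conflicting <= txns]

print(sorted(profile_dag(profiles)))
# A simple segregation-of-duties rule: one profile should not both create a
# vendor master record and run payments.
print(segregation_alerts(profiles, frozenset({"XK01", "F110"})))
```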
Abstract:
This approach to sustainable design explores the possibility of creating an architectural design process which can iteratively produce optimised and sustainable design solutions. Driven by an evolution process based on genetic algorithms, the system allows the designer to “design the building design generator” rather than to “design the building”. The design concept is abstracted into a digital design schema, which allows transfer of the human creative vision into the rational language of a computer. The schema is then elaborated through the use of genetic algorithms to evolve innovative, performative and sustainable design solutions. The prioritisation of the project’s constraints and the subsequent design solutions synthesised during design generation are expected to resolve most of the major conflicts in the evaluation and optimisation phases. Mosques are used as the example building typology to ground the research activity. The spatial organisations of various mosque typologies are graphically represented by adjacency constraints between spaces. Each configuration is represented by a planar graph, which is then translated into a non-orthogonal dual graph and fed into the genetic algorithm system with fixed constraints and expected performance criteria set to govern evolution. The resultant Hierarchical Evolutionary Algorithmic Design System is developed by linking the evaluation process with environmental assessment tools to rank the candidate designs. The proposed system generates the concept, the seed and the schema, and has environmental performance as one of the main criteria driving optimisation.
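To make the evolutionary mechanism concrete, the sketch below shows a bare-bones genetic algorithm that places named spaces on a grid and scores candidate layouts by how many required adjacencies they satisfy. It is a generic illustration of adjacency-driven evolution, not the Hierarchical Evolutionary Algorithmic Design System itself; the spaces, adjacency list, grid size and GA parameters are invented.

```python
# Bare-bones GA sketch: evolve grid positions for named spaces so that required
# adjacencies (an abstraction of the spatial adjacency graph) are satisfied.
import random

SPACES = ["prayer_hall", "courtyard", "minaret", "ablution", "entrance"]
REQUIRED_ADJACENCIES = [("prayer_hall", "courtyard"), ("courtyard", "entrance"),
                        ("ablution", "courtyard"), ("minaret", "prayer_hall")]
GRID = 4   # 4x4 site grid

def random_layout():
    cells = random.sample(range(GRID * GRID), len(SPACES))
    return dict(zip(SPACES, cells))

def adjacent(c1, c2):
    r1, k1, r2, k2 = c1 // GRID, c1 % GRID, c2 // GRID, c2 % GRID
    return abs(r1 - r2) + abs(k1 - k2) == 1   # orthogonal neighbours on the grid

def fitness(layout):
    return sum(adjacent(layout[a], layout[b]) for a, b in REQUIRED_ADJACENCIES)

def crossover(a, b):
    child, used = {}, set()
    for s in SPACES:
        cell = random.choice([a[s], b[s]])
        while cell in used:                    # repair duplicate cell assignments
            cell = random.randrange(GRID * GRID)
        child[s], used = cell, used | {cell}
    return child

def evolve(pop_size=40, generations=60):
    population = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection
        population = parents + [crossover(random.choice(parents), random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```

In the full system described above, the fitness function would additionally call environmental assessment tools to rank candidates, rather than counting adjacencies alone.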
Abstract:
The structure of the 1:1 proton-transfer compound from the reaction of L-tartaric acid with the azo-dye precursor aniline yellow [4-(phenylazo)aniline], 4-(phenyldiazenyl)anilinium hydrogen (2R,3R)-tartrate, C12H12N3+·C4H6O6-, has been determined at 200 K. The asymmetric unit of the compound contains two independent phenylazoanilinium cations and two hydrogen L-tartrate anions. The structure is unusual in that all four phenyl rings of both cations have identical 50% rotational disorder. The two hydrogen L-tartrate anions form independent but similar chains through head-to-tail carboxylic O-H...O(carboxyl) hydrogen bonds [graph set C7], which are then extended into a two-dimensional hydrogen-bonded sheet structure through hydroxyl O-H...O hydrogen-bonding links. The anilinium groups of the phenyldiazenyl cations are incorporated into the sheets and also provide internal hydrogen-bonding extensions, while their aromatic tails layer in the structure without significant interaction except for weak π-π interactions [minimum ring centroid separation 3.844 (3) Å]. The hydrogen L-tartrate residues of both anions have the common short intramolecular hydroxyl O-H...O(carboxyl) hydrogen bonds. This work has provided a solution to the unusual disorder problem inherent in the structure of this salt, as well as giving another example of the utility of the hydrogen tartrate anion in the generation of sheet substructures in molecular assembly processes.