842 results for Graph DBMS, BenchMarking, OLAP, NoSQL
Abstract:
Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector, there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and beg the question of whether it is possible at some higher level to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate SQ (of each service encounter) based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, failed to address what needs to be reliable, assured, tangible, empathetic and responsive. This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multi-dimensional and multi-level; this hierarchical approach to SQ measurement better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content/nature of factors related to SQ, and addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such understanding of SQ seeks to transcend industries and service types with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision.
Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level which is encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension to ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions; however, the candidate acknowledges that more work has to be done to better define the sub-dimensions. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively. The latter statement suggests that customers evaluate each primary dimension (or each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objective(s) of the study. Should the objective(s) be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as this approach is more straightforward and reduces administrative overheads in the process. However, should the objective(s) be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as this approach has the ability to identify areas that need improvement.
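To make the contrast between the two measurement approaches concrete, here is a minimal sketch in Python. The dimension names follow SERVQUAL; all scores are invented illustrative values, not data from the candidate's study.

```python
# Illustrative comparison of disconfirmation (gap) scoring versus
# perceptions-only scoring for SERVQUAL-style dimensions.
# All scores below are invented examples on a 1-7 Likert scale.

dimensions = ["reliability", "assurance", "tangibles", "empathy", "responsiveness"]

# Hypothetical respondent data: expectations (E) and perceptions (P) per dimension.
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 5.5, "responsiveness": 6.2}
perceptions  = {"reliability": 5.8, "assurance": 6.1, "tangibles": 5.2,
                "empathy": 4.9, "responsiveness": 5.0}

# Disconfirmation approach: SQ gap = perception - expectation.
# Negative gaps flag dimensions that fall short and need improvement.
gaps = {d: perceptions[d] - expectations[d] for d in dimensions}

# Perceptions-only approach: overall SQ is simply the mean perception score.
overall_perceived_sq = sum(perceptions[d] for d in dimensions) / len(dimensions)

print("Gap scores (negative = shortfall):", gaps)
print("Perceptions-only overall SQ:", round(overall_perceived_sq, 2))
```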
Abstract:
Triage is a process that is critical to the effective management of modern emergency departments. Triage systems aim not only to ensure clinical justice for the patient, but also to provide an effective tool for departmental organisation, monitoring and evaluation. Over the last 20 years, triage systems have been standardised in a number of countries, and efforts have been made to ensure consistency of application. However, the ongoing crowding of emergency departments resulting from access block and increased demand has led to calls for a review of systems of triage. In addition, international variance in triage systems limits the capacity for benchmarking. The aim of this paper is to provide a critical review of the literature pertaining to emergency department triage in order to inform the direction for future research. While education, guidelines and algorithms have been shown to reduce triage variation, there remains significant inconsistency in triage assessment arising from the diversity of factors determining the urgency of any individual patient. It is timely to accept this diversity, to establish what is agreed, and to identify what may be agreeable. It is time to develop and test an International Triage Scale (ITS) which is supported by an international collaborative approach towards a triage research agenda. This agenda would seek to further develop application and moderating tools and to utilise the scales for international benchmarking and research programmes.
Abstract:
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection to combat fraud. In this paper we present a role-mining-inspired approach to representing user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph to represent relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both using SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
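As a rough illustration of the set-theoretic idea (the activity data and the transaction codes chosen as a conflict pair are invented for this sketch; the paper's two DAG-construction algorithms are not reproduced):

```python
# Hypothetical sketch: transaction profiles are sets of transaction codes,
# and a DAG links a profile to every profile that strictly contains it.

# Each record is (user, transaction code) taken from ERP activity logs.
activity_log = [
    ("alice", "FB60"), ("alice", "F110"),                     # invoice entry + payment run
    ("bob",   "FB60"),
    ("carol", "FB60"), ("carol", "F110"), ("carol", "XK01"),  # also creates vendors
]

# Transaction profile: the set of transaction codes each user has executed.
profiles = {}
for user, tcode in activity_log:
    profiles.setdefault(user, set()).add(tcode)

# DAG edges: u -> v whenever u's profile is a strict subset of v's profile.
edges = [(u, v) for u in profiles for v in profiles
         if u != v and profiles[u] < profiles[v]]

# A crude segregation-of-duties check: flag users whose profile covers an
# example conflict pair (entering invoices AND running the payment program).
conflict = {"FB60", "F110"}
flagged = [u for u, p in profiles.items() if conflict <= p]

print("Profiles:", profiles)
print("Subset DAG edges:", edges)
print("Potential segregation-of-duties violations:", flagged)
```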
Abstract:
In this paper, the train scheduling problem is modelled as a blocking parallel-machine job shop scheduling (BPMJSS) problem. In the model, trains, single-track sections and multiple-track sections are synonymous with jobs, single machines and parallel machines respectively, and an operation is regarded as the movement/traversal of a train across a section. Due to the lack of buffer space, the real-life case should consider blocking or hold-while-wait constraints, which means that a track section cannot release, and must hold, the train until the next section on the routing becomes available. Based on a literature review and our analysis, it is very hard to find a feasible complete schedule directly for BPMJSS problems. Firstly, a parallel-machine job-shop-scheduling (PMJSS) problem is solved by an improved shifting bottleneck procedure (SBP) algorithm without considering blocking conditions. Inspired by the proposed SBP algorithm, a feasibility satisfaction procedure (FSP) algorithm is then developed to solve and analyse the BPMJSS problem, using an alternative graph model that is an extension of the classical disjunctive graph models. The proposed algorithms have been implemented and validated using real-world data from Queensland Rail. Sensitivity analysis has been applied by considering train length, upgrading track sections, increasing train speed and changing bottleneck sections. The outcomes show that the proposed methodology would be a very useful tool for real-life train scheduling problems.
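A minimal sketch of how such an alternative graph can be evaluated once all sequencing decisions have been fixed; the section names, running times and arcs below are invented, and the SBP/FSP procedures that actually choose the arcs are not shown.

```python
# Once all alternative (sequencing) arcs are fixed, the schedule graph is a
# weighted DAG and operation start times follow from longest paths from a
# dummy source. Blocking is modelled by tying the release of a section to the
# start of the occupying train's NEXT operation (a zero-length arc), rather
# than to the end of its current traversal.

from collections import defaultdict, deque

# Arcs (u, v, w) mean: start(v) >= start(u) + w.
arcs = [
    ("src", "T1_s1", 0), ("T1_s1", "T1_s2", 10),   # train 1 traverses s1 then s2
    ("src", "T2_s1", 0), ("T2_s1", "T2_s3", 8),    # train 2 traverses s1 then s3
    # Blocking arc: section s1 is released to train 2 only when train 1 has
    # actually moved on to s2, i.e. start(T2_s1) >= start(T1_s2).
    ("T1_s2", "T2_s1", 0),
]

# Longest path over a topological order gives earliest feasible start times.
succ, indeg, start = defaultdict(list), defaultdict(int), defaultdict(int)
nodes = {n for a in arcs for n in a[:2]}
for u, v, w in arcs:
    succ[u].append((v, w))
    indeg[v] += 1
queue = deque(n for n in nodes if indeg[n] == 0)
while queue:
    u = queue.popleft()
    for v, w in succ[u]:
        start[v] = max(start[v], start[u] + w)
        indeg[v] -= 1
        if indeg[v] == 0:
            queue.append(v)

print({n: start[n] for n in sorted(nodes) if n != "src"})
```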
Abstract:
In this paper, techniques for scheduling additional train services (SATS) are considered, as is train scheduling involving general time window constraints, fixed operations, maintenance activities and periods of section unavailability. The SATS problem is important because additional services must often be given access to the railway and subsequently integrated into current timetables. The SATS problem therefore considers the competition for railway infrastructure between new services and existing services belonging to the same or different operators. The SATS problem is characterised as a hybrid job shop scheduling problem with time window constraints. To solve this problem, constructive algorithm and metaheuristic scheduling techniques that operate upon a disjunctive graph model of train operations are utilised. In numerical investigations, the proposed framework and associated techniques are tested and shown to be effective.
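A hedged sketch of the kind of time-window feasibility check involved when threading an additional service through existing section occupations; all data and the greedy rule are invented, and this is not the paper's constructive algorithm.

```python
# Greedy feasibility check for inserting one additional service, honouring a
# departure time window for each section on its route (invented toy data).

def earliest_slot(busy, earliest, duration):
    """Earliest start >= `earliest` such that [start, start+duration) avoids
    every (b_start, b_end) interval already occupying the section."""
    t = earliest
    for b_start, b_end in sorted(busy):
        if t + duration <= b_start:      # fits before this occupation
            return t
        t = max(t, b_end)                # otherwise wait until it clears
    return t

# Existing occupations per section, and the new service's route:
# (section, traversal time, (window_open, window_close) for entering it).
occupied = {"s1": [(0, 15)], "s2": [(20, 35)], "s3": []}
route = [("s1", 10, (5, 40)), ("s2", 12, (5, 60)), ("s3", 8, (5, 80))]

t = 0
for section, dur, (w_open, w_close) in route:
    t = earliest_slot(occupied[section], max(t, w_open), dur)
    if t > w_close:
        print(f"Infeasible: cannot enter {section} before its window closes")
        break
    print(f"Enter {section} at t={t}, leave at t={t + dur}")
    t += dur
```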
Abstract:
Train scheduling is a complex and time-consuming task of vital importance. To schedule trains more accurately and efficiently than permitted by current techniques, a novel hybrid job shop approach has been proposed and implemented. Unique characteristics of train scheduling are first incorporated into a disjunctive graph model of train operations. A constructive algorithm that utilises this model is then developed. The constructive algorithm is a general procedure that constructs a schedule using insertion, backtracking and dynamic route selection mechanisms. It provides a significant search capability and is valid for any objective criteria. Simulated Annealing and Local Search meta-heuristic improvement algorithms are also adapted and extended. An important feature of these approaches is a new compound perturbation operator that consists of many unitary moves and allows trains to be shifted feasibly and more easily within the solution. A numerical investigation and case study are provided and demonstrate that high quality solutions are obtainable on real-sized applications.
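The following toy sketch illustrates the idea of a compound perturbation (one move shifts all operations of a train) inside a Metropolis acceptance loop; the objective, move sizes and cooling schedule are invented, and the feasibility repair that a real train scheduler would need is omitted.

```python
# Toy simulated annealing with a "compound" move: shifting a whole train means
# applying the same unitary shift to every one of its operations, so the train
# stays internally consistent. Not the thesis operators; illustrative only.

import math, random

random.seed(1)

# Schedule: train -> list of operation start times (toy data).
schedule = {"T1": [0, 10, 25], "T2": [5, 18, 30]}

def cost(sched):
    # Toy objective: latest start time, standing in for makespan.
    return max(t for starts in sched.values() for t in starts)

def compound_shift(sched):
    # Pick one train and shift all of its operations by the same amount.
    new = {k: list(v) for k, v in sched.items()}
    train = random.choice(list(new))
    delta = random.choice([-3, -1, 1, 3])
    new[train] = [max(0, t + delta) for t in new[train]]
    return new

temp, current, best = 10.0, schedule, schedule
while temp > 0.1:
    candidate = compound_shift(current)
    d = cost(candidate) - cost(current)
    if d < 0 or random.random() < math.exp(-d / temp):   # Metropolis rule
        current = candidate
        if cost(current) < cost(best):
            best = current
    temp *= 0.95                                          # geometric cooling

print("Best toy cost:", cost(best), best)
```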
Abstract:
Background/aim: A timely evaluation of the Australian Competency Standards for Entry-Level Occupational Therapists© (1994) was conducted. This thorough investigation comprised a literature review exploring the concept of competence and the applications of competency standards; systematic benchmarking of the Australian Occupational Therapy Competency Standards (OT AUSTRALIA, 1994) against other national and international competency standards and other affiliated documents, from occupational therapy and other cognate disciplines; and extensive nationwide consultation with the professional community. This paper explores and examines the similarities and disparities between occupational therapy competency standards documents available in English from Australia and other countries.
Methods: An online search for national occupational therapy competency standards located 10 documents, including the Australian competencies.
Results: Four 'frameworks' were created to categorise the documents according to their conceptual underpinnings: Technical-Prescriptive, Enabling, Educational and Meta-Cognitive. Other characteristics that appeared to impact the design, content and implementation of competency standards, including definitions of key concepts, authorship, national and cultural priorities, scope of services, intended use and review mechanisms, were revealed.
Conclusion: The proposed 'frameworks' and identification of influential characteristics provided a 'lens' through which to understand and evaluate competency standards. While consistent application of and attention to some of these characteristics appear to consolidate and affirm the authority of competency standards, it is suggested that the national context should be a critical determinant of the design and content of the final document. The Australian Occupational Therapy Competency Standards (OT AUSTRALIA, 1994) are critiqued accordingly, and preliminary recommendations for revision are proposed.
Abstract:
Mobile robots are widely used in many industrial fields. Path planning is one of the most important aspects of mobile robot research. Path planning for a mobile robot involves finding a collision-free route through the robot’s environment, avoiding obstacles, from a specified start location to a desired goal destination, while satisfying certain optimization criteria. Most of the existing path planning methods, such as the visibility graph, cell decomposition, and potential field methods, are designed with a focus on static environments, in which there are only stationary obstacles. However, in practical systems such as marine science research, mining robotics, and RoboCup games, robots usually face dynamic environments, in which both moving and stationary obstacles exist. Because of the complexity of dynamic environments, research on path planning in environments with dynamic obstacles is limited; only a small number of papers have been published in this area, in comparison with hundreds of reports on path planning in stationary environments in the open literature. Recently, a genetic algorithm based approach has been introduced to plan the optimal path for a mobile robot in a dynamic environment with moving obstacles. However, as the number of obstacles in the environment increases, and as the moving speed and direction of the robot and obstacles change, the size of the problem to be solved increases sharply. Consequently, the performance of the genetic algorithm based approach deteriorates significantly. This motivates the research in this work. This research develops and implements a simulated annealing algorithm based approach to find the optimal path for a mobile robot in a dynamic environment with moving obstacles. The simulated annealing algorithm is an optimization algorithm similar to the genetic algorithm in principle. However, our investigation and simulations have indicated that the simulated annealing algorithm based approach is simpler and easier to implement. Its performance is also shown to be superior to that of the genetic algorithm based approach in both online and offline processing times, as well as in obtaining the optimal solution for path planning of the robot in the dynamic environment. The first step of many path planning methods is to search for an initial feasible path for the robot. A commonly used method for finding the initial path is to randomly pick some vertices of the obstacles in the search space. This is time-consuming in both static and dynamic path planning, and has an important impact on the efficiency of dynamic path planning. This research proposes a heuristic method to search for the feasible initial path efficiently. The heuristic method is then incorporated into the proposed simulated annealing algorithm based approach for dynamic robot path planning. Simulation experiments have shown that with the incorporation of the heuristic method, the developed simulated annealing algorithm based approach requires much shorter processing time to obtain optimal solutions to the dynamic path planning problem. Furthermore, the quality of the solution, as characterized by the length of the planned path, is also improved with the incorporated heuristic method in the simulated annealing based approach for both online and offline path planning.
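A small sketch of a greedy heuristic for an initial collision-free path, under assumed circular obstacles and invented coordinates; the thesis method differs in detail, and this sketch only checks waypoint endpoints rather than whole path segments.

```python
# Greedy initial-path heuristic: walk straight at the goal and, whenever the
# next step would land inside an obstacle, try steps rotated away from the
# goal direction, preferring the smallest deviation. Illustrative only.

import math

obstacles = [((5.0, 5.0), 2.0), ((9.0, 8.0), 1.5)]   # (centre, radius), invented
start, goal, step = (0.0, 0.0), (12.0, 10.0), 0.5

def blocked(p):
    return any(math.dist(p, c) <= r for c, r in obstacles)

def towards(p, q, d):
    length = math.dist(p, q)
    return (p[0] + d * (q[0] - p[0]) / length, p[1] + d * (q[1] - p[1]) / length)

path, pos = [start], start
for _ in range(500):                       # hard iteration cap
    if math.dist(pos, goal) <= step:
        path.append(goal)
        break
    nxt = towards(pos, goal, step)
    if blocked(nxt):
        ang = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
        for dev in (0.3, -0.3, 0.6, -0.6, 1.0, -1.0, 1.5, -1.5):
            cand = (pos[0] + step * math.cos(ang + dev),
                    pos[1] + step * math.sin(ang + dev))
            if not blocked(cand):
                nxt = cand
                break
        # If every deviation is blocked the straight step is kept;
        # a real planner would backtrack or check full segments here.
    pos = nxt
    path.append(pos)

length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
print(f"{len(path)} waypoints, path length = {length:.2f}")
```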
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter - ‘Evaluating Information Systems’ - is broad, and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and those resources need to be invested in a way that provides greatest benefit to the organization. IT expenditure represents a substantial portion of any organization’s investment budget and IT related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify this expenditure both pre- and post-investment. Evaluation is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis – a ‘management’ emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on the technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal – The Journal of Strategic Information Systems – to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area. This is achieved in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (i.e. the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic ‘Information Systems Success Measurement’, before drilling deeper to become even more focused in Part 4 on ‘Benchmarking Information Systems’. In other words, the chapter drills down from Parts 1&2 (Value of IS), to Part 3 (Measuring Information Systems Success), to Part 4 (Benchmarking IS). While the commencing Parts (1&2) are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts (3 and 4) admittedly reflect the author’s more specific interests. Thus, the three chapter foci – value of IS, measuring IS success, and benchmarking IS – are not mutually exclusive; rather, each subsequent focus is in most respects a sub-set of the former.
Parts 1&2, ‘the Value of IS’, take a broad view, with much emphasis on ‘the business Value of IS’, or the relationship between information technology and organizational performance. Part 3, ‘Information System Success Measurement’, focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into 6 constructs – System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use (later suggesting a 7th construct – Service Quality (DeLone and McLean 2003)). These 6 constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS Success. Part 3 reviews this body of work. Part 4, ‘Benchmarking Information Systems’, drills deeper again, focusing more specifically on a measure of the IS that can be used as a ‘benchmark’. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems (IS). Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and systems contexts.
Abstract:
This is an experimental study into the permeability and compressibility properties of bagasse pulp pads. Three experimental rigs were custom-built for this project. The experimental work is complemented by modelling work. Both the steady-state and dynamic behaviour of pulp pads are evaluated in the experimental and modelling components of this project. Bagasse, the fibrous residue that remains after sugar is extracted from sugarcane, is normally burnt in Australia to generate steam and electricity for the sugar factory. A study into bagasse pulp was motivated by the possibility of making highly value-added pulp products from bagasse for the financial benefit of sugarcane millers and growers. The bagasse pulp and paper industry is a multibillion dollar industry (1). Bagasse pulp could replace eucalypt pulp, which is more widely used in the local production of paper products. An opportunity exists for replacing the large quantity of mainly generic paper products imported to Australia. This includes 949,000 tonnes of generic photocopier papers (2). The use of bagasse pulp for paper manufacture is the main application area of interest for this study. Bagasse contains a large quantity of short parenchyma cells called ‘pith’. Around 30% of the shortest fibres are removed from bagasse prior to pulping. Despite the ‘depithing’ operations in conventional bagasse pulp mills, a large amount of pith remains in the pulp. Amongst Australian paper producers there is a perception that the high quantity of short fibres in bagasse pulp leads to poor filtration behaviour at the wet-end of a paper machine. Bagasse pulp’s poor filtration behaviour reduces paper production rates, and consequently revenue, when compared to paper production using locally made eucalypt pulp. Pulp filtration can be characterised by two interacting factors: permeability and compressibility. Surprisingly, there has previously been very little rigorous investigation into either bagasse pulp permeability or compressibility. Only freeness testing of bagasse pulp has been published in the open literature. As a result, this study has focussed on a detailed investigation of the filtration properties of bagasse pulp pads. As part of this investigation, this study investigated three options for improving the permeability and compressibility properties of Australian bagasse pulp pads. Two options for further pre-treating depithed bagasse prior to pulping were considered. Firstly, bagasse was fractionated based on size, producing ‘coarse’ and ‘medium’ bagasse fractions. Secondly, bagasse was collected after being processed on two types of juice extraction technology, i.e. from a sugar mill and from a sugar diffuser. Finally, one method of post-treating the bagasse pulp was investigated: chemical additives, which are known to improve freeness, were assessed for their effect on pulp pad permeability and compressibility. Pre-treated Australian bagasse pulp samples were compared with several benchmark pulp samples. A sample of commonly used kraft Eucalyptus globulus pulp was obtained. A sample of depithed Argentinean bagasse, which is used for commercial paper production, was also obtained. A sample of Australian bagasse which was depithed as per typical factory operations was also produced for benchmarking purposes. The steady-state pulp pad permeability and compressibility parameters were determined experimentally using two purpose-built experimental rigs.
In reality, steady-state conditions do not exist on a paper machine. The permeability changes as the sheet compresses over time. Hence, a dynamic model was developed which uses the experimentally determined steady-state permeability and compressibility parameters as inputs. The filtration model was developed with a view to designing pulp processing equipment that is suitable specifically for bagasse pulp. The predicted results of the dynamic model were compared to experimental data. The effectiveness of polymeric and microparticle chemical additives for improving the retention of short fibres and increasing the drainage rate of a bagasse pulp slurry was determined in a third purpose-built rig: a modified Dynamic Drainage Jar (DDJ). These chemical additives were then used in the making of a pulp pad, and their effect on the steady-state and dynamic permeability and compressibility of bagasse pulp pads was determined. The most important finding from this investigation was that Australian bagasse pulp was produced with higher permeability than eucalypt pulp, despite a higher overall content of short fibres. It is thought that this research outcome could enable Australian paper producers to switch from eucalypt pulp to bagasse pulp without sacrificing paper machine productivity. It is thought that two factors contributed to the high permeability of the bagasse pulp pad. Firstly, the thicker cell walls of the bagasse pulp fibres resulted in high fibre stiffness. Secondly, the bagasse pulp had a large proportion of fibres longer than 1.3 mm. These attributes helped to reinforce the pulp pad matrix. The steady-state permeability and compressibility parameters for the eucalypt pulp were consistent with those found by previous workers. It was also found that Australian pulp derived from the ‘coarse’ bagasse fraction had higher steady-state permeability than the ‘medium’ fraction. However, there was no difference between bagasse pulp originating from a diffuser or a mill. The bagasse pre-treatment options investigated in this study were not found to affect the steady-state compressibility parameters of a pulp pad. The dynamic filtration model was found to give predictions that were in good agreement with experimental data for pads made from samples of pre-treated bagasse pulp, provided at least some pith was removed prior to pulping. Applying vacuum to a pulp slurry in the modified DDJ dramatically reduced the drainage time. At any level of vacuum, bagasse pulp benefitted from chemical additives, as quantified by reduced drainage time and increased retention of short fibres. Using the modified DDJ, it was observed that under specific conditions, a benchmark depithed bagasse pulp drained more rapidly than the ‘coarse’ bagasse pulp. In steady-state permeability and compressibility experiments, the addition of chemical additives improved the pad permeability and compressibility of a benchmark bagasse pulp with a high quantity of short fibres. Importantly, this effect was not observed for the ‘coarse’ bagasse pulp. However, dynamic filtration experiments showed that there was also a small observable improvement in filtration for the ‘medium’ bagasse pulp. The mechanism of bagasse pulp pad consolidation appears to be fibre realignment, with chemical additives assisting to lubricate the consolidation process. This study was complemented by pulp physical and chemical property testing and a microscopy study.
In addition to its high pulp pad permeability, ‘coarse’ bagasse pulp often (but not always) had physical properties superior to those of a benchmark depithed bagasse pulp.
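For readers unfamiliar with such models, the sketch below time-steps a generic Darcy-type constant-pressure filtration with an empirical compressible-cake resistance; the parameter values and the power-law form are illustrative assumptions only and do not reproduce the thesis model.

```python
# Generic dynamic (time-stepping) filtration sketch built on Darcy's law with
# an empirical compressible-cake correction. All values are invented.

mu       = 1.0e-3     # filtrate viscosity, Pa*s
dp       = 40e3       # applied pressure difference, Pa
area     = 0.01       # pad area, m^2
conc     = 5.0        # dry fibre deposited per unit filtrate volume, kg/m^3
r_medium = 1.0e10     # filter-medium resistance, 1/m

# Compressible cake: specific resistance rises with pressure (alpha = a*dp^n).
a, n = 1.0e11, 0.4                     # assumed "compressibility" parameters
alpha = a * dp ** n                    # specific cake resistance, m/kg

v, t, dt = 0.0, 0.0, 0.5               # filtrate volume (m^3), time (s), step (s)
while v < 2.0e-4:                      # run until 0.2 L of filtrate collected
    cake_mass = conc * v               # kg of fibre deposited so far
    r_cake = alpha * cake_mass / area  # cake resistance, 1/m
    q = dp * area / (mu * (r_medium + r_cake))   # Darcy flow rate, m^3/s
    v += q * dt
    t += dt

print(f"Reached {v * 1e3:.2f} L of filtrate after {t:.0f} s")
```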
Abstract:
Purpose – The purpose of this study is to examine and extend Noer’s theoretical model of the new employment relationship.
Design/methodology/approach – Case study methodology is used to scrutinise the model. The results of a literature-based survey on the elements underpinning the five values in the model are analysed from the dual perspectives of individual and organization using a multi-source assessment instrument. A schema is developed from an analysis of the survey data to guide and inform a series of focus group discussions. Using content analysis, the transcripts from the focus group discussions are evaluated against the model’s values and their elements. The transcripts are also reviewed for implicit themes. The case studied is Flight Centre Limited, an Australian-based international retail travel company.
Findings – Using this approach, some elements of the five values in Noer’s model are identified as characteristic of the company’s psychological contract. Specifically, to some extent, the model’s values of flexible deployment, customer focus, performance focus, project-based work, and human spirit and work can be applied in this case. A further analysis of the transcripts validates three additional values in the psychological contract literature: commitment; learning and development; and open information. As a result of the findings, Noer’s model is extended to eight values.
Research limitations/implications – The study offers a research-based model of the new employment relationship. Since generalisations from the case study findings cannot be applied directly to other settings, the opportunity to test this model in a variety of contexts is open to other researchers.
Originality/value – In practice, the methodology used is a unique process for benchmarking the psychological contract. The process may be applied in other business settings. By doing so, organization development professionals have a consulting framework for comparing an organization’s dominant psychological contract with the extended model presented here.
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker is used to speed up the subject detection and super-resolution process by tracking moving subjects and cropping a region of interest around the subject's face, which reduces the number and size of the image frames to be super-resolved. In this paper, experiments have been conducted to demonstrate how the optical flow super-resolution method used improves surveillance imagery for visual inspection as well as automatic face recognition on an Eigenface and Elastic Bunch Graph Matching system. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
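A compact sketch of the Eigenface stage of such a pipeline (PCA on mean-centred images followed by nearest-neighbour matching); the data are synthetic, and the tracking, super-resolution and Elastic Bunch Graph Matching stages are not shown.

```python
# Minimal Eigenface-style matcher on synthetic data.

import numpy as np

rng = np.random.default_rng(0)

# Pretend gallery: 20 "identities", one 32x32 grayscale face each, flattened.
gallery = rng.random((20, 32 * 32))

# Eigenfaces: principal components of the mean-centred gallery via SVD.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]                       # keep the top 10 components

# Project the gallery and a probe (here a noisy copy of identity 7, standing
# in for a super-resolved surveillance crop) into the eigenface subspace.
gallery_codes = centred @ eigenfaces.T
probe = gallery[7] + 0.05 * rng.standard_normal(32 * 32)
probe_code = (probe - mean_face) @ eigenfaces.T

# Nearest neighbour in the subspace gives the claimed identity.
distances = np.linalg.norm(gallery_codes - probe_code, axis=1)
print("Best match: identity", int(np.argmin(distances)))
```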
Abstract:
In the title salt, C12H11N2O2+·C7H4NO5-, the cations and anions interact through asymmetric cyclic pyridinium-carboxylate N-H...O,O' hydrogen-bonding associations [graph set R12(4)], giving discrete heterodimers having weak cation-anion π-π aromatic ring interactions [minimum ring centroid separation = 3.7116 (9) Å].
Abstract:
In the structure of the title compound, the salt 2(C12H10N3O4+)·(C12H8O6S2)2-·3H2O, determined at 173 K, the biphenyl-4,4'-disulfonate dianions lie across crystallographic inversion centres with the sulfonate groups interacting head-to-head through centrosymmetric cyclic bis(water)-bridged hydrogen-bonding associations [graph set R4/4(11)], forming chain structures. The 2-(2,4-dinitrobenzyl)pyridinium cations are linked to these chains through N+-H...O(water) hydrogen bonds, and a two-dimensional network structure is formed through water bridges between sulfonate and 2-nitro O atoms, while the structure also has weak cation-anion π-π aromatic ring interactions [minimum ring centroid separation 3.8441 (13) Å].