253 results for Graph DBMS, Benchmarking, OLAP, NoSQL
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge. Presenting the information in a meaningful way is an additional challenge. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of the results, which is required for the iterative re-querying process. Final results of the process are output as nodes and edges suitable for presentation as a network graph.
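The simple correlation method described above can be sketched as follows. The record fields, values, and the five-minute window are invented for illustration; they are not the paper's mapping schema or the CEE format:

```python
from datetime import datetime, timedelta

# Hypothetical normalised event records; field names are illustrative only.
events = [
    {"id": "e1", "host": "fw01",  "user": "alice", "time": datetime(2013, 4, 1, 9, 0, 0)},
    {"id": "e2", "host": "web02", "user": "alice", "time": datetime(2013, 4, 1, 9, 2, 30)},
    {"id": "e3", "host": "db03",  "user": "bob",   "time": datetime(2013, 4, 1, 9, 3, 0)},
]

def simple_correlate(records, field, window=timedelta(minutes=5)):
    """Link pairs of events that share `field` and fall within `window`."""
    edges = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if a[field] == b[field] and abs(a["time"] - b["time"]) <= window:
                edges.append((a["id"], b["id"]))
    return edges

# Output as nodes and edges, ready for presentation as a network graph.
nodes = [e["id"] for e in events]
edges = simple_correlate(events, "user")
```

Here only `e1` and `e2` share a user within the window, so a single edge links them; a composite method would combine several such field constraints.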
Abstract:
We consider the problem of maximizing the secure connectivity in wireless ad hoc networks, and analyze the complexity of the post-deployment key establishment process constrained by physical layer properties such as connectivity, energy consumption and interference. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first one is based on establishing a secret key using only the links that are already secured by shared keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second one extends the first problem by increasing the power level between a pair of nodes that share a secret key, to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We prove that both problems are NP-hard and MAX-SNP-hard, via a reduction from the MAX3SAT problem.
Abstract:
The perennial issues of student engagement, success and retention in higher education continue to attract attention as the salience of teaching and learning funding and performance measures has increased. This paper addresses the question of the responsibility or place of higher education institutions (HEIs) for initiating, planning, managing and evaluating their student engagement, success and retention programs and strategies. An evaluation of the current situation indicates the need for a sophisticated approach to assessing the ability of HEIs to proactively design programs and practices that enhance student engagement. An approach—the Student Engagement Success and Retention Maturity Model (SESR-MM)—is proposed and its development, current status, and relationship with and possible use in benchmarking are discussed.
Abstract:
Creative Statement: “There are those who see Planet Earth as a gigantic living being, one that feeds and nurtures humanity and myriad other species – an entity that must be cared for. Then there are those who see it as a rock full of riches to be pilfered heedlessly in a short-term quest for over-abundance. This ‘cradle to grave’ mentality, it would seem, is taking its toll (unless you’re a virulent disbeliever in climate change). Why not, ask artists Priscilla Bracks and Gavin Sade, take a different approach? To this end they have set out on a near impossible task; to visualise the staggering quantity of carbon produced by Australia every year. Their eerie, glowing plastic cube resembles something straight out of Dr Who or The X Files. And, like the best science fiction, it has technical realities at its heart. Every One, Every Day tangibly illustrates our greenhouse gas output – its 27 m3 volume is approximately the amount of greenhouse gas emitted per capita, daily. Every One, Every Day is lit by an array of LEDs displaying light patterns representing energy use, generated by data from the Australian Energy Market. Every One, Every Day was formed from recycled polyethylene – used milk bottles – ‘lent’ to the artists by a Visy recycling facility. At the end of the Vivid Festival this plastic will be returned to Visy, where it will re-enter the stream of ‘technical nutrients.’ Could we make another world? One that emulates the continuing cycles of nature? One that uses our ‘technical nutrients’ such as plastic and steel in continual cycles, just like a deciduous tree dropping leaves to compost itself and keep its roots warm and moist?” (Ashleigh Crawford. Melbourne – April, 2013) Artistic Research Statement: The research focus of this work is on exploring how to represent complex statistics and data at a human scale, and how to produce a work where a large percentage of the materials could be recycled.
The surface of Every One, Every Day is clad in tiles made from polyethylene, primarily from recycled milk bottles, ‘lent’ to the artists by the Visy recycling facility in Sydney. The tiles will be returned to Visy for recycling. As such the work can be viewed as an intervention in the industrial ecology of polyethylene, and in the process demonstrates how to sustain cycles of technical materials – by taking the output of a recycling facility back to a manufacturer to produce usable materials. In terms of data visualisation, Every One, Every Day takes the form of a cube with a volume of 27 cubic meters. The annual per capita emissions figures for Australia are cited as ranging between 18 and 25 tons. Assuming the lower figure, 18 tons per capita annually, the 27 cubic meters represents approximately one day per capita of CO2 emissions – where CO2 is a gas at 15°C and 1 atmosphere of pressure. The work also explores real-time data visualisation by using an array of 600 controllable LEDs inside the cube. Illumination patterns are derived from real-time data from the Australian Energy Market, using the dispatch interval price and demand graph for New South Wales. The two variables of demand and price are mapped to properties of the illumination – hue, brightness, movement, frequency etc. The research underpinning the project spanned from industrial ecology to data visualisation and public art practices. The result is that Every One, Every Day is one of the first public artworks that successfully brings together materials, physical form, and real-time data representation in a unified whole.
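The mapping from market data to illumination might look like the sketch below. The value ranges and the choice of which variable drives which property are assumptions for illustration; the artists' actual mapping (covering movement, frequency, etc.) is not published here:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

# Assumed ranges for the NSW dispatch-interval feed, for normalisation only.
DEMAND_RANGE = (6000.0, 14000.0)   # MW (assumed)
PRICE_RANGE = (0.0, 300.0)         # $/MWh (assumed)

def to_illumination(demand, price):
    """Map demand to brightness and price to hue, both normalised to [0, 1]."""
    d = clamp((demand - DEMAND_RANGE[0]) / (DEMAND_RANGE[1] - DEMAND_RANGE[0]), 0.0, 1.0)
    p = clamp((price - PRICE_RANGE[0]) / (PRICE_RANGE[1] - PRICE_RANGE[0]), 0.0, 1.0)
    return {"brightness": d, "hue": p}

state = to_illumination(demand=10000.0, price=75.0)
```

Such a normalised state could then be fanned out to the 600-LED array by any pattern generator.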
Abstract:
The structures of two ammonium salts of 3-carboxy-4-hydroxybenzenesulfonic acid (5-sulfosalicylic acid, 5-SSA) have been determined at 200 K. In the 1:1 hydrated salt, ammonium 3-carboxy-4-hydroxybenzenesulfonate monohydrate, NH4+·C7H5O6S-·H2O, (I), the 5-SSA- monoanions give two types of head-to-tail laterally linked cyclic hydrogen-bonding associations, both with graph set R4(4)(20). The first involves both carboxylic acid O-H...Owater and water O-H...Osulfonate hydrogen bonds at one end, and ammonium N-H...Osulfonate and N-H...Ocarboxy hydrogen bonds at the other. The second association is centrosymmetric, with end linkages through water O-H...Osulfonate hydrogen bonds. These conjoined units form stacks down c and are extended into a three-dimensional framework structure through N-H...O and water O-H...O hydrogen bonds to sulfonate O-atom acceptors. Anhydrous triammonium 3-carboxy-4-hydroxybenzenesulfonate 3-carboxylato-4-hydroxybenzenesulfonate, 3NH4+·C7H4O6S2-·C7H5O6S-, (II), is unusual, having both dianionic 5-SSA2- and monoanionic 5-SSA- species. These are linked by a carboxylic acid O-H...O hydrogen bond and, together with the three ammonium cations (two on general sites and the third comprising two independent half-cations lying on crystallographic twofold rotation axes), give a pseudo-centrosymmetric asymmetric unit. Cation-anion hydrogen bonding within this layered unit involves a cyclic R3(3)(8) association which, together with extensive peripheral N-H...O hydrogen bonding involving both sulfonate and carboxy/carboxylate acceptors, gives a three-dimensional framework structure. This work further demonstrates the utility of the 5-SSA- monoanion for the generation of stable hydrogen-bonded crystalline materials, and provides the structure of a dianionic 5-SSA2- species of which there are only a few examples in the crystallographic literature.
Abstract:
The structures of the anhydrous proton-transfer compounds of the sulfa drug sulfamethazine with 5-nitrosalicylic acid and picric acid, namely 2-(4-aminobenzenesulfonamido)-4,6-dimethylpyrimidinium 2-hydroxy-5-nitrobenzoate, C12H15N4O2S(+)·C7H4NO4(-), (I), and 2-(4-aminobenzenesulfonamido)-4,6-dimethylpyrimidinium 2,4,6-trinitrophenolate, C12H15N4O2S(+)·C6H2N3O7(-), (II), respectively, have been determined. In the asymmetric unit of (I), there are two independent but conformationally similar cation-anion heterodimer pairs which are formed through duplex intermolecular N(+)-H...Ocarboxylate and N-H...Ocarboxylate hydrogen-bond pairs, giving a cyclic motif [graph set R2(2)(8)]. These heterodimers form separate and different non-associated substructures through aniline N-H...O hydrogen bonds, one one-dimensional, involving carboxylate O-atom acceptors, the other two-dimensional, involving both carboxylate and hydroxy O-atom acceptors. The overall two-dimensional structure is stabilized by π-π interactions between the pyrimidinium ring and the 5-nitrosalicylate ring in both heterodimers [minimum ring-centroid separation = 3.4580 (8) Å]. For picrate (II), the cation-anion interaction involves a slightly asymmetric chelating N-H...O R2(1)(6) hydrogen-bonding association with the phenolate O atom, together with peripheral conjoint R1(2)(6) interactions between the same N-H groups and O atoms of the ortho-related nitro groups. An inter-unit amine N-H...Osulfone hydrogen bond gives one-dimensional chains which extend along a and inter-associate through π-π interactions between the pyrimidinium rings [centroid-centroid separation = 3.4752 (9) Å]. The two structures reported here now bring to a total of four the crystallographically characterized examples of proton-transfer salts of sulfamethazine with strong organic acids.
Abstract:
Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with the maximum determinant of the variance-covariance matrix of predictions.
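The one-point-at-a-time augmentation can be sketched as below. As a stand-in for the prediction variance of a fitted emulator, squared distance to the nearest design point is used (an assumption for illustration; the paper uses the model's own prediction variance):

```python
def nearest_sq_dist(x, design):
    """Squared distance from candidate x to its nearest design point."""
    return min(sum((a - b) ** 2 for a, b in zip(x, d)) for d in design)

def augment(design, candidates, n_extra):
    """Greedily add the candidate with the largest uncertainty proxy."""
    design = list(design)
    pool = list(candidates)
    for _ in range(n_extra):
        best = max(pool, key=lambda x: nearest_sq_dist(x, design))
        design.append(best)
        pool.remove(best)
    return design

design = [(0.0, 0.0), (1.0, 1.0)]
candidates = [(0.5, 0.5), (1.0, 0.0), (0.1, 0.1)]
augmented = augment(design, candidates, 1)
```

With these inputs the corner (1.0, 0.0) is farthest from every existing run, so it is added first; the determinant-based criterion would instead score whole subsets of candidates jointly.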
Abstract:
Objective: Effective management of multi-resistant organisms is an important issue for hospitals both in Australia and overseas. This study investigates the utility of using Bayesian Network (BN) analysis to examine relationships between risk factors and colonization with Vancomycin Resistant Enterococcus (VRE). Design: Bayesian Network analysis was performed using infection control data collected over a period of 36 months (2008-2010). Setting: Princess Alexandra Hospital (PAH), Brisbane. Outcome of interest: Number of new VRE isolates. Methods: A BN is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). A BN enables multiple interacting agents to be studied simultaneously. The initial BN model was constructed based on the infectious disease physician's expert knowledge and current literature. Continuous variables were dichotomised by using third quartile values of year 2008 data. The BN was used to examine the probabilistic relationships between VRE isolates and risk factors, and to establish which factors were associated with an increased probability of a high number of VRE isolates. Software: Netica (version 4.16). Results: Preliminary analysis revealed that VRE transmission and VRE prevalence were the most influential factors in predicting a high number of VRE isolates. Interestingly, several factors (hand hygiene and cleaning) known through the literature to be associated with VRE prevalence did not appear to be as influential as expected in this BN model. Conclusions: This preliminary work has shown that Bayesian Network analysis is a useful tool in examining clinical infection prevention issues, where there is often a web of factors that influence outcomes. This BN model can be restructured easily, enabling various combinations of agents to be studied.
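The kind of query such a model answers can be sketched with a minimal two-parent network and inference by enumeration. The structure and every probability below are invented for illustration; the paper's model was elicited from expert knowledge and built in Netica:

```python
# Prior on the second parent node (invented).
p_prevalence_high = 0.4

# P(isolates = high | transmission_high, prevalence_high) -- invented CPT.
p_isolates_high = {
    (True, True): 0.9,
    (True, False): 0.6,
    (False, True): 0.5,
    (False, False): 0.1,
}

def p_high_isolates_given_transmission(trans_high):
    """Sum out prevalence: P(isolates = high | transmission)."""
    return sum(
        p_isolates_high[(trans_high, prev)]
        * (p_prevalence_high if prev else 1 - p_prevalence_high)
        for prev in (True, False)
    )

p_given_high = p_high_isolates_given_transmission(True)   # 0.9*0.4 + 0.6*0.6 ≈ 0.72
p_given_low = p_high_isolates_given_transmission(False)   # 0.5*0.4 + 0.1*0.6 ≈ 0.26
```

Comparing the two conditionals shows how a BN quantifies which parent factors most shift the probability of a high isolate count.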
Abstract:
BACKGROUND: Public hospital EDs in Australia have become increasingly congested because of increasing demand and access block. Six per cent of ED patients attend private hospital EDs, whereas 45% of the population hold private health insurance. OBJECTIVES: This study describes the patients attending a small selection of four private hospital EDs in Queensland and Victoria, and tests the feasibility of a private ED database. METHODS: De-identified routinely collected patient data were provided by the four participating private hospitals and amalgamated into a single data set. RESULTS: The mean age of private ED patients was 52 years. Males outnumbered females in all age groups except > 80 years. Attendance was higher on weekends and Mondays, and between 08.00 and 20.00 h. There were 6.6% of the patients triaged as categories 1 and 2, and 60% were categories 4 or 5. There were 36.4% that required hospital admission. Also, 96% of the patients had some kind of insurance. Furthermore, 72% were self-referred and 12% were referred by private medical practitioners. Approximately 25% arrived by ambulance. There were 69% that completed their ED treatment within 4 h. CONCLUSION: This study is the first public description of patients attending private EDs in Australia. Private EDs have a significant role to play in acute medical care and in providing access to private hospitals, which could alleviate pressure on public EDs. This study demonstrates the need for consolidated data based on a consistent data set and data dictionary to enable system-wide analysis, benchmarking and evaluation.
Abstract:
This study presents a segmentation pipeline that fuses colour and depth information to automatically separate objects of interest in video sequences captured from a quadcopter. Many approaches assume that cameras are static with known position, a condition which cannot be preserved in most outdoor robotic applications. In this study, the authors compute depth information and camera positions from a monocular video sequence using structure from motion and use this information as an additional cue to colour for accurate segmentation. The authors model the problem similarly to standard segmentation routines as a Markov random field and perform the segmentation using graph cuts optimisation. Manual intervention is minimised and is only required to determine pixel seeds in the first frame which are then automatically reprojected into the remaining frames of the sequence. The authors also describe an automated method to adjust the relative weights for colour and depth according to their discriminative properties in each frame. Experimental results are presented for two video sequences captured using a quadcopter. The quality of the segmentation is compared to a ground truth and other state-of-the-art methods with consistently accurate results.
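The per-frame adjustment of the relative colour and depth weights might be sketched as below. The separation-of-means score and all sample values are assumptions for illustration, not the authors' actual formulation:

```python
def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):
    """Population standard deviation."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def discriminability(fg, bg):
    """How well a cue separates foreground and background seed samples."""
    return abs(mean(fg) - mean(bg)) / (spread(fg) + spread(bg) + 1e-9)

def cue_weights(fg_colour, bg_colour, fg_depth, bg_depth):
    """Normalised weights for the colour and depth terms of the MRF energy."""
    c = discriminability(fg_colour, bg_colour)
    d = discriminability(fg_depth, bg_depth)
    return c / (c + d), d / (c + d)

# Invented seed samples: depth separates cleanly here, colour barely at all.
w_colour, w_depth = cue_weights(
    fg_colour=[0.4, 0.5, 0.6], bg_colour=[0.45, 0.55, 0.65],
    fg_depth=[1.0, 1.1, 1.2], bg_depth=[4.0, 4.2, 4.4],
)
```

In a frame like this one the depth term would dominate the unary potentials handed to the graph-cut optimiser, and vice versa when colour is the more discriminative cue.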
Abstract:
In this paper, we present an unsupervised graph-cut-based object segmentation method using 3D information provided by Structure from Motion (SFM), called GrabCutSFM. Rather than focusing on the segmentation problem using a trained model or human intervention, our approach aims to achieve meaningful segmentation autonomously, with direct application to vision-based robotics. Generally, object (foreground) and background have certain discriminative geometric information in 3D space. By exploring the 3D information from multiple views, our proposed method can segment potential objects correctly and automatically, compared to conventional unsupervised segmentation using only 2D visual cues. Experiments with real video data collected from indoor and outdoor environments verify the proposed approach.
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision based topometric map of its environment. The map consists of a globally-consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph --- as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
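The pose-graph side of such a map is a weighted graph, and global routing over it reduces to shortest-path search; the locally generated 2D metric maps then refine the route between nodes. A minimal sketch with invented node names and edge lengths (e.g. odometry distances in metres):

```python
import heapq

# Hypothetical pose graph: node -> {neighbour: edge length in metres}.
pose_graph = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.5, "D": 4.0},
    "C": {"A": 5.0, "B": 1.5, "D": 1.0},
    "D": {"B": 4.0, "C": 1.0},
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; returns (total length, node sequence)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

length, path = shortest_path(pose_graph, "A", "D")
```

The route A-B-C-D (4.5 m) beats the direct A-C-D hop here; in the paper's system the traversal between consecutive nodes would additionally cut corners through the local metric maps rather than strictly following graph edges.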
Abstract:
Constructing train schedules is vital in railways. This complex and time-consuming task is made more difficult by additional requirements to make train schedules robust to delays and other disruptions. For a timetable to be regarded as robust, it should be insensitive to delays of a specified level, and its performance with respect to a given metric should be within given tolerances. In other words, the effect of delays should be identifiable and should be shown to be minimal. To this end, a sensitivity analysis is proposed that determines which operation delays cause each operation to be affected. The information provided by this analysis gives another measure of timetable robustness and also provides control information that can be used when delays occur in practice. Several algorithms are proposed to identify this information, and they utilise a disjunctive graph model of train operations. Upon completion, the sets of affected operations can also be used to define the impact of all delays without further disjunctive graph evaluations.
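A simplified view of identifying affected operations is to propagate a delay along the precedence arcs of the graph, marking an operation as affected when the residual delay reaching it exceeds its slack. The graph, slack values, and this particular propagation rule are invented for illustration; the paper's algorithms operate on the full disjunctive graph:

```python
# Hypothetical precedence arcs and slack (in minutes) per operation.
successors = {"op1": ["op2", "op3"], "op2": ["op4"], "op3": ["op4"], "op4": []}
slack = {"op1": 0.0, "op2": 1.0, "op3": 5.0, "op4": 2.0}

def affected_by(delayed_op, delay, successors, slack):
    """Return the set of operations affected by delaying `delayed_op`."""
    affected = set()
    stack = [(delayed_op, delay)]
    while stack:
        op, residual = stack.pop()
        if residual <= 0:
            continue  # slack has absorbed the delay along this path
        affected.add(op)
        for nxt in successors[op]:
            stack.append((nxt, residual - slack[nxt]))
    return affected

result = affected_by("op1", 3.0, successors, slack)
```

Here a 3-minute delay to op1 reaches op2 (slack 1.0) but is fully absorbed before op3 and op4; precomputing such sets for each operation gives the delay-impact information without re-evaluating the graph.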
Abstract:
The internationalization of construction companies has become of significant interest as the global construction market continues to be integrated into a more competitive and turbulent business environment. However, due to the complicated and multifaceted nature of international business and performance, there is as yet no consensus on how to evaluate the performance of international construction firms (ICFs). The purpose of this paper, therefore, is to develop a practical framework for measuring the performance of ICFs. Based on the balanced scorecard (BSC), a framework with detailed measures is developed, investigated, and tested using a three-step research design. In the first step, 27 measures under six dimensions (financial, market, customer, internal business processes, stakeholders, and learning and growth) are determined by literature review, interviews with academics, and seminar discussions. Subsequently, a questionnaire survey is conducted to investigate weights of these 27 performance measures. The questionnaire survey also supports the importance of measuring intangible aspects of international construction performance from the practitioner’s viewpoint. Additionally, a case study is described to test the framework’s robustness and usefulness. This is achieved by benchmarking the performance of a Chinese ICF with nine other counterparts worldwide. It is found that the framework provides an effective basis for benchmarking ICFs to effectively monitor their performance and support the development of strategies for improved competitiveness in the international arena. This paper is the first attempt to present a balanced and practically tested framework for evaluating the performance of ICFs. It contributes to the practice of performance measurement and related internationalization in the construction industry in general.
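Scoring a firm under such a framework reduces to a weighted aggregation of dimension scores. The six dimension names follow the paper, but the weights and example scores below are invented (the actual weights for the 27 measures came from the questionnaire survey):

```python
# Invented dimension weights (the survey-derived weights are not reproduced here).
weights = {
    "financial": 0.25, "market": 0.20, "customer": 0.15,
    "internal business processes": 0.15, "stakeholders": 0.10,
    "learning and growth": 0.15,
}

# Invented dimension scores for one firm, on a 0-100 scale.
scores = {
    "financial": 70, "market": 55, "customer": 80,
    "internal business processes": 60, "stakeholders": 75,
    "learning and growth": 65,
}

def overall_performance(weights, scores):
    """Weighted aggregate score; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

score = overall_performance(weights, scores)
```

Computing this for several firms on the same weight set is what makes cross-firm benchmarking, as in the Chinese ICF case study, directly comparable.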
Abstract:
Cross-Lingual Link Discovery (CLLD) is a new problem in Information Retrieval. The aim is to automatically identify meaningful and relevant hypertext links between documents in different languages. This is particularly helpful in knowledge discovery if a multi-lingual knowledge base is sparse in one language or another, or the topical coverage in each language is different; such is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR, where the CrossLink task has been running since NTCIR-9 in 2011. This paper presents the evaluation framework for benchmarking cross-lingual link discovery algorithms in the context of NTCIR-9. This framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment, and evaluation. The assessments are further divided into two separate sets: manual assessments performed by human assessors, and automatic assessments based on links extracted from Wikipedia itself. Using this framework we show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
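Evaluating a set of discovered links against an assessment set comes down to set-overlap metrics such as precision and recall. The link pairs below are invented examples, not CrossLink topics:

```python
def precision_recall(found, relevant):
    """Precision and recall of discovered links against an assessment set."""
    found, relevant = set(found), set(relevant)
    tp = len(found & relevant)
    precision = tp / len(found) if found else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical (source anchor, target article) pairs.
found = [("en:Graph", "zh:Tu"), ("en:Graph", "zh:Shujuku"), ("en:OLAP", "zh:OLAP")]
manual = [("en:Graph", "zh:Tu"), ("en:OLAP", "zh:OLAP"), ("en:NoSQL", "zh:NoSQL")]

p, r = precision_recall(found, manual)
```

Scoring the same run against the manual set and against the Wikipedia-derived automatic set is exactly the comparison that reveals which assessment method is the more robust ground truth.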