Abstract:
Consumer personal information is now a valuable commodity for most corporations. Concomitant with this increased value is the expansion of new legal obligations to protect personal information. Mandatory data breach notification laws are an important new development in this regard. Such laws require a corporation that has suffered a data breach involving personal information, such as a computer hacking incident, to notify those persons who may have been affected by the breach. Regulators may also need to be notified. Australia currently does not have a mandatory data breach notification law, but this may be about to change. The Australian Law Reform Commission has suggested that a data breach notification scheme be implemented through the Privacy Act 1988 (Cth). However, the notification of data breaches may already be required under the continuous disclosure regime stipulated by the Corporations Act 2001 (Cth) and the Australian Stock Exchange (ASX) Listing Rules. Accordingly, this article examines whether the notification of data breaches is a statutory requirement of the existing continuous disclosure regime and whether the ASX should therefore be notified of such incidents.
Abstract:
Queensland University of Technology (QUT) completed an Australian National Data Service (ANDS) funded “Seeding the Commons Project” to contribute metadata to Research Data Australia. The project employed two Research Data Librarians from October 2009 through to July 2010. Technical support for the project was provided by QUT’s High Performance Computing and Research Support Specialists.

The project identified and described QUT’s category 1 (ARC/NHMRC) research datasets. Metadata for the research datasets was stored in QUT’s Research Data Repository (Arcitecta Mediaflux). Metadata suitable for inclusion in Research Data Australia was made available to the Australian Research Data Commons (ARDC) in RIF-CS format.

Several workflows and processes were developed during the project. 195 data interviews took place in connection with 424 separate research activities, resulting in the identification of 492 datasets.

The project had a high level of technical support from the QUT High Performance Computing and Research Support Specialists, who developed the Research Data Librarian interface to the data repository, enabling manual entry of interview data and dataset metadata and the creation of relationships between repository objects. The Research Data Librarians mapped the QUT metadata repository fields to RIF-CS, and an application was created by the HPC and Research Support Specialists to generate RIF-CS files for harvest by the Australian Research Data Commons (ARDC).

This poster will focus on the workflows and processes established for the project, including:

• Interview processes and instruments
• Data ingest from existing systems (including mapping to RIF-CS)
• Data entry and the Data Librarian interface to Mediaflux
• Verification processes
• Mapping and creation of RIF-CS for the ARDC
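As a rough illustration of the RIF-CS generation step described above, the sketch below builds a minimal RIF-CS collection record from a Python dictionary. The field names, registry key and group value are invented for illustration and do not reflect QUT's actual Mediaflux schema or the application the HPC team built.

```python
# Minimal sketch of mapping repository metadata to a RIF-CS
# collection record. All field names and keys are hypothetical.
import xml.etree.ElementTree as ET

RIFCS_NS = "http://ands.org.au/standards/rif-cs/registryObjects"

def dataset_to_rifcs(record: dict) -> str:
    """Render one dataset record as RIF-CS registryObjects XML."""
    root = ET.Element("registryObjects", xmlns=RIFCS_NS)
    obj = ET.SubElement(root, "registryObject", group=record["group"])
    ET.SubElement(obj, "key").text = record["key"]
    ET.SubElement(obj, "originatingSource").text = record["source"]
    coll = ET.SubElement(obj, "collection", type="dataset")
    name = ET.SubElement(coll, "name", type="primary")
    ET.SubElement(name, "namePart").text = record["title"]
    ET.SubElement(coll, "description", type="brief").text = record["abstract"]
    return ET.tostring(root, encoding="unicode")

example = {
    "group": "Example University",              # hypothetical
    "key": "example.edu.au/dataset/0001",       # hypothetical
    "source": "https://data.example.edu.au",    # hypothetical
    "title": "Sample research dataset",
    "abstract": "Brief description of the dataset.",
}
print(dataset_to_rifcs(example))
```

Records like this would then be exposed for harvest by the ARDC; the abstract does not say which harvest mechanism was used.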
Contextualizing the tensions and weaknesses of information privacy and data breach notification laws
Abstract:
Data breach notification laws have brought to light numerous failures relating to the protection of personal information that have blighted both corporate and governmental institutions. There are obvious parallels between data breach notification and information privacy law, as both involve the protection of personal information. However, a closer examination of both laws reveals conceptual differences that give rise to vertical tensions between the two laws and shared horizontal weaknesses within both. Tensions emanate from conflicting approaches to the implementation of information privacy law that result in different regimes and different types of protections. Shared weaknesses arise from an overt focus on specified types of personal information, which results in ‘one size fits all’ legal remedies. The author contends that a more contextual approach, one which promotes the importance of social context, is required, and highlights the effect that contextualization could have on both laws.
Abstract:
Mandatory data breach notification has become a matter of increasing concern for law reformers. In Australia, this issue was recently addressed as part of a comprehensive review of privacy law conducted by the Australian Law Reform Commission (ALRC), which recommended a uniform national regime for protecting personal information applicable to both the public and private sectors. As in all federal systems, the distribution of powers between central and state governments poses problems for national consistency. In the authors’ view, a uniform approach to mandatory data breach notification has greater merit than the ‘jurisdiction specific’ approach epitomized by US state-based laws, which has given rise to unnecessary overlaps and inefficiencies, as a review of the different notification triggers and encryption safe harbors demonstrates. The authors conclude that a uniform approach to data breach notification is inherently more efficient.
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account when a term might be explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
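The abstract does not describe the authors' models, but the idea of making an indexer negation-aware can be sketched as follows: terms falling inside the scope of a negation cue are rewritten before indexing, so that "no chest pain" and "chest pain" produce different index terms. The cue list and fixed scope window here are illustrative assumptions, not the paper's method.

```python
# Sketch: mark terms inside a negation scope so an indexer can
# treat "no chest pain" differently from "chest pain".
# The cue list and scope window are illustrative assumptions.
import re

NEGATION_CUES = {"no", "not", "without", "denies", "denied"}
SCOPE_WINDOW = 4  # number of tokens a cue is assumed to negate

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def mark_negated(tokens):
    """Prefix tokens inside a negation scope with 'NOT_'."""
    out, scope = [], 0
    for tok in tokens:
        if tok in NEGATION_CUES:
            scope = SCOPE_WINDOW
            out.append(tok)
        elif scope > 0:
            out.append("NOT_" + tok)
            scope -= 1
        else:
            out.append(tok)
    return out

print(mark_negated(tokenize(
    "Review of systems showed no chest pain or shortness of breath")))
```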
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; the sole reliance on these procedures has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
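Breiman's argument can be made concrete with a small comparison: fit a classical statistical model and an algorithmic data mining model to the same task and compare cross-validated accuracy. The sketch below uses scikit-learn and a synthetic dataset; both are illustrative choices, as the abstract names no specific software or data.

```python
# Sketch contrasting a classical statistical model with a data
# mining algorithm on the same task, in the spirit of Breiman's
# "two cultures" argument. scikit-learn is an illustrative choice.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```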
Abstract:
The acoustic emission (AE) technique is one of the popular diagnostic techniques used for structural health monitoring of mechanical, aerospace and civil structures, but several challenges still exist in its successful application. This paper explores various tools for analysing recorded AE data to address two primary challenges: discriminating spurious signals from genuine signals, and devising ways to quantify damage levels.
Abstract:
Cell invasion involves a population of cells which are motile and proliferative. Traditional discrete models of proliferation involve agents depositing daughter agents on nearest-neighbor lattice sites. Motivated by time-lapse images of cell invasion, we propose and analyze two new discrete proliferation models in the context of an exclusion process with an undirected motility mechanism. These discrete models are related to a family of reaction-diffusion equations and can be used to make predictions over a range of scales appropriate for interpreting experimental data. The new proliferation mechanisms are biologically relevant and mathematically convenient as the continuum-discrete relationship is more robust for the new proliferation mechanisms relative to traditional approaches.
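For concreteness, here is a minimal sketch of the traditional mechanism the paper departs from: agents on a one-dimensional lattice attempt undirected moves to a neighbouring site and, with some probability, deposit a daughter agent there instead, with at most one agent per site (the exclusion property). The lattice size, rates and update schedule below are arbitrary illustrative choices, not the authors' new models.

```python
# Sketch of a 1D exclusion process with undirected motility and
# nearest-neighbour proliferation (the traditional mechanism).
# Lattice size, rates and update schedule are illustrative.
import random

N_SITES, SWEEPS = 200, 1000
P_PROLIF = 0.05                    # chance a chosen agent proliferates

lattice = [False] * N_SITES
for i in range(90, 110):           # initial occupied block
    lattice[i] = True

for _ in range(SWEEPS):
    agents = [i for i, occ in enumerate(lattice) if occ]
    random.shuffle(agents)         # random sequential update
    for i in agents:
        if not lattice[i]:
            continue               # site vacated earlier this sweep
        j = i + random.choice((-1, 1))
        if 0 <= j < N_SITES and not lattice[j]:   # exclusion
            if random.random() < P_PROLIF:
                lattice[j] = True                 # deposit daughter
            else:
                lattice[i], lattice[j] = False, True   # move

print(f"population after {SWEEPS} sweeps: {sum(lattice)}")
```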
Abstract:
Gen Y beginning teachers have an edge: they’ve grown up in an era of educational accountability, so when their students have to sit a high-stakes test, they can relate.
Abstract:
Acoustic emission (AE) is the phenomenon whereby high frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequently analysing the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity, but several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which can mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination; this also becomes important in long term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals, together with the use of pattern recognition algorithms, is among the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for the analysis of experimental data, with the overall aim of finding an improved method for source identification and discrimination, with particular focus on the monitoring of steel bridges.
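The frequency-content analysis mentioned above can be illustrated with a minimal sketch: extract a dominant-frequency feature from each burst and cluster the bursts into putative source classes. The synthetic decaying sinusoids, the sampling rate and the two-cluster choice are all assumptions standing in for real AE recordings.

```python
# Sketch: dominant-frequency features plus clustering for AE
# source discrimination. Synthetic bursts stand in for recorded
# signals; sampling rate and frequencies are assumptions.
import numpy as np
from sklearn.cluster import KMeans

FS = 1_000_000                       # 1 MHz sampling rate (assumed)
t = np.arange(1024) / FS
rng = np.random.default_rng(0)

def burst(freq_hz):
    """Synthetic decaying sinusoid standing in for an AE burst."""
    return np.sin(2 * np.pi * freq_hz * t) * np.exp(-2e4 * t)

def dominant_frequency(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    return freqs[np.argmax(spectrum)]

signals = [burst(100e3) + 0.1 * rng.standard_normal(t.size)
           for _ in range(20)]       # e.g. crack-like sources
signals += [burst(30e3) + 0.1 * rng.standard_normal(t.size)
            for _ in range(20)]      # e.g. spurious sources

features = np.array([[dominant_frequency(s)] for s in signals])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(labels)
```

In practice one would use several features (rise time, amplitude, energy, spectral shape) rather than a single dominant frequency, but the workflow is the same.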
Abstract:
Background: Efforts to prevent the development of overweight and obesity have increasingly focused early in the life course as we recognise that both metabolic and behavioural patterns are often established within the first few years of life. Randomised controlled trials (RCTs) of interventions are even more powerful when, with forethought, they are synthesised into an individual patient data (IPD) prospective meta-analysis (PMA). An IPD PMA is a unique research design where several trials are identified for inclusion in an analysis before any of the individual trial results become known and the data are provided for each randomised patient. This methodology minimises the publication and selection bias often associated with a retrospective meta-analysis by allowing hypotheses, analysis methods and selection criteria to be specified a priori. Methods/Design: The Early Prevention of Obesity in CHildren (EPOCH) Collaboration was formed in 2009. The main objective of the EPOCH Collaboration is to determine if early intervention for childhood obesity impacts on body mass index (BMI) z scores at age 18-24 months. Additional research questions will focus on whether early intervention has an impact on children’s dietary quality, TV viewing time, duration of breastfeeding and parenting styles. This protocol includes the hypotheses, inclusion criteria and outcome measures to be used in the IPD PMA. The sample size of the combined dataset at final outcome assessment (approximately 1800 infants) will allow greater precision when exploring differences in the effect of early intervention with respect to pre-specified participant- and intervention-level characteristics. Discussion: Finalisation of the data collection procedures and analysis plans will be complete by the end of 2010. Data collection and analysis will occur during 2011-2012 and results should be available by 2013. Trial registration number: ACTRN12610000789066
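The statistical core of an IPD PMA can be sketched as a one-stage pooled analysis: patient-level records from every trial are stacked and a common intervention effect is estimated with trial fixed effects. The synthetic data, effect sizes and use of statsmodels below are illustrative assumptions, not the EPOCH analysis plan.

```python
# Sketch of a one-stage IPD meta-analysis: pool patient-level data
# from several trials and estimate a common intervention effect on
# BMI z-score with trial fixed effects. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
rows = []
for trial in range(4):
    n = 450                        # ~1800 infants in total, as above
    treat = rng.integers(0, 2, n)
    bmi_z = 0.03 * trial - 0.15 * treat + rng.normal(0, 1, n)
    rows.append(np.column_stack([np.full(n, trial), treat, bmi_z]))

data = np.vstack(rows)
trial_dummies = (data[:, [0]] == np.arange(4)).astype(float)
X = sm.add_constant(np.column_stack([data[:, 1], trial_dummies[:, 1:]]))
fit = sm.OLS(data[:, 2], X).fit()
print(f"estimated intervention effect: {fit.params[1]:.3f}")
```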
Abstract:
In this issue Burns et al. report an estimate of the economic loss to Auckland City Hospital from cases of healthcare-associated bloodstream infection. They show that patients with infection stay longer in hospital and this must impose an opportunity cost because beds are blocked. Harder to measure costs fall on patients, their families and non-acute health services. Patients face some risk of dying from the infection.
Abstract:
Advances in data mining have provided techniques for automatically discovering underlying knowledge and extracting useful information from large volumes of data. Data mining offers tools for the quick discovery of relationships, patterns and knowledge in large complex databases. The application of data mining to manufacturing is relatively limited, mainly because of the complexity of manufacturing data. The growing self organizing map (GSOM) algorithm has proven to be efficient for analysing unsupervised DNA data; however, it produced unsatisfactory clustering when used on some large manufacturing data. In this paper a data mining methodology is proposed using a GSOM tool developed with a modified GSOM algorithm. The proposed method is used to generate clusters for good and faulty products from a manufacturing dataset. The clustering quality (CQ) measure proposed in the paper is used to evaluate the performance of the cluster maps. The paper also proposes an automatic identification of variables to find the most probable causative factor(s) that discriminate between good and faulty products by quickly examining the historical manufacturing data. The proposed method enables manufacturers to smooth the production flow and improve the quality of their products. Simulation results on small and large manufacturing data show the effectiveness of the proposed method.
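The GSOM variant and the CQ measure are specific to the paper, but the overall workflow (cluster product records, then ask which variables separate the good cluster from the faulty one) can be sketched as follows; plain k-means stands in for the modified GSOM, and the data are synthetic.

```python
# Sketch of the general workflow: cluster manufacturing records,
# then rank variables by how strongly they separate the clusters.
# k-means stands in for the paper's modified GSOM algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic data: 3 process variables; faulty products drift in var 1.
good = rng.normal([0, 0, 0], 1.0, size=(200, 3))
faulty = rng.normal([0, 3, 0], 1.0, size=(50, 3))
X = np.vstack([good, faulty])

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)

# Rank variables by between-cluster mean separation: a crude
# stand-in for the paper's automatic causative-variable search.
sep = np.abs(X[labels == 0].mean(axis=0) - X[labels == 1].mean(axis=0))
print("most discriminating variable:", int(np.argmax(sep)))
```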
Abstract:
OBJECTIVES: To compare three different methods of falls reporting and examine the characteristics of the data missing from the hospital incident reporting system. DESIGN: Fourteen-month prospective observational study nested within a randomized controlled trial. SETTING: Rehabilitation, stroke, medical, surgical, and orthopedic wards in Perth and Brisbane, Australia. PARTICIPANTS: Fallers (n=153) who were part of a larger trial (1,206 participants, mean age 75.1 ± 11.0). MEASUREMENTS: Three falls events reporting measures: participants’ self-report of fall events, fall events reported in participants’ case notes, and falls events reported through the hospital reporting systems. RESULTS: The three reporting systems identified 245 falls events in total. Participants’ case notes captured 226 (92.2%) falls events, hospital incident reporting systems captured 185 (75.5%) falls events, and participant self-report captured 147 (60.2%) falls events. Falls events were significantly less likely to be recorded in hospital reporting systems when a participant sustained a subsequent fall (P=.01) or when the fall occurred in the morning shift (P=.01) or afternoon shift (P=.01). CONCLUSION: Falls data missing from hospital incident report systems are not missing completely at random and therefore will introduce bias in some analyses if the factor investigated is related to whether the data is missing. Multimodal approaches to collecting falls data are preferable to relying on a single source alone.
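The study's central comparison can be sketched as a contingency-table analysis: tally which falls the incident system captured and test whether capture is independent of a fall characteristic such as the shift in which it occurred. The data below are synthetic, and the chi-square test is an illustrative choice of method.

```python
# Sketch: compare capture rates across reporting sources and test
# whether missingness in incident reports relates to fall timing.
# Data here are synthetic; only the structure mirrors the study.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n_falls = 245
morning = rng.random(n_falls) < 0.4          # fall in morning shift?
# Assume incident reports miss morning falls more often.
captured = np.where(morning,
                    rng.random(n_falls) < 0.6,
                    rng.random(n_falls) < 0.85)

table = np.array([
    [np.sum(morning & captured), np.sum(morning & ~captured)],
    [np.sum(~morning & captured), np.sum(~morning & ~captured)],
])
chi2, p, _, _ = chi2_contingency(table)
print(f"capture rate: {captured.mean():.2f}, chi2={chi2:.2f}, p={p:.3f}")
```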
Abstract:
Australia’s Arts and Entertainment Sector underpins cultural and social innovation, improves the quality of community life, is essential to maintaining our cities as world class attractors of talent and investment, and helps create ‘Brand Australia’ in the global marketplace of ideas (QUT Creative Industries Faculty 2010). The sector makes a significant contribution to the Australian economy. So what is the size and nature of this contribution? The Creative Industries Faculty at Queensland University of Technology recently conducted an exercise to source and present statistics in order to produce a data picture of Australia’s Arts and Entertainment Sector. The exercise involved gathering the latest statistics on broadcasting, new media, performing arts, and music composition, distribution and publishing as well as Australia’s performance in world markets.