940 results for Data Standards
Contextualizing the tensions and weaknesses of information privacy and data breach notification laws
Abstract:
Data breach notification laws have detailed numerous failures relating to the protection of personal information that have blighted both corporate and governmental institutions. There are obvious parallels between data breach notification and information privacy law as they both involve the protection of personal information. However, a closer examination of both laws reveals conceptual differences that give rise to vertical tensions between each law and shared horizontal weaknesses within both laws. Tensions emanate from conflicting approaches to the implementation of information privacy law that result in different regimes and different types of protections. Shared weaknesses arise from an overt focus on specified types of personal information, which results in ‘one size fits all’ legal remedies. The author contends that a greater contextual approach which promotes the importance of social context is required and highlights the effect that contextualization could have on both laws.
Abstract:
Mandatory data breach notification has become a matter of increasing concern for law reformers. In Australia, this issue was recently addressed as part of a comprehensive review of privacy law conducted by the Australian Law Reform Commission (ALRC) which recommended a uniform national regime for protecting personal information applicable to both the public and private sectors. As in all federal systems, the distribution of powers between central and state governments poses problems for national consistency. In the authors’ view, a uniform approach to mandatory data breach notification has greater merit than a ‘jurisdiction specific’ approach epitomized by US state-based laws. The US response has given rise to unnecessary overlaps and inefficiencies as demonstrated by a review of different notification triggers and encryption safe harbors. Reviewing the US response, the authors conclude that a uniform approach to data breach notification is inherently more efficient.
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account when a term might be explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
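The abstract does not describe the authors' model, but the kind of negation handling it studies can be sketched with a minimal NegEx-style scope rule. Everything below (the trigger list, the clause-boundary rule, the function name) is an illustrative assumption, not the paper's method:

```python
import re

# Minimal sketch: index terms that follow a negation trigger within the
# same clause are flagged as negated, so an IR system could exclude or
# down-weight them when indexing a clinical document.
NEGATION_TRIGGERS = re.compile(r"\b(no|not|denies|without|negative for)\b", re.I)
CLAUSE_BOUNDARY = re.compile(r"[.;:]")

def negated_terms(sentence, terms):
    """Return the subset of `terms` (lowercase) appearing after a
    negation trigger in `sentence`, limited to the same clause."""
    flagged = set()
    for clause in CLAUSE_BOUNDARY.split(sentence.lower()):
        m = NEGATION_TRIGGERS.search(clause)
        if not m:
            continue
        scope = clause[m.end():]          # everything after the trigger
        for term in terms:
            if term in scope:
                flagged.add(term)
    return flagged

print(negated_terms(
    "review of systems showed no chest pain or shortness of breath",
    {"chest pain", "shortness of breath", "fever"},
))
```

On the example sentence from the abstract, "chest pain" and "shortness of breath" are flagged while "fever" is not; a real system would need a richer trigger lexicon and scope termination rules.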
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We will outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and limitations and problems with these new algorithms. Organisational limitations and restrictions to these initiatives are also discussed.
Abstract:
The acoustic emission (AE) technique is one of the popular diagnostic techniques used for structural health monitoring of mechanical, aerospace and civil structures. However, several challenges still exist in the successful application of the AE technique. This paper explores various tools for analysis of recorded AE data to address two primary challenges: discriminating spurious signals from genuine signals and devising ways to quantify damage levels.
Abstract:
Increased industrialisation has brought to the forefront the susceptibility of concrete columns in both buildings and bridges to vehicle impacts. Accurate vulnerability assessments are crucial in the design process due to the possibly catastrophic nature of the failures such impacts can cause. This chapter reports on research undertaken to investigate the impact capacity of columns of low- to medium-rise buildings designed according to the Australian standards. Numerical simulation techniques were used, and validation was carried out using experimental results published in the literature. The investigation thus far has confirmed the vulnerability of typical columns in five-storey buildings located in urban areas to medium-velocity car impacts; hence these columns need to be re-designed or retrofitted. In addition, the accuracy of the simplified method presented in EN 1991-1-7 for quantifying impact damage was scrutinised. A simplified concept to assess the damage due to all collision modes was introduced. The research information will be extended to generate a common database to assess the vulnerability of columns in urban areas against the new generation of vehicles.
Abstract:
There is a strong quest in several countries, including Australia, for greater national consistency in education and intensifying interest in standards for reporting. Given this, it is important to make explicit the intended and unintended consequences of assessment reform strategies and the pressures to pervert and conform. In a policy context that values standardisation, the great danger is that technical, rationalist approaches that generalise assessment practices and render them superficial will emerge. In this article, the authors contend that the centrality and complexity of teacher judgement practice in such a policy context need to be understood. To this end, we discuss and analyse recorded talk in teacher moderation meetings showing the processes that teachers use as they work with stated standards to award grades (A to E). We show how they move to and fro between (1) supplied textual artefacts, including stated standards and samples of student responses, (2) tacit knowledge of different types, drawn into the moderation, and (3) social processes of dialogue and negotiation. While the stated standards play a part in judgement processes, in and of themselves they are shown to be insufficient to account for how the teachers ascribe value and award a grade to student work in moderation. At issue is the nature of judgement as cognitive and social practice in moderation and the legitimacy (or otherwise) of the mix of factors that shape how judgement occurs.
Abstract:
Cell invasion involves a population of cells which are motile and proliferative. Traditional discrete models of proliferation involve agents depositing daughter agents on nearest-neighbor lattice sites. Motivated by time-lapse images of cell invasion, we propose and analyze two new discrete proliferation models in the context of an exclusion process with an undirected motility mechanism. These discrete models are related to a family of reaction-diffusion equations and can be used to make predictions over a range of scales appropriate for interpreting experimental data. The new proliferation mechanisms are biologically relevant and mathematically convenient as the continuum-discrete relationship is more robust for the new proliferation mechanisms relative to traditional approaches.
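The general class of model described above can be sketched as a minimal 1D exclusion process: agents attempt unbiased nearest-neighbour moves or deposit a daughter on a nearest-neighbour site, and any event targeting an occupied site is aborted. All parameters, the 1D lattice, and the event ordering below are illustrative assumptions, not the paper's models:

```python
import random

# Minimal 1D exclusion-process sketch with undirected motility and
# nearest-neighbour proliferation on a periodic lattice of L sites.
def simulate(L=100, n0=10, steps=200, p_prolif=0.05, seed=0):
    rng = random.Random(seed)
    occupied = set(rng.sample(range(L), n0))   # initial agent positions
    for _ in range(steps):
        for site in list(occupied):            # snapshot of this step's agents
            if site not in occupied:           # agent already moved away
                continue
            target = (site + rng.choice((-1, 1))) % L
            if rng.random() < p_prolif:
                # proliferation: daughter on a neighbour, aborted if occupied
                if target not in occupied:
                    occupied.add(target)
            else:
                # motility: unbiased move, aborted if target occupied
                if target not in occupied:
                    occupied.discard(site)
                    occupied.add(target)
    return occupied

pop = simulate()
print(len(pop))   # population can only grow via successful proliferation events
```

Averaging occupancy over many realisations of such a model is what connects it to the continuum reaction-diffusion description mentioned in the abstract.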
Abstract:
Hazard perception in driving is one of the few driving-specific skills associated with crash involvement. However, this relationship has only been examined in studies where the majority of individuals were younger than 65. We present the first data revealing an association between hazard perception and self-reported crash involvement in drivers aged 65 and over. In a sample of 271 drivers, we found that individuals whose mean response time to traffic hazards was slower than 6.68 seconds (the ROC-curve derived pass mark for the test) were 2.32 times (95% CI 1.46, 3.22) more likely to have been involved in a self-reported crash within the previous five years than those with faster response times. This ratio became 2.37 (95% CI 1.49, 3.28) when driving exposure was controlled for. As a comparison, individuals who failed a test of useful field of view were 2.70 (95% CI 1.44, 4.44) times more likely to crash than those who passed. The hazard perception test and the useful field of view measure accounted for separate variance in crash involvement. These findings indicate that hazard perception testing and training could be potentially useful for road safety interventions for this age group.
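The "X times more likely (95% CI ...)" figures above are relative-risk-style estimates; the standard log-scale confidence interval for a relative risk from a 2x2 table can be computed as below. The counts in the example are made up for illustration and are not the study's data:

```python
import math

# Relative risk with a standard log-scale 95% confidence interval.
def relative_risk(a, b, c, d):
    """a/b: crashes / no-crashes among slow responders;
       c/d: crashes / no-crashes among fast responders."""
    p1 = a / (a + b)                   # risk in exposed group
    p2 = c / (c + d)                   # risk in comparison group
    rr = p1 / p2
    # standard error of log(RR)
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

rr, ci = relative_risk(30, 70, 20, 151)    # hypothetical counts
print(round(rr, 2), tuple(round(x, 2) for x in ci))
```

With equal risks in both groups the estimate is 1.0 and the interval straddles 1, which is the usual sanity check for this formula.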
Abstract:
This study investigated preservice teachers’ perceptions of teaching and sustaining gifted and talented students while developing, modifying and implementing activities to cater for the diverse learner. Participants were surveyed at the end of a gifted and talented education program on their perceptions of differentiating the curriculum to meet the needs of the student (n=22). SPSS data analysis with the five-part Likert scale indicated these preservice teachers agreed or strongly agreed they had developed skills in curriculum planning (91%) with well-designed activities (96%), and lesson preparation skills (96%). They also claimed they were enthusiastic about teaching (91%) and understood school practices and policies (96%). However, 46% agreed they had knowledge of syllabus documents, with 50% claiming an ability to provide written feedback on students’ learning. Furthermore, nearly two-thirds suggested they had acquired educational language from the syllabus and effective student management strategies. Preservice teachers require more direction on how to cater for diversity and begin creating sustainable societies by building knowledge from direct GAT experiences. Designing diagnostic surveys associated with university coursework can be used to determine further development needs for specific preservice teacher development in GAT education. Preservice teachers need to create opportunities for students to realise their potential by providing cognitive challenges through a differentiated curriculum. Differentiation requires modification of four primary areas of curriculum development (Maker, 1975): content (what we teach), process (how we teach), product (what we expect the students to do or show) and learning environment (where we teach/our class culture).
Ashman and Elkins (2009) and Glasson (2008) emphasise the need for preservice teachers, teachers and other professionals to be able to identify what gifted and talented (GAT) students know and how they learn in relation to effective teaching. Glasson (2008) recommends that educators keep up to date with practices in pedagogy, support, monitoring and profiling of GAT students to create an environment conducive to achieving. Oral feedback is one method of communicating to learners about their progress, but it has advantages and disadvantages for some students. Oral feedback provides immediate information to the student on progress and performance (Ashman & Elkins, 2009). However, preservice teachers must have a clear understanding of key concepts to assist the GAT student. Implementing teaching strategies to engage, innovate and extend students is valuable to the preservice teacher in focusing on GAT student learning in the classroom (Killen, 2007). Practical teaching strategies (Harris & Hemming, 2008; Tomlinson et al., 1994) facilitate diverse ways for assisting GAT students to achieve learning outcomes. Such strategies include activities to enhance creativity, co-operative learning and problem-solving activities (Chessman, 2005; NSW Department of Education and Training, 2004; Taylor & Milton, 2006) for GAT students to develop a sense of identity, belonging and self-esteem towards becoming autonomous learners. Preservice teachers need to understand that GAT students learn in a different way and therefore should be assessed differently. Assessment can be through diverse options that allow students to demonstrate their competence and their understanding of the material in a way that highlights their natural abilities (Glasson, 2008; Mack, 2008).
Preservice teachers are often unprepared to assess students’ understanding, but this may be overcome with teacher education training promoting effective communication and collaboration in the classroom, including the provision of a variety of assessment strategies to improve teaching and learning (Callahan et al., 2003; Tomlinson et al., 1994). It is also critical that preservice teachers have enthusiasm for teaching to demonstrate inclusion, involvement and the excitement to communicate to GAT students in the learning process (Baum, 2002). Evaluating and reflecting on teaching practices must be part of a preservice teacher’s repertoire for GAT education. Evaluating teaching practices can assist in further enhancing student learning (Mayer, 2008). Evaluation gauges the success or otherwise of specific activities and of teaching in general (Mayer, 2008), and ensures that preservice teachers and teachers are well prepared and maintain their commitment to their students and the community. Long and Harris (1999) advocate that reflective practices assist teachers in creating improvements in educational practices. Reflective practices help preservice teachers and teachers to improve their ability to pursue improved learning outcomes and professional growth (Long & Harris, 1999).
Context: This study is set at a small regional campus of a large university in Queensland. As a way to address departmental policies and the need to prepare preservice teachers for engaging a diverse range of learners (see Queensland College of Teachers, Professional Standards for Teachers, 2006), preservice teachers at this campus completed four elective units within their Bachelor of Education (primary) degree. The electives include: 1. Middle years students and schools; 2. Teaching strategies for engaging learners; 3. Teaching students with learning difficulties; and 4. Middle-years curriculum, pedagogy and assessment.
In the university-based component of this unit, preservice teachers engaged in learning about middle years students and schools, and gained knowledge of government policies pertaining to GAT students. Further explored within this unit was the importance of: collaboration between teachers, parents/carers and school personnel in supporting middle years GAT students; incorporating challenging learning experiences that promoted higher order thinking and problem solving skills; real world learning experiences for students; and the alignment and design of curriculum, pedagogy and assessment that is relevant to the students’ development, interests and needs. The participants were third-year Bachelor of Education (primary) preservice teachers who were completing an elective unit as part of the middle years of schooling learning with a focus on GAT students. They were each assigned one student from a local school. In the six subsequent ninety-minute weekly lessons, the preservice teachers were responsible for designing learning activities that would engage and extend the GAT students. Furthermore, preservice teachers made decisions about suitable pedagogical approaches and designed the assessment task to align with the curriculum and the developmental needs of their middle years GAT student. This research aims to describe preservice teachers’ perceptions of their education for teaching gifted and talented students.
Abstract:
Gen Y beginning teachers have an edge: they’ve grown up in an era of educational accountability, so when their students have to sit a high-stakes test, they can relate.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium; either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. 
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes and a CCD camera can be used to monitor the readout beam, and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to an application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible since the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
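The beam propagation method mentioned in the abstract can be illustrated with a minimal split-step sketch: the field advances through the crystal by alternating a free-space diffraction step (applied in the spectral domain) with a phase step from the local refractive-index change. The wavelength, background index, grid spacings, and uniform-medium example below are assumptions for illustration; this is not the thesis code:

```python
import numpy as np

# Minimal 1D-transverse split-step beam propagation sketch.
def bpm_propagate(field, dn, wavelength=633e-9, n0=2.2, dx=1e-6, dz=1e-6, steps=100):
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(field.size, d=dx)
    # paraxial free-space propagator for one dz step
    diffraction = np.exp(-1j * kx**2 * dz / (2 * k0 * n0))
    for _ in range(steps):
        field = np.fft.ifft(np.fft.fft(field) * diffraction)  # diffraction step
        field = field * np.exp(1j * k0 * dn * dz)             # index-perturbation phase step
    return field

x = np.arange(256) * 1e-6
beam = np.exp(-((x - x.mean()) / 20e-6) ** 2)   # Gaussian input beam
dn = np.zeros_like(x)                            # uniform medium in this example
out = bpm_propagate(beam.astype(complex), dn)
print(out.shape)
```

Because both the propagator and the phase factor have unit modulus, the total beam power is conserved, which is a useful numerical sanity check; a photorefractive simulation would additionally update `dn` from the local intensity at each step.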
Abstract:
Acoustic emission (AE) is the phenomenon where high frequency stress waves are generated by the rapid release of energy within a material by sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequent analysis of the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Some of its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. But several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which can then mask desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination. This also becomes important for long term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals together with the use of pattern recognition algorithms are some of the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for analysis of experimental data, with the overall aim of finding an improved method for source identification and discrimination, with particular focus on monitoring of steel bridges.
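Frequency-content analysis of the kind described above can be sketched as follows: a genuine crack-like burst is modelled as a decaying high-frequency sinusoid, and its dominant spectral peak serves as a simple discrimination feature. The synthetic signal, sampling rate, and burst parameters are assumptions for illustration, not the paper's data:

```python
import numpy as np

# Dominant frequency of a signal as a simple source-discrimination feature.
def dominant_frequency(signal, fs):
    """Return the peak frequency (Hz) of the one-sided amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    return freqs[np.argmax(spectrum)]

fs = 2_000_000                               # 2 MHz sampling (assumed)
t = np.arange(2048) / fs
# synthetic crack-like burst: exponentially decaying 150 kHz sinusoid
burst = np.exp(-t / 1e-4) * np.sin(2 * np.pi * 150e3 * t)
print(dominant_frequency(burst, fs))
```

A real analysis pipeline would compute several such features (peak frequency, rise time, energy, counts) per recorded hit and feed them to a pattern recognition algorithm for source classification.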
Abstract:
Background: Efforts to prevent the development of overweight and obesity have increasingly focused on the early life course, as we recognise that both metabolic and behavioural patterns are often established within the first few years of life. Randomised controlled trials (RCTs) of interventions are even more powerful when, with forethought, they are synthesised into an individual patient data (IPD) prospective meta-analysis (PMA). An IPD PMA is a unique research design where several trials are identified for inclusion in an analysis before any of the individual trial results become known and the data are provided for each randomised patient. This methodology minimises the publication and selection bias often associated with a retrospective meta-analysis by allowing hypotheses, analysis methods and selection criteria to be specified a priori. Methods/Design: The Early Prevention of Obesity in CHildren (EPOCH) Collaboration was formed in 2009. The main objective of the EPOCH Collaboration is to determine if early intervention for childhood obesity impacts on body mass index (BMI) z scores at age 18-24 months. Additional research questions will focus on whether early intervention has an impact on children’s dietary quality, TV viewing time, duration of breastfeeding and parenting styles. This protocol includes the hypotheses, inclusion criteria and outcome measures to be used in the IPD PMA. The sample size of the combined dataset at final outcome assessment (approximately 1800 infants) will allow greater precision when exploring differences in the effect of early intervention with respect to pre-specified participant- and intervention-level characteristics. Discussion: Finalisation of the data collection procedures and analysis plans will be complete by the end of 2010. Data collection and analysis will occur during 2011-2012 and results should be available by 2013. Trial registration number: ACTRN12610000789066
Abstract:
In this issue Burns et al. report an estimate of the economic loss to Auckland City Hospital from cases of healthcare-associated bloodstream infection. They show that patients with infection stay longer in hospital and this must impose an opportunity cost because beds are blocked. Harder to measure costs fall on patients, their families and non-acute health services. Patients face some risk of dying from the infection.