960 results for European copyright code
Abstract:
Australia should introduce a transformative use exception. Transformative use is an important part of the copyright balance: it provides a mechanism through which to balance the rights of past authors against the interests of future authors. In the interests of promoting creativity and innovation, the impact of copyright law on the ability of Australians to create new works should be minimised. The scope of a transformative use exception should be based primarily on demonstrable harm to the direct licensing interests of copyright owners – the core of copyright. Importantly, however, there are unresolved questions about fairness that need to be more clearly addressed before the appropriate scope of a transformative use exception can be determined. This submission does not directly address the desirability of introducing a broader fair use right. It is likely that an open ended fair use exception is required to provide a more adequate balance between copyright owners and non-transformative users of copyright. If a broad fair use style exception is introduced, it would likely be desirable to include transformative uses within that exception. This submission, however, takes the more limited position that regardless of whether a fair use exception is introduced, an exception that permits unlicensed transformative uses is required in Australian copyright law.
Abstract:
The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear algebra centred algorithms from R to C++ becomes straightforward. The converted algorithms retain their overall structure and readability, all while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
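The Kalman filter benchmarked above is a linear-algebra-centred algorithm of exactly the kind the Rcpp/Armadillo conversion targets. A minimal sketch of such a filter, written in Python/NumPy purely for illustration — the matrices, dimensions and noise levels are hypothetical, not those of the paper's benchmark:

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Run a linear Kalman filter over a sequence of measurements zs.

    Illustrative only: the paper benchmarks an R implementation of this
    kind of algorithm against a C++/Armadillo port made via Rcpp.
    """
    x, P = x0, P0
    estimates = []
    for z in zs:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

In the paper's setting each matrix product above would map to an Armadillo expression, and Armadillo's expression templates can pool chains such as `F @ P @ F.T + Q` into fewer passes over the data — the "automatic pooling" the abstract refers to.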
Abstract:
Background & aims: One aim of the Australasian Nutrition Care Day Survey was to determine the nutritional status and dietary intake of acute care hospital patients. Methods: Dietitians from 56 hospitals in Australia and New Zealand completed a 24-h survey of nutritional status and dietary intake of adult hospitalised patients. Nutritional risk was evaluated using the Malnutrition Screening Tool. Participants ‘at risk’ underwent nutritional assessment using Subjective Global Assessment. Based on the International Classification of Diseases (Australian modification), participants were also deemed malnourished if their body mass index was <18.5 kg/m2. Dietitians recorded participants’ dietary intake at each main meal and snacks as 0%, 25%, 50%, 75%, or 100% of that offered. Results: 3122 patients (mean age: 64.6 ± 18 years) participated in the study. Forty-one percent of the participants were “at risk” of malnutrition. Overall malnutrition prevalence was 32%. Fifty-five percent of malnourished participants and 35% of well-nourished participants consumed ≤50% of the food during the 24-h audit. “Not hungry” was the most common reason for not consuming everything offered during the audit. Conclusion: Malnutrition and sub-optimal food intake are prevalent in acute care patients across hospitals in Australia and New Zealand and warrant appropriate interventions.
Abstract:
Many of the more extreme bushfire-prone landscapes in Australia are located in colder climate regions. For such sites, the National Construction Code requires that houses both satisfy the Australian Standard for Bushfire (AS 3959:2009) and achieve a 6 Star energy rating. When combined, these requirements present a considerable challenge to the construction of affordable housing - a problem which is often exacerbated by the complex topography of bushfire-prone landscapes. Dr Weir presents a series of case studies from his architectural practice which highlight the need for further design-led research into affordable housing - a ground-up holistic approach to design which reconciles energy performance, human behaviour, bushland conservation and bushfire safety.
Abstract:
On 24 March 2011, Attorney-General Robert McClelland referred the National Classification Scheme to the ALRC and asked it to conduct widespread public consultation across the community and industry. The review considered issues including: existing Commonwealth, State and Territory classification laws; the current classification categories contained in the Classification Act, Code and Guidelines; the rapid pace of technological change; the need to improve classification information available to the community; the effect of media on children; and the desirability of a strong content and distribution industry in Australia. During the inquiry, the ALRC conducted face-to-face consultations with stakeholders, hosted two online discussion forums, and commissioned pilot community and reference group forums into community attitudes to higher level media content. The ALRC published two consultation documents—an Issues Paper and a Discussion Paper—and invited submissions from the public. The Final Report was tabled in Parliament on 1 March 2012. Recommendations: The report makes 57 recommendations for reform.
The net effect of the recommendations would be the establishment of a new National Classification Scheme that: applies consistent rules to content that are sufficiently flexible to be adaptive to technological change; places a regulatory focus on restricting access to adult content, helping to promote cyber-safety and protect children from inappropriate content across media platforms; retains the Classification Board as an independent classification decision maker with an essential role in setting benchmarks; promotes industry co-regulation, encouraging greater industry content classification, with government regulation more directly focused on content of higher community concern; provides for pragmatic regulatory oversight, to meet community expectations and safeguard community standards; reduces the overall regulatory burden on media content industries while ensuring that content obligations are focused on what Australians most expect to be classified; and harmonises classification laws across Australia, for the benefit of consumers and content providers.
Abstract:
We review the theory of intellectual property (IP) in the creative industries (CI) from the evolutionary economic perspective based on evidence from China. We argue that many current confusions and dysfunctions about IP can be traced to three widely overlooked aspects of the growth of knowledge context of IP in the CI: (1) the effect of globalization; (2) the dominating relative economic value of reuse of creative output over monopoly incentives to create input; and (3) the evolution of business models in response to institutional change. We conclude that a substantial weakening of copyright will, in theory, produce positive net public and private gain due to the evolutionary dynamics of all three dimensions.
Abstract:
Olivier Corten’s The Law Against War is a comprehensive, meticulously-researched study of contemporary international law governing the use of armed force in international relations. As a translated and updated version of a 2008 book published in French, it offers valuable insights into the positivist methodology that underpins much of the European scholarship of international law. Corten undertakes a rigorous analysis of state practice from 1945 onwards, with a view to clarifying the current meaning and scope of international law’s prohibition on the use of force. His central argument is that the majority of states remain attached to a strict interpretation of this rule. For Corten, state practice indicates that the doctrines of anticipatory self-defence, pre-emptive force and humanitarian intervention have no basis in contemporary international law. His overall position accords with a traditional, restrictive view of the circumstances in which states are permitted to use force...
Abstract:
Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
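As a reading aid, the six measures named above can be pictured as a per-question profile. The 0-2 scale and the unweighted total in the sketch below are illustrative assumptions, not the paper's actual classification scheme:

```python
# The six complexity measures named in the abstract. Scoring each on a
# hypothetical 0-2 scale and summing is an illustrative assumption only;
# the paper's scheme assesses (not necessarily sums) these measures.
MEASURES = [
    "external_domain_references",
    "explicitness",
    "linguistic_complexity",
    "conceptual_complexity",
    "code_length",
    "intellectual_complexity",  # Bloom level
]

def difficulty_profile(scores):
    """Return the per-measure scores plus a simple unweighted total."""
    assert set(scores) == set(MEASURES), "one score per measure required"
    return {**scores, "total": sum(scores.values())}
```

A profile like this makes the paper's finding concrete: two exams can reach similar totals through very different mixes of the six measures.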
A qualitative think aloud study of the early Neo-Piagetian stages of reasoning in novice programmers
Abstract:
Recent research indicates that some of the difficulties faced by novice programmers are manifested very early in their learning. In this paper, we present data from think aloud studies that demonstrate the nature of those difficulties. In the think alouds, novices were required to complete short programming tasks which involved either hand executing ("tracing") a short piece of code, or writing a single sentence describing the purpose of the code. We interpret our think aloud data within a neo-Piagetian framework, demonstrating that some novices reason at the sensorimotor and preoperational stages, not at the higher concrete operational stage at which most instruction is implicitly targeted.
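A hypothetical example of the kind of short tracing ("hand execution") task described above — the variable names and values are illustrative, not taken from the study's instruments:

```python
# Novices are asked to hand-execute code like this and state the final
# value of total, which requires tracking which iterations pass the test.
def trace_task():
    total = 0
    for i in [2, 4, 6]:
        if i > 3:
            total += i
    return total
```

A correct trace gives 10 (only 4 and 6 pass the guard). In the neo-Piagetian framing, reliably producing such a trace is characteristic of concrete operational reasoning, the stage the abstract notes instruction implicitly targets.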
Abstract:
The project examined the responsiveness of the telenursing service provided by the Child Health Line (hereinafter referred to as CHL). It aimed to provide an account of population usage of the service, the call request types and the response of the service to the calls. In so doing, the project extends the current body of knowledge pertaining to the provision of parenting support through telenursing. Approximately 900 calls to the CHL were audio-recorded over the December 2005-2006 Christmas-New Year period. A protocol was developed to code characteristics of the call, the interactional features between the caller and nurse call-taker, and the extent to which there was (a) agreement on problem definition and the plan of action and (b) interactional alignment between nurse and caller. A quantitative analysis examined the frequencies of the main topics covered in calls to the CHL and any statistical associations between types of calls, length of calls and nurse-caller alignment. In addition, a detailed qualitative analysis was conducted on a subset of calls dealing with the nurse management of calls seeking medical advice and information.

Key findings include:
• Overall, 74% of the calls discussed parenting and child development issues, 48% discussed health/medical issues, and 16% were information-seeking calls. More specifically:
  o 21% discussed health/medical and parenting and child development issues.
  o 3% discussed parenting and information-seeking issues.
  o 5% discussed health/medical, parenting/development and information issues.
  o 18% exclusively focussed on health and medical issues and were therefore outside the intended scope of the CHL. These calls caused interactional dilemmas for the nurse call-takers, who had to deal simultaneously with parental expectations for help and the CHL guidelines indicating that offering medical advice was outside the remit of the service.
• The most frequent reasons for calling were to discuss sleep, feeding, normative infant physical functions and parenting advice.
• The average length of calls to the CHL was 7 minutes.
• Longer calls were more likely to involve nurse call-takers giving advice on more than one topic, the caller displaying strong emotions, the caller not specifically providing the reason for the call, and the caller discussing parenting and developmental issues.
• Shorter calls were characterised by the nurse suggesting that the child receive immediate medical attention, the nurse emphasising the importance or urgency of the plan of action, the caller referring to or requesting confirmation of a diagnosis, and discussion of health and medical issues.
• The majority of calls, 92%, achieved parent-nurse alignment by the conclusion of the call; 8% did not. The non-aligned calls require further quantitative and qualitative investigation of their interactional features.

The findings are pertinent in the current context, where the Child Health Line now resides within 13HEALTH. They indicate:
1. A high demand for parenting advice.
2. Nurse call-takers have a high level of competency in dealing with calls about parenting and normal child development, which is the remit of the CHL.
3. Nurse call-takers and callers achieve a high degree of alignment when both parties agree on a course of action.
4. There is scope for developing professional practice in calls that present difficulties in terms of call content, interactional behaviour and call closure.

Recommendations of the project:
1. There are numerous opportunities for further research on interactional aspects of calls to the CHL, such as further investigation of the interactional features and their association with alignment and non-alignment. The rich and detailed insights into the patterns of nurse-parent interaction were afforded by the audio-recording and analysis of calls to the CHL.
2. The regular recording of calls would serve as a way of increasing understanding of the type and nature of calls received, and would provide a valuable training resource. Recording and analysing calls to the CHL provides insight into the operation of the service, including evidence about the effectiveness of triaging calls.
3. Training in both recognising and dealing with problem calls may be beneficial. For example, calls where the caller showed strong emotion, or appeared stressed, frustrated or troubled, were less likely to be rated as aligned. In calls where the callers described being ‘at their wits’ end’, or responded to each proposed suggestion with ‘I’ve tried that’, the callers were fairly resistant to advice-giving.
4. Training could focus on strategies for managing calls relating to parenting support and advice, and parental well-being, as the project found that these calls were more likely to be rated as non-aligned.
5. With the implementation of 13HEALTH, future research could compare nurse-parent interaction following the implementation of triaging. Of the calls, 21% discussed both medical and parenting topics, and 5.3% discussed medical, parenting and information topics. In addition, in 12% of calls there was ambiguity between the caller and nurse call-taker as to whether the problem was medical or behavioural.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms.
Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
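The monitor-unit combination step described above amounts to a weighted sum of per-beam dose grids. A minimal sketch, assuming one per-MU dose grid per beam — the function name and array shapes are illustrative, not MCDTK's actual API:

```python
import numpy as np

def combine_beam_doses(doses_per_mu, monitor_units):
    """Weight each beam's per-MU 3-D dose grid by its planned monitor
    units and sum into a single dose distribution.

    Sketch only: shapes and naming are assumptions for illustration.
    """
    total = np.zeros_like(doses_per_mu[0])
    for dose, mu in zip(doses_per_mu, monitor_units):
        total += dose * mu   # linear scaling of dose with monitor units
    return total
```

The resulting grid is the 3D dose distribution that can then be compared voxel-by-voxel against the treatment planning system's calculation.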
Abstract:
This paper reports on a four year Australian Research Council funded Linkage Project titled Skilling Indigenous Queensland, conducted in regional areas of Queensland, Australia from 2009 to 2013. The project sought to investigate vocational education, training (VET) and teaching, Indigenous learners’ needs, employer culture and expectations, and community culture and expectations to identify best practice in numeracy teaching for Indigenous VET learners. Specifically, it focused on ways to enhance the teaching and learning of courses and the associated mathematics in such courses to benefit learners and increase their future opportunities of employment. To date, thirty-nine teachers/trainers/teacher aides and two hundred and thirty-one students have consented to participate in the project. Nine VET courses were nominated to be the focus of the study. This paper focuses on questionnaire and interview responses from four trainers, two teacher aides and six students. In recent years a considerable amount of funding has been allocated to increasing Indigenous Peoples’ participation in education and employment. This increased funding is predicated on the assumption that it will make a difference and contribute to closing the education gap between Indigenous and non-Indigenous Australians (Council of Australian Governments, 2009). The central tenet is that access to education for Indigenous People will create substantial social and economic benefits for regional and remote Indigenous People. The project’s aim is to address some of the issues associated with the gap. To achieve the aims, the project adopted a mixed methods design aimed at benefitting research participants and included: participatory collaborative action research (Kemmis & McTaggart, 1988) and community research (Smith, 1999).
Participatory collaborative action research is a “collective, self-reflective enquiry undertaken by participants in social situations in order to improve the rationality and justice of their own social and educational practices” (Kemmis et al., 1988, p. 5). Community research is described as an approach that “conveys a much more intimate, human and self-defined space” (p. 127). Community research relies on and validates the community’s own definitions. As the project is informed by the social at a community level, it is described as “community action research or emancipatory research” (Smith, 1999, p. 127). It seeks to demonstrate benefit to the community, making positive differences in the lives of Indigenous People and communities. The data collection techniques included survey questionnaires, video recording of teaching and learning processes, teacher reflective video analysis of teaching, observations, semi-structured interviews and student numeracy testing. As a result of these processes, the findings indicate that VET course teachers work hard to adopt contextualising strategies in their teaching; however, this process is not always straightforward because of perceptions of how mathematics has been taught and learned historically. Further, teachers, trainers and students have high expectations of one another, with a view to successful outcomes from the courses.
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
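The reinterpretation above rests on a basic identity: a linear SVM applied to the output of any fixed linear feature transform F is equivalent to a linear SVM with reweighted coefficients Fᵀw acting directly on the raw image. The paper's Kronecker-basis analysis is a specific, structured instance of this; the NumPy demonstration below uses arbitrary shapes purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 5))   # fixed linear "V1-like" feature bank (hypothetical)
w = rng.normal(size=8)        # linear SVM weights learned on the features
x = rng.normal(size=5)        # raw image, flattened

# Scoring in feature space equals scoring raw pixels with reweighted coefficients:
#   w . (F x) == (F^T w) . x
score_feat = w @ (F @ x)
score_raw = (F.T @ w) @ x
assert np.isclose(score_feat, score_raw)
```

Because the learned classifier collapses into the single vector Fᵀw, questions about which features are redundant reduce to questions about the column space of F, which is the source of the storage and computational savings the abstract claims.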
Abstract:
The selection of optimal camera configurations (camera locations, orientations etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks. Most of them, however, do not generalize well to large scale networks. To tackle this, we introduce a statistical formulation of the optimal selection of camera configurations as well as propose a Trans-Dimensional Simulated Annealing (TDSA) algorithm to effectively solve the problem. We compare our approach with a state-of-the-art method based on Binary Integer Programming (BIP) and show that our approach offers similar performance on small scale problems. However, we also demonstrate the capability of our approach in dealing with large scale problems and show that our approach produces better results than two alternative heuristics designed to deal with the scalability issue of BIP.
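To make the optimisation concrete, here is a minimal, fixed-dimensional simulated-annealing sketch on a toy camera-selection problem. The coverage sets, costs and cooling schedule are hypothetical; the paper's TDSA additionally uses trans-dimensional moves that change the number of cameras, which this sketch omits:

```python
import math
import random

def simulated_annealing(objective, init, propose, steps=2000, t0=1.0):
    """Generic simulated-annealing maximiser (minimal sketch only)."""
    state, best = init, init
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = propose(state)
        delta = objective(cand) - objective(state)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta >= 0 or random.random() < math.exp(delta / t):
            state = cand
        if objective(state) > objective(best):
            best = state
    return best

# Toy instance (hypothetical): choose a subset of 4 candidate cameras to
# cover 4 targets, trading coverage against the number of cameras used.
COVERAGE = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]

def coverage_objective(selection):
    if not any(selection):
        return 0.0
    covered = set().union(*(COVERAGE[i] for i, s in enumerate(selection) if s))
    return len(covered) - 0.5 * sum(selection)

def flip_one(selection):
    i = random.randrange(len(selection))
    return selection[:i] + (1 - selection[i],) + selection[i + 1:]

random.seed(0)
best = simulated_annealing(coverage_objective, (1, 1, 1, 1), flip_one)
```

On this instance the optimum is two opposite cameras (e.g. selecting cameras 0 and 2 covers all four targets at half the cost of using all four). A trans-dimensional variant would replace `flip_one` with proposals that also grow or shrink the candidate set itself.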
Abstract:
In this submission, we provide evidence for our view that copyright policy in the UK must encourage new digital business models which meet the changing needs of consumers and foster innovation in the UK both within, and beyond, the creative industries. We illustrate our arguments using evidence from the music industry. However, we believe that our key points on the relationship between the copyright system and innovative digital business models apply across the UK creative industries.