874 results for photography -- digital techniques
Abstract:
This paper describes a lead project currently underway through Australia's Sustainable Built Environment National Research Centre, evaluating the impacts, diffusion mechanisms and uptake of R&D in the Australian building and construction industry. Building on a retrospective analysis of R&D trends and industry outcomes, a future-focused industry roadmap will be developed to inform R&D policies more attuned to future industry needs and to improve investment effectiveness. In particular, this research will evaluate national R&D efforts to develop, test and implement advanced digital modelling technologies in the design/construction/asset management cycle. This research will build new understandings and knowledge relevant to R&D funding strategies, research team formation and management (with involvement from public and private sectors, and research and knowledge institutions), and the dissemination and uptake of outcomes. This is critical given the disaggregated nature of the industry, intense competition, limited R&D investment, and new challenges (e.g. digital modelling, integrated project delivery, and the demand for packaged services). The evaluation of leading Australian and international efforts to integrate advanced digital modelling technologies into the design/construction/asset management cycle will be undertaken as one of three case studies. Employing the recently released Australian Guidelines for Digital Modelling, developed with buildingSMART (the International Alliance for Interoperability) and the Australian Institute of Architects, technical and business benefits across the supply chain will be highlighted as drivers for more integrated R&D efforts.
Abstract:
Digital forensic examiners often need to identify the type of a file or file fragment based only on its content. Content-based file type identification schemes typically use a byte frequency distribution with statistical machine learning to classify file types. Most algorithms analyze the entire file content to obtain the byte frequency distribution, a technique that is inefficient and time-consuming. This paper proposes two techniques for reducing the classification time. The first selects a subset of features based on their frequency of occurrence. The second speeds up classification by sampling several blocks from the file. Experimental results demonstrate that up to a fifteen-fold reduction in analysis time can be achieved with limited impact on accuracy.
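As a rough illustration of the two speed-ups described above, the following Python sketch estimates a byte frequency distribution from a handful of sampled blocks rather than the whole file, and selects a feature subset by frequency of occurrence. The function names, block size and sample count are illustrative assumptions, not the paper's actual parameters or implementation.

```python
# A minimal sketch, assuming evenly spaced block sampling; parameter
# values here are illustrative, not taken from the paper.
import numpy as np

def sampled_byte_histogram(path, block_size=4096, n_blocks=8):
    """Estimate a file's byte frequency distribution from a few
    sampled blocks instead of reading the entire content."""
    with open(path, "rb") as f:
        f.seek(0, 2)                 # seek to end to find file size
        size = f.tell()
        counts = np.zeros(256, dtype=np.int64)
        # Evenly spaced block offsets across the file
        offsets = np.linspace(0, max(size - block_size, 0),
                              n_blocks, dtype=np.int64)
        for off in offsets:
            f.seek(int(off))
            data = f.read(block_size)
            counts += np.bincount(np.frombuffer(data, dtype=np.uint8),
                                  minlength=256)
    total = counts.sum()
    return counts / total if total else counts.astype(float)

def select_features(histograms, k=64):
    """Feature subset selection: keep the k byte values that occur
    most frequently across a training set of histograms."""
    mean_freq = np.mean(histograms, axis=0)
    return np.argsort(mean_freq)[::-1][:k]
```

The reduced histogram (restricted to the selected byte values) would then feed whatever statistical classifier is in use; the classifier itself is unchanged by either speed-up.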
Abstract:
This chapter explores the idea of virtual participation through the historical example of the republic of letters in early modern Europe (circa 1500-1800). By reflecting on the construction of virtuality in a historical context, and more specifically in a pre-digital environment, it calls attention to accusations of technological determinism in ongoing research concerning the affordances of the Internet and related media of communication. It argues that ‘the virtual’ is not synonymous with ‘the digital’ and suggests that, in order to articulate what is novel about modern technologies, we must first understand the social interactions underpinning the relationships which are facilitated through those technologies. By analysing the construction of virtuality in a pre-digital environment, this chapter thus offers a baseline from which scholars might consider what is different about the modes of interaction and communication being engaged in via modern media.
Abstract:
The Working in Australia's Digital Games Industry: A Consolidation Report is the outcome of a comprehensive study on the games industry in Australia by Dr Sandra Haukka from the ARC Centre of Excellence for Creative Industries and Innovation (CCI), based at Queensland University of Technology in Brisbane. The study responds to concerns that Australia's games industry would not reach its full potential due to a lack of local, highly skilled staff and a lack of appropriately trained graduates with the necessary knowledge and skills. This is the first of two reports produced with the support of the Games Developers' Association of Australia. Over the coming months, researchers will develop a future skills strategy report for the industry.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms which use examples of fault-prone and not-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which works best. Two machine learning algorithms are applied to the data, Naive Bayes and the Support Vector Machine, and the predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class and a classification is determined by the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into a 2D rank-sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
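To make the general workflow concrete, the following Python sketch runs feature selection before learning and then trains the two classifiers named above. It is a minimal sketch under stated assumptions: it uses scikit-learn and synthetic placeholder data in place of the NASA and Eclipse metrics, and the thesis's own feature selection methods and Rank Sum classifier are not reproduced here.

```python
# A minimal sketch of fault-proneness prediction, assuming module
# metrics are already loaded as X (one row of metrics per module)
# and y (1 = fault prone, 0 = not fault prone). Synthetic data
# stands in for the NASA / Eclipse data sets used in the thesis.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))      # placeholder software metrics
y = rng.integers(0, 2, size=500)    # placeholder fault labels

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf"))]:
    # Feature selection before learning, then the classifier itself
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=10),
                          clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```

In practice the evaluation would use metrics suited to imbalanced fault data rather than plain accuracy, and the predicted fault-prone modules would be the ones prioritised for testing effort.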
Abstract:
Many governments worldwide are attempting to increase accountability, transparency, and the quality of services by adopting information and communications technologies (ICTs) to modernize and change the way their administrations work. Meanwhile, e-government is becoming a significant decision-making and service tool at local, regional and national government levels. The vast majority of users of these government online services see significant benefits from being able to access services online. The rapid pace of technological development has created increasingly powerful ICTs that are capable of radically transforming public institutions and private organizations alike. These technologies have proven to be extraordinarily useful instruments in enabling governments to enhance the quality, speed of delivery and reliability of services to citizens and to business (VanderMeer & VanWinden, 2003). However, just because the technology is available does not mean it is accessible to all. The term digital divide has been used since the 1990s to describe patterns of unequal access to ICTs (primarily computers and the Internet) based on income, ethnicity, geography, age, and other factors. Over time it has evolved to more broadly define disparities in technology usage, resulting from a lack of access, skills, or interest in using technology. This article provides an overview of recent literature on e-government and the digital divide, and includes a discussion on the potential of e-government in addressing the digital divide.
Abstract:
New technologies have the potential both to expose children to and to protect them from television news footage likely to disturb or frighten. The advent of cheap, portable and widely available digital technology has vastly increased the possibility of violent news events being captured and potentially broadcast. This material has the potential to be particularly disturbing and harmful to young children. On the flip side, available digital technology could be used to build in protection for young viewers, especially when it comes to preserving scheduled television programming and guarding against violent content being broadcast during live crosses from known trouble spots. Based on interviews with news directors and parents, and a review of published material, two recommendations are put forward:
1. Digital television technology should be employed to prevent news events "overtaking" scheduled children's programming and to protect safe harbours placed in the classification zones to protect children.
2. Broadcasters should regain control of the images that go to air during "live" feeds from obviously volatile situations by building in short delays in G classification zones.