133 results for Open Source Software
Abstract:
Over the last two decades, the internet and e-commerce have reshaped the way we communicate, interact and transact. In the converged environment enabled by high-speed broadband, Web 2.0, social media, virtual worlds, user-generated content, cloud computing, VoIP, open source software and open content have rapidly become established features of our online experience. Business and government alike are increasingly using the internet as the preferred platform for delivering their goods and services and for engaging effectively with their clients. New ways of doing things online, and challenges to existing business, government and social activities, have tested current laws and often demand new policies and laws adapted to the new realities. The focus of this book is the regulation of social, cultural and commercial activity on the World Wide Web. It considers developments in the law that have been, and continue to be, brought about by the emergence of the internet and e-commerce. It analyses how the law is applied to define rights and obligations in relation to online infrastructure, content and practices.
Abstract:
The book examines the correlation between intellectual property law, notably copyright, on the one hand and social and economic development on the other. The main focus of the initial overview is on historical, legal, economic and cultural aspects. Building on that, the work investigates how intellectual property systems have to be designed in order to foster social and economic growth in developing countries, and puts forward theoretical and practical solutions that should be considered and implemented by policy makers, legal experts and the World Intellectual Property Organization (WIPO).
Abstract:
This paper presents a mapping and navigation system for a mobile robot which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally consistent pose graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest-path planning instead of following the nodes of the graph, as is done in most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose-graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
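The topometric representation described here can be illustrated with a minimal sketch: a pose graph whose nodes each carry a local 3D point cloud, from which 2D occupancy grids are generated for local planning. The class and method names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a topometric map: pose graph + per-node point clouds.
# Names and structure are assumptions for illustration only.
from dataclasses import dataclass, field
import numpy as np
import networkx as nx

@dataclass
class MapNode:
    pose: np.ndarray                                   # 4x4 homogeneous transform (world <- node)
    cloud: np.ndarray = field(default_factory=lambda: np.empty((0, 3)))  # local 3D points

class TopometricMap:
    def __init__(self):
        self.graph = nx.Graph()                        # globally consistent pose graph

    def add_node(self, node_id, pose, cloud):
        self.graph.add_node(node_id, data=MapNode(pose, cloud))

    def add_edge(self, a, b):
        pa = self.graph.nodes[a]["data"].pose[:3, 3]
        pb = self.graph.nodes[b]["data"].pose[:3, 3]
        self.graph.add_edge(a, b, weight=float(np.linalg.norm(pa - pb)))

    def local_metric_map(self, node_id, cell=0.1, radius=5.0):
        """Project a node's point cloud onto the ground plane to build a
        2D occupancy grid for locally optimal shortest-path planning."""
        node = self.graph.nodes[node_id]["data"]
        pts = (node.pose[:3, :3] @ node.cloud.T).T + node.pose[:3, 3]   # points in world frame
        size = int(2 * radius / cell)
        grid = np.zeros((size, size), dtype=bool)
        centre = node.pose[:2, 3]
        ij = np.floor((pts[:, :2] - centre + radius) / cell).astype(int)
        keep = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
        grid[ij[keep, 0], ij[keep, 1]] = True          # occupied cells
        return grid

    def topological_route(self, start, goal):
        """Shortest route over the pose graph; within each local grid the
        robot would then plan a metric path rather than follow node poses."""
        return nx.shortest_path(self.graph, start, goal, weight="weight")
```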
Abstract:
Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster presents key features of two of the data management services offered by research support staff at QUT Library. The first is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, currently under development, with the sole purpose of improving the discoverability of source code and software as QUT research outputs.

RESEARCH DATA FINDER: As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT's research administration system, ResearchMaster, and QUT's Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research datasets.

SOFTWARE AND CODE REGISTRY: The QUT Library software and code registry project stems from researchers' concerns regarding development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The software underpinning both registries is VIVO, open source software developed by Cornell University. The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations, and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers' code and software amongst QUT and the QUT research community; increasing the profile of QUT research outputs at a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.
Abstract:
Computer modelling has been used extensively in some processes in the sugar industry to achieve significant gains. This paper reviews the investigations carried out over approximately the last twenty-five years, including the successes but also the areas where problems and delays have been encountered. In that time the capabilities of both hardware and software have increased dramatically. For some processes, such as cane cleaning, cane billet preparation and sugar drying, the application of computer modelling to improved equipment design and operation has been quite limited. A particular problem has been the large number of particles and particle interactions in these applications, which, if modelled individually, is computationally very intensive. Despite the problems, some attempts have already been made and knowledge has been gained on tackling these issues. Even where detailed modelling falls short, a model can provide useful insights into the processes. Options for attacking these more intensive problems include commercial software packages, which are usually very robust and allow the addition of user-supplied subroutines to adapt the software to particular problems. Suppliers of such software usually charge a fee per CPU licence, which is often problematic for large problems that require many CPUs. Another option is open source software that has been developed with the capability to access large parallel resources. Such software has the added advantage of access to the full internal coding. This paper identifies and discusses in detail the software options with the potential to achieve improvements in the sugar industry.
Abstract:
This article considers the challenges posed to intellectual property law by the emerging field of bioinformatics. It examines the intellectual property strategies of established biotechnology companies, such as Celera Genomics, and of information technology firms entering the marketplace, such as IBM. First, the paper argues that copyright law is not irrelevant to biotechnology, as some commentators would suggest. It claims that the use of copyright law and contract law is fundamental to the protection of biomedical and genomic databases. Second, the article questions whether biotechnology companies are exclusively interested in patenting genes and genetic sequences. Recent evidence suggests that biotechnology companies and IT firms are patenting bioinformatics software and Internet business methods, as well as underlying instrumentation such as microarrays and gene chips. Finally, the paper evaluates what impact the privatisation of bioinformatics will have on public research and scientific communication. It raises important questions about integration, interoperability and the risks of monopoly, and considers whether open source software such as the Ensembl Project and peer-to-peer technology such as DSAS will be able to counter this trend of privatisation.
Abstract:
In his 1987 book, The Media Lab: Inventing the Future at MIT, Stewart Brand provides an insight into visions of the future of the media in the 1970s and 1980s. He notes that Nicholas Negroponte made a compelling case for the foundation of a media laboratory at MIT with diagrams detailing the convergence of three sectors of the media: the broadcast and motion picture industry; the print and publishing industry; and the computer industry. Stewart Brand commented: 'If Negroponte was right and communications technologies really are converging, you would look for signs that technological homogenisation was dissolving old boundaries out of existence, and you would expect an explosion of new media where those boundaries used to be'. Two decades later, technology developers, media analysts and lawyers have become excited about the latest phase of media convergence. In 2006, the faddish Time Magazine heralded the arrival of various Web 2.0 social networking services: 'You can learn more about how Americans live just by looking at the backgrounds of YouTube videos—those rumpled bedrooms and toy-strewn basement rec rooms—than you could from 1,000 hours of network television. And we didn't just watch, we also worked. Like crazy. We made Facebook profiles and Second Life avatars and reviewed books at Amazon and recorded podcasts. We blogged about our candidates losing and wrote songs about getting dumped. We camcordered bombing runs and built open-source software. America loves its solitary geniuses—its Einsteins, its Edisons, its Jobses—but those lonely dreamers may have to learn to play with others. Car companies are running open design contests. Reuters is carrying blog postings alongside its regular news feed. Microsoft is working overtime to fend off user-created Linux. We're looking at an explosion of productivity and innovation, and it's just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy.' The magazine announced that Time's Person of the Year was 'You', the everyman and everywoman consumer, 'for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game'. This review essay considers three recent books which have explored the legal dimensions of new media. In contrast to the unbridled exuberance of Time Magazine, this series of legal works displays an anxious trepidation about the legal ramifications associated with the rise of social networking services. In his tour de force, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet, Daniel Solove considers the implications of social networking services, such as Facebook and YouTube, for the legal protection of reputation under privacy law and defamation law. Andrew Kenyon's edited collection, TV Futures: Digital Television Policy in Australia, explores the intersection between media law and copyright law in the regulation of digital television and Internet videos. In The Future of the Internet and How to Stop It, Jonathan Zittrain explores the impact of 'generative' technologies and 'tethered appliances', considering everything from the Apple Mac and the iPhone to the One Laptop per Child programme.
Abstract:
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
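The Gaussian pseudolikelihood criterion mentioned above can be sketched directly from its definition: for each cluster it combines the log-determinant of the fitted working covariance with a quadratic form in the residuals, and the candidate working covariance model with the larger value is preferred. The function below is a minimal illustration under those assumptions; it is not the authors' software, and the residuals and fitted covariances are assumed to come from an existing GEE fit.

```python
# Minimal sketch of a Gaussian pseudolikelihood criterion for comparing
# working covariance models in a GEE analysis. Illustrative only; inputs
# are assumed to be produced by a separate GEE fitting step.
import numpy as np

def gaussian_pseudolikelihood(residuals, covariances):
    """residuals: list of (n_i,) arrays of y_i - mu_i per cluster
       covariances: list of (n_i, n_i) fitted working covariance matrices V_i
       Returns the criterion; larger values favour the candidate model."""
    total = 0.0
    for r, V in zip(residuals, covariances):
        _, logdet = np.linalg.slogdet(V)
        total += -0.5 * (logdet + r @ np.linalg.solve(V, r))
    return total

# Usage (hypothetical): evaluate the criterion under independence and
# exchangeable working covariance fits, then select the larger value.
```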
Abstract:
Many education providers grapple with "where" to host their online education community. While many have invested significant funding and time in developing a user-specific solution, others are using alternative open source software solutions that provide a just-in-time response. This research paper reports on the importance of engagement in an online community within an open source learning management system, presents the key aspects of the communication occurring, and challenges the notion that a user-specific solution is a necessary consideration.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms, which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, each module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms, Naive Bayes and the Support Vector Machine, are applied to the data, and the predictive results are compared to those of previous efforts; they are found to be superior on selected data sets and comparable on others. In addition, a new classification method, Rank Sum, is proposed, in which a ranking abstraction is laid over bin densities for each class and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
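The basic pipeline described in this abstract (module-level software metrics, feature selection, then Naive Bayes and SVM classifiers) can be sketched as follows. The file name, label column and choice of metrics are hypothetical placeholders; this is not the thesis implementation and omits the proposed Rank Sum method.

```python
# Illustrative sketch of fault-proneness prediction from software metrics:
# feature selection followed by Naive Bayes and SVM classifiers.
# Dataset path and column names are assumptions for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import classification_report

data = pd.read_csv("module_metrics.csv")        # hypothetical table of module metrics
X = data.drop(columns=["fault_prone"])          # e.g. LOC, cyclomatic complexity, Halstead measures
y = data["fault_prone"]                         # 1 = fault prone, 0 = not fault prone
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()), ("SVM", SVC(kernel="rbf"))]:
    # Scale metrics, keep the 10 most informative features, then classify.
    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), clf)
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```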
Abstract:
Software forms an important part of the interface between citizens and their government. An increasing number of government functions are being performed, controlled or delivered electronically. This software, like all language, is never value-neutral, but must, to some extent, reflect the values of the coder and the proprietor. The move that many governments are making towards e-governance, and the increasing reliance being placed upon software in government, necessitate a rethinking of the relationships of power and control that are embodied in software.
Abstract:
QUT Software Finder is a searchable repository of metadata describing software and source code created as a result of QUT research activities. It was launched in December 2013 (https://researchdatafinder.qut.edu.au/scf). The registry was designed to aid the discovery and visibility of QUT research outputs and to encourage sharing and re-use of code and software throughout the research community, both nationally and internationally. The repository platform used is VIVO, an open source product initially developed at Cornell University. QUT Software Finder records that describe software or code are connected to information about the researchers involved, the research groups, related publications and related projects. Links to where the software or code can be accessed are also provided, alongside licensing and re-use information.
Abstract:
Access All was a performance produced following a three-month mentorship in web-based performance that I was commissioned to conduct for the performance company Igneous. This live, triple-site performance event for three performers in three remote venues was specifically designed for presentation at Access Grid Nodes: conference rooms located around the globe, equipped with high-end, open source computer teleconferencing technology that allowed multiple nodes to cross-connect with each other. Whilst each room was set up somewhat differently, they all deployed the same basic infrastructure of multiple projectors, cameras and sound, as well as a reconfigurable floorspace. At that time these relatively formal setups imposed a clear series of limitations in terms of software capabilities and basic infrastructure, and so there was much interest in understanding how far their capabilities might be pushed.

Numerous performance experiments were undertaken between three Access Grid nodes at QUT Brisbane, VISLAB Sydney and the Manchester Supercomputing Centre, England, culminating in the public performance staged simultaneously between the sites, with local audiences at each venue and others online. Access All was devised in collaboration with the interdisciplinary performance company Bonemap, Kelli Dipple (Interarts curator, Tate Modern, London) and Mike Stubbs, British curator and Director of FACT (Liverpool).

This period of research and development was instigated and shaped by a public lecture I had earlier delivered in Sydney for the 'Global Access Grid Network, Super Computing Global Conference', entitled 'Performance Practice across Electronic Networks'. The findings of this work went on to inform numerous subsequent networked and performative works produced from 2002 onwards.
Abstract:
The SoundCipher software library provides an easy way to create music in the Processing development environment. With the SoundCipher library added to Processing, you can write software programs that make music to go along with your graphics, and you can add sounds to enhance your Processing animations or games. SoundCipher provides an easy interface for playing 'notes' on the JavaSound synthesizer, for playing back audio files, and for communicating via MIDI. It provides accurate scheduling and allows events to be organised in musical time, using beats and tempo. It uses a 'score' metaphor that allows the construction of simple or complex musical arrangements. SoundCipher is designed to facilitate the basics of algorithmic music and interactive sound design, as well as providing a platform for sophisticated computational music. It allows integration with the Minim library when more sophisticated audio and synthesis functionality is required, and with the oscP5 library for communicating via Open Sound Control.
Abstract:
An issue on generative music in Contemporary Music Review allows space to explore many of these controversies, and to explore the rich algorithmic scene in contemporary practice, as well as the diverse origins and manifestations of such a culture. A roster of interesting exponents from both academic and arts-practice backgrounds is involved, matching the broad spectrum of current work. Contributed articles range from generative algorithms in live systems (from live coding to interactive music systems to computer games), through algorithmic modelling of longer-term form and evolutionary algorithms, to interfaces between modalities and mediums in algorithmic choreography. A retrospective on the intensive experimentation in algorithmic music and sound synthesis at the Institute of Sonology in the 1960s and 70s creates a complementary strand, as does an open paper on the issues raised by open source, as opposed to proprietary, software and operating systems, with consequences for the creation and archiving of algorithmic work.