239 results for catch databases
Abstract:
A database will be protected under Australian law if it is a literary work; expressed in material form; meets the originality test; and has a relevant connection with Australia. Facts and data in themselves are not protected by copyright. However, a collection of data, a dataset, or a database may be protected by copyright if it is sufficiently original. Whether a work is sufficiently original to be protected by copyright depends on whether it has been produced by the application of independent intellectual effort by the author/s, which may involve the exercise of skill, judgement, or creativity in selecting, presenting, or arranging the information. This summary synthesises recent cases regarding originality in factual compilations.
Abstract:
Background: Emergency department (ED) crowding caused by access block is a growing public health issue and has been associated with impaired healthcare delivery, negative patient outcomes and increased staff workload. Aim: To investigate the impact of opening a new ED on patient and healthcare service outcomes. Methods: A 24-month time series analysis was employed using deterministically linked data from the ambulance service and three ED and hospital admission databases in Queensland, Australia. Results: The total volume of ED presentations increased by 18%, while the local population grew by 3%. Healthcare service and patient outcomes at the two pre-existing hospitals did not improve. These outcomes included ambulance offload time (Hospital A PRE: 10 min, POST: 10 min, P < 0.001; Hospital B PRE: 10 min, POST: 15 min, P < 0.001); ED length of stay (Hospital A PRE: 242 min, POST: 246 min, P < 0.001; Hospital B PRE: 182 min, POST: 210 min, P < 0.001); and access block (Hospital A PRE: 41%, POST: 46%, P < 0.001; Hospital B PRE: 23%, POST: 40%, P < 0.001). Time series modelling indicated that the effect was worst at the hospital furthest from the new ED. Conclusions: The additional ED within the region saw an increase in the total volume of presentations at a rate far greater than local population growth, suggesting that it either met a previously unmet need or shifted activity from one sector to another. Future studies should examine patients' reasons for presenting to a new or pre-existing ED. There is an inherent need to take a ‘whole of health service area’ approach to solving crowding issues.
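The abstract does not specify the form of the time series model. As a minimal, purely illustrative sketch (not the study's model or data), a before/after service change like this is often evaluated with a segmented (interrupted) time series regression; the monthly counts, variable names and intervention month below are hypothetical assumptions:

```python
# Minimal sketch of a segmented (interrupted) time-series regression for
# monthly ED presentation counts. Data and model form are hypothetical;
# they are NOT taken from the study summarised above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(24)                      # 24-month observation window
new_ed_open = (months >= 12).astype(int)    # indicator: new ED open from month 12

# Simulated monthly presentations with a level shift after the new ED opens
presentations = 3000 + 10 * months + 400 * new_ed_open + rng.normal(0, 50, 24)

df = pd.DataFrame({
    "month": months,
    "post": new_ed_open,
    "months_since_open": np.clip(months - 12, 0, None),
    "presentations": presentations,
})

# Level-and-trend-change (segmented regression) model
model = smf.ols("presentations ~ month + post + months_since_open", data=df).fit()
print(model.summary())
```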
Abstract:
Evaluates trends in the imagery built into GIS applications to supplement existing vector data on streets, boundaries, infrastructure and utilities. These include large-area digital orthophotos, Landsat and SPOT data. Future developments include pixel resolutions of 3 to 5 metres from satellites and 1 to 2 metres from aircraft. GPS and improved image analysis techniques will also help improve resolution and accuracy.
Abstract:
Presentation by Dr Amadeo Pugliese, QUT Business School, at the Managing Your Research Data seminar, 2012.
Abstract:
Objective: The study aimed to examine the difference in response rates between opt-out and opt-in participant recruitment in a population-based study of heavy-vehicle drivers involved in a police-attended crash. Methods: Two approaches to subject recruitment were implemented in two different states over a 14-week period, and response rates for the two approaches (opt-out versus opt-in recruitment) were compared. Results: Based on the eligible and contactable drivers, the response rates were 54% for the opt-out group and 16% for the opt-in group. Conclusions and Implications: The opt-in recruitment strategy (a consequence of one jurisdiction’s interpretation of the national Privacy Act at the time) resulted in an insufficient and potentially biased sample for the purposes of conducting research into risk factors for heavy-vehicle crashes. Australia’s national Privacy Act 1988 has a long history of inconsistent interpretation by state and territory government departments and ethical review committees. These inconsistencies can have profound effects on the validity of research, as shown by the significantly different response rates reported in this study. It is hoped that a more unified interpretation of the Privacy Act across the states and territories, as proposed under the soon-to-be-released Australian Privacy Principles, will reduce the recruitment challenges outlined in this study.
Abstract:
Japan's fishery harvest peaked in the late 1980s. To limit the race for fish, each fisherman could be provided with specific catch limits in the form of individual transferable quotas (ITQs). The market for ITQs would also help remove the most inefficient fishers. In this article we estimate the potential cost reduction associated with catch limits, and find that about 300 billion yen or about 3 billion dollars could be saved through the allocation and trading of individual-specific catch shares.
Abstract:
Aerial applications of granular insecticides are preferable because they can effectively penetrate vegetation, there is less drift, and no product is lost to evaporation. We aimed to 1) assess the field efficacy of VectoBac G in controlling Aedes vigilax (Skuse) in saltmarsh pools, 2) develop a stochastic-modeling procedure to monitor application quality, and 3) assess the distribution of VectoBac G after an aerial application. Because ground-based studies with Ae. vigilax immatures found that VectoBac G provided effective control below the recommended label rate of 7 kg/ha, we trialed a nominated aerial rate of 5 kg/ha as a case study. Our distribution pattern modeling method indicated that the variability in the number of VectoBac G particles captured in catch-trays was greater than expected for 5 kg/ha, and that the widely accepted contour mapping approach to visualizing the deposition pattern produced spurious results and was therefore not statistically appropriate. Based on the results of distribution pattern modeling, we calculated the catch-tray size required to analyze the distribution of aerially applied granular formulations. The minimum catch-tray size for products with large granules was 4 m² for Altosid pellets and 2 m² for VectoBac G. In contrast, the minimum catch-tray size for Altosid XRG, Aquabac G, and Altosand, which have smaller granules, was 1 m². Little gain in precision would be made by increasing the catch-tray size further once the increased workload and infrastructure are considered. Our improved methods for monitoring the distribution pattern of aerially applied granular insecticides can be adapted for use by both public health and agricultural contractors.
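The stochastic model itself is not given in the abstract. One simple way to reason about minimum catch-tray size, offered here only as an assumption-laden sketch rather than the authors' procedure, is to treat granule deposition as approximately Poisson, so the coefficient of variation of the count in a tray of area A is 1/sqrt(lambda * A) for mean density lambda; the densities and target precision below are hypothetical:

```python
# Illustrative only: assumes granule deposition is roughly Poisson, so the
# coefficient of variation (CV) of the count in a tray of area A (m^2) is
# 1 / sqrt(lambda * A), where lambda is the mean granule density (granules/m^2).
# The densities and target CV are hypothetical, not taken from the study.
def min_tray_area(granules_per_m2: float, target_cv: float) -> float:
    """Smallest tray area (m^2) giving a count CV at or below target_cv."""
    return 1.0 / (granules_per_m2 * target_cv ** 2)

# Hypothetical deposition densities for a coarse and a fine granular product
for label, density in [("large granules", 2.0), ("small granules", 10.0)]:
    area = min_tray_area(density, target_cv=0.35)
    print(f"{label}: ~{area:.1f} m^2 per catch-tray")
```

Under these assumed numbers, coarser products with fewer granules per square metre need larger trays before the count is precise enough, which is the qualitative pattern described above.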
Abstract:
From Queensland’s inception as a self-governing colony in December 1859, the issue of labour relations has preoccupied governments and shaped the experiences of its working men and women. However, despite the often turbulent nature of labour relations in Queensland, there had, prior to this book, been no attempt to provide an overview of the system as a whole. This important addition to Queensland’s sesquicentenary celebrations redresses this omission, looking at the diverse range of experiences that, together, made up a unique system of labour relations, including those of employers, women workers, indigenous workers, unions, the Queensland Industrial Relations Commission, labour law, industrial disputation, the workings of the health and safety system, and life in regional areas. It is argued that, overall, Queensland’s system of industrial regulation was central to its economic and social development. Despite past emphasis on the large-scale strikes that periodically racked the state, this book finds that consensus normally prevailed.
Abstract:
This article considers the risk of disclosure in linked databases when statistical analysis of micro-data is permitted. The risk of disclosure needs to be balanced against the utility of the linked data. The current work specifically considers the disclosure risks in permitting regression analysis to be performed on linked data. A new attack based on partitioning of the database is presented.
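The partition-based attack is not described in the abstract. The toy example below only illustrates the general idea that aggregate outputs released over subsets differing by one record can be differenced to disclose an individual's value; it is a generic differencing illustration, not the paper's construction, and the names and figures are invented:

```python
# Toy differencing illustration: releasing sums (or means) over two query sets
# that differ by a single record lets an attacker recover that record's value.
# Generic example of disclosure from partitioned/overlapping aggregate queries,
# NOT a reconstruction of the specific attack referred to in the abstract.
incomes = {"alice": 52_000, "bob": 61_000, "carol": 48_000, "dave": 75_000}

def released_sum(names):
    """Stand-in for an aggregate statistic the linked database is willing to release."""
    return sum(incomes[n] for n in names)

everyone = list(incomes)
all_but_dave = [n for n in everyone if n != "dave"]

# The attacker never sees individual rows, only the two released aggregates.
recovered = released_sum(everyone) - released_sum(all_but_dave)
print(recovered)  # 75000 -- dave's income is disclosed exactly
```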
Abstract:
The collection of basic environmental data by industry members was successful and offers a way of overcoming the problems associated with differences in scale between environmental and fisheries datasets. A simple method of collecting environmental data was developed that placed only a small time burden on skippers, yet has the potential to provide very useful information on the same scale as the catch and effort data recorded in the logbooks. The success of this trial was aided by fishers' natural interest in learning more about the environment in which they fish. The archival temperature-depth tags chosen proved robust, reliable and easy to use. While the use of large-scale environmental data may not yield significant improvements in stock assessments for most SESSF species, fine-scale data collected from selected vessels using methods developed during this project may, in the longer term, be useful for incorporation into CPUE standardisations...
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors including the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is misspecified, and the data set is very large. However, the standard errors from the different models differed, suggesting that the methods differ in statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data and the extreme temporal changes in certain aspects of individual vessels, the fleet, and the fleet dynamics.
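As a rough sketch of the kind of comparison described, and not the authors' actual specification or data, a CPUE standardisation might contrast a fixed-effects model with vessel entered as a covariate against a mixed model with a random vessel effect; the simulated data, covariate names and statsmodels calls below are illustrative assumptions only:

```python
# Illustrative comparison of a fixed-effects model and a mixed model for CPUE
# standardisation. The simulated data, covariates and model forms are
# hypothetical stand-ins, not the NPF models or data described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_vessels, n_years = 20, 10
vessel_effect = rng.normal(0, 0.3, n_vessels)   # latent fishing-power differences
year_effect = np.linspace(0.5, -0.5, n_years)   # declining abundance signal

rows = []
for v in range(n_vessels):
    for y in range(n_years):
        log_cpue = 2.0 + vessel_effect[v] + year_effect[y] + rng.normal(0, 0.2)
        rows.append({"vessel": f"V{v}", "year": y, "log_cpue": log_cpue})
df = pd.DataFrame(rows)

# Fixed-effects model: vessel entered as a categorical covariate
glm_fit = smf.ols("log_cpue ~ C(year) + C(vessel)", data=df).fit()

# Mixed model: random intercept per vessel
mixed_fit = smf.mixedlm("log_cpue ~ C(year)", data=df, groups=df["vessel"]).fit()

# The year coefficients act as the relative (standardised) abundance index
print(glm_fit.params.filter(like="C(year)"))
print(mixed_fit.params.filter(like="C(year)"))
```

With a large, balanced dataset such as this simulated one, the two sets of year coefficients are typically very close, echoing the abstract's point that the estimators agree while their standard errors (and hence efficiency) can differ.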
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated when estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the catch sorted. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted and species numbers recorded. Catch weights ranged from 71 to 445 kg; the number of fish species per trawl ranged from 60 to 138, and the number of invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample, or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weight) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and just under 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample, or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
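A minimal simulation in the same spirit, and nothing more than that (the species pool, abundance distribution and catch size are invented, not the trawl data above), can show how the percentage of species detected rises as a larger share of a skewed-abundance catch is sorted:

```python
# Minimal sketch in the spirit of the subsampling study: sort a simulated catch
# in ~10% increments and track what share of the species pool has been seen.
# Species abundances are hypothetical (heavy-tailed), not the trawl data above.
import numpy as np

rng = np.random.default_rng(2)
n_species = 100
# Skewed abundances: a few common species, many rare ones
abundances = np.sort(rng.pareto(1.0, n_species) + 1)[::-1]
probs = abundances / abundances.sum()

n_individuals = 20_000                       # individuals in the whole catch
catch = rng.choice(n_species, size=n_individuals, p=probs)
rng.shuffle(catch)                           # sorting order is random

n_steps = 10                                 # sort the catch in 10% increments
for k in range(1, n_steps + 1):
    sorted_part = catch[: k * n_individuals // n_steps]
    seen = np.unique(sorted_part).size
    print(f"{k * 10:3d}% sorted -> {100 * seen / n_species:.0f}% of species seen")
```

Because rare species dominate the species list, the detected fraction climbs quickly at first and then flattens, which is the qualitative pattern the abstract reports for small versus large subsamples.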
Abstract:
Due to the increasing speed of landscape change and the massive development of computer technologies, methods of representing heritage landscapes with digital tools have become a worldwide concern in conservation research. The aim of this paper is to demonstrate how an ‘interpretative model’ can be used for the contextual design of heritage landscape information systems. This approach is explored by building a geographic information system database for St Helena Island national park in Moreton Bay, South East Queensland, Australia. Stakeholders' interpretations of this landscape were collected through interviews and then used as a framework for designing the database. The resulting database is a digital inventory providing contextual descriptions of the historic infrastructure remnants on St Helena Island. It also reveals the priorities of different sites in terms of historical research, landscape restoration, and tourism development. Additionally, the database produces thematic maps of intangible heritage values, which can be used for landscape interpretation. This approach differs from existing methods because building a heritage information system is treated as an interpretative activity rather than a value-free replication of the physical environment. It also shows how a cultural landscape methodology can be used to create a flexible information system for heritage conservation. The conclusion is that an ‘interpretative model’ of database design facilitates a more explicit focus on information support and is a potentially effective approach to user-centred design of geographic information systems.