46 results for catch and effort data

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors, including the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For these data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models, and the random-effects models yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors differed between models, indicating that the methods differ in statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, both to make valid and efficient statistical inferences and to gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at values assumed from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet, and the fleet dynamics.
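
As an illustration of the kind of model comparison described above, the sketch below fits a generalised linear model and a generalised estimating equation to synthetic catch-rate data with Python's statsmodels. The column names (cpue, year, vessel, hull_len), the Gamma error family with a log link, and the exchangeable within-vessel correlation are illustrative assumptions, not the models actually fitted in the paper.

```python
# Sketch: standardising catch rates with a GLM and a GEE. Column names
# and the Gamma/log specification are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "year": rng.integers(1990, 2000, n).astype(str),
    "vessel": rng.integers(1, 40, n).astype(str),
    "hull_len": rng.normal(20, 3, n),
})
df["cpue"] = rng.gamma(shape=2.0, scale=5.0, size=n)  # synthetic catch rate

# GLM with a log link: the year effects give a relative abundance index.
glm = smf.glm("cpue ~ year + hull_len", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# GEE with an exchangeable correlation structure within vessels: point
# estimates stay consistent even if this structure is mis-specified, but
# the standard errors (statistical efficiency) can differ from the GLM's.
gee = smf.gee("cpue ~ year + hull_len", groups="vessel", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log()),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(glm.params.filter(like="year"))  # log-scale year effects (GLM)
print(gee.params.filter(like="year"))  # log-scale year effects (GEE)
```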

Relevance: 100.00%

Abstract:

A simple stochastic model of a fish population subject to natural and fishing mortalities is described. The fishing effort is assumed to vary over different periods but to be constant within each period. A maximum-likelihood approach is developed for estimating natural mortality (M) and the catchability coefficient (q) simultaneously from catch-and-effort data. If there is not enough contrast in the data to provide reliable estimates of both M and q, as is often the case in practice, the method can be used to obtain the best possible values of q for a range of possible values of M. These techniques are illustrated with tiger prawn (Penaeus semisulcatus) data from the Northern Prawn Fishery of Australia.
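
A minimal sketch of this maximum-likelihood approach follows, assuming a Baranov-type catch equation with lognormal observation error and synthetic data; the paper's exact population model and likelihood may differ. It also shows the fallback described above: profiling out the best q for a range of fixed values of M.

```python
# Sketch: joint maximum-likelihood estimation of natural mortality M and
# catchability q from catch-and-effort data. The Baranov catch equation
# and lognormal error are assumptions of this sketch; data are synthetic.
import numpy as np
from scipy.optimize import minimize

E = np.array([100., 100., 150., 150., 200., 200., 120., 120.])  # effort
C = np.array([800., 610., 700., 520., 560., 400., 230., 180.])  # catch

def predicted_catch(logN0, M, q):
    N = np.exp(logN0)
    pred = np.empty_like(E)
    for t, e in enumerate(E):
        Z = M + q * e                       # total mortality this period
        pred[t] = (q * e / Z) * N * (1.0 - np.exp(-Z))
        N *= np.exp(-Z)                     # survivors to the next period
    return pred

def negloglik(theta):
    logN0, logM, logq, logsigma = theta     # log scale keeps params positive
    pred = predicted_catch(logN0, np.exp(logM), np.exp(logq))
    resid = np.log(C) - np.log(pred)
    s2 = np.exp(2 * logsigma)
    return 0.5 * np.sum(resid**2 / s2 + np.log(2 * np.pi * s2))

fit = minimize(negloglik, x0=[np.log(5000.), np.log(0.2), np.log(0.005),
                              np.log(0.2)], method="Nelder-Mead")
logN0, logM, logq, _ = fit.x
print(f"M = {np.exp(logM):.3f}, q = {np.exp(logq):.5f}")

# With little contrast in the data, fix M at plausible values and profile
# the likelihood to get the best possible q for each M, as described.
for M_fixed in (0.1, 0.2, 0.3):
    prof = minimize(lambda th: negloglik([th[0], np.log(M_fixed), th[1], th[2]]),
                    x0=[np.log(5000.), np.log(0.005), np.log(0.2)],
                    method="Nelder-Mead")
    print(f"M fixed at {M_fixed}: q = {np.exp(prof.x[1]):.5f}")
```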

Relevance: 100.00%

Abstract:

The effects of fish density distribution and effort distribution on the overall catchability coefficient are examined. Emphasis is also placed on how aggregation and effort distribution interact to affect the overall catch rate [catch per unit effort (cpue)]. In particular, three indices are proposed, the catchability index, the knowledge parameter, and the aggregation index, to describe the effectiveness of targeting and its effects on overall catchability in the stock area. Analytical expressions are provided so that these indices can easily be calculated. The average of the cpue values calculated from small units within which fishing is random is a better index of stock abundance. The overall cpue, the ratio of lumped catch to lumped effort, can be used together with the average cpue to assess the effectiveness of targeting. The proposed methods are applied to commercial catch and effort data from the Australian northern prawn fishery. The indices are obtained assuming a power law for the effort distribution as an approximation of targeting during the fishing operation. Targeting increased catchability in some areas by 10%, which may have important implications for management advice.
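
To make the contrast between the overall cpue and the average cpue concrete, the sketch below computes both under a power-law effort allocation across small units (effort proportional to density raised to a knowledge parameter k). The density field, catchability and values of k are invented for illustration, and local depletion is ignored.

```python
# Sketch: overall cpue (lumped catch over lumped effort) versus the
# average of the unit-level cpue, under power-law targeting e_i ~ d_i**k.
import numpy as np

rng = np.random.default_rng(1)
d = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # prawn density by unit
q = 0.01                                           # unit-level catchability
E_total = 1000.0

def cpue_indices(k):
    e = d**k
    e = E_total * e / e.sum()        # power-law targeting of dense units
    catch = q * d * e                # unit-level catch (no local depletion)
    overall = catch.sum() / e.sum()  # lumped ratio: inflated by targeting
    average = np.mean(catch[e > 0] / e[e > 0])  # mean of unit-level cpue
    return overall, average

for k in (0.0, 0.5, 1.0):            # k = 0 is random effort; k > 0 targets
    overall, average = cpue_indices(k)
    print(f"k={k:.1f}  overall cpue={overall:.4f}  average cpue={average:.4f}"
          f"  targeting effect={overall / average:.2f}x")
```

The ratio of the two indices rises with k, which is exactly how the pair can be used to assess the effectiveness of targeting.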

Relevance: 100.00%

Abstract:

The collection of basic environmental data by industry members was successful and offers a way of overcoming the problems associated with differences in scale between environmental and fisheries datasets. A simple method of collecting environmental data was developed that placed only a small time burden on skippers, yet has the potential to provide very useful information on the same scale as the catch and effort data recorded in the logbooks. The success of this trial was aided by fishers' natural interest in learning more about the environment in which they fish. The archival temperature-depth tags chosen proved robust, reliable and easy to use. While the use of large-scale environmental data may not yield significant improvements in stock assessments for most SESSF species, fine-scale data collected from selected vessels using the methods developed during this project may, in the longer term, be useful for incorporation into CPUE standardisations...

Relevance: 100.00%

Abstract:

Deriving an estimate of optimal fishing effort, or even an approximate one, is very valuable for managing fisheries with multiple target species. The most challenging task associated with this is allocating effort to individual species when only the total effort is recorded. Spatial information on the distribution of each species within a fishery can be used to justify the allocations, but often such information is not available. To determine the long-term overall effort required to achieve maximum sustainable yield (MSY) and maximum economic yield (MEY), we consider three methods for allocating effort: (i) optimal allocation, which optimally allocates effort among target species; (ii) fixed proportions, which chooses proportions based on past catch data; and (iii) economic allocation, which splits effort based on the expected catch value of each species. Determining the overall fishing effort required to achieve these management objectives is a maximisation problem subject to constraints arising from economic and social considerations. We illustrated the approaches using a case study of the Moreton Bay Prawn Trawl Fishery in Queensland (Australia). The results were consistent across the three methods. Importantly, our analysis demonstrated that the optimal total effort was very sensitive to daily fishing costs: the effort ranged from 9 500-11 500 boat-days down to 6 000-7 000, 4 000, and 2 500 boat-days using daily cost estimates of $0, $500, $750, and $950, respectively. The zero daily cost corresponds to the MSY, while a daily cost of $750 most closely represents the actual present fishing cost. Given the recent debate on which costs should be factored into analyses for deriving MEY, our findings highlight the importance of including an appropriate cost function for practical management advice. The approaches developed here could be applied to other multispecies fisheries where only aggregated fishing effort data are recorded, as the literature on this type of modelling is sparse.
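
The sketch below illustrates the underlying maximisation for the fixed-proportions allocation (method ii), assuming Schaefer equilibrium yield curves for each species; the species parameters, prices, allocation shares and cost grid are all hypothetical, chosen only to show how the optimal total effort falls as the daily cost rises.

```python
# Sketch: total effort that maximises long-term profit (MEY) when effort
# is split among species in fixed proportions. Schaefer equilibrium
# yields and all parameter values are assumptions of this sketch.
from scipy.optimize import minimize_scalar

species = {            # name: (MSY in tonnes, effort at MSY in boat-days)
    "tiger":  (400.0, 4000.0),
    "banana": (250.0, 3000.0),
}
share = {"tiger": 0.6, "banana": 0.4}          # fixed split from past catches
price = {"tiger": 20000.0, "banana": 15000.0}  # $ per tonne

def equilibrium_yield(msy, e_msy, effort):
    # Schaefer parabola: yield equals msy at e_msy and zero at 2*e_msy.
    r = effort / e_msy
    return max(0.0, 2 * msy * r - msy * r * r)

def profit(total_effort, daily_cost):
    revenue = sum(price[s] * equilibrium_yield(*species[s],
                                               share[s] * total_effort)
                  for s in species)
    return revenue - daily_cost * total_effort

for daily_cost in (0.0, 500.0, 750.0, 950.0):  # $0 recovers the MSY effort
    res = minimize_scalar(lambda E: -profit(E, daily_cost),
                          bounds=(0.0, 15000.0), method="bounded")
    print(f"daily cost ${daily_cost:>4.0f}: optimal effort "
          f"{res.x:,.0f} boat-days")
```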

Relevance: 100.00%

Abstract:

The geothermal industry in Australia and Queensland is in its infancy, and hot dry rock (HDR) geothermal energy is very much at the target-identification and resource-definition stage. As a key effort to assist the geothermal industry and HDR exploration in Queensland, we are developing a comprehensive new integrated geochemical and geochronological database of igneous rocks. To date, around 18,000 igneous rocks have been analysed across Queensland for chemical and/or age information. However, these data currently reside in a number of disparate datasets (e.g., Ozchron, Champion et al., 2007, the Geological Survey of Queensland, journal publications, and unpublished university theses). The goal of this project is to collate and integrate these data on Queensland igneous rocks to improve our understanding of high-heat-producing granites in Queensland, in terms of their distribution (particularly in the subsurface), dimensions, ages, and the factors controlling their genesis.

Relevance: 100.00%

Abstract:

Bactrocera papayae Drew & Hancock, Bactrocera philippinensis Drew & Hancock, Bactrocera carambolae Drew & Hancock, and Bactrocera invadens Drew, Tsuruta & White are four horticultural pest tephritid fruit fly species that are highly similar, morphologically and genetically, to the destructive pest, the Oriental fruit fly, Bactrocera dorsalis (Hendel) (Diptera: Tephritidae). This similarity has rendered the discovery of reliable diagnostic characters problematic, which, in view of the economic importance of these taxa and the international trade implications, has resulted in ongoing difficulties for many areas of plant protection and food security. Consequently, a major international collaborative and integrated multidisciplinary research effort was initiated in 2009 to build upon existing literature with the specific aim of resolving biological species limits among B. papayae, B. philippinensis, B. carambolae, B. invadens and B. dorsalis to overcome constraints to pest management and international trade. Bactrocera philippinensis has recently been synonymized with B. papayae as a result of this initiative and this review corroborates that finding; however, the other names remain in use. While consistent characters have been found to reliably distinguish B. carambolae from B. dorsalis, B. invadens and B. papayae, no such characters have been found to differentiate the latter three putative species. We conclude that B. carambolae is a valid species and that the remaining taxa, B. dorsalis, B. invadens and B. papayae, represent the same species. Thus, we consider B. dorsalis (Hendel) to be the senior synonym of B. papayae Drew & Hancock syn. n. and B. invadens Drew, Tsuruta & White syn. n. A redescription of B. dorsalis is provided. Given the agricultural importance of B. dorsalis, this taxonomic decision will have significant global plant biosecurity implications, affecting pest management, quarantine, international trade, postharvest treatment and basic research. Throughout the paper, we emphasize the value of independent and multidisciplinary tools in delimiting species, particularly in complicated cases involving morphologically cryptic taxa.

Relevance: 100.00%

Abstract:

In daily activities, people use a number of available means to achieve balance, such as the use of the hands and the co-ordination of posture. One approach that explains this relationship between perception and action is the ecological theory, which is based on the work of a) Bernstein (1967), who posed the problem of “the degrees of freedom”; b) Gibson (1979), who developed the theory of perception and described the way in which information is received from the environment in order for a certain movement to be achieved; c) Newell (1986), who proposed that movement can derive from the interaction of the constraints imposed by the environment and the organism; and d) Kugler, Kelso and Turvey (1982), who showed the way in which “the degrees of freedom” are connected and interact. According to these theories, the development of movement co-ordination can result from the different constraints imposed on the organism-environment system. The close relationship between environmental and organismic constraints, as well as their interaction, determines the movement system that will be activated. Apart from shaping the co-ordination of specific movements, these constraints can also be, to a certain degree, a rate-limiting factor in the acquisition and mastering of a new skill. This framework can be an essential tool for the study of catching an object (e.g., a ball). The importance of such study is evident because the movements involved in catching an object are representative of everyday actions and characteristic of the interaction between perception and action.

Relevance: 100.00%

Abstract:

With the advent of Service-Oriented Architecture, Web services have gained tremendous popularity. With a large number of Web services available, finding one that matches the user's requirements is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery, and a considerable body of research has emerged to develop methods that improve its accuracy in matching the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to accurate Web service discovery, and appropriately linking individual matched services should then fully satisfy the user's requirements. This research proposes integrating a semantic model and a data mining technique to enhance the accuracy of Web service discovery, through a novel three-phase Web service discovery methodology. The first phase performs match-making to find semantically similar Web services for a user query. To perform semantic analysis on the content of the Web service description language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging applied to a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed from a large number of terms helps to find the hidden meaning of query terms that otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In this link-analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. Extensive experimentation was performed to evaluate the proposed method. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also show that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
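
As a sketch of the phase-II link analysis, the following models services as graph nodes and computes an all-pairs shortest path with the Floyd-Warshall algorithm, one standard choice (the abstract does not name the specific algorithm used). The service names and traversal costs are hypothetical.

```python
# Sketch of the link-analysis phase: Web services as nodes of a weighted
# graph; Floyd-Warshall finds the minimum-cost path (composition) between
# every pair of services. Names and edge costs are hypothetical.
INF = float("inf")

services = ["getGeo", "getWeather", "getForecast", "formatReport"]
n = len(services)

# cost[i][j] is the traversal cost when service i's outputs can feed
# service j's inputs (feasibility is checked beforehand).
cost = [[INF] * n for _ in range(n)]
for i in range(n):
    cost[i][i] = 0.0
cost[0][1] = 2.0   # getGeo -> getWeather
cost[0][2] = 5.0   # getGeo -> getForecast
cost[1][2] = 1.0   # getWeather -> getForecast
cost[2][3] = 1.0   # getForecast -> formatReport

# Floyd-Warshall with next-hop bookkeeping for path reconstruction.
nxt = [[j if cost[i][j] < INF else None for j in range(n)] for i in range(n)]
for k in range(n):
    for i in range(n):
        for j in range(n):
            if cost[i][k] + cost[k][j] < cost[i][j]:
                cost[i][j] = cost[i][k] + cost[k][j]
                nxt[i][j] = nxt[i][k]

def composition(i, j):
    """Return the cheapest chain of services from i to j, if any."""
    if nxt[i][j] is None:
        return None                     # the services cannot be linked
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return [services[p] for p in path]

print(composition(0, 3), "cost:", cost[0][3])
# ['getGeo', 'getWeather', 'getForecast', 'formatReport'] cost: 4.0
```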

Relevance: 100.00%

Abstract:

The study described in this paper developed a model of animal movement that explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and their directional and angular speeds. Two learning algorithms were implemented: a hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that an HMM can be used to describe the animal's movement and state-transition behaviour within several “stay” areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cows' movement between the “stay” areas, a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model whose output closely matched the collected animal behaviour data. This modelling methodology could easily be applied to the interactions of other animal species.
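
A minimal sketch of the HMM component follows, using the hmmlearn library (an assumed choice; the paper does not name its implementation) to fit three hidden states to per-interval speed and turning-angle features and decode a behaviour state sequence. The synthetic data merely stand in for the logged cow trajectories.

```python
# Sketch: fitting a three-state Gaussian HMM to movement features and
# decoding hidden behaviour states (relocating, foraging, bedding).
# The hmmlearn library and the feature choice are assumptions here.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(42)

# Synthetic stand-in for logged cow movement: blocks of still, slow and
# fast movement with different turning behaviour.
speed = np.concatenate([rng.normal(0.05, 0.02, 200),  # bedding: near-still
                        rng.normal(0.4, 0.1, 200),    # foraging: slow, wiggly
                        rng.normal(1.5, 0.3, 100)])   # relocating: fast, straight
turn = np.concatenate([rng.normal(0.0, 0.1, 200),
                       rng.normal(0.0, 1.0, 200),
                       rng.normal(0.0, 0.2, 100)])
X = np.column_stack([speed, turn])

model = hmm.GaussianHMM(n_components=3, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)            # Viterbi decoding of behaviour states
print("estimated transition matrix:\n", model.transmat_.round(2))
print("state means (speed, turn):\n", model.means_.round(2))
```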

Relevance: 100.00%

Abstract:

Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high-resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling is proposed for rural and urban scenarios, respectively, and an integrated system has been developed for road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low-resolution images, both of which can be further employed to facilitate road information generation in high-resolution images. The histogram thresholding method is then chosen to classify road details in high-resolution images, where color space transformation is used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high-resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed for feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized digital surface model (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets, respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland, while the road extraction algorithm for urban regions is tested using datasets of Bundaberg that combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2%, respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
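
As a small illustration of the pavement-marking step, the sketch below applies Gaussian smoothing and Otsu's clustering method to a road-surface patch using OpenCV; the input file name and kernel size are assumptions, and the synthetic fallback patch only stands in for a real pan-sharpened image.

```python
# Sketch of the pavement-marking step: smooth an aerial road patch, then
# separate bright lane markings from pavement with Otsu's method. The
# OpenCV usage and input file are illustrative, not the thesis pipeline.
import cv2
import numpy as np

img = cv2.imread("road_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical patch
if img is None:
    # Fall back to a synthetic patch: dark pavement with a bright stripe.
    img = np.full((100, 100), 60, np.uint8)
    img[:, 48:52] = 220
    img += np.random.default_rng(0).integers(0, 20, img.shape).astype(np.uint8)

smoothed = cv2.GaussianBlur(img, (5, 5), 0)   # suppress sensor noise

# Otsu picks the threshold that best separates the two intensity clusters
# (pavement vs. markings), so no manual threshold has to be chosen.
thresh_val, markings = cv2.threshold(smoothed, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"Otsu threshold: {thresh_val:.0f}, "
      f"marking pixels: {int((markings == 255).sum())}")
```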

Relevance: 100.00%

Abstract:

Wavelet transforms (WTs) serve as a powerful tool for extracting localized variations in non-stationary signals, and applications in traffic engineering have been introduced; however, these applications lack some important theoretical fundamentals. In particular, little guidance is provided on selecting an appropriate WT across potential transport applications. The research described in this paper contributes uniquely to the literature by first describing a numerical experiment that demonstrates the shortcomings of commonly used data-processing techniques in traffic engineering (i.e., averaging, moving averaging, second-order differencing, oblique cumulative curves, and the short-time Fourier transform). It then mathematically describes the WT's ability to detect singularities in traffic data. Next, the selection of a suitable WT for a particular research topic in traffic engineering is discussed in detail by objectively and quantitatively comparing candidate wavelets' performance in a numerical experiment. Finally, based on several case studies using both loop detector data and vehicle trajectories, it is shown that selecting a suitable wavelet depends largely on the specific research topic, and that the Mexican hat wavelet generally gives satisfactory performance in detecting singularities in traffic and vehicular data.
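
The sketch below illustrates the singularity-detection idea with a Mexican hat continuous wavelet transform, using PyWavelets (an assumed library choice) on a synthetic speed series with an abrupt breakdown; large coefficient magnitudes line up across scales at the change point, unlike a moving average, which only smears the drop.

```python
# Sketch: detecting a singularity (an abrupt speed drop, as at traffic
# flow breakdown) with a Mexican hat continuous wavelet transform. The
# PyWavelets library and the synthetic series are assumptions here.
import numpy as np
import pywt

t = np.arange(600)                       # time steps (e.g., seconds)
speed = np.where(t < 300, 100.0, 40.0)   # free flow, then breakdown at t=300
speed += np.random.default_rng(0).normal(0, 2.0, t.size)  # detector noise

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(speed, scales, "mexh")  # Mexican hat ('mexh') CWT

# Coefficient magnitudes concentrate at the singularity across scales,
# so aggregating them localises the change point sharply.
energy = np.abs(coefs).sum(axis=0)
print("detected change point near t =", int(energy.argmax()))
```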