930 results for Impasse set
Abstract:
In CB Richard Ellis (C) Pty Ltd v Wingate Properties Pty Ltd [2005] QDC 399 McGill DCJ examined whether the court now has a discretion to set aside an irregularly entered default judgment.
Abstract:
In C & E Pty Ltd v Corrigan [2006] QCA 47, the Queensland Court of Appeal considered whether r103 of the Uniform Civil Procedure Rules applied to the service of an application to set aside a statutory demand under s459G of the Corporations Act 2001 (Cth). The decision provides analysis and clarification of an issue that has clearly been one of some uncertainty.
Abstract:
Numeric sets can be used to store and distribute important information such as currency exchange rates and stock forecasts. It is useful to watermark such data to prove ownership in case of illegal distribution. This paper analyzes the numeric-set watermarking model presented by Sion et al. in “On watermarking numeric sets”, identifies its weaknesses, and proposes a novel scheme that overcomes these problems. One weakness of Sion's watermarking scheme is the requirement of a normally-distributed set, which does not hold for many numeric sets such as forecast figures. Experiments indicate that the scheme is also susceptible to subset addition and secondary watermarking attacks. The watermarking model we propose can be used for numeric sets with arbitrary distribution. Theoretical analysis and experimental results show that the scheme is strongly resilient against sorting, subset selection, subset addition, distortion, and secondary watermarking attacks.
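The basic embed/detect idea behind numeric-set watermarking can be illustrated with a toy quantization-parity sketch. This is not the scheme from the abstract above; the key derivation, quantization step, and bit rule are all invented for illustration:

```python
import hashlib

def _key_bit(x, key):
    # expected watermark bit for this item, derived from the secret key and
    # the item's coarse magnitude so that marking does not change it (toy choice)
    h = hashlib.sha256(f"{key}:{int(x)}".encode()).digest()
    return h[0] & 1

def embed(values, key, step=0.001):
    """Snap each value to a multiple of `step` whose parity encodes the key bit."""
    out = []
    for x in values:
        q = round(x / step)
        if q % 2 != _key_bit(x, key):
            q += 1          # flip parity; distortion stays below 1.5 * step
        out.append(q * step)
    return out

def detect(values, key, step=0.001):
    """Fraction of items whose parity matches the expected key bit:
    near 1.0 for marked data, near 0.5 for unmarked data."""
    hits = sum(round(x / step) % 2 == _key_bit(x, key) for x in values)
    return hits / len(values)
```

A distribution-free scheme of this flavour works because the parity test makes no assumption about how the values themselves are distributed; only the quantization step bounds the distortion.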
Abstract:
Ever since Cox et al. published their paper “A Secure, Robust Watermark for Multimedia” in 1996 [6], there has been tremendous progress in multimedia watermarking. The same pattern re-emerged when Agrawal and Kiernan published their work “Watermarking Relational Databases” in 2001 [1]. However, little attention has been given to primitive data collections, with only a handful of research works known to the authors [11, 10]. This is primarily due to the absence of an attribute that differentiates marked items from unmarked items during the insertion and detection processes. This paper presents a distribution-independent watermarking model that is secure against secondary watermarking in addition to conventional attacks such as data addition, deletion, and distortion. The low false positives and high capacity provide additional strength to the scheme. These claims are backed by experimental results provided in the paper.
Abstract:
Measurements of half-field beam penumbra were taken using EBT2 film for a variety of blocking techniques. It was shown that minimizing the SSD reduces the penumbra as the effects of beam divergence are diminished. The addition of a lead block directly on the surface provides optimal results with a 10-90% penumbra of 0.53 ± 0.02 cm. To resolve the uncertainties encountered in film measurements, future Monte Carlo measurements of half-field penumbras are to be conducted.
Abstract:
With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one of the pruning methods used to remove noisy, inconsistent, and redundant patterns. However, the PTM treats each extracted pattern as a whole without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends the random set approach to accurately weight patterns based on their distribution in the documents and the distribution of terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on several popular measures.
Abstract:
There has been a paucity of published research on the temporal aspect of destination image change over time. Given increasing investments in destination branding, research is needed to enhance understanding of how to monitor destination brand performance, of which destination image is the core construct, over time. This article reports the results of four studies tracking the brand performance of a competitive set of five destinations between 2003 and 2012. Results indicate minimal changes in perceptions held of the five destinations of interest over the 10 years, supporting the assertion of Gartner (1986) and Gartner and Hunt (1987) that destination image change will only occur slowly over time. While undertaken in Australia, the research approach provides DMOs in other parts of the world with a practical tool for evaluating brand performance over time, both as a measure of the effectiveness of past marketing communications and as an indicator of future performance.
Abstract:
The Commission has been asked to identify appropriate options for reducing entry and exit barriers including advice on the potential impacts of the personal/corporate insolvency regimes on business exits...
Abstract:
The Commission has released a Draft Report on Business Set-Up, Transfer and Closure for public consultation and input. It is pleasing to note that three chapters of the Draft Report address aspects of personal and corporate insolvency. Nevertheless, we continue to submit to national policy inquiries and discussions that a comprehensive review should be undertaken of the regulation of insolvency and restructuring in Australia. The last comprehensive review of the insolvency system, by the Australian Law Reform Commission (the Harmer Report), was handed down in 1988. While aspects of our insolvency laws have been reviewed since that time, none of these reviews has provided the clear and comprehensive analysis that can come from a more considered review. Such a review ought to be conducted by the Australian Law Reform Commission or a similar independent panel set up for the task. We also suggest that there is a lack of data available to assist with addressing the questions raised by the Draft Report. There is a need to invest in finding out, in a rigorous and informed way, how the current law operates. Until there is a willingness to make a public investment in such research, with less reliance upon anecdotal evidence (often from well-meaning but ultimately inadequately informed participants), the government cannot be sure that the current insolvency regime is the most effective one to underpin Australia's commercial and financial dealings, nor that any change is justified. We also submit that there would be benefits in a serious investigation of a merged regulatory architecture for personal and corporate insolvency, with a combined personal and corporate insolvency regulator.
Abstract:
The microbially mediated production of nitrous oxide (N2O) and its reduction to dinitrogen (N2) via denitrification represents a loss of nitrogen (N) from fertilised agro-ecosystems to the atmosphere. Although denitrification has received great interest from biogeochemists in recent decades, the magnitude of N2 losses and the related N2:N2O ratios from soils are still largely unknown due to methodological constraints. We present a novel 15N tracer approach, based on a previously developed tracer method for studying denitrification in pure bacterial cultures, modified for use in soil incubations in a completely automated laboratory set-up. The method replaces the background air in the incubation vessels with a helium-oxygen gas mixture with a 50-fold reduced N2 background (2% v/v). This allows direct and sensitive quantification of N2 and N2O emissions from the soil by isotope-ratio mass spectrometry after 15N labelling of denitrification N substrates, while minimising sensitivity to the intrusion of atmospheric N2. The incubation set-up was used to determine the influence of different soil moisture levels on N2 and N2O emissions from a sub-tropical pasture soil in Queensland, Australia. The soil was labelled with an equivalent of 50 μg N per gram of dry soil by broadcast application of KNO3 solution (4 at.% 15N) and incubated for 3 days at 80% and 100% water-filled pore space (WFPS), respectively. The headspace of each incubation vessel was sampled automatically over 12 hrs each day, and 3 samples of headspace gas (0, 6, and 12 hrs after incubation start) were analysed for N2 and N2O with an isotope-ratio mass spectrometer (DELTA V Plus, Thermo Fisher Scientific, Bremen, Germany). In addition, the soil was analysed for 15N NO3- and NH4+ using the 15N diffusion method, which enabled us to obtain a complete N balance.
The method proved to be highly sensitive, detecting N2O emissions ranging from 20 to 627 μg N kg-1 soil hr-1 and N2 emissions ranging from 4.2 to 43 μg N kg-1 soil hr-1 for the different treatments. The main end-product of denitrification was N2O for both water contents, with N2 accounting for 9% and 13% of the total denitrification losses at 80% and 100% WFPS, respectively. Between 95% and 100% of the added 15N fertiliser could be recovered. Gross nitrification over the 3 days amounted to 8.6 μg N g-1 soil and 4.7 μg N g-1 soil, and denitrification to 4.1 μg N g-1 soil and 11.8 μg N g-1 soil, at 80% and 100% WFPS, respectively. The results confirm that the tested method allows direct and highly sensitive detection of N2 and N2O fluxes from soils and hence offers a sensitive tool to study denitrification and N turnover in terrestrial agro-ecosystems.
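As a rough illustration of how a headspace accumulation measurement translates into a per-mass soil flux, the standard ideal-gas conversion can be sketched as follows. The vessel headspace volume, soil mass, temperature, and fitted slope below are placeholder assumptions, not figures from the study:

```python
# Ideal-gas conversion of a headspace N2O slope to a soil flux (illustrative values).
R = 8.314           # gas constant, J mol-1 K-1
T = 298.15          # assumed incubation temperature, K
P = 101325.0        # assumed pressure, Pa
headspace_L = 1.0   # assumed headspace volume, litres
soil_kg = 0.5       # assumed dry-soil mass, kg

slope_ppb_per_hr = 150.0  # assumed fitted increase in N2O mixing ratio

mol_gas = P * (headspace_L / 1000) / (R * T)          # total mol of headspace gas
mol_n2o_per_hr = mol_gas * slope_ppb_per_hr * 1e-9    # mol N2O accumulated per hour
ug_n_per_hr = mol_n2o_per_hr * 28.0 * 1e6             # 28 g N per mol N2O (two N atoms)
flux = ug_n_per_hr / soil_kg                          # µg N kg-1 soil hr-1
```

With these assumed numbers the flux comes out around 0.34 µg N kg-1 soil hr-1; real incubation fluxes depend entirely on the actual vessel geometry and slope.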
Abstract:
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs vs. 0.18 μs standard deviation), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
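The matched-filtering step described above amounts to cross-correlating the received trace with the known excitation chirp and reading transit times off the correlation peaks. A minimal sketch, in which the sample rate, chirp band, path delays, and amplitudes are all invented for illustration:

```python
import numpy as np

fs = 10e6                          # sample rate, Hz (assumed)
T = 50e-6                          # chirp duration, 50 µs (assumed)
t = np.arange(0, T, 1 / fs)        # 500 samples
f0, f1 = 1e6, 3e6                  # linear chirp band (assumed)
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

# simulated receiver trace: direct path arriving at sample 200 and a weaker
# reflected path overlapping it at sample 280
d_direct, d_reflect = 200, 280
rx = np.zeros(len(t) + 400)
rx[d_direct:d_direct + len(t)] += chirp
rx[d_reflect:d_reflect + len(t)] += 0.5 * chirp

# matched filter = cross-correlation with the transmitted chirp;
# the peak index is the transit time in samples
mf = np.abs(np.correlate(rx, chirp, mode="valid"))
peak1 = int(np.argmax(mf))

# suppress the main lobe and look for the second (reflected) arrival
mf2 = mf.copy()
mf2[max(0, peak1 - 20):peak1 + 20] = 0
peak2 = int(np.argmax(mf2))
```

The side-lobe issue the study targets is visible here: the second peak sits on the side lobes of the first, which is why a deconvolution step with better side-lobe suppression can improve detection of closely spaced arrivals.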
Abstract:
In treatment comparison experiments, the treatment responses are often correlated with some concomitant variables which can be measured before or at the beginning of the experiments. In this article, we propose schemes for the assignment of experimental units that may greatly improve the efficiency of the comparison in such situations. The proposed schemes are based on general ranked set sampling. The relative efficiency and cost-effectiveness of the proposed schemes are studied and compared. It is found that some proposed schemes are always more efficient than the traditional simple random assignment scheme when the total cost is the same. Numerical studies show promising results using the proposed schemes.
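The efficiency gain that ranked set sampling offers over simple random assignment can be seen in a small simulation of the plain RSS mean estimator (this illustrates the underlying mechanism only, not the article's proposed assignment schemes; the set size, cycle count, and standard-normal population are illustrative choices):

```python
import random
import statistics

def rss_sample(draw, set_size, cycles, rng):
    """One balanced ranked-set sample: for each cycle and each rank r,
    draw `set_size` units, rank them, and measure only the r-th smallest."""
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            ranked = sorted(draw(rng) for _ in range(set_size))
            sample.append(ranked[r])
    return sample

rng = random.Random(42)
draw = lambda g: g.gauss(0.0, 1.0)
k, m, reps = 3, 4, 2000            # set size, cycles, Monte Carlo repetitions

# sampling distribution of the mean under RSS vs simple random sampling,
# with the same number of measured units (k * m) in both cases
rss_means = [statistics.mean(rss_sample(draw, k, m, rng)) for _ in range(reps)]
srs_means = [statistics.mean(draw(rng) for _ in range(k * m)) for _ in range(reps)]

var_rss = statistics.pvariance(rss_means)
var_srs = statistics.pvariance(srs_means)
```

For a normal population with set size 3, the RSS mean's variance is well below the SRS variance at equal measured sample size, which is the effect the proposed assignment schemes exploit.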
Abstract:
This paper considers the one-sample sign test for data obtained from general ranked set sampling when the number of observations for each rank is not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution-free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
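The distribution-free ingredient here is that, under the null, the probability that the r-th judgment order statistic of a set of k exceeds the true median is a pure binomial quantity. The weight formula below is one standard local-optimality derivation (density factors cancel across ranks), stated as an assumption rather than the paper's exact expression:

```python
from math import comb

def exceed_prob(r, k):
    """P(r-th order statistic of k i.i.d. draws exceeds the true median):
    the event that fewer than r of the k units fall at or below the median."""
    return sum(comb(k, j) for j in range(r)) / 2 ** k

def rank_weight(r, k):
    """Assumed locally optimal weight for a rank-r sign indicator:
    derivative of the exceedance probability over the Bernoulli variance.
    The common density-at-median factor cancels, so the weight depends
    only on the rank r and set size k, i.e. it is distribution-free."""
    p = exceed_prob(r, k)
    dp = comb(k - 1, r - 1) / 2 ** (k - 1)  # ∝ order-statistic density at the median
    return dp / (p * (1 - p))

# for set size 3, the middle rank has exceedance probability exactly 1/2,
# consistent with the median being the natural unit to select from each set
probs = [exceed_prob(r, 3) for r in (1, 2, 3)]
```

The 1/2 value for the middle rank reflects why the median-only design is attractive: its sign indicator behaves exactly like an ordinary sign-test observation.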
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from each set, even with the optimal set size from the RSS design, can be more beneficial.