60 results for antimedian set
Abstract:
With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one pruning method for removing noisy, inconsistent, and redundant patterns. However, the PTM model treats each extracted pattern as a whole without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends the random set approach to weight patterns accurately based on their distribution in the documents and the distribution of their terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on several popular measures.
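As a rough illustration of term-aware pattern weighting (a toy scoring function with made-up names, not the paper's random-set formulation or the SCSP algorithm), one can score a pattern by how its terms are distributed across the documents that support it:

```python
from collections import Counter

def pattern_weight(pattern, documents):
    """Toy term-aware weighting: score a pattern (a set of terms) by how
    its terms are distributed in the documents that contain the whole
    pattern; each document is a list of tokens."""
    supporting = [doc for doc in documents if pattern <= set(doc)]
    if not supporting:
        return 0.0
    weight = 0.0
    for doc in supporting:
        counts = Counter(t for t in doc if t in pattern)
        total = sum(counts.values())
        # Herfindahl-style score: concentration of the pattern's
        # occurrences among its terms within this document
        weight += sum((c / total) ** 2 for c in counts.values())
    return weight / len(documents)

docs = [["text", "mining", "of", "text", "data"],
        ["text", "data", "bases"],
        ["web", "pages"]]
print(pattern_weight({"text", "data"}, docs))
```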
Abstract:
Little research has been published on the temporal aspect of destination image, that is, how it changes over time. Given increasing investments in destination branding, research is needed to enhance understanding of how to monitor destination brand performance, of which destination image is the core construct, over time. This article reports the results of four studies tracking the brand performance of a competitive set of five destinations between 2003 and 2012. The results indicate minimal changes in perceptions of the five destinations of interest over the 10 years, supporting the assertion of Gartner (1986) and Gartner and Hunt (1987) that destination image changes only slowly over time. While undertaken in Australia, the research approach provides DMOs in other parts of the world with a practical tool for evaluating brand performance over time, both as a measure of the effectiveness of past marketing communications and as an indicator of future performance.
Abstract:
The Commission has been asked to identify appropriate options for reducing entry and exit barriers, including advice on the potential impacts of the personal/corporate insolvency regimes on business exits...
Abstract:
The Commission has released a Draft Report on Business Set-Up, Transfer and Closure for public consultation and input. It is pleasing to note that three chapters of the Draft Report address aspects of personal and corporate insolvency. Nevertheless, we continue to submit to national policy inquiries and discussions that a comprehensive review should be undertaken of the regulation of insolvency and restructuring in Australia. The last comprehensive review of the insolvency system, by the Australian Law Reform Commission (the Harmer Report), was handed down in 1988. While aspects of our insolvency laws have been reviewed since that time, none of those reviews has provided the clear and comprehensive analysis that can come from a more considered review. Such a review ought to be conducted by the Australian Law Reform Commission or a similar independent panel set up for the task. We also suggest that there is a lack of data available to assist with addressing the questions raised by the Draft Report. There is a need to invest in finding out, in a rigorous and informed way, how the current law operates. Until there is a willingness to make a public investment in such research, with less reliance upon the anecdotal (often from well-meaning but ultimately inadequately informed participants and others), the government cannot be sure that the insolvency regime we have provides the most effective underpinning for Australia's commercial and financial dealings, nor that any change is justified. We also submit that there are benefits in a serious investigation into a merged regulatory architecture for personal and corporate insolvency and a combined personal and corporate insolvency regulator.
Abstract:
The microbially mediated production of nitrous oxide (N2O) and its reduction to dinitrogen (N2) via denitrification represent a loss of nitrogen (N) from fertilised agro-ecosystems to the atmosphere. Although denitrification has received great interest from biogeochemists in recent decades, the magnitude of N2 losses and the related N2:N2O ratios from soils are still largely unknown due to methodological constraints. We present a novel 15N tracer approach, based on a previously developed tracer method for studying denitrification in pure bacterial cultures, which was modified for use with soil incubations in a completely automated laboratory set-up. The method replaces the background air in the incubation vessels with a helium-oxygen gas mixture with a 50-fold reduced N2 background (2% v/v). This allows direct and sensitive quantification of the N2 and N2O emissions from the soil with isotope-ratio mass spectrometry after 15N labelling of denitrification N substrates, while minimising sensitivity to the intrusion of atmospheric N2. The incubation set-up was used to determine the influence of different soil moisture levels on N2 and N2O emissions from a sub-tropical pasture soil in Queensland, Australia. The soil was labelled with an equivalent of 50 μg N per gram dry soil by broadcast application of KNO3 solution (4 at.% 15N) and incubated for 3 days at 80% and 100% water-filled pore space (WFPS), respectively. The headspace of the incubation vessel was sampled automatically over 12 hrs each day, and three samples of headspace gas (0, 6, and 12 hrs after the start of incubation) were analysed for N2 and N2O with an isotope-ratio mass spectrometer (DELTA V Plus, Thermo Fisher Scientific, Bremen, Germany). In addition, the soil was analysed for 15N in NO3- and NH4+ using the 15N diffusion method, which enabled us to obtain a complete N balance. The method proved to be highly sensitive, detecting N2O emissions ranging from 20 to 627 μg N kg-1 soil hr-1 and N2 emissions ranging from 4.2 to 43 μg N kg-1 soil hr-1 across the different treatments. The main end-product of denitrification was N2O at both water contents, with N2 accounting for 9% and 13% of the total denitrification losses at 80% and 100% WFPS, respectively. Between 95% and 100% of the added 15N fertiliser could be recovered. Gross nitrification over the 3 days amounted to 8.6 and 4.7 μg N g-1 soil, and denitrification to 4.1 and 11.8 μg N g-1 soil, at 80% and 100% WFPS, respectively. The results confirm that the tested method allows direct and highly sensitive detection of N2 and N2O fluxes from soils and hence offers a sensitive tool to study denitrification and N turnover in terrestrial agro-ecosystems.
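A small arithmetic sketch (illustrative flux values chosen from the reported ranges, not the actual measurements) shows how the N2 share of total denitrification losses and the N2O:N2 ratio follow from paired N2 and N2O fluxes:

```python
def n2_fraction(n2_flux, n2o_flux):
    """Fraction of total denitrification N loss emitted as N2;
    both fluxes must be in the same units (e.g. ug N kg-1 soil hr-1)."""
    return n2_flux / (n2_flux + n2o_flux)

# Illustrative values only: an N2O flux of 300 and an N2 flux of 30
# ug N kg-1 soil hr-1 give an N2 share of ~9%, the order of magnitude
# reported for the 80% WFPS treatment.
print(f"N2 share of denitrification losses: {n2_fraction(30, 300):.1%}")
print(f"N2O:N2 ratio: {300 / 30:.0f}")
```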
Abstract:
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of the ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution for deriving a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (0.13 μs vs. 0.18 μs standard deviation), deconvolution has a 3.5 times better side-lobe to main-lobe ratio. Higher side-lobe suppression is important for further improving image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
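The matched-filtering step can be sketched on a synthetic signal (assumed sample rate, chirp band, and delay; the study's actual excitation parameters and the active-set deconvolution are not reproduced): the received signal is cross-correlated with the transmitted chirp, and the lag of the correlation peak estimates the transit time.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 10e6                                  # sample rate in Hz (illustrative)
t = np.arange(0, 50e-6, 1 / fs)            # 50 us excitation window
tx = chirp(t, f0=1e6, t1=t[-1], f1=3e6)    # coded excitation chirp (assumed band)

# Simulated received signal: direct path delayed by 20 us, plus noise
delay = int(20e-6 * fs)
rx = np.zeros(len(t) + delay)
rx[delay:delay + len(tx)] += tx
rx += 0.05 * np.random.randn(len(rx))

# Matched filter = cross-correlation with the transmitted chirp; the lag
# of the correlation peak estimates the transit time
corr = correlate(rx, tx, mode="full")
lag = np.argmax(corr) - (len(tx) - 1)
print(f"Estimated transit time: {lag / fs * 1e6:.2f} us")
```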
Abstract:
In treatment comparison experiments, the treatment responses are often correlated with some concomitant variables which can be measured before or at the beginning of the experiments. In this article, we propose schemes for the assignment of experimental units that may greatly improve the efficiency of the comparison in such situations. The proposed schemes are based on general ranked set sampling. The relative efficiency and cost-effectiveness of the proposed schemes are studied and compared. It is found that some proposed schemes are always more efficient than the traditional simple random assignment scheme when the total cost is the same. Numerical studies show promising results using the proposed schemes.
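For context, standard balanced ranked set sampling can be sketched as below (a minimal illustration in which a cheap concomitant score stands in for the judgment ranking; the paper's general RSS assignment schemes and cost analysis are not reproduced):

```python
import random

def ranked_set_sample(population, set_size, cycles, key=lambda x: x):
    """Balanced ranked set sampling: in each cycle, draw `set_size`
    random sets of `set_size` units, rank each set on the cheap
    concomitant score `key`, and keep the r-th ranked unit from the
    r-th set."""
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            group = random.sample(population, set_size)
            group.sort(key=key)
            sample.append(group[r])
    return sample

population = list(range(1000))   # units indexed by a concomitant score
rss = ranked_set_sample(population, set_size=3, cycles=4)
print(rss)                       # 12 units spread across low, middle and high ranks
```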
Abstract:
This paper considers the one-sample sign test for data obtained from general ranked set sampling when the number of observations for each rank is not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution-free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions, and (2) the optimal design is to select the median from each ranked set.
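A generic weighted sign statistic can be sketched as follows (placeholder weights and a simplifying symmetric-sign null are assumed; the paper's distribution-free optimal weights are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def weighted_sign_test(data, ranks, weights, theta0=0.0):
    """Weighted sign statistic: each observation contributes
    weights[rank] * sign(x - theta0).  The normal approximation below
    assumes each sign is +/-1 with equal probability under the null,
    a simplification rather than the exact RSS null distribution."""
    data = np.asarray(data, float)
    w = np.array([weights[r] for r in ranks], float)
    s = np.sign(data - theta0)
    z = np.sum(w * s) / np.sqrt(np.sum(w ** 2))
    return z, 2 * norm.sf(abs(z))            # two-sided p-value

x = [0.4, -0.1, 1.2, 0.8, -0.3, 0.5]         # observations
r = [1, 1, 2, 2, 3, 3]                       # judgment ranks (set size 3)
print(weighted_sign_test(x, r, weights={1: 0.8, 2: 1.0, 3: 0.8}))
```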
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from one set, even with the optimal set size from the RSS design, can be more beneficial.
Abstract:
This article is motivated by a lung cancer study in which a regression model is involved and the response variable is too expensive to measure, while the predictor variable can be measured easily at relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase the efficiency of the regression analysis in the situation described above. The developed method is applied retrospectively to a lung cancer study, where the interest is in investigating the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria, such as A-, D-, and integrated mean square error (IMSE)-optimality, are considered in the application. With a set size of 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred using SRS.
Abstract:
The single electron transfer-nitroxide radical coupling (SET-NRC) reaction has been used to produce multiblock polymers with high molecular weights in under 3 min at 50 °C by coupling a difunctional telechelic polystyrene (Br-PSTY-Br) with a dinitroxide. The well-known combination of dimethyl sulfoxide as solvent and Me6TREN as ligand facilitated the in situ disproportionation of Cu(I)Br to the highly active nascent Cu(0) species. This SET reaction allowed polymeric radicals to be rapidly formed from their corresponding halide end-groups. Trapping of these carbon-centred radicals by dinitroxides at close to diffusion-controlled rates resulted in high-molecular-weight multiblock polymers. Our results showed that the disproportionation of Cu(I) was critical in obtaining these ultrafast reactions, and confirmed that activation was primarily through Cu(0). We took advantage of the reversibility of the NRC reaction at elevated temperatures to decouple the multiblock back to the original PSTY building block by capping the chain-ends with mono-functional nitroxides. These alkoxyamine end-groups were further exchanged with an alkyne mono-functional nitroxide (TEMPO–≡) and 'clicked' by a Cu(I)-catalyzed azide/alkyne cycloaddition (CuAAC) reaction with N3–PSTY–N3 to reform the multiblocks. This final 'click' reaction, even after the consecutive decoupling and nitroxide-exchange reactions, still produced high-molecular-weight multiblocks efficiently. These SET-NRC reactions would have ideal applications in re-usable plastics and possibly in self-healing materials.
Abstract:
Purpose: This study evaluated the impact of patient set-up errors on the probability of pulmonary and cardiac complications in the irradiation of left-sided breast cancer. Methods and Materials: Using the NTCP algorithm of the CMS XiO Version 4.6 radiotherapy planning system (CMS Inc., St Louis, MO) and the Lyman-Kutcher-Burman (LKB) model, we calculated dose-volume histogram (DVH) indices for the ipsilateral lung and heart and the resultant normal tissue complication probabilities (NTCP) for radiation-induced pneumonitis and excess cardiac mortality in 12 left-sided breast cancer patients. Results: Isocenter shifts in the posterior direction had the greatest effect on the lung V20, heart V25, and mean and maximum doses to the lung and the heart. The DVH results show that the ipsilateral lung V20 tolerance was exceeded in 58% of the patients after 1 cm posterior shifts. Similarly, the heart V25 tolerance was exceeded after 1 cm antero-posterior and left-right isocentric shifts in 70% of the patients. The baseline NTCPs for radiation-induced pneumonitis ranged from 0.73% to 3.4%, with a mean value of 1.7%. The maximum reported NTCP for radiation-induced pneumonitis was 5.8% (mean 2.6%) after a 1 cm posterior isocentric shift. The NTCP for excess cardiac mortality was 0% in all patients (n=12) before and after the set-up error simulations. Conclusions: Set-up errors in left-sided breast cancer patients have a statistically significant impact on the lung NTCPs and DVH indices. However, with a central lung distance of 3 cm or less (CLD < 3 cm) and a maximum heart distance of 1.5 cm or less (MHD < 1.5 cm), the treatment plans could tolerate set-up errors of up to 1 cm without any change in the NTCP to the heart.
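As a rough sketch of how an LKB NTCP value is obtained from a DVH (the textbook gEUD/probit form with illustrative lung parameters, not the planning system's configured values or the study's plan data):

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(bin_doses, bin_volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    bin_doses: dose per bin (Gy); bin_volumes: fractional organ volume
    per bin; td50, m, n: organ-specific LKB parameters."""
    v = np.asarray(bin_volumes, float)
    v = v / v.sum()                               # normalise volumes
    d = np.asarray(bin_doses, float)
    geud = np.sum(v * d ** (1.0 / n)) ** n        # generalised EUD
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)                            # probit dose-response

# Toy lung DVH (30% of volume at 5 Gy, 50% at 12 Gy, 20% at 22 Gy) with
# illustrative pneumonitis parameters
ntcp = lkb_ntcp([5, 12, 22], [0.3, 0.5, 0.2], td50=24.5, m=0.18, n=0.87)
print(f"NTCP = {ntcp:.2%}")
```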
Abstract:
OBJECTIVE Corneal confocal microscopy is a novel diagnostic technique for the detection of nerve damage and repair in a range of peripheral neuropathies, in particular diabetic neuropathy. Normative reference values are required to enable clinical translation and wider use of this technique. We have therefore undertaken a multicenter collaboration to provide worldwide age-adjusted normative values of corneal nerve fiber parameters. RESEARCH DESIGN AND METHODS A total of 1,965 corneal nerve images from 343 healthy volunteers were pooled from six clinical academic centers. All subjects underwent examination with the Heidelberg Retina Tomograph corneal confocal microscope. Images of the central corneal subbasal nerve plexus were acquired by each center using a standard protocol and analyzed by three trained examiners using manual tracing and semiautomated software (CCMetrics). Age trends were established using simple linear regression, and normative corneal nerve fiber density (CNFD), corneal nerve fiber branch density (CNBD), corneal nerve fiber length (CNFL), and corneal nerve fiber tortuosity (CNFT) reference values were calculated using quantile regression analysis. RESULTS There was a significant linear age-dependent decrease in CNFD (-0.164 no./mm² per year for men, P < 0.01, and -0.161 no./mm² per year for women, P < 0.01). There was no change with age in CNBD (0.192 no./mm² per year for men, P = 0.26, and -0.050 no./mm² per year for women, P = 0.78). CNFL decreased in men (-0.045 mm/mm² per year, P = 0.07) and women (-0.060 mm/mm² per year, P = 0.02). CNFT increased with age in men (0.044 per year, P < 0.01) and women (0.046 per year, P < 0.01). Height, weight, and BMI did not influence the 5th percentile normative values for any corneal nerve parameter. CONCLUSIONS This study provides robust worldwide normative reference values for corneal nerve parameters to be used in research and clinical practice in the study of diabetic and other peripheral neuropathies.
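The two estimation steps can be sketched on synthetic data (made-up ages and CNFD values, with statsmodels assumed as the toolkit; the pooled normative dataset is not reproduced): an ordinary least squares fit gives the age trend, and quantile regression at the 5th percentile gives the lower normative bound.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the pooled data: CNFD declining slowly with age
rng = np.random.default_rng(1)
age = rng.uniform(20, 80, 300)
cnfd = 35.0 - 0.16 * age + rng.normal(0, 4, 300)
df = pd.DataFrame({"age": age, "cnfd": cnfd})

ols = smf.ols("cnfd ~ age", data=df).fit()             # linear age trend
q05 = smf.quantreg("cnfd ~ age", data=df).fit(q=0.05)  # 5th percentile bound

print(f"OLS slope: {ols.params['age']:.3f} per year")
print(f"5th percentile at age 50: {q05.predict(pd.DataFrame({'age': [50]}))[0]:.1f}")
```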
Abstract:
State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
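A minimal sketch of the KDE-based representation (toy feature vectors, with the Kullback-Leibler divergence standing in as one member of the Csiszár f-divergence family; the paper's manifold kernels and dimensionality reduction are not reproduced):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence_kde(set_a, set_b, n_samples=5000):
    """Monte Carlo estimate of KL(p_a || p_b), where each image set is
    represented by a Gaussian kernel density estimate over its feature
    vectors (rows are samples, columns are feature dimensions)."""
    p = gaussian_kde(set_a.T)        # gaussian_kde expects (dims, samples)
    q = gaussian_kde(set_b.T)
    x = p.resample(n_samples)        # draw samples from p_a
    return float(np.mean(np.log(p(x)) - np.log(q(x))))

rng = np.random.default_rng(0)
set_a = rng.normal(0.0, 1.0, size=(200, 3))   # toy "image set" features
set_b = rng.normal(0.5, 1.0, size=(200, 3))
print(kl_divergence_kde(set_a, set_b))
```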