860 results for two-stage sampling


Relevance:

100.00%

Publisher:

Abstract:

This paper seeks to identify and quantify the sources of the lagging productivity in Singapore's retail sector reported in the Economic Strategies Committee 2010 report. A two-stage analysis is adopted. In the first stage, the Malmquist productivity index is employed, which provides measures of productivity change, technological change and efficiency change. In the second stage, technical efficiency estimates are regressed against explanatory variables using a truncated regression model. Technical efficiency was attributed to the quality of workers, while product assortment and competition negatively affected efficiency.
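The first-stage index above decomposes into efficiency change and technological change; a minimal sketch, assuming the four output-distance-function values are already available from a prior DEA step (all names and numbers are illustrative, not from the paper):

```python
import math

def malmquist(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """Decompose the Malmquist productivity index (MI) into efficiency
    change (EC) and technological change (TC), given output distance
    function values evaluated in periods t and t+1."""
    # Efficiency change: catching up to the frontier between periods.
    ec = d_t1_xt1 / d_t_xt
    # Technological change: geometric mean of the frontier shift
    # measured at the period-t and period-t+1 observations.
    tc = math.sqrt((d_t_xt1 / d_t1_xt1) * (d_t_xt / d_t1_xt))
    mi = ec * tc  # MI > 1 indicates productivity growth
    return mi, ec, tc
```

By construction `ec * tc` equals the usual geometric-mean form of the index.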


Spectrum sensing is considered one of the most important tasks in cognitive radio. A common assumption among current spectrum sensing detectors is the full presence or complete absence of the primary user within the sensing period. In reality, there are many situations where the primary user signal occupies only a portion of the observed window, so an assumed primary user duty cycle is not necessarily fulfilled. In this paper we show that the true detection performance can degrade from the assumed achievable values when the observed primary user exhibits a certain duty cycle. We therefore propose a two-stage detection method that incorporates the primary user duty cycle and enhances detection performance. The proposed detector improves the probability of detection at low duty cycles at the expense of a small decrease in performance at high duty cycles.
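The duty-cycle effect described above can be illustrated with a basic energy detector: when the primary user occupies only part of the sensing window, the average received energy, and hence detectability, drops. A minimal simulation sketch (parameters are hypothetical, not the paper's detector):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_statistic(x):
    """Average energy of the observed samples."""
    return np.mean(np.abs(x) ** 2)

def sense(noise_var, snr, duty_cycle, n=1000):
    """Simulate one sensing window in which the primary user is active
    for only a fraction `duty_cycle` of the n samples."""
    noise = rng.normal(scale=np.sqrt(noise_var), size=n)
    signal = np.zeros(n)
    k = int(duty_cycle * n)
    signal[:k] = rng.normal(scale=np.sqrt(snr * noise_var), size=k)
    return energy_statistic(noise + signal)
```

With unit noise variance and unit SNR, the expected statistic is `1 + duty_cycle`, so a 20% duty cycle yields far less energy headroom over noise than full occupancy.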


Topic modeling has been widely utilized in information retrieval, text mining and text classification. Most existing statistical topic modeling methods, such as LDA and pLSA, generate a term-based representation of a topic by selecting single words from the multinomial word distribution over that topic. There are two main shortcomings: firstly, popular or common words occur very often across different topics, which makes topics ambiguous to interpret; secondly, single words lack the coherent semantic meaning needed to accurately represent topics. To overcome these problems, in this paper we propose a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
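As an illustration of the second-stage idea, a toy pattern-mining pass can replace single top words with word pairs that co-occur frequently across documents; this is only a simplified stand-in for the paper's method (function name and support threshold are assumed):

```python
from itertools import combinations
from collections import Counter

def topic_patterns(topic_words, documents, min_support=2):
    """Keep pairs of a topic's top words that co-occur in at least
    `min_support` documents, yielding multi-word topic descriptors
    that are more discriminative than isolated words."""
    counts = Counter()
    vocab = set(topic_words)
    for doc in documents:
        present = sorted(vocab & set(doc))
        for pair in combinations(present, 2):
            counts[pair] += 1
    return [pair for pair, c in counts.items() if c >= min_support]
```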


Bovine intestine samples were dried in an atmospheric-pressure heat pump fluidized bed, at temperatures below and above the material's freezing point, in a rig equipped with a continuous monitoring system. The drying characteristics were investigated over the temperature range −10 to 25 °C and airflows of 1.5 to 2.5 m/s. Some experiments were conducted as single-temperature drying experiments and others as two-stage drying experiments employing two temperatures. An Arrhenius-type equation was used to interpret the influence of the drying air parameters on the effective diffusivity, calculated with the method of slopes in terms of activation energy, and the diffusivity was found to be sensitive to temperature. The effective diffusion coefficient of moisture transfer was determined by the Fickian method, assuming uni-dimensional moisture movement, for both moisture removal by evaporation alone and by combined sublimation and evaporation. Correlations expressing the effective moisture diffusivity as a function of drying temperature are reported.
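The Arrhenius-type relation mentioned above can be sketched directly; the pre-exponential factor and activation energy below are placeholders, not the paper's fitted values:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def d_eff(d0, ea, temp_c):
    """Arrhenius-type effective moisture diffusivity:
    D_eff = d0 * exp(-ea / (R * T)), with d0 in m^2/s,
    ea (activation energy) in J/mol, and T in kelvin."""
    t_k = temp_c + 273.15
    return d0 * math.exp(-ea / (R * t_k))
```

Diffusivity rises with temperature, consistent with the reported temperature sensitivity.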


In a classification problem we typically face two challenging issues: the diverse characteristics of negative documents, and the presence of many negative documents that are close to positive documents. It is therefore hard for a single classifier to clearly separate incoming documents into classes. This paper proposes a novel gradual problem-solving approach that creates a two-stage classifier. The first stage identifies reliable negatives (negative documents with weak positive characteristics). It concentrates on minimizing the number of false negative documents (recall-oriented). We use Rocchio, an existing recall-based classifier, for this stage. The second stage is a precision-oriented “fine tuning” that concentrates on minimizing the number of false positive documents by applying pattern (statistical phrase) mining techniques. In this stage a pattern-based scoring is followed by threshold setting (thresholding). Experiments show that our statistical-phrase-based two-stage classifier is promising.
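A minimal sketch of the first, recall-oriented stage using Rocchio centroids over bag-of-words-style vectors (the weights, threshold and data are hypothetical, not the paper's settings):

```python
import numpy as np

def rocchio_prototype(pos, neg, alpha=1.0, beta=0.75):
    """Rocchio prototype of the positive class: the centroid of the
    positive documents minus a scaled centroid of the negatives."""
    return alpha * pos.mean(axis=0) - beta * neg.mean(axis=0)

def reliable_negatives(unlabeled, prototype, threshold=0.0):
    """Stage 1: flag documents scoring below `threshold` against the
    positive prototype as reliable negatives (recall-oriented, so the
    threshold is kept low to avoid false negatives)."""
    scores = unlabeled @ prototype
    return scores < threshold
```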


Network reconfiguration after a complete blackout of a power system is an essential step in power system restoration. A new node importance evaluation method is presented based on the concept of regret, and maximisation of the average importance of a path is employed as the objective for finding the optimal restoration path. Then, a two-stage method is presented to optimise the network reconfiguration strategy. Specifically, the restoration sequence of generating units is first optimised so as to maximise the restored generation capacity; then the optimal restoration path is selected to restore the generating nodes concerned, and issues such as selecting a serial or parallel restoration mode and handling the reconnection failure of a transmission line are then considered. Both restoration path selection and skeleton-network determination are implemented together in the proposed method, which overcomes the shortcoming of separate decision-making in existing methods. Finally, the New England 10-unit 39-bus power system and the Guangzhou power system in South China are employed to demonstrate the basic features of the proposed method.
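The path-selection objective, maximising the average importance of the nodes along a path, can be sketched with a toy depth-first enumeration; the importance values below are hypothetical placeholders, not the paper's regret-based scores:

```python
def best_path(adj, importance, start, goal):
    """Enumerate simple paths from start to goal in a small directed
    graph and return the one maximising average node importance."""
    best, best_avg = None, float("-inf")
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            avg = sum(importance[n] for n in path) / len(path)
            if avg > best_avg:
                best, best_avg = path, avg
            continue
        for nxt in adj.get(node, []):
            if nxt not in path:  # keep paths simple (no revisits)
                stack.append((nxt, path + [nxt]))
    return best, best_avg
```

Exhaustive enumeration is only viable for toy graphs; a real restoration tool would need a scalable search.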


This paper presents the design process utilised for producing a two-stage isolated Unity Power Factor (UPF) rectifier. The important yet less intuitive aspects of the design process are highlighted to help simplify the design of a power converter that meets future UPF standards. Two converter designs are presented: a 200 W converter utilising a critical conduction controller and a 750 W converter based around a continuous conduction controller. Both designs were based on the requirements of an audio power amplifier, but the process applies equally to a range of applications.
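For reference, the quantity a UPF rectifier drives toward unity is the true power factor, the ratio of real to apparent power; a one-line sketch with illustrative numbers:

```python
def power_factor(real_power_w, v_rms, i_rms):
    """True power factor: real power divided by apparent power
    (V_rms * I_rms).  A UPF rectifier aims for a value near 1.0."""
    return real_power_w / (v_rms * i_rms)
```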


In a tag-based recommender system, the multi-dimensional correlation should be modeled effectively to find quality recommendations. Recently, a few researchers have used tensor models in recommendation to represent and analyze the latent relationships inherent in multi-dimensional data. A common approach is to build the tensor model, decompose it and then directly use the reconstructed tensor to generate recommendations based on the maximum values of the tensor elements. To improve accuracy and scalability, we propose an implementation of the n-mode block-striped (matrix) product for scalable tensor reconstruction, and probabilistically rank the candidate items generated from the reconstructed tensor. Testing on real-world datasets demonstrates that the proposed method outperforms the benchmark methods in terms of recommendation accuracy and scalability.
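The n-mode tensor-matrix product underlying such reconstructions can be sketched via unfolding; this small implementation is a plain (non-block-striped) version with assumed shapes, not the paper's scalable variant:

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """n-mode product: multiply the mode-`mode` fibers of `tensor`
    by `matrix`, so dimension `mode` changes from tensor.shape[mode]
    to matrix.shape[0]."""
    t = np.moveaxis(tensor, mode, 0)          # bring the mode to the front
    shape = t.shape
    unfolded = t.reshape(shape[0], -1)        # mode-n unfolding
    result = matrix @ unfolded                # multiply every fiber
    folded = result.reshape((matrix.shape[0],) + shape[1:])
    return np.moveaxis(folded, 0, mode)       # restore axis order
```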


Membrane proteins play important roles in many biochemical processes and are also attractive drug discovery targets for various diseases. The elucidation of membrane protein types provides clues for understanding the structure and function of proteins. Recently we developed a novel system for predicting protein subnuclear localizations. In this paper, we propose a simplified version of our system for predicting membrane protein types directly from primary protein structures, which incorporates amino acid classifications and physicochemical properties into a general form of pseudo-amino acid composition. In this simplified system, we design a two-stage multi-class support vector machine combined with a two-step optimal feature selection process, which proves very effective in our experiments. The performance of the present method is evaluated on two benchmark datasets consisting of five types of membrane proteins. The overall prediction accuracies for the five types are 93.25% and 96.61% via the jackknife test and the independent dataset test, respectively. These results indicate that our method is effective and valuable for predicting membrane protein types. A web server for the proposed method is available at http://www.juemengt.com/jcc/memty_page.php
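The two-stage multi-class idea, first assign a coarse group, then discriminate within it, can be sketched structurally as follows; here each stage is a nearest-centroid classifier standing in for the paper's SVMs, and the grouping and data are toy values:

```python
import numpy as np

class TwoStageClassifier:
    """Structural sketch of two-stage multi-class classification:
    stage 1 picks a coarse group, stage 2 picks a class within it."""

    def __init__(self, groups):
        self.groups = groups  # fine label -> coarse group id

    def fit(self, X, y):
        y = np.asarray(y)
        self.class_centroids = {c: X[y == c].mean(axis=0) for c in set(y)}
        self.group_centroids = {}
        for g in set(self.groups.values()):
            members = [v for c, v in self.class_centroids.items()
                       if self.groups[c] == g]
            self.group_centroids[g] = np.mean(members, axis=0)
        return self

    def predict_one(self, x):
        # Stage 1: coarse group by nearest group centroid.
        g = min(self.group_centroids,
                key=lambda k: np.linalg.norm(x - self.group_centroids[k]))
        # Stage 2: fine class among that group's members only.
        members = [c for c in self.class_centroids if self.groups[c] == g]
        return min(members,
                   key=lambda c: np.linalg.norm(x - self.class_centroids[c]))
```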


We address the issue of complexity in vector quantization (VQ) of wide-band speech LSF (line spectrum frequency) parameters. The recently proposed switched split VQ (SSVQ) method provides better rate-distortion (R/D) performance than the traditional split VQ (SVQ) method, even while requiring lower computational complexity, but at the expense of much higher memory. We develop the two-stage SVQ (TsSVQ) method, by which we gain both memory and computational advantages and still retain good R/D performance. The proposed TsSVQ method uses a full-dimensional quantizer in its first stage to exploit all the higher-dimensional coding advantages, and then uses an SVQ method to quantize the residual vector in the second stage so as to reduce complexity. We also develop a transform domain residual coding method in this two-stage architecture that further reduces the computational complexity. To design an effective residual codebook for the second stage, variance normalization of Voronoi regions is carried out, which leads to two new methods, referred to as normalized two-stage SVQ (NTsSVQ) and normalized two-stage transform domain SVQ (NTsTrSVQ). These two new methods have complementary strengths and hence are combined in a switched VQ mode, which leads to a further improvement in R/D performance while retaining the low complexity requirement. We evaluate the performance of the new methods for wide-band speech LSF parameter quantization and show their advantages over the established SVQ and SSVQ methods.
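The two-stage split structure can be sketched with toy codebooks: stage 1 quantizes the full vector, stage 2 splits the residual and quantizes each half independently, trading a little distortion for far smaller codebooks (codebooks and vectors below are illustrative):

```python
import numpy as np

def nearest(codebook, v):
    """Index of the codevector closest to v (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

def ts_svq_encode(v, cb1, cb2_lo, cb2_hi):
    """Stage 1: full-dimensional quantization of v.
    Stage 2: split the residual in half; quantize each part."""
    i1 = nearest(cb1, v)
    r = v - cb1[i1]
    half = len(v) // 2
    return i1, nearest(cb2_lo, r[:half]), nearest(cb2_hi, r[half:])

def ts_svq_decode(idx, cb1, cb2_lo, cb2_hi):
    i1, i_lo, i_hi = idx
    return cb1[i1] + np.concatenate([cb2_lo[i_lo], cb2_hi[i_hi]])
```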


A two-stage Gifford-McMahon cycle cryorefrigerator operating at 20 K is described. This refrigerator uses a very simple ‘spool valve’ and a modified indigenous compressor to compress helium gas. The cryorefrigerator reaches a lowest temperature of 15.5 K; it takes ≈ 50 min to reach 20 K and the cooling capacity is ≈ 2.5 W at 25 K. The cool-down and load characteristics are presented in graphical form. The effects of changing the operating pressure ratio and the second-stage regenerator matrix size are also reported. Pressure-volume (P-V) diagrams obtained at various temperatures indicate that P-V losses form the major fraction of the total losses, and this becomes more pronounced as the temperature decreases. A heat balance analysis shows the relative magnitudes of the various losses.


This thesis examines the feasibility of a forest inventory method based on two-phase sampling in estimating forest attributes at the stand or substand levels for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data consisting of remote sensing imagery or other geographic information and field measurements. Auxiliary data are utilized as first-phase data for covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V).

Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features to the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages compared with a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone.
The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverse values of their mean square errors. Despite the fact that the plot-level estimation accuracy in two-phase sampling inventory can be improved in many ways, the accuracy of forest estimates based mainly on single-view satellite and aerial imagery is a relatively poor basis for making stand-level management decisions.
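The inverse-MSE weighting described for the best combined estimate is straightforward to sketch (the numbers in the test are hypothetical):

```python
def combine_inverse_mse(estimates, mses):
    """Combine estimates from several data sources, weighting each
    estimate by the inverse of its mean square error, so the most
    accurate source contributes the most."""
    weights = [1.0 / m for m in mses]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```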


Genotyping in DNA pools reduces the cost and the time required to complete large genotyping projects. The aim of the present study was to evaluate pooling as part of a strategy for fine mapping in regions of significant linkage. Thirty-nine single nucleotide polymorphisms (SNPs) were analyzed in two genomic DNA pools of 384 individuals each, and the results compared with data obtained by typing all individuals used in the pools. There were no significant differences between using data from 2 or from 8 heterozygous individuals to correct frequency estimates for unequal allelic amplification. After correction, the mean difference between estimates from the genomic pool and individual allele frequencies was 0.033. A major limitation of the use of DNA pools is the time and effort required to carefully adjust the concentration of each individual DNA sample before mixing aliquots. Pools were also constructed by combining DNA after Multiple Displacement Amplification (MDA). The MDA pools gave similar results to pools constructed after careful DNA quantitation (mean difference from individual genotyping 0.040), and MDA provides a rapid method to generate pools suitable for some applications. Pools provide a rapid and cost-effective screen to eliminate SNPs that are not polymorphic in a test population and can detect minor allele frequencies as low as 1% in the pooled samples. With current levels of accuracy, pooling is best suited to an initial screen in the SNP validation process that can provide high-throughput comparisons between cases and controls to prioritize SNPs for subsequent individual genotyping.
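The correction for unequal allelic amplification can be sketched as follows: the mean A/B signal ratio k observed in known heterozygotes (where the true ratio is 1) is used to deflate the pooled A signal (all values are illustrative, not the study's data):

```python
def corrected_allele_freq(pool_a, pool_b, het_ratios):
    """Estimate the frequency of allele A in a DNA pool, correcting
    for unequal allelic amplification.  `het_ratios` are the A/B
    signal ratios measured in individual heterozygotes; their mean k
    captures the amplification bias, which is divided out of the
    pooled A signal before computing the frequency."""
    k = sum(het_ratios) / len(het_ratios)
    return (pool_a / k) / (pool_a / k + pool_b)
```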