930 results for selection methods


Relevance:

60.00%

Publisher:

Abstract:

Infrared selection is a potentially powerful way to identify heavily obscured AGNs missed by even the deepest X-ray surveys. Using a 24 μm-selected sample in GOODS-S, we test the reliability and completeness of three infrared AGN selection methods: (1) IRAC color-color selection, (2) IRAC power-law selection, and (3) IR-excess selection; we also evaluate a number of IR-excess approaches. We find that the vast majority of non-power-law IRAC color-selected AGN candidates in GOODS-S have colors consistent with those of star-forming galaxies. Contamination by star-forming galaxies is most prevalent at low 24 μm flux densities (~100 μJy) and high redshifts (z ~ 2), but the fraction of potential contaminants is still high (~50%) at 500 μJy, the highest flux density probed reliably by our survey. AGN candidates selected via a simple, physically motivated power-law criterion ("power-law galaxies," or PLGs), however, appear to be reliable. We confirm that the IR-excess methods successfully identify a number of AGNs, but we also find that such samples may be significantly contaminated by star-forming galaxies. Adding only the secure Spitzer-selected PLG, color-selected, IR-excess, and radio/IR-selected AGN candidates to the deepest X-ray-selected AGN samples directly increases the number of known X-ray AGNs (84) by 54%-77%, and implies an increase in the number of 24 μm-detected AGNs of 71%-94%. Finally, we show that the fraction of MIR sources dominated by an AGN decreases with decreasing MIR flux density, but only down to f_24μm = 300 μJy. Below this limit, the AGN fraction levels out, indicating that a non-negligible fraction (~10%) of faint 24 μm sources (the majority of which are missed in the X-ray) are powered not by star formation but by the central engine. The fraction of all AGNs (regardless of their MIR properties) exceeds 15% at all 24 μm flux densities.
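The power-law criterion in (2) can be illustrated with a toy check: fit the slope α of f_ν ∝ ν^α across the mid-IR bands in log-log space and flag sources redder than a threshold. A minimal sketch, assuming an ordinary least-squares fit and an illustrative cut of α ≤ -0.5 (the function names and the exact threshold are our assumptions, not the paper's fitting procedure):

```python
import math

def powerlaw_slope(freqs, fluxes):
    """Least-squares slope alpha of log10(flux) vs log10(freq),
    i.e. the exponent in f_nu ∝ nu^alpha (illustrative fit only)."""
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(s) for s in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def is_plg_candidate(freqs, fluxes, alpha_max=-0.5):
    """Flag a source whose SED is a red power law (alpha <= alpha_max)."""
    return powerlaw_slope(freqs, fluxes) <= alpha_max
```

A source whose flux rises toward longer wavelengths (f_ν ∝ ν^-1) is flagged, while a flat SED is not.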

Relevance:

60.00%

Publisher:

Abstract:

Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend, among other factors, has made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: a) increase the efficiency of the portfolio optimization process, b) implement large-scale optimizations, and c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected return of the securities are integrated into the technique through Monte Carlo simulations. Instead of being represented as deterministic values, the expected returns are assigned simulated values based on their historical measures. The time series of the securities are fitted to a probability distribution that matches the time-series characteristics using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P500 securities as the base, 2000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. With Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the benchmark, the new greedy technique clearly outperforms the alternatives on samples of the S&P500 and the Russell 1000 securities.
The resulting improvements in performance are consistent among five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
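The greedy weighting idea can be sketched as repeatedly handing a small weight increment to whichever asset most improves an objective. This is a deliberately simplified stand-in, not the authors' algorithm: the mean-variance objective, step size and `risk_aversion` parameter below are our assumptions, and the IDCC-GARCH covariances, VaR criterion and simulated returns are not reproduced.

```python
def greedy_weights(mu, cov, steps=100, risk_aversion=1.0):
    """Greedily allocate portfolio weight in 1/steps increments.
    mu: expected returns per asset; cov: covariance matrix.
    Each increment goes to the asset that most improves the
    (toy) mean-variance objective: return - risk_aversion * variance."""
    n = len(mu)
    w = [0.0] * n
    step = 1.0 / steps

    def objective(weights):
        ret = sum(wi * mi for wi, mi in zip(weights, mu))
        var = sum(weights[i] * weights[j] * cov[i][j]
                  for i in range(n) for j in range(n))
        return ret - risk_aversion * var

    for _ in range(steps):
        best_i, best_val = 0, None
        for i in range(n):
            w[i] += step              # trial increment
            val = objective(w)
            w[i] -= step              # undo trial
            if best_val is None or val > best_val:
                best_i, best_val = i, val
        w[best_i] += step             # commit the best increment
    return w
```

With two assets of equal expected return, the greedy allocation tilts toward the lower-variance asset, as a mean-variance objective should.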

Relevance:

60.00%

Publisher:

Abstract:

Utility companies provide electricity to a large number of consumers and need an accurate forecast of next-day electricity demand. Any forecast error results in either reliability issues or increased costs for the company. Because of the widespread roll-out of smart meters, a large amount of high-resolution consumption data is now accessible that was not available in the past. This new data can be used to improve the load forecast and, as a result, increase the reliability and decrease the expenses of electricity providers. In this paper, a number of methods for improving the load forecast using smart meter data are discussed. In these methods, consumers are first divided into a number of clusters. Then a neural network is trained for each cluster, and the forecasts of these networks are added together to form the prediction for the aggregated load. It is demonstrated that clustering increases the forecast accuracy significantly. The criteria used for grouping consumers play an important role in this process. In this work, three different feature selection methods for clustering consumers are explained, and the effect of feature extraction methods on forecast error is investigated.
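The cluster-then-aggregate structure can be sketched as follows. This is a simplified stand-in under stated assumptions: consumers are bucketed by average load rather than by a proper clustering algorithm, and each cluster gets a naive persistence forecast rather than the paper's trained neural network.

```python
def cluster_by_mean(profiles, n_clusters=2):
    """Toy clustering: sort consumers by average load and cut
    the sorted order into n_clusters equal-size buckets."""
    means = sorted((sum(p) / len(p), i) for i, p in enumerate(profiles))
    clusters = [[] for _ in range(n_clusters)]
    size = -(-len(profiles) // n_clusters)  # ceil division
    for rank, (_, i) in enumerate(means):
        clusters[rank // size].append(i)
    return clusters

def forecast_aggregate(profiles, clusters):
    """Forecast each cluster's load (naive: next value = last
    observed cluster total) and sum to the system level."""
    total = 0.0
    for members in clusters:
        cluster_load = [sum(profiles[i][t] for i in members)
                        for t in range(len(profiles[0]))]
        total += cluster_load[-1]
    return total
```

In the real pipeline, the per-cluster persistence step would be replaced by one trained network per cluster.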

Relevance:

60.00%

Publisher:

Abstract:

Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been fully investigated. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment ("relaxation" vs. "stress") are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Then three features (PDmean, PDmax and PDWalsh) are extracted from the preprocessed PD signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and subject self-evaluation, respectively. In addition, five different classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is implemented first in order to remove the eye blinks from the PD signal, and then a moving average window is utilized to obtain the representative value PDr for every one-second interval of PD. There are three main steps in the on-line affective assessment algorithm: preparation, feature-based decision voting and affective determination.
The final results show that the accuracies are 72.30% and 73.55% for the data subsets chosen using the two types of data selection methods (paired t-test and subject self-evaluation, respectively). In order to further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing is only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals to include in future automated real-time affective recognition systems, especially for detecting the "relaxation" vs. "stress" states.
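The on-line preprocessing steps (hard-threshold blink removal followed by a one-second moving-average PDr) can be sketched as below; the threshold value, hold-last-sample repair, and sampling layout are illustrative assumptions, not the paper's settings.

```python
def remove_blinks(pd_signal, threshold=2.0):
    """Hard-threshold blink removal: samples below `threshold`
    (pupil occluded during a blink) are replaced by the most
    recent valid sample. Threshold is an assumed value."""
    cleaned, last = [], pd_signal[0]
    for v in pd_signal:
        if v >= threshold:
            last = v
        cleaned.append(last)
    return cleaned

def pdr_per_second(pd_signal, samples_per_second):
    """Moving-average representative value PDr for each
    one-second window of the cleaned PD signal."""
    out = []
    for start in range(0, len(pd_signal), samples_per_second):
        window = pd_signal[start:start + samples_per_second]
        out.append(sum(window) / len(window))
    return out
```

The PDr series produced this way would feed the feature-based decision voting and affective determination steps.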

Relevance:

60.00%

Publisher:

Abstract:

Lung cancer is a leading cause of cancer-related death worldwide. Early diagnosis of cancer has been demonstrated to be greatly helpful for curing the disease effectively. Microarray technology provides a promising approach to exploiting gene profiles for cancer diagnosis. In this study, the authors propose a gene expression programming (GEP)-based model to predict lung cancer from microarray data. The authors use two gene selection methods to extract the significant lung cancer related genes, and accordingly propose different GEP-based prediction models. Prediction performance evaluations and comparisons between the authors' GEP models and three representative machine learning methods (support vector machine, multi-layer perceptron and radial basis function neural network) were conducted thoroughly on real microarray lung cancer datasets. Reliability was assessed by cross-data-set validation. The experimental results show that the GEP model using fewer feature genes outperformed the other models in terms of accuracy, sensitivity, specificity and area under the receiver operating characteristic curve. It is concluded that the GEP model is a better solution to lung cancer prediction problems.
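Gene selection of the kind described can be illustrated with a simple two-sample t-style ranking that keeps the top-k most class-discriminative genes. This is a generic stand-in: the scoring function and its names are our assumptions, not the authors' specific selection methods or the GEP predictor.

```python
import math

def t_score(values_a, values_b):
    """Two-sample t-style statistic used as a toy gene-ranking
    criterion (Welch-style denominator, sample variances)."""
    ma = sum(values_a) / len(values_a)
    mb = sum(values_b) / len(values_b)
    va = sum((x - ma) ** 2 for x in values_a) / (len(values_a) - 1)
    vb = sum((x - mb) ** 2 for x in values_b) / (len(values_b) - 1)
    return (ma - mb) / math.sqrt(va / len(values_a) + vb / len(values_b))

def select_genes(expr, labels, k=2):
    """Rank genes (columns of expr) by |t_score| between the two
    classes (labels 1 vs 0) and keep the top k gene indices."""
    n_genes = len(expr[0])

    def score(g):
        a = [row[g] for row, y in zip(expr, labels) if y == 1]
        b = [row[g] for row, y in zip(expr, labels) if y == 0]
        return abs(t_score(a, b))

    return sorted(range(n_genes), key=score, reverse=True)[:k]
```

With a toy matrix where gene 0 separates the classes strongly and gene 1 is noise, the selector keeps the discriminative genes.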

Relevance:

60.00%

Publisher:

Abstract:

Conventional web search engines are centralised: a single entity crawls and indexes the documents selected for future retrieval, and maintains the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine, from crawling and indexing documents to query processing, across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval model that route the user's information need, formulated as a query, to distributed resources and merge the retrieved result lists into a final list. P2P-IR networks are one form of federated search in routing queries and merging results among participating peers. The query is propagated through disseminated nodes to reach the peers that are most likely to contain relevant documents; the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (namely, the consumer).
However, query routing is considered one of the major challenges and a critical part of P2P-IR networks, as relevant peers might be missed through low-quality peer selection during query routing, inevitably leading to less effective retrieval results. This motivates this thesis to study and propose query routing techniques to improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into similar semantic clusters, where each such semantic cluster is managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level as content representations gathered from cooperative peers to propose a query routing approach called Inverted PeerCluster Index (IPI), which simulates the conventional inverted index of a centralised corpus to organise the statistics of peers' terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of using conventional Information Retrieval models as peer selection approaches, where each peer can be considered as a big document of documents. The experimental evaluation shows competitive and significant results, and shows that document retrieval methods are very effective for peer selection, which reinforces the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results compared with state-of-the-art resource selection methods and competitive results relative to corresponding classification-based approaches. Finally, I propose reputation-based query routing approaches that exploit the idea, from social community networks, of providing feedback on a specific item and managing it for future decision-making.
The system monitors users' behaviours when they click or download documents from the final ranked list as implicit feedback, and mines the given information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback information (i.e., positive feedback on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results in almost all measurement metrics, with an improvement of more than 56% over baseline approaches. Thus, based on these results, if one were to choose a single technique, the reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
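The reputation-based routing idea (implicit click/download feedback accumulated per peer, then used to rank peers for routing) can be sketched as below; the feedback weights and flat score table are illustrative assumptions, not the thesis's actual data structure.

```python
def update_reputation(rep, peer, clicks, downloads):
    """Accumulate implicit feedback for a peer: clicks and
    downloads on its returned documents count as positive
    signals (the 1.0/2.0 weights are assumed, not from the thesis)."""
    rep[peer] = rep.get(peer, 0.0) + 1.0 * clicks + 2.0 * downloads
    return rep

def route_query(rep, k=2):
    """Rank peers by accumulated reputation and route the
    query to the top k peers."""
    return sorted(rep, key=rep.get, reverse=True)[:k]
```

In a real system the scores would be maintained per query topic and guarded against noisy feedback; here they simply bias routing toward historically useful peers.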

Relevance:

40.00%

Publisher:

Abstract:

The aim of this paper is to aid researchers in selecting appropriate qualitative methods in order to develop and improve future studies in the field of emotional design. These methods include observations, think-aloud protocols, questionnaires, diaries and interviews. Based on the authors' experiences, it is proposed that the methods under review can be successfully used to collect data on emotional responses to evaluate user-product relationships. This paper reviews the methods and discusses their suitability, advantages and challenges in relation to design and emotion studies. Furthermore, the paper outlines the potential impact of technology on the application of these methods, discusses their implications for emotion research, and concludes with recommendations for future work in this area.

Relevance:

40.00%

Publisher:

Abstract:

Under pressure from both the ever-increasing level of market competition and the global financial crisis, clients in the consumer electronics (CE) industry are keen to understand how to choose the most appropriate procurement method and thereby improve their competitiveness. Four rounds of a Delphi questionnaire survey were conducted with 12 experts in order to identify the most appropriate procurement method in the Hong Kong CE industry. Five key selection criteria in the CE industry are highlighted: product quality, capability, price competition, flexibility and speed. This study also revealed that product quality was the most important criterion for "First type used commercially" and "Major functional improvements" projects. For "Minor functional improvements" projects, price competition was the most crucial factor to consider during the PP selection. These research findings provide owners with useful insights for selecting procurement strategies.

Relevance:

40.00%

Publisher:

Abstract:

Fisheries managers are becoming increasingly aware of the need to quantify all forms of harvest, including that by recreational fishers. This need has been driven by both a growing recognition of the potential impact that noncommercial fishers can have on exploited resources and the requirement, in many jurisdictions, to allocate catch limits between different sectors of the wider fishing community. Marine recreational fishers are rarely required to report any of their activity, and some form of survey technique is usually required to estimate levels of recreational catch and effort. In this review, we describe and discuss studies that have attempted to estimate the nature and extent of recreational harvests of marine fishes in New Zealand and Australia over the past 20 years. We compare studies by method to show how circumstances dictate their application and to highlight recent developments that other researchers may find of use. Although there has been some convergence of approach, we suggest that context is an important consideration, and many of the techniques discussed here have been adapted to suit local conditions and to address recognized sources of bias. Much of this experience, along with novel improvements to existing approaches, has been reported only in "gray" literature because of an emphasis on providing estimates for immediate management purposes. This paper brings much of that work together for the first time, and we discuss how others might benefit from our experience.

Relevance:

40.00%

Publisher:

Abstract:

The control of shapes of nanocrystals is crucial for using them as building blocks for various applications. In this paper, we present a critical overview of the issues involved in shape-controlled synthesis of nanostructures. In particular, we focus on the mechanisms by which anisotropic structures of high-symmetry materials (fcc crystals, for instance) could be realized. Such structures require a symmetry-breaking mechanism to be operative that typically leads to selection of one of the facets/directions for growth over all the other symmetry-equivalent crystallographic facets. We show how this selection could arise for the growth of one-dimensional structures leading to ultrafine metal nanowires and for the case of two-dimensional nanostructures where the layer-by-layer growth takes place at low driving forces leading to plate-shaped structures. We illustrate morphology diagrams to predict the formation of two-dimensional structures during wet chemical synthesis. We show the generality of the method by extending it to predict the growth of plate-shaped inorganics produced by a precipitation reaction. Finally, we present the growth of crystals under high driving forces that can lead to the formation of porous structures with large surface areas.

Relevance:

40.00%

Publisher:

Abstract:

Variety selection in perennial pasture crops involves identifying the best varieties from data collected at multiple harvest times in field trials. For accurate selection, the statistical methods used to analyse such data need to account for the spatial and temporal correlation typically present. This paper provides an approach for analysing multi-harvest data from variety selection trials in which there may be a large number of harvest times. Methods are presented for modelling the variety-by-harvest effects while accounting for the spatial and temporal correlation between observations. These methods provide an improvement in model fit compared to separate analyses for each harvest, and provide insight into variety-by-harvest interactions. The approach is illustrated using two traits from a lucerne variety selection trial. The proposed method provides variety predictions allowing for the natural sources of variation and correlation in multi-harvest data.
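The flavour of variety predictions that "borrow strength" across a trial can be illustrated with a toy shrinkage estimator: each variety's mean over harvests is pulled toward the grand mean, loosely mimicking the behaviour of mixed-model (BLUP-style) predictions. The `shrink` factor is an assumed constant here, not an estimated variance ratio, and spatial/temporal correlation is not modelled.

```python
def shrunken_variety_means(data, shrink=0.5):
    """Toy stand-in for mixed-model variety predictions.
    data: {variety_name: [yield at each harvest, ...]}.
    Each variety mean is shrunk toward the grand mean by `shrink`."""
    variety_means = {name: sum(v) / len(v) for name, v in data.items()}
    grand = sum(variety_means.values()) / len(variety_means)
    return {name: grand + (1.0 - shrink) * (m - grand)
            for name, m in variety_means.items()}
```

Varieties with extreme raw means are pulled part-way back toward the overall trial mean, which is the qualitative effect proper mixed-model predictions achieve with estimated variance components.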

Relevance:

40.00%

Publisher:

Abstract:

The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer's disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to improving the diagnosis of AD and its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two human factors have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low cost and without any side effects. The experimental results were very satisfactory and promising for early diagnosis and classification of AD patients.
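One common non-linear feature of the kind mentioned is the Katz fractal dimension of a 1-D signal; the sketch below implements that definition. The paper does not specify which fractal-dimension estimator it uses, so this particular choice is our assumption.

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal, treating samples
    as unit-spaced points: FD = log10(n) / (log10(n) + log10(d/L)),
    where L is the total curve length, d the maximum distance from
    the first point, and n the number of steps."""
    n = len(signal) - 1
    lengths = [math.hypot(1, signal[i + 1] - signal[i]) for i in range(n)]
    L = sum(lengths)
    d = max(math.hypot(i, signal[i] - signal[0])
            for i in range(1, len(signal)))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))
```

A straight line has FD = 1, while a jagged signal yields FD > 1, so the feature captures waveform irregularity.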

Relevance:

40.00%

Publisher:

Abstract:

To increase the usability of a medical device, the usability engineering standards IEC 60601-1-6 and IEC 62366 suggest incorporating user information into the design and development process. However, practice shows that integrating user information, and the related investigation of users known as user research, is difficult in the field of medical devices. In particular, identifying the most appropriate user research methods is a difficult process. This difficulty results from the complexity of the medical device industry, especially with respect to regulations and standards, the characteristics of this market, and the broad range of potential user research methods available from various research disciplines. Against this background, this study aimed at guiding designers and engineers in selecting effective user research methods according to their stage in the design process. Two approaches are described which reduce the complexity of method selection by summarizing the large number of methods into homogeneous method classes. These approaches are closely connected to the design phases characteristic of the medical device industry and therefore make it possible to select design-phase-specific user research methods. In the first approach, potential user research methods are classified according to their characteristics in the design process. The second approach summarizes methods according to their similarity in data collection techniques and provides an additional linkage to design phase characteristics. Both approaches have been tested in practice, and the results show that both facilitate user research method selection. © 2009 Springer-Verlag.