944 results for Selection methods
Abstract:
Trenchless methods have been considered a viable solution for pipeline projects in urban areas. Their applicability in pipeline projects is expected to increase with rapid advancements in technology and emerging concerns regarding the social costs of trenching methods. Selecting an appropriate project delivery system (PDS) is key to the success of trenchless projects. To ensure project success, the selected delivery system should be tailored to the specific characteristics of the trenchless project and the owner's needs, since the effectiveness of project delivery systems differs with project characteristics and owner requirements. Because different trenchless methods have specific characteristics such as rate of installation, length of installation, and accuracy, the same project delivery system may not be equally effective for different methods. The intent of this paper is to evaluate the appropriateness of different PDSs for different trenchless methods. PDSs are examined through a structured decision-making process called the Fuzzy Delivery System Selection Model (FDSSM). The impacts of (a) the characteristics of trenchless projects and (b) owners' needs are incorporated into the FDSSM by collecting data through questionnaires deployed to professionals in the trenchless industry, in order to determine the importance of delivery system selection attributes for different trenchless methods, and then analyzing these data. The sensitivity of PDS rankings with respect to trenchless methods is considered in order to evaluate whether the same project delivery systems are equally effective across trenchless methods. The effectiveness of a PDS with respect to attributes is defined as follows: a project delivery system is most effective with respect to an attribute (e.g., ability to control growth in costs) if no other project delivery system is more effective than that PDS. The results of this study may assist trenchless project owners in selecting the appropriate PDS for the trenchless method chosen.
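One way to read the dominance-based effectiveness definition above is as a non-domination (Pareto) check over attribute scores. The following minimal sketch illustrates that reading; the PDS names and attribute scores are hypothetical placeholders, not data from the FDSSM study:

```python
# Sketch of the "most effective" (non-dominated) check described above.
# The PDS names and attribute scores below are hypothetical illustrations,
# not data or results from the FDSSM study.

def dominates(a, b):
    """True if `a` scores at least as well as `b` on every attribute and better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Scores per PDS on attributes such as cost-growth control, schedule control, quality.
scores = {
    "Design-Bid-Build": (0.6, 0.5, 0.7),
    "Design-Build":     (0.8, 0.7, 0.6),
    "CM-at-Risk":       (0.7, 0.7, 0.6),
}

# A PDS is "most effective" if no other PDS dominates it.
effective = [p for p, s in scores.items()
             if not any(dominates(t, s) for q, t in scores.items() if q != p)]
print(effective)  # the non-dominated delivery systems
```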
Abstract:
The principal aim of this paper is to examine the criteria that assist in the selection of biomass for energy generation in Brazil. To this end, the paper adopts case study and survey research methods, collecting information from four biomass energy case companies and soliciting opinions from experts. The data gathered are analysed alongside a wide range of related information, including selection criteria for biomass and their importance, energy policies in Brazil, the availability of biomass feedstock in Brazil and its characteristics, and the status quo of biomass-based energy in Brazil. The findings demonstrate that there are ten main criteria for biomass selection for energy generation in Brazil: geographical conditions, availability of biomass feedstock, demand satisfaction, feedstock costs and oil prices, energy content of biomass feedstock, business and economic growth, CO2 emissions of biomass end-products, effects on soil, water and biodiversity, job creation and local community support, and conversion technologies. Furthermore, the research found that these main criteria can be neither grouped on the basis of sustainability criteria nor ranked by importance, as there are relationships between criteria, such as cause-and-effect links, as well as some overlapping areas. Consequently, a more comprehensive consideration is advisable when selecting biomass.
Abstract:
This thesis introduces two related lines of study on the classification of hyperspectral images with nonlinear methods. First, it describes a quantitative and systematic evaluation, by the author, of each major component in a pipeline for classifying hyperspectral images (HSI) developed earlier in a joint collaboration [23]. The pipeline, with its novel use of nonlinear classification methods, has reached beyond the state of the art in classification accuracy on commonly used benchmarking HSI data [6], [13]. More importantly, it provides a clutter map with respect to a predetermined set of classes, addressing real application situations where image pixels do not necessarily fall into the predetermined set of classes to be identified, detected, or classified.
The particular components evaluated are (a) band selection with band-wise entropy spread, (b) feature transformation with spatial filters and spectral expansion with derivatives, (c) graph spectral transformation via locally linear embedding for dimension reduction, and (d) statistical ensemble for clutter detection. The quantitative evaluation of the pipeline verifies that these components are indispensable to high-accuracy classification.
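As an illustration of component (c), the following minimal sketch applies locally linear embedding to synthetic pixel spectra using scikit-learn; the data shape and parameter values are placeholders, not the settings used in the thesis:

```python
# Sketch of component (c): dimension reduction of HSI pixel spectra with
# locally linear embedding (LLE). Shapes and parameters are illustrative
# placeholders, not the thesis's configuration.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
pixels = rng.random((1000, 200))   # 1000 pixels x 200 spectral bands (synthetic)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=20)
embedded = lle.fit_transform(pixels)  # 1000 x 20 low-dimensional features
print(embedded.shape)
```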
Second, the work extends the HSI classification pipeline from a single HSI data cube to multiple HSI data cubes. Each cube, with feature variation, is to be classified into multiple classes. The main challenge is deriving the cube-wise classification from the pixel-wise classification. The thesis presents an initial attempt to address this challenge and discusses the potential for further improvement.
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm to address these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or a Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
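A compact sketch of the message workflow as summarized above, with lasso standing in for the per-subset selection and estimation steps; the data, tuning values, and sequential (rather than parallel) execution are illustrative simplifications, not the thesis's implementation:

```python
# Sketch of the "message" workflow: partition rows, select features per
# subset, take the median inclusion vector, re-estimate, average. Lasso is
# one possible per-subset method; all settings here are illustrative.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def message(X, y, n_subsets=5, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    rows = np.array_split(rng.permutation(len(y)), n_subsets)
    # 1) feature selection on each subset (run sequentially here for simplicity)
    inclusion = np.array([
        Lasso(alpha=alpha).fit(X[r], y[r]).coef_ != 0 for r in rows
    ])
    # 2) 'median' feature inclusion index: keep features selected by a majority
    selected = np.where(np.median(inclusion, axis=0) > 0)[0]
    # 3) estimate coefficients on each subset, then 4) average the estimates
    coefs = np.array([
        LinearRegression().fit(X[r][:, selected], y[r]).coef_ for r in rows
    ])
    return selected, coefs.mean(axis=0)

# Tiny synthetic example
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 50))
y = X[:, 0] * 3.0 - X[:, 3] * 2.0 + rng.normal(size=500)
sel, beta = message(X, y)
print(sel, beta)
```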
While sample space partitioning is useful in handling datasets with large sample sizes, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that, by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
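The following sketch illustrates the decorrelation idea behind DECO: premultiplying the data so that feature blocks handed to different workers are nearly orthogonal, then fitting each block independently. The ridge term, scaling constant, and per-worker lasso are illustrative assumptions rather than the exact published recipe:

```python
# Sketch of DECO-style decorrelation: premultiply (X, y) by an inverse
# square root of the row Gram matrix so that feature blocks are (nearly)
# orthogonal across workers. Constants, the ridge term, and the per-block
# fitting method are illustrative assumptions, not the exact algorithm.
import numpy as np
from numpy.linalg import inv
from scipy.linalg import sqrtm
from sklearn.linear_model import Lasso

def deco(X, y, n_workers=4, alpha=0.05, ridge=1.0):
    n, p = X.shape
    F = np.real(sqrtm(inv(X @ X.T / p + ridge * np.eye(n))))  # decorrelation matrix
    Xt, yt = F @ X, F @ y
    blocks = np.array_split(np.arange(p), n_workers)          # feature partition
    coef = np.zeros(p)
    for b in blocks:  # each block could be fitted on a separate worker
        coef[b] = Lasso(alpha=alpha).fit(Xt[:, b], yt).coef_
    return coef

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 400))
y = X[:, 0] * 2.0 + X[:, 100] * 1.5 + rng.normal(size=200)
print(np.nonzero(deco(X, y))[0][:10])  # indices of selected features
```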
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), that leverages both the DECO and message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
Abstract:
The GloboLakes project, a global observatory of lake responses to environmental change, aims to exploit current satellite missions and long remote-sensing archives to synoptically study multiple lake ecosystems, assess their current condition, reconstruct past trends in system trajectories, and assess lake sensitivity to multiple drivers of change. Here we describe the protocol for selecting lakes for the global observatory based upon remote-sensing techniques, starting from an initial pool of the 3721 largest lakes and reservoirs in the world, as listed in the Global Lakes and Wetlands Database. An 18-year archive of satellite data was used to create spatial and temporal filters for the identification of waterbodies that are appropriate for remote-sensing methods. Further criteria were applied and tested to ensure the candidate sites span a wide range of ecological settings and characteristics; in total, 960 lakes, lagoons, and reservoirs were selected. The methodology proposed here is applicable to new-generation satellites, such as the European Space Agency Sentinel series.
Abstract:
The problem of selecting suppliers/partners is a crucial part of the decision-making process for companies that intend to compete effectively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. It is nevertheless a critical process that significantly affects the operational performance of each company. In this work, through a literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Five sub-criteria were also included within these criteria. Thereafter, a survey was prepared and companies were contacted to indicate which factors carry more weight in their supplier selection decisions. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that supports decision making in the supplier/partner selection process.
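As an illustration of the linear weighting described above, a minimal SMART-style scoring sketch follows; the criterion weights and supplier ratings are hypothetical, not results from the authors' survey:

```python
# Sketch of a SMART-style linear weighting over the five criteria named
# above. Weights and supplier ratings are hypothetical illustrations,
# not data from the authors' survey.
weights = {"Quality": 0.30, "Financial": 0.15, "Synergies": 0.10,
           "Cost": 0.25, "Production System": 0.20}

suppliers = {
    "Supplier A": {"Quality": 8, "Financial": 6, "Synergies": 7,
                   "Cost": 5, "Production System": 7},
    "Supplier B": {"Quality": 7, "Financial": 8, "Synergies": 5,
                   "Cost": 8, "Production System": 6},
}

# Weighted sum per supplier; the highest score wins.
scores = {name: sum(weights[c] * r for c, r in ratings.items())
          for name, ratings in suppliers.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```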
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
The microbial spoilage of meat and seafood products with short shelf lives is responsible for a significant amount of food waste. Food spoilage is a very heterogeneous process, involving the growth of various, poorly characterized bacterial communities. In this study, we conducted 16S ribosomal RNA gene pyrosequencing on 160 samples of fresh and spoiled foods to comparatively explore the bacterial communities associated with four meat products and four seafood products that are among the most consumed food items in Europe. We show that fresh products are contaminated in part by a microbiota similar to that found on the skin and in the gut of animals. However, this animal-derived microbiota was less prevalent and less abundant than a core microbiota, psychrotrophic in nature, originating mainly from the environment (water reservoirs). We clearly show that this core community found on meat and seafood products is the main reservoir of spoilage bacteria. We also show that storage conditions exert strong selective pressure on the initial microbiota: alpha diversity in fresh samples was 189 ± 58 operational taxonomic units (OTUs) but dropped to 27 ± 12 OTUs in spoiled samples. The OTU assemblage associated with spoilage was shaped by low storage temperatures, packaging, and the nutritional value of the food matrix itself. These factors presumably act in tandem without any hierarchical pattern. Most notably, we were also able to identify putative new clades of dominant, previously undescribed bacteria occurring on spoiled seafood, a finding that emphasizes the importance of using culture-independent methods when studying food microbiota.
Abstract:
There is still a lack of an engineering approach for building Web systems, and the field of Web measurement is not yet mature. In particular, there is uncertainty in the selection of evaluation methods, and there are risks of standardizing inadequate evaluation practices. It is important to know whether we are evaluating the Web as a whole or specific website(s). We need a new categorization system, a different focus on evaluation methods, and an in-depth analysis that reveals the strengths and weaknesses of each method. As a contribution to the field of Web evaluation, this study proposes a novel approach to viewing and selecting evaluation methods based on the purpose and platform of the evaluation. It is shown that the choice of the appropriate evaluation method(s) depends greatly on the purpose of the evaluation.
Abstract:
Although the value of primary forests for biodiversity conservation is well known, the potential biodiversity and conservation value of regenerating forests remains controversial. Many factors likely contribute to this, including: 1. the variable ages of the regenerating forests being studied (often dominated by relatively young regenerating forests); 2. the potential for confounding ongoing human disturbance (such as logging and hunting); 3. the relatively low number of multi-taxa studies; 4. the lack of studies that directly compare different historic disturbances within the same location; and 5. contrasting patterns from different survey methodologies and the paucity of knowledge on the impacts across different vertical levels of rainforest biodiversity (often due to a lack of suitable methodologies available to assess them). We also know relatively little about how biodiversity is affected by major current impacts, such as unmarked rainforest roads, which contribute to habitat degradation and fragmentation. This thesis explores the potential biodiversity value of regenerating rainforests under the best of scenarios and seeks to understand more about the impact of current human disturbance on biodiversity; data come from case studies in the Manu and Sumaco Biosphere Reserves in the Western Amazon. Specifically, I compare the overall biodiversity and conservation value of a best-case regenerating rainforest site with a selection of well-studied primary forest sites and with predicted species lists for the region, including a focus on species of key conservation concern. I then investigate the biodiversity of the same study site in reference to different types of historic anthropogenic disturbance. Following this, I investigate the impacts on biodiversity from an unmarked rainforest road. In order to understand more about the differential effects of habitat disturbance on arboreal diversity, I directly assess how patterns of butterfly biodiversity vary between three vertical strata. Although assessments within the canopy have been made for birds, invertebrates and bats, very few studies have successfully targeted arboreal mammals. I therefore investigate the potential of camera traps for inventorying arboreal mammal species in comparison with traditional methodologies. Finally, in order to investigate the possibility that different survey methodologies might identify different biodiversity patterns in habitat disturbance assessments, I investigate whether two different but commonly used survey methodologies for assessing amphibians indicate the same or different responses of amphibian biodiversity to historic habitat change by people. The regenerating rainforest study site contained high levels of species richness, both in terms of the alpha diversity found in nearby primary forest areas (87% ± 3.5) and in terms of predicted primary forest diversity for the region (83% ± 6.7). This included 89% (39 out of 44) of the species of high conservation concern predicted for the Manu region. Faunal species richness in once completely cleared regenerating forest was on average 13% (± 9.8) lower than in historically selectively logged forest. The presence of the small unmarked road significantly altered levels of faunal biodiversity for three taxa, up to and potentially beyond 350 m into the forest interior. Most notably, the impact on biodiversity extended to at least 32% of the whole reserve area.
The assessment of butterflies across strata showed that different vertical zones within the same rainforest responded differently in areas with different historic human disturbance. A comparison between forest regenerating after selective logging and forest regenerating after complete clearance showed a 17% greater reduction in canopy species richness in the historically cleared forest compared with the terrestrial community. Comparing arboreal camera traps with traditional ground-based techniques suggests that camera traps are an effective tool for inventorying secretive arboreal rainforest mammal communities and detect a higher number of cryptic species. Finally, the two survey methodologies used to assess amphibian communities identified contrasting biodiversity patterns in a human-modified rainforest: one indicated biodiversity differences between forests with different human disturbance histories, whereas the other suggested no differences between forest disturbance types. Overall, in this thesis I find that regenerating and human-disturbed tropical forests can potentially contribute to rainforest biodiversity conservation, particularly under the best of circumstances. I also highlight the importance of utilising appropriate methodologies to investigate these three-dimensional habitats, and contribute to the development of such methodologies. However, care should be taken when using different survey methodologies, which can provide contrasting biodiversity patterns in response to human disturbance.
Abstract:
The use of chemical control measures to reduce the impact of parasite and pest species has frequently resulted in the development of resistance. Thus, resistance management has become a key concern in human and veterinary medicine, and in agricultural production. Although it is known that factors such as gene flow between susceptible and resistant populations, drug type, application methods, and costs of resistance can affect the rate of resistance evolution, less is known about the impacts of density-dependent eco-evolutionary processes that could be altered by drug-induced mortality. The overall aim of this thesis was to take an experimental evolution approach to assess how life-history traits respond to drug selection, using a free-living dioecious worm (Caenorhabditis remanei) as a model. In Chapter 2, I defined the relationship between C. remanei survival and Ivermectin dose over a range of concentrations, in order to control the intensity of selection used in the selection experiment described in Chapter 4. The dose-response data were also used to appraise curve-fitting methods, using Akaike Information Criterion (AIC) model selection to compare a series of nonlinear models. The type of model fitted to the dose-response data had a significant effect on the estimates of LD50 and LD99, suggesting that failure to fit an appropriate model could give misleading estimates of resistance status. In addition, simulated data were used to establish that a potential cost of resistance could be predicted by comparing survival at the upper asymptote of dose-response curves for resistant and susceptible populations, even when differences were as low as 4%. This approach to dose-response modeling ensures that the maximum amount of useful information relating to resistance is gathered in one study. In Chapter 3, I asked how simulations could be used to inform important design choices in selection experiments. Specifically, I focused on the effects of both within- and between-line variation on estimated power when detecting small, medium, and large effect sizes. Using mixed-effect models on simulated data, I demonstrated that commonly used designs with realistic levels of variation could be underpowered for substantial effect sizes. Thus, simulation-based power analysis provides an effective way to avoid under- or overpowering study designs incorporating variation due to random effects. In Chapter 4, I investigated how Ivermectin dosage and changes in population density affect the rate of resistance evolution. I exposed replicate lines of C. remanei to two doses of Ivermectin (high and low) to assess the relative survival of lines selected in drug-treated environments compared to untreated controls over 10 generations. Additionally, I maintained lines where mortality was imposed randomly, to control for differences in density between drug treatments and to distinguish between the evolutionary consequences of drug treatment and ecological processes affected by changes in density-dependent feedback. Intriguingly, both drug-selected and random-mortality lines showed an increase in survivorship when challenged with Ivermectin; the magnitude of this increase varied with the intensity of selection and life-history stage. The results suggest that interactions between density-dependent processes and life history may mediate evolved changes in susceptibility to control measures, which could result in misleading conclusions about the evolution of heritable resistance following drug treatment.
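To make the curve-fitting and model-selection step concrete, the following sketch fits two candidate log-logistic dose-response models to synthetic survival data and compares them by AIC; the model forms and data are common illustrative choices, not necessarily those used in the thesis:

```python
# Sketch of dose-response curve fitting and AIC comparison as described
# above. The two candidate models (2- and 3-parameter log-logistic) are
# common choices, not necessarily the thesis's; the data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def ll2(dose, ld50, slope):            # survival declining from 1 to 0
    return 1.0 / (1.0 + (dose / ld50) ** slope)

def ll3(dose, ld50, slope, lower):     # nonzero lower asymptote
    return lower + (1.0 - lower) / (1.0 + (dose / ld50) ** slope)

def aic(y, yhat, k):
    """AIC for least-squares fits: n*log(RSS/n) + 2k."""
    rss = np.sum((y - yhat) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * k

dose = np.array([0.1, 0.3, 1, 3, 10, 30, 100.0])
surv = np.array([0.98, 0.95, 0.84, 0.55, 0.25, 0.12, 0.08])  # synthetic

for model, k, p0 in [(ll2, 2, (3, 1)), (ll3, 3, (3, 1, 0.05))]:
    popt, _ = curve_fit(model, dose, surv, p0=p0, maxfev=10000)
    print(model.__name__, "AIC =", aic(surv, model(dose, *popt), k),
          "LD50 ~", popt[0])
```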
In Chapter 5, I investigated whether the apparent changes in drug susceptibility found in Chapter 4 were related to evolved changes in life-history of C. remanei populations after selection in drug-treated and random-mortality environments. Rapid passage of lines in the drug-free environment had no effect on the measured life-history traits. In the drug-free environment, adult size and fecundity of drug-selected lines increased compared to the controls but drug selection did not affect lifespan. In the treated environment, drug-selected lines showed increased lifespan and fecundity relative to controls. Adult size of randomly culled lines responded in a similar way to drug-selected lines in the drug-free environment, but no change in fecundity or lifespan was observed in either environment. The results suggest that life histories of nematodes can respond to selection as a result of the application of control measures. Failure to take these responses into account when applying control measures could result in adverse outcomes, such as larger and more fecund parasites, as well as over-estimation of the development of genetically controlled resistance. In conclusion, my thesis shows that there may be a complex relationship between drug selection, density-dependent regulatory processes and life history of populations challenged with control measures. This relationship could have implications for how resistance is monitored and managed if life histories of parasitic species show such eco-evolutionary responses to drug application.
Abstract:
The goal of Vehicle Routing Problems (VRP) and their variations is to transport a set of orders with the minimum number of vehicles at the least cost. Most approaches are designed to solve specific problem variations independently, whereas in real-world applications different constraints must be handled concurrently. This research extends solutions obtained for the traveling salesman problem with time windows to a much wider class of route planning problems in logistics. The work describes a novel approach that supports a heterogeneous fleet of vehicles, dynamically reduces the number of vehicles, respects individual capacity restrictions, satisfies pickup and delivery constraints, and takes Hamiltonian paths (rather than cycles). The proposed approach uses Monte-Carlo Tree Search, in particular Nested Rollout Policy Adaptation. For the evaluation of the work, real data from industry was obtained and tested, and the results are reported.
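A minimal sketch of Nested Rollout Policy Adaptation applied to ordering a small set of stops as a Hamiltonian path follows; the distance matrix, scoring, and parameters are simplified illustrations of the cited technique, not the thesis's implementation:

```python
# Sketch of Nested Rollout Policy Adaptation (NRPA): sample paths from a
# softmax policy, recursively keep the best, and shift policy weight toward
# it. The instance below (random distances, 6 stops) is purely illustrative.
import math, random

N_STOPS = 6
random.seed(0)
DIST = [[0 if i == j else random.randint(1, 20) for j in range(N_STOPS)]
        for i in range(N_STOPS)]

def playout(policy):
    """Sample a Hamiltonian path from stop 0 via softmax of policy weights."""
    seq, current = [0], 0
    while len(seq) < N_STOPS:
        moves = [m for m in range(N_STOPS) if m not in seq]
        w = [math.exp(policy.get((current, m), 0.0)) for m in moves]
        current = random.choices(moves, weights=w)[0]
        seq.append(current)
    score = -sum(DIST[a][b] for a, b in zip(seq, seq[1:]))  # shorter = better
    return score, seq

def adapt(policy, seq, alpha=1.0):
    """Return a copy of the policy with weight shifted toward `seq`'s moves."""
    pol = dict(policy)
    visited, current = {0}, 0
    for nxt in seq[1:]:
        moves = [m for m in range(N_STOPS) if m not in visited]
        z = sum(math.exp(pol.get((current, m), 0.0)) for m in moves)
        for m in moves:
            pol[(current, m)] = (pol.get((current, m), 0.0)
                                 - alpha * math.exp(pol.get((current, m), 0.0)) / z)
        pol[(current, nxt)] += alpha
        visited.add(nxt)
        current = nxt
    return pol

def nrpa(level, policy, iters=20):
    if level == 0:
        return playout(policy)
    best_score, best_seq = -math.inf, None
    for _ in range(iters):
        score, seq = nrpa(level - 1, policy)
        if score > best_score:
            best_score, best_seq = score, seq
        policy = adapt(policy, best_seq)
    return best_score, best_seq

print(nrpa(2, {}))  # (negative path length, stop order)
```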
Abstract:
Background: Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process and linking different analysis modules together under a single interface would simplify many microarray analysis tasks. Results: We present ArrayMining.net, a web-application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web-tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion: ArrayMining.net is a free web-application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.
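As an illustration of the ensemble feature selection idea, the following sketch aggregates rankings from two selectors by rank averaging; the selectors and the aggregation rule are illustrative choices, not necessarily those used by ArrayMining.net:

```python
# Sketch of ensemble feature selection: run several selectors, aggregate
# their rankings, keep the top features. The two selectors and the
# rank-averaging rule are illustrative, not ArrayMining.net's exact methods.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=100, n_features=50, n_informative=5,
                           random_state=0)

f_scores, _ = f_classif(X, y)                       # ANOVA F-test scores
mi_scores = mutual_info_classif(X, y, random_state=0)  # mutual information

# Average the per-method ranks (higher score -> better, i.e. lower rank)
ranks = (rankdata(-f_scores) + rankdata(-mi_scores)) / 2
top10 = np.argsort(ranks)[:10]
print(top10)
```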
Abstract:
In this work we focus on pattern recognition methods for EMG-based upper-limb prosthetic control. After giving a detailed review of the most widely used classification methods, we propose a new classification approach, which results from a comparison, in the Fourier analysis, between able-bodied and trans-radial amputee subjects. We thus suggest a different classification method that considers each surface electrode's contribution separately, together with five time-domain features, obtaining an average classification accuracy of 75% on a sample of trans-radial amputees. We also propose an automatic feature selection procedure, cast as a minimization problem, to improve the method and its robustness.
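To illustrate per-electrode time-domain feature extraction, the sketch below computes a commonly used set of five features (MAV, WL, ZC, SSC, RMS) for each channel window; the abstract does not name its five features, so this particular set is an assumption:

```python
# Sketch of per-electrode time-domain feature extraction for EMG windows.
# The set below (MAV, WL, ZC, SSC, RMS) is a commonly used choice and an
# assumption, not necessarily the authors' feature set.
import numpy as np

def td_features(x, thresh=0.01):
    """Five time-domain features for one EMG channel window `x`."""
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                                    # mean absolute value
    wl = np.sum(np.abs(dx))                                     # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > thresh))   # zero crossings
    ssc = np.sum((dx[:-1] * dx[1:] < 0)
                 & ((np.abs(dx[:-1]) > thresh) | (np.abs(dx[1:]) > thresh)))  # slope sign changes
    rms = np.sqrt(np.mean(x ** 2))                              # root mean square
    return np.array([mav, wl, zc, ssc, rms])

# One feature vector per electrode, concatenated for the classifier
window = np.random.default_rng(0).normal(size=(8, 256))  # 8 electrodes x 256 samples
features = np.concatenate([td_features(ch) for ch in window])
print(features.shape)  # (40,) = 8 electrodes x 5 features
```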
Abstract:
Aim: To investigate the qualitative aspects of patient selection and the quantitative impact of disease burden in the real-world treatment of vitreomacular traction (VMT) and the implementation of the National Institute for Health and Care Excellence (NICE) guidance (TA297). Methods: A monocentric, retrospective review of consecutive patients undergoing optical coherence tomography (OCT) imaging over a 3-month period. Patients with VMT in at least one eye were identified for further data collection on laterality, visual acuity, symptoms, presence of epiretinal membrane (ERM), macular hole, and treatment selection. Results: A total of 3472 patients underwent OCT imaging, with a total of 6878 eyes scanned. Of 87 patients with VMT, 74 had unilateral VMT (38 right, 36 left) and 13 had bilateral VMT. Eighteen patients with unilateral VMT satisfied the NICE criteria of severe sight problems in the affected eye. Of these, eight were managed for a coexisting pathology, one refused treatment, one did not attend, two cases closed spontaneously, and one patient received ocriplasmin prior to the study start date. Only two patients with unilateral VMT received ocriplasmin and three underwent vitrectomy. Those failing to meet NICE criteria for unilateral VMT were predominantly asymptomatic (n=49), had coexisting ERM (n=5), or both (n=2). Conclusion: Ocriplasmin provides an alternative treatment for patients with symptomatic VMT. Our data show that the majority of patients with VMT do not meet NICE TA297 criteria, primarily due to lack of symptoms. Those meeting NICE criteria but not treated tended to have coexisting macular pathology. Variation in patient selection due to subjective factors not outlined in the NICE guidance suggests that real-world outcomes of ocriplasmin therapy should be interpreted with caution.