865 results for multi-environments experiments
Abstract:
In marine environments, sediments from different sources are stirred and dispersed, generating beds composed of mixed and layered sediments of differing grain sizes. Traditional engineering formulations used to predict erosion thresholds are, however, generally derived for unimodal sediment distributions, and so may be inadequate for commonly occurring coastal sediments. We tested the transport behavior of deposited and mixed sediment beds consisting of a simplified two-grain-fraction mixture (silt (D50 = 55 µm) and sand (D50 = 300 µm)) in a laboratory-based annular flume, with the objective of investigating the parameters controlling the stability of a sediment bed. To mimic both the recent deposition of particles following large storm events and the longer-term incorporation of fines in coarse sediment, we designed two suites of experiments: (1) the "layering experiment", in which a sandy bed was covered by a thin layer of silt of varying thickness (0.2-3 mm; 0.5-3.7 wt %, dry weight, in a layer 10 cm deep); and (2) the "mixing experiment", in which the bed was composed of sand homogeneously mixed with small amounts of silt (0.07-0.7 wt %, dry weight). To initiate erosion and to detect a possible stabilizing effect in both settings, we increased the flow speed in increments up to 0.30 m/s. Results showed that the sediment bed (or the underlying sand bed, in the case of the layering experiment) stabilized with increasing silt content. The increasing sediment stability was marked by a shift of the initial threshold conditions towards higher flow speeds combined with, in the case of the mixed bed, decreasing erosion rates. Our results show that even extremely low concentrations of silt play a stabilizing role (1.4 wt % silt on a layered sediment bed 10 cm thick); in the case of a mixed sediment bed, 0.18 wt % silt (in a sample 10 cm deep) stabilized the bed. Both cases show that the depositional history of the sediment fractions can change the erosion characteristics of the seabed. These observations are summarized in a conceptual model suggesting that, in addition to its effect on surface roughness, silt stabilizes the sand bed by plugging pore spaces and reducing inflow into the bed, thereby increasing bed stability. Measurements of hydraulic conductivity on similar bed assemblages qualitatively supported this conclusion, showing that silt could decrease permeability by up to 22% for a layered bed and by up to 70% for a mixed bed.
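For context, the kind of single-fraction ("unimodal") threshold formulation the abstract argues may fall short can be sketched as follows. This is a generic illustration using Soulsby's (1997) fit to the Shields curve with assumed seawater properties, applied to the two grain sizes above; it is not the authors' analysis and cannot capture the mixed-bed stabilization they report.

```python
# Minimal sketch: critical bed shear stress for a uniform, non-cohesive
# grain size via Soulsby's (1997) fit to the Shields curve.
# Water density and viscosity are assumed seawater values.
import math

def critical_shear_stress(d50_m, rho_s=2650.0, rho_w=1027.0, nu=1.36e-6, g=9.81):
    """Critical bed shear stress (Pa) for a single grain size d50_m (metres)."""
    s = rho_s / rho_w
    d_star = d50_m * (g * (s - 1.0) / nu**2) ** (1.0 / 3.0)  # dimensionless grain size
    theta_cr = 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - math.exp(-0.020 * d_star))
    return theta_cr * (rho_s - rho_w) * g * d50_m

for d50 in (55e-6, 300e-6):  # the silt and sand fractions from the abstract
    print(f"D50 = {d50 * 1e6:.0f} um -> tau_cr ~ {critical_shear_stress(d50):.3f} Pa")
```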
Abstract:
Federal Highway Administration, Washington, D.C.
Abstract:
Thesis (Master's), University of Washington, June 2016
Abstract:
While the feasibility of bottleneck-induced speciation is in doubt, population bottlenecks may still affect the speciation process by interacting with divergent selection. To explore this possibility, I conducted a laboratory speciation experiment using Drosophila pseudoobscura involving 78 replicate populations assigned in a two-way factorial design to both bottleneck (present vs. absent) and environment (ancestral vs. novel) treatments. Populations evolved independently under these treatments and were then tested for assortative mating and male mating success against their common ancestor. Bottlenecks alone did not generate any premating isolation, despite an experimental design conducive to bottleneck-induced speciation. Premating isolation also did not evolve in the novel environment treatment, in either the presence or the absence of bottlenecks. However, male mating success was significantly reduced in the novel environment treatment, both as a plastic response to this environment and as a result of environment-dependent inbreeding effects in the bottlenecked populations. Reduced mating success of derived males will hamper speciation by enhancing the mating success of immigrant, ancestral males. Novel environments are generally thought to promote ecological speciation by generating divergent natural selection. In the current experiment, however, the novel environment did not cause the evolution of any premating isolation, and it reduced the likelihood of speciation through its effects on male mating success.
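For readers unfamiliar with the design, a minimal sketch of analysing such a two-way factorial experiment is given below. The data are simulated and the effect sizes (a depressed mating success in the novel environment, no bottleneck effect) are assumptions chosen to mirror the abstract, not the study's measurements.

```python
# Sketch: two-way factorial ANOVA (bottleneck x environment) on simulated
# per-population mating-success scores. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for bottleneck in ("absent", "present"):
    for env in ("ancestral", "novel"):
        for rep in range(20):  # roughly 78 populations split across 4 cells
            # Assumed effect: lower male mating success in the novel environment
            base = 0.5 - (0.1 if env == "novel" else 0.0)
            rows.append({"bottleneck": bottleneck, "env": env,
                         "mating_success": base + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

model = smf.ols("mating_success ~ C(bottleneck) * C(env)", df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction term
```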
Abstract:
When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation for the mixed model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability as well as different tests of significance for the random effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction in the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
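As an illustration of the mixed-model setting discussed here, below is a minimal sketch, assuming simulated balanced trial data and hypothetical column names, of fitting genotypes as fixed and environments (plus G × E) as random with statsmodels. It shows only the model structure; it is not the paper's UP or CP estimation procedure.

```python
# Sketch: mixed model for a balanced multi-environment trial with genotypes
# fixed, environments random, and a G x E variance component.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
genos, envs, reps = 10, 5, 4
df = pd.DataFrame([{"genotype": f"g{g}", "env": f"e{e}", "rep": r}
                   for g in range(genos) for e in range(envs) for r in range(reps)])

# Simulated yield: fixed genotype effects + random environment and G x E effects
g_eff = dict(zip(df["genotype"].unique(), rng.normal(0, 1.0, genos)))
e_eff = dict(zip(df["env"].unique(), rng.normal(0, 0.8, envs)))
ge_eff = {(g, e): rng.normal(0, 0.5)
          for g in df["genotype"].unique() for e in df["env"].unique()}
df["yield_"] = (df["genotype"].map(g_eff) + df["env"].map(e_eff)
                + df.apply(lambda r: ge_eff[(r["genotype"], r["env"])], axis=1)
                + rng.normal(0, 0.6, len(df)))

# Environments are the random grouping factor; G x E enters as a
# variance-component term within each environment.
model = smf.mixedlm("yield_ ~ genotype", df, groups="env",
                    vc_formula={"gxe": "0 + C(genotype)"})
print(model.fit().summary())
```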
Abstract:
This paper shows initial results in deploying the biologically inspired Simultaneous Localisation and Mapping system, RatSLAM, in an outdoor environment. RatSLAM has been widely tested in indoor environments on the task of producing topologically coherent maps based on a fusion of odometric and visual information. This paper details the changes required to deploy RatSLAM on a small tractor equipped with odometry and an omnidirectional camera. The principal changes relate to the vision system, with others required for RatSLAM to use omnidirectional visual data. The initial results from mapping around a 500 m loop are promising, with many improvements still to be made.
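As an illustration of the kind of appearance-based matching RatSLAM's local view system relies on, here is a minimal sketch assuming grayscale panoramic images: images are reduced to one-dimensional intensity profiles and compared by a shifted sum of absolute differences. It is illustrative only, not the authors' implementation.

```python
# Sketch: scanline-profile comparison for local view matching.
import numpy as np

def scanline_profile(image: np.ndarray) -> np.ndarray:
    """Collapse a grayscale image to a normalized column-intensity profile."""
    profile = image.mean(axis=0)
    return profile / (profile.sum() + 1e-9)

def profile_distance(a: np.ndarray, b: np.ndarray, max_shift: int = 20) -> float:
    """Minimum mean absolute difference over small horizontal shifts,
    which tolerates small rotations of the (omnidirectional) camera."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        best = min(best, np.abs(np.roll(a, s) - b).mean())
    return best

# A new view below some distance threshold re-activates a stored template;
# otherwise a new local view is created.
img_a = np.random.default_rng(0).random((64, 128))
img_b = np.roll(img_a, 5, axis=1)  # same scene, slightly rotated
print(profile_distance(scanline_profile(img_a), scanline_profile(img_b)))
```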
Abstract:
In this paper we discuss the first of a series of experiments evaluating earcons for critical care environments. We examine people's ability to monitor earcons conveying systolic and diastolic blood pressure while conducting a distractor task. The results showed that when a beacon was presented prior to the earcon, participants' judgment of pitch and duration information improved. The results also indicated that the presence of historical information in the earcon may interfere with participants' judgments. However, since participants felt more confident in their recall of previous values when the historical information was present, this result may reflect insufficient training.
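To make the stimulus design concrete, a hypothetical parameter mapping is sketched below. The abstract does not specify the actual earcons, so the mapping, ranges, and constants here are all illustrative, not the study's design.

```python
# Sketch: one plausible way to encode two vital signs in a single earcon,
# e.g. systolic pressure as pitch and diastolic pressure as duration.
def blood_pressure_earcon(systolic: float, diastolic: float):
    """Map blood pressure readings onto earcon pitch (Hz) and duration (s).
    Linear mappings over assumed clinical ranges; all constants illustrative."""
    pitch_hz = 220.0 + (systolic - 90.0) / (180.0 - 90.0) * (880.0 - 220.0)
    duration_s = 0.2 + (diastolic - 50.0) / (120.0 - 50.0) * (0.8 - 0.2)
    return pitch_hz, duration_s

print(blood_pressure_earcon(120, 80))
```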
Abstract:
Spatial data are particularly useful in mobile environments. However, due to the low bandwidth of most wireless networks, developing large spatial database applications becomes a challenging process. In this paper, we make the first attempt to combine two important techniques, multiresolution spatial data structures and semantic caching, towards efficient spatial query processing in mobile environments. Based on a study of the characteristics of multiresolution spatial data (MSD) and multiresolution spatial queries, we propose a new semantic caching model called Multiresolution Semantic Caching (MSC) for caching MSD in mobile environments. MSC enriches the traditional three-category query processing in semantic caches to five categories, thus improving performance in three ways: 1) reducing the number and complexity of remainder queries; 2) avoiding redundant transmission of spatial data already residing in the cache; and 3) providing satisfactory answers before 100% of the query results have been transmitted to the client side. Our extensive experiments on a very large and complex real spatial database show that MSC outperforms traditional semantic caching models significantly.
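The core cache/remainder decomposition underlying semantic caching can be sketched as follows. This simplified version uses axis-aligned rectangles and omits MSC's resolution dimension and five-category classification; the types and names are hypothetical.

```python
# Sketch: answer what the cached region covers locally ("probe query") and
# ship only the uncovered "remainder query" over the wireless link.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x0: float; y0: float; x1: float; y1: float
    def intersects(self, o): return not (o.x0 >= self.x1 or o.x1 <= self.x0 or
                                         o.y0 >= self.y1 or o.y1 <= self.y0)
    def contains(self, o): return (self.x0 <= o.x0 and self.y0 <= o.y0 and
                                   self.x1 >= o.x1 and self.y1 >= o.y1)

def split_query(query: Rect, cached: Rect):
    """Return (region answerable from cache, list of remainder rectangles)."""
    if cached.contains(query):
        return query, []              # full hit: nothing to transmit
    if not cached.intersects(query):
        return None, [query]          # full miss: whole query is remainder
    probe = Rect(max(query.x0, cached.x0), max(query.y0, cached.y0),
                 min(query.x1, cached.x1), min(query.y1, cached.y1))
    remainder = []                    # up to four strips around the probe
    if query.x0 < probe.x0: remainder.append(Rect(query.x0, query.y0, probe.x0, query.y1))
    if probe.x1 < query.x1: remainder.append(Rect(probe.x1, query.y0, query.x1, query.y1))
    if query.y0 < probe.y0: remainder.append(Rect(probe.x0, query.y0, probe.x1, probe.y0))
    if probe.y1 < query.y1: remainder.append(Rect(probe.x0, probe.y1, probe.x1, query.y1))
    return probe, remainder

probe, rest = split_query(Rect(0, 0, 10, 10), Rect(5, 5, 20, 20))
print(probe, rest)  # answer `probe` from cache; send `rest` to the server
```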
Abstract:
We optimized the emission efficiency of microcavity OLEDs consisting of widely used organic materials: N,N'-di(naphthalene-1-yl)-N,N'-diphenylbenzidine (NPB) as the hole transport layer and tris(8-hydroxyquinoline) aluminum (Alq3) as the emitting and electron transporting layer. LiF/Al served as the cathode, while a metallic Ag anode was used. TiO2 and Al2O3 layers were stacked on top of the cathode to alter the properties of the top mirror. The electroluminescence emission spectra, electric field distribution inside the device, carrier density, recombination rate and exciton density were calculated as a function of the position of the emission layer. The results show that for certain TiO2 and Al2O3 layer thicknesses, light output is enhanced as a result of the increase in both the reflectance and transmittance of the top mirror. Once the optimum structure has been determined, microcavity OLED devices can be fabricated and characterized, and comparisons between experiment and theory can be made.
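The effect of capping layers on a mirror can be estimated with the standard transfer-matrix method, sketched below for a dielectric pair alone at normal incidence. The indices, thicknesses, and wavelength are illustrative (not the paper's optimized values), and the metal cathode, which requires a complex refractive index, is omitted from this sketch.

```python
# Sketch: transfer-matrix reflectance of a thin-film dielectric stack
# at normal incidence (Macleod convention, non-magnetic media).
import numpy as np

def stack_reflectance(layers, n_in=1.0, n_out=1.7, wavelength_nm=520.0):
    """layers: list of (refractive_index, thickness_nm); returns reflectance R."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength_nm  # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Example: ~520 nm (Alq3 emission region), TiO2 (n~2.4) / Al2O3 (n~1.65) pair
print(stack_reflectance([(2.4, 54.0), (1.65, 79.0)]))
```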
Abstract:
Traditionally, machine learning algorithms have been evaluated in applications where assumptions can be reliably made about class priors and/or misclassification costs. In this paper, we consider the case of imprecise environments, where little may be known about these factors and they may well vary significantly when the system is applied. Specifically, the use of precision-recall analysis is investigated and compared to better-known performance measures such as error rate and the receiver operating characteristic (ROC). We argue that while ROC analysis is invariant to variations in class priors, this invariance in fact hides an important factor of the evaluation in imprecise environments. Therefore, we develop a generalised precision-recall analysis methodology in which variation due to prior class probabilities is incorporated into a multi-way analysis of variance (ANOVA). The increased sensitivity and reliability of this approach is demonstrated in a remote sensing application.
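The central observation can be reproduced in a few lines: holding a classifier's ROC operating point fixed while varying the class prior leaves recall unchanged but moves precision dramatically, which is exactly the factor ROC invariance hides.

```python
# Worked example: precision = pi*tpr / (pi*tpr + (1 - pi)*fpr) depends on
# the positive-class prior pi, while the ROC point (tpr, fpr) does not.
tpr, fpr = 0.80, 0.10  # a fixed operating point on the ROC curve

for prior_pos in (0.5, 0.1, 0.01):  # fraction of positives at deployment
    precision = (prior_pos * tpr) / (prior_pos * tpr + (1 - prior_pos) * fpr)
    print(f"P(+) = {prior_pos:4.2f}: recall = {tpr:.2f}, precision = {precision:.3f}")
```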
Abstract:
A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges, and edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs.
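A minimal sketch of the described pipeline for a 1-D luminance profile is given below: Gaussian 1st derivative, half-wave rectification, two further derivatives, and a peak search over space and scale. The scale-normalization exponent and the simple peak search are assumptions of this sketch, not the published model's exact parameters.

```python
# Sketch: multi-scale 3rd-derivative edge localisation on a 1-D profile.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def edge_scale_space(luminance, scales):
    """Return (position index, scale) of the strongest edge response."""
    best_val, best_pos, best_scale = -np.inf, None, None
    for sigma in scales:
        d1 = gaussian_filter1d(luminance, sigma, order=1)  # 1st derivative
        d1 = np.maximum(d1, 0.0)                           # half-wave rectify
        d3 = gaussian_filter1d(d1, sigma, order=2)         # differentiate twice more
        response = -d3 * sigma ** 1.5   # inverted, scale-normalized (assumed exponent)
        i = int(np.argmax(response))
        if response[i] > best_val:
            best_val, best_pos, best_scale = response[i], i, sigma
    return best_pos, best_scale

# A Gaussian-integral edge of blur ~4 samples centred at position 128
x = np.arange(256.0)
profile = 0.5 * (1.0 + erf((x - 128.0) / (4.0 * np.sqrt(2.0))))
print(edge_scale_space(profile, scales=np.arange(1.0, 10.0, 0.5)))
```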
Abstract:
In Information Filtering (IF), a user may be interested in several topics in parallel. However, IF systems have been built on representational models derived from Information Retrieval and Text Categorization that assume independence between terms. The linearity of these models results in user profiles that can only represent one topic of interest. We present a methodology that takes term dependencies into account to construct a single profile representation for multiple topics, in the form of a hierarchical term network. We also introduce a series of non-linear functions for evaluating documents against the profile. Initial experiments produced positive results.
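One way to picture a non-linear, hierarchy-based profile is sketched below. The node types and combination functions are hypothetical, chosen only to show how term dependencies let a single profile hold several topics; this is not the paper's formulation.

```python
# Sketch: scoring a document against a hierarchical term network.
# Leaves are terms; internal nodes combine children non-linearly, so
# distinct subtrees can capture distinct topics of interest.
def score(node, doc_terms) -> float:
    if isinstance(node, str):              # leaf: a single term
        return 1.0 if node in doc_terms else 0.0
    op, children = node                    # internal node: (op, [children])
    vals = [score(c, doc_terms) for c in children]
    return min(vals) if op == "AND" else max(vals)  # topic fires only as a whole

profile = ("OR", [("AND", ["python", "pandas"]),    # topic 1
                  ("AND", ["sailing", "knots"])])   # topic 2
print(score(profile, {"python", "pandas", "tutorial"}))  # -> 1.0
```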
Abstract:
There has been substantial research into the role of distance learning in education. Despite the rise in the popularity and practice of this form of learning in business, there has not been a parallel increase in the amount of research carried out in this field. An extensive investigation was conducted into the entire distance learning system of a multi-national company, with particular emphasis on the design, implementation and evaluation of the materials. In addition, the performance and attitudes of trainees were examined. The results of a comparative study indicated that trainees using distance learning had significantly higher test scores than trainees using conventional face-to-face training. The influence of trainees' previous distance learning experience, educational background and selected study environment was investigated. Trainees with previous experience of distance learning were more likely to complete the course and achieved significantly higher test scores than trainees with no previous experience. The more advanced the educational background of trainees, the greater the likelihood of their completing the course, although there was no significant difference in the test scores achieved. Trainees preferred to use the materials at home, and those opting to study in this environment scored significantly higher than those studying in the office, the study room at work or a combination of environments. The influence of learning styles (Kolb, 1976) was tested. The results indicated that convergers had the highest completion rates and scored significantly higher than trainees with the assimilator, accommodator and diverger learning styles. The attitudes of the trainees, supervisors and trainers were examined using questionnaire, interview and discussion techniques. The findings highlighted the potential problems of lack of awareness and low motivation, which could prove to be major obstacles to the success of distance learning in business.
Abstract:
Agent-based technology is playing an increasingly important role in today's economy. Usually a multi-agent system is needed to model an economic system such as a market, in which heterogeneous trading agents interact with each other autonomously. Two questions often need to be answered regarding such systems: 1) how to design an interaction mechanism that facilitates efficient resource allocation among usually self-interested trading agents; and 2) how to design an effective strategy with which an agent can maximise its economic returns under a specific market mechanism. For automated market systems, the auction is the most popular mechanism for solving resource allocation problems among participants. However, auctions come in hundreds of different formats, some of which are better than others in terms of not only allocative efficiency but also other properties, e.g., whether the format generates high revenue for the auctioneer or induces stable bidder behaviour. In addition, different strategies yield very different performance under the same auction rules. Against this background, we investigate auction mechanism and strategy design for agent-based economics. The international Trading Agent Competition (TAC) Ad Auction (AA) competition provides a very useful platform for developing and testing agent strategies in the Generalised Second Price (GSP) auction. AstonTAC, the runner-up of TAC AA 2009, is a successful advertiser agent designed for GSP-based keyword auctions. In particular, AstonTAC generates adaptive bid prices according to the Market-based Value Per Click and selects the set of keyword queries with the highest expected profit to bid on, so as to maximise its expected profit under the limit of conversion capacity. Through evaluation experiments, we show that AstonTAC performs well and stably not only in the competition but also across a broad range of environments. The TAC CAT tournament provides an environment for investigating the optimal design of mechanisms for double auction markets. AstonCAT-Plus is the post-tournament version of the specialist developed for CAT 2010. In our experiments, AstonCAT-Plus not only outperforms most specialist agents designed by other institutions but also achieves high allocative efficiency, transaction success rates and average trader profits. Moreover, we reveal some insights into the CAT tournament: 1) successful markets should maintain a stable and high market share of intra-marginal traders; and 2) a specialist's performance depends on the distribution of trading strategies. However, typical double auction models assume trading agents have a fixed trading direction of either buy or sell; with this limitation they cannot directly reflect the fact that traders in financial markets (the most popular application of the double auction) decide their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Experiments are conducted under both dynamic and static settings of the continuous BDA market. We find that the allocative efficiency of a continuous BDA market mainly comes from rational selection of trading directions. Furthermore, we introduce a high-performance Kernel trading strategy for the BDA market, which uses a kernel probability density estimator built on historical transaction data to decide optimal order prices. The Kernel strategy outperforms some popular intelligent double auction trading strategies, including ZIP, GD and RE, in the continuous BDA market, making the highest profit in static games and obtaining the best wealth in dynamic games.
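The idea behind the Kernel strategy can be sketched as follows: fit a kernel density estimate to historical transaction prices and quote near the density mode. The bandwidth choice, price grid, and mode-quoting rule here are assumptions of this sketch rather than the thesis' implementation.

```python
# Sketch: KDE over historical transaction prices to pick an order price.
import numpy as np
from scipy.stats import gaussian_kde

def kernel_order_price(past_prices, grid_points=200):
    """Quote at the mode of the estimated transaction-price density."""
    kde = gaussian_kde(past_prices)  # Gaussian kernels, rule-of-thumb bandwidth
    grid = np.linspace(min(past_prices), max(past_prices), grid_points)
    return grid[np.argmax(kde(grid))]

# Synthetic history: an older price regime plus a recent cluster near 110
history = np.concatenate([np.random.default_rng(2).normal(100, 3, 400),
                          np.random.default_rng(3).normal(110, 1, 100)])
print(f"quoted price: {kernel_order_price(history):.2f}")
```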