928 results for Topologies on an arbitrary set
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The knowledge, skills, and attitudes manifested in health and physical education school curricula are an arbitrary selection of that which is known and valued at a particular place and time. Bernstein's (2000) theories of the social construction of knowledge offer a way to better understand the relationship among the production, selection, and reproduction of curricular knowledge. This article overviews contemporary knowledge in the primary field (production) on which curriculum writers in the recontextualizing field might draw. It highlights tensions in the knowledge generated within the primary field and, using the case of the US National Standards for Physical Education (NASPE), demonstrates how particular discourses become privileged when translated into curriculum documents in the recontextualizing field.
Abstract:
Wigner functions play a central role in the phase space formulation of quantum mechanics. Although closely related to classical Liouville densities, Wigner functions are not positive definite and may take negative values on subregions of phase space. We investigate the accumulation of these negative values by studying bounds on the integral of an arbitrary Wigner function over noncompact subregions of the phase plane with hyperbolic boundaries. We show using symmetry techniques that this problem reduces to computing the bounds on the spectrum associated with an exactly solvable eigenvalue problem and that the bounds differ from those on classical Liouville distributions. In particular, we show that the total "quasiprobability" on such a region can be greater than 1 or less than zero. (C) 2005 American Institute of Physics.
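For orientation, the objects involved can be written compactly; the following is a minimal sketch assuming a pure state ψ and, purely as an illustration, a region with hyperbolic boundary of the form qp ≥ a (the paper's precise family of regions may differ):

```latex
% Wigner function of a pure state \psi (standard definition)
W_\psi(q,p) \;=\; \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}
    \psi^*(q+y)\,\psi(q-y)\,e^{2ipy/\hbar}\,\mathrm{d}y ,
\qquad
% "quasiprobability" assigned to a noncompact region R, e.g. R=\{(q,p): qp \ge a\}
P_\psi(R) \;=\; \iint_{R} W_\psi(q,p)\,\mathrm{d}q\,\mathrm{d}p .
```

The result summarised above is that, unlike a classical Liouville density, the Wigner function can make P_ψ(R) exceed 1 or fall below 0 for such regions.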
Abstract:
Genetic control of adventitious rooting was characterised in two unrelated Pinus elliottii x P. caribaea families, an outbred F1 (n = 287) and an inbred F2 (n = 357). Rooting percentage was assessed in three settings and root biomass was measured on a sub-set of clones (n = 50) from each family in the third setting. On average, clones in the outbred F1 had a higher rooting percentage (mean ± SE; 59 ± 1.9%) and biomass (mean ± SD; 0.41 ± 0.24 g) than clones in the inbred F2 family (mean ± SE; 48 ± 1.8% and mean ± SD; 0.19 ± 0.13 g). Genetic determination for rooting percentage was strong in both families, as indicated by high individual-setting clonal repeatabilities (e.g. Setting 3; outbred F1 0.62 ± 0.03 and inbred F2 0.68 ± 0.02 (H² ± SE)) and the moderate-to-high genetic correlations amongst the three settings. For root biomass, clonal repeatabilities for both families were lower (outbred F1 0.35 ± 0.09 and inbred F2 0.44 ± 0.10 (H² ± SE)). Weak positive genetic correlations between rooting percentage and root biomass in both families suggested a concomitant gain in root biomass would be insignificant when selecting solely on the more easily assessable rooting percentage.
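For reference, the clonal repeatability H² reported above is usually defined from the clonal variance components; a minimal sketch, assuming the standard broad-sense formulation for a single setting (the authors' exact mixed model is not reproduced here):

```latex
H^2 \;=\; \frac{\sigma^{2}_{C}}{\sigma^{2}_{C} + \sigma^{2}_{E}},
```

where σ²_C is the between-clone (total genetic) variance and σ²_E the within-clone (error) variance for the trait in that setting.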
Abstract:
This paper provides information on the experimental set-up, data collection methods and results to date for the project 'Large scale modelling of coarse grained beaches', undertaken at the Large Wave Channel (GWK) of FZK in Hannover by an international group of researchers in Spring 2002. The main objective of the experiments was to provide full-scale measurements of cross-shore processes on gravel and mixed beaches for the verification and further development of cross-shore numerical models of gravel and mixed-sediment beaches. Identical random and regular wave tests were undertaken for a gravel beach and a mixed sand/gravel beach set up in the flume. Measurements included profile development, water surface elevation along the flume, internal pressures in the swash zone, piezometric head levels within the beach, run-up, flow velocities in the surf zone and sediment size distributions. The purpose of the paper is to present to the scientific community the experimental procedure, a summary of the data collected, some initial results, as well as a brief outline of the on-going research being carried out with the data by different research groups. The experimental data are available to the whole scientific community following submission of a statement of objectives, specification of data requirements and an agreement to abide by the GWK and EU protocols. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
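As a rough illustration of the underlying idea (not the authors' implementation), a Tikhonov-regularised inversion can be posed as a penalised least-squares problem in which a weight controls how strongly the modeller's preferred parameter values or relationships are enforced; the matrices, sizes and weight values below are hypothetical:

```python
import numpy as np

def tikhonov_solve(J, y_obs, R, p_pref, mu):
    """Minimise ||y_obs - J p||^2 + mu^2 * ||R (p - p_pref)||^2 in closed form."""
    A = J.T @ J + mu**2 * (R.T @ R)
    b = J.T @ y_obs + mu**2 * (R.T @ R) @ p_pref
    return np.linalg.solve(A, b)

# Hypothetical example: 20 observations, 10 parameters, preferred value 1.0
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 10))                 # sensitivity (Jacobian) matrix
p_true = rng.normal(loc=1.0, scale=0.2, size=10)
y_obs = J @ p_true + 0.05 * rng.normal(size=20)
R = np.eye(10)                                # penalise departures from p_pref
p_pref = np.ones(10)

for mu in (0.01, 0.1, 1.0):                   # stronger mu => more parsimony
    print(mu, np.round(tikhonov_solve(J, y_obs, R, p_pref, mu)[:3], 3))
```

The scheme described in the abstract additionally treats the relative regularization weights themselves as quantities estimated during the inversion, which this sketch does not attempt.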
Abstract:
The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or by setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of surveys without detection required to minimize the net expected cost. Given that detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution obtained using stochastic dynamic programming. Application of the approach to the eradication programme for Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter.
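A minimal numerical sketch of the cost-based stopping idea (not the paper's stochastic-dynamic-programming formulation); the prior probability of persistence, detection probability and costs below are hypothetical inputs:

```python
# Keep surveying while the expected benefit of one more survey without a
# detection exceeds its cost; stop at the number of years minimising total
# expected cost. Presence probability is updated with a simple Bayes rule.

def prob_present(prior, detect, n_absent):
    """Posterior probability of presence after n consecutive absent surveys."""
    miss = (1.0 - detect) ** n_absent
    return prior * miss / (prior * miss + (1.0 - prior))

def expected_cost(prior, detect, c_survey, c_damage, n_absent):
    """Survey costs plus expected damage if eradication is declared too soon."""
    return n_absent * c_survey + prob_present(prior, detect, n_absent) * c_damage

def optimal_stopping_years(prior, detect, c_survey, c_damage, max_years=50):
    costs = [expected_cost(prior, detect, c_survey, c_damage, n)
             for n in range(max_years + 1)]
    return min(range(max_years + 1), key=costs.__getitem__)

print(optimal_stopping_years(prior=0.5, detect=0.6,
                             c_survey=1_000, c_damage=100_000))
```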
Abstract:
Much work has been done on texture feature extraction for rectangular images, but less attention has been paid to the arbitrary-shaped regions available in region-based image retrieval (RBIR) systems. In this work, we present a texture feature extraction algorithm based on projection onto convex sets (POCS) theory. POCS iteratively concentrates more and more energy into the selected coefficients, from which texture features of an arbitrary-shaped region can be extracted. Experimental results demonstrate the effectiveness of the proposed algorithm for image retrieval purposes.
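A minimal sketch of the POCS-style alternating-projection idea (illustrative only; the transform, coefficient set and stopping rule used in the paper are not reproduced here):

```python
import numpy as np

def pocs_texture_features(block, mask, keep, n_iter=50):
    """block: image block containing the region; mask: True inside the
    arbitrary-shaped region; keep: boolean mask of Fourier coefficients to
    retain. Returns magnitudes of the selected coefficients as features."""
    x = np.where(mask, block, 0.0)
    X = np.fft.fft2(x)
    for _ in range(n_iter):
        X = np.fft.fft2(x) * keep        # projection 1: keep selected coefficients only
        x = np.real(np.fft.ifft2(X))
        x = np.where(mask, block, x)     # projection 2: re-impose the known region pixels
    return np.abs(X[keep])

# Hypothetical usage: 32x32 block, circular region, 8x8 low-frequency band
h = w = 32
yy, xx = np.mgrid[:h, :w]
mask = (yy - 16) ** 2 + (xx - 16) ** 2 < 12 ** 2
keep = np.zeros((h, w), dtype=bool)
keep[:8, :8] = True
features = pocs_texture_features(np.random.rand(h, w), mask, keep)
```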
Abstract:
A novel algorithm for performing registration of dynamic contrast-enhanced (DCE) MRI data of the breast is presented. It is based on an algorithm known as iterated dynamic programming, originally devised to solve the stereo matching problem. Using artificially distorted DCE-MRI breast images, it is shown that the proposed algorithm is able to correct for movement and distortions over a larger range than is likely to occur during routine clinical examination. In addition, using a clinical DCE-MRI data set with an expertly labeled suspicious region, it is shown that the proposed algorithm significantly reduces the variability of the enhancement curves at the pixel level, yielding more pronounced uptake and washout phases.
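A minimal sketch of the one-dimensional dynamic-programming matching step that this family of algorithms builds on (illustrative; the paper's iterated scheme, cost terms and two-dimensional alternation are not reproduced here):

```python
import numpy as np

def match_scanlines(a, b, penalty=0.25):
    """Globally align two intensity scanlines by dynamic programming,
    minimising squared intensity differences plus a per-skip penalty."""
    n, m = len(a), len(b)
    cost = np.zeros((n + 1, m + 1))
    cost[1:, 0] = np.arange(1, n + 1) * penalty   # skipping leading pixels of a
    cost[0, 1:] = np.arange(1, m + 1) * penalty   # skipping leading pixels of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = min(cost[i - 1, j - 1] + d,    # match pixels i and j
                             cost[i - 1, j] + penalty,  # skip a pixel of a
                             cost[i, j - 1] + penalty)  # skip a pixel of b
    return cost[n, m]

# A shifted copy of a scanline should align more cheaply than a reversed one.
row = np.sin(np.linspace(0, 3, 64))
print(match_scanlines(row, np.roll(row, 2)), match_scanlines(row, row[::-1]))
```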
Abstract:
Three-dimensional computer modelling techniques are being used to develop a probabilistic model of turbulence-related spray transport around various plant architectures to investigate the influence of plant architectures and crop geometry on the spray application process. Plant architecture models that utilise a set of growth rules expressed in the Lindenmayer systems (L-systems) formalism have been developed and programmed using L-studio software. Modules have been added to simulate the movement of droplets through the air and deposition on the plant canopy. Deposition of spray on an artificial plant structure was measured in the wind tunnel at the University of Queensland, Gatton campus, and the results compared to the model simulation. Further trials are planned to measure the deposition of spray droplets on various crop and weed species, and the results from these trials will be used to refine and validate the combined spray and plant architecture model.
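A minimal sketch of the L-system rewriting idea behind such plant-architecture models; the axiom and production rule below are generic textbook examples, not the project's actual L-studio productions:

```python
def lsystem(axiom, rules, generations):
    """Rewrite every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Classic branching example: F = grow, [ ] = push/pop a branch, + - = turn
rules = {"F": "F[+F]F[-F]F"}
print(lsystem("F", rules, 2))
```

In practice each symbol of the resulting string is interpreted geometrically (turtle-style) to build the three-dimensional plant structure on which droplet transport and deposition are simulated.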
Abstract:
We present an analytic solution to the problem of on-line gradient-descent learning for two-layer neural networks with an arbitrary number of hidden units in both teacher and student networks. The technique, demonstrated here for the case of adaptive input-to-hidden weights, becomes exact as the dimensionality of the input space increases.
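A minimal simulation sketch of the setting described above: on-line gradient descent for a two-layer "soft committee" network with several hidden units, adapting only the input-to-hidden weights. The tanh activation, sizes and learning rate are illustrative assumptions, and this is a plain simulation rather than the paper's analytic solution:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, eta, steps = 200, 3, 3, 0.5, 50_000   # input dim, student/teacher units

B = rng.normal(size=(M, N))                    # fixed teacher weights
J = 0.01 * rng.normal(size=(K, N))             # student input-to-hidden weights

def output(W, x):
    return np.tanh(W @ x / np.sqrt(N)).sum()   # sum over hidden units

for _ in range(steps):
    x = rng.normal(size=N)                     # fresh random example each step
    delta = output(B, x) - output(J, x)        # teacher minus student
    h = J @ x / np.sqrt(N)                     # student hidden-unit fields
    # gradient descent on the squared error, adapting input-to-hidden weights
    J += (eta / np.sqrt(N)) * delta * (1.0 - np.tanh(h) ** 2)[:, None] * x

# crude estimate of the generalisation error on fresh inputs
X = rng.normal(size=(2000, N))
err = 0.5 * np.mean((np.tanh(X @ B.T / np.sqrt(N)).sum(1)
                     - np.tanh(X @ J.T / np.sqrt(N)).sum(1)) ** 2)
print(f"estimated generalisation error: {err:.4f}")
```

The analytic treatment summarised in the abstract replaces such simulations by closed equations that become exact as the input dimension N grows large.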
Abstract:
An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that the adaptive back-propagation method results in faster training by breaking the symmetry between hidden units more efficiently and by providing faster convergence to optimal generalization than gradient descent.
Abstract:
An adaptive back-propagation algorithm parameterized by an inverse temperature 1/T is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, we analyse these learning algorithms in both the symmetric and the convergence phase for finite learning rates in the case of uncorrelated teachers of similar but arbitrary length T. These analyses show that adaptive back-propagation results generally in faster training by breaking the symmetry between hidden units more efficiently and by providing faster convergence to optimal generalization than gradient descent.
Abstract:
In this paper we review recent theoretical approaches for analysing the dynamics of on-line learning in multilayer neural networks using methods adopted from statistical physics. The analysis is based on monitoring a set of macroscopic variables from which the generalisation error can be calculated. A closed set of dynamical equations for the macroscopic variables is derived analytically and solved numerically. The theoretical framework is then employed for defining optimal learning parameters and for analysing the incorporation of second order information into the learning process using natural gradient descent and matrix-momentum based methods. We will also briefly explain an extension of the original framework for analysing the case where training examples are sampled with repetition.
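For concreteness, the macroscopic variables referred to above are typically the overlaps between the student weight vectors J_i and the teacher weight vectors B_n (a standard choice in this literature; the notation here is illustrative):

```latex
R_{in} \;=\; \frac{\mathbf{J}_i\cdot\mathbf{B}_n}{N},\qquad
Q_{ik} \;=\; \frac{\mathbf{J}_i\cdot\mathbf{J}_k}{N},\qquad
T_{nm} \;=\; \frac{\mathbf{B}_n\cdot\mathbf{B}_m}{N},
```

with the generalisation error written as a function ε_g(R, Q, T) of these order parameters alone in the limit of large input dimension N.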
Abstract:
Purpose - This paper provides a deeper examination of the fundamentals of commonly used techniques - such as coefficient alpha and factor analysis - in order to more strongly link the techniques used by marketing and social researchers to their underlying psychometric and statistical rationale. Design/methodology/approach - A wide-ranging review and synthesis of psychometric and other measurement literature both within and outside the marketing field is used to illuminate and reconsider a number of misconceptions which seem to have evolved in marketing research. Findings - The research finds that marketing scholars have generally concentrated on reporting what are essentially arbitrary figures such as coefficient alpha, without fully understanding what these figures imply. It is argued that, if the link between theory and technique is not clearly understood, use of psychometric measure development tools actually runs the risk of detracting from the validity of the measures rather than enhancing it. Research limitations/implications - The focus on one stage of a particular form of measure development could be seen as rather specialised. The paper also runs the risk of increasing the amount of dogma surrounding measurement, which runs contrary to the spirit of this paper. Practical implications - This paper shows that researchers may need to spend more time interpreting measurement results. Rather than simply referring to precedence, one needs to understand the link between measurement theory and actual technique. Originality/value - This paper presents psychometric measurement and item analysis theory in an easily understandable format, and offers an important set of conceptual tools for researchers in many fields. © Emerald Group Publishing Limited.
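For reference, the coefficient alpha discussed above is conventionally computed as follows (standard formula; the paper's point concerns its interpretation rather than its calculation):

```latex
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\qquad X=\sum_{i=1}^{k} Y_i ,
```

where k is the number of items, σ²_{Y_i} the variance of item i, and σ²_X the variance of the total scale score.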