925 results for complexity regularization
Abstract:
Children aged between 3 and 7 years were taught simple and dimension-abstracted oddity discrimination using learning-set training techniques, in which isomorphic problems with varying content were presented with verbal explanation and feedback. Following the training phase, simple oddity (SO), dimension-abstracted oddity with one or two irrelevant dimensions, and non-oddity (NO) tasks were presented (without feedback) to determine the basis of solution. Although dimension-abstracted oddity requires discrimination based on a stimulus that differs from the others, all of which are the same as each other on the relevant dimension, this was not the major strategy. The data were more consistent with use of a simple oddity strategy by 3- to 4-year-olds, and a "most different" strategy by 6- to 7-year-olds. These strategies are interpreted as reducing task complexity.
Abstract:
Cognitive complexity and control theory and relational complexity theory attribute developmental changes in theory of mind (TOM) to complexity. In 3 studies, 3-, 4-, and 5-year-olds performed TOM tasks (false belief, appearance-reality), less complex connections (Level 1 perspective-taking) tasks, and transformations tasks (understanding the effects of location changes and colored filters) with content similar to TOM. There were also predictor tasks at binary-relational and ternary-relational complexity levels, with different content. Consistent with complexity theories: (a) connections and transformations were easier and mastered earlier than TOM; (b) predictor tasks accounted for more than 80% of age-related variance in TOM; and (c) ternary-relational items accounted for TOM variance, before and after controlling for age and binary-relational items. Prediction did not require hierarchically structured predictor tasks.
Abstract:
We give a simple proof of a formula for the minimal time required to simulate a two-qubit unitary operation using a fixed two-qubit Hamiltonian together with fast local unitaries. We also note that a related lower bound holds for arbitrary n-qubit gates.
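The setting of this result can be illustrated with the standard canonical (Cartan/KAK) decomposition of a two-qubit gate. The decomposition below is a well-known fact; the paper's exact time formula, which in results of this kind takes the form of a majorization-type comparison between canonical parameters of the gate and of the Hamiltonian, is deliberately not reproduced here.

\[
U = (A_1 \otimes A_2)\,
\exp\!\Big(-i\big(\theta_x\,\sigma_x\!\otimes\!\sigma_x
+ \theta_y\,\sigma_y\!\otimes\!\sigma_y
+ \theta_z\,\sigma_z\!\otimes\!\sigma_z\big)\Big)\,
(B_1 \otimes B_2),
\qquad
\tfrac{\pi}{4} \ge \theta_x \ge \theta_y \ge |\theta_z|,
\]

where A_1, A_2, B_1, B_2 are single-qubit unitaries. Because fast local unitaries are free in this model, the minimal time to simulate U with a fixed Hamiltonian H can depend only on the canonical parameters (theta_x, theta_y, theta_z) of U and on the nonlocal content of H.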
Abstract:
Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room to develop these techniques further so that they can extract more information from calibration datasets, explore the uncertainty associated with model predictions more thoroughly, and be implemented more easily in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), pilot points add a great deal of flexibility to the calibration process while making that process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property values to be inferred through the calibration process, without requiring the modeler to guess the geometry of those areas before estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which honor the stochastic characterization of an area while satisfying the constraints imposed on hydraulic property values by the need for model outputs to match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
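As a hedged illustration of the pilot-point idea (a minimal sketch in Python with NumPy/SciPy, not PEST's actual implementation): log-conductivity values at a handful of pilot points are spread onto a model grid by inverse-distance weighting (kriging is the more usual choice), and the pilot-point values are then estimated by Tikhonov-regularized least squares against synthetic observations. The linear operator H standing in for the forward model, the grid layout, the pilot-point count, and the regularization weight mu are all assumptions made for this example.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Model grid (20 x 20 cells on the unit square) and pilot-point
# locations; both are an assumed layout for illustration only.
xs = np.linspace(0.0, 1.0, 20)
gx, gy = np.meshgrid(xs, xs)
grid = np.column_stack([gx.ravel(), gy.ravel()])      # (400, 2)
pilots = rng.uniform(0.0, 1.0, size=(8, 2))           # 8 pilot points

def interpolate(logk_pilots):
    """Spread pilot-point log-K values onto the whole grid by
    inverse-distance weighting (a stand-in for kriging)."""
    d = np.linalg.norm(grid[:, None, :] - pilots[None, :, :], axis=2)
    w = 1.0 / (d**2 + 1e-6)
    w /= w.sum(axis=1, keepdims=True)
    return w @ logk_pilots                            # (400,)

# Hypothetical linear "forward model": 12 head observations that
# respond to the gridded log-K field. A real study would run a
# groundwater model here instead.
H = rng.normal(size=(12, grid.shape[0])) / grid.shape[0]
true_p = rng.normal(size=pilots.shape[0])
obs = H @ interpolate(true_p)                         # synthetic data

def residuals(p, mu=0.1):
    # Data misfit plus a Tikhonov penalty that pulls the solution
    # toward a preferred homogeneous field (zero log-K anomaly).
    misfit = H @ interpolate(p) - obs
    return np.concatenate([misfit, np.sqrt(mu) * p])

fit = least_squares(residuals, x0=np.zeros(pilots.shape[0]))
print("estimated pilot-point values:", np.round(fit.x, 3))

Replacing the zero-anomaly penalty in residuals() with deviation from individual geostatistical realizations gives, in spirit, the stochastic variant mentioned above: each calibrated field honors the field measurements while staying close to one realization, and the resulting ensemble can be used to explore predictive uncertainty.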